MLflow Complete ML Platform
End-to-end ML lifecycle platform with experiment tracking, model registry, model serving, and PostgreSQL backend.
Overview
MLflow is an open-source machine learning lifecycle management platform developed by Databricks that addresses the challenges of experiment tracking, model reproducibility, and deployment in ML workflows. Originally created to solve the fragmented nature of ML tooling, MLflow provides a unified interface for managing experiments, packaging ML models in a reusable format, and deploying models to various serving platforms. The platform has become a cornerstone in the MLOps ecosystem, enabling data scientists and ML engineers to track parameters, metrics, and artifacts across different ML frameworks like scikit-learn, TensorFlow, PyTorch, and XGBoost.
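Once the stack is running, a client session against this deployment looks roughly like the following sketch (the experiment name, parameters, and metric values are illustrative; assumes the mlflow Python package is installed on the client machine):
track_example.py
import mlflow

# Point the client at the tracking server from this stack
# (port 5000 directly, or port 80 through the NGINX proxy).
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-experiment")  # created on first use

with mlflow.start_run(run_name="baseline"):
    # Parameters and metrics land in the PostgreSQL backend store
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("rmse", 0.42)
    mlflow.log_metric("r2", 0.87)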
This deployment creates a production-grade MLflow environment with five interconnected services: the MLflow server itself, PostgreSQL as the metadata backend store, MinIO for S3-compatible artifact storage, an initialization container to set up MinIO buckets, and NGINX as a reverse proxy. The MLflow server connects to PostgreSQL to store experiment metadata, runs, and model registry information, while using MinIO to store large artifacts like model files, datasets, and plots. This architecture separates metadata from artifacts, providing better performance and scalability compared to file-based storage backends.
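Because the artifact root is an s3:// URI, the MLflow client generally uploads artifacts to MinIO itself rather than routing them through the tracking server, so the client environment needs the same endpoint and credentials as the server. A sketch using the values from the .env template below:
log_artifact_example.py
import os
import mlflow

# The client needs the same S3 settings as the server so it can
# write artifacts to MinIO directly (values from the .env template).
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://localhost:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minioadmin"
os.environ["AWS_SECRET_ACCESS_KEY"] = "secure_minio_password"

mlflow.set_tracking_uri("http://localhost:5000")

with mlflow.start_run():
    with open("notes.txt", "w") as f:
        f.write("training notes")
    # Uploaded to roughly s3://mlflow/<experiment_id>/<run_id>/artifacts/notes.txt
    mlflow.log_artifact("notes.txt")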
This stack is ideal for ML teams transitioning from local development to production environments, organizations requiring self-hosted ML infrastructure, and companies needing complete control over their ML artifacts and metadata. The combination provides enterprise-grade features like model versioning, experiment comparison, and centralized artifact storage while maintaining the flexibility to customize and scale components independently based on workload requirements.
Key Features
- Comprehensive ML experiment tracking with parameter, metric, and artifact logging across multiple frameworks
- Centralized model registry with versioning, stage transitions, and model lineage tracking
- PostgreSQL backend for reliable metadata storage with ACID compliance and query performance
- S3-compatible artifact storage via MinIO for models, datasets, and experiment outputs
- Model serving capabilities with REST API endpoints for real-time inference
- Multi-user experiment sharing and collaboration with organized project workspaces
- Automatic model packaging and containerization for deployment portability
- Web-based UI for experiment comparison, model performance visualization, and registry management
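The registry features above map to a handful of client calls. A minimal sketch (the model, experiment, and registry names are illustrative; assumes scikit-learn is installed and the MinIO environment variables from the Overview sketch are set):
register_model_example.py
import mlflow
from sklearn.linear_model import LinearRegression

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("registry-demo")

with mlflow.start_run():
    model = LinearRegression().fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])
    # Logs the model to the MinIO artifact store and registers it
    # (creates the registered model, or a new version if it already exists).
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-regressor",
    )
A registered version can then be served over REST, for example with mlflow models serve -m "models:/demo-regressor/1" -p 5001 from an environment configured with the same tracking URI and MinIO credentials.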
Common Use Cases
- ML research teams tracking hundreds of experiments across different algorithms and hyperparameters
- Production ML pipelines requiring model version control and automated deployment workflows
- Data science organizations needing centralized artifact storage for large models and datasets
- MLOps teams implementing CI/CD for machine learning with automated model validation
- Financial services companies requiring on-premises ML infrastructure for regulatory compliance
- Healthcare organizations managing medical AI models with strict data governance requirements
- Startups building ML products who need professional experiment tracking without cloud vendor costs
Prerequisites
- Minimum 2GB RAM for MLflow server and PostgreSQL database operations
- At least 4GB available disk space for PostgreSQL data and MinIO artifact storage
- Ports 80, 5000, 9000, and 9001 available for NGINX, MLflow UI, MinIO API, and MinIO console
- Basic understanding of ML experiment tracking concepts and model lifecycle management
- Docker Compose environment variables configured for database and MinIO credentials
- Custom nginx.conf file configured for production proxy requirements and security headers
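The recipe mounts ./nginx.conf but does not ship one, so create the file before starting the stack. A minimal sketch that proxies all traffic to the mlflow service (add TLS, authentication, and security headers before exposing it beyond development):
nginx.conf
events {}

http {
    # MLflow artifact and model uploads can be large; 0 disables the size limit.
    client_max_body_size 0;

    server {
        listen 80;

        location / {
            proxy_pass http://mlflow:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}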
For development & testing. Review security settings, change default credentials, and test thoroughly before production use. See Terms
docker-compose.yml
docker-compose.yml
services:
  mlflow:
    image: ghcr.io/mlflow/mlflow:latest
    ports:
      - "5000:5000"
    environment:
      - MLFLOW_BACKEND_STORE_URI=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - MLFLOW_DEFAULT_ARTIFACT_ROOT=s3://mlflow
      - AWS_ACCESS_KEY_ID=${MINIO_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${MINIO_SECRET_KEY}
      - MLFLOW_S3_ENDPOINT_URL=http://minio:9000
    command: mlflow server --host 0.0.0.0 --port 5000
    depends_on:
      postgres:
        condition: service_healthy
      minio:
        condition: service_started
    networks:
      - mlflow-net
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - mlflow-net
    restart: unless-stopped

  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    volumes:
      - minio_data:/data
    command: server /data --console-address ":9001"
    networks:
      - mlflow-net
    restart: unless-stopped

  minio-init:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set myminio http://minio:9000 ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY};
      mc mb myminio/mlflow --ignore-existing;
      exit 0;
      "
    networks:
      - mlflow-net

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - mlflow
    networks:
      - mlflow-net
    restart: unless-stopped

volumes:
  postgres_data:
  minio_data:

networks:
  mlflow-net:
    driver: bridge

.env Template
.env
# MLflow Configuration
POSTGRES_USER=mlflow
POSTGRES_PASSWORD=secure_mlflow_password
POSTGRES_DB=mlflow

# MinIO Configuration
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=secure_minio_password

Usage Notes
- MLflow UI at http://localhost:5000
- MinIO console at http://localhost:9001
- Create nginx.conf for production proxy
- Model artifacts stored in MinIO S3-compatible storage
Individual Services (5 services)
Copy individual services to mix and match with your existing compose files.
mlflow
mlflow:
  image: ghcr.io/mlflow/mlflow:latest
  ports:
    - "5000:5000"
  environment:
    - MLFLOW_BACKEND_STORE_URI=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
    - MLFLOW_DEFAULT_ARTIFACT_ROOT=s3://mlflow
    - AWS_ACCESS_KEY_ID=${MINIO_ACCESS_KEY}
    - AWS_SECRET_ACCESS_KEY=${MINIO_SECRET_KEY}
    - MLFLOW_S3_ENDPOINT_URL=http://minio:9000
  command: mlflow server --host 0.0.0.0 --port 5000
  depends_on:
    postgres:
      condition: service_healthy
    minio:
      condition: service_started
  networks:
    - mlflow-net
  restart: unless-stopped
postgres
postgres:
  image: postgres:16-alpine
  environment:
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: ${POSTGRES_DB}
  volumes:
    - postgres_data:/var/lib/postgresql/data
  healthcheck:
    test:
      - CMD-SHELL
      - pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}
    interval: 10s
    timeout: 5s
    retries: 5
  networks:
    - mlflow-net
  restart: unless-stopped
minio
minio:
  image: minio/minio:latest
  ports:
    - "9000:9000"
    - "9001:9001"
  environment:
    MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
    MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
  volumes:
    - minio_data:/data
  command: server /data --console-address ":9001"
  networks:
    - mlflow-net
  restart: unless-stopped
minio-init
minio-init:
  image: minio/mc:latest
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    sleep 5;
    mc alias set myminio http://minio:9000 ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY};
    mc mb myminio/mlflow --ignore-existing;
    exit 0;
    "
  networks:
    - mlflow-net
nginx
nginx:
  image: nginx:alpine
  ports:
    - "80:80"
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  depends_on:
    - mlflow
  networks:
    - mlflow-net
  restart: unless-stopped
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  mlflow:
    image: ghcr.io/mlflow/mlflow:latest
    ports:
      - "5000:5000"
    environment:
      - MLFLOW_BACKEND_STORE_URI=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - MLFLOW_DEFAULT_ARTIFACT_ROOT=s3://mlflow
      - AWS_ACCESS_KEY_ID=${MINIO_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${MINIO_SECRET_KEY}
      - MLFLOW_S3_ENDPOINT_URL=http://minio:9000
    command: mlflow server --host 0.0.0.0 --port 5000
    depends_on:
      postgres:
        condition: service_healthy
      minio:
        condition: service_started
    networks:
      - mlflow-net
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - mlflow-net
    restart: unless-stopped

  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    volumes:
      - minio_data:/data
    command: server /data --console-address ":9001"
    networks:
      - mlflow-net
    restart: unless-stopped

  minio-init:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set myminio http://minio:9000 ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY};
      mc mb myminio/mlflow --ignore-existing;
      exit 0;
      "
    networks:
      - mlflow-net

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - mlflow
    networks:
      - mlflow-net
    restart: unless-stopped

volumes:
  postgres_data:
  minio_data:

networks:
  mlflow-net:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# MLflow Configuration
POSTGRES_USER=mlflow
POSTGRES_PASSWORD=secure_mlflow_password
POSTGRES_DB=mlflow

# MinIO Configuration
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=secure_minio_password
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
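A quick smoke test after step 4 (a sketch; the /health endpoint is served by the MLflow tracking server, and the proxied check assumes the nginx.conf from the Prerequisites section exists):
terminal
# All five services should be listed; minio-init exits after creating the bucket
docker compose ps

# Tracking server health check (prints OK when ready)
curl -s http://localhost:5000/health

# Same check through the NGINX proxy
curl -s http://localhost/health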
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/mlflow-full-stack/run | bash

Troubleshooting
- MLflow server fails to start with database connection error: Verify PostgreSQL health check passes and POSTGRES_* environment variables match between mlflow and postgres services
- Artifact logging fails with S3 endpoint errors: Ensure minio-init container completed successfully and created the mlflow bucket, check MINIO_ACCESS_KEY and MINIO_SECRET_KEY variables
- PostgreSQL container restarts repeatedly: Check available disk space for postgres_data volume and verify POSTGRES_PASSWORD is set in environment variables
- MinIO console inaccessible at port 9001: Confirm MINIO_ROOT_USER and MINIO_ROOT_PASSWORD are properly configured and container logs show successful startup
- NGINX proxy returns 502 Bad Gateway: Verify mlflow service is healthy and running on port 5000, check nginx.conf syntax and container dependency order
- Model artifacts not appearing in MLflow UI: Validate S3 endpoint URL configuration points to minio:9000 and AWS credentials match MinIO root credentials
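For the last two issues it can help to inspect the bucket directly. A sketch using boto3 (assumed to be installed on the client, as it usually is alongside MLflow's S3 support) and the credentials from the .env template:
check_minio.py
import boto3

# Connect to MinIO through its published port using the .env credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="secure_minio_password",
)

# The minio-init container should have created the "mlflow" bucket.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# List logged artifacts, if any runs have written them yet.
resp = s3.list_objects_v2(Bucket="mlflow")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])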
Components
mlflow, postgresql, minio, nginx
Tags
#mlflow #ml-ops #model-registry #experiment-tracking #production
Category
AI & Machine Learning