Seldon Core
ML deployment platform for Kubernetes.
Overview
Seldon Core is an open-source machine learning deployment platform built specifically for Kubernetes that transforms ML models into production-ready microservices. Developed and maintained by Seldon Technologies, it provides a comprehensive framework for deploying, scaling, and managing ML inference workloads with advanced capabilities like A/B testing, canary deployments, multi-armed bandits, and inference graph orchestration. The platform supports multiple ML frameworks including TensorFlow, PyTorch, scikit-learn, and XGBoost while providing standardized REST and gRPC APIs for model serving.
This configuration provides the essential supporting infrastructure for Seldon Core deployments by combining MinIO for S3-compatible model artifact storage and Redis for caching and session management. MinIO serves as the model registry where trained models, preprocessing pipelines, and deployment artifacts are stored and versioned, while Redis handles inference result caching and maintains state for advanced deployment patterns like multi-armed bandits. Since Seldon Core itself runs on Kubernetes, this Docker setup creates the data layer that your K8s-based ML services will connect to for model retrieval and state management.
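To make the connection concrete, here is a sketch of a SeldonDeployment that pulls its model artifact from the MinIO instance defined below. The deployment name, bucket, and path are illustrative examples, and `minio-secret` is a hypothetical Kubernetes Secret holding the S3 credentials:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-demo            # example name
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: SKLEARN_SERVER      # prepackaged scikit-learn server
        modelUri: s3://models/sklearn-demo  # bucket/path in MinIO (example)
        envSecretRefName: minio-secret      # Secret with S3 credentials/endpoint
```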
ML engineers and data scientists working with Kubernetes-based ML platforms will find this stack invaluable for creating a complete MLOps pipeline. The combination addresses the critical challenge of model artifact management and inference optimization in production ML systems, providing both the storage backend for model versioning and the caching layer needed for high-performance inference serving at scale.
Key Features
- S3-compatible model artifact storage through MinIO with versioning and lifecycle policies for ML model management
- High-performance object storage with erasure coding and encryption for secure model and data asset protection
- Redis-based inference caching to reduce model prediction latency and improve throughput
- MinIO web console for visual model artifact management and storage monitoring
- Bucket-level access controls and IAM integration for secure model repository management
- Support for large model files with multipart uploads and resume capabilities
- Redis persistence options for maintaining inference cache across container restarts
- Cross-region replication capabilities for distributed ML model serving architectures
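The versioning features above assume some naming convention for artifacts in the bucket. A minimal sketch of one such convention (the `models` bucket, `vN` tags, and `model.joblib` filename are assumptions, not Seldon requirements):

```python
def model_artifact_uri(bucket: str, model: str, version: str,
                       filename: str = "model.joblib") -> str:
    """Build the s3:// URI a SeldonDeployment's modelUri could point at."""
    return f"s3://{bucket}/{model}/{version}/{filename}"


def latest_version(versions: list[str]) -> str:
    """Pick the highest 'vN' tag, e.g. when promoting the newest artifact."""
    return max(versions, key=lambda v: int(v.lstrip("v")))
```

With keys laid out this way, a canary release is just two `modelUri` values pointing at different version prefixes of the same model.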
Common Use Cases
- Model artifact repository for Kubernetes-based Seldon Core ML inference deployments
- Development environment for testing ML model serving pipelines before Kubernetes deployment
- Centralized storage for multiple ML model versions supporting A/B testing and canary releases
- Inference result caching layer for high-throughput ML prediction services
- Local development setup for data scientists building and testing model deployment configurations
- Backup storage for production ML models with automated versioning and retention policies
- Multi-tenant ML platform supporting different teams with isolated model storage buckets
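The inference-caching use case follows the standard cache-aside pattern. A minimal sketch, where `client` is anything with Redis-style `get`/`setex` semantics (`redis.Redis` in practice; a dict-backed stand-in is included so the sketch runs without a server):

```python
import hashlib
import json


class InferenceCache:
    """Cache-aside wrapper: return a cached prediction if the same feature
    payload was seen recently, otherwise call the model and cache the result."""

    def __init__(self, client, predict_fn, ttl: int = 300):
        self.client, self.predict_fn, self.ttl = client, predict_fn, ttl

    def _key(self, features: dict) -> str:
        # Canonical JSON so {"a":1,"b":2} and {"b":2,"a":1} share a key.
        payload = json.dumps(features, sort_keys=True).encode()
        return "pred:" + hashlib.sha256(payload).hexdigest()

    def predict(self, features: dict):
        key = self._key(features)
        hit = self.client.get(key)
        if hit is not None:
            return json.loads(hit)
        result = self.predict_fn(features)
        self.client.setex(key, self.ttl, json.dumps(result))
        return result


class FakeRedis:
    """In-memory stand-in for redis.Redis (ignores TTL), for local testing."""

    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def setex(self, key, ttl, value):
        self.store[key] = value
```

Swapping `FakeRedis()` for `redis.Redis(host="localhost", port=6379)` targets the container defined in this recipe.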
Prerequisites
- Kubernetes cluster available for actual Seldon Core deployment (this provides supporting services only)
- Minimum 4GB RAM recommended (2GB for MinIO, 1GB for Redis, 1GB for system overhead)
- Basic understanding of Kubernetes concepts and kubectl for managing Seldon deployments
- Familiarity with S3 APIs and bucket management for model artifact organization
- Ports 9000, 9001, and 6379 available on the host system
- Docker Compose v2.0 or higher with support for named volumes and custom networks
For development & testing. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
```yaml
# Seldon Core requires Kubernetes
# This provides supporting services
services:
  minio:
    image: minio/minio:latest
    container_name: seldon-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      - seldon

  redis:
    image: redis:alpine
    container_name: seldon-redis
    ports:
      - "6379:6379"
    networks:
      - seldon

volumes:
  minio_data:

networks:
  seldon:
    driver: bridge
```
.env Template
.env
```
# Full Seldon requires Kubernetes
```
Usage Notes
- Docs: https://docs.seldon.io/projects/seldon-core/
- Full Seldon Core requires Kubernetes - this provides the storage backend
- MinIO console at http://localhost:9001 for model artifact management
- Install on K8s: helm install seldon-core seldon-core-operator
- Deploy models via the SeldonDeployment CRD with inference graphs
- Supports A/B testing, canary deployments, and multi-armed bandits
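For Seldon's storage initializer to pull artifacts from this MinIO instance, the cluster needs a Secret with S3-style credentials. A sketch, assuming the environment-variable names Seldon's S3 storage initializer reads (`minio-secret` is a hypothetical name, and the endpoint placeholder must be an address the cluster can actually reach):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: minioadmin        # matches MINIO_ROOT_USER below
  AWS_SECRET_ACCESS_KEY: minioadmin    # matches MINIO_ROOT_PASSWORD below
  AWS_ENDPOINT_URL: http://<minio-host-reachable-from-cluster>:9000
  USE_SSL: "false"
```

A SeldonDeployment then references this Secret via `envSecretRefName`.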
Individual Services (2 services)
Copy individual services to mix and match with your existing compose files.
minio
minio:
image: minio/minio:latest
container_name: seldon-minio
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
volumes:
- minio_data:/data
ports:
- "9000:9000"
- "9001:9001"
networks:
- seldon
redis
redis:
image: redis:alpine
container_name: seldon-redis
ports:
- "6379:6379"
networks:
- seldon
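By default this Redis service has no memory cap. For inference caching it is often safer to bound memory and evict least-recently-used keys; one way to extend the service (the 256mb limit is an arbitrary example to tune for your workload):

```yaml
redis:
  image: redis:alpine
  container_name: seldon-redis
  command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
  ports:
    - "6379:6379"
  networks:
    - seldon
```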
Quick Start
terminal
```shell
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
# Seldon Core requires Kubernetes
# This provides supporting services
services:
  minio:
    image: minio/minio:latest
    container_name: seldon-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      - seldon

  redis:
    image: redis:alpine
    container_name: seldon-redis
    ports:
      - "6379:6379"
    networks:
      - seldon

volumes:
  minio_data:

networks:
  seldon:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Full Seldon requires Kubernetes
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
```
One-Liner
Run this command to download and set up the recipe in one step:
terminal
```shell
curl -fsSL https://docker.recipes/api/recipes/seldon-core/run | bash
```
Troubleshooting
- MinIO console shows 'Invalid Access Key': Verify MINIO_ROOT_USER and MINIO_ROOT_PASSWORD environment variables are set correctly and container has been restarted
- Connection refused on port 9000: Check if MinIO service is running and ports are not blocked by firewall, use 'docker logs seldon-minio' to verify startup
- Redis connection timeout from Seldon: Ensure Redis container is in the same Docker network and verify network connectivity with 'docker network inspect seldon'
- MinIO bucket access denied: Create buckets through the web console at localhost:9001 and configure appropriate access policies for your use case
- Large model upload failures: Increase MinIO client timeout settings and verify sufficient disk space in the minio_data volume
- Redis memory issues during inference: Monitor Redis memory usage with 'docker stats' and consider adjusting Redis maxmemory configuration for your workload
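For the "connection refused on port 9000" case, MinIO exposes a liveness endpoint (`/minio/health/live`) that can be probed from a script instead of a browser. A minimal sketch using only the standard library:

```python
import urllib.error
import urllib.request


def minio_alive(base_url: str = "http://localhost:9000", timeout: float = 3.0) -> bool:
    """Return True if MinIO's liveness endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/minio/health/live",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: service not reachable.
        return False
```

If this returns False while the container is up, check `docker logs seldon-minio` and host firewall rules as described above.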