docker.recipes

Seldon Core

advanced

ML deployment platform for Kubernetes.

Overview

Seldon Core is an open-source machine learning deployment platform built specifically for Kubernetes that transforms ML models into production-ready microservices. Developed and maintained by Seldon Technologies, it provides a comprehensive framework for deploying, scaling, and managing ML inference workloads, with advanced capabilities such as A/B testing, canary deployments, multi-armed bandits, and inference graph orchestration. It supports multiple ML frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost, and exposes standardized REST and gRPC APIs for model serving.

This configuration provides the supporting infrastructure for Seldon Core deployments: MinIO for S3-compatible model artifact storage and Redis for caching and session management. MinIO serves as the model registry where trained models, preprocessing pipelines, and deployment artifacts are stored and versioned, while Redis handles inference result caching and maintains state for advanced deployment patterns such as multi-armed bandits. Since Seldon Core itself runs on Kubernetes, this Docker setup creates the data layer that your K8s-based ML services connect to for model retrieval and state management.

ML engineers and data scientists working with Kubernetes-based ML platforms can use this stack as the storage and caching half of a complete MLOps pipeline. The combination addresses model artifact management (a versioned storage backend) and inference optimization (a caching layer for high-performance serving at scale).
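To make the connection concrete, here is a minimal sketch of how a Seldon Core deployment on Kubernetes would reference a model stored in this MinIO instance. The bucket name (`models`), model path, and Secret name are illustrative, not part of this recipe; the CRD fields themselves follow Seldon Core v1's SeldonDeployment schema.

```yaml
# Hypothetical SeldonDeployment pulling a scikit-learn model from MinIO.
# Bucket, path, and secret name are assumptions for illustration.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-example
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: s3://models/sklearn/iris
        envSecretRefName: seldon-rclone-secret  # S3 credentials pointing at MinIO
```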

Key Features

  • S3-compatible model artifact storage through MinIO with versioning and lifecycle policies for ML model management
  • High-performance object storage with erasure coding and encryption for secure model and data asset protection
  • Redis-based inference caching to reduce model prediction latency and improve throughput
  • MinIO web console for visual model artifact management and storage monitoring
  • Bucket-level access controls and IAM integration for secure model repository management
  • Support for large model files with multipart uploads and resume capabilities
  • Redis persistence options for maintaining inference cache across container restarts
  • Cross-region replication capabilities for distributed ML model serving architectures
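As a sketch of the bucket-level access controls mentioned above, the snippet below writes a read-only S3 bucket policy for a hypothetical `models` bucket to a local file. The bucket name is an assumption; one way to apply the file afterwards is with the MinIO client, e.g. `mc anonymous set-json models-policy.json local/models` (adjust the alias and bucket to your setup).

```shell
# Generate a read-only policy for an illustrative "models" bucket.
# This only writes the JSON locally; applying it requires a running MinIO.
cat > models-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::models/*"]
    }
  ]
}
EOF
echo "Wrote $(wc -c < models-policy.json) bytes to models-policy.json"
```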

Common Use Cases

  • Model artifact repository for Kubernetes-based Seldon Core ML inference deployments
  • Development environment for testing ML model serving pipelines before Kubernetes deployment
  • Centralized storage for multiple ML model versions supporting A/B testing and canary releases
  • Inference result caching layer for high-throughput ML prediction services
  • Local development setup for data scientists building and testing model deployment configurations
  • Backup storage for production ML models with automated versioning and retention policies
  • Multi-tenant ML platform supporting different teams with isolated model storage buckets

Prerequisites

  • Kubernetes cluster available for actual Seldon Core deployment (this provides supporting services only)
  • Minimum 4GB RAM recommended (2GB for MinIO, 1GB for Redis, 1GB for system overhead)
  • Basic understanding of Kubernetes concepts and kubectl for managing Seldon deployments
  • Familiarity with S3 APIs and bucket management for model artifact organization
  • Ports 9000, 9001, and 6379 available on the host system
  • Docker Compose v2.0 or higher with support for named volumes and custom networks
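A quick pre-flight check for the port requirement above: the loop tries to connect to each port this stack needs, so a successful connect means something is already listening there. It uses bash's `/dev/tcp` feature; in shells without it, the redirection simply fails and the port is reported free.

```shell
# Check whether ports 9000, 9001, and 6379 are already taken on the host.
: > ports.txt
for p in 9000 9001 6379; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p in use" >> ports.txt
  else
    echo "port $p free" >> ports.txt
  fi
done
cat ports.txt
```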

For development & testing. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

docker-compose.yml
# Seldon Core requires Kubernetes
# This provides supporting services
services:
  minio:
    image: minio/minio:latest
    container_name: seldon-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      - seldon

  redis:
    image: redis:alpine
    container_name: seldon-redis
    ports:
      - "6379:6379"
    networks:
      - seldon

volumes:
  minio_data:

networks:
  seldon:
    driver: bridge
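For longer-lived setups, a compose override can add healthchecks and Redis persistence without touching the base file. This is a sketch: the MinIO check assumes the image ships `curl` (as in MinIO's own compose examples) and uses the real `/minio/health/live` endpoint; intervals are illustrative.

```yaml
# docker-compose.override.yml (sketch): healthchecks plus durable Redis data.
services:
  minio:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 5s
      retries: 3
  redis:
    command: redis-server --appendonly yes   # AOF persistence across restarts
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  redis_data:
```

Docker Compose merges `docker-compose.override.yml` automatically on `docker compose up`, so the base recipe stays unchanged.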

.env Template

.env
# Full Seldon requires Kubernetes

Usage Notes

  1. Docs: https://docs.seldon.io/projects/seldon-core/
  2. Full Seldon Core requires Kubernetes - this stack provides the storage backend only
  3. MinIO console at http://localhost:9001 (S3 API at http://localhost:9000) for model artifact storage
  4. Install on K8s: helm install seldon-core seldon-core-operator --repo https://storage.googleapis.com/seldon-charts --namespace seldon-system --create-namespace
  5. Deploy models via the SeldonDeployment CRD with inference graphs
  6. Supports A/B testing, canary deployments, and multi-armed bandits
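To let Seldon's rclone-based storage initializer pull models from this MinIO instance, the cluster needs a Secret with rclone S3 configuration. The keys below follow Seldon Core v1's documented `RCLONE_CONFIG_*` convention; the Secret name and the endpoint are assumptions (the endpoint must be whatever address makes the Docker host's port 9000 reachable from inside your cluster).

```yaml
# Kubernetes Secret (sketch) with rclone config for MinIO access.
# Endpoint and secret name are illustrative; adjust to your network.
apiVersion: v1
kind: Secret
metadata:
  name: seldon-rclone-secret
type: Opaque
stringData:
  RCLONE_CONFIG_S3_TYPE: s3
  RCLONE_CONFIG_S3_PROVIDER: minio
  RCLONE_CONFIG_S3_ENV_AUTH: "false"
  RCLONE_CONFIG_S3_ACCESS_KEY_ID: minioadmin
  RCLONE_CONFIG_S3_SECRET_ACCESS_KEY: minioadmin
  RCLONE_CONFIG_S3_ENDPOINT: http://host.docker.internal:9000
```

The `S3` segment in each key names the rclone remote, which is why model URIs take the form `s3://bucket/path`.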

Individual Services (2 services)

Copy individual services to mix and match with your existing compose files.

minio
minio:
  image: minio/minio:latest
  container_name: seldon-minio
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: minioadmin
    MINIO_ROOT_PASSWORD: minioadmin
  volumes:
    - minio_data:/data
  ports:
    - "9000:9000"
    - "9001:9001"
  networks:
    - seldon
redis
redis:
  image: redis:alpine
  container_name: seldon-redis
  ports:
    - "6379:6379"
  networks:
    - seldon

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
# Seldon Core requires Kubernetes
# This provides supporting services
services:
  minio:
    image: minio/minio:latest
    container_name: seldon-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      - seldon

  redis:
    image: redis:alpine
    container_name: seldon-redis
    ports:
      - "6379:6379"
    networks:
      - seldon

volumes:
  minio_data:

networks:
  seldon:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Full Seldon requires Kubernetes
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/seldon-core/run | bash

Troubleshooting

  • MinIO console shows 'Invalid Access Key': Verify MINIO_ROOT_USER and MINIO_ROOT_PASSWORD environment variables are set correctly and container has been restarted
  • Connection refused on port 9000: Check if MinIO service is running and ports are not blocked by firewall, use 'docker logs seldon-minio' to verify startup
  • Redis connection timeout from Seldon: Ensure Redis container is in the same Docker network and verify network connectivity with 'docker network inspect seldon'
  • MinIO bucket access denied: Create buckets through the web console at localhost:9001 and configure appropriate access policies for your use case
  • Large model upload failures: Increase MinIO client timeout settings and verify sufficient disk space in the minio_data volume
  • Redis memory issues during inference: Monitor Redis memory usage with 'docker stats' and consider adjusting Redis maxmemory configuration for your workload
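For the Redis memory issue above, one way to cap the cache is to override the Redis command in compose. This is a sketch; the 256mb limit is illustrative, and `allkeys-lru` simply evicts the least-recently-used keys once the cap is hit, which suits a pure inference cache.

```yaml
# Sketch: bound Redis memory for inference caching (values illustrative).
services:
  redis:
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
```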


