Rook-Ceph Development Stack
Cloud-native storage orchestration for Kubernetes, with Ceph providing block, object, and file storage.
Overview
Ceph is a distributed storage system that provides unified block, object, and file storage through a cluster of storage nodes. Originally developed by Sage Weil at UC Santa Cruz, Ceph eliminates single points of failure through its CRUSH algorithm, which distributes data across multiple nodes with built-in replication. The system consists of several key daemons: ceph-mon maintains cluster maps and consensus, ceph-osd handles the actual data storage and replication, ceph-mgr provides monitoring and management interfaces, and ceph-rgw offers an S3-compatible object storage gateway.
This development stack combines all essential Ceph components with Prometheus monitoring to create a complete software-defined storage environment. The ceph-mon daemon acts as the cluster's brain, maintaining authoritative copies of cluster maps, while multiple ceph-osd instances simulate distributed storage nodes for redundancy testing. The ceph-mgr daemon enables the web dashboard and exposes metrics, while ceph-rgw provides object storage compatibility with AWS S3 APIs. Prometheus integration captures detailed storage metrics, IOPS performance, and cluster health indicators.
Storage architects, Kubernetes platform engineers, and infrastructure developers building cloud-native applications will find this stack invaluable for prototyping distributed storage solutions. This combination enables testing of storage failure scenarios, capacity planning, and integration patterns before deploying to production Rook-Ceph clusters. The setup provides hands-on experience with Ceph's self-healing capabilities and performance characteristics under various load conditions.
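Once the stack defined below is running, those same daemons can be inspected directly from the monitor container. A quick sketch using docker compose exec; it assumes the shared ceph_etc volume already holds an admin keyring, which depends on how the images bootstrap in your environment:
terminal
# Overall health, OSD tree, and pool utilization
docker compose exec ceph-mon ceph status
docker compose exec ceph-mon ceph osd tree
docker compose exec ceph-mon ceph df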
Key Features
- Multi-daemon Ceph cluster with monitor, manager, and dual OSD configuration for replication testing
- Built-in Ceph Dashboard on port 7000 for visual cluster management and health monitoring
- S3-compatible RADOS Gateway supporting AWS SDK integration and bucket operations
- Native Prometheus metrics export from ceph-mgr for storage performance monitoring
- CRUSH map simulation with configurable placement groups and replication rules
- Block device support through RBD (RADOS Block Device) for persistent volume testing (see the RBD sketch after this list)
- Object storage pools with configurable erasure coding and compression algorithms
- Cluster health monitoring with automatic OSD failure detection and recovery simulation
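As referenced in the RBD feature above, block devices can be exercised from inside the running cluster. A hedged sketch; the pool and image names are illustrative, and it assumes admin credentials are available inside the mon container:
terminal
# Create a small replicated pool and initialize it for RBD
docker compose exec ceph-mon ceph osd pool create rbd-test 32 32 replicated
docker compose exec ceph-mon rbd pool init rbd-test
# Create and inspect a 1 GiB test image
docker compose exec ceph-mon rbd create rbd-test/vol1 --size 1024
docker compose exec ceph-mon rbd info rbd-test/vol1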
Common Use Cases
- Kubernetes persistent volume development with CSI driver integration testing
- S3-compatible object storage backend for applications requiring AWS S3 API compatibility (see the AWS CLI sketch after this list)
- Storage failure scenario testing and disaster recovery procedure validation
- Multi-tenant storage solutions with isolated pools and user access controls
- Container registry storage backend using Ceph's object storage capabilities
- Development of storage-intensive applications requiring block and object storage
- Performance benchmarking and capacity planning for production Ceph deployments
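For the S3-compatible backend use case above, a minimal sketch with the AWS CLI against the local gateway. The access and secret keys are placeholders taken from the radosgw-admin output shown in Troubleshooting, and the bucket name is arbitrary:
terminal
# Point the AWS CLI at the local RGW endpoint and exercise basic bucket operations
export AWS_ACCESS_KEY_ID=<access_key from radosgw-admin output>
export AWS_SECRET_ACCESS_KEY=<secret_key from radosgw-admin output>
echo "hello ceph" > hello.txt
aws --endpoint-url http://localhost:8080 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:8080 s3 cp hello.txt s3://demo-bucket/
aws --endpoint-url http://localhost:8080 s3 ls s3://demo-bucket/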
Prerequisites
- Docker host with minimum 4GB RAM and 20GB available disk space
- Basic understanding of distributed storage concepts and RADOS architecture
- Familiarity with Ceph cluster components and their interdependencies
- Network configuration allowing container communication on 172.29.0.0/16 subnet
- Available ports 7000 (dashboard), 8080 (RGW), 9090 (Prometheus), 9283 (metrics)
For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
docker-compose.yml
services:
  ceph-mon:
    image: ceph/ceph:latest
    hostname: ceph-mon
    environment:
      - MON_IP=172.29.0.2
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_mon_data:/var/lib/ceph/mon
    command: mon
    networks:
      ceph-net:
        ipv4_address: 172.29.0.2
    restart: unless-stopped

  ceph-mgr:
    image: ceph/ceph:latest
    hostname: ceph-mgr
    ports:
      - "7000:7000"   # Dashboard
      - "9283:9283"   # Prometheus metrics
    environment:
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_mgr_data:/var/lib/ceph/mgr
    command: mgr
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-osd-1:
    image: ceph/ceph:latest
    hostname: ceph-osd-1
    privileged: true
    environment:
      - OSD_DEVICE=/dev/sdb
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_osd1_data:/var/lib/ceph/osd
      - /dev:/dev
    command: osd_directory
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-osd-2:
    image: ceph/ceph:latest
    hostname: ceph-osd-2
    privileged: true
    environment:
      - OSD_DEVICE=/dev/sdc
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_osd2_data:/var/lib/ceph/osd
      - /dev:/dev
    command: osd_directory
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-rgw:
    image: ceph/ceph:latest
    hostname: ceph-rgw
    ports:
      - "8080:8080"
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_rgw_data:/var/lib/ceph/radosgw
    command: rgw
    depends_on:
      - ceph-mon
      - ceph-osd-1
      - ceph-osd-2
    networks:
      - ceph-net
    restart: unless-stopped

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks:
      - ceph-net
    restart: unless-stopped

volumes:
  ceph_etc:
  ceph_mon_data:
  ceph_mgr_data:
  ceph_osd1_data:
  ceph_osd2_data:
  ceph_rgw_data:
  prometheus_data:

networks:
  ceph-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.29.0.0/16
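The prometheus service mounts a ./prometheus.yml that is not included in the recipe itself. A minimal sketch of what it could contain, assuming you only want to scrape the ceph-mgr exporter on its default port (the job name and scrape interval are arbitrary):
prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: ceph
    # ceph-mgr serves metrics on 9283 once its prometheus module is enabled
    static_configs:
      - targets: ['ceph-mgr:9283']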
.env Template
.env
# Ceph Configuration
# Note: docker compose does not run $(...) command substitution when reading .env files.
# Generate these values once (uuidgen, ceph-authtool --gen-print-key) and paste the
# literal strings here before starting the stack.
CEPH_FSID=$(uuidgen)
CEPH_MON_KEY=$(ceph-authtool --gen-print-key)
CEPH_ADMIN_KEY=$(ceph-authtool --gen-print-key)

# Dashboard
CEPH_DASHBOARD_USER=admin
CEPH_DASHBOARD_PASSWORD=secure_ceph_password
Usage Notes
- Ceph Dashboard at http://localhost:7000 (see the endpoint checks below)
- S3-compatible gateway at http://localhost:8080
- Prometheus metrics at http://localhost:9283
- Development setup - not for production
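A few quick checks for the endpoints above, assuming the default port mappings from this compose file:
terminal
# ceph-mgr Prometheus exporter should return metric names
curl -s http://localhost:9283/metrics | head
# RGW should answer with an HTTP status line
curl -sI http://localhost:8080 | head -n 1
# Dashboard (may redirect or require HTTPS depending on mgr configuration)
curl -sI http://localhost:7000 | head -n 1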
Individual Services (6 services)
Copy individual services to mix and match with your existing compose files.
ceph-mon
ceph-mon:
image: ceph/ceph:latest
hostname: ceph-mon
environment:
- MON_IP=172.29.0.2
- CEPH_PUBLIC_NETWORK=172.29.0.0/16
volumes:
- ceph_etc:/etc/ceph
- ceph_mon_data:/var/lib/ceph/mon
command: mon
networks:
ceph-net:
ipv4_address: 172.29.0.2
restart: unless-stopped
ceph-mgr
ceph-mgr:
image: ceph/ceph:latest
hostname: ceph-mgr
ports:
- "7000:7000"
- "9283:9283"
environment:
- CEPH_PUBLIC_NETWORK=172.29.0.0/16
volumes:
- ceph_etc:/etc/ceph
- ceph_mgr_data:/var/lib/ceph/mgr
command: mgr
depends_on:
- ceph-mon
networks:
- ceph-net
restart: unless-stopped
ceph-osd-1
ceph-osd-1:
image: ceph/ceph:latest
hostname: ceph-osd-1
privileged: true
environment:
- OSD_DEVICE=/dev/sdb
- CEPH_PUBLIC_NETWORK=172.29.0.0/16
volumes:
- ceph_etc:/etc/ceph
- ceph_osd1_data:/var/lib/ceph/osd
- /dev:/dev
command: osd_directory
depends_on:
- ceph-mon
networks:
- ceph-net
restart: unless-stopped
ceph-osd-2
ceph-osd-2:
image: ceph/ceph:latest
hostname: ceph-osd-2
privileged: true
environment:
- OSD_DEVICE=/dev/sdc
- CEPH_PUBLIC_NETWORK=172.29.0.0/16
volumes:
- ceph_etc:/etc/ceph
- ceph_osd2_data:/var/lib/ceph/osd
- /dev:/dev
command: osd_directory
depends_on:
- ceph-mon
networks:
- ceph-net
restart: unless-stopped
ceph-rgw
ceph-rgw:
image: ceph/ceph:latest
hostname: ceph-rgw
ports:
- "8080:8080"
volumes:
- ceph_etc:/etc/ceph
- ceph_rgw_data:/var/lib/ceph/radosgw
command: rgw
depends_on:
- ceph-mon
- ceph-osd-1
- ceph-osd-2
networks:
- ceph-net
restart: unless-stopped
prometheus
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
networks:
- ceph-net
restart: unless-stopped
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  ceph-mon:
    image: ceph/ceph:latest
    hostname: ceph-mon
    environment:
      - MON_IP=172.29.0.2
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_mon_data:/var/lib/ceph/mon
    command: mon
    networks:
      ceph-net:
        ipv4_address: 172.29.0.2
    restart: unless-stopped

  ceph-mgr:
    image: ceph/ceph:latest
    hostname: ceph-mgr
    ports:
      - "7000:7000"   # Dashboard
      - "9283:9283"   # Prometheus metrics
    environment:
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_mgr_data:/var/lib/ceph/mgr
    command: mgr
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-osd-1:
    image: ceph/ceph:latest
    hostname: ceph-osd-1
    privileged: true
    environment:
      - OSD_DEVICE=/dev/sdb
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_osd1_data:/var/lib/ceph/osd
      - /dev:/dev
    command: osd_directory
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-osd-2:
    image: ceph/ceph:latest
    hostname: ceph-osd-2
    privileged: true
    environment:
      - OSD_DEVICE=/dev/sdc
      - CEPH_PUBLIC_NETWORK=172.29.0.0/16
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_osd2_data:/var/lib/ceph/osd
      - /dev:/dev
    command: osd_directory
    depends_on:
      - ceph-mon
    networks:
      - ceph-net
    restart: unless-stopped

  ceph-rgw:
    image: ceph/ceph:latest
    hostname: ceph-rgw
    ports:
      - "8080:8080"
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_rgw_data:/var/lib/ceph/radosgw
    command: rgw
    depends_on:
      - ceph-mon
      - ceph-osd-1
      - ceph-osd-2
    networks:
      - ceph-net
    restart: unless-stopped

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks:
      - ceph-net
    restart: unless-stopped

volumes:
  ceph_etc:
  ceph_mon_data:
  ceph_mgr_data:
  ceph_osd1_data:
  ceph_osd2_data:
  ceph_rgw_data:
  prometheus_data:

networks:
  ceph-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.29.0.0/16
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Ceph Configuration
CEPH_FSID=$(uuidgen)
CEPH_MON_KEY=$(ceph-authtool --gen-print-key)
CEPH_ADMIN_KEY=$(ceph-authtool --gen-print-key)

# Dashboard
CEPH_DASHBOARD_USER=admin
CEPH_DASHBOARD_PASSWORD=secure_ceph_password
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/rook-ceph-dev/run | bash
Troubleshooting
- MON_IP connectivity issues: Verify ceph-mon container can bind to 172.29.0.2 and check network subnet conflicts
- OSD activation failures: Ensure privileged mode is enabled and /dev directory mount is accessible for device simulation
- Dashboard login authentication: default admin credentials may need to be created manually via a ceph-mgr container exec (see the command sketch after this list)
- RGW S3 access denied errors: Create RGW user credentials using radosgw-admin commands within the ceph-rgw container
- Prometheus scraping failures: Confirm ceph-mgr prometheus module is enabled and metrics endpoint responds on port 9283
- Cluster health HEALTH_WARN: Check OSD count meets minimum replication requirements and placement group distribution
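A hedged set of recovery commands for the dashboard, metrics, and RGW items above; the user names and password are illustrative and should match your .env values:
terminal
# Enable mgr modules if the dashboard or the metrics endpoint is not responding
docker compose exec ceph-mgr ceph mgr module enable dashboard
docker compose exec ceph-mgr ceph mgr module enable prometheus
# Create a dashboard admin user (recent Ceph releases read the password from a file)
docker compose exec ceph-mgr bash -c 'echo secure_ceph_password > /tmp/pw && ceph dashboard ac-user-create admin -i /tmp/pw administrator'
# Create S3 credentials for the RADOS Gateway (keys are printed in the JSON output)
docker compose exec ceph-rgw radosgw-admin user create --uid=devuser --display-name="Dev User"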
Download Recipe Kit
Get all files in a ready-to-deploy package
Includes docker-compose.yml, .env template, README, and license
Components
ceph-mon, ceph-osd, ceph-mgr, ceph-rgw, prometheus
Tags
#rook #ceph #kubernetes #block-storage #object-storage
Category
Storage & Backup