Ceph Demo
Distributed storage system for development.
Overview
Ceph is a unified, distributed storage system designed to provide excellent performance, reliability, and scalability. Originally developed by Sage Weil at UC Santa Cruz and now maintained by a broad open-source community, Ceph provides object, block, and file storage in one unified cluster. It uses the CRUSH algorithm for data placement and features self-healing, self-managing capabilities that eliminate single points of failure. The system is built on RADOS (Reliable Autonomic Distributed Object Store), which handles data replication, failure detection, and recovery automatically across commodity hardware.
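The key idea behind CRUSH is that any client can compute where data lives instead of asking a central lookup table. Real CRUSH walks a weighted hierarchy of failure domains using straw2 buckets; the toy sketch below only illustrates the spirit of deterministic, table-free placement, using rendezvous (highest-random-weight) hashing over a flat list of hypothetical OSD names.

```python
# Toy sketch of deterministic, table-free data placement in the spirit
# of CRUSH. This is NOT the CRUSH algorithm itself: real CRUSH uses
# straw2 buckets over a weighted failure-domain hierarchy. Here we use
# rendezvous hashing over hypothetical OSD names for illustration.
import hashlib


def place(obj_name: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` OSDs for an object, deterministically."""
    def score(osd: str) -> int:
        h = hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(h, 16)
    # Highest-scoring OSDs win; every client computes the same answer.
    return sorted(osds, key=score, reverse=True)[:replicas]


osds = [f"osd.{i}" for i in range(6)]  # hypothetical 6-OSD cluster
print(place("my-object", osds))
```

A useful property to notice: removing an OSD that does not hold the object leaves its placement unchanged, which is why hash-based placement schemes remap so little data when the cluster topology changes.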
This Docker configuration deploys a complete Ceph demo environment within a single container, including Object Storage Daemons (OSD) for data storage, Metadata Server (MDS) for CephFS file system operations, and RADOS Gateway (RGW) for S3-compatible object storage access. The demo container simulates a full Ceph cluster topology while running all components on one node, making it perfect for understanding Ceph's architecture without the complexity of multi-node deployment. The configuration exposes both the S3 gateway interface and management dashboard for hands-on interaction with Ceph's storage capabilities.
This setup targets developers building cloud storage applications, system administrators evaluating distributed storage solutions, and students learning about software-defined storage architectures. The single-container approach makes Ceph accessible for rapid prototyping and testing scenarios where you need to understand how applications interact with object, block, and file storage interfaces. While never suitable for production workloads, this demo environment provides an authentic Ceph experience that translates directly to understanding real multi-node clusters.
Key Features
- Complete RADOS distributed object store with automatic data placement using CRUSH algorithm
- S3-compatible REST API through RADOS Gateway for seamless integration with cloud applications
- CephFS metadata server enabling POSIX-compliant distributed file system operations
- Built-in self-healing capabilities with automatic data replication and recovery simulation
- CRUSH map visualization and manipulation for understanding data distribution patterns
- Native support for multiple storage interfaces: object, block, and file in unified cluster
- Ceph dashboard interface providing cluster health monitoring and storage pool management
- Librados API access for direct application integration with Ceph storage primitives
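Because the RADOS Gateway speaks the standard S3 REST API, applications authenticate against it exactly as they would against AWS: with Signature Version 4. The sketch below shows the SigV4 signing-key derivation chain using only the standard library; the credential and string-to-sign values are made-up placeholders, not real demo defaults.

```python
# AWS Signature Version 4 key derivation, as used by S3 clients talking
# to the RGW endpoint. The secret key and string-to-sign below are
# made-up placeholder values for illustration.
import hashlib
import hmac


def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def signing_key(secret: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the per-day SigV4 signing key: date -> region -> service -> aws4_request."""
    k_date = sign(b"AWS4" + secret.encode(), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")


key = signing_key("demo-secret-key", "20240101", "us-east-1")
signature = hmac.new(key, b"string-to-sign", hashlib.sha256).hexdigest()
print(signature)  # 64 hex chars, sent in the request's Authorization header
```

In practice an S3 SDK (boto3, s3cmd, MinIO client, etc.) performs this for you; you only point it at `http://localhost:8080` and supply the access/secret keys the demo container generates.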
Common Use Cases
- Prototyping cloud applications requiring S3-compatible object storage before AWS deployment
- Learning distributed storage concepts and CRUSH algorithm behavior in a controlled environment
- Testing backup and archival workflows against the Ceph RGW S3 interface
- Developing applications that need to integrate with Ceph librados APIs
- Evaluating CephFS distributed file system performance characteristics
- Training environments for system administrators learning Ceph cluster management
- CI/CD pipeline testing for applications deploying to production Ceph clusters
Prerequisites
- Minimum 4GB RAM as Ceph daemons are memory-intensive even in demo mode
- Docker host with at least 20GB free disk space for Ceph data pools and metadata
- Host networking capability since Ceph demo uses network_mode: host for daemon communication
- Understanding of object storage concepts and S3 API fundamentals
- Basic knowledge of distributed systems and storage clustering principles
- Familiarity with RADOS, OSD, and MON concepts in Ceph architecture
For development & testing only. Review security settings, change default credentials, and test thoroughly before any production use.
docker-compose.yml
services:
  ceph-demo:
    image: quay.io/ceph/demo:latest
    container_name: ceph-demo
    environment:
      MON_IP: 127.0.0.1
      CEPH_PUBLIC_NETWORK: 0.0.0.0/0
      DEMO_DAEMONS: osd,mds,rgw
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_lib:/var/lib/ceph
    ports:
      - "8080:8080"
      - "5000:5000"
    network_mode: host

volumes:
  ceph_etc:
  ceph_lib:

.env Template
.env
# For development/testing only

Usage Notes
- Docs: https://docs.ceph.com/
- Demo mode for development/testing only - NOT FOR PRODUCTION
- S3 gateway (RGW) on port 8080
- Includes OSD, MDS, RGW daemons in single container
- For production: deploy on bare metal with multiple nodes
- Dashboard at http://localhost:5000 (if enabled)
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  ceph-demo:
    image: quay.io/ceph/demo:latest
    container_name: ceph-demo
    environment:
      MON_IP: 127.0.0.1
      CEPH_PUBLIC_NETWORK: 0.0.0.0/0
      DEMO_DAEMONS: osd,mds,rgw
    volumes:
      - ceph_etc:/etc/ceph
      - ceph_lib:/var/lib/ceph
    ports:
      - "8080:8080"
      - "5000:5000"
    network_mode: host

volumes:
  ceph_etc:
  ceph_lib:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# For development/testing only
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/ceph-demo/run | bash

Troubleshooting
- Container exits with 'MON_IP bind failed': Ensure port 6789 is not in use by other services on host
- S3 operations return 'connection refused' on port 8080: Wait 2-3 minutes for RGW daemon to fully initialize after container start
- Dashboard not accessible on port 5000: Dashboard may not be enabled in demo mode, check container logs for mgr module status
- OSD creation fails with 'no space left': Increase Docker daemon storage or prune unused volumes and images
- CRUSH map errors in logs: Normal in single-node demo as CRUSH expects multiple failure domains
- High memory usage warnings: Expected behavior as all Ceph daemons run in single container, monitor with docker stats
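Since the RGW daemon can take a couple of minutes to come up after `docker compose up -d`, a readiness poll is handy before running S3 tests against it. A minimal stdlib sketch, assuming the compose file's default port mapping; any HTTP response (even a 403 to an anonymous request) means the gateway is listening:

```python
# Poll the RGW S3 endpoint until it answers, or give up after a timeout.
# The default URL assumes the compose file's "8080:8080" port mapping.
import time
import urllib.error
import urllib.request


def wait_for_rgw(url: str = "http://127.0.0.1:8080/", timeout: float = 180.0) -> bool:
    """Return True once the endpoint responds, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True  # got a 2xx/3xx response: RGW is up
        except urllib.error.HTTPError:
            return True  # an HTTP error (e.g. 403) still means it is listening
        except (urllib.error.URLError, OSError):
            time.sleep(2)  # connection refused: daemon not ready yet, retry
    return False
```

Usage: call `wait_for_rgw()` at the top of an integration-test suite and skip or fail fast when it returns False, rather than letting every S3 call time out individually.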