SeaweedFS Distributed Storage
Fast distributed object storage system for billions of files.
Overview
SeaweedFS is a highly scalable distributed object storage system designed to handle billions of files efficiently. Originally developed by Chris Lu, it combines the simplicity of file systems with the scalability of distributed storage, offering both POSIX-compatible file access and S3-compatible object storage APIs. The system achieves exceptional performance by separating metadata management from data storage: following the design of Facebook's Haystack paper, the master tracks volumes rather than individual files, so per-file lookups never bottleneck on a centralized metadata server.
This distributed storage stack consists of four essential components working in concert: seaweedfs-master handles cluster coordination and volume management, multiple seaweedfs-volume servers provide the actual data storage capacity, seaweedfs-filer offers POSIX-like file system semantics and acts as a metadata layer, and seaweedfs-s3 provides Amazon S3-compatible API access. The master server maintains the global view of all volumes and their locations, while volume servers store the actual file chunks. The filer enables hierarchical file organization and supports various metadata stores, while the S3 gateway allows existing S3-compatible applications to use SeaweedFS without modification.
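To make the division of labor concrete, here is the classic two-step write path as a hedged sketch against this recipe's ports (the file ID shown is illustrative; use whatever your assign call returns). Note that the volume servers in this compose file advertise their container names, so the returned upload URL resolves inside the Docker network but not necessarily from the host.
terminal
# Step 1: ask the master to assign a file ID and a volume location
curl "http://localhost:9333/dir/assign"
# => {"fid":"3,01637037d6","url":"volume1:8080","publicUrl":"volume1:8080","count":1}

# Step 2: upload directly to the returned volume server using the assigned fid
# (run from inside the Docker network, since volume1 is a container name)
curl -F file=@photo.jpg "http://volume1:8080/3,01637037d6"

# Read the file back with the same fid
curl -o photo.jpg "http://volume1:8080/3,01637037d6"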
This configuration is ideal for organizations requiring petabyte-scale storage with high throughput and low latency, particularly those dealing with large numbers of small to medium-sized files. Unlike traditional distributed file systems that struggle with metadata bottlenecks, SeaweedFS excels in scenarios involving massive file counts while maintaining simplicity in deployment and operation. It's particularly valuable for content delivery networks, backup systems, data lakes, and any application requiring both file system and object storage interfaces.
Key Features
- Automatic volume management with configurable size limits and replication policies (a replication example follows this list)
- Dual storage interfaces supporting both POSIX file operations and S3-compatible object storage
- Horizontal scalability allowing dynamic addition of volume servers without downtime
- Efficient small file handling through volume-based storage architecture eliminating metadata bottlenecks
- Built-in data deduplication and compression to optimize storage utilization
- Cross-datacenter replication support for geographic distribution and disaster recovery
- Erasure coding options for space-efficient data protection across multiple volumes
- Real-time cluster monitoring and management through web-based administrative interfaces
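The replication policies mentioned above can be set cluster-wide with the master's -defaultReplication flag; its three digits count extra copies in other datacenters, on other racks, and on other servers in the same rack, respectively. A sketch for this two-volume-server recipe (001 keeps one extra copy on another server):
master:
  image: chrislusf/seaweedfs:latest
  container_name: seaweedfs-master
  # 001 = 0 copies in other datacenters, 0 on other racks,
  # 1 on another server (requires at least 2 volume servers)
  command: master -ip=master -volumeSizeLimitMB=100 -defaultReplication=001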
Common Use Cases
- Content delivery networks requiring fast access to millions of images, videos, and static assets
- Backup and archival systems handling large volumes of incremental backups and snapshots
- Data lakes storing diverse datasets for analytics workloads with mixed access patterns
- Container image registries needing efficient storage and distribution of Docker images
- Media processing pipelines handling video transcoding and image manipulation workflows
- IoT data ingestion systems collecting and storing sensor data from thousands of devices
- Development environments requiring S3-compatible storage for testing cloud applications locally
Prerequisites
- At least 4GB RAM for the master server and 2GB per volume server recommended for optimal performance
- Available ports 9333, 19333 for master services, 8888, 18888 for filer, and 8333 for S3 API
- Understanding of distributed storage concepts including replication, consistency, and volume management
- S3 configuration file (s3.json) defining identities and credentials for the S3 gateway (a starter template is sketched after this list)
- Sufficient disk space across volume servers based on expected data growth and replication factor
- Network connectivity between all nodes with low latency for optimal cluster communication
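A minimal way to create the s3.json mentioned above, using the same identity format shown in the .env template later on this page (replace the example keys with your own):
terminal
cat > s3.json << 'EOF'
{
  "identities": [
    {
      "name": "admin",
      "credentials": [
        {"accessKey": "admin", "secretKey": "adminpassword"}
      ],
      "actions": ["Admin", "Read", "Write"]
    }
  ]
}
EOF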
For development & testing. Review security settings, change the default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  master:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-master
    command: master -ip=master -volumeSizeLimitMB=100
    ports:
      - "9333:9333"
      - "19333:19333"
    volumes:
      - master-data:/data
    networks:
      - seaweedfs-network
    restart: unless-stopped

  volume1:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-volume1
    command: volume -mserver="master:9333" -ip=volume1 -port=8080 -max=100
    volumes:
      - volume1-data:/data
    depends_on:
      - master
    networks:
      - seaweedfs-network
    restart: unless-stopped

  volume2:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-volume2
    command: volume -mserver="master:9333" -ip=volume2 -port=8080 -max=100
    volumes:
      - volume2-data:/data
    depends_on:
      - master
    networks:
      - seaweedfs-network
    restart: unless-stopped

  filer:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-filer
    command: filer -master="master:9333" -ip=filer
    ports:
      - "8888:8888"
      - "18888:18888"
    volumes:
      - filer-data:/data
    depends_on:
      - master
      - volume1
      - volume2
    networks:
      - seaweedfs-network
    restart: unless-stopped

  s3:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-s3
    command: s3 -filer="filer:8888" -config=/etc/seaweedfs/s3.json
    ports:
      - "8333:8333"
    volumes:
      - ./s3.json:/etc/seaweedfs/s3.json:ro
    depends_on:
      - filer
    networks:
      - seaweedfs-network
    restart: unless-stopped

volumes:
  master-data:
  volume1-data:
  volume2-data:
  filer-data:

networks:
  seaweedfs-network:
    driver: bridge

.env Template
.env
# SeaweedFS
# Create s3.json for S3 credentials:
# {
#   "identities": [
#     {
#       "name": "admin",
#       "credentials": [
#         {"accessKey": "admin", "secretKey": "adminpassword"}
#       ],
#       "actions": ["Admin", "Read", "Write"]
#     }
#   ]
# }

Usage Notes
- Master UI at http://localhost:9333
- Filer at http://localhost:8888
- S3 API at http://localhost:8333
- High-performance object storage
- Supports file upload via HTTP (smoke tests sketched below)
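A few smoke tests against the endpoints above, hedged as a sketch (the directory, bucket name, and test file are placeholders; the last commands assume the aws CLI is installed and use the credentials from s3.json):
terminal
# Master: volume topology, then master cluster status
curl http://localhost:9333/dir/status
curl http://localhost:9333/cluster/status

# Filer: upload a file over HTTP, then list the directory as JSON
echo "hello" > test.txt
curl -F file=@test.txt http://localhost:8888/demo/
curl -H "Accept: application/json" "http://localhost:8888/demo/?pretty=y"

# S3 gateway: create a bucket and copy the file into it
export AWS_ACCESS_KEY_ID=admin AWS_SECRET_ACCESS_KEY=adminpassword
aws --endpoint-url http://localhost:8333 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:8333 s3 cp test.txt s3://demo-bucket/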
Individual Services (5 services)
Copy individual services to mix and match with your existing compose files.
master
master:
image: chrislusf/seaweedfs:latest
container_name: seaweedfs-master
command: master -ip=master -volumeSizeLimitMB=100
ports:
- "9333:9333"
- "19333:19333"
volumes:
- master-data:/data
networks:
- seaweedfs-network
restart: unless-stopped
volume1
volume1:
image: chrislusf/seaweedfs:latest
container_name: seaweedfs-volume1
command: volume -mserver="master:9333" -ip=volume1 -port=8080 -max=100
volumes:
- volume1-data:/data
depends_on:
- master
networks:
- seaweedfs-network
restart: unless-stopped
volume2
volume2:
image: chrislusf/seaweedfs:latest
container_name: seaweedfs-volume2
command: volume -mserver="master:9333" -ip=volume2 -port=8080 -max=100
volumes:
- volume2-data:/data
depends_on:
- master
networks:
- seaweedfs-network
restart: unless-stopped
filer
filer:
image: chrislusf/seaweedfs:latest
container_name: seaweedfs-filer
command: filer -master="master:9333" -ip=filer
ports:
- "8888:8888"
- "18888:18888"
volumes:
- filer-data:/data
depends_on:
- master
- volume1
- volume2
networks:
- seaweedfs-network
restart: unless-stopped
s3
s3:
image: chrislusf/seaweedfs:latest
container_name: seaweedfs-s3
command: s3 -filer="filer:8888" -config=/etc/seaweedfs/s3.json
ports:
- "8333:8333"
volumes:
- ./s3.json:/etc/seaweedfs/s3.json:ro
depends_on:
- filer
networks:
- seaweedfs-network
restart: unless-stopped
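The overview notes that the filer provides POSIX-like access. As a sketch, assuming the weed binary from a SeaweedFS release is installed on the host and FUSE is available (the mount directory is a placeholder), the filer can be mounted as a local directory:
terminal
mkdir -p ./seaweedfs-mnt
weed mount -filer=localhost:8888 -dir=./seaweedfs-mnt
# Files copied here are stored through the filer:
cp photo.jpg ./seaweedfs-mnt/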
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  master:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-master
    command: master -ip=master -volumeSizeLimitMB=100
    ports:
      - "9333:9333"
      - "19333:19333"
    volumes:
      - master-data:/data
    networks:
      - seaweedfs-network
    restart: unless-stopped

  volume1:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-volume1
    command: volume -mserver="master:9333" -ip=volume1 -port=8080 -max=100
    volumes:
      - volume1-data:/data
    depends_on:
      - master
    networks:
      - seaweedfs-network
    restart: unless-stopped

  volume2:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-volume2
    command: volume -mserver="master:9333" -ip=volume2 -port=8080 -max=100
    volumes:
      - volume2-data:/data
    depends_on:
      - master
    networks:
      - seaweedfs-network
    restart: unless-stopped

  filer:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-filer
    command: filer -master="master:9333" -ip=filer
    ports:
      - "8888:8888"
      - "18888:18888"
    volumes:
      - filer-data:/data
    depends_on:
      - master
      - volume1
      - volume2
    networks:
      - seaweedfs-network
    restart: unless-stopped

  s3:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-s3
    command: s3 -filer="filer:8888" -config=/etc/seaweedfs/s3.json
    ports:
      - "8333:8333"
    volumes:
      - ./s3.json:/etc/seaweedfs/s3.json:ro
    depends_on:
      - filer
    networks:
      - seaweedfs-network
    restart: unless-stopped

volumes:
  master-data:
  volume1-data:
  volume2-data:
  filer-data:

networks:
  seaweedfs-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# SeaweedFS
# Create s3.json for S3 credentials:
# {
#   "identities": [
#     {
#       "name": "admin",
#       "credentials": [
#         {"accessKey": "admin", "secretKey": "adminpassword"}
#       ],
#       "actions": ["Admin", "Read", "Write"]
#     }
#   ]
# }
EOF

# 3. Create s3.json (the s3 service mounts it; replace the example keys)
cat > s3.json << 'EOF'
{
  "identities": [
    {
      "name": "admin",
      "credentials": [
        {"accessKey": "admin", "secretKey": "adminpassword"}
      ],
      "actions": ["Admin", "Read", "Write"]
    }
  ]
}
EOF

# 4. Start the services
docker compose up -d

# 5. View logs
docker compose logs -f

One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/seaweedfs-distributed/run | bash

Troubleshooting
- Master server fails to start with 'bind: address already in use': Check if ports 9333 or 19333 are occupied by other services and modify port mappings if necessary
- Volume servers cannot connect to master: Verify network connectivity between containers and ensure master container is fully started before volume servers attempt connection
- S3 API returns 'filer not accessible' errors: Confirm s3.json configuration file exists and contains correct filer endpoint, and verify filer service is running and accessible
- File uploads fail with 'no writable volumes' error: Check volume server capacity limits and increase the -max parameter or add additional volume servers to the cluster (see the diagnostic commands after this list)
- Filer web interface shows empty directories: Ensure volume servers are properly registered with master and check volume server logs for storage backend issues
- High memory usage on master server: Master memory grows with the number of volumes it tracks, so a very small -volumeSizeLimitMB produces many volumes; consider raising the limit, and in production run the master on a dedicated node (additional masters form a Raft cluster for availability rather than spreading memory load)
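Hedged diagnostic commands for the issues above (service names match this compose file; wget availability inside the image is an assumption):
terminal
# Follow logs for a specific service
docker compose logs -f master
docker compose logs -f volume1

# Volume topology: which volume servers registered, and volume counts
curl http://localhost:9333/dir/status

# Master cluster status
curl http://localhost:9333/cluster/status

# Confirm the S3 gateway can reach the filer from inside its container
docker compose exec s3 wget -qO- http://filer:8888/ | head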
Components
seaweedfs-master, seaweedfs-volume, seaweedfs-filer, seaweedfs-s3
Tags
#storage #object-storage #seaweedfs #distributed #s3
Category
Storage & Backup