Rclone Web UI
Cloud storage sync tool with web interface.
Overview
Rclone is a command-line program developed by Nick Craig-Wood that manages files on cloud storage, often called "rsync for cloud storage." Since 2012, it has become the go-to solution for syncing, copying, and mounting files across over 70 cloud storage providers, including Amazon S3, Google Drive, Dropbox, OneDrive, and Backblaze B2. Its feature set covers encryption, compression, bandwidth limiting, and detailed progress reporting, making it a staple for data migration, backup strategies, and cloud storage management.

The web interface (rcd) provides a browser-based control panel that exposes rclone's functionality through a GUI, removing the need for complex command-line operations while keeping full access to advanced features. This containerized setup runs the rclone daemon with the web GUI enabled, creating a persistent service that can manage multiple cloud storage remotes simultaneously. The configuration combines rclone's sync engine with HTTP-based remote control, letting users configure storage providers, monitor transfer progress, and execute complex operations from a web browser.

This deployment suits organizations that need centralized cloud storage management without requiring technical staff to master rclone's extensive command-line interface. Persistent configuration storage and an always-available web interface make it a good fit for environments where multiple team members need access to cloud storage operations, automated backup scheduling, and real-time monitoring of transfers across different cloud providers.
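Everything the web GUI does goes through rclone's remote-control (rc) HTTP API, which can also be called directly. A minimal sketch with curl, assuming the container from this recipe is running on the default port with the admin/changeme credentials from the .env template below:

```shell
# Query the rc API the web GUI is built on (assumes the container from this
# recipe is running with the default credentials; rc endpoints take POST).

# Version of rclone running in the container
curl -s -u admin:changeme -X POST http://localhost:5572/core/version

# List configured remotes
curl -s -u admin:changeme -X POST http://localhost:5572/config/listremotes

# Kick off an asynchronous server-side copy; returns a job id to poll
curl -s -u admin:changeme -X POST http://localhost:5572/sync/copy \
  -H "Content-Type: application/json" \
  -d '{"srcFs": "source:bucket", "dstFs": "dest:bucket", "_async": true}'
```

The `source:bucket` and `dest:bucket` names are placeholders for remotes you have configured.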
Key Features
- Web-based configuration wizard for 70+ cloud storage providers including S3-compatible services, Google Workspace, Microsoft 365, and specialized providers
- Real-time transfer monitoring with bandwidth graphs, file counts, progress bars, and detailed logging through the browser interface
- Built-in file browser with drag-and-drop uploads, downloads, and direct cloud-to-cloud transfers without local storage requirements
- Advanced sync options including checksum verification, partial file resume, deduplication, and client-side encryption with multiple cipher options
- Scheduled job management with cron-like syntax for automated backups, syncs, and cleanup operations across multiple cloud remotes
- Mount operations that present cloud storage as local filesystems with caching, prefetching, and write-back capabilities
- Server protocols including WebDAV, SFTP, HTTP, and FTP for exposing cloud storage to other applications and services
- Bandwidth limiting, connection pooling, and retry logic with exponential backoff for reliable transfers over unstable connections
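The bandwidth-limiting and retry features above correspond to CLI flags, which can also be set per job from the web UI. A sketch with illustrative values; the remote name `b2-backup` is a placeholder:

```shell
# Illustrative sync using the reliability features listed above.
rclone sync /data/photos b2-backup:photos \
  --bwlimit "08:00,512k 23:00,off" \
  --transfers 4 \
  --retries 5 \
  --low-level-retries 20 \
  --checksum \
  --progress
# --bwlimit:           throttle to 512 KiB/s during the day, unlimited overnight
# --retries:           re-run the whole sync up to 5 times on failure
# --low-level-retries: retry individual operations (e.g. one chunk) up to 20 times
# --checksum:          compare checksums instead of size+modtime where supported
```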
Common Use Cases
- Media production companies syncing large video files between local NAS storage and cloud providers like Backblaze B2 or AWS S3 Glacier
- IT departments migrating mailboxes and documents from one cloud provider to another (Office 365 to Google Workspace) without downloading locally
- Photography studios automatically backing up client shoots to multiple cloud storage providers with encryption and deduplication
- Development teams distributing build artifacts and releases across CDN providers and cloud storage buckets from a central interface
- Small businesses creating scheduled backups of accounting data, customer files, and project documents to encrypted cloud storage
- Home lab enthusiasts mounting cloud storage as network drives for Plex media servers, backup destinations, and file sharing
- Research organizations synchronizing datasets between on-premises storage and cloud compute platforms for data analysis workflows
Prerequisites
- Minimum 512MB RAM for basic operations, 2GB+ recommended for large file transfers and multiple simultaneous sync jobs
- Port 5572 available for the web interface, with firewall rules configured if accessing remotely
- Valid accounts and API credentials for target cloud storage providers (API keys, OAuth tokens, service account files)
- Understanding of cloud storage concepts like buckets, objects, regions, and storage classes for proper remote configuration
- SSL certificate and reverse proxy setup recommended for production deployments with external access
- Sufficient disk space in the data volume for temporary files during large transfers and chunked uploads
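A small pre-flight script can check the port and disk-space prerequisites above before first start. The paths and thresholds here are examples, not requirements:

```shell
# Pre-flight checks: free space in the host path mounted at /data,
# and whether anything already listens on the web UI port.
DATA_DIR=/tmp          # replace with your /path/to/data
MIN_FREE_MB=1024       # headroom for temporary files and chunked uploads

free_mb=$(df -Pm "$DATA_DIR" | awk 'NR==2 {print $4}')
if [ "$free_mb" -lt "$MIN_FREE_MB" ]; then
  echo "WARNING: only ${free_mb}MB free in $DATA_DIR"
else
  echo "OK: ${free_mb}MB free in $DATA_DIR"
fi

if command -v ss >/dev/null 2>&1 && ss -ltn | grep -q ':5572 '; then
  echo "WARNING: port 5572 already in use"
else
  echo "OK: port 5572 looks free"
fi
```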
This recipe is intended for development and testing. Review security settings, change the default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  rclone:
    image: rclone/rclone:latest
    container_name: rclone
    restart: unless-stopped
    command: rcd --rc-web-gui --rc-addr=:5572 --rc-user=admin --rc-pass=${RCLONE_PASSWORD}
    volumes:
      - rclone_config:/config/rclone
      - /path/to/data:/data
    ports:
      - "5572:5572"

volumes:
  rclone_config:

.env Template
.env
RCLONE_PASSWORD=changeme

Usage Notes
- Docs: https://rclone.org/docs/
- Web UI at http://localhost:5572 (admin / RCLONE_PASSWORD)
- Supports 70+ cloud providers: S3, Google Drive, Dropbox, OneDrive, etc.
- Configure remotes: rclone config (or in web UI)
- Operations: sync, copy, move, mount, serve (WebDAV, HTTP, FTP)
- CLI: rclone sync /local remote:bucket --progress
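Remotes can also be created non-interactively with `rclone config create`, run inside the container so the result lands in the persisted rclone_config volume. The remote name and credential values below are placeholders:

```shell
# Create an S3-compatible remote inside the running container.
# "mys3" and the credential values are placeholders.
docker exec rclone rclone config create mys3 s3 \
  provider=AWS \
  access_key_id=EXAMPLEKEY \
  secret_access_key=EXAMPLESECRET \
  region=us-east-1

# Confirm it was written to the persisted config
docker exec rclone rclone listremotes
```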
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  rclone:
    image: rclone/rclone:latest
    container_name: rclone
    restart: unless-stopped
    command: rcd --rc-web-gui --rc-addr=:5572 --rc-user=admin --rc-pass=${RCLONE_PASSWORD}
    volumes:
      - rclone_config:/config/rclone
      - /path/to/data:/data
    ports:
      - "5572:5572"

volumes:
  rclone_config:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
RCLONE_PASSWORD=changeme
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/rclone-webui/run | bash

Troubleshooting
- 401 Unauthorized errors when accessing remotes: Refresh OAuth tokens through the web interface or reconfigure API credentials with proper permissions
- Web UI shows 'Failed to load config' on startup: Check that the rclone_config volume has proper permissions and the container can write to /config/rclone directory
- Sync operations fail with 'chunk upload failed': Increase chunk size in advanced settings or enable resume for large files, especially with slower connections
- Google Drive transfers hit quota limits: Configure service account with domain-wide delegation or enable exponential backoff with longer retry intervals
- Mount operations show stale data: Adjust cache settings like dir-cache-time and vfs-cache-mode for your specific access patterns and consistency requirements
- High memory usage during large transfers: Enable streaming uploads, reduce --transfers concurrency, and configure appropriate buffer sizes for your available RAM
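For the last issue above, a sketch of memory-conscious flags for large transfers; the values are starting points, not tuned recommendations, and `big-remote` is a placeholder:

```shell
# Memory-conscious settings for large transfers.
rclone copy /data big-remote:bucket \
  --transfers 2 \
  --buffer-size 16M \
  --s3-chunk-size 8M \
  --use-mmap
# --transfers:     fewer parallel transfers means fewer in-flight buffers
# --buffer-size:   per-file read-ahead buffer (16M is the default; lower if RAM is tight)
# --s3-chunk-size: smaller multipart chunks, applies to S3-compatible remotes
# --use-mmap:      allocate buffers via mmap so freed memory returns to the OS
```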