A full self-hosted homelab stack that I have been actively running and refining for the past four years. Everything runs in Docker, managed with individual Compose files per service, systemd units for auto-start, and a handful of automation scripts to keep things ticking without much manual effort.
I recently made this public so other people can use it as a reference or a starting point for their own setup. It is a real, actively deployed stack, not a demo.
This repo is also where the GitHub Actions automation lives, which uses Gemini CLI to review pull requests, triage issues, and handle general tasks on demand.
Paths like /home/raspberrypi/dockerapps/... are hardcoded throughout the Compose files. That is intentional. This is my personal project and these are the actual paths on my machines. If you are adapting this for your own use, do a find-and-replace on /home/raspberrypi with your own home directory and you are good to go.
This stack runs across two machines:
Raspberry Pi — the primary home server. Handles most of the always-on services: media, home automation, DNS, downloads, and local utilities.
Oracle Cloud ARM A1 — a free-tier ARM instance used for services that benefit from being publicly reachable without depending on home internet uptime. Oracle's ARM A1 instances are surprisingly capable for a free tier and handle the workload well.
Both machines run the same Docker Compose structure and the same systemd service pattern, so adding a service on either machine is straightforward.
Services are exposed publicly through one of three methods depending on the use case:
Pangolin is the primary method for secure remote access. It uses WireGuard tunnels to connect origin servers to a public endpoint without requiring open inbound ports. Traefik handles routing and SSL. This is the preferred setup for anything sensitive.
Cloudflare Tunnels are used as an alternative for services where Pangolin is not in play. The tunnel daemon runs on the machine and traffic comes in via Cloudflare's edge, so again no open inbound ports are needed.
Nginx Proxy Manager handles local network reverse proxying and SSL for services that are only accessed within the home network or already sit behind one of the above two methods.
There are roughly 40 services split across categories. The networking stack handles secure remote access without exposing raw ports to the internet. The media stack handles everything from requesting to downloading to playing. The automation layer handles home devices and workflows. The rest covers file management, monitoring, backups, and utilities.
All services share a single external Docker network called nginx-network. Most go through Nginx Proxy Manager or Pangolin for external access. DNS ad-blocking runs at the network level via AdGuardHome.
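As an illustration of that convention, a minimal per-service Compose file might look like the sketch below. The `whoami` service is a hypothetical stand-in; only the `nginx-network` name and the `configs/` path pattern come from this repo.

```yaml
services:
  whoami:                      # hypothetical example service
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    volumes:
      # configs live outside the repo, per the layout described below
      - /home/raspberrypi/dockerapps/configs/whoami:/config
    networks:
      - nginx-network

networks:
  nginx-network:
    external: true             # created once with `docker network create`
```

Because the network is declared `external`, every service joins the same pre-existing bridge instead of creating its own, which is what lets Nginx Proxy Manager and Pangolin reach the containers by name.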
- A Linux host running Docker and Docker Compose v2 (tested on Raspberry Pi OS and Oracle Linux ARM)
- An external Docker network already created:

  ```
  docker network create nginx-network
  ```

- A domain name with DNS pointed at your server or VPS (for Pangolin and SSL)
- GitHub CLI (`gh`) installed and authenticated, for the backup script
- `systemd` for service auto-start on boot
dockerapps/
├── <service>/
│ └── docker-compose.yml # One Compose file per service
├── pangolin/
│ ├── docker-compose.yml # Pangolin + Gerbil + Traefik + CrowdSec
│ └── .env # Cloudflare token + CrowdSec bouncer key
├── services/
│ └── docker-<name>.service # Systemd unit for each service
├── scripts/
│ ├── backup-configs-to-release.sh
│ ├── setup-backup.sh
│ └── update-traefik-plugins.sh
├── .github/
│ ├── workflows/ # GitHub Actions workflows
│ └── commands/ # Gemini CLI prompt definitions
└── renovate.json # Automated dependency updates
Config files for each service live outside the repo at /home/raspberrypi/dockerapps/configs/ and are excluded from git via .gitignore. They get backed up separately to GitHub Releases (see the backup section below).
Pangolin (ports 80, 443, 51820 UDP, 21820 UDP)
A self-hosted tunnelled reverse proxy with identity-aware access control. Think of it as a fully self-hosted Cloudflare Tunnels alternative. The stack here runs four containers together: Pangolin as the control plane, Gerbil for WireGuard interface management, Traefik as the reverse proxy with automatic Let's Encrypt SSL, and CrowdSec for threat detection at the edge. Remote services can be exposed securely through encrypted WireGuard tunnels without any open ports on the origin server. Cloudflare DNS API is used for certificate issuance via the .env file.
Nginx Proxy Manager (ports 80, 81, 443) Web UI for managing reverse proxy rules and SSL certificates on the local network. Port 81 is the admin panel.
AdGuardHome (port 53 TCP/UDP, 3000, 853)
Network-level DNS ad and tracker blocker. Runs in network_mode: host so it can bind directly to port 53. Also exposes DNS-over-TLS on 853 and DNS-over-QUIC.
WireGuard Easy (ports 51821 UDP, 51822 TCP)
WireGuard VPN with a clean web UI for managing peers. Sits behind Nginx Proxy Manager. Configured with INSECURE=true since it runs behind a reverse proxy that handles HTTPS.
Authelia (port 8091) Single sign-on and two-factor authentication portal. Protects services behind Nginx Proxy Manager.
CrowdSec (port 8080) Intrusion detection and prevention. Runs as a standalone instance separate from the Pangolin stack for local firewall bouncing. Ingests auth logs, Traefik access logs, and Docker container logs. The bouncer key connects it to the Pangolin/Traefik stack.
Byparr (port 8191) A drop-in FlareSolverr replacement that handles Cloudflare challenge bypassing for indexers like Prowlarr and Jackett that need it.
Plex (port 32400)
Media server. Libraries are mapped from /home/raspberrypi/Downloads and a shared camera recordings folder.
Sonarr (port 8989) TV show collection manager. Monitors RSS feeds and automates downloading. Uses the nightly image.
Radarr (port 7878) Same as Sonarr but for films. Also on the nightly image.
Bazarr (port 6767) Subtitle management for Sonarr and Radarr. Automatically fetches subtitles from configured providers.
Prowlarr (port 9696) Centralised indexer manager. Syncs indexer configurations to Sonarr, Radarr, and other apps automatically. On the nightly image.
Jackett (port 9117) Legacy torrent indexer proxy. Kept alongside Prowlarr for indexers not yet supported there.
Overseerr (port 5055) Media request interface for Plex. Users can request films and TV shows and Overseerr pushes them to Radarr/Sonarr automatically. On the develop image.
Tautulli (port 8181) Plex monitoring and analytics. Tracks play history, user activity, and notifications.
Autobrr (port 7474) IRC announce monitoring and automated torrent grabbing. Works alongside Prowlarr for speed-critical releases.
Recyclarr Syncs quality profiles and custom formats from the TRaSH Guides to Sonarr and Radarr automatically. Runs on a schedule, no web UI.
Watchtower (no exposed port)
Checks for and applies image updates daily at 04:40. Configured to also revive stopped containers after updates and clean up old images. Uses a dev fork (nickfedor/watchtower) that has extra features.
qBittorrent (port 8082, torrenting on 6881)
Primary torrent client. Mapped with the VueTorrent UI at /VueTorrent for a cleaner interface. Uses the libtorrentv1 image variant for compatibility.
Deluge (port 8112) Secondary torrent client kept as a backup option.
MeTube (port 8081)
Web frontend for yt-dlp. Downloads video and audio from YouTube and other supported sites directly to /home/raspberrypi/Downloads. Cookie support is configured for age-gated or restricted content.
Cobalt (API on 9000, web on 9700) Another media downloader focused on clean URL-based downloading. Runs as two containers: an API backend and a web frontend.
Downtify (port 8002) Spotify track downloader. Port 8002 is used because 8000 is taken by PyLoad.
PyLoad (port 8000) General-purpose download manager with a web UI. Supports direct links, file hosters, and more.
Home Assistant (port 8123)
Full home automation platform. Config lives at /configs/homeassistant. Shares a camera recordings volume with Plex. An optional hacs-init profile service is included to install HACS in one shot.
Node-RED (port 1880) Flow-based visual programming for automation. Connected to the shared camera recordings folder as read-only.
Mosquitto (port 1883, WebSocket on 9001) Eclipse Mosquitto MQTT broker. Used by Home Assistant and Node-RED for IoT device messaging.
File Browser (port 6969)
Web-based file manager with full access to the host filesystem (/:/data). Uses the beta image from gtstef/filebrowser.
CopyParty (ports 3923, 3921, 3969 UDP, 3945, 12000-12099)
Feature-rich file server with FTP, TFTP, and WebDAV support. Also runs audio BPM and key analysis via custom media tag scripts. Accessible at the /copyparty path.
Calibre-Web (port 8083)
eBook library manager and reader. Library is mapped from /home/raspberrypi/Downloads/books. Includes the universal Calibre DOCKER mod.
ConvertX (port 3030) Self-hosted file converter supporting a wide range of formats. Runs up to four concurrent conversions. Files auto-delete after 24 hours.
Vert (port 3002) Another browser-based file converter. Lightweight, no accounts needed.
BentoPDF (port 3800) PDF tools suite. Handles merge, split, compress, and other PDF operations.
Duplicati (port 8200)
Encrypted backup client. Source is the entire /home/raspberrypi/dockerapps directory (read-only). Backups go to /home/raspberrypi/Downloads/backups.
Enclosed (port 8788) Encrypted, self-destructing note sharing. Port 8788 is used because 8787 is taken by Portainer.
Portainer (ports 8787, 9443)
Docker management UI. Runs the Enterprise Edition image with HTTP enabled via --http-enabled.
Homarr (port 7575) Dashboard for all the services. Integrates with Docker to show container status automatically.
Dashdot (port 3001) Live system stats dashboard showing CPU, RAM, disk, and network usage. Runs in privileged mode for hardware access.
LibreSpeed (port 5757) Self-hosted internet speed test. No external dependencies.
MySpeed (port 5216) Runs scheduled speed tests over time and stores the results for graphing trends.
HomHub (port 5000) Personal Flask-based home hub for uploads, media, and PDFs.
Three automated workflows run via Gemini CLI, triggered through a central dispatch workflow at .github/workflows/gemini-dispatch.yml.
PR review (gemini-review.yml)
Fires automatically when a pull request is opened. Gemini analyses the diff and posts an inline code review with severity ratings, suggestions, and a summary comment. The review covers correctness, security, efficiency, maintainability, and test coverage. A 7-minute timeout is set to keep runs from hanging.
Issue triage (gemini-triage.yml)
Fires when an issue is opened or reopened. Gemini reads the issue title and body alongside the repository's label list and applies the most relevant labels. It only uses labels that already exist in the repo, so no phantom labels get created. The analysis step explicitly has no GITHUB_TOKEN passed to it since issue content is treated as untrusted input.
Scheduled triage (gemini-scheduled-triage.yml)
Runs hourly via cron. Fetches all unlabelled open issues in bulk and processes them together in one Gemini call, outputting a JSON array of triage decisions. A separate label job then applies those decisions using the GitHub API with validation to prevent prompt injection from slipping through.
On-demand invocation (gemini-invoke.yml)
Triggered by commenting @gemini-cli followed by a request on any issue or PR. Restricted to users with OWNER, MEMBER, or COLLABORATOR association. Gemini follows a strict plan-then-approve workflow: it posts a plan as a comment, waits for /approve, and only then executes. It uses the GitHub MCP server for all repository operations rather than raw shell commands.
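The trigger and author gate for the on-demand flow could be sketched roughly like this. This is a hypothetical simplification, not the actual workflow; the real dispatch logic lives in .github/workflows/gemini-dispatch.yml.

```yaml
on:
  issue_comment:
    types: [created]

jobs:
  dispatch:
    # Only fire on "@gemini-cli ..." comments from trusted associations
    if: >
      startsWith(github.event.comment.body, '@gemini-cli') &&
      contains(fromJSON('["OWNER","MEMBER","COLLABORATOR"]'),
               github.event.comment.author_association)
    runs-on: ubuntu-latest
    steps:
      - run: echo "hand off to the Gemini CLI invoke workflow"
```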
All workflows support both Gemini API key auth and Google Cloud Workload Identity Federation via the gcp_* variables.
Renovate is also configured (renovate.json) to run daily dependency updates. Docker image updates are grouped and not auto-merged. Minor and patch updates for other dependencies auto-merge.
The backup script (scripts/backup-configs-to-release.sh) backs up selected config directories to a GitHub Release as a zip file. It creates a timestamped release tagged backup-YYYY-MM-DD_HH-MM-SS, attaches the zip as an asset, and prunes all but the 30 most recent releases (five days' worth at four-hour intervals).
Directories backed up:
adguardhome/conf, autobrr, bazarr, calibre-web, crowdsec/config, filebrowser, homarr, homehub, jackett, pangolin
Sensitive files are excluded from the zip: private keys, PEM files, acme.json, databases, cache, and logs.
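In outline, the create-and-prune flow amounts to something like the sketch below. Function and variable names are mine, not the script's, and it assumes an authenticated `gh`; the pruning helper is kept pure so the selection logic is easy to check.

```shell
#!/bin/sh
# Pure helper: given newline-separated tags on stdin, print the
# backup-* tags beyond the newest 30 (i.e. the ones to delete).
prune_list() {
    grep '^backup-' | sort -r | tail -n +31
}

# Create a timestamped release with the zipped configs attached,
# skipping the sensitive file patterns.
make_backup() {
    tag="backup-$(date +%Y-%m-%d_%H-%M-%S)"
    zip -r "/tmp/${tag}.zip" configs/ \
        -x '*.pem' -x 'acme.json' -x '*.db' -x '*cache*' -x '*.log'
    gh release create "$tag" "/tmp/${tag}.zip" --notes "Automated config backup"
}

# Delete everything prune_list selects, tag included.
prune_old() {
    gh api 'repos/{owner}/{repo}/releases' --jq '.[].tag_name' \
        | prune_list \
        | while read -r tag; do gh release delete "$tag" -y --cleanup-tag; done
}
```

Because the tags embed a sortable timestamp, a plain reverse `sort` is enough to order releases newest-first before trimming.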
setup-backup.sh is a one-shot setup script for the backup system. It installs GitHub CLI if missing, authenticates, installs the systemd service and timer, and runs the first backup.
update-traefik-plugins.sh checks GitHub for the latest releases of the Traefik plugins configured in traefik_config.yml (Badger and the CrowdSec bouncer plugin), updates the version strings in place, and restarts Traefik. It keeps up to 10 backups of the config file before each update and supports --dry-run, --force, and --install (which sets up a daily systemd timer at 03:00).
Every service has a corresponding unit file in services/docker-<name>.service. They all follow the same pattern:
- Type `oneshot` with `RemainAfterExit=yes`
- Require Docker to be running first
- 20-second pre-start sleep to give Docker time after boot
- Restart on failure with a 10-second delay
- Start and stop via `docker compose up -d` / `docker compose down`
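Concretely, one of these units might look like the following sketch. The service name and binary paths are illustrative; only the pattern itself (oneshot, pre-start sleep, Compose start/stop) comes from this repo.

```ini
[Unit]
Description=Docker Compose service: plex
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/raspberrypi/dockerapps/plex
ExecStartPre=/bin/sleep 20
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```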
To enable a service:
```
sudo cp services/docker-<name>.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now docker-<name>.service
```

The config-backup.timer and config-backup.service in services/ handle the automated config backup. The timer runs at 00:00, 04:00, 08:00, 12:00, 16:00, and 20:00 with a 15-minute random delay.
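A timer implementing that schedule would look roughly like this sketch (the actual unit is services/config-backup.timer; only the four-hour cadence and the random delay come from this repo):

```ini
[Unit]
Description=Config backup every four hours

[Timer]
# Hours 00, 04, 08, 12, 16, 20, plus up to 15 minutes of jitter
OnCalendar=*-*-* 00/4:00:00
RandomizedDelaySec=15m
Persistent=true

[Install]
WantedBy=timers.target
```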
Each service is fully independent. To start any one of them:
```
cd <service>/
docker compose up -d
```

The Pangolin stack is the exception: it manages its own Traefik and CrowdSec instances. If you are also running the standalone crowdsec/ directory, make sure the two CrowdSec instances do not conflict on port 8080.
All secrets (API keys, passwords, tokens) use REDACTED as a placeholder in the committed files. Real values go in the actual config files or via environment variables on the host.
The .env file in pangolin/ holds the Cloudflare DNS API token for cert issuance and the CrowdSec bouncer key. These are also REDACTED in the repo.
Configs are intentionally excluded from git (via .gitignore) and backed up separately via the GitHub Releases backup script.