Converts your home media library to HEVC (H.265) using Intel Arc / Quick Sync GPU hardware acceleration via FFmpeg VAAPI. Files are scanned, queued, and encoded one at a time in the background. Every file is tracked in a SQLite database to prevent duplicate work, originals are moved to an archive folder on success, and a live web dashboard lets you monitor progress and retry failures — all in two Docker containers.
HEVC-Converter/
├── converter/ # FFmpeg VAAPI encoder + cron-triggered scanner
│ ├── Dockerfile
│ ├── handbrake.py
│ └── entrypoint.sh
├── dashboard/ # Live HTML stats page served over HTTP
│ ├── Dockerfile
│ ├── dashboard.py
│ └── entrypoint.sh
├── docker-compose.yml
├── .env.example ← copy to .env and fill in your paths
└── .gitignore
| Requirement | Notes |
|---|---|
| Docker + Docker Compose v2 | `docker compose version` to verify |
| Intel Arc / Quick Sync GPU on host | Gen 11+ (Xe / Arc Alchemist) |
| `/dev/dri/renderD128` accessible | Confirm with `ls -la /dev/dri/` |
Verify GPU access before starting:

```bash
vainfo --display drm --device /dev/dri/renderD128
```

You should see `VAProfileHEVCMain : VAEntrypointEncSliceLP` in the output. If `vainfo` is not installed:

```bash
apt install -y vainfo intel-media-va-driver-non-free
```
```bash
# 1. Clone
git clone https://github.com/mCo0L/HEVC-Converter.git
cd HEVC-Converter

# 2. Configure
cp .env.example .env
$EDITOR .env

# 3. Build and start
docker compose up -d --build

# 4. Open the dashboard
open http://localhost:8080
```

The converter runs an initial scan on startup so the worker has files to process immediately. Subsequent scans happen on the `SCANNER_CRON` schedule (default: every 6 hours).
All options live in `.env` (never committed — see `.gitignore`).
| Variable | Default | Description |
|---|---|---|
| `MOVIES_PATH` | — | Host path to your movies directory |
| `TV_PATH` | — | Host path to your TV shows directory |
| `MOVIES_ARCHIVE_PATH` | — | Where originals are moved after conversion |
| `TV_ARCHIVE_PATH` | — | Archive path for TV originals |
| `DATA_PATH` | — | Persistent data dir (SQLite DB lives here) |
| `LOGS_PATH` | — | Log file directory |
| `VAAPI_DEVICE` | `/dev/dri/renderD128` | VAAPI render node |
| `CQP_QUALITY` | `26` | CQP quality — lower = better quality / larger file. Recommended: 26–28 |
| `SCAN_WORKERS` | `4` | Parallel workers for the scan phase (hash + ffprobe) |
| `MIN_FILE_MB` | `100` | Minimum file size in MB — files below this are ignored (trailers, extras) |
| `SCANNER_CRON` | `0 */6 * * *` | Cron schedule for library scans (default: every 6 hours) |
| `ENABLE_TV` | `false` | Set to `true` to also convert TV shows |
| `DASHBOARD_PORT` | `8080` | Host port for the dashboard |
| `DASHBOARD_USER` | (empty) | Dashboard username — leave empty to disable auth |
| `DASHBOARD_PASSWORD` | (empty) | Dashboard password — leave empty to disable auth |
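A minimal `.env` for a movies-only setup might look like this (all paths are illustrative — substitute your own):

```bash
MOVIES_PATH=/mnt/media/movies
MOVIES_ARCHIVE_PATH=/mnt/media/archive/movies
TV_PATH=/mnt/media/tv
TV_ARCHIVE_PATH=/mnt/media/archive/tv
DATA_PATH=/opt/hevc-converter/data
LOGS_PATH=/opt/hevc-converter/logs
VAAPI_DEVICE=/dev/dri/renderD128
CQP_QUALITY=26
ENABLE_TV=false
DASHBOARD_PORT=8080
```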
The system runs two persistent services:

**Converter** — one container running two cooperating processes:

- A cron job fires on the `SCANNER_CRON` schedule to scan your media library for new or changed files and queue them in the SQLite database.
- A persistent worker runs continuously, picking up queued files one at a time and encoding them sequentially via FFmpeg VAAPI. It sleeps for 30 seconds when the queue is empty rather than polling constantly.
- An initial scan runs at container startup so the worker has work immediately without waiting for the first cron tick.

**Dashboard** — reads the same SQLite database every 30 seconds, regenerates the HTML, and serves it over HTTP.
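The worker's claim-and-sleep loop can be sketched roughly as follows — a simplified illustration assuming a `conversions` table with `id`, `source_path`, and `status` columns, not the actual `handbrake.py` code:

```python
import sqlite3
import time

IDLE_SLEEP = 30  # seconds to back off when the queue is empty


def claim_next(conn):
    """Claim the oldest pending file and mark it in_progress, or return None."""
    row = conn.execute(
        "SELECT id, source_path FROM conversions "
        "WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute(
        "UPDATE conversions SET status = 'in_progress' WHERE id = ?", (row[0],)
    )
    conn.commit()
    return row


def worker_loop(conn, encode):
    """Process queued files one at a time; sleep instead of busy-polling."""
    while True:
        job = claim_next(conn)
        if job is None:
            time.sleep(IDLE_SLEEP)  # queue empty: wait for the next scan
            continue
        encode(job)  # one GPU encode at a time
```

Running encodes strictly sequentially keeps a single `hevc_vaapi` session on the GPU, which avoids contention on the render node.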
- Walks `MOVIES_PATH` (and `TV_PATH` if `ENABLE_TV=true`) using `SCAN_WORKERS` parallel threads.
- Each candidate file is SHA-256 hashed (first 10 MB only, for speed) and probed with `ffprobe`.
- Files already in HEVC are recorded as `skipped_already_hevc` and left alone.
- Files smaller than `MIN_FILE_MB` are ignored entirely.
- New or changed files are queued as `pending` in the database.
- Orphaned database rows (source or converted file deleted from disk) are cleaned up automatically. If any configured media root is unreachable (e.g. NAS unmounted), orphan cleanup is skipped to prevent data loss.
- Processes `pending` files sequentially — one GPU encode at a time.
- Encodes via `hevc_vaapi` (Intel hardware encoder) with CQP rate control.
- HDR / 10-bit content is detected automatically and encoded using the `p010` pixel format (Main 10 profile) with full color metadata passthrough.
- All audio and subtitle streams are copied as-is into an MKV container.
- Disk space guard — before each encode, checks that free space ≥ 2× source file size. If not, the worker pauses for 5 minutes and retries.
- No-savings check — if the HEVC output is larger than or equal to the original, the output is discarded and the file is recorded as `skipped_no_savings`. This protects already-efficient files.
- On success, the original is moved to the archive path and the HEVC file takes its place.
- Failed conversions are automatically reset to `pending` on the next scan (up to 3 attempts by default, controlled by `max_retries` in the DB).
- After all retries are exhausted the file is marked permanently failed and excluded from future scans.
- The dashboard shows a Retry button for retryable failures and a red Force Retry button for permanently failed files.
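Under these settings, the FFmpeg invocation the worker issues looks roughly like the builder below. The flag set is a plausible sketch of standard FFmpeg VAAPI usage, not lifted from `handbrake.py`:

```python
def build_ffmpeg_cmd(src, dst, qp=26, device="/dev/dri/renderD128", ten_bit=False):
    """Assemble an FFmpeg VAAPI HEVC command: upload frames to the GPU,
    encode with hevc_vaapi under CQP rate control, stream-copy audio
    and subtitles, and write an MKV."""
    # p010 (10-bit) upload steers the encoder toward the Main 10 profile
    # for HDR sources; nv12 is the usual 8-bit VAAPI surface format.
    pix = "p010" if ten_bit else "nv12"
    return [
        "ffmpeg", "-hide_banner", "-y",
        "-vaapi_device", device,
        "-i", src,
        "-vf", f"format={pix},hwupload",
        "-c:v", "hevc_vaapi",
        "-rc_mode", "CQP", "-qp", str(qp),
        "-c:a", "copy",          # audio passthrough
        "-c:s", "copy",          # subtitle passthrough
        dst,
    ]
```

Because `-qp` maps directly to `CQP_QUALITY`, raising it from 26 to 28 trades visible quality for smaller output files.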
Open http://localhost:8080 (or your configured `DASHBOARD_PORT`).
- Stats cards — space saved, efficiency %, total original size, total converted size, and per-status counts. Clicking a status card filters the table.
- Tabs — switch between All / Movies / TV Shows.
- Status filters — Pending · In Progress · Failed · Completed · Skipped.
- Paginated table — 50 files per page.
- Retry buttons — retry a failed conversion directly from the UI.
- URL state — active tab, filter, and page are stored in the URL hash so the view survives refreshes.
- Auto-refresh — page reloads every 30 seconds to show the latest progress.
Follow the converter log:

```bash
docker logs -f hevc-converter
```

Inspect the 20 most recent conversions in the database:

```bash
docker exec hevc-converter python3 -c "
import sqlite3
with sqlite3.connect('/data/conversion.db') as c:
    for r in c.execute('SELECT source_path, status, file_size_original, file_size_converted FROM conversions ORDER BY end_time DESC LIMIT 20'):
        print(r)
"
```

Reset all failed conversions back to pending:

```bash
docker exec hevc-converter python3 -c "
import sqlite3
with sqlite3.connect('/data/conversion.db') as c:
    n = c.execute(\"UPDATE conversions SET status='pending', retry_count=0, error_message=NULL WHERE status='failed'\").rowcount
    print(f'Reset {n} file(s)')
"
```

Set `DASHBOARD_USER` and `DASHBOARD_PASSWORD` in `.env` to enable HTTP Basic Auth. Leave both empty for localhost-only setups where no auth is needed.
The dashboard includes the following protections out of the box:
| Protection | How |
|---|---|
| Authentication | HTTP Basic Auth with timing-safe `secrets.compare_digest` |
| Brute-force lockout | IP locked out for 15 min after 10 failed attempts |
| CSRF | Per-process token embedded in the page, required on every POST |
| XSS | All user-derived values escaped with html.escape before rendering |
| SQL injection | All queries use parameterized statements |
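The timing-safe comparison and lockout thresholds from the table can be sketched like this — a simplified illustration, not the dashboard's actual code (the real lockout bookkeeping may differ):

```python
import secrets
import time

MAX_ATTEMPTS = 10
LOCKOUT_SECONDS = 15 * 60
_failures = {}  # ip -> (failed_attempt_count, timestamp_of_first_failure)


def check_credentials(user, password, expected_user, expected_password):
    """Compare both fields with secrets.compare_digest so the check takes
    the same time whether or not an early character mismatches."""
    ok_user = secrets.compare_digest(user.encode(), expected_user.encode())
    ok_pass = secrets.compare_digest(password.encode(), expected_password.encode())
    return ok_user and ok_pass


def record_failure(ip, now=None):
    now = time.time() if now is None else now
    count, since = _failures.get(ip, (0, now))
    _failures[ip] = (count + 1, since)


def is_locked_out(ip, now=None):
    """Locked out after MAX_ATTEMPTS failures within the lockout window."""
    now = time.time() if now is None else now
    count, since = _failures.get(ip, (0, now))
    if now - since > LOCKOUT_SECONDS:
        _failures.pop(ip, None)  # window expired: forgive past failures
        return False
    return count >= MAX_ATTEMPTS
```

Using `secrets.compare_digest` instead of `==` matters because a naive string comparison returns early on the first mismatched byte, leaking how many leading characters an attacker guessed correctly.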
```bash
# Stop the stack
docker compose down

# Remove containers and images
docker compose down --rmi all
```

The SQLite database and logs live in `DATA_PATH` / `LOGS_PATH` on your host and are not removed by `docker compose down`.
MIT