A highly scalable and efficient system for transforming video files into HLS (HTTP Live Streaming) format using a microservices architecture.
This project has been completely rewritten from its original Node.js/Express stack to a high-performance Bun + Hono architecture.
- **Bun Native**: Replaced Node.js, npm, and `ts-node`. Bun is now the runtime (`Bun.serve`), package manager (Bun Workspaces), bundler, and test runner.
- **Hono.js**: Replaced Express with the blazing-fast Hono framework, using `Bun.write` for zero-overhead multipart file uploads (replacing `multer`).
- **Zod Validation**: Replaced manual checks with strict Zod schemas and `@hono/zod-validator`.
- **Resilient RabbitMQ**: Replaced raw `amqplib` with a robust `RabbitManager` featuring exponential backoff, jitter, quorum queues, and a Dead Letter Queue (DLQ).
- **Structured Logging**: Replaced `winston` with `pino` for low-overhead, request-scoped JSON logging.
- **Monorepo**: Extracted shared logic into a `@hls/shared` workspace package to eliminate cross-service coupling.
- **Production Kubernetes**: Fully rewritten Helm chart with KEDA (queue-based scaling), HPAs, proper probes, and ReadWriteMany PVCs.
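The reconnect strategy the `RabbitManager` bullet describes (exponential backoff with jitter) can be sketched as follows. This is an illustrative sketch only: the function names, defaults, and loop shape are assumptions, not the package's actual API.

```typescript
// Illustrative sketch of exponential backoff with "full jitter", the strategy
// implied by the RabbitManager description. Names and defaults are hypothetical.
function backoffDelayMs(attempt: number, baseMs = 1_000, capMs = 30_000): number {
  // Exponential growth, capped so reconnect waits never exceed capMs.
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  // Full jitter: pick uniformly in [0, ceiling) so many consumers that
  // dropped at the same moment do not reconnect in lockstep.
  return Math.random() * ceiling;
}

// Hypothetical usage inside a reconnect loop:
async function reconnectWithBackoff(
  connect: () => Promise<void>,
  maxAttempts = 10,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await connect();
      return; // connected
    } catch {
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
  throw new Error("exhausted reconnect attempts");
}
```

Capping the exponent and randomizing the wait are what keep a broker restart from triggering a thundering herd of simultaneous reconnects.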
The system consists of two microservices that communicate asynchronously via RabbitMQ to process heavy video payloads without blocking the API:
- **API Gateway** (`src/`): A stateless Hono server that receives uploads, validates them with Zod, writes them to disk, and queues jobs in RabbitMQ.
- **Video Processor** (`video-processing-service/`): A stateful worker that consumes RabbitMQ jobs, transcodes the video to HLS via `fluent-ffmpeg` (with a native `Bun.spawn` fallback), and manages the Mongoose state machine.
```text
hls-microservice-backend/
├── package.json                             # Bun workspace root
├── packages/
│   └── shared/                              # @hls/shared workspace (RabbitManager, Pino, Errors)
├── src/                                     # API Gateway (Hono)
│   ├── server.ts                            # Bun.serve() entry point
│   ├── routes/                              # Hono routers (upload, videos, health)
│   └── schemas/                             # Zod validation schemas
├── video-processing-service/                # Worker Service
│   ├── src/worker.ts                        # Consumer entry point
│   ├── src/consumers/                       # RabbitMQ message handlers
│   └── src/services/                        # FFmpeg transcoder and storage
├── models/                                  # Mongoose schemas (shared)
├── Dockerfile                               # Multi-stage Bun build for API
└── charts/hls-microservice-backend-chart/   # Production Helm Chart
```
- Bun (v1.x)
- MongoDB (v8.x recommended)
- RabbitMQ (v4.x with management plugin recommended)
- FFmpeg (installed locally for dev)
- Docker & Kubernetes (optional, for prod deployment)
1. **Clone the repository**

   ```bash
   git clone https://github.com/ShivamB25/hls-microservice-backend.git
   cd hls-microservice-backend
   ```

2. **Install dependencies using Bun**

   ```bash
   bun install
   ```

   (This installs dependencies for the root, the shared package, and the worker simultaneously via Bun workspaces.)

3. **Set up environment variables**

   ```bash
   cp .env.example .env
   ```

   (Ensure MongoDB and RabbitMQ are running locally or via Docker Compose.)

4. **Run the services locally**

   - Run the API Gateway (Port 3000):

     ```bash
     bun run dev
     ```

   - Run the Video Processor (Port 3001):

     ```bash
     bun run dev:worker
     ```

5. **Docker Compose (Local Infra)**

   ```bash
   docker compose up -d
   ```

6. **Optional: build the edge-friendly API entrypoint**

   ```bash
   bun run build:edge
   ```

   This builds `src/edge.ts`, which keeps the Hono API router portable for platforms that expect a `fetch()` handler.
For detailed architecture and deployment information, see:
- Code Architecture - Deep dive into the Bun/Hono monorepo, data flow, and error handling.
- RabbitMQ Usage - Details on Quorum queues, the Dead Letter Exchange, and the custom RabbitManager.
- Kubernetes Deployment Guide - How to deploy the Helm chart with KEDA scaling and ReadWriteMany PVCs.
The current codebase is Bun- and container-native. Hono itself is multi-runtime, but this repository uses Bun-specific and server/container-only primitives.
| Target | Status | Notes |
|---|---|---|
| Docker / Kubernetes | ✅ Supported | Primary deployment path for this repo. |
| VM / Bare metal (Bun) | ✅ Supported | Run with `bun run start` and external Mongo/RabbitMQ/FFmpeg installed. |
| Cloudflare Workers | ⚠️ Partial | `src/edge.ts` provides a portable Hono `fetch` entrypoint, but upload persistence, MongoDB, RabbitMQ, and ffmpeg still require container mode or service replacement. |
| Vercel Edge / Netlify Edge | ❌ Not direct | Same constraints as Workers for filesystem/TCP/processes. |
- The API uses `Bun.serve()`, and the upload path uses `Bun.write()` to local disk.
- The worker uses `fluent-ffmpeg` and `Bun.spawn` to execute ffmpeg.
- The data plane uses MongoDB and RabbitMQ over TCP (`mongoose`, `amqplib`).
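The `Bun.spawn` path for executing ffmpeg might look roughly like the sketch below. The HLS flag set is a generic recipe (10-second segments, full playlist, stream copy), not necessarily what `video-processing-service` actually passes to ffmpeg.

```typescript
// Hedged sketch of a native Bun.spawn ffmpeg invocation. Flags are a generic
// HLS recipe for illustration, not the worker's actual argument list.

// Ambient declaration so this sketch type-checks outside Bun; at runtime,
// Bun provides the real global.
declare const Bun: {
  spawn(cmd: string[], opts?: { stderr?: string }): { exited: Promise<number> };
};

function hlsArgs(inputPath: string, outDir: string): string[] {
  return [
    "-i", inputPath,
    "-codec", "copy",      // remux without re-encoding (illustrative choice)
    "-start_number", "0",
    "-hls_time", "10",     // 10-second segments
    "-hls_list_size", "0", // keep every segment in the playlist
    "-f", "hls",
    `${outDir}/index.m3u8`,
  ];
}

async function transcodeToHls(inputPath: string, outDir: string): Promise<void> {
  // Bun.spawn returns a subprocess; its `exited` promise resolves to the exit code.
  const proc = Bun.spawn(["ffmpeg", ...hlsArgs(inputPath, outDir)], {
    stderr: "pipe",
  });
  const code = await proc.exited;
  if (code !== 0) throw new Error(`ffmpeg exited with code ${code}`);
}
```

Keeping the argument list in a pure helper makes the subprocess call trivial and the flag set unit-testable without ffmpeg installed.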
- A real Hono `fetch()` entrypoint in `src/edge.ts`.
- Runtime-aware upload behavior: on non-Bun runtimes, binary ingestion returns `501` instead of pretending to work.
- Shared router/middleware logic that stays reusable across Bun server mode and edge-style fetch mode.
- Keep the worker in containers/Kubernetes for ffmpeg.
- Optionally move API ingress to Workers with Hono adapter.
- Replace local disk with object storage (e.g., R2/S3).
- Replace broker/DB access with edge-compatible services or HTTP APIs.
This project is licensed under the MIT License. See the LICENSE file for more details.