Streaming Engine is an audio processing server — it processes audio on the fly via URL parameters (like Thumbor/Imagor for images, but for audio). It is not an AI product. The MCP server integration exists to let LLMs use the audio processing API as a tool, but the core project is a deterministic audio processing pipeline built on FFmpeg.
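For illustration, a request to the engine might look like the following. The path and parameter names here are hypothetical, chosen only to show the URL-parameter style; the real API defines its own parameters.

```text
# Hypothetical request (parameter names are illustrative, not the real API)
GET /audio/song.wav?trim=0:30&gain=-3dB&format=mp3
```

Each parameter maps to a deterministic FFmpeg-backed transform, so the same URL always yields the same output.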
- `just` — List available recipes
- `just dev` — Run with auto-reload
- `just build` — Build the project
- `just test` — Run all tests
- `just test-name <name>` — Run a specific test
- `just bench` — Run benchmarks
- `just lint` — Run linter (clippy)
- `just fmt` — Format code
- `just check` — Full check: format, lint, build, test
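As a rough sketch, the recipes above might be defined in a `justfile` along these lines. The recipe bodies are assumptions for illustration (e.g. `cargo watch` requires the `cargo-watch` tool); the real recipes may differ.

```just
# justfile (illustrative sketch; recipe bodies are assumptions)
default:
    @just --list

dev:
    cargo watch -x run

test-name name:
    cargo test {{name}}

# Runs the other recipes in order as dependencies
check: fmt lint build test
```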
- `src/` — Core streaming engine (Rust)
- `crates/ffmpeg` — Safe FFmpeg wrapper
- `crates/ffmpeg-sys` — Raw FFI bindings to FFmpeg
- `mcp-server/` — MCP integration for LLM tool use (Node.js)
- `config/` — YAML configuration files
- `benches/` — Performance benchmarks
- `scripts/` — Deployment and CI scripts
- Imports: Group `std`, external crates, then local modules
- Error Handling: Use `color_eyre::Result`; define custom errors with `thiserror`
- Logging: Use `tracing` with structured logging and `#[instrument]` on functions
- Types: Prefer explicit types; use `Uuid` for IDs and `DateTime<Utc>` for timestamps
- Naming: snake_case for functions/variables, PascalCase for types, snake_case for modules
- Async: Use `#[tokio::main]` and async/await throughout
- API: Use Axum with `State` extraction and `Json` responses
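A minimal sketch tying these conventions together, assuming axum 0.7, tokio, color_eyre, thiserror, tracing, uuid, and chrono as dependencies. The route, state fields, and error variant are hypothetical, chosen only to demonstrate the style.

```rust
// Imports grouped: std, then external crates, then local modules.
use std::sync::Arc;

use axum::{extract::State, routing::get, Json, Router};
use chrono::{DateTime, Utc};
use serde::Serialize;
use tracing::instrument;
use uuid::Uuid;

/// Shared application state (hypothetical fields).
struct AppState {
    started_at: DateTime<Utc>,
}

/// Custom error via `thiserror` (variant is illustrative; unused in this sketch).
#[allow(dead_code)]
#[derive(Debug, thiserror::Error)]
enum ApiError {
    #[error("stream {0} not found")]
    StreamNotFound(Uuid),
}

#[derive(Serialize)]
struct Health {
    instance_id: Uuid,
    started_at: DateTime<Utc>,
}

/// Handler using `State` extraction and a `Json` response,
/// instrumented for structured tracing.
#[instrument(skip(state))]
async fn health(State(state): State<Arc<AppState>>) -> Json<Health> {
    Json(Health {
        instance_id: Uuid::new_v4(),
        started_at: state.started_at,
    })
}

#[tokio::main]
async fn main() -> color_eyre::Result<()> {
    color_eyre::install()?;
    tracing_subscriber::fmt().init();

    let state = Arc::new(AppState { started_at: Utc::now() });
    let app = Router::new().route("/health", get(health)).with_state(state);

    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```

The handler stays small and typed end to end: the extractor pulls shared state, the return type serializes straight to JSON, and `#[instrument]` attaches a span per request.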