Split any song into vocals, drums, bass, guitar, piano, and other, then play, mix, and export stems right in the app.
| | Feature | Description |
|---|---|---|
| ✂️ | Stem Separation | Split audio into 4 or 6 stems using Meta's Demucs. Supports WAV, MP3, FLAC, OGG, M4A, WMA, AIFF, and AU. |
| 🎛️ | Real-Time Mixer | Per-stem volume sliders, mute/solo toggles, and synchronized playback via Web Audio API. |
| 🌊 | Waveform Display | Normalized waveform visualization for every stem with a real-time playhead. |
| 🎹 | MIDI Conversion | Convert stems to MIDI using Spotify's basic-pitch. Notes visualized as a piano roll overlay. |
| ⚡ | GPU Acceleration | Auto-detects NVIDIA GPUs and configures CUDA PyTorch for faster processing. |
| 📊 | EQ Spectrum | Live 8-band frequency spectrum (60 Hz – 12 kHz) during playback. |
| 📁 | Batch Processing | Queue multiple songs and split them all in one run. |
| 💾 | Mix Export | Export your custom stem combination as a single WAV file. |
| 🎨 | Themes | Dark, light, and system themes with high-contrast accessibility mode. |
```bash
# Clone
git clone https://github.com/Prime8Chris/Stem-Splitter.git
cd Stem-Splitter

# Set up environment
python -m venv venv
venv\Scripts\activate        # Windows
# source venv/bin/activate   # macOS/Linux

# Install
pip install pywebview
pip install -r requirements.txt

# Launch
python -m stem_splitter
```

Or use the launcher: `Stem Splitter.bat` (Windows) or `./stem_splitter.sh` (macOS/Linux).
> **Note**
> On first launch, a splash screen handles all setup automatically — installing Demucs, detecting your GPU, and configuring CUDA PyTorch if available. Subsequent launches are instant.
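The GPU check at first launch can be as simple as probing for `nvidia-smi` on the PATH. This is an illustrative sketch, not the app's actual detection code; once PyTorch is installed, `torch.cuda.is_available()` is the more direct check.

```python
import shutil
import subprocess

def has_nvidia_gpu() -> bool:
    """Best-effort NVIDIA GPU probe: look for nvidia-smi and run it.

    Hypothetical sketch of a first-run GPU check; the real setup may
    rely on torch.cuda.is_available() after installing PyTorch.
    """
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    try:
        # nvidia-smi exits 0 when a driver and at least one GPU are present.
        result = subprocess.run([exe], capture_output=True, timeout=10)
        return result.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False
```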
```
Audio File (.mp3, .wav, .flac, ...)
        |
        v
+---------------+
|    Demucs     |  Meta's AI stem separation
|   (PyTorch)   |  4-stem or 6-stem model
+-------+-------+
        |
  +-----+-----+-----+-----+-----+
  v     v     v     v     v     v
Vocals Drums Bass Guitar Piano Other
  |     |     |     |     |     |
  +-----+-----+-----+-----+-----+
        |
        v
+---------------+
|  Stem Mixer   |  Play, mix, mute, solo
|  + Waveforms  |  Volume control per stem
|  + EQ Display |  Real-time visualization
+-------+-------+
        |
  +-----+-----+
  v           v
Export       MIDI
(.wav)      (.mid)
```
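The separation step boils down to invoking the Demucs CLI with a model name. A sketch of assembling that command line, assuming the standard Demucs flags `-n` (model), `-o` (output directory), and `-d` (device); the exact flags the app passes may differ:

```python
import sys
from pathlib import Path

def build_demucs_command(audio: Path, out_dir: Path,
                         six_stems: bool = False,
                         use_gpu: bool = False) -> list[str]:
    """Assemble a `python -m demucs` command line (illustrative only).

    Model names htdemucs / htdemucs_6s match Demucs's pretrained models
    and the output folders they create.
    """
    model = "htdemucs_6s" if six_stems else "htdemucs"
    return [
        sys.executable, "-m", "demucs",
        "-n", model,                        # which pretrained model to run
        "-o", str(out_dir),                 # where stems are written
        "-d", "cuda" if use_gpu else "cpu", # compute device
        str(audio),
    ]
```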
| Step | Action |
|---|---|
| 1 | Click the drop zone or Browse to add audio files |
| 2 | Choose a model — 4 stems or 6 stems (+ guitar, piano) |
| 3 | Set your output directory |
| 4 | Select CPU or GPU if available |
| 5 | Click Split — progress is displayed in real time |
| 6 | Expand a completed file to open the mixer panel |
| 7 | Adjust volume, mute/solo stems, convert to MIDI |
| 8 | Click Export Mix to save your combination |
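Conceptually, Export Mix sums the audible stems at their slider gains into one file. A minimal stdlib sketch of that idea, assuming 16-bit mono WAV stems of equal length (the app itself works with real multi-channel audio via soundfile and NumPy):

```python
import array
import wave

def export_mix(stem_paths: list[str], gains: list[float], out_path: str) -> None:
    """Mix 16-bit mono WAV stems at per-stem gains into a single WAV.

    Illustrative only: assumes all stems share sample rate, width, and
    length; samples are clamped to the 16-bit range after summing.
    """
    scaled, params = [], None
    for path, gain in zip(stem_paths, gains):
        with wave.open(path, "rb") as w:
            if params is None:
                params = w.getparams()
            samples = array.array("h", w.readframes(w.getnframes()))
            scaled.append([s * gain for s in samples])
    mixed = array.array("h", (
        max(-32768, min(32767, int(sum(vals))))  # clamp to int16 range
        for vals in zip(*scaled)
    ))
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(mixed.tobytes())
```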
```
~/Music/Stem Splitter Output/
├── htdemucs/            # 4-stem model
│   └── My Song/
│       ├── vocals.wav
│       ├── drums.wav
│       ├── bass.wav
│       └── other.wav
└── htdemucs_6s/         # 6-stem model
    └── My Song/
        ├── vocals.wav
        ├── vocals.mid   # MIDI (when converted)
        ├── drums.wav
        ├── bass.wav
        ├── guitar.wav
        ├── piano.wav
        └── other.wav
```
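The per-model stem layout above can be captured in a small lookup table, e.g. for checking that a split completed. A sketch; the app's own bookkeeping may differ:

```python
# Stems produced by each Demucs model, mirroring the folder layout above.
MODEL_STEMS = {
    "htdemucs":    ["vocals", "drums", "bass", "other"],
    "htdemucs_6s": ["vocals", "drums", "bass", "guitar", "piano", "other"],
}

def expected_outputs(model: str, song: str) -> list[str]:
    """Relative WAV paths Demucs writes for a given model and song name."""
    return [f"{model}/{song}/{stem}.wav" for stem in MODEL_STEMS[model]]
```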
| Layer | Technology |
|---|---|
| Desktop | pywebview — native window with embedded web UI |
| Separation | Demucs — Meta's source separation model (PyTorch) |
| MIDI | basic-pitch — Spotify's pitch detection (TensorFlow) |
| Audio | Web Audio API, soundfile, NumPy |
| Frontend | Vanilla JS, CSS glassmorphism, Canvas 2D |
| Testing | pytest (backend), Jest + jsdom (frontend) |
| Requirement | Details |
|---|---|
| Python | 3.11 or higher |
| pip | For automatic dependency installation at first launch |
| NVIDIA GPU | Optional — enables CUDA acceleration for faster splits |
| pywebview | Installed separately (platform-specific backend) |
## Project Structure

```
Stem Splitter/
├── Stem Splitter.bat        # Windows launcher
├── stem_splitter.sh         # macOS/Linux launcher
├── requirements.txt
├── stem_splitter/
│   ├── __init__.py
│   ├── __main__.py          # Entry point
│   ├── app.py               # Splash screen, window lifecycle
│   ├── api.py               # Python <-> JS bridge
│   ├── config.py            # Constants
│   ├── processing.py        # Demucs + MIDI conversion
│   ├── server.py            # Local audio server
│   ├── settings.py          # User preferences
│   ├── setup.py             # First-run setup
│   ├── assets/              # Logos
│   ├── static/
│   │   ├── index.html
│   │   ├── style.css
│   │   └── js/
│   │       ├── app.js       # State management
│   │       ├── mixer.js     # Audio playback
│   │       ├── waveform.js  # Waveform rendering
│   │       ├── eq.js        # EQ spectrum
│   │       ├── render.js    # DOM rendering
│   │       ├── settings.js  # Theme management
│   │       └── __tests__/
│   └── tests/
└── docs/
```
## Testing

Backend:

```bash
python -m pytest stem_splitter/tests/ -v
```

Frontend:

```bash
cd stem_splitter/static/js
npm install
npm test
```

See docs/TESTING.md for details.
## Security

Stem Splitter runs entirely offline — no cloud dependencies, API keys, or telemetry.

- Audio server binds to `127.0.0.1` only
- 3-layer file validation (directory allowlist, extension whitelist, magic bytes)
- All Python-to-JS arguments serialized via `json.dumps()`
- ML models run in isolated subprocesses
- Settings use schema validation with automatic backup

See docs/SECURITY.md for the full security model.
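The 3-layer file check can be sketched roughly as below. The magic-byte signatures for WAV/FLAC/OGG containers are standard; the exact rules the app applies live in docs/SECURITY.md, so treat this as an assumption-laden illustration:

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".wav", ".mp3", ".flac", ".ogg"}
MAGIC_BYTES = {
    ".wav":  b"RIFF",  # RIFF container (WAV)
    ".flac": b"fLaC",
    ".ogg":  b"OggS",
}

def is_safe_audio_file(path: Path, allowed_dir: Path) -> bool:
    """Three checks: directory allowlist, extension whitelist, magic bytes."""
    path = path.resolve()
    # 1. Directory allowlist: file must live under the permitted folder.
    if not path.is_relative_to(allowed_dir.resolve()):
        return False
    # 2. Extension whitelist.
    ext = path.suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    # 3. Magic bytes: header must match the claimed format.
    #    (MP3 framing varies, so it is skipped in this sketch.)
    magic = MAGIC_BYTES.get(ext)
    if magic is None:
        return True
    with path.open("rb") as f:
        return f.read(len(magic)) == magic
```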
## User Data

Settings are stored in a platform-appropriate location:

- Windows: `%APPDATA%\StemSplitter\`
- macOS: `~/Library/Application Support/StemSplitter/`
- Linux: `~/.local/share/stem_splitter/`
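Resolving those per-platform folders takes only a few lines of stdlib Python. A sketch; the app's own resolution logic lives in `settings.py`:

```python
import os
import platform
from pathlib import Path

def settings_dir() -> Path:
    """Resolve the per-platform settings folder listed above."""
    system = platform.system()
    if system == "Windows":
        return Path(os.environ["APPDATA"]) / "StemSplitter"
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "StemSplitter"
    # Linux and everything else
    return Path.home() / ".local" / "share" / "stem_splitter"
```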
| File | Purpose |
|---|---|
| `settings.json` | Theme and accessibility preferences |
| `settings.json.bak` | Automatic backup |
| `setup_state.json` | Cached dependency check results |
| `logs/stem_splitter.log` | Warnings and errors |
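The backup strategy in the table above can be sketched as "try `settings.json`, fall back to `settings.json.bak`, then to defaults." The key names below are hypothetical, and the app's real schema validation is stricter than this simple key check:

```python
import json
from pathlib import Path

DEFAULTS = {"theme": "system", "high_contrast": False}  # hypothetical keys

def load_settings(settings_file: Path) -> dict:
    """Load settings, falling back to the .bak copy, then to defaults.

    Illustrative sketch of the backup strategy; only keys present in
    DEFAULTS are accepted, and missing keys are filled from DEFAULTS.
    """
    backup = settings_file.parent / (settings_file.name + ".bak")
    for candidate in (settings_file, backup):
        try:
            data = json.loads(candidate.read_text(encoding="utf-8"))
            if isinstance(data, dict) and set(data) <= set(DEFAULTS):
                return {**DEFAULTS, **data}
        except (OSError, json.JSONDecodeError):
            continue  # file missing or corrupt: try the next candidate
    return dict(DEFAULTS)
```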
| Document | Description |
|---|---|
| Architecture | System design, data flow, threading model |
| Setup Guide | Installation, configuration, GPU setup |
| User Guide | Complete feature walkthrough |
| API Reference | Python API, JS modules, server endpoints |
| Security | Security model and mitigations |
| Testing | Test architecture and coverage |
| Contributing | Code conventions and workflow |
Built with Demucs by Meta and basic-pitch by Spotify

