Developer Guide Engine
The mixing engine is the part of Mixxx that is in charge of resampling, amplifying, and mixing the audio from decks and samplers into the main, headphone, and booth outputs.
Hundreds of times per second, the operating system's audio API requests
a certain number of audio samples from Mixxx. This request is delivered
to the SoundManager class via an operating system callback (see
SoundManager (OS audio interface)).
SoundManager in turn requests that Mixxx's engine produce and mix
together the next buffer of audio.
The operating system callback requesting samples from Mixxx is running in what we call the callback thread. This is usually a realtime thread and is performance sensitive. Doing any kind of I/O or locking of mutexes in this thread is highly discouraged. Anything that can block the callback thread is in danger of causing user-audible skips (called xruns or buffer under-runs) in the output audio.
The goal of the callback thread is to fulfill the operating system's request for the next buffer of audio to play out the computer's speakers. The length of this buffer depends on the latency and samplerate settings the user has configured for their soundcard (in the Mixxx Sound Hardware preferences).
At a latency of X seconds, a samplerate of Y samples per second per
channel, and stereo (2-channel) output, the number of samples that Mixxx
must generate to fill the buffer is given by this simple relationship:
X * Y * 2. (If the latency is given in milliseconds, divide it by 1000 first.)
For example:

```cpp
double latency = 0.001;  // 1 millisecond, expressed in seconds
int sampleRate = 44100;  // 44.1 thousand samples per second (44.1 kHz)
int numChannels = 2;     // stereo, 2 channels
int samples_per_buffer = static_cast<int>(sampleRate * latency * numChannels);
```

At a latency of 1 millisecond, the operating system will request a buffer of audio every millisecond, or 1000 times per second.
Almost all mixing components in the engine follow the EngineObject
interface. This interface is very simple:
```cpp
class EngineObject : public QObject {
    Q_OBJECT
  public:
    EngineObject();
    ~EngineObject() override;

    virtual void process(CSAMPLE* pInOut,
            const std::size_t bufferSize) = 0;

    // Sub-classes re-implement and populate GroupFeatureState with the
    // features they extract.
    virtual void collectFeatures(GroupFeatureState* pGroupFeatures) const;
};
```

The process method performs all work in-place: it receives a single CSAMPLE buffer pInOut and a bufferSize in samples (not frames). The EngineObject reads from and writes back to the same buffer.
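To make the in-place interface concrete, here is a minimal stand-alone sketch of an EngineObject-style stage. The QObject machinery and Mixxx's real CSAMPLE typedef are omitted so the sketch is self-contained, and the GainStage class is hypothetical, not part of Mixxx:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for Mixxx's sample type.
using CSAMPLE = float;

// Simplified stand-in for EngineObject (the real one inherits QObject).
class EngineObjectSketch {
  public:
    virtual ~EngineObjectSketch() = default;
    // Processes bufferSize samples (not frames) in place.
    virtual void process(CSAMPLE* pInOut, std::size_t bufferSize) = 0;
};

// A hypothetical gain stage: multiplies every sample by a fixed factor.
class GainStage : public EngineObjectSketch {
  public:
    explicit GainStage(CSAMPLE gain) : m_gain(gain) {}
    void process(CSAMPLE* pInOut, std::size_t bufferSize) override {
        for (std::size_t i = 0; i < bufferSize; ++i) {
            pInOut[i] *= m_gain;
        }
    }

  private:
    CSAMPLE m_gain;
};
```

Because every stage shares this interface, a chain of processing is just a sequence of process calls on the same buffer.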
For components that truly need a separate read-only input alongside a
writable output, the engine also provides EngineObjectConstIn:
```cpp
class EngineObjectConstIn : public QObject {
    Q_OBJECT
  public:
    virtual void process(const CSAMPLE* pIn, CSAMPLE* pOut,
            const std::size_t bufferSize) = 0;
};
```

Almost all components of the mixing engine implement EngineObject. The benefit is that the mixing engine is modular: you can mix and match different mixing components to build up the desired chain of audio processing.
EngineMixer is the main class that drives the entire mixing engine.
SoundManager calls EngineMixer directly to request that the next
buffer of audio be generated.
EngineMixer inherits from QObject and AudioSource. Its core work
is done in its process method.
Terminology Alert: In the engine, there are two different types of
channels. In the context of the EngineMixer mixing together different
sources of audio, a channel is a source of audio like a deck, a sampler,
a microphone, etc. In the context of buffers of audio, the number of
channels refers to how many different signals are present in the buffer
(e.g. mono, stereo, multi-channel). Mixxx's mixing engine usually only
deals with stereo audio.
EngineMixer supports mixing multiple streams of audio together. To
add a channel of audio to EngineMixer you must create an
EngineChannel class that represents your channel of audio. For
example, decks and samplers both use the EngineDeck class, and
microphones use the EngineMicrophone class. All of these are
children of EngineChannel. To add a sampler or deck or microphone
to EngineMixer you call the addChannel method on EngineMixer.
(See EngineMixer::addChannel in src/engine/enginemixer.cpp.)
As you will find in mixer/microphone.cpp:

```cpp
ChannelHandleAndGroup channelGroup = pEngine->registerChannelGroup(group);
auto pMicrophone = std::make_unique<EngineMicrophone>(channelGroup, pEffectsManager);
pEngine->addChannel(std::move(pMicrophone));
```

Note that addChannel takes ownership of the channel via std::unique_ptr<EngineChannel>.
This registers an EngineMicrophone class with the EngineMixer. When
mixing the main and headphone outputs, EngineMixer will query the
EngineMicrophone that is created for whether it is active, and if so,
ask it to process itself to generate audio. Once EngineMicrophone
generates audio, EngineMixer will mix that audio into the main
output.
In EngineMixer::process, the EngineMixer does many tasks related
to mixing the audio together. First it looks for all active
EngineChannels and then calls process on each one of them so that
they each generate the audio from their channel to be mixed in this
callback. Next, the EngineMixer applies the volume to each channel
and adds their sample data to the headphone and main outputs,
depending on the results of EngineChannel::isPflEnabled() and
EngineChannel::isMainMixEnabled().
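The per-channel routing described above can be sketched as a simple loop over channels. The ChannelSketch struct and mixChannels function below are illustrative stand-ins, not Mixxx's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using CSAMPLE = float;

// Illustrative stand-in for an EngineChannel (names hypothetical).
struct ChannelSketch {
    std::vector<CSAMPLE> buffer;  // audio produced by this channel's process()
    CSAMPLE volume = 1.0f;        // channel fader gain
    bool pflEnabled = false;      // isPflEnabled()
    bool mainMixEnabled = true;   // isMainMixEnabled()
};

// Mixes each channel into the main and headphone buses, mirroring the
// isPflEnabled()/isMainMixEnabled() routing described above.
void mixChannels(const std::vector<ChannelSketch>& channels,
        std::vector<CSAMPLE>& mainOut,
        std::vector<CSAMPLE>& headphoneOut) {
    for (const ChannelSketch& ch : channels) {
        for (std::size_t i = 0; i < ch.buffer.size(); ++i) {
            const CSAMPLE sample = ch.buffer[i] * ch.volume;
            if (ch.mainMixEnabled) {
                mainOut[i] += sample;
            }
            if (ch.pflEnabled) {
                headphoneOut[i] += sample;
            }
        }
    }
}
```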
After mixing the crossfader orientation buses into the main mix, a series of post-processing steps are applied:
- Main channel effects (EngineEffectsManager) are applied to the main mix.
- Talkover ducking gain is applied to the main mix based on microphone activity.
- The headphone output is processed: PFL channels are mixed with an optional main mix contribution (controlled by [Main],headMix), and post-fader effects are applied.
- The booth output is derived from the main mix with its own gain ([Main],booth_gain).
- Talkover (microphone) channels are mixed into the main output (mode-dependent).
- The main gain ([Main],gain) is applied to the main mix.
- The sidechain mix (used for recording and broadcasting) is submitted to EngineSideChain.
- Balance is applied to the main output based on the [Main],balance control.
- EngineVuMeter measures the audio energy of the signal and updates the VU meter controls.
- EngineDelay is applied to the main, headphone, and booth outputs independently.
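As an illustration of the balance step, here is one common way to apply a balance control to an interleaved stereo buffer. The linear gain law and the function name are assumptions for the sketch; the exact curve Mixxx uses may differ:

```cpp
#include <cassert>
#include <cstddef>

using CSAMPLE = float;

// Applies a stereo balance to an interleaved L/R buffer.
// balance in [-1, 1]: +1 fully attenuates the left channel,
// -1 fully attenuates the right channel, 0 leaves both untouched.
// (Linear law for illustration; Mixxx's actual curve may differ.)
void applyBalance(CSAMPLE* pInOut, std::size_t bufferSize, CSAMPLE balance) {
    const CSAMPLE leftGain = balance > 0 ? 1.0f - balance : 1.0f;
    const CSAMPLE rightGain = balance < 0 ? 1.0f + balance : 1.0f;
    for (std::size_t i = 0; i + 1 < bufferSize; i += 2) {
        pInOut[i] *= leftGain;       // even indices: left samples
        pInOut[i + 1] *= rightGain;  // odd indices: right samples
    }
}
```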
After process is done, SoundManager retrieves the output buffers via EngineMixer::buffer(const AudioOutput& output).
EngineChannel is the interface that all audio channels must implement
to integrate with EngineMixer.
The following methods are used by EngineMixer to determine how to mix
the EngineChannel:
- isActive() -- if this method returns true, the EngineChannel is asked to produce audio via its process method. Concrete subclasses update this state by implementing updateActiveState().
- isPflEnabled() -- if this method returns true, the result of the process call is mixed into the engine PFL (pre-fader listen, headphone) output.
  - EngineChannel's default implementation of isPflEnabled() looks at the value of a pfl control to determine whether the channel should be heard in the headphone output. This allows other parts of Mixxx to control whether a channel is heard in the headphones or not.
- isMainMixEnabled() -- if this method returns true, the result of the process call is mixed into the engine main output.
  - EngineChannel's default implementation of isMainMixEnabled() defaults to true.
- getOrientation() -- the return value of this method determines what orientation this EngineChannel has. Orientations can be the left side of the crossfader, the center (not affected by the crossfader), or the right side of the crossfader.
  - EngineChannel's default implementation of getOrientation() looks at the value of an orientation control to determine which mix orientation the channel should have. This allows other parts of Mixxx to control which side of the crossfader a channel is oriented on.
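The interaction between a channel's orientation and the crossfader can be sketched as a gain function. This uses a simple linear fade for illustration; Mixxx actually supports configurable crossfader curves, and the function name is hypothetical:

```cpp
#include <cassert>

enum class Orientation { Left, Center, Right };

// Returns the gain the crossfader contributes to a channel, given the
// crossfader position in [-1 (full left), +1 (full right)].
// Center-oriented channels bypass the crossfader entirely.
// (Linear fade for illustration; Mixxx supports configurable curves.)
float crossfaderGain(Orientation o, float position) {
    switch (o) {
    case Orientation::Center:
        return 1.0f;
    case Orientation::Left:
        return position <= 0.0f ? 1.0f : 1.0f - position;
    case Orientation::Right:
        return position >= 0.0f ? 1.0f : 1.0f + position;
    }
    return 1.0f;
}
```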
Decks and samplers are fundamentally the same thing to the mixing
engine. They are both represented by the EngineDeck class, which is a
sub-class of EngineChannel. If you take a look at
the EngineDeck implementation in src/engine/channels/enginedeck.cpp
you'll see that it is
pretty straightforward and composed of a small list of EngineObjects
which process the audio for each deck and sampler.
The processing steps performed in-order when EngineDeck::process is
called are:
- EngineBuffer -- (See also Introduction to Mixxx's Deck/Sampler Processing.) Contains almost all player logic: decodes and resamples the audio, and processes loops, hotcues, and syncing. For stem tracks (when compiled with __STEM__), the multi-channel stem buffer is processed and mixed down to stereo here.
- EnginePregain -- Applies gain and ReplayGain to the audio, and also tracks playback speed/scratching state.
- Pre-fader effects -- EngineEffectsManager::processPreFaderInPlace applies the deck's pre-fader effects chain (EQ, flanger, and any other user-configured effects). This replaces the old EngineFilterBlock and EngineFlanger classes, which have been removed.
- EngineVuMeter -- Measures the audio energy of the signal and updates VU meter controls (embedded in EngineChannel as m_vuMeter).
Note: EngineVinylSoundEmu, EngineFilterBlock, EngineFlanger, and
EngineClipping have all been removed from the deck pipeline.
The resulting buffer of audio is mixed into the main and headphone
outputs by EngineMixer.
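As a rough illustration of the pregain step, the sketch below folds a gain knob and a ReplayGain correction (assumed here to be given in dB) into a single multiplier applied in place. The function is hypothetical and omits the speed/scratch tracking that the real EnginePregain performs:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

using CSAMPLE = float;

// Sketch of a pregain stage: combines the user's gain knob with a
// ReplayGain correction (given in dB) into one multiplier.
// (Illustrative only; not EnginePregain's actual implementation.)
void applyPregain(CSAMPLE* pInOut, std::size_t bufferSize,
        CSAMPLE knobGain, float replayGainDb) {
    const CSAMPLE replayGain = std::pow(10.0f, replayGainDb / 20.0f);
    const CSAMPLE totalGain = knobGain * replayGain;
    for (std::size_t i = 0; i < bufferSize; ++i) {
        pInOut[i] *= totalGain;
    }
}
```

A ReplayGain of -6.02 dB, for example, roughly halves the amplitude.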
- The updateActiveState method is implemented by EngineDeck and determines whether the deck is active (e.g. a track is loaded and playing). The base class isActive() returns the cached result.
- The isPflEnabled method is implemented by EngineChannel.
- The isMainMixEnabled method is implemented by EngineChannel and defaults to true.
- The getOrientation method is implemented by EngineChannel.