1
votes

I have DSP software that captures the audio being played using the WASAPI API in shared loopback mode.

hr = _pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, 0, 0, _pFormat, 0);

This part works fine, but now I want to be able to detect the number of channels actually playing. In other words, how would I be able to detect whether the audio playing is stereo, 5.1, or 7.1?

The problem is:
* Since loopback has to use shared mode, there could be multiple sources playing.
* This analysis has to be done in real time; I can't wait until playback is done.
* I need to distinguish between a channel not used at all by any playback source and a channel that is temporarily silent.

The best solution in my mind would be if I could retrieve a list of all playback sources/sub-mixes and query each one for its number of channels. That way I wouldn't have to analyze the audio data stream itself.


2 Answers

0
votes

Loopback recording takes place in the mix format defined on the endpoint, so regardless of what the original audio format was, you get the data in the mix format: mixed from possibly multiple playing sources and converted to that shared format.

WASAPI loopback contains the mix of all audio being played...

The GetMixFormat method retrieves the stream format that the audio engine uses for its internal processing of shared-mode streams...

After an application has used GetMixFormat or IsFormatSupported to find an appropriate format for a shared-mode or exclusive-mode stream, the application can call the Initialize method to initialize a stream with that format. An application that attempts to initialize a shared-mode stream with a format that is not identical to the mix format obtained from the GetMixFormat method, but that has the same number of channels and the same sample rate as the mix format, is likely to succeed. Before calling Initialize, the application can call IsFormatSupported to verify that Initialize will accept the format.

That is, even though WASAPI offers some flexibility in audio format, the channel configuration and sample rate are fixed by the shared format when it comes to loopback capture.

Since you are getting the mix, you cannot really identify "non-active" channels: that information is lost when the sources are mixed into the shared format.
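To illustrate the point above, here is a minimal sketch of reading the shared mix format's channel count via IAudioClient::GetMixFormat. It assumes a valid IMMDevice pointer for the render endpoint (obtaining one via IMMDeviceEnumerator is omitted), and is untested as written:

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// Returns the channel count of the endpoint's shared mix format,
// which is also the channel count a shared-mode loopback capture delivers.
UINT32 getMixChannelCount(IMMDevice *pDevice) {
    IAudioClient *pAudioClient = nullptr;
    WAVEFORMATEX *pMixFormat = nullptr;
    UINT32 nChannels = 0;

    HRESULT hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL,
                                   nullptr, (void **)&pAudioClient);
    if (SUCCEEDED(hr)) {
        // GetMixFormat reports the format the audio engine mixes into.
        hr = pAudioClient->GetMixFormat(&pMixFormat);
        if (SUCCEEDED(hr)) {
            nChannels = pMixFormat->nChannels;
            CoTaskMemFree(pMixFormat);
        }
        pAudioClient->Release();
    }
    return nChannels;
}
```

Whatever the playing sources look like, this is the only channel count the loopback stream will ever report.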

Also, the actual shared format can be configured interactively via Control Panel:

[Screenshot: speaker configuration in the Sound Control Panel]

0
votes

OK, I now have a solution to my problem. As far as I know you cannot detect sub-mixes in the shared mix, so the only option was to analyze the audio stream/capture buffer.

First, during my main capture loop, I store the current timestamp for every channel that carries a non-zero sample.

const time_t now = Date::getCurrentTimeMillis();
//Iterate all captured frames (samples are interleaved per channel)
for (i = 0; i < numFramesAvailable; ++i) {
    for (j = 0; j < _nChannelsIn; ++j) {
        //Mark a channel as active when it carries a non-zero sample
        if (pCaptureBuffer[i * _nChannelsIn + j] != 0) {
            _pUsedChannels[j] = now;
        }
    }
}

Then, every second, I call this function, which evaluates whether each channel has played during the last second. Based on which channels are playing, I can do conditional routing.

void checkUsedChannels() {
    const time_t now = Date::getCurrentTimeMillis();
    //Compare now against last-used timestamp and determine active channels
    for (size_t i = 0; i < _nChannelsIn; ++i) {
        if (now - _pUsedChannels[i] > 1000) {
            _pUsedChannels[i] = 0;
        }
    }
    //Update conditional routing
    for (const Input *pInput : _inputs) {
        pInput->evalConditions();
    }
}

A very simple solution, but it appears to be working.