2 votes

I have four buffers that I am using for audio playback in a synthesizer. I submit two buffers initially, and then in the callback routine I write data into the next buffer and then submit that buffer.

When I generate each buffer I'm just putting a sine wave into it whose period is a multiple of the buffer length.

When I execute I hear brief pauses between each buffer. I've increased the buffer size to 16K samples at 44100 Hz so I can clearly hear that the whole buffer is playing, but there is an interruption between each.

What I think is happening is that the callback function is only called when ALL buffers that have been written are complete. I need the synthesis to stay ahead of the playback so I need a callback when each buffer is completed.

How do people usually solve this problem?

Update: I've been asked to add code. Here's what I have:

First I connect to the WaveOut device:

// Always grab the mapped wav device.
UINT deviceId = WAVE_MAPPER;

// This is an excellent tutorial:
// http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=4422&lngWId=3

WAVEFORMATEX wfx; 
wfx.nSamplesPerSec = 44100; 
wfx.wBitsPerSample = 16; 
wfx.nChannels = 1; 
wfx.cbSize = 0; 
wfx.wFormatTag = WAVE_FORMAT_PCM;
wfx.nBlockAlign = (wfx.wBitsPerSample >> 3) * wfx.nChannels;
wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;

_waveChangeEventHandle = CreateMutex(NULL,false,NULL);

MMRESULT res;
res = waveOutOpen(&_wo, deviceId, &wfx, (DWORD_PTR)WavCallback, 
    (DWORD_PTR)this, CALLBACK_FUNCTION);

I initialize the four frames I'll be using:

for (int i=0; i<_numFrames; ++i)
{
    WAVEHDR *header = _outputFrames+i;
    ZeroMemory(header, sizeof(WAVEHDR));
    // Block size is in bytes.  We have 2 bytes per sample.
    header->dwBufferLength = _codeSpec->OutputNumSamples*2; 
    header->lpData = (LPSTR)malloc(2 * _codeSpec->OutputNumSamples);
    ZeroMemory(header->lpData, 2*_codeSpec->OutputNumSamples);
    res = waveOutPrepareHeader(_wo, header, sizeof(WAVEHDR));
    if (res != MMSYSERR_NOERROR)
    {
        printf("Error preparing header: %d\n", res - MMSYSERR_BASE);
    }
}
SubmitBuffer();
SubmitBuffer();

Here is the SubmitBuffer code:

void Vodec::SubmitBuffer()
{
    WAVEHDR *header = _outputFrames + _curFrame;
    MMRESULT res;
    res = waveOutWrite(_wo, header, sizeof(WAVEHDR));
    if (res != MMSYSERR_NOERROR)
    {
        if (res == WAVERR_STILLPLAYING)  // was "=", which assigned instead of compared
        {
            printf("Cannot write when still playing.\n");
        }
        else
        {
            printf("Error calling waveOutWrite: %d\n", res - WAVERR_BASE);
        }
    }

    _curFrame = (_curFrame + 1) & 0x3;

    if (_pointQueue != NULL)
    {
        RenderQueue();
        _nextFrame = (_nextFrame + 1) & 0x3;
    }
}

And here is my callback code:

void CALLBACK Vodec::WavCallback(HWAVEOUT hWaveOut, 
    UINT uMsg, 
    DWORD_PTR dwInstance,   // DWORD_PTR, not DWORD, or the pointer is truncated on 64-bit
    DWORD_PTR dwParam1,
    DWORD_PTR dwParam2 )
{
    // Only listen for end-of-block messages.
    if (uMsg != WOM_DONE) return;

    Vodec *instance = (Vodec *)dwInstance;
    instance->SubmitBuffer();
}

The RenderQueue code is pretty simple - just copies a piece of a template buffer into the output buffer:

void Vodec::RenderQueue()
{
    double white = _pointQueue->White;
    white = 10.0; // For now just override with a constant value
    int numSamples = _codeSpec->OutputNumSamples;
    signed short int *data = (signed short int *)_outputFrames[_nextFrame].lpData;
    for (int i = 0; i < numSamples; ++i)
    {
        Sample x = white * _noise->Samples[i];
        data[i] = (signed short int)(x);
    }
    _sampleOffset += numSamples;
    if (_sampleOffset >= _pointQueue->DurationInSamples)
    {
        _sampleOffset = 0;
        _pointQueue = _pointQueue->next;
    }
}

UPDATE: Mostly solved the issue. I need to increment _nextFrame along with _curFrame (not conditionally). The playback buffer was getting ahead of the writing buffer.

However, when I decrease the playback buffer to 1024 samples, it gets choppy again. At 2048 samples it is clear. This happens for both Debug and Release builds.

Do you hear a pause or do you hear a sharp click? Post code. - Hans Passant
Most audio APIs have at least two buffers: one being played back and one being written to. The most common reason for dropouts between buffers is that a user of the library has done some sort of blocking (like taking a mutex). This might help: blog.bjornroche.com/2011/11/… - Bjorn Roche
here is some more info that might help: waveOutWrite( ) stuttering - Remy Lebeau
Hans: It is clearly a pause, not a sharp click. It does not sound like the buffers are being played on top of each other and truncating the waveform, which would produce a click. I have also tried with white noise and it's a pause, not a click. I've updated the above with code. - MattD
1024 samples is only about 23 ms. The only API you can work with at such low latencies is WASAPI, and the API you are using is a layer on top of that, so a certain overhead is involved. - Roman R.

1 Answer

0 votes

1024 samples is only about 23 ms of audio data (1024 / 44100 s). waveOut is a fairly high-level API; from Windows Vista onwards it is layered on top of the Core Audio stack, so some extra buffering overhead is unavoidable. If you want low-latency audio playback, you should use WASAPI directly: it can get latencies down to around 10 ms in shared mode and about 3 ms in exclusive mode. Smooth playback also depends on the other processes running on your system; in other words, it depends on how frequently your audio thread gets scheduled to deliver data. You should also look at the Multimedia Class Scheduler Service (MMCSS) and the AvSetMmThreadCharacteristics function, which lets an audio thread register for elevated scheduling.