
I'm trying to process an array of audio data through a Core Audio effect unit and get the manipulated data back without playing it -- i.e., render it offline. I've hit a wall, and it's probably something very basic that I'm not understanding.

Ideally, I want a single audio unit (like a delay effect) that pulls in raw data via a render callback, and then I call AudioUnitRender() on that unit over and over, saving each resulting buffer for later. So: {RENDER CALLBACK}->[EFFECT UNIT]->{Render Loop}->{Data}. But when I do this, no matter how many times I call AudioUnitRender() on the AudioUnit in a loop, the render callback is only invoked the first time.

Things I've tried:

  1. Worked: Set up a render callback on kAudioUnitSubType_DefaultOutput and called AudioOutputUnitStart(). This worked fine and played my audio data out of the speakers.

  2. Worked: Set up a render callback on kAudioUnitSubType_GenericOutput and called AudioUnitRender() in a loop. This seemed to work and passed out an unmodified copy of the original data just fine.

  3. Worked: Set up a render callback on a kAudioUnitSubType_Delay unit and connected its output to kAudioUnitSubType_DefaultOutput. Calling AudioOutputUnitStart() played my audio data out of the speakers with a delay, as expected.

  4. Failed: Finally, I set up a render callback on the kAudioUnitSubType_Delay unit and connected its output to kAudioUnitSubType_GenericOutput. Calling AudioUnitRender() in a loop only invokes the render callback on the first call to AudioUnitRender(), just like what happens if I try to render the effect directly.

None of the function calls return an OSStatus error that would point to a problem. Can someone help me understand why the effect's render callback isn't called more than once unless the effect is hooked up to the Default Output?

Thanks!

Below is a sample of the relevant code from my tests above. I can provide more details if necessary, but the setup code for connecting the units is there.

// Test Functions

// [EFFECT ONLY] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectOnly() {
    var testUnit = CreateUnit(type: .TestEffect)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}


// [DEFAULT OUTPUT ONLY] - WORKS!
func TestDefaultOutputPassthrough() {
    var testUnit = CreateUnit(type: .DefaultOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    AudioOutputUnitStart(testUnit)
}


// [GENERIC OUTPUT ONLY] - SEEMS TO WORK!
func TestRenderingToGenericOutputOnly() {
    var testUnit = CreateUnit(type: .GenericOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}


// [EFFECT]->[DEFAULT OUTPUT] - WORKS!
func TestEffectToDefaultOutput() {

    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .DefaultOutput)

    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit    = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber    = 0

    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    AudioOutputUnitStart(outputUnit)
}


// [EFFECT]->[GENERIC OUTPUT] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectToGenericOutput() {

    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .GenericOutput)

    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit    = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber    = 0

    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    // Manually render audio
    RenderUnit(outputUnit)
}



// SETUP FUNCTIONS


// AudioUnitRender callback. Reads float data for the left and right channels into the buffer as needed
let RenderCallback: AURenderCallback = {(inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData) -> OSStatus in
    NSLog("render \(inNumberFrames) frames")
    // Load audio data into ioData here… my data is floating point and plays back ok
    return noErr
}
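
// (A hedged sketch of what the elided body above might do, assuming a
// hypothetical non-interleaved stereo Float32 source `sourceBuffers: [[Float]]`
// and a hypothetical `readHead` frame index maintained elsewhere.)
//
//     if let abl = UnsafeMutableAudioBufferListPointer(ioData) {
//         for (channel, buffer) in abl.enumerated() {
//             let out = buffer.mData!.assumingMemoryBound(to: Float.self)
//             for frame in 0..<Int(inNumberFrames) {
//                 out[frame] = sourceBuffers[channel][readHead + frame]
//             }
//         }
//     }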


// Configure new audio unit
func CreateUnit(type: UnitType) -> AudioUnit {

    var unit: AudioUnit? = nil
    var outputcd = AudioComponentDescription()

    switch type {

    case .DefaultOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_DefaultOutput

    case .GenericOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_GenericOutput

    case .TestEffect:
        outputcd.componentType = kAudioUnitType_Effect
        outputcd.componentSubType = kAudioUnitSubType_Delay

    }

    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple
    outputcd.componentFlags = 0
    outputcd.componentFlagsMask = 0

    let comp = AudioComponentFindNext(nil, &outputcd)

    if comp == nil {
        print("can't get output unit")
        exit(-1)
    }

    let status = AudioComponentInstanceNew(comp!, &unit)
    NSLog("new unit status = \(status)")


    // Initialize the unit -- not actually sure *when* is best to do this
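    // (Rule of thumb: initialize after setting any properties that are fixed
    // while the unit is initialized, such as the stream format, and before
    // the first render.)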
    AudioUnitInitialize(unit!)

    return unit!
}


// Attach a callback to an audio unit
func AddRenderCallbackToUnit(_ unit: inout AudioUnit, callback: @escaping AURenderCallback) {
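    // Note: inputProcRefCon points at the caller's inout variable and is never
    // read by the test callback above, so it's inert here; real code should
    // pass stable state instead.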
    var input = AURenderCallbackStruct(inputProc: callback, inputProcRefCon: &unit)
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}


// Render up to 'numberOfFramesToRender' frames for testing
func RenderUnit(_ unitToRender: AudioUnit) {

    let numberOfFramesToRender = UInt32(20_000) // Incoming data length: 14,463,360

    let inUnit = unitToRender
    var ioActionFlags = AudioUnitRenderActionFlags()
    var inTimeStamp = AudioTimeStamp()
    let inOutputBusNumber: UInt32 = 0
    let inNumberFrames: UInt32 = 512
    var ioData = AudioBufferList.allocate(maximumBuffers: 2)

    var currentFrame: UInt32 = 0

    while currentFrame < numberOfFramesToRender {

        currentFrame += inNumberFrames

        NSLog("call render…")
        let status = AudioUnitRender(inUnit, &ioActionFlags, &inTimeStamp, inOutputBusNumber, inNumberFrames, ioData.unsafeMutablePointer)
        if (status != noErr) {
            NSLog("render status = \(status)")
            break
        }

        // Read new buffer data here and save it for later…

    }
}

2 Answers


You possibly need to have your code exit to the run loop between each call to render. This allows the OS to schedule some time for the audio thread to run the OS audio unit(s) between each successive render call.
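
A minimal sketch of that idea (assuming yielding between calls is all that's needed; the helper name is mine):

import Foundation

// Hypothetical helper: spin the current run loop briefly so system audio
// threads get a chance to run between AudioUnitRender() calls.
func yieldToRunLoop(for interval: TimeInterval = 0.001) {
    RunLoop.current.run(until: Date(timeIntervalSinceNow: interval))
}

// In the render loop:
//     let status = AudioUnitRender(…)
//     yieldToRunLoop()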


Turns out that when manually calling AudioUnitRender(), I was not incrementing the timestamp on each pass through the loop. (Playing through the Default Output node does this automatically.) Adding inTimeStamp.mSampleTime += Float64(inNumberFrames) to the loop works! Now the render loop can process data through a single AudioUnit and retrieve the processed data back.
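
Applied to the RenderUnit() loop from the question (the mFlags line is an extra assumption on my part; the increment is the actual fix):

var inTimeStamp = AudioTimeStamp()
inTimeStamp.mFlags = .sampleTimeValid  // assumed: mark mSampleTime as valid
inTimeStamp.mSampleTime = 0

while currentFrame < numberOfFramesToRender {
    let status = AudioUnitRender(inUnit, &ioActionFlags, &inTimeStamp, inOutputBusNumber, inNumberFrames, ioData.unsafeMutablePointer)
    if status != noErr { break }

    // Advance the timestamp by the frames just rendered -- the missing step.
    inTimeStamp.mSampleTime += Float64(inNumberFrames)
    currentFrame += inNumberFrames
}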

My code needs a lot more work – error checking, buffer index checking, etc. But the core functionality is there. (CoreAudio really needs much better documentation at the unit level.)