What I'm trying to accomplish is to process an array of audio data through a Core Audio effect unit and get the manipulated data back (without playing it -- i.e. offline). I've hit a wall and it's probably something very basic that I'm not understanding.
Ideally, I want a single audio unit (like a delay effect) to pull in raw data via a render callback, and then I call AudioUnitRender() on that unit over and over, saving the resulting buffer each time. So: {RENDER CALLBACK}->[EFFECT UNIT]->{Render Loop}->{Data}. But when I do this, no matter how many times I call AudioUnitRender() on the AudioUnit in a loop, the render callback is only called the first time.
Things I've tried:
Worked: Set up a render callback on kAudioUnitSubType_DefaultOutput and called AudioOutputUnitStart(). This worked fine and played my audio data out of the speakers.
Worked: Set up a render callback on kAudioUnitSubType_GenericOutput and called AudioUnitRender() in a loop. This seemed to work and passed out an unmodified copy of the original data just fine.
Worked: Set up a render callback on a kAudioUnitSubType_Delay unit and connected its output to kAudioUnitSubType_DefaultOutput. Calling AudioOutputUnitStart() played my audio data out of the speakers with a delay, as expected.
Failed: Finally, I set up a render callback on the kAudioUnitSubType_Delay unit and connected its output to kAudioUnitSubType_GenericOutput. Calling AudioUnitRender() in a loop only invokes the render callback on the first call, just like what happens when I try to render the effect directly.
I don't get any OSStatus errors from any of the function calls that would point to a problem. Can someone help me understand why the effect's render callback only fires once unless the effect is hooked up to the Default Output?
Thanks!
Below is a sample of the relevant code from my tests above. I can provide more details if necessary, but the setup code for connecting the units is there.
// Test Functions
// [EFFECT ONLY] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectOnly() {
    var testUnit = CreateUnit(type: .TestEffect)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}
// [DEFAULT OUTPUT ONLY] - WORKS!
func TestDefaultOutputPassthrough() {
    var testUnit = CreateUnit(type: .DefaultOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    AudioOutputUnitStart(testUnit)
}
// [GENERIC OUTPUT ONLY] - SEEMS TO WORK!
func TestRenderingToGenericOutputOnly() {
    var testUnit = CreateUnit(type: .GenericOutput)
    AddRenderCallbackToUnit(&testUnit, callback: RenderCallback)
    RenderUnit(testUnit)
}
// [EFFECT]->[DEFAULT OUTPUT] - WORKS!
func TestEffectToDefaultOutput() {
    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .DefaultOutput)
    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber = 0
    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    AudioOutputUnitStart(outputUnit)
}
// [EFFECT]->[GENERIC OUTPUT] - FAILS! - ONLY CALLS RENDER CALLBACK ON FIRST CALL TO RENDER
func TestRenderingEffectToGenericOutput() {
    var effectUnit = CreateUnit(type: .TestEffect)
    var outputUnit = CreateUnit(type: .GenericOutput)
    AddRenderCallbackToUnit(&effectUnit, callback: RenderCallback)

    var connection = AudioUnitConnection()
    connection.sourceAudioUnit = effectUnit
    connection.sourceOutputNumber = 0
    connection.destInputNumber = 0
    let result = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, UInt32(MemoryLayout<AudioUnitConnection>.stride))
    NSLog("connection result = \(result)")

    // Manually render audio
    RenderUnit(outputUnit)
}
// SETUP FUNCTIONS
// AudioUnitRender callback. Read in float data from left and right channel into buffer as necessary
let RenderCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData) -> OSStatus in
    NSLog("render \(inNumberFrames) frames")
    // Load audio data into ioData here… my data is floating point and plays back ok
    return noErr
}
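// For illustration only -- not part of my original code. A hypothetical sketch of how the
// callback could copy Float32 samples into ioData; "sourceChannels" and "readFrame" are
// stand-ins for my actual audio data and read position, not real names from my project.
func FillBuffers(_ ioData: UnsafeMutablePointer<AudioBufferList>, inNumberFrames: UInt32, sourceChannels: [[Float32]], readFrame: Int) {
    let buffers = UnsafeMutableAudioBufferListPointer(ioData)
    for (channel, buffer) in buffers.enumerated() {
        guard let data = buffer.mData else { continue }
        let samples = data.assumingMemoryBound(to: Float32.self)
        for frame in 0..<Int(inNumberFrames) {
            samples[frame] = sourceChannels[channel][readFrame + frame]
        }
    }
}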
// Configure new audio unit
func CreateUnit(type: UnitType) -> AudioUnit {
    var unit: AudioUnit? = nil
    var outputcd = AudioComponentDescription()
    switch type {
    case .DefaultOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_DefaultOutput
    case .GenericOutput:
        outputcd.componentType = kAudioUnitType_Output
        outputcd.componentSubType = kAudioUnitSubType_GenericOutput
    case .TestEffect:
        outputcd.componentType = kAudioUnitType_Effect
        outputcd.componentSubType = kAudioUnitSubType_Delay
    }
    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple
    outputcd.componentFlags = 0
    outputcd.componentFlagsMask = 0

    let comp = AudioComponentFindNext(nil, &outputcd)
    if comp == nil {
        print("can't get output unit")
        exit(-1)
    }
    let status = AudioComponentInstanceNew(comp!, &unit)
    NSLog("new unit status = \(status)")

    // Initialize the unit -- not actually sure *when* is best to do this
    AudioUnitInitialize(unit!)
    return unit!
}
// Attach a callback to an audio unit
func AddRenderCallbackToUnit(_ unit: inout AudioUnit, callback: @escaping AURenderCallback) {
    var input = AURenderCallbackStruct(inputProc: callback, inputProcRefCon: &unit)
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}
// Render up to 'numberOfFramesToRender' frames for testing
func RenderUnit(_ unitToRender: AudioUnit) {
    let numberOfFramesToRender = UInt32(20_000) // Incoming data length: 14,463,360
    let inUnit = unitToRender
    var ioActionFlags = AudioUnitRenderActionFlags()
    var inTimeStamp = AudioTimeStamp()
    let inOutputBusNumber: UInt32 = 0
    let inNumberFrames: UInt32 = 512
    var ioData = AudioBufferList.allocate(maximumBuffers: 2)
    var currentFrame: UInt32 = 0

    while currentFrame < numberOfFramesToRender {
        currentFrame += inNumberFrames
        NSLog("call render…")
        let status = AudioUnitRender(inUnit, &ioActionFlags, &inTimeStamp, inOutputBusNumber, inNumberFrames, ioData.unsafeMutablePointer)
        if status != noErr {
            NSLog("render status = \(status)")
            break
        }
        // Read new buffer data here and save it for later…
    }
}
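// For illustration only -- not part of my original code. A hypothetical helper showing one way
// the output AudioBufferList could be sized before each AudioUnitRender() call; the two-channel,
// non-interleaved Float32 layout is an assumption matching my test data.
func PrepareOutputBuffers(_ bufferList: UnsafeMutableAudioBufferListPointer, frames: UInt32) {
    let byteSize = frames * UInt32(MemoryLayout<Float32>.size)
    for i in 0..<bufferList.count {
        bufferList[i].mNumberChannels = 1
        bufferList[i].mDataByteSize = byteSize
        if bufferList[i].mData == nil {
            bufferList[i].mData = malloc(Int(byteSize))
        }
    }
}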