I am using Apple's example code iPhoneMixerEQGraphTest (https://developer.apple.com/library/ios/samplecode/iPhoneMixerEQGraphTest/Introduction/Intro.html) and simply replaced the iPodEQ AudioUnit with a Reverb2 (I get the same error when using a Delay instead). I found the hint to insert a converter unit before the reverb, which I did. Whatever I have tried so far, AUGraphInitialize returns error FFFFD58C.
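For reference, FFFFD58C is the OSStatus -10868, i.e. kAudioUnitErr_FormatNotSupported. A small checker along these lines (not part of the sample code, just a sketch) makes such result codes easier to read when placed after each call:

#include <AudioToolbox/AudioToolbox.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

// Sketch of a result checker: prints an OSStatus as a signed decimal and,
// when the bytes are printable, also as a four-char code.
static void CheckStatus(OSStatus err, const char *operation)
{
    if (err == noErr) return;

    UInt32 bigEndian = CFSwapInt32HostToBig((UInt32)err);
    char fourCC[5];
    memcpy(fourCC, &bigEndian, 4);
    fourCC[4] = '\0';

    if (isprint((unsigned char)fourCC[0]) && isprint((unsigned char)fourCC[1]) &&
        isprint((unsigned char)fourCC[2]) && isprint((unsigned char)fourCC[3]))
        printf("%s failed: '%s' (%d)\n", operation, fourCC, (int)err);
    else
        printf("%s failed: %d\n", operation, (int)err);
}

// usage, e.g.:  CheckStatus(AUGraphInitialize(mGraph), "AUGraphInitialize");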

This is the output of CAShow before calling AUGraphInitialize:

AudioUnitGraph 0x10300A:
  Member Nodes:
    node 1: 'aufc' 'conv' 'appl', instance 0x16da8c50 O
    node 2: 'auou' 'rioc' 'appl', instance 0x16db0900 O
    node 3: 'aufc' 'conv' 'appl', instance 0x16e5d630 O
    node 4: 'aufx' 'rvb2' 'appl', instance 0x16e90a40 O
    node 5: 'aumx' 'mcmx' 'appl', instance 0x16ea07e0 O
  Connections:
    node 5 bus 0 => node 3 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
    node 3 bus 0 => node 4 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
    node 4 bus 0 => node 2 bus 0 [ 2 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
  Input Callbacks:
    {0x76e71, 0x16da4024} => node 5 bus 0 [2 ch, 44100 Hz]
    {0x76e71, 0x16da4024} => node 5 bus 1 [2 ch, 44100 Hz]
  CurrentState: mLastUpdateError=0, eventsToProcess=F, isInitialized=F, isRunning=F

For comparison, this is the output for the working AUGraph (with iPodEQ):

AudioUnitGraph 0x12100A:
  Member Nodes:
    node 1: 'auou' 'rioc' 'appl', instance 0x17d62840 O
    node 2: 'aufx' 'ipeq' 'appl', instance 0x17d79630 O
    node 3: 'aumx' 'mcmx' 'appl', instance 0x17d71770 O
  Connections:
    node 3 bus 0 => node 2 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
    node 2 bus 0 => node 1 bus 0 [ 2 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
  Input Callbacks:
    {0x92fd5, 0x17d77f00} => node 3 bus 0 [2 ch, 44100 Hz]
    {0x92fd5, 0x17d77f00} => node 3 bus 1 [2 ch, 44100 Hz]
  CurrentState: mLastUpdateError=0, eventsToProcess=F, isInitialized=F, isRunning=F

And to be complete, here is most of the code (which works with the iPodEQ instead of the reverb, or converter + reverb):

...

CAComponentDescription rev_desc(kAudioUnitType_Effect, kAudioUnitSubType_Reverb2, kAudioUnitManufacturer_Apple);

AudioComponentDescription convertUnitDescription;
convertUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
convertUnitDescription.componentType          = kAudioUnitType_FormatConverter;
convertUnitDescription.componentSubType       = kAudioUnitSubType_AUConverter;
convertUnitDescription.componentFlags         = 0;
convertUnitDescription.componentFlagsMask     = 0;
result = AUGraphAddNode (mGraph, &convertUnitDescription, &convertNode);

// multichannel mixer unit
CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);

result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
result = AUGraphAddNode(mGraph, &convertUnitDescription, &convertNode); // note: convertNode is added here a second time (see above), which is why CAShow lists two 'aufc' converter nodes
result = AUGraphAddNode(mGraph, &rev_desc, &revNode);
result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);

// connect a node's output to a node's input
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, convertNode, 0);
result = AUGraphConnectNodeInput(mGraph, convertNode, 0, revNode, 0);
result = AUGraphConnectNodeInput(mGraph, revNode, 0, outputNode, 0);

result = AUGraphOpen(mGraph);

result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
result = AUGraphNodeInfo(mGraph, revNode, NULL, &mRev);
result = AUGraphNodeInfo(mGraph, convertNode, NULL, &mConvert);
// match mixer output with converter input
AudioStreamBasicDescription mixerStreamFormat;
UInt32 streamFormatSize = sizeof(mixerStreamFormat);
result = AudioUnitGetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &mixerStreamFormat, &streamFormatSize);

result = AudioUnitSetProperty(mConvert, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &mixerStreamFormat, streamFormatSize);

// match converter output with reverb input
AudioStreamBasicDescription revStreamFormat;
streamFormatSize = sizeof(revStreamFormat);
result = AudioUnitGetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &revStreamFormat, &streamFormatSize);

result = AudioUnitSetProperty(mConvert, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &revStreamFormat, streamFormatSize);

// set bus count
UInt32 numbuses = 2;

result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, sizeof(numbuses));

for (UInt32 i = 0; i < numbuses; ++i) {
    // setup render callback struct
    AURenderCallbackStruct rcbs;
    rcbs.inputProc = &renderInput;
    rcbs.inputProcRefCon = &mUserData;

    // set a callback for the specified node's specified input
    result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &rcbs);

    // set the input stream format, this is the format of the audio for mixer input
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, &mClientFormat, sizeof(mClientFormat));
}



AudioUnitSetParameter(mRev, kAudioUnitScope_Global, 0, kReverb2Param_DryWetMix, 50, 0);
[self setAudioUnitFloatParam:mRev paramID:kReverb2Param_DryWetMix inValue:44.44f];
[self setAudioUnitFloatParam:mRev paramID:kReverb2Param_Gain inValue:20.0f];
// set the output stream format of the mixer
result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));

// add a render notification, i.e. a callback that the graph will call every time it renders;
// the callback is invoked once before the graph's render operation and once after it completes
// (a sketch of such a callback follows this listing)
result = AUGraphAddRenderNotify(mGraph, renderNotification, &mUserData);

printf("pre AUGraphInitialize\n");
CAShow(mGraph);
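As an aside, a render-notification callback has the standard AURenderCallback shape and can tell the pre- and post-render passes apart via ioActionFlags. A sketch for illustration (this is not the renderNotification from the sample project):

// Sketch of a render-notification callback with the AURenderCallback signature.
static OSStatus renderNotifySketch(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        // the graph is about to render this buffer
    } else if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // the buffer has just been rendered
    }
    return noErr;
}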

The output of this CAShow() is shown above. Above and below this code snippet I use the original example code: https://developer.apple.com/library/ios/samplecode/iPhoneMixerEQGraphTest/Introduction/Intro.html Thank you very much for your advice!


1 Answer


There was just a stupid error: the following line

... result = AudioUnitGetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &revStreamFormat, &streamFormatSize);

needs mRev as its first argument, not mMixer. With that change it works (see the corrected lines below). What I still wonder about is why there are two converter nodes:

node 1: 'aufc' 'conv' 'appl', instance 0x16da8c50 O
node 3: 'aufc' 'conv' 'appl', instance 0x16e5d630 O

(presumably because AUGraphAddNode is called twice with convertUnitDescription in the code above).
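For completeness, the corrected format-matching lines read:

// match converter output with reverb input -- corrected: query the reverb's
// input format (mRev), then set it as the converter's output format
AudioStreamBasicDescription revStreamFormat;
UInt32 streamFormatSize = sizeof(revStreamFormat);
result = AudioUnitGetProperty(mRev, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &revStreamFormat, &streamFormatSize);
result = AudioUnitSetProperty(mConvert, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &revStreamFormat, streamFormatSize);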

And when I use the AUDelay instead of the Reverb2, I can only manipulate the WetDryMix parameter and not the other three parameters:

AudioUnitSetParameter(mRev, kAudioUnitScope_Global, 0, kDelayParam_WetDryMix, 35.0, 0);   // this works, but the following calls have no effect:
AudioUnitSetParameter(mRev, kAudioUnitScope_Global, 0, kDelayParam_LopassCutoff, 2500.0, 0);   // 15000.0
AudioUnitSetParameter(mRev, kAudioUnitScope_Global, 0, kDelayParam_DelayTime, 0.5, 0);   // 1.0
AudioUnitSetParameter(mRev, kAudioUnitScope_Global, 0, kDelayParam_Feedback, 18.0, 0);  // 50.0

AudioUnitGetParameter shows that these parameters remain unchanged at the values noted in the comments above. Any ideas regarding this?
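One thing that may explain it: AudioUnitSetParameter is declared as AudioUnitSetParameter(inUnit, inID, inScope, inElement, inValue, inBufferOffsetInFrames), i.e. the parameter ID comes before the scope. In the calls above, kAudioUnitScope_Global (which is 0) ends up in the parameter-ID slot and the kDelayParam_* constant in the element slot; since kDelayParam_WetDryMix happens to be 0 as well, only that call does what was intended. A sketch with the documented argument order (not tested against the project above):

// sketch: same parameter values, but with the documented argument order
// (unit, parameterID, scope, element, value, bufferOffsetInFrames)
AudioUnitSetParameter(mRev, kDelayParam_WetDryMix,    kAudioUnitScope_Global, 0, 35.0,   0);
AudioUnitSetParameter(mRev, kDelayParam_LopassCutoff, kAudioUnitScope_Global, 0, 2500.0, 0);
AudioUnitSetParameter(mRev, kDelayParam_DelayTime,    kAudioUnitScope_Global, 0, 0.5,    0);
AudioUnitSetParameter(mRev, kDelayParam_Feedback,     kAudioUnitScope_Global, 0, 18.0,   0);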