I'm working on a VoIP project for iOS and noticed a weird issue. My setup is as follows:
- The capture end has a VoiceProcessingIO unit (for echo cancellation), with I/O enabled on both the output scope of the output bus and the input scope of the input bus.
- The render end has a RemoteIO unit, with I/O enabled on the output scope of the output bus.
I'm not using an AUGraph so far.
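To make the setup concrete, here is a minimal sketch of how I configure the two units (error checking omitted, and this is a simplification of my actual code; bus numbers follow the usual convention of bus 0 = output/speaker, bus 1 = input/microphone):

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit CreateIOUnit(OSType subtype) {
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = subtype,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(comp, &unit);
    return unit;
}

void SetupUnits(void) {
    UInt32 enable = 1, disable = 0;

    // Capture end: VoiceProcessingIO with both buses enabled.
    AudioUnit vpio = CreateIOUnit(kAudioUnitSubType_VoiceProcessingIO);
    // Input scope, input bus (1): microphone capture.
    AudioUnitSetProperty(vpio, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));
    // Output scope, output bus (0): speaker output.
    AudioUnitSetProperty(vpio, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &enable, sizeof(enable));

    // Render end: RemoteIO with only the output bus enabled.
    AudioUnit rio = CreateIOUnit(kAudioUnitSubType_RemoteIO);
    AudioUnitSetProperty(rio, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &enable, sizeof(enable));
    AudioUnitSetProperty(rio, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &disable, sizeof(disable));

    AudioUnitInitialize(vpio);
    AudioUnitInitialize(rio);
    AudioOutputUnitStart(vpio);
    AudioOutputUnitStart(rio);
}
```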
When I start recording, this setup produces very low output volume, until I disable the output scope, output bus of the VoiceProcessingIO unit. Although this sounds like a bug in my own code (enabling the wrong I/O bus), it still makes no sense to me why a change on the capture end would affect the render end.
After reading the Audio Unit Hosting Guide for iOS on developer.apple.com, I noticed it mentions several times that each design pattern should include only one I/O audio unit. I'm wondering whether this is mandatory or just a recommendation. Is it safe to keep my code with two I/O units?
Indeed, using two audio units has its own advantage: I can simply stop one unit when I want to mute one end. I can't do that with kAudioOutputUnitProperty_EnableIO, because it can't be changed after AudioUnitInitialize(). That means a one-audio-unit design would have to uninitialize the unit, disable the bus, and reinitialize it whenever I want to mute one direction. This makes for a bad user experience, because the voice pauses for a short while during the transition.
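For what it's worth, the only workaround I can think of for the one-unit design is to mute in the render callback instead of toggling EnableIO: keep the unit running and have the callback emit silence when a flag is set. A rough sketch of that idea (the flag name and callback are placeholders, not my real code):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <string.h>

// Hypothetical mute flag checked inside the render callback; flipping it
// silences playback without touching EnableIO or reinitializing the unit.
static atomic_bool gMuted;

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    if (atomic_load(&gMuted)) {
        // Zero the buffers and tell the unit the output is silent.
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0,
                   ioData->mBuffers[i].mDataByteSize);
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    // ... otherwise fill ioData with decoded voice samples ...
    return noErr;
}
```

But that still feels like a workaround rather than a reason to give up the two-unit setup, hence my question above.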
Thanks, Fuzhou