I'm working on a VoIP project on iOS. As suggested by Apple's documentation, I use the VoiceProcessingIO audio unit to get echo cancellation support.
My app requires independent control of the render and capture sides (e.g., shutting down the speaker while the microphone keeps running), so I create two audio units: one with its capture bus disabled, and the other with its render bus disabled.
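For reference, my setup looks roughly like the sketch below, which disables one bus on each VoiceProcessingIO instance via kAudioOutputUnitProperty_EnableIO (the makeVoiceUnit helper is just for illustration, not my actual code):

```swift
import AudioToolbox

// Bus numbering on an I/O unit: element 0 = output (speaker), element 1 = input (mic).
let inputBus: AudioUnitElement = 1
let outputBus: AudioUnitElement = 0

// Illustrative helper: create a VoiceProcessingIO unit with one side disabled.
func makeVoiceUnit(enableInput: Bool, enableOutput: Bool) -> AudioUnit? {
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let comp = AudioComponentFindNext(nil, &desc) else { return nil }

    var unit: AudioUnit?
    guard AudioComponentInstanceNew(comp, &unit) == noErr, let unit = unit else { return nil }

    // Enable or disable the capture side (input scope, element 1).
    var flag: UInt32 = enableInput ? 1 : 0
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, inputBus,
                         &flag, UInt32(MemoryLayout<UInt32>.size))

    // Enable or disable the render side (output scope, element 0).
    flag = enableOutput ? 1 : 0
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, outputBus,
                         &flag, UInt32(MemoryLayout<UInt32>.size))
    return unit
}

// Two units, mirroring the setup described above:
let captureUnit = makeVoiceUnit(enableInput: true, enableOutput: false)  // mic only
let renderUnit  = makeVoiceUnit(enableInput: false, enableOutput: true)  // speaker only
```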
The current code works well, but then I learned how echo cancellation works: it compares the signals from the speaker and the microphone. So my concern is: is it safe to use two voice-processing audio units the way my approach does? Also, since echo cancellation happens mostly on the capture side, is it possible to use a RemoteIO audio unit for rendering (connected to the speaker)?
I'm not 100% confident since I'm new to this area. I also searched developer.apple.com, but all the examples I found there use only one audio unit.
Could anyone give me some hints? Could my approach interfere with any features of the VoiceProcessingIO unit?
Thanks, Fuzhou