
I am building an extremely low-latency instrument using Core Audio on iOS.

My instrument has 4 triggers, and triggering each one plays a .wav file. When I play a different .wav file, the sound of the previous .wav file should not get cut off.

I also need to support recording.

I have already implemented this using OpenAL, but I found that I need to switch to RemoteIO/AudioUnits, since OpenAL doesn't allow recording of what it plays.

If I use RemoteIO/AudioUnits, do I need to use a multichannel mixer with 4 channels and route the audio for each .wav file to its own channel? If I do this, will the sound of a previous .wav file played through the same channel get cut off?

If a mixer is not the right way to do this, what would be possible alternatives?

1 Answer

An audio mixer is the right way to do this: either the Multichannel Mixer audio unit, or one you write yourself in DSP code.
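
As a rough sketch of that approach (not code from the answer), here is how a 4-input Multichannel Mixer can be wired to a RemoteIO unit with an AUGraph. `MyRenderCallback` and `playerState` are hypothetical names for your own per-trigger playback code, and all error checking is omitted:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical: one opaque playback-state pointer per trigger, plus the
    // per-bus render callback sketched further below.
    extern void *playerState[4];
    OSStatus MyRenderCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData);

    AUGraph setUpGraph(void) {
        AUGraph graph;
        NewAUGraph(&graph);

        // One multichannel mixer node and one RemoteIO output node.
        AudioComponentDescription mixerDesc = {
            .componentType = kAudioUnitType_Mixer,
            .componentSubType = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponentDescription ioDesc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AUNode mixerNode, ioNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

        // One mixer input bus per trigger.
        UInt32 busCount = 4;
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

        // Each bus pulls PCM from its own trigger's .wav data via a render callback.
        for (UInt32 bus = 0; bus < busCount; bus++) {
            AURenderCallbackStruct cb = { MyRenderCallback, playerState[bus] };
            AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
        }

        // Mixer output feeds the RemoteIO output element; then start rendering.
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
        AUGraphInitialize(graph);
        AUGraphStart(graph);
        return graph;
    }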

If you keep feeding a mixer audio unit's input bus with PCM data from a previously started sound, that sound should continue to play even after you start subsequent sounds through other channels.
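
For example, one per-bus render callback might look like the sketch below. The `Player` struct, its field names, and the mono 16-bit integer format are assumptions for illustration (the bus's stream format would have to be set to match); nothing here comes from the answer itself:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical per-trigger state: mono 16-bit PCM decoded from one .wav file.
    typedef struct {
        SInt16 *samples;     // decoded .wav audio
        UInt32  totalFrames; // number of frames in the file
        UInt32  readFrame;   // playback position, advanced every render cycle
    } Player;

    OSStatus MyRenderCallback(void                       *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList            *ioData) {
        Player *p = (Player *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (p->readFrame < p->totalFrames) {
                // Keep feeding this sound's PCM; the mixer sums it with the other
                // buses, so sounds started later on other buses do not cut it off.
                out[i] = p->samples[p->readFrame++];
            } else {
                out[i] = 0; // file finished: output silence on this bus
            }
        }
        return noErr;
    }

Retriggering a pad would only reset that bus's readFrame; starting a sound on another bus never touches this state, which is why the earlier sound keeps playing.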