I am building an extremely low-latency instrument with Core Audio on iOS.
My instrument has 4 triggers, and each trigger plays a .wav file. When I trigger a different .wav file while one is still playing, the previous sound should not be cut off; the sounds must overlap.
I also need to support recording.
I have already implemented this with OpenAL, but I found that I need to switch to RemoteIO/Audio Units because OpenAL does not let me record what it plays back.
If I use RemoteIO/Audio Units, do I need a multichannel mixer with 4 input buses, routing each .wav file's audio to its own bus? And if I do that, will a .wav file triggered again through the same bus cut off the sound already playing on that bus?
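The graph I have in mind looks roughly like this. This is only a sketch using the AUGraph API; the 4-bus count matches my triggers, the render-callback wiring and `buildGraph` helper are my own assumptions, and all error handling is omitted:

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: a multichannel mixer with 4 input buses (one per trigger)
// feeding the RemoteIO output unit. Error handling omitted.
static AUGraph buildGraph(AURenderCallback renderCb, void *userData) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponentDescription mixerDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUNode ioNode, mixerNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphOpen(graph);

    AudioUnit mixerUnit;
    AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

    // One mixer input bus per trigger.
    UInt32 busCount = 4;
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0,
                         &busCount, sizeof(busCount));

    // Each bus pulls its samples from a render callback.
    for (UInt32 bus = 0; bus < busCount; bus++) {
        AURenderCallbackStruct cb = {
            .inputProc = renderCb,
            .inputProcRefCon = userData,
        };
        AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
    }

    // Mixer output -> RemoteIO output element.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    return graph; // caller calls AUGraphStart(graph)
}
```

Is this the right shape for the graph, and does retriggering a file on a bus that is still rendering its previous playback truncate that playback?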
If a mixer is not the right way to do this, what are the possible alternatives?