
I am designing an AUGraph for an iOS application and would appreciate help on the following things.

If I want to play a number of audio files at once, does each file need an audio unit?

From the Core Audio docs:

Linear PCM and IMA/ADPCM (IMA4) audio: You can play multiple linear PCM or IMA4 format sounds simultaneously in iOS without incurring CPU resource problems.

AAC, MP3, and Apple Lossless (ALAC) audio: Playback for AAC, MP3, and Apple Lossless (ALAC) sounds uses efficient hardware-based decoding on iPhone and iPod touch. You can play only one such sound at a time.

So multiple AAC or MP3 files cannot be played at the same time. What is the optimal LPCM format to play multiple sounds at once?

Does this apply to Audio Units too? This passage is under the Audio Queue documentation.
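For reference, the uncompressed format usually used for simultaneous playback on iOS is 16-bit signed-integer, interleaved, packed linear PCM (what afconvert's `LEI16` produces). A sketch of filling out the stream description for that format follows; note the struct below is a stand-alone mirror of Core Audio's `AudioStreamBasicDescription` layout so it compiles anywhere — on-device you would include `<CoreAudio/CoreAudioTypes.h>` and use the real type and flag constants instead:

```c
#include <stdint.h>

/* Mirror of Core Audio's AudioStreamBasicDescription field layout,
 * defined locally so this sketch builds outside macOS/iOS. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mReserved;
} StreamFormat;

/* Describe 16-bit interleaved signed-integer linear PCM, the kind of
 * data "afconvert -d LEI16" writes into a .caf file. */
StreamFormat MakeLPCM16Format(double sampleRate, uint32_t channels) {
    StreamFormat fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = 0x6C70636D;      /* 'lpcm' = kAudioFormatLinearPCM */
    fmt.mFormatFlags      = (1u << 2)        /* kAudioFormatFlagIsSignedInteger */
                          | (1u << 3);       /* kAudioFormatFlagIsPacked        */
    fmt.mChannelsPerFrame = channels;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = channels * (fmt.mBitsPerChannel / 8);
    fmt.mFramesPerPacket  = 1;               /* uncompressed: one frame per packet */
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;
    return fmt;
}
```

Any number of streams in this format can be decoded and mixed in software without touching the hardware codec.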

Can an audio unit in an AUGraph be inactive? If an AUGraph looks like this

Speaker/output < recorder unit < mixer unit < number of audio file playing units

what happens if the recorder is not active? Would it still pull, but just not write the buffers to a file?
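In the pull model, the render call from the output unit propagates upstream through every node in the chain regardless of whether the recorder is "recording"; a pass-through tap simply forwards the buffer and only writes it out when armed. Here is a minimal pure-C sketch of that idea (the `RecorderTap` type and function names are hypothetical illustrations, not Core Audio API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical pass-through tap: the pull always flows through it;
 * writing to disk is the only thing gated by the "recording" flag. */
typedef struct {
    bool   recording;     /* armed to write to disk?            */
    size_t framesWritten; /* stand-in for the file on disk      */
} RecorderTap;

/* Upstream source: stands in for the mixer feeding the recorder. */
static void PullFromMixer(float *buf, size_t frames) {
    for (size_t i = 0; i < frames; i++)
        buf[i] = (float)i;   /* dummy audio: a ramp */
}

/* The tap's render: always pulls upstream, conditionally records. */
void RecorderRender(RecorderTap *tap, float *buf, size_t frames) {
    PullFromMixer(buf, frames);       /* the pull happens either way */
    if (tap->recording)
        tap->framesWritten += frames; /* only the write is skipped   */
}
```

The point: an "inactive" recorder node doesn't break the render chain; audio still reaches the speaker, and the node just skips its side effect.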


2 Answers


No; you need to use the mixer audio unit. Check this: http://developer.apple.com/library/ios/DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1
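Conceptually, what the mixer unit does each render cycle is sum its input buses into one output, applying a per-bus gain. A rough pure-C sketch of that operation (illustration only — the real `kAudioUnitSubType_MultiChannelMixer` does this for you inside the AUGraph):

```c
#include <stddef.h>

/* Sum busCount input buffers into out, applying a per-bus gain,
 * then clamp to [-1, 1] so the summed signal can't clip downstream.
 * This is the essence of what a mixer audio unit renders. */
void MixBuses(const float **inputs, const float *gains, size_t busCount,
              float *out, size_t frames) {
    for (size_t f = 0; f < frames; f++) {
        float acc = 0.0f;
        for (size_t b = 0; b < busCount; b++)
            acc += gains[b] * inputs[b][f];
        if (acc > 1.0f)  acc = 1.0f;
        if (acc < -1.0f) acc = -1.0f;
        out[f] = acc;
    }
}
```

So you give each file-player its own mixer input bus, not its own path to the output unit, and the single mixer output feeds the rest of the graph.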

Mostly reading the document above, wrapping the sample code in a class and creating a pair of utility structures, I coded this 'Simple Sound Engine' from scratch:

http://nicolasmiari.com/blog/a-simple-sound-engine-for-ios-using-the-audio-unit-framework/ (Link to article in my blog containing the source code). Sorry, I moved the blog to Jekyll/GitHub and this article didn't make the cut.

...I was going to start a repo on github, but it's too much trouble. I am a visual guy, still pretty much git-phobic. Okay, that was a long time ago... Now I use git from the command line :-)

You can use it as-is, or extract the Audio Unit-related code and adapt it to your project. I believe the Cocos Denshion 'Simple Audio Engine' does pretty much the same thing, but I haven't checked the source code.

Known issues: If you have an exception breakpoint set for C++ exceptions, the debugger will stop 2 or 3 times inside AUGraphInitialize(). These are 'non-crashing' exceptions, so you can click Continue each time and the code works fine.

To convert your wav files to the uncompressed .caf format, use this command on the Terminal:

afconvert -f caff -d LEI16 mySoundFile.wav mySoundFile.caf

EDIT: So I created a GitHub repo after all: https://github.com/nicolas-miari/Sound-Engine


Both common .wav and .caf files contain raw PCM audio samples, and can be played without hardware assist or DSP processing if they are already at the destination sample rate.

When there's no audio file or other synthesized data to feed an audio unit that's pulling buffers, the usual practice is to feed it buffers of silence (or perhaps a taper to zero if the previous buffer ended with non-zero amplitude).
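A minimal sketch of that practice in plain C (the function name is made up; in a real render callback you'd write into the `AudioBufferList` the system hands you): fill with zeros when the output is already at rest, otherwise ramp linearly from the last emitted amplitude down to zero so the output doesn't click.

```c
#include <stddef.h>
#include <string.h>

/* Fill a buffer with silence. If the previous buffer ended at a
 * non-zero amplitude, taper linearly down to exactly zero instead
 * of jumping, to avoid an audible click. */
void FillSilence(float *buf, size_t frames, float lastSample) {
    if (frames == 0) return;
    if (lastSample == 0.0f) {
        memset(buf, 0, frames * sizeof(float)); /* true silence */
        return;
    }
    /* Linear taper: step down from lastSample, landing on 0. */
    for (size_t i = 0; i < frames; i++)
        buf[i] = lastSample * (1.0f - (float)(i + 1) / (float)frames);
}
```

Subsequent callbacks can then pass `lastSample = 0` and get plain zero-filled buffers until real data is available again.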