
Given an array (of changing length) of frequencies and amplitudes, can I generate a single audio buffer, sample by sample, that includes all of the tones in the array? If not, what is the best way to generate multiple tones in a single audio unit? Should each note generate its own buffer, which I then sum into an output buffer? Wouldn't that amount to the same thing as doing it all at once?

I'm working on an iOS app that generates notes from touches. I considered using STK, but I don't want to have to send note-off messages; I'd rather just generate sinusoidal tones for the notes currently held in an array. Each note actually needs to produce two sinusoids with varying frequency and amplitude. One note may be playing the same frequency as another note, so a note-off message at that frequency could cause problems. Eventually I want to manage an amplitude (ADSR) envelope for each note outside of the audio unit. I also want response time to be as fast as possible, so I'm willing to do some extra work/learning to keep the audio code as low-level as I can.

I've been working from single-tone sine wave generator examples and tried essentially doubling one of them, something like:

buffer[frame] = (sin(theta1) + sin(theta2)) / 2;

I increment theta1/theta2 by frequency1/frequency2 over the sample rate (I realize that calling sin() like this is not the most efficient approach), but I get aliasing effects. I have yet to find an example with multiple frequencies, or with a data source other than reading audio from a file.
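For reference, here is a simplified sketch of what I'm attempting, with the 2π phase-increment factor written out explicitly (the Note struct and all names are just illustrative, not my actual code):

#include <math.h>
#include <stdint.h>

typedef struct {
    double freq1, amp1;    // first sinusoid of the note
    double freq2, amp2;    // second sinusoid of the note
    double theta1, theta2; // running phase, in radians
} Note;

// Sum every note's two sinusoids into one buffer, sample by sample.
void renderNotes(float *buffer, uint32_t frames,
                 Note *notes, int noteCount, double sampleRate)
{
    const double twoPi = 2.0 * M_PI;
    const double scale = (noteCount > 0) ? 1.0 / (2.0 * noteCount) : 0.0;
    for (uint32_t frame = 0; frame < frames; frame++) {
        double sample = 0.0;
        for (int n = 0; n < noteCount; n++) {
            Note *note = &notes[n];
            sample += note->amp1 * sin(note->theta1)
                    + note->amp2 * sin(note->theta2);
            note->theta1 += twoPi * note->freq1 / sampleRate;
            note->theta2 += twoPi * note->freq2 / sampleRate;
            // Wrap phases so they don't lose precision as they grow.
            if (note->theta1 > twoPi) note->theta1 -= twoPi;
            if (note->theta2 > twoPi) note->theta2 -= twoPi;
        }
        buffer[frame] = (float)(sample * scale); // keep the mix in [-1, 1]
    }
}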

Any suggestions or examples? I originally had each note generate its own audio unit, but that gave me too much latency from touch to sounding note (and it seemed inefficient, too). I'm newer to this level of programming than I am to digital audio in general, so please be gentle if I'm missing something obvious.


1 Answer


Yes, of course you can; you can do whatever you like inside your render callback. When you set this callback up, you can pass in a pointer to an object.

That object could contain the on/off states for each tone. In fact, the object could contain the method responsible for filling up the buffer. (Just make sure the object is nonatomic if it is a property; otherwise you will get artefacts due to locking issues.)
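As a minimal sketch (assuming you already have a RemoteIO unit set up; SynthState and its fields are placeholder names), you attach the callback with an AURenderCallbackStruct and pass your object in through inputProcRefCon:

#include <AudioUnit/AudioUnit.h>
#include <stdbool.h>

// Placeholder state object shared with the render callback.
typedef struct {
    bool   enabled[12]; // on/off state for each tone
    double phase[12];   // running phase per tone
    double freq[12];    // frequency per tone
} SynthState;

static OSStatus renderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    SynthState *state = (SynthState *)inRefCon; // our object comes back here
    // ... fill ioData->mBuffers[0].mData for inNumberFrames frames ...
    return noErr;
}

// Attach the callback to the RemoteIO unit, passing the state through refCon.
void attachRenderCallback(AudioUnit remoteIO, SynthState *state)
{
    AURenderCallbackStruct cb = { .inputProc       = renderCallback,
                                  .inputProcRefCon = state };
    AudioUnitSetProperty(remoteIO,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         0, // output element / bus 0
                         &cb, sizeof(cb));
}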

What exactly are you trying to achieve? Do you really need to generate the samples on the fly?

If so, you run the risk of overloading the RemoteIO audio unit's render callback, which will give you glitches and artefacts.

You might get away with it on the simulator, then move it over to a device and find that it mysteriously no longer works, because you are running on something like 50 times less processing power, and one callback cannot complete before the next one arrives.

That said, you can get away with a lot.

I have made a twelve-tone player that can simultaneously play any number of individual tones.

All I do is keep a ring buffer for each tone (I am using quite a complex waveform, so computing it takes a lot of time; in fact, I calculate it the first time the application runs and subsequently load it from a file), and maintain a read head and an enabled flag for each ring.
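Structurally it's something like this (a sketch; the sizes and names are made up):

#include <stdbool.h>
#include <stdint.h>

#define NUM_TONES   12
#define RING_LENGTH 4096 // arbitrary length for the precomputed waveform

typedef struct {
    float    samples[RING_LENGTH]; // precomputed waveform, loaded at startup
    uint32_t readHead;             // current read position in the ring
    bool     enabled;              // is this tone currently sounding?
} ToneRing;

static ToneRing rings[NUM_TONES];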

Then I add everything up in the render callback, and this performs fine on the device, even with all 12 tones playing together. I know the documentation tells you not to do this (it recommends using the callback only to fill one buffer from another), but you can get away with a lot, and it is a PITA to code up some sort of buffering system that calculates on a different thread.
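The summing itself is then just a loop over the enabled rings, continuing the sketch above:

// Called from the render callback: mix all enabled rings into the output.
static void mixRings(float *out, uint32_t frames)
{
    for (uint32_t i = 0; i < frames; i++) {
        float sample = 0.0f;
        for (int t = 0; t < NUM_TONES; t++) {
            if (!rings[t].enabled)
                continue;
            sample += rings[t].samples[rings[t].readHead];
            rings[t].readHead = (rings[t].readHead + 1) % RING_LENGTH;
        }
        out[i] = sample / NUM_TONES; // scale so all 12 tones can't clip
    }
}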