I am using the AudioUnitRender() function in my render callback to get audio data from the microphone in real time on iPhone:
err = AudioUnitRender(player->outputUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
The audio data arrives in ioData each time the callback fires. I am using the data returned in ioData as shown below:
Float32 *data = (Float32 *)ioData->mBuffers[0].mData;
for (frame = 0; frame < inNumberFrames; ++frame) {
    myvar[k++] = data[frame];
    // ...
}
Here myvar is an array of Float32. I had assumed the input audio was within the -1.0 to +1.0 range, since the values in myvar[] always fell inside it. I recently found that if I make loud sounds close to the microphone, I sometimes get values in myvar[] that are outside the +1.0/-1.0 range.
What exactly is the range of the Float32 samples returned by AudioUnitRender() for microphone audio?
Is it possible to get whatever raw audio AudioUnitRender() returns as integers? The AudioRecord class in Android gives me the raw microphone audio as signed 16-bit short values. I am looking for its equivalent in iOS, in Objective-C.
--- EDIT 1 ---
The current configuration used for the audio is given below:
// Configure the audio session
AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];
// we are going to play and record so we pick that category
NSError *error = nil;
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
// set the buffer duration to 5 ms
NSTimeInterval bufferDuration = .004; // with setPreferredSampleRate:16000 gives inNumberFrames = 64 in SineWaveRenderProc()
// NSTimeInterval bufferDuration = .016; // with setPreferredSampleRate:16000 gives inNumberFrames = 256 in SineWaveRenderProc() ;; NOTE: 0.004*4 = 0.016
[sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
// set the session's sample rate
// [sessionInstance setPreferredSampleRate:44100 error:&error]; // ORIGINAL // inNumberFrames = 256 in SineWaveRenderProc() with bufferDuration = .005; above
[sessionInstance setPreferredSampleRate:16000 error:&error]; // inNumberFrames = 64 in SineWaveRenderProc() with bufferDuration = .005; above
// activate the audio session
[[AVAudioSession sharedInstance] setActive:YES error:&error];
// XThrowIfError((OSStatus)error.code, "couldn't set session active");
// NOTE: looks like this is necessary
UInt32 one = 1;
AudioUnitSetProperty(player->outputUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one) );
AudioUnitSetProperty(player->outputUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one) );
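For completeness, if I wanted the unit to hand me 16-bit signed integers directly, my understanding is that the stream format on the microphone side (output scope of input element 1) could be set to something like the fragment below. This is a configuration sketch I have not verified on-device; the field values (mono, packed 16-bit at 16 kHz) are my assumptions:

```c
// Request packed 16-bit signed integer PCM, mono, 16 kHz from the
// input element (bus 1) of the remote I/O unit.
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 16000;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fmt.mBitsPerChannel   = 16;
fmt.mChannelsPerFrame = 1;
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerFrame    = 2;   // 16 bits * 1 channel, packed
fmt.mBytesPerPacket   = 2;
AudioUnitSetProperty(player->outputUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
```

With this in place the mData buffers in the render callback would presumably contain SInt16 samples instead of Float32, which is what I am after.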