
I'm using the VPIO (voice-processing I/O) audio unit for capture and playback on Mac OS X.

Everything goes well until I try to set the input/output format on the VPIO unit.

The format I want looks like this:

AudioStreamBasicDescription audio_format;
audio_format.mSampleRate       = 8000.0;
audio_format.mBitsPerChannel   = 16;
audio_format.mChannelsPerFrame = 1;
audio_format.mBytesPerFrame    = (audio_format.mBitsPerChannel >> 3) * audio_format.mChannelsPerFrame;
audio_format.mFramesPerPacket  = 1;
audio_format.mBytesPerPacket   = audio_format.mBytesPerFrame * audio_format.mFramesPerPacket;
audio_format.mFormatID         = kAudioFormatLinearPCM;
audio_format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

I can set this format on the VPIO unit's input bus / output scope, but setting it on the output bus / input scope fails with kAudioUnitErr_FormatNotSupported. When I use an AUHAL unit instead, I can set the format on both the input bus / output scope and the output bus / input scope.

What makes the difference between the two units?

After some attempts, I finally found one format that the VPIO's output bus / input scope accepts:

AudioStreamBasicDescription audio_format;
audio_format.mSampleRate       = 8000.0;
audio_format.mBitsPerChannel   = 32;
audio_format.mChannelsPerFrame = 1;
audio_format.mBytesPerFrame    = (audio_format.mBitsPerChannel >> 3) * audio_format.mChannelsPerFrame;
audio_format.mFramesPerPacket  = 1;
audio_format.mBytesPerPacket   = audio_format.mBytesPerFrame * audio_format.mFramesPerPacket;
audio_format.mFormatID         = kAudioFormatLinearPCM;
audio_format.mFormatFlags      = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;

But what confuses me is that the formats on the VPIO's input bus / output scope and output bus / input scope don't match. How can I find out which formats the VPIO unit actually supports? I can't find any documentation about the supported formats on Apple's site.

Can someone answer my question?

Thanks & regards.


3 Answers

1 vote

I just found a smarter engineer who worked this VoiceProcessing Audio Unit out:

How to use "kAudioUnitSubType_VoiceProcessingIO" subtype of core audio API in mac os?

The short answer is: set the format BEFORE initializing the unit. How intuitive!

0 votes

I have the same issue as you. When I try to set the audio format on the Voice Processing audio unit, I get error -10865 (kAudioUnitErr_PropertyNotWritable), which means the audio format property is not writable.

If I understand correctly, you cannot set an arbitrary format on this audio unit. If you need audio at an 8000 Hz sample rate, you will need to resample it.

This can be done:

  1. by creating an Audio Unit graph and adding a Converter Audio Unit in front of the Voice Processing unit, or
  2. by resampling the audio with an external library such as swresample from FFmpeg.

Good luck with your development.

-1 votes

I used the following settings to get the best file size and sound quality for recording voice in my project.

    // Set up the audio session
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

    // Define the recorder settings
    NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];

    [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
    [recordSetting setValue:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [recordSetting setValue:[NSNumber numberWithInt:AVAudioQualityLow] forKey:AVEncoderAudioQualityKey];

    // Initialize and prepare the recorder
    recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:NULL];
    recorder.delegate = self;
    recorder.meteringEnabled = YES;
    [recorder prepareToRecord];