
My app records the audio from the chat and saves it to a file. I recorded some music on the app screen, but when I play back the audio.m4a file there is no sound. The file shows as "Apple MPEG-4 audio" and is 12 KB. Did I configure the settings wrong? Thanks in advance.

Edit: I added the stop-recording function.

var assetWriter: AVAssetWriter?
var input: AVAssetWriterInput?
var channelLayout = AudioChannelLayout()

func record() {
        guard let docURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first else {
            return
        }

        let inputURL = docURL.appendingPathComponent("audio.m4a")
   
        do {
            try assetWriter = AVAssetWriter(outputURL: inputURL, fileType: .m4a)
        } catch  {
            print("error: \(error)")
            assetWriter = nil
            return
        }
        
        guard let assetWriter = assetWriter else {
            return
        }

        channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_MPEG_5_1_D

        let audioSettings: [String: Any] = [
            AVNumberOfChannelsKey: 6,
            AVFormatIDKey: kAudioFormatMPEG4AAC_HE,
            AVSampleRateKey: 44100,
            AVEncoderBitRateKey: 128000,
            AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout.size(ofValue: channelLayout)),
        ]

        input = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        
        guard let audioInput = input else {
            print("Failed to find input.")
            return
        }
        
        audioInput.expectsMediaDataInRealTime = true
        
        // canAdd(_:) returns a Bool, not an optional
        if assetWriter.canAdd(audioInput) {
            assetWriter.add(audioInput)
        }

        RPScreenRecorder.shared().startCapture(handler: { (sample, bufferType, error) in
            guard error == nil else {
                print("Failed to capture with error: \(String(describing: error))")
                return
            }
            
            if bufferType == .audioApp {
                if assetWriter.status == .unknown {
                    // startWriting() returns a Bool, not an optional
                    if assetWriter.startWriting() {
                        assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))
                    }
                }

                if assetWriter.status == .writing {
                    if audioInput.isReadyForMoreMediaData {
                        if !audioInput.append(sample) {
                            print("Failed to append sample buffer.")
                        }
                    }
                }
            }
        })
    }

    func stopRecord() {
        RPScreenRecorder.shared().stopCapture { error in
            self.input?.markAsFinished()

            if error == nil {
                self.assetWriter?.finishWriting {
                    print("finish writing")
                }
            } else {
                print(error as Any)
            }
        }
    }

1 Answer


In light of your comments, you definitely don't need six-channel audio. Try these simpler mono audio settings:

let audioSettings: [String : Any] = [
    AVNumberOfChannelsKey: 1,
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000,
]

You don't say whether this is iOS or macOS. On macOS you have a problem, because as of 11.2.1 no .audioApp buffers are captured. If you still want microphone audio, you can enable it:

let recorder = RPScreenRecorder.shared()
recorder.isMicrophoneEnabled = true
    
recorder.startCapture(handler: { (sample, bufferType, error) in
    if bufferType == .audioMic { 
       // etc
    }
})

Don't bother checking the writer status; just append buffers when you can:

if audioInput.isReadyForMoreMediaData {
    if !audioInput.append(sample) {
        // do something
    }
}

PREVIOUSLY

You need to call assetWriter.finishWriting at some point.
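For example, a minimal stop function might look like this (a sketch, assuming the `assetWriter` and `input` properties from your question):

```swift
func stopRecord() {
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            print("stopCapture error: \(error)")
            return
        }
        // Mark the input finished before finalizing the file,
        // then let the writer flush everything to disk.
        self.input?.markAsFinished()
        self.assetWriter?.finishWriting {
            print("finished writing audio.m4a")
        }
    }
}
```

Until `finishWriting` completes, the m4a container isn't finalized, which is one way to end up with a tiny file that plays no sound.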

It's interesting that you have six-channel input. Are you using a special device or some kind of virtual device?