AudioConverter Services unable to convert from AAC to PCM from raw network stream

Hi everyone!


I'm developing an application that ingests raw AAC data directly from a socket, so I have "packets" in ADTS format extracted from the stream. I have been trying to convert these compressed audio packets to PCM so I can enqueue and play them. The PCM player itself works, but I have not been able to get the conversion right. I have tried several approaches using AudioConverterFillComplexBuffer, but I always get some kind of error that I cannot solve. At the moment the logs show something like:


2018-07-08 21:27:51.359716+0200 streaming-test-v3[43319:14013195] AACDecoder.cpp:189:Deserialize:  Too few bits left in input buffer
2018-07-08 21:27:51.360047+0200 streaming-test-v3[43319:14013195] AACDecoder.cpp:220:DecodeFrame:  Error deserializing packet
2018-07-08 21:27:51.360393+0200 streaming-test-v3[43319:14013195] [ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x7ff4f705b240) Error decoding packet 54: err = -1, packet length: 1
2018-07-08 21:27:51.360691+0200 streaming-test-v3[43319:14013195] [ac] ACMP4AACBaseDecoder.cpp:1346:ProduceOutputBufferList: 'A0'
AudioConverterFillComplexBuffer error: 1852797029


I have a lot of doubts regarding these methods:

  1. Should I skip the ADTS header (7-9 bytes)? (If so, see the sketch right after this list.)
  2. What should the output buffer length and allocation capacity be, and how do I set them? I don't know what size the decoded frames are going to have.
  3. Why does it say "too few bits left in input buffer"?
  4. I'm streaming a single-channel stream; I already had some problems with that when streaming raw PCM and playing it.
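Regarding question 1: this is my understanding of the ADTS layout, and the header-stripping helper I would use if the header does need to go (a minimal sketch; stripADTSHeader and the variable names are mine):

    import Foundation

    // ADTS, as I understand it: bytes 0-1 carry the 0xFFF syncword, bit 0 of byte 1 is
    // protection_absent (1 = no CRC, 7-byte header; 0 = CRC present, 9-byte header),
    // and bytes 3-5 carry a 13-bit aac_frame_length that includes the header itself.
    func stripADTSHeader(from packet: Data) -> Data? {
        guard packet.count >= 7,
              packet[0] == 0xFF, (packet[1] & 0xF0) == 0xF0 else {
            return nil // not an ADTS packet
        }
        let headerLength = (packet[1] & 0x01) == 1 ? 7 : 9
        let frameLength = (Int(packet[3] & 0x03) << 11)
                        | (Int(packet[4]) << 3)
                        | (Int(packet[5]) >> 5)
        guard frameLength > headerLength, packet.count >= frameLength else { return nil }
        // hand the converter the raw AAC access unit, without the ADTS header
        return packet.subdata(in: headerLength..<frameLength)
    }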


I'm posting my code in case someone can help me.


1. Set up the audio converter

    func setupAudioConverter() {
        var outputFormat = AudioStreamBasicDescription.init(
            mSampleRate: 44100,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 16,
            mReserved: 0)
        
//        let outputFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat, sampleRate: 44100.0, channels: 1, interleaved: false)
        
        var inputFormat = AudioStreamBasicDescription.init(
            mSampleRate: 44100,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
//        let inputFormat = AVAudioFormat(streamDescription: &inputDesc)
        
        let status: OSStatus =  AudioConverterNew(&inputFormat, &outputFormat, &audioConverter)
        if (status != 0) {
            print("setup converter error, status: \(status)")
        }
        
        print("audioConverter: \(audioConverter)")
    }
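Related to question 2: once the converter has been created, I assume I can ask it how big a decoded packet can get and size the output buffer from that, along these lines (a sketch; I haven't verified what it returns for this setup):

    // Ask the converter for the largest decoded packet it will produce
    // (for LPCM output one packet is one frame, so this is a per-frame size).
    var maxOutputPacketSize: UInt32 = 0
    var propertySize = UInt32(MemoryLayout<UInt32>.size)
    let propStatus = AudioConverterGetProperty(audioConverter!,
                                               kAudioConverterPropertyMaximumOutputPacketSize,
                                               &propertySize,
                                               &maxOutputPacketSize)
    if propStatus == noErr {
        print("max output packet size: \(maxOutputPacketSize) bytes")
    }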


2. This is the input callback for FillComplexBuffer

    var inputDataVar: AudioConverterComplexInputDataProc = {(
        aAudioConverter: AudioConverterRef,
        aNumDataPackets: UnsafeMutablePointer<UInt32>,
        aData: UnsafeMutablePointer<AudioBufferList>,
        aPacketDesc: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?,
        aUserData: UnsafeMutableRawPointer?) -> OSStatus in
        var userData = UnsafeMutablePointer<PassthroughUserData>(OpaquePointer(aUserData)!).pointee
        
        if userData.mDataSize == 0 {
            aNumDataPackets.pointee = 0
            return -9078
        }
        
        print("aUserData: \(aUserData)")
        print("UserData: \(userData)")
        
        if aPacketDesc != nil {
            userData.mPacket.mStartOffset = 0
            userData.mPacket.mVariableFramesInPacket = 0
            userData.mPacket.mDataByteSize = userData.mDataSize
            aPacketDesc?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>(&userData.mPacket)
        }
        
        UnsafeMutablePointer<AudioBufferList>(OpaquePointer(aData)!).pointee.mBuffers.mNumberChannels = userData.mChannels
        UnsafeMutablePointer<AudioBufferList>(OpaquePointer(aData)!).pointee.mBuffers.mDataByteSize = userData.mDataSize
        UnsafeMutablePointer<AudioBufferList>(OpaquePointer(aData)!).pointee.mBuffers.mData = userData.mData
        
        userData.mDataSize = 0
        
        return noErr
    }
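For context, PassthroughUserData is just a plain struct along these lines (the field types match how it is used in the callback and in decodeAudioFrame):

    struct PassthroughUserData {
        var mChannels: UInt32
        var mDataSize: UInt32
        var mData: UnsafeMutableRawPointer?
        var mPacket: AudioStreamPacketDescription
    }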


3. And the decode frame function

    func decodeAudioFrame(frame: Data) {
        var frameCopy = frame
        
        if audioConverter == nil {
            self.setupAudioConverter()
        }
        
        var propSize: UInt32 = UInt32(frame.count)
        var maxPacketSize = 0
        var prop = AudioConverterGetProperty(audioConverter!, kAudioConverterPropertyMaximumOutputPacketSize, &propSize, &maxPacketSize)
        
        let packetDescription: AudioStreamPacketDescription = AudioStreamPacketDescription.init(mStartOffset: 0, mVariableFramesInPacket: 0, mDataByteSize: UInt32(frameCopy.count))
        var userData: PassthroughUserData = PassthroughUserData(mChannels: 1, mDataSize: UInt32(frame.count), mData: &frameCopy, mPacket: packetDescription)
                
        let buffer = UnsafeMutablePointer<Int16>.allocate(capacity: frameCopy.count)
        let audioBuffer: AudioBuffer = AudioBuffer.init(mNumberChannels: 1, mDataByteSize: UInt32(MemoryLayout.size(ofValue: buffer)), mData: buffer)
        var decBuffer: AudioBufferList = AudioBufferList.init(mNumberBuffers: 1, mBuffers: audioBuffer)
        
        var outPacketDescription: AudioStreamPacketDescription? = AudioStreamPacketDescription.init()
        memset(&outPacketDescription, 0, MemoryLayout.size(ofValue: outPacketDescription))
        
        var numFrames: UInt32 = 1
        
        let status = AudioConverterFillComplexBuffer(
            audioConverter!,
            inputDataVar,
            &userData,
            &numFrames,
            &decBuffer,
            &outPacketDescription!)
        if status != 0 {
            print("AudioConverterFillComplexBuffer error: \(status)")
        }
        
        print("numFrames: \(numFrames)")
    }
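Also on question 2: since an AAC-LC packet decodes to 1024 PCM frames, my current thinking is that the output buffer should hold 1024 frames of the output format, and that the packet count passed to AudioConverterFillComplexBuffer should start at 1024 rather than 1. A sketch against the 16-bit mono output format above, meant to replace the buffer/numFrames setup inside decodeAudioFrame:

        // AAC-LC yields 1024 PCM frames per input packet; with 16-bit mono output
        // that is 1024 * 2 bytes. For LPCM one output packet == one frame, so the
        // packet count handed to AudioConverterFillComplexBuffer is also 1024.
        let framesPerAACPacket: UInt32 = 1024
        let bytesPerFrame: UInt32 = 2
        let outByteCount = Int(framesPerAACPacket * bytesPerFrame)
        let outData = UnsafeMutableRawPointer.allocate(byteCount: outByteCount,
                                                       alignment: MemoryLayout<Int16>.alignment)
        var decBuffer = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 1,
                                  mDataByteSize: UInt32(outByteCount),
                                  mData: outData))
        var numFrames: UInt32 = framesPerAACPacket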


Thank you, everyone. I hope @theAnalogKid can shed some light on this, because it's getting hard.

One thing I forgot to state here:

I know that each packet I receive from my network stream is an AAC packet containing 1 frame; I can tell this from the ADTS header.
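In case it's relevant, this is the check I base that on (a small sketch; aacFramesPerADTSPacket is my own helper name):

    // number_of_raw_data_blocks_in_frame lives in the low two bits of ADTS byte 6;
    // the packet carries that value + 1 AAC frames, so 0 means exactly one frame.
    func aacFramesPerADTSPacket(_ packet: Data) -> Int? {
        guard packet.count >= 7 else { return nil }
        return Int(packet[6] & 0x03) + 1
    }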


The code I posted was a bit of a mess because I had been trying out a lot of different approaches. Here is my updated code, with which I have managed to get just two frames of length 4 out of AudioConverterFillComplexBuffer.


1. setupAudioConverter has changed a bit:

    func setupAudioConverter() {
        var outputFormat = AudioStreamBasicDescription.init(
            mSampleRate: 44100,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
            mBytesPerPacket: 4,
            mFramesPerPacket: 1,
            mBytesPerFrame: 4,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 32,
            mReserved: 0)
        
//        let outputFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat, sampleRate: 44100.0, channels: 1, interleaved: false)
        
        var inputFormat = AudioStreamBasicDescription.init(
            mSampleRate: 22050,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
//        let inputFormat = AVAudioFormat(streamDescription: &inputDesc)
        
        let status: OSStatus =  AudioConverterNew(&inputFormat, &outputFormat, &audioConverter)
        if (status != 0) {
            print("setup converter error, status: \(status)")
        }
        
        print("audioConverter: \(audioConverter)")
    }


2. I added one line to my input callback:

UnsafeMutablePointer<UInt32>(OpaquePointer(aNumDataPackets)).pointee = aData.pointee.mBuffers.mDataByteSize / 2
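(I'm not sure that division is right, though. My reading of the docs is that the callback should report the number of input packets it actually provides, which is one ADTS packet per call here, i.e. something more like:)

aNumDataPackets.pointee = 1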


3. The decoding frame function:

    func decodeAudioFrame(frame: Data) {
        var frameCopy = frame
        
        if audioConverter == nil {
            self.setupAudioConverter()
        }
        
        let packetDescription: AudioStreamPacketDescription = AudioStreamPacketDescription.init(mStartOffset: 0, mVariableFramesInPacket: 0, mDataByteSize: UInt32(frameCopy.count))
        var userData: PassthroughUserData = PassthroughUserData(mChannels: 1, mDataSize: UInt32(frame.count), mData: &frameCopy, mPacket: packetDescription)
                
        let buffer = UnsafeMutablePointer<Float>.allocate(capacity: 2048)
        let audioBuffer: AudioBuffer = AudioBuffer.init(mNumberChannels: 1, mDataByteSize: UInt32(MemoryLayout.size(ofValue: buffer)), mData: buffer)
        var decBuffer: AudioBufferList = AudioBufferList.init()
        decBuffer.mNumberBuffers = 1
        decBuffer.mBuffers = audioBuffer
        
        var outPacketDescription: AudioStreamPacketDescription? = AudioStreamPacketDescription.init()
        memset(&outPacketDescription, 0, MemoryLayout.size(ofValue: outPacketDescription))
        
        var numFrames: UInt32 = 1
        
        repeat {
            let status = AudioConverterFillComplexBuffer(
                audioConverter!,
                inputDataVar,
                &userData,
                &numFrames,
                &decBuffer,
                &outPacketDescription!)
            if status != 0 {
                print("AudioConverterFillComplexBuffer error: \(status)")
                break
            } else {
                print("status: \(status)")
            }
            
            print("numFrames: \(numFrames)")
            
            if numFrames > 0 {
                let i16bufptr = UnsafeBufferPointer(start: decBuffer.mBuffers.mData?.assumingMemoryBound(to: UInt16.self), count: Int(decBuffer.mBuffers.mDataByteSize))
                print("decBuffer length: \(decBuffer.mBuffers.mDataByteSize)")
                print("decBuffer: \(Array(i16bufptr))")
            } else {
                break
            }
            
        } while true
    }
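One more detail I noticed while writing this up: the output format above is 32-bit signed integer PCM, so reading the result as UInt16 with a count equal to the byte size over-reads the buffer. Something along these lines should match the declared format (a sketch for the print inside the loop):

    // Interpret the decoded bytes as Int32 samples (the output ASBD above is
    // 32-bit signed integer) and divide the byte count by the sample size.
    let byteCount = Int(decBuffer.mBuffers.mDataByteSize)
    let sampleCount = byteCount / MemoryLayout<Int32>.size
    if let base = decBuffer.mBuffers.mData?.assumingMemoryBound(to: Int32.self) {
        let samples = UnsafeBufferPointer(start: base, count: sampleCount)
        print("decoded \(sampleCount) samples: \(Array(samples.prefix(16)))")
    }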


Again, thank you very much for your help here.

Hi,

I am facing the same issue right now. Have you found a solution for this?

Best regards
