ShazamKit constantly fails without an error

Following up on this question, I am now able to use ShazamKit in my app, but for every video I pass to it, session(_:didNotFindMatchFor:error:) is always called:

func session(_ session: SHSession,
             didNotFindMatchFor signature: SHSignature,
             error: Error?) {
    DispatchQueue.main.async {
        print("match error: \(String(describing: error))")
    }
}

with the following output:

match error: nil

This happens even with videos that are just screen-recorded clips of Ed Sheeran, for example, something that Shazam would easily recognize.

Any idea why this would happen? What should I investigate?

(The full code is also updated on my GitHub.)

Replies

This could be caused by the way the audio file is converted into PCM buffers. There is an example code snippet posted on this thread showing how to convert an AVAudioFile into an AVAudioPCMBuffer.
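For reference, a minimal sketch of reading an entire AVAudioFile into a single AVAudioPCMBuffer. The function name and error handling are illustrative, not part of the original project:

```swift
import AVFoundation

// Hypothetical helper: read a whole audio file into one PCM buffer
// in the file's processing format.
func loadPCMBuffer(from url: URL) throws -> AVAudioPCMBuffer {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "PCMConversion", code: -1)
    }
    // Fills the buffer and sets its frameLength to the number of frames read.
    try file.read(into: buffer)
    return buffer
}
```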

  • Hey, thanks for the reply. Unfortunately, my source isn't an AVAudioFile but an AVAsset (created from a URL), since I am getting the video from the media picker. I did try writing the audio track to a temporary audio file and then opening it as an AVAudioFile (and from there using the conversion you mentioned), but I still consistently get the same result. Any idea?

    P.S. The code that gets the video URL, does all of the processing, and uses ShazamKit for identification is found here; I would be glad if you could take a look.
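As an alternative to the temporary-file route, audio sample buffers can be read directly from the AVAsset with AVAssetReader. This is a sketch under the assumption that the first audio track is the one wanted; the handler closure is a placeholder for the downstream conversion:

```swift
import AVFoundation

// Read decoded Linear PCM sample buffers straight from an AVAsset's
// audio track, avoiding an intermediate temporary file.
func readAudioSamples(from asset: AVAsset,
                      handler: (CMSampleBuffer) -> Void) throws {
    // Assumption: the first audio track is the one we want to match.
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        AVFormatIDKey: kAudioFormatLinearPCM  // decode to PCM for later conversion
    ])
    reader.add(output)
    reader.startReading()

    // Pull sample buffers until the track is exhausted.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        handler(sampleBuffer)
    }
}
```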


Hi LGariv,

Sorry for the delay in getting back to you, but you do indeed have a conversion error in your project. I have updated the convertBuffer function from your project, and I am now getting matches.

Firstly, you had an incorrect assumption about the bytes per frame. I fixed this by reading it from the ASBD (AudioStreamBasicDescription):

let frameLength = AVAudioFrameCount(mBuffers.mDataByteSize / asbd.mBytesPerFrame)

Then you need to copy the memory to your output buffer; the full method is below. Please use this only as a starting point: it does not handle multiple channels correctly and should only be used as a guide.

func convertBuffer(sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer {
    var asbd = CMSampleBufferGetFormatDescription(sampleBuffer)!.audioStreamBasicDescription!
    var audioBufferList = AudioBufferList()
    var blockBuffer: CMBlockBuffer?

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: 0,
        blockBufferOut: &blockBuffer
    )

    let mBuffers = audioBufferList.mBuffers
    // Derive the frame count from the stream description instead of
    // assuming a fixed number of bytes per frame.
    let frameLength = AVAudioFrameCount(mBuffers.mDataByteSize / asbd.mBytesPerFrame)
    let pcmBuffer = AVAudioPCMBuffer(pcmFormat: AVAudioFormat(streamDescription: &asbd)!,
                                     frameCapacity: frameLength)!
    pcmBuffer.frameLength = frameLength

    // Copy the sample data into the output buffer and record its size.
    memcpy(pcmBuffer.mutableAudioBufferList.pointee.mBuffers.mData,
           audioBufferList.mBuffers.mData,
           Int(audioBufferList.mBuffers.mDataByteSize))
    pcmBuffer.mutableAudioBufferList.pointee.mBuffers.mDataByteSize = audioBufferList.mBuffers.mDataByteSize

    return pcmBuffer
}
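For completeness, a hedged sketch of how a converted buffer might then be streamed into ShazamKit. The function name is hypothetical; only convertBuffer and the SHSession API come from this thread:

```swift
import AVFoundation
import ShazamKit

// Hypothetical usage: stream each converted sample buffer into an
// SHSession whose delegate receives match / no-match callbacks.
func match(sampleBuffer: CMSampleBuffer, using session: SHSession) {
    let pcmBuffer = convertBuffer(sampleBuffer: sampleBuffer)
    // Passing nil for the time appends this buffer directly after the
    // previously submitted one in the streaming timeline.
    session.matchStreamingBuffer(pcmBuffer, at: nil)
}
```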

Lastly, cool app, keep up the good work!