AVFoundation captureOutput didOutputSampleBuffer Delay

I am using AVFoundation's captureOutput didOutputSampleBuffer delegate method to extract an image, which is later used by a filter.


// Serial background queue for sample buffer delivery.
self.bufferFrameQueue = DispatchQueue(label: "bufferFrame queue",
                                      qos: .background,
                                      attributes: [],
                                      autoreleaseFrequency: .inherit)

self.videoDataOutput = AVCaptureVideoDataOutput()
if let videoDataOutput = self.videoDataOutput, self.session.canAddOutput(videoDataOutput) {
    // Configure the output, then add it to the session.
    videoDataOutput.alwaysDiscardsLateVideoFrames = true
    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput.setSampleBufferDelegate(self, queue: self.bufferFrameQueue)
    self.session.addOutput(videoDataOutput)
}


func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    connection.videoOrientation = .portrait

    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)

    // Keep only the most recent frame; it is read later on the main queue.
    DispatchQueue.main.async {
        self.cameraBufferImage = ciImage
    }
}


The above simply updates self.cameraBufferImage whenever a new output sample buffer arrives.
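To verify how often that actually happens, I could log each buffer's presentation timestamp inside the delegate. This is a diagnostic sketch only, not part of my actual code:

    // Diagnostic only: print how far apart sample buffers arrive.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    print("sample buffer at \(CMTimeGetSeconds(pts)) s")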


Then, when a filter button is pressed, I use self.cameraBufferImage like this:


func filterButtonPressed() {

    if let inputImage = self.cameraBufferImage {

        if let currentFilter = CIFilter(name: "CISepiaTone") {
            currentFilter.setValue(inputImage, forKey: kCIInputImageKey)
            currentFilter.setValue(1, forKey: kCIInputIntensityKey)
            if let output = currentFilter.outputImage {
                // Render the filtered output with the shared CIContext.
                if let cgimg = self.context.createCGImage(output, from: inputImage.extent) {

                    self.filterImageLayer = CALayer()
                    self.filterImageLayer!.frame = self.imagePreviewView.bounds
                    self.filterImageLayer!.contents = cgimg
                    self.filterImageLayer!.contentsGravity = kCAGravityResizeAspectFill
                    self.imagePreviewView.layer.addSublayer(self.filterImageLayer!)

                }
            }
        }
    }
}


When the above method is invoked, it grabs the 'current' self.cameraBufferImage and uses it to apply the filter. This works fine at normal exposure durations (around 1/15 second or faster).


Issue


When the exposure duration is slow, e.g. 1/3 second, it takes a while (about 1/3 second) to apply the filter. This delay only occurs the first time after launch; if done again, there is no delay at all.
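To quantify where the time goes, I could time just the Core Image render inside filterButtonPressed. This is a measurement sketch only, reusing the names from the code above:

    // Diagnostic only: measure how long the render itself takes.
    let start = CACurrentMediaTime()
    let cgimg = self.context.createCGImage(output, from: inputImage.extent)
    print("createCGImage took \(CACurrentMediaTime() - start) s")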


Thoughts

I understand that if the exposure duration is 1/3 second, didOutputSampleBuffer only delivers a new buffer every 1/3 second. But why the initial delay? Shouldn't it just grab whatever self.cameraBufferImage is available at that exact moment, instead of waiting?


  1. Queue/Thread issue? (see the sketch after this list)
  2. CMSampleBuffer retain issue? (Although in Swift 3 there is no explicit CFRetain.)
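For point 1, one variation I could test is keeping the latest image on the delegate queue itself and reading it synchronously when the button is pressed, so neither side depends on a main-queue hop. This is just a sketch of the idea, using the same property names as above:

    // In captureOutput: store directly on bufferFrameQueue instead of bouncing to main.
    self.cameraBufferImage = ciImage

    // In filterButtonPressed: pull the latest frame synchronously from the same serial queue.
    var latestImage: CIImage?
    self.bufferFrameQueue.sync {
        latestImage = self.cameraBufferImage
    }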