Core Image very slow or completely fails to render when using CIImages from OpenGL textures

I am using [CIImage imageWithTexture:size:flipped:colorSpace:] to create CIImage objects, which are then fed as inputs into Core Image filters, and the result is rendered using an OpenGL-based CIContext with a NULL working color space. This works fine on iOS 9, but on iOS 10 beta 8 the rendering is significantly slower or fails completely. When it fails completely, the output is black and there is a console message like the following:

Failed to render 750000 pixels because a CIKernel's ROI function did not allow tiling.


This failure happens with built-in Core Image filters such as CIBlendWithMask as well as with my own filters that use custom CIKernels. The failure does not occur when the input CIImage objects are created from CGImage or CVPixelBuffer objects. It fails more often when the input textures are large (long dimension around 4000 pixels, but less than 4096) and when the CIFilter takes multiple images as input.
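For reference, the path the post describes as not failing, creating the CIImage from a CVPixelBuffer rather than a GL texture, can be sketched as follows. The buffer dimensions and attributes here are illustrative assumptions, not taken from the original code:

```swift
import CoreImage
import CoreVideo

// Create a BGRA pixel buffer (the 640x480 size is an illustrative assumption).
var pixelBuffer: CVPixelBuffer?
let attrs: [CFString: Any] = [kCVPixelBufferCGImageCompatibilityKey: true]
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 640, 480,
                                 kCVPixelFormatType_32BGRA,
                                 attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == kCVReturnSuccess)

// A pixel-buffer-backed CIImage; per the report above, images created this
// way do not hit the ROI/tiling failure that texture-backed images do.
let image = CIImage(cvPixelBuffer: pixelBuffer!)
```

The resulting image's extent matches the buffer size, so it can be fed into the same filter graph in place of the texture-backed input.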


If I build exactly the same code using Xcode 7 and the iOS 9 SDK, it runs fine on the same iOS 10 device (an iPhone 6s).


Anyone else run into this issue? Any viable workaround?

My users are getting this error when processing some (but not all) photos on the iOS 10 release. Have you found an answer?

I'm also running into this issue with a simple crop filter:



let cropFilter: CIFilter = CIFilter(name: "CICrop")!
let cropRect: CIVector = CIVector(cgRect: CGRect(x: 0, y: 150, width: 600, height: 100))
cropFilter.setValue(scaled, forKey: kCIInputImageKey)
cropFilter.setValue(cropRect, forKey: "inputRectangle")
let cropped: CIImage = cropFilter.outputImage!


where the image is created from a pixel buffer from AVCaptureOutput.
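As an aside, the same crop can be expressed without CICrop at all: in Swift 3, CIImage has a cropping(to:) method (renamed cropped(to:) in later SDKs). A minimal sketch, using a solid-color image as a stand-in for the captured frame since the original `scaled` input isn't shown:

```swift
import CoreImage

// Stand-in input image (a red field cropped to a finite 600x400 extent).
let base = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
let scaled = base.cropping(to: CGRect(x: 0, y: 0, width: 600, height: 400))

// Equivalent to the CICrop filter above: intersect the image with the rect.
let cropRect = CGRect(x: 0, y: 150, width: 600, height: 100)
let cropped = scaled.cropping(to: cropRect)
```

Whether this avoids the ROI/tiling failure on texture-backed inputs is untested here; it simply removes the filter-key plumbing from the snippet above.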
