Adding a distortion effect to video

My app displays video on iOS, currently using AVSampleBufferDisplayLayer. Before the video is shown on screen I need to apply a barrel distortion effect to it. How can I do this efficiently? There is no easy way to manipulate CALayer contents without a significant performance penalty. The requirement is 1920x1080 video at 60 frames per second.


The first alternative was to use VTDecompressionSession instead of AVSampleBufferDisplayLayer and apply the distortion with OpenGL shaders. But reading frames back from the GPU appears to be quite slow: decoding and reading back the YUV data takes 20-25 milliseconds per frame.


The second alternative was software video decoding followed by OpenGL shaders. This is actually faster than using VTDecompressionSession, but it would drain the battery very quickly.


The third alternative would be to apply a CIFilter to the CALayer, but according to the documentation this is not supported on iOS.


The fourth approach would be to render offscreen using UIView's renderInContext: method. I haven't tried this yet, but it is probably as slow as reading back the YUV data. Is there a better way to manipulate CALayer contents, or some other way to distort the image with good performance? It looks like reading data back from the GPU is slow and should be avoided.

None of those are good solutions.


You want to use a sampleBufferDelegate for your capture session, or an AVPlayerItemVideoOutput: create a CVOpenGLESTextureCache, submit the CVPixelBufferRef to the texture cache to get a CVOpenGLESTextureRef, and use that to draw directly in OpenGL via a bound shader, etc.

Thanks for the reply, but I'm afraid those solutions are not available to me. I should have explained in more detail: I'm receiving compressed video frames over the network in real time, using a proprietary protocol.

Ok, but you can still decompress your buffer yourself via ************ / VTDecompressionSession, get the uncompressed pixel buffer out, and send it to the texture cache.

Yes, but reading back the frames is very slow; it can take 20-25 milliseconds. I'm trying to minimize latency and would like a more efficient method if possible. I wonder what the first option you recommended was; it was censored for some reason 😕

I don't follow. Why do you need to read back?


Network -> data -> VTDecompressionSession -> Pixel Buffer -> Texture Cache -> Texture -> OpenGL Shader -> Screen.

VTDecompressionSession reads frames back from the GPU to the CPU after decoding. This takes 20-25 milliseconds for 1080p video. When using AVSampleBufferDisplayLayer, frames are displayed directly after decoding and there is noticeably less delay in the video.
