Core Image


Use built-in or custom filters to process still and video images using Core Image.

Core Image Documentation

Posts under Core Image tag

50 Posts
Post not yet marked as solved
0 Replies
489 Views
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have some doubts, listed below. Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what are the possible max and min values of these samples in either case? I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as-is, normalised to [0, 1]. But then the values may be non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader. In short, how do you safely process HDR pixel buffers received from the camera, which are 10-bit YCbCr 4:2:0 and, I believe, already gamma corrected (so the Y in YCbCr is actually Y')? Can the AVFoundation team clarify this?
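For context, this is the minimal context configuration I am experimenting with; choosing extendedLinearITUR_2020 as the working space and RGBAh as the working format is my own assumption about how to keep kernel samples linear and unclipped, not confirmed guidance.

import CoreImage
import CoreGraphics

// Sketch (assumption): keep color management on, use an extended linear
// wide-gamut working space so kernel samples arrive linear, and a half-float
// working format so HDR values above 1.0 are not clipped.
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
let hdrContext = CIContext(options: [
    .workingColorSpace: workingSpace,
    .workingFormat: CIFormat.RGBAh
])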
Post not yet marked as solved
0 Replies
453 Views
I understand that, by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means that the color values received in (or sampled from a sampler in) a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]

do we receive the color values as they exist in the input texture (which may already have gamma correction applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
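For what it's worth, the fallback I am considering if I keep color management enabled is to tag the input explicitly so Core Image linearizes it before my kernel samples it; whether this is the intended approach is an assumption on my part.

import CoreImage
import CoreGraphics

// Sketch (assumption): explicitly match an sRGB-encoded input into the
// (linear) working space before the kernel runs.
func linearized(_ inputImage: CIImage) -> CIImage {
    let srgb = CGColorSpace(name: CGColorSpace.sRGB)!
    return inputImage.matchedToWorkingSpace(from: srgb) ?? inputImage
}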
Post not yet marked as solved
0 Replies
436 Views
I have set up AVCaptureVideoDataOutput to deliver 10-bit 4:2:0 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation.

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
/* srcImage is created from the sample buffer received from the video data output */
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

I then set the color attachments on dstPixelBuffer according to the colorProfile selected in the app settings (BT.709 or BT.2020).

switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}

These pixel buffers are then vended to an AVAssetWriter whose videoSettings is set to the settings recommended by the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
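For reference, the variation I am testing is to pass render an output color space that matches the attachments I set on dstPixelBuffer, instead of the source image's color space; whether that is the cause of the washed-out look is an assumption on my part (the variable names are the ones from the code above).

// Sketch (assumption): render into the same color space that the destination
// buffer's attachments advertise.
let outputSpace: CGColorSpace
switch colorProfile {
case .BT709:
    outputSpace = CGColorSpace(name: CGColorSpace.itur_709)!
case .HLG2100:
    outputSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
}
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: outputSpace)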
Post not yet marked as solved
0 Replies
361 Views
I have two CIContexts configured with the following options:

let options1: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false, CIContextOption.outputColorSpace: NSNull(), CIContextOption.workingColorSpace: NSNull()]
let options2: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false]

And an MTKView whose CAMetalLayer is configured for HDR output:

metalLayer = self.layer as? CAMetalLayer
metalLayer?.wantsExtendedDynamicRangeContent = true
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
colorPixelFormat = .bgr10a2Unorm

The two context configurations produce different outputs when the input is BT.2020 pixel buffers, but I believe the outputs shouldn't differ: the first option simply disables color management, while the second performs intermediate calculations in the extended linear sRGB color space and then converts those buffers to the BT.2020 color space on output.
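As a third data point, I am also trying a context with an explicit output color space that matches the layer; treat this configuration as an assumption on my part rather than a known-correct setup.

import CoreImage
import CoreGraphics

// Sketch (assumption): keep color management on, but pin the output space to
// the HLG space the CAMetalLayer is configured with.
let hlgSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
let options3: [CIContextOption: Any] = [
    CIContextOption.cacheIntermediates: false,
    CIContextOption.outputColorSpace: hlgSpace
]
let context3 = CIContext(options: options3)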
Post marked as solved
2 Replies
602 Views
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:

class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return FilterTwo.kernel.apply(extent: inputImage.extent,
                                      roiCallback: { (index, rect) in return rect },
                                      arguments: [inputImage])
    }
}

Here is the Metal code:

using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}

And here is the usage:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}

And the output:

StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …}
Kernels []
reflect Function 'secondFilter' does not exist.
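To double-check what actually made it into default.metallib independently of Core Image, I am also dumping the plain Metal function names; this is just a diagnostic sketch.

import Foundation
import Metal

// Diagnostic sketch: list every function compiled into the app's default
// Metal library and compare against the kernel names Core Image reports.
if let device = MTLCreateSystemDefaultDevice(),
   let library = try? device.makeDefaultLibrary(bundle: .main) {
    NSLog("Metal functions in default library: \(library.functionNames)")
}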
Post not yet marked as solved
1 Reply
603 Views
I have imported two Metal files and defined two stitchable Metal Core Image kernels, one being a CIColorKernel and the other a CIKernel. As outlined in the WWDC video, I need to add the flag -framework CoreImage to the Other Metal Linker Flags build setting. Unfortunately, Xcode 15 puts double quotes around it and generates an error: metal: error: unknown argument: '-framework CoreImage'. So I built without this flag, and it works for the first kernel that was added. The other kernel is never added to default.metallib and fails to load. How do I get it working?

class SobelEdgeFilterHDR: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "sobelEdgeFilterHDR", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return SobelEdgeFilterHDR.kernel.apply(extent: inputImage.extent,
                                               roiCallback: { (index, rect) in return rect },
                                               arguments: [inputImage])
    }
}
Post not yet marked as solved
0 Replies
412 Views
I just noticed that +[CIPlugIn loadAllPlugIns] has been deprecated since 10.15 (OK, I'm a little slow), along with all the other CIPlugIn methods except for loadNonExecutablePlugIns. I don't see any documentation about what I'm supposed to do instead. What's the deal?
Post not yet marked as solved
0 Replies
517 Views
I am currently using Core Image to process YCbCr 4:2:2 / 4:2:0 10-bit pixel buffers, but it lacks performance at high frame rates, so I decided to switch to Metal. But with Metal I am getting even worse performance. I load both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:

let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}

And here is the shader code:

vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]])
{
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}

Is there anything fundamentally wrong in the code that makes it slow?
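Before concluding that Metal itself is the bottleneck, I am also timing the GPU work of this command buffer; gpuStartTime and gpuEndTime are MTLCommandBuffer properties, and the handler has to be added before the existing commit() call. Treat this as a diagnostic sketch.

// Diagnostic sketch: measure how long the GPU actually spends on this
// command buffer, to separate GPU cost from CPU-side encoding overhead.
commandBuffer.addCompletedHandler { completed in
    let gpuMilliseconds = (completed.gpuEndTime - completed.gpuStartTime) * 1000
    print("GPU time: \(gpuMilliseconds) ms")
}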
Post not yet marked as solved
0 Replies
549 Views
I have written and used code to get colors from a CGImage, and it worked fine up to iOS 16. However, when I run the same code on iOS 17, the Red and Blue channels of RGB are reversed. Is this a temporary bug in the OS that will be fixed in the future? Or has the behavior changed so that it will remain this way from iOS 17 onward? Here is my code:

let pixelDataByteSize = 4
guard let cfData = image.cgImage?.dataProvider?.data else { return }
let pointer: UnsafePointer<UInt8> = CFDataGetBytePtr(cfData)
let scale = UIScreen.main.nativeScale
let address = (Int(image.size.width * scale) * Int(image.size.height * scale / 2) + Int(image.size.width * scale / 2)) * pixelDataByteSize
let r = CGFloat(pointer[address]) / 255
let g = CGFloat(pointer[address + 1]) / 255
let b = CGFloat(pointer[address + 2]) / 255
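The workaround I am leaning towards is to stop assuming the byte order of the CGImage's data provider altogether and instead redraw into a bitmap context with a known RGBA layout before reading bytes; the function below is a sketch under that assumption.

import CoreGraphics

// Sketch (assumption): redraw into an sRGB RGBA8 context so the byte order is
// known, then read pixels using the context's own bytesPerRow.
func rgbaBytes(from cgImage: CGImage) -> (bytes: [UInt8], bytesPerRow: Int)? {
    guard let space = CGColorSpace(name: CGColorSpace.sRGB),
          let context = CGContext(data: nil,
                                  width: cgImage.width,
                                  height: cgImage.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0, // let Core Graphics choose
                                  space: space,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    guard let base = context.data else { return nil }
    let count = context.bytesPerRow * cgImage.height
    let pointer = base.bindMemory(to: UInt8.self, capacity: count)
    return (Array(UnsafeBufferPointer(start: pointer, count: count)), context.bytesPerRow)
}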
Post not yet marked as solved
0 Replies
480 Views
macOS 14 Sonoma CIFilter issue (Designed for iPad). I just updated my M1 Mac to macOS 14 Sonoma. I ran the iOS/iPadOS app, built with Xcode 15.0 and installed via TestFlight, on the Mac with the updated OS. (My app is Designed for iPad and is set to run on macOS.) When a UIImage is processed with a CIFilter, the generated image is sometimes grayed out. The same issue does not occur on iOS and iPadOS; in other words, it does not occur on iPhone or iPad running iOS/iPadOS 17. Has anyone experienced a similar issue? I would like to know the solution as soon as possible. I am distributing the app as Designed for iPad, and every year when macOS is updated an issue arises, so I am considering not allowing Vision Pro to run the current iOS/iPadOS app.
Post not yet marked as solved
2 Replies
729 Views
Hello All, I am trying to compress a PNG image by applying PNG filters (Sub, Up, Average, Paeth). I apply the filters using the kCGImagePropertyPNGCompressionFilter property, but there is no change in the resulting images after trying any of the filters. What is the issue here? Can someone help me with this? Do I have to compress the image data after applying the filter? If yes, how do I do that? Here is my source code:

CGImageDestinationRef outImageDestRef = NULL;
long keyCounter = kzero;
CFStringRef dstImageFormatStrRef = NULL;
CFMutableDataRef destDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
Handle srcHndl = // source image handle;
ImageTypes srcImageType = // 'JPEG', 'PNGf', etc.;
CGImageRef inImageRef = CreateCGImageFromHandle(srcHndl, srcImageType);
if (inImageRef) {
    CFTypeRef keys[4] = {nil};
    CFTypeRef values[4] = {nil};
    dstImageFormatStrRef = CFSTR("public.png");
    long png_filter = IMAGEIO_PNG_FILTER_SUB; // or IMAGEIO_PNG_FILTER_UP, IMAGEIO_PNG_FILTER_AVG, IMAGEIO_PNG_FILTER_PAETH — one of these at a time
    keys[keyCounter] = kCGImagePropertyPNGCompressionFilter;
    values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &png_filter);
    keyCounter++;
    outImageDestRef = CGImageDestinationCreateWithData(destDataRef, dstImageFormatStrRef, 1, NULL);
    if (outImageDestRef) {
        // keys[keyCounter] = kCGImagePropertyDPIWidth;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        //
        // keys[keyCounter] = kCGImagePropertyDPIHeight;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, keyCounter, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CGImageDestinationAddImage(outImageDestRef, inImageRef, options);
        CFRelease(options);
        status = CGImageDestinationFinalize(outImageDestRef);
        if (status == true) {
            UInt8 *destImagePtr = CFDataGetMutableBytePtr(destDataRef);
            destSize = CFDataGetLength(destDataRef);
            // using destImagePtr after this ...
        }
        CFRelease(outImageDestRef);
    }
    for (long cnt = kzero; cnt < keyCounter; cnt++)
        if (values[cnt]) CFRelease(values[cnt]);
    if (inImageRef) CGImageRelease(inImageRef);
}
Post not yet marked as solved
3 Replies
1.1k Views
Hello! After the recent WWDC 2023 talk about HDR support, and after finding the documentation page on applying the Apple HDR effect to photos, I became very interested in the HDR Gain Map format. From the documentation page it is clear how we can restore the original HDR from the SDR and Gain Map representation, but my question is: how can we convert from HDR back to the SDR + Gain Map representation? As I understand it right now, conversion from HDR to SDR + Gain Map involves two steps:
1. Tone map the HDR image to get a correct SDR rendition.
2. With both HDR and SDR available, calculate the Gain Map from the equation in the documentation page.
Am I correct? If so, what tone mapping algorithm is used right now for the HDR -> SDR conversion? I can't find any information about this on the internet. I would be very grateful for your response!
Post not yet marked as solved
1 Reply
659 Views
When initializing a CIColor with a dynamic UIColor (like the system colors that resolve differently based on light/dark mode) on macOS 14 (Mac Catalyst), the resulting CIColor is invalid/uninitialized. For instance:

po CIColor(color: UIColor.systemGray2)
→ <uninitialized>

po CIColor(color: UIColor.systemGray2.resolvedColor(with: .current))
→ <CIColor 0x60000339afd0 (0.388235 0.388235 0.4 1) ExtendedSRGB>

But also, not all colors work even when resolved:

po CIColor(color: UIColor.systemGray.resolvedColor(with: .current))
→ <uninitialized>

I think this is caused by the color space of the resulting UIColor:

po UIColor.systemGray.resolvedColor(with: .current)
→ kCGColorSpaceModelRGB 0.596078 0.596078 0.615686 1

po UIColor.systemGray2.resolvedColor(with: .current)
→ UIExtendedSRGBColorSpace 0.388235 0.388235 0.4 1

This worked correctly before in macOS 13.
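The workaround I am experimenting with is to convert the resolved color into extended sRGB myself before handing it to CIColor; whether this is the right fix (rather than a Catalyst bug) is an assumption on my part.

import UIKit
import CoreImage

// Sketch (assumption): resolve the dynamic color, convert its CGColor into
// extended sRGB, and build the CIColor from that.
func ciColor(from dynamicColor: UIColor) -> CIColor? {
    let resolved = dynamicColor.resolvedColor(with: .current)
    guard let extendedSRGB = CGColorSpace(name: CGColorSpace.extendedSRGB),
          let converted = resolved.cgColor.converted(to: extendedSRGB,
                                                     intent: .defaultIntent,
                                                     options: nil) else { return nil }
    return CIColor(cgColor: converted)
}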
Post not yet marked as solved
1 Reply
607 Views
I'm trying to create a sky mask on pictures taken with my iPhone. I've seen in the documentation that Core Image supports semantic segmentation for sky, among other types for persons (skin, hair, etc.). For now, I haven't found the proper workflow to use it. First, I watched https://developer.apple.com/videos/play/wwdc2019/225/ and understood that images must be captured with segmentation enabled, with this kind of code:

photoSettings.enabledSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes
photoSettings.embedsSemanticSegmentationMattesInPhoto = true

I capture the image on my iPhone and save it in HEIC format, then later I try to load the matte like this:

let skyMatte = CIImage(contentsOf: imageURL, options: [.auxiliarySemanticSegmentationSkyMatte: true])

Unfortunately, self.photoOutput.availableSemanticSegmentationMatteTypes always gives me a list of person-related types only, and never a sky type. For that matter, AVSemanticSegmentationMatte.MatteType only offers [Hair, Skin, Teeth, Glasses] ... no Sky! So, how am I supposed to use semanticSegmentationSkyMatteImage? Is there any simple workaround?
Post not yet marked as solved
2 Replies
801 Views
I have written two custom Core Image Metal kernels which I'm using to produce a CIImage (by chaining several filters). I'm drawing the output image in a simple view, and whatever I use (CIImage, NSImage, CGImageRef), the image appears corrupted on screen, like some sort of graphics corruption (I've tried on two different machines with different systems). However, if I add a step to write the image to disk from the CIImage, then read it back from disk and draw it in that very same view, all is fine and the image appears correctly. What could possibly be happening here?
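To narrow it down, the fallback I am considering is to render explicitly through a CIContext into a CGImage and draw that, instead of handing the CIImage (or an NSImage wrapping it) to the view directly; the choice of RGBA8 and sRGB below is an assumption.

import CoreImage
import CoreGraphics

// Sketch (assumption): force a full render into a fixed format and color space
// and draw the resulting CGImage instead of the lazy CIImage.
func renderedCGImage(from ciImage: CIImage) -> CGImage? {
    let context = CIContext()
    return context.createCGImage(ciImage,
                                 from: ciImage.extent,
                                 format: .RGBA8,
                                 colorSpace: CGColorSpace(name: CGColorSpace.sRGB))
}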
Post not yet marked as solved
0 Replies
714 Views
When using the heif10Representation and writeHEIF10Representation APIs of CIContext, the resulting image doesn’t contain an alpha channel. When using the heifRepresentation and writeHEIFRepresentation APIs, the alpha channel is properly preserved, i.e., the resulting HEIC will contain a urn:mpeg:hevc:2015:auxid:1 auxiliary image. This image is missing when exporting as HEIF10. Is this a bug or is this intentional? If I understand the spec correctly, HEIF10 should be able to support alpha via auxiliary image (like HEIF8).
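For reference, this is the comparison I am running between the two export paths; image, heif8URL, and heif10URL are placeholders, and Display P3 is just an example color space.

import Foundation
import CoreImage
import CoreGraphics

// Sketch: export the same CIImage once as 8-bit HEIF and once as 10-bit HEIF,
// then inspect which output still carries the alpha auxiliary image.
func exportBothHEIFVariants(of image: CIImage, heif8URL: URL, heif10URL: URL) throws {
    let context = CIContext()
    let displayP3 = CGColorSpace(name: CGColorSpace.displayP3)!
    try context.writeHEIFRepresentation(of: image, to: heif8URL, format: .RGBA8, colorSpace: displayP3)
    try context.writeHEIF10Representation(of: image, to: heif10URL, colorSpace: displayP3)
}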
Post not yet marked as solved
4 Replies
1.2k Views
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)
var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know if I am doing anything wrong and what could be done to lower CPU usage in Core Image.
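One change I am experimenting with to reduce CPU blocking is CIRenderDestination, which queues the render as a task instead of blocking inside render(_:to:bounds:colorSpace:); whether it actually lowers CPU usage for this workload is an assumption on my part. The variable names are the ones from the code above.

// Sketch (assumption): hand the render off as a task rather than blocking.
let destination = CIRenderDestination(pixelBuffer: dstPixelBuffer!)
destination.colorSpace = srcImage.colorSpace
let task = try? _ciContext.startTask(toRender: dstImage, to: destination)
// Wait only where the result is actually needed:
// try? task?.waitUntilCompleted()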
Post not yet marked as solved
0 Replies
643 Views
I am currently working on a SwiftUI video app. When I load a slow-motion video recorded at 240 fps (239.68), I use asset.loadTracks and then .load(.nominalFrameRate), which returns 30 fps (29.xx), asset being an AVAsset(url:). The duration in asset.load(.duration) is also 8 times longer than the original duration. Do you know how to get the 239.68 that is displayed in the Apple Photos app? Is it stored somewhere in the video metadata, or is it computed?
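In case it clarifies what I am measuring, this is the kind of check I am running; videoURL is a placeholder, and comparing nominalFrameRate with a rate derived from minFrameDuration is just my own diagnostic idea.

import AVFoundation

// Diagnostic sketch: compare the video track's nominalFrameRate with the rate
// implied by its minimum frame duration.
func inspectFrameRate(of videoURL: URL) async throws {
    let asset = AVURLAsset(url: videoURL)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    let nominal = try await track.load(.nominalFrameRate)
    let minFrameDuration = try await track.load(.minFrameDuration)
    print("nominalFrameRate: \(nominal), 1/minFrameDuration: \(1.0 / minFrameDuration.seconds)")
}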