Core Image


Use built-in or custom filters to process still and video images using Core Image.

Core Image Documentation

Posts under Core Image tag

68 Posts
Post marked as solved
4 Replies
255 Views
I want to use CIFilter to create a CGImageRef, but when I read the CGImage's data buffer, it is empty.

CIFilter<CITextImageGenerator> *filter = [CIFilter textImageGeneratorFilter];
filter.text = @"This is a test text";
filter.fontName = @"HoeflerText-Regula";
filter.fontSize = 12;
filter.scaleFactor = 1.0;
CIImage *image = filter.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef resultRef = [context createCGImage:image fromRect:image.extent];
UIImage *resultImage = [UIImage imageWithCGImage:resultRef];

CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(resultRef));
const unsigned char *buffer = CFDataGetBytePtr(data);

And then I cannot generate an MTLTexture from this CGImage:

MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:self.device];
NSError *error;
id<MTLTexture> fontTexture = [loader newTextureWithCGImage:resultRef
                                                   options:@{
    MTKTextureLoaderOptionOrigin : MTKTextureLoaderOriginFlippedVertically,
    MTKTextureLoaderOptionSRGB : @(NO)
}
                                                     error:&error];

How can I finish my work? Any suggestions would be appreciated.
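A possible way around the CGImage round trip, sketched here in Swift under the assumption that a Metal device and command queue are already set up, is to let Core Image render straight into a Metal texture; the pixel format and color space below are assumptions that may need adjusting for your pipeline:

import CoreImage
import Metal

func makeTexture(from ciImage: CIImage,
                 device: MTLDevice,
                 commandQueue: MTLCommandQueue) -> MTLTexture? {
    // In a real app, create the CIContext once and reuse it.
    let context = CIContext(mtlDevice: device)
    let extent = ciImage.extent.integral
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                              width: Int(extent.width),
                                                              height: Int(extent.height),
                                                              mipmapped: false)
    descriptor.usage = [.shaderRead, .shaderWrite]   // Core Image writes into the texture
    guard let texture = device.makeTexture(descriptor: descriptor),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }
    context.render(ciImage,
                   to: texture,
                   commandBuffer: commandBuffer,
                   bounds: extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    return texture
}

This avoids MTKTextureLoader entirely and never materializes a CGImage backing buffer.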
Posted by JLTG.
Post marked as solved
1 Reply
223 Views
With macOS 13, the CIColorCube and CIColorCubeWithColorSpace filters gained the extrapolate property for supporting EDR content. When setting this property, we observe that the filter's outputImage sometimes (roughly 1 in 3 tries) just returns nil. At other times it "just" causes artifacts to appear when rendering EDR content (see the screenshots: input | correct output | broken output). The artifacts sometimes even appear when extrapolate was not set. This was reproduced on Intel-based and M1 Macs. All of the LUT-based filters in our apps are broken in this way, and we could not find a workaround so far. Does anyone experience the same?
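For reference, a minimal sketch (using the CIFilterBuiltins API) of the kind of setup being described; the LUT data, cube dimension, and color space here are placeholders, not a fix:

import CoreImage
import CoreImage.CIFilterBuiltins

func makeCubeImage(input: CIImage, lutData: Data, dimension: Float) -> CIImage? {
    let filter = CIFilter.colorCubeWithColorSpace()
    filter.inputImage = input
    filter.cubeDimension = dimension          // e.g. 32 for a 32x32x32 LUT
    filter.cubeData = lutData                 // RGBA float values, dimension^3 entries
    filter.colorSpace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
    filter.extrapolate = true                 // the macOS 13 EDR switch discussed above
    return filter.outputImage                 // reportedly nil in roughly 1 of 3 attempts
}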
Post not yet marked as solved
0 Replies
235 Views
I'm running into hard crashes (EXC_BAD_ACCESS) when calling CIContext.writeHEIF10Representation or CIContext.heif10Representation from multiple threads. By contrast, concurrent access to writeHEIFRepresentation works fine. Does anyone know any other way to write a CIImage to 10-bit HEIF? I've tried several alternatives using CVPixelBuffer and MTLTexture without success. While I've filed a report through Feedback Assistant, I'm looking for a workaround. Writing thousands of 10-bit HDR HEIF images on a single thread is an absolute throughput killer, whereas I can write any other image format without concurrency issues. Thanks!
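A hedged stopgap, assuming the crash really is a thread-safety issue confined to that one call: funnel only the 10-bit HEIF writes through a serial queue while keeping the rest of the pipeline parallel. This trades some throughput for stability and is only a sketch (the queue label is hypothetical):

import CoreImage
import Foundation

let heif10WriteQueue = DispatchQueue(label: "heif10-write")

func writeHEIF10Serialized(_ image: CIImage,
                           to url: URL,
                           context: CIContext,
                           colorSpace: CGColorSpace) throws {
    // Never enter writeHEIF10Representation concurrently; everything else stays parallel.
    try heif10WriteQueue.sync {
        try context.writeHEIF10Representation(of: image,
                                              to: url,
                                              colorSpace: colorSpace,
                                              options: [:])
    }
}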
Posted by mallman.
Post not yet marked as solved
2 Replies
481 Views
Hi, my app has been on the App Store for about 3 months. Today I created an update and started testing it via TestFlight. However, the image quality of the PNG files is really bad (even though I haven't changed anything!). There are items in the images that shouldn't be there; to me, it looks like the images were downsampled or something similar. I've included screenshots of how it looks. On my Mac the images are correct, and also in the simulator. It only appears when distributing the app. Might there be something wrong with the created archives? Has anyone else experienced something like this, and how can I fix it? Thanks for your help, Mario. 1st image: how it looks in the simulator. 2nd image: how it looks on the iPhone. Please note how the buttons appear.
Posted by Mario_mh.
Post not yet marked as solved
3 Replies
590 Views
Hello there 👋 I've noticed a different behavior between iOS 15 and iOS 16 when using CIFilter with SpriteKit. Here is a sample where I want to display a text label and render a blurred copy of the same text behind it. Here is the expected behavior (iOS 15): And the broken behavior on iOS 16: It looks like the text is rotated around the x-axis and pushed way too deep. Here is the sample code:

import UIKit
import SpriteKit
import CoreImage

class ViewController: UIViewController {
    var skView: SKView?
    var scene: SKScene?

    override func viewDidLoad() {
        super.viewDidLoad()
        skView = SKView(frame: view.frame)
        scene = SKScene(size: skView?.bounds.size ?? .zero)
        scene?.backgroundColor = UIColor.red
        view.addSubview(skView!)
        skView!.presentScene(scene)

        let neonNode = SKNode()
        let glowNode = SKEffectNode()
        glowNode.shouldEnableEffects = true
        glowNode.shouldRasterize = true
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setValue(20, forKey: kCIInputRadiusKey)
        glowNode.filter = blurFilter
        glowNode.blendMode = .alpha

        let labelNode = SKLabelNode(text: "MOJO")
        labelNode.fontName = "HelveticaNeue-Medium"
        labelNode.fontSize = 60
        let labelNodeCopy = labelNode.copy() as! SKLabelNode

        glowNode.addChild(labelNode)
        neonNode.addChild(glowNode)
        neonNode.addChild(labelNodeCopy)
        neonNode.position = CGPoint(x: 200, y: 200)
        scene?.addChild(neonNode)
    }
}
Post not yet marked as solved
1 Reply
194 Views
The following code does not behave the same way on iOS 16 as it did on earlier iOS versions. The blur effect does not seem to work correctly in iOS 16.

import SpriteKit
import CoreImage

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let shapeNode = SKShapeNode(circleOfRadius: 30)
        shapeNode.fillColor = .green
        shapeNode.strokeColor = .clear
        addChild(shapeNode)

        let blurredShapeNode = SKShapeNode(circleOfRadius: 30)
        blurredShapeNode.fillColor = .red
        blurredShapeNode.strokeColor = .clear

        let effectNode = SKEffectNode()
        addChild(effectNode)
        effectNode.addChild(blurredShapeNode)

        let blurAngle = NSNumber(value: 0)
        effectNode.filter = CIFilter(name: "CIMotionBlur",
                                     parameters: [kCIInputRadiusKey: 30,
                                                  kCIInputAngleKey: blurAngle])
    }
}
Posted by chepiok.
Post not yet marked as solved
0 Replies
250 Views
A few of our users reported that images saved with our apps disappear from their library in Photos after a few seconds. All of them own a Mac with an old version of macOS, and all of them have iCloud syncing enabled for Photos. Our apps use Core Image to process images. Core Image will transfer most of the input's metadata to the output. While we thought this was generally a good idea, this seems to be causing the issue: The old version of Photos (or even iPhoto?) that is running on the Mac seems to think that the output image of our app is a duplicate of the original image that was loaded into our app. As soon as the iCloud sync happens, the Mac removes the image from the library, even when it's in sleep mode. When the Mac is turned off or disconnected from the internet, the images stay in the library—until the Mac comes back online. This seems to be caused by the output's metadata, but we couldn't figure out what fields are causing the old Photos to detect the new image as duplicate. It's also very hard to reproduce without installing an old macOS on some machine. Does anyone know what metadata field we need to change to not be considered a duplicate?
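A hedged sketch of the kind of experiment this suggests: override or drop selected metadata on the output CIImage before writing it. Which keys actually trigger the duplicate detection is exactly the open question, so the keys removed below are placeholders, not a known fix:

import CoreImage
import ImageIO

func strippingSuspectMetadata(from output: CIImage) -> CIImage {
    var properties = output.properties
    // Placeholder candidates; swap in whichever fields turn out to matter.
    properties.removeValue(forKey: kCGImagePropertyExifDictionary as String)
    properties.removeValue(forKey: kCGImagePropertyIPTCDictionary as String)
    return output.settingProperties(properties)
}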
Post not yet marked as solved
1 Reply
305 Views
I'm processing a 4K video with a complex Core Image pipeline that also invokes a neural style transfer Core ML model. This works very well, but sometimes, for very few frames, the model execution fails with the following error messages: Execution of the command buffer was aborted due to an error during execution. Internal Error (0000000e:Internal Error) Error: command buffer exited with error status. The Metal Performance Shaders operations encoded on it may not have completed. Error: (null) Internal Error (0000000e:Internal Error) <CaptureMTLCommandBuffer: 0x280b95d90> -> <AGXG15FamilyCommandBuffer: 0x108f143c0> label = <none> device = <AGXG15Device: 0x106034e00> name = Apple A16 GPU commandQueue = <AGXG15FamilyCommandQueue: 0x1206cee40> label = <none> device = <AGXG15Device: 0x106034e00> name = Apple A16 GPU retainedReferences = 1 [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Internal Error (0000000e:Internal Error); code=1 status=-1 [coreml] Error computing NN outputs -1 [coreml] Failure in -executePlan:error:. It's really hard to reproduce it since it only happens occasionally. I also didn't find a way to access that Internal Error mentioned, so I don't know the real reason why it fails. Any advice would be appreciated!
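Since the failure is sporadic and per-frame, one pragmatic mitigation (a sketch only, with the stylize closure standing in for the actual Core ML + Core Image call) is to retry the affected frame once before giving up:

import CoreImage

func stylizeWithRetry(_ frame: CIImage,
                      attempts: Int = 2,
                      stylize: (CIImage) throws -> CIImage) rethrows -> CIImage {
    // Swallow failures on all but the last attempt.
    for _ in 1..<attempts {
        if let result = try? stylize(frame) { return result }
    }
    // Final attempt propagates the error to the caller.
    return try stylize(frame)
}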
Post not yet marked as solved
0 Replies
462 Views
More and more iOS devices can capture content with high/extended dynamic range (HDR/EDR) now, and even more devices have screens that can display that content properly. Apple also gave us developers the means to correctly display and process this EDR content in our apps on macOS and now also on iOS 16. There are a lot of EDR-related sessions from WWDC 2021 and 2022. However, most of them focus on HDR video rather than images, even though the Camera app captures HDR images by default on many devices. Interestingly, those HDR images seem to use a proprietary format that relies on EXIF metadata and an embedded HDR gain map image for displaying the HDR effect in Photos. Some observations:
- Only Photos will display those metadata-driven HDR images in their proper brightness range. Files, for instance, does not.
- Photos will not display other HDR formats like OpenEXR or HEIC with the BT.2100-PQ color space in their proper brightness.
- When using the PHPicker, it will even automatically tone-map the EDR values of OpenEXR images to SDR. The only way to load those images is to request the original image via PHAsset, which requires photo library access.
And here comes my main point: There is no API that enables us developers to load iPhone HDR images (with metadata and gain map) in a way that decodes image + metadata into EDR pixel values. That means we cannot display and edit those images in our apps the same way Photos does. There are ways to extract and embed the HDR gain maps from/into images using Image I/O APIs. But we don't know the algorithm used to blend the gain map with the image's SDR pixel values to get the EDR result. It would be very helpful to know how decoding and encoding between SDR + gain map and HDR works. Alternatively (or in addition), it would be great if common image loading APIs like Image I/O and Core Image provided APIs to load those images into an EDR image representation (16-bit float linear sRGB with extended values, for example) and to write EDR images into SDR + gain map images so that they are correctly displayed in Photos. Thanks for your consideration! We really want to support HDR content in our image editing apps, but without the proper APIs, we can only guess how image HDR works on iOS.
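For reference, a minimal sketch of the gain-map extraction mentioned above, using Image I/O. This only retrieves the auxiliary data; how to combine it with the SDR pixels is precisely the undocumented part:

import Foundation
import ImageIO

func hdrGainMapInfo(at url: URL) -> [String: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    // Returns a dictionary containing the gain map's pixel data and description, if present.
    return CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0,
                                                     kCGImageAuxiliaryDataTypeHDRGainMap) as? [String: Any]
}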
Post not yet marked as solved
0 Replies
523 Views
Hello, since the release of iOS 16 we see a crash EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000158 when invoking CIContext.startTask(toRender image: CIImage, to destination: CIRenderDestination) in the custom video compositor of our app. The crash happens exclusively on iOS16. The same call runs fine on iOS 15 and lower. By looking at the crashlytics logs, the crash occurs mostly on iPhone12 (~50% of the occurrences). We are not able to reproduce the bug, as it occurs very randomly. Any suggestion on how to fix this? Or is it a regression on the new OS? Stack trace: Thread 7 name: Thread 7 Crashed: 0 AGXMetalG14 0x00000002060e5b2c AGX::ResourceGroupUsage<AGX::G14::Encoders, AGX::G14::Classes, AGX::G14::ObjClasses>::setTexture(AGXG14FamilyTexture const*, ResourceGroupBindingType, unsigned int) + 40 (agxa_texture_template.h:423) 1 AGXMetalG14 0x000000020601d428 -[AGXG14FamilyComputeContext setTexture:atIndex:] + 168 (agxa_compute_template.hpp:3119) 2 CoreImage 0x000000019b5eb048 CIMetalRenderToTextures + 744 (CIMetalUtils.m:1348) 3 CoreImage 0x000000019b6de5a4 CI::MetalContext::compute_quad(unsigned int, CI::MetalMainProgram const*, CGSize const&, void const**, unsigned long, CI::Dimensions, CI::Dimensions) + 864 (context-metal.mm:1206) 4 CoreImage 0x000000019b6df0e4 CI::MetalContext::render_node(CI::TileTask*, CI::ProgramNode*, CGRect const&, CGRect const&, void const**, __IOSurface**, unsigned long) + 1352 (context-metal.mm:1463) 5 CoreImage 0x000000019b6e0208 CI::MetalContext::render_intermediate_node(CI::TileTask*, CI::ProgramNode*, CGRect const&, CI::intermediate_t*, bool, void () block_pointer) + 472 (context-metal.mm:1621) 6 CoreImage 0x000000019b6e340c CI::Context::recursive_render(CI::TileTask*, CI::roiKey const&, CI::Node*, bool) + 3584 (context.cpp:477) 7 CoreImage 0x000000019b6e2bb4 CI::Context::recursive_render(CI::TileTask*, CI::roiKey const&, CI::Node*, bool) + 1448 (context.cpp:402) 8 CoreImage 0x000000019b6e3c78 CI::Context::render(CI::ProgramNode*, CGRect const&) + 160 (context.cpp:535) 9 CoreImage 0x000000019b7502e4 ___ZN2CI23image_render_to_surfaceEPNS_7ContextEPNS_5ImageE6CGRectP11__IOSurfacePKNS_17RenderDestinationE_block_invoke + 72 (render.cpp:2595) 10 CoreImage 0x000000019b754078 CI::recursive_tile(CI::RenderTask*, CI::Context*, CI::RenderDestination const*, char const*, CI::Node*, CGRect const&, CI::PixelFormat, CI::swizzle_info const&, CI::TileTask* (CI::ProgramNode*, CGR... + 4428 (render.cpp:1824) 11 CoreImage 0x000000019b74ebe4 CI::tile_node_graph(CI::Context*, CI::RenderDestination const*, char const*, CI::Node*, CGRect const&, CI::PixelFormat, CI::swizzle_info const&, CI::TileTask* (CI::ProgramNode*, CGRect) block_pointer) + 444 (render.cpp:1929) 12 CoreImage 0x000000019b74fa54 CI::image_render_to_surface(CI::Context*, CI::Image*, CGRect, __IOSurface*, CI::RenderDestination const*) + 1916 (render.cpp:2592) 13 CoreImage 0x000000019b62e7b4 -[CIContext(CIRenderDestination) _startTaskToRender:toDestination:forPrepareRender:forClear:error:] + 2084 (CIRenderDestination.mm:1943) Crash report: report.crash
Posted by dimo94.
Post not yet marked as solved
0 Replies
619 Views
Is this accessible from Swift directly? Visual Look Up, "Lift subject from background": "Lift the subject from an image or isolate the subject by removing the background. This works in Photos, Screenshot, Quick Look, Safari, and more." (Source: macOS Ventura Preview - New Features - Apple.) I see that Shortcuts now has a native Remove Background command that wasn't there in iOS 15 or macOS 12. Is there any way to call that from Swift besides x-callback URL schemes?
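For what it's worth, a hedged sketch using the Vision subject-lifting request that shipped later (iOS 17 / macOS 14, so not available in the Ventura / iOS 16 timeframe of this question):

import Vision
import CoreImage
import CoreVideo

func liftSubject(from image: CIImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(ciImage: image)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    // Returns the foreground instances composited over a transparent background.
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: false)
}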
Post not yet marked as solved
3 Replies
1.5k Views
We are trying to create a custom CIFilter to apply on top of our CALayers. However, only the default CIFilters seem to work on a CALayer. We created a small new project, and in ViewController.swift we added:

import Cocoa
import CoreImage

class ViewController: NSViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Create some layers to work with! (square with gradient color)
        let mainLayer = CALayer()
        let shapeLayer = CAShapeLayer()
        let gradientLayer = CAGradientLayer()
        gradientLayer.colors = [NSColor.red.cgColor, NSColor.white.cgColor, NSColor.yellow.cgColor, NSColor.black.cgColor]
        shapeLayer.path = CGPath(rect: CGRect(x: 0, y: 0, width: 500, height: 500), transform: nil)
        shapeLayer.fillColor = CGColor.black
        gradientLayer.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
        gradientLayer.mask = shapeLayer
        gradientLayer.setAffineTransform(CGAffineTransform(translationX: 50, y: 50))
        mainLayer.addSublayer(gradientLayer)
        mainLayer.filters = []
        self.view.layer?.addSublayer(mainLayer)

        // Register the custom filter
        CustomFilterRegister.register()

        // Test with a normal image file, WORKS!
        // if let image = NSImage(named: "test"), let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) {
        //     if let filter = CIFilter(name: "CustomFilter") {
        //         filter.setValue(CIImage(cgImage: cgImage), forKey: kCIInputImageKey)
        //         let output = filter.outputImage
        //         // WORKS! Image filtered as expected!
        //     }
        // }

        // Does NOT work. No change in color of the layer!
        if let filter = CIFilter(name: "CustomFilter") {
            filter.name = "custom"
            mainLayer.filters?.append(filter)
        }

        // This works: mainLayer and sublayers are blurred!
        // if let filter = CIFilter(name: "CIGaussianBlur") {
        //     filter.name = "blur"
        //     mainLayer.filters?.append(filter)
        // }
    }
}

We created a simple custom CIFilter as a first try before building our real custom filter:

class CustomFilter: CIFilter {
    // Error in Xcode if you don't add this in!
    override class var supportsSecureCoding: Bool { return true }

    @objc dynamic var inputImage: CIImage?
    @objc dynamic var inputSaturation: CGFloat = 1
    @objc dynamic var inputBrightness: CGFloat = 0
    @objc dynamic var inputContrast: CGFloat = 1

    override func setDefaults() {
        inputSaturation = 1
        inputBrightness = 0
        inputContrast = 2
    }

    override public var outputImage: CIImage? {
        guard let image = inputImage else { return nil }
        return image.applyingFilter("CIPhotoEffectProcess")
            .applyingFilter("CIColorControls", parameters: [
                kCIInputSaturationKey: inputSaturation,
                kCIInputBrightnessKey: inputBrightness,
                kCIInputContrastKey: inputContrast
            ])
    }
}

class CustomFilterRegister: CIFilterConstructor {
    static func register() {
        CIFilter.registerName(
            "CustomFilter",
            constructor: CustomFilterRegister(),
            classAttributes: [
                kCIAttributeFilterCategories: [kCICategoryBlur, kCICategoryVideo, kCICategoryStillImage]
            ])
    }

    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "CustomFilter":
            return CustomFilter()
        default:
            return nil
        }
    }
}

In the ViewController we added code to test with a normal image. This DOES work, so the filter itself seems to be fine. We also tried a default CIGaussianBlur, and that does work on the CALayer. We are lost as to what is needed to get a custom CIFilter working with a CALayer and can't seem to find any information on it. Please note that we are NOT looking for this particular type of CIFilter or a different way to get the filter's result. We need a custom CIFilter to work on a CALayer.
Post not yet marked as solved
0 Replies
340 Views
It seems a custom CIFilter does not work as a Core Animation CALayer compositingFilter in recent macOS. It used to work fine, but under macOS 12.5.1 the custom CIFilter's outputImage is never called. Is there any working example code out there that demonstrates the use of a custom CIFilter as a CALayer compositingFilter?
Posted by YoshidaT.
Post not yet marked as solved
0 Replies
326 Views
I am working on an iOS camera app that requires applying some filters to the image after capture to enhance its color scheme. I want to apply a Brilliance filter, just like adjusting the brilliance of an image in the Photos app on iPhone. Is there a way I can implement brilliance with a slider in my iOS app? I have already explored the CIFilter library, and there seems to be no filter available for brilliance.
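Core Image has no built-in brilliance filter, so one hedged approximation is to combine existing filters; the mapping from a single slider value to the parameters below is an assumption, not Apple's formula:

import CoreImage

// amount is expected in -1...1, where 0 means no change.
func applyBrilliance(_ amount: Float, to image: CIImage) -> CIImage {
    return image
        .applyingFilter("CIHighlightShadowAdjust", parameters: [
            "inputShadowAmount": 0.3 * amount,                   // lift shadows
            "inputHighlightAmount": 1.0 - 0.3 * max(amount, 0)   // tame highlights
        ])
        .applyingFilter("CIColorControls", parameters: [
            kCIInputContrastKey: 1.0 + 0.1 * amount              // slight contrast nudge
        ])
}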
Post not yet marked as solved
0 Replies
344 Views
I am trying to iterate over images in the Photo Library and extract faces using CIDetector. The images are required to keep their original resolutions. To do so, I am taking the following steps:

1. Getting assets for a given date interval (usually more than a year):

func loadAssets(from fromDate: Date, to toDate: Date, completion: @escaping ([PHAsset]) -> Void) {
    fetchQueue.async {
        let authStatus = PHPhotoLibrary.authorizationStatus()
        if authStatus == .authorized || authStatus == .limited {
            let options = PHFetchOptions()
            options.predicate = NSPredicate(format: "creationDate >= %@ && creationDate <= %@", fromDate as CVarArg, toDate as CVarArg)
            options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
            let result: PHFetchResult = PHAsset.fetchAssets(with: .image, options: options)
            var _assets = [PHAsset]()
            result.enumerateObjects { object, count, stop in
                _assets.append(object)
            }
            completion(_assets)
        } else {
            completion([])
        }
    }
}

where:

let fetchQueue = DispatchQueue.global(qos: .background)

2. Extracting faces. I then extract face images using:

func detectFaces(in image: UIImage, accuracy: String = CIDetectorAccuracyLow, completion: @escaping ([UIImage]) -> Void) {
    faceDetectionQueue.async {
        var faceImages = [UIImage]()
        let outputImageSize: CGFloat = 200.0 / image.scale
        guard let ciImage = CIImage(image: image),
              let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: [CIDetectorAccuracy: accuracy]) else {
            completion(faceImages); return
        }
        let faces = faceDetector.features(in: ciImage) // Crash happens here
        let group = DispatchGroup()
        for face in faces {
            group.enter()
            if let face = face as? CIFaceFeature {
                let faceBounds = face.bounds
                let offset: CGFloat = floor(min(faceBounds.width, faceBounds.height) * 0.2)
                let inset = UIEdgeInsets(top: -offset, left: -offset, bottom: -offset, right: -offset)
                let rect = faceBounds.inset(by: inset)
                let croppedFaceImage = ciImage.cropped(to: rect)
                let scaledImage = croppedFaceImage
                    .transformed(by: CGAffineTransform(scaleX: outputImageSize / croppedFaceImage.extent.width,
                                                       y: outputImageSize / croppedFaceImage.extent.height))
                faceImages.append(UIImage(ciImage: scaledImage))
                group.leave()
            } else {
                group.leave()
            }
        }
        group.notify(queue: self.faceDetectionQueue) {
            completion(faceImages)
        }
    }
}

where:

private let faceDetectionQueue = DispatchQueue(label: "face detection queue", qos: DispatchQoS.background, attributes: [], autoreleaseFrequency: DispatchQueue.AutoreleaseFrequency.workItem, target: nil)

I use the following extension to get the image from assets:

extension PHAsset {
    var image: UIImage {
        autoreleasepool {
            let manager = PHImageManager.default()
            let options = PHImageRequestOptions()
            var thumbnail = UIImage()
            let rect = CGRect(x: 0, y: 0, width: pixelWidth, height: pixelHeight)
            options.isSynchronous = true
            options.deliveryMode = .highQualityFormat
            options.resizeMode = .exact
            options.normalizedCropRect = rect
            options.isNetworkAccessAllowed = true
            manager.requestImage(for: self, targetSize: rect.size, contentMode: .aspectFit, options: options, resultHandler: { (result, info) -> Void in
                if let result = result {
                    thumbnail = result
                } else {
                    thumbnail = UIImage()
                }
            })
            return thumbnail
        }
    }
}

The code works fine for a few (usually fewer than 50) assets, but for a larger number of images it crashes at:

let faces = faceDetector.features(in: ciImage) // Crash happens here

I get this error:

validateComputeFunctionArguments:858: failed assertion `Compute Function(ciKernelMain): missing sampler binding at index 0 for [0].'

If I reduce the size of the image fed to detectFaces(_:), e.g. to 400 px, I can analyze a few hundred images (usually fewer than 1000), but as I mentioned, using the asset's image at its original size is a requirement. My guess is that it has something to do with a memory issue when I try to extract faces with CIDetector. Any idea what this error is about and how I can fix the issue?
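One hedged direction, assuming the crash comes from exhausting GPU resources by repeatedly creating detectors for full-resolution images: create the CIDetector once, reuse it, and wrap each iteration in an autoreleasepool. This is a sketch, not a confirmed fix:

import CoreImage
import Foundation

final class FaceCropper {
    // Create the detector (and its implicit context) once instead of per image.
    private let detector = CIDetector(ofType: CIDetectorTypeFace,
                                      context: nil,
                                      options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

    func faceBounds(in ciImage: CIImage) -> [CGRect] {
        // Drain autoreleased intermediates after every image.
        return autoreleasepool {
            (detector?.features(in: ciImage) ?? []).map { $0.bounds }
        }
    }
}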
Posted by Asteroid.
Post not yet marked as solved
1 Reply
317 Views
Core Image has the concept of a Region of Interest (ROI) that allows for nice optimizations during processing. For instance, if a filtered image is cropped before rendering, Core Image can tell the filters to only process that cropped region of the image. This means no pixels are processed that would be discarded by the cropping. Here is an example:

let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cropped = blurred.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))

First, we apply a Gaussian blur filter to the whole image, then we crop to a smaller rect. The corresponding filter graph looks like this: Even though the extent of the image is rather large, the ROI of the crop is propagated back to the filter so that it only processes the pixels within the rendered region.

Now to my problem: Core Image can also cache intermediate results of a filter chain. In fact, it does that automatically. This improves performance when, for example, only changing the parameter of a filter in the middle of the chain and rendering again. Then everything before that filter doesn't change, so a cached intermediate result can be used. CI also has a mechanism for explicitly defining such a caching point by using insertingIntermediate(cache: true). But I noticed that this doesn't play nicely with ROI propagation. For example, if I change the example above like this:

let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cached = blurred.insertingIntermediate(cache: true)
let cropped = cached.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))

the filter graph looks like this: As you can see, the blur filter suddenly wants to process the whole image, regardless of the cropping that happens afterward. The inserted cached intermediate always requires the whole input image as its ROI. I found this a bit confusing. It prevents us from inserting explicit caching points into our pipeline, since we also support non-destructive cropping using the above-mentioned method. Performance is too low, and memory consumption is too high when processing all those unneeded pixels. Is there a way to insert an explicit caching point into the pipeline that correctly propagates the ROI?
Post not yet marked as solved
1 Reply
1.2k Views
Hi! I recently discovered this session (https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera) from WWDC18 and was wondering how one can extract the point cloud from the depth data without using Metal. My end goal is to have the point cloud as an array of 3D points that I can save to a .txt file. Best, Rico
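A hedged sketch of a CPU-only approach, assuming a 32-bit float depth map and camera intrinsics from AVCameraCalibrationData scaled to the depth map's resolution; disparity data would additionally need to be inverted:

import AVFoundation
import simd

func pointCloud(from depthMap: CVPixelBuffer, intrinsics: matrix_float3x3) -> [SIMD3<Float>] {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    // Intrinsics are column-major: focal lengths on the diagonal, principal point in the last column.
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]

    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let depth = row[x]   // metres for DepthFloat32; DisparityFloat32 would need 1/depth
            guard depth.isFinite, depth > 0 else { continue }
            // Unproject the pixel into camera space.
            let px = (Float(x) - cx) * depth / fx
            let py = (Float(y) - cy) * depth / fy
            points.append(SIMD3<Float>(px, py, depth))
        }
    }
    return points
}

The resulting array can then be written out line by line as text.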
Posted by rmeinl.