Core Image

Use built-in or custom filters to process still and video images using Core Image.

48 posts under the Core Image tag

Correct settings to record HDR/SDR with AVAssetWriter
I have set up AVCaptureVideoDataOutput to deliver 10-bit 4:2:0 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation:

```swift
var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size

// srcImage is created from the sample buffer received from the video data output
_ciContext.render(dstImage,
                  to: dstPixelBuffer!,
                  bounds: dstImage.extent,
                  colorSpace: srcImage.colorSpace)
```

I then set the color attachments on this dstPixelBuffer according to the colorProfile chosen in the app settings (BT.709 or BT.2020):

```swift
switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}
```

These pixel buffers are then vended to AVAssetWriter, whose videoSettings are set to the recommended settings from the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
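For reference, one variation I'm experimenting with (a sketch only, not a confirmed fix) is passing a destination color space to the render call that matches the attachments I set afterwards, instead of reusing srcImage.colorSpace:

```swift
import CoreImage
import CoreVideo

// Sketch: render with a color space that matches the color attachments set on
// the destination buffer (BT.709 vs. BT.2100 HLG). `dstImage`, `dstPixelBuffer`
// and the context are the same objects as in the snippets above.
func renderMatchingProfile(_ dstImage: CIImage,
                           into dstPixelBuffer: CVPixelBuffer,
                           context: CIContext,
                           isHLG2100: Bool) {
    let destinationSpace = isHLG2100
        ? CGColorSpace(name: CGColorSpace.itur_2100_HLG)
        : CGColorSpace(name: CGColorSpace.itur_709)

    context.render(dstImage,
                   to: dstPixelBuffer,
                   bounds: dstImage.extent,
                   colorSpace: destinationSpace ?? CGColorSpaceCreateDeviceRGB())
}
```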
0 replies · 0 boosts · 567 views · Nov ’23

Metal Core Image kernel workingColorSpace
I understand that, by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means the color values received (or sampled from a sampler) in a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

```swift
let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]
```

do we receive color values as they exist in the input texture (which may already have gamma correction applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
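For context, these are the two context configurations I'm comparing (just a sketch of the setup, not an answer):

```swift
import CoreImage

// Default context: extended linear sRGB working space, so (as I understand it)
// kernel samples arrive linearized.
let managedContext = CIContext()

// Color management disabled: my question is whether kernel samples now arrive
// exactly as stored in the input texture, i.e. possibly gamma-encoded.
let unmanagedContext = CIContext(options: [
    CIContextOption.workingColorSpace: NSNull(),
    CIContextOption.outputColorSpace: NSNull()
])
```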
0 replies · 0 boosts · 530 views · Nov ’23

Correctly process HDR in Metal Core Image Kernels (& Metal)
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have some doubts, expressed below.

Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case?

I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as-is, normalised to [0-1]. But then it's possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader.

In short, how do you safely process HDR pixel buffers received from the camera (which are 10-bit 4:2:0 YCbCr, and which I believe have gamma correction applied, so the Y in YCbCr is actually Y′)? Can the AVFoundation team clarify this?
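For what it's worth, the context configuration I'm currently assuming based on my reading of the WWDC sessions (an assumption on my part, so please correct me if this is wrong) is a half-float working format with an extended linear working space:

```swift
import CoreImage

// Assumed HDR-friendly setup (sketch): half-float working format plus an
// extended linear BT.2020 working space, so samples stay linear and can
// exceed 1.0 for highlights above SDR white.
let hdrContext = CIContext(options: [
    CIContextOption.workingFormat: CIFormat.RGBAh,
    CIContextOption.workingColorSpace: CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
])
```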
0 replies · 0 boosts · 643 views · Nov ’23

MTLTexture streaming to app directory
For better memory usage when working with MTLTextures (editing + displaying in render passes, compute shaders, etc.), is it possible to save the texture to the app's Documents folder, and then use an UnsafeMutablePointer to access/modify the contents of the texture before displaying it in a render pass? And would this be performant (i.e. 60 fps)? That way the texture wouldn't be held in memory all the time, but the contents could still be edited and displayed when needed.
1 reply · 0 boosts · 639 views · Nov ’23

CIImage.clampedToExtent() doesn't fill some edges
Hi, I'm trying to find an explanation for the strange behaviour of the .clampedToExtent() method. I'm doing a pretty straightforward thing: clamp the image and then crop it with some insets, so as a result I expect the same image as the original, padded on every side by repeating the last pixel of each edge (to then apply a CIPixellate filter). Here is the code:

```swift
originalImage
    .clampedToExtent()
    .cropped(to: originalImage.extent.insetBy(dx: -50, dy: -50))
```

The result is strange: the image has the padding as specified, but only three sides (left, right, bottom) have content; the top padding is transparent. Sometimes the right side has transparency instead of content. So the question is: why does this happen, and how do I get all sides filled with the last-pixel data? I tested on two different devices with iOS 16 and 17.
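For completeness, here's the full pattern I'm ultimately aiming for (a sketch; the scale value is arbitrary):

```swift
import CoreImage

// Clamp so CIPixellate can sample beyond the edges, apply the filter, then
// crop back to the original extent.
func pixellated(_ originalImage: CIImage, scale: Double = 20) -> CIImage {
    originalImage
        .clampedToExtent()
        .applyingFilter("CIPixellate", parameters: [kCIInputScaleKey: scale])
        .cropped(to: originalImage.extent)
}
```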
2 replies · 0 boosts · 720 views · Dec ’23

Create CGColorSpace from CFPropertyList
I am trying to create a custom CGColorSpace in Swift on macOS, but am not sure I really understand the concepts. I want to use a custom color space called Spot1, and if I extract the spot color from a PDF I get the following:

```
"ColorSpace<Dictionary>" = {
    "Cs2<Array>" = (
        Separation,
        Spot1,
        DeviceCMYK,
        {
            "BitsPerSample<Integer>" = 8;
            "Domain<Array>" = ( 0, 1 );
            "Filter<Name>" = FlateDecode;
            "FunctionType<Integer>" = 0;
            "Length<Integer>" = 526;
            "Range<Array>" = ( 0, 1, 0, 1, 0, 1, 0, 1 );
            "Size<Array>" = ( 1024 );
        }
    );
};
```

How can I create this same color space using the CGColorSpace(propertyListPlist: CFPropertyList) API?

```swift
func createSpot1() -> CGColorSpace? {
    let dict0: NSDictionary = [
        "BitsPerSample": 8,
        "Domain": [0, 1],
        "Filter": "FlateDecode",
        "FunctionType": 0,
        "Length": 526,
        "Range": [0, 1, 0, 1, 0, 1, 0, 1],
        "Size": [1024]
    ]
    let dict: NSDictionary = [
        "Cs2": ["Separation", "Spot1", "DeviceCMYK", dict0]
    ]
    let space = CGColorSpace(propertyListPlist: dict as CFPropertyList)
    if space == nil {
        DebugLog("Spot1 color space is nil!")
    }
    return space
}
```
0 replies · 0 boosts · 479 views · Dec ’23

iOS 17 Swift: get GPS location from an image
I am fetching an image from the photo library and trying to read its GPS location data, but it's not working. This needs to work on iOS 17 as well, so I used PHPicker. kCGImagePropertyGPSDictionary is always returning nil. The code I tried:

```swift
import CoreLocation
import MobileCoreServices
import PhotosUI

class ViewController: UIViewController, PHPickerViewControllerDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var selectedImageView: UIImageView!

    @IBAction func selectTheImage() {
        self.pickImageFromLibrary_PH()
    }

    func pickImageFromLibrary_PH() {
        var configuration = PHPickerConfiguration(photoLibrary: PHPhotoLibrary.shared())
        configuration.filter = .images
        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self
        present(picker, animated: true, completion: nil)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        if let itemProvider = results.first?.itemProvider, itemProvider.canLoadObject(ofClass: UIImage.self) {
            itemProvider.loadObject(ofClass: UIImage.self) { (image, error) in
                if let image = image as? UIImage {
                    self.fetchLocation(for: image)
                }
            }
        }
        picker.dismiss(animated: true, completion: nil)
    }

    func fetchLocation(for image: UIImage) {
        let locationManager = CLLocationManager()
        guard let imageData = image.jpegData(compressionQuality: 1.0) else {
            print("Unable to fetch image data.")
            return
        }
        guard let source = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            print("Unable to create image source.")
            return
        }
        guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any] else {
            print("Unable to fetch image properties.")
            return
        }
        print(properties)
        if let gpsInfo = properties[kCGImagePropertyGPSDictionary as String] as? [String: Any],
           let latitude = gpsInfo[kCGImagePropertyGPSLatitude as String] as? CLLocationDegrees,
           let longitude = gpsInfo[kCGImagePropertyGPSLongitude as String] as? CLLocationDegrees {
            let location = CLLocation(latitude: latitude, longitude: longitude)
            print("Image was taken at \(location.coordinate.latitude), \(location.coordinate.longitude)")
        } else {
            print("PHPicker - Location information not found in the image.")
        }
    }
}
```

Properties available in that image (EXIF metadata is present; I expected GPS location data too):

```
    ColorSpace = 65535;
    PixelXDimension = 4032;
    PixelYDimension = 3024;
},
"DPIWidth": 72,
"Depth": 8,
"PixelHeight": 3024,
"ColorModel": RGB,
"DPIHeight": 72,
"{TIFF}": {
    Orientation = 1;
    ResolutionUnit = 2;
    XResolution = 72;
    YResolution = 72;
},
"PixelWidth": 4032,
"Orientation": 1,
"{JFIF}": {
    DensityUnit = 0;
    JFIFVersion = ( 1, 0, 1 );
    XDensity = 72;
    YDensity = 72;
}]
```

Note: I'm trying this in Xcode 15 on iOS 17. In the Photos app the location data is available, but in code it returns nil.
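One alternative I'm considering (a sketch, based on my assumption that re-encoding through UIImage and jpegData strips the EXIF/GPS metadata) is reading the original file representation instead:

```swift
import PhotosUI
import ImageIO
import UniformTypeIdentifiers

// Sketch: load the picked asset's original file (metadata intact) and read the
// GPS dictionary with ImageIO, instead of going through UIImage -> jpegData().
func fetchGPS(from result: PHPickerResult,
              completion: @escaping ([String: Any]?) -> Void) {
    result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        guard let url = url,
              let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any]
        else {
            completion(nil)
            return
        }
        completion(properties[kCGImagePropertyGPSDictionary as String] as? [String: Any])
    }
}
```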
0 replies · 0 boosts · 913 views · Dec ’23

Resolving Delay in HEIF Image Processing During Screen Recording on macOS
Hello everyone,

I'm currently facing a challenging issue with my macOS application that involves HEIF image processing. The application uses an OperationQueue to handle HEIF compression tasks. However, I've observed a significant delay in processing when a screen recording is active. This delay doesn't occur under normal circumstances.

Here's a brief overview of the implementation:

- The HEIF processing task is encapsulated within an Operation added to an OperationQueue.
- The task involves using CIContext for image processing.
- When screen recording is initiated, the operation's execution becomes unusually slow or gets delayed extensively.

After some research and community feedback, I learned that screen recording might be affecting the system's resource allocation, particularly impacting tasks that utilize GPU resources, like the CIContext operations in my case. To address this, I tried the following:

- Switching to a custom dispatch queue with a .userInitiated QoS.
- Using GCD instead of OperationQueue.

Despite these attempts, the issue persists during screen recording. It seems like the screen recording process is given higher priority by macOS, leading to resource reallocation and thus affecting my application's performance.

I'm looking for insights or suggestions on how to handle this scenario more effectively. Specifically, I am interested in:

- Understanding how screen recording impacts resource allocation in macOS.
- Exploring ways to ensure that my HEIF processing task is not severely impacted by other system processes like screen recording.
- Any best practices or alternative approaches for handling image processing tasks that are sensitive to system resource availability.

Here's a snippet of the HEIF processing function for reference:

```swift
import Foundation
import CoreImage

struct CommandResult: CustomStringConvertible {
    let output: String
    let error: Process.TerminationReason
    let status: Int32

    var description: String {
        return "error:\(error.rawValue), output:\(output), status:\(status)"
    }
}

func heif(at sourceURL: URL, to destinationURL: URL, as quality: Int = 75) -> CommandResult {
    let compressionQuality = CGFloat(quality) / 100.0

    guard let ciImage = CIImage(contentsOf: sourceURL) else {
        return CommandResult(output: "Load heic image failed \(sourceURL)", error: .exit, status: -1)
    }

    let context = CIContext(options: nil)
    let heifOptions = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as! [CIImageRepresentationOption: Any]

    do {
        try context.writeHEIFRepresentation(of: ciImage,
                                            to: destinationURL,
                                            format: .RGBA8,
                                            colorSpace: ciImage.colorSpace!,
                                            options: heifOptions)
    } catch {
        return CommandResult(output: "Compress and write heic image failed \(sourceURL)", error: .exit, status: -1)
    }

    return CommandResult(output: "Compress and write heic image successfully \(sourceURL)", error: .exit, status: 0)
}
```

Thank you for your time and any assistance you can provide!
0 replies · 0 boosts · 522 views · Dec ’23

Animated AVIF is rendered slowly on Safari
Animated AVIF renders slowly in Safari. Tested on a MacBook Pro (16", 2019) with Safari (Version 17.0 - 19616.1.27.211.1), and also on several iPhone models (14, 15 Pro) via BrowserStack. When using the same MacBook Pro with Chrome (Version 120.0.6099.129), it renders fine. Example for 720p@25fps: https://res.cloudinary.com/yaronshmueli/image/upload/cases/animated_AVIF_Apple/world_flight_fast_decode_tile_clmn_btiolg.avif
0 replies · 0 boosts · 666 views · Dec ’23

iPhone 15 Pro Front Camera quality issues and poor face photos
This isn't just my observation; lots of people around me agree, and you can find tonnes of feedback on the interwebs. Images taken with the front-facing camera on the 15 (and I think the 14 before it) are so over-processed that I know of people jumping to other phones, and they're right: the 15 exacerbates it even more. You can turn off HDR (a viewing thing) and you can prioritise speed over processing, but you really cannot turn this off. If you take a Live Photo and then choose a different frame, the processing is less. As a developer I look at that and think it's bonkers. It's just software, so why hasn't anyone produced a camera app that makes faces look good (without AI processing) from the front camera? I could be all enthusiastic and say I'll develop one, but it seems like a simple, obvious fix for Apple. Having the settings so bad that I have friends returning their phones seems pretty bad, and as a photographer I would agree. There's a lot to love with Apple on the 15, including Log and ProRes, but a simple selfie produces such ugly results. That's an actual problem. So throwing it out there: what does everyone think? Cheers, Paul
0 replies · 0 boosts · 775 views · Jan ’24

I want to move a CoreImage task to the background...
It feels like this should be easy, but I'm having conceptual problems about how to do this. Any help would be appreciated.

I have a sample app below that works exactly as expected. I'm able to use the Slider and Stepper to generate inputs to a function that uses Core Image filters to manipulate my input image. This all works, but it's doing some O(n) Core Image work on the main thread, and I want to move it to a background thread. I'm pretty sure this can be done using Combine; here is the pseudocode I imagine would work for me:

```swift
func doStuff() {
    // subscribe to options changes
    // .receive on background thread
    // .debounce
    // .map { model.inputImage.combine(options: $0) }
    // return an object on the main thread.
    // update an @Published property?
}
```

Below is the POC code for my project. Any guidance as to where I should use Combine to do this would be greatly appreciated. (Also, please let me know if you think Combine is not the best way to tackle this; I'd be open to alternative implementations.)

```swift
struct ContentView: View {
    @State private var options = CombineOptions.basic
    @ObservedObject var model = Model()

    var body: some View {
        VStack {
            Image(uiImage: enhancedImage)
                .resizable()
                .aspectRatio(contentMode: .fit)
            Slider(value: $options.scale)
            Stepper(value: $options.numberOfImages, label: { Text("\(options.numberOfImages)") })
        }
    }

    private var enhancedImage: UIImage {
        return model.inputImage.combine(options: options)
    }
}

class Model: ObservableObject {
    let inputImage: UIImage = UIImage(named: "IMG_4097")!
}

struct CombineOptions: Codable, Equatable {
    static let basic: CombineOptions = .init(scale: 0.3, numberOfImages: 10)
    var scale: Double
    var numberOfImages: Int
}
```
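In case it helps frame the question, here is roughly what I picture the Combine version looking like (a sketch; CombineOptions and the combine(options:) extension are the ones from my code above, and the debounce interval is arbitrary):

```swift
import Combine
import UIKit

final class BackgroundModel: ObservableObject {
    @Published var options = CombineOptions.basic
    @Published private(set) var enhancedImage: UIImage?

    private let inputImage: UIImage = UIImage(named: "IMG_4097")!
    private var cancellables = Set<AnyCancellable>()

    init() {
        let image = inputImage
        $options
            .debounce(for: .milliseconds(200), scheduler: DispatchQueue.global(qos: .userInitiated))
            .map { image.combine(options: $0) }        // heavy Core Image work off the main thread
            .receive(on: DispatchQueue.main)
            .sink { [weak self] in self?.enhancedImage = $0 }
            .store(in: &cancellables)
    }
}
```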
1 reply · 0 boosts · 595 views · Jan ’24

Use CoreImage filters on Vision Pro (visionOS) view
I have an iOS app that uses the (camera) video feed and applies Core Image filters to simulate a specific real-world effect (for educational purposes). Now I want to make a similar app for visionOS and apply the same Core Image filters to the content (live view) users see while wearing the Apple Vision Pro headset. Is there a way to do this with current APIs, and what would you recommend? I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters somehow using that? I know visionOS is a young/fresh platform, but any help would be great! Thank you!
1 reply · 0 boosts · 879 views · Jan ’24

CoreImage createCGImage Crash
I found that the app reported a crash from a pure virtual function call, which I could not reproduce. A third-party library is referenced: https://github.com/lincf0912/LFPhotoBrowser, used to achieve smearing, blurring, and mosaic processing of images.

The code around the crash:

```objc
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}

- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable)(CIImage *ciimage))filterHandler {
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Begin processing the image
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
```

This line triggers the crash:

```objc
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
```

Crash report: b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
1 reply · 0 boosts · 543 views · Jan ’24

CAMetalLayer renders HDR images with a color shift
I can't get Core Image to render an HDR image file with correct colors to a CAMetalLayer on macOS 14. I'm comparing the result with NSImageView and with the SupportingHDRImagesInYourApp 'HDRDemo23' sample code, which uses CVPixelBuffer. With CAMetalLayer, the images are displayed as HDR (definitely more highlights), but they're darker, with some kind of saturation increase and color shift.

Files I've tested include the sample ISO HDR files in the SupportingHDRImagesInYourApp sample code. Methods I've tried for rendering to CAMetalLayer include:

- Modifying the GeneratingAnAnimationWithACoreImageRenderDestination sample code's ContentView so it uses HDRDemo23/example-ISO-HDR-images/image_01.heic, loaded using CIImage(contentsOf:)
- Creating a test AppKit app that uses MTKView and CIRenderDestination the same way. I have NSImageView and the MTKView in the same window for comparison.
- Using CIRAWFilter > CIRenderDestination > IOSurface > MTKView/CAMetalLayer

All these methods produce an image with the exact same appearance: a dark HDR image with a saturation/color shift. The only clue I've found is that when using the Metal debugger on the test AppKit app, the CAMetalLayer's 'Present' shows the 'input' thumbnail as HDR without the color shift, but the 'output' thumbnail looks like what I actually see. I tried changing the color profile on the layer to various things, but nothing looked more correct.

I've tried this on two Macs, an M1 Mac Studio with an LG display and a MacBook Air M2. The MacBook Air shows the same color shift, but since it has less dynamic range overall, there isn't as much difference between NSImageView and MTKView.
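For reference, these are the CAMetalLayer properties I've been poking at so far (a sketch; I'm assuming these are the relevant knobs, which may itself be part of my problem):

```swift
import QuartzCore
import Metal
import CoreGraphics

// Sketch of the EDR-related layer configuration I've been experimenting with.
func configureForHDR(_ layer: CAMetalLayer) {
    layer.wantsExtendedDynamicRangeContent = true
    layer.pixelFormat = .rgba16Float
    layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
}
```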
4 replies · 0 boosts · 802 views · Jun ’24

What is `CIRAWFilter.linearSpaceFilter` for and when to use it?
I haven't found any really thorough documentation or guidance on the use of CIRAWFilter.linearSpaceFilter. The API documentation calls it "An optional filter you can apply to the RAW image while it’s in linear space." Can someone provide insight into what this means and what the linear-space filter is useful for? When would we use this linear-space filter instead of a filter on the output of CIRAWFilter? Thank you.
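To make the question concrete, this is how I understand it would be used (a sketch; the choice of CIExposureAdjust is just an illustration):

```swift
import CoreImage

func developRAW(at url: URL) -> CIImage? {
    guard let rawFilter = CIRAWFilter(imageURL: url) else { return nil }

    // Per the documentation, this filter runs while the image is still in
    // linear (scene-referred) space, before the RAW pipeline's output stage.
    rawFilter.linearSpaceFilter = CIFilter(name: "CIExposureAdjust",
                                           parameters: [kCIInputEVKey: 0.5])

    // A filter applied to outputImage instead would operate on the developed,
    // output-referred result. My question is when one is preferable.
    return rawFilter.outputImage
}
```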
0 replies · 0 boosts · 476 views · Feb ’24

Lossy option has no effect when exporting PNG to HEIF
Under Sonoma 14.4, the compression option has no effect for PNG source images; it works for JPG/HEIF sources. Preview can export a PNG file to HEIC with the compression option applied, so what am I missing? Previously this worked. I am trying 0.01 and 0.9 as the compression quality, and the resulting file size is the same for PNG. Is Preview using some trick to convert the image, such as ciContext.createCGImage?

PS: A compression option of 1.0 was broken under the 14.4 RC, and Preview created an empty file.

```swift
func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }

    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }

    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
```
5 replies · 0 boosts · 702 views · 2h

Save ARDepthData as .tiff
I would like to save the depth map from ARDepthData as a .tiff, but I notice the distances in my output TIFF are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:

```swift
import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)
```

I am reading the file like this in Python:

```python
import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()
```

which creates this image. The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away. Notably, the depth map contains the distance from the camera plane to each region, not the distance from the camera sensor to the region. Even correcting for this, though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
1 reply · 1 boost · 504 views · Mar ’24