Image I/O

Read and write most image file formats, manage color, and access image metadata using Image I/O.

Posts under Image I/O tag

58 Posts

Supporting AdaptiveImageGlyph copying & pasting
I am currently running iOS 18 Beta 3 and am working on enabling users to paste (and copy) custom emojis (AdaptiveImageGlyph, such as Memoji, Stickers, and soon GenMoji) into a text field. I am looking for the UTI for AdaptiveImageGlyph, something similar to "public.adaptive-image-glyph". Does anyone know if such a UTI exists?

Here's my situation: when typing an AdaptiveImageGlyph using the system keyboard, everything functions correctly. However, if I copy some text containing an AdaptiveImageGlyph from the Notes app and paste it into my playground app, it only pastes the text. The reverse is also true: if I copy an AdaptiveImageGlyph from the playground app and paste it, it only pastes the text. Interestingly, copying an AdaptiveImageGlyph from the Notes app and pasting it into iMessage works flawlessly, and vice versa. I am trying to achieve the same seamless functionality in my app. Given that this feature works in iMessage and Notes, I am inclined to believe the issue might be on my side, though I recognize these are system apps and not third-party.

Example code:

import SwiftUI
import UIKit

struct AdaptiveImageGlyphTextView: UIViewRepresentable {
    class Coordinator: NSObject, UITextViewDelegate {
        var parent: AdaptiveImageGlyphTextView

        init(parent: AdaptiveImageGlyphTextView) {
            self.parent = parent
        }

        func textViewDidChange(_ textView: UITextView) {
            parent.text = textView.text
        }

        func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
            // Handle insertion of adaptive image glyphs here if needed
            return true
        }
    }

    @Binding var text: String

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    func makeUIView(context: Context) -> UITextView {
        let textView = UITextView()
        textView.delegate = context.coordinator
        textView.supportsAdaptiveImageGlyph = true
        textView.isEditable = true
        textView.isSelectable = true
        textView.font = UIFont.systemFont(ofSize: 17)

        // Enable paste with NSAdaptiveImageGlyphs
        textView.pasteConfiguration = UIPasteConfiguration(acceptableTypeIdentifiers: [
            "public.text",
            "public.image",
            "public.adaptive-image-glyph" // Replace with the correct UTI if different
        ])

        return textView
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        if uiView.text != text {
            uiView.text = text
        }
    }
}

struct ContentView: View {
    @State private var text: String = ""

    var body: some View {
        AdaptiveImageGlyphTextView(text: $text)
            .frame(height: 200)
            .padding()
    }
}

#Preview {
    ContentView()
}
0 replies · 0 boosts · 90 views · 1w
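A sketch of one possible approach, assuming (unconfirmed) that Notes and iMessage round-trip adaptive image glyphs as attachments inside attributed text rather than via a dedicated UTI: serialize the attributed string as RTFD and put that on the pasteboard. Only standard NSAttributedString and UIPasteboard APIs are used:

import UIKit
import UniformTypeIdentifiers

// Copy: serialize the attributed string (glyphs ride along as attachments).
func copyAttributedText(_ text: NSAttributedString) {
    let range = NSRange(location: 0, length: text.length)
    if let rtfd = try? text.data(from: range, documentAttributes: [.documentType: NSAttributedString.DocumentType.rtfd]) {
        UIPasteboard.general.setData(rtfd, forPasteboardType: UTType.rtfd.identifier)
    }
}

// Paste: rebuild the attributed string from RTFD data, if present.
func pasteAttributedText() -> NSAttributedString? {
    guard let data = UIPasteboard.general.data(forPasteboardType: UTType.rtfd.identifier) else { return nil }
    return try? NSAttributedString(data: data, options: [.documentType: NSAttributedString.DocumentType.rtfd], documentAttributes: nil)
}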
Cannot create framework for earlier code?
I want to create a framework for this repo: https://github.com/BradLarson/GPUImage, but failed. I downloaded the repo and ran the following:

xcodebuild archive -project GPUImage.xcodeproj -scheme GPUImage -destination "generic/platform=iOS" -archivePath "archives/GPUImage"
xcodebuild archive -project GPUImage.xcodeproj -scheme GPUImage -destination "generic/platform=iOS Simulator" -archivePath "archivessimulator/GPUImage"
xcodebuild -create-xcframework -archive archives/GPUImage.xcarchive -framework GPUImage.framework -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework -output xcframeworks/GPUImage.xcframework

This produces the errors: 'cryptexDiskImage' is an unknown content type and 'com.apple.platform.xros' is an unknown platform identifier.
0 replies · 0 boosts · 113 views · 1w
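For reference, archiving a framework for an XCFramework generally needs two extra build settings, SKIP_INSTALL=NO and BUILD_LIBRARY_FOR_DISTRIBUTION=YES. Whether that resolves the errors above is uncertain; 'com.apple.platform.xros' being unknown suggests a mismatch between the Xcode version and this older project. A minimal sketch of the usual archive invocation:

xcodebuild archive -project GPUImage.xcodeproj -scheme GPUImage \
    -destination "generic/platform=iOS" \
    -archivePath "archives/GPUImage" \
    SKIP_INSTALL=NO \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES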
Updating metadata properties of a DNG (or other) format image
Hi, I'm looking to update the metadata properties of a DNG image stored on disc, saving the result to a new file. Using ImageIO's CGImageSource and CGImageDestination classes, I run into a problem whereby the destination doesn't support the type of the source. For example:

let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
if let cgImageSource = CGImageSourceCreateWithURL(sourceURL as CFURL, imageSourceOptions),
   let type = CGImageSourceGetType(cgImageSource) {
    guard let imageDestination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil) else {
        fatalError("Unable to create image destination")
    }
    // Code to update properties and write out to destination url
}

When this code is executed, I get the following errors on the command line when trying to create the destination:

2024-06-30 11:52:25.531530+0100 ABC[7564:273101] [ABC] findWriterForTypeAndAlternateType:119: *** ERROR: unsupported output file format 'com.adobe.raw-image'
2024-06-30 11:52:25.531661+0100 ABC[7564:273101] [ABC] CGImageDestinationCreateWithURL:4429: *** ERROR: CGImageDestinationCreateWithURL: failed to create 'CGImageDestinationRef'

I don't see a way to create a destination directly from a source. The code works for, say, a JPEG file, but I want it to work for any image format that CGImageSource can work with.
0 replies · 0 boosts · 199 views · 3w
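One possible direction, sketched under the assumption that the goal is to rewrite metadata without re-encoding pixels: CGImageDestinationCopyImageSource copies the image data from a source into a destination and can merge in new metadata. Note that the 'unsupported output file format' error above suggests ImageIO simply has no writer for 'com.adobe.raw-image', in which case no CGImageDestination-based approach will produce a DNG; the sketch applies to formats ImageIO can write (JPEG, TIFF, PNG, HEIC, and so on).

import Foundation
import ImageIO

// Rewrite metadata losslessly: pixel data is copied from the source,
// and the supplied metadata is merged into the existing metadata.
func rewriteMetadata(from sourceURL: URL, to destinationURL: URL, metadata: CGImageMetadata) -> Bool {
    guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil) else {
        return false
    }
    let options = [
        kCGImageDestinationMetadata: metadata,
        kCGImageDestinationMergeMetadata: true
    ] as CFDictionary
    var error: Unmanaged<CFError>?
    return CGImageDestinationCopyImageSource(destination, source, options, &error)
}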
Inline Widget Displays Custom Image Abnormally
I want to display my own image in an inline widget. Using the SwiftUI Image syntax doesn't show the image, so, following advice from online forums, I used the syntax Image(uiImage: UIImage(named: String)). This successfully displays the image, but if I change the image file name in the app, the image doesn't update properly. I tested displaying the image file name using Text in the inline widget, and it correctly shows the updated file name from my app, so my AppStorage and App Groups seem to be working correctly. I'd like to ask if there's a way to update my images properly, and whether there's an alternative way to display images without converting them to UIImage. Thanks.
0 replies · 0 boosts · 169 views · 4w
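A sketch of a likely fix: UIImage(named:) loads from the bundle's asset catalog and caches by name, so an image file the app writes or renames at runtime won't be picked up by the widget. Reading the file directly from the shared App Group container avoids both issues. The group identifier below is a placeholder:

import SwiftUI
import UIKit

// Load the image file from the App Group container shared with the app.
// "group.com.example.app" stands in for your real group identifier.
func widgetImage(named fileName: String) -> Image? {
    guard let container = FileManager.default.containerURL(
            forSecurityApplicationGroupIdentifier: "group.com.example.app") else { return nil }
    let url = container.appendingPathComponent(fileName)
    guard let uiImage = UIImage(contentsOfFile: url.path) else { return nil }
    // Still backed by a UIImage; SwiftUI has no direct file-URL Image initializer on iOS.
    return Image(uiImage: uiImage)
}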
How to read back, from a spatial image encoded as HEIC, which image at which index is left or right?
In the example https://developer.apple.com/documentation/imageio/writing-spatial-photos, we see that each image encoded into the photo includes the following information:

kCGImagePropertyGroups: [
    kCGImagePropertyGroupIndex: 0,
    kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
    (isLeft ? kCGImagePropertyGroupImageIsLeftImage : kCGImagePropertyGroupImageIsRightImage): true,
    kCGImagePropertyGroupImageDisparityAdjustment: encodedDisparityAdjustment
],

which identifies which image is left and which is right, plus the group type (stereo pair). Now, how do you read those back? I tried to implement reading simply with CGImageSourceCopyPropertiesAtIndex, and that did not work; I get back "No property groups found."

func tryToReadThose() {
    guard
        let imageData = try? Data(contentsOf: outputImageURL),
        let source = CGImageSourceCreateWithData(imageData as NSData, nil)
    else {
        print("cannot read")
        return
    }
    for i in 0..<CGImageSourceGetCount(source) {
        guard let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [String: Any] else {
            print("cannot read options")
            continue
        }
        if let propertyGroups = imageProperties[String(kCGImagePropertyGroups)] as? [Any] {
            // Process the property groups as needed
            print(propertyGroups)
        } else {
            print("No property groups found.")
        }
        //print(imageProperties)
    }
}

I assume CGImageSourceCopyPropertiesAtIndex may expect something as its third (options) parameter, but under "Specifying the Read Options" in https://developer.apple.com/documentation/imageio/cgimagesource I don't see anything related to that.
1 reply · 0 boosts · 289 views · Jun ’24
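One thing to try, offered as an assumption rather than a documented answer: the stereo-pair group information may live at the container level of the file rather than on the individual images, in which case CGImageSourceCopyProperties (the index-free variant, called on the same source as above) would surface it:

// Check the container-level properties for the groups array.
if let fileProperties = CGImageSourceCopyProperties(source, nil) as? [String: Any],
   let groups = fileProperties[String(kCGImagePropertyGroups)] as? [[String: Any]] {
    for group in groups {
        print(group) // inspect group type, left/right flags, disparity adjustment
    }
} else {
    print("No container-level property groups either.")
}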
Screenshot with ScreenCaptureKit much larger than with Command-Shift-3
I am capturing a screenshot with SCScreenshotManager's captureImageWithFilter. The resulting PNG has the same resolution as a PNG taken with Command-Shift-3 (4112x2658) but is 10x larger (14.4MB vs 1.35MB). My SCStreamConfiguration uses the SCDisplay's width and height and sets the color space to kCGColorSpaceSRGB. I currently save to file by initializing an NSBitmapImageRep using initWithCGImage, representing it as PNG with representationUsingType NSBitmapImageFileTypePNG, then writeToFile:atomically. Is there some configuration or compression I can use to bring the PNG size more closely in line with a Command-Shift-3 screenshot? Thanks!
1 reply · 0 boosts · 409 views · May ’24
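A guess at the cause, with a sketch to test it: the capture may come back as a 16-bit-per-channel (or wide-gamut) surface, and since PNG is lossless, the extra depth roughly doubles the encoded size. Redrawing into a plain 8-bit sRGB context before encoding should tell you whether that is the difference:

import CoreGraphics
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Flatten a CGImage to 8-bit sRGB and encode it as PNG.
func pngData(from image: CGImage) -> Data? {
    guard let context = CGContext(data: nil,
                                  width: image.width, height: image.height,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    guard let flattened = context.makeImage() else { return nil }

    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(data as CFMutableData, UTType.png.identifier as CFString, 1, nil) else { return nil }
    CGImageDestinationAddImage(destination, flattened, nil)
    return CGImageDestinationFinalize(destination) ? (data as Data) : nil
}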
macOS Finder Preview/Thumbnail Generation Limited to 20-21 Alpha Channels?
I am using the following shell script to return an image preview for use in FileMaker:

qlmanage -t [sourcePath] -s 512 -o [outputPath]

This usually works well, but it hangs if the RGB image (.tif, .psb, or .psd) has too many alpha channels (more than 20 if on a transparent background; more than 21 if flattened). The issue can also be seen when looking at the image thumbnail or preview in the Finder: it appears macOS won't create a thumbnail when the image has over 21 alpha channels, and just shows the default tif/psb/psd thumbnail, even if the image is very small.

Environment:
- macOS Sonoma 14.4.1
- Adobe Photoshop 2024 (25.6.0)
- Maximize PSD and PSB File Compatibility is enabled when saving from Photoshop

Since I'm only able to upload a screenshot to this post, the original test files can be found in the Adobe forum under the title "MacOS Finder Preview Limited to 20-21 Alpha Channels?".
1 reply · 0 boosts · 262 views · May ’24
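As a possible workaround (not an answer to the underlying limit): generating the thumbnail in-process with the QuickLookThumbnailing framework at least reports an error instead of leaving a hung qlmanage process. Whether it copes with more than 21 alpha channels is untested; sourceURL stands in for the file being previewed:

import QuickLookThumbnailing

let request = QLThumbnailGenerator.Request(fileAt: sourceURL,
                                           size: CGSize(width: 512, height: 512),
                                           scale: 1.0,
                                           representationTypes: .thumbnail)
QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { representation, error in
    if let cgImage = representation?.cgImage {
        // Write cgImage to the output path with CGImageDestination.
    } else {
        print("Thumbnail generation failed: \(String(describing: error))")
    }
}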
iOS 17 UIImageReader has memory leaks
In my SwiftUI view, I try to load the image from data.

var body: some View {
    Group {
        if let data = model.detailImageData, let uiimage = UIImage(data: data) { // no memory issue
            Image(uiImage: uiimage)
                .resizable()
                .scaledToFit()
        }
    }
}

But I want to get the HDR version of my image, so I use:

if let data = model.detailImageData, let uiimage = UIImageReader.default.image(data: data) { // memory leaks!!!

When I change the data, the memory of the previous image is never freed, which finally causes my app to crash. You can see it in the Instruments screenshot.
1 reply · 1 boost · 435 views · Apr ’24
Memory Leak in ImageIO?
I use this code to show an image in HDR in SwiftUI:

struct HDRImageView: UIViewRepresentable {
    // Set up a common reader for all UIImage read requests.
    static let reader: UIImageReader = {
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        return UIImageReader(configuration: config)
    }()

    let data: Data?
    let enableHDR: Bool

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView()
        view.preferredImageDynamicRange = enableHDR ? .high : .standard
        update(view)

        // Set this view to fit itself to the parent view.
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        view.setContentHuggingPriority(.required, for: .horizontal)
        view.setContentHuggingPriority(.required, for: .vertical)

        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        update(view)
    }

    func update(_ view: UIImageView) {
        autoreleasepool { // not working
            if let data = data {
                view.image = nil // setting to nil first is not working
                view.image = HDRImageView.reader.image(data: data)
            } else {
                view.image = nil
            }
            view.preferredImageDynamicRange = enableHDR ? .high : .standard
        }
    }
}

But when I update the input data, it seems the old image data is never freed. After several changes, the app takes too much memory and crashes. I found that it's the VM: ImageIO_Surface_Data and VM: Image IO regions that take up the memory. If I change HDRImageView into a normal Image(uiImage: UIImage(data:)), the issue goes away. Is this a memory leak, and how do I solve it?

Update: I then tried using Image(_:cgImage), and it appears to give the same result.
0 replies · 0 boosts · 407 views · Apr ’24
Does CVE-2024-1580 affect my app?
I have an image viewing app with support for AVIF (and AVIS) images. I'm trying to figure out whether the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097

The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context. With some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1. From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
0 replies · 0 boosts · 429 views · Apr ’24
iOS ImageRenderer bug: unable to localize text correctly
A simple view has misaligned localized content after being converted to an image using ImageRenderer. This is still a problem on a real phone and in TestFlight. I'm not sure what the cause is; I'm assuming it's an ImageRenderer bug. I tried UIGraphicsImageRenderer instead, but UIGraphicsImageRenderer captures the image at an inaccurate position, and the offset results in a white border. I also don't know why it sometimes runs into circular references that result in blank images. "(1) days" is also not converted to "1 day" properly.
3 replies · 1 boost · 417 views · Jun ’24
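A sketch of one thing to test, assuming the problem is that ImageRenderer renders the view outside the app's environment, so locale-dependent formatting falls back to defaults. Reapplying the locale explicitly (ShareCardView is a placeholder for the view being rendered):

import SwiftUI

let renderer = ImageRenderer(content:
    ShareCardView() // placeholder for the view being converted to an image
        .environment(\.locale, Locale.current)
)
renderer.scale = UIScreen.main.scale // render at the device's display scale
let image = renderer.uiImage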
Capturing the coordinates of an image and positioning a second image at those coordinates
if balloon == yellow1_balloon {
    soundFile = "Sounds/newblop.wav"
    playSound()
    balloon.isHidden = true
    poppedImages.isHidden = false
    poppedImages.animationImages = ["popyellow-1", "popyellow-2", "popyellow-3", "popyellow-4", "popyellow-5", "popyellow-6", "popyellow-7"]
        .compactMap({ name in UIImage(named: name) })
    let x: CGFloat = yellow1_balloon.frame.origin.x
    let y: CGFloat = yellow1_balloon.frame.origin.y
    poppedImages.frame.origin.x = x
    poppedImages.frame.origin.y = y
    poppedImages.animationDuration = 1.0
    poppedImages.animationRepeatCount = 1
    poppedImages.startAnimating()
    score = score + 10
    scoreLbl.text = String(score)
    return
}

The x, y coordinates are always the same as when yellow1_balloon is first created, not where it ends up after being touched.
0 replies · 0 boosts · 335 views · Mar ’24
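A sketch of one likely explanation: if the balloon is being moved by a UIView animation or repositioned by Auto Layout, frame.origin keeps reporting the model value, not what is currently on screen. The presentation layer reflects the in-flight position:

// Read the on-screen frame from the presentation layer, falling back to the
// model frame when no animation is in flight.
let onScreenFrame = yellow1_balloon.layer.presentation()?.frame ?? yellow1_balloon.frame
poppedImages.frame.origin = onScreenFrame.origin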
Save ARDepthData as .tiff
I would like to save the depth map from ARDepthData as a .tiff, but I notice my output TIFF distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:

import UIKit
import ARKit // for ARFrame

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)

I am reading the file like this in Python:

import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()

which creates this image: The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away. Notably, the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region; even correcting for this, though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
1 reply · 1 boost · 504 views · Mar ’24
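A diagnostic sketch to narrow down where the scaling happens: read a few raw Float32 values straight out of the pixel buffer (ARKit's sceneDepth buffers are kCVPixelFormatType_DepthFloat32, in meters) and compare them with what tifffile reports at the same coordinates. If the raw values are already halved, the problem is upstream of the TIFF writing.

import CoreVideo

extension CVPixelBuffer {
    // Return one raw depth sample at (x, y), in meters.
    func depthSample(x: Int, y: Int) -> Float32 {
        CVPixelBufferLockBaseAddress(self, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(self)
        let base = CVPixelBufferGetBaseAddress(self)!
        let row = (base + y * bytesPerRow).assumingMemoryBound(to: Float32.self)
        return row[x]
    }
}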
Lossy option has no effect when exporting PNG to HEIF
Under Sonoma 14.4, the compression option doesn't work with PNG images; it works for JPG/HEIF. Preview can export a PNG file to HEIC with a compression option. What am I missing? This worked previously. I am trying with 0.01 and 0.9 as the compression quality, and the file size is the same for PNG. Is Preview using some trick to convert the image, such as ciContext.createCGImage?

PS: A compression option of 1.0 was broken under 14.4 RC, and Preview created an empty file.

func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }

    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }

    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)

    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
4 replies · 0 boosts · 697 views · May ’24
ICDeviceBrowser detects camera devices
I am developing an app that uses a data cable to link a camera. When I enter the page for the first time, I can detect the camera device, but when I exit the page and enter again, I can no longer detect the linked camera.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    [self addImageCaptureCore];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [self checkCameraConnection];
    });
}

- (void)checkCameraConnection {
    if (@available(iOS 13.0, *)) {
        NSArray<ICDevice *> *connectedDevices = self.browser.devices;
        if (connectedDevices.count > 0) {
            NSLog(@"Camera is connected");
        } else {
            NSLog(@"Camera is not connected");
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (@available(iOS 13.0, *)) {
        if (self.cameraDevice) {
            if (self.cameraDevice.hasOpenSession) {
                [self.cameraDevice requestCloseSession];
            }
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                [self.browser stop];
                self.browser.delegate = nil;
                self.browser = nil;
            });
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)addImageCaptureCore {
    if (@available(iOS 13.0, *)) {
        ICDeviceBrowser *browser = [[ICDeviceBrowser alloc] init];
        browser.delegate = self;
        [browser start];
        self.browser = browser;
    }
}

#pragma mark - ICDeviceBrowserDelegate

- (void)deviceBrowser:(ICDeviceBrowser *)browser didAddDevice:(ICDevice *)device moreComing:(BOOL)moreComing API_AVAILABLE(ios(13.0)) {
    NSLog(@"Device name = %@", device.name);
    if ([device isKindOfClass:[ICCameraDevice class]]) {
        if ([device.capabilities containsObject:ICCameraDeviceCanAcceptPTPCommands]) {
            ICCameraDevice *cameraDevice = (ICCameraDevice *)device;
            cameraDevice.delegate = self;
            [cameraDevice requestOpenSession];
            self.cameraDevice = cameraDevice;
        }
    }
}

- (void)deviceBrowser:(ICDeviceBrowser *)browser didRemoveDevice:(ICDevice *)device moreGoing:(BOOL)moreGoing API_AVAILABLE(ios(13.0)) {
    if (self.cameraDevice) {
        if (self.cameraDevice.hasOpenSession) {
            [self.cameraDevice requestCloseSession];
        }
        self.cameraDevice.delegate = nil;
        self.cameraDevice = nil;
    }
}

#pragma mark - ICCameraDeviceDelegate

- (void)cameraDevice:(ICCameraDevice *)camera didAddItems:(NSArray<ICCameraItem *> *)items API_AVAILABLE(ios(13.0)) {
    if (items.count > 0) {
        ICCameraItem *latestItem = items.lastObject;
        NSLog(@"name = %@", latestItem.name);
    }
}

#pragma mark - ICDeviceDelegate

- (void)device:(ICDevice *)device didOpenSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Failed to open session %@", error.localizedDescription);
    } else {
        NSLog(@"open session success");
    }
}

- (void)device:(ICDevice *)device didCloseSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"close session error = %@", error.localizedDescription);
    } else {
        NSLog(@"didCloseSession");
    }
}

- (void)didRemoveDevice:(ICDevice *)device {
}
0 replies · 0 boosts · 352 views · Mar ’24
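A guess at the cause: the browser is created only in viewDidLoad, while viewWillDisappear stops and nils it, so any later appearance of the same view controller instance has no browser running. Re-creating it on every appearance keeps setup and teardown balanced. A Swift sketch of the pattern (the post's code is Objective-C; browser is the same stored property):

import ImageCaptureCore

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Recreate the browser if a previous disappearance tore it down.
    if browser == nil {
        let newBrowser = ICDeviceBrowser()
        newBrowser.delegate = self
        newBrowser.start()
        browser = newBrowser
    }
}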
Image processing
How can I extract an object from a picture, or remove the background behind an object, just like you can when creating stickers in the Photos app? Is there an official model or library for this, other than some website's API? (DeepLabV3.mlmodel cannot infer what I need.)
2 replies · 0 boosts · 417 views · Feb ’24
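For what it's worth, iOS 17's Vision framework added a subject-lifting request that appears to be what drives the sticker feature in Photos. A minimal sketch; availability and exact behavior should be verified against the Vision documentation:

import Vision

// Lift the foreground subject(s) out of an image, returning a masked
// pixel buffer cropped to the detected instances.
func extractSubject(from cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: true)
}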