Image I/O


Read and write most image file formats, manage color, access image metadata using Image I/O.

Posts under Image I/O tag

28 Posts

Post

Replies

Boosts

Views

Activity

Since iOS 18.3, icons are no longer generated correctly with QLThumbnailGenerator
Since iOS 18.3, icons are no longer generated correctly with QLThumbnailGenerator. No error is returned either, but this message now appears in the console:

Error returned from iconservicesagent image request: <ISTypeIcon: 0x3010f91a0>,Type: com.adobe.pdf - <ISImageDescriptor: 0x302f188c0> - (36.00, 36.00)@3x v:1 l:5 a:0:0:0:0 t:() b:0 s:2 ps:0 digest: B19540FD-0449-3E89-AC50-38F92F9760FE error: Error Domain=NSOSStatusErrorDomain Code=-609 "Client is disallowed from making such an icon request" UserInfo={NSLocalizedDescription=Client is disallowed from making such an icon request}

Does anyone know this error? Is there a workaround? Are there new permissions to consider? Here is the code that generates the icons:

let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: scale, representationTypes: self.thumbnailType)
request.iconMode = true
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { [weak self] thumbnail, _, error in
}
16
5
1.7k
Nov ’25
Cannot extract imagePair from generated Spatial Photos
Hi, I am trying to implement something as simple as letting people share their Spatial Photos with others (just like this post). I encountered the same issue he did, but his answer doesn't help me here. Briefly speaking, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:

let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below

I can fetch left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).

// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)

I searched the net and someone says the generated version has a depth image instead of a left/right pair. But still, I cannot extract any depth image from the image source. The full code is below; the image-pair extraction stops at "no groups found":

func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in
        guard let imageData, let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}

func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else { return nil }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard let stereoGroup = groups.first(where: {
        let groupType = $0[kCGImagePropertyGroupType] as! CFString
        return groupType == kCGImagePropertyGroupTypeStereoPair
    }) else { return nil }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else { return nil }
    return (leftImage, rightImage, self.identifier)
}

Any suggestion? Thanks. visionOS 2.4
3
0
233
Jun ’25
How to transcode HEIC to JPEG while preserving the ISO 21496-1 gain map
When shooting with an iPhone 15 or later, it’s possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. During image format transcoding, HEIC-to-HEIC conversion is able to preserve the ISO 21496-1 gain map, but when converting from HEIC to JPEG the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
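For reference, a minimal sketch of the transcoding path being described, using Image I/O directly; the input and output URLs are placeholders, and, as the post notes, this route currently yields an Apple Gain Map rather than an ISO 21496-1 gain map when the destination is JPEG:

import Foundation
import ImageIO
import UniformTypeIdentifiers

let heicURL = URL(fileURLWithPath: "/tmp/input.heic")   // placeholder
let jpegURL = URL(fileURLWithPath: "/tmp/output.jpg")   // placeholder

if let source = CGImageSourceCreateWithURL(heicURL as CFURL, nil),
   let destination = CGImageDestinationCreateWithURL(jpegURL as CFURL, UTType.jpeg.identifier as CFString, 1, nil) {
    // Copies the primary image and its metadata from the source without a manual decode/re-encode step.
    let options = [kCGImageDestinationLossyCompressionQuality: 0.9] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    CGImageDestinationFinalize(destination)
}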
3
0
1.2k
Jul ’25
Cannot make my app appear in “Share with App” action in Shortcuts – How to allow receiving images from Shortcuts?
Hi, I’m trying to integrate my iOS app with Shortcuts. My goal is: in the Shortcuts app → create a shortcut → select an image → share the image directly to my app for analysis. However, when I try to add the “Share with App” / “Open in App” / “Send to App” action in Shortcuts, my app does NOT appear in the list of available apps. I want my app to be selectable so that Shortcuts can send an image (UIImage / file) to my app.

What I have tried:
My app supports receiving images using UIActivityViewController and a Share Extension.
I created an App Intents extension (AppIntent + @Parameter(file)...) but the app still does not appear in the Shortcuts “Share with App” list.
I also checked the Info.plist but didn’t find any permission related to Shortcuts.
The app is installed on the device and works normally.

My question: What permission, Info.plist entry, or capability is required so that my app becomes visible in the Shortcuts app as a target for image sharing? More specifically:
Which extension type should be used for receiving images from Shortcuts? App Intents Extension? Share Extension? Intent Extension?
Do I need a specific NSExtensionPointIdentifier for Shortcuts integration?
Do I need to declare a custom Uniform Type Identifier (UTI) or add supported content types so Shortcuts knows my app can handle images?
Are there any required entitlements / capabilities to make the app appear inside the “Share with App” action?

Goal summary: Shortcuts → pick image → send to my app → app receives the image and processes it. But currently my app cannot be selected in Shortcuts. Thanks in advance for any guidance!
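As a point of reference, one plausible shape for an in-app App Intent that accepts an image file from Shortcuts (a sketch only; the intent and parameter names are made up, and this is not presented as a confirmed fix for the visibility problem described above):

import AppIntents

struct AnalyzeImageIntent: AppIntent {
    // Shown in Shortcuts as an action provided by the app.
    static var title: LocalizedStringResource = "Analyze Image"

    // Accepts a file handed over by a Shortcuts action such as "Select Photos".
    @Parameter(title: "Image")
    var image: IntentFile

    func perform() async throws -> some IntentResult {
        let imageData = image.data   // raw image bytes passed in by Shortcuts
        // ... run the app's analysis on imageData here ...
        return .result()
    }
}

Intents defined this way typically surface as their own actions under the app's name in the Shortcuts action list, which may be a different entry point than the generic “Share with App” step.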
3
0
351
Dec ’25
App Store Connect rejects screenshot upload: “incorrect size” (subscription purchase flow) — tried all documented sizes
Hello Apple Developer Forums, I’m preparing to submit an app update that includes an in-app subscription. As part of the submission, I need to provide screenshots showing where the user initiates and completes the subscription purchase flow. The issue is that App Store Connect keeps rejecting my screenshot upload with an “incorrect size” (or size invalid) error. I have already tried exporting the screenshot in all sizes and resolutions described in Apple’s documentation, but none of them are being accepted so far. Could you please advise: What exact pixel dimensions / format requirements App Store Connect currently enforces for these screenshots (including file type and color profile, if relevant)? Whether there are any known issues or common causes for this error (e.g., metadata, alpha channel, scaling, or export settings)? Any recommended workflow/tools to generate a compliant screenshot that reliably uploads? Thank you in advance for your help.
3
0
149
Jan ’26
Images with unusual color spaces not correctly loaded by Core Image
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following: This only happens when the app is built for Mac Catalyst, not on iOS, iPadOS, or “real” macOS (AppKit). The images in question have unusual color spaces; we observed the issue for uRGB and eciRGB v2. Those images are rendered correctly in Photos and Preview on all platforms. When displaying the image inside a UIImageView or in a SwiftUI Image, they render correctly. The issue only occurs when loading the image via Core Image. When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result (screenshots of the Mac (AppKit) and Catalyst render graphs omitted). Something seems to be off when Core Image tries to load an image with a foreign color space in Catalyst. We identified a workaround: by using a CGImageDestination to transcode the image with the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load the image correctly. However, one potentially loses fidelity this way. Or might there be a better workaround?
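For reference, a rough sketch of the workaround mentioned above (transcoding through Image I/O so the color space gets normalized before Core Image sees the data); the function name and the choice of PNG as the intermediate format are just for illustration:

import Foundation
import ImageIO
import UniformTypeIdentifiers

func dataOptimizedForSharing(_ originalData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(originalData as CFData, nil) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, UTType.png.identifier as CFString, 1, nil) else { return nil }
    // Asks Image I/O to convert to a broadly shareable color space (sRGB or similar).
    let options = [kCGImageDestinationOptimizeColorForSharing: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}

// let fixed = dataOptimizedForSharing(imageData).flatMap { CIImage(data: $0) }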
2
3
221
Aug ’25
UI & App related
Due to its new UI with the transparent theme, there is sometimes lag when switching between apps. Please add an AI eraser to the Photos app so that we can experience the new AI tool. In the India region I am unable to use the Wallet options; please add some way to add cards and use this feature.
2
0
139
Jun ’25
Issue with image uploading from camera
In our web application, some functionality allows the user to upload multiple images (more than 25) on a single page. It works fine in all OSes and browsers except iOS. When users try to upload images directly from the camera, there are overlaps, duplications, missing images, and so on. This happens in both Safari and Chrome; we did a thorough check of our application and found everything working fine on our end. You can reproduce the issue by creating a web page that accepts more than 50 images (we tried the same in ASP MVC Core and PHP) and shows the images in order. Access the page on your iPhone using Safari or Chrome and try to upload images directly from your camera; use sequential images (photos of a stopwatch, or something similar) so that you can easily identify the order of the files uploaded, then check the listing page of uploaded images. (Try these steps multiple times.) You will find that some images are duplicated and some are missing.
1
0
141
Apr ’25
ImageIO fails to encode HEICS in macOS 15.5
ImageIO encoding to HEICS fails in macOS 15.5. Log: writeImageAtIndex:1246: *** CMPhotoCompressionSessionAddImageToSequence: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1'). This seems to be related to https://github.com/SDWebImage/SDWebImage/issues/3732. Affected versions: iOS 18.4 (simulator and device), macOS 15.5. Unaffected versions: iOS 18.3 (simulator and device), macOS 15.3.
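For context, a minimal sketch of the kind of call path involved (an assumed repro shape, not verified against these OS builds; the delay-time value is arbitrary):

import Foundation
import ImageIO

func encodeHEICS(frames: [CGImage], to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(url as CFURL, "public.heics" as CFString, frames.count, nil) else { return false }
    let frameProperties = [
        kCGImagePropertyHEICSDictionary: [kCGImagePropertyHEICSDelayTime: 0.1]
    ] as CFDictionary
    for frame in frames {
        CGImageDestinationAddImage(destination, frame, frameProperties)
    }
    // On the affected versions this finalize step reportedly fails with kCMPhotoError_UnsupportedOperation.
    return CGImageDestinationFinalize(destination)
}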
1
0
197
Jun ’25
Image cropping
Currently, I’m working on developing a small macOS utility tool for my photography. In my camera, I have a digital zoom feature. I prefer using this feature when I shoot both JPEG and DNG files. While the JPEG is already cropped to the desired format, the DNG file contains metadata (DefaultUserCrop: 0.22, 0.22, 0.78, 0.78). For instance, when I open that DNG file in Lightroom, it pre-crops the image non-destructively. However, I prefer using Pixelmator Pro for editing. Unfortunately, Pixelmator Pro doesn’t have this feature. So, I thought I could create an app that allows me to pre-crop the image for editing in Pixelmator Pro afterward. Does someone have a better idea or some hints on how I could solve it?
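One piece of this, the cropping step itself, can be sketched with plain Core Graphics; this assumes the DefaultUserCrop fractions (left, top, right, bottom) have already been read out of the DNG metadata by other means, and the function name is illustrative:

import CoreGraphics

func applyUserCrop(to image: CGImage, left: CGFloat, top: CGFloat, right: CGFloat, bottom: CGFloat) -> CGImage? {
    // DefaultUserCrop values are fractions of the full image size, e.g. 0.22, 0.22, 0.78, 0.78.
    let width = CGFloat(image.width)
    let height = CGFloat(image.height)
    let cropRect = CGRect(x: left * width,
                          y: top * height,
                          width: (right - left) * width,
                          height: (bottom - top) * height)
    return image.cropping(to: cropRect)
}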
1
0
334
Jul ’25
Why does converting HEIC/HEIF to JPEG using UIImage.jpegData(compressionQuality: 1.0) significantly increase file size?
I'm working with images selected from the iOS Photos app using PHPickerViewController. Some images appear as HEIF in the Photos info panel — which I understand are stored in the HEIC format, i.e., HEIF containers with HEVC-compressed images, commonly used on iOS when "High Efficiency" is enabled. To convert these images to JPEG, I'm using the standard UIKit approach:

if let image = UIImage(data: heicData) {
    let jpegData = image.jpegData(compressionQuality: 1.0)
}

However, I’ve noticed that this conversion often increases the image size significantly:

Original HEIC/HEIF: ~3 MB
Converted JPEG (quality: 1.0): ~8–12 MB

There’s no resolution change or image editing — it’s just a direct conversion. I understand that HEIC is more efficient than JPEG, but the increase in file size feels disproportionate. Is this kind of jump expected, or are there any recommended workarounds to avoid it?
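One commonly suggested mitigation (a sketch, not an authoritative answer to the post): compressionQuality 1.0 asks the JPEG encoder for near-lossless output, which is usually far larger than the HEIC original; re-encoding through Image I/O at a moderate quality keeps the size closer to the source. The function name and the 0.8 value are illustrative:

import Foundation
import ImageIO
import UniformTypeIdentifiers

func jpegData(fromHEICData heicData: Data, quality: Double = 0.8) -> Data? {
    guard let source = CGImageSourceCreateWithData(heicData as CFData, nil) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, UTType.jpeg.identifier as CFString, 1, nil) else { return nil }
    // Transcode the primary image; unlike UIImage.jpegData this also tends to carry metadata across.
    let options = [kCGImageDestinationLossyCompressionQuality: quality] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}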
1
0
301
Jul ’25
“iOS 26 + BGContinuedProcessingTask: Why does a CPU/ML-intensive job run 4-5× slower in background?”
Hello All, I’m a mobile-app developer working with iOS 26+ and I’m using BGContinuedProcessingTask to perform background work. My app’s workflow includes the following business logic: loading images via PHImageRequest, using a CLIP model to extract image embeddings, and using an .mlmodel-based model to further process those embeddings. For both model inferences I set computeUnits = .cpuAndNeuralEngine. When the app is moved to the background, I observe that the same workload (all three stages) becomes on average 4-5× slower than when the app is in the foreground. In an attempt to diagnose the slowdown, I tried to profile with Xcode Instruments, but since a debugger was attached, the performance in the background appeared nearly identical to the foreground. Even when I detached the debugger, the measured system resource metrics (process CPU usage, system CPU usage, memory, QoS class, thermal state) showed no meaningful difference. Below are some of the metrics I captured:

Process CPU: 177% (foreground) → 153% (background), ~-24.1%; still >1.5 cores of work
System CPU: 56.1% → 38.4%, ~-17.7%
Process memory: 244.8 MB → 218.1 MB
QoS class: userInitiated in both cases
Thermal state: nominal in both cases

Given these results, I’m finding it hard to pinpoint why the overall latency is so much worse when the app is backgrounded, even though the obvious metrics show little variation. I suspect the cause may involve P-core vs E-core scheduling, or internal throttling/limits on Neural Engine usage, but I cannot find clear documentation or logging to confirm this. My questions: Does anyone know why a CPU- (and Neural Engine-) intensive job like this would slow down so dramatically when using BGContinuedProcessingTask in the background on iOS 26+, despite apparently similar resource-usage metrics? Are there internal iOS scheduling/hardware-allocation behaviors (e.g., falling back to lower-performing cores when backgrounded) that might explain this? Any pointers to Apple technical notes, system logs, or instrumentation I might use to detect which cores or compute units are being used would be greatly appreciated. Thank you for your time and any guidance you can provide. Best regards,
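For reference, the compute-unit configuration the post describes corresponds to the standard Core ML setup sketched below (the model URL is a placeholder):

import CoreML
import Foundation

let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine   // CPU + Neural Engine, no GPU

let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")   // placeholder
let model = try? MLModel(contentsOf: modelURL, configuration: configuration)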
1
0
449
Nov ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging: Low-complexity images are compressed more aggressively The final packaged file size varies based on content complexity Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0
0
199
May ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging: Low-complexity images are compressed more aggressively The final packaged file size varies based on content complexity Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0
0
201
Jun ’25
WWDC25 Camera & Photos group lab summary (Part 1 of 3)
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3) At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos. The WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Introductory kick-off questions

Question 1: Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads/groupIDs); Camera Controls and AirPod stem clicks; Spatial Audio and studio-quality AirPod mics in Camera; lens smudge detection; exposure and focus rect of interest.

Question 2: I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses: minimum focus distance, newer phones having multiple lenses, and automatic switching behavior. Use a virtual device like the builtInDualWide or builtInTriple camera rather than just the builtInWide, set the videoZoomFactor to 2, and you're done.

Question 3: Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead try to be as close as possible to the real color of the scene, regardless of the illumination. Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRIs, skin problems, X-rays and so on. It is a writable and readable format on all OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics API. For Core Graphics there is a new DICOM entry in the property dictionary which includes all the available and defined DICOM properties in a file. Finder will also display those in the info panel. Why would a developer want to use it? Not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces output that is health related and that you may also share with health providers or your doctors.

Main session developer questions

Question 1: LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro), and when should you prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution). LiDAR vs. Dual and similar: LiDAR is best for absolute depth, real-world scale, and computer vision; Dual and similar are relative, disparity-based, lower power, and suited to photo effects. Also see the 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking".

Question 2: Can the TrueDepth and LiDAR cameras run at 60fps?
LiDAR can do 30fps. The front TrueDepth camera can do 60fps.

Question 3: What’s the first-class way to use PhotoKit to reimplement a high-performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells).
Use the PHCachingImageManager to get media content delivered before you need to display it; specify the size you need for grid-sized display; set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat and PHImageRequestOptionsResizeModeFast.

Question 4: For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs UIViewRepresentable + AVCaptureVideoPreviewLayer.self?
AVCaptureVideoPreviewLayer is the most efficient display path. Use VDO + AVSampleBufferDisplayLayer if you need to modify the image data. SwiftUI Image is optimized for static image content.

Question 5: Is there a way to configure the AVFoundation builtInLiDARDepthCamera mode to provide a depth map as accurate as ARKit at close range?
The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps.

Question 6: Pyramid-based photo editing in Core Image (such as Adobe Camera Raw highlights and shadows)?
First off you may want to look at the built-in filter called CIHighlightShadowAdjust. Also, the noise reduction in CIRAWFilter uses a pyramid-based algorithm. You can also write your own pyramid-based algorithms by taking an input image, downsampling it by two multiple times using imageByApplyingAffineTransform, applying additional CIKernels to each downsampled image as needed, and using a custom CIKernel to combine the results.

Question 7: Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.

Question 8: Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don’t know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point; the photo will have to be re-inserted into the photo library as an adjustment.

Question 9: For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha. If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.

(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
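To make the photo-grid advice in Question 3 above concrete, here is a rough sketch of PHCachingImageManager prefetching (assumed usage; the target size and options are illustrative):

import Photos
import UIKit

let cachingManager = PHCachingImageManager()
let gridSize = CGSize(width: 200, height: 200)   // roughly the on-screen cell size in pixels

let gridOptions = PHImageRequestOptions()
gridOptions.deliveryMode = .fastFormat
gridOptions.resizeMode = .fast

// Warm the cache for assets that are about to scroll on screen.
func prefetch(assets: [PHAsset]) {
    cachingManager.startCachingImages(for: assets, targetSize: gridSize, contentMode: .aspectFill, options: gridOptions)
}

// Request a thumbnail for a visible cell; the result handler may be called more than once.
func loadThumbnail(for asset: PHAsset, into imageView: UIImageView) {
    cachingManager.requestImage(for: asset, targetSize: gridSize, contentMode: .aspectFill, options: gridOptions) { image, _ in
        imageView.image = image
    }
}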
0
0
548
Jul ’25
WWDC25 Camera & Photos group lab summary (Part 2 of 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3) At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos. The WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Question 10: Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use AVCaptureSession's VDO + AVCapturePhotoOutput, run Vision on VDO buffers, and capture a photo when a certain scene or text is detected. Just be careful to run Vision on VDO buffers asynchronously so it doesn't cause frame drops.

Question 11: What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, and scanners, for read and write where supported. Check out the docs to see how to browse connected devices, folders, files, etc.

Question 12: Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine. AppKit or UIKit views inside appropriate SwiftUI representables should be equivalent in performance.

Question 13: What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnails) we don’t want HDR. Is there a nice way to tone map?
There’s a suite of new System Tone Mapper APIs in this year’s OSes: Core Image, ImageKit, Core Animation, and Core Graphics. For example: Core Image has the new CISystemToneMap filter; Core Animation has layer.preferredDynamicRange = CADynamicRangeConstrainedHigh. Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange and can go from high to constrained to standard. Tone mapping is provided by the system (CISystemToneMap is the controllable example).

Question 14: What is your recommendation to preprocess and upscale a depth map in order to render a realistic portrait-mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower resolution depth map by using a higher resolution RGB image as a guide.

Question 15: For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview. For buffering for later processing, ensure you make copies of VDO buffers so as not to drop frames from the output.

Question 16: Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don’t know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point; the photo will have to be re-inserted into the photo library as an adjustment.

Question 17: Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the image to the output dimensions, whereas cropping yields a smaller output image; so while digital zoom does crop, it also upscales.

Question 18: How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: put the most common controls up front, and make it easy for pros to drill down. Sensible defaults: choose defaults that work well for casual users, but allow those defaults to be modified by photography enthusiasts. A good philosophy is: keep the simple things easy, make the hard things possible.

Question 19: Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 20: A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user.

Question 21: Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide their model container. This has been supported for a while (iOS 18/macOS 15). For more details go to the Machine Learning & AI group lab on Thursday. Use smaller images for better performance.

Question 22: What would you recommend for capture of the new immersive and spatial formats?
To capture spatial video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property. Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported. See the WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details.

Question 23: You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files as well as ISO HDR files. For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs. If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.

(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
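As a small illustration of the dynamic-range control mentioned in Question 13 above, here is a sketch using the UIKit image-view property (assumed to be the UIImageView counterpart of the CALayer API named in the summary; availability and exact behavior should be checked against the current SDK):

import UIKit

let imageView = UIImageView()

// Full HDR headroom for a one-up view.
imageView.preferredImageDynamicRange = .high

// Constrained range for grid thumbnails, so HDR highlights don't dominate the grid.
// imageView.preferredImageDynamicRange = .constrainedHigh

// Standard (SDR) rendering.
// imageView.preferredImageDynamicRange = .standard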
0
0
406
Jul ’25
Photos access
I have a wallpaper app and I need Photos access. When I try to add it in Capabilities, I don't find it. Is there any solution?
1

0

146

Jun ’25
When saving a 48MP image, CGImageDestinationFinalize consumes a large amount of memory. Are there any optimization methods available?
It uses about 300 MB of memory, which causes a memory peak.
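For context, one approach that is sometimes suggested to reduce the peak (a sketch only, not verified for the 48MP case): transcode via CGImageDestinationAddImageFromSource so that Image I/O works from the encoded source data instead of a fully decoded CGImage held by the app. The function name and quality value are illustrative, and whether this actually lowers the finalize-time peak is not established here:

import Foundation
import ImageIO
import UniformTypeIdentifiers

func writeHEIC(from sourceData: Data, to url: URL, quality: Double = 0.9) -> Bool {
    guard let source = CGImageSourceCreateWithData(sourceData as CFData, nil),
          let destination = CGImageDestinationCreateWithURL(url as CFURL, UTType.heic.identifier as CFString, 1, nil) else { return false }
    let options = [kCGImageDestinationLossyCompressionQuality: quality] as CFDictionary
    // May avoid the app keeping its own fully decoded 48MP bitmap alive alongside the encoder's working memory.
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    return CGImageDestinationFinalize(destination)
}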
1

0

498

Jul ’25
Proper printing page from my app design
My app uses VStack and HStack instead of the normal table format. When I try to print, everything works perfectly, but it will not print the cell outline. What is the correct line of code or instruction to use? Thanks, Hal
1

0

121

Aug ’25
Unity game rendering freezes on newer iPhone models
During normal gameplay, calling the assetBundle.Unload API very frequently causes the game's rendering to freeze, while the background music keeps playing normally. This only happens on iPhone 16 and iPhone 17; older models have no issue at all. How can this problem be solved?
1

0

704

Nov ’25