(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
The WWDC25 Camera & Photos Group Lab ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads/groupIDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
Minimum focus distance: newer lenses can have a longer minimum focus distance.
Newer phones have multiple lenses, with automatic switching behavior between them.
Use a virtual device such as the builtInDualWideCamera or builtInTripleCamera, rather than just the builtInWideAngleCamera.
Set the videoZoomFactor to 2. You're done. (See the sketch below.)
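A minimal sketch of that recipe, assuming a back-facing capture pipeline; the function name and the fallback choice are illustrative, not from the lab:

import AVFoundation

// Sketch: prefer a virtual device so the system can switch to the Ultra Wide lens
// when the subject is closer than the Wide lens's minimum focus distance.
func makeCloseUpScanningInput() throws -> AVCaptureDeviceInput? {
    // Fall back to the Wide camera on devices without a Dual Wide camera.
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
            ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        return nil
    }
    let input = try AVCaptureDeviceInput(device: device)
    if device.isVirtualDevice {
        try device.lockForConfiguration()
        device.videoZoomFactor = 2.0   // matches the Wide lens's field of view on the virtual device
        device.unlockForConfiguration()
    }
    return input
}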
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead tries to get as close as possible to the real color of the scene, regardless of the illumination.
Constant color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRIs, skin problems, X-rays, and so on.
It's a writable and readable format on all OS 26 platforms, supported through the AVCapturePhotoOutput APIs and through the Core Graphics APIs.
For Core Graphics, there is a new DICOM entry in the property dictionary which includes all the available and defined DICOM properties in a file. Finder will also display all of those in the Info panel.
Why would a developer want to use it? Not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may also want to share with health providers or your doctors.
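For context, here is a rough sketch of opting in to Constant Color capture with the AVCapturePhotoOutput APIs introduced last year; the answer above says the new DICOM delivery also goes through AVCapturePhotoOutput, but its specific settings are not shown here. The HEVC format, flash mode, and fallback-delivery choices are assumptions; verify the property names against the AVFoundation headers.

import AVFoundation

// Sketch: opt in to Constant Color on a configured AVCapturePhotoOutput.
// Constant Color captures are flash-based, so a flash mode is requested here.
func makeConstantColorSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    guard photoOutput.isConstantColorSupported else { return nil }
    photoOutput.isConstantColorEnabled = true

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isConstantColorEnabled = true
    settings.flashMode = .auto
    // Also deliver a conventional photo in case constant color confidence is low.
    settings.isConstantColorFallbackPhotoDeliveryEnabled = true
    return settings
}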
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution).
LiDAR vs. Dual, etc.:
LiDAR: best for absolute depth, real-world scale, and computer vision
Dual, etc.: relative, disparity-based depth, less power, photo effects
Also see: 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking"
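A small sketch of how to inspect what each device actually reports, since only output depth formats are advertised, not sensor resolution; the device list here is illustrative:

import AVFoundation
import CoreMedia

// Sketch: print the depth output resolutions each back-facing depth-capable device reports.
func logDepthFormats() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInLiDARDepthCamera, .builtInDualCamera, .builtInDualWideCamera],
        mediaType: .video,
        position: .back)
    for device in discovery.devices {
        for format in device.formats {
            for depthFormat in format.supportedDepthDataFormats {
                let dims = CMVideoFormatDescriptionGetDimensions(depthFormat.formatDescription)
                print("\(device.localizedName): depth \(dims.width)x\(dims.height)")
            }
        }
    }
}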
Question 2
Can the TrueDepth and LiDAR cameras run at 60 fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps. (See the sketch below.)
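A minimal sketch of picking a TrueDepth format that supports 60 fps video and depth delivery; the selection criteria are an assumption about how you might apply the answer above:

import AVFoundation

// Sketch: pick a front TrueDepth format (and matching depth format) that reaches 60 fps.
func configureTrueDepthFor60fps() {
    guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else { return }
    for format in device.formats {
        let video60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        let depth60Format = format.supportedDepthDataFormats.first { depth in
            depth.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        }
        guard video60, let depthFormat = depth60Format else { continue }
        do {
            try device.lockForConfiguration()
            device.activeFormat = format
            device.activeDepthDataFormat = depthFormat
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
        break
    }
}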
Question 3
What's the first-class way to use PhotoKit to reimplement a high-performance photo grid? We've been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60 Hz, efficient memory footprint, minimal flashes of placeholder/empty cells).
Use the PHCachingImageManager to get media content delivered before you need to display it.
Specify the size you need for grid-sized display.
Set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat, and PHImageRequestOptionsResizeModeFast. (See the sketch below.)
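A minimal sketch of that caching setup for image assets (videos would use PHVideoRequestOptions the same way); the 200-point target size and the class shape are assumptions for illustration:

import Photos
import UIKit

// Sketch: prefetch grid-sized thumbnails before their cells appear, using fast delivery options.
final class GridImageSource {
    private let cachingManager = PHCachingImageManager()
    private let targetSize = CGSize(width: 200, height: 200)   // in practice, scale by the screen scale

    private var requestOptions: PHImageRequestOptions {
        let options = PHImageRequestOptions()
        options.deliveryMode = .fastFormat   // PHImageRequestOptionsDeliveryModeFastFormat
        options.resizeMode = .fast           // PHImageRequestOptionsResizeModeFast
        return options
    }

    // Call as the user scrolls toward these assets, before the cells are shown.
    func startPrefetching(_ assets: [PHAsset]) {
        cachingManager.startCachingImages(for: assets, targetSize: targetSize,
                                          contentMode: .aspectFill, options: requestOptions)
    }

    func requestThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
        cachingManager.requestImage(for: asset, targetSize: targetSize,
                                    contentMode: .aspectFill, options: requestOptions) { image, _ in
            completion(image)
        }
    }
}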
Question 4
For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path.
Use a video data output (AVCaptureVideoDataOutput) + AVSampleBufferDisplayLayer if you need to modify the image data.
SwiftUI Image is optimized for static image content. (See the wrapper sketch below.)
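A minimal sketch of hosting AVCaptureVideoPreviewLayer inside SwiftUI via UIViewRepresentable; the wrapper type names are illustrative:

import SwiftUI
import AVFoundation

// Sketch: a SwiftUI view backed by AVCaptureVideoPreviewLayer, the most efficient display path.
struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}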
Question 5
Is there a way to configure the AVFoundation builtInLiDARDepthCamera to provide a depth map as accurate as ARKit at close range?
The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps. (See the sketch below.)
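A minimal sketch of turning that filtering on; the session wiring around it is assumed:

import AVFoundation

// Sketch: add a depth data output with built-in filtering enabled (noise reduction, hole filling).
func makeFilteredDepthOutput(for session: AVCaptureSession) -> AVCaptureDepthDataOutput? {
    let depthOutput = AVCaptureDepthDataOutput()
    guard session.canAddOutput(depthOutput) else { return nil }
    session.addOutput(depthOutput)
    depthOutput.isFilteringEnabled = true   // smoother, hole-filled depth maps
    return depthOutput
}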
Question 6
Pyramid-based photo editing in Core Image (such as Adobe Camera Raw's highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust.
Also, the noise reduction in CIRAWFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image and:
downsampling it by a factor of two multiple times using imageByApplyingAffineTransform,
applying additional CIKernels to each downsampled image as needed, and
using a custom CIKernel to combine the results. (See the sketch below.)
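A minimal sketch of the downsampling step, assuming the per-level processing and recombination kernels are written separately:

import CoreImage

// Sketch: build an image pyramid by repeatedly halving the input image.
// (imageByApplyingAffineTransform in Objective-C is transformed(by:) in Swift.)
func buildPyramid(from input: CIImage, levels: Int) -> [CIImage] {
    var pyramid = [input]
    var current = input
    for _ in 0..<levels {
        current = current.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        pyramid.append(current)   // apply additional CIKernels to each level as needed
    }
    return pyramid   // combine the levels with a custom CIKernel elsewhere
}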
Question 7
Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.
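A minimal sketch of presenting it; the delegate-conformance constraint on the parameter is an assumption about a typical setup:

import UIKit

// Sketch: present the system-provided camera UI from an existing view controller.
func presentSystemCamera(from host: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = .camera   // capture photos (and movies, if enabled via mediaTypes)
    picker.delegate = host        // receives the didFinishPicking / cancel callbacks
    host.present(picker, animated: true)
}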
Question 8
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final photo at that point.
The photo will have to be re-inserted into the Photos library as an adjustment. (See the sketch below.)
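The notes above are terse, so here is a rough sketch of the "apply the CIFilter to the finalized image, then re-insert it into the Photos library as an adjustment" idea, using the standard PhotoKit editing flow. The filter choice, format identifier, and JPEG rendering are assumptions, and detecting when a deferred capture has finished processing is not shown here.

import Photos
import CoreImage

// Sketch: apply a Core Image filter to an asset and save the result back as an adjustment.
func applyFilterAsAdjustment(to asset: PHAsset, filterName: String = "CISepiaTone") {
    let options = PHContentEditingInputRequestOptions()
    asset.requestContentEditingInput(with: options) { input, _ in
        guard let input, let url = input.fullSizeImageURL,
              let ciImage = CIImage(contentsOf: url),
              let filter = CIFilter(name: filterName) else { return }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let filtered = filter.outputImage else { return }

        // Describe the edit so it can be reverted or re-applied later.
        let editingOutput = PHContentEditingOutput(contentEditingInput: input)
        editingOutput.adjustmentData = PHAdjustmentData(
            formatIdentifier: "com.example.myfilter",   // hypothetical identifier
            formatVersion: "1.0",
            data: Data(filterName.utf8))

        // Render the full-size result to the URL PhotoKit expects.
        let context = CIContext()
        try? context.writeJPEGRepresentation(of: filtered,
                                             to: editingOutput.renderedContentURL,
                                             colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)

        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetChangeRequest(for: asset)
            request.contentEditingOutput = editingOutput
        }, completionHandler: nil)
    }
}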
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha.
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha. (See the sketch below.)
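A minimal sketch of writing an alpha-preserving HEIF file via Image I/O, assuming cgImage already carries an alpha channel and that 0.8 is an acceptable quality:

import Foundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: encode a CGImage (with alpha) as lossy HEIF using CGImageDestination.
func writeHEIF(_ cgImage: CGImage, to url: URL, quality: Double = 0.8) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 1, nil) else { return false }
    let options: [CFString: Any] = [kCGImageDestinationLossyCompressionQuality: quality]
    CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)
    return CGImageDestinationFinalize(destination)
}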
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
Image I/O
Read and write most image file formats, manage color, access image metadata using Image I/O.
Posts under Image I/O tag
Due to its new UI with the transparent theme, it sometimes lags when switching between apps.
Please add an AI eraser in the Photos app so that we can experience the new AI tool.
In the India region I'm unable to use Wallet options. Please add some options so that we can add cards and use this feature.
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but his answer doesn't help me out here.
Briefly speaking, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ...
photos.enumerateObjects { asset, _, _ in
    if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
        spatialAsset = asset
    }
}
// other code shown below
I can fetch the left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it didn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the net and someone says the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the imageSource.
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    // Request the original image data and parse it with Image I/O.
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let pair = stereoImagePair(from: imageSource)
        completion(pair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))

    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    // Look for the stereo-pair group that native spatial photos contain.
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as? String
            return groupType == (kCGImagePropertyGroupTypeStereoPair as String)
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
Any suggestions? Thanks!
visionOS 2.4
ImageIO encoding to HEICS fails in macOS 15.5.
Log:
writeImageAtIndex:1246: *** CMPhotoCompressionSessionAddImageToSequence: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')
Seems to be related to
https://github.com/SDWebImage/SDWebImage/issues/3732
Affected versions:
iOS 18.4 (sim and device), macOS 15.5
Unaffected versions:
iOS 18.3 (sim and device), macOS 15.3
I have a wallpaper app and I need Photos access. When I try to add it in Capabilities, I don't find it. Is there any solution?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
In our web application, some functionality allows the user to upload multiple images (more than 25 images) on a single page.
It works fine in all OSes and browsers except iOS.
When users try to upload images directly from the camera, there are overlaps, duplications, missing images, etc.
This happens in both Safari and Chrome. We did a thorough check of our application and found everything is working fine on our end.
You can reproduce the issue by creating a web page which accepts more than 50 images (we tried the same in ASP MVC Core and PHP) and shows the images in order.
Access the page through your iPhone using Safari or Chrome.
Try to upload images directly from your camera; use sequential images (images of a stopwatch, or something like that) so that you can easily identify the order of the files uploaded.
Then check the listing page of uploaded images (try these steps multiple times).
You will find some images are duplicated and some are missing.
Hello Apple Developer Community,
I’m facing a recurring issue in my SwiftUI iOS app where a specific view fails to reload correctly after saving edits and navigating back to it. The failure happens consistently post-save, and I’m looking for insights from the community.
🛠 App Overview
Purpose
A SwiftUI app that manages user-created data, including images, text fields, and completion tracking.
Tech Stack:
SwiftUI, Swift 5.x
MSAL for authentication
Azure Cosmos DB (NoSQL) for backend data
Azure Blob Storage for images
Environment:
Xcode 15.x
iOS 17 (tested on iOS 18.2 simulator and iPhone 16 Pro)
User Context:
Users authenticate via MSAL.
Data is fetched via Azure Functions, stored in Cosmos DB, and displayed dynamically.
🚨 Issue Description
🔁 Steps to Reproduce
Open a SwiftUI view (e.g., a dashboard displaying a user’s saved data).
Edit an item (e.g., update a name, upload a new image, modify completion progress).
Save changes via an API call (sendDataToBackend).
The view navigates back, but the image URL loses its SAS token.
Navigate away (e.g., back to the home screen or another tab).
Return to the view.
❌ Result
The view crashes, displays blank, or fails to load updated data.
SwiftUI refreshes text-based data correctly, but AsyncImage does not.
Printing the image URL post-save shows that the SAS token (?sv=...) is missing.
❓ Question
How can I properly reload AsyncImage after saving, without losing the SAS token?
🛠 What I’ve Tried
✅ Verified JSON Structure
Debugged pre- and post-save JSON.
Confirmed field names match the Codable model.
✅ Forced SwiftUI to Refresh
Tried .id(UUID()) on NavigationStack:
NavigationStack {
    ProjectDashboardView()
        .id(UUID()) // Forces reinit
}
Still fails intermittently.
✅ Forced AsyncImage to Reload
Tried appending a UUID() to the image URL:
AsyncImage(url: URL(string: "\(imageUrl)?cacheBust=\(UUID().uuidString)"))
Still fails when URL query parameters (?sv=...) are trimmed.
I’d greatly appreciate any insights, code snippets, or debugging suggestions! Let me know if more logs or sample code would help.
Thanks in advance!
When I edit an image in the Preview app, there are only options to rotate left and right in 90-degree increments. There is no option to rotate by an arbitrary angle. Please look into this and provide that option in a future update.
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Image I/O, Graphics and Games, App Review, Media
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it.
On iOS 16, the following code works fine:
import ImageIO
import UIKit
func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}
However, on iOS 18, this fails with errors like:
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?
Additional Info:
The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 6A 50 ...).
The image works on iOS 16 but fails on iOS 18.
Tested on iPhone running iOS 18.0 beta.
Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
In SwiftUI we have these nice properties that we can use for an Image: .saturation, .contrast, .brightness.
In my case I have a backend where I would want to apply the same exact effects using OpenCV.
Can I find the mathematical formulas anywhere?
Any idea how can I achieve this?
Thank you!
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing some part of the process.
I tried to:
convert my depth map into a pixel buffer
create an image destination ref and add the image there
add the auxData (depth map dict)
This is the output:
There is some black space there and my RGB image colour changes.