(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
The WWDC25 Camera & Photos group lab ran for one hour at 6 PM Pacific Time on Tuesday, June 10th, 2025.
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads and group IDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
The minimum focus distance has increased on some newer models.
Newer phones have multiple lenses.
Automatic lens switching behavior depends on which capture device you select.
Use a virtual device like builtInDualWideCamera or builtInTripleCamera, rather than just builtInWideAngleCamera.
Set the videoZoomFactor to 2, and you're done (see the sketch below).
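A minimal sketch of that recommendation, assuming you already have an AVCaptureSession you are configuring:

import AVFoundation

// Prefer a virtual device so the system can switch to the ultra-wide lens
// (which has a much shorter minimum focus distance) when a QR code is held close.
func configureScanningCamera(for session: AVCaptureSession) throws {
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                               for: .video,
                                               position: .back) else { return }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    // On the dual-wide virtual device, a zoom factor of 2 matches the wide
    // camera's field of view while still allowing automatic switch-over to the
    // ultra-wide camera for close-up (otherwise blurry) subjects.
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0
    device.unlockForConfiguration()
}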
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead get as close as possible to the real color of the scene, regardless of the illumination.
Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects such as MRIs, skin conditions, X-rays, and so on.
It is a writable and readable format on all the OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics APIs.
For Core Graphics, there is a new DICOM entry in the property dictionary that includes all of the DICOM properties available and defined in a file. Finder will also display these in the Info panel.
Why would a developer want to use it? It's not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may also want to share with health providers or doctors.
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution)
LiDAR vs. Dual, etc.:
LiDAR: best for absolute depth, real-world scale, and computer vision.
Dual, etc.: relative, disparity-based depth; lower power; good for photo effects.
Also see the WWDC22 session "Discover advancements in iOS camera capture: Depth, focus, and multitasking".
Question 2
Can the TrueDepth and LiDAR cameras run at 60 fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps.
Question 3
What’s the first class way to use PhotoKit to reimplement a high performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells)
Use PHCachingImageManager to get media content delivered before you need to display it.
Specify the size you need for grid-sized display.
Set the fast options: PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat, and PHImageRequestOptionsResizeModeFast (see the sketch below).
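A minimal sketch of that setup, assuming `assets` comes from a PHAsset fetch and `targetSize` is the grid cell size in pixels (cell size in points multiplied by the display scale):

import Photos
import UIKit

final class GridThumbnailLoader {
    private let cachingManager = PHCachingImageManager()
    private let options: PHImageRequestOptions = {
        let options = PHImageRequestOptions()
        options.deliveryMode = .fastFormat   // PHImageRequestOptionsDeliveryModeFastFormat
        options.resizeMode = .fast           // PHImageRequestOptionsResizeModeFast
        return options
    }()

    // Warm the cache for assets that are about to scroll on screen.
    func startCaching(_ assets: [PHAsset], targetSize: CGSize) {
        cachingManager.startCachingImages(for: assets,
                                          targetSize: targetSize,
                                          contentMode: .aspectFill,
                                          options: options)
    }

    // Request a grid-sized thumbnail; with .fastFormat the handler is typically
    // called once with a quickly available image.
    func thumbnail(for asset: PHAsset, targetSize: CGSize,
                   completion: @escaping (UIImage?) -> Void) {
        cachingManager.requestImage(for: asset,
                                    targetSize: targetSize,
                                    contentMode: .aspectFill,
                                    options: options) { image, _ in
            completion(image)
        }
    }
}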
Question 4
For rendering a live preview of the video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path (see the sketch below).
Use AVCaptureVideoDataOutput + AVSampleBufferDisplayLayer if you need to modify the image data.
SwiftUI Image is optimized for static image content.
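For reference, a minimal sketch of the UIViewRepresentable + AVCaptureVideoPreviewLayer path, assuming a configured and running AVCaptureSession is passed in:

import SwiftUI
import AVFoundation

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    final class PreviewView: UIView {
        // Back the view with an AVCaptureVideoPreviewLayer for the most efficient display path.
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}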
Question 5
Is there a way to configure the AVFoundation builtInLiDARDepthCamera to provide a depth map as accurate as ARKit at close range?
AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values; consider using it for smoother depth maps (see the sketch below).
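A minimal sketch of enabling that filtering, assuming the output is added to a session that uses the LiDAR depth camera:

import AVFoundation

func makeFilteredDepthOutput() -> AVCaptureDepthDataOutput {
    let depthOutput = AVCaptureDepthDataOutput()
    // Filtering fills holes (invalid depth values) and reduces temporal noise,
    // producing smoother depth maps.
    depthOutput.isFilteringEnabled = true
    return depthOutput
}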
Question 6
Pyramid-based photo editing in Core Image (such as Adobe Camera Raw highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust.
Also, the noise reduction in CIRAWFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image (see the sketch after this list):
downsample it by a factor of two, multiple times, using imageByApplyingAffineTransform,
apply additional CIKernels to each downsampled image as needed,
and use a custom CIKernel to combine the results.
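A minimal sketch of the downsampling step, assuming you then run your own CIKernels on each level and combine the results:

import CoreImage

// Build an image pyramid by repeatedly downsampling by a factor of two.
func buildPyramid(from image: CIImage, levels: Int) -> [CIImage] {
    var pyramid = [image]
    guard levels > 1 else { return pyramid }
    var current = image
    for _ in 1..<levels {
        current = current.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        pyramid.append(current)
    }
    return pyramid
}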
Question 7
Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes. UIImagePickerController provides the system UI for capturing photos and movies (see the sketch below).
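A minimal presentation sketch, assuming the presenting view controller adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate and the app declares NSCameraUsageDescription:

import UIKit

func presentSystemCamera(from presenter: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = .camera
    // The captured media arrives via imagePickerController(_:didFinishPickingMediaWithInfo:).
    picker.delegate = presenter
    presenter.present(picker, animated: true)
}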
Question 8
Hello, my question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image once deferred processing has completed.
The photo will then have to be re-inserted into the photo library as an adjustment.
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha.
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use AVCaptureSession's video data output + AVCapturePhotoOutput, run Vision on the video data output buffers, and capture a photo when a certain scene or text is detected (see the sketch below).
Just be careful to run Vision on the video data output buffers asynchronously so it doesn't cause frame drops.
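A minimal sketch of that pattern, assuming the photo output and video data output are already attached to a running session (the analysis gate and the photo delegate are simplified here):

import AVFoundation
import Vision

final class AutoCaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    private let visionQueue = DispatchQueue(label: "vision.analysis")
    private var isAnalyzing = false   // simplified gate; use proper synchronization in production

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isAnalyzing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isAnalyzing = true
        // Analyze off the delegate queue so the callback returns quickly and the
        // capture pipeline doesn't drop frames; only one buffer is held at a time.
        visionQueue.async { [weak self] in
            guard let self else { return }
            defer { self.isAnalyzing = false }
            let request = VNDetectTextRectanglesRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])
            if let results = request.results, !results.isEmpty {
                // In a real app, hop back onto your session queue before capturing.
                self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
            }
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Handle photo.fileDataRepresentation() here.
    }
}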
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, and scanners.
Read and write, where supported.
Check out the docs to see how to browse connected devices, folders, files, etc. (a browsing sketch follows below).
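A minimal device-browsing sketch with ImageCaptureCore (enumerating files and downloading them is done through ICCameraDevice and its delegate):

import ImageCaptureCore

final class ConnectedDeviceBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device: \(device.name ?? "unknown")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Removed device: \(device.name ?? "unknown")")
    }
}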
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreviewLayer inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance.
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There's a suite of new System Tone Mapper APIs in this year's OSes:
Core Image, ImageKit, Core Animation, and Core Graphics.
For example:
CoreImage: new CISystemToneMap filter.
CoreAnimation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange.
You can go from high, to constrained, to standard.
Tone mapping is provided by the system (CISystemToneMap is the controllable example); see the sketch below.
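A small sketch of constraining dynamic range per context, assuming an HDR UIImage. It uses the pre-existing UIImageView property (preferredImageDynamicRange, iOS 17+); the new CALayer and Core Image tone-mapping APIs named above follow the same idea, but verify their exact spellings against the current SDK:

import UIKit

// Thumbnails/grids: limit HDR headroom so they don't compete with the one-up view.
func configureThumbnail(_ imageView: UIImageView, with hdrImage: UIImage) {
    imageView.image = hdrImage
    imageView.preferredImageDynamicRange = .constrainedHigh
}

// One-up view: allow the full dynamic range of the image.
func configureOneUp(_ imageView: UIImageView, with hdrImage: UIImage) {
    imageView.image = hdrImage
    imageView.preferredImageDynamicRange = .high
}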
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsampleFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide (see the sketch below).
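A minimal sketch, using the key names from the CIEdgePreserveUpsampleFilter documentation (verify them against the current Core Image filter reference):

import CoreImage

// Upsample a low-resolution depth map using the full-resolution RGB image as an
// edge-preserving guide.
func upsampleDepth(_ smallDepth: CIImage, guidedBy rgbImage: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIEdgePreserveUpsampleFilter") else { return nil }
    filter.setValue(rgbImage, forKey: kCIInputImageKey)     // full-resolution guide image
    filter.setValue(smallDepth, forKey: "inputSmallImage")  // low-resolution depth to upsample
    return filter.outputImage
}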
Question 15
For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for a custom camera preview.
For buffering for later processing, make sure to copy the video data output buffers so you don't force the output to drop frames (see the copy sketch below).
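A sketch of one way to copy a delivered pixel buffer so the original can return to the capture pipeline's pool, assuming a planar pixel format such as 420f (a non-planar format would copy the single base address instead):

import CoreVideo
import Foundation

func copyPixelBuffer(_ source: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copyOut)
    guard let destination = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(destination, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    // Copy each plane row by row, since the two buffers may use different strides.
    for plane in 0..<CVPixelBufferGetPlaneCount(source) {
        guard let src = CVPixelBufferGetBaseAddressOfPlane(source, plane),
              let dst = CVPixelBufferGetBaseAddressOfPlane(destination, plane) else { continue }
        let height = CVPixelBufferGetHeightOfPlane(source, plane)
        let srcStride = CVPixelBufferGetBytesPerRowOfPlane(source, plane)
        let dstStride = CVPixelBufferGetBytesPerRowOfPlane(destination, plane)
        let rowBytes = min(srcStride, dstStride)
        for row in 0..<height {
            memcpy(dst + row * dstStride, src + row * srcStride, rowBytes)
        }
    }
    return destination
}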
Question 16
Hello, my question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image once deferred processing has completed.
The photo will then have to be re-inserted into the photo library as an adjustment.
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the image to the output dimensions, whereas cropping afterward yields a smaller output image.
So while digital zoom does crop, it also upscales.
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible defaults: Choose defaults that work well for casual users, but allow those defaults to be modified by photography enthusiasts.
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced a macro mode which automatically switches between lenses to account for the difference in focus distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera or builtInDualWideCamera will automatically switch to macro when available.
Question 20
A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the "physical" camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide your model container (see the sketch after this list).
This has been supported for a while (iOS 18/macOS 15).
For more details, check out the Machine Learning & AI group lab on Thursday.
Use smaller images for better performance.
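A sketch using the long-standing VNCoreMLRequest path; the Swift-only CoreMLRequest mentioned above wraps a model container in the same spirit. `MyClassifier` stands in for your compiled Core ML model class:

import Vision
import CoreML

func classify(cgImage: CGImage) throws {
    let mlModel = try MyClassifier(configuration: MLModelConfiguration()).model  // placeholder model class
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        if let results = request.results as? [VNClassificationObservation] {
            print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
        }
    }
    // Vision scales input to the model's expected size; smaller source images
    // still reduce decode and transfer work.
    request.imageCropAndScaleOption = .centerCrop
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}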
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture spatial video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property (see the sketch after this list).
Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported.
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
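A minimal opt-in sketch, assuming a movie file output on a session whose active device format supports spatial capture (property names as surfaced in Swift in the iOS 18 SDK; verify against the current headers):

import AVFoundation

func enableSpatialVideoCapture(device: AVCaptureDevice, movieOutput: AVCaptureMovieFileOutput) {
    // Spatial video capture is only offered on certain formats of certain devices.
    guard device.activeFormat.isSpatialVideoCaptureSupported,
          movieOutput.isSpatialVideoCaptureSupported else { return }
    movieOutput.isSpatialVideoCaptureEnabled = true
}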
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we currently support JPEG-XL only for ProRAW DNG capture, both in the Camera app and via the AVFoundation APIs for third-party apps.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
Question 24
What's the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on the flash (torch) in low-light scenarios.
Lower the frame rate to improve exposure and reduce noise.
Wait until the capture is in focus, and/or notify your user that they need to get closer (see the sketch below).
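A minimal sketch of the first two suggestions, assuming the device input is already attached to a running session:

import AVFoundation

func configureForLowLightScanning(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Illuminate the scene continuously while scanning.
    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }

    // Cap the frame rate (here at 15 fps) so each frame gets more exposure time;
    // keep the value within the active format's supported frame rate ranges.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
}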
Question 25
Recent iPhone models introduced a macro mode which automatically switches between lenses to account for the difference in focus distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera or builtInDualWideCamera will automatically switch to macro when available.
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs.
Question 2
I'm building a new camera app for pros, and I want to give my users the most unprocessed image possible and as much control over the capture as possible. How can I do that with AVCapture?
For stills: Bayer RAW capture, or ProRAW if you want Apple's processing and dynamic range.
For video: ProRes and Log recording.
Custom exposure settings are available through the APIs (see the sketch below).
Global vs. local tone mapping is also worth considering.
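A minimal sketch of locking a custom exposure, clamped to the active format's supported ranges:

import AVFoundation

func setManualExposure(device: AVCaptureDevice, duration: CMTime, iso: Float) throws {
    guard device.isExposureModeSupported(.custom) else { return }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Clamp the requested shutter duration and ISO to what the active format allows.
    let clampedDuration = CMTimeClampToRange(
        duration,
        range: CMTimeRange(start: device.activeFormat.minExposureDuration,
                           end: device.activeFormat.maxExposureDuration))
    let clampedISO = min(max(iso, device.activeFormat.minISO), device.activeFormat.maxISO)

    device.setExposureModeCustom(duration: clampedDuration,
                                 iso: clampedISO,
                                 completionHandler: nil)
}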
I explored several methods to trigger a 35mm camera connected via USB:
1- ICCameraDevice: Unable to make it work with Canon cameras (details).
2- Canon's EDSDK: Works but is complex to implement.
3- gPhoto2 (command-line): Simple to use but requires gPhoto2 to be installed.
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
What is the purpose of AdjustmentsSecondary.data included in the PHAssetResource for a cleaned-up image?
When using creationRequest.addResource, what should be set for the PHAssetResourceType?
If I set the PHAssetResourceType as follows to create an asset, it appears correctly in the camera roll. However, when attempting to edit the image in the Photos app, the app crashes:
IMG_5332.HEIC → .photo
FullSizeRender.HEIC → .fullSizePhoto
Adjustments.plist → .adjustmentData
AdjustmentsSecondary.data → .adjustmentData
Hello,
I am experiencing slow image retrieval when using the requestImageForAsset:targetSize:contentMode:options:resultHandler: method in my application. The delay is significantly impacting the performance of my app.
Here are the details of my implementation:
for (PHAsset *asset in assets) {
    @autoreleasepool {
        PHImageManager *imageManager = [PHImageManager defaultManager];
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        options.synchronous = YES;
        options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
        options.resizeMode = PHImageRequestOptionsResizeModeNone;
        [imageManager requestImageForAsset:asset
                                targetSize:CGSizeMake(100, 100)
                               contentMode:PHImageContentModeAspectFill
                                   options:options
                             resultHandler:^(UIImage *thumbnail, NSDictionary *info) {
            CameraRollCellDto *cellDto = [[CameraRollCellDto alloc] init];
            cellDto.index = index;
            cellDto.thumbnail = thumbnail;
            cellDto.propertyDate = asset.creationDate;
            if (self.segmentedTorikomi.selectedSegmentIndex == SEG_INDEX_IKKATSU) {
                cellDto.isSelected = YES;
            } else {
                cellDto.isSelected = NO;
            }
            [list addObject:cellDto];
        }];
        index++;
    }
}
Has anyone else encountered this issue? Are there any known solutions or optimizations that can help improve the speed of image retrieval using this method?
Thank you for your assistance.
I'd like to add a share extension to my app (an Action app extension, I think). The extension would appear when users share a photo in the Photos app (and, ideally, Safari). If you tapped my app icon on the share sheet, iOS would pass the photo to my app and switch the user from Photos or Safari to my full app, with the shared photo(s) available for my app to work with.
I know this is possible, because Instagram (a third-party app) works exactly like this. If you look at an image in the Photos app, tap Share and then tap Instagram, iOS will background the Photos app, activate the Instagram app and let you edit and post your photo in the main Instagram app.
It seems like NSExtensionContext#open(_:completionHandler:) might do this if I add a custom URL to my main app, but the documentation for that says:
Each extension point determines whether to support this method, or under which conditions to support this method. In iOS, the Today and iMessage app extension points support this method.
That would rule out an Action, Photo Editing or Share extension. But then how does Instagram do this, and how can I achieve the same in my app?
I know that it's possible for an Action, Photo Editing or Share extension to open as a mini-app on top of the app providing the content. But coordinating the IPC for that is much, much more work (for my particular app) than just switching the user over to the app, with full access to all the functionality and data that my main app usually has access to.
How can I use my RGB Curve points:
let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]
in colorCurvesFilter which I've found in Apple Docs:
func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import Photos

struct ContentView: View {
    let context = CIContext()
    let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!

    var body: some View {
        VStack(spacing: 100) {
            Button("Save Photo From CGImage/UIImage") {
                savePhotoFromUIImage()
            }
            Button("Save Photo From CIImage") {
                savePhotoDirectFromCIImage()
            }
        }.padding(60)
    }

    //convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF
    private func savePhotoFromUIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
            let uiImage = UIImage(cgImage: outputCGImage)
            if let heicData = uiImage.heicData() {
                saveHEIFPhotoToLibrary(imageData: heicData)
            } else {
                print("Failed to convert UIImage to HEIC")
            }
        }
    }

    //convert RAW with CIRAWFilter to CIImage, then to HEIF
    private func savePhotoDirectFromCIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            do {
                let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
                saveHEIFPhotoToLibrary(imageData: heif)
            } catch {
                print("Failed to get HEIF representation from CIContext")
            }
        }
    }

    private func processRAW(url: URL) -> CIImage? {
        guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
        coreRawFilter.extendedDynamicRangeAmount = 2.0 //the issue persists whether this is not set, or set to 0, or set to, say, 2.0
        guard let ciImage = coreRawFilter.outputImage else { return nil }
        return ciImage
    }

    private func saveHEIFPhotoToLibrary(imageData: Data) {
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            let options = PHAssetResourceCreationOptions()
            creationRequest.addResource(with: .photo, data: imageData, options: options)
        }) { success, error in
            if let error = error {
                print("Error saving photo: \(error.localizedDescription)")
            } else {
                print("Photo saved.")
            }
        }
    }
}
Hi! I'm attempting to add purpose string when requesting to access the camera, but for some reason it doesn't seem to be working. Below I've included the alert and the plist.
Based on the iPhone 14 Max camera, implement model recognition and draw a rectangular box around the recognized object. The width and height are calculated using LiDAR and displayed in centimeters on the real-time updated image.
I have an iPad app that I want to run on Apple Silicon macs.
Everything works fine except for VNDocumentCameraViewController. According to the docs this class is available on:
iOS 13.0+, iPadOS 13.0+, Mac Catalyst 13.1+, visionOS 1.0+
yet when I try using it I get "Document camera is not available" on my Mac Studio running macOS 15.2.
Is this expected behaviour?
Thanks
I am new to using Swift for a Mac application. I am trying to control an external UVC-compliant camera's focus and other capabilities. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities.
I've tried IOKit, but it seems complicated and does not return any capabilities or control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me, as device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
    do {
        guard let device = videoDevice else { return }
        try device.lockForConfiguration()
        // Check if focus mode and point of interest are supported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        if device.isFocusPointOfInterestSupported {
            // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
            let focusX = CGFloat(sender.doubleValue)
            let focusPoint = CGPoint(x: focusX, y: 0.5) // Y coordinate is typically 0.5 (centered vertically)
            device.focusPointOfInterest = focusPoint
        } else {
            print("Focus point of interest is not supported on this device.")
        }
        device.unlockForConfiguration()
        // Log focus settings
        print("Focus point: \(device.focusPointOfInterest)")
        print("Focus mode: \(device.focusMode.rawValue)")
    } catch {
        print("Error adjusting focus: \(error)")
    }
}
Any help or advice is much appreciated.
How can I implement the same custom CIFilter as a Lightroom Color Grading tool for shadows, midtones, highlights and global areas?
Dear Apple Developer Forum,
I have a question regarding the AVCaptureDevice on iOS. We're trying to capture photos in the best quality possible along with depth data with the highest accuracy possible. We were delighted when we saw AVCaptureDevice could be initialized with the AVMediaType=.depthData which works as expected (depthData is a part of the AVCapturePhoto). When setting to AVMediaType=.video, we still receive depth data (of same quality according to our own internal tests). That confused us.
Mind you, we set the device format and depth format as well:
private func getDeviceFormat() throws -> AVCaptureDevice.Format {
    // Ensures high video format and an appropriate color profile.
    let format = camera?.formats.first(where: {
        $0.isHighPhotoQualitySupported &&
        $0.supportedDepthDataFormats.count > 0 &&
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    })

    // Check and see if it's available.
    guard format != nil else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }

    return format!
}

private func getDepthDataFormat(for format: AVCaptureDevice.Format) throws -> AVCaptureDevice.Format {
    // Access the depth format.
    let depthDataFormat = format.supportedDepthDataFormats.first(where: {
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat32
    })

    // Check if it exists
    guard depthDataFormat != nil else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }

    // Returns it.
    return depthDataFormat!
}
We're wondering, what steps we can take to ensure the best quality photo, along with the most accurate depth data? What properties are the most important, which have an effect, which don't? Are there any ways we can optimize our current configuration? We find it difficult as there's very limited guides and explanations on the media subtypes, for example kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. Is it the best? Is it the best for our use case of high quality photo + most accurate depth data?
Important comment: Our App only runs on iPhone 14 Pro, iPhone 15 Pro, iPhone 16 Pro on the latest iOS versions.
We hope someone with greater knowledge at Apple can help us and guide us on how we can have the photos of best quality and depth data with most accuracy.
Thank you very much!
Kind regards.
We are encountering a critical, intermittently occurring crash issue when accessing photo data using PHAssetResourceManager.writeDataForAssetResource on iOS 18. The problem does not arise on iOS 17 or earlier versions.
We have been unable to identify a consistent reproduction path. Based on user feedback, the issue seems to involve Live Photo and Raw image files.
Our investigation has revealed that the crash occurs in the +[PISchema identifier] method of the PhotoImaging Framework. When called manually, this method causes a crash on iOS 18 but works without issues on iOS 17.
Reproduction Steps:
1.Fetch PHAsset.
2.Get PHAssetResource by [PHAssetResource assetResourcesForAsset:].
3.Call [PHAssetResourceManager writeDataForAssetResource:toFile:options:completionHandler:].
Crash Log:
Incident Identifier: CFD60092-FDB1-43B4-BA42-3F507F7B8B96
CrashReporter Key: 260b4780989083a54e0cb451930fe9a3bed64862
Hardware Model: iPhone13,4
AppStoreTools: 16C5031b
AppVariant: 1:iPhone13,4:18
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Date/Time: 2025-02-15 19:07:57.7054 +0800
Launch Time: 2025-02-15 19:07:55.4106 +0800
OS Version: iPhone OS 18.3.1 (22D72)
Release Type: User
Baseband Version: 5.20.03
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: mCloud_iPhone [11109]
Triggered by Thread: 11
Application Specific Information:
abort() called
Thread 11 name: Dispatch queue: com.apple.NSXPCConnection.m-user.com.apple.photos.service
Thread 11 Crashed:
0 libsystem_kernel.dylib 0x1e850b2d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x221b4959c pthread_kill + 268
2 libsystem_c.dylib 0x19ec24b08 abort + 128
3 NeutrinoCore 0x1bdcdbdec -[NUAssertionPolicyAbort notifyAssertion:] + 68
4 NeutrinoCore 0x1bdcdbbf4 -[NUAssertionPolicyComposite notifyAssertion:] + 160
5 NeutrinoCore 0x1bdcdc098 -[NUAssertionPolicyUnique notifyAssertion:] + 176
6 NeutrinoCore 0x1bdcdb524 -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7 NeutrinoCore 0x1bdcdc4bc _NUAssertFailHandler + 176
8 NeutrinoCore 0x1bdc8ea98 -[NUIdentifier initWithNamespace:name:version:] + 2352
9 NeutrinoCore 0x1bdc8eba8 -[NUIdentifier initWithName:version:] + 84
10 NeutrinoCore 0x1bdc8ec10 -[NUIdentifier initWithName:] + 68
11 PhotoImaging 0x1bda54ce4 +[PISchema identifier] + 36
12 PhotoImaging 0x1bda550fc +[PISchema registeredPhotosSchemaIdentifier] + 32
13 PhotoImaging 0x1bd9d7128 +[PIPhotoEditHelper newComposition] + 28
14 PhotoImaging 0x1bd940798 +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15 PhotoImaging 0x1bd9412ec +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16 PhotoLibraryServices 0x1afabf75c -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1856
17 PhotoLibraryServices 0x1afabffe4 +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18 Photos 0x1af4ac360 __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19 Photos 0x1af4ac67c -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20 Photos 0x1af4ac4b0 -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21 Photos 0x1af4abb8c -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 404
22 Photos 0x1af4a911c -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23 Photos 0x1af2c1d10 -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24 Photos 0x1af2c11e8 -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 88
25 Photos 0x1af505184 -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26 Photos 0x1af5050a0 __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27 PhotoLibraryServicesCore 0x1b001be8c __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.86 + 864
28 CoreFoundation 0x196dd8e34 __invoking___ + 148
29 CoreFoundation 0x196dd7e7c -[NSInvocation invoke] + 428
30 Foundation 0x195a64ae0 __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT__ + 16
31 Foundation 0x195a63514 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 532
32 Foundation 0x195a6653c __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33 libxpc.dylib 0x221babb80 _xpc_connection_reply_callout + 116
34 libxpc.dylib 0x221b9e2d0 _xpc_connection_call_reply_async + 80
35 libdispatch.dylib 0x19eb6b028 _dispatch_client_callout3 + 20
36 libdispatch.dylib 0x19eb88b64 _dispatch_mach_msg_async_reply_invoke + 340
37 libdispatch.dylib 0x19eb7242c _dispatch_lane_serial_drain + 352
38 libdispatch.dylib 0x19eb73158 _dispatch_lane_invoke + 432
39 libdispatch.dylib 0x19eb7e38c _dispatch_root_queue_drain_deferred_wlh + 288
40 libdispatch.dylib 0x19eb7dbd8 _dispatch_workloop_worker_thread + 540
41 libsystem_pthread.dylib 0x221b44680 _pthread_wqthread + 288
42 libsystem_pthread.dylib 0x221b42474 start_wqthread + 8
Hello, my company is developing a product that will send data to/from the phone over cable and Wi-Fi. I have three questions:
Do we need an MFi authentication chip in our product if we plan to send video and commands to the iPhone/iPad over USB or Lightning cable?
Likewise, do we need an MFI authentication chip for communication over Wi-Fi? (Informal research suggests that the answer is no to this one.)
And, do we even still need MFI certification at all for Wi-Fi comms? (We are not using HomeKit.)
Thank you!
I am writing an iOS app to present a slide show of assets in a Photo album, in a random order, including videos and live photos. I have got it all working quite nicely but for a Live Photo, I need to know what effect is selected (Live, Loop, Bounce, Long Exposure, Live Off) to display the image correctly. I can't find any mention of getting this information in the documentation. Anyone know how to do this? Thanks in advance.
Adrian.
(Xcode 16.1 iOS 18.0)
Hi,
Currently I am developing a 3D reconstruction project, which requires images to be distortion-free (rectilinear) and with known intrinsics.
The session I am developing with uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false (so I can get the intrinsic matrix of the images), isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false (so I get both constituent images), and isCameraCalibrationDataDeliveryEnabled set to true (to actually get the calibration data).
The distortion correction parameters such as lensDistortionLookupTable are used.
The 42 coefficients mapping array is used as described in the AVCameraCalibrationData header file. A simple piecewise linear interpolation.
There are two questions I would like to get support on:
A way to set the calibration parameters in each image.
I have an approach that sets the parameters in the kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? I feel like this is a bit dirty and there might be a better and neat approach.
For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array.
For example (last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem comes when the complete array (including the zeros) is used to correct the image: the result is warped like a circle near the edges, which is completely wrong.
In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accommodated, the image looks better (although not as rectilinear as an image from the iPhone's Camera app), but definitely less distorted.
Including zeros (full array):
Excluding zeros (array size changed):
Am I missing an important point in the usage of the lensDistortionLookupTable where this case is addressed (zeros at the end)?
What are the criteria for shrinking/excluding elements of the array?
Any advice is very much welcome.
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via a USB-C or Lightning connector) on an iPad/iPhone?
I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad Gen 10 (18.3.1), and both return false for:
// returns false on iPhone 14, iPad gen 10
print(AVExternalStorageDeviceDiscoverySession.isSupported)
The following code returns null, when I try to access the external storage discovery session.
// returns null on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)
The following returns false, without displaying a permission dialog:
AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})
What type of iOS devices are supported by AVExternalStorageDeviceDiscoverySession?
What situations has it been used for (e.g., connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)?
Is there any sample code for using the AVExternalStorageDevice API?
I am following the Apple sample code and trying to add a manual focus lens position slider:
@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }

    self.cameraControlFocusSlider = nil

    //Focus Slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            //Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}
So there are these AVCaptureSessionControlsDelegate methods:
final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}
So when self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user via the app UI. Is there a way to see if self.cameraControlFocusSlider is active or being used?
Please note that I will have more than one AVCaptureSlider in the final code.
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds).
Two permissions have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error)
            completion(success, error)
        }
    }
}

guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "cant find Live Photo ")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        xxxxx
    } else {
        xxxxx
    }
}
Really need help, thanks