(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads and group IDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
min focus distance
newer phones have multiple lenses
automatic switching behavior
Use a virtual device like the builtInDualWideCamera or builtInTripleCamera, rather than just the builtInWideAngleCamera
Set the videoZoomFactor to 2. You're done (see the sketch below).
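A minimal sketch of that setup, assuming a back dual-wide device and a metadata output for QR codes (the helper name is illustrative only):
```swift
import AVFoundation

// Sketch: a virtual device can switch to the ultra-wide lens for close focus,
// and videoZoomFactor = 2 keeps the field of view equivalent to the wide lens.
func makeQRScanningSession(delegate: AVCaptureMetadataOutputObjectsDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                               for: .video, position: .back) else {
        throw NSError(domain: "Camera", code: -1) // no dual-wide camera on this device
    }
    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else { throw NSError(domain: "Camera", code: -2) }
    session.addInput(input)

    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { throw NSError(domain: "Camera", code: -3) }
    session.addOutput(output)
    output.setMetadataObjectsDelegate(delegate, queue: .main)
    output.metadataObjectTypes = [.qr]

    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0   // match the single wide camera's field of view
    device.unlockForConfiguration()
    return session
}
```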
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead tries to get as close as possible to the real color of the scene, regardless of the illumination.
Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRIs, skin conditions, X-rays, and so on.
It's a writable and also readable format on all OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics APIs.
For Core Graphics there is a new DICOM entry in the property dictionary which includes all the available and defined DICOM properties in a file. Finder will also display those in the Info panel.
(Addressing why a developer would want to use it:) It's not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may also want to share with health providers or your doctors.
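For the Constant Color piece introduced last year, a minimal sketch of opting in with AVCapturePhotoOutput is below; the property names reflect the iOS 18 API as I understand it, and the new DICOM delivery is not shown:
```swift
import AVFoundation

// Sketch: enable Constant Color on the output, then request it per photo.
func makeConstantColorSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    if photoOutput.isConstantColorSupported {
        photoOutput.isConstantColorEnabled = true
    }
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isConstantColorEnabled = photoOutput.isConstantColorEnabled
    // Also deliver a conventional photo in case the constant color
    // estimate ends up with low confidence.
    settings.isConstantColorFallbackPhotoDeliveryEnabled = settings.isConstantColorEnabled
    return settings
}
```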
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution)
Lidar vs Dual, etc:
Lidar: Best for absolute depth, real world scale and computer vision
Dual, etc: relative, disparity-based, less power, photo effects
Also see the WWDC 2022 session "Discover advancements in iOS camera capture: Depth, focus and multitasking"
Question 2
Can true depth and lidar camera run at 60fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps.
Question 3
What’s the first class way to use PhotoKit to reimplement a high performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells)
use the PHCachingImageManager to get media content delivered before you need to display it
specify the size you need for grid sized display
set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat and PHImageRequestOptionsResizeModeFast
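A minimal sketch of that caching setup; the target size and class shape are assumptions, and the important parts are PHCachingImageManager, a grid-sized target, and the fast delivery/resize options:
```swift
import Photos
import UIKit

final class GridImageSource {
    private let cachingManager = PHCachingImageManager()
    private let targetSize = CGSize(width: 200, height: 200) // grid cell size in pixels

    private var options: PHImageRequestOptions {
        let options = PHImageRequestOptions()
        options.deliveryMode = .fastFormat // trade fidelity for speed
        options.resizeMode = .fast         // approximate resizing is fine for a grid
        return options
    }

    // Call as cells approach the viewport (e.g. from a prefetching callback).
    func startCaching(_ assets: [PHAsset]) {
        cachingManager.startCachingImages(for: assets, targetSize: targetSize,
                                          contentMode: .aspectFill, options: options)
    }

    func stopCaching(_ assets: [PHAsset]) {
        cachingManager.stopCachingImages(for: assets, targetSize: targetSize,
                                         contentMode: .aspectFill, options: options)
    }

    func requestThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
        cachingManager.requestImage(for: asset, targetSize: targetSize,
                                    contentMode: .aspectFill, options: options) { image, _ in
            completion(image)
        }
    }
}
```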
Question 4
For rendering a live preview of the video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path
Use a video data output (VDO) + AVSampleBufferDisplayLayer if you need to modify the image data
SwiftUI Image is optimized for static image content
Question 5
Is there a way to configure the AVFoundation builtInLiDARDepthCamera mode to provide a depth map as accurate as ARKit at close range?
The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps
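A short sketch of turning that filtering on (the helper name is illustrative):
```swift
import AVFoundation

// Enable the depth data output's built-in filtering, which smooths noise
// and fills invalid depth values.
func addFilteredDepthOutput(to session: AVCaptureSession,
                            delegate: AVCaptureDepthDataOutputDelegate,
                            queue: DispatchQueue) -> AVCaptureDepthDataOutput? {
    let depthOutput = AVCaptureDepthDataOutput()
    guard session.canAddOutput(depthOutput) else { return nil }
    session.addOutput(depthOutput)
    depthOutput.isFilteringEnabled = true   // smooth noise + fill invalid values
    depthOutput.setDelegate(delegate, callbackQueue: queue)
    return depthOutput
}
```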
Question 6
Pyramid-based photo editing in Core Image (such as Adobe Camera Raw's highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust
Also the noise reduction in the CIRawFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image:
downsample it by 2x multiple times using imageByApplyingAffineTransform
apply additional CIKernels to each downsampled image as needed.
use a custom CIKernel to combine the results.
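A minimal sketch of that pyramid structure; processLevel and combine stand in for your own CIKernels and are assumptions here (in Swift, imageByApplyingAffineTransform is transformed(by:)):
```swift
import CoreImage

func pyramidProcess(_ input: CIImage,
                    levels: Int,
                    processLevel: (CIImage, Int) -> CIImage,
                    combine: ([CIImage]) -> CIImage) -> CIImage {
    var pyramid: [CIImage] = []
    var current = input
    for level in 0..<levels {
        // Downsample by 2x at each level.
        current = current.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        // Apply your custom CIKernels (or built-in filters) to this level.
        pyramid.append(processLevel(current, level))
    }
    // Combine the per-level results with a custom CIKernel.
    return combine(pyramid)
}
```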
Question 7
Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.
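A minimal sketch of presenting it; the presenter is assumed to adopt the two delegate protocols and receives the photo in imagePickerController(_:didFinishPickingMediaWithInfo:):
```swift
import UIKit

func presentSystemCamera(from presenter: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = .camera
    picker.delegate = presenter
    presenter.present(picker, animated: true)
}
```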
Question 8
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point
The photo will have to be re-inserted into the Photos library as an adjustment
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use an AVCaptureSession's video data output (VDO) + AVCapturePhotoOutput, run Vision on the VDO buffers, and capture a photo when a certain scene or text is detected.
Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops (see the sketch below).
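A sketch of that trigger, using text detection as the example condition; the class shape is an assumption and state handling is simplified for brevity:
```swift
import AVFoundation
import Vision

final class AutoCaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let photoOutput = AVCapturePhotoOutput()
    weak var photoDelegate: AVCapturePhotoCaptureDelegate? // your capture delegate
    private let visionQueue = DispatchQueue(label: "vision.analysis")
    private var isAnalyzing = false
    private var hasTriggered = false

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isAnalyzing, !hasTriggered,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let delegate = photoDelegate else { return }
        isAnalyzing = true
        // Analyze asynchronously so the video data output callback never
        // blocks; blocking it would cause frame drops.
        visionQueue.async { [weak self] in
            defer { self?.isAnalyzing = false }
            let request = VNRecognizeTextRequest()
            request.recognitionLevel = .fast
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])
            if let results = request.results, !results.isEmpty {
                self?.hasTriggered = true
                self?.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: delegate)
            }
        }
    }
}
```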
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, scanners
read and write, where supported
check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside the appropriate SwiftUI representables should deliver equivalent performance
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There’s a suite of new System Tone Mapper APIs in this year’s OSes
Core Image, ImageKit, Core Animation, and Core Graphics
For example:
CoreImage: new CISystemToneMap filter.
CoreAnimation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange
Can go from high to constrained to standard
Tone mapping is provided by the system (CISystemToneMap for controllable example)
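For example, a grid/one-up toggle can be sketched with the existing UIImageView property (iOS 17+); the new CALayer.preferredDynamicRange mentioned above follows the same idea, and whether this exact call animates is an assumption on my part:
```swift
import UIKit

// Use the full dynamic range in a one-up view, a constrained range in grids.
func configureDynamicRange(for imageView: UIImageView, isOneUp: Bool) {
    UIView.animate(withDuration: 0.25) {
        imageView.preferredImageDynamicRange = isOneUp ? .high : .constrainedHigh
    }
}
```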
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.
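A minimal sketch of that guided upsample (parameter keys as I understand them for CIEdgePreserveUpsampleFilter):
```swift
import CoreImage

// The full-resolution RGB image guides the upscaling of the low-res depth map.
func upsampleDepth(_ smallDepth: CIImage, guidedBy rgb: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIEdgePreserveUpsampleFilter") else { return nil }
    filter.setValue(rgb, forKey: kCIInputImageKey)          // full-resolution guide
    filter.setValue(smallDepth, forKey: "inputSmallImage")  // low-resolution depth map
    return filter.outputImage
}
```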
Question 15
For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, ensure you make copies of the VDO buffers so that holding onto them doesn't cause the output to drop frames (a copy sketch follows below)
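A sketch of one way to copy a delivered pixel buffer so it can be kept around; this is a plain CPU copy and ignores buffer attachments, which may or may not matter for your processing:
```swift
import Foundation
import CoreVideo

func copyPixelBuffer(_ source: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copyOut)
    guard let copy = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(copy, [])
    defer {
        CVPixelBufferUnlockBaseAddress(copy, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    func copyRows(src: UnsafeMutableRawPointer, dst: UnsafeMutableRawPointer,
                  height: Int, srcBytesPerRow: Int, dstBytesPerRow: Int) {
        // Copy row by row in case the two buffers have different row strides.
        for row in 0..<height {
            memcpy(dst.advanced(by: row * dstBytesPerRow),
                   src.advanced(by: row * srcBytesPerRow),
                   min(srcBytesPerRow, dstBytesPerRow))
        }
    }

    if CVPixelBufferIsPlanar(source) {
        for plane in 0..<CVPixelBufferGetPlaneCount(source) {
            guard let src = CVPixelBufferGetBaseAddressOfPlane(source, plane),
                  let dst = CVPixelBufferGetBaseAddressOfPlane(copy, plane) else { continue }
            copyRows(src: src, dst: dst,
                     height: CVPixelBufferGetHeightOfPlane(source, plane),
                     srcBytesPerRow: CVPixelBufferGetBytesPerRowOfPlane(source, plane),
                     dstBytesPerRow: CVPixelBufferGetBytesPerRowOfPlane(copy, plane))
        }
    } else if let src = CVPixelBufferGetBaseAddress(source),
              let dst = CVPixelBufferGetBaseAddress(copy) {
        copyRows(src: src, dst: dst,
                 height: CVPixelBufferGetHeight(source),
                 srcBytesPerRow: CVPixelBufferGetBytesPerRow(source),
                 dstBytesPerRow: CVPixelBufferGetBytesPerRow(copy))
    }
    return copy
}
```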
Question 16
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point
The photo will have to be re-inserted into the Photos library as an adjustment
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the cropped region to the output dimensions, whereas cropping afterward yields a smaller output image
So while digital zoom does crop, it also upscales
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using the builtInTripleCamera or builtInDualWideCamera virtual devices will automatically switch to macro when available
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide your model container
This has been supported for a while (since iOS 18/macOS 15)
For more details, go to the Machine Learning & AI group lab on Thursday
Use smaller images for better performance (see the sketch below)
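A sketch using the long-standing VNCoreMLRequest path; the newer Swift-only Vision API (CoreMLRequest, mentioned above) follows the same pattern of wrapping a model container and performing a request on an image. MyClassifier is a placeholder for your compiled Core ML model class:
```swift
import Vision
import CoreML

func classify(pixelBuffer: CVPixelBuffer,
              completion: @escaping ([VNClassificationObservation]) -> Void) throws {
    let coreMLModel = try MyClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        completion(request.results as? [VNClassificationObservation] ?? [])
    }
    // The input is scaled to the model's expected size; smaller inputs run faster.
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```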
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
Not all device formats support spatial capture, check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
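A short sketch of that opt-in; the property names follow the iOS 18 spatial video APIs as I understand them:
```swift
import AVFoundation

func enableSpatialVideoIfSupported(device: AVCaptureDevice,
                                   movieOutput: AVCaptureMovieFileOutput) {
    guard device.activeFormat.isSpatialVideoCaptureSupported else {
        print("The active format does not support spatial video capture")
        return
    }
    if movieOutput.isSpatialVideoCaptureSupported {
        movieOutput.isSpatialVideoCaptureEnabled = true
    }
}
```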
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What’s the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on flash in low-light scenarios
Lower framerate to improve exposure and reduce noise
Wait until the capture is in focus/notify your user that they need to get closer
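A sketch of the first two adjustments (torch plus a lower frame rate); check the active format's supported frame rate ranges before applying a fixed duration:
```swift
import AVFoundation

func configureForLowLightScanning(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }
    // Cap capture at 15 fps; longer frame durations mean longer exposures
    // and less noise in dim scenes. 15 fps is an assumption, not a recommendation.
    let frameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
}
```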
Question 25
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using the builtInTripleCamera or builtInDualWideCamera virtual devices will automatically switch to macro when available
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs
Question 2
I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and the most control over the capture as possible. How can I do that with AVCapture?
For stills: capture Bayer RAW for the most un-processed image data, or ProRAW if you want Apple's processing and dynamic range (a sketch follows below)
For video: consider ProRes Log
Custom exposure settings are available through the APIs
Global/local tone mapping is also worth discussing
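A sketch of requesting a Bayer RAW still; to get ProRAW instead, enable isAppleProRAWEnabled on the output and pick a format for which AVCapturePhotoOutput.isAppleProRAWPixelFormat(_:) returns true:
```swift
import AVFoundation

func makeBayerRAWSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: {
        AVCapturePhotoOutput.isBayerRAWPixelFormat($0)
    }) else {
        return nil // the current configuration offers no Bayer RAW format
    }
    // Pair the RAW frame with a processed HEVC image for display.
    return AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                  processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc])
}
```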
A functioning multiplatform app, which includes use of Continuity Camera on an M1 Mac mini running Sequoia 15.5, works correctly capturing photos with AVCapturePhoto. However, that app (and a test app just for Continuity Camera) crashes at the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) using Swift 6 (but I also tried 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.
The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the CapturePhoto code is run that the crash occurs.
The relevant capture code is:
func capturePhoto() {
let captureSettings = AVCapturePhotoSettings()
captureSettings.flashMode = .auto
photoOutput.maxPhotoQualityPrioritization = .quality
photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
print("**** CameraManager: capturePhoto")
}
and the delegate callbacks are:
class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
nonisolated(unsafe) static let shared = PhotoDelegate()
// MARK: - Delegate callbacks
func photoOutput(
_ output: AVCapturePhotoOutput,
didFinishProcessingPhoto photo: AVCapturePhoto,
error: (any Error)?
) {
print("**** CameraManager: didFinishProcessingPhoto")
guard let pData = photo.fileDataRepresentation() else {
print("**** photoOutput is empty")
return
}
print("**** photoOutput data is \(pData.count) bytes")
}
func photoOutput(
_ output: AVCapturePhotoOutput,
willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
) {
print("**** CameraManager: willBeginCaptureFor")
}
func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
print("**** CameraManager: willCaptureCapturePhotoFor")
}
}
The crash report significant parts are.....
Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]
VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 100175000-10017f000 [ 40K] r-x/r-x SM=COW ...tinuityCamera
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff803aed552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff803afb6cd mach_msg2_internal + 78
2 libsystem_kernel.dylib 0x7ff803af4584 mach_msg_overwrite + 692
3 libsystem_kernel.dylib 0x7ff803aed83a mach_msg + 19
4 CoreFoundation 0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5 CoreFoundation 0x7ff803c06a10 __CFRunLoopRun + 1365
6 CoreFoundation 0x7ff803c05e51 CFRunLoopRunSpecific + 560
7 HIToolbox 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8 HIToolbox 0x7ff80d694d4e ReceiveNextEventCommon + 657
9 HIToolbox 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10 AppKit 0x7ff806ca59d8 _DPSNextEvent + 858
11 AppKit 0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12 AppKit 0x7ff806c96ef7 -[NSApplication run] + 586
13 AppKit 0x7ff806c6b111 NSApplicationMain + 817
14 SwiftUI 0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15 SwiftUI 0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16 SwiftUI 0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17 ContinuityCamera 0x10017b49e 0x100175000 + 25758
18 dyld 0x7ff8037d1418 start + 1896
Thread 1:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0 ??? 0x0 ???
1 AVFCapture 0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2 CoreMediaIO 0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3 libxpc.dylib 0x7ff803875b33 _xpc_connection_reply_callout + 36
4 libxpc.dylib 0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5 libdispatch.dylib 0x7ff80398b099 _dispatch_client_callout3 + 8
6 libdispatch.dylib 0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7 libdispatch.dylib 0x7ff803991088 _dispatch_lane_serial_drain + 393
8 libdispatch.dylib 0x7ff803991d6c _dispatch_lane_invoke + 417
9 libdispatch.dylib 0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10 libsystem_pthread.dylib 0x7ff803b28c55 _pthread_wqthread + 327
11 libsystem_pthread.dylib 0x7ff803b27bbf start_wqthread + 15
Of course, the MacBook Pro is an old device, but Continuity Camera works with the installed Photo Booth app, so it's possible.
Any thoughts on solving this situation would be appreciated.
Regards, Michaela
I'd like to add a share extension to my app (an Action app extension, I think). The extension would appear when users share a photo in the Photos app (and, ideally, Safari). If you tapped my app icon on the share sheet, iOS would pass the photo to my app and switch the user from Photos or Safari to my full app, with the shared photo(s) available for my app to work with.
I know this is possible, because Instagram (a third-party app) works exactly like this. If you look at an image in the Photos app, tap Share and then tap Instagram, iOS will background the Photos app, activate the Instagram app and let you edit and post your photo in the main Instagram app.
It seems like NSExtensionContext#open(_:completionHandler:) might do this if I add a custom URL to my main app, but the documentation for that says:
Each extension point determines whether to support this method, or under which conditions to support this method. In iOS, the Today and iMessage app extension points support this method.
That would rule out an Action, Photo Editing or Share extension. But then how does Instagram do this, and how can I achieve the same in my app?
I know that it's possible for an Action, Photo Editing or Share extension to open as a mini-app on top of the app providing the content. But coordinating the IPC for that is much, much more work (for my particular app) than just switching the user over to the app, with full access to all the functionality and data that my main app usually has access to.
AVCaptureVideoDataOutput.preparesCellularRadioForNetworkConnection requires com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning. But I cannot acquire its entitlement. I can't find its entitlement on 'Certificates, Identifiers & Profiles'. Any solutions?
Provisioning profile "iOS Team Provisioning Profile: ......" doesn't include the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement.
Hi Apple Engineer,
My app uses the ImageCapture framework to connect to DSLR cameras. Before iOS 18 this worked, but after I upgraded my iPhone and iPad, I found my app can't connect to DSLR cameras. Opening Settings -> Privacy & Security -> Files and Folders permissions, I can't find my app. I swear it worked before iOS 18.
I find other developers have the same problem.
https://forums.developer.apple.com/forums/thread/756960 .
https://developer.apple.com/forums/thread/765768.
I also found a way to reproduce this problem on iOS 18:
Reset All Settings.
Can you help me with this problem, or tell me how to use the API properly? Looking forward to your reply. Thank you very much.
Topic:
Media Technologies
SubTopic:
Photos & Camera
I am new to using Swift for a Mac application. I am trying to control an external UVC-compliant camera's focus and other capabilities. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities.
I've tried IOKit, but this seems complicated, and it does not return any capabilities or control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me, as device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
do {
guard let device = videoDevice else { return }
try device.lockForConfiguration()
// Check if focus mode and point of interest are supported
if device.isFocusModeSupported(.locked) {
device.focusMode = .locked
}
if device.isFocusPointOfInterestSupported {
// Map the slider value (0.0 to 1.0) to the focus point's X coordinate
let focusX = CGFloat(sender.doubleValue)
let focusPoint = CGPoint(x: focusX, y: 0.5) // Y coordinate is typically 0.5 (centered vertically)
device.focusPointOfInterest = focusPoint
} else {
print("Focus point of interest is not supported on this device.")
}
device.unlockForConfiguration()
// Log focus settings
print("Focus point: \(device.focusPointOfInterest)")
print("Focus mode: \(device.focusMode.rawValue)")
} catch {
print("Error adjusting focus: \(error)")
}
}
Any help or advice is much appreciated.
How can I use my RGB Curve points:
let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]
in colorCurvesFilter which I've found in Apple Docs:
func colorCurves(inputImage: CIImage) -> CIImage {
let colorCurvesEffect = CIFilter.colorCurves()
colorCurvesEffect.inputImage = inputImage
colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
colorCurvesEffect.curvesData = Data(
bytes: [Float32]([
0.0,0.0,0.0,
0.8,0.8,0.8,
1.0,1.0,1.0
]), count: 36)
colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
return colorCurvesEffect.outputImage!
}
I have a wallpaper app, and I need Photos access. When I try to add it in Capabilities, I can't find it. Is there any solution?
I extracted the gain map info from an image using
let url = Bundle.main.url(forResource: "IMG_1181", withExtension: "HEIC")
let source = CGImageSourceCreateWithURL(url! as CFURL, nil)
let portraitData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source!, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as! [AnyHashable : Any]
let metaData = portraitData[kCGImageAuxiliaryDataInfoMetadata] as! CGImageMetadata
Then I printed all the metadata tags
func printMetadataProperties(from metadata: CGImageMetadata) {
guard let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
return
}
for tag in tags {
if let prefix = CGImageMetadataTagCopyPrefix(tag) as String?,
let namespace = CGImageMetadataTagCopyNamespace(tag) as String?,
let key = CGImageMetadataTagCopyName(tag) as String?,
let value = CGImageMetadataTagCopyValue(tag){
print("Namespace: \(namespace), Key: \(key), Prefix: \(prefix), value: \(value)")
} else {
}
}
}
//Namespace: http://ns.apple.com/ImageIO/1.0/, Key: hasXMP, Prefix: iio, value: True
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapVersion, Prefix: HDRGainMap, value: 131072
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapHeadroom, Prefix: HDRGainMap, value: 3.586325
I want to create a new CGImageMetadata and tags.
But when it comes to the HDR tags, it always fails to add them to the metadata.
let tag = CGImageMetadataTagCreate(
"http://ns.apple.com/HDRGainMap/1.0/" as CFString,
"HDRGainMap" as CFString,
"HDRGainMapHeadroom" as CFString,
.default,
3.56 as CFNumber
)
let path = "\(HDRGainMap):\(HDRGainMapHeadroom)" as CFString
let success = CGImageMetadataSetTagWithPath(metadata, nil, path, tag)// always false
The hasXMP works fine.
Is HDR a private dict for Apple?
Hello,
I’ve encountered an issue where a green line appears only in the thumbnail view of an image in the iPhone photo gallery. The green line is not present in the actual image itself.
Here are some additional details:
• The issue occurs only when saving the image as a JPEG. When saved as PNG, the green line does not appear.
• The green line also shows up when using the PHAsset method requestImage(for:targetSize:). Depending on the targetSize, the resulting image may contain the green line.
• Interestingly, this issue does not appear on iPhone Xs Max running iOS 15.2.1. However, the green line does appear on iPhone 15 Pro Max running iOS 18.0.1 when viewing the same image.
I have attached the problematic image for your reference.
Following images are the screen captures that shows the issue occurring on my iPhone.
iPhone 15 pro max (iOS 18.0.1)
iPhone Xs Max (iOS 15.2.1 )
Could this be related to a display or gallery app issue on iOS? Any advice or solutions would be greatly appreciated.
Thank you!
Hi. I encounter some random crashes of my camera app. After some investigation, I found that it's terminated by the system; the crash log was generated, but the information is not very useful. Here is the log found via the Console app.
Termination & Crash log
"Camera not actively used; AVCaptureEventInteraction not installed":
Received termination request from [osservice<com.apple.SpringBoard>:10931] on <RBSProcessPredicate <RBSProcessInstancePredicate| [app<com.juniperphoton.PhotonCam]>> with context <RBSTerminateContext| explanation:Capture Application Requirements Unmet: "Camera not actively used; AVCaptureEventInteraction not installed" reportType:CrashLog maxTerminationResistance:Interactive>
The crash log exported from the device will have some common information like:
It's an EXC_CRASH (SIGKILL) type with no termination reason.
Exception Type: EXC_CRASH (SIGKILL)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: RUNNINGBOARD 0
It's triggered by the main thread, but it seems to be waiting for an event to process.
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x1ee165788 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1ee168e98 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x1ee168db0 mach_msg_overwrite + 424
3 libsystem_kernel.dylib 0x1ee168bfc mach_msg + 24
4 CoreFoundation 0x19cbe47f4 __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19cbe3ea0 __CFRunLoopRun + 1212
6 CoreFoundation 0x19cc36274 CFRunLoopRunSpecific + 588
7 GraphicsServices 0x1e9d6d4c0 GSEventRunModal + 164
8 UIKitCore 0x19f783480 -[UIApplication _run] + 816
9 UIKitCore 0x19f3a9410 UIApplicationMain + 340
10 UIKitCore 0x19fae4bb0 0x19f394000 + 7670704
11 PhotonCam 0x1002e7e3c 0x1002cc000 + 114236
12 dyld 0x1c2d5ade8 start + 2724
Address size fault on the main thread
Thread 0 crashed with ARM Thread State (64-bit):
...
far: 0x0000000000000000 esr: 0x56000080 Address size fault
I once tried to reproduce this issue with the app attached to the debugger, and it says:
Terminated due to signal 9
When the crash or termination happened, the app:
No AVCaptureSession is running.
The app is in the foreground and users are interacting with some functions like viewing photos or editing photos in the app. When users exit the camera view, like entering the gallery or settings, the camera session will be stopped.
Both TestFlight and Debug build will have the same issue.
No 3rd-party crash reporter is installed (I deliberately disable it in the Debug and TestFlight builds)
It has adopted LockedCameraCapture, but currently it's running as the main app target (if it weren't, my app would show an unlock button, so I can confirm this)
Also, when it comes to the memory consumption, there is no JetsamEvent around the crash time.
Device and app information
Additionally, some information about the tech stack and the current state of my device and my app:
iPhone 16 Pro with iOS 18.2 Beta 3.
The app is a camera based app(it's PhotonCam and you can find it on the App Store), its main functionality is the camera feature using AVFoundation + Core Image + Metal to deliver camera functionality.
It has adopted the Camera Control, AVCaptureEventInteraction and LockedCameraCapture features.
If I remember right, it also occurs in the iOS 18.1 release build, but currently I have no such device to confirm. In iOS 17.x the issue never happened.
Regarding this termination, the first thing that comes to mind is the "watchdog" mechanism that terminates a process running the LockedCameraCapture feature. However, I can confirm that the app is currently running as the main target on the Home Screen.
Has anybody encountered this kind of issue and has found some solutions? Thanks in advance.
I found that when I run my app with Xcode 16 or later, if I enable the floating window feature and then open the system camera, the content in the floating window is not displayed and the floating window shows a black screen. This does not happen with Xcode 15.4; it is the same code, and I don't know why.
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing some steps.
I tried to:
convert my depth map into pixel buffer
create image destination ref and add the image there.
add the auxData (depth map dict)
This is the output:
There is some black space there and my RGB image colour changes
I'm trying to capture the depth map image using true depth camera in iPhone 15 plus. I was able to setup the AVCapture session with AVCaptureDeviceInput as builtInTrueDepthCamera and AVCapturePhotoOutput with isDepthDataDeliveryEnabled set as true. I also manually made the activeDepthDataFormat of AVCapture device to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally I have enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto , embedsPortraitEffectsMatteInPhoto and embedsSemanticSegmentationMattesInPhoto in AVCapturePhotoSettings before capturing the photo using capturePhoto(with: photoSettings, delegate: self) method.
I have checked manually printing the activeDepthDataFormat of AVCapture device. First before setting it by default it is
Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32 the format is
Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
But when I receive the captured photo in
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
The depth map is
Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})
Here it shows hdis instead of hdep; why is it capturing a disparity map instead of a true depth map?
The depth quality is high and depth data accuracy is absolute.
Here is my code
import UIKit
import AVKit
import AVFoundation
class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {
@IBOutlet weak var previewView: UIView!
@IBOutlet weak var resultLbl: UILabel!
private var session = AVCaptureSession()
private var captureDevice: AVCaptureDevice?
private var inputDevice: AVCaptureDeviceInput?
private var photoOutput: AVCapturePhotoOutput?
private var photoSettings: AVCapturePhotoSettings?
private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
self.setupCaptureSession()
}
func setupCaptureSession(){
captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
guard let captureDevice else{
print("ERROR::UNABLE TO SET TRUE DEPTH CAMERA ")
return }
session.beginConfiguration()
do{
inputDevice = try AVCaptureDeviceInput(device: captureDevice)
guard let inputDevice else{
print("ERROR: UNABLE TO SET UP INPUT DEVICE")
return }
if session.canAddInput(inputDevice){
session.addInput(inputDevice)
}
}
catch{
print(error)
}
photoOutput = AVCapturePhotoOutput()
guard let photoOutput else{
print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
return }
if session.canAddOutput(photoOutput){
session.addOutput(photoOutput)
}
session.sessionPreset = .photo
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
print("IS DEPTH ENABLED:: \(photoOutput.isDepthDataDeliveryEnabled)")
session.commitConfiguration()
let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
let depthFormat = availableFormats.filter { format in
let pixelFormatType =
CMFormatDescriptionGetMediaSubType(format.formatDescription)
return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
pixelFormatType == kCVPixelFormatType_DepthFloat32)
}.first
session.beginConfiguration()
try! captureDevice.lockForConfiguration()
captureDevice.activeDepthDataFormat = depthFormat
captureDevice.unlockForConfiguration()
session.commitConfiguration()
self.setupPreviewLayer()
}
func setupPreviewLayer(){
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraPreviewLayer?.videoGravity = .resizeAspectFill
if let cameraPreviewLayer{
self.previewView.layer.addSublayer(cameraPreviewLayer)
cameraPreviewLayer.frame = self.previewView.bounds
}
DispatchQueue.global(qos: .userInteractive).async {
self.session.startRunning()
}
}
@IBAction func captureBtnPressed(_ sender: Any) {
photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecType.jpeg])
guard let photoSettings else{
print("ERROR: UNABLE TO SETUP PHOTO SETTINGS")
return
}
guard let photoOutput else{
print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
return
}
photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
photoSettings.embedsDepthDataInPhoto = true
photoSettings.embedsPortraitEffectsMatteInPhoto = true
photoSettings.embedsSemanticSegmentationMattesInPhoto = true
photoOutput.capturePhoto(with: photoSettings, delegate: self)
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
print(photo.depthData)
switch photo.depthData?.depthDataQuality {
case .low:
print("Depth quality is low")
case .high:
print("Depth quality is high")
case nil:
print("Depth quality is nil")
}
switch photo.depthData?.depthDataAccuracy {
case .relative:
print("Depth accuarcy is relative")
case .absolute:
print("Depth accuarcy is absolute")
case nil:
print("Depth accuarcy is nil")
}
if let imageData = photo.fileDataRepresentation(){
if let image = UIImage(data: imageData){
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
}
}
}
On some devices, when I select the same media item multiple times, the data returned by loadFileRepresentation(forTypeIdentifier:completionHandler:) is different (data.count is not equal).
environment:
* Model: iPhone 12
* Model Number: MGGM3CH/A
* iOS Version: 18.3.2
```Swift
// import PhotosUI
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
picker.dismiss(animated: true, completion: nil)
guard let provider = results.last?.itemProvider else { return }
guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
return
}
Task {
provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
guard let url = url else {
return
}
if let data = try? Data(contentsOf: url) {
print("data count is: \(data.count)")
}
}
}
}
```
P.S. I also tried some other functions, e.g. provider.loadItem(forTypeIdentifier:), but they didn't work either.
Hi, we have created an app which allows recording 4K 60 fps videos. We have noticed that sometimes the recording switches to 20 fps (the value 20 is constant) even though the resolution setting is 4K 60 fps. We are using AVCaptureDevice to invoke the camera.
Has anyone experienced this problem before? What is unique about 20 fps? Why does it drop from 60 fps to 20 fps, and not to other numbers?
Issue: After iOS 18.5 release, our app is experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.
Details:
Our camera-related code has not been updated recently. However, we've observed that the error rate has significantly increased starting from May 2025. The error rate has risen from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users). This represents a 5x increase in error occurrence.
The frequency has increased noticeably since iOS 18.5
This is affecting our app's camera functionality and user experience
Questions:
Are there any known changes in iOS 18.5 regarding camera access management?
What are the recommended best practices to handle this interruption reason?
Are there any API changes we should be aware of?
Best,
Shay
I set both the AVCapturePhotoOutput and the AVCapturePhotoSettings with maxPhotoDimensions = .init(width: 8064, height: 6048), but still get the 12mp photo.
I developed a DriverKit extension based on "Overriding the default USB video class extension", but the link didn't give the details of the implementation. I asked DTS, who gave two tips:
1. Do you also have a CMIO extension to load in place of the default overriding-the-default-usb-video-class-extension?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.
I want to know why a driverkit extension needs a CMIO extension, what’s the data and control flow?
Before you post —Camera doesn't work on the Simulator— that's no longer true. I've made a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (see for more info RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv)
Now, it works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into:
Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)
My question:
How does this view controller determine whether scanning is available?
Is there a certain capability the available AVCaptureDevice's need to support maybe?
Any direction would be helpful for me to make this work for developers, making them build apps faster!
AVCaptureSession's startRunning method is thread blocking and seems to be slow. What is this method doing behind the scenes?
For context: I'm working on Simulator Camera support and I have a 'fake' AVCaptureDevice that might be causing this. My hypothesis is that AVCaptureSession tries to connect to the device and waits for a notification to be posted back.
I'd love to find a way to let my fake device message AVCaptureSession that it's connected.