I'm getting an error writing a CIImage as a HEIF image:
// Create CIImage directly from the pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])
// Write HEIC synchronously
do {
    try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
} catch {
    print("Failed to write HEIF: \(error)")
}
The error I'm getting is:
Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}
Both
try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options)
and
try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace)
work. I also verified that the code works with iOS 18.
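For comparison, a fallback that writes the HEIC through Image I/O instead of CIContext (a sketch only; it assumes the same ciContext, url, and colorSpace, ignores metadata, and whether it avoids the failing PhotoCompressionSession path is an assumption):

import ImageIO
import UniformTypeIdentifiers

func writeHEICViaImageIO(_ image: CIImage, context: CIContext, to url: URL, colorSpace: CGColorSpace) throws {
    // Render the CIImage to a CGImage, then hand it to an Image I/O destination.
    guard let cgImage = context.createCGImage(image, from: image.extent, format: .RGBA8, colorSpace: colorSpace),
          let destination = CGImageDestinationCreateWithURL(url as CFURL, UTType.heic.identifier as CFString, 1, nil)
    else { throw CocoaError(.fileWriteUnknown) }
    CGImageDestinationAddImage(destination, cgImage, nil)
    guard CGImageDestinationFinalize(destination) else { throw CocoaError(.fileWriteUnknown) }
}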
Is there something new we need to do for HEIF images?
Thanks in advance
Hi everyone,
I’m running into an issue with PHPickerFilter when using PHPickerViewController.
When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn’t always apply — users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections).
Here’s a simplified snippet of my setup:
var configuration = PHPickerConfiguration(photoLibrary: .shared())
configuration.selectionLimit = 1
var filters = [PHPickerFilter]()
filters.append(.videos)
filters.append(.livePhotos)
configuration.filter = PHPickerFilter.any(of: filters)
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true)
Expected behavior:
The picker should consistently respect the filter across both Photos and Collections tabs, only showing assets that match the filter.
Actual behavior:
The filter seems to apply correctly in the Photos tab, but in the Collections tab, other asset types are still visible/selectable.
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration?
Thanks in advance!
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Files and Storage
Media Library
Photos and Imaging
PhotoKit
I am trying to run my app where I present PHPickerViewController for picking images from PhotoLibrary.
I am running this from Xcode 16.2 on a simulator.
As soon as PHPickerViewController is presented, I see that the app freezes. This only happens on simulators running iOS 18.
Any help on this would be appreciated.
Area
ImageCaptureCore / ICDeviceBrowser
Description
On iOS 26.1 beta, calling requestControlAuthorization() or requestContentsAuthorization() always returns .notDetermined and never transitions to .authorized or .denied.
This prevents apps from properly accessing device control or contents authorization. The issue occurs regardless of device state or prior requests.
Steps to Reproduce
1. Create and start an ICDeviceBrowser instance.
2. Call requestControlAuthorization() or requestContentsAuthorization().
3. Inspect the returned ICAuthorizationStatus.
Expected Result
• The system should prompt the user if necessary.
• A final status of either .authorized or .denied should be returned.
Actual Result
• The completion handler always reports .notDetermined.
• No user prompt appears and the status does not change.
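A minimal repro sketch (assuming the completion-handler variants of these calls; delegate wiring omitted):

import ImageCaptureCore

let browser = ICDeviceBrowser()
browser.start()
browser.requestControlAuthorization { status in
    // Always reports .notDetermined on iOS 26.1 beta.
    print("control authorization:", status.rawValue)
}
browser.requestContentsAuthorization { status in
    print("contents authorization:", status.rawValue)
}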
Version / Build
• iOS 26.1 beta
• Xcode
Hardware
• iPhone 15 Pro, iPad Pro (M2)
Impact
This regression blocks development and testing of features relying on ImageCaptureCore. Applications depending on device browsing and content access cannot proceed, which significantly affects workflows involving external device integration.
Notes
This appears to be a regression compared to earlier iOS releases.
I'm using an iPhone 15 Pro running iOS 26.1 (23B5044l) public beta. Photos and videos taken on my phone are not syncing with iCloud and do not appear within Photos on my Mac mini, yet screenshots, and images that come down via Apple Messages or WhatsApp, sync immediately and appear on both devices. I can understand why Apple Messages and WhatsApp images appear within Photos on my Mac mini, as it has access to both apps independently of my iPhone, but the screenshots that immediately appear on my Mac mini are generated on my iPhone. So if screenshots sync, why do my photos and videos not sync, since they too are generated on my iPhone?
I'm working on a photo app and I want to allow the user to display, edit and delete photos. I can fetch all photos using PHAsset.fetchAssets(with: options). This works as intended.
However, I can't seem to find a way to prevent the user from seeing photos from a Shared Library. PHAssetSourceType only offers typeCloudShared, which identifies items from shared albums, not the Shared Library.
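The closest knob I can find is PHFetchOptions.includeAssetSourceTypes; a sketch (whether this actually excludes Shared Library assets, rather than just shared-album content, is exactly what's unclear):

let options = PHFetchOptions()
// Restrict the fetch to assets that live in the user's own library.
options.includeAssetSourceTypes = [.typeUserLibrary, .typeiTunesSynced]
let assets = PHAsset.fetchAssets(with: options)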
How can I filter by iCloud Shared Library?
Error capturing ProRAW using iPhone 17 Pro Telephoto with photoQualityPrioritization set to .quality
Hey,
I'm having a very strange issue on my iPhone 17 Pro. I'm trying to capture a 12MP ProRAW image using the Telephoto lens with photoQualityPrioritization set to .quality. Unfortunately, I receive this error when trying to capture the image:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x134f7a1f0 {Error Domain=NSOSStatusErrorDomain Code=-16802 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-16802), AVErrorRecordingFailureDomainKey=4, NSLocalizedDescription=The operation could not be completed}
The photo captures correctly at 7.9x zoom; it's only a problem when the zoom goes over 8x.
Also, it's only this particular configuration of settings which causes the issue. I'm able to capture an image if I either:
Set quality to ".balanced"
Set max dimensions to 48MP
Capture a JPEG image instead of a ProRAW image
Use the TripleCamera fusion lens
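For reference, the failing configuration looks roughly like this (a sketch; photoOutput, its session, and the capture delegate are set up elsewhere, and the zoom factor is applied to the device separately):

photoOutput.isAppleProRAWEnabled = photoOutput.isAppleProRAWSupported
if let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: {
    AVCapturePhotoOutput.isAppleProRAWPixelFormat($0)
}) {
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    settings.photoQualityPrioritization = .quality
    settings.maxPhotoDimensions = CMVideoDimensions(width: 4032, height: 3024) // 12MP
    photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)
}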
Any help would be greatly appreciated.
Alex
We observed that the PHPicker is unable to load RAW images captured on an iPhone in some scenarios, and it also seems to be related to iCloud.
Here is the setup:
The PHPickerViewController is configured with preferredAssetRepresentationMode = .current to avoid transcoding.
The image is loaded from the item provider like this:
if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: kUTTypeImage as String) { url, error in
        // work with the returned file URL
    }
}
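(For reference, the equivalent with the modern UTType API, since kUTTypeImage is deprecated:)

import UniformTypeIdentifiers

if itemProvider.hasItemConformingToTypeIdentifier(UTType.image.identifier) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        // work with the returned file URL
    }
}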
This usually works, also for RAW images. However, when trying to load a RAW image that has just been captured with the iPhone, the loading fails with the following errors on the console:
[claims] 43A5D3B2-84CD-488D-B9E4-19F9ED5F39EB grantAccessClaim reply is an error: Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.image" UserInfo={NSLocalizedDescription=Cannot load representation of type public.image, NSUnderlyingError=0x280480540 {Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}}}
We observed that on some devices, loading the image will actually work after a short time (~30 sec), but on others it will always fail.
We think it is related to iCloud Photos: On the device that has iCloud Photos sync enabled, the picker is able to load the image right after it was synced to the cloud. On devices that don't sync the image, loading always fails. It seems that the sync process is doing some processing (?) of the image that will later enable the picker to load it successfully, but that's just guessing.
Additional observations:
This seems to only occur for images that were taken with the stock Camera app. When using Halide to capture RAW (either ProRAW or RAW), the Picker is able to load the image.
When trying to load the image as kUTTypeRawImage instead of kUTTypeImage, it also fails.
The picker also can't load RAW images that were AirDropped from another device, unless they synced to iCloud first.
This is reproducible using the "Selecting Photos and Videos in iOS" sample code project.
We observed this happening in other apps that use the PHPicker, not just ours.
Is this a bug, or is there something that we are missing?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
PhotoKit
Photos and Imaging
wwdc2022-10023
I'm experiencing an issue with my app when saving images to the camera roll. This is intermittent, but it happens several times a day. The error I receive is the following:
Connection to assetsd was interrupted - assetsd exited, died, or closed the photo library
Error getting remote object proxy for -[PLNonBindingAssetsdPhotoKitClient sendChangesRequest:reply:]_block_invoke: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service}
PhotoKit XPC proxy is invalid. Dropping request on the floor and returning an error: Error Domain=PHPhotosErrorDomain Code=3301 "(null)" (underlying error Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service})
CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.}
CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.}
My code is unchanged from using my app daily on an iPhone 16 Pro with iOS 26. I never saw the issue on this device.
Here is an excerpt from my code for saving the image:
var localIdentifier = String()
PHPhotoLibrary.shared().performChanges({
    let albumChangeRequest = PHAssetCollectionChangeRequest(for: album)
    let assetCreationRequest = PHAssetCreationRequest.forAsset()
    let options = PHAssetResourceCreationOptions()
    assetCreationRequest.addResource(with: .photo, data: imageData, options: options)
    assetCreationRequest.creationDate = Date.now
    // Unwrap the placeholder once, instead of force-unwrapping it before the nil check.
    if let placeholder = assetCreationRequest.placeholderForCreatedAsset {
        albumChangeRequest?.addAssets([placeholder] as NSArray)
        localIdentifier = placeholder.localIdentifier
    }
}) { didSucceed, error in
    OperationQueue.main.addOperation {
        didSucceed ? success(localIdentifier) : failure(error)
    }
}
I'm not sure why this would be device-specific, but I have had users with iPhone 17 Pro and iPhone Air reporting the issue.
Alex
Hello everyone,
I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store the photos in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy.
The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled"
/**
@property maxPhotoDimensions
@abstract
Indicates the maximum resolution of the requested photo.
@discussion
Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].
Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
*/
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions
(By the way, this note is not present in the online docs: https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)
Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data.
According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.
To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way.
https://developer.apple.com/videos/play/wwdc2023/10105/?time=799
This seems to create a hard dependency on the Photo Library for accessing 24MP images.
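For completeness, here is roughly what the opt-in and proxy save look like, per the WWDC23 session (a sketch; photoOutput and the surrounding session setup are assumed):

// Opt in before starting the session.
photoOutput.isAutoDeferredPhotoDeliveryEnabled = photoOutput.isAutoDeferredPhotoDeliverySupported

// Delegate callback for the deferred case.
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                 error: Error?) {
    guard error == nil, let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }
    PHPhotoLibrary.shared().performChanges({
        // The library performs the final processing in the background.
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photoProxy, data: proxyData, options: nil)
    }, completionHandler: nil)
}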
My question is:
Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary?
For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library?
Our goal is to follow Apple's privacy-first principles by avoiding requesting a PHPhotoLibrary authorization when our app's core function doesn't require access to the user's photo collection.
Thank you for your time and any clarification you can provide.
Because I want to control the grid size and number of HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously, I used VTCompressionSession to accomplish this task, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have decoding issues.
However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator — it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct.
After my troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect.
I created a VTCompressionSession using the following configuration.
var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
allocator: kCFAllocatorDefault,
width: Int32(frameSize.width),
height: Int32(frameSize.height),
codecType: kCMVideoCodecType_HEVC,
encoderSpecification: nil,
imageBufferAttributes: nil,
compressedDataAllocator: nil,
outputCallback: nil,
refcon: nil,
compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)
let properties: [CFString: Any] = [
kVTCompressionPropertyKey_AllowFrameReordering: false,
kVTCompressionPropertyKey_AllowTemporalCompression: false,
kVTCompressionPropertyKey_RealTime: false,
kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
kVTCompressionPropertyKey_ProfileLevel: profileLevel,
kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
VTCompressionSessionInvalidate(newSession)
}
Then I use the following code to encode each grid of the image.
let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        // The output handler is not a throwing context, so errors are handled here.
        guard let self else { return }
        do {
            try check(status, VideoToolboxErrorDomain)
            if let sampleBuffer {
                let encodedImage = try self.encodedImage(from: sampleBuffer)
                // handle encodedImage
            }
        } catch {
            // handle the encoding error
        }
    }
try check(status, VideoToolboxErrorDomain)
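One thing that might be worth ruling out (an assumption on my part, not a documented iOS 26 change): since the output handler is asynchronous, make sure the session is fully flushed before the encoded grids are assembled into the HEIC:

// Force all pending frames out of the encoder before reading results.
let flushStatus = VTCompressionSessionCompleteFrames(session, untilPresentationTimeStamp: .invalid)
try check(flushStatus, VideoToolboxErrorDomain)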
If I try to display this abnormal image in the app, my console outputs the following errors, so it can be inferred that the issue probably occurs during decoding.
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I’m not sure whether some of these new properties must be set in the new system, and there’s no information about this in the official documentation.
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
This only happens when the app is built for Mac Catalyst. Not on iOS, iPadOS, or “real” macOS (AppKit).
The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2.
Those images are rendered correctly in Photos and Preview on all platforms.
When displaying the image inside of a UIImageView or in a SwiftUI Image, they render correctly.
The issue only occurs when loading the image via Core Image.
When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result.
(Render graph screenshots for Mac (AppKit) and Catalyst omitted.)
Something seems to be off when Core Image tries to load an image with a foreign color space in Catalyst.
We identified a workaround: By using a CGImageDestination to transcode the image using the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load the image correctly. However, one potentially loses fidelity this way.
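A sketch of that workaround, file to file (the in-memory variant via CGImageDestinationCreateWithData works the same way):

import ImageIO

func transcodeForCoreImage(from sourceURL: URL, to destinationURL: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil)
    else { return false }
    // Ask Image I/O to convert to a color space suitable for sharing (sRGB or similar).
    let options = [kCGImageDestinationOptimizeColorForSharing: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    return CGImageDestinationFinalize(destination)
}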
Or might there be a better workaround?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
Core Image
Core Graphics
Is anyone aware of an issue where, after you replay a video you just took and then swipe up to move to another video, the first video's audio keeps playing? Each additional video you open adds its audio on top, so the sound of every video you've opened plays at once. I was hoping iOS 26 beta 4 would fix this, but so far nothing has. The only way to stop the stacked audio is to force-close the Photos app entirely.
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What’s the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on flash in low-light scenarios
Lower framerate to improve exposure and reduce noise
Wait until the capture is in focus/notify your user that they need to get closer
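A sketch of the torch and frame-rate suggestions (assumes device is the active AVCaptureDevice; error handling trimmed):

func configureForLowLightScanning(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // Torch for low light.
    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }
    // Lower the frame rate so each frame gets a longer exposure (here: 15 fps).
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
}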
Question 25
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
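One concrete way to do this from the file URL the picker's item provider hands back, using QuickLook thumbnailing (a sketch, not necessarily the route the panel had in mind):

import QuickLookThumbnailing

func makeThumbnail(for fileURL: URL, completion: @escaping (CGImage?) -> Void) {
    let request = QLThumbnailGenerator.Request(fileAt: fileURL,
                                               size: CGSize(width: 120, height: 120),
                                               scale: 2.0,
                                               representationTypes: .thumbnail)
    QLThumbnailGenerator.shared.generateBestRepresentation(for: request) { representation, _ in
        completion(representation?.cgImage)
    }
}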
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs
Question 2
I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and the most control over the capture as possible. How can I do that with AVCapture?
For stills: a brief Bayer RAW capture overview, or ProRAW if you want Apple's processing and dynamic range.
For video: ProRes Log.
Custom exposure settings are available through the APIs.
Global/local tone mapping may also be worth discussing.
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
PhotoKit
Core Image
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use an AVCaptureSession with a video data output (VDO) plus AVCapturePhotoOutput: run Vision on the VDO buffers and capture a photo when a certain scene or text is detected.
Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops.
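A sketch of that pattern (hypothetical names; the key point is doing the Vision work off the delegate queue so the VDO doesn't drop frames):

let visionQueue = DispatchQueue(label: "vision.analysis")

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    visionQueue.async {
        // Note: retaining too many pixel buffers here can starve the capture pool.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        let request = VNRecognizeTextRequest { request, _ in
            if let results = request.results, !results.isEmpty {
                // Text detected: trigger photoOutput.capturePhoto(with:delegate:)
            }
        }
        try? handler.perform([request])
    }
}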
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, scanners
read and write, where supported
check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside appropriate SwiftUI representables should give equivalent performance
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There’s a suite of new System Tone Mapper APIs in this year’s OSes: Core Image, ImageKit, Core Animation, and Core Graphics.
For example:
Core Image: the new CISystemToneMap filter.
Core Animation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animating preferredDynamicRange
Can go from high to constrained to standard
Tone mapping is provided by the system (CISystemToneMap being the controllable example)
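For example, constraining a thumbnail while letting the one-up view use full HDR, via the UIImageView flavor (a sketch; hdrImage is an assumed input):

let thumbnailView = UIImageView(image: hdrImage)
// Constrained: limited headroom, appropriate for grids/thumbnails.
thumbnailView.preferredImageDynamicRange = .constrainedHigh
// One-up view: full HDR.
// oneUpView.preferredImageDynamicRange = .high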
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.
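A sketch of that filter via CIFilterBuiltins (lowResDepth and fullResRGB are assumed CIImages):

import CoreImage.CIFilterBuiltins

let filter = CIFilter.edgePreserveUpsample()
filter.inputImage = fullResRGB   // high-resolution RGB guide
filter.smallImage = lowResDepth  // low-resolution depth map to upsample
let upsampledDepth = filter.outputImage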
Question 15
For buffering frames for later processing from real-time camera output should we prefer a AVSampleBufferDisplayLayer centered approach or AVCaptureVideoDataOutputSampleBufferDelegate centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, ensure you make copies of VDO buffers to not drop frames from the output
Question 16
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image at that point.
The photo will then have to be re-inserted into the Photos library as an adjustment.
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the image to the output dimensions, while cropping afterward yields a smaller output image.
So while digital zoom does crop, it also upscales.
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes: use CoreMLRequest and provide a model container.
This has been supported for a while (iOS 18/macOS 15).
For more details go to Machine Learning & AI group lab Thursday
use smaller images for better performance
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
Not all device formats support spatial capture, check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
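A sketch of that opt-in (videoDevice and movieOutput are assumed to be configured elsewhere):

if videoDevice.activeFormat.isSpatialVideoCaptureSupported,
   movieOutput.isSpatialVideoCaptureSupported {
    movieOutput.isSpatialVideoCaptureEnabled = true
}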
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
PhotoKit
Core Image
I'm new here, so I'm not sure which topic this feature belongs to... sorry about that!
I watched the WWDC stream and I'm really interested in this feature; I'm wondering whether it could be used in my apps.
I looked it up in the documentation, but it seems to only support visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me out here.
Briefly speaking, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
spatialAsset = asset
}
// other code show below
I can fetch left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it didn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched over the net and someone says the generated version has a depth image instead of a left/right pair. But I still cannot extract any depth image from the image source.
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
return nil
}
let imageCount = CGImageSourceGetCount(source)
print(String(format: "%d images found", imageCount))
guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
/// function returns here
print("no groups found")
return nil
}
guard
    let stereoGroup = groups.first(where: {
        // Bridge to String for comparison; CFString has no == operator in Swift.
        ($0[kCGImagePropertyGroupType] as? String) == (kCGImagePropertyGroupTypeStereoPair as String)
    })
else {
    return nil
}
guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
else {
return nil
}
return (leftImage, rightImage, self.identifier)
}
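On the depth theory: one way to probe the generated photo for auxiliary depth/disparity data (a sketch; whether the 2D -> 3D photos actually carry it this way is an open question):

let auxTypes = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
for auxType in auxTypes {
    if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any] {
        // info[kCGImageAuxiliaryDataInfoData] holds the raw map bytes;
        // try AVDepthData(fromDictionaryRepresentation: info) to decode it.
        print("found auxiliary data:", auxType)
    }
}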
Any suggestions? Thanks
visionOS 2.4
How do I test the new RecognizeDocumentRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM
I am running Xcode Beta, however I only have one primary device that I cannot install beta software on.
Please provide a strategy for testing. Will the simulator work?
The new capability is critical to my application, just what I need for structuring document scans and extraction.
Thank you.
When trying to edit some Live Photos, calling PHLivePhotoEditingContext.saveLivePhoto results in the following error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12815), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x300d05380 {Error Domain=NSOSStatusErrorDomain Code=-12815 "(null)"}}
I was able to replicate it on my device by taking a new Live Photo. Not sure what's wrong with that one specifically, not all Live Photos replicate the issue.
I've submitted FB15880825 with a sysdiagnose and a Photos Diagnostics as well. Any ideas what's going on here? It's impacting multiple customers. Thanks!
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Media
Photos and Imaging
PhotoKit
AVFoundation
I have an iOS app that includes a Photo Editing Extension and is optimized for Mac Catalyst so you can edit photos in the Photos app on your Mac. This has worked really well but now I am encountering an error alert trying to open the photo editing extension:
RBSLaunchRequest error trying to launch plugin com.company.TestEditor. TestPhotoEditor (B7A616A7-25A8-4E02-8B32-5CAB37C8B4B2): ErrorDomain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x7f08fafd0 {ErrorDomain=NSPOSIXErrorDomain Code=153 "Unknown error: 153" UserInfo={NSLocalizedDescription=Launchd job spawn failed}}}
Steps to reproduce:
1. Create a new iOS app project in Xcode.
2. Create a new target and choose iOS > Photo Editing Extension.
3. For both targets in the project, add Mac Catalyst as a supported destination.
4. Run the app on My Mac (Mac Catalyst).
5. Open the Photos app, double-click a photo, click Edit, click the more-plugins button, and click TestPhotoEditor in the list.
macOS 15.4.1 + Xcode 16.3