I occasionally receive reports from users that photo import from the Photos library gets stuck and the progress appears to stop indefinitely.
I’m using the following APIs:
func fetchAsset(_ asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .exact
    options.isSynchronous = false
    options.isNetworkAccessAllowed = true
    options.progressHandler = { progress, error, stop, info in
        // 🚨 never called
    }
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset,
        options: options
    ) { data, _, _, info in
        // 🚨 never called
    }
}
Because of the repeated reports, I added detailed logging inside the callback closures. Based on the logs, the request keeps waiting without any callbacks being invoked: neither the progressHandler nor the completion handler of requestImageDataAndOrientation is ever called.
This happens not only with the PHImageManager approach, but also when using PHAsset with PHContentEditingInputRequestOptions — the completion callback is not invoked either.
func fetchAssetByContentEditingInput(_ asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true
    // Note: the options object must actually be passed in; passing nil
    // here would silently discard isNetworkAccessAllowed.
    asset.requestContentEditingInput(with: options) { contentEditingInput, info in
        // 🚨 never called
    }
}
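To at least fail gracefully while this is unresolved, here is a minimal watchdog sketch (my own addition; the 30-second timeout is arbitrary, and the documented cancelImageRequest(_:) only cleans up the client side rather than fixing the underlying stall):

import Photos

func fetchAssetWithWatchdog(_ asset: PHAsset, timeout: TimeInterval = 30) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.isNetworkAccessAllowed = true

    var finished = false
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset,
        options: options
    ) { data, _, _, info in
        DispatchQueue.main.async { finished = true }
        // Handle the returned data here.
    }

    // If no callback has arrived after the timeout, give up on the
    // request so the UI can surface an error instead of waiting forever.
    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        if !finished {
            PHImageManager.default().cancelImageRequest(requestId)
        }
    }
}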
I suspect this is related to iCloud Photos.
Here is what I confirmed from affected users:
Using the native picker (my app also provides the native picker as an alternative option for attaching photos), the iCloud download proceeds normally and the photo can be attached. However, using the PHImageManager-based approach in my app, the same photo cannot be attached.
Even after verifying that the photo has been fully downloaded from iCloud (for example, by using “Export Unmodified Originals” in the Photos app as described at https://support.apple.com/en-us/111762 and confirming that the iCloud download completed), the callback is still not invoked for that asset.
Detailed flow for the first point:
I asked the user to attach the problematic photo (the one where callbacks never fire) using the native photo picker (UIImagePickerController).
The UI showed “Downloading from iCloud” progress.
The progress advanced and the photo was attached successfully.
Then I asked the user to attach the same photo again using my custom photo picker (which uses the PHImageManager APIs mentioned above).
The progress did not advance (no callbacks were invoked).
The operation waited indefinitely and never completed.
Workaround / current behavior:
If I ask users to reboot the device and try again, about 6 out of 10 users can attach successfully afterward.
The remaining ~4 out of 10 users still cannot attach even after rebooting.
For users who are not fixed immediately after a reboot, the issue seems to resolve on its own after some time.
I’ve seen similar reports elsewhere, so I’m wondering if Apple is already aware of an internal issue related to this. If there is any known information, guidance, or recommended workaround, I would appreciate it.
I also logged the properties of affected PHAssets (metadata) when the issue occurs, and I can share them below if that helps troubleshooting:
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
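For context, metadata along these lines can be gathered with public API only (a sketch; the adjusted flag is inferred from the presence of an adjustmentData resource, and the log format is my own):

import Photos

func logAssetMetadata(_ asset: PHAsset) {
    let resources = PHAssetResource.assetResources(for: asset)
    let adjusted = resources.contains { $0.type == .adjustmentData }
    print("[subtypes=\(asset.mediaSubtypes.rawValue)] [source=\(asset.sourceType.rawValue)] [\(asset.pixelWidth)x\(asset.pixelHeight)] [adjusted=\(adjusted)]")
}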
Hi Apple Developer Support Team,
We are developing an iOS application using a camera package within a hybrid (cross-platform) framework, and we would like to confirm whether it is possible to disable the camera shutter sound programmatically.
As per our understanding, the shutter sound on iOS is system-controlled and depends on the device’s silent/ring mode, and there is no App Store–approved API available to force-disable this sound. Kindly confirm whether this understanding is correct or if any supported alternative approach exists for hybrid or native implementations.
Thank you for your clarification.
Best regards,
ParkhyaSolutions
Looking to implement UI in our app to tell the user to clean their lens.
I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having trouble reliably triggering it, both in our app and in the built-in Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this (or the LiDAR sensor) might be smart enough to detect that there's something right up against the lens.
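For reference, the KVO setup might look like the following (a sketch; it assumes cameraLensSmudgeDetectionStatus is key-value observable, which the setup above implies, and it only logs the raw status value):

import AVFoundation

var smudgeObservation: NSKeyValueObservation?

func observeSmudgeStatus(on device: AVCaptureDevice) {
    smudgeObservation = device.observe(\.cameraLensSmudgeDetectionStatus, options: [.new]) { device, _ in
        // React here, e.g. show or hide a "clean your lens" banner.
        print("smudge status changed: \(device.cameraLensSmudgeDetectionStatus.rawValue)")
    }
}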
Is there any way to trigger this change in a similar way we can trigger thermal changes in debug?
Thanks.
Hi everyone,
I’m seeing recurring internal AVFoundation camera logs on iOS 26.2 and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline.
These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.
<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)
To rule out issues caused by my own code, I had GPT create a minimal SwiftUI example from scratch. Even in this clean, minimal setup, the same logs appear on iOS 26.2.
The exact same logic did not produce these logs on iOS 18.x.
My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview.
Thanks in advance for any insight.
Example Code
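For reference, a minimal sketch of the kind of setup described above (session, video data output for per-frame processing, delegate callback); this is illustrative rather than the exact code from the post:

import AVFoundation

final class CaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let videoQueue = DispatchQueue(label: "video.frames")

    func start() {
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: videoQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()

        // startRunning blocks, so call it off the main thread.
        videoQueue.async { self.session.startRunning() }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Real-time frame processing (analysis, vision, custom handling) goes here.
    }
}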
At which point in the image processing pipeline does iOS apply the white balance gains which can be set via AVCaptureDevice.setWhiteBalanceModeLocked(with:completionHandler:)?
Are those gains applied in the analog part of the camera pipeline, before the pixel voltage gets converted via the ADC to digital values? Or does the camera first convert the pixel voltages to digital values and then the gains are applied to the digital values?
Is this consistent across devices or can the behavior vary from device to device?
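For concreteness, a sketch of how those gains are set (the clamping is required, since out-of-range gains raise an exception; the gain values themselves are illustrative):

import AVFoundation

func lockWhiteBalance(on device: AVCaptureDevice,
                      gains: AVCaptureDevice.WhiteBalanceGains) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Each gain must lie in [1.0, maxWhiteBalanceGain].
    var g = gains
    let maxGain = device.maxWhiteBalanceGain
    g.redGain = min(max(g.redGain, 1.0), maxGain)
    g.greenGain = min(max(g.greenGain, 1.0), maxGain)
    g.blueGain = min(max(g.blueGain, 1.0), maxGain)

    device.setWhiteBalanceModeLocked(with: g) { syncTime in
        // Frames with timestamps at or after syncTime reflect the new gains.
    }
}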
On iPhone 16 Pro Max (I haven't tested other devices), there's a noticeable jump in the framing of the preview video when you record in the iOS AVCam sample app. The same jump in camera framing can be observed by switching to the front-facing camera and then back to the rear one.
It looks roughly consistent with switching between the 0.5x and 1x cameras (though not quite matching the viewable area in the Camera app), and it only happens when the view initially loads: once recording has started, it retains the 'closer' framing no matter how many times recording is stopped and started thereafter.
I'm relatively new to Swift and haven't done anything with the camera before, so odd 'buggy' behaviour in the sample code isn't helping me understand it! :-)
Is there any way to fix this?
Some of our app's users are repeatedly running into a crash in NeutrinoCore, -[NUIdentifier initWithNamespace:name:version:] + 2352. From the stack trace, it looks like multiple threads are performing PHFetchRequests, but that shouldn't cause a crash. It's isolated to a small number of users, which makes me think it's something related to their specific Photos databases (e.g., data corruption).
Do you have any suggestions how I might be able to resolve this?
2   libsystem_c.dylib         abort + 124
3   NeutrinoCore              -[NUAssertionPolicyCrashReport notifyAssertion:] + 66
4   NeutrinoCore              -[NUAssertionPolicyComposite notifyAssertion:] + 160
5   NeutrinoCore              -[NUAssertionPolicyUnique notifyAssertion:] + 176
6   NeutrinoCore              -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7   NeutrinoCore              _NUAssertFailHandler + 176
8   NeutrinoCore              -[NUIdentifier initWithNamespace:name:version:] + 2352
9   NeutrinoCore              -[NUIdentifier initWithName:version:] + 84
10  NeutrinoCore              -[NUIdentifier initWithName:] + 68
11  PhotoImaging              +[PISchema identifier] + 36
12  PhotoImaging              +[PISchema registeredPhotosSchemaIdentifier] + 32
13  PhotoImaging              +[PIPhotoEditHelper newComposition] + 28
14  PhotoImaging              +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15  PhotoImaging              +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16  PhotoLibraryServices      -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1848
17  PhotoLibraryServices      +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18  Photos                    __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19  Photos                    -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20  Photos                    -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21  Photos                    -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 356
22  Photos                    -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23  Photos                    -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24  Photos                    -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 92
25  Photos                    -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26  Photos                    __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27  PhotoLibraryServicesCore  __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.84 + 880
28  CoreFoundation            __invoking___ + 148
29  CoreFoundation            -[NSInvocation invoke] + 424
30  Foundation                <deduplicated_symbol> + 16
31  Foundation                -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 528
32  Foundation                __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33  libxpc.dylib              _xpc_connection_reply_callout + 124
42  libsystem_pthread.dylib   start_wqthread + 8
I am unable to find any clearcut documentation on configuring AVCaptureSession pipeline to capture video with proResRAW codec type, which is 16 bit format. Is it supported only with AVCaptureMovieFileOutput or one can have AVCaptureVideoDataOutput emitting 16-bit sample buffers that can be vended to AVAssetWriter?
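Not a direct answer, but one empirical check (assuming AVCaptureMovieFileOutput is the supported path) is to inspect which codec types the output offers once it has been added to a configured session; whether a ProRes RAW codec type shows up in that list is exactly what this would reveal:

import AVFoundation

// movieOutput must already be attached to a configured session;
// availableVideoCodecTypes is empty otherwise.
func logSupportedCodecs(for movieOutput: AVCaptureMovieFileOutput) {
    for codec in movieOutput.availableVideoCodecTypes {
        print("supported codec: \(codec.rawValue)")
    }
    // A specific codec is then requested per connection, e.g. ProRes 422:
    if let connection = movieOutput.connection(with: .video),
       movieOutput.availableVideoCodecTypes.contains(.proRes422) {
        movieOutput.setOutputSettings(
            [AVVideoCodecKey: AVVideoCodecType.proRes422],
            for: connection
        )
    }
}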
When inputting data within a sheet, I allow the user to take a photo, so the camera is presented within a second sheet. However, the camera controls are centered within the iPhone's entire screen, which crops the top controls, and the camera UI does not extend down to the bottom of the phone's screen.
Any help on how to fix this?
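A common cause (assuming the camera is UIImagePickerController presented inside a SwiftUI sheet) is that the camera UI is laid out for the full screen, not for a sheet-sized container. Presenting it with fullScreenCover instead of a second sheet usually fixes the clipped controls; a sketch:

import SwiftUI

struct AttachPhotoView: View {
    @State private var showCamera = false

    var body: some View {
        Button("Take Photo") { showCamera = true }
            // fullScreenCover, not sheet: the camera UI assumes it owns
            // the entire screen.
            .fullScreenCover(isPresented: $showCamera) {
                CameraPicker().ignoresSafeArea()
            }
    }
}

// Hypothetical wrapper name: a typical UIViewControllerRepresentable
// around UIImagePickerController with the camera source type.
struct CameraPicker: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        return picker
    }
    func updateUIViewController(_ picker: UIImagePickerController, context: Context) {}
}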
I’m a normie: not a developer at all. My idea might be super dumb.
Would it be possible to let us have a button in iPhone Photos that, when toggled, allows us to save certain chosen RAW images to an iPhone blockchain, AND have Apple authenticate that they are native photos, marked the millisecond they were taken, that they are native and that no AI was used on those images? That might go a long way toward restoring trust in the truth of photos again.
We could also have the same thing for AI: a marked notification in the data on AI photos that can't be erased.
Sorry if this is already underway and I'm just a normal person and therefore don't know it. I just want to be able to trust things again. 🤷🏽♀️
While implementing the new background backup feature introduced in iOS 26.1, I create a PHAssetResourceUploadJob in an extension. On iOS 26.1, the system successfully triggers the upload. However, on iOS 26.2, although the job is created successfully and all related configuration is correctly set, the system does not trigger the upload.
Could you please help confirm the cause of this issue? Thank you.
Hey,
I've noticed that in some scenarios photo data can come out corrupted from the cameras on iPhone 17 Pro. The requirements are:
The zoom level is greater than 2 times the base zoom, so 2x for the wide lens, and 8x for the telephoto.
qualityPrioritization is set to .quality. If set to .balanced, the images look as expected.
The scene is well lit. I haven't managed to work out whether there's an ISO cut-off, but in darker scenes the images look as expected.
The scene does not contain any objects or texture, e.g. a blank white screen, a blue sky, up close against a bright wall.
Here is an example:
This is really weird behavior. I have opened a ticket here: https://feedbackassistant.apple.com/feedback/22092908
There's also a repo here if anyone would like to try it:
https://github.com/alexfoxy/CameraQualityTest.
Thanks,
Alex
Hello Apple Developer Support,
I’m developing a virtual camera using the CMIOExtensionDevice / CMIOExtensionStreamSource APIs on macOS. While the virtual camera appears in system settings and apps like Zoom and Google Meet, the video output exhibits the following issues:
Jittering frames: The first frame sometimes appears correctly, but subsequent frames flicker or jitter.
Solid color fill: Eventually, the camera feed fills entirely with a solid accent color (e.g., blue), rather than the intended video content.
Console logs: Repeated messages appear in Console.app:
Invalid display 0x00000000
Setup details:
The virtual camera is created using CMIOExtensionDevice and CMIOExtensionStream.
Video frames are rendered from NSImage/CGImage using CGContext and copied into CVPixelBuffers.
Frame delivery is controlled by a DispatchSourceTimer at 60 FPS.
macOS version: 26.2
Xcode version: 26.1
Observations:
The Invalid display 0x00000000 logs suggest that CGContext drawing or NSImage operations are failing in headless mode (i.e., there is no real display attached to the virtual camera).
Using CIContext with .useSoftwareRenderer = true appears to mitigate some flicker, but not entirely.
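Given the observations above, one display-independent approach is to skip NSImage entirely and draw through a CGBitmapContext backed directly by the pixel buffer's memory, so no window, screen, or display is involved; a sketch (pixel format and color space are assumptions):

import CoreVideo
import CoreGraphics

func makeFrame(width: Int, height: Int,
               draw: (CGContext) -> Void) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs,
                              &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // The context renders straight into the buffer's backing store.
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(buffer),
        width: width, height: height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return nil }

    draw(context)
    return buffer
}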
Questions / Requests:
Is it expected that CoreMediaIO virtual cameras cannot reliably render CGImage / NSImage frames offscreen?
Are there recommended APIs or approaches to render virtual camera frames fully headless to avoid display-dependent jitter?
Is there any documentation or sample code from Apple showing stable video output from a virtual camera extension that does not rely on a physical display?
Any guidance or examples would be greatly appreciated. This issue prevents the virtual camera from being used reliably in standard video apps.
Thank you,
Savvy
Hello,
I am getting the following errors when building a Mac Camera Extension that uses web sockets. I am using URLSessionWebSocketTask as my web socket library. I built a test program for my code, and there I can see my web sockets working properly, but when I run the code from the system extension I get the following errors. The socket opens for two or three messages, then crashes. I couldn't find any documentation online for these errors:
CMIOExtensionProvider.m:1975:-[CMIOExtensionProvider removeProviderContext:]_block_invoke Unregistered provider context <CMIOExtensionProviderContext: ->, don't be surprised if things go badly
CMIOExtensionProviderContext.m:64:-[CMIOExtensionProviderContext initWithConnection:]_block_invoke [391] received Connection invalid
Hi,
I am trying to implement a PHBackgroundResourceUploadExtension to upload backup media files to an external cloud service based on these docs: https://developer.apple.com/documentation/PhotoKit/uploading-asset-resources-in-the-background#Acknowledge-completed-jobs
Creating jobs and actual uploading is working as expected, but the problem I have is in the acknowledgeCompletedJobs() function.
When I access a job's resource inside this function, the resource is nil, and so there is no assetLocalIdentifier or originalFilename to read.
Has anybody successfully implemented this extension, or does anyone know why this would happen? Because the resource of an acknowledgeable job is empty, I cannot match it back to my processed assets.
I know how to search for Smart Albums (favorites, selfies, etc...) containing photos:
// Get smart albums
PHFetchResult *smartAlbums = [PHAssetCollection fetchAssetCollectionsWithType:PHAssetCollectionTypeSmartAlbum subtype:PHAssetCollectionSubtypeAlbumRegular options:nil];
I have the following questions:
Is there a way to enumerate the People Smart Albums and access the photos in a specific People Smart Album?
Is there a way to enumerate the Places Smart Albums and access the photos in a specific Place Smart Album?
Is there a way to enumerate Shared Albums (shared to the current iCloud user) and access the photos in a specific Shared Album?
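Partial answer to the third question, as a sketch: shared iCloud albums do have a public subtype (.albumCloudShared). As far as I know, there is no public subtype for the People or Places smart collections:

import Photos

let sharedAlbums = PHAssetCollection.fetchAssetCollections(
    with: .album,
    subtype: .albumCloudShared,
    options: nil
)
sharedAlbums.enumerateObjects { collection, _, _ in
    let assets = PHAsset.fetchAssets(in: collection, options: nil)
    print("\(collection.localizedTitle ?? "Untitled"): \(assets.count) assets")
}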
How can I get the file size of an iCloud photo?
Could I use a private API like this in production? Does anyone know another way (without downloading the file to the device)?
func getFileSize(asset: PHAsset) -> Int64? {
    let resources = PHAssetResource.assetResources(for: asset)
    let resource = resources.first
    // "fileSize" is an undocumented key; reading it via KVC relies on
    // private behavior and may be rejected in App Review.
    let size = resource?.value(forKey: "fileSize") as? Int64
    return size
}
I'm implementing a PHBackgroundResourceUploadExtension to back up photos and videos from the user's library to our cloud storage service.
Our existing upload infrastructure uses chunked uploads for large files (splitting videos into smaller byte ranges and uploading each chunk separately; a sketch follows the list below). This approach:
Allows resumable uploads if interrupted
Stays within server-side request size limits
Provides granular progress tracking
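For context, a sketch of that chunked pattern (the endpoint semantics, header usage, and chunk size are illustrative, not a real protocol):

import Foundation

func uploadInChunks(fileURL: URL, to endpoint: URL,
                    chunkSize: Int = 8 * 1024 * 1024) async throws {
    let handle = try FileHandle(forReadingFrom: fileURL)
    defer { try? handle.close() }
    let totalSize = (try FileManager.default
        .attributesOfItem(atPath: fileURL.path)[.size] as? Int) ?? 0

    var offset = 0
    while offset < totalSize {
        guard let chunk = try handle.read(upToCount: chunkSize),
              !chunk.isEmpty else { break }
        var request = URLRequest(url: endpoint)
        request.httpMethod = "PUT"
        // Content-Range lets the server reassemble and resume.
        request.setValue("bytes \(offset)-\(offset + chunk.count - 1)/\(totalSize)",
                         forHTTPHeaderField: "Content-Range")
        let (_, response) = try await URLSession.shared.upload(for: request, from: chunk)
        guard let http = response as? HTTPURLResponse,
              (200...299).contains(http.statusCode) else {
            throw URLError(.badServerResponse)
        }
        offset += chunk.count
    }
}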
Looking at the PHAssetResourceUploadJobChangeRequest.createJob(destination:resource:) API, I don't see a way to specify byte ranges or create multiple jobs for chunks of the same resource.
Questions:
Does the system handle large files (1GB+) automatically under the hood, or is there a recommended maximum file size for a single upload job?
Is there a supported pattern for chunked/resumable uploads, or should the destination URL endpoint handle the entire file in one request?
If our server requires chunked uploads (e.g., BITS protocol with CreateSession → Fragment → CloseSession), is this extension the right mechanism, or should we use a different approach for large videos?
Any guidance on best practices for large asset uploads would be greatly appreciated.
Environment: iOS 26, Xcode 18