Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic


Capturing 48mp photos with .builtInTripleCamera
I am able to capture 48MP photos using .builtInWideAngleCamera, but .builtInTripleCamera seems to be capped at 12MP. Is there a way to capture 48MP photos using .builtInTripleCamera? I'd like to keep using it because .builtInTripleCamera provides a smooth transition between cameras while zooming. The new iPhone 17 Pro has 48MP sensors on all of its cameras. Is there a chance its .builtInTripleCamera can capture 48MP, or is this an API limitation?
Replies: 3 · Boosts: 0 · Views: 373 · Created: Sep ’25
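A minimal diagnostic sketch for the question above, assuming iOS 16+ and that the capture session setup lives elsewhere: it lists the supportedMaxPhotoDimensions of every format the virtual triple camera exposes, which is where 48MP capability (8064x6048) would show up if the device vends it.

    import AVFoundation

    // Hypothetical helper: print the max photo dimensions each
    // .builtInTripleCamera format supports, to check whether any format
    // advertises 48MP on the device at hand.
    func logTripleCameraPhotoDimensions() {
        guard let device = AVCaptureDevice.default(.builtInTripleCamera,
                                                   for: .video, position: .back) else {
            print("No .builtInTripleCamera on this device")
            return
        }
        for format in device.formats {
            let video = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let photo = format.supportedMaxPhotoDimensions   // iOS 16+
                .map { "\($0.width)x\($0.height)" }
                .joined(separator: ", ")
            print("Video \(video.width)x\(video.height) -> photo dims: \(photo)")
        }
    }

If a 48MP entry does appear, selecting that format and raising AVCapturePhotoOutput.maxPhotoDimensions accordingly should be enough; if it never appears, the cap is in the formats the virtual device vends rather than in the capture code.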
Error capturing ProRAW using iPhone 17 Pro Telephoto with photoQualityPrioritization set to .Quality
Hey, I'm having a very strange issue on my iPhone 17 Pro. I'm trying to capture a 12MP ProRAW image using the telephoto lens with photoQualityPrioritization set to .quality. Unfortunately, I receive this error when trying to capture the image:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x134f7a1f0 {Error Domain=NSOSStatusErrorDomain Code=-16802 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-16802), AVErrorRecordingFailureDomainKey=4, NSLocalizedDescription=The operation could not be completed}

The photo captures correctly at 7.9x zoom; it's only a problem when the zoom goes over 8x. It's also only this particular combination of settings that causes the issue. I'm able to capture an image if I either:
• set quality to .balanced,
• set max dimensions to 48MP,
• capture a JPEG image instead of a ProRAW image, or
• use the TripleCamera fusion lens.

Any help would be greatly appreciated. Alex
Replies: 1 · Boosts: 0 · Views: 239 · Created: Sep ’25
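Not an answer to the bug itself, but a sketch of the failing configuration described above, assuming photoOutput is already attached to a running session and delegate is the existing capture delegate; it may help isolate which of the listed workarounds changes the behavior.

    import AVFoundation

    // A sketch of the configuration from the post: ProRAW + .quality
    // prioritization on the telephoto camera.
    func captureTelephotoProRAW(with photoOutput: AVCapturePhotoOutput,
                                delegate: any AVCapturePhotoCaptureDelegate) {
        guard photoOutput.isAppleProRAWSupported else { return }
        photoOutput.isAppleProRAWEnabled = true
        photoOutput.maxPhotoQualityPrioritization = .quality

        guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes
            .first(where: { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) }) else { return }

        let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
        settings.photoQualityPrioritization = .quality
        // Workaround noted above: raising settings.maxPhotoDimensions to the
        // 48MP size avoids the -16802 error; the failure occurs with the 12MP
        // default once the zoom factor goes past 8x.
        photoOutput.capturePhoto(with: settings, delegate: delegate)
    }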
Is there a resource available that lists all the AVCaptureDevices for each iPhone model?
I want to fully support the new iPhone models in my app, and ideally I need to know the available lenses. However, I can't find this information on the web, and the lenses aren't reported in the simulators. The closest thing I found was this, but it's very out of date: https://developer.apple.com/library/archive/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html My only other options are to buy each device, which isn't really feasible, or to log the data from real users via an analytics tool, which isn't ideal either. Thanks, Alex
Replies: 1 · Boosts: 0 · Views: 327 · Created: Sep ’25
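There is no up-to-date static list I know of, but a runtime sketch like the one below can build one: enumerate the standard AVFoundation device types with a discovery session and log what each physical model exposes (for example from a debug build or an analytics event). The device types listed are the public AVCaptureDevice.DeviceType cases, not a claim about any specific iPhone.

    import AVFoundation

    // Sketch: log every camera the current hardware exposes.
    func logAvailableCameras() {
        let types: [AVCaptureDevice.DeviceType] = [
            .builtInWideAngleCamera, .builtInUltraWideCamera, .builtInTelephotoCamera,
            .builtInDualCamera, .builtInDualWideCamera, .builtInTripleCamera,
            .builtInTrueDepthCamera, .builtInLiDARDepthCamera
        ]
        let session = AVCaptureDevice.DiscoverySession(deviceTypes: types,
                                                       mediaType: .video,
                                                       position: .unspecified)
        for device in session.devices {
            print("\(device.localizedName) – \(device.deviceType.rawValue) – position \(device.position.rawValue)")
        }
    }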
PHFetchOptions.includeHiddenAssets does not correctly return asset by id
When attempting to access a PHAsset that is in the Hidden folder on iOS 26, the PHFetchResult always returns no items, even when the user has granted full access to Photos and even when includeHiddenAssets is true. This is the code suggested by ChatGPT; it always fails:

    public func fetchAsset(withLocalIdentifier identifier: String) -> PSSPHAssetImplementing? {
        // First try the direct fetch by identifier (fast path)
        let directResult = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
        if let asset = directResult.firstObject {
            return build(from: asset)
        }

        // Fallback: fetch all assets including hidden, then filter manually
        let options = PHFetchOptions()
        options.includeHiddenAssets = true
        let allAssets = PHAsset.fetchAssets(with: options)
        for index in 0..<allAssets.count {
            let asset = allAssets.object(at: index)
            if asset.localIdentifier == identifier {
                return build(from: asset)
            }
        }
        return nil
    }

Is it no longer possible to retrieve a hidden photo in iOS 26?
Replies: 1 · Boosts: 0 · Views: 283 · Created: Sep ’25
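A hedged alternative check for the post above, assuming full library access: fetch the Hidden smart album directly and look for the asset there. Note that the system may still withhold hidden assets entirely (for example when the Hidden album is locked behind Face ID in Settings > Photos), in which case this also returns nothing.

    import Photos

    // Sketch: look for the asset inside the Hidden smart album rather than
    // the whole library.
    func fetchHiddenAsset(withLocalIdentifier identifier: String) -> PHAsset? {
        let albums = PHAssetCollection.fetchAssetCollections(with: .smartAlbum,
                                                             subtype: .smartAlbumAllHidden,
                                                             options: nil)
        guard let hiddenAlbum = albums.firstObject else { return nil }

        let options = PHFetchOptions()
        options.includeHiddenAssets = true

        let assets = PHAsset.fetchAssets(in: hiddenAlbum, options: options)
        var match: PHAsset?
        assets.enumerateObjects { asset, _, stop in
            if asset.localIdentifier == identifier {
                match = asset
                stop.pointee = true
            }
        }
        return match
    }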
ICDeviceBrowser authorization requests always return ICAuthorizationStatusNotDetermined on iOS 26.1 beta
Area
ImageCaptureCore / ICDeviceBrowser

Description
On iOS 26.1 beta, calling requestControlAuthorization() or requestContentsAuthorization() always returns .notDetermined and never transitions to .authorized or .denied. This prevents apps from properly accessing device control or contents authorization. The issue occurs regardless of device state or prior requests.

Steps to Reproduce
1. Create and start an ICDeviceBrowser instance.
2. Call requestControlAuthorization() or requestContentsAuthorization().
3. Inspect the returned ICAuthorizationStatus.

Expected Result
• The system should prompt the user if necessary.
• A final status of either .authorized or .denied should be returned.

Actual Result
• The completion handler always reports .notDetermined.
• No user prompt appears and the status does not change.

Version / Build
• iOS 26.1 beta
• Xcode

Hardware
• iPhone 15 Pro, iPad Pro (M2)

Impact
This regression blocks development and testing of features relying on ImageCaptureCore. Applications depending on device browsing and content access cannot proceed, which significantly affects workflows involving external device integration.

Notes
This appears to be a regression compared to earlier iOS releases.
Replies: 1 · Boosts: 0 · Views: 167 · Created: Sep ’25
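A minimal repro sketch of the steps above. The completion-based Swift spelling of the ImageCaptureCore authorization call is written from memory and should be checked against the headers; treat it as an assumption rather than a verified signature.

    import ImageCaptureCore

    // Assumed repro: start a browser and request control authorization.
    // Per the report, on iOS 26.1 beta the completion always reports
    // .notDetermined and no prompt is shown.
    let browser = ICDeviceBrowser()

    func reproduceAuthorizationIssue() {
        browser.start()
        browser.requestControlAuthorization { status in
            print("Control authorization status: \(status)")
        }
    }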
Recording Constant Frame Rate at 4K 60 fps mode
Hi, we have created an app that allows recording 4K 60 fps video. We have noticed that sometimes the recording switches to 20 fps (the value 20 is constant), even though the resolution setting is 4K 60 fps. We are using AVCaptureDevice to invoke the camera. Has anyone experienced this problem before? What is unique about 20 fps? Why does it drop from 60 fps to 20 fps, and why not to some other number?
Replies: 1 · Boosts: 0 · Views: 187 · Created: Sep ’25
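A sketch of one thing worth ruling out, assuming device is the AVCaptureDevice being recorded from: unless the min and max frame durations are pinned, the system may adapt the frame rate downward (low light is a common trigger), so explicitly locking 60 fps on the active 4K format makes the intent unambiguous. Why a fallback would land on exactly 20 fps is something only Apple can answer.

    import AVFoundation

    // Sketch: pin the capture device to a constant 60 fps so the system
    // cannot silently adapt the rate downward.
    func lockFrameRate(_ device: AVCaptureDevice, fps: Int32 = 60) throws {
        // Bail out if the active format cannot reach the requested rate.
        guard device.activeFormat.videoSupportedFrameRateRanges
            .contains(where: { $0.maxFrameRate >= Double(fps) }) else { return }

        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        let duration = CMTime(value: 1, timescale: fps)
        device.activeVideoMinFrameDuration = duration
        device.activeVideoMaxFrameDuration = duration
    }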
Clean up render files saved to PHContentEditingOutput.renderedContentURL
I discovered that when editing photos with the PhotoKit API, PHContentEditingOutput's renderedContentURL points to a file in the app container's tmp directory with a filename that seems to follow the format render.<uuid>.JPG, and that file does not get deleted if the edit does not complete successfully (the user cancels the edit request, an error occurs, the app crashes, etc.). I understand the system is supposed to delete tmp files automatically every once in a while, but some users are noticing my app's Documents & Data inflating, so I'm considering deleting these render files each time the app is launched. But I don't want to delete everything in the tmp directory, as there could be other data in there. What's the best way to remove those temporary files? Does the filename always start with render. no matter the device language? I thought I'd delete files in NSTemporaryDirectory() with that prefix, but then I discovered that on Mac Catalyst the location is not the tmp directory directly; the files are in tmp/TemporaryItems/<bundleid>. Thanks!
Replies: 0 · Boosts: 0 · Views: 76 · Created: Oct ’25
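A sketch of the cleanup idea, sidestepping the Catalyst path difference noted above by deriving the directory from a renderedContentURL you already have during an edit. The render. prefix is an observation from the post, not a documented contract, so the prefix check is an assumption.

    import Foundation

    // Sketch: delete stale "render." files that live next to the current
    // rendered content URL, leaving everything else in tmp alone.
    func cleanUpRenderFiles(near renderedContentURL: URL) {
        let fm = FileManager.default
        let directory = renderedContentURL.deletingLastPathComponent()
        guard let files = try? fm.contentsOfDirectory(at: directory,
                                                      includingPropertiesForKeys: nil) else { return }
        for url in files
        where url.lastPathComponent.hasPrefix("render.") && url != renderedContentURL {
            try? fm.removeItem(at: url)
        }
    }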
LockedCameraCapture Does Not Launch The App from Lock Screen
My implementation of LockedCameraCapture does not launch my app when tapped from the Lock Screen, but when the same widget is in Control Center, it launches the app successfully.

Standard Xcode target template, Lock_Screen_Capture.swift:

    @main
    struct Lock_Screen_Capture: LockedCameraCaptureExtension {
        var body: some LockedCameraCaptureExtensionScene {
            LockedCameraCaptureUIScene { session in
                Lock_Screen_CaptureViewFinder(session: session)
            }
        }
    }

Lock_Screen_CaptureViewFinder.swift:

    import SwiftUI
    import UIKit
    import UniformTypeIdentifiers
    import LockedCameraCapture

    struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
        let session: LockedCameraCaptureSession
        var sourceType: UIImagePickerController.SourceType = .camera

        init(session: LockedCameraCaptureSession) {
            self.session = session
        }

        func makeUIViewController(context: Self.Context) -> UIImagePickerController {
            let imagePicker = UIImagePickerController()
            imagePicker.sourceType = sourceType
            imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
            imagePicker.cameraDevice = .rear
            return imagePicker
        }

        func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) { }
    }

Then I have my widget:

    struct CameraWidgetControl: ControlWidget {
        var body: some ControlWidgetConfiguration {
            StaticControlConfiguration(
                kind: "com.myCompany.myAppName.lock-screen") {
                ControlWidgetButton(action: MyAppCaptureIntent()) {
                    Label("Capture", systemImage: "camera.shutter.button.fill")
                }
            }
        }
    }

My AppIntent:

    struct MyAppContext: Codable {}

    struct MyAppCaptureIntent: CameraCaptureIntent {
        typealias AppContext = MyAppContext
        static let title: LocalizedStringResource = "MyAppCaptureIntent"
        static let description = IntentDescription("Capture photos and videos with MyApp.")

        @MainActor
        func perform() async throws -> some IntentResult {
            .result()
        }
    }

The Issue
The LockedCameraCapture widget does not launch my app when tapped from the Lock Screen. You get the Face ID prompt, and it just takes you to the Home Screen. But when the same widget is in Control Center, it launches the app successfully.

Error Message
When the widget is tapped on the Lock Screen, I get the following error:

LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}

Things I tried
• The widget image displays correctly.
• The App ID and provisioning profile seem to be fine, since the same code works when injected into the AVCam sample app using the same App IDs.
• The AppIntent file has target membership in both the Lock Screen capture extension and the widget.
• Everything compiles without errors or warnings.
Replies: 1 · Boosts: 0 · Views: 211 · Created: Oct ’25
How can I locate a UVC camera for PTZ control by AVCaptureDevice.unique_id
I'm writing a program to control a PTZ camera connected via USB. I can get the target camera's unique_id, along with other info provided by AVFoundation, but I don't know how to locate my target USB device to send a UVC control request. There are many cameras with the same VendorID and ProductID connected at a time, so I need a more exact way to find out which device is my target. It looks like the unique_id provided is (locationID<<32 | VendorID<<16 | ProductID) as a hex string, but I'm not sure I can always assume this behavior won't change. Is there a document that declares how AVFoundation generates the unique_id for a USB camera, so I can assume this conversion will always work? Or is there a way to send a PTZ control request to an AVCaptureDevice? https://stackoverflow.com/questions/40006908/usb-interface-of-an-avcapturedevice I have seen this similar question, but I worry that extracting LocationID+VendorID+ProductID from unique_id is programming to the implementation instead of the interface. So, is there any better way to control my camera?

Here's my example code for getting the unique_id:

    //
    // camera_unique_id_test.mm
    //
    // Test code: use C++ to get the AVCaptureDevice unique_id of every camera on the system.
    //
    // Build command:
    // clang++ -framework AVFoundation -framework CoreMedia -framework Foundation
    //     camera_unique_id_test.mm -o camera_unique_id_test
    //
    #include <iostream>
    #include <string>
    #include <vector>

    #import <AVFoundation/AVFoundation.h>
    #import <Foundation/Foundation.h>

    struct CameraInfo {
        std::string uniqueId;
    };

    std::vector<CameraInfo> getAllCameraDevices() {
        std::vector<CameraInfo> cameras;
        @autoreleasepool {
            NSArray<AVCaptureDevice*>* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
            AVCaptureDevice* defaultDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
            // Iterate over all devices
            for (AVCaptureDevice* device in devices) {
                CameraInfo info;
                // Get unique_id
                info.uniqueId = std::string([device.uniqueID UTF8String]);
                cameras.push_back(info);
            }
        }
        return cameras;
    }

    int main(int argc, char* argv[]) {
        std::vector<CameraInfo> cameras = getAllCameraDevices();
        for (size_t i = 0; i < cameras.size(); i++) {
            const CameraInfo& camera = cameras[i];
            std::cout << " Device " << (i + 1) << ":" << std::endl;
            std::cout << "   unique_id: " << camera.uniqueId << std::endl;
        }
        return 0;
    }

And here's my code for UVC control:

    // clang++ -framework Foundation -framework IOKit uvc_test.cpp -o uvc_test
    #include <iostream>

    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/IOCFPlugIn.h>
    #include <IOKit/IOKitLib.h>
    #include <IOKit/IOMessage.h>
    #include <IOKit/usb/IOUSBLib.h>
    #include <IOKit/usb/USB.h>

    CFStringRef CreateCFStringFromIORegistryKey(io_service_t ioService, const char* key) {
        CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
        if (!keyString) return nullptr;
        CFStringRef result = static_cast<CFStringRef>(
            IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault, kIORegistryIterateRecursively));
        CFRelease(keyString);
        return result;
    }

    std::string GetStringFromIORegistry(io_service_t ioService, const char* key) {
        CFStringRef cfString = CreateCFStringFromIORegistryKey(ioService, key);
        if (!cfString) return "";
        char buffer[256];
        Boolean success = CFStringGetCString(cfString, buffer, sizeof(buffer), kCFStringEncodingUTF8);
        CFRelease(cfString);
        return success ? std::string(buffer) : std::string("");
    }

    uint32_t GetUInt32FromIORegistry(io_service_t ioService, const char* key) {
        CFStringRef keyString = CFStringCreateWithCString(kCFAllocatorDefault, key, kCFStringEncodingUTF8);
        if (!keyString) return 0;
        CFNumberRef number = static_cast<CFNumberRef>(
            IORegistryEntryCreateCFProperty(ioService, keyString, kCFAllocatorDefault, kIORegistryIterateRecursively));
        CFRelease(keyString);
        if (!number) return 0;
        uint32_t value = 0;
        CFNumberGetValue(number, kCFNumberSInt32Type, &value);
        CFRelease(number);
        return value;
    }

    int main() {
        // Get matching dictionary for USB devices
        CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);

        // Get iterator for matching services
        io_iterator_t serviceIterator;
        IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, &serviceIterator);

        // Iterate through matching devices
        io_service_t usbService;
        while ((usbService = IOIteratorNext(serviceIterator))) {
            uint32_t locationId = GetUInt32FromIORegistry(usbService, "locationID");
            uint32_t vendorId = GetUInt32FromIORegistry(usbService, "idVendor");
            uint32_t productId = GetUInt32FromIORegistry(usbService, "idProduct");

            IOCFPlugInInterface** plugInInterface = nullptr;
            IOUSBDeviceInterface** deviceInterface = nullptr;
            SInt32 score;

            // Get device plugin interface
            IOCreatePlugInInterfaceForService(usbService, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID,
                                              &plugInInterface, &score);

            // Get device interface
            (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                               (LPVOID*)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);

            // Try to find the UVC control interface using CreateInterfaceIterator
            io_iterator_t interfaceIterator;
            IOUSBFindInterfaceRequest interfaceRequest;
            interfaceRequest.bInterfaceClass = kUSBVideoInterfaceClass;      // 14
            interfaceRequest.bInterfaceSubClass = kUSBVideoControlSubClass;  // 1
            interfaceRequest.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
            interfaceRequest.bAlternateSetting = kIOUSBFindInterfaceDontCare;

            (*deviceInterface)->CreateInterfaceIterator(deviceInterface, &interfaceRequest, &interfaceIterator);
            (*deviceInterface)->Release(deviceInterface);

            io_service_t usbInterface = IOIteratorNext(interfaceIterator);
            IOObjectRelease(interfaceIterator);

            if (usbInterface) {
                std::cout << "Got a UVC device with:" << std::endl;
                std::cout << "locationId: " << std::hex << locationId << std::endl;
                std::cout << "vendorId: " << std::hex << vendorId << std::endl;
                std::cout << "productId: " << std::hex << productId << std::endl << std::endl;
                IOObjectRelease(usbInterface);
            }
            IOObjectRelease(usbService);
        }
        IOObjectRelease(serviceIterator);
    }
Replies: 2 · Boosts: 0 · Views: 215 · Created: Oct ’25
iOS 26 AVCaptureDevice continuousAutoFocus not working
Device: iPhone 16 Pro Max
OS: iOS 26.0.1
AVCaptureDevice.DeviceType: builtInUltraWideCamera
activeFormat: <AVCaptureDeviceFormat: 0x10ffb9ac0 'vide'/'420v' 1440x1080, { 1- 60 fps}, photo dims:{1440x1080,2016x1512}, fov:101.022, gdc fov:103.625, binned, max zoom:94.50 (upscales @1.40), system zoom range:1.0-3.0, AF System:1, ISO:15.0-3600.0, SS:0.000023-1.000000, system exposure bias range:-2.0-2.0, supports multicam, supports CS RoI, supports Smart Style, supports Smudge Detection>
API: device.isFocusModeSupported(.continuousAutoFocus) == true
Setting: device.focusMode = .continuousAutoFocus

The setting succeeds, but continuous autofocus does not actually work.
Replies: 0 · Boosts: 0 · Views: 64 · Created: Oct ’25
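One thing worth double-checking, sketched below with device as the ultra-wide camera from the report: focusMode must be set while holding lockForConfiguration(), and a leftover focusPointOfInterest from earlier code can make continuous AF look stuck. If the existing code already does all of this, the behavior described reads like an OS-level issue worth a Feedback report.

    import AVFoundation

    // Sketch: enable continuous autofocus under a configuration lock and
    // reset the point of interest to the center.
    func enableContinuousAutoFocus(on device: AVCaptureDevice) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        if device.isFocusModeSupported(.continuousAutoFocus) {
            if device.isFocusPointOfInterestSupported {
                device.focusPointOfInterest = CGPoint(x: 0.5, y: 0.5) // center
            }
            device.focusMode = .continuousAutoFocus
        }
    }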
How-to highlight people in a Vision Pro app using Compositor Services
Fundamentally, my questions are: is there a known transform I can apply to a given (pixel) position (passed into a Metal fragment function) to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?

My goal is to highlight people in a Vision Pro app using Compositor Services. To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:

    minFrameDuration: 0.03
    maxFrameDuration: 0.033333335
    frameSize: (1920.0, 1080.0)
    pixelFormat: 875704422
    cameraType: main
    cameraPositions: [left, right]
    cameraRectification: mono

From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer). I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:

    // NOTE: This block of code and all following code blocks contain simplified
    // representations of my code for clarity's sake.
    var request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    ...
    let lHandler = VNSequenceRequestHandler()
    let rHandler = VNSequenceRequestHandler()
    ...

    func processBuffers() async {
        try lHandler.perform([request], on: lBuffer)
        guard let lMask = request.results?.first?.pixelBuffer else {...}

        try rHandler.perform([request], on: rBuffer)
        guard let rMask = request.results?.first?.pixelBuffer else {...}

        appModel.latestPersonMasks = (lMask, rMask)
    }

I store the two resulting CVPixelBuffers in my appModel. For both of these buffers, i.e. the grayscale masks:

    width (in pixels) = 512
    height (in pixels) = 384
    bytes per row = 512
    plane count = 0
    pixel format type = 1278226488

I am using Compositor Services to render my content in Immersive Space. My implementation of Compositor Services is based on the code from "Interacting with virtual content blended with passthrough". Within Shaders.metal, the tint's fragment shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline):

    fragment float4 tintFragmentShader(
        TintInOut in [[stage_in]],
        ushort amp_id [[amplification_id]],
        texture2d<uint> leftMask [[texture(0)]],
        texture2d<uint> rightMask [[texture(1)]]
    )
    {
        if (in.color.a <= 0.0) {
            discard_fragment();
        }

        float2 uv;
        if (amp_id == 0) { // LEFT
            uv = ??????????????????????;
        } else {           // RIGHT
            uv = ??????????????????????;
        }

        constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear);

        // Sample the PersonSegmentation grayscale mask
        float maskValue = 0.0;
        if (amp_id == 0) { // LEFT
            if (leftMask.get_width() > 0) {
                maskValue = leftMask.sample(linearSampler, uv).r;
            }
        } else {           // RIGHT
            if (rightMask.get_width() > 0) {
                maskValue = rightMask.sample(linearSampler, uv).r;
            }
        }

        if (maskValue > 250) {
            return float4(1.0, 1.0, 1.0, 0.5);
        }

        return in.color;
    }

I need to correctly sample the masks for a given fragment. The LayerRenderer.Layout is set to .layered. From the developer documentation: "A layout that specifies each view's content as a slice of a single texture."

Using the Metal debugger, I know that the final render target texture for each view/eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56. The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9). The grayscale CVPixelBuffer returned by the VNGeneratePersonSegmentationRequest is 512x384 (4:3). All of these aspect ratios are different.

My questions come down to: is there a known transform I can apply to a given (pixel) position to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?

Within the tint's vertex shader, after applying the modelViewProjectionMatrix, I have tried every version I have been able to find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct clip-space position (maybe = pixel-space position.xy / (viewport size * 0.5)???) of the grayscale masks, but nothing has worked. The "highlight" of the person segmentation is off: scaled a little too big, offset a little too far up and off to the side.
Replies: 1 · Boosts: 0 · Views: 393 · Created: Oct ’25
New API to control front camera orientation like native Camera app on iPhone 17 with iOS 26?
The iPhone 17’s front camera with the new 18MP square sensor and iOS 26’s Center Stage feature can auto-rotate between portrait and landscape like the native iOS Camera app. Is there a Swift or AVFoundation API that allows developers to manually control front camera orientation in the same way the native Camera app does? Or is this auto-rotation strictly handled by the system without public API access?
Replies: 0 · Boosts: 0 · Views: 170 · Created: Oct ’25
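Not a confirmed answer to the question above, just a sketch of the closest public API I'm aware of: AVCaptureDevice.RotationCoordinator (iOS 17+) reports the rotation angles the system camera applies to preview and capture, which you can observe and apply to your own connections. Whether the iPhone 17 front camera's automatic square-sensor reframing is exposed beyond this is unclear to me.

    import AVFoundation

    // Sketch: observe the capture rotation angle the system recommends for
    // the given device. Keep both returned objects alive for as long as you
    // want updates.
    func observeFrontCameraRotation(device: AVCaptureDevice,
                                    previewLayer: CALayer?) -> (AVCaptureDevice.RotationCoordinator, NSKeyValueObservation) {
        let coordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                              previewLayer: previewLayer)
        let observation = coordinator.observe(\.videoRotationAngleForHorizonLevelCapture,
                                              options: [.initial, .new]) { coordinator, _ in
            // Apply this angle to the photo/video output connection
            // (videoRotationAngle) to match the system camera's behavior.
            print("Capture rotation angle: \(coordinator.videoRotationAngleForHorizonLevelCapture)")
        }
        return (coordinator, observation)
    }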
APMP & Photography?
Hi, I'm a fan of the gallery in Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can take the next step from Spatial photos to Immersive photos. I have a device that can give me 12K x 6K fisheye images in HDR, but it can't do so at a frame rate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead. Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC and infers the projection in a similar way to what the "ProjectedMediaConversion" demo does for video? It would be great to have 180° VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
Replies: 2 · Boosts: 0 · Views: 293 · Created: Oct ’25
Error when capturing a high-resolution frame with depth data enabled in ARKit
Problem Description

(1) I am using ARKit in an iOS app to provide AR capabilities. Specifically, I'm trying to use ARSession's captureHighResolutionFrame(using:) method to capture a high-resolution frame along with its corresponding depth data:

    open func captureHighResolutionFrame(using photoSettings: AVCapturePhotoSettings?) async throws -> ARFrame

(2) However, when I attempt to do so, the call fails at runtime with the following error, which I captured from the Xcode debugger:

    [AVCapturePhotoOutput capturePhotoWithSettings:delegate:] settings.depthDataDeliveryEnabled must be NO if self.isDepthDataDeliveryEnabled is NO

Code Snippet Explanation

(1) ARConfig and ARSession initialization. The following code configures the ARConfiguration and ARSession. A key part of this setup is setting the videoFormat to the one recommended for high-resolution frame capturing, as suggested by the documentation.

    func start(imagesDirectory: URL, configuration: Configuration = Configuration()) {
        // ... basic setup ...

        let arConfig = ARWorldTrackingConfiguration()
        arConfig.planeDetection = [.horizontal, .vertical]

        // Enable various frame semantics for depth and segmentation
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
            arConfig.frameSemantics.insert(.smoothedSceneDepth)
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            arConfig.frameSemantics.insert(.sceneDepth)
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            arConfig.frameSemantics.insert(.personSegmentationWithDepth)
        }

        // Set the recommended video format for high-resolution captures
        if let videoFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
            arConfig.videoFormat = videoFormat
            print("Enabled: High-Resolution Frame Capturing by selecting recommended video format.")
        }

        arSession.run(arConfig, options: [.resetTracking, .removeExistingAnchors])
        // ...
    }

(2) Capturing the high-resolution frame. The code below is intended to manually trigger the capture of a high-resolution frame. The goal is to obtain both a high-resolution color image and its associated high-resolution depth data. To achieve this, I explicitly set the isDepthDataDeliveryEnabled property of the AVCapturePhotoSettings object to true.

    func requestImageCapture() async {
        // ... guard statements ...
        print("Manual image capture requested.")

        if #available(iOS 16.0, *) { // Assuming 16.0+ for this API
            if let defaultSettings = arSession.configuration?.videoFormat.defaultPhotoSettings {
                // Create a mutable copy from the default settings, as recommended
                let photoSettings = AVCapturePhotoSettings(from: defaultSettings)

                // Explicitly enable depth data delivery for this capture request
                photoSettings.isDepthDataDeliveryEnabled = true

                do {
                    let highResFrame = try await arSession.captureHighResolutionFrame(using: photoSettings)
                    print("Successfully captured a high-resolution frame.")

                    if let initialDepthData = highResFrame.capturedDepthData {
                        // Process depth data...
                    } else {
                        print("High-resolution frame was captured, but it contains no depth data.")
                    }
                } catch {
                    // The exception is caught here
                    print("Error capturing high-resolution frame: \(error.localizedDescription)")
                }
            }
        }
        // ...
    }

Issue Confirmation & Question

(1) Through debugging, I have confirmed the following behavior: if I call captureHighResolutionFrame without providing the photoSettings parameter, or if photoSettings.isDepthDataDeliveryEnabled is set to false, the method successfully returns a high-resolution ARFrame, but its capturedDepthData is nil.

(2) The error message clearly indicates that settings.depthDataDeliveryEnabled can only be true if the underlying AVCapturePhotoOutput instance's own isDepthDataDeliveryEnabled property is also true.

(3) However, within the context of ARKit and ARSession, I cannot find any public API that would allow me to explicitly access and configure the underlying AVCapturePhotoOutput instance that ARSession manages.

(4) My question is: is there a way to configure the ARSession's internal AVCapturePhotoOutput to enable its isDepthDataDeliveryEnabled property? Or is simultaneously capturing a high-resolution frame and its associated depth data simply not a supported use case in the current ARKit framework?
Replies: 1 · Boosts: 0 · Views: 290 · Created: Nov ’25
LockedCameraCaptureExtension and Sharing User Preferences
I have a main app that saves preferences to UserDefaults.standard. There is one preference that the user is able to toggle, isRawOn:

    UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn")

Now I have a LockedCameraCaptureExtension which needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on the next launch. The main app and the extension run in separate containers, and the preferences are not shared, for privacy reasons. Apple mentions using the appContext of CameraCaptureIntent, but I'm not sure how the above scenario is possible through that, unless I am missing something. Apple reference. What I have for CameraCaptureIntent:

    @available(iOS 18, *)
    struct LaunchMyAppControlIntent: CameraCaptureIntent {
        typealias AppContext = MyAppContext

        static let title: LocalizedStringResource = "LaunchMyAppControlIntent"
        static let description = IntentDescription("Capture photos with MyApp.")

        @MainActor
        func perform() async throws -> some IntentResult {
            .result()
        }
    }
Replies: 1 · Boosts: 0 · Views: 295 · Created: Nov ’25
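A sketch of the appContext route mentioned above, assuming the static appContext / updateAppContext members of CameraCaptureIntent behave as described in the linked documentation (the exact signatures below are written from memory, so verify them): give MyAppContext an isRawOn field and have both the main app and the extension read and write it through the intent type instead of UserDefaults.standard.

    import AppIntents

    // Assumed: MyAppContext gains the shared flag (replacing the empty struct).
    struct MyAppContext: Codable {
        var isRawOn: Bool = false
    }

    extension LaunchMyAppControlIntent {
        // Assumed API: CameraCaptureIntent.appContext / updateAppContext(_:)
        static func saveIsRawOn(_ value: Bool) async {
            var context = (try? await Self.appContext) ?? MyAppContext()
            context.isRawOn = value
            try? await Self.updateAppContext(context)
        }

        static func loadIsRawOn() async -> Bool {
            ((try? await Self.appContext) ?? MyAppContext()).isRawOn
        }
    }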
PHPhotoLibrary.performChanges completionHandler not called when deleting assets on iOS 26
In my app, I use the API provided by the Photos framework to delete specified photos. After upgrading to iOS 26, the delete function no longer works on some devices. The API never triggers the system confirmation dialog, and the completionHandler is never called. In the iOS Photos app, deletion works correctly on the same assets, but calling the API from my app does not work.

Steps to Reproduce
1. Make sure the app has Full Photo Library Access.
2. Execute the following code:

    PHPhotoLibrary.shared().performChanges({
        let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
        PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
    }, completionHandler: completionHandler)

Expected Behavior
The system should present a confirmation dialog asking the user to delete the selected photos. After the user confirms, the deletion should occur, and the completionHandler should be called with success or an error.

Actual Behavior
The system delete confirmation dialog does not appear. The completionHandler is never called.

Environment
iOS versions: 26.1 / 26.0.1

It looks like an API bug. I want to check: is this a known issue, and will it be fixed? Thanks
Replies: 2 · Boosts: 0 · Views: 208 · Created: Nov ’25
Launch The Main App from LockedCameraCapture
If the app is launched from LockedCameraCapture and the settings button is tapped, I need to launch the main app.

CameraViewController:

    func settingsButtonTapped() {
        #if isLockedCameraCaptureExtension
        // App is launched from the Lock Screen
        // Launch main app here...
        #else
        // App is launched from the Home Screen
        self.showSettings(animated: true)
        #endif
    }

In this document: https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen Apple asks you to use:

    func launchApp(with session: LockedCameraCaptureSession, info: String) {
        Task {
            do {
                let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
                activity.userInfo = [UserInfoKey: info]
                try await session.openApplication(for: activity)
            } catch {
                StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
            }
        }
    }

However, the documentation states that this should be placed within the extension code, LockedCameraCapture. If I do that, how can I call it all the way down from the main app's CameraViewController?
Replies: 3 · Boosts: 0 · Views: 442 · Created: Nov ’25
builtInLiDARDepthCamera doesn't work on the 2020 iPad Pro on iOS 26
On iOS 26.1, this throws on the 2020 iPad Pro (4th gen) but works fine on an M4 iPad Pro or an iPhone 15 Pro:

    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }

It's just the standard code from Apple's own sample, so it obviously used to work: https://developer.apple.com/documentation/AVFoundation/capturing-depth-using-the-lidar-camera Does it fail because Apple has silently dropped support for the older LiDAR sensor used prior to the M4 iPad Pro, or is there another reason? What about the 5th and 6th gen iPad Pro, does it still work on those?
Replies: 2 · Boosts: 0 · Views: 366 · Created: 4w ago
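A small diagnostic sketch for the question above: rather than relying on default(_:for:position:), list which depth-capable device types the running OS actually vends and fall back if LiDAR is missing. This at least distinguishes "device type not offered on this hardware/OS combination" from a session-configuration failure.

    import AVFoundation

    // Sketch: prefer the LiDAR depth camera, fall back to dual-camera
    // disparity depth if the OS doesn't offer it on this hardware.
    func bestDepthCamera() -> AVCaptureDevice? {
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInLiDARDepthCamera, .builtInDualWideCamera, .builtInDualCamera],
            mediaType: .video,
            position: .back)
        for device in discovery.devices {
            print("Available: \(device.deviceType.rawValue)")
        }
        // DiscoverySession returns devices in the order of `deviceTypes`,
        // so the first entry is the most capable one the OS is willing to vend.
        return discovery.devices.first
    }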