Photos & Camera

Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic

Post · Replies · Boosts · Views · Activity

How to get a callback once a requested frameDuration change has been applied?
When changing a camera's exposure, AVFoundation provides a callback that reports the timestamp of the first frame captured with the new exposure duration: AVCaptureDevice.setExposureModeCustom(duration:iso:completionHandler:). I want a similar callback when changing the frame duration. After setting AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration to a new value, how can I compute the index or the timestamp of the first camera frame captured with the newly set frame duration?
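As far as I know there is no completion handler for frame-duration changes; one hedged workaround is to inspect the sample buffers themselves and note the first one whose duration matches the newly requested value. A minimal sketch, where the 1/240 s target and the delegate wiring are assumptions for illustration:

import AVFoundation

final class FrameDurationObserver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // The frame duration the device was just asked for, e.g. 1/240 s (assumed value).
    var targetDuration = CMTime(value: 1, timescale: 240)
    var onFirstMatchingFrame: ((CMTime) -> Void)?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each sample buffer carries its own duration; compare it with the requested one.
        let duration = CMSampleBufferGetDuration(sampleBuffer)
        if CMTimeCompare(duration, targetDuration) == 0, let callback = onFirstMatchingFrame {
            onFirstMatchingFrame = nil   // fire only once
            callback(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
    }
}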
0
0
503
Aug ’25
PHPickerFilter doesn't always apply in Collections tab when using PHPickerViewController
Hi everyone, I'm running into an issue with PHPickerFilter when using PHPickerViewController. When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn't always apply: users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections). Here's a simplified snippet of my setup:

var configuration = PHPickerConfiguration(photoLibrary: .shared())
configuration.selectionLimit = 1
var filters = [PHPickerFilter]()
filters.append(.videos)
filters.append(.livePhotos)
configuration.filter = PHPickerFilter.any(of: filters)
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true)

Expected behavior: the picker should consistently respect the filter across both the Photos and Collections tabs, only showing assets that match the filter.
Actual behavior: the filter applies correctly in the Photos tab, but in the Collections tab other asset types are still visible and selectable.
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration? Thanks in advance!
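Until the picker behavior is clarified, one defensive workaround (not a fix for the picker itself) is to re-check what the user actually selected in the delegate callback and ignore anything that is not a video or Live Photo. A rough sketch, assuming the standard PHPickerViewControllerDelegate method:

import UIKit
import Photos
import PhotosUI
import UniformTypeIdentifiers

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    for result in results {
        let provider = result.itemProvider
        if provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) {
            // Handle the selected video.
        } else if provider.canLoadObject(ofClass: PHLivePhoto.self) {
            // Handle the selected Live Photo.
        } else {
            // A plain image slipped past the filter; ignore it.
            continue
        }
    }
}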
2
0
345
Oct ’25
Blurry Depth Data since iPhone 13
I tested the accuracy of the depth map on iPhone 12, 13, 14, 15, and 16, and found that the variance of the depth map on models newer than the iPhone 12 is significantly greater than on the iPhone 12. Enabling depth filtering causes the depth data to be affected by the previous frame, adding more unnecessary noise, especially when the phone is moving, which is a problem for high-precision reconstruction. I tried adding depth map smoothing in post-processing to address the large depth deviation, but the results are still poor. Are there any depth map smoothing solutions already announced by Apple?
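I'm not aware of a dedicated depth-smoothing API from Apple beyond the built-in filtering options, so for comparison here is a naive temporal-averaging sketch in post-processing. It assumes the depth maps have been copied into [Float] arrays of identical size, which is an assumption about your pipeline:

import Accelerate

// Average the last few depth maps to damp per-frame variance (window size is arbitrary).
final class DepthSmoother {
    private var history: [[Float]] = []
    private let window = 4

    func smooth(_ depth: [Float]) -> [Float] {
        history.append(depth)
        if history.count > window { history.removeFirst() }
        var sum = [Float](repeating: 0, count: depth.count)
        for frame in history {
            sum = vDSP.add(sum, frame)                // element-wise accumulate
        }
        return vDSP.divide(sum, Float(history.count)) // element-wise mean
    }
}

Note that temporal averaging trades noise for lag while the camera moves, which may or may not be acceptable for reconstruction.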
1
0
63
Sep ’25
Unable to use writeHEIFRepresentation - failed to add image to the PhotoCompressionSession.
I'm getting an error writing a CIImage as a HEIF image:

// Create CIImage directly from the pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])

// Write the HEIC synchronously
do {
    try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
} catch {
    // The error below is thrown here.
}

The error I'm getting is:

Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}

Both try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options) and try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace) work. I also verified that the code works on iOS 18. Is there something new we need to do for HEIF images? Thanks in advance
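For comparison, this is roughly the call pattern that works on earlier OS releases, with a fallback to the 10-bit HEIF writer so you can tell whether the failure is specific to the 8-bit HEIC path. The solid-color CIImage, Display P3 color space, and output path here are assumptions made so the snippet is self-contained:

import Foundation
import CoreImage

let ciContext = CIContext()
let ciImage = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))
let colorSpace = CGColorSpace(name: CGColorSpace.displayP3)!
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("test.heic")

do {
    // The 8-bit HEIC path that reportedly fails on the newer OS.
    try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
    print("HEIC written to \(url.path)")
} catch {
    print("HEIC write failed: \(error)")
    // Fallback: 10-bit HEIF (iOS 15+) to see whether only the 8-bit path is affected.
    try? ciContext.writeHEIF10Representation(of: ciImage, to: url, colorSpace: colorSpace, options: [:])
}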
7
0
614
Oct ’25
iPhone 17 smart framing API not working
I tried to modify the AVCam sample code by copying the smart framing monitor code from https://developer.apple.com/documentation/avfoundation/adopting-smart-framing-in-your-camera-app#Configure-the-smart-framing-monitor (the smart framing monitor section). I can confirm that the activeFormat supports smart framing, but the monitor's supported frames value is always nil. In another project the supported value is present, but the observation is never triggered; when I keep printing the recommended frame, it is always nil. Could an engineer embed this code into the AVCam sample rather than posting a few code snippets?
0
0
140
Sep ’25
Orientation does not work on iPhone 17 and above.
I'm receiving output from an AVCaptureSession and capturing an image using Vision, but the image comes out in landscape orientation instead of portrait. Even when I set the orientation to .up on the CIImage, CGImage, and UIImage, the image is still landscape. On iPhone 16 and earlier the image is output in portrait orientation, but on iPhone 17 and later it is output in landscape. Please help.
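In case it helps narrow this down, on iOS 17 and later the capture connection's rotation can be pinned explicitly, and Vision can be told the buffer's orientation rather than relying on defaults. A hedged sketch; the 90° angle for portrait and the .right orientation are assumptions that may differ per device and camera position:

import AVFoundation
import Vision

func configurePortrait(on output: AVCaptureVideoDataOutput) {
    // Pin the connection to a fixed rotation before frames are delivered (iOS 17+ API).
    if let connection = output.connection(with: .video),
       connection.isVideoRotationAngleSupported(90) {
        connection.videoRotationAngle = 90   // assumed portrait angle; verify on device
    }
}

func detectFaces(in pixelBuffer: CVPixelBuffer) throws {
    // State the buffer orientation explicitly when handing it to Vision.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    try handler.perform([VNDetectFaceRectanglesRequest()])
}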
0
0
90
Sep ’25
How to use Front UW and TrueDepth in iPad
I want to use both the front ultra-wide and TrueDepth cameras on an iPad that has a front ultra-wide camera. First, I used only the front builtInDualCamera via AVFoundation and tried every format available for builtInDualCamera, but none of them could capture ultra-wide. Second, I tried using the front builtInDualCamera and builtInUltraWideCamera together, but there was no combination that allowed builtInUltraWideCamera and builtInDualCamera at the same time. Is there any way to do this?
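One way to answer this empirically is to ask AVFoundation which device combinations the hardware supports in a single multi-cam session; if a front ultra-wide plus TrueDepth pairing is not listed, it is not available on that iPad. A small sketch, where the exact device types to query are an assumption:

import AVFoundation

// Print every front-camera combination this device supports in one AVCaptureMultiCamSession.
func printSupportedFrontMultiCamSets() {
    guard AVCaptureMultiCamSession.isMultiCamSupported else {
        print("Multi-cam capture is not supported on this device.")
        return
    }
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInUltraWideCamera, .builtInTrueDepthCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .front)
    for deviceSet in discovery.supportedMultiCamDeviceSets {
        print(deviceSet.map(\.localizedName))
    }
}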
0
0
142
Sep ’25
Error saving image to Camera Roll on iPhone 17 Pro
I'm experiencing an issue with my app when saving images to the camera roll. This is intermittent, but it happens several times a day. The error I receive is the following: Connection to assetsd was interrupted - assetsd exited, died, or closed the photo library Error getting remote object proxy for -[PLNonBindingAssetsdPhotoKitClient sendChangesRequest:reply:]_block_invoke: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service} PhotoKit XPC proxy is invalid. Dropping request on the floor and returning an error: Error Domain=PHPhotosErrorDomain Code=3301 "(null)" (underlying error Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service}) CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.} CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.} My code is unchanged from using my app daily on an iPhone 16 Pro with iOS 26. I never saw the issue on this device. Here is an excerpt from my code for saving the image: var localIdentifier = String() PHPhotoLibrary.shared().performChanges({ let albumChangeRequest = PHAssetCollectionChangeRequest(for: album) let assetCreationRequest = PHAssetCreationRequest.forAsset() let options = PHAssetResourceCreationOptions() assetCreationRequest.addResource(with: .photo, data: imageData, options: options) assetCreationRequest.creationDate = Date.now let placeHolder = assetCreationRequest.placeholderForCreatedAsset albumChangeRequest?.addAssets([placeHolder!] as NSArray) if placeHolder != nil { localIdentifier = (placeHolder?.localIdentifier)! } }) { (didSucceed, error) in OperationQueue.main.addOperation({ didSucceed ? success(localIdentifier) : failure(error) }) } I'm not sure why this would be device specific but I have had users with iPhone 17 Pro and iPhone Air reporting the issue. Alex
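Since the XPC connection to assetsd appears to drop intermittently rather than deterministically, one pragmatic mitigation while the underlying issue is investigated is to retry the save when performChanges reports a failure. A hedged sketch of a retry wrapper; the retry count and 0.5 s back-off are arbitrary assumptions:

import Foundation
import Photos

// Retry a photo-library change block a few times before surfacing the error.
func performChangesWithRetry(_ changes: @escaping () -> Void,
                             attemptsLeft: Int = 3,
                             completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges(changes) { success, error in
        if success || attemptsLeft <= 1 {
            completion(success, error)
        } else {
            // Brief back-off, then try the same change block again.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                performChangesWithRetry(changes, attemptsLeft: attemptsLeft - 1, completion: completion)
            }
        }
    }
}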
0
0
84
Sep ’25
Error capturing ProRAW using iPhone 17 Pro Telephoto with photoQualityPrioritization set to .quality
Hey, I'm having a very strange issue on my iPhone 17 Pro. I'm trying to capture a 12MP ProRAW image using the Telephoto lens with photoQualityPrioritization set to .quality. Unfortunately I receive this error when trying to capture the image:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x134f7a1f0 {Error Domain=NSOSStatusErrorDomain Code=-16802 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-16802), AVErrorRecordingFailureDomainKey=4, NSLocalizedDescription=The operation could not be completed}

The photo captures correctly at 7.9x zoom; it's only a problem when the zoom goes over 8x. Also, it's only this particular combination of settings that causes the issue. I'm able to capture an image if I do any one of the following:
Set the quality prioritization to .balanced
Set max dimensions to 48MP
Capture a JPEG image instead of a ProRAW image
Use the TripleCamera fusion lens
Any help would be greatly appreciated. Alex
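For reference, this is roughly the settings configuration being described, as a hedged sketch rather than a fix; it assumes a photo output that already has Apple ProRAW enabled, and the 4032x3024 dimensions stand in for the 12MP request:

import AVFoundation

func makeProRAWSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    // Pick the first Apple ProRAW pixel format the output offers.
    guard output.isAppleProRAWEnabled,
          let rawFormat = output.availableRawPhotoPixelFormatTypes.first(where: {
              AVCapturePhotoOutput.isAppleProRAWPixelFormat($0)
          }) else { return nil }

    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    settings.photoQualityPrioritization = .quality                              // the prioritization that triggers the error
    settings.maxPhotoDimensions = CMVideoDimensions(width: 4032, height: 3024)  // assumed 12MP dimensions
    return settings
}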
1
0
239
Sep ’25
PHFetchOptions.includeHiddenAssets does not correctly return asset by id
When attempting to access a PHAsset that is in the hidden folder on iOS 26, the PHFetchResult always returns no items, even when the user has granted full access to Photos and even when includeHiddenAssets is true. This is the code suggested by ChatGPT; it always fails:

public func fetchAsset(withLocalIdentifier identifier: String) -> PSSPHAssetImplementing? {
    // First try the direct fetch by identifier (fast path)
    let directResult = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
    if let asset = directResult.firstObject {
        return build(from: asset)
    }
    // Fallback: fetch all assets including hidden, then filter manually
    let options = PHFetchOptions()
    options.includeHiddenAssets = true
    let allAssets = PHAsset.fetchAssets(with: options)
    for index in 0..<allAssets.count {
        let asset = allAssets.object(at: index)
        if asset.localIdentifier == identifier {
            return build(from: asset)
        }
    }
    return nil
}

Is it no longer possible to retrieve a hidden photo in iOS 26?
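One thing worth trying before falling back to the full manual scan: the identifier-based fetch also accepts PHFetchOptions, so includeHiddenAssets can be applied on the fast path too. A small sketch; whether hidden assets are returned at all may still depend on the user's Hidden Album settings on recent iOS versions, which is an assumption here:

import Photos

func fetchAssetIncludingHidden(withLocalIdentifier identifier: String) -> PHAsset? {
    let options = PHFetchOptions()
    options.includeHiddenAssets = true
    // Pass the options to the direct identifier fetch instead of nil.
    let result = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: options)
    return result.firstObject
}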
1
0
283
Sep ’25
ICDeviceBrowser authorization requests always return ICAuthorizationStatusNotDetermined on iOS 26.1 beta
Area: ImageCaptureCore / ICDeviceBrowser

Description: On iOS 26.1 beta, calling requestControlAuthorization() or requestContentsAuthorization() always returns .notDetermined and never transitions to .authorized or .denied. This prevents apps from properly accessing device control or contents authorization. The issue occurs regardless of device state or prior requests.

Steps to Reproduce:
1. Create and start an ICDeviceBrowser instance.
2. Call requestControlAuthorization() or requestContentsAuthorization().
3. Inspect the returned ICAuthorizationStatus.

Expected Result: The system should prompt the user if necessary, and a final status of either .authorized or .denied should be returned.

Actual Result: The completion handler always reports .notDetermined. No user prompt appears and the status does not change.

Version / Build: iOS 26.1 beta, Xcode

Hardware: iPhone 15 Pro, iPad Pro (M2)

Impact: This regression blocks development and testing of features relying on ImageCaptureCore. Applications depending on device browsing and content access cannot proceed, which significantly affects workflows involving external device integration.

Notes: This appears to be a regression compared to earlier iOS releases.
1
0
167
Sep ’25
Recording Constant Frame Rate at 4K 60 fps mode
Hi, we have created an app that records 4K 60 fps video. We have noticed that the recording sometimes switches to 20 fps (the value 20 is constant) even though the resolution setting is 4K 60 fps. We are using AVCaptureDevice to invoke the camera. Has anyone experienced this problem before? What is unique about 20 fps? Why does it drop from 60 fps to 20 fps and not to some other value?
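One thing to check: if the frame duration is not locked on the device, the system is free to lower the frame rate adaptively (for low light or thermals). Pinning both bounds to 1/60 requests a constant frame rate. A hedged sketch, assuming the active format actually supports 60 fps at 4K:

import AVFoundation

func lockFrameRate(on device: AVCaptureDevice, fps: Int32 = 60) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    let duration = CMTime(value: 1, timescale: fps)
    // Setting min and max to the same value asks for a constant frame rate
    // instead of letting the system drop to a lower adaptive rate.
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
}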
1
0
187
Oct ’25
Clean up render files saved to PHContentEditingOutput.renderedContentURL
I discovered that when editing photos with the PhotoKit API, PHContentEditingOutput's renderedContentURL is a file in the app container's tmp directory with a filename that seems to follow the format render.<uuid>.JPG, and that file does not get deleted if the edit does not complete successfully (the user cancels the edit request, an error occurs, the app crashes, etc.). I understand the system is supposed to delete tmp files automatically every once in a while, but some users are noticing that my app's Documents & Data size inflates, so I'm considering deleting these render files each time the app is launched. But I don't want to delete everything in the tmp directory, as there could be other data in there. What's the best way to remove those temporary files? Does the filename always start with render. no matter the device language? I thought I'd delete files in NSTemporaryDirectory() with that prefix, but then I discovered that in Mac Catalyst the files aren't directly in the tmp directory; they're in tmp/TemporaryItems/<bundleid>. Thanks!
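For what it's worth, here is the cleanup I would sketch: recursively enumerate the app's temporary directory (which also covers a TemporaryItems/<bundle id> subdirectory on Mac Catalyst if it lives under tmp) and delete only files whose names begin with "render.". Whether that prefix is stable across devices and languages is exactly the open question, so treat it as an assumption:

import Foundation

// Delete leftover PhotoKit render files from the app's temporary directory.
// Assumption: the "render." filename prefix does not vary by device language.
func cleanUpRenderFiles() {
    let fm = FileManager.default
    guard let enumerator = fm.enumerator(at: fm.temporaryDirectory,
                                         includingPropertiesForKeys: nil) else { return }
    for case let fileURL as URL in enumerator {
        if fileURL.lastPathComponent.hasPrefix("render.") {
            try? fm.removeItem(at: fileURL)
        }
    }
}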
0
0
76
Oct ’25
LockedCameraCapture Does Not Launch The App from Lock Screen
My implementation of LockedCameraCapture does not launch my app when tapped from the Lock Screen. But when the same widget is in the Control Center, it launches the app successfully.

Standard Xcode target template, Lock_Screen_Capture.swift:

@main
struct Lock_Screen_Capture: LockedCameraCaptureExtension {
    var body: some LockedCameraCaptureExtensionScene {
        LockedCameraCaptureUIScene { session in
            Lock_Screen_CaptureViewFinder(session: session)
        }
    }
}

Lock_Screen_CaptureViewFinder.swift:

import SwiftUI
import UIKit
import UniformTypeIdentifiers
import LockedCameraCapture

struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
    let session: LockedCameraCaptureSession
    var sourceType: UIImagePickerController.SourceType = .camera

    init(session: LockedCameraCaptureSession) {
        self.session = session
    }

    func makeUIViewController(context: Self.Context) -> UIImagePickerController {
        let imagePicker = UIImagePickerController()
        imagePicker.sourceType = sourceType
        imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
        imagePicker.cameraDevice = .rear
        return imagePicker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) {
    }
}

Then I have my widget:

struct CameraWidgetControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(
            kind: "com.myCompany.myAppName.lock-screen") {
            ControlWidgetButton(action: MyAppCaptureIntent()) {
                Label("Capture", systemImage: "camera.shutter.button.fill")
            }
        }
    }
}

My AppIntent:

struct MyAppContext: Codable {}

struct MyAppCaptureIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext
    static let title: LocalizedStringResource = "MyAppCaptureIntent"
    static let description = IntentDescription("Capture photos and videos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}

The Issue: the LockedCameraCapture widget does not launch my app when tapped from the Lock Screen. You get the Face ID prompt and it just takes you to the Home Screen. But when the same widget is in the Control Center, it launches the app successfully.

Error Message: when tapped on the Lock Screen, I get the following error:

LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database" UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}

Things I tried:
The widget image displays correctly.
The App ID and the provisioning profile seem to be fine, since they work when the same code is injected into the AVCam sample app using the same App IDs.
The AppIntent file contains the target memberships of the Lock Screen capture extension and the widget.
The app compiles without errors or warnings.
1
0
211
Oct ’25
iOS 26 AVCaptureDevice continuousAutoFocus not working
Device: iPhone 16 Pro Max
OS: iOS 26.0.1
AVCaptureDevice.DeviceType: builtInUltraWideCamera
activeFormat: <AVCaptureDeviceFormat: 0x10ffb9ac0 'vide'/'420v' 1440x1080, { 1- 60 fps}, photo dims:{1440x1080,2016x1512}, fov:101.022, gdc fov:103.625, binned, max zoom:94.50 (upscales @1.40), system zoom range:1.0-3.0, AF System:1, ISO:15.0-3600.0, SS:0.000023-1.000000, system exposure bias range:-2.0-2.0, supports multicam, supports CS RoI, supports Smart Style, supports Smudge Detection>
API: device.isFocusModeSupported(.continuousAutoFocus) == true
Setting: device.focusMode = .continuousAutoFocus
The setting succeeds, but continuous autofocus does not actually take effect.
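In case the configuration order matters, here is the minimal locked-configuration sequence I would expect to work, plus a KVO observation on lensPosition to confirm whether the focus system is actually moving. This is a hedged sketch, not a confirmed fix:

import AVFoundation

var lensObservation: NSKeyValueObservation?

func enableContinuousAutoFocus(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
    device.unlockForConfiguration()

    // Observe lensPosition to see whether the lens ever moves after the mode is set.
    lensObservation = device.observe(\.lensPosition, options: [.new]) { _, change in
        print("lensPosition:", change.newValue ?? -1)
    }
}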
0
0
64
Oct ’25
How to highlight people in a Vision Pro app using Compositor Services
Fundamentally, my questions are: is there a known transform I can apply to a given (pixel) position (passed into a Metal fragment function) to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?

My goal is to highlight people in a Vision Pro app using Compositor Services. To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:

minFrameDuration: 0.03
maxFrameDuration: 0.033333335
frameSize: (1920.0, 1080.0)
pixelFormat: 875704422
cameraType: main
cameraPositions: [left, right]
cameraRectification: mono

From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer). I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:

// NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake.
var request = VNGeneratePersonSegmentationRequest()
request.qualityLevel = .balanced
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
...
let lHandler = VNSequenceRequestHandler()
let rHandler = VNSequenceRequestHandler()
...
func processBuffers() async {
    try lHandler.perform([request], on: lBuffer)
    guard let lMask = request.results?.first?.pixelBuffer else {...}
    try rHandler.perform([request], on: rBuffer)
    guard let rMask = request.results?.first?.pixelBuffer else {...}
    appModel.latestPersonMasks = (lMask, rMask)
}

I store the two resulting CVPixelBuffers in my appModel. For both of these buffers, aka grayscale masks:

width (in pixels) = 512
height (in pixels) = 384
bytes per row = 512
plane count = 0
pixel format type = 1278226488

I am using Compositor Services to render my content in Immersive Space. My implementation of Compositor Services is based on the same code from Interacting with virtual content blended with passthrough. Within Shaders.metal, the tint's fragment shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline).

fragment float4 tintFragmentShader(
    TintInOut in [[stage_in]],
    ushort amp_id [[amplification_id]],
    texture2d<uint> leftMask [[texture(0)]],
    texture2d<uint> rightMask [[texture(1)]]
)
{
    if (in.color.a <= 0.0) {
        discard_fragment();
    }

    float2 uv;
    if (amp_id == 0) { // LEFT
        uv = ??????????????????????;
    } else { // RIGHT
        uv = ??????????????????????;
    }

    constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear);

    // Sample the PersonSegmentation grayscale mask
    float maskValue = 0.0;
    if (amp_id == 0) { // LEFT
        if (leftMask.get_width() > 0) {
            maskValue = leftMask.sample(linearSampler, uv).r;
        }
    } else { // RIGHT
        if (rightMask.get_width() > 0) {
            maskValue = rightMask.sample(linearSampler, uv).r;
        }
    }

    if (maskValue > 250) {
        return float4(1.0, 1.0, 1.0, 0.5);
    }
    return in.color;
}

I need to correctly sample the masks for a given fragment. The LayerRenderer.Layout is set to .layered; per the developer documentation, that is a layout that specifies each view's content as a slice of a single texture. Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56. The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9). The grayscale CVPixelBuffer returned by the VNGeneratePersonSegmentationRequest is 512x384 (4:3). All of these aspect ratios are different.

My questions come down to: is there a known transform I can apply to a given (pixel) position to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks? Within the tint's vertex shader, after applying the modelViewProjectionMatrix, I have tried every version I could find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct clip-space position (maybe = pixel space position.xy / (viewport size * 0.5)???) of the grayscale masks, but nothing has worked. The "highlight" of the person segmentation is off: scaled a little too big, offset a little too far up and off to the side.
1
0
393
4w
New API to control front camera orientation like native Camera app on iPhone 17 with iOS 26?
The iPhone 17’s front camera with the new 18MP square sensor and iOS 26’s Center Stage feature can auto-rotate between portrait and landscape like the native iOS Camera app. Is there a Swift or AVFoundation API that allows developers to manually control front camera orientation in the same way the native Camera app does? Or is this auto-rotation strictly handled by the system without public API access?
0
0
170
Oct ’25
Error when capturing a high-resolution frame with depth data enabled in ARKit
Problem Description (1) I am using ARKit in an iOS app to provide AR capabilities. Specifically, I'm trying to use the ARSession's captureHighResolutionFrame(using:) method to capture a high-resolution frame along with its corresponding depth data: open func captureHighResolutionFrame(using photoSettings: AVCapturePhotoSettings?) async throws -> ARFrame (2) However, when I attempt to do so, the call fails at runtime with the following error, which I captured from the Xcode debugger: [AVCapturePhotoOutput capturePhotoWithSettings:delegate:] settings.depthDataDeliveryEnabled must be NO if self.isDepthDataDeliveryEnabled is NO Code Snippet Explanation (1) ARConfig and ARSession Initialization The following code configures the ARConfiguration and ARSession. A key part of this setup is setting the videoFormat to the one recommended for high-resolution frame capturing, as suggested by the documentation. func start(imagesDirectory: URL, configuration: Configuration = Configuration()) { // ... basic setup ... let arConfig = ARWorldTrackingConfiguration() arConfig.planeDetection = [.horizontal, .vertical] // Enable various frame semantics for depth and segmentation if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) { arConfig.frameSemantics.insert(.smoothedSceneDepth) } if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) { arConfig.frameSemantics.insert(.sceneDepth) } if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) { arConfig.frameSemantics.insert(.personSegmentationWithDepth) } // Set the recommended video format for high-resolution captures if let videoFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing { arConfig.videoFormat = videoFormat print("Enabled: High-Resolution Frame Capturing by selecting recommended video format.") } arSession.run(arConfig, options: [.resetTracking, .removeExistingAnchors]) // ... } (2) Capturing the High-Resolution Frame The code below is intended to manually trigger the capture of a high-resolution frame. The goal is to obtain both a high-resolution color image and its associated high-resolution depth data. To achieve this, I explicitly set the isDepthDataDeliveryEnabled property of the AVCapturePhotoSettings object to true. func requestImageCapture() async { // ... guard statements ... print("Manual image capture requested.") if #available(iOS 16.0, *) { // Assuming 16.0+ for this API if let defaultSettings = arSession.configuration?.videoFormat.defaultPhotoSettings { // Create a mutable copy from the default settings, as recommended let photoSettings = AVCapturePhotoSettings(from: defaultSettings) // Explicitly enable depth data delivery for this capture request photoSettings.isDepthDataDeliveryEnabled = true do { let highResFrame = try await arSession.captureHighResolutionFrame(using: photoSettings) print("Successfully captured a high-resolution frame.") if let initialDepthData = highResFrame.capturedDepthData { // Process depth data... } else { print("High-resolution frame was captured, but it contains no depth data.") } } catch { // The exception is caught here print("Error capturing high-resolution frame: \(error.localizedDescription)") } } } // ... 
} Issue Confirmation & Question (1) Through debugging, I have confirmed the following behavior: If I call captureHighResolutionFrame without providing the photoSettings parameter, or if photoSettings.isDepthDataDeliveryEnabled is set to false, the method successfully returns a high-resolution ARFrame, but its capturedDepthData is nil. (2) The error message clearly indicates that settings.depthDataDeliveryEnabled can only be true if the underlying AVCapturePhotoOutput instance's own isDepthDataDeliveryEnabled property is also true. (3) However, within the context of ARKit and ARSession, I cannot find any public API that would allow me to explicitly access and configure the underlying AVCapturePhotoOutput instance that ARSession manages. (4) My question is: Is there a way to configure the ARSession's internal AVCapturePhotoOutput to enable its isDepthDataDeliveryEnabled property? Or, is simultaneously capturing a high-resolution frame and its associated depth data simply not a supported use case in the current ARKit framework?
1
0
290
3w