(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads and group IDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
The minimum focus distance has changed on some of the newer lenses.
Newer phones have multiple lenses, with automatic switching behavior between them.
Use a virtual device like builtInDualWideCamera or builtInTripleCamera, rather than just builtInWideAngleCamera.
Set the videoZoomFactor to 2 and you're done (see the sketch below).
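A minimal sketch, assuming the rest of the capture-session setup lives elsewhere: pick a virtual device so the system can switch lenses automatically, then zoom to 2x so close-up QR codes stay in focus.

import AVFoundation

func makeScanningDevice() throws -> AVCaptureDevice? {
    // Virtual device: lets the system fall back to the ultra wide lens up close
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                               for: .video,
                                               position: .back) else { return nil }
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0   // matches the wide camera's native field of view
    device.unlockForConfiguration()
    return device
}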
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead get as close as possible to the real colors of the scene, regardless of the illumination.
Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format, a format used by the health industry to store images related to medical subjects like MRIs, skin conditions, X-rays, and so on.
It's a readable and writable format on all of the OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics APIs.
For Core Graphics there is a new DICOM entry in the property dictionary which includes all of the DICOM properties available and defined in a file. Finder will also display these in the Info panel.
(Why would a developer want to use it?) It's not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may also want to share with health providers or your doctors.
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution)
LiDAR vs. Dual, etc.:
LiDAR: best for absolute depth, real-world scale, and computer vision.
Dual, etc.: relative, disparity-based depth; uses less power; good for photo effects.
Also see the 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking".
Question 2
Can the TrueDepth and LiDAR cameras run at 60 fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps.
Question 3
What’s the first class way to use PhotoKit to reimplement a high performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells)
Use the PHCachingImageManager to get media content delivered before you need to display it.
Specify the size you need for grid-sized display.
Set the fast-format delivery mode options (PHImageRequestOptionsDeliveryModeFastFormat / PHVideoRequestOptionsDeliveryModeFastFormat) and the resize mode PHImageRequestOptionsResizeModeFast, as in the sketch below.
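A minimal sketch, assuming grid-cell-sized thumbnails; gridCellSize, startCaching, and requestThumbnail are illustrative names.

import Photos
import UIKit

let cachingManager = PHCachingImageManager()
let gridCellSize = CGSize(width: 200, height: 200)   // pixels: cell size x screen scale

let thumbnailOptions: PHImageRequestOptions = {
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat   // favor speed over quality for grid cells
    options.resizeMode = .fast
    return options
}()

// Prefetch thumbnails for assets that are about to scroll on screen.
func startCaching(_ assets: [PHAsset]) {
    cachingManager.startCachingImages(for: assets,
                                      targetSize: gridCellSize,
                                      contentMode: .aspectFill,
                                      options: thumbnailOptions)
}

// Request a thumbnail for a visible cell.
func requestThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    cachingManager.requestImage(for: asset,
                                targetSize: gridCellSize,
                                contentMode: .aspectFill,
                                options: thumbnailOptions) { image, _ in
        completion(image)
    }
}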
Question 4
For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path.
Use a video data output + AVSampleBufferDisplayLayer if you need to modify the image data.
SwiftUI Image is optimized for static image content.
Question 5
Is there a way to configure the AVFoundation BuiltInLiDarDepthCamera mode to provide a depth map as accurate as ARKit at close range?
The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps
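A minimal sketch of enabling that filtering, assuming the session and its LiDAR depth input are configured elsewhere:

import AVFoundation

let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = true   // smooths noise and fills invalid depth values
// add depthOutput to the configured AVCaptureSession as usual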
Question 6
Pyramid-based photo editing in Core Image (such as Adobe Camera Raw highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust.
Also, the noise reduction in CIRAWFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image:
downsample it by a factor of two, multiple times, using imageByApplyingAffineTransform (see the sketch below),
apply additional CIKernels to each downsampled image as needed,
and use a custom CIKernel to combine the results.
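A minimal sketch of the downsampling step, building an image pyramid by repeatedly scaling by 0.5 with an affine transform; applying and recombining custom CIKernels per level is left out.

import CoreImage

func pyramid(from input: CIImage, levels: Int) -> [CIImage] {
    var result = [input]
    var current = input
    for _ in 1..<max(levels, 1) {
        // Each level is half the width and height of the previous one.
        current = current.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        result.append(current)
    }
    return result
}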
Question 7
Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.
Question 8
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
Apply the CIFilter to the final image once the deferred photo has finished processing.
The edited photo will then have to be re-inserted into the Photos library as an adjustment.
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha.
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use an AVCaptureSession with a video data output plus AVCapturePhotoOutput, run Vision on the video data output's buffers, and capture a photo when a certain scene or text is detected (see the sketch below).
Just be careful to run Vision on the video data output buffers asynchronously so it doesn't cause frame drops.
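A minimal sketch, assuming the session, video data output, and photo-capture wiring live elsewhere; AutoCaptureAnalyzer and onTextDetected are illustrative names. The point is that the Vision work happens off the delegate callback so frames aren't dropped.

import AVFoundation
import Vision

final class AutoCaptureAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var onTextDetected: (() -> Void)?           // call photoOutput.capturePhoto(...) from here
    private let visionQueue = DispatchQueue(label: "vision.analysis")
    private var isAnalyzing = false

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isAnalyzing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isAnalyzing = true
        // Analyze asynchronously so the video data output keeps delivering frames.
        visionQueue.async { [weak self] in
            defer { self?.isAnalyzing = false }
            let request = VNDetectTextRectanglesRequest()
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
            if let results = request.results, !results.isEmpty {
                DispatchQueue.main.async { self?.onTextDetected?() }
            }
        }
    }
}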
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, and scanners.
It can read and write, where supported.
Check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreviewLayer inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance (see the sketch below).
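A minimal sketch of such a representable, assuming the AVCaptureSession is configured elsewhere:

import SwiftUI
import AVFoundation

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    // Backing the view with AVCaptureVideoPreviewLayer keeps the efficient display path.
    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}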
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There's a suite of new System Tone Mapper APIs in this year's OSes:
Core Image, ImageKit, Core Animation, and Core Graphics.
For example:
Core Image: the new CISystemToneMap filter.
Core Animation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView / UIImageView / SwiftUI Image / CALayer) support animating preferredDynamicRange,
so you can go from high to constrained to standard (see the sketch below).
Tone mapping is provided by the system (CISystemToneMap being the controllable example).
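A minimal sketch using the existing UIImageView property from iOS 17 (preferredImageDynamicRange); the CISystemToneMap filter and CALayer property above are the additions called out for this year's OSes.

import UIKit

func setOneUp(_ isOneUp: Bool, on imageView: UIImageView) {
    // Per the notes above, image views support animating the dynamic range change.
    UIView.animate(withDuration: 0.25) {
        imageView.preferredImageDynamicRange = isOneUp ? .high : .standard
    }
}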
Question 14
What is your recommendation for preprocessing and upscaling a depth map in order to render a realistic portrait-mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide (see the sketch below).
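A minimal sketch, assuming the CIFilterBuiltins spelling edgePreserveUpsample() with inputImage as the full-resolution RGB guide and smallImage as the low-resolution depth map:

import CoreImage
import CoreImage.CIFilterBuiltins

func upsampleDepth(_ depthMap: CIImage, guidedBy rgbImage: CIImage) -> CIImage? {
    let filter = CIFilter.edgePreserveUpsample()
    filter.inputImage = rgbImage    // full-resolution guide
    filter.smallImage = depthMap    // low-resolution depth to upsample
    return filter.outputImage
}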
Question 15
For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, make sure you copy the video data output's buffers so that holding onto them doesn't cause the output to drop frames.
Question 16
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
Apply the CIFilter to the final image once the deferred photo has finished processing.
The edited photo will then have to be re-inserted into the Photos library as an adjustment.
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the cropped region to the output dimensions, whereas cropping afterward yields a smaller output image.
In other words, while digital zoom does crop, it also upscales.
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced a macro mode which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera or builtInDualWideCamera will automatically switch to macro when available.
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide your model container (see the sketch below).
This has been supported for a while (iOS 18 / macOS 15).
For more details, go to the Machine Learning & AI group lab on Thursday.
Use smaller images for better performance.
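A minimal sketch using the longstanding VNCoreMLRequest path; the newer Swift-only CoreMLRequest mentioned above takes a model container and follows the same run-a-request pattern. classify(_:with:) is an illustrative helper.

import Vision
import CoreML

func classify(_ cgImage: CGImage, with mlModel: MLModel) throws -> [VNClassificationObservation] {
    let visionModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .centerCrop   // smaller inputs run faster
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}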
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property.
Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported (see the sketch below).
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
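A minimal sketch, assuming the iOS 18 property spellings isSpatialVideoCaptureSupported / isSpatialVideoCaptureEnabled:

import AVFoundation

func enableSpatialVideoIfAvailable(device: AVCaptureDevice,
                                   movieOutput: AVCaptureMovieFileOutput) {
    if device.activeFormat.isSpatialVideoCaptureSupported,
       movieOutput.isSpatialVideoCaptureSupported {
        movieOutput.isSpatialVideoCaptureEnabled = true
    }
}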
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture, in the Camera app or via the AVFoundation APIs available to third-party apps.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What's the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
Turn on the flash in low-light scenarios.
Lower the frame rate to improve exposure and reduce noise (see the sketch below).
Wait until the capture is in focus, and/or notify your user that they need to get closer.
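A minimal sketch of the flash and frame-rate suggestions, assuming the device is already the session's active video device; 15 fps is an arbitrary example value.

import AVFoundation

func configureForLowLight(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }
    // A lower frame rate allows longer per-frame exposure, reducing noise.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
}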
Question 25
Recent iPhone models introduced a macro mode which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera or builtInDualWideCamera will automatically switch to macro when available.
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs.
Question 2
I'm building a new camera app for pros and I want to give my users the most unprocessed image possible, and as much control over the capture as possible. How can I do that with AVCapture?
For stills: Bayer RAW capture, or ProRAW if you want Apple's processing and dynamic range.
For video: ProRes Log.
Custom exposure settings are available through the APIs.
Global vs. local tone mapping is also worth considering.
Just downloaded iOS 26.1 and my phone keeps ringing after the call has been answered. Any fixes for this?
I discovered when editing photos with the PhotoKit API, PHContentEditingOutput's renderedContentURL is a file in the app container's tmp directory with a filename that seems to follow the format render.<uuid>.JPG, and that file does not get deleted if the edit does not complete successfully (the user cancels the edit request, an error occurs, the app crashes, etc). I understand the system is supposed to automatically delete tmp files every once in a while, but some users are noticing my app's Documents & Data inflates, so I'm considering deleting these render files each time the app is launched. But I don't want to delete everything in the tmp directory as there could possibly be other data in there.
What's the best way to remove those temporary files? Does the filename always start with render. no matter the device language? I thought I'd delete files in NSTemporaryDirectory() with that prefix but then I discovered in Mac Catalyst the location is not the tmp directory directly, they're in tmp/TemporaryItems/<bundleid>.
Thanks!
Hi, we have created an app which allows recording 4K 60 fps videos in the app. We have noticed that sometimes the recording switches to 20 fps (the value 20 is constant) even though the resolution setting is 4K at 60 fps. We are using AVCaptureDevice to invoke the camera.
Has anyone experienced this problem before? What is unique about 20 fps? Why does it drop from 60 fps to 20 fps and not to other values?
Area
ImageCaptureCore / ICDeviceBrowser
Description
On iOS 26.1 beta, calling requestControlAuthorization() or requestContentsAuthorization() always returns .notDetermined and never transitions to .authorized or .denied.
This prevents apps from properly accessing device control or contents authorization. The issue occurs regardless of device state or prior requests.
Steps to Reproduce
1. Create and start an ICDeviceBrowser instance.
2. Call requestControlAuthorization() or requestContentsAuthorization().
3. Inspect the returned ICAuthorizationStatus.
Expected Result
• The system should prompt the user if necessary.
• A final status of either .authorized or .denied should be returned.
Actual Result
• The completion handler always reports .notDetermined.
• No user prompt appears and the status does not change.
Version / Build
• iOS 26.1 beta
• Xcode
Hardware
• [iPhone 15 Pro, iPad Pro (M2)]
Impact
This regression blocks development and testing of features relying on ImageCaptureCore. Applications depending on device browsing and content access cannot proceed, which significantly affects workflows involving external device integration.
Notes
This appears to be a regression compared to earlier iOS releases.
When attempting to access a PHAsset that is in the hidden folder of iOS26, the PHFetchResult always returns no items, even when the user has granted full access to photos and even when includeHiddenAssets is true.
This is the code suggested by ChatGPT; it always fails:
public func fetchAsset(withLocalIdentifier identifier: String) -> PSSPHAssetImplementing? {
    // First try the direct fetch by identifier (fast path)
    let directResult = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
    if let asset = directResult.firstObject {
        return build(from: asset)
    }

    // Fallback: fetch all assets including hidden, then filter manually
    let options = PHFetchOptions()
    options.includeHiddenAssets = true
    let allAssets = PHAsset.fetchAssets(with: options)
    for index in 0..<allAssets.count {
        let asset = allAssets.object(at: index)
        if asset.localIdentifier == identifier {
            return build(from: asset)
        }
    }
    return nil
}
Is it no longer possible to retrieve a hidden photo in iOS 26?
I want to fully support the new iPhone models in my app, and ideally need to know the available lenses. However, I can't find information about this on the web and they're not reported in the simulators. The closest thing I found was this, but it's very out of date. https://developer.apple.com/library/archive/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
My only other option is to buy each device, which isn't really feasible, or to log the data from real users via an analytics tool which isn't ideal either.
Thanks,
Alex
Error capturing ProRAW using iPhone 17 Pro Telephoto with photoQualityPrioritization set to .Quality
Hey,
I'm having a very strange issue on my iPhone 17 Pro. I'm trying to capture a 12MP ProRAW image using the Telephoto Lens with the photoQualityPrioritization set to .Quality. Unfortunately I receive this error when trying to capture the image:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x134f7a1f0 {Error Domain=NSOSStatusErrorDomain Code=-16802 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-16802), AVErrorRecordingFailureDomainKey=4, NSLocalizedDescription=The operation could not be completed}
The photo captures correctly at 7.9x zoom, it's only a problem when the zoom goes over 8x.
Also, it's only this particular configuration of settings which causes the issue. I'm able to capture an image if I either:
Set quality to ".balanced"
Set max dimensions to 48MP
Capture a JPEG image instead of a ProRAW image
Use the TripleCamera fusion lens
Any help would be greatly appreciated.
Alex
I am able to capture 48 MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12 MP?
Is there a way to capture 48 MP photos using .builtInTripleCamera? .builtInTripleCamera provides a smooth transition between cameras during zooming, and I'd like to keep this behavior.
The new iPhone 17 Pro has all of its cameras at 48 MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48 MP, or is this an API limitation?
I'm experiencing an issue with my app when saving images to the camera roll. This is intermittent, but it happens several times a day. The error I receive is the following:
Connection to assetsd was interrupted - assetsd exited, died, or closed the photo library
Error getting remote object proxy for -[PLNonBindingAssetsdPhotoKitClient sendChangesRequest:reply:]_block_invoke: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service}
PhotoKit XPC proxy is invalid. Dropping request on the floor and returning an error: Error Domain=PHPhotosErrorDomain Code=3301 "(null)" (underlying error Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.photos.service" UserInfo={NSDebugDescription=connection to service named com.apple.photos.service})
CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.}
CoreData: error: XPC: synchronousRemoteObjectProxyWithErrorHandler: store 'file:///var/mobile/Media/PhotoData/Photos.sqlite' encountered error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated: failed to check-in, peer may have been unloaded: mach_error=10000003.}
My code is unchanged from using my app daily on an iPhone 16 Pro with iOS 26. I never saw the issue on this device.
Here is an excerpt from my code for saving the image:
var localIdentifier = String()
PHPhotoLibrary.shared().performChanges({
    let albumChangeRequest = PHAssetCollectionChangeRequest(for: album)
    let assetCreationRequest = PHAssetCreationRequest.forAsset()
    let options = PHAssetResourceCreationOptions()
    assetCreationRequest.addResource(with: .photo, data: imageData, options: options)
    assetCreationRequest.creationDate = Date.now
    let placeHolder = assetCreationRequest.placeholderForCreatedAsset
    albumChangeRequest?.addAssets([placeHolder!] as NSArray)
    if placeHolder != nil {
        localIdentifier = (placeHolder?.localIdentifier)!
    }
}) { (didSucceed, error) in
    OperationQueue.main.addOperation({
        didSucceed ? success(localIdentifier) : failure(error)
    })
}
I'm not sure why this would be device specific but I have had users with iPhone 17 Pro and iPhone Air reporting the issue.
Alex
I want to use both the front ultra-wide and TrueDepth cameras on an iPad which has a front ultra-wide camera.
First, I used only the front builtInDualCamera via AVFoundation and tried all the formats that can be used with builtInDualCamera, but there was no format that could capture ultra-wide.
Second, I tried to use both the front builtInDualCamera and builtInUltraWideCamera, but there was no combination that allowed builtInUltraWideCamera and builtInDualCamera to be used together.
Is there any way to do this?
I'm receiving output from an AVCaptureSession and capturing an image using Vision, but the image comes out in landscape orientation instead of portrait.
Even when I set the orientation to .up on the CIImage, CGImage, and UIImage, the image is still output in landscape orientation.
On iPhone 16 and earlier, the image is output in portrait orientation.
But on iPhone 17 and later, the image is output in landscape orientation.
Please help.
I tried to modify the AVCam sample code by copying the smart framing monitor code from https://developer.apple.com/documentation/avfoundation/adopting-smart-framing-in-your-camera-app#Configure-the-smart-framing-monitor
I can confirm that the activeFormat supports smart framing, but the supported frames in the monitor are always nil.
In another project of mine the monitor does have supported values, but the observation is never triggered; I then kept printing the recommended frame, and it's always nil.
Could the engineers embed the code into AVCam rather than posting a few code pieces?
I'm getting an error writing a CIImage as a HEIF image:
// Create CIImage directly from pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])
// Write HEIC synchronously
do {
try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
The error I'm getting is:
Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}
Both
try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options)
and
try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace)
work. I also verified that the code works with iOS 18.
Is there something new we need to do for HEIF images?
Thanks in advance
I tested the accuracy of the depth map on iPhone 12, 13, 14, 15, and 16, and found that the variance of the depth map on models after the iPhone 12 is significantly greater than on the iPhone 12.
Enabling depth filtering causes the depth data to be affected by the previous frame, adding more unnecessary noise, especially when the phone is moving.
This is not friendly for high-precision reconstruction. I tried adding depth map smoothing in post-processing to address the large depth map deviation, but the results are still poor.
Are there any depth map smoothing solutions already announced by Apple?
What options do I have if I don't want to use Blackmagic's Camera ProDock as the external Sync Hardware, but instead I want to create my own USB-C hardware accessory which would show up as an AVExternalSyncDevice on the iPhone 17 Pro?
Which protocol does my USB-C device have to implement to show up as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
Hi everyone,
I’m running into an issue with PHPickerFilter when using PHPickerViewController.
When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn’t always apply — users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections).
Here’s a simplified snippet of my setup:
var configuration = PHPickerConfiguration(photoLibrary: .shared())
configuration.selectionLimit = 1
var filters = [PHPickerFilter]()
filters.append(.videos)
filters.append(.livePhotos)
configuration.filter = PHPickerFilter.any(of: filters)
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true)
Expected behavior:
The picker should consistently respect the filter across both Photos and Collections tabs, only showing assets that match the filter.
Actual behavior:
The filter seems to apply correctly in the Photos tab, but in the Collections tab, other asset types are still visible/selectable.
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration?
Thanks in advance!
When changing a camera's exposure, AVFoundation provides a callback which offers the timestamp of the first frame captured with the new exposure duration: AVCaptureDevice.setExposureModeCustom(duration:, iso:, completionHandler:).
I want to get a similar callback when changing frame duration.
After setting AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration to a new value, how can I compute the index or the timestamp of the first camera frame which was captured using the newly set frame duration?
Hello,
Does anyone have a recipe on how to raycast VNFaceLandmarkRegion2D points obtained from a frame's capturedImage?
More specifically, how to construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point?
Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points prior to passing them to raycastQuery?
I'm working on a photo app and I want to allow the user to display, edit and delete photos. I can fetch all photos using PHAsset.fetchAssets(with: options). This works as intended.
However, I can't seem to find a way to prevent the user from seeing photos from a Shared Library. The PHAssetSourceType only contains typeCloudShared to only show items from a specific album; not library.
How can I filter by iCloud Shared Library?