(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Introductory kick-off questions
Question 1
Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview?
Cinematic Capture API (strong/weak focus, tracking focus, scene monitoring, simulated aperture, dog/cat heads and group IDs)
Camera Controls and AirPod Stem Clicks
Spatial Audio and Studio Quality AirPod Mics in Camera
Lens Smudge Detection
Exposure and Focus Rect of Interest
Question 2
I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
Every year, the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses.
min focus distance
newer phones have multiple lenses
automatic switching behavior
Use a virtual device like builtInDualWideCamera or builtInTripleCamera, rather than just builtInWideAngleCamera
Set the videoZoomFactor to 2. You're done.
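A minimal sketch of that setup, assuming you already have an AVCaptureSession and a metadata/QR pipeline attached (names here are illustrative):

import AVFoundation

func configureCameraForQRScanning(session: AVCaptureSession) throws {
    // Prefer a virtual device so the system can switch to the ultra-wide
    // (with its shorter minimum focus distance) when the code is close.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)
    guard let device = discovery.devices.first else { return }

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    // Start at 2x, which corresponds to the wide lens on these virtual devices,
    // while still allowing the automatic fallback to the ultra-wide up close.
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0
    device.unlockForConfiguration()
}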
Question 3
Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
Constant Color photography is meant to remove the "tone mapping" applied to photos captured with the Camera app, which usually includes artistic intent, and instead get as close as possible to the real color of the scene, regardless of the illumination.
Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRIs, skin conditions, X-rays, and so on.
It is a readable and writable format on all OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics APIs.
For Core Graphics, there is a new DICOM entry in the property dictionary which includes all the DICOM properties available and defined in a file. Finder will also display these in the Info panel.
(Addressing why a developer would want to use it) It is not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may also want to share with health providers or your doctor.
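For reference, opting into last year's Constant Color capture looks roughly like the sketch below, based on the iOS 18 AVCapturePhotoOutput properties; the new DICOM delivery mentioned above goes through the same photo output pipeline but is not shown here:

import AVFoundation

func makeConstantColorSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    guard photoOutput.isConstantColorSupported else { return nil }
    photoOutput.isConstantColorEnabled = true

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isConstantColorEnabled = true
    settings.flashMode = .on   // constant color captures rely on the flash
    // Also deliver a normally processed photo in case a constant color
    // result can't be produced for a given scene.
    settings.isConstantColorFallbackPhotoDeliveryEnabled = true
    return settings
}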
Main session developer questions
Question 1
LiDAR vs. Dual Camera depth generation: Which resolution does the LiDAR sensor natively have (iPhone 16 Pro) and when to prefer LiDAR over Dual Camera?
Both report formats with output resolutions (we don't advertise sensor resolution)
Lidar vs Dual, etc:
Lidar: Best for absolute depth, real world scale and computer vision
Dual, etc: relative, disparity-based, less power, photo effects
Also see the WWDC22 session "Discover advancements in iOS camera capture: Depth, focus, and multitasking"
Question 2
Can true depth and lidar camera run at 60fps?
LiDAR can do 30 fps.
The front TrueDepth camera can do 60 fps.
Question 3
What’s the first class way to use PhotoKit to reimplement a high performance photo grid? We’ve been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60hz, efficient memory footprint, minimal flashes of placeholder/empty cells)
use the PHCachingImageManager to get media content delivered before you need to display it
specify the size you need for grid sized display
set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat and PHImageRequestOptionsResizeModeFast
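A sketch of that combination (the class and sizing here are illustrative, not a drop-in implementation):

import Photos
import UIKit

final class GridImageSource {
    private let cachingManager = PHCachingImageManager()
    private let targetSize: CGSize                // grid cell size in pixels
    private let options: PHImageRequestOptions

    init(targetSize: CGSize) {
        self.targetSize = targetSize
        let options = PHImageRequestOptions()
        options.deliveryMode = .fastFormat        // trade fidelity for speed in the grid
        options.resizeMode = .fast
        self.options = options
    }

    // Call from your grid's prefetching hooks so content arrives before display.
    func startCaching(_ assets: [PHAsset]) {
        cachingManager.startCachingImages(for: assets, targetSize: targetSize,
                                          contentMode: .aspectFill, options: options)
    }

    func stopCaching(_ assets: [PHAsset]) {
        cachingManager.stopCachingImages(for: assets, targetSize: targetSize,
                                         contentMode: .aspectFill, options: options)
    }

    func requestThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
        cachingManager.requestImage(for: asset, targetSize: targetSize,
                                    contentMode: .aspectFill, options: options) { image, _ in
            completion(image)
        }
    }
}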
Question 4
For rendering a live preview of the video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
AVCaptureVideoPreviewLayer is the most efficient display path
Use AVCaptureVideoDataOutput + AVSampleBufferDisplayLayer if you need to modify the image data
SwiftUI Image is optimized for static image content
Question 5
Is there a way to configure the AVFoundation builtInLiDARDepthCamera device to provide a depth map as accurate as ARKit at close range?
AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps
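A small sketch of enabling that filtering on the depth output:

import AVFoundation

func addFilteredDepthOutput(to session: AVCaptureSession) -> AVCaptureDepthDataOutput? {
    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.isFilteringEnabled = true          // fill invalid values, reduce noise
    depthOutput.alwaysDiscardsLateDepthData = true
    guard session.canAddOutput(depthOutput) else { return nil }
    session.addOutput(depthOutput)
    return depthOutput
}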
Question 6
Pyramid-based photo editing in Core Image (such as Adobe Camera Raw's highlights and shadows)?
First off, you may want to look at the built-in filter called CIHighlightShadowAdjust
Also, the noise reduction in CIRAWFilter uses a pyramid-based algorithm.
You can also write your own pyramid-based algorithms by taking an input image:
downsample it by a factor of two multiple times using imageByApplyingAffineTransform
apply additional CIKernels to each downsampled image as needed
use a custom CIKernel to combine the results
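A rough sketch of building such a pyramid in Core Image (illustrative only; the per-level kernels and the combining kernel are left out):

import CoreImage

func buildPyramid(from input: CIImage, levels: Int) -> [CIImage] {
    var pyramid = [input]
    for _ in 1..<levels {
        // Downsample the previous level by a factor of two.
        let half = pyramid.last!.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
        pyramid.append(half)
    }
    return pyramid
}

Each level can then be processed with your own CIKernels, and a final custom CIKernel can sample the processed levels to combine them.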
Question 7
Is the best way to integrate an in-app camera for a “non-camera” app UIImagePickerController?
Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.
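A minimal sketch of presenting it for camera capture (the presenting controller is assumed to adopt the two delegate protocols, and the app needs an NSCameraUsageDescription entry):

import UIKit
import UniformTypeIdentifiers

func presentSystemCamera(from presenter: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = .camera
    picker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]   // photos and movies
    picker.delegate = presenter   // results arrive in imagePickerController(_:didFinishPickingMediaWithInfo:)
    presenter.present(picker, animated: true)
}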
Question 8
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final photo at that point (once deferred processing has finished)
The edited photo will then have to be re-inserted into the Photos library as an adjustment
Question 9
For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
If you want lossless compression, PNG is good and supports unpremultiplied alpha
If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha
(Note: this is part 1 of a 3 part posting. See Part 2 or Part 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use AVCaptureSession's video data output + AVCapturePhotoOutput, run Vision on the video data output buffers, and capture a photo when a certain scene or text is detected.
Just be careful to run Vision on the video data output buffers asynchronously so it doesn't cause frame drops.
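A sketch of that pattern, assuming the delegate is already attached to a configured session (the text-detection trigger and class name are illustrative):

import AVFoundation
import Vision

final class AutoCaptureController: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate, AVCapturePhotoCaptureDelegate {

    let photoOutput = AVCapturePhotoOutput()
    private let visionQueue = DispatchQueue(label: "vision.analysis")
    private var isAnalyzing = false   // simple gate so analyses don't pile up

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !isAnalyzing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isAnalyzing = true

        // Analyze asynchronously so the camera queue isn't blocked (no frame drops).
        visionQueue.async { [weak self] in
            defer { self?.isAnalyzing = false }
            let request = VNDetectTextRectanglesRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])

            if let results = request.results, !results.isEmpty, let self {
                // Text detected: trigger a full-quality still capture.
                self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
            }
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        // Handle the captured photo (e.g. photo.fileDataRepresentation()).
    }
}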
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, scanners
read and write, where supported
check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreviewLayer inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance
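A common sketch of that wrapper, hosting the preview layer in a UIView subclass so SwiftUI stays out of the per-frame path:

import SwiftUI
import AVFoundation

final class PreviewView: UIView {
    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
    var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
}

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session            // attach the existing capture session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}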
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There's a suite of new System Tone Mapper APIs in this year's OSes
Core Image, ImageKit, Core Animation, Core Graphics
For example:
Core Image: the new CISystemToneMap filter
Core Animation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animating preferredDynamicRange
Can go from high to constrained to standard
Tone mapping is provided by the system (CISystemToneMap is the controllable example)
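A sketch of the one-up vs. thumbnail behavior using the existing UIImageView dynamic-range API (the function is illustrative):

import UIKit

func configure(imageView: UIImageView, isOneUp: Bool) {
    // Full HDR only in the one-up view; constrain it elsewhere so thumbnails
    // don't compete with the rest of the UI. Per the lab, image views also
    // support animating this transition.
    imageView.preferredImageDynamicRange = isOneUp ? .high : .constrainedHigh
}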
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsampleFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.
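A sketch of applying that filter by name, assuming the small image is the depth map and the full-resolution RGB image is the guide:

import CoreImage

func upsample(depth smallDepth: CIImage, guidedBy rgb: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIEdgePreserveUpsampleFilter") else { return nil }
    filter.setValue(rgb, forKey: kCIInputImageKey)          // full-resolution guide image
    filter.setValue(smallDepth, forKey: "inputSmallImage")  // low-resolution depth map
    return filter.outputImage
}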
Question 15
For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, make copies of the video data output's buffers so you don't cause the output to drop frames (see the sketch below).
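One way to make those copies is a deep copy of the pixel buffer, sketched below (not the only approach; for long captures you would also want to bound how many copies you keep):

import AVFoundation
import CoreVideo

func copyPixelBuffer(from sampleBuffer: CMSampleBuffer) -> CVPixelBuffer? {
    guard let source = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copyOut)
    guard let destination = copyOut else { return nil }
    CVBufferPropagateAttachments(source, destination)   // color space, etc.

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(destination, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    // Copy row by row for each plane, respecting each buffer's row stride.
    let planar = CVPixelBufferGetPlaneCount(source) > 0
    let planeCount = max(CVPixelBufferGetPlaneCount(source), 1)
    for plane in 0..<planeCount {
        guard let src = planar ? CVPixelBufferGetBaseAddressOfPlane(source, plane)
                               : CVPixelBufferGetBaseAddress(source),
              let dst = planar ? CVPixelBufferGetBaseAddressOfPlane(destination, plane)
                               : CVPixelBufferGetBaseAddress(destination) else { continue }
        let height = planar ? CVPixelBufferGetHeightOfPlane(source, plane)
                            : CVPixelBufferGetHeight(source)
        let srcStride = planar ? CVPixelBufferGetBytesPerRowOfPlane(source, plane)
                               : CVPixelBufferGetBytesPerRow(source)
        let dstStride = planar ? CVPixelBufferGetBytesPerRowOfPlane(destination, plane)
                               : CVPixelBufferGetBytesPerRow(destination)
        for row in 0..<height {
            memcpy(dst + row * dstStride, src + row * srcStride, min(srcStride, dstStride))
        }
    }
    return destination
}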
Question 16
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final photo at that point (once deferred processing has finished)
The edited photo will then have to be re-inserted into the Photos library as an adjustment
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the cropped region to the full output dimensions, whereas cropping afterwards yields a smaller output image
So while digital zoom does crop, it also upscales
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced a macro mode that automatically switches between lenses to account for the difference in minimum focus distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide your model container (see the sketch below)
This has been supported for a while (iOS 18/macOS 15)
For more details, go to the Machine Learning & AI group lab on Thursday
use smaller images for better performance
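For reference, the long-standing VNCoreMLRequest path looks like the sketch below; the lab's answer refers to the newer Swift CoreMLRequest API and its model container, which follows a similar shape:

import Vision
import CoreML

func classify(cgImage: CGImage, with mlModel: MLModel) throws -> [VNClassificationObservation] {
    let visionModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .centerCrop   // smaller inputs run faster

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}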
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
Not all device formats support spatial capture, check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
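A small sketch of that opt-in:

import AVFoundation

func enableSpatialVideo(device: AVCaptureDevice, movieOutput: AVCaptureMovieFileOutput) {
    guard device.activeFormat.isSpatialVideoCaptureSupported else {
        return   // this device/format can't capture spatial video
    }
    movieOutput.isSpatialVideoCaptureEnabled = true
}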
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes — regular SDR files as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture, in the Camera app or by third-party apps via the AVFoundation APIs.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 24
What’s the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios
Turn on flash in low-light scenarios
Lower framerate to improve exposure and reduce noise
Wait until the capture is in focus/notify your user that they need to get closer
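A sketch of the first two mitigations on the capture device (15 fps is just an example value, and the chosen rate must be within the active format's supported range):

import AVFoundation

func configureForLowLightScanning(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on   // continuous light for scanning
    }

    // A lower frame rate gives the sensor more light per frame.
    let frameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
}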
Question 25
Recent iPhone models introduced a macro mode that automatically switches between lenses to account for the difference in minimum focus distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 26
Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
File provider API
Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week
Question 1
When should I build my custom photo picker instead of using the system one?
Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs
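For the first step, a sketch of the system picker in SwiftUI (the selection limit and filter are arbitrary examples):

import PhotosUI
import SwiftUI

struct AttachmentPicker: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        PhotosPicker(selection: $selection, maxSelectionCount: 5, matching: .images) {
            Label("Add Photos", systemImage: "photo.on.rectangle")
        }
        .onChange(of: selection) { _, items in
            for item in items {
                // Load each selected item as needed (Data here just for illustration).
                Task { _ = try? await item.loadTransferable(type: Data.self) }
            }
        }
    }
}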
Question 2
I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and the most control over the capture as possible. How can I do that with AVCapture?
If stills: brief Bayer RAW capture overview, or ProRAW if you want Apple's processing and dynamic range
If video: talk about ProRes Log.
Custom exposure settings are available through the APIs
maybe global/local tone mapping discussion?
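A sketch of the stills side of that, choosing between Bayer RAW and Apple ProRAW on the photo output (assumes photoOutput is already attached to a configured session):

import AVFoundation

func makeRAWSettings(photoOutput: AVCapturePhotoOutput, preferProRAW: Bool) -> AVCapturePhotoSettings? {
    if preferProRAW, photoOutput.isAppleProRAWSupported {
        // ProRAW: Apple's processing and dynamic range in a DNG container.
        photoOutput.isAppleProRAWEnabled = true
        guard let format = photoOutput.availableRawPhotoPixelFormatTypes
            .first(where: { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) }) else { return nil }
        return AVCapturePhotoSettings(rawPixelFormatType: format)
    } else {
        // Bayer RAW: the least-processed sensor data available to apps.
        guard let format = photoOutput.availableRawPhotoPixelFormatTypes
            .first(where: { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }) else { return nil }
        return AVCapturePhotoSettings(rawPixelFormatType: format)
    }
}

Custom exposure, for its part, goes through AVCaptureDevice's setExposureModeCustom(duration:iso:completionHandler:) after locking the device for configuration.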
Hello,
I am a developer currently working on an AR application using ARKit. I aim to implement a Zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate Optical Zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.
Approaches Being Considered:
Using UIPinchGestureRecognizer to Adjust the Camera's Field of View
Modifying the scale Property of SCNNode to Enlarge/Reduce Specific Objects
Leveraging AVFoundation to Control the Camera's Optical Zoom
Questions:
Compatibility Between ARKit and Optical Zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
Integrating Object Distance Measurement with Zoom Functionality: What is the most effective approach to measure and display the distance to an object in real-time when a user zooms in on it?
User Experience Considerations: Do you have any UI/UX design tips for implementing optical zoom to ensure a natural and intuitive experience? For example, how can visual feedback for zoom actions and distance measurements be effectively presented to users?
Performance Optimization: What optimization strategies can minimize potential performance issues when implementing both optical zoom and distance measurement features simultaneously?
Example Code and Reference Materials: Could you share any example code or reference materials that demonstrate similar functionalities?
Thank you.
Example Code Request:
If possible, providing sample code that integrates optical zoom with distance measurement would be extremely helpful.
Reference Links:
Please share any tutorials or resources that demonstrate the combined use of ARKit and AVFoundation.
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. But you can turn off the option to view HDR photos in the complete dynamic range in Settings > Photos. When you do that, open a photo, and tap the edit button, it does not appear in the full range as expected, but when you select my app from More > Extensions, it does appear in the complete dynamic range unexpectedly. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to get that in my PHContentEditingController.
In a photo editing extension, is it possible to display the photo in HDR? In this context you only have a placeholder UIImage and a PHContentEditingInput which has a displaySizeImage and fullSizeImageURL. The displaySizeImage has isHighDynamicRange false.
Hello everyone,
I am working on an iOS app that involves capturing images automatically, and I would like to control the start/stop of the capture process remotely from a Mac app. I explored the iPhone Mirroring feature, which allows some remote control but has the limitation of only functioning when the iPhone is locked, and it doesn’t permit access to the iPhone’s camera from the Mac.
Ideally, I am looking for a solution that would allow me to:
Remotely control the camera capture process on the iOS app from the Mac app.
Ensure the iPhone’s camera remains fully operational and controllable from the Mac during the capture process.
I have considered using options like Handoff for communication between the apps but faced some issues while communicating between the iOS and mac app. I would like to know if there is a more optimal solution within Apple’s ecosystem, or if there are APIs I might have overlooked.
Any advice or guidance on how to achieve this functionality would be greatly appreciated!
Thanks in advance!
Hi Apple Engineer,
My app is using the ImageCaptureCore framework to communicate with an external DSLR camera. When I connect my device to a camera, I execute requestContentsAuthorization(completion:) to request access for Files on Connected Cameras. This is the dialog shown when the request is executed:
When I tap "OK", the content authorization status remains "Denied", even when I enable the "Files and Folders" permission in the "Privacy & Security" Settings.
When I switch the permission ON, it immediately turns back off. You can see a reproduction in this Google Drive video https://drive.google.com/file/d/15B-R5TONgMWg8qFiYUGK0hTy62dsVGUX/view?usp=sharing
The issue keeps happening even after:
Uninstalling and reinstalling the app
Doing "Reset Location & Privacy"
Doing "Reset All Settings"
I attached the sysdiagnose files in this GoogleDrive file https://drive.google.com/file/d/11lovl_xC95AKXQTkZ1_e6UbEgS5md0Z3/view?usp=sharing
I first experienced this issue after researching ImageCaptureCore's API: I executed resetContentsAuthorizationWithCompletion:. After that, my permission request keeps being denied as described above :(
Other developers are experiencing the same thing: https://forums.developer.apple.com/forums/thread/756960 . There is a simple sample project there and it's reproducible in my case.
Could you help me get my app granted the "Files and Folders" permission when using ImageCaptureCore? Could it be a bug in the system?
I would like to offer functionality where the user aims the camera at a graph (including axes and scales) and the app detects the graph and replicates it using the image.
I have the whole camera setup finished with an AVCaptureSession, VNDetectContoursRequest, VNImageRequestHandler, etc.
However, I now get many, many results, so I guess I need to tell the image-processing pipeline what I am looking for, i.e. filter the VNContoursObservations.
I think I first need to detect two perpendicular lines (the two axes). How do I do that? If I do not see them, I can just ignore that input and wait for the next VNContoursObservation.
Once I have found the axes of the graph, I will need to find the curve (graph) that I need to scan. Any tips on how I can find that curve and turn it into a set of coordinates?
Thanks!
Wouter
Hello,
I apologize if the answer is obvious but I'm having a hard time figuring this one out.
Let's say the user taps an "Edit" button in my LockedCameraCaptureSession. The extension calls:
activity.userInfo = ["ActivityKey": "ID"]
try await session.openApplication(for: activity)
Can I retrieve, in my application, the data stored in activity.userInfo (lets say, a flag "open editor"), or is data passing exclusively handled via appContext of CameraCaptureIntent?
Thank you!
Hello developer community.
I recently purchased my new iPhone 16 Pro Max; it is a premium device with great quality overall.
However, I am having serious trouble shooting in ProRAW Max (48 MP mode) with the native camera.
Just to be clear, the problem I will describe does not happen in third-party apps such as ProCam; only with the native camera.
When I use ProRAW Max, take a photo, and view it in the gallery, the image can't load and render properly. When I zoom the image in all the way, I can see pixelated portions, defects, very low resolution, and excessive denoising.
For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos from ProCam (same settings and configuration) on the 16 Pro Max. I take the photo, open the gallery, and see it full of detail when zoomed to 100%.
I tried formatting the phone and reinstalling the software via my Mac. I even tried looking through some forums to see whether anyone has the same issue; the information available so far is very limited.
I'm in contact with Apple Support in my country (Portugal), and they escalated this problem to the engineers (that's what I've been told).
They did all the tests remotely (via analytics and improvements) and told me that my phone is perfect on the hardware side. I will wait to be contacted again in the coming days.
I'm on iOS 18.0.1 (the latest software available at this time).
I tried multiple 16 Pro Max units from friends, family, and stores (roughly 10 units), and they all showed exactly the same problem.
I'm a professional photographer, so I find this frustrating and unacceptable.
I would appreciate any additional suggestions or information. Thank you!
I cannot attach photos or files because they are bigger than 5 MB.
This idea would be cool:
When people finish recording a video and later realize there's something else worth capturing, they can only create a second clip. But what if it were possible to reopen the first video and continue recording from where they left off? This would be a great convenience for many people.
Hi:
I am working with the ObjectCapture frameworks and sample code.
Everything works great.
We are trying to go from capturing 12MP images as in the sample code to capturing 48MP 6048 × 8064 images.
We can't seem to get it to work.
Any advice here?
Task {
    for await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
        switch update {
        case .initial(let urls):
            print("frank: init \(urls)")
            await MainActor.run {
                let label = UILabel(frame: CGRect(x: 100, y: 100, width: 100, height: 30))
                label.text = "frank test"
                label.textColor = .black
                UIViewController.getTop().view.addSubview(label)
            }
        case .added(let url):
            print("frank: add \(url)")
        case .removed(let url):
            print("frank: removed \(url)")
        default:
            break
        }
    }
}
Why is case .initial(let urls) never executed? Can someone provide sample code?
We have a new photo sharing app (https://photodare.ca).
We've had no issues with photos loading in North America and Caribbean, but so far 2 users (Germany, Netherlands) are saying they can't load photos even though they've proven they have permissions for photos enabled.
I can't reproduce this in Canada.
Does anyone know about other permissions we need to set up for European countries, or is anyone in GDPR countries willing to try this for us?
They were on 17.6.1.
Thanks either way
I feel that the iOS 18 camera filters are overcomplicated and produce lower-quality results than the iOS 17 filters. I really miss the Vivid filter.
It was perfect on iOS 17.
Hi Apple Engineer,
My app is using the ImageCaptureCore framework to connect to DSLR cameras. Before iOS 18 this worked fine, but after upgrading my iPhone and iPad I found that my app can't connect to the DSLR camera any more. When I open Settings -> Privacy & Security -> Files and Folders, my app isn't listed. I swear it worked before iOS 18.
I have found other developers with the same problem:
https://forums.developer.apple.com/forums/thread/756960 .
https://developer.apple.com/forums/thread/765768.
I also found a way to reproduce this problem on iOS 18:
Do "Reset All Settings".
Can you help me with this problem, or tell me how to use the API properly? I look forward to your reply. Thank you very much.
Hello everyone, I am using the QRCodeScanner library in my project. QR code scanning was working on earlier versions of iPadOS, but on iPadOS 18 it has stopped working.
I have noticed a problem when a PHAsset creation request is made with the resource type PHAssetResourceType.photoProxy.
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photoProxy, data: photoData, options: nil)
creationRequest.location = location
creationRequest.isFavorite = true
After successfully saving the resulting asset through PHPhotoLibrary.shared().performChanges, I could verify it in the Photos app.
I noticed that the created photo was initially marked as Favorite and that the location was added to the info as expected. The title of the image changes from "Today" to "" too.
Next, the photo was refreshed, and location data was purged. However, the title remains unchanged and displays the .
This refresh was also observed in code: the PHPhotoLibraryChangeObserver protocol's func photoLibraryDidChange(_ changeInstance: PHChange) receives a change notification. The same asset has been changed, and there is no location information anymore. The isFavorite information persists correctly.
After debugging for a few hours, I discovered that changing the resource type to .photo fixes this issue. Location data is not removed in the Photos app, and no refresh callback is seen in func photoLibraryDidChange(_ changeInstance: PHChange).
I initially used .photoProxy because in the AVCapturePhotoCaptureDelegate implementation class, I always get the call in func photoOutput(_ output: AVCapturePhotoOutput, didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?, error: Error?). So here is where I am capturing the photo data as photoData = deferredPhotoProxy?.fileDataRepresentation().
I’m working on real-time object detection using YOLOv8, but I only need to detect objects in approximately 40% of the screen area. Is it possible to limit the captureOut method to focus solely on that specific region of the screen?
If this isn’t feasible, I’m considering an approach where the full-screen pixel buffer is captured and then cropped to the target area before running detection. However, I’m concerned about how this might affect real-time performance.
I’d appreciate any insights on how to maintain real-time performance or suggestions for better alternatives. Thank you!
I am currently developing an application that requires access to GPS coordinates from photos on iOS. However, with the recent update to iOS 18 beta, I have encountered a challenge: I can only view photos within maps, and I am unable to access the GPS coordinates directly.
Could you please provide guidance on how to enable or retrieve GPS coordinates from photos in the current iOS 18 beta version? Any insights or resources you could share would be greatly appreciated.
Thank you for your assistance!
I'm using PhotoKit in macOS to add photos to the user's library. Experimenting with Shared Photo Library, it seems that these new photos always end up in the Personal Library, not the Shared Library. I'd like to get them into the Shared Photo Library somehow. Is this possible?
Things I've considered:
A variation/option for PHAssetChangeRequest.creationRequestForAsset: doesn't seem to exist
A property of PHAsset: can't find anything
A special PHAssetCollection that I could add to: again, doesn't seem to exist
Hi,
After installing iPadOS18 Beta3 on my iPad 7th gen, the default camera app no longer detects QR codes.
I tried updating to Beta7, but the issue remained.
Third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes also no longer work.
You can reproduce the issue by running default camera app or the AVFoundation sample code from the Apple developer site on iPad 7th gen (iPadOS18Beta installed).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Has anyone else experienced this issue?
I would like to know if this issue occurs on other iPad models as well.
This is similar to the following issue that previously occurred with iPadOS 17.4.
https://support.apple.com/en-lamr/118614
https://developer.apple.com/forums/thread/748092