Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic


WWDC25 Camera & Photos group lab summary (Part 1 of 3)
(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Introductory kick-off questions

Question 1: Tell us a little about the new AVFoundation Capture APIs we've made available in the new iOS 26 developer preview.
- Cinematic Capture API (strong/weak focus, tracking focus; scene monitoring; simulated aperture; dog/cat heads and group IDs)
- Camera Controls and AirPod stem clicks
- Spatial Audio and studio-quality AirPod mics in Camera
- Lens smudge detection
- Exposure and focus rect of interest

Question 2: I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening and how can I fix it?
- Every year the cameras get better and better as we push the state of the art on iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses: minimum focus distance, newer phones having multiple lenses, and automatic switching behavior.
- Use a virtual device like the builtInDualWide or builtInTriple camera, rather than just the builtInWide.
- Set the videoZoomFactor to 2. You're done. (A minimal sketch follows this summary.)

Question 3: Last year, we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
- Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead be as close as possible to the real color of the scene, regardless of the illumination.
- Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format. It is a format used by the health industry to store images related to medical subjects like MRI, skin problems, X-rays, and so on.
- It is a writable and also readable format on all the OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics API. For Core Graphics there is a new DICOM entry in the property dictionary which includes all the available, defined DICOM properties in a file. Finder will also display all of those in the Info panel.
- Why would a developer want to use it? Not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you can also share with health providers or your doctors.

Main session developer questions

Question 1: LiDAR vs. Dual Camera depth generation: which resolution does the LiDAR sensor natively have (iPhone 16 Pro), and when should LiDAR be preferred over Dual Camera?
- Both report formats with output resolutions (we don't advertise sensor resolution).
- LiDAR: best for absolute depth, real-world scale, and computer vision.
- Dual, etc.: relative, disparity-based, less power, photo effects.
- Also see the 2022 WWDC session "Discover advancements in iOS camera capture: Depth, focus and multitasking".

Question 2: Can the TrueDepth and LiDAR cameras run at 60 fps?
- LiDAR can do 30 fps.
- The front TrueDepth camera can do 60 fps.

Question 3: What's the first-class way to use PhotoKit to reimplement a high-performance photo grid? We've been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60 Hz, efficient memory footprint, minimal flashes of placeholder/empty cells).
- Use the PHCachingImageManager to get media content delivered before you need to display it.
- Specify the size you need for grid-sized display.
- Set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat and PHImageRequestOptionsResizeModeFast.

Question 4: For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
- AVCaptureVideoPreviewLayer is the most efficient display path.
- Use a video data output (VDO) + AVSampleBufferDisplayLayer if you need to modify the image data.
- SwiftUI Image is optimized for static image content.

Question 5: Is there a way to configure the AVFoundation builtInLiDARDepthCamera mode to provide a depth map as accurate as ARKit at close range?
- The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps.

Question 6: Pyramid-based photo editing in Core Image (such as Adobe Camera Raw highlights and shadows)?
- First off, you may want to look at the built-in filter called CIHighlightShadowAdjust.
- Also, the noise reduction in the CIRAWFilter uses a pyramid-based algorithm.
- You can also write your own pyramid-based algorithms: take an input image, downsample it by two multiple times using imageByApplyingAffineTransform, apply additional CIKernels to each downsampled image as needed, and use a custom CIKernel to combine the results.

Question 7: Is the best way to integrate an in-app camera for a "non-camera" app UIImagePickerController?
- Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.

Question 8: My question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
- The CIFilter can be applied to the final photo at that point.
- The photo will have to be re-inserted into the Photos library as an adjustment.

Question 9: For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
- If you want lossless compression, PNG is good and supports unpremultiplied alpha.
- If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.

(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3)
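A minimal sketch of the virtual-device-plus-zoom fix from Question 2 above, assuming a scanner built on AVCaptureSession; session wiring and error handling are elided, and the zoom should be applied after the device input has been added and the session configured (format changes reset the zoom factor).

import AVFoundation

// Prefer a virtual device so the system can switch lenses for close-up focus;
// fall back to the plain wide camera on single-lens devices.
let scannerDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

// On the dual-wide virtual device a videoZoomFactor of 2.0 corresponds to the wide
// camera's field of view, leaving the ultra-wide available for close-range focus.
func applyScannerZoom(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = 2.0
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}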
Replies: 0 · Boosts: 0 · Views: 317 · Activity: Jul ’25
WWDC25 Camera & Photos group lab summary (Part 2 of 3)
(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Question 10: Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
- Yes, apps can use AVCaptureSession's video data output (VDO) + AVCapturePhotoOutput, run Vision on the VDO buffers, and capture a photo when a certain scene or text is detected.
- Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops. (A minimal sketch follows this summary.)

Question 11: What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
- The ImageCaptureCore framework supports camera devices, memory cards, and scanners, with read and write where supported.
- Check out the docs to see how to browse connected devices, folders, files, etc.

Question 12: Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
- Yes, this is totally fine. AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance.

Question 13: What's the "right" way to transition media in my photos app between HDR modes? When I'm in a one-up view, we use HDR, but in other contexts (like thumbnails) we don't want HDR. Is there a nice way to tone map?
- There's a suite of new system tone mapper APIs in this year's OSes: Core Image, ImageKit, Core Animation, Core Graphics.
- For example, Core Image has the new CISystemToneMap filter, and Core Animation has layer.preferredDynamicRange = CADynamicRangeConstrainedHigh.
- Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange and can go from high to constrained to standard.
- Tone mapping is provided by the system (CISystemToneMap for a controllable example).

Question 14: What is your recommendation to preprocess and upscale a depth map in order to render a realistic portrait mode image?
- One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.

Question 15: For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
- AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for a custom camera preview.
- For buffering for later processing, ensure you make copies of the VDO buffers so you don't drop frames from the output.

Question 16: My question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
- The CIFilter can be applied to the final photo at that point.
- The photo will have to be re-inserted into the Photos library as an adjustment.

Question 17: Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
- Digital zoom upscales the image to the output dimensions, while cropping will yield a smaller output image.
- While digital zoom does crop, it also upscales.

Question 18: How do you design camera interfaces that work for both casual users and photography enthusiasts?
- Progressive disclosure: put the most common controls up front, and make it easy for pros to drill down.
- Sensible defaults: choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts.
- A good philosophy is: keep the simple things easy, make the hard things possible.

Question 19: Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 20: A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the "physical" camera effectively disappear, so only the virtual camera is accessible to the user?
- You can't prevent the built-in camera from being available to the user.

Question 21: Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
- Yes they can: use CoreMLRequest and provide their model container.
- This has been supported for a while (iOS 18/macOS 15).
- For more details go to the Machine Learning & AI group lab on Thursday.
- Use smaller images for better performance.

Question 22: What would you recommend for capture of the new immersive and spatial formats?
- To capture Spatial Video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property.
- Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported.
- See the WWDC 2024 talk "Build compelling spatial photo and video experiences" for more details.

Question 23: You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
- For decoding, we support JPEG-XL files in all our OSes, regular SDR files as well as ISO HDR files.
- For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs.
- If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.

(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3)
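A minimal sketch of the Question 10 pattern above, assuming a session that already delivers frames from a video data output to this delegate and a separate AVCapturePhotoOutput that the owner triggers from onTextDetected; the queue name is illustrative, and the semaphore simply keeps at most one Vision request in flight so the video data output queue is never blocked.

import AVFoundation
import Vision

final class AutoCaptureTrigger: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    /// Called (on the analysis queue) when text is detected in the live feed;
    /// the owner can call AVCapturePhotoOutput.capturePhoto from here.
    var onTextDetected: (() -> Void)?

    private let visionQueue = DispatchQueue(label: "vision.analysis") // illustrative name
    private let inFlight = DispatchSemaphore(value: 1)                // at most one request at a time

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              inFlight.wait(timeout: .now()) == .success else { return }
        // Hop off the video data output queue so Vision work cannot cause frame drops.
        visionQueue.async {
            defer { self.inFlight.signal() }
            let request = VNRecognizeTextRequest { request, _ in
                if let results = request.results as? [VNRecognizedTextObservation], !results.isEmpty {
                    self.onTextDetected?()
                }
            }
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])
        }
    }
}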
Replies: 0 · Boosts: 0 · Views: 184 · Activity: Jul ’25
WWDC25 Camera & Photos group lab summary (Part 3 of 3)
(Note: this is part 3 of a 3-part posting. See Part 1 or Part 2)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Question 24: What's the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
- Turn on the flash in low-light scenarios.
- Lower the frame rate to improve exposure and reduce noise.
- Wait until the capture is in focus, and notify your user if they need to get closer. (A minimal sketch of the first two suggestions follows this summary.)

Question 25: Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the difference in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 26: Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
- File provider API.

Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week

Question 1: When should I build my custom photo picker instead of using the system one?
- Always start with the system picker -> try the embeddable customization APIs -> fall back to a custom picker for very special needs.

Question 2: I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and the most control over the capture as possible. How can I do that with AVCapture?
- For stills: a brief Bayer RAW capture overview, or ProRAW if you want Apple's processing and dynamic range.
- For video: ProRes Log.
- Custom exposure settings are available through the APIs.
- Possibly also a global/local tone mapping discussion.
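A minimal sketch of the torch and frame-rate suggestions from Question 24, assuming a device whose active format supports 15 fps; the 15 fps value is illustrative, and both changes require lockForConfiguration.

import AVFoundation

// Apply the low-light suggestions: enable the torch and cap the frame rate so
// each frame gets a longer exposure.
func configureForLowLightScanning(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }

    let fifteenFPS = CMTime(value: 1, timescale: 15)
    let supportsFifteenFPS = device.activeFormat.videoSupportedFrameRateRanges
        .contains { $0.minFrameRate <= 15 && 15 <= $0.maxFrameRate }
    if supportsFifteenFPS {
        device.activeVideoMinFrameDuration = fifteenFPS
        device.activeVideoMaxFrameDuration = fifteenFPS
    }
}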
Replies: 0 · Boosts: 0 · Views: 204 · Activity: Jul ’25
Significant Uptick in AVCaptureSessionWasInterrupted (Reason 4) Leading to Camera Black Screen and AVError Code -11803
In the latest production release of our iOS app (deployed via the App Store), we’ve observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting/reinstalling the app. An employee ran into this and was able to get a recording. The interruption causes the camera preview to remain black, and any attempt to capture an image fails with the following error:
"Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}"
Some questions from our team:
- What common system conditions or foreground app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and PiP, but neither of these is true in the logs we see.
- Is manual recovery of the session required? Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session?
- Are there any instruments in Xcode you would recommend when trying to evaluate the increase in reason 4?
Best, Ben
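Not an answer to why reason 4 becomes persistent, but for anyone comparing notes, a sketch of the observation/recovery pattern we would start from, assuming a session owned by the controller and a private session queue; restarting on interruptionEnded is the usual recovery path, and nothing here is a confirmed fix.

import AVFoundation

final class CameraInterruptionObserver {
    let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "camera.session") // illustrative name

    func observeInterruptions() {
        let center = NotificationCenter.default
        center.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                           object: session, queue: .main) { note in
            if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
                // Reason 4 is .videoDeviceNotAvailableWithMultipleForegroundApps.
                print("Capture interrupted: \(reason)")
            }
        }
        center.addObserver(forName: AVCaptureSession.interruptionEndedNotification,
                           object: session, queue: .main) { [weak self] _ in
            // The system ended the interruption; restart if the session did not resume itself.
            self?.sessionQueue.async {
                if let session = self?.session, !session.isRunning { session.startRunning() }
            }
        }
        center.addObserver(forName: AVCaptureSession.runtimeErrorNotification,
                           object: session, queue: .main) { note in
            let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
            print("Capture runtime error: \(String(describing: error))")
        }
    }
}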
Replies: 3 · Boosts: 17 · Views: 498 · Activity: Jun ’25
PHPicker fails to load RAW images
We observed that the PHPicker is unable to load RAW images captured on an iPhone in some scenarios, and it is also somehow related to iCloud.

Here is the setup: The PHPickerViewController is configured with preferredAssetRepresentationMode = .current to avoid transcoding. The image is loaded from the item provider like this:
if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage) { itemProvider.loadFileRepresentation(forTypeIdentifier: kUTTypeImage) { url, error in // work } }
This usually works, including for RAW images. However, when trying to load a RAW image that has just been captured with the iPhone, the loading fails with the following errors on the console:
[claims] 43A5D3B2-84CD-488D-B9E4-19F9ED5F39EB grantAccessClaim reply is an error: Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.image" UserInfo={NSLocalizedDescription=Cannot load representation of type public.image, NSUnderlyingError=0x280480540 {Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}}}
We observed that on some devices, loading the image will actually work after a short time (~30 sec), but on others it will always fail.

We think it is related to iCloud Photos: On the device that has iCloud Photos sync enabled, the picker is able to load the image right after it was synced to the cloud. On devices that don't sync the image, loading always fails. It seems that the sync process is doing some processing (?) of the image that will later enable the picker to load it successfully, but that's just guessing.

Additional observations:
- This seems to only occur for images that were taken with the stock Camera app. When using Halide to capture RAW (either ProRAW or RAW), the picker is able to load the image.
- When trying to load the image as kUTTypeRawImage instead of kUTTypeImage, it also fails.
- The picker also can't load RAW images that were AirDropped from another device, unless they synced to iCloud first.
- This is reproducible using the Selecting Photos and Videos in iOS sample code project.
- We observed this happening in other apps that use the PHPicker, not just ours.

Is this a bug, or is there something that we are missing?
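A workaround we have been experimenting with rather than a confirmed fix, and one that does require photo library read access (which partly defeats the permission-free appeal of PHPicker): configure the picker with the shared library so results carry asset identifiers, then fall back to PHAssetResourceManager with network access allowed when loadFileRepresentation fails.

import PhotosUI
import Photos

// Picker configured so PHPickerResult.assetIdentifier is populated.
func makePicker() -> PHPickerViewController {
    var config = PHPickerConfiguration(photoLibrary: .shared())
    config.preferredAssetRepresentationMode = .current
    config.filter = .images
    return PHPickerViewController(configuration: config)
}

// Fallback path: fetch the original resource through PhotoKit, allowing iCloud download.
// For ProRAW the original .photo resource is the DNG; adjust if you need .alternatePhoto.
func loadOriginalData(for result: PHPickerResult, completion: @escaping (Data?) -> Void) {
    guard let identifier = result.assetIdentifier,
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil).firstObject,
          let resource = PHAssetResource.assetResources(for: asset).first(where: { $0.type == .photo })
    else { completion(nil); return }

    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true   // lets PhotoKit pull the original from iCloud if needed

    var data = Data()
    PHAssetResourceManager.default().requestData(
        for: resource,
        options: options,
        dataReceivedHandler: { chunk in data.append(chunk) },
        completionHandler: { error in completion(error == nil ? data : nil) }
    )
}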
Replies: 7 · Boosts: 1 · Views: 2.7k · Activity: Sep ’25
AVCaptureVideoPreviewLayer layoutSublayers invoked on background thread
Opening this question after discussing the issue in the AVCapture lab, hopefully so we can track down this issue. We've been noticing some crashes in App Store Connect caused by layoutSublayers being called on a background thread. After debugging the issue a bit we found that all calls which modified the AVCaptureSession or preview layer were indeed done on the main thread. It would be useful to see what results in AVCaptureVideoPreviewLayer.updateFormatDescription being called. I've attached the crashlog below. Crash log.ips - https://developer.apple.com/forums/content/attachment/800b0dba-3477-4c5a-b56c-f4cc393b384f
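For anyone hitting the same crash, a sketch of the threading split we would compare against while investigating; nothing here is confirmed as the root cause. Session mutations stay on a private queue, and anything that can trigger layout on the preview layer stays on the main thread. The queue name is illustrative.

import AVFoundation
import UIKit

final class PreviewController: UIViewController {
    private let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "capture.session") // illustrative name
    private var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Layer creation and (re)parenting happen on the main thread only.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        // Session configuration and start/stop happen on the session queue only.
        sessionQueue.async { [self] in
            session.beginConfiguration()
            if let device = AVCaptureDevice.default(for: .video),
               let input = try? AVCaptureDeviceInput(device: device),
               session.canAddInput(input) {
                session.addInput(input)
            }
            session.commitConfiguration()
            session.startRunning()
        }
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer.frame = view.bounds  // frame changes only from main-thread layout
    }
}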
Replies: 1 · Boosts: 1 · Views: 730 · Activity: Jun ’25
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'.
I have an app that edits photos in your library. When I call try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!) The following is logged to the console: writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha. What does that mean and how can I resolve it? Xcode Version 16.0 (16A242d) iOS 18.1 (22B82)
Replies: 5 · Boosts: 9 · Views: 2.8k · Activity: Jan ’25
iOS 14, app crash when calling presentLimitedLibraryPickerFromViewController
iPhone 7: iOS 14.0 Beta 5, Xcode beta; macOS 10.15.5 (19F101)
Crash info:
** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHPhotoLibrary presentLimitedLibraryPickerFromViewController:]: unrecognized selector sent to instance xxxxxx' terminating with uncaught exception of type NSException
My code:
- (void)viewDidAppear:(BOOL)animated {
  [super viewDidAppear:animated];
  if (@available(iOS 14, *)) {
    [[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self];
  }
}
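One possible explanation, offered as an assumption rather than a confirmed diagnosis: presentLimitedLibraryPicker(from:) is declared in the PhotosUI framework (as a category on PHPhotoLibrary), so if PhotosUI is not imported/linked the selector is missing at runtime, which matches this crash. A hedged Swift sketch of a defensive call:

import PhotosUI   // the limited-library picker API lives here, not in Photos
import UIKit

func presentLimitedPickerIfAvailable(from viewController: UIViewController) {
    guard #available(iOS 14, *) else { return }
    let library = PHPhotoLibrary.shared()
    // Defensive check for beta/SDK mismatches; normally linking PhotosUI is enough.
    if library.responds(to: #selector(PHPhotoLibrary.presentLimitedLibraryPicker(from:))) {
        library.presentLimitedLibraryPicker(from: viewController)
    }
}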
Replies: 9 · Boosts: 0 · Views: 4.2k · Activity: Nov ’25
I need a way to permanently disable Reactions from my app, ideally the universe too
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30fps video feed on a MacBook Pro M2, and everything is great, until Sonoma comes out, and I find myself consuming 40% CPU for the exact same workload.
So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps - the CPU consumption stays at 40%.
"Reactions" is nothing but a useless toy to please some WWDC keynote fanboys. I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery.
Now, how do I make it go away, for ever?
Best regards
Jacob
Replies: 10 · Boosts: 6 · Views: 1.4k · Activity: Jul ’25
After iPadOS 26 Beta and iOS 26 Beta, AVCaptureMetadataOutput no longer detects Face on some devices.
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput configured with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] to scan faces. After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, the delegate method of AVCaptureMetadataOutputObjectsDelegate is no longer called on some devices. The following devices are experiencing this issue: iPad (9th Gen), iPad Air (4th Gen), iPhone 15. The issue has not occurred on any other devices I have. I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after the iPadOS 26 and iOS 26 betas, or is there some problem on the OS side?
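While this is investigated, a small defensive sketch, assuming the session already has its camera input added before this function runs: only request face metadata when the output actually reports it as available, and log the available types (an empty or face-less list on the affected beta would itself be useful to attach to a bug report).

import AVFoundation

func configureFaceMetadata(session: AVCaptureSession,
                           delegate: AVCaptureMetadataOutputObjectsDelegate,
                           queue: DispatchQueue) -> AVCaptureMetadataOutput? {
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return nil }
    // availableMetadataObjectTypes is only meaningful once the output is attached
    // to a session that has a camera input.
    session.addOutput(output)

    print("Available metadata types: \(output.availableMetadataObjectTypes)")
    guard output.availableMetadataObjectTypes.contains(.face) else { return nil }

    output.setMetadataObjectsDelegate(delegate, queue: queue)
    output.metadataObjectTypes = [.face]
    return output
}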
Replies: 1 · Boosts: 6 · Views: 466 · Activity: Sep ’25
Is Photo Library access mandatory for 24MP Deferred Photo Capture?
Hello everyone, I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in user's main Photos app so I thought I could store the photos in app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy. The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled" /** @property maxPhotoDimensions @abstract Indicates the maximum resolution of the requested photo. @discussion Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning]. Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled. */ @available(iOS 16.0, *) open var maxPhotoDimensions: CMVideoDimensions (btw. this note is not present in the docs https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions) Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data. According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library. To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way. https://developer.apple.com/videos/play/wwdc2023/10105/?time=799 This seems to create a hard dependency on the Photo Library for accessing 24MP images. My question is: Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary? For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library? Our goal is to follow Apple's privacy-first principles by avoiding requesting a PHPhotoLibrary authorization when our app's core function doesn't require access to the user's photo collection. Thank you for your time and any clarification you can provide.
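For reference, a minimal sketch of the opt-in path the quoted documentation describes; it does not answer the Photo Library question, it only shows the configuration being discussed. The assumption that supportedMaxPhotoDimensions is ordered smallest to largest is ours, and this should run before startRunning to avoid a mid-session pipeline reconfiguration.

import AVFoundation

func configureFor24MP(photoOutput: AVCapturePhotoOutput, device: AVCaptureDevice) {
    // 24MP (5712x4284) is only serviced when auto deferred delivery is opted in.
    if photoOutput.isAutoDeferredPhotoDeliverySupported {
        photoOutput.isAutoDeferredPhotoDeliveryEnabled = true
    }
    // Pick the largest dimensions the active format offers (assumed to be the last entry).
    if let largest = device.activeFormat.supportedMaxPhotoDimensions.last {
        photoOutput.maxPhotoDimensions = largest
    }
}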
Replies: 3 · Boosts: 3 · Views: 493 · Activity: Sep ’25
Images with unusual color spaces not correctly loaded by Core Image
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
- This only happens when the app is built for Mac Catalyst. Not on iOS, iPadOS, or “real” macOS (AppKit).
- The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2. Those images are rendered correctly in Photos and Preview on all platforms.
- When displaying the image inside of a UIImageView or in a SwiftUI Image, they render correctly. The issue only occurs when loading the image via Core Image.
- When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result. (Screenshots of the AppKit and Catalyst render graphs omitted here.)
Something seems to be off when Core Image tries to load an image with a foreign color space in Catalyst. We identified a workaround: by using a CGImageDestination to transcode the image with the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load it correctly. However, one potentially loses fidelity this way. Or might there be a better workaround?
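A sketch of the transcoding workaround described above, for anyone wanting to reproduce it; the HEIC destination type is an illustrative choice (JPEG or PNG work the same way), and the fidelity caveat from the post still applies since the image is re-encoded into sRGB.

import ImageIO
import UniformTypeIdentifiers
import CoreImage

// Transcode via Image I/O with "optimize color for sharing" so the result lands
// in a widely supported color space that Core Image on Catalyst can load.
func loadViaSharingTranscode(_ url: URL) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        data as CFMutableData, UTType.heic.identifier as CFString, 1, nil) else { return nil }

    let options = [kCGImageDestinationOptimizeColorForSharing: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }

    return CIImage(data: data as Data)
}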
Replies: 2 · Boosts: 3 · Views: 118 · Activity: Aug ’25
iPadOS 17 external camera exposure
I'm developing an iPad app that will be mostly dedicated to a certain external camera for visually impaired people. The Linux UVC API (e.g. using guvcview) allows enabling automatic exposure for the camera. The iOS API isExposureModeSupported unfortunately returns false for all of the exposure modes. Is it a bug? Or perhaps AVFoundation doesn't support UVC exposure yet?
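We cannot say whether this is a bug, but a small diagnostic sketch like the one below gathers the device's claimed exposure capabilities, which is useful to attach to a Feedback report either way.

import AVFoundation

// List which exposure modes the external (UVC) device claims to support,
// plus the ISO and exposure-duration limits of its active format.
func logExposureSupport(for device: AVCaptureDevice) {
    let modes: [(AVCaptureDevice.ExposureMode, String)] = [
        (.locked, "locked"),
        (.autoExpose, "autoExpose"),
        (.continuousAutoExposure, "continuousAutoExposure"),
        (.custom, "custom"),
    ]
    for (mode, name) in modes {
        print("\(name): \(device.isExposureModeSupported(mode))")
    }
    print("activeFormat: \(device.activeFormat)")
    print("ISO range: \(device.activeFormat.minISO) ... \(device.activeFormat.maxISO)")
    print("Exposure duration range: \(device.activeFormat.minExposureDuration) ... \(device.activeFormat.maxExposureDuration)")
}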
Replies: 1 · Boosts: 2 · Views: 585 · Activity: Jul ’25
Constituent active device switching very slow on iPhone 16 Pro Models on focus changes
Hi all, we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning tiniest barcodes and small business cards to scanning barcodes or large documents from far distances we preferably rely on the triple camera devices, if available, with automatic constituent device switching. This approach used to be working perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all. Unfortunately the new iPhone 16 Pro models behave very different when it comes to constituent device switching based on focus distance. The switching is slow and sometimes it does not happen at all when the focusing distance changes. Especially when aiming for a at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. The iPhone 15 Pro here always switches immediately to the ultra-wide camera, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds and sometimes forever to switch to the ultra-wide camera. Of course we assumed that our code is responsible for these issues. So we experimented with restricting the devices and so on. Then we stripped more and more configuration code but nothing we tried improved the situation. So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Execute it on various iPhones and aim at far distance (> 10 feet) and then quickly to very close distance (<5 inches). Here is a list of devices and our test results: iPhone 15 Pro, iOS 17.6: very fast and reliable switching iPhone 15 Pro, iOS 18.1: very fast and reliable switching iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching iPhone 16 Pro, iOS 18.1: slow switching, unreliable iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable Questions: Does anyone else have seen this issue? And possibly found a workaround? Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed? Further the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the constituent active device. Not dramatic, but compared to the other phones it looks like a glitch. Thank you very much! Kind regards, Sebastian import UIKit import AVFoundation class ViewController: UIViewController { var captureSession : AVCaptureSession! var captureDevice : AVCaptureDevice! var captureInput : AVCaptureInput! var previewLayer : AVCaptureVideoPreviewLayer! var activePrimaryConstituentToken: NSKeyValueObservation? var zoomToken: NSKeyValueObservation? 
override func viewDidLoad() { super.viewDidLoad() } override func viewDidAppear(_ animated: Bool) { super.viewDidAppear(animated) checkPermissions() setupAndStartCaptureSession() } func checkPermissions() { let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video) switch cameraAuthStatus { case .authorized: return case .denied: abort() case .notDetermined: AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in if(!authorized){ abort() } }) case .restricted: abort() @unknown default: fatalError() } } func setupAndStartCaptureSession() { DispatchQueue.global(qos: .userInitiated).async{ self.captureSession = AVCaptureSession() self.captureSession.beginConfiguration() if self.captureSession.canSetSessionPreset(.photo) { self.captureSession.sessionPreset = .photo } self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true self.setupInputs() DispatchQueue.main.async { self.setupPreviewLayer() } self.captureSession.commitConfiguration() self.captureSession.startRunning() self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in let type = device.activePrimaryConstituent!.deviceType.rawValue print("Device type: \(type)") }) self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in let zoom = device.videoZoomFactor print("Zoom: \(zoom)") }) let switchZoomFactor = 2.0 DispatchQueue.main.async { self.setZoom(CGFloat(switchZoomFactor), animated: false) } } } func setupInputs() { if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) { captureDevice = device } else { fatalError("no back camera") } guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { fatalError("could not create input device from back camera") } if !captureSession.canAddInput(input) { fatalError("could not add back camera input to capture session") } captureInput = input captureSession.addInput(input) } func setupPreviewLayer() { previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) view.layer.addSublayer(previewLayer) previewLayer.frame = self.view.layer.frame } func setZoom(_ value: CGFloat, animated: Bool) { guard let device = captureDevice else { return } let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor let zoomValue = max(min(value, maxZoom), minZoom) let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor)) do { try device.lockForConfiguration() if animated { device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0)) } else { device.videoZoomFactor = zoomValue } device.unlockForConfiguration() } catch { return } } }
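Not a fix for the iPhone 16 Pro switching latency described above, but possibly relevant to the investigation: since iOS 15 the constituent-switching policy itself is configurable on the virtual device. A hedged sketch, in case restricting the switching conditions changes the observed behavior; the chosen conditions are illustrative.

import AVFoundation

// Experiment: restrict when the virtual device may switch its active constituent,
// to see whether the slow switching is policy- or focus-distance-driven.
func restrictConstituentSwitching(on device: AVCaptureDevice) {
    guard #available(iOS 15.0, *),
          device.primaryConstituentDeviceSwitchingBehavior != .unsupported else { return }
    do {
        try device.lockForConfiguration()
        // Only allow switches triggered by zoom or explicit focus/exposure changes.
        device.setPrimaryConstituentDeviceSwitchingBehavior(
            .restricted,
            restrictedSwitchingBehaviorConditions: [.videoZoomChanged, .focusModeChanged, .exposureModeChanged]
        )
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}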
Replies: 4 · Boosts: 2 · Views: 511 · Activity: Dec ’24
After iPadOS 26 beta and iOS 26 beta, AVCaptureMetadataOutput no longer detects Face on some devices.
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput configured with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] to scan faces. After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, the delegate method of AVCaptureMetadataOutputObjectsDelegate is no longer called on some devices. The following devices are experiencing this issue: iPad (9th Gen), iPad Air (4th Gen), iPhone 15. The issue has not occurred on any other devices I have. I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after the iPadOS 26 and iOS 26 betas, or is there some problem on the OS side?
Replies: 0 · Boosts: 2 · Views: 117 · Activity: Jul ’25
Camera becomes black for a few production users during photo capture
PLATFORM AND VERSION :iOS 18.5 I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users are encountering a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view. This camera blackout issue is significantly impacting the user experience as it prevents them from snapping photos of their receipts, which is a core functionality of the CoinOut app. Any help/suggestion to this issue would be greatly appreciated. STEPS TO REPRODUCE Open the app and click on camera icon. It will display camera to capture photo. Camera shows black for few production user's. class ViewController: UIViewController { @IBOutlet private weak var captureButton: UIButton! private var fillLayer: CAShapeLayer! private var previewLayer : AVCaptureVideoPreviewLayer! private var output: AVCapturePhotoOutput! private var device: AVCaptureDevice! private var session : AVCaptureSession! private var highResolutionEnabled: Bool = false private let sessionQueue = DispatchQueue(label: "session queue") override func viewDidLoad() { super.viewDidLoad() setupCamera() customiseUI() } @IBAction func startCamera(sender: UIButton) { didTapTakePhoto() } private func setupCamera() { let session = AVCaptureSession() session.sessionPreset = AVCaptureSession.Preset.high previewLayer = AVCaptureVideoPreviewLayer(session: session) output = AVCapturePhotoOutput() device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back) if let device = self.device{ do{ let input = try AVCaptureDeviceInput(device: device) if session.canAddInput(input){ session.addInput(input)} else { print("\(#fileID):\(#function):\(#line) : Session Input addition failed") } if session.canAddOutput(output){ output.isHighResolutionCaptureEnabled = self.highResolutionEnabled session.addOutput(output) } else { print("\(#fileID):\(#function):\(#line) : Session Input high resolution failed") } previewLayer.videoGravity = .resizeAspectFill previewLayer.session = session sessionQueue.async { session.startRunning() } self.session = session self.session.accessibilityElementIsFocused() try device.lockForConfiguration() if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.autoWhiteBalance) { device.whiteBalanceMode = .autoWhiteBalance } else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") } if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.continuousAutoWhiteBalance) { device.whiteBalanceMode = .continuousAutoWhiteBalance } else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") } if device.isFocusModeSupported(.continuousAutoFocus) { device.focusMode = .continuousAutoFocus} else if device.isFocusModeSupported(.autoFocus) { device.focusMode = .autoFocus } device.unlockForConfiguration() } catch { print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)") } } else { print("\(#fileID):\(#function):\(#line) : Device found as nil") } } private func customiseUI() { let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height), cornerRadius: 0) let rectangleWidth = view.frame.width - (view.frame.width * 0.16) let x = (view.frame.width - rectangleWidth) / 2 let rectangleHeight 
= view.frame.height - (view.frame.height * 0.16) let y = (view.frame.height - rectangleHeight) / 2 let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners:.allCorners, cornerRadii: CGSize(width: 0, height: 0)) roundRect.move(to: CGPoint(x: self.view.center.x , y: self.view.center.y)) path.append(roundRect) path.usesEvenOddFillRule = true fillLayer = CAShapeLayer() fillLayer.path = path.cgPath fillLayer.fillRule = .evenOdd fillLayer.opacity = 0.4 previewLayer.addSublayer(fillLayer) previewLayer.frame = view.bounds view.layer.addSublayer(previewLayer) view.bringSubviewToFront(captureButton) } private func didTapTakePhoto() { let settings = self.getSettings(camera: self.device) if device.isAdjustingFocus { do { try device.lockForConfiguration() device.focusMode = .continuousAutoFocus device.unlockForConfiguration() device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil) } catch { print(error) } } else { output.capturePhoto(with: settings, delegate: self) } } func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings { var settings = AVCapturePhotoSettings() if let rawFormat = output.availableRawPhotoPixelFormatTypes.first { settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat)) } settings.isHighResolutionPhotoEnabled = self.highResolutionEnabled let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first! let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType] as [String : Any] settings.previewPhotoFormat = previewFormat return settings } } extension ViewController: AVCapturePhotoCaptureDelegate { func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) { AudioServicesDisposeSystemSoundID(1108) } func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { guard let data = photo.fileDataRepresentation() else { return } let image = UIImage(data: data)! showImage(cropped: image) } func showImage(cropped: UIImage) { let vc = self.storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController vc?.captured = cropped self.present(vc!, animated: true) } }```
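One thing the snippet above never checks is camera authorization, which in our experience is a common cause of a permanently black preview in production; this is a guess at the cause rather than a diagnosis. A minimal sketch of the check to run before setupCamera():

import AVFoundation

// A denied or restricted status produces exactly the symptom described:
// a black preview and failed captures.
func ensureCameraAccess(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    case .denied, .restricted:
        completion(false)   // direct the user to Settings instead of showing the preview
    @unknown default:
        completion(false)
    }
}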
Replies: 1 · Boosts: 0 · Views: 186 · Activity: Jul ’25
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a live photo. I am utilizing the LivePhoto Class from Github for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to download the live photo to the gallery and generate the Live Photo. Here is the relevant code and the errors I am encountering: Console prints: Play button should be visible Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...") Video is ready to play Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp Failed to generate Live Photo I have verified that the app has the necessary permissions to access the Photo Library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files. struct WallpaperDetailView: View { var wallpaper: Wallpaper @State private var isLoading = false @State private var isImageSaved = false @State private var imageURL: URL? @State private var livePhotoVideoURL: URL? @State private var player: AVPlayer? @State private var playerViewController: AVPlayerViewController? @State private var isVideoReady = false @State private var showBuffering = false var body: some View { ZStack { if let imageURL = imageURL { GeometryReader { geometry in KFImage(imageURL) .resizable() ... } } if let playerViewController = playerViewController { VideoPlayerViewController(playerViewController: playerViewController) .frame(maxWidth: .infinity, maxHeight: .infinity) .clipped() .edgesIgnoringSafeArea(.all) } } .onAppear { PHPhotoLibrary.requestAuthorization { status in if status == .authorized { loadImage() } else { print("User denied access to photo library") } } } private func loadImage() { isLoading = true if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) { self.imageURL = imageURL if imageURL.scheme == "file" { self.isLoading = false print("Local image URL set: \(imageURL)") } else { fetchDownloadURL(from: imageURLString) { url in self.imageURL = url self.isLoading = false print("Image URL fetched and set: \(String(describing: url))") } } } if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) { self.livePhotoVideoURL = livePhotoVideoURL preloadAndPlayVideo(from: livePhotoVideoURL) } else { self.isLoading = false print("No valid image or video URL") } } private func preloadAndPlayVideo(from url: URL) { self.player = AVPlayer(url: url) let playerViewController = AVPlayerViewController() playerViewController.player = self.player self.playerViewController = playerViewController let playerItem = AVPlayerItem(url: url) playerItem.preferredForwardBufferDuration = 1.0 self.player?.replaceCurrentItem(with: playerItem) ... print("Live Photo Video URL set: \(url)") } private func saveWallpaperToPhotos() { if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL { saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL) } else if let imageURL = imageURL { saveImageToPhotos(url: imageURL) } } private func saveImageToPhotos(url: URL) { ... 
} private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) { isLoading = true downloadVideo(from: videoURL) { localVideoURL in guard let localVideoURL = localVideoURL else { print("Failed to download video for Live Photo") DispatchQueue.main.async { self.isLoading = false } return } print("Video downloaded to: \(localVideoURL)") self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL) } } private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) { LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in print("Progress: \(percent)") }, completion: { livePhoto, resources in guard let resources = resources else { print("Failed to generate Live Photo") DispatchQueue.main.async { self.isLoading = false } return } print("Live Photo generated with resources: \(resources)") self.saveLivePhotoToLibrary(resources: resources) }) } private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) { LivePhoto.saveToLibrary(resources) { success in DispatchQueue.main.async { if success { self.isImageSaved = true print("Live Photo saved successfully") } else { print("Failed to save Live Photo") } self.isLoading = false } } } private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) { let storageRef = Storage.storage().reference(forURL: gsURL) storageRef.downloadURL { url, error in if let error = error { print("Failed to fetch image URL: \(error)") completion(nil) } else { completion(url) } } } private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) { let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in guard let localURL = localURL, error == nil else { print("Failed to download video: \(String(describing: error))") completion(nil) return } completion(localURL) } task.resume() } }```
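An alternative save path worth trying while debugging, offered as an assumption rather than a confirmed fix: hand the JPEG and MOV directly to PhotoKit as a paired photo + video. Photos only accepts the pair when both files carry a matching content identifier in their metadata (which the LivePhoto helper normally writes); a missing or mismatched identifier is one plausible reason for "Failed to generate Live Photo".

import Photos

func saveLivePhotoPair(imageURL: URL, videoURL: URL,
                       completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        // The still image and the paired video must share a content identifier
        // in their metadata for Photos to treat them as one Live Photo.
        request.addResource(with: .photo, fileURL: imageURL, options: options)
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    }, completionHandler: completion)
}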
Replies: 1 · Boosts: 1 · Views: 785 · Activity: Mar ’25
Pictures won't import from iPhone to the Photo app on iMac
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I have to mention that my iMac is pretty old and is running OS High Sierra 10.13.6 and is not allowing me to update the OS to a newer version. Anyway, the main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to upload, the error message I get reads, "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to iMac. This was not a problem before the recent iPhone update. I tried closing the Photo app and reopening it. I tried restarting my iPhone and iMac but nothing seems to work. Any help would be much appreciated.
Replies: 3 · Boosts: 1 · Views: 1.8k · Activity: Dec ’24
App crashes when opening camera from file input in WKWebView (iOS)
Hello everyone, I have a SwiftUI app using WKWebView to load a website that includes a form with a file input (). The issue is: 📌 When a user taps “Browse” and selects “Take Photo” (camera option), the app crashes before the camera opens. Setup Details: • App Uses SwiftUI with WKWebView • The crash occurs only when selecting “Take Photo”, but selecting an image from the library works fine. 📌 Full Code (WKWebView in SwiftUI) import SwiftUI import WebKit struct WebViewRepresentable: UIViewRepresentable { var urlString: String func makeUIView(context: Context) -> WKWebView { let webView = WKWebView() webView.configuration.allowsInlineMediaPlayback = true webView.configuration.mediaTypesRequiringUserActionForPlayback = [] loadURL(in: webView) return webView } func updateUIView(_ uiView: WKWebView, context: Context) { loadURL(in: uiView) } private func loadURL(in webView: WKWebView) { if let url = URL(string: urlString) { webView.load(URLRequest(url: url)) } } } struct ContentView: View { @State private var currentURL: String = "https://fv-wohlensee.ch" var body: some View { VStack(spacing: 0) { // Oberer Bereich in Grün Color(red: 0, green: 0.4, blue: 0) .frame(height: 50) // WebView with white background WebViewRepresentable(urlString: currentURL) .background(Color.white) Divider() // Navigation buttons HStack(spacing: 10) { Button { currentURL = "https://fv-wohlensee.ch/vereinshaus-eymatt/" } label: { VStack { Image(systemName: "house") .font(.system(size: 18)) Text("Klubhaus") .font(.system(size: 12)) .minimumScaleFactor(0.7) .lineLimit(1) } .padding(8) } .foregroundColor(.white) .frame(maxWidth: .infinity) Button { currentURL = "https://fv-wohlensee.ch/vereinsboot/" } label: { VStack { Image(systemName: "ferry.fill") .font(.system(size: 18)) Text("Boot") .font(.system(size: 12)) .minimumScaleFactor(0.7) .lineLimit(1) } .padding(8) } .foregroundColor(.white) .frame(maxWidth: .infinity) Button { currentURL = "https://fv-wohlensee.ch/aktivitaeten/" } label: { VStack { Image(systemName: "calendar") .font(.system(size: 18)) Text("Aktivitäten") .font(.system(size: 12)) .minimumScaleFactor(0.7) .lineLimit(1) } .padding(8) } .foregroundColor(.white) .frame(maxWidth: .infinity) Button { currentURL = "https://fv-wohlensee.ch/mitglied-werden/" } label: { VStack { Image(systemName: "person.badge.plus") .font(.system(size: 18)) Text("Mitglied") .font(.system(size: 12)) .minimumScaleFactor(0.7) .lineLimit(1) } .padding(8) } .foregroundColor(.white) .frame(maxWidth: .infinity) } .padding(.horizontal, 15) .padding(.vertical, 10) .background(Color(red: 0, green: 0.4, blue: 0)) } .frame(maxWidth: .infinity, maxHeight: .infinity) .background(Color(red: 0, green: 0.4, blue: 0)) .ignoresSafeArea() } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView() } } What I’ve Tried: 1️⃣ Checked Info.plist: Added permissions for camera and photo library: <key>NSCameraUsageDescription</key> <string>This app requires access to the camera to upload photos.</string> <key>NSPhotoLibraryUsageDescription</key> <string>This app requires access to your photo library.</string> 2️⃣ Enabled Media Capture in WKWebView: webView.configuration.allowsInlineMediaPlayback = true webView.configuration.mediaTypesRequiringUserActionForPlayback = [] 3️⃣ Tested in Safari: The same form works fine when opened in Safari. Questions: ❓ Does WKWebView need additional permissions to open the camera? ❓ Do I need to implement a delegate to handle file uploads in SwiftUI? 
❓ Has anyone faced this issue and found a fix? Any guidance would be greatly appreciated! 🚀 Thanks in advance! 😊
Replies: 1 · Boosts: 1 · Views: 458 · Activity: Jan ’25
ProRAW to CIRAWFilter to HEIF producing borked HDR results
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all. So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't. Be prepared for a multi-second wait when you save the photo. import SwiftUI import Photos struct ContentView: View { let context = CIContext() let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)! var body: some View { VStack(spacing: 100) { Button("Save Photo From CGImage/UIImage") { savePhotoFromUIImage() } Button("Save Photo From CIImage") { savePhotoDirectFromCIImage() } }.padding(60) } //convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF private func savePhotoFromUIImage() { if let ciImage = processRAW(url: Bundle.main.url(forResource:"test", withExtension: "dng")!) { guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return } let uiImage = UIImage(cgImage: outputCGImage) if let heicData = uiImage.heicData() { saveHEIFPhotoToLibrary(imageData: heicData) } else { print("Failed to convert UIImage to HEIC") } } } //convert RAW with CIRAWFilter to CIImage, then to HEIF private func savePhotoDirectFromCIImage() { if let ciImage = processRAW(url: Bundle.main.url(forResource:"test", withExtension: "dng")!) { do { let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace) saveHEIFPhotoToLibrary(imageData: heif) } catch { print("Failed to get HEIF representation from CIContext") } } } private func processRAW(url: URL) -> CIImage? { guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil } coreRawFilter.extendedDynamicRangeAmount = 2.0 //the issue persists whether this is not set, or set to 0, or set to, say, 2.0 guard let ciImage = coreRawFilter.outputImage else { return nil } return ciImage } private func saveHEIFPhotoToLibrary(imageData: Data) { PHPhotoLibrary.shared().performChanges({ let creationRequest = PHAssetCreationRequest.forAsset() let options = PHAssetResourceCreationOptions() creationRequest.addResource(with: .photo, data: imageData, options: options) }) { success, error in if let error = error { print("Error saving photo: \(error.localizedDescription)") } else { print("Photo saved.") } } } }
Replies: 0 · Boosts: 1 · Views: 562 · Activity: Jan ’25