Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic

Implementation of Audio-Video Synchronization in Swift
I have a feature requirement: switch to a new writer every 5 minutes of file writing, and then quickly merge the last two files. How can I ensure that the merged file is seamless and that the audio and video stay synchronized? Currently the merged video has glitches and the audio is out of sync. If there are experts who can provide solutions in this area, I would be extremely grateful.
Replies: 1 · Boosts: 0 · Views: 197 · Created: Aug ’25
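A minimal merge sketch for the question above, assuming each 5-minute segment holds one video and one audio track: append both segments onto shared AVMutableComposition tracks so audio and video share a single timeline, then export with a passthrough preset (no re-encode, so nothing is resampled at the seam).

```swift
import AVFoundation

// Appends `second` after `first` on shared composition tracks, then exports.
func merge(_ first: URL, _ second: URL, into output: URL) async throws {
    let composition = AVMutableComposition()
    guard let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid),
          let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return }

    var cursor = CMTime.zero
    for url in [first, second] {
        let asset = AVURLAsset(url: url)
        let duration = try await asset.load(.duration)
        let range = CMTimeRange(start: .zero, duration: duration)
        if let v = try await asset.loadTracks(withMediaType: .video).first {
            try videoTrack.insertTimeRange(range, of: v, at: cursor)
        }
        if let a = try await asset.loadTracks(withMediaType: .audio).first {
            try audioTrack.insertTimeRange(range, of: a, at: cursor)
        }
        cursor = cursor + duration  // next segment starts exactly where this one ends
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = output
    export.outputFileType = .mov
    await withCheckedContinuation { (done: CheckedContinuation<Void, Never>) in
        export.exportAsynchronously { done.resume() }
    }
    if let error = export.error { throw error }
}
```

Glitches at the seam often point to the writer sessions not starting and ending on exact sample boundaries, so it is also worth comparing the first segment's end time against the source timestamps handed to the next writer.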
Live Photos created with PHLivePhoto API show "Motion not available" when setting as wallpaper
I'm creating Live Photos programmatically in my app using the Photos and AVFoundation frameworks. While the Live Photos work perfectly in the Photos app (long press shows motion), users cannot set them as motion wallpapers; the system shows a "Motion not available" message.

Here's my approach for creating Live Photos:

```swift
// 1. Create video with required metadata
let writer = try AVAssetWriter(outputURL: videoURL, fileType: .mov)
let contentIdentifier = AVMutableMetadataItem()
contentIdentifier.identifier = .quickTimeMetadataContentIdentifier
contentIdentifier.value = assetIdentifier as NSString
writer.metadata = [contentIdentifier]
// Video settings: 882x1920, H.264, 30fps, 2 seconds
// Added still-image-time metadata at middle frame

// 2. Create HEIC image with asset identifier
var makerAppleDict: [String: Any] = [:]
makerAppleDict["17"] = assetIdentifier // Required key for Live Photo
metadata[kCGImagePropertyMakerAppleDictionary as String] = makerAppleDict

// 3. Generate Live Photo
PHLivePhoto.request(
    withResourceFileURLs: [photoURL, videoURL],
    placeholderImage: nil,
    targetSize: .zero,
    contentMode: .aspectFit
) { livePhoto, info in
    // Success - Live Photo created
}

// 4. Save to Photos library (a single creation request pairs the two resources)
let creation = PHAssetCreationRequest.forAsset()
creation.addResource(with: .photo, fileURL: photoURL, options: nil)
creation.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
```

What I've tried:
- Matching exact video specifications from the Camera app (882x1920, H.264, 30fps)
- Adding all documented metadata (content identifier, still-image-time)
- Testing various video durations (1.5s, 2s, 3s)
- Different image formats (HEIC, JPEG)
- Comparing with exiftool against working Live Photos

Expected behavior: Live Photos created programmatically should be eligible for motion wallpapers, just like those from the Camera app.

Actual behavior: the system shows "Motion not available" and only allows setting the photo as a static wallpaper.

Any insights or workarounds would be greatly appreciated. This is affecting our users who want to use their created content as wallpapers.

Questions:
- Are there additional undocumented requirements for Live Photos to be wallpaper-eligible?
- Is this a deliberate restriction for third-party apps, or a bug?
- Has anyone successfully created Live Photos that work as motion wallpapers?

Environment: iOS 17.0 - 18.1, Xcode 16.0, tested on iPhone 16 Pro
Replies: 1 · Boosts: 1 · Views: 229 · Created: Aug ’25
AVCaptureMetadataOutput .face detection not working on iOS 26 Beta with high sessionPreset
In iOS 26 (Developer Beta), the AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when metadataOutput.metadataObjectTypes = [.face] is set. On earlier iOS versions the issue does not occur. Interestingly, face detection works if I set the sessionPreset to .medium, but not with .high — except on the iPhone 16 Pro Max, where it works regardless.
Replies: 2 · Boosts: 1 · Views: 315 · Created: Aug ’25
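For the report above, a minimal repro harness can help isolate whether the beta simply stops advertising .face under the .high preset. This sketch (assuming a front wide-angle device is present) checks availableMetadataObjectTypes before opting in, which is the safe pattern on release builds too:

```swift
import AVFoundation

final class FaceDetector: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()
    private let metadataOutput = AVCaptureMetadataOutput()

    func configure() throws {
        session.beginConfiguration()
        session.sessionPreset = .high  // toggle to .medium to compare with the report

        let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                             for: .video, position: .front)!
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(metadataOutput)

        // Set types only after the output is attached to the session;
        // before that the supported list is empty and .face would throw.
        if metadataOutput.availableMetadataObjectTypes.contains(.face) {
            metadataOutput.metadataObjectTypes = [.face]
        } else {
            print(".face not available for this preset/format")
        }
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        session.commitConfiguration()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        let faces = metadataObjects.compactMap { $0 as? AVMetadataFaceObject }
        print("faces:", faces.count)
    }
}
```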
Optimizing UICollectionView Scrolling Performance and High-Quality Image Loading with PHCachingImageManager
Hello, I'm developing an app that displays a photo library using UICollectionView and PHCachingImageManager. I'd like to achieve a user experience similar to the native iOS Photos app, where low-quality images are shown quickly while scrolling, and higher-quality images are loaded for visible cells once scrolling stops.

I'm currently using the following approach:
- While scrolling: I'm using the UICollectionViewDataSourcePrefetching protocol. In the prefetchItemsAt method, I call startCachingImages with low-quality options to cache images in advance.
- After scrolling stops: in the scrollViewDidEndDecelerating method, I intend to load high-quality images for the currently visible cells.

I have a few questions regarding this approach:
- What is the best practice for managing both low-quality and high-quality images efficiently with PHCachingImageManager?
- Is it correct to call startCachingImages with fastFormat options and then call it again with highQualityFormat in scrollViewDidEndDecelerating?
- How can I minimize the delay when a low-quality image is replaced by a high-quality one?
- Are there any additional strategies to help pre-load high-quality images more effectively?
Replies: 0 · Boosts: 1 · Views: 234 · Created: Aug ’25
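On the caching question above, one commonly used pattern is to warm the cache with .fastFormat during prefetch and request visible cells with .opportunistic, which delivers a degraded image immediately and then calls the handler again with the high-quality result, minimizing the visible swap delay. A sketch (the thumbnail size is a placeholder; scale it to your cell size times the screen scale):

```swift
import Photos
import UIKit

let cache = PHCachingImageManager()
let thumbSize = CGSize(width: 160, height: 160)  // placeholder size

// While scrolling / prefetching: warm the cache with cheap versions.
func prefetch(_ assets: [PHAsset]) {
    let fast = PHImageRequestOptions()
    fast.deliveryMode = .fastFormat  // cheap, possibly degraded
    fast.resizeMode = .fast
    cache.startCachingImages(for: assets, targetSize: thumbSize,
                             contentMode: .aspectFill, options: fast)
}

// For a visible cell: .opportunistic may fire the handler twice, first with
// a degraded image (often straight from the cache), then the final one.
func loadVisible(_ asset: PHAsset, into imageView: UIImageView) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .opportunistic
    options.isNetworkAccessAllowed = true  // allow fetching iCloud originals
    cache.requestImage(for: asset, targetSize: thumbSize,
                       contentMode: .aspectFill, options: options) { image, _ in
        if let image { imageView.image = image }
    }
}
```

Remember to call stopCachingImages(for:...) for cells that prefetching cancels, so the cache does not grow past the visible window.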
How to Keep Camera Running in iOS PiP Mode (Like WhatsApp/Google Meet)?
I'm using Picture-in-Picture (PiP) mode in my native iOS application, which is similar to Google Meet, using the VideoSDK.live Native iOS SDK. The SDK has built-in support for PiP and it's working fine for the most part. However, I'm running into a problem: when the app enters PiP mode, the local camera (self-video) of the participant freezes or stops. I want to fix this and achieve the same smooth behavior seen in apps like Google Meet and WhatsApp, where the local camera keeps working even in PiP mode.

I've been trying to find documentation or examples on how to achieve this but haven't had much luck. I came across a few mentions that using the camera in the background may require special entitlements from Apple (in the entitlements file). Most of the official documentation says background camera access is restricted due to Apple's privacy policies.

So my questions are:
- Has anyone here successfully implemented background camera access in PiP mode on iOS?
- If yes, what permissions or entitlements are required?
- How do apps like WhatsApp and Google Meet achieve this functionality on iPhones?

Any help, advice, or pointers would be really appreciated!
Replies: 0 · Boosts: 0 · Views: 194 · Created: Aug ’25
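Regarding the entitlement question above, AVCaptureSession exposes a multitasking-camera-access switch (iOS 16+). Whether it reports as supported is gated by the system (video-call-style apps and/or the Apple-granted com.apple.developer.avfoundation.multitasking-camera-access entitlement, consistent with the restrictions the poster mentions), so a sketch would only opt in conditionally:

```swift
import AVFoundation

// Keeps capture running while the app shows video-call-style PiP,
// but only when the system says this session qualifies.
func enablePiPCapture(on session: AVCaptureSession) {
    guard session.isMultitaskingCameraAccessSupported else { return }
    session.beginConfiguration()
    session.isMultitaskingCameraAccessEnabled = true
    session.commitConfiguration()
}
```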
Why is my YCbCr to sRGB camera pipeline brighter than the preview layer? (420YpCbCr8BiPlanarFullRange, AVCaptureVideoPreviewLayer)
Hi all, I'm working on a custom Metal-based video pipeline using AVCaptureVideoDataOutput, and I've run into an unexpected issue related to exposure.

Setup: I'm capturing video frames using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. In my Metal shader, I:
- Convert YCbCr (full range Rec.709) to linear Rec.709 RGB.
- Apply Rec.709 → sRGB gamma encoding.
- Output to .bgra8Unorm_srgb via MTKView.

Everything renders correctly in terms of colorspace math, but the image appears significantly brighter (~+3 stops EV) compared to AVCaptureVideoPreviewLayer and the native iOS Camera app under the same camera exposure settings.

What I've verified:
- The color transforms are correct: YCbCr709 to RGB, then linear to sRGB.
- I'm not applying any tone mapping or aggressive look LUTs yet.
- Camera exposure is locked using device.setExposureModeCustom(duration: ..., iso: ...).
- The same EV (e.g., ISO 50, 1/125s, f/5.6) on my iPhone appears visually 3 stops brighter than on my digital cameras (Sony/Canon etc).
- To match the look of the preview layer or camera app, I have to simulate a ~–3 EV shift in my custom pipeline.

Questions:
- Is AVCaptureVideoPreviewLayer applying extra tone mapping, digital gain, or contrast shaping (like an OOTF)?
- Does the camera ISP expose "hotter" (i.e., with more light) internally for the preview layer than what we get in video frame buffers?
- Is there a standard way to compensate for this ISP behavior in custom pipelines using AVCaptureVideoDataOutput?
- Can this be accounted for using metadata (e.g., exposure bias, gain, gamma curve)?
Replies: 2 · Boosts: 0 · Views: 228 · Created: Aug ’25
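As a sanity check for the shader math discussed above, here is the full-range BT.709 YCbCr-to-RGB step in Swift/simd form. The output is still gamma-encoded with whatever transfer curve the ISP actually applied, which is one place a multi-stop mismatch can creep in if the shader assumes a pure BT.709 inverse-OETF:

```swift
import simd

// Full-range BT.709 YCbCr -> nonlinear RGB (Y, Cb, Cr each in 0...1).
// These constants should match the Metal shader's matrix.
func rgb709(y: Float, cb: Float, cr: Float) -> SIMD3<Float> {
    let c = SIMD3<Float>(y, cb - 0.5, cr - 0.5)
    let m = float3x3(columns: (
        SIMD3<Float>(1.0,     1.0,     1.0),     // multiplies Y
        SIMD3<Float>(0.0,    -0.1873,  1.8556),  // multiplies Cb
        SIMD3<Float>(1.5748, -0.4681,  0.0)      // multiplies Cr
    ))
    return m * c  // R = Y + 1.5748*Cr', G = Y - 0.1873*Cb' - 0.4681*Cr', B = Y + 1.8556*Cb'
}
```

It can also help to inspect the transfer-function attachment on the CVPixelBuffer (for example via CVImageBufferCreateColorSpaceFromAttachments) rather than assuming Rec.709, since decoding with the wrong curve and re-encoding as sRGB shifts midtones in exactly the way described.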
Is a Locked Capture Extension allowed to just "open the app" when the device is unlocked?
Hey, quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen: [screenshot not included]

Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app. I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild it. At least this way the user experience would be improved from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible. Thanks, Alex
Replies: 1 · Boosts: 0 · Views: 271 · Created: Jul ’25
How to reliably detect user-modified photos?
I'm developing a photo backup app. To detect photos newly added or edited since the app last launched, I keep a local dictionary in the format [localIdentifier: modificationDate]. However, PHAsset.modificationDate is not reliable: it often changes unexpectedly, possibly due to system operations like iCloud metadata updates. Is there a more reliable way to detect whether a photo has been modified by the user since the last app launch? I'm considering using a content hash instead, but I'm not sure how heavy that operation is in terms of performance.
Replies: 2 · Boosts: 0 · Views: 53 · Created: Jul ’25
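For cross-launch detection as asked above, the persistent change history API (iOS 16+) avoids polling modificationDate entirely: store a change token at shutdown, then diff against it on the next launch. A sketch; note that "updated" still covers metadata-only updates, so a content hash (or, while the app is running, PHObjectChangeDetails.assetContentChanged) may remain useful as a second-level filter:

```swift
import Photos

// Returns identifiers of assets inserted or updated since `token`.
// Persist the token (e.g. archived to a file) between launches.
func changedAssetIDs(since token: PHPersistentChangeToken) throws -> Set<String> {
    var ids = Set<String>()
    let changes = try PHPhotoLibrary.shared().fetchPersistentChanges(since: token)
    for change in changes {
        let details = try change.changeDetails(for: PHObjectType.asset)
        ids.formUnion(details.updatedLocalIdentifiers)
        ids.formUnion(details.insertedLocalIdentifiers)
    }
    return ids
}

// After processing, capture a new baseline:
// let newToken = PHPhotoLibrary.shared().currentChangeToken
```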
Is Photo Library access mandatory for 24MP Deferred Photo Capture?
Hello everyone, I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store them in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy.

The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled":

```swift
/**
 @property maxPhotoDimensions
 @abstract Indicates the maximum resolution of the requested photo.

 @discussion
    Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].

    Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
 */
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions
```

(By the way, this note is not present in the docs: https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)

Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data. According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.

"To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way." https://developer.apple.com/videos/play/wwdc2023/10105/?time=799

This seems to create a hard dependency on the Photo Library for accessing 24MP images. My question is: is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary? For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library?

Our goal is to follow Apple's privacy-first principles by avoiding a PHPhotoLibrary authorization request when our app's core function doesn't require access to the user's photo collection. Thank you for your time and any clarification you can provide.
Replies: 3 · Boosts: 3 · Views: 448 · Created: Jul ’25
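For reference, the documented deferred-delivery path looks roughly like the sketch below. As the poster notes, the proxy's only documented destination is the Photos library, and there is no delegate callback that hands back the final 24MP data directly:

```swift
import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func configure(_ output: AVCapturePhotoOutput) {
        if output.isAutoDeferredPhotoDeliverySupported {
            output.isAutoDeferredPhotoDeliveryEnabled = true  // required for 24MP
        }
        // Must match a dimension in the active format's supportedMaxPhotoDimensions.
        output.maxPhotoDimensions = CMVideoDimensions(width: 5712, height: 4284)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy proxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil, let data = proxy?.fileDataRepresentation() else { return }
        // The proxy goes into the library; the system finishes processing there.
        PHPhotoLibrary.shared().performChanges {
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: data, options: nil)
        }
    }
}
```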
I cannot acquire entitlement named com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning.
AVCaptureVideoDataOutput.preparesCellularRadioForNetworkConnection requires the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement, but I cannot acquire it: I can't find it on 'Certificates, Identifiers & Profiles'. Xcode reports: Provisioning profile "iOS Team Provisioning Profile: ......" doesn't include the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement. Any solutions?
Replies: 2 · Boosts: 0 · Views: 497 · Created: Jul ’25
After iOS 18.5, the AVFoundation AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps error has caused a significant increase in camera black screen issues.
Issue: after the iOS 18.5 release, our app is experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.

Details:
- Our camera-related code has not been updated recently. However, we've observed that the error rate has significantly increased starting from May 2025.
- The error rate has risen from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users), a 5x increase in occurrence.
- The frequency has increased noticeably since iOS 18.5.
- This is affecting our app's camera functionality and user experience.

Questions:
- Are there any known changes in iOS 18.5 regarding camera access management?
- What are the recommended best practices to handle this interruption reason?
- Are there any API changes we should be aware of?

Best, Shay
Replies: 1 · Boosts: 0 · Views: 274 · Created: Jul ’25
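A sketch of the usual diagnostic step for the question above: observe the interruption notifications so the reason is logged and the UI can recover when the interruption ends (the session resumes automatically):

```swift
import AVFoundation

func observeInterruptions(of session: AVCaptureSession) {
    let center = NotificationCenter.default
    center.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                       object: session, queue: .main) { note in
        if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            // e.g. .videoDeviceNotAvailableWithMultipleForegroundApps
            print("Capture interrupted, reason:", reason.rawValue)
            // Show a paused-camera UI here instead of a black screen.
        }
    }
    center.addObserver(forName: AVCaptureSession.interruptionEndedNotification,
                       object: session, queue: .main) { _ in
        print("Interruption ended; refresh the preview UI")
    }
}
```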
AVCaptureSession startRunning is slow
AVCaptureSession's startRunning method blocks the calling thread and seems to be slow. What is this method doing behind the scenes? For context: I'm working on Simulator camera support and I have a 'fake' AVCaptureDevice that might be causing this. My hypothesis is that AVCaptureSession tries to connect to the device and waits for a notification to be posted back. I'd love to find a way to let my fake device tell AVCaptureSession that it's connected.
Replies: 3 · Boosts: 0 · Views: 215 · Created: Jul ’25
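Whatever startRunning does internally, the documented guidance is to keep it off the main thread, because it blocks until the session and its device are fully wired up (often hundreds of milliseconds). A minimal pattern:

```swift
import AVFoundation

// One serial queue for all session work keeps start/stop/configure ordered.
let sessionQueue = DispatchQueue(label: "camera.session")

func start(_ session: AVCaptureSession) {
    sessionQueue.async {
        // Synchronous and potentially slow: device acquisition plus
        // capture-pipeline construction happen before this returns.
        session.startRunning()
    }
}
```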
Making DataScannerViewController work in the Simulator
Before you post "the camera doesn't work on the Simulator": that's no longer true. I've made a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (for more info, see RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv)

It works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into: Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)

My question: how does this view controller determine whether scanning is available? Is there a certain capability the available AVCaptureDevices need to support, maybe? Any direction would be helpful for me to make this work for developers, helping them build apps faster!
Replies: 0 · Boosts: 0 · Views: 272 · Created: Jul ’25
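Two class-level checks gate DataScannerViewController, and a simulated device can pass one but not the other: isSupported is a device-capability check (historically tied to hardware with a Neural Engine), while isAvailable additionally reflects runtime state such as camera authorization and content restrictions. A quick probe:

```swift
import VisionKit

@available(iOS 16.0, *)
func canScan() -> (supported: Bool, available: Bool) {
    // If `supported` is false on the Simulator, the failure happens before
    // any AVCaptureDevice is consulted.
    (DataScannerViewController.isSupported, DataScannerViewController.isAvailable)
}
```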
Why Does AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps Occur on iPhone?
Hi everyone, we're encountering an unexpected issue with our iPhone-only camera app: 👉 TimeMark - Photo Proof, https://apps.apple.com/us/app/timemark-photo-proof/id6446071834

Problem description: our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason: AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps.

According to the Apple documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc), this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture, all of which are iPad-only features. However, this issue is being reported on iPhones, and our app does not support iPad at all. Also noted in the documentation: "Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."

Additional context:
- The issue occurs immediately on app launch, before the user can interact with the camera.
- We don't enable multitaskingCameraAccessEnabled.
- We are 100% sure this is happening on iPhone, not iPad.
- It's hard to reproduce; users report it happening sporadically.
- Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.

Questions:
- Why is this interruption reason occurring on iPhone, which doesn't officially support Slide Over or Split View?
- Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
- Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
- Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip. Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?

Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 603 · Created: Jul ’25
How to transcode HEIC to JPEG while preserving the ISO 21496-1 gain map
When shooting with an iPhone 15 or later, it's possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. During image format transcoding, the HEIC codec is able to preserve the ISO 21496-1 gain map, but when converting from HEIC to JPEG, the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
Replies: 3 · Boosts: 0 · Views: 918 · Created: Jul ’25
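One avenue worth testing for the transcoding question above is asking ImageIO to carry the gain map across the transcode. To my understanding this preserves a gain map, but, as the poster observes, the JPEG output may still carry it in Apple Gain Map form rather than ISO 21496-1. A sketch:

```swift
import ImageIO
import UniformTypeIdentifiers

// Transcode HEIC -> JPEG, requesting gain-map preservation.
func transcodeToJPEG(heic source: URL, jpeg destination: URL) -> Bool {
    guard let src = CGImageSourceCreateWithURL(source as CFURL, nil),
          let dst = CGImageDestinationCreateWithURL(destination as CFURL,
                                                    UTType.jpeg.identifier as CFString, 1, nil)
    else { return false }
    let options: [CFString: Any] = [
        kCGImageDestinationLossyCompressionQuality: 0.9,
        kCGImageDestinationPreserveGainMap: true,  // keep the HDR gain map if the codec allows
    ]
    CGImageDestinationAddImageFromSource(dst, src, 0, options as CFDictionary)
    return CGImageDestinationFinalize(dst)
}
```

Inspecting the result with exiftool (as for the ISO vs. Apple format distinction) is the quickest way to confirm which gain map variant actually survives.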
WWDC25 Camera & Photos group lab summary (Part 3 of 3)
(Note: this is part 3 of a 3 part posting. See Part 1 or Part 2)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Question 24: What's the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
- Turn on the flash in low-light scenarios.
- Lower the framerate to improve exposure and reduce noise.
- Wait until the capture is in focus, or notify your user that they need to get closer.

Question 25: Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 26: Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
- File provider API.

Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week:

Question 1: When should I build my custom photo picker instead of using the system one?
- Always start with the system picker, then try the embeddable customization APIs, and fall back to a custom picker only for very special needs.

Question 2: I'm building a new camera app for pros and I want to give my users the most un-processed image possible, and as much control over the capture as possible. How can I do that with AVCapture?
- For stills: Bayer RAW capture, or ProRAW if you want Apple's processing and dynamic range.
- For video: ProRes Log. Custom exposure settings are available through the APIs.
- Possibly also a global/local tonemapping discussion.
Replies: 0 · Boosts: 0 · Views: 155 · Created: Jul ’25
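On Question 26 above (quick thumbnails after picking), besides the file provider route, a common approach is to hand the picked data to ImageIO and let it downsample. This sketch assumes the PHPickerResult can vend a data representation of the image:

```swift
import PhotosUI
import ImageIO
import UniformTypeIdentifiers

// Builds a small CGImage without decoding the full-size original.
func thumbnail(from result: PHPickerResult, maxPixel: Int,
               completion: @escaping (CGImage?) -> Void) {
    _ = result.itemProvider.loadDataRepresentation(
        forTypeIdentifier: UTType.image.identifier) { data, _ in
        guard let data, let src = CGImageSourceCreateWithData(data as CFData, nil) else {
            return completion(nil)
        }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixel,
            kCGImageSourceCreateThumbnailWithTransform: true,  // honor EXIF orientation
        ]
        completion(CGImageSourceCreateThumbnailAtIndex(src, 0, options as CFDictionary))
    }
}
```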
WWDC25 Camera & Photos group lab summary (Part 2 of 3)
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday June 10th, 2025.

Question 10: Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
- Yes, apps can use AVCaptureSession's video data output plus AVCapturePhotoOutput, run Vision on the video buffers, and capture a photo when a certain scene or text is detected. Just be careful to run Vision on the buffers asynchronously so it doesn't cause frame drops.

Question 11: What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
- The ImageCaptureCore framework supports camera devices, memory cards, and scanners, for reading and, where supported, writing. Check out the docs to see how to browse connected devices, folders, files, etc.

Question 12: Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
- Yes, this is totally fine. AppKit or UIKit views inside the appropriate SwiftUI representables should offer equivalent performance.

Question 13: What's the "right" way to transition media in my photos app between HDR modes? When I'm in a one-up view, we use HDR, but in other contexts (like thumbnails) we don't want HDR. Is there a nice way to tone map?
- There's a suite of new System Tone Mapper APIs in this year's OSes: CoreImage, ImageKit, CoreAnimation, CoreGraphics. For example, CoreImage has the new CISystemToneMap filter, and in CoreAnimation you can set layer.preferredDynamicRange = CADynamicRangeConstrainedHigh.
- Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange and can go from high to constrained to standard.
- Tone mapping is provided by the system (CISystemToneMap is the controllable example).

Question 14: What is your recommendation to preprocess and upscale a depth map in order to render a realistic portrait mode image?
- One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower resolution depth map by using a higher resolution RGB image as a guide.

Question 15: For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
- AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview. For buffering for later processing, ensure you make copies of the video data output buffers so frames don't get dropped from the output.

Question 16: My question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
- The CIFilter can be applied to the final image at that point; the photo will have to be re-inserted into the photo library as an adjustment.

Question 17: Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
- Digital zoom upscales the image to the output dimensions, while cropping yields a smaller output image. Digital zoom crops, and it also upscales.

Question 18: How do you design camera interfaces that work for both casual users and photography enthusiasts?
- Progressive disclosure: put the most common controls up front, and make it easy for pros to drill down.
- Sensible defaults: choose defaults that work well for casual users, but allow those defaults to be modified by photography enthusiasts.
- A good philosophy is: keep the simple things easy, make the hard things possible.

Question 19: Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 20: A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the "physical" camera effectively disappear, so only the virtual camera is accessible to the user?
- You can't prevent the built-in camera from being available to the user.

Question 21: Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
- Yes they can: use CoreMLRequest and provide the model container. This has been supported for a while (iOS 18/macOS 15). For more details, go to the Machine Learning & AI group lab on Thursday. Use smaller images for better performance.

Question 22: What would you recommend for capture of the new immersive and spatial formats?
- To capture spatial video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property. Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported. See the WWDC 2024 talk "Build compelling spatial photo and video experiences" for more details.

Question 23: You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
- For decoding, we support JPEG-XL files in all our OSes: regular SDR files as well as ISO HDR files. For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs. If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.

(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
Replies: 0 · Boosts: 0 · Views: 140 · Created: Jul ’25
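Question 22's advice on spatial video capture, in code form; the property names below are the iOS 18 AVFoundation Swift spellings as I understand them, so treat this as a sketch:

```swift
import AVFoundation

// Opt into spatial video only when the active format supports it.
func enableSpatialCapture(device: AVCaptureDevice, output: AVCaptureMovieFileOutput) {
    if device.activeFormat.isSpatialVideoCaptureSupported {
        output.isSpatialVideoCaptureEnabled = true
    }
}
```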