Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under the Photos & Camera subtopic. Each post below is followed by its reply count, boosts, views, and latest activity date.

WWDC25 Camera & Photos group lab summary (Part 1 of 3)
(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.

Introductory kick-off questions

Question 1: Tell us a little about the new AVFoundation Capture APIs made available in the new iOS 26 developer preview.
- Cinematic Capture API (strong/weak focus, tracking focus; scene monitoring; simulated aperture; dog/cat heads and group IDs)
- Camera Controls and AirPods stem clicks
- Spatial Audio and studio-quality AirPods mics in Camera
- Lens smudge detection
- Exposure and focus rect of interest

Question 2: I built QR code scanning into my app, but on newer iPhones I have to hold the phone very far away from the QR code, otherwise the image is blurry and it doesn't scan. Why is this happening, and how can I fix it?
- Every year the cameras get better and better as we push the state of the art in iPhone photography and videography. This sometimes results in changes to the characteristics of the lenses, such as the minimum focus distance. Newer phones also have multiple lenses and automatic switching behavior.
- Use a virtual device like builtInDualWideCamera or builtInTripleCamera, rather than just builtInWideAngleCamera, set the videoZoomFactor to 2, and you're done. (See the code sketch at the end of this post.)

Question 3: Last year we saw some exciting new APIs introduced in AVFoundation in the health space. With Constant Color photography, developers can take pictures that have constant color regardless of ambient lighting. There are some further advancements this year. Davide, could you tell us about them?
- Constant Color photography is meant to remove the "tone mapping" applied to photographs captured with the Camera app, which usually includes artistic intent, and instead get as close as possible to the real color of the scene, regardless of the illumination.
- Constant Color images could be captured in HEIF and JPEG last year. This year we are adding support for the DICOM medical imaging photo format, a format used by the health industry to store images related to medical subjects such as MRIs, skin problems, X-rays, and so on.
- DICOM is a writable and readable format on all OS 26 releases, supported through the AVCapturePhotoOutput APIs and through the Core Graphics API. For Core Graphics there is a new DICOM entry in the property dictionary which includes all the available, defined DICOM properties in a file. Finder will also display these in the Info panel.
- Why would a developer want to use it? Not for regular picture-taking apps; for those, HEIF and JPEG are the preferred delivery formats. Use DICOM if your app produces health-related output that you may want to share with health providers or your doctors.

Main session developer questions

Question 1: LiDAR vs. Dual Camera depth generation: which resolution does the LiDAR sensor natively have (iPhone 16 Pro), and when should you prefer LiDAR over the Dual Camera?
- Both report formats with output resolutions (we don't advertise sensor resolution).
- LiDAR: best for absolute depth, real-world scale, and computer vision.
- Dual (and similar): relative, disparity-based depth; less power; photo effects.
- Also see the WWDC 2022 session "Discover advancements in iOS camera capture: Depth, focus and multitasking".

Question 2: Can the TrueDepth and LiDAR cameras run at 60 fps?
- LiDAR can do 30 fps. The front TrueDepth camera can do 60 fps.

Question 3: What's the first-class way to use PhotoKit to reimplement a high-performance photo grid? We've been using a LazyVGrid and the photos caching manager, but are never able to hit the holy trinity (60 Hz, efficient memory footprint, minimal flashes of placeholder/empty cells).
- Use the PHCachingImageManager to get media content delivered before you need to display it.
- Specify the size you need for grid-sized display.
- Set the options PHVideoRequestOptionsDeliveryModeFastFormat, PHImageRequestOptionsDeliveryModeFastFormat, and PHImageRequestOptionsResizeModeFast.

Question 4: For rendering a live preview of a video stream, is there performance overhead from using async and SwiftUI for image updates vs. UIViewRepresentable + AVCaptureVideoPreviewLayer?
- AVCaptureVideoPreviewLayer is the most efficient display path.
- Use a video data output (VDO) + AVSampleBufferDisplayLayer if you need to modify the image data.
- SwiftUI Image is optimized for static image content.

Question 5: Is there a way to configure the AVFoundation builtInLiDARDepthCamera mode to provide a depth map as accurate as ARKit at close range?
- The AVCaptureDepthDataOutput supports filtering that reduces noise and fills in invalid values. Consider using this for smoother depth maps.

Question 6: Pyramid-based photo editing in Core Image (such as Adobe Camera Raw highlights and shadows)?
- First off, you may want to look at the built-in filter called CIHighlightShadowAdjust. The noise reduction in CIRAWFilter also uses a pyramid-based algorithm.
- You can also write your own pyramid-based algorithms: take an input image, downsample it by two multiple times using imageByApplyingAffineTransform, apply additional CIKernels to each downsampled image as needed, then use a custom CIKernel to combine the results.

Question 7: Is the best way to integrate an in-app camera for a "non-camera" app UIImagePickerController?
- Yes, UIImagePickerController provides system-provided UI for capturing photos and movies.

Question 8: My question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
- The CIFilter can be applied to the final photo at that point, and the photo will have to be re-inserted into the photo library as an adjustment.

Question 9: For shipping photo-style assets in the app that need transparency, what is the best format to use? JPEG 2000? Will moving to this save a lot of space compared to PNG or other options?
- If you want lossless compression, PNG is good and supports unpremultiplied alpha.
- If you want lossy compression, HEIF supports premultiplied or unpremultiplied alpha.

(Note: this is part 1 of a 3-part posting. See Part 2 or Part 3)
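As a companion to Question 2 above, here is a minimal, hedged sketch of the suggested QR fix - a virtual device plus a 2x zoom factor - assuming a standard AVCaptureSession setup (delegate wiring, error handling, and session start are elided):

```swift
import AVFoundation

// Select a virtual device so the system can auto-switch lenses, then zoom
// to 2x so the wide camera's minimum focus distance is less of an issue.
func makeQRScanningSession() -> AVCaptureSession? {
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: device) else { return nil }

    let session = AVCaptureSession()
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)
    // Set your metadata delegate here; .qr is only available after addOutput.
    output.metadataObjectTypes = [.qr]

    // Zoom to 2x so close-up codes stay in focus on newer hardware.
    try? device.lockForConfiguration()
    device.videoZoomFactor = 2.0
    device.unlockForConfiguration()
    return session
}
```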
Replies: 0 · Boosts: 0 · Views: 239 · Latest activity: Jul ’25
WWDC25 Camera & Photos group lab summary (Part 2 of 3)
(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.

Question 10: Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
- Yes, apps can use an AVCaptureSession's video data output (VDO) + AVCapturePhotoOutput: run Vision on the VDO buffers and capture a photo when a certain scene or text is detected. Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops.

Question 11: What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
- The ImageCaptureCore framework supports camera devices, memory cards, and scanners, with read and write where supported. Check out the docs to see how to browse connected devices, folders, files, etc.

Question 12: Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreviewLayer inside a UIViewRepresentable is okay, right? Thanks all for the great info!
- Yes, this is totally fine. AppKit or UIKit views inside the appropriate SwiftUI representables should be equivalent in performance.

Question 13: What's the "right" way to transition media in my photos app between HDR modes? When I'm in a one-up view we use HDR, but in other contexts (like thumbnails) we don't want HDR. Is there a nice way to tone map?
- There's a suite of new System Tone Mapper APIs in this year's OSes: Core Image, ImageKit, Core Animation, and Core Graphics. For example, Core Image has the new CISystemToneMap filter, and Core Animation has layer.preferredDynamicRange = CADynamicRangeConstrainedHigh.
- Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange and can go from high to constrained to standard. Tone mapping is provided by the system (CISystemToneMap for a controllable example). (See the image-view sketch at the end of this post.)

Question 14: What is your recommendation for preprocessing and upscaling a depth map in order to render a realistic portrait-mode image?
- One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.

Question 15: For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
- AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for a custom camera preview. For buffering for later processing, ensure you make copies of the VDO buffers so you don't drop frames from the output.

Question 16: My question is on deferred photo processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of deferred photo processing, since I don't know how to detect when the deferred captured photo is ready?
- The CIFilter can be applied to the final photo at that point, and the photo will have to be re-inserted into the photo library as an adjustment.

Question 17: Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
- Not quite: while digital zoom does crop, it also upscales the image to the output dimensions, whereas cropping afterward yields a smaller output image.

Question 18: How do you design camera interfaces that work for both casual users and photography enthusiasts?
- Progressive disclosure: put the most common controls up front, and make it easy for pros to drill down.
- Sensible defaults: choose defaults that work well for casual users, but allow those defaults to be modified by photography enthusiasts.
- A good philosophy is: keep the simple things easy, and make the hard things possible.

Question 19: Recent iPhone models introduced a macro mode that automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 20: A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the "physical" camera effectively disappear, so only the virtual camera is accessible to the user?
- You can't prevent the built-in camera from being available to the user.

Question 21: Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
- Yes: use CoreMLRequest and provide the model container. This has been supported for a while (iOS 18/macOS 15). Use smaller images for better performance. For more details, go to the Machine Learning & AI group lab on Thursday.

Question 22: What would you recommend for capture of the new immersive and spatial formats?
- To capture spatial video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property. Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported. See the WWDC 2024 talk "Build compelling spatial photo and video experiences" for more details.

Question 23: You mentioned JPEG XL. What is the current status of support on iOS and macOS for encoding and decoding?
- For decoding, we support JPEG XL files in all our OSes - regular SDR files as well as ISO HDR files. For encoding, we only support JPEG XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs. If you have any requests for improvements or new features related to JPEG XL, please file a request using Feedback Assistant.

(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3)
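As a concrete companion to Question 13: the CISystemToneMap filter and CADynamicRangeConstrainedHigh named above are from the new OS releases discussed in the lab, but the image-view route already ships today. A minimal sketch using UIImageView's preferredImageDynamicRange (UIKit, iOS 17+) - treat it as illustrative, not the panel's exact recommendation:

```swift
import UIKit

// Thumbnails: let the system tone-map HDR content down to a constrained range.
func constrainThumbnailDynamicRange(_ imageView: UIImageView) {
    imageView.preferredImageDynamicRange = .constrainedHigh
}

// One-up view: allow full HDR headroom again; the transition is animatable.
func restoreOneUpDynamicRange(_ imageView: UIImageView) {
    imageView.preferredImageDynamicRange = .high
}
```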
Replies: 0 · Boosts: 0 · Views: 131 · Latest activity: Jul ’25
WWDC25 Camera & Photos group lab summary (Part 3 of 3)
(Note: this is part 3 of a 3-part posting. See Part 1 or Part 2)

At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.

Question 24: What's the best approach for optimizing barcode scanning using AVFoundation or Vision in low-light or angled scenarios?
- Turn on the flash in low-light scenarios. (See the torch/frame-rate sketch at the end of this post.)
- Lower the frame rate to improve exposure and reduce noise.
- Wait until the capture is in focus, and/or notify your user that they need to get closer.

Question 25: Recent iPhone models introduced a macro mode that automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available.

Question 26: Is there a way to quickly create a thumbnail after the user selects an image with PhotosPicker?
- Use the File Provider API.

Additional questions from the WWDC25 in-person labs that occurred later in the WWDC week

Question 1: When should I build my custom photo picker instead of using the system one?
- Always start with the system picker, then try the embeddable customization APIs, and fall back to a fully custom picker only for very special needs.

Question 2: I'm building a new camera app for pros and I want to give my users the most unprocessed image possible, and the most control over the capture possible. How can I do that with AVCapture?
- For stills: Bayer RAW capture (a brief overview was given), or ProRAW if you want Apple's processing and dynamic range.
- For video: ProRes Log. Custom exposure settings are available through the APIs, and global vs. local tone mapping is worth considering.
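Here is a hedged sketch of the low-light tips from Question 24: enable the torch and lower the maximum frame rate so each frame can expose longer. Real code should verify the chosen frame duration is valid for the device's active format:

```swift
import AVFoundation

func configureForLowLightScanning(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Light the scene for the scanner, when the hardware allows it.
    if device.hasTorch, device.isTorchModeSupported(.on) {
        device.torchMode = .on
    }
    // Cap capture at 15 fps so frames expose longer and show less noise.
    let frameDuration = CMTime(value: 1, timescale: 15)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
}
```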
Replies: 0 · Boosts: 0 · Views: 127 · Latest activity: Jul ’25
AVCaptureVideoPreviewLayer layoutSublayers invoked on background thread
Opening this question after discussing it in the AVCapture lab, in the hope that we can track the issue down. We've been noticing some crashes in App Store Connect caused by layoutSublayers being called on a background thread. After debugging a bit, we found that all calls which modified the AVCaptureSession or the preview layer were indeed made on the main thread. It would be useful to see what results in AVCaptureVideoPreviewLayer.updateFormatDescription being called. I've attached the crash log below. Crash log.ips - https://developer.apple.com/forums/content/attachment/800b0dba-3477-4c5a-b56c-f4cc393b384f
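Not from the original post, but a small defensive sketch for cornering this class of crash: assert main-thread usage at every point that touches the preview layer or its session, so any off-main call site surfaces during development rather than in the field:

```swift
import AVFoundation
import Dispatch

final class PreviewCoordinator {
    let previewLayer = AVCaptureVideoPreviewLayer()

    func attach(session: AVCaptureSession) {
        // Catch background callers early, before CoreAnimation does.
        dispatchPrecondition(condition: .onQueue(.main))
        previewLayer.session = session
    }
}
```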
Replies: 1 · Boosts: 1 · Views: 679 · Latest activity: Jun ’20
PHPhoto localIdentifier to cloudIdentifier conversion
The sample code in the Apple documentation found in PHCloudIdentifier does not compile in Xcode 13.2.1. Can the interface for identifier conversion be clarified so that the answer values are more accessible/readable? The values are 'hidden' inside a Result enum. It was difficult (for me) to rewrite the sample code because I made the mistake of interpreting the Result type as a tuple; Result is really an enum. Using Result as the return type of library.cloudIdentifierMappings(forLocalIdentifiers:) and .localIdentifierMappings(for:) puts the actual mapped identifiers inside the enum, where they need additional access via a .stringValue message or by switching over the result. For others finding the same compile issue, here is a working version of the sample code, which compiles in Xcode 13.2.1:

```swift
func localId2CloudId(localIdentifiers: [String]) -> [String] {
    var mappedIdentifiers = [String]()
    let library = PHPhotoLibrary.shared()
    let iCloudIDs = library.cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    for aCloudID in iCloudIDs {
        // Result is an enum, not a tuple.
        let cloudResult: Result<PHCloudIdentifier, Error> = aCloudID.value
        switch cloudResult {
        case .success(let cloudIdentifier):
            mappedIdentifiers.append(cloudIdentifier.stringValue)
        case .failure(let error):
            // Notify the user of the error.
            print("Cloud identifier lookup failed: \(error.localizedDescription)")
        }
    }
    return mappedIdentifiers
}

func cloudId2LocalId(assetCloudIdentifiers: [PHCloudIdentifier]) -> [String] {
    // Patterned error handling per the documentation.
    var localIDs = [String]()
    let localIdentifiers: [PHCloudIdentifier: Result<String, Error>] =
        PHPhotoLibrary.shared().localIdentifierMappings(for: assetCloudIdentifiers)
    for cloudIdentifier in assetCloudIdentifiers {
        guard let identifierMapping = localIdentifiers[cloudIdentifier] else {
            print("Failed to find a mapping for \(cloudIdentifier).")
            continue
        }
        switch identifierMapping {
        case .success(let localIdentifier):
            localIDs.append(localIdentifier)
        case .failure(let failure):
            let thisError = failure as? PHPhotosError
            switch thisError?.code {
            case .identifierNotFound:
                // Skip the missing or deleted assets.
                print("Failed to find the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            case .multipleIdentifiersFound:
                // Prompt the user to resolve the cloud identifier that matched multiple assets.
                print("Found multiple local identifiers for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            default:
                print("Encountered an unexpected error looking up the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            }
        }
    }
    return localIDs
}
```
Replies: 1 · Boosts: 0 · Views: 863 · Latest activity: Feb ’22
[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector crash
I cannot find any documentation re: isPrivacySensitiveAlbum. I've granted my app access to all photos. Not sure what else to try. Code that triggers the crash:

```swift
let options = PHFetchOptions()
options.fetchLimit = 1
let assetColl = PHAssetCollection.fetchAssetCollections(withLocalIdentifiers: [localId], options: options)
if assetColl.count > 0 {
    if let asset = PHAsset.fetchKeyAssets(in: assetColl.firstObject!, options: options)
    // ... crash occurs here
```

Stack trace from here on:

2023-04-15 06:34:41.628537-0700 DPF[33615:6484880] -[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0
2023-04-15 06:34:41.632378-0700 DPF[33615:6484880] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0'
*** First throw call stack:
(
0 CoreFoundation 0x00007ff80045478b __exceptionPreprocess + 242
1 libobjc.A.dylib 0x00007ff80004db73 objc_exception_throw + 48
2 CoreFoundation 0x00007ff8004638c4 +[NSObject(NSObject) instanceMethodSignatureForSelector:] + 0
3 CoreFoundation 0x00007ff800458c66 ___forwarding___ + 1443
4 CoreFoundation 0x00007ff80045ae08 _CF_forwarding_prep_0 + 120
5 Photos 0x00007ff80b8480e1 +[PHAsset fetchKeyAssetsInAssetCollection:options:] + 86
6 DPF 0x0000000100791029 $s3DPF16AlbumListFetcherV22loadKeyImageForLocalIdySo7UIImageCSgSSYaFTY0_ + 569
)
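A hypothetical defensive check, assuming (from the stack trace) that a folder - a PHCollectionList - is being returned for the stored local identifier even though the fetch is statically typed as PHAssetCollection. Verifying the concrete class at runtime before calling fetchKeyAssets(in:options:) avoids the unrecognized-selector path:

```swift
import Photos

func keyAsset(forAlbumLocalId localId: String) -> PHAsset? {
    let options = PHFetchOptions()
    options.fetchLimit = 1
    let result = PHAssetCollection.fetchAssetCollections(withLocalIdentifiers: [localId],
                                                         options: options)
    guard let collection = result.firstObject,
          collection.isKind(of: PHAssetCollection.self) else {
        // Runtime object was likely a PHCollectionList (folder); bail out.
        return nil
    }
    return PHAsset.fetchKeyAssets(in: collection, options: options)?.firstObject
}
```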
Replies: 2 · Boosts: 0 · Views: 766 · Latest activity: Apr ’23
iPadOS 17 external camera exposure
I'm developing an iPad app that will be mostly dedicated to a certain external camera for visually impaired people. The Linux UVC API (e.g., using guvcview) allows enabling automatic exposure for the camera. The iOS API isExposureModeSupported unfortunately returns false for all of the exposure modes. Is this a bug? Or perhaps AVFoundation doesn't support UVC exposure yet?
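For anyone reproducing this, a quick probe that logs what the device claims to support (the exposure-mode names are standard AVFoundation; discovering the external device itself is assumed):

```swift
import AVFoundation

func logSupportedExposureModes(for device: AVCaptureDevice) {
    let modes: [(AVCaptureDevice.ExposureMode, String)] = [
        (.locked, "locked"),
        (.autoExpose, "autoExpose"),
        (.continuousAutoExposure, "continuousAutoExposure"),
        (.custom, "custom"),
    ]
    for (mode, name) in modes {
        print("\(name): \(device.isExposureModeSupported(mode))")
    }
}
```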
Replies: 1 · Boosts: 2 · Views: 535 · Latest activity: Oct ’23
Synchronized depth and video data not being received with builtInLiDARDepthCamera
Hello, I'm faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong? Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen), iOS version 17.0.3 (21A360). I encounter this issue on an iPhone 13 Pro as well, iOS version 17.0 (21A329).

```swift
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
    // Configure audio input - always configure audio even if isAudioEnabled is false.
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(\.systemPressureState, options: .new) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
```

Here's how I'm setting up the output:

```swift
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
```

The top part of my delegate implementation is as follows:

```swift
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
```

As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame. Console output is posted as a follow-up comment (because of the character limit); I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in some other runs I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime mirroring to see the device screen to see what the app is displaying (so I'm not sure if that's causing any interference - that said, I don't see any system pressure changes either). Any help is most appreciated! Thanks.
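A hedged debugging aid, not from the original post: when the synchronizer omits video data, the AVCaptureSynchronizedSampleBufferData entry can still be present with sampleBufferWasDropped set, and droppedReason then distinguishes late processing (.lateData) from buffer-pool exhaustion (.outOfBuffers). A drop-in addition to the dataOutputSynchronizer delegate shown above:

```swift
// Inside dataOutputSynchronizer(_:didOutput:), before the early return:
if let videoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput)
        as? AVCaptureSynchronizedSampleBufferData,
   videoData.sampleBufferWasDropped {
    print("video frame dropped, reason: \(videoData.droppedReason.rawValue)")
}
```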
Replies: 3 · Boosts: 0 · Views: 1.3k · Latest activity: Dec ’23
I need a way to permanently disable Reactions from my app, ideally the universe too
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30 fps video feed on a MacBook Pro M2, and everything is great, until Sonoma comes out, and I find myself consuming 40% CPU for the exact same workload. So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps - the CPU consumption stays at 40%. "Reactions" is nothing but a useless toy to please some WWDC keynote fanboys; I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery. Now, how do I make it go away, forever? Best regards, Jacob
Replies: 10 · Boosts: 6 · Views: 1.3k · Latest activity: Dec ’23
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a Live Photo. I am utilizing the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to download the Live Photo to the gallery and generate the Live Photo. Here is the relevant code and the errors I am encountering.

Console prints:

Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo

I have verified that the app has the necessary permissions to access the Photo Library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files.

```swift
struct WallpaperDetailView: View {
    var wallpaper: Wallpaper
    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                        ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
```
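A hypothetical check, not from the original post: URLSession download tasks deliver files named CFNetworkDownload_*.tmp, and some Live Photo pairing code keys off the file extension. Giving the downloaded video a .mov extension before calling LivePhoto.generate is a cheap thing to rule out (withMovExtension is an illustrative helper name):

```swift
import Foundation

func withMovExtension(_ tempURL: URL) throws -> URL {
    let movURL = tempURL.deletingPathExtension().appendingPathExtension("mov")
    try? FileManager.default.removeItem(at: movURL)   // Clear any stale copy.
    try FileManager.default.moveItem(at: tempURL, to: movURL)
    return movURL
}
```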
Replies: 1 · Boosts: 1 · Views: 733 · Latest activity: Jul ’24
Detect when main app launched by LockedCameraCapture for permission
I have a LockedCameraCapture extension working well; however, there is one situation I cannot find a solution to. If the user has not yet granted camera access permission, then the main app is launched rather than the LockedCameraCapture extension. I cannot find a mechanism by which my main app can detect that this was the reason for the launch and thereby request permission. When the button is pressed from Control Center without permission, the app is run and the CameraCaptureIntent is called, so I can prompt the user from there. However, as best I can tell, the CameraCaptureIntent is not called when launched from a locked Lock Screen; the app is simply opened. My app has a variety of functions, most of which do not involve the camera, so I cannot just always prompt the user for camera access on open. Is there any mechanism by which my main app can detect it was launched for this reason, so it could ask for permission? Thank you!
Replies: 4 · Boosts: 1 · Views: 790 · Latest activity: Aug ’24
[iPadOS18 Beta3] on iPad 7th gen, camera app cannot detect QR code
Hi, after installing iPadOS 18 Beta 3 on my iPad (7th gen), the default Camera app no longer detects QR codes. I tried updating to Beta 7, but the issue remained. Third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes also no longer work. You can reproduce the issue by running the default Camera app or the AVFoundation sample code from the Apple developer site on an iPad 7th gen with the iPadOS 18 beta installed: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Has anyone else experienced this issue? I would like to know if it occurs on other iPad models as well. This is similar to the following issue that previously occurred with iPadOS 17.4: https://support.apple.com/en-lamr/118614 https://developer.apple.com/forums/thread/748092
Replies: 5 · Boosts: 0 · Views: 958 · Latest activity: Aug ’24
CIFilter chain failing to render parts of output
I've built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that, on occasion, the images have large parts that are blank (see the example below). Frustratingly, I can't reproduce this myself! Does anyone know what could be causing it? Is it a memory issue? I haven't posted the code as there's a lot to look over, and I'm not sure it would help diagnose it. Thanks for any pointers.
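A hedged sketch of one commonly recommended hygiene step when CIFilter chains intermittently render blank regions: reuse a single CIContext rather than creating one per capture, which reduces peak memory/GPU pressure. This is a general practice, not a confirmed fix for the poster's bug:

```swift
import CoreImage

enum Render {
    // One shared context for the app; creating CIContexts per frame is costly.
    static let shared = CIContext(options: [.cacheIntermediates: false])
}

func renderJPEG(from image: CIImage) -> Data? {
    Render.shared.jpegRepresentation(of: image,
                                     colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
}
```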
Replies: 1 · Boosts: 0 · Views: 647 · Latest activity: Aug ’24
Xcode 16 RC: PHPickerViewController layout error and cells non-interactive
After upgrading to Xcode 16 RC, in an old project based on ObjC, I directly used the following controller code in the AppDelegate:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    UIButton *b = [[UIButton alloc] initWithFrame:CGRectMake(100, 100, 44, 44)];
    [b setTitle:@"title" forState:UIControlStateNormal];
    [self.view addSubview:b];
    [b addTarget:self action:@selector(onB:) forControlEvents:UIControlEventTouchUpInside];
}

- (IBAction)onB:(id)sender {
    PHPickerConfiguration *config = [[PHPickerConfiguration alloc] initWithPhotoLibrary:PHPhotoLibrary.sharedPhotoLibrary];
    config.preferredAssetRepresentationMode = PHPickerConfigurationAssetRepresentationModeCurrent;
    config.selectionLimit = 1;
    config.filter = nil;
    PHPickerViewController *picker = [[PHPickerViewController alloc] initWithConfiguration:config];
    picker.modalPresentationStyle = UIModalPresentationFullScreen;
    picker.delegate = self;
    [self presentViewController:picker animated:true completion:nil];
}

- (void)picker:(PHPickerViewController *)picker didFinishPicking:(NSArray<PHPickerResult *> *)results {
}
```

Environment: Simulator iPhone 15 Pro (iOS 18). Before this version (iOS 17.4), clicking the button popped up the system photo picker interface normally (the top boundary was within the safe area guide), but now the top boundary of the interface aligns directly with the top of the window, and tapping a photo cell is unresponsive. If I create a new target using the same code, the photo picker page does not have the above problem. Therefore, I suspect it may be due to the old project's .proj file's Info.plist, build settings, or build phases lacking some default configuration key/value required by the new version (my project was built years ago, maybe from iOS 13 or earlier), but I cannot confirm the final cause. iOS 18.0 prints these additional messages:

objc[79039]: Class UIAccessibilityLoaderWebShared is implemented in both /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebCore.axbundle/WebCore (0x198028328) and /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebKit.axbundle/WebKit (0x1980fc398). One of the two will be used. Which one is undefined.
AX Safe category class 'SLHighlightDisambiguationPillViewAccessibility' was not found!

Has anyone encountered the same issue as me?
Replies: 2 · Boosts: 2 · Views: 1.5k · Latest activity: Sep ’24
PHAsset, PHImageManager requestAVAssetForVideo not downloading full size video from iCloud
Hello, I am using the code below to request that a video be downloaded from iCloud, but the downloaded video's size does not match the original size of the video.

```objc
PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
options.version = PHVideoRequestOptionsVersionOriginal;
options.deliveryMode = PHVideoRequestOptionsDeliveryModeHighQualityFormat;
[options setNetworkAccessAllowed:YES];
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:options
                                          resultHandler:^(AVAsset *avAsset, AVAudioMix *audioMix, NSDictionary *info) {
}];
```

I get the original size of the video from the code below:

```objc
NSArray *resources = [PHAssetResource assetResourcesForAsset:asset];
for (PHAssetResource *resource in resources) {
    if ((resource.type == PHAssetResourceTypeVideo) || (resource.type == PHAssetResourceTypePhoto)) {
        return resource;
    }
}
// ... then:
[resource valueForKey:@"fileSize"];
```

The original size and the downloaded size of the video do not match. Can anyone help me debug what the issue is here?
Replies: 0 · Boosts: 0 · Views: 458 · Latest activity: Sep ’24
Native camera and AVCapture image difference
We are trying to build a simple image capture app using AVFoundation and AVCaptureDevice. Custom settings are used for the exposure point and bias. But when an image is captured using the front camera, the image captured from the app and from the native Camera app do not match: the image captured from our app covers more area than the native app's, and there is a difference in tilt angle between the two images. So is there any way to capture an image exactly the same as the native camera using AVFoundation and AVCaptureDevice? (Attached: "Native" and "Custom" comparison images.)
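A hedged possibility, not a confirmed fix: the native Camera app applies geometric distortion correction, which slightly trims the field of view, especially on the front camera. Enabling it on the capture device may bring the app's framing closer to the native camera's:

```swift
import AVFoundation

func enableDistortionCorrection(on device: AVCaptureDevice) throws {
    guard device.isGeometricDistortionCorrectionSupported else { return }
    try device.lockForConfiguration()
    device.isGeometricDistortionCorrectionEnabled = true
    device.unlockForConfiguration()
}
```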
Replies: 0 · Boosts: 0 · Views: 609 · Latest activity: Sep ’24
Strange behaviour after modifying exposure duration and going back to AVCaptureExposureModeContinuousAutoExposure
When I set a custom exposure duration, like 1/8 s, and then switch back to continuous auto exposure, the exposure duration in scenes that previously metered at 1/17 s changes to something like 1/5 or 1/10 s. As a result, the screen becomes laggy and overexposed. I'm not sure why this is happening.
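For reference, a minimal reproduction sketch of the sequence described above - lock a 1/8 s custom exposure, then hand control back to continuous auto exposure:

```swift
import AVFoundation

func reproduceExposureIssue(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    // Lock a 1/8 s custom exposure at the current ISO.
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 8),
                                 iso: AVCaptureDevice.currentISO,
                                 completionHandler: nil)
    // ... later, return control to the auto-exposure algorithm:
    device.exposureMode = .continuousAutoExposure
    device.unlockForConfiguration()
}
```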
Replies: 0 · Boosts: 0 · Views: 404 · Latest activity: Sep ’24
Launching an app with Camera Control
I've just received my iPhone 16 Pro to develop some of the Camera Control features. I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension. Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work: "The app extension terminates shortly after launch if it doesn't have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn't been requested." This is a bit frustrating for something as simple as opening an app. Thanks, Alex
Replies: 1 · Boosts: 0 · Views: 1.1k · Latest activity: Sep ’24
Capture Extension Icon
Hello, I seem to be having an issue assigning my Capture Extension an icon. It works fine using a system icon, for example: Image(systemName: "star") But it fails when I use my custom icon, such as: Image(uiImage: UIImage(named: "widget-icon")!) The "widget-icon" is located in both my Assets collection and the widget folder for good measure, and yet, my Widget always has a "?" icon. I am able to use "widget-icon" just fine for other Lock Screen widgets, but it is not working for the Camera Extension Widget. Any thoughts? Thank you for your help!
Replies: 2 · Boosts: 0 · Views: 486 · Latest activity: Sep ’24
DockKit tracking becomes erratic with increased zoom factor in iOS app
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it. Here's a simplified version of my tracking logic:

```swift
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device) else {
        fatalError("Camera not available")
    }
    let currentZoomFactor = device.videoZoomFactor
    let dimensions = device.activeFormat.formatDescription.dimensions
    let referenceDimensions = CGSize(width: CGFloat(dimensions.width),
                                     height: CGFloat(dimensions.height))
    let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))
    let deviceOrientation = UIDevice.current.orientation
    let cameraOrientation: DockAccessory.CameraOrientation = {
        switch deviceOrientation {
        case .landscapeLeft: return .landscapeLeft
        case .landscapeRight: return .landscapeRight
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        default: return .unknown
        }
    }()
    let cameraInfo = DockAccessory.CameraInformation(
        captureDevice: input.device.deviceType,
        cameraPosition: input.device.position,
        orientation: cameraOrientation,
        cameraIntrinsics: useIntrinsics ? intrinsics : nil,
        referenceDimensions: referenceDimensions
    )
    let observation = DockAccessory.Observation(
        identifier: 0,
        type: .object,
        rect: boundingBox
    )
    let observations = [observation]
    try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}

func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let width = Float(dimensions.width)
    let height = Float(dimensions.height)
    let diagonalPixels = sqrt(width * width + height * height)
    let estimatedFocalLength = diagonalPixels * 0.8
    let fx = Float(estimatedFocalLength) * Float(currentZoom)
    let fy = fx
    let cx = width / 2.0
    let cy = height / 2.0
    return matrix_float3x3(
        SIMD3<Float>(fx, 0, cx),
        SIMD3<Float>(0, fy, cy),
        SIMD3<Float>(0, 0, 1)
    )
}
```

I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases. Questions:

- Why might increasing the zoom factor cause this erratic movement?
- I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently?
- Are there any other factors I should consider when using DockKit with a variable zoom?
- Could the frequency of calls to trackObject (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?

Any insights or suggestions would be greatly appreciated. Thanks!
Replies: 8 · Boosts: 0 · Views: 752 · Latest activity: Sep ’24
AVCaptureSystemZoomSlider has a factor that I can't get anywhere.
As you can see, the value shown in the AVCaptureSystemZoomSlider is not the same as the raw camera zoom factor. I tried to calculate this value, and it seems it's 0.8. (5-1)*0.8=4.2-1 in this image. It seems this factor only applies to the default wide-angle camera. And I can't get this value from anywhere. (It's not displayVideoZoomFactorMultiplier btw, I checked that.) What is it?
Replies: 1 · Boosts: 1 · Views: 605 · Latest activity: Sep ’24
Moving Photos
How can it be that you still don't have the option to move photos into an album instead of just copying them? This is a bad joke, right? The entire Photos app is absolutely untidy and a nightmare for people who like order. I want car photos in the car folder and vacation photos in the vacation folder, without them being visible in the Recents folder. It can't be that difficult, can it?
Replies: 3 · Boosts: 0 · Views: 336 · Latest activity: Sep ’24