Discuss using the camera on Apple devices.

Posts under Camera tag

108 Posts

Camera becomes black for a few production users during photo capture
PLATFORM AND VERSION: iOS 18.5

I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users encounter a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which uses AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view. This camera blackout significantly impacts the user experience, as it prevents users from snapping photos of their receipts, which is a core function of the CoinOut app. Any help or suggestions would be greatly appreciated.

STEPS TO REPRODUCE: Open the app and tap the camera icon; the camera appears for photo capture. For a few production users, the camera view shows black.

```swift
import UIKit
import AVFoundation
import AudioToolbox

class ViewController: UIViewController {
    @IBOutlet private weak var captureButton: UIButton!

    private var fillLayer: CAShapeLayer!
    private var previewLayer: AVCaptureVideoPreviewLayer!
    private var output: AVCapturePhotoOutput!
    private var device: AVCaptureDevice!
    private var session: AVCaptureSession!
    private var highResolutionEnabled: Bool = false
    private let sessionQueue = DispatchQueue(label: "session queue")

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCamera()
        customiseUI()
    }

    @IBAction func startCamera(sender: UIButton) {
        didTapTakePhoto()
    }

    private func setupCamera() {
        let session = AVCaptureSession()
        session.sessionPreset = .high
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        output = AVCapturePhotoOutput()
        device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

        if let device = self.device {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if session.canAddInput(input) {
                    session.addInput(input)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session input addition failed")
                }
                if session.canAddOutput(output) {
                    output.isHighResolutionCaptureEnabled = highResolutionEnabled
                    session.addOutput(output)
                } else {
                    print("\(#fileID):\(#function):\(#line) : Session output addition failed")
                }

                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                // startRunning() blocks, so keep it off the main thread.
                sessionQueue.async {
                    session.startRunning()
                }
                self.session = session

                try device.lockForConfiguration()
                if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
                    device.whiteBalanceMode = .continuousAutoWhiteBalance
                } else if device.isWhiteBalanceModeSupported(.autoWhiteBalance) {
                    device.whiteBalanceMode = .autoWhiteBalance
                } else {
                    print("\(#fileID):\(#function):\(#line) : white balance mode not supported")
                }
                if device.isFocusModeSupported(.continuousAutoFocus) {
                    device.focusMode = .continuousAutoFocus
                } else if device.isFocusModeSupported(.autoFocus) {
                    device.focusMode = .autoFocus
                }
                device.unlockForConfiguration()
            } catch {
                print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)")
            }
        } else {
            print("\(#fileID):\(#function):\(#line) : Device found as nil")
        }
    }

    private func customiseUI() {
        // Dim the preview outside a centered rectangular cutout.
        let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: view.bounds.width, height: view.bounds.height), cornerRadius: 0)
        let rectangleWidth = view.frame.width - (view.frame.width * 0.16)
        let x = (view.frame.width - rectangleWidth) / 2
        let rectangleHeight = view.frame.height - (view.frame.height * 0.16)
        let y = (view.frame.height - rectangleHeight) / 2
        let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight),
                                     byRoundingCorners: .allCorners,
                                     cornerRadii: CGSize(width: 0, height: 0))
        roundRect.move(to: CGPoint(x: view.center.x, y: view.center.y))
        path.append(roundRect)
        path.usesEvenOddFillRule = true

        fillLayer = CAShapeLayer()
        fillLayer.path = path.cgPath
        fillLayer.fillRule = .evenOdd
        fillLayer.opacity = 0.4
        previewLayer.addSublayer(fillLayer)

        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)
        view.bringSubviewToFront(captureButton)
    }

    private func didTapTakePhoto() {
        let settings = getSettings(camera: device)
        if device.isAdjustingFocus {
            do {
                try device.lockForConfiguration()
                device.focusMode = .continuousAutoFocus
                device.unlockForConfiguration()
                device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil)
            } catch {
                print(error)
            }
        } else {
            output.capturePhoto(with: settings, delegate: self)
        }
    }

    func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings {
        var settings = AVCapturePhotoSettings()
        if let rawFormat = output.availableRawPhotoPixelFormatTypes.first {
            settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat))
        }
        settings.isHighResolutionPhotoEnabled = highResolutionEnabled
        let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
        settings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType]
        return settings
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        AudioServicesDisposeSystemSoundID(1108) // suppress the default shutter sound
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        showImage(cropped: image)
    }

    func showImage(cropped: UIImage) {
        guard let vc = storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController else { return }
        vc.captured = cropped
        present(vc, animated: true)
    }
}
```
Replies: 1 · Boosts: 0 · Views: 216 · Activity: Jul ’25
Main Camera Access Entitlement Bug
Hello everyone, can you help me? I requested the Main Camera Access enterprise API and received the license for it. I set up Apple's Main Camera Access demo project with my new license and created an app bundle and identifier for it, but when I try to deploy it to TestFlight I get errors saying "Profile doesn't support Main Camera Access" and "Profile doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement", even though I have configured it in Certificates, Identifiers & Profiles and added the additional capability Main Camera Access. Can you help me fix this so that I can use the Main Camera Access entitlement?
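For reference, a minimal sketch of what the app's .entitlements file should contain for this capability, using the entitlement key named in the error message above. Note that the provisioning profile itself must also carry this entitlement, so it has to be regenerated after the capability is added in Certificates, Identifiers & Profiles:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Enterprise entitlement key taken from the error message above -->
    <key>com.apple.developer.arkit.main-camera-access.allow</key>
    <true/>
</dict>
</plist>
```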
Replies: 5 · Boosts: 0 · Views: 206 · Activity: Jul ’25
Can an AppIntent take a photo?
Hi, I'm new to Swift/SwiftUI. I want my App Shortcut to be able to take a photo within my AppIntent, instead of having to configure a 'Take a photo' action in the Shortcuts app and then passing the result to my AppIntent (to reduce human error). Is this possible? I read there's a protocol called CameraCaptureIntent, but I think it's only used for separate extensions, such as the ones for Control Center, the Lock Screen, and the Action button. :(
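For comparison, here is a minimal hedged sketch of the common workaround when CameraCaptureIntent's extension points don't fit: an ordinary AppIntent that opens the app and routes to its own capture UI. CameraRouter is a hypothetical helper for this sketch, not an AppIntents API:

```swift
import AppIntents

// Hypothetical navigation helper; use whatever deep-linking your app has.
final class CameraRouter {
    static let shared = CameraRouter()
    func showCaptureScreen() { /* navigate to the app's capture screen */ }
}

// A sketch, not a confirmed approach: an ordinary AppIntent cannot drive the
// camera hardware in the background, so it opens the app and hands off to
// the app's capture screen instead.
struct TakePhotoIntent: AppIntent {
    static let title: LocalizedStringResource = "Take Photo"

    // Camera capture needs foreground UI, so the intent must open the app.
    static let openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        CameraRouter.shared.showCaptureScreen()
        return .result()
    }
}
```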
Replies: 0 · Boosts: 0 · Views: 98 · Activity: Jul ’25
After the iPadOS 26 and iOS 26 betas, AVCaptureMetadataOutput no longer detects faces on some devices
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput configured with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] to scan for faces. After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, the delegate method of AVCaptureMetadataOutputObjectsDelegate is no longer called on some devices. The following devices are affected: iPad (9th generation), iPad Air (4th generation), and iPhone 15. The issue does not occur on any other devices I have. I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after the iPadOS 26 and iOS 26 betas, or is there a problem on the OS side?
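For anyone comparing setups, a minimal Swift sketch of the face-metadata pipeline described above (class and method names are illustrative). Logging availableMetadataObjectTypes on the affected devices is worth doing, since it shows whether the OS is even offering face metadata for that configuration on the beta:

```swift
import AVFoundation

// Minimal sketch of the setup described above; names are illustrative.
final class FaceScanner: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()
    private let metadataQueue = DispatchQueue(label: "metadata.queue")

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureMetadataOutput()
        if session.canAddOutput(output) {
            session.addOutput(output)
            // Log this on affected devices: if .face is missing here, the OS
            // is not offering face metadata for this configuration at all.
            print("Available types: \(output.availableMetadataObjectTypes)")
            if output.availableMetadataObjectTypes.contains(.face) {
                output.setMetadataObjectsDelegate(self, queue: metadataQueue)
                output.metadataObjectTypes = [.face]
            }
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        let faces = metadataObjects.compactMap { $0 as? AVMetadataFaceObject }
        print("Detected \(faces.count) face(s)")
    }
}
```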
Replies: 0 · Boosts: 2 · Views: 137 · Activity: Jul ’25
Creating spatial video with one camera
Hello everyone, I would like to create my own spatial video on my Apple Vision Pro. According to all the documentation from Apple, this requires two camera angles that enhance the spatial perception. I have purchased the Enterprise license with main camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is one camera image enough to create a spatial video, for example by splitting the image? I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
Replies: 1 · Boosts: 0 · Views: 296 · Activity: Jun ’25
Memory leak when performing DetectHumanBodyPose3DRequest
Hi, I'm developing an application for macOS and iOS that has to run DetectHumanBodyPose3DRequest in real time to retrieve the 3D skeleton from the camera. I'm experiencing a memory leak every time the model is used (when I comment out that line, memory stays constant). After a minute it uses about 1 GB of RAM running with Mac Catalyst. I've attached a minimal project that reproduces the problem.

Camera view:

```swift
import SwiftUI
import Combine
import Vision

struct CameraView: View {
    @StateObject private var viewModel = CameraViewModel()

    var body: some View {
        HStack {
            ZStack {
                GeometryReader { geometry in
                    if let image = viewModel.currentFrame {
                        Image(decorative: image, scale: 1)
                            .resizable()
                            .scaledToFill()
                            .frame(width: geometry.size.width, height: geometry.size.height)
                            .clipped()
                    } else {
                        ProgressView()
                    }
                }
            }
        }
    }
}

class CameraViewModel: ObservableObject {
    @Published var currentFrame: CGImage?
    @Published var frameRate: Double = 0
    @Published var currentVisionBodyPose: HumanBodyPose3DObservation? // current body pose
    @Published var currentImageSize: CGSize? // current image size

    private var cameraManager: CameraManager?
    private var humanBodyPose = HumanBodyPose3DDetector()
    private var lastClassificationTime = Date()
    private var frameCount = 0
    private var lastFrameTime = Date()
    private let classificationThrottleInterval: TimeInterval = 1.0
    private var lastPoseSendTime: Date = .distantPast

    init() {
        cameraManager = CameraManager()
        startPreview()
        startClassification()
    }

    private func startPreview() {
        Task {
            guard let previewStream = cameraManager?.previewStream else { return }
            for await frame in previewStream {
                let size = CGSize(width: frame.width, height: frame.height)
                Task { @MainActor in
                    self.currentFrame = frame
                    self.currentImageSize = size
                    self.updateFrameRate()
                }
            }
        }
    }

    private func startClassification() {
        Task {
            guard let classificationStream = cameraManager?.classificationStream else { return }
            for await pixelBuffer in classificationStream {
                self.classifyFrame(pixelBuffer: pixelBuffer)
            }
        }
    }

    private func classifyFrame(pixelBuffer: CVPixelBuffer) {
        humanBodyPose.runHumanBodyPose3DRequestOnImage(pixelBuffer: pixelBuffer) { [weak self] observation in
            guard let self = self else { return }
            DispatchQueue.main.async {
                if let observation = observation {
                    self.currentVisionBodyPose = observation
                    print(observation)
                } else {
                    self.currentVisionBodyPose = nil
                }
            }
        }
    }

    private func updateFrameRate() {
        frameCount += 1
        let now = Date()
        let elapsed = now.timeIntervalSince(lastFrameTime)
        if elapsed >= 1.0 {
            frameRate = Double(frameCount) / elapsed
            frameCount = 0
            lastFrameTime = now
        }
    }
}
```

HumanBodyPose3DDetector:

```swift
import Foundation
import Vision

class HumanBodyPose3DDetector: NSObject, ObservableObject {
    @Published var humanObservation: HumanBodyPose3DObservation? = nil

    private let queue = DispatchQueue(label: "humanbodypose.queue")
    private let request = DetectHumanBodyPose3DRequest()

    private struct SendablePixelBuffer: @unchecked Sendable {
        let buffer: CVPixelBuffer
    }

    public func runHumanBodyPose3DRequestOnImage(pixelBuffer: CVPixelBuffer,
                                                 completion: @escaping (HumanBodyPose3DObservation?) -> Void) {
        let sendableBuffer = SendablePixelBuffer(buffer: pixelBuffer)
        queue.async { [weak self] in
            Task { [weak self, sendableBuffer] in
                do {
                    guard let self = self else { return }
                    // The leak described above appears whenever this call runs.
                    let result = try await self.request.perform(on: sendableBuffer.buffer)
                    // process result
                    DispatchQueue.main.async {
                        if result.isEmpty {
                            completion(nil)
                        } else {
                            completion(result[0])
                        }
                    }
                } catch {
                    DispatchQueue.main.async {
                        completion(nil)
                    }
                }
            }
        }
    }
}
```
Replies: 1 · Boosts: 0 · Views: 153 · Activity: Jun ’25
Is a Locked Capture Extension allowed to just "open the app" when the device is unlocked?
Hey, quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked, it just shows this screen (screenshot omitted here). Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app. I know this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild it. At least this way the user experience would improve from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible. Thanks, Alex
Replies: 0 · Boosts: 0 · Views: 218 · Activity: Jun ’25
UIImagePickerController shows black screen for some users on iOS 18.4+ (iPhone 13/14)
We're facing a strange issue where UIImagePickerController opens with a black screen (no camera preview) for some users only. Camera permission is granted, and the picker is presented without errors. The problem does not reproduce on all devices; it has been reported on:

- iPhone 14, iOS 18.4
- iPhone 13, iOS 18.5
- other unknown devices (users didn't share details)

We are using UIImagePickerController to open the rear camera, presenting it from appDelegate.window?.rootViewController. All required permissions are in place (NSCameraUsageDescription is in Info.plist; runtime permission is checked and approved). Still, for a subset of users, the screen goes black when trying to capture a photo. We suspect either a system-level issue with iOS 18.4+, a session conflict, or an issue with how we present the picker. Looking for advice or known issues/workarounds. Would switching to AVCaptureSession help?

What we've verified:

- NSCameraUsageDescription is set in Info.plist
- Camera permission is requested and granted at runtime
- Users tried reinstalling the app, restarting the phone, and switching between front/rear cameras
- The camera preview still remains black; no crash logs or exceptions

Code-level example:

```swift
let imagePicker = UIImagePickerController()

let capture = UIAlertAction(title: "TAKE_PHOTO".localized, style: .destructive) { _ in
    self.imagePicker.sourceType = .camera
    self.imagePicker.cameraDevice = .rear
    self.imagePicker.showsCameraControls = true
    self.imagePicker.allowsEditing = false
    appDelegate.window?.rootViewController?.present(self.imagePicker, animated: true, completion: nil)
}
```
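A hedged sketch of the pre-flight checks that can rule out the most common causes before presenting the picker: source availability, authorization status, and presenting from a known view controller rather than the window's root. presentCamera is an illustrative helper under those assumptions, not a confirmed fix for this report:

```swift
import UIKit
import AVFoundation

// Sketch only: these checks rule out the usual causes of a black preview
// (no camera available, permission race) before presenting the picker.
func presentCamera(from presenter: UIViewController) {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }

    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.cameraDevice = .rear
        presenter.present(picker, animated: true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            guard granted else { return }
            DispatchQueue.main.async { presentCamera(from: presenter) }
        }
    default:
        break // denied or restricted: direct the user to Settings instead
    }
}
```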
Topic: UI Frameworks · SubTopic: UIKit
Replies: 1 · Boosts: 0 · Views: 137 · Activity: Jun ’25
iOS 26 Camera and External Storage
Hi all, I searched for this feedback but didn't see it, so apologies if this has been covered by another thread. Exploring the new Camera app, it doesn't seem to recognize that external storage is connected, so the additional features that allow ProRes high frame rates throw an error dialog stating that "to use this you need external storage", even when external storage is connected. Using the Files app, the phone recognizes the storage, and this is something I could do with this external storage device on the previous version of iOS. It is clear that this release of the Camera app has been rewritten significantly since the last version. Is it possible that this is an oversight, a bug, or just functionality that has not been completed? I'm interested in whether anybody else is seeing this, or if it is just my setup.
Replies: 0 · Boosts: 0 · Views: 105 · Activity: Jun ’25
Feedback on the new Camera app icon in iOS 26
I’m currently using the iOS 26 developer beta and noticed the new icon design for the Camera app. Personally, I preferred the previous icon: it looked cleaner, more elegant, and more in line with Apple’s signature iOS design language. The new icon feels more like something you’d expect from Android; it lacks the minimalist, refined style that usually defines iOS icons. I understand UI evolves over time, but this change feels like a step away from what makes Apple’s design philosophy unique. Just wanted to share this honest feedback as a long-time user and developer. Thanks for considering!
Topic: Design · SubTopic: General
Replies: 2 · Boosts: 0 · Views: 179 · Activity: Jun ’25
Significant Uptick in AVCaptureSessionWasInterrupted (Reason 4) Leading to Camera Black Screen and AVError Code -11803
In the latest production release of our iOS app (deployed via the App Store), we’ve observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting and reinstalling the app. An employee ran into this and was able to get a recording. The interruption causes the camera preview to remain black, and any attempt to capture an image fails with the following error: "Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}"

Some questions from our team:

- What common system conditions or foreground-app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and Picture in Picture, but neither applies in the logs we see.
- Is manual recovery of the session required? Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session?
- Are there instruments in Xcode you would recommend for evaluating the increase in reason 4?

Best, Ben
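A minimal diagnostic sketch (not an Apple-recommended recovery path, as shown below) that logs the interruption reason and the end-of-interruption event, which at least confirms whether reason 4 is what production users hit and gives a hook for rebuilding the session:

```swift
import AVFoundation

// Sketch: log interruptions on a given session and hook the "ended" event.
final class SessionInterruptionObserver {
    private var observers: [NSObjectProtocol] = []

    init(session: AVCaptureSession) {
        let center = NotificationCenter.default
        observers.append(center.addObserver(
            forName: .AVCaptureSessionWasInterrupted,
            object: session, queue: .main) { note in
                if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
                   let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
                    // rawValue 4 == .videoDeviceNotAvailableWithMultipleForegroundApps
                    print("Session interrupted, reason \(raw): \(reason)")
                }
            })
        observers.append(center.addObserver(
            forName: .AVCaptureSessionInterruptionEnded,
            object: session, queue: .main) { _ in
                // If the preview is still black once this fires, tearing the
                // session down and rebuilding it is a reasonable fallback.
                print("Session interruption ended")
            })
    }

    deinit { observers.forEach { NotificationCenter.default.removeObserver($0) } }
}
```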
Replies: 3 · Boosts: 17 · Views: 549 · Activity: Jun ’25
App Clip links encoded as QR Codes do not load when scanned with the iPhone camera
Scanning a QR Code that encodes an App Clip link shows an error on an iPhone or iPad: the App Clip's banner is visible, but a message says "App Clip Unavailable". Accessing the same App Clip URL via Safari works as expected. I've filed feedback with more details and screenshots of the issue: FB17891015. Thanks!
Replies: 2 · Boosts: 4 · Views: 238 · Activity: Jun ’25
Is Phase Detection Autofocus degrading video stabilization, and can I disable it?
I'm developing a video capture app using AVFoundation, designed specifically for use on a boat pylon to record slalom water skiing. This setup involves considerable vibration. As you may know, the OIS that Apple began adding to lenses with the iPhone 7 is actually very problematic in high-vibration circumstances, ironically creating very shaky video, whereas lenses without OIS produce perfectly stable video. Because of this, up until the iPhone 14, the solution for my app was simply to use the selfie lens, which did not have OIS. From the iPhone 14 through the iPhone 16 (non-Pro models), technical specs suggest the selfie lens still does not include OIS. However, I'm still seeing the same kind of shaky video behavior I see on OIS-equipped lenses. The one hardware change I see in this camera module is the addition of PDAF (Phase Detection Autofocus), so that is my best guess as to what is causing the unstable video.

1. Does that make sense: in high-vibration settings, could PDAF create unstable video in the same way OIS does? Or could something else have changed between the iPhone 13 and 14 selfie lenses?

Thinking the issue was PDAF, I figured that letting my app set a manual focus level ought to circumvent PDAF (expecting that if a lens is manually focusing, it can't also be autofocusing via PDAF). However, even with manual focus locked via AVCaptureDevice in my app, on the selfie lens of an iPhone 16, the video still comes out very shaky, basically unusable. I also tested with the built-in Apple Camera app (using press-and-hold to lock focus and exposure) and another third-party camera app that locks focus, all with the same results, so it's not that my app is simply doing manual focus incorrectly. So I'm stuck with these questions:

2. Does the selfie camera on iPhones 14-16 use PDAF even when focus is set to locked/manual mode?
3. Is there any way in AVFoundation to disable or suppress PDAF during video recording (e.g., a flag, device format setting, or private API)?
4. Is PDAF behavior or suppression documented or controllable via AVCaptureDevice or any related class?
5. If no control of PDAF is available, are there any best practices for stabilizing or smoothing this effect programmatically?

Note that I have also set my app to use the most aggressive form of stabilization available, so it defaults to .cinematicExtendedEnhanced; if that's not available, then .cinematicExtended, and so on. On the iPhone 16 selfie lens it is using .cinematicExtended. One additional question:

6. Would those be the most appropriate stabilization settings for a high-vibration environment, and if not, what would be best?
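For reference, a short sketch of the two public controls the post mentions: locking the lens at a fixed position (the closest AVFoundation gets to suppressing autofocus; there is no public PDAF switch that I'm aware of) and requesting the strongest stabilization mode the connection supports. The function name and the 1.0 lens position are illustrative assumptions:

```swift
import AVFoundation

// Sketch under stated assumptions: no public PDAF control exists, so this
// only shows the lens-position lock and the stabilization preference.
func configureForHighVibration(device: AVCaptureDevice, connection: AVCaptureConnection) throws {
    try device.lockForConfiguration()
    if device.isLockingFocusWithCustomLensPositionSupported {
        // 1.0 focuses at the far limit; pick a position that suits the scene.
        device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
    }
    device.unlockForConfiguration()

    if connection.isVideoStabilizationSupported {
        // The system falls back automatically if the active format
        // cannot deliver the preferred mode.
        connection.preferredVideoStabilizationMode = .cinematicExtendedEnhanced
    }
}
```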
Replies: 0 · Boosts: 0 · Views: 213 · Activity: May ’25
iOS Camera access issues in Developer mode on real device - PermissionStatus.permanentlyDenied
Xcode Version 16.3 (16E140); app developed in Flutter (Flutter 3.29.3); test device: iPhone 16 Pro running iOS 18.5.

I have an app that requires camera access. This used to work with iOS 18.4.x. I have stripped my app down to just requesting camera permission, and even then it fails:

```
flutter: Camera permission: PermissionStatus.denied
flutter: Photos permission: PermissionStatus.denied
flutter: Microphone permission: PermissionStatus.denied
flutter: --- End Debug Info ---
flutter: Loaded translations from asset for en_US
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
(the "client is not entitled" line is logged six times)
flutter: CAMERA PERMISSION STATUS: PermissionStatus.permanentlyDenied
```

Camera permissions don't show up in my app's settings or under Settings -> Privacy & Security -> Camera, and I am at a loss to understand why this is happening.
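One way to narrow this down is to query the native authorization state directly (for example via a small platform channel or a breakpoint in the iOS runner) and compare it with what the Flutter plugin reports; a .restricted state in particular typically shows no toggle in Settings, which would match the symptom above. A minimal native-side sketch:

```swift
import AVFoundation

// Sketch: native-side check to compare against the plugin's answer.
let status = AVCaptureDevice.authorizationStatus(for: .video)
switch status {
case .notDetermined: print("not determined: the user was never asked")
case .restricted:    print("restricted: blocked by e.g. Screen Time or a profile")
case .denied:        print("denied: the user refused access")
case .authorized:    print("authorized")
@unknown default:    print("unknown status")
}
```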
Replies: 1 · Boosts: 0 · Views: 153 · Activity: May ’25
ARKit Camera Feed Zoom & Macro Support for Close-Range Objects
I am currently developing an AR experience using ARKit with SceneKit and am looking to implement functionality that enables:

- Zooming into the AR camera feed, ideally leveraging the ultra-wide or telephoto lenses available on supported devices.
- Macro-style focus capabilities, allowing users to view and interact with virtual content closely aligned with small or nearby real-world objects (within a few centimeters).

My objective is to ensure that ARKit continues to render the scene accurately while enabling a zoomed-in view or macro-level focus for better detail visibility and alignment. Could you please advise on:

- Whether ARKit currently supports camera zoom or allows access to macro or ultra-wide cameras within an ARSession.
- Limitations or considerations when using multi-camera setups in conjunction with ARKit.

Any guidance or references to documentation or sample code would be greatly appreciated.
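As a partial answer to the zoom question, a sketch of what ARKit does expose publicly: selectable video formats on the configuration (including a recommended 4K format on supported devices), but, as far as I know, no zoom factor or macro control on the AR feed. The function name is illustrative:

```swift
import ARKit

// Sketch: enumerate what the device offers, then opt into the 4K format
// where available. ARKit has no public zoom or macro API beyond this.
func configureSession(_ session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    for format in ARWorldTrackingConfiguration.supportedVideoFormats {
        print(format.imageResolution, format.framesPerSecond, format.captureDeviceType)
    }

    // iOS 16+: higher-resolution capture, the closest thing to "more detail".
    if let fourK = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = fourK
    }
    session.run(configuration)
}
```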
Replies: 0 · Boosts: 0 · Views: 137 · Activity: May ’25
Regarding ARKit camera feed zoom and macro support for close-range objects
I am currently developing an AR experience using ARKit with SceneKit and am looking to implement functionality that enables:

- Zooming into the AR camera feed, ideally leveraging the ultra-wide or telephoto lenses available on supported devices.
- Macro-style focus capabilities, allowing users to view and interact with virtual content closely aligned with small or nearby real-world objects (within a few centimeters).

My objective is to ensure that ARKit continues to render the scene accurately while enabling a zoomed-in view or macro-level focus for better detail visibility and alignment. Could you please advise on:

- Whether ARKit currently supports camera zoom or allows access to macro or ultra-wide cameras within an ARSession.
- Limitations or considerations when using multi-camera setups in conjunction with ARKit.

Any guidance or references to documentation or sample code would be greatly appreciated. Best regards, Ayush
Replies: 0 · Boosts: 0 · Views: 114 · Activity: May ’25
allowCamera on Unsupervised devices
Is there any mechanism to restrict camera usage on a user-owned device once the user has opted in, consented to the restriction, and installed a management profile? Documentation suggests this was possible with allowCamera, but that key has been deprecated on unsupervised devices. Am I understanding correctly that it's simply not possible anymore unless the device is supervised?
Replies: 2 · Boosts: 0 · Views: 256 · Activity: May ’25
ScreenTime: Block the Messages, Phone, Camera apps
I want to limit my child's phone usage at night by having them scan a QR code that enforces app limitations via Screen Time. Even after they scan the QR code, I'm still unable to prevent them from accessing the Phone, Messages, and Camera apps via Screen Time. Is there a way I can block or heavily restrict access to these three apps via Screen Time? And how do I prevent my child from undoing or evading the Screen Time enforcement I'm trying to apply?
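For the developer-facing side of this, a hedged sketch of the Screen Time API path (FamilyControls plus ManagedSettings): after parental authorization, shield the apps the parent picks in a FamilyActivityPicker. Whether built-in apps such as Phone can be shielded is a known limitation area, so treat this as a sketch, not a guarantee; the function name is illustrative:

```swift
import FamilyControls
import ManagedSettings

// Sketch: requires the Family Controls entitlement. Tokens come from a
// FamilyActivityPicker selection bound to `selection` in SwiftUI.
@MainActor
func applyNightRestrictions(selection: FamilyActivitySelection) async throws {
    // One-time parental authorization on the child's device.
    try await AuthorizationCenter.shared.requestAuthorization(for: .child)

    let store = ManagedSettingsStore()
    // Shield (block) the selected apps until the shield is cleared.
    store.shield.applications = selection.applicationTokens
}
```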
Replies: 1 · Boosts: 0 · Views: 125 · Activity: May ’25