Hi all, I searched for this feedback but didn't see it, so apologies if this has been covered in another thread. Exploring the new Camera app, it doesn't seem to recognize that external storage has been connected, so the additional features that require it, such as ProRes at high frame rates, throw an error dialog stating that "to use this you need external storage" even when external storage is connected. Using the Files app, the phone recognizes the storage, and this is something I could do with the same external storage device on the previous version of iOS.
It is clear that this release of the Camera app has been rewritten significantly since the last version. Is this an oversight, a bug, or functionality that just hasn't been completed yet? I'm interested in whether anybody else is seeing this, or if it's just my setup.
Camera
Discuss using the camera on Apple devices.
Posts under Camera tag: 113
I’m currently using the iOS 26 developer beta and noticed the new icon design for the Camera app. Personally, I preferred the previous icon: it looked cleaner, more elegant, and felt more in line with Apple’s signature iOS design language.
The new icon feels more like something you’d expect from Android. It lacks the minimalist, refined style that usually defines iOS icons. I understand UI evolves over time, but this change feels like a step away from what makes Apple’s design philosophy unique.
Just wanted to share this honest feedback as a long-time user and developer. Thanks for considering!
In the latest production release of our iOS app (deployed via the App Store), we’ve observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting and reinstalling the app. An employee ran into this and was able to capture a recording.
This interruption causes the camera preview to remain black, and any attempt to capture an image fails with the following error:
"Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}"
Some questions from our team:
What common system conditions or foreground app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and Picture in Picture, but neither applies in the logs we see.
Is manual recovery of the session required?
Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session?
Are there any instruments in Xcode you would recommend for investigating the increase in reason 4?
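For context, here is a simplified sketch of the interruption handling we currently have in place (the class name and handler bodies are illustrative, not our production code):

import AVFoundation

final class CaptureInterruptionObserver {
    private let session: AVCaptureSession

    init(session: AVCaptureSession) {
        self.session = session
        // Listen for interruptions and for the (in our case, never-arriving) end of interruption.
        NotificationCenter.default.addObserver(self, selector: #selector(sessionWasInterrupted(_:)),
                                               name: .AVCaptureSessionWasInterrupted, object: session)
        NotificationCenter.default.addObserver(self, selector: #selector(sessionInterruptionEnded(_:)),
                                               name: .AVCaptureSessionInterruptionEnded, object: session)
    }

    @objc private func sessionWasInterrupted(_ notification: Notification) {
        guard let rawValue = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
              let reason = AVCaptureSession.InterruptionReason(rawValue: rawValue) else { return }
        if reason == .videoDeviceNotAvailableWithMultipleForegroundApps {
            // This is the persistent "reason 4" case described above.
            print("Capture interrupted: video device not available with multiple foreground apps")
        }
    }

    @objc private func sessionInterruptionEnded(_ notification: Notification) {
        // In our production logs this never seems to fire for the reason-4 case.
        print("Capture interruption ended")
    }
}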
Best,
Ben
Scanning a QR code that encodes an App Clip link shows an error on iPhone and iPad.
After scanning, the App Clip's banner is visible, but a message says: "App Clip Unavailable".
Accessing the same App Clip URL via Safari works as expected.
I've filed a feedback with more details and screenshots of the issue here: FB17891015
Thanks!
Some users have reported an error editing portrait photo assets in my app:
The operation couldn’t be completed. (CINonLocalizedDescriptionKey error 3.)
What is that error? Will affected photos always encounter this error (due to data corruption for example) or can it be resolved in a future iOS update?
FB16241301
I'm developing a video capture app using AVFoundation, designed specifically for use on a boat pylon to record slalom water skiing. This setup involves considerable vibration.
As you may know, the OIS that Apple has included in its lenses since the iPhone 7 is actually very problematic in high-vibration circumstances, ironically creating very shaky video, whereas lenses without OIS produce perfectly stable video. Because of this, up until the iPhone 14, the solution for my app was simply to use the selfie lens, which did not have OIS.
Starting with iPhone 14 through iPhone 16 (non-Pro models), technical specs suggest the selfie lens still does not include OIS. However, I’m still seeing the same kind of shaky video behavior I see on OIS-equipped lenses. The one hardware change I see in this camera module is the addition of PDAF (Phase Detection Autofocus), so that is my best guess as to what is causing the unstable video.
1- Does that make any sense - that in high vibration settings, PDAF could create unstable video in the same way that OIS does? Or could it be something else that was changed between the iPhone 13 and 14 Selfie lens?
Thinking that the issue was PDAF, I figured that enabling my app to set a manual focus level ought to circumvent PDAF (expecting that if a lens is manually focused, it can’t also be autofocusing via PDAF).
However, even with manual focus locked via AVCaptureDevice in my app, on the Selfie lens of an iPhone 16, the video still comes out very shaky, basically unusable. I also tested with the built-in Apple Camera app (using the press-and-hold to lock focus and exposure) and another 3rd party camera app to lock focus, all with the same results, so it's not that my app just isn't correctly doing manual focus.
So I'm stuck with these questions:
2- Does the selfie camera on iPhones 14–16 use PDAF even when focus is set to locked/manual mode?
3- Is there any way in AVFoundation to disable or suppress PDAF during video recording (e.g., a flag, device format setting, or private API)?
4- Is PDAF behavior or suppression documented or controllable via AVCaptureDevice or any related class?
5- If no control of PDAF is available, are there any best practices for stabilizing or smoothing this effect programmatically?
Note that I have also set my app to use the most aggressive form of stabilization available, so it defaults to .cinematicExtendedEnhanced; if that’s not available, then .cinematicExtended, and so on. On the 16 selfie lens it is using .cinematicExtended (a simplified sketch of this configuration follows question 6 below). As an additional question:
6- Would those be the most appropriate stabilization settings for a high vibration environment, and if not, what would be best?
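For reference, this is roughly how my app locks focus and requests stabilization (a simplified sketch; the fixed lens position of 1.0 is just an example value, not what the app actually uses):

import AVFoundation

func configureSelfieCapture(session: AVCaptureSession) throws {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else { return }

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    let output = AVCaptureMovieFileOutput()
    if session.canAddOutput(output) { session.addOutput(output) }

    // Lock focus at a fixed lens position (0.0 = nearest, 1.0 = farthest).
    try device.lockForConfiguration()
    if device.isFocusModeSupported(.locked) {
        device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
    }
    device.unlockForConfiguration()

    // Ask for the most aggressive stabilization; the system falls back if the format doesn't support it.
    if let connection = output.connection(with: .video), connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .cinematicExtendedEnhanced
    }
}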
Xcode Version 16.3 (16E140)
The app is developed in Flutter 3.29.3.
Test iPhone device: iPhone 16 Pro running iOS 18.5
I have an app that requires camera access. This used to work with iOS 18.4.x. I have stripped my app down to just requesting camera permission, and even then it fails:
flutter: Camera permission: PermissionStatus.denied
flutter: Photos permission: PermissionStatus.denied
flutter: Microphone permission: PermissionStatus.denied
flutter: --- End Debug Info ---
flutter: Loaded translations from asset for en_US
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
flutter: CAMERA PERMISSION STATUS: PermissionStatus.permanentlyDenied
Camera permission doesn't show up in my app's settings or under Settings -> Privacy & Security -> Camera, and I am at a loss to understand why this is happening.
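In case it helps narrow things down, a minimal native check (bypassing the Flutter permission plugin) looks roughly like this; it's just a diagnostic sketch of what I'd expect the system to report:

import AVFoundation

// Minimal native check of the camera authorization status, independent of any plugin.
func checkCameraAuthorization() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            print("Camera access granted: \(granted)")
        }
    case .authorized:
        print("Camera access already authorized")
    case .denied, .restricted:
        print("Camera access denied or restricted")
    @unknown default:
        print("Unknown authorization status")
    }
}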
I am currently developing an AR experience using ARKit with SceneKit and am looking to implement functionality that enables:
Zooming into the AR camera feed, ideally leveraging the ultra-wide or telephoto lenses available on supported devices.
Macro-style focus capabilities, allowing users to view and interact with virtual content closely aligned with small or nearby real-world objects (within a few centimeters).
My objective is to ensure that ARKit continues to render the scene accurately while enabling a zoomed-in view or macro-level focus for better detail visibility and alignment.
Could you please advise on:
Whether ARKit currently supports camera zoom or allows access to macro or ultra-wide cameras within an ARSession.
Limitations or considerations when using multi-camera setups in conjunction with ARKit.
Any guidance or references to documentation or sample code would be greatly appreciated.
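The closest thing I've found so far is the configurable capture device that ARKit exposes on iOS 16 and later; a rough sketch is below, although I'm not sure it goes as far as macro focus or lens selection, which is exactly what I'm asking about:

import ARKit
import AVFoundation

func tryAdjustPrimaryCameraFocus() {
    // Available on iOS 16+ for world-tracking configurations; nil if the device/configuration doesn't support it.
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}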
Best regards,
Ayush
Is there any mechanism to restrict camera usage on a user-owned device, once they have opted in, consented to the restriction, and installed a management profile?
Documentation suggests this was possible with allowCamera, but it has been deprecated on unsupervised devices. Am I understanding correctly that it's simply not possible anymore unless the device is supervised?
I want to limit my child's phone usage at night by allowing them to scan a QR code to enforce app limitations via ScreenTime.
When they scan the QR code, I'm still unable to prevent them from accessing the Phone, Messages, and Camera app via ScreenTime.
Is there a way I can block or heavily restrict their access to these three apps via ScreenTime?
How do I prevent my child from undoing or evading the Screen Time restrictions I'm trying to enforce?
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest.
Occasionally (but not always), I receive the following runtime error:
Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}
From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest.
Could someone clarify:
What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects?
Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request?
This information would be extremely helpful to reliably avoid this internal error.
Thank you!
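In the meantime I've been guarding against very small boxes before creating the request; the 0.02 threshold below is just a guess on my part, not a documented limit:

import Vision

let minNormalizedSide: CGFloat = 0.02 // assumed minimum; not from Apple documentation

func makeTrackingRequest(from observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
    let box = observation.boundingBox // normalized coordinates (0...1)
    guard box.width >= minNormalizedSide, box.height >= minNormalizedSide else {
        return nil // skip tracking for boxes we suspect are too small
    }
    let request = VNTrackObjectRequest(detectedObjectObservation: observation)
    request.trackingLevel = .accurate
    return request
}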
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: ML Compute, Machine Learning, Camera, AVFoundation
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic-range optimizations, and they completely break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.
Details
Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera and one very important aspect of it is luminance measurements.
This is particularly relevant since the phone's light sensor has no publicly accessible API, and the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.
Implementation
The app uses AVFoundation and explicitly picks individual cameras since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for the use in experiments. Therefore it always stays in custom exposure mode. The app uses YUV420 color space and the individual frames are analyzed in Metal using compute shaders.
However, the effects discussed here still occur if I remove all code that controls the camera and replace it with a simple sequence: set the exposure mode to custom, set custom exposure values, set a fixed white balance, and then set the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices.
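For clarity, that stripped-down sequence looks roughly like this (a sketch; the shutter speed and ISO values are placeholders and would need to be clamped to the active format's supported range):

import AVFoundation

func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Placeholder values: 1/100 s shutter, ISO clamped into the supported range.
    let duration = CMTime(value: 1, timescale: 100)
    let iso = max(device.activeFormat.minISO, min(100, device.activeFormat.maxISO))
    device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)

    // Fix white balance at the current gains.
    device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains, completionHandler: nil)

    // Finally switch to locked so the physical exposure parameters stop changing.
    if device.isExposureModeSupported(.locked) {
        device.exposureMode = .locked
    }
}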
The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on github.
The videos below use the implementation with the suggestion from Stack Overflow, but the effect can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like Blackmagic Camera, to quote a reputable company), as well as with the stock Camera app after pressing and holding on the preview to enable AE/AF lock.
Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable when something changes at the edge of the frame as I move a black piece of cardboard in from above:
https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area, and as mentioned before, it should not change because of something happening at the side of the frame (at worst it should get a bit darker because of the cardboard's shadow).
In my opinion, the iPhone changes its mind on the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled:
https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes: the brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate post-processing.
So...
Any suggestion on how to prevent this behavior would be highly appreciated.
Hi, I'm working with CameraFrameProvider from the Enterprise APIs. Is it always capped at 30 fps, or is there something I can switch to get more?
I assume it is capped at 30, so let me cram in an additional question here :). If I got a developer strap and attached an external camera capable of more than 30 fps, would I get the full stream, or would some other limitation kick in?
Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°.
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
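For reference, the unprojection I use is essentially the standard pinhole model (sketch below; it assumes the intrinsics have been scaled to the depth map's resolution, which I believe is necessary since the calibration data is given at its reference dimensions):

import simd

// Unproject a depth-map pixel (u, v) with depth z (in meters) into camera space
// using the 3x3 intrinsic matrix (column-major, so K[column][row]).
func unproject(u: Float, v: Float, z: Float, intrinsics K: simd_float3x3) -> SIMD3<Float> {
    let fx = K[0][0]
    let fy = K[1][1]
    let cx = K[2][0]
    let cy = K[2][1]
    let x = (u - cx) * z / fx
    let y = (v - cy) * z / fy
    return SIMD3<Float>(x, y, z)
}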
Thanks in advance for your help!
I am developing a video streaming app for iPhone.
The minimum version is iOS 13.
I want to connect an external USB camera to the iPhone app and stream from it.
I have looked through a lot of information and have not found how to do this.
Is it possible to do this? Is there any documentation on this?
Hi,
At work, we've done some development on an Apple Vision Pro. On the project, we used object tracking to track an object in 3D and found the default tracking refresh rate (I believe 5 Hz) to be too slow, so we applied for the enterprise APIs so we could change it.
At some point in the capabilities (as a beginner to Swift and the Apple development environment), I noticed that's where you enable the Object Tracking Parameter Adjustment API, and I did so before hearing back about whether we had been granted access to the enterprise API and the license file that comes with it. I then set the refresh rate to 30 Hz and logged the settings of the ObjectTrackingProvider, which showed it was set at 30 Hz, and it felt better than the default when we ran our app. In the Xcode runtime logs, there was no warning or error saying that the license file for the enterprise API was not found (and I don't think we ever heard back from Apple on whether they had granted our request; even if they did, I think the license would be expired by now).
Fast forward to today, I was running the sample code of the Main Camera access for VisionOS linked in the official developer documentation and when I ran the project in Xcode, I noticed in the logs that it wanted an enterprise license and that's why it wasn't running as expected in the immersive space. We've since applied for the Enterprise API for Main Camera Access.
I'm now confused: did I mistakenly believe the object tracking refresh rate was set to 30 Hz when it actually wasn't, due to the lack of a license file and not being granted access to the enterprise APIs? It seemed to be running as expected without a license file. Is the Object Tracking Parameter Adjustment API handled with different permissions than the Main Camera Access API, even though they are both enterprise APIs?
This is all for internal development; we're not planning on distributing an app. But I find the behaviour between the different enterprise APIs confusing. Does anyone have more insight? I find the developer notes on the enterprise APIs to be a bit sparse.
I’m building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great. However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that the video frames in these color spaces have more than 8 bits per channel after being converted to CGImage, causing the texture creation to fail. I tried converting the CGImage to a lower bit depth to successfully create the texture, but the final output image is garbled and not as expected. Is there a solution to this issue?
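One route I'm experimenting with is wrapping the pixel buffer planes directly as Metal textures via CVMetalTextureCache instead of going through CGImage; this is only a sketch, and the .r16Unorm choice for the 10-bit luma plane is my assumption, not something I've verified for appleLog:

import AVFoundation
import CoreVideo
import Metal

final class FrameTextureConverter {
    private var textureCache: CVMetalTextureCache?

    init(device: MTLDevice) {
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    // Wrap the luma plane (plane 0) of the sample buffer's pixel buffer as an MTLTexture.
    func lumaTexture(from sampleBuffer: CMSampleBuffer) -> MTLTexture? {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                  .r16Unorm, width, height, 0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }
}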
Hi,
I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames provided seem to have only intrinsics/extrinsics info, but not the distortion lookup table.
Is there some magic setting, or function to do that (I can't seem to find anything like this)? Or is there a way to use AVCameraCalibrationData together with provider?
I have an AR game using ARKit with SceneKit that works just fine in iOS 17.
In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking and obviously the whole game is useless.
I narrowed down the issue to showing the Game Center Access Point.
My app has ViewController 1 (VC1) showing the main menu and that's where I want to show the GC Access Point. From there you open VC2 which shows a list of levels. Selecting any level will open VC3 which has the ARScene.
Following is the code I use to start Game Center in VC1:
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    // Game Center is ready only when no login view controller needs to be shown and there is no error.
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)

    // Present the Game Center login view controller if one is provided.
    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }

    if error != nil {
        print(error?.localizedDescription ?? "")
    }

    // Show the Access Point in the top-leading corner once authentication succeeds.
    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}
When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.
If in VC1 I don't run GKAccessPoint.shared.isActive = true, so I don't activate the access point, the behavior is as follows:
If I wait until after the Game Center login animation completes and closes on its own and then I proceed to VC2 and VC3, the camera image will show correctly
If I quickly move to VC2 before the Game Center login animation has completed, so my code will close it by setting active = false, and then I continue to VC3, I will see the black background problem.
So it does look like activating the access point and then de-activating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.
Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground, the camera suddenly shows the real world correctly!
I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it also doesn't run the session start code.
But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same thing manually in my code?
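For reference, the manual reset I tried is essentially the following, triggered when the app returns to the foreground (a simplified sketch; the helper class name and plane-detection setting are illustrative, not my actual VC3 code):

import ARKit
import UIKit

final class GameARSessionResetter {
    private let sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        // Re-run the session whenever the app returns to the foreground.
        NotificationCenter.default.addObserver(forName: UIApplication.didBecomeActiveNotification,
                                               object: nil, queue: .main) { [weak self] _ in
            self?.restartSession()
        }
    }

    func restartSession() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.pause()
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}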
I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well).
So could this be a bug in iOS 18?
As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point (hoping that the user will not jump to VC2 too quickly) and show my own button which opens Game Center. But I'd rather give users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2, and that would be an awful experience.
Any help would be greatly appreciated. Thanks!