I don't know what else to say other than that the 18.0.1 update did not resolve my camera issue on my iPhone 16 Pro. It still crashes a few times a week, and I have to restart my phone every time. I use my camera heavily, as I have 2 toddlers. C'mon, Apple! Get it together; I've owned this phone for a month now.
Camera
Discuss using the camera on Apple devices.
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit.
Specifically, I need to understand:
The exact rotation matrix applied to the front-facing camera's output in ARKit.
Whether this matrix is consistent across all iPhone models or if there are variations.
If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
How this rotation matrix relates to the transform property of `ARFrame.camera`.
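For reference, here is a minimal sketch of reading the per-frame camera transform in a session delegate; the rotation is the upper-left 3x3 block of `ARFrame.camera.transform`, and `viewMatrix(for:)` gives the orientation-adjusted variant. This is only an illustration under the assumption that a front-camera (face tracking) session is already running; it is not Apple's answer to the model-to-model consistency question.

import ARKit
import UIKit

final class FrontCameraSessionDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Camera-to-world transform for this frame; the upper-left 3x3 block
        // of this 4x4 matrix is the rotation applied to the camera.
        let cameraToWorld: simd_float4x4 = frame.camera.transform
        // Orientation-aware view matrix, e.g. for portrait presentation.
        let portraitView = frame.camera.viewMatrix(for: .portrait)
        print(cameraToWorld, portraitView)
    }
}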
Hi, I am trying to stream spatial video in realtime from my iPhone 16.
I am able to record spatial video as a file output using:
let videoDeviceOutput = AVCaptureMovieFileOutput()
However, when I try to grab the raw sample buffer, it doesn't include any spatial information:
let captureOutput = AVCaptureVideoDataOutput()
// when setting up the camera session
session.addOutput(captureOutput)
captureOutput.setSampleBufferDelegate(self, queue: sessionQueue)
// delegate callback
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // use the sample buffer (but no spatial data is available here)
}
Is this how it's supposed to work, or am I missing something?
This video, https://developer.apple.com/videos/play/wwdc2023/10071, gives a clue about setting up spatial streaming, and I've got the backend ready for 3D HLS streaming. Now I'm only stuck on how to send the video stream to my server.
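For context, here is a minimal sketch of the file-output path described above. The isSpatialVideoCaptureSupported / isSpatialVideoCaptureEnabled property names are my assumption about the iOS 18 AVCaptureMovieFileOutput API and should be checked against the current SDK; as the post observes, the AVCaptureVideoDataOutput sample buffers do not appear to carry the spatial information.

import AVFoundation

// Sketch: spatial capture via the movie file output (property names are assumed, see above).
let session = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
}
if movieOutput.isSpatialVideoCaptureSupported {
    // Spatial metadata ends up in the recorded file rather than in
    // the raw sample buffers delivered to a data-output delegate.
    movieOutput.isSpatialVideoCaptureEnabled = true
}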
I add three controls to an AVCaptureSession and then remove them all. The number of controls in the session is indeed 0 afterwards, but the camera controls button still shows the previous three controls. Going from 3 to 2 or from 3 to 1 updates correctly, and 0 to 3 works, but 3 to 0 does not.
if (self.captureControl.zoom) {
    if (self.zoomScaleControl) {
        self.zoomScaleControl.enabled = false;
        [_session removeControl:self.zoomScaleControl];
    }
    AVCaptureSlider *zoomSlider = [self.captureControl.zoom fetchCaptureSlider];
    [zoomSlider setActionQueue:dispatch_get_main_queue() action:^(float zoomFactor) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeZoomScale:)]) {
            [self.dataOutputDelegate videoCaptureSession:self tryChangeZoomScale:zoomFactor];
        }
    }];
    self.zoomScaleControl = zoomSlider;
} else {
    if (self.zoomScaleControl) {
        self.zoomScaleControl.enabled = false;
        [_session removeControl:self.zoomScaleControl];
    }
    self.zoomScaleControl = nil;
}
if (self.captureControl.exposure) {
    if (self.exposureBiasControl) {
        self.exposureBiasControl.enabled = false;
        [_session removeControl:self.exposureBiasControl];
    }
    AVCaptureSlider *exposureSlider = [self.captureControl.exposure fetchCaptureSlider];
    [exposureSlider setActionQueue:dispatch_get_main_queue() action:^(float bias) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeExposureBias:)]) {
            [self.dataOutputDelegate videoCaptureSession:self tryChangeExposureBias:bias];
        }
    }];
    self.exposureBiasControl = exposureSlider;
} else {
    if (self.exposureBiasControl) {
        self.exposureBiasControl.enabled = false;
        [_session removeControl:self.exposureBiasControl];
    }
    self.exposureBiasControl = nil;
}
if (self.captureControl.len) {
    if (self.lenControl) {
        self.lenControl.enabled = false;
        [_session removeControl:self.lenControl];
    }
    ORLenCaptureControlCustomModel *len = self.captureControl.len;
    AVCaptureIndexPicker *picker = [len fetchCaptureSlider];
    [picker setActionQueue:dispatch_get_main_queue() action:^(NSInteger selectedIndex) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:didChangeLenIndex:datas:)]) {
            [self.dataOutputDelegate videoCaptureSession:self didChangeLenIndex:selectedIndex datas:self.captureControl.len.indexDatas];
        }
    }];
    self.lenControl = picker;
} else {
    if (self.lenControl) {
        self.lenControl.enabled = false;
        [_session removeControl:self.lenControl];
    }
    self.lenControl = nil;
}
if ([_session canAddControl:self.zoomScaleControl]) {
    [_session addControl:self.zoomScaleControl];
} else {
    self.zoomScaleControl = nil;
}
if ([_session canAddControl:self.lenControl]) {
    [_session addControl:self.lenControl];
} else {
    self.lenControl = nil;
}
if ([_session canAddControl:self.exposureBiasControl]) {
    [_session addControl:self.exposureBiasControl];
} else {
    self.exposureBiasControl = nil;
}
if (_session.controlsDelegate == nil) {
    [_session setControlsDelegate:self queue:GetCaptureControlQueue()];
}
Hello Apple Team,
Is it possible to change the zoom factor, exposure, white balance, and other settings of an iOS ARKit session?
I know how to do it using an AVCaptureSession;
however, I can't figure out how to access the AVCaptureDeviceInput of the current AR session.
Thanks
PS: I'm using ARKit and RealityKit on iOS 17.
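Not an official answer, but one route worth checking is ARKit's configurableCaptureDeviceForPrimaryCamera (iOS 16+), which exposes the AVCaptureDevice backing the AR session so it can be locked and configured. A minimal sketch, assuming a world-tracking configuration:

import ARKit
import AVFoundation

// Sketch: adjusting zoom on the capture device that an ARKit world-tracking session uses.
func setARCameraZoom(_ factor: CGFloat) {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        return
    }
    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = min(max(factor, device.minAvailableVideoZoomFactor),
                                     device.maxAvailableVideoZoomFactor)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device: \(error)")
    }
}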
Hi,
After installing the iPadOS 18 beta 3 on my iPad (7th gen), the default Camera app no longer detects QR codes.
I tried updating to beta 7, but the issue remained.
Also, third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes no longer work.
You can reproduce the issue by running the default Camera app or the AVFoundation sample code from the Apple developer site on an iPad (7th gen) with the iPadOS 18 beta installed.
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Has anyone else experienced this issue?
I would like to know if this issue occurs on other iPad models as well.
This is similar to the following issue that previously occurred with iPadOS 17.4.
https://support.apple.com/en-lamr/118614
https://developer.apple.com/forums/thread/748092
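For anyone who wants to reproduce this without downloading the sample project, a minimal AVCaptureMetadataOutput setup looks roughly like this (standard AVFoundation API; session start-up and preview are omitted, and this is only a sketch):

import AVFoundation

final class QRScannerDelegate: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()
    private let qrOutput = AVCaptureMetadataOutput()

    func configure(with input: AVCaptureDeviceInput) {
        if session.canAddInput(input) {
            session.addInput(input)
        }
        if session.canAddOutput(qrOutput) {
            session.addOutput(qrOutput)
            qrOutput.setMetadataObjectsDelegate(self, queue: .main)
            // Must be set after the output has been added to the session.
            qrOutput.metadataObjectTypes = [.qr]
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // On the affected iPad (7th gen) + iPadOS 18 beta setups, this reportedly never fires.
        print(metadataObjects)
    }
}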
I'm developing a LockedCameraCapture extension.
My extension can capture photos, save them to the system photo library, and load them from the system photo library. That's a pretty nice extension.
I want to fix the interface orientation to portrait for one particular screen (the capture screen), but allow another screen (the photo viewing screen) to use landscape orientation.
So I'm using the "supportedInterfaceOrientations" property and the "setNeedsUpdateOfSupportedInterfaceOrientations" method for interface orientation flexibility.
This code declares that the screen only supports portrait orientation.
override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
.portrait
}
And I call this to apply the orientation setting.
// UIViewController # viewDidLoad
setNeedsUpdateOfSupportedInterfaceOrientations()
My app works as expected, but my capture extension doesn't: its capture screen can still rotate to landscape, which is not the intended behavior.
Task {
    for await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
        switch update {
        case .initial(let urls):
            print("frank: init \(urls)")
            await MainActor.run {
                let label = UILabel(frame: CGRect(x: 100, y: 100, width: 100, height: 30))
                label.text = "frank test"
                label.textColor = .black
                UIViewController.getTop().view.addSubview(label)
            }
        case .added(let url):
            print("frank: add \(url)")
        case .removed(let url):
            print("frank: removed \(url)")
        default:
            break
        }
    }
}
Why is 'case .initial(let urls)' never executed? Can someone provide sample code?
A very common use case in our iOS app is that users take a large number of pictures (about 30) in low-light conditions using the camera app, and immediately after, they try to upload them to our servers.
We measured the time to load photos from the PHPickerResult. For most photos, it takes less than 100 milliseconds, but for some of them, it takes several seconds—we even saw minutes in some extreme cases.
We believe this started happening with iOS 17, when deferred photo processing was introduced. If users take the pictures using our in-app camera experience, the options to customize the camera are enough to avoid the long waiting times. However, the majority of our users still prefer to take the photos with the camera app, and there is little we can do about that.
In the past few weeks, we tried many combinations:
Without asking for permissions, we tried loadFileRepresentation, loadData, and loadObject.
We explored the PHImageManager route, asking permissions and with different options for deliveryMode, resizeMode, version, isSynchronous, and allowSecondaryDegradedImage.
We also tried fetching the photos in parallel, with very bad results.
In summary, nothing helped the long waiting times—minutes in some cases.
The first question, then: is there anything we can do to skip the post-processing of the photos and get them quickly? We could accept the unprocessed images.
At a minimum, we would like to show our users what we are doing and why we are taking so much time. We tried to load thumbnails with loadPreviewImage and put a progress indicator on top. This method consistently gives us an error for all photos:
(lldb) p error.localizedDescription
(String) "Cannot load preview."
We can load thumbnails with the PHImageManager option, but it seems excessive to need to get permissions only for that.
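For reference, the permission-gated PHImageManager thumbnail route mentioned above looks roughly like this (a sketch only; the target size is arbitrary, and the asset is assumed to come from the picker's asset identifiers):

import Photos
import UIKit

// Sketch: requesting a small, fast thumbnail for a PHAsset (requires photo library access).
func requestThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat      // accept a degraded image quickly
    options.resizeMode = .fast
    options.isNetworkAccessAllowed = false
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: CGSize(width: 200, height: 200),
                                          contentMode: .aspectFill,
                                          options: options) { image, _ in
        completion(image)
    }
}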
The second question, then: what can we do to load thumbnails without asking for permission?
I created a feedback report with a video and sample code to reproduce -> FB15493683
This idea would be cool:
When people finish recording a video and later realize there's something else worth capturing, they can only create a second clip. But what if it were possible to reopen the first video and continue recording from where they left off? This would be a great convenience for many people.
Hello developer community.
I recently purchased my new iPhone 16 Pro Max; it is a premium device with great quality overall.
However, I am having big trouble shooting in ProRAW Max (48 MP mode) with the native camera.
Just to be clear, the problem I will describe does not happen in third-party apps such as ProCam; it only happens with the native camera.
When I use ProRAW Max, take a photo, and view it in the gallery, the image can't load and render properly. Even when I zoom the image in all the way, I can see pixelated portions, defects, very low resolution, and excessive denoising.
For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos with ProCam (same settings and configuration) on the 16 Pro Max. I take the photo, open the gallery, and see full detail when zoomed to 100%.
I tried formatting the phone and reinstalling the software via my Mac. I even looked through some forums to see if anyone has the same issue, but there is very little information available so far.
I'm in contact with Apple Support in my country (Portugal), and they escalated this problem to the engineers (that's what I've been told).
They did all the tests remotely (via analysis and improvements) and told me that my phone is perfect in the hardware department. I will wait to be contacted again over the next few days.
I'm on iOS 18.0.1 (the latest software available at this time).
I tried multiple 16 Pro Max units, from friends, family, and stores (roughly 10 units), and they all showed the exact same problem.
I’m a professional photographer, so I find this frustrating and unacceptable.
I would appreciate any additional suggestion or information. Thank you!
Cannot add photos or files because they are bigger than 5 MB.
Hi Apple Engineer,
My app uses the ImageCaptureCore framework to communicate with an external DSLR camera. When I connect my device to a camera, I execute requestContentsAuthorization(completion:) to request access to files on connected cameras. This is the dialog shown when the request is executed:
When I tap "OK", the contents authorization status stays "Denied", even when I enable the "Files and Folders" permission in the "Privacy & Security" settings.
When I switch the permission on, the switch flips back to off. You can see the reproduction in this Google Drive video: https://drive.google.com/file/d/15B-R5TONgMWg8qFiYUGK0hTy62dsVGUX/view?usp=sharing
The issue persists even after:
Uninstalling and reinstalling the app
Doing "Reset Location & Privacy"
Doing "Reset All Settings"
I attached the sysdiagnose files in this Google Drive file: https://drive.google.com/file/d/11lovl_xC95AKXQTkZ1_e6UbEgS5md0Z3/view?usp=sharing
I first experienced this issue while researching ImageCaptureCore's API. I executed resetContentsAuthorizationWithCompletion:, and after that my permission request has stayed denied as described above :(
Other developers have experienced the same issue: https://forums.developer.apple.com/forums/thread/756960 . There is a simple sample project there, and it reproduces in my case.
Could you help me understand how my app can be granted the "Files and Folders" permission when using ImageCaptureCore? Could it be a bug in the system?
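For completeness, a minimal sketch of the authorization call being described, using the method name from the post; the receiver (ICDeviceBrowser) and the completion's status type are my assumptions and should be verified against the ImageCaptureCore headers.

import ImageCaptureCore

// Sketch: requesting access to files on connected cameras (names per the post; verify signatures).
let deviceBrowser = ICDeviceBrowser()
deviceBrowser.requestContentsAuthorization { status in
    // On the affected setups described above, this reportedly stays denied
    // even after toggling "Files and Folders" in Privacy & Security.
    print("Contents authorization status: \(status)")
}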
Hello. I'm running the 18.3 beta on an iPhone 15 Pro and have noticed that the green camera indicator light turns on when I switch apps. I'm also unable to use the flashlight until it turns off (usually a second or two). I've checked my privacy and access settings and nothing looks out of the norm. I've also closed all running apps, but the issue continues.
The back camera of my new iPhone 16 Pro Max keeps crashing on almost every shot, so basically I cannot take any pictures on this phone. Such a huge failure by Apple!
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one.
I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed?
Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated.
Thank you!
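For what it's worth, here is a minimal sketch of the data-output side of that question: with alwaysDiscardsLateVideoFrames left at its default of true, frames that arrive while the delegate callback is still busy are dropped rather than queued, and they are reported through the didDrop callback instead. The YOLOv8 inference call is a placeholder, not real code from the post.

import AVFoundation

final class DetectionDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let videoOutput = AVCaptureVideoDataOutput()
    let processingQueue = DispatchQueue(label: "detection.frames")

    func configure(session: AVCaptureSession) {
        // true (the default): late frames are dropped instead of queued, so heavy work
        // in the delegate lowers the effective frame rate rather than adding latency.
        videoOutput.alwaysDiscardsLateVideoFrames = true
        videoOutput.setSampleBufferDelegate(self, queue: processingQueue)
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // runDetection(on: sampleBuffer)   // placeholder for the YOLOv8 inference
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Frames that could not be delivered in time land here instead.
    }
}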
Hey, captureHighResolutionFrame() produces the normal camera shutter sound and that really doesn't fit the ARKit context. I can't override it the usual way because there's no AVCaptureSession object in ARSession. Any ideas on what to do? Thanks!
Hello,
I am a developer currently working on an AR application using ARKit. I aim to implement a Zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate Optical Zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.
Approaches Being Considered:
Using UIPinchGestureRecognizer to adjust the camera's field of view
Modifying the scale property of SCNNode to enlarge/reduce specific objects
Leveraging AVFoundation to control the camera's optical zoom
Questions:
Compatibility Between ARKit and Optical Zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
Integrating Object Distance Measurement with Zoom Functionality: What is the most effective approach to measure and display the distance to an object in real-time when a user zooms in on it?
User Experience Considerations: Do you have any UI/UX design tips for implementing optical zoom to ensure a natural and intuitive experience? For example, how can visual feedback for zoom actions and distance measurements be effectively presented to users?
Performance Optimization: What optimization strategies can minimize potential performance issues when implementing both optical zoom and distance measurement features simultaneously?
Example Code and Reference Materials: Could you share any example code or reference materials that demonstrate similar functionalities?
Thank you.
Example Code Request:
If possible, providing sample code that integrates optical zoom with distance measurement would be extremely helpful.
Reference Links:
Please share any tutorials or resources that demonstrate the combined use of ARKit and AVFoundation.
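Not a complete answer, but for the distance-measurement part, here is a minimal ARKit/SceneKit sketch that raycasts from a screen point and measures the distance from the camera to the hit; the optical-zoom side would still need the AVFoundation work discussed above, and this is an illustration rather than verified sample code.

import ARKit

// Sketch: distance (in meters) from the camera to whatever an ARSCNView raycast hits.
func distanceToTappedPoint(_ point: CGPoint, in sceneView: ARSCNView) -> Float? {
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let result = sceneView.session.raycast(query).first,
          let frame = sceneView.session.currentFrame else {
        return nil
    }
    // Translation columns of the two transforms give the world positions.
    let cameraPosition = SIMD3<Float>(frame.camera.transform.columns.3.x,
                                      frame.camera.transform.columns.3.y,
                                      frame.camera.transform.columns.3.z)
    let hitPosition = SIMD3<Float>(result.worldTransform.columns.3.x,
                                   result.worldTransform.columns.3.y,
                                   result.worldTransform.columns.3.z)
    return simd_distance(cameraPosition, hitPosition)
}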
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video.
But I found the video by my app differs from the iPhone build-in camera app in:
Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Are there any APIs or parameters that would make my app's output closer to the iPhone's built-in Camera app? I have tried "whiteBalanceMode" and "exposureMode" but had no luck.
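In case it helps with comparisons, here is a minimal sketch of the continuous auto white-balance and exposure configuration mentioned above; it mirrors what the post says was already tried, so the color difference may well come from somewhere else.

import AVFoundation

// Sketch: letting the device continuously manage white balance and exposure.
func applyAutoColorSettings(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
            device.whiteBalanceMode = .continuousAutoWhiteBalance
        }
        if device.isExposureModeSupported(.continuousAutoExposure) {
            device.exposureMode = .continuousAutoExposure
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}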
Hi,
I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from the LiDAR sensor, but it seems ARKit doesn't offer this directly via the standard APIs available on iPad.
I wanted to ask if there are any possible options to access depth data or enhanced camera capabilities using the Enterprise API.
Specifically:
Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
Are there any workarounds or alternative methods to obtain depth data from the camera?
I am currently working on a project where I aim to overlay the camera feed obtained via Apple Vision Pro's camera access API so that it aligns perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
The attached image shows the difference between the camera feed and the user's actual perspective field of view.
I want to align the camera feed image to the user's perspective.