Discuss using the camera on Apple devices.

Posts under Camera tag

181 Posts
Post · Replies · Boosts · Views · Activity

com.apple.security.device.camera is being added to a Mac Catalyst build in Xcode 16.1
Hi, I have been building a Mac Catalyst version of an iOS app for years, using a separate build with a specific .entitlements file that excludes com.apple.security.device.camera. Yet when I now build with Xcode 16.1, that entitlement is included. I have double-checked the signing entitlements for my Mac Catalyst build and they are configured properly. I have checked my .entitlements file to ensure com.apple.security.device.camera is not there. All is as it should be; I have changed nothing and my build flow is the same. App Store Review has prevented the Mac build from being released because com.apple.security.device.camera is set. What can I do to correct this?
1
0
46
2d
Front-Facing Camera Rotation Matrix in ARKit: Consistency, Transformations, and `ARFrame.camera` Alignment
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit. Specifically, I need to understand:

- The exact rotation matrix applied to the front-facing camera's output in ARKit.
- Whether this matrix is consistent across all iPhone models, or if there are variations.
- Whether any transformations are applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
- How this rotation matrix relates to the transform property of `ARFrame.camera`.
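Not from the original thread, but a minimal sketch of where these matrices surface in the public ARKit API may help frame the question; the face-tracking configuration, viewport size, and printing are illustrative assumptions:

```swift
import ARKit
import UIKit

final class FrontCameraTransformInspector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        // ARFaceTrackingConfiguration drives the front-facing (TrueDepth) camera.
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let camera = frame.camera
        // Camera pose (rotation + translation) in world space.
        let cameraToWorld: simd_float4x4 = camera.transform
        // Orientation-aware view matrix (world -> camera) for portrait.
        let worldToCamera = camera.viewMatrix(for: .portrait)
        // 2D transform mapping normalized captured-image coordinates into the viewport;
        // this is where the sensor-to-portrait rotation shows up.
        let display = frame.displayTransform(for: .portrait,
                                             viewportSize: CGSize(width: 390, height: 844))
        print(cameraToWorld, worldToCamera, display)
    }
}
```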
0
0
150
1w
Some problems using AVCaptureControl
I add 3 controls to the AVCaptureSession and then remove them all. The number of controls in the session is indeed 0, but the Camera Control button still shows the previous 3 controls. Going from 3 to 2 or from 3 to 1 controls updates correctly; 3 to 0 does not, while 0 to 3 works fine.

    if (self.captureControl.zoom) {
        if (self.zoomScaleControl) {
            self.zoomScaleControl.enabled = false;
            [_session removeControl:self.zoomScaleControl];
        }
        AVCaptureSlider *zoomSlider = [self.captureControl.zoom fetchCaptureSlider];
        [zoomSlider setActionQueue:dispatch_get_main_queue() action:^(float zoomFactor) {
            @strongify(self);
            if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeZoomScale:)]) {
                [self.dataOutputDelegate videoCaptureSession:self tryChangeZoomScale:zoomFactor];
            }
        }];
        self.zoomScaleControl = zoomSlider;
    } else {
        if (self.zoomScaleControl) {
            self.zoomScaleControl.enabled = false;
            [_session removeControl:self.zoomScaleControl];
        }
        self.zoomScaleControl = nil;
    }
    if (self.captureControl.exposure) {
        if (self.exposureBiasControl) {
            self.exposureBiasControl.enabled = false;
            [_session removeControl:self.exposureBiasControl];
        }
        AVCaptureSlider *exposureSlider = [self.captureControl.exposure fetchCaptureSlider];
        [exposureSlider setActionQueue:dispatch_get_main_queue() action:^(float bias) {
            @strongify(self);
            if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeExposureBias:)]) {
                [self.dataOutputDelegate videoCaptureSession:self tryChangeExposureBias:bias];
            }
        }];
        self.exposureBiasControl = exposureSlider;
    } else {
        if (self.exposureBiasControl) {
            self.exposureBiasControl.enabled = false;
            [_session removeControl:self.exposureBiasControl];
        }
        self.exposureBiasControl = nil;
    }
    if (self.captureControl.len) {
        if (self.lenControl) {
            self.lenControl.enabled = false;
            [_session removeControl:self.lenControl];
        }
        ORLenCaptureControlCustomModel *len = self.captureControl.len;
        AVCaptureIndexPicker *picker = [len fetchCaptureSlider];
        [picker setActionQueue:dispatch_get_main_queue() action:^(NSInteger selectedIndex) {
            @strongify(self);
            if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:didChangeLenIndex:datas:)]) {
                [self.dataOutputDelegate videoCaptureSession:self didChangeLenIndex:selectedIndex datas:self.captureControl.len.indexDatas];
            }
        }];
        self.lenControl = picker;
    } else {
        if (self.lenControl) {
            self.lenControl.enabled = false;
            [_session removeControl:self.lenControl];
        }
        self.lenControl = nil;
    }
    if ([_session canAddControl:self.zoomScaleControl]) {
        [_session addControl:self.zoomScaleControl];
    } else {
        self.zoomScaleControl = nil;
    }
    if ([_session canAddControl:self.lenControl]) {
        [_session addControl:self.lenControl];
    } else {
        self.lenControl = nil;
    }
    if ([_session canAddControl:self.exposureBiasControl]) {
        [_session addControl:self.exposureBiasControl];
    } else {
        self.exposureBiasControl = nil;
    }
    if (_session.controlsDelegate == nil) {
        [_session setControlsDelegate:self queue:GetCaptureControlQueue()];
    }
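One thing worth trying (an assumption, not a confirmed fix): batch the control changes into a single configuration transaction so the session, and the Camera Control overlay, sees one atomic update instead of three incremental removals. A minimal Swift sketch of that idea:

```swift
import AVFoundation

// Removes every capture control from the session in one atomic configuration change.
// Assumption: wrapping the removals in beginConfiguration()/commitConfiguration()
// lets the Camera Control UI pick up a single consistent "no controls" state.
func removeAllCaptureControls(from session: AVCaptureSession) {
    session.beginConfiguration()
    for control in session.controls {
        session.removeControl(control)
    }
    session.commitConfiguration()
}
```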
0
0
106
1w
Spatial streaming from iPhone
Hi, I am trying to stream spatial video in realtime from my iPhone 16. I am able to record spatial video as a file output using:

    let videoDeviceOutput = AVCaptureMovieFileOutput()

However, when I try to grab the raw sample buffer, it doesn't include any spatial information:

    let captureOutput = AVCaptureVideoDataOutput()

    // when initializing the camera
    session.addOutput(captureOutput)
    captureOutput.setSampleBufferDelegate(self, queue: sessionQueue)

    // finally
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // use sample buffer (but no spatial data available here)
    }

Is this how it's supposed to work, or am I missing something? This video, https://developer.apple.com/videos/play/wwdc2023/10071, gives a clue towards setting up spatial streaming, and I've got the backend ready for 3D HLS streaming. Now I am only stuck on how to send the video stream to my server.
1
0
237
1w
Interface Orientation doesn't work for LockedCameraCapture extension.
I'm developing a LockedCameraCapture extension. My extension can capture photos, save them to the system photo library, and load them from the system photo library. I want to lock the interface orientation to portrait for a particular screen (the capture screen), but allow landscape orientation on another screen (the photo viewing screen). So I'm using the supportedInterfaceOrientations property and the setNeedsUpdateOfSupportedInterfaceOrientations method for interface orientation flexibility. This code says the screen only supports portrait orientation:

    override var supportedInterfaceOrientations: UIInterfaceOrientationMask { .portrait }

And I call this to apply the orientation setting:

    // in UIViewController's viewDidLoad
    setNeedsUpdateOfSupportedInterfaceOrientations()

My app works as expected, but my capture extension doesn't: the extension's capture screen can still rotate to landscape, which is not the intended behavior.
2
0
187
1w
LockedCameraCaptureManager
    Task {
        for await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
            switch update {
            case .initial(let urls):
                print("frank: init \(urls)")
                await MainActor.run {
                    let label = UILabel(frame: CGRect(x: 100, y: 100, width: 100, height: 30))
                    label.text = "frank test"
                    label.textColor = .black
                    UIViewController.getTop().view.addSubview(label)
                }
            case .added(let url):
                print("frank: add \(url)")
            case .removed(let url):
                print("frank: removed \(url)")
            default:
                break
            }
        }
    }

Why is case .initial(let urls) never executed? Can someone provide sample code?
1
0
144
2w
PHPickerResult slow loading, plus no thumbnails
A very common use case in our iOS app is that users take a large number of pictures (about 30) in low-light conditions using the camera app, and immediately after, they try to upload them to our servers. We measured the time to load photos from the PHPickerResult. For most photos, it takes less than 100 milliseconds, but for some of them it takes several seconds, and we even saw minutes in some extreme cases. We believe this started happening with iOS 17, when deferred photo processing was introduced. If users take the pictures using our in-app camera experience, the options to customize the camera are enough to avoid the long waiting times. However, the majority of our users still prefer to take the photos with the camera app, and there is little we can do about that.

In the past few weeks, we tried many combinations:

- Without asking for permissions, we tried loadFileRepresentation, loadData, and loadObject.
- We explored the PHImageManager route, asking for permissions and with different options for deliveryMode, resizeMode, version, isSynchronous, and allowSecondaryDegradedImage.
- We also tried fetching the photos in parallel, with very bad results.

In summary, nothing helped the long waiting times, which run to minutes in some cases. The first question is then: is there anything we can do to skip the post-processing of the photos and get them fast? We could accept the unprocessed images.

At a minimum, we would like to show our users what we are doing and why it is taking so much time. We tried to load thumbnails with loadPreviewImage and put a progress indicator on top. This method consistently gives us an error for all photos:

    (lldb) p error.localizedDescription
    (String) "Cannot load preview."

We can load thumbnails with the PHImageManager option, but it seems excessive to need permissions only for that. The second question would then be: what can we do to load thumbnails without asking for permission? I created a feedback report with a video and sample code to reproduce -> FB15493683
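For the thumbnail part, the PHImageManager route the post already mentions can be asked explicitly for a fast, possibly degraded image. This is only a sketch of that request, it still requires at least limited library authorization (the trade-off the post wants to avoid), and it assumes the picker was configured with the shared photo library so results carry asset identifiers; the target size is illustrative:

```swift
import Photos
import UIKit

// Requests a small, possibly degraded thumbnail for a picker result's asset identifier.
// Assumption: .fastFormat plus allowSecondaryDegradedImage returns an embedded preview
// quickly even for assets still waiting on deferred processing.
func requestQuickThumbnail(assetIdentifier: String,
                           completion: @escaping (UIImage?) -> Void) {
    guard let asset = PHAsset.fetchAssets(withLocalIdentifiers: [assetIdentifier],
                                          options: nil).firstObject else {
        completion(nil)
        return
    }
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat          // accept a degraded image
    options.resizeMode = .fast
    options.isNetworkAccessAllowed = true
    options.allowSecondaryDegradedImage = true  // iOS 17+
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: CGSize(width: 200, height: 200),
                                          contentMode: .aspectFill,
                                          options: options) { image, _ in
        completion(image)
    }
}
```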
2
0
159
2w
How to capture 48MP photos with the Ultra Wide lens on iPhone 16 Pro Max
I am working on capturing 48MP images using the iPhone 16 Pro Max with the Ultra Wide camera. I've updated the code to capture the maximum supported dimensions with the following snippet:

    if #available(iOS 16.0, *) {
        photoOutput.maxPhotoDimensions = device.activeFormat.supportedMaxPhotoDimensions.last!
        photoSettings.maxPhotoDimensions = .init(width: 5712, height: 4284)
    }

However, I'm still not getting the expected results. My goal is to capture 48MP images, and I want to confirm whether the Ultra Wide camera supports this resolution or whether I'm missing some other configuration. Any guidance would be appreciated!
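As a side note, 5712 x 4284 works out to roughly 24MP, not 48MP. One way to see what the Ultra Wide module actually offers is to dump every format's supported photo dimensions; a minimal sketch (the 8064 x 6048 figure for 48MP is an assumption about how such a format would be reported):

```swift
import AVFoundation

// Prints every format of the Ultra Wide camera together with its maximum photo
// dimensions, so you can check whether a ~48MP option (e.g. 8064x6048) exists at all.
// If no format lists dimensions in that range, the module doesn't support 48MP capture.
func dumpUltraWidePhotoDimensions() {
    guard let device = AVCaptureDevice.default(.builtInUltraWideCamera,
                                               for: .video,
                                               position: .back) else { return }
    for format in device.formats {
        let dims = format.supportedMaxPhotoDimensions
            .map { "\($0.width)x\($0.height)" }
            .joined(separator: ", ")
        print(format.formatDescription, "->", dims)
    }
}
```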
1
2
234
2w
Issues with ProRAW MAX(48) and stock camera app
Hello developer community. I recently purchased my new iPhone 16 Pro Max; it is a premium device with great overall quality. However, I am having big trouble shooting in ProRAW Max (48MP mode) with the native camera. Just to be clear, the problem I will describe does not happen in third-party apps such as ProCam, only with the native camera. When I use ProRAW Max, take the photo, and view it in the gallery, the image can't load and render properly. When I zoom the image in all the way, I can see pixelated portions, defects, very low resolution, and excessive denoising. For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos from ProCam (same settings and configuration) on the 16 Pro Max: I take the photo, open the gallery, and see full detail when zoomed to 100%. I tried formatting the phone and reinstalling the software via my Mac. I also looked through forums to find whether anyone has the same issue; the information available so far is very limited. I'm in contact with Apple support in my country (Portugal), and they escalated this problem to the engineers (that's what I've been told). They ran all the tests remotely (via analytics and diagnostics) and told me that my phone is fine on the hardware side. I will wait to be contacted again in the next few days. I'm on iOS 18.0.1 (the latest software available at this time). I tried multiple 16 Pro Max units from friends, family, and stores (roughly 10 units), and they all showed exactly the same problem. I'm a professional photographer, so I find this frustrating and unacceptable. I would appreciate any additional suggestions or information. Thank you! (Cannot add photos or files because they are bigger than 5MB.)
1
0
258
2w
App's Files and Folders permission stays denied, even when enabled from Settings
Hi Apple Engineer, My app uses the ImageCaptureCore framework to communicate with an external DSLR camera. When I connect my device to a camera, I call requestContentsAuthorization(completion:) to request access to files on connected cameras. When the request's dialog appears and I tap "OK", the content authorization status stays "Denied", even when I open the "Files and Folders" permission in the "Privacy & Security" settings. When I switch the permission on, the switch turns itself back off. You can see a reproduction in this Google Drive video: https://drive.google.com/file/d/15B-R5TONgMWg8qFiYUGK0hTy62dsVGUX/view?usp=sharing

The issue keeps happening even when:

- I uninstall and reinstall the app
- I do "Reset Location & Privacy"
- I do "Reset All Settings"

I attached the sysdiagnose files in this Google Drive file: https://drive.google.com/file/d/11lovl_xC95AKXQTkZ1_e6UbEgS5md0Z3/view?usp=sharing

I first experienced this issue after researching ImageCaptureCore's API; I had executed resetContentsAuthorizationWithCompletion:. After that, my permission request keeps being denied as described above. Another developer has experienced the same thing: https://forums.developer.apple.com/forums/thread/756960 . There is a simple sample project there, and it reproduces in my case. Could you help me get my app granted the "Files and Folders" permission when using ImageCaptureCore? Could it be a bug in the system?
1
1
184
3w
Handling YOLOv8 Object Detection in 60FPS UltraWideCamera on iOS: Frame Processing Query
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one. I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed? Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated. Thank you!
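For what it's worth, AVCaptureVideoDataOutput answers part of this directly: with the default alwaysDiscardsLateVideoFrames = true, frames that arrive while your delegate is still busy are dropped rather than queued, so heavy inference lowers the effective processed frame rate, and the didDrop callback reports each discarded frame. A minimal sketch (the delegate wiring and queue label are illustrative):

```swift
import AVFoundation

final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let processingQueue = DispatchQueue(label: "camera.frames") // serial queue

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        // Default is true: late frames are discarded instead of piling up behind
        // slow model inference, so they are not processed "later with a delay".
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: processingQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Run the detection model here; while this executes, newer frames may be dropped.
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Called for every frame the output discarded because the delegate was busy.
    }
}
```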
2
0
251
3w
Compatibility Between ARKit and Optical Zoom
Hello, I am a developer currently working on an AR application using ARKit. I aim to implement a zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate optical zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.

Approaches being considered:

- Using UIPinchGestureRecognizer to adjust the camera's field of view
- Modifying the scale property of an SCNNode to enlarge/reduce specific objects
- Leveraging AVFoundation to control the camera's optical zoom

Questions:

- Compatibility between ARKit and optical zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
- Integrating object distance measurement with zoom functionality: What is the most effective approach to measure and display the distance to an object in real time when a user zooms in on it?
- User experience considerations: Do you have any UI/UX design tips for implementing optical zoom to ensure a natural and intuitive experience? For example, how can visual feedback for zoom actions and distance measurements be effectively presented to users?
- Performance optimization: What optimization strategies can minimize potential performance issues when implementing both optical zoom and distance measurement features simultaneously?
- Example code and reference materials: Could you share any example code or reference materials that demonstrate similar functionalities? Sample code that integrates optical zoom with distance measurement would be extremely helpful, as would tutorials that demonstrate the combined use of ARKit and AVFoundation.

Thank you.
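On the first question, iOS 16 added a supported hook into the capture device that ARKit is driving: ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera. A minimal sketch, assuming a world-tracking session; the clamping and error handling are illustrative, and whether tracking quality holds up at higher zoom factors is something you would need to validate:

```swift
import ARKit
import AVFoundation

// Adjusts zoom on the camera ARKit is using, via the device handle exposed in iOS 16+.
// Only available for world-tracking configurations; changing zoom changes the rendered
// camera image, so apply it while the session is already running.
func setARCameraZoom(_ factor: CGFloat) {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        return // not supported on this configuration/device
    }
    do {
        try device.lockForConfiguration()
        let clamped = min(max(factor, device.minAvailableVideoZoomFactor),
                          device.maxAvailableVideoZoomFactor)
        device.videoZoomFactor = clamped
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}
```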
1
0
177
3w
Raw point cloud access
Hi, I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from the LiDAR sensor, but it seems ARKit doesn't offer this directly through the standard APIs that iPad uses. I wanted to ask if there are any options to access depth data or enhanced camera capabilities using the Enterprise API. Specifically:

- Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
- Are there any workarounds or alternative methods to obtain depth data from the camera?
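For reference only, the "standard API that iPad uses" mentioned above is ARKit's scene-depth frame semantics on iOS/iPadOS; this sketch shows that route and makes no claim about what the visionOS Enterprise APIs expose:

```swift
import ARKit

// iOS/iPadOS route to LiDAR depth: opt into sceneDepth, then read it off each ARFrame.
func runDepthSession(_ session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    session.run(configuration)
}

// Per-pixel depth in meters; a matching confidence map is also on frame.sceneDepth.
func depthMap(from frame: ARFrame) -> CVPixelBuffer? {
    frame.sceneDepth?.depthMap
}
```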
1
0
145
3w
How to extract the stereo image pair from spatial photos generated by visionOS 2.0
Hi, My app allows users to share and view spatial photos. For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as inputs. For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend. However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, even though the same photos are viewed correctly in the system Photos app:

- Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
- Even when the image pair have the same size, viewing them in my app shows artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.

Google Drive link: https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa

I know that the Quick Look preview application can now support viewing spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons. Below is a code snippet that deals with the extraction. Please point out the correct way to extract the stereo image pair from a generated spatial photo. Happy to submit a code-level support request if more information is needed.

    // the data is from a photos picker item
    let data = try await photo.loadTransferable(type: Data.self)
    let source = CGImageSourceCreateWithData(data as CFData, nil)
    let sbsImage = source.extractSpatialPhoto()

    extension CGImageSource {
        func extractSpatialPhoto() -> UIImage? {
            guard let leftCGImage = extractSpatialImage(at: 0),
                  let rightCGImage = extractSpatialImage(at: 1) else {
                return nil
            }
            let leftImage = UIImage(ciImage: leftCGImage)
            let rightImage = UIImage(ciImage: rightCGImage)
            guard leftImage.size == rightImage.size else { return nil }

            // merge left + right
            let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
            UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
            leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
            rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
            let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return mergedImage
        }

        // not sure if this actually works
        func extractSpatialImage(at index: Int) -> CIImage? {
            guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
                return nil
            }
            var ciImage = CIImage(cgImage: cgImage)
            if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
               let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
               let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
               let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
                // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
                let standardBaseline = 0.064
                // Check if it's the right image (should be at [0.064, 0, 0])
                let isRightImage = (index == 1)
                let expectedPosition = isRightImage ? standardBaseline : 0.0
                // Calculate the translation needed to align to the standard baseline
                let positionDelta = position[0] - expectedPosition
                // Apply translation only if there's a mismatch in position
                if positionDelta != 0 {
                    let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                    ciImage = ciImage.transformed(by: transform)
                }
            }
            return ciImage
        }
    }
1
0
379
3w