Discuss using the camera on Apple devices.

Posts under Camera tag

160 Posts
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
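A minimal sketch of the unprojection step described above, assuming the DepthFloat32 map has been copied row-major into a [Float] buffer and the intrinsics come from AVCameraCalibrationData (fx, fy, cx, cy referenced to intrinsicMatrixReferenceDimensions). One thing worth ruling out is intrinsics that were not rescaled to the depth map's resolution, which can skew angles in exactly this way; the function name and parameters below are illustrative, not part of the sample code.

import Foundation
import CoreGraphics
import simd

// Unproject a depth map (meters) into camera-space 3D points with pinhole intrinsics.
// The intrinsics are rescaled from their reference dimensions to the depth map size.
func pointCloud(fromDepth depth: [Float],
                width: Int, height: Int,
                intrinsics: simd_float3x3,
                referenceDimensions: CGSize) -> [SIMD3<Float>] {
    let scaleX = Float(width) / Float(referenceDimensions.width)
    let scaleY = Float(height) / Float(referenceDimensions.height)
    let fx = intrinsics[0][0] * scaleX
    let fy = intrinsics[1][1] * scaleY
    let cx = intrinsics[2][0] * scaleX
    let cy = intrinsics[2][1] * scaleY

    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for y in 0..<height {
        for x in 0..<width {
            let z = depth[y * width + x]                 // depth in meters (row-major buffer)
            guard z.isFinite, z > 0 else { continue }
            points.append(SIMD3<Float>((Float(x) - cx) * z / fx,   // back-project x
                                       (Float(y) - cy) * z / fy,   // back-project y
                                       z))
        }
    }
    return points
}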
Replies: 0 · Boosts: 0 · Views: 13 · Activity: 1d
VisionPro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me add a follow-up question here :). If I got a developer strap and attached an external camera capable of more than 30 fps, would I get the full stream, or would some other limitation kick in?
Replies: 2 · Boosts: 0 · Views: 39 · Activity: 12h
Image brightness adapts despite exposure lock
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and completely break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.

Details

Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera, and one very important aspect of this is luminance measurement. This is particularly relevant since the phone's light sensor has no publicly accessible API, so the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.

Implementation
The app uses AVFoundation and explicitly picks individual cameras, since camera groups do not support custom exposure settings. This means it handles camera switching during zoom by itself and even implements its own auto-exposure routines to optimize for use in experiments, so it always stays in custom exposure mode. The app uses the YUV420 color space, and the individual frames are analyzed in Metal using compute shaders. However, the effects discussed here still occur if I remove all code that controls the camera and replace it with a simple sequence: set the exposure mode to custom, set custom exposure values, set a fixed white balance, and then set the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices. The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on GitHub. The videos below use the implementation with the suggestion from Stack Overflow, but the effect can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam, to quote a reputable company) as well as with the stock camera app after pressing and holding on the preview to enable AE/AF lock.

Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable when something changes at the edge of the frame as I move a black piece of cardboard in from above: https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area, and as mentioned before it should not change because of something happening at the side of the frame (at worst it should get a bit darker because of the cardboard's shadow). My guess is that the iPhone changes its mind about the ideal contrast as soon as it sees a different exposure histogram because of the dark image region introduced by the cardboard, but that is just a guess.
For completeness, here is the same effect in the stock camera app with AE/AF lock enabled: https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes: the brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate post-processing.
Any suggestion on how to prevent this behavior would be highly appreciated.
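For reference, a minimal sketch of the "simple sequence" described above (custom exposure, fixed white balance, then lock), using only documented AVCaptureDevice calls; as the post notes, this pins the physical exposure parameters but does not appear to suppress the tone/contrast post-processing in question. The helper name is illustrative.

import AVFoundation

// Pin shutter speed, ISO, and white balance, then lock exposure on a single physical camera.
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Freeze exposure at its current duration/ISO (explicit values could be passed instead,
    // clamped to the active format's supported ranges).
    device.setExposureModeCustom(duration: AVCaptureDevice.currentExposureDuration,
                                 iso: AVCaptureDevice.currentISO,
                                 completionHandler: nil)

    // Freeze white balance at the current gains.
    device.setWhiteBalanceModeLocked(with: AVCaptureDevice.currentWhiteBalanceGains,
                                     completionHandler: nil)

    // Finally lock, so the device stops adjusting exposure on its own.
    if device.isExposureModeSupported(.locked) {
        device.exposureMode = .locked
    }
}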
Replies: 1 · Boosts: 0 · Views: 28 · Activity: 6m
Are VisionOS Enterprise APIs handled differently from one another?
Hi, At work we've done some development on an Apple Vision Pro. On the project we used object tracking to track an object in 3D and found the default tracking refresh rate (I believe 5 Hz) to be too slow, so we applied for the enterprise APIs so we could change it. At some point, in the capabilities section (as a beginner to Swift and the Apple development environment), I noticed that's where you enable the Object Tracking Parameter Adjustment API, and I enabled it before hearing back about whether we had been granted access to the enterprise APIs and the license file that comes with them. I then set the refresh rate to 30 Hz and logged the settings of the ObjectTrackingProvider, which showed it was set to 30 Hz, and it did feel better than the default when we ran our app. In the Xcode runtime logs there was no warning or error saying that the license file for the enterprise API was not found (and I don't think we ever heard back from Apple about whether they granted our request; even if they did, I think the license would be expired by now). Fast forward to today: I was running the Main Camera Access sample code for visionOS linked in the official developer documentation, and when I ran the project in Xcode I noticed in the logs that it wanted an enterprise license, which is why it wasn't running as expected in the immersive space. We've since applied for the Main Camera Access enterprise API. I'm now confused: did I mistakenly believe the object tracking refresh rate was set to 30 Hz when it actually wasn't, due to the lack of a license file / not being granted access to the enterprise APIs? It seemed to be running as expected without a license file. Is the Object Tracking Parameter Adjustment API handled with different permissions than the Main Camera Access API, even though they are both enterprise APIs? This is all for internal development, not for distributing an app, but I find the behaviour between the different enterprise APIs confusing. Does anyone have more insight? I find the developer notes on the enterprise APIs to be a bit sparse.
Replies: 0 · Boosts: 0 · Views: 32 · Activity: 1w
Vision Framework VNTrackObjectRequest: Minimum Valid Bounding Box Size Causing Internal Error (Code=9)
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest. Occasionally (but not always), I receive the following runtime error: Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size} From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest. Could someone clarify: What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects? Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request? This information would be extremely helpful to reliably avoid this internal error. Thank you!
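There does not appear to be a documented minimum, so until one is published, a defensive pre-check is one way to avoid the error; the 2% threshold below is purely an illustrative guess, not an Apple-specified value, and the helper name is hypothetical.

import Vision
import CoreGraphics

// Illustrative minimum normalized side length; Vision does not document an official value.
let assumedMinimumSide: CGFloat = 0.02

// Create a tracking request only if the detected box looks large enough to track.
func makeTrackingRequest(for boundingBox: CGRect) -> VNTrackObjectRequest? {
    guard boundingBox.width >= assumedMinimumSide,
          boundingBox.height >= assumedMinimumSide,
          CGRect(x: 0, y: 0, width: 1, height: 1).contains(boundingBox) else {
        return nil   // too small (or outside normalized bounds) to hand to the tracker reliably
    }
    let observation = VNDetectedObjectObservation(boundingBox: boundingBox)
    let request = VNTrackObjectRequest(detectedObjectObservation: observation)
    request.trackingLevel = .accurate
    return request
}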
Replies: 0 · Boosts: 0 · Views: 26 · Activity: 1w
Processing AVCaptureVideoDataOutput video stream with appleLog and HLG_BT2020 AVCaptureColorSpace input
I’m building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great. However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that the video frame in these color spaces has a bit-per-channel (bpc) greater than 8 after being converted to CGImage, causing the texture creation to fail. I tried converting the CGImage to a lower bpc to successfully create the texture, but the final output image is garbled and not as expected. Is there a solution to this issue?
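One approach that avoids the CGImage round-trip entirely is to wrap the CVPixelBuffer planes in Metal textures via CVMetalTextureCache and do the YUV-to-RGB (or log decode) in the shader. A rough sketch, assuming a 10-bit bi-planar 4:2:0 buffer; the pixel formats and the FrameTextures type are assumptions to verify against the actual activeFormat.

import AVFoundation
import Metal
import CoreVideo

final class FrameTextures {
    private var textureCache: CVMetalTextureCache?

    init(device: MTLDevice) {
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    // Wrap one plane of the pixel buffer in an MTLTexture without copying.
    // For 10-bit bi-planar YCbCr ('x420'), .r16Unorm / .rg16Unorm are common choices;
    // match these to the pixel format your activeFormat actually delivers.
    func texture(from sampleBuffer: CMSampleBuffer,
                 planeIndex: Int,
                 pixelFormat: MTLPixelFormat) -> MTLTexture? {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer,
                                                  nil, pixelFormat, width, height,
                                                  planeIndex, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }
}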
Replies: 1 · Boosts: 0 · Views: 31 · Activity: 2w
Main Camera Access Entitlement Bug
Hello everyone, can you help me? I have requested the Main Camera Access enterprise API and have received the license for it. I set up the Main Camera Access demo project from Apple with my new license and created an app bundle and identifier for it, but when I try to deploy it to TestFlight I get errors saying "Profile doesn't support Main Camera Access" and "Profile doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement", even though I have done this in Certificates, Identifiers & Profiles and added the additional capability Main Camera Access. Can you help me fix this so that I can use the Main Camera Access entitlement?
Replies: 3 · Boosts: 0 · Views: 61 · Activity: Mar ’25
iPhone 13 Pro Max camera issue
Ever since the last update I have had issues with my camera app. Sometimes when I open the app the forward-facing cameras don't work and it's just a black screen. I also get a warning that I may not have genuine iPhone parts installed. I have to reboot the phone every time just to get the app to function again. It's annoying. Please fix this. I never had any issues with the camera or its app until after the update.
Replies: 1 · Boosts: 0 · Views: 97 · Activity: Mar ’25
Prevent iOS from Switching Between Back Camera Lenses in getUserMedia (Safari/WebView, iOS 18)
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera). Problem: Even when I specify constraints like: { video: { deviceId: '...', facingMode: { exact: 'environment' }, advanced: [{ zoom: 1.0 }] }, audio: false } On iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement. This breaks the heart rate detection, and it forces the user to move their finger in the middle of the measurement. Question: Is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching? I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity. Any suggestions, workarounds, or updates from Apple would be greatly appreciated! Thanks a lot!
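As far as I know there is no getUserMedia constraint that pins a specific physical lens on iOS; whether picking a particular deviceId prevents switching depends on which deviceIds Safari exposes on the device. The native fallback the poster mentions would look roughly like the sketch below (the helper name and error are placeholders), with the zoom fixed so the wide module is not swapped out.

import AVFoundation

// Configure a session that uses only the physical wide-angle camera and keeps
// its zoom fixed, so iOS cannot swap in the ultra-wide or telephoto modules.
func makeWideOnlySession() throws -> AVCaptureSession {
    let session = AVCaptureSession()
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else {
        throw NSError(domain: "Camera", code: -1)   // placeholder error
    }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) {
        session.addInput(input)
    }

    try device.lockForConfiguration()
    device.videoZoomFactor = 1.0        // no digital zoom, single physical lens
    device.unlockForConfiguration()
    return session
}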
Replies: 1 · Boosts: 0 · Views: 292 · Activity: Mar ’25
Crash observed in iOS 18.4 beta on opening camera from WebView.
Adding Stack Trace for your reference:

thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BREAKPOINT (code=1, subcode=0x1a6efe5b8)
frame #0: 0x00000001a6efe5b8 WebCore`WebCore::BaseAudioSharedUnit::BaseAudioSharedUnit() + 668
frame #1: 0x00000001a6efe044 WebCore`WebCore::CoreAudioSharedUnit::singleton() + 80
frame #2: 0x00000001a9521fe4 WebCore`WebCore::CoreAudioCaptureSource::create(WebCore::CaptureDevice const&, WebCore::MediaDeviceHashSalts&&, WebCore::MediaConstraints const*, std::__1::optional<WTF::ObjectIdentifierGeneric<WebCore::PageIdentifierType, WTF::ObjectIdentifierMainThreadAccessTraits<unsigned long long>, unsigned long long>>) + 360
frame #3: 0x00000001a94f180c WebCore`WebCore::RealtimeMediaSourceCenter::getUserMediaDevices(WebCore::MediaStreamRequest const&, WebCore::MediaDeviceHashSalts&&, WTF::Vector<WebCore::RealtimeMediaSourceCenter::DeviceInfo, 0ul, WTF::CrashOnOverflow, 16ul, WTF::FastMalloc>&, WTF::Vector<WebCore::RealtimeMediaSourceCenter::DeviceInfo, 0ul, WTF::CrashOnOverflow, 16ul, WTF::FastMalloc>&, WebCore::MediaConstraintType&) + 356
frame #4: 0x00000001a94f22cc WebCore`WebCore::RealtimeMediaSourceCenter::validateRequestConstraintsAfterEnumeration(WTF::Function<void (WTF::Vector<WebCore::CaptureDevice, 0ul, WTF::CrashOnOverflow, 16ul, WTF::FastMalloc>&&, WTF::Vector<WebCore::CaptureDevice, 0ul, WTF::CrashOnOverflow, 16ul, WTF::FastMalloc>&&)>&&, WTF::Function<void (WebCore::MediaConstraintType)>&&, WebCore::MediaStreamRequest const&, WebCore::MediaDeviceHashSalts&&) + 356
frame #5: 0x00000001a94fb394 WebCore`WTF::Detail::CallableWrapper<WebCore::RealtimeMediaSourceCenter::enumerateDevices(bool, bool, bool, bool, WTF::CompletionHandler<void ()>&&)::$_0, void>::~CallableWrapper() + 164
frame #6: 0x00000001a814bbe8 WebCore`WTF::Detail::CallableWrapper<WebCore::AVCaptureDeviceManager::refreshCaptureDevicesInternal(WTF::CompletionHandler<void ()>&&, WebCore::AVCaptureDeviceManager::ShouldSetUserPreferredCamera)::$_0::operator()()::'lambda'(), void>::call() + 520
frame #7: 0x00000001ab7f1aac JavaScriptCore`WTF::RunLoop::performWork() + 524
frame #8: 0x00000001ab7f1880 JavaScriptCore`WTF::RunLoop::performWork(void*) + 36
frame #9: 0x00000001935e7d0c CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
frame #10: 0x00000001935e7ca0 CoreFoundation`__CFRunLoopDoSource0 + 172
frame #11: 0x00000001935e6a24 CoreFoundation`__CFRunLoopDoSources0 + 232
frame #12: 0x00000001935e5c64 CoreFoundation`__CFRunLoopRun + 840
frame #13: 0x000000019360a730 CoreFoundation`CFRunLoopRunSpecific + 572
frame #14: 0x00000001e0fb5190 GraphicsServices`GSEventRunModal + 168
frame #15: 0x0000000196239f34 UIKitCore`-[UIApplication _run] + 816
frame #16: 0x0000000196238164 UIKitCore`UIApplicationMain + 336
frame #17: 0x000000010811bec4 AppName.debug.dylib`main at AppDelegate.swift:25:13
frame #18: 0x00000001bae06a58 dyld`start + 5964
Replies: 8 · Boosts: 1 · Views: 756 · Activity: Mar ’25
iPhone 16 Camera Control and AVCaptureSlider – Is there a way to detect which slider is active?
I am following the Apple sample code and trying to add a manual focus lens position slider:

@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}

There are also these AVCaptureSessionControlsDelegate methods:

final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}

When self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user through the app UI. Is there a way to see whether self.cameraControlFocusSlider is active or being used? Please note that I will have more than one AVCaptureSlider in the final code.
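There is no obvious "is this control active" query on AVCaptureSlider itself, so one workaround is to treat the slider's action callback as the signal that the user is driving focus, and use KVO on lensPosition for everything else. A rough sketch; the FocusSliderTracker type and the one-second "recently used" window are arbitrary illustrative choices.

import AVFoundation

@available(iOS 18.0, *)
final class FocusSliderTracker {
    private var lensPositionObservation: NSKeyValueObservation?
    private var lastSliderInteraction = Date.distantPast

    // True if the user touched the Camera Control slider within the last second.
    var sliderRecentlyUsed: Bool {
        Date().timeIntervalSince(lastSliderInteraction) < 1.0
    }

    func attach(slider: AVCaptureSlider, device: AVCaptureDevice, queue: DispatchQueue) {
        // Every callback from the slider marks it as "in use" and applies manual focus.
        slider.setActionQueue(queue) { [weak self, weak device] value in
            self?.lastSliderInteraction = Date()
            guard let device else { return }
            do {
                try device.lockForConfiguration()
                device.setFocusModeLocked(lensPosition: value, completionHandler: nil)
                device.unlockForConfiguration()
            } catch {
                print("Focus configuration failed: \(error)")
            }
        }
        // lensPosition is key-value observable; changes arriving while the slider was not
        // recently touched can be attributed to autofocus or other UI paths.
        lensPositionObservation = device.observe(\.lensPosition, options: [.new]) { [weak self] _, change in
            guard let self, let newValue = change.newValue, !self.sliderRecentlyUsed else { return }
            print("lensPosition changed externally to \(newValue)")
        }
    }
}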
Replies: 0 · Boosts: 0 · Views: 317 · Activity: Mar ’25
CameraFrameProvider distortion correction
Hi, I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames seem to come with only intrinsics/extrinsics info, but not the distortion lookup table. Is there some magic setting or function to do that (I can't seem to find anything like it)? Or is there a way to use AVCameraCalibrationData together with the provider?
Replies: 2 · Boosts: 0 · Views: 317 · Activity: 3w
iPhone 15 Pro Has USB-C, but AVCaptureDevice Doesn't Support External Devices?
I'm using an iPhone 15 Pro, which has switched from Lightning to USB Type-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types. 🔗 Apple's Official Documentation: https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests: On iPhone, discoverySession does not detect any external devices. On iPad, discoverySession can detect external devices without any issues. My Question: Does iPhone USB-C actually support external devices (e.g., UVC cameras)? If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
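For reference, a minimal discovery-session check along the lines being described; the helper name is illustrative, and this only reports what the OS exposes, so whether a given UVC device shows up on iPhone is exactly the open question.

import AVFoundation

// List any external (e.g. UVC) cameras the system exposes. Requires iOS/iPadOS 17+
// and a camera usage description; on iPad this is where UVC cameras appear.
func externalCameras() -> [AVCaptureDevice] {
    let session = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                   mediaType: .video,
                                                   position: .unspecified)
    for device in session.devices {
        print("Found external device: \(device.localizedName)")
    }
    return session.devices
}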
Replies: 1 · Boosts: 0 · Views: 286 · Activity: Mar ’25
Accessing AV External Storage
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via the USB-C or Lightning connector) on an iPad/iPhone? I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad 10th generation (18.3.1), and both return false:

// returns false on iPhone 14, iPad gen 10
print(AVExternalStorageDeviceDiscoverySession.isSupported)

The following returns null when I try to access the external storage discovery session:

// returns null on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)

The following returns false, without displaying a permission dialog:

AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})

Which iOS devices are supported by AVExternalStorageDeviceDiscoverySession? What situations is it intended for (e.g. connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)? Is there any sample code for using the AVExternalStorage API?
Replies: 0 · Boosts: 0 · Views: 337 · Activity: Feb ’25
Distortion corrected images
Hi, I am currently developing a 3D reconstruction project that requires images to be distortion-free (rectilinear) and with known intrinsics. The session I am developing with uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false (to be able to get the intrinsic matrix of the images), isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false (to get both constituent images), and isCameraCalibrationDataDeliveryEnabled set to true (to actually get the calibration data). The distortion correction parameters such as lensDistortionLookupTable are used: the 42-coefficient mapping array is applied as described in the AVCameraCalibrationData header file, with a simple piecewise linear interpolation. There are two questions I would like support on:

1. A way to set the calibration parameters in each image. My current approach writes the parameters into kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? This feels a bit dirty and there might be a neater approach.

2. For the ultra-wide camera's images, the lensDistortionLookupTable contains several zeros at the end of the array. For example (the last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem comes when the complete array (including the zeros) is used to correct the image: the result is warped into a circle-like shape close to the edges, which is completely wrong. In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the size is adjusted accordingly, the image looks better (although not as rectilinear as an image from the iPhone's camera app), and definitely less distorted.
Including zeros (full array):
Excluding zeros (array size changed):
Am I missing an important point in the usage of the lensDistortionLookupTable where this case (zeros at the end) is addressed? What is the criterion for shrinking/excluding elements of the array? Any advice is very much welcome.
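For what it's worth, here is a Swift adaptation of the piecewise-linear remap described in the AVCameraCalibrationData header, which may help isolate whether the artifact comes from the interpolation or from the trailing zeros themselves; the zero-handling question remains open, this sketch just applies the table as given, and the helper name is illustrative.

import Foundation
import CoreGraphics

// Piecewise-linear interpolation of the distortion magnification for one point.
// lookupTable is the decoded [Float32] lensDistortionLookupTable, center the
// lensDistortionCenter, imageSize the dimensions the table refers to.
func correctedPoint(for point: CGPoint,
                    lookupTable: [Float],
                    center: CGPoint,
                    imageSize: CGSize) -> CGPoint {
    guard !lookupTable.isEmpty else { return point }

    // Radius of the point relative to the maximum radius seen from the distortion center.
    let deltaX = Float(point.x - center.x)
    let deltaY = Float(point.y - center.y)
    let rMaxX = Float(max(center.x, imageSize.width - center.x))
    let rMaxY = Float(max(center.y, imageSize.height - center.y))
    let rMax = sqrt(rMaxX * rMaxX + rMaxY * rMaxY)
    let r = sqrt(deltaX * deltaX + deltaY * deltaY)

    // Look up the magnification for this radius by linear interpolation in the table.
    let position = min(r / rMax, 1.0) * Float(lookupTable.count - 1)
    let lower = Int(position)
    let upper = min(lower + 1, lookupTable.count - 1)
    let fraction = position - Float(lower)
    let magnification = lookupTable[lower] * (1 - fraction) + lookupTable[upper] * fraction

    // Scale the vector from the distortion center by (1 + magnification).
    return CGPoint(x: center.x + CGFloat(deltaX * (1 + magnification)),
                   y: center.y + CGFloat(deltaY * (1 + magnification)))
}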
Replies: 0 · Boosts: 0 · Views: 343 · Activity: Feb ’25
Setting the capture device color space to Apple Log does not work.
I set the device format and color space to Apple Log and turn off HDR, so why is the movie output still in HDR format rather than ProRes Log? Full runnable demo here: https://github.com/SpaceGrey/ColorSpaceDemo

session.sessionPreset = .inputPriority

// get the back camera
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
backCamera = deviceDiscoverySession.devices.first!

try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false
let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))
backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// add output
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(output.outputSettings(for: connection))
/*
["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080, "AVVideoCodecKey": apch, <----- ProRes has been enabled.
 "AVVideoCompressionPropertiesKey": {
    AverageBitRate = 220029696;
    ExpectedFrameRate = 30;
    PrepareEncodedSampleBuffersForPaddedWrites = 1;
    PrioritizeEncodingSpeedOverQuality = 0;
    RealTime = 1;
}]
*/

previewSource = DefaultPreviewSource(session: session)
queue.async {
    self.session.startRunning()
}
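One thing worth checking: by default the session can override the device's color space when inputs/outputs are added. A hedged sketch of the ordering that is usually suggested, with the session's automatic wide-color configuration disabled and the format/color space pinned only after the input and output are attached; this is a guess at the cause, not a confirmed fix, and the configureAppleLog helper is illustrative.

import AVFoundation

// Attach input/output first, then pin the Apple Log format, with the session's
// automatic wide-color reconfiguration turned off so it cannot undo the choice.
func configureAppleLog(session: AVCaptureSession,
                       camera: AVCaptureDevice,
                       input: AVCaptureeDeviceInputPlaceholder? = nil) {}

// Corrected, self-contained version:
func configureAppleLogSession(session: AVCaptureSession,
                              camera: AVCaptureDevice,
                              input: AVCaptureDeviceInput,
                              output: AVCaptureMovieFileOutput) throws {
    session.automaticallyConfiguresCaptureDeviceForWideColor = false

    session.beginConfiguration()
    session.sessionPreset = .inputPriority
    if session.canAddInput(input) { session.addInput(input) }
    if session.canAddOutput(output) { session.addOutput(output) }
    session.commitConfiguration()

    // Pin the format and color space only after the capture graph is assembled.
    guard let appleLogFormat = camera.formats.first(where: { $0.supportedColorSpaces.contains(.appleLog) }) else { return }
    try camera.lockForConfiguration()
    camera.activeFormat = appleLogFormat
    camera.activeColorSpace = .appleLog
    camera.unlockForConfiguration()
}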
Replies: 1 · Boosts: 0 · Views: 346 · Activity: Mar ’25
Question about how to access the camera in a Swift Playground
When I was working on my project for the Swift Student Challenge, I found an interesting fact regarding camera access. If I create an App project in Xcode, camera capture works well. However, if I copy and paste the same code into an App Playground project in Xcode, it crashes and outputs several errors. I am wondering why this is happening.
Replies: 1 · Boosts: 0 · Views: 401 · Activity: Feb ’25
Info.plist
Hello, I'm trying to get my app to ask the device for permission to access the camera. To do so I created an Info.plist and turned off "Generate Info.plist File" in Packaging, and then added what I believe are all the necessary keys. However, when I try to build and test on my phone I keep getting an error saying that my app has a missing or invalid CFBundleExecutable in its Info.plist. I tried to fix it by adding Key: CFBundleExecutable, Type: String, Value: $(EXECUTABLE_NAME). However, this isn't working. I have already added a bundle identifier using my com.name.appname, the bundle version string, and the bundle version. Now I'm not fully sure what to put to fix this issue. Is there another way to get the camera to work without having to create an Info.plist? Or is this the only way?
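Whichever way the NSCameraUsageDescription key ends up in the final Info.plist, the runtime request itself looks roughly like the sketch below (the helper name is illustrative); the usage-description key must be present, otherwise the system terminates the app when the camera is first accessed.

import AVFoundation

// Ask for camera permission; requires NSCameraUsageDescription in the app's Info.plist.
func requestCameraAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        completion(false)   // denied or restricted
    }
}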
Replies: 1 · Boosts: 0 · Views: 288 · Activity: Feb ’25
input type="file"でアップロードした画像のGPS情報
input type="file"でアップロードした画像データからGPS情報が除去されます。 こちらはiPadOS16.5.1で発生しておりました。 iPadOS17.4.1、iPadOS18.3では正常にGPS情報が保持されます。 iOS、iPadOSのアップデートでGPS情報を保持するよう修正されたと見受けられますが、リリースノートを参照しても上記修正についての記事を見つけられませんでした。 お手数ですが、上記修正に該当する記事をご教示頂けませんでしょうか。 どうぞよろしくお願い致します。
Replies: 0 · Boosts: 0 · Views: 320 · Activity: Feb ’25