Hi,
I’m building a PPG-based heart rate feature where the user places their finger over the rear telephoto camera. On iPhone 16 Pro Max, I'm explicitly selecting the telephoto lens like this:
videoDevice = AVCaptureDevice.default(.builtInTelephotoCamera, for: .video, position: .back)
And trying to lock it:
if #available(iOS 15.0, *),
   device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
    try? device.lockForConfiguration()
    device.setPrimaryConstituentDeviceSwitchingBehavior(.locked, restrictedSwitchingBehaviorConditions: [])
    device.unlockForConfiguration()
}
I also lock everything else to prevent dynamic changes:
try device.lockForConfiguration()
device.focusMode = .locked
device.exposureMode = .locked
device.whiteBalanceMode = .locked
device.videoZoomFactor = 1.0
device.automaticallyEnablesLowLightBoostWhenAvailable = false
device.automaticallyAdjustsVideoHDREnabled = false
device.unlockForConfiguration()
Despite this, the camera still switches to another lens, especially under different lighting, even though the user’s finger fully covers the lens.
Questions:
How can I completely prevent lens switching in this scenario?
Would using videoZoomFactor = 3.0 or 5.0 better enforce use of the telephoto lens?
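For context on the second question, here is a minimal sketch of the virtual-device variant (my assumption being that the constituent-switching lock only applies to virtual devices such as .builtInTripleCamera, and that the telephoto switch-over zoom factor is device-specific rather than a fixed 3.0 or 5.0):
import AVFoundation

// Sketch only: lock constituent switching on the virtual triple camera after
// zooming to the telephoto switch-over factor. On a physical
// .builtInTelephotoCamera the switching behavior reports .unsupported.
func makeLockedTripleCameraInput() throws -> AVCaptureDeviceInput? {
    guard let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) else {
        return nil
    }
    try device.lockForConfiguration()
    if let teleFactor = device.virtualDeviceSwitchOverVideoZoomFactors.last {
        device.videoZoomFactor = CGFloat(teleFactor.doubleValue)
    }
    if device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
        device.setPrimaryConstituentDeviceSwitchingBehavior(.locked, restrictedSwitchingBehaviorConditions: [])
    }
    device.unlockForConfiguration()
    return try AVCaptureDeviceInput(device: device)
}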
Thanks!
Gal
When shooting with an iPhone 15 or later, it's possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. During image format transcoding, the HEIC codec is able to preserve the ISO 21496-1 gain map, but when converting from HEIC to JPEG, the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
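For reference, here is a minimal sketch of the transcode path in question, assuming an ImageIO-based conversion with kCGImageDestinationPreserveGainMap (whether the preserved map ends up as ISO 21496-1 or Apple Gain Map is exactly the open question):
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Transcode an HEIC file to JPEG while asking ImageIO to keep the gain map.
func transcodeHEICToJPEG(from heicURL: URL, to jpegURL: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(heicURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(jpegURL as CFURL, UTType.jpeg.identifier as CFString, 1, nil) else {
        return false
    }
    let options: [CFString: Any] = [
        kCGImageDestinationLossyCompressionQuality: 0.9,
        kCGImageDestinationPreserveGainMap: true
    ]
    CGImageDestinationAddImageFromSource(destination, source, 0, options as CFDictionary)
    return CGImageDestinationFinalize(destination)
}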
Hi everyone,
We're encountering an unexpected issue with our iPhone-only camera app:
👉 TimeMark - Photo Proof
https://apps.apple.com/us/app/timemark-photo-proof/id6446071834
Problem Description:
Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
According to the Apple documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc), this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture, all of which are iPad-only features.
However, this issue is being reported on iPhones, and our app does not support iPad at all.
Also noted in the documentation:
"Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."
Additional Context:
The issue occurs immediately on app launch, before the user can interact with the camera.
We don’t enable multitaskingCameraAccessEnabled.
We are 100% sure this is happening on iPhone, not iPad.
It’s hard to reproduce; users report it happening sporadically.
Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.
Questions:
Why is this interruption reason occurring on iPhone, which doesn’t officially support Slide Over or Split View?
Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip.
Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?
Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
AVCaptureSession's startRunning method is thread blocking and seems to be slow. What is this method doing behind the scenes?
For context: I'm working on Simulator Camera support and I have a 'fake' AVCaptureDevice that might be causing this. My hypothesis is that AVCaptureSession tries to connect to the device and waits for a notification to be posted back.
I'd love to find a way to let my fake device message AVCaptureSession that it's connected.
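For context, the usual pattern is to keep the blocking call off the main thread (a minimal sketch; this sidesteps rather than explains the slowness):
import AVFoundation

// startRunning() blocks its calling thread while the session wires up its
// inputs and outputs, so it is typically dispatched to a dedicated serial queue.
let sessionQueue = DispatchQueue(label: "capture.session.queue")

func start(_ session: AVCaptureSession) {
    sessionQueue.async {
        if !session.isRunning {
            session.startRunning()   // the blocking work happens off the main thread
        }
    }
}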
Hello everyone,
I'm working on a feature where I need to capture the highest possible quality photo (e.g., 24MP on supported devices) and upload it to our server. I don't need the photos to appear in the user's main Photos app, so I thought I could store the photos in the app's private directory using FileManager until they are uploaded. This wouldn't require requesting Photo Library permission, maximizing user privacy.
The documentation on AVCapturePhotoOutput states that "the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled"
/**
@property maxPhotoDimensions
@abstract
Indicates the maximum resolution of the requested photo.
@discussion
Set this property to enable requesting of images up to as large as the specified dimensions. Images returned by AVCapturePhotoOutput may be smaller than these dimensions but will never be larger. Once set, images can be requested with any valid maximum photo dimensions by setting AVCapturePhotoSettings.maxPhotoDimensions on a per photo basis. The dimensions set must match one of the dimensions returned by AVCaptureDeviceFormat.supportedMaxPhotoDimensions for the current active format. Changing this property may trigger a lengthy reconfiguration of the capture render pipeline so it is recommended that this is set before calling -[AVCaptureSession startRunning].
Note: When supported, the 24MP setting (5712, 4284) is only serviced as 24MP when opted-in to autoDeferredPhotoDeliveryEnabled.
*/
@available(iOS 16.0, *)
open var maxPhotoDimensions: CMVideoDimensions
(btw. this note is not present in the docs https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/maxphotodimensions)
Enabling autoDeferredPhotoDeliveryEnabled means that for a 24MP capture, the system will call the photoOutput(_:didFinishCapturingDeferredPhotoProxy:error:) delegate method, providing a proxy object instead of the final image data.
According to the WWDC23 session "Create a more responsive camera experience," this AVCaptureDeferredPhotoProxy must be saved to the PHPhotoLibrary using a PHAssetCreationRequest with the resource type .photoProxy. The system then handles the final processing in the background within the library.
To use deferred photo processing, you'll need to have write permission to the photo library to store the proxy photo, and read permission if your app needs to show the final photo or wants to modify it in any way.
https://developer.apple.com/videos/play/wwdc2023/10105/?time=799
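As I understand it, the flow looks roughly like this (a sketch with error handling trimmed):
import AVFoundation
import Photos

@available(iOS 17.0, *)
final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil, let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }
        // The proxy is handed to the Photos library, which finishes processing
        // the 24MP asset in the background.
        PHPhotoLibrary.shared().performChanges {
            PHAssetCreationRequest.forAsset().addResource(with: .photoProxy, data: proxyData, options: nil)
        } completionHandler: { success, error in
            // The final image lives in the user's library, not in the app sandbox.
        }
    }
}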
This seems to create a hard dependency on the Photo Library for accessing 24MP images.
My question is:
Is there any way to receive the final, processed 24MP image data directly in the app after a deferred capture, without using PHPhotoLibrary as the processing intermediary?
For example, is there a delegate callback or a mechanism I'm missing that provides the final data for a deferred photo, allowing an app to handle it in-memory or in its own private sandbox, completely bypassing the user's Photo Library?
Our goal is to follow Apple's privacy-first principles by avoiding requesting a PHPhotoLibrary authorization when our app's core function doesn't require access to the user's photo collection.
Thank you for your time and any clarification you can provide.
I am able to capture 48MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12MP?
Is there a way to capture 48MP photos using .builtInTripleCamera? .builtInTripleCamera provides smooth transitions between cameras during zooming, and I'd like to keep this behavior.
The new iPhone 17 Pro has all of its cameras at 48MP. Is there a chance that its .builtInTripleCamera is capable of capturing 48MP? Or is this an API limitation?
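For reference, a small sketch of a check for whether the virtual device even advertises the 48MP dimension (assuming 8064x6048, as in the wide-angle case):
import AVFoundation

func tripleCameraSupports48MP() -> Bool {
    guard let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) else {
        return false
    }
    // Look for an 8064x6048 entry in any format's supported photo dimensions.
    return device.formats.contains { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }
}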
If the app is launched from LockedCameraCapture and if the settings button is tapped, I need to launch the main app.
CameraViewController:
func settingsButtonTapped() {
    #if isLockedCameraCaptureExtension
    // App is launched from the Lock Screen
    // Launch main app here...
    #else
    // App is launched from the Home Screen
    self.showSettings(animated: true)
    #endif
}
In this document:
https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen
Apple asks you to use:
func launchApp(with session: LockedCameraCaptureSession, info: String) {
    Task {
        do {
            let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
            activity.userInfo = [UserInfoKey: info]
            try await session.openApplication(for: activity)
        } catch {
            StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
        }
    }
}
However, the documentation states that this code should be placed within the extension code (LockedCameraCapture). If I do that, how can I call it from the main app's CameraViewController?
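One wiring I can imagine (a sketch only; launchMainAppHandler is a hypothetical property, not an API) is to have the extension, which owns the LockedCameraCaptureSession, inject a closure that calls launchApp(with:info:), and to fall back to showSettings otherwise:
import UIKit

final class CameraViewController: UIViewController {
    // Set by the LockedCameraCapture extension when it creates this controller;
    // nil when the app is launched normally from the Home Screen.
    var launchMainAppHandler: ((String) -> Void)?

    func settingsButtonTapped() {
        if let launchMainAppHandler {
            launchMainAppHandler("settings")   // extension path: ask the extension to open the full app
        } else {
            showSettings(animated: true)       // normal in-app path
        }
    }

    func showSettings(animated: Bool) { /* ... */ }
}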
Hi all,
we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we prefer to rely on the triple-camera devices, if available, with automatic constituent device switching.
This approach used to work perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance, the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.
Unfortunately, the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow, and sometimes it does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then at a very close object that is maybe 2" away. The iPhone 15 Pro always switches immediately to the ultra-wide camera here, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.
Of course we assumed that our code was responsible for these issues, so we experimented with restricting the devices and so on. Then we stripped more and more configuration code, but nothing we tried improved the situation.
So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Run it on various iPhones, aim at a far distance (>10 feet), and then quickly at a very close distance (<5 inches).
Here is a list of devices and our test results:
iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable
Questions:
Has anyone else seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed?
Furthermore, the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the active constituent device. Not dramatic, but compared to the other phones it looks like a glitch.
Thank you very much!
Kind regards,
Sebastian
import UIKit
import AVFoundation

class ViewController: UIViewController {

    var captureSession: AVCaptureSession!
    var captureDevice: AVCaptureDevice!
    var captureInput: AVCaptureInput!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()
            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                let type = device.activePrimaryConstituent!.deviceType.rawValue
                print("Device type: \(type)")
            })
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                let zoom = device.videoZoomFactor
                print("Zoom: \(zoom)")
            })
            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = self.view.layer.frame
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
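For completeness, one configuration worth experimenting with on the iPhone 16 Pro models is explicitly resetting the constituent switching behavior to fully automatic (a sketch only, not a confirmed fix):
import AVFoundation

func enableUnrestrictedSwitching(on device: AVCaptureDevice) {
    guard device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported else { return }
    do {
        try device.lockForConfiguration()
        // .auto with no restricted conditions lets the system switch constituents freely.
        device.setPrimaryConstituentDeviceSwitchingBehavior(.auto, restrictedSwitchingBehaviorConditions: [])
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}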
How do I support external cameras in my iPadOS app and use Swift to read multiple camera feeds?
Thanks
I am using AVMulti so the user captures two images. How can I access those images if there is only one URL that stores the captured images for the LockedCameraCapture extension? Also, how can I detect whether the user opened the app from the extension, so that I can navigate the user to the right screen?
This is an issue with the Insta360 Flow Pro 2.
My iOS app uses DockKit to control the gimbal; in particular, my app disables tracking and sends angular velocity commands to control the gimbal's orientation. I only try to modify the yaw (rotation around the vertical axis), never the pitch or roll. Note that I don't send the gimbal to a particular orientation directly; I modify the velocity.
Everything works great for a long period of time: typically for a continuous run of 4-6 hours; in the most recent case, I managed about 36 hours of continuous operation before the following problem occurred.
I came back to check on the system, and because no visual activity had occurred in the camera's field of view for a while, the phone had commanded the gimbal to rotate back to a yaw angle of 0 degrees.
So the phone in the gimbal should have been looking straight ahead (i.e. the 0 degree yaw position), but it was definitely looking off at an angle. I've seen this twice now. The first time, when it should have been looking straight ahead, it was in fact looking 60 degrees off center. This time (caught on video, see below), it was off by 22 degrees from center.
Here's the weird part: the gimbal reports this way-off-center positioning as zero degrees (well, close enough to zero, like 0.2, which is fine). But, mechanically, the gimbal still knows where zero degrees is: if we double-click the trigger of the Flow Pro 2, which is supposed to reset the gimbal to 0 degrees yaw and pitch, the gimbal responds correctly and reorients to a 0-degree position. However, the yaw values it reports are not zero but, as shown in my video, about 22 degrees off axis.
Power cycling the gimbal and restarting immediately fixes the problem. Also, I switched from my app to the Insta360 app, which caused the phone to flip from landscape to portrait, then when I returned to my app and switched back to landscape, the gimbal now started reporting correct yaw angles.
Is there a possibility this is a bug in the DockKit framework? Has anyone seen this? I have a case open with Insta360, but although it's clearly a software issue, it's not clear whether it's in Insta360's code or the DockKit layer. Any ideas for how I can get out of this mode? My concern is that the phone is on a tripod about 10' off the floor and not very accessible. Also, if all goes well, we may have about 50 of these systems running, and having to fix them one by one after a few hours is not good.
For a demonstration of this bug, see the following video:
https://octoparry.com/offset.MOV
Any help greatly appreciated.
In an Apple Vision Pro (visionOS) project, a picker pops up that should only show spatial videos. When one of the spatial videos is selected, the selection result returns empty. How can we obtain the video selected by the user and get the path and URL of the file?
The code is as follows:
PhotosPicker(selection: $selectedItem, matching: .videos) {
    Text("Choose a spatial photo or video")
}

func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
    return imageSelection.loadTransferable(type: URL.self) { result in
        DispatchQueue.main.async {
            // guard imageSelection == self.imageSelection else { return }
            print("Loaded picker selection: \(result)")
            switch result {
            case .success(let url?):
                self.selectSpatialVideoURL = url
                print("Video URL: \(url)")
            case .success(nil):
                break
                // Handle the success case with an empty value.
            case .failure(let error):
                print("Spatial video error: \(error)")
                // Handle the failure case with the provided error.
            }
        }
    }
}
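One alternative to loading URL.self that is commonly used with PhotosPickerItem is a small Transferable wrapper that copies the picked file into the app's temporary directory (a sketch; Movie is a hypothetical helper type, and whether this also fixes the empty result for spatial videos is unverified):
import Foundation
import CoreTransferable
import UniformTypeIdentifiers

struct Movie: Transferable {
    let url: URL

    static var transferRepresentation: some TransferRepresentation {
        FileRepresentation(contentType: .movie) { movie in
            SentTransferredFile(movie.url)
        } importing: { received in
            // Copy the picked file to a stable location before the picker cleans it up.
            let destination = FileManager.default.temporaryDirectory
                .appendingPathComponent(received.file.lastPathComponent)
            try? FileManager.default.removeItem(at: destination)
            try FileManager.default.copyItem(at: received.file, to: destination)
            return Movie(url: destination)
        }
    }
}

// Usage: imageSelection.loadTransferable(type: Movie.self) { result in ... }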
Can I use the IOKit USB library to disable the built-in camera?
Hello,
Does anyone have a recipe on how to raycast VNFaceLandmarkRegion2D points obtained from a frame's capturedImage?
More specifically, how to construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point?
Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points prior to passing them to raycastQuery?
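For reference, here is a sketch of one plausible conversion, under the assumption that pointsInImage(imageSize:) yields lower-left-origin image coordinates while ARFrame.raycastQuery(from:) expects top-left-origin normalized coordinates; the vertical flip is exactly the part that needs verifying:
import ARKit
import Vision

func raycastQuery(for region: VNFaceLandmarkRegion2D, pointIndex: Int, in frame: ARFrame) -> ARRaycastQuery? {
    let width = CVPixelBufferGetWidth(frame.capturedImage)
    let height = CVPixelBufferGetHeight(frame.capturedImage)
    let imagePoints = region.pointsInImage(imageSize: CGSize(width: width, height: height))
    guard pointIndex < imagePoints.count else { return nil }
    let p = imagePoints[pointIndex]
    // Normalize and flip vertically: Vision uses a lower-left origin,
    // the raycast query expects a top-left origin.
    let normalized = CGPoint(x: p.x / CGFloat(width), y: 1.0 - p.y / CGFloat(height))
    return frame.raycastQuery(from: normalized, allowing: .estimatedPlane, alignment: .any)
}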
I have an app that edits photos in your library. When I call
try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)
The following is logged to the console:
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
What does that mean and how can I resolve it?
Xcode Version 16.0 (16A242d)
iOS 18.1 (22B82)
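One thing worth trying (a sketch only; whether it actually silences the log is unverified) is to explicitly mark the image opaque before writing:
// settingAlphaOne(in:) returns a copy of the image with alpha forced to 1,
// so the encoder is handed an image that is explicitly opaque.
let opaqueImage = editedImage.settingAlphaOne(in: editedImage.extent)
try CIContext().writeHEIFRepresentation(of: opaqueImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)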
I set both the AVCapturePhotoOutput and the AVCapturePhotoSettings with maxPhotoDimensions = .init(width: 8064, height: 6048), but I still get a 12MP photo.
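For reference, the piece that seems easy to miss is whether the active format itself lists the 48MP dimension; a sketch of that check and the usual opt-in order (function name illustrative):
import AVFoundation

func configureFor48MP(device: AVCaptureDevice, output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let target = CMVideoDimensions(width: 8064, height: 6048)
    let supported = device.activeFormat.supportedMaxPhotoDimensions
        .contains { $0.width == target.width && $0.height == target.height }
    if supported {
        output.maxPhotoDimensions = target            // opt the output in first
    }
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = output.maxPhotoDimensions   // then request per capture
    return settings
}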
I have a complex CoreImage pipeline which I'm keen to optimise. I'm aware that calling back to the CPU can have a significant impact on the performance - what is the best way to find out where this is happening?
Hey everyone 😊, I am building an app that includes a live camera feed preview. That's all I need to do, alongside identifying the images with Create ML's image classification. I don't need to capture images at all. I've seen some very complicated tutorials; I just want to use a couple of lines of code.
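For reference, a live preview can be about this small (a sketch; permission handling and error cases omitted):
import UIKit
import AVFoundation

final class PreviewViewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        // startRunning() blocks, so keep it off the main thread.
        DispatchQueue.global(qos: .userInitiated).async { self.session.startRunning() }
    }
}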
I found that when my app is built with Xcode 16 or later and I enable the floating window feature and then open the system camera, the content in the floating window is not displayed and the window shows a black screen. This does not happen when the same code is built with Xcode 15.4, and I do not know why.
Hello everyone,
I am looking for a solution to programmatically import photos into the Photos library on macOS, e.g. using AppleScript, and also push them to the Shared Library, as can be done using the standard GUI of the Photos application.
Maybe it is not possible using AppleScript, but perhaps it is with a short Swift script and PhotoKit; I do not know.
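For reference, a minimal PhotoKit sketch of the import half (Swift rather than AppleScript; the file URL is illustrative, and whether the asset can then be pushed to the Shared Library programmatically is exactly my open question):
import Photos
import Foundation

func importPhoto(at url: URL) {
    PHPhotoLibrary.shared().performChanges {
        PHAssetCreationRequest.forAsset().addResource(with: .photo, fileURL: url, options: nil)
    } completionHandler: { success, error in
        print(success ? "Imported \(url.lastPathComponent)" : "Import failed: \(String(describing: error))")
    }
}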
Any help is appreciated!
Thomas