This is an issue with the Insta360 Flow Pro 2.
My iOS app uses DockKit to control the gimbal; in particular, my app disables tracking and sends angular velocity commands to control the gimbal's orientation. I only modify the yaw (rotation around the vertical axis), never the pitch or roll. Note that I don't send the gimbal to a particular orientation directly; I modify the velocity.
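For reference, the control loop is roughly the sketch below. It assumes DockKit's setSystemTrackingEnabled and setAngularVelocity(_:) calls and the Spatial Vector3D type; I'm paraphrasing the API from memory, and the axis assignment and the names (gimbal, yawRate) are illustrative rather than my exact code:

import DockKit
import Spatial

func driveYaw(of gimbal: DockAccessory, atRate yawRate: Double) async throws {
    // Disable built-in subject tracking so the velocity commands aren't overridden (assumed API).
    try await DockAccessoryManager.shared.setSystemTrackingEnabled(false)

    // Command rotation about the vertical axis only; the other components stay zero.
    // (Which Vector3D component maps to yaw is an assumption here.)
    let angularVelocity = Vector3D(x: 0, y: yawRate, z: 0)
    try await gimbal.setAngularVelocity(angularVelocity)
}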
Everything works great for a long period of time: typically a continuous run of 4-6 hours; in the most recent case, I managed about 36 hours of continuous operation before the following problem occurred.
I came back to check on the system, and because no visual activity had occurred in the camera's field of view for a while, the phone had commanded the gimbal to rotate back to a yaw angle of 0 degrees.
So the phone in the gimbal should have been looking straight ahead (i.e. the 0 degree yaw position), but it was definitely looking off at an angle. I've seen this twice now. The first time, when it should have been looking straight ahead, it was in fact looking 60 degrees off center. This time (caught on video, see below), it was off by 22 degrees from center.
Here's the weird part: the gimbal reports this way-off-center position as zero degrees (or close enough to zero, like 0.2, which is fine). But, mechanically, the gimbal still knows where zero degrees is: if we double-click the trigger of the Flow Pro 2, which is supposed to reset the gimbal to 0 degrees yaw and pitch, the gimbal responds correctly and reorients to the 0-degree position. However, the yaw values it then reports are not zero but, as shown in my video, about 22 degrees off axis.
Power cycling the gimbal and restarting immediately fixes the problem. Also, I switched from my app to the Insta360 app, which caused the phone to flip from landscape to portrait, then when I returned to my app and switched back to landscape, the gimbal now started reporting correct yaw angles.
Is there a possibility this is a bug in the DockKit framework? Has anyone seen this? I have a case open with Insta360, but although it's clearly a software issue, it's not clear whether it's in Insta360's code or the DockKit layer. Any ideas for how I can get out of this mode? My concern is that the phone is on a tripod about 10' off the floor and not very accessible. Also, if all goes well, we may have about 50 of these systems running, and having to fix them one by one after a few hours is not good.
For a demonstration of this bug, see the following video:
https://octoparry.com/offset.MOV
Any help greatly appreciated.
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds). I am sure they are correct.
Two permissions have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error as Any)
            completion(success, error)
        }
    }
}

guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "can't find Live Photo")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        // handle success (e.g. show a confirmation)
    } else {
        // handle failure (e.g. surface the error to the user)
    }
}
Really need help, thanks
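One more note on the setup (this part is an assumption on my side, not something in the code above): from what I've read, Photos only pairs the still and the video into a Live Photo when both carry the same content identifier — the JPEG in its Apple maker-note metadata (key "17") and the MOV as a com.apple.quicktime.content.identifier metadata item. A rough sketch of stamping the identifier into the still image (helper name and URLs are illustrative):

import ImageIO

// Illustrative helper: copy the still image to a new URL, adding the content
// identifier Photos uses to pair it with the video ("17" is the Apple maker-note key).
func addContentIdentifier(_ identifier: String, from sourceURL: URL, to destinationURL: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil) else {
        return false
    }
    var properties = (CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]) ?? [:]
    var makerApple = (properties[kCGImagePropertyMakerAppleDictionary] as? [CFString: Any]) ?? [:]
    makerApple["17" as CFString] = identifier
    properties[kCGImagePropertyMakerAppleDictionary] = makerApple
    CGImageDestinationAddImageFromSource(destination, source, 0, properties as CFDictionary)
    return CGImageDestinationFinalize(destination)
}

The video side (the QuickTime content identifier and the still-image-time metadata track) would need the same identifier, which I haven't shown here.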
I have a complex CoreImage pipeline which I'm keen to optimise. I'm aware that calling back to the CPU can have a significant impact on the performance - what is the best way to find out where this is happening?
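For reference, this is how I'm creating the context today (trimmed sketch; the name is just illustrative). I understand the Core Image template in Instruments and launching with the CI_PRINT_TREE environment variable set can reveal where renders fall back to the CPU, but I'm not sure that's the best workflow:

import CoreImage

// Naming the context helps attribute its renders in Instruments; disabling
// intermediate caching makes repeated timing runs more comparable.
let context = CIContext(options: [
    .name: "PipelineUnderTest",
    .cacheIntermediates: false
])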
Hey,
I have a camera app that captures a ProRaw photo and then runs a few Core Image filters before saving it to the device as a HEIC. However I'm finding that capturing at 48MP is rather slow. Testing a minimal pipeline on an iPhone 16 Pro:
Shutter press => file received in output: 1.2 ~ 1.6s
CIRawFilter created using photo file representation then rendered to context, without any filters: 0.8s ~ 1s
Saving to device ~0.15s
Is this the expected time for capture and processing? The native Camera app seems to save the images within half a second. I'm using QualityPrioritization.balanced and the highest resolution available, which is 48MP.
Would using the CIRawFilter with the pixelBuffer from the photo output be faster? I tried it but couldn't get it to output an image. Are there any other things I could try to speed this up? Is it possible to capture at 24MP instead?
Thanks,
Alex
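For reference, the minimal pipeline I'm timing looks roughly like this (trimmed sketch; `photo` is the AVCapturePhoto from the delegate callback and `context` is a shared CIContext):

import AVFoundation
import CoreImage

// Trimmed sketch of the timed path: ProRAW file data -> CIRawFilter -> CGImage.
func renderRaw(from photo: AVCapturePhoto, using context: CIContext) -> CGImage? {
    guard let rawData = photo.fileDataRepresentation(),
          let rawFilter = CIRawFilter(imageData: rawData, identifierHint: nil),
          let ciImage = rawFilter.outputImage else {
        return nil
    }
    return context.createCGImage(ciImage, from: ciImage.extent)
}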
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera).
Problem: Even when I specify constraints like:
{
  video: {
    deviceId: '...',
    facingMode: { exact: 'environment' },
    advanced: [{ zoom: 1.0 }]
  },
  audio: false
}
On iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement.
This breaks the heart rate detection, and it forces the user to move their finger in the middle of the measurement.
Question: Is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching?
I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity.
Any suggestions, workarounds, or updates from Apple would be greatly appreciated!
Thanks a lot!
I am following the Apple sample code and trying to add a manual focus lens position slider:
@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus Slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}
So there are these AVCaptureSessionControlsDelegate methods:
final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}
So when self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user through the app UI. Is there a way to see whether self.cameraControlFocusSlider is active or being used?
Please note that I will have more than one AVCaptureSlider in the final code.
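For context, this is how I currently mirror the lens position in my own UI: a minimal sketch that observes AVCaptureDevice.lensPosition via KVO (updateFocusUI is a placeholder of mine). What I'm missing is an equivalent signal telling me the AVCaptureSlider itself is active:

private var lensPositionObservation: NSKeyValueObservation?

private func startObservingLensPosition() {
    // lensPosition is key-value observable, so autofocus changes and manual changes
    // made through the app UI both arrive here.
    lensPositionObservation = self.videoDevice?.observe(\.lensPosition, options: [.new]) { [weak self] device, _ in
        DispatchQueue.main.async {
            self?.updateFocusUI(with: device.lensPosition)  // placeholder UI update
        }
    }
}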
Hi
This is one of our top crashes. It does not contain any of our code in the stacktrace and we can't reproduce it. Those points make this crash very hard to understand and fix. We know that most of the crashes are happening on iPhone 13 with iOS 18.x.x. Also we see that a lot of cases happen when app goes into background (stacktrace contains -[UIApplication _applicationDidEnterBackground]).
2025-03-04_16-06-00.3670_-0500-6a273c7d5da97f098b5cc24898bb9761dc45208e.crash
2025-03-04_20-21-08.6609_-0500-2c08f640900f8a62c4f8a4f6f2a61feb052e66dd.crash
2025-03-04_20-46-27.7138_+0000-4d7ea89b1b564eda22ca63e708f7ad3909c7b768.crash
I found that when I build and run my app with Xcode 16 or later, if I enable the floating window (picture-in-picture) feature and then open the system camera, the content in the floating window is not displayed; the floating window shows a black screen. This does not happen when building with Xcode 15.4. It is the same code, and I don't know why.
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via USB-C or Lightning connector) on an iPad/iPhone?
I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad Gen 10 (18.3.1), and both return false for:
// returns false on iPhone 14, iPad gen 10
print(AVExternalStorageDeviceDiscoverySession.isSupported)
The following code returns nil when I try to access the external storage discovery session.
// returns nil on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)
The following returns false, without displaying a permission dialog:
AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})
What types of iOS devices are supported by AVExternalStorageDeviceDiscoverySession?
What situations is it intended for (e.g. connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)?
Is there any sample code for using the AVExternalStorageDevice API?
Hi,
Currently I am developing a 3D reconstruction project, which requires images that are distortion-free (rectilinear) and have known intrinsics.
The session I am developing on uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false (to be able to get the intrinsic matrix of the images), isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false (to get both constituent images), and isCameraCalibrationDataDeliveryEnabled set to true (to actually get the calibration data).
The distortion correction parameters, such as lensDistortionLookupTable, are used.
The 42-coefficient mapping array is applied as described in the AVCameraCalibrationData header file, using simple piecewise linear interpolation.
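For clarity, the interpolation I use is roughly the following (simplified sketch; variable names are mine, modeled on the sample in the AVCameraCalibrationData header):

// lookupTable is the lensDistortionLookupTable as [Float32]; radius is the distance
// of a pixel from lensDistortionCenter; maxRadius is the distance from the center
// to the farthest image corner. Returns the interpolated magnification coefficient.
func magnification(forRadius radius: Float, maxRadius: Float, lookupTable: [Float32]) -> Float {
    guard !lookupTable.isEmpty, maxRadius > 0 else { return 0 }
    // Normalize the radius to [0, 1] and locate it in the table.
    let position = max(0, min(1, radius / maxRadius)) * Float(lookupTable.count - 1)
    let lowerIndex = Int(position.rounded(.down))
    let upperIndex = min(lowerIndex + 1, lookupTable.count - 1)
    let fraction = position - Float(lowerIndex)
    // Piecewise linear blend between the two nearest coefficients.
    return lookupTable[lowerIndex] * (1 - fraction) + lookupTable[upperIndex] * fraction
}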
There are two questions I would like to get support on:
A way to set the calibration parameters in each image.
I have an approach that sets the parameters in the kCGImagePropertyExifDictionary -> "UserComment". Is there a better way to write calibration parameter data into the images? I feel like this is a bit dirty, and there might be a neater approach.
For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array.
For example (last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem is that when the complete array (including the zeros) is used to correct the image, the result is wrapped into a circle-like shape near the edges, which is completely wrong.
In contrast, if the lensDistortionLookupTable is used without the trailing zeros (with the array size adjusted accordingly), the image looks better: not as rectilinear as an image taken with the iPhone's Camera app, but definitely less distorted.
Including zeros (full array):
Excluding zeros (array size changed):
Am I missing an important point in the usage of the lensDistortionLookupTable that addresses this case (zeros at the end)?
What are the criteria for shrinking/excluding elements of the array?
Any advice is very much welcome.
I am writing an iOS app to present a slide show of assets in a Photos album, in a random order, including videos and Live Photos. I have got it all working quite nicely, but for a Live Photo I need to know which effect is selected (Live, Loop, Bounce, Long Exposure, Live Off) to display the image correctly. I can't find any mention of getting this information in the documentation. Does anyone know how to do this? Thanks in advance.
Adrian.
(Xcode 16.1 iOS 18.0)
Hi,
I'm using Core Graphics to load a .DNG photo shot by a Leica Q3 camera.
The photo is shot in portrait, however the embedded preview is rotated 90 degrees to landscape.
I load the photo like this:
let options = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
let source = CGImageSourceCreateWithData(data as CFData, nil)!
let cgimage = CGImageSourceCreateImageAtIndex(source, 0, options)
let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
When doing this I can see that the orientation property is 1, indicating that the orientation is 'Up', which it isn't.
If I don't specify the kCGImageSourceDecodeToHDR option (essentially passing nil for the options), the orientation property is 8 (rotated 90 degrees).
What puzzles me is that a change to the CGImageSourceCreateImageAtIndex call can have an influence on the later call to CGImageSourceCopyPropertiesAtIndex.
I would expect these to work independently?
Cheers
Thomas
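One thing I'm going to try as a workaround is reading the properties before creating the HDR image, in case the ordering matters; a trimmed sketch (assuming `data` holds the DNG bytes, as above):

import ImageIO

// Read the orientation first, then decode to HDR afterwards.
let source = CGImageSourceCreateWithData(data as CFData, nil)!
let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
let orientation = (properties?[kCGImagePropertyOrientation] as? UInt32) ?? 1
let options = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
let cgimage = CGImageSourceCreateImageAtIndex(source, 0, options)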
I set the device format and color space to Apple Log and turn off HDR, so why is the movie output still in HDR format rather than ProRes Log?
Full runnable demo here:
https://github.com/SpaceGrey/ColorSpaceDemo
session.sessionPreset = .inputPriority

// get the back camera
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
backCamera = deviceDiscoverySession.devices.first!

try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false
let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))
backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// add output
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(output.outputSettings(for: connection))
/*
["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080, "AVVideoCodecKey": apch, <----- ProRes is enabled.
 "AVVideoCompressionPropertiesKey": {
     AverageBitRate = 220029696;
     ExpectedFrameRate = 30;
     PrepareEncodedSampleBuffersForPaddedWrites = 1;
     PrioritizeEncodingSpeedOverQuality = 0;
     RealTime = 1;
 }]
*/

previewSource = DefaultPreviewSource(session: session)
queue.async {
    self.session.startRunning()
}
According to the docs:
The first time your app performs an operation that requires [photo library] authorization, the system automatically and asynchronously prompts the user for it.
(https://developer.apple.com/documentation/photokit/delivering-an-enhanced-privacy-experience-in-your-photos-app)
I.e. it's not necessary for the app to call PHPhotoLibrary.requestAuthorization.
This does seem to be what happens when my app runs on an iPhone or iPad; the prompt is shown. But when it runs on a Mac in "designed for iPad" mode, the permission dialog is not presented. Instead the code continues to see status == .notDetermined.
That's today, on macOS 15.3. It may have worked in the past.
Is anyone else seeing issues with this? Should I call requestAuthorization explicitly? (Would that actually work?)
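If it matters, this is the explicit request I'm considering adding (a minimal sketch):

import Photos

// Request read/write access explicitly if the status is still undetermined.
func ensurePhotoLibraryAccess(_ completion: @escaping (PHAuthorizationStatus) -> Void) {
    let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    guard status == .notDetermined else {
        completion(status)
        return
    }
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { newStatus in
        DispatchQueue.main.async { completion(newStatus) }
    }
}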
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing some steps.
I tried to:
convert my depth map into pixel buffer
create image destination ref and add the image there.
add the auxData (depth map dict)
This is the output:
There is some black space there and my RGB image colour changes
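For reference, the write path I described above looks roughly like this (simplified sketch; it assumes `depthData` is an AVDepthData built from my grayscale map, and the function name is mine):

import AVFoundation
import ImageIO

// Copy the RGB image into a destination, then attach the depth map as auxiliary data.
func writeImage(addingDepth depthData: AVDepthData, imageURL: URL, outputURL: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, type, 1, nil) else {
        return false
    }
    CGImageDestinationAddImageFromSource(destination, source, 0, nil)

    var auxDataType: NSString?
    guard let auxInfo = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxDataType),
          let auxType = auxDataType else {
        return false
    }
    CGImageDestinationAddAuxiliaryDataInfo(destination, auxType as CFString, auxInfo as CFDictionary)
    return CGImageDestinationFinalize(destination)
}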
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }
    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }

    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()

        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }

        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }

        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()

        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }

        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
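For reference, this is roughly how I apply the torch to a device (a simplified sketch, not my exact forceEnableTorchThroughWide implementation):

// Apply the torch on a given device, but only if the device reports a usable torch.
private func applyTorch(_ on: Bool, to device: AVCaptureDevice) {
    guard device.hasTorch, device.isTorchAvailable else { return }
    do {
        try device.lockForConfiguration()
        if on {
            try device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}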
I am using AVMulti so the user captures two images. How can I access those images if there is only one URL that stores the captured images for the lockScreenCapture extension? Also, how can I detect whether the user opened the app from the extension, so I can navigate them to the right screen?
Hello, I'm wondering how to capture 24MP photos.
I'm currently testing on an iPhone 16 Pro Max. By default, the device's activeFormat supports 24MP (photo dimensions: {4032x3024, 5712x4284}). For the photoOutput, I'm setting the maxPhotoDimensions to videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject, and setting MaxPhotoQualityPrioritization to quality.
When capturing, I'm applying the same maxPhotoDimensions and photoQualityPrioritization settings from the photoOutput directly to the AVCapturePhotoSettings.
What could be the issue?
// Objective-C
// setup
[self.photoOutput setMaxPhotoQualityPrioritization:AVCapturePhotoQualityPrioritizationQuality];
CMVideoDimensions maxPhotoDimensions = [(NSValue *)videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject CMVideoDimensionsValue];
[self.photoOutput setMaxPhotoDimensions:maxPhotoDimensions];
// capturing
AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettings];
photoSettings.maxPhotoDimensions = self.photoOutput.maxPhotoDimensions;
photoSettings.photoQualityPrioritization = self.photoOutput.maxPhotoQualityPrioritization;
[self.photoOutput capturePhotoWithSettings:photoSettings delegate:photoCaptureDelegate];
...
When trying to edit some Live Photos, calling PHLivePhotoEditingContext.saveLivePhoto results in the following error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12815), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x300d05380 {Error Domain=NSOSStatusErrorDomain Code=-12815 "(null)"}}
I was able to replicate it on my device by taking a new Live Photo. Not sure what's wrong with that one specifically, not all Live Photos replicate the issue.
I've submitted FB15880825 with a sysdiagnose and a Photos Diagnostics as well. Any ideas what's going on here? It's impacting multiple customers. Thanks!