Discuss using the camera on Apple devices.

Posts under Camera tag

171 Posts
Post not yet marked as solved
0 Replies
4 Views
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is set up with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled, since some features we use are not supported in that mode. At the moment we are manually trying to synchronize frames between the two cameras, but we would like to use AVCaptureDataOutputSynchronizer to improve our results. Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs I've found show synchronization of video with depth, metadata, or audio, but not two video outputs. From my testing, the dataOutputSynchronizer fires with either the wide video output or the ultra-wide video output, but never both (at least one is nil), suggesting that they are not being synchronized.

```
self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)

...

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedWideData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // Either syncedWideData or syncedUltraData is always nil, so the guard never passes.
}
```
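In case the synchronizer really does not pair two video outputs, a fallback is to match buffers manually by presentation timestamp. This is only a sketch of the manual approach mentioned above; the matcher type and the 10 ms tolerance are illustrative assumptions, not a recommended value.

```
import AVFoundation
import CoreMedia

// Sketch: pair wide and ultra-wide sample buffers by presentation timestamp.
final class ManualFrameMatcher {
    private var latestWide: CMSampleBuffer?
    private var latestUltra: CMSampleBuffer?
    private let tolerance = CMTime(value: 1, timescale: 100) // 10 ms, tune as needed

    // Feed every buffer from both data-output callbacks; returns a pair once
    // the latest wide and ultra-wide frames are within the tolerance.
    func add(_ buffer: CMSampleBuffer, isWide: Bool) -> (wide: CMSampleBuffer, ultra: CMSampleBuffer)? {
        if isWide { latestWide = buffer } else { latestUltra = buffer }
        guard let wide = latestWide, let ultra = latestUltra else { return nil }

        let wideTime = CMSampleBufferGetPresentationTimeStamp(wide)
        let ultraTime = CMSampleBufferGetPresentationTimeStamp(ultra)
        let delta = CMTimeAbsoluteValue(CMTimeSubtract(wideTime, ultraTime))

        guard CMTimeCompare(delta, tolerance) <= 0 else { return nil }
        latestWide = nil
        latestUltra = nil
        return (wide, ultra)
    }
}
```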
Posted by nanders. Last updated.
Post not yet marked as solved
1 Reply
104 Views
Hello everyone, I am a student working on the final project of my college degree. I don't have a paid developer account, since I don't need to put my app on the App Store. In my project I need to use the camera of an iOS device, and I know I need to add NSCameraUsageDescription to Info.plist. However, when I add the description to my Info.plist and build the project, it fails with: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the NSCameraUsageDescription and NSPhotoLibraryUsageDescription entitlements. I also notice that when I change the property list type to entitlements, I can't find NSCameraUsageDescription when I add a row. What's the problem? Is this because I am not an official developer?
Posted. Last updated.
Post not yet marked as solved
0 Replies
99 Views
I've been trying to follow the "Supporting Continuity Camera in Your Mac App" article to implement the "Import from iPhone or iPad" menu for my macOS app. I've been able to replicate most of the article in a test AppKit application, but cannot do the same in my SwiftUI application. I'm not sure how to get the "NSMenuItemImportFromDeviceIdentifier" identifier into a SwiftUI Menu, or how to create an NSMenu with an NSMenuItem for a SwiftUI app. I'm also not sure how to handle receiving the image in the SwiftUI environment. Any advice you might have is appreciated. Thanks!
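For reference, a minimal sketch of how the AppKit pieces from the article might be bridged into SwiftUI. It assumes the identifier string "NSMenuItemImportFromDeviceIdentifier" and the services-based image delivery the article describes; the menu title, type names, and callback here are illustrative, not the article's exact code.

```
import SwiftUI
import AppKit

// Receives the image via the services mechanism. The view must be in the
// responder chain (e.g. made first responder) for the menu items to enable.
final class ContinuityCameraReceiverView: NSView, NSServicesMenuRequestor {
    var onImage: ((NSImage) -> Void)?

    override var acceptsFirstResponder: Bool { true }

    override func validRequestor(forSendType sendType: NSPasteboard.PasteboardType?,
                                 returnType: NSPasteboard.PasteboardType?) -> Any? {
        // Accept image data coming back from the device.
        if let returnType, NSImage.imageTypes.contains(returnType.rawValue) {
            return self
        }
        return super.validRequestor(forSendType: sendType, returnType: returnType)
    }

    func readSelection(from pboard: NSPasteboard) -> Bool {
        guard let image = pboard.readObjects(forClasses: [NSImage.self], options: nil)?.first as? NSImage else {
            return false
        }
        onImage?(image)
        return true
    }
}

struct ContinuityCameraReceiver: NSViewRepresentable {
    var onImage: (NSImage) -> Void

    func makeNSView(context: Context) -> ContinuityCameraReceiverView {
        let view = ContinuityCameraReceiverView()
        view.onImage = onImage
        return view
    }
    func updateNSView(_ nsView: ContinuityCameraReceiverView, context: Context) {}
}

// After launch (e.g. in an NSApplicationDelegateAdaptor's applicationDidFinishLaunching),
// add a File-menu item carrying the identifier so AppKit can populate the
// "Import from iPhone or iPad" submenu.
func installImportMenuItem() {
    let item = NSMenuItem(title: "Import from iPhone or iPad", action: nil, keyEquivalent: "")
    item.identifier = NSUserInterfaceItemIdentifier(rawValue: "NSMenuItemImportFromDeviceIdentifier")
    NSApp.mainMenu?.item(withTitle: "File")?.submenu?.addItem(item)
}
```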
Posted. Last updated.
Post not yet marked as solved
1 Reply
151 Views
I'm not sure if I just missed a recent breaking change, but we are having an issue with the camera in our single-page app on iOS 17.4.1 in Safari. We can open the camera and display it to the user using getUserMedia. However, if the path of the site changes at all (for example, the user clicks a button that opens a side panel, which changes the path in the browser), the camera goes black, even though the video element is still being displayed. I can see in the browser that the camera has stopped, and the user has to re-enable it manually by tapping "Start Using Camera". Any ideas what could be going on here?
Posted by jungles13. Last updated.
Post not yet marked as solved
1 Reply
112 Views
I have the following code:

```
function load() {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
      var videoElement = document.getElementById('video');
      videoElement.srcObject = stream;
    })
    .catch(function (error) {
      console.log('navigator.MediaDevices.getUserMedia error: ', error.message, error.name);
    });
}
```

```
<video id="video" playsinline autoplay></video>
```

The code works fine in browsers and loads a camera on iOS, but I can't seem to get the full iOS camera overlay (such as zooming, etc.); I just get a basic camera stream. Is it possible to stream the camera in a browser with full iOS camera functionality?
Posted. Last updated.
Post not yet marked as solved
0 Replies
79 Views
Why does using CameraPicker require user authorization through a pop-up? Why don't ImagePicker or PhotoPicker require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not? Additionally, I thought that by configuring the picker I would theoretically not need any permissions. If permissions are still required, wouldn't it make more sense to directly request camera permission and use the native camera functionality? What, then, are the advantages of using the picker?
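One related data point, sketched here as context rather than a definitive answer: PHPickerViewController (PhotosUI) runs out of process and does not prompt for photo-library permission, which is why a configured picker can work without an authorization pop-up, whereas the camera always requires NSCameraUsageDescription and a camera-permission prompt. The hosting controller and delegate wiring below are illustrative.

```
import PhotosUI
import UIKit

// Minimal sketch: presenting a photo picker that needs no photo-library permission.
final class PickerHost: UIViewController, PHPickerViewControllerDelegate {
    func presentPicker() {
        var config = PHPickerConfiguration()
        config.filter = .images
        config.selectionLimit = 1
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = self
        present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        guard let provider = results.first?.itemProvider,
              provider.canLoadObject(ofClass: UIImage.self) else { return }
        provider.loadObject(ofClass: UIImage.self) { image, _ in
            // Use the UIImage (hop back to the main queue for UI work).
            _ = image as? UIImage
        }
    }
}
```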
Posted. Last updated.
Post not yet marked as solved
2 Replies
240 Views
In this demo I load index.html into a WKWebView; when I tap the file button, the camera page is presented and then dismissed immediately.

ViewController.h / ViewController.m:

```
@property (nonatomic, strong) WKWebView *wkWebView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    WKWebViewConfiguration *configuration = [WKWebViewConfiguration new];
    self.wkWebView = [[WKWebView alloc] initWithFrame:CGRectMake(0, 0, 400, 300) configuration:configuration];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"index" withExtension:@"html"];
    [self.wkWebView loadFileURL:url allowingReadAccessToURL:[[NSBundle mainBundle] bundleURL]];
    self.wkWebView.backgroundColor = [UIColor blueColor];
    [self.view addSubview:self.wkWebView];
}
```

index.html:

```
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <div>
        <label style="font-size: 40px;">open camera</label>
        <input type="file" accept="image/*" capture="camera" id="file-input" class="file-input">
    </div>
</body>
</html>
```
Posted by leozzz. Last updated.
Post not yet marked as solved
0 Replies
114 Views
Hi guys, I'm designing a customized camera based on AVFoundation. I can output Live Photos from an AVCaptureDeviceInput for now. I'd like to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9). I didn't find any useful info in the docs; any suggestions?
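In case it helps, the stock Camera app's 1:1 and 16:9 modes are generally understood to be crops of the sensor's native 4:3 output. A rough sketch of cropping a captured still to a target aspect ratio is below; the helper name and the center-crop choice are assumptions, and the video component of a Live Photo would need separate handling.

```
import CoreGraphics

// Sketch: center-crop a CGImage to a target aspect ratio
// (e.g. 1.0 for 1:1, 16.0 / 9.0 for 16:9).
func centerCrop(_ image: CGImage, toAspectRatio ratio: CGFloat) -> CGImage? {
    let width = CGFloat(image.width)
    let height = CGFloat(image.height)

    var cropWidth = width
    var cropHeight = width / ratio
    if cropHeight > height {
        cropHeight = height
        cropWidth = height * ratio
    }

    let cropRect = CGRect(x: (width - cropWidth) / 2,
                          y: (height - cropHeight) / 2,
                          width: cropWidth,
                          height: cropHeight).integral
    return image.cropping(to: cropRect)
}
```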
Posted by ayumizll. Last updated.
Post not yet marked as solved
0 Replies
138 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled = true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. Ideally our app would keep the timestamp difference between the photos in a single capture request as short as possible, but we don't have a good idea of what the theoretical or practical limits of this timestamp difference are. In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approximately 0.3 ms, which seems smaller than I would expect unless the frames are being synchronized incredibly well under the hood. This leaves the following unanswered questions:
1. What sort of ranges of values should we expect to come out of these timestamp differences between photos?
2. What factors influence this?
3. Is there any way to control these values to ensure they are as small as possible? (This will likely be answered by 2.)
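For context, this is roughly how the timestamp difference can be measured in the photo delegate - a sketch only; the stored array and the assumption of exactly two constituent photos per request are illustrative.

```
import AVFoundation

// Sketch: collect timestamps of the constituent photos from one capture request
// and compute the spread between them.
final class PhotoPairTimer: NSObject, AVCapturePhotoCaptureDelegate {
    private var constituentTimestamps: [CMTime] = []

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil else { return }
        constituentTimestamps.append(photo.timestamp)
        guard constituentTimestamps.count == 2 else { return }

        let deltaSeconds = abs(constituentTimestamps[0].seconds - constituentTimestamps[1].seconds)
        print("Constituent photo timestamp difference: \(deltaSeconds * 1000) ms")
        constituentTimestamps.removeAll()
    }
}
```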
Posted by nanders. Last updated.
Post not yet marked as solved
0 Replies
234 Views
After my iPad 6 was upgraded from iPadOS 17.3 to 17.4, the AVCaptureMetadataOutput delegate is not called anymore. I found the same problem reported in a Stack Overflow post: https://stackoverflow.com/questions/78128010/ipados-17-4-avcapturemetadataoutput-delegate-not-called-qrscanner An Apple support page says the "QR code scanning" issue is fixed in iPadOS 17.4.1: "If your iPad is unable to scan QR codes after updating to iPadOS 17.4" - https://support.apple.com/en-lamr/118614 That's true; I can confirm it on my iPad 6. But, unfortunately, iPadOS 17.4.1 fixes ONLY QR code scanning. It doesn't fix barcode scanning, such as PDF417. Happening on: iPad (7th Generation), iPad (6th Generation), iPad Pro 12.9-inch (2nd Generation), iPad Pro 10.5-inch.
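For anyone trying to reproduce this, a minimal metadata-output setup is sketched below (standard AVFoundation calls; the session input and preview wiring are omitted). On the affected iPads it is the delegate callback referenced at the end that reportedly stops arriving after 17.4.

```
import AVFoundation

// Sketch: minimal metadata-output configuration for PDF417 (and QR) scanning.
func configureMetadataOutput(on session: AVCaptureSession,
                             delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return }
    session.addOutput(output)

    // Only types listed in availableMetadataObjectTypes (populated once the
    // output is attached to a session with a camera input) are valid to request.
    let wanted: [AVMetadataObject.ObjectType] = [.pdf417, .qr]
    output.metadataObjectTypes = wanted.filter { output.availableMetadataObjectTypes.contains($0) }

    // metadataOutput(_:didOutput:from:) on this delegate is the callback in question.
    output.setMetadataObjectsDelegate(delegate, queue: .main)
}
```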
Posted by ArshadHK. Last updated.
Post not yet marked as solved
1 Reply
215 Views
Hey! I'm trying to open the front camera in my demo app, and from what I read in the Apple docs and forums, if you have configured your Persona you will get that image. But I'm having some issues with it. This is my code:

```
struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)

                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)

                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() {
    }
}
```

In openFrontCamera() I tried using .devices(), .default(), and other methods like you would use for other Apple devices, but they don't work with Vision Pro, and I can't find anything that tells me how to open it. Has anyone been able to work this out?
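A discovery-session sketch of what openFrontCamera() might start from is below. Whether visionOS actually returns a usable device here - and whether Persona capture is exposed this way to third-party apps at all - is exactly the open question, so treat this as an assumption-laden starting point rather than a working answer.

```
import AVFoundation

func openFrontCamera() {
    // Ask for any front-facing camera the platform is willing to expose.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: .front)

    guard let device = discovery.devices.first else {
        print("No front camera device exposed on this platform.")
        return
    }

    let session = AVCaptureSession()
    do {
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) {
            session.addInput(input)
            session.startRunning()
            print("Capture session running with \(device.localizedName).")
        }
    } catch {
        print("Could not create camera input: \(error)")
    }
}
```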
Posted. Last updated.
Post not yet marked as solved
0 Replies
170 Views
For our application, we are aiming to have full control over setting and locking the camera exposure settings when taking a video. We're working with Apple's AVFoundation framework on a range of devices, but most of the development is focused on the iPad 8 front camera. The manual settings are specific to our use, so we aim to use the custom exposure mode with e.g. ISO = 100, exposureDuration = 1/60, and a fixed white balance. The duration, ISO, and white balance are all set in advance of recording, but when we begin we can see that something is still adjusting and compensating for lighting changes. We then also tried locking the exposure mode after setting the custom values, but there appears to be a delay in this lock taking effect. While tracking the ISO during a recording, we see that the ISO values change in the first second of the recording, leading to oversaturated images, despite our efforts to keep it locked. This is our attempt to lock the settings using custom mode (we don't adjust them ourselves during the recording), but it does not work as expected:

```
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    do {
        try cameraDevice?.lockForConfiguration()
        cameraDevice?.automaticallyAdjustsVideoHDREnabled = false
        cameraDevice?.setExposureModeCustom(duration: newValueDuration, iso: newValueISO, completionHandler: { [self] _ in
            cameraDevice?.setWhiteBalanceModeLocked(with: cameraDevice!.deviceWhiteBalanceGains)
            if cameraDevice!.isFocusModeSupported(.locked) {
                cameraDevice!.focusMode = .locked
                debugPrint("Focus mode set to locked.")
            }
            cameraDevice?.unlockForConfiguration()
        })
    } catch {
        debugPrint("Error adjusting the exposure")
        cameraDevice?.unlockForConfiguration()
    }
}
```

We then tried to lock the exposure mode after setting the custom values, but it still changes during the first second of the recording. We also explicitly tried setting exposureTargetBias to 0, but this made no difference.

```
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    guard let camera = cameraDevice else { return }
    if camera.isExposureModeSupported(.custom) {
        do {
            try camera.lockForConfiguration()
            let customExposureBias: Float = 0
            // camera.setExposureTargetBias(customExposureBias, completionHandler: nil)
            camera.setExposureModeCustom(duration: newValueDuration, iso: newValueISO) { [weak camera] _ in
                guard let camera = camera else { return }
                if camera.isExposureModeSupported(.locked) {
                    camera.exposureMode = .locked
                }
            }
            camera.unlockForConfiguration()
            print("Exposure settings locked with custom values.")
        } catch {
            print("Failed to lock configuration for capture device: \(error.localizedDescription)")
            camera.unlockForConfiguration()
        }
    } else {
        print("Custom exposure mode is not supported.")
    }
}
```

We would very much appreciate input on how to keep the manually selected camera settings fixed throughout the video recording.
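One avenue worth considering (sketched below, not verified): setExposureModeCustom's completion handler reports the timestamp of the first frame to which the requested settings have been applied, so gating the start of recording on frames at or after that timestamp might avoid capturing the first second while the device is still converging. The recordingMayStart handler is illustrative.

```
import AVFoundation
import CoreMedia

// Sketch: apply custom exposure, then only allow recording once the settings
// have taken effect (the completion handler's CMTime marks that frame).
func applyCustomExposure(on device: AVCaptureDevice,
                         iso: Float,
                         duration: CMTime,
                         recordingMayStart: @escaping (CMTime) -> Void) {
    do {
        try device.lockForConfiguration()
        device.setExposureModeCustom(duration: duration, iso: iso) { appliedTime in
            // appliedTime is the presentation timestamp of the first buffer
            // captured with the requested duration/ISO.
            recordingMayStart(appliedTime)
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

// Usage idea: in captureOutput(_:didOutput:from:), drop sample buffers whose
// presentation timestamp is earlier than appliedTime before writing to the AVAssetWriter.
```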
Posted. Last updated.
Post not yet marked as solved
0 Replies
170 Views
Hey all! I'm building a camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.) When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS Camera app, they are mono audio, not stereo audio. I wonder how recording in stereo audio works - are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?

This is my audio session code:

```
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")

    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
```

This is how I activate the audio session just before I start recording:

```
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])

if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}

if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}

audioCaptureSession.startRunning()
```

And this is how I set up the AVAssetWriter:

```
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)

ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
```

The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono. Is there anything I'm missing here?
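One avenue that might be relevant, sketched under the assumption that the built-in microphone is the input: on recent iPhones, stereo capture is driven through AVAudioSession by selecting a built-in-mic data source that supports the .stereo polar pattern and setting an input orientation (iOS 14+). The asset-writer settings would then also need AVNumberOfChannelsKey set to 2 if the recommended settings still report one channel. Not verified against this exact pipeline.

```
import AVFoundation

// Sketch: ask the built-in microphone for stereo input via AVAudioSession.
func enableStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()

    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else {
        return
    }
    try session.setPreferredInput(builtInMic)

    // Pick a data source that supports the stereo polar pattern.
    guard let dataSource = builtInMic.dataSources?.first(where: {
        $0.supportedPolarPatterns?.contains(.stereo) == true
    }) else {
        return
    }
    try dataSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(dataSource)

    // Align the stereo image with the current UI orientation.
    try session.setPreferredInputOrientation(.portrait)
}
```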
Posted by mrousavy. Last updated.
Post not yet marked as solved
8 Replies
1.7k Views
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes. After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue: iPad (7th Gen), iPad (6th Gen), iPad Pro (10.5-inch), iPad Pro (12.9-inch, 2nd Gen). This issue does not occur on any other devices I have, so it may only occur on devices with the model identifier "iPad7,x". I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after iPadOS 17.4, or is there some problem on the OS side?
Posted by N.Otani. Last updated.
Post not yet marked as solved
0 Replies
220 Views
There is a similar post on Stack Overflow, and multiple people have reported this (you will encounter it if you run the app for around 10 minutes). I'm hoping this could get Apple's attention somehow. After downloading the project code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera) and running the Swift sample code on an iPhone 14 Pro, the app crashes intermittently, throwing this error:

Execution of the command buffer was aborted due to an error during execution. Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)

Sometimes it will crash within a few seconds, sometimes it can take around 10 minutes. Has anyone here experienced this crash from the sample code, or using the LiDAR camera? I have spent a long time trying to solve this issue; I have searched the web high and low and submitted (I think) a report to Apple about it. I am unable to get Xcode to show me the line of code where the crash is happening. Any help would be greatly appreciated.
Posted. Last updated.
Post not yet marked as solved
0 Replies
221 Views
I am currently renovating an application for macOS Sonoma (14.4) which triggers a Canon 60D via a USB cable. Unlike what happened before on macOS 10.6, the camera's (ICCameraDevice) description now contains only two capabilities:

```
{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO
```

As you can see, ICCameraDeviceCanTakePicture is not present now, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
Posted by da_gagnon. Last updated.
Post not yet marked as solved
2 Replies
291 Views
When running an iOS app designed for iPad on an M1 Mac mini, the UIImagePickerController.isSourceTypeAvailable(.camera) API returns true, leading to a crash (attached) if the camera is selected to upload an image to the app, as my much-loved Mac mini does not have a camera. For the moment I have disabled the camera when the platform is Mac by adding the qualification ProcessInfo().isiOSAppOnMac == false, but this seems like a bug - or does the crash also happen on Macs that do have cameras? Other image picker options work fine. Crash log
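For reference, the workaround described above amounts to roughly the following sketch; the helper name is illustrative.

```
import UIKit

// Sketch of the guard mentioned above: only offer the camera source when it is
// reported available and the app is not an iOS app running on a Mac.
func cameraSourceIsUsable() -> Bool {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return false }
    if #available(iOS 14.0, *), ProcessInfo.processInfo.isiOSAppOnMac {
        return false
    }
    return true
}
```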
Posted by ptclarke. Last updated.
Post marked as solved
1 Reply
230 Views
I want to take 48 MP photos and get the same ISO and exposure duration that I set.

Configuration:
- Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
- Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
- If AVCaptureDevice.isExposureModeSupported(.custom), set AVCaptureDevice.exposureMode = .custom
- Call AVCaptureDevice.setExposureModeCustom(duration: 1/20, iso: 100, completionHandler: handler)

Taking a photo:
- Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)

The API discussion of setExposureModeCustomWithDuration (https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/) told me: To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed.

But at the last step, when I set AVCapturePhotoSettings.photoQualityPrioritization = .speed, the photo resolution is (4000, 3000), only 12 MP, not (8000, 6000); the ISO and exposure duration on the photo are the same as what I set. And when I set AVCapturePhotoSettings.photoQualityPrioritization = .balanced / .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained on the photo are different from what I set. What do I need to do to take 48 MP photos and set the ISO and exposure duration successfully?
Posted by Zard. Last updated.
Post not yet marked as solved
3 Replies
359 Views
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:

Configuration:
- Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
- Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality

Taking a photo:
- Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoSettings.photoQualityPrioritization to .quality

As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug? A sketch of the configuration being described follows the list below.
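For completeness, the steps above correspond roughly to the following sketch; the device/output parameter names are placeholders, and this is the code path that reportedly stops returning 8064x6048 under .inputPriority on 17.4.

```
import AVFoundation

// Sketch of the 48 MP configuration steps described above (iOS 16+ APIs).
func configureFor48MP(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    // 1. Pick an active format that advertises 8064x6048 stills.
    guard let format = device.formats.first(where: { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()

    // 2. Raise the output's limits.
    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    photoOutput.maxPhotoQualityPrioritization = .quality
}

func make48MPSettings() -> AVCapturePhotoSettings {
    // 3. Request the full dimensions per capture.
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .quality
    return settings
}
```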
Posted by tenuki. Last updated.