Discuss using the camera on Apple devices.

Posts under Camera tag

126 Posts
Post not yet marked as solved
4 Replies
4.7k Views
Hi, Is the LiDAR scanner on the new iPad Pro and iPhone 12 series a good device for making a 3D scan of an object? How high would the resolution be? And what is the ideal object size? Also: can the camera system and the LiDAR sensor work together to produce a 3D model with texture? Any help is much appreciated. Kind regards, Sybren
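For context, the LiDAR mesh is fairly coarse (better suited to room- and furniture-scale objects than to small items), and ARKit exposes it as a reconstructed mesh rather than a finished textured model; combining the mesh with camera texture is up to the app. A minimal sketch of turning on LiDAR scene reconstruction, assuming an existing ARSCNView named `arView` (not from the post):
```swift
import ARKit

// Runs LiDAR-based scene reconstruction; requires a LiDAR-equipped device.
// `arView` is an assumed, pre-existing ARSCNView, not part of the original question.
func startScanning(arView: ARSCNView) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        print("LiDAR scene reconstruction not supported on this device")
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh       // LiDAR-driven mesh of the environment
    configuration.environmentTexturing = .automatic // lets ARKit gather camera-based textures
    arView.session.run(configuration)
    // Mesh geometry then arrives as ARMeshAnchor objects via the session delegate.
}
```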
Posted by
Post marked as solved
2 Replies
1k Views
I have a 3D camera app that I'm working on, and I am wondering how to put the two videos side by side to save to Photos as one video using this delegate method:
```swift
func fileOutput(_ output: AVCaptureFileOutput,
                didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection],
                error: Error?) {
```
Thank you!
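One hedged approach (not the only one): once both movie files have finished recording, build an AVMutableComposition with two video tracks and an AVMutableVideoComposition whose layer instructions translate one track beside the other, then export. A rough sketch, assuming `leftURL`/`rightURL` are the two recorded files and both clips share the same dimensions and duration:
```swift
import AVFoundation

// Sketch: composite two recorded clips side by side and export as one movie.
// Error handling is trimmed for brevity.
func composeSideBySide(leftURL: URL, rightURL: URL, outputURL: URL) {
    let leftAsset = AVAsset(url: leftURL)
    let rightAsset = AVAsset(url: rightURL)
    guard let leftTrack = leftAsset.tracks(withMediaType: .video).first,
          let rightTrack = rightAsset.tracks(withMediaType: .video).first else { return }

    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: leftAsset.duration)
    guard let compLeft = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid),
          let compRight = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
    try? compLeft.insertTimeRange(range, of: leftTrack, at: .zero)
    try? compRight.insertTimeRange(range, of: rightTrack, at: .zero)

    // Shift the second track to the right by one frame-width.
    let size = leftTrack.naturalSize
    let leftInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compLeft)
    let rightInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compRight)
    rightInstruction.setTransform(CGAffineTransform(translationX: size.width, y: 0), at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = range
    instruction.layerInstructions = [leftInstruction, rightInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: size.width * 2, height: size.height)
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]

    guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.videoComposition = videoComposition
    export.exportAsynchronously {
        // Save outputURL to Photos here (e.g. via PHPhotoLibrary), then delete the temp files.
    }
}
```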
Posted by
Post not yet marked as solved
1 Reply
224 Views
My macOS app (targeting Catalina only) uses the camera, microphone and screen recording. While developing the app, the system asks me for permission every time I rebuild and run it. This does not happen on iOS. Is there any way to prevent this? Secondly, when I distribute the app to other Macs, every build needs the consent re-affirmed. This doesn't seem like the way it should be. What could I be doing wrong?
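For what it's worth, macOS ties these consent grants to the app's code signing identity, so debug builds whose signature changes on every build (e.g. ad-hoc "Sign to Run Locally") look like a new app to the privacy database each time; signing with a stable development or Developer ID certificate usually makes the grant stick. A small sketch for checking the current camera grant before doing anything that would trigger a prompt:
```swift
import AVFoundation

// Inspect the existing camera grant; only .notDetermined should ever prompt.
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    print("Camera already authorized")
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print(granted ? "Camera granted" : "Camera denied")
    }
default:
    print("Camera denied or restricted; direct the user to Security & Privacy preferences")
}
```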
Posted by
Post not yet marked as solved
12 Replies
10k Views
Overview: Our phones are currently configured with a basic MDM iOS policy. The policy has not been changed since January this year. After the update from iOS 13 to iOS 14, the ability to scan barcodes via the Camera app stopped working: a border surrounding the barcode shows up, but the Safari pop-up link does not drop down for the user to tap.
Relevant settings: there are no restrictions on the Camera app, and the "Scan QR Code" setting is turned ON.
After more research, the following has been identified:
- After un-enrolling the device from MDM, the QR code scanner works as expected.
- A factory reset selecting the "Erase All Content" option was completed on an iPhone 6S and an iPhone 7 (iOS 14.0.1), and this resolved the issue.
- After deploying an empty MDM policy to a test phone, the functionality was still not restored.
Any ideas on what could be causing the issue?
Posted by
Post not yet marked as solved
20 Replies
13k Views
Hi! I recently bought the new iPhone 12 Pro Max. I have noticed that when I shoot videos in the dark (with the lights on in the house), some kind of flickering is visible in the video. Apparently, very fast flickering of lights can become visible in slow-motion videos even when you cannot see it with the naked eye. However, I have this problem with normal videos as well. I have compared it with the videos on my iPhone X, and it is definitely worse in my iPhone 12 videos. I noticed that this happens while recording HD (or 4K) video at 60 FPS; if you switch to 30 FPS, it doesn't happen. Anyone else who has this problem? The problem happens on iOS 14.2.1 and iOS 14.3 beta 2. Thanks!
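For developers who hit the same banding in their own capture code, one common workaround is pinning the capture frame rate to 30 FPS so it no longer beats against mains-powered lighting. A sketch, assuming `device` is the session's active AVCaptureDevice:
```swift
import AVFoundation

// Pin capture to 30 FPS to avoid beat-frequency banding with flickering lights.
// `device` is assumed to be the AVCaptureDevice feeding the session.
func lockFrameRate(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    let thirtyFPS = CMTime(value: 1, timescale: 30)
    device.activeVideoMinFrameDuration = thirtyFPS
    device.activeVideoMaxFrameDuration = thirtyFPS
    device.unlockForConfiguration()
}
```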
Posted by
Post not yet marked as solved
4 Replies
1.6k Views
I'm getting pretty bad lens glare on photos taken in Night Mode on the iPhone 12 Pro Max. Support thinks this is a software issue, specifically iOS 14.3, but I think this is a hardware issue. Is anyone else having Night Mode camera problems?
Posted by
Post not yet marked as solved
1 Reply
879 Views
Hello all, I'm having some issues with the getUserMedia API that is now available in WKWebView. On a page, I access the camera using the following code:
```
navigator.mediaDevices.getUserMedia({
    video: {
        facingMode: "user"
    },
    audio: false
}).then(function(webcamStream) {
    // #lr_record_video is an HTML video tag available on the page
    document.querySelector("#lr_record_video").srcObject = webcamStream;
}).catch(function() {
    console.log("fail");
});
```
This... mostly works. But unlike in Safari (and now Chrome), instead of the video element simply showing the video track of the webcamStream MediaStream object, it opens up a "Live Broadcast" panel, and the video track pauses whenever this is closed. Is there any way to replicate the behaviour in Safari and Chrome, where there is no panel popup? Thanks
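If it helps anyone: the "Live Broadcast" panel is WebKit's full-screen playback UI, and on iOS a video element only plays inline when the element carries the `playsinline` attribute and the hosting WKWebView opts in. A hedged sketch of the native side:
```swift
import WebKit

// Opt the web view into inline media playback so <video playsinline> stays in
// the page instead of opening the full-screen "Live Broadcast" player.
let configuration = WKWebViewConfiguration()
configuration.allowsInlineMediaPlayback = true
configuration.mediaTypesRequiringUserActionForPlayback = []
let webView = WKWebView(frame: .zero, configuration: configuration)
```
On the page side, the tag would need the attribute as well, e.g. `<video id="lr_record_video" autoplay playsinline>`.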
Posted by
Post not yet marked as solved
0 Replies
629 Views
Hi Apple support, I'm using AVMultiCamPiP as a project starting base and I have the following setup (sample [1]). It connects without headphones but generates a runtime error (sample [1]) with headphones. I'm running on an iPhone XS with iOS 14.3. The source code I'm using is the sample AVMultiCamPiP code. How should I configure the external headphone mic to prevent the runtime error from occurring?
sample [1]
```
2021-01-23 11:10:54.216702-0500 AVMultiCamPiP[3760:649414] Metal GPU Frame Capture Enabled
2021-01-23 11:10:54.218037-0500 AVMultiCamPiP[3760:649414] Metal API Validation Enabled
(lldb) po (notification.object as! AVCaptureMultiCamSession).connections
▿ 6 elements
  - 0 : <AVCaptureConnection: 0x282092f80 (AVCaptureDeviceInput: 0x282081280 Back Camera) -> (AVCaptureVideoDataOutput: 0x2820f7840) [type:vide][enabled:1][active:1]>
  - 1 : <AVCaptureConnection: 0x282082ba0 (AVCaptureDeviceInput: 0x282081280 Back Camera) -> (AVCaptureVideoPreviewLayer: 0x2820e8060) [type:vide][enabled:1][active:1]>
  - 2 : <AVCaptureConnection: 0x282085220 (AVCaptureDeviceInput: 0x282081920 Front Camera) -> (AVCaptureVideoDataOutput: 0x2820f7a80) [type:vide][enabled:1][active:1]>
  - 3 : <AVCaptureConnection: 0x2820939e0 (AVCaptureDeviceInput: 0x282081920 Front Camera) -> (AVCaptureVideoPreviewLayer: 0x2820e8200) [type:vide][enabled:1][active:1]>
  - 4 : <AVCaptureConnection: 0x28208d460 (AVCaptureDeviceInput: 0x2820843e0 Headphones) -> (AVCaptureAudioDataOutput: 0x2820f7ba0) [type:soun][enabled:1][active:1]>
  - 5 : <AVCaptureConnection: 0x28208e480 (AVCaptureDeviceInput: 0x2820843e0 Headphones) -> (AVCaptureAudioDataOutput: 0x2820f7d20) [type:soun][enabled:1][active:1]>
(lldb) po (notification.object as! AVCaptureMultiCamSession).hardwareCost
0.58957714
(lldb) po (notification.object as! AVCaptureMultiCamSession).systemPressureCost
0.81041515
(lldb) po (notification.object as! AVCaptureMultiCamSession).outputs
▿ 4 elements
  - 0 : <AVCaptureVideoDataOutput: 0x2820f7840>
  - 1 : <AVCaptureVideoDataOutput: 0x2820f7a80>
  - 2 : <AVCaptureAudioDataOutput: 0x2820f7ba0>
  - 3 : <AVCaptureAudioDataOutput: 0x2820f7d20>
(lldb) po (notification.object as! AVCaptureMultiCamSession).inputs
▿ 3 elements
  - 0 : <AVCaptureDeviceInput: 0x282081280 [Back Camera]>
  - 1 : <AVCaptureDeviceInput: 0x282081920 [Front Camera]>
  - 2 : <AVCaptureDeviceInput: 0x2820843e0 [Headphones]>
(lldb) po notification.userInfo
▿ Optional<Dictionary<AnyHashable, Any>>
  ▿ some : 1 element
    ▿ 0 : 2 elements
      ▿ key : AnyHashable("AVCaptureSessionErrorKey")
        - value : "AVCaptureSessionErrorKey"
      - value : Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x282e81200 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
(lldb)
```
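Not an answer, but one detail visible in the dump: the single Headphones input feeds two AVCaptureAudioDataOutputs (the sample splits the built-in mic into front/back outputs, which an external headset mic may not support). A heavily hedged sketch of falling back to a single audio input/output pair when an external mic is active; whether this actually avoids the -12780 error is an assumption to test:
```swift
import AVFoundation

// Sketch: add one audio input/output pair instead of the sample's dual-output
// setup. `session` is the AVCaptureMultiCamSession under configuration.
func configureSingleAudio(session: AVCaptureMultiCamSession,
                          delegate: AVCaptureAudioDataOutputSampleBufferDelegate,
                          queue: DispatchQueue) throws {
    guard let mic = AVCaptureDevice.default(for: .audio) else { return }
    let micInput = try AVCaptureDeviceInput(device: mic)
    guard session.canAddInput(micInput) else { return }
    session.addInput(micInput)

    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(delegate, queue: queue)
    guard session.canAddOutput(audioOutput) else { return }
    session.addOutput(audioOutput)
}
```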
Posted by
Post not yet marked as solved
2 Replies
1.1k Views
I currently have an issue with the "High Efficiency" (HEIC) images that are taken on my iPhone X. It seems that when I upload an HEIC image via the browser, the image is converted to a JPG and the EXIF data is lost. I tested this on mobile Chrome, Safari, and Firefox by sending an image to myself using Gmail. However, when sending that same HEIC image through the Gmail mobile app, it was still converted to a JPG but the EXIF data was retained. I wasn't able to find much information on this except for speculation, and I was wondering if someone could clarify this behavior and possibly the "why". I'm only aware of two workarounds:
- Using the mobile application of the platform I'm interested in.
- Changing the camera image format to "Most Compatible".
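The lossy step is whichever component transcodes HEIC to JPEG; browser upload paths often do this without copying the metadata across. When you control the conversion yourself, ImageIO can carry the EXIF over in one call. A small sketch, assuming `heicData` holds the original file's bytes:
```swift
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Convert HEIC data to JPEG while copying image metadata (EXIF, GPS, etc.) across.
func jpegPreservingMetadata(from heicData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(heicData as CFData, nil) else { return nil }
    let jpegData = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        jpegData as CFMutableData, UTType.jpeg.identifier as CFString, 1, nil) else { return nil }
    // Copies the image plus its properties/metadata from the source in one call.
    CGImageDestinationAddImageFromSource(destination, source, 0, nil)
    return CGImageDestinationFinalize(destination) ? jpegData as Data : nil
}
```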
Posted by
Post not yet marked as solved
1 Reply
783 Views
Issue
I'm using AVFoundation to implement a camera that can record videos while running special AI processing. Having an AVCaptureMovieFileOutput (for video recording) and an AVCaptureVideoDataOutput (for the AI processing) running at the same time is not supported (see https://stackoverflow.com/q/4944083/5281431), so I have decided to use a single AVCaptureVideoDataOutput, which is able to record videos to a file while running the AI processing in the same captureOutput(...) callback. To my surprise, doing that drastically increases RAM usage from 58 MB to 187 MB (!!!), and CPU from 3-5% to 7-12%, while idle. While actually recording, the RAM goes up even more (260 MB!). I am wondering what I did wrong here, since I disabled all the AI processing and just compared the differences between AVCaptureMovieFileOutput and AVCaptureVideoDataOutput.
My code:
AVCaptureMovieFileOutput
Setup:
```swift
if let movieOutput = self.movieOutput {
    captureSession.removeOutput(movieOutput)
}
movieOutput = AVCaptureMovieFileOutput()
captureSession.addOutput(movieOutput!)
```
Delegate: none (AVCaptureMovieFileOutput handles all that internally).
Benchmark:
- When idle, so not recording at all: RAM: 56 MB, CPU: 3-5%
- When recording using AVCaptureMovieFileOutput.startRecording: RAM: 56 MB (how???), CPU: 20-30%
AVCaptureVideoDataOutput
Setup:
```swift
// Video
if let videoOutput = self.videoOutput {
    captureSession.removeOutput(videoOutput)
    self.videoOutput = nil
}
videoOutput = AVCaptureVideoDataOutput()
videoOutput!.setSampleBufferDelegate(self, queue: videoQueue)
videoOutput!.alwaysDiscardsLateVideoFrames = true
captureSession.addOutput(videoOutput!)

// Audio
if let audioOutput = self.audioOutput {
    captureSession.removeOutput(audioOutput)
    self.audioOutput = nil
}
audioOutput = AVCaptureAudioDataOutput()
audioOutput!.setSampleBufferDelegate(self, queue: audioQueue)
captureSession.addOutput(audioOutput!)
```
Delegate:
```swift
extension CameraView: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    public final func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
        // empty
    }

    public final func captureOutput(_ captureOutput: AVCaptureOutput, didDrop buffer: CMSampleBuffer, from _: AVCaptureConnection) {
        // empty
    }
}
```
Yes, they are literally empty methods. My RAM and CPU usage is still that high without doing any work here.
Benchmark:
- When idle, so not recording at all: RAM: 151-187 MB, CPU: 7-12%
- When recording using a custom AVAssetWriter: RAM: 260 MB, CPU: 64%
Why is AVCaptureMovieFileOutput so much more efficient than an empty AVCaptureVideoDataOutput? Also, why does its RAM not go up at all when recording, while my AVAssetWriter implementation alone consumes 80 MB? Here's my custom AVAssetWriter implementation: [RecordingSession.swift](https://github.com/cuvent/react-native-vision-camera/blob/frame-processors/ios/RecordingSession.swift), and here's where I call it - https://github.com/cuvent/react-native-vision-camera/blob/a48ca839e93e6199ad731f348e19427774c92821/ios/CameraView%2BRecordVideo.swift#L16-L86. Any help appreciated!
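Part of the gap is plausibly architectural: with AVCaptureMovieFileOutput the frames and the encoder can stay inside the system's media server process, while AVCaptureVideoDataOutput has to map every IOSurface-backed frame into your app's address space, so the buffer pool shows up in your RAM figure. One knob worth checking is the delivered pixel format, since an implicit conversion can inflate each buffer. A hedged sketch; whether this format is optimal should be verified against `availableVideoPixelFormatTypes` on the target device:
```swift
import AVFoundation

// Explicitly request the camera's bi-planar 4:2:0 format so no BGRA
// conversion pool is created behind the scenes (an assumption to verify).
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
]
videoOutput.alwaysDiscardsLateVideoFrames = true
```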
Posted by
Post not yet marked as solved
1 Reply
604 Views
Hello, on macOS, is it possible to see a virtual camera such as OBS (https://obsproject.com) as a capture device? I see that Google Chrome, for example, can use this camera, but using AVCaptureDevice.DiscoverySession I am unable to see it. Am I doing something wrong?
```swift
var deviceTypes: [AVCaptureDevice.DeviceType] = [.builtInMicrophone, .builtInWideAngleCamera]
#if os(OSX)
deviceTypes.append(.externalUnknown)
#else
deviceTypes.append(contentsOf: [.builtInDualCamera, .builtInDualWideCamera, .builtInTelephotoCamera,
                                .builtInTripleCamera, .builtInTrueDepthCamera, .builtInUltraWideCamera])
#endif
let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes,
                                                        mediaType: nil, position: .unspecified)
result = discoverySession.devices.map { device in
    device.localizedName
}
```
Posted by
Post marked as solved
1 Reply
746 Views
I want to record the TrueDepth or Dual camera's depth data output while recording the video data. I have already managed to get the AVCaptureDepthDataOutput object and display it in real time, but I also need the depth to be recorded as an individual track of AVMediaTypeVideo or AVMediaTypeMetadata in the movie, and to read it back for post-processing. Instead of using AVCaptureMovieFileOutput, I use an AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor to append pixel buffers. I have tried to append the streaming depth to a normal AVAssetWriterInput with AVVideoCodecTypeH264, but failed. Is it possible to append the depth data buffers the same way as video data, or is there another way of doing it?
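One possible reason a straight append fails: an H.264 writer input expects 4:2:0 or BGRA buffers, not float depth maps, so the depth usually has to be rendered into an encodable buffer first, at the cost of precision. A rough sketch under those assumptions, where `adaptor` is an AVAssetWriterInputPixelBufferAdaptor on a second video input and `ciContext` is a shared CIContext (both names are assumptions, not from the post):
```swift
import AVFoundation
import CoreImage

// Render a float disparity map into a BGRA buffer a video encoder accepts.
// Note: this quantizes the depth, so it is only good for visualization;
// lossless recovery for post-processing would need a different container
// (e.g. a timed metadata track).
func append(depthData: AVDepthData, at time: CMTime,
            adaptor: AVAssetWriterInputPixelBufferAdaptor, ciContext: CIContext) {
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let depthImage = CIImage(cvPixelBuffer: converted.depthDataMap)

    // The adaptor's pool exists only after the writer has started writing.
    guard let pool = adaptor.pixelBufferPool else { return }
    var renderTarget: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &renderTarget)
    guard let buffer = renderTarget else { return }

    ciContext.render(depthImage, to: buffer)
    if adaptor.assetWriterInput.isReadyForMoreMediaData {
        adaptor.append(buffer, withPresentationTime: time)
    }
}
```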
Posted by
Post not yet marked as solved
0 Replies
894 Views
I am seeing crashes on Firebase in CameraUI's -[CAMPriorityNotificationCenter _removeObserver:fromObserversByName:]. Here is the stack trace:
```
Fatal Exception: NSGenericException
*** Collection <__NSArrayM: 0x28024a0a0> was mutated while being enumerated.
0  CoreFoundation           __exceptionPreprocess
2  CoreFoundation           -[__NSSingleObjectEnumerator initWithObject:collection:]
3  CameraUI                 -[CAMPriorityNotificationCenter _removeObserver:fromObserversByName:]
4  CameraUI                 -[CAMPriorityNotificationCenter removeObserver:]
13 libsystem_pthread.dylib  start_wqthread

Crashed: com.google.firebase.crashlytics.ios.exception
SIGABRT ABORT 0x00000001c1eb7334
0  FirebaseCrashlytics      FIRCLSProcess.c - Line 393   FIRCLSProcessRecordAllThreads + 393
1  FirebaseCrashlytics      FIRCLSProcess.c - Line 424   FIRCLSProcessRecordAllThreads + 424
2  FirebaseCrashlytics      FIRCLSHandler.m - Line 34    FIRCLSHandler + 34
3  FirebaseCrashlytics      FIRCLSException.mm - Line 218  __FIRCLSExceptionRecord_block_invoke + 218
4  libdispatch.dylib        _dispatch_client_callout + 20
5  libdispatch.dylib        _dispatch_lane_barrier_sync_invoke_and_complete + 60
6  FirebaseCrashlytics      FIRCLSException.mm - Line 225  FIRCLSExceptionRecord + 225
7  FirebaseCrashlytics      FIRCLSException.mm - Line 111  FIRCLSExceptionRecordNSException + 111
8  FirebaseCrashlytics      FIRCLSException.mm - Line 279  FIRCLSTerminateHandler() + 279
9  libc++abi.dylib          std::__terminate(void (*)()) + 20
24 libsystem_pthread.dylib  start_wqthread + 8
```
This started happening on iOS 14.4 and above; it was working fine in previous versions, and I am recently seeing an increase in this crash. Is it an OS issue, or am I doing something wrong?
Posted by
Post not yet marked as solved
0 Replies
279 Views
I recently updated my iPhone to iOS 15 beta 2, which was working perfectly fine, until I updated my phone to iOS 15 beta 3, at which point my back camera stopped working with the Camera app. Basically, the back camera works with any app other than Snapchat or the Camera app. I'm now wondering what solution you would have, as I already updated my phone to beta 4 after sending feedback through the Feedback portal. Nothing has changed and the camera still doesn't work. Is there any solution?
Posted by
Post not yet marked as solved
1 Reply
466 Views
Objective and steps: use the device's front TrueDepth camera (iPhone 12 Pro Max) to capture image data, Live Photo data and metadata (e.g. depth data and a portrait effects matte) using AVFoundation capture principles, into an AVCapturePhoto object. Save this captured object with its metadata to PHPhotoLibrary using the PHAssetCreationRequest API.
Result: image data, live data, disparity depth data (640x480 px) and some metadata are stored with the image through the PHPhotoLibrary API, but the high-quality portrait effects matte is lost.
Notes: upon receiving the AVCapturePhoto object from the AVFoundation capture delegate API, I can verify that the AVCapturePhoto object contains a high-quality portrait effects matte member object. Using the object's fileDataRepresentation() to obtain a Data blob, writing that to a test file URL and reading it back, I can see that the flattened-data API writes and restores the portrait effects matte. However, it gets stripped from the data when writing through the PHPhotoLibrary asset creation request. When later picking the image, e.g. with PHPickerViewController + PHPickerResult, and peeking into the object's data with CGImageSourceCopyAuxiliaryDataInfoAtIndex(), I can see that there is a data dictionary only for the key kCGImageAuxiliaryDataTypeDisparity; kCGImageAuxiliaryDataTypeDepth and kCGImageAuxiliaryDataTypePortraitEffectsMatte are both missing. Does anyone have more detailed information on whether this is possible at all? Thanks!
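For reference, the save path under discussion looks roughly like the sketch below, where `photoData` stands for the `photo.fileDataRepresentation()` result from the capture delegate (a name assumed here, not quoted from the post); since the poster verified that the flattened file still contains the matte, this is the boundary at which it goes missing:
```swift
import Photos

// Save the full flattened capture (image plus depth/matte as HEIC auxiliary
// data) through a PHAssetCreationRequest.
func save(photoData: Data) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, data: photoData, options: nil)
    }, completionHandler: { success, error in
        print("Saved: \(success), error: \(String(describing: error))")
    })
}
```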
Posted by
Post not yet marked as solved
0 Replies
419 Views
Can Center Stage work with any app that has video? I am building an app that uses video as a presentation, which can be a live presentation, recorded, or live-streamed. Not quite a videoconferencing app, but close. Thanks, Chris
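Center Stage is controlled through class-level AVCaptureDevice properties (iOS 14.5 and later on supported hardware), so any app capturing from the front camera via AVFoundation can participate, not just videoconferencing apps. A minimal sketch:
```swift
import AVFoundation

// Opt into Center Stage. In .cooperative mode the Control Center setting and
// the app share control; .app would give the app exclusive control.
AVCaptureDevice.centerStageControlMode = .cooperative
AVCaptureDevice.isCenterStageEnabled = true

// The active device/format must also support it; check before relying on it.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
    print("Center Stage supported by format: \(device.activeFormat.isCenterStageSupported)")
}
```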
Posted by