Discuss using the camera on Apple devices.

Posts under the Camera tag

168 Posts
Post not yet marked as solved
1 Reply
114 Views
In a demo, I load index.html into a WKWebView. When I tap the file button, the camera page presents and then dismisses almost immediately.

ViewController.h:

```objc
#import <UIKit/UIKit.h>
#import <WebKit/WebKit.h>

@interface ViewController : UIViewController
@property (nonatomic, strong) WKWebView *wkWebView;
@end
```

ViewController.m:

```objc
@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    WKWebViewConfiguration *configuration = [WKWebViewConfiguration new];
    self.wkWebView = [[WKWebView alloc] initWithFrame:CGRectMake(0, 0, 400, 300) configuration:configuration];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"index" withExtension:@"html"];
    [self.wkWebView loadFileURL:url allowingReadAccessToURL:[[NSBundle mainBundle] bundleURL]];
    self.wkWebView.backgroundColor = [UIColor blueColor];
    [self.view addSubview:self.wkWebView];
}

@end
```

index.html:

```html
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <div>
        <label style="font-size: 40px;">open camera</label>
        <input type="file" accept="image/*" capture="camera" id="file-input" class="file-input">
    </div>
</body>
</html>
```
Posted by leozzz.
Post not yet marked as solved
0 Replies
67 Views
Hi guys, I'm designing a customized camera based on AVFoundation. I can output Live Photos from an AVCaptureDeviceInput for now. I expect to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9). I didn't find any useful information in the docs. Any suggestions?
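For reference, one common approach (an assumption based on how the built-in Camera app appears to behave, not a documented technique): capture at the sensor's native 4:3 and center-crop the still to the requested ratio. The `crop` helper below is hypothetical; a paired Live Photo movie would need the same crop applied to its video track, which is more involved and not shown here.

```swift
import AVFoundation
import CoreImage

// Center-crop a captured photo to an arbitrary aspect ratio (e.g. 16:9, 1:1).
func crop(_ photo: AVCapturePhoto, toAspect w: CGFloat, _ h: CGFloat) -> CIImage? {
    guard let data = photo.fileDataRepresentation(),
          let image = CIImage(data: data) else { return nil }
    let extent = image.extent
    let targetRatio = w / h
    var cropRect = extent
    if extent.width / extent.height > targetRatio {
        // Source is wider than target: trim the sides.
        cropRect.size.width = extent.height * targetRatio
        cropRect.origin.x = (extent.width - cropRect.width) / 2
    } else {
        // Source is taller than target: trim top and bottom.
        cropRect.size.height = extent.width / targetRatio
        cropRect.origin.y = (extent.height - cropRect.height) / 2
    }
    return image.cropped(to: cropRect)
}
```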
Posted by ayumizll.
Post not yet marked as solved
0 Replies
86 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. Our app ideally would have the timestamp difference between the photos in a single capture request as short as possible, but we don't have a good idea of what the theoretical or practical limits of this timestamp difference are. In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approx. 0.3 ms, which seems smaller than I would expect, unless the frames are being synchronised incredibly well under the hood. This leaves the following unanswered questions:
1. What sort of ranges of values should we expect to come out of these timestamp differences between photos?
2. What factors influence this?
3. Is there any way to control these values to ensure they are as small as possible? (Will likely be answered by 2.)
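For anyone reproducing the measurement, here is a minimal sketch of the configuration the post describes; `session` and `delegate` are assumed to exist, and error handling is elided:

```swift
import AVFoundation

func configureDualPhotoCapture(session: AVCaptureSession,
                               delegate: AVCapturePhotoCaptureDelegate) throws -> AVCapturePhotoOutput {
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
        throw NSError(domain: "DualCapture", code: -1)
    }
    let input = try AVCaptureDeviceInput(device: device)
    session.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    session.addOutput(photoOutput)
    if photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported {
        photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
    }

    // Request one photo from each constituent (wide + ultra wide) per capture.
    let settings = AVCapturePhotoSettings()
    settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
    photoOutput.capturePhoto(with: settings, delegate: delegate)
    // In photoOutput(_:didFinishProcessingPhoto:error:), compare the
    // AVCapturePhoto.timestamp values of the delivered photos.
    return photoOutput
}
```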
Posted by nanders.
Post not yet marked as solved
0 Replies
118 Views
After my iPad 6 was upgraded from iPadOS 17.3 to 17.4, the AVCaptureMetadataOutput delegate is no longer called. The same problem is described in this Stack Overflow post: https://stackoverflow.com/questions/78128010/ipados-17-4-avcapturemetadataoutput-delegate-not-called-qrscanner
An Apple support page says the "QR code scanning" issue is fixed in iPadOS 17.4.1: "If your iPad is unable to scan QR codes after updating to iPadOS 17.4" - https://support.apple.com/en-lamr/118614
That's true; I can confirm it on my iPad 6. But, unfortunately, iPadOS 17.4.1 fixes ONLY QR code scanning. It doesn't fix scanning of other barcode types, like PDF417.
Happening on:
iPad (7th Generation)
iPad (6th Generation)
iPad Pro 12.9-inch (2nd Generation)
iPad Pro 10.5-inch
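For reference, a minimal repro sketch of the affected configuration; `session` is assumed to be a running AVCaptureSession with a camera input, and `self` implements AVCaptureMetadataOutputObjectsDelegate:

```swift
let metadataOutput = AVCaptureMetadataOutput()
if session.canAddOutput(metadataOutput) {
    session.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
    // Per the report above: .qr is delivered again as of iPadOS 17.4.1,
    // but .pdf417 reportedly still never reaches the delegate on iPad7,x devices.
    metadataOutput.metadataObjectTypes = [.qr, .pdf417]
}
```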
Posted by ArshadHK.
Post not yet marked as solved
1 Reply
154 Views
Hey! I'm trying to open the front camera in my demo app, and from what I read in the Apple docs and forums, if you have configured your Persona you should get that image. But I'm having some issues with it. This is my code:

```swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)
                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)
                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() {
    }
}
```

In the openFrontCamera() function I tried using .devices(), .default(), and other methods like you would use on other Apple devices, but this doesn't work with Vision Pro, and I can't find anything that tells me how to open it. Has anyone been able to work this out?
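As a diagnostic starting point, here is a hedged sketch (assuming AVFoundation's discovery API behaves on visionOS as it does on iOS) that enumerates whatever capture devices the system will actually hand the app. Camera access on visionOS is far more restricted than on iOS, and to my knowledge main-camera access requires a special enterprise entitlement, so an empty list here would be consistent with there simply being no app-accessible front camera device:

```swift
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera],
    mediaType: .video,
    position: .front
)
// May well print an empty array on visionOS without special entitlements.
print("Available devices: \(discovery.devices)")
```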
Post not yet marked as solved
0 Replies
100 Views
For our application, we are aiming to have full control over setting and locking the camera exposure settings when taking a video. We're working with Apple's AVFoundation framework for a range of devices, but most of the development is focused on the iPad 8 front camera. The manual settings are specific to our use, so we aim to use the custom exposure mode with e.g. ISO = 100, exposureDuration = 1/60, and a fixed white balance. The duration, ISO, and white balance are all set in advance of recording, but when we begin we can see that something is still adjusting and compensating for lighting changes. We then also tried locking the exposure mode after setting the custom values, but there appears to be a delay in this lock taking effect. While tracking the ISO during a recording, we see that the ISO values change in the first second of the recording, leading to oversaturated images, despite our efforts to keep it locked.

This is our attempt to lock the settings using custom mode, which we don't adjust ourselves during the recording, but it does not work as expected:

```swift
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    do {
        try cameraDevice?.lockForConfiguration()
        cameraDevice?.automaticallyAdjustsVideoHDREnabled = false
        cameraDevice?.setExposureModeCustom(duration: newValueDuration, iso: newValueISO, completionHandler: { [self] _ in
            cameraDevice?.setWhiteBalanceModeLocked(with: cameraDevice!.deviceWhiteBalanceGains)
            if cameraDevice!.isFocusModeSupported(.locked) {
                do {
                    cameraDevice!.focusMode = .locked
                    debugPrint("Focus mode set to locked.")
                }
            }
            cameraDevice?.unlockForConfiguration()
        })
    } catch {
        debugPrint("Error adjusting the exposure")
        cameraDevice?.unlockForConfiguration()
    }
}
```

We then tried to lock the exposure mode after setting the custom values, but it then changes during the first second of the recording. We also explicitly tried setting exposureTargetBias to 0, but this made no difference.

```swift
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    guard let camera = cameraDevice else { return }
    do {
        if camera.isExposureModeSupported(.custom) {
            do {
                try camera.lockForConfiguration()
                let customExposureBias: Float = 0
                //camera.setExposureTargetBias(customExposureBias, completionHandler: nil)
                if camera.isExposureModeSupported(.custom) {
                    camera.setExposureModeCustom(duration: newValueDuration, iso: newValueISO) { [weak camera] _ in
                        guard let camera = camera else { return }
                        if camera.isExposureModeSupported(.locked) {
                            camera.exposureMode = .locked
                        }
                    }
                }
                camera.unlockForConfiguration()
                print("Exposure settings locked with custom values.")
            } catch {
                print("Failed to lock configuration for capture device: \(error.localizedDescription)")
                camera.unlockForConfiguration()
            }
        } else {
            print("Custom exposure mode is not supported.")
        }
    }
}
```

We would very much appreciate input on how to keep the manually selected camera settings fixed throughout the video recording.
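One hedged mitigation sketch (our reading of the setExposureModeCustom documentation, not a confirmed fix): the custom values are applied asynchronously, and the completion handler passes the timestamp of the first frame that actually uses them, so deferring the start of the recording until after that callback should keep auto-adjusted frames out of the file. `startRecording` is a hypothetical hook for whatever starts the asset writer:

```swift
import AVFoundation

func lockSettingsThenRecord(device: AVCaptureDevice,
                            iso: Float,
                            duration: CMTime,
                            startRecording: @escaping () -> Void) throws {
    try device.lockForConfiguration()
    device.setExposureModeCustom(duration: duration, iso: iso) { _ in
        // The first frame with the requested ISO/duration has now been produced;
        // freeze the mode so auto-exposure cannot re-engage, then start writing.
        try? device.lockForConfiguration()
        if device.isExposureModeSupported(.locked) {
            device.exposureMode = .locked
        }
        device.unlockForConfiguration()
        DispatchQueue.main.async { startRecording() }
    }
    device.unlockForConfiguration()
}
```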
Post not yet marked as solved
0 Replies
107 Views
Hey all! I'm building a camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.) When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio. I wonder how recording in stereo audio works. Are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?

This is my audio session code:

```swift
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")
    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
```

This is how I activate the audio session just before I start recording:

```swift
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])
if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
```

And this is how I set up the AVAssetWriter:

```swift
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
```

The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono. Is there anything I'm missing here?
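A hedged sketch of one possible missing piece, modeled on Apple's "Capturing Stereo Audio from Built-In Microphones" sample: on iOS 14+ the built-in mics only deliver stereo when a data source with the .stereo polar pattern is explicitly selected on the audio session. To my understanding, a stereo frame then still arrives as a single two-channel CMSampleBuffer, and recommendedAudioSettingsForAssetWriter should pick up the channel count. The function name is ours, and the error reuses the post's own type:

```swift
import AVFoundation

func enableStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    // Find the built-in microphone port and a data source that supports stereo.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let dataSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else {
        throw CameraError.device(.microphoneUnavailable)
    }
    try session.setPreferredInput(builtInMic)
    try builtInMic.setPreferredDataSource(dataSource)
    try dataSource.setPreferredPolarPattern(.stereo)
    // Stereo left/right depends on device orientation; keep this in sync with the UI.
    try session.setPreferredInputOrientation(.portrait)
}
```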
Posted by mrousavy.
Post not yet marked as solved
6 Replies
1.5k Views
I'm creating an app that uses an AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes. After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5")
iPad Pro (12.9" 2nd Gen)
This issue has not occurred on any other devices I have. It may only occur on devices with model number "iPad7,x". I tried running the AVFoundation sample code from the Apple Developer site on the above devices, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after iPadOS 17.4? Or is there some problem on the OS side?
Posted by N.Otani.
Post not yet marked as solved
0 Replies
167 Views
A similar post exists on Stack Overflow, and multiple people have reported this (you will encounter it if you run the app for around 10 minutes). I'm hoping this could get Apple's attention somehow. After downloading the project code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera) and running the Swift sample code on an iPhone 14 Pro, the app crashes intermittently, throwing this error:

Execution of the command buffer was aborted due to an error during execution. Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)

Sometimes it will crash within a few seconds; sometimes it can take around 10 minutes. Has anyone here experienced this crash from the sample code, or using the LiDAR camera? I have spent a long time trying to solve this issue: I have searched the web high and low and submitted (I think) a report to Apple about it. I am unable to get Xcode to show me the line of code where the crash is happening. Any help would be greatly appreciated.
Post not yet marked as solved
0 Replies
176 Views
I am currently renovating an application for macOS Sonoma (14.4) which triggers a Canon 60D via USB cable. Unlike what happened before on macOS 10.6, the camera (ICCameraDevice) has a description that contains only 2 capabilities:

```
{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO
```

As you can see, ICCameraDeviceCanTakePicture is not present now, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, like in older versions of macOS? Is my only option to use PTP commands? Thanks!
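If raw PTP turns out to be the only route, here is a hedged sketch of driving the shutter through requestSendPTPCommand. The 12-byte command container layout (UInt32 length, UInt16 type where 1 = command, UInt16 opcode, UInt32 transaction id, all little-endian) and the InitiateCapture opcode 0x100E come from the PTP specification, not from Apple documentation; Canon bodies may also need a vendor-specific opcode instead, so treat this as an untested starting point:

```swift
import ImageCaptureCore

// Build a parameterless PTP command block.
func ptpCommand(opcode: UInt16, transactionID: UInt32) -> Data {
    var data = Data(capacity: 12)
    withUnsafeBytes(of: UInt32(12).littleEndian)      { data.append(contentsOf: $0) } // container length
    withUnsafeBytes(of: UInt16(1).littleEndian)       { data.append(contentsOf: $0) } // type: command
    withUnsafeBytes(of: opcode.littleEndian)          { data.append(contentsOf: $0) } // operation code
    withUnsafeBytes(of: transactionID.littleEndian)   { data.append(contentsOf: $0) } // transaction id
    return data
}

// PTP InitiateCapture (0x100E).
func triggerShutter(on camera: ICCameraDevice) {
    camera.requestSendPTPCommand(ptpCommand(opcode: 0x100E, transactionID: 1),
                                 outData: nil) { response, inData, error in
        print("PTP response: \(response as NSData), error: \(String(describing: error))")
    }
}
```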
Posted by da_gagnon.
Post not yet marked as solved
2 Replies
237 Views
When running an iOS app as "Designed for iPad" on an M1 Mac mini, the UIImagePickerController.isSourceTypeAvailable(.camera) API returns true, leading to a crash (attached) if the camera is selected to upload an image to the app, as my much-loved Mac mini does not have a camera. For the moment I have disabled the camera when the platform is Mac by adding the qualification ProcessInfo().isiOSAppOnMac == false, but this seems like a bug. Or does the crash also happen on Macs with cameras? Other image picker options work fine. Crash log
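For reference, a sketch of the workaround described above: treat the camera as unavailable whenever the iOS app is running on a Mac, regardless of what the picker API reports. (ProcessInfo.processInfo is the idiomatic spelling; a fresh ProcessInfo() works too, as in the post.)

```swift
import UIKit

var cameraIsUsable: Bool {
    UIImagePickerController.isSourceTypeAvailable(.camera)
        && !ProcessInfo.processInfo.isiOSAppOnMac
}
```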
Posted by ptclarke.
Post marked as solved
1 Reply
179 Views
I want to take 48 MP photos and get the same ISO and exposure duration as I set.

Configuration:
Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCaptureDevice.exposureMode to .custom (after checking isExposureModeSupported(.custom))
Call AVCaptureDevice.setExposureModeCustomWithDuration:1/20 ISO:100 completionHandler:handler

Taking a photo:
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)

The API discussion of setExposureModeCustomWithDuration (https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/) told me:

"To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed."

But at the last step, when I set AVCapturePhotoSettings.photoQualityPrioritization = .speed, the photo resolution is (4000, 3000), only 12 MP, not (8000, 6000), although the ISO and exposure duration on the photo are the same as what I set. And when I set AVCapturePhotoSettings.photoQualityPrioritization = .balanced or .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained from the photo differ from what I set. What do I need to do to take 48 MP photos with the ISO and exposure duration applied successfully?
Posted by Zard.
Post not yet marked as solved
3 Replies
314 Views
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:

Configuration:
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality

Taking a photo:
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoSettings.photoQualityPrioritization to .quality

As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
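For concreteness, a minimal sketch of those configuration steps (assuming iOS 16+ for the maxPhotoDimensions API; `session`, `device`, and `photoOutput` are assumed to be wired up already, and the function names are ours):

```swift
import AVFoundation

func configure48MPCapture(session: AVCaptureSession,
                          device: AVCaptureDevice,
                          photoOutput: AVCapturePhotoOutput) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }
    // Pick a format that advertises the 48 MP dimensions.
    if let format = device.formats.first(where: { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format // manual config -> .inputPriority preset
        device.unlockForConfiguration()
    }
    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    photoOutput.maxPhotoQualityPrioritization = .quality
}

func take48MPPhoto(photoOutput: AVCapturePhotoOutput,
                   delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```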
Posted by tenuki.
Post not yet marked as solved
1 Reply
234 Views
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then allows the user to take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview will freeze. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can tell what is happening? Maybe there is a way I can detect when it freezes and then restart it.
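There is no public AVCaptureSession log stream that I know of, but the session does post notifications you can record. A hedged sketch (assuming a `session` variable) that logs interruptions and runtime errors, which is often enough to see why a preview froze:

```swift
import AVFoundation

let center = NotificationCenter.default
center.addObserver(forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main) { note in
    // e.g. videoDeviceInUseByAnotherClient, videoDeviceNotAvailableInBackground, ...
    let reason = note.userInfo?[AVCaptureSessionInterruptionReasonKey]
    print("Capture session interrupted: \(String(describing: reason))")
}
center.addObserver(forName: .AVCaptureSessionInterruptionEnded, object: session, queue: .main) { _ in
    print("Capture session interruption ended")
}
center.addObserver(forName: .AVCaptureSessionRuntimeError, object: session, queue: .main) { note in
    let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
    print("Capture session runtime error: \(String(describing: error))")
    // A common recovery strategy is to restart the session on .mediaServicesWereReset.
}
```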
Post not yet marked as solved
1 Reply
188 Views
Hi, hope all are well! We've been working on a live streaming app and it's going quite well! We just got the aspect ratio locked as desired. Now the audio: its volume is extremely low. It sounds like it's using the headset (earpiece) mic instead of the bottom mic that's used on FaceTime or on speakerphone calls. We tried flipping cameras and specifying sample rates, almost every constraint in MediaConstraints - no go! Is there any way to specify this? Thanks in advance!
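A hedged guess, since MediaConstraints not helping suggests the problem sits below the WebRTC layer: on iOS, mic and speaker routing follows the shared AVAudioSession, and a voice-call-style configuration without .defaultToSpeaker produces exactly this quiet, receiver-like sound. A minimal sketch of the session setup worth trying:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // .videoChat biases routing toward the speakerphone mic/speaker,
    // like FaceTime; .defaultToSpeaker keeps output off the earpiece.
    try session.setCategory(.playAndRecord,
                            mode: .videoChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}
```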
Post not yet marked as solved
1 Reply
383 Views
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
Posted by wladislaw.
Post not yet marked as solved
2 Replies
207 Views
Dear Team, I am trying to add a contact from a QR code, but it seems that the built-in QR code reader of the iPhone camera isn't able to correctly decode a full name containing a space in the last name, e.g. Collin A. Al Miller. I have attached all the screenshots for reference.

Here are the examples:

1. When I focus the iPhone camera on the QR code, the full name (Collin A. Al Miller) gives an empty result; the name is not shown. Attached screenshots: CameraQRNotWorking, NotWorkingQRCOde.

2. When I remove the blank space or use a comma or hyphen in the full name, it is recognised and works perfectly. Attached screenshots: CameraQRCodeWorking, workingQRCODE.

3. Both full names work perfectly in the QR camera scanner on Android: Collin A. Al-Miller or Collin A, Al Miller. Attached screenshot: AndroidQRCODE.

Hope this issue will get resolved in an upcoming release. Kindly provide feedback related to this issue.

Code to generate the vCard:

```swift
var str = "BEGIN:VCARD \n" +
    "VERSION:2.1 \n" +
    "FN:\("Collin A. Al Miller") \n" +
    "TITLE:\("") \n"

if options.showPersonalPhone {
    str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
    str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
    str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
    str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
    str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
    str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
    str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
    str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
```
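One hedged suggestion (based on the vCard 2.1 specification, not verified against iOS's parser): the card above only carries FN:, and some parsers derive the contact name from the structured N: property (family;given;middle;prefix;suffix). Adding it may let the space-containing surname survive:

```swift
// Hypothetical addition to the builder above: structured name with
// family name "Al Miller", given name "Collin", middle initial "A.".
str.append(contentsOf: "N:Al Miller;Collin;A.;; \n")
```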
Posted by Shohib.
Post not yet marked as solved
0 Replies
230 Views
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add an AVCapturePhotoOutput, I cannot turn off image stabilization.

```swift
self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}
```

As a temporary solution, I added only an AVCaptureVideoDataOutput to the session without adding the AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_:didOutput:from:) callback. However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

```swift
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try to get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
```
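A hedged alternative worth trying (an assumption, not a verified fix): skip the preset entirely and select a 4:3 device format by hand, which drops the session to .inputPriority; the photo output's connection can then keep preferredVideoStabilizationMode = .off as in the first snippet. `session` and `device` stand in for the unwrapped capture session and device:

```swift
import AVFoundation

func selectFourByThreeFormat(session: AVCaptureSession, device: AVCaptureDevice) {
    session.beginConfiguration()
    session.sessionPreset = .inputPriority
    do {
        try device.lockForConfiguration()
        // Pick a 4:3 format at 4K-class resolution (4032x3024 on many devices).
        if let format = device.formats.first(where: { format in
            let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            return dims.width * 3 == dims.height * 4 && dims.width >= 4032
        }) {
            device.activeFormat = format
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device: \(error)")
    }
    session.commitConfiguration()
}
```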
Post not yet marked as solved
0 Replies
196 Views
While trying to use an external camera, iOS is not detecting the exposure settings of the connected external camera: isExposureModeSupported always returns false, and the captured image doesn't contain any exposure details either. How can we use or change these settings?
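For reference, a minimal sketch of the check in question (assuming iOS 17+, where the .external device type became available on iPad). If this prints false for every mode, the external-camera module likely just doesn't expose exposure control through AVFoundation:

```swift
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                 mediaType: .video,
                                                 position: .unspecified)
for device in discovery.devices {
    print(device.localizedName,
          "custom:", device.isExposureModeSupported(.custom),
          "locked:", device.isExposureModeSupported(.locked),
          "continuous:", device.isExposureModeSupported(.continuousAutoExposure))
}
```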
Posted by Parv156.