Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

76 Posts

[iOS 17] Bug when saving video from URL and reading it with PHPickerViewController
Since iOS 17, I have an issue when saving a video and then reading it through PHPickerViewController: the video turns into a JPEG file. Below is the code that saves the video; I believe the cause is that I changed creationDate. On iOS 16 this bug does not occur.

```swift
func doVertifyAccessAblum() {
    DispatchQueue.global(qos: .background).async {
        if let url = URL(string: videoURL), let urlData = NSData(contentsOf: url) {
            let galleryPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let filePath = "\(galleryPath)/\(url.lastPathComponent).mp4"
            DispatchQueue.main.async {
                urlData.write(toFile: filePath, atomically: true)
                PHPhotoLibrary.shared().performChanges({
                    let changeRequest = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: filePath))
                    changeRequest?.creationDate = Date()
                }) { success, error in
                    LOGGING.debug("Save video with status: success=\(success) error=\(String(describing: error))")
                }
            }
        }
    }
}
```
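A possible workaround, not verified, sketched under the assumption that mutating creationDate inside the change block is what triggers the iOS 17 behavior: create the asset and let Photos assign the creation date itself.

```swift
import Photos

// Unverified workaround sketch: save the video without touching the
// change request afterwards; Photos fills in creationDate on its own.
func saveVideoToLibrary(at fileURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        // Deliberately no changeRequest mutation here.
        _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }) { success, error in
        print("Saved: \(success), error: \(String(describing: error))")
    }
}
```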
5 replies · 1 boost · 510 views · Dec ’23
Photos sample app can't access full-resolution photos on iOS
I'm running this SwiftUI sample app for photos without any modifications except for adding my developer profile, which is necessary to build it. When I tap on the thumbnail to see the photo library (after granting access to my photo library), I see that some of the thumbnails are stuck in a loading state, and when I tap on thumbnails, I only see a low-resolution image (the thumbnail), not the full-resolution image that should load. In the console I can see this error that occurs when tapping on a thumbnail to see the full-resolution image: CachedImageManager requestImage error: The operation couldn’t be completed. (PHPhotosErrorDomain error 3164.) When I make a few modifications necessary to run the app as a native macOS app, all the thumbnails load immediately, and clicking on them reveals the full-resolution images.
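For reference, PHPhotosErrorDomain error 3164 corresponds to PHPhotosError.Code.networkAccessRequired, which typically means the original is stored in iCloud. A minimal sketch of one possible fix, assuming the sample's CachedImageManager can pass request options through; `asset` here is a placeholder PHAsset:

```swift
import Photos
import UIKit

// Hedged sketch: allow network access so iCloud-only originals can be
// downloaded instead of failing with error 3164.
func requestFullImage(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true      // the key change for error 3164
    options.deliveryMode = .highQualityFormat
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: PHImageManagerMaximumSize,
                                          contentMode: .aspectFit,
                                          options: options) { image, _ in
        completion(image)
    }
}
```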
2 replies · 0 boosts · 970 views · Dec ’23
Should Photo Extensions work in iOS simulators?
When I:
- open an existing project
- create a new PhotoExtensions target
- run the new target in an iOS simulator (e.g. iPhone 15, iOS 17.0)
- select Photos as the app to run
- open a photo
- tap the ... button at the top right

I see Copy, Duplicate, Hide, etc., but I do not see my new extension. Is there something else I need to be doing in order to see my new extension in action?
1 reply · 0 boosts · 501 views · Jan ’24
Sharing a JPEG via Action or Share Extension fails in Photos on macOS
We have a Share Extension that fails in Photos on macOS when trying to share a JPEG image, for the following reason: from the NSItemProvider we get from NSExtensionItem.attachments, we try to load the image using loadFileRepresentation(forTypeIdentifier: "public.image", completionHandler: ...). This fails for .jpeg images in the library; there seems to be an internal mismatch between the expected and actual file extension. Here is the log:

```
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.jpeg" UserInfo={NSLocalizedDescription=Cannot load representation of type public.jpeg, NSUnderlyingError=0x1527c1a80 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg." UserInfo={NSLocalizedDescription=Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg., NSUnderlyingError=0x152789670 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot create a temporary file. Error: Undefined error: 0" UserInfo={NSLocalizedDescription=Cannot create a temporary file. Error: Undefined error: 0}}}}}
```

In the specified folder, there is an image; however, it's named IMG_3517.jpeg, not IMG_3517.jpg. This seems to be a bug in Photos' item provider implementation. If we use loadObject(ofClass: URL.self, completionHandler: ...) instead, we get the correct .jpeg URL in the completion handler.
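A minimal sketch of the workaround described above, assuming `attachment` is the NSItemProvider taken from NSExtensionItem.attachments:

```swift
import Foundation

// Sketch: ask the provider for a URL object instead of a file
// representation, which sidesteps the .jpg/.jpeg mismatch.
func loadImageURL(from attachment: NSItemProvider) {
    guard attachment.canLoadObject(ofClass: URL.self) else { return }
    _ = attachment.loadObject(ofClass: URL.self) { url, _ in
        guard let url = url else { return }
        // `url` points at the correctly named .jpeg file.
        print("Image URL: \(url)")
    }
}
```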
1 reply · 0 boosts · 606 views · Jan ’24
Code=-11803 "Cannot Record" error while capturing a photo from AVCaptureSession?
Hi everyone, I need your help. I am working on an application where I capture a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4:

```
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
```

Here is my code:

```swift
final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported",
                                   message: "Your device does not support scanning a code from an item. Please use a device with a camera.",
                                   preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - Actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(error)")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("NO image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
```

I don't know what I am doing wrong.
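One unverified possibility is that the capture is requested before the session is fully running on the older device. A hedged sketch of a defensive capture path meant to slot into CedulaScanningVC; the session and codec checks are assumptions, not a confirmed fix:

```swift
extension CedulaScanningVC {
    // Hypothetical variant of cameraButtonPressed(), assuming -11803 can
    // stem from capturing too early or requesting an unsupported codec.
    func cameraButtonPressedSafely() {
        guard captureSession.isRunning else {
            print("Capture session is not running yet")
            return
        }
        // Prefer a codec the output actually advertises on this device.
        let codec: AVVideoCodecType =
            stillImageOutput.availablePhotoCodecTypes.contains(.hevc) ? .hevc : .jpeg
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: codec])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}
```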
0 replies · 0 boosts · 505 views · Jan ’24
Capturing photo error Code=-11803 "Cannot Record"
Hi iOS community, I need your help. I am working on an application where I capture a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4:

```
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
```

My code is here:

```swift
final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported",
                                   message: "Your device does not support scanning a code from an item. Please use a device with a camera.",
                                   preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - Actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(error)")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("NO image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
```

I don't know what I am doing wrong.
0 replies · 0 boosts · 516 views · Jan ’24
Capabilities of Sensitive Content Analysis and iOS 17?
Hello. I have three questions about the Sensitive Content Analysis (SCA) framework:
1. SCA seems to be asynchronous. Is there a limit to how much a single app can send through it at a time?
2. For video analysis, can the video be broken into smaller chunks, and then all chunks be analyzed concurrently?
3. Can a video stream be sampled as it's being streamed? e.g. maybe it samples one frame every 3 seconds and scans those?
Thanks.
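For context, a minimal sketch of how a single asynchronous image check goes through the framework; whether concurrent or chunked submissions are throttled is an open question not settled here, and `imageURL` is a placeholder:

```swift
import SensitiveContentAnalysis

// Minimal sketch of one image check. Batching/throttling behavior for
// many concurrent calls is an assumption to test, not documented here.
func isSensitive(imageAt imageURL: URL) async throws -> Bool {
    let analyzer = SCSensitivityAnalyzer()
    // Analysis only runs when the user has enabled an analysis policy.
    guard analyzer.analysisPolicy != .disabled else { return false }
    let analysis = try await analyzer.analyzeImage(at: imageURL)
    return analysis.isSensitive
}
```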
0 replies · 0 boosts · 414 views · Jan ’24
Photogrammetry on iOS 17.0+ with TrueDepth
Hello, I came across the Object Capture for iOS example from WWDC23, which utilizes the LiDAR sensor. However, I'm interested in using the TrueDepth camera system instead. What I have tried is to save depth photos (.HEIC) to the Images/ folder (based on modifying the example below), which is hopefully used by the photogrammetry session. But I haven't been successful so far in starting the 3D reconstruction. Could there be something I've missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use? Thank you for your assistance.

```swift
import AVFoundation
import UIKit

class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let captureSession = AVCaptureSession()

    override init() {
        super.init()
        setupCaptureSession()
    }

    func setupCaptureSession() {
        // Get the front camera (TrueDepth camera)
        guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
            print("Unable to access front camera!")
            return
        }
        do {
            // Create an input object from the camera and add it to the session
            let input = try AVCaptureDeviceInput(device: frontCamera)
            captureSession.addInput(input)
        } catch {
            print("Unable to create AVCaptureDeviceInput: \(error)")
        }
        // Enable depth data capture if the output supports it
        if photoOutput.isDepthDataDeliverySupported {
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        // Add the photo output and start the capture session
        captureSession.addOutput(photoOutput)
        captureSession.startRunning()
    }

    func captureDepthPhoto() {
        // Create photo settings with the HEVC codec and depth delivery
        let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    // AVCapturePhotoCaptureDelegate
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else {
            print("Error while generating image from photo capture data.")
            return
        }
        // Build a URL under Documents/Images/ with a unique name.
        // Note: the Images/ directory must already exist, or the write below fails.
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let fileURL = documentsDirectory
            .appendingPathComponent("Images/")
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("heic")
        do {
            try imageData.write(to: fileURL)
            print("Saved photo with depth data to \(fileURL)")
        } catch {
            print("Failed to write the image data to disk: \(error)")
        }
    }
}
```
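For comparison, a minimal sketch of driving a PhotogrammetrySession directly from a folder of images, assuming `inputFolder` points at the saved HEICs and the device supports object capture; whether TrueDepth-derived depth satisfies the session's requirements is exactly the open question above:

```swift
import RealityKit

// Minimal sketch: point a photogrammetry session at an image folder and
// request a model file. Paths are placeholders.
func reconstruct(from inputFolder: URL, to modelURL: URL) async throws {
    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [.modelFile(url: modelURL)])
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Model written to \(modelURL)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}
```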
1 reply · 1 boost · 539 views · Jan ’24
How to visualize 16bit raw image data
I'm working on a very simple app where I need to visualize an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2 encoded image, and I already have all the raw bytes in a Data object. After googling for a long time, I still have not figured out the correct way. My current understanding is to first create a CVPixelBuffer to properly represent the encoding information, then convert the CVPixelBuffer to a UIImage. The following is my current implementation:

```swift
public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreateWithBytes(
            kCFAllocatorDefault,
            width,
            height,
            kCVPixelFormatType_422YpCbCr16,
            baseAddress,
            bytesPerRow,
            nil,
            nil,
            nil,
            &pixelBuffer)
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
        return UIImage(ciImage: ciImage)
    }
}
```

However, when I execute the code, I get the following error:

```
-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.
```

So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling, I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg, but the function is too complex for me to understand how to use it. Any help is appreciated. Thank you!
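One hedged alternative to the vImage route: let VideoToolbox convert the pixel buffer to BGRA, which CIImage does support. Whether VTPixelTransferSession accepts a v216 source buffer on a given OS version is an assumption to verify:

```swift
import CoreVideo
import VideoToolbox

// Hedged sketch: transfer the unsupported 'v216' buffer into a BGRA buffer,
// then hand the result to CIImage(cvPixelBuffer:). Source-format support
// by the transfer session is an assumption, not a guarantee.
func convertToBGRA(_ source: CVPixelBuffer) -> CVPixelBuffer? {
    var session: VTPixelTransferSession?
    guard VTPixelTransferSessionCreate(allocator: nil, pixelTransferSessionOut: &session) == noErr,
          let session = session else { return nil }
    defer { VTPixelTransferSessionInvalidate(session) }

    var destination: CVPixelBuffer?
    CVPixelBufferCreate(nil,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &destination)
    guard let destination = destination,
          VTPixelTransferSessionTransferImage(session, from: source, to: destination) == noErr else {
        return nil
    }
    return destination
}
```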
2 replies · 0 boosts · 584 views · Mar ’24
Image crop and edit
Hello Apple Developer Community, I'm excited to make my first post here and am seeking guidance for a feature I'd like to implement in my app. My objective is to enable users to select an image and crop it. Ideally, there should be a visible indicator, like a rectangle, to show the area that will be cropped. Upon clicking the save button, the image would be saved with the selected cropped area. I'm aiming for functionality similar to the image editor in the Photos app. Is there a straightforward method or integration for this that adheres to Apple's native frameworks, without resorting to external GitLab repositories? Thank you in advance for your assistance. Best regards, Nicola
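A minimal sketch of the final cropping step using only native frameworks, assuming `cropRect` is the selection rectangle your UI tracks, expressed in the pixel coordinates of the underlying CGImage:

```swift
import UIKit

// Sketch: crop the backing CGImage and rewrap it, preserving scale and
// orientation. The selection UI itself is out of scope here.
func crop(_ image: UIImage, to cropRect: CGRect) -> UIImage? {
    guard let cgImage = image.cgImage?.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```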
1 reply · 0 boosts · 410 views · Jan ’24
Photogrammetry
I do not really know how this works, but hi, I am Philemon. For a school assignment I need to program an app; I have 2 years for this, and it is for people who are interested in coding. I want to make an iOS app that can make 3D models from pictures (photogrammetry). I know that there are already apps for this, but I want to code it myself. I have a little bit of experience coding C# in Unity, but I really don't know where to start. Can someone help me? I know that Apple has RealityKit, but I want people without a LiDAR scanner to be able to use this too. So where do I start, and which language do I need to learn? Every comment is welcome! Kind regards, Philemon
0 replies · 0 boosts · 441 views · Jan ’24
PhotosPicker, how to select additional images later on?
In my app I use PhotosPicker to select images. After selecting the images, the image data is saved in a Core Data entity; this works fine. However, when the user wants to add more images and goes back to adding photos with PhotosPicker, how can I reference the already added images and show them as selected in PhotosPicker? The imageIdentifier is not allowed to be used, so how can I get a reference to the selected images to display them as selected in PhotosPicker? Thanks for any hint!
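A hedged sketch of one possible route: UIKit's PHPickerViewController supports preselection when the configuration is tied to the shared photo library, so storing each result's assetIdentifier at first selection time could make this work (SwiftUI's PhotosPicker does not expose the same hook directly, as far as this sketch assumes):

```swift
import PhotosUI

// Sketch: a library-backed configuration makes assetIdentifier available
// on results and enables preselection on the next presentation.
func makePicker(preselected identifiers: [String]) -> PHPickerViewController {
    var config = PHPickerConfiguration(photoLibrary: .shared())
    config.selectionLimit = 0                          // 0 = no limit
    config.preselectedAssetIdentifiers = identifiers   // previously stored IDs
    return PHPickerViewController(configuration: config)
}
```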
1 reply · 0 boosts · 464 views · Feb ’24
"Motion is not available" error with iOS 17 Live Photo
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photos? I can save the Live Photo, but I can't set it as wallpaper; I get the error "Motion is not available in iOS 17". There are already applications that allow you to do this, VideoToLive and the like. What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
0 replies · 0 boosts · 962 views · Feb ’24
PHExternalAssetResource: Unable to issue sandbox extension for file.mov
I'm trying to add a video asset to my app's photo library, via drag and drop from the Photos app. I managed to get the video's URL from the drag, but when I try to create the PHAsset for it I get an error:

```
PHExternalAssetResource: Unable to issue sandbox extension for /private/var/mobile/Containers/Data/Application/28E04EDD-56C1-405E-8EE0-7842F9082875/tmp/.com.apple.Foundation.NSItemProvider.fXiVzf/IMG_6974.mov
```

Here's my code to add the asset:

```swift
let url = URL(string: videoPath)!
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
}) { saved, error in
    // Error !!
}
```

Additionally, this check is true in the debugger:

```swift
UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(videoPath) == true
```

Note that adding still images, much in the same way, works fine, and I naturally have photo library permissions enabled. Any idea what I'm missing? I'm seeing the same error on iOS 17.2 and iPadOS 17.2, with Xcode 15.2. Thanks for any tips ☺️
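A hedged sketch of one common remedy, assuming the provider's temporary location is what Photos cannot extend sandbox access to: copy the dragged file into the app's own container inside the provider callback, then save from the copy. `itemProvider` is assumed to come from the drop session:

```swift
import Photos
import UniformTypeIdentifiers

// Sketch: copy out of the NSItemProvider temp directory before saving,
// since that directory is cleaned up when the callback returns.
func saveDroppedVideo(from itemProvider: NSItemProvider) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, _ in
        guard let url = url else { return }
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(url.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        do {
            // Must happen before this closure returns.
            try FileManager.default.copyItem(at: url, to: localURL)
        } catch { return }
        PHPhotoLibrary.shared().performChanges({
            _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
        }) { saved, error in
            print("Saved: \(saved), error: \(String(describing: error))")
        }
    }
}
```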
3 replies · 0 boosts · 1.1k views · Feb ’24
Error When Saving Video To Camera Roll
I am working on enabling users to save a video from a post in a social media app to their camera roll. I am trying to use PHPhotoLibrary to perform the task, similarly to how I implemented saving images and GIFs. However, when I perform the task with the code as is, I get the following errors:

```
Error Domain=PHPhotosErrorDomain Code=-1 "(null)"
The operation couldn’t be completed. (PHPhotosErrorDomain error -1.)
```

The implementation is as follows:

```swift
Button(action: {
    guard let videoURL = URL(string: media.link.absoluteString) else {
        print("Invalid video url.")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        print("Video URL: \(videoURL)")
    }) { (success, error) in
        if let error = error {
            debugPrint(error)
            print(error.localizedDescription)
        } else {
            print("Video saved to camera roll!")
        }
    }
}) {
    Text("Save Video")
    Image(systemName: "square.and.arrow.down")
}
```

The video URL is successfully fetched dynamically from the post, but there's an issue with storing it locally in the library. What am I missing?
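A hedged sketch of one likely fix, assuming the failure is because creationRequestForAssetFromVideo(atFileURL:) expects a local file URL rather than a remote one: download the video first, then save the local copy:

```swift
import Photos

// Sketch: fetch the remote video to a local file, then hand the local
// URL to Photos. The remote URL alone cannot be ingested as an asset.
func saveRemoteVideo(_ remoteURL: URL) {
    URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, _ in
        guard let tempURL = tempURL else { return }
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        try? FileManager.default.moveItem(at: tempURL, to: localURL)
        PHPhotoLibrary.shared().performChanges({
            _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
        }) { success, error in
            print("Saved: \(success), error: \(String(describing: error))")
        }
    }.resume()
}
```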
0 replies · 0 boosts · 537 views · Feb ’24
Camera intrinsic matrix for single photo capture
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture.

I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporally closest to when the actual photo was taken.

Is there a better, simpler way of getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
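For reference, a minimal sketch of the video-data-output route described above; the problem of matching a buffer to the photo's capture time remains open:

```swift
import AVFoundation
import simd

// Enable intrinsic matrix delivery on the video data output's connection.
func enableIntrinsics(on connection: AVCaptureConnection) {
    if connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Read the 3x3 intrinsic matrix attachment off a delivered sample buffer.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
```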
0 replies · 0 boosts · 536 views · Feb ’24
PHPickerViewController crashing with _PFAssertFailHandler
Hello, we are embedding a PHPickerViewController with UIKit (adding the VC as a child VC, embedding the view, calling didMoveToParent) in our app using the compact mode. We are disabling the following capabilities: .collectionNavigation, .selectionActions, .search. One of our users on iOS 17.2.1 with an iPhone 12 encountered a crash with the following stack trace:

```
Crashed: com.apple.main-thread
0  libsystem_kernel.dylib   0x9fbc    __pthread_kill + 8
1  libsystem_pthread.dylib  0x5680    pthread_kill + 268
2  libsystem_c.dylib        0x75b90   abort + 180
3  PhotoFoundation          0x33b0    -[PFAssertionPolicyCrashReport notifyAssertion:] + 66
4  PhotoFoundation          0x3198    -[PFAssertionPolicyComposite notifyAssertion:] + 160
5  PhotoFoundation          0x374c    -[PFAssertionPolicyUnique notifyAssertion:] + 176
6  PhotoFoundation          0x2924    -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140
7  PhotoFoundation          0x3da4    _PFAssertFailHandler + 148
8  PhotosUI                 0x22050   -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356
9  PhotosUI                 0x22b74   __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52
10 libdispatch.dylib        0x26a8    _dispatch_call_block_and_release + 32
11 libdispatch.dylib        0x4300    _dispatch_client_callout + 20
12 libdispatch.dylib        0x12998   _dispatch_main_queue_drain + 984
13 libdispatch.dylib        0x125b0   _dispatch_main_queue_callback_4CF + 44
14 CoreFoundation           0x3701c   __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
15 CoreFoundation           0x33d28   __CFRunLoopRun + 1996
16 CoreFoundation           0x33478   CFRunLoopRunSpecific + 608
17 GraphicsServices         0x34f8    GSEventRunModal + 164
18 UIKitCore                0x22c62c  -[UIApplication _run] + 888
19 UIKitCore                0x22bc68  UIApplicationMain + 340
20 WorkAngel                0x8060    main + 20 (main.m:20)
21 ???                      0x1bd62adcc (Missing)
```

Please share if you have any ideas as to what might have caused this, or what to look at in such a case. I haven't been able to reproduce it myself, unfortunately.
1 reply · 0 boosts · 443 views · Feb ’24
How do I disable video stabilization in an AVCaptureSession with AVCapturePhotoOutput added?
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and the 16:9 ratio does not solve my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add an AVCapturePhotoOutput, I cannot turn off image stabilization:

```swift
self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}
```

As a temporary solution, I added only AVCaptureVideoDataOutput to the session without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_:didOutput:from:) function. However, this time I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

```swift
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)

    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // Try to get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
```
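A hedged sketch of one possible direction: skip presets entirely and select a 4:3 high-resolution activeFormat with the .inputPriority preset, so the preset no longer dictates aspect ratio; whether this also frees the stabilization choice is an assumption to test, and the exact dimensions are device-dependent:

```swift
import AVFoundation
import CoreMedia

// Sketch: choose a 4:3 photo-style format directly instead of a preset.
// 4032x3024 is a placeholder; enumerate device.formats on real hardware.
func configureFourByThree4K(device: AVCaptureDevice, session: AVCaptureSession) throws {
    session.sessionPreset = .inputPriority
    let target = device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == 4032 && dims.height == 3024
    }
    guard let target = target else { return }
    try device.lockForConfiguration()
    device.activeFormat = target
    device.unlockForConfiguration()
}
```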
0 replies · 0 boosts · 479 views · Mar ’24