Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

87 Posts
Post not yet marked as solved
0 Replies
265 Views
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but my other attempts were not. I tried Photogrammetry with URL inputs, using pictures from AVCapturePhoto. Strangely, if the metadata is not replaced, photogrammetry finishes, but it seems no depthData or gravity info is used (depth and gravity are separate files); if the metadata is injected, the attempt fails. This time I tried Photogrammetry with a PhotogrammetrySample sequence, and it also failed. The settings were:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (failed with a crash) or HEVC (just failed)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photoSettings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested the sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
What should I do to make Photogrammetry succeed?
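For reference, a minimal sketch of the samples-based initializer as documented for PhotogrammetrySession; whether an iPad accepts a PhotogrammetrySample sequence is exactly the open question here, and the samples array is assumed to be built from your captured pixel buffers, depth maps, and gravity vectors:

import RealityKit
import Foundation

// A minimal sketch, assuming the documented samples-based initializer.
// `samples` would be built from your captured images, depth maps, and gravity vectors.
func runPhotogrammetry(samples: [PhotogrammetrySample], outputURL: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal
    let session = try PhotogrammetrySession(input: samples, configuration: configuration)
    Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete: print("Reconstruction finished.")
            case .requestError(_, let error): print("Request failed: \(error)")
            default: break
            }
        }
    }
    try session.process(requests: [.modelFile(url: outputURL)])
}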
Posted. Last updated.
Post not yet marked as solved
0 Replies
40 Views
Why does using CameraPicker require user authorization through a pop-up? Why don't ImagePicker or PhotoPicker require additional pop-up authorization for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up while the others do not? Additionally, I thought that by configuring the picker I would, in theory, not need any permissions. If permissions are still required, wouldn't it make more sense to request camera permission directly and use the native camera functionality? What, then, are the advantages of using the picker?
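For comparison, a minimal sketch of presenting the system photo picker: because PHPickerViewController runs in a separate process and hands back only what the user picked, no photo-library prompt is shown, whereas camera capture always requires authorization since the app sees the live feed:

import UIKit
import PhotosUI

// Minimal sketch: the out-of-process picker needs no photo-library permission.
func presentPhotoPicker(from presenter: UIViewController & PHPickerViewControllerDelegate) {
    var config = PHPickerConfiguration()
    config.filter = .images
    config.selectionLimit = 1
    let picker = PHPickerViewController(configuration: config)
    picker.delegate = presenter
    presenter.present(picker, animated: true)
}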
Posted. Last updated.
Post not yet marked as solved
4 Replies
750 Views
In my app I get a UIImage for a PHAsset via PHImageManager.requestImage(for:targetSize:contentMode:options:resultHandler:). I directly display that image in a UIImageView that has preferredImageDynamicRange set to .high. The problem is that I do not see the high dynamic range. I see the HDRDemo23 sample code uses PhotosPicker to get a UIImage from Data through UIImageReader, whose config enables prefersHighDynamicRange. Is there a way to support HDR when using the Photos APIs to request display images? And is there support for PHLivePhoto displayed in PHLivePhotoView, retrieved via PHImageManager.requestLivePhoto?
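A possible workaround to try, sketched here assuming iOS 17's UIImageReader: request the asset's original data rather than a decoded UIImage, then decode it with HDR preferred:

import UIKit
import Photos

// A sketch, assuming UIImageReader (iOS 17+): decode asset data with HDR preferred.
func requestHDRImage(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        guard let data else { return completion(nil) }
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        completion(UIImageReader(configuration: config).image(data: data))
    }
}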
Posted by Jordan. Last updated.
Post not yet marked as solved
1 Replies
377 Views
I'm working on a game that uses HDR display output for a much brighter range. One feature of the game is the ability to export in-game photos, and the only appropriate format I found for this is OpenEXR. The built-in Photos app is capable of showing HDR photos on an HDR display. However, if I drop an EXR file with a large range into Photos, it won't be properly displayed in HDR mode with the full range. At the same time, pressing Edit on the file makes it HDR-displayable, and it remains displayable if I save the edit with any change, even a tiny one. Moreover, if the EXR file is placed next to a 'true' HDR one (or an EXR 'fixed' as above), then during scrolling between the files, the broken EXR magically fixes itself at the exact moment the other HDR file moves onto the screen. I tested different files with various internal formats; it seems to be a common problem for all of them. Tested on the latest iOS 17.0.3. Thank you in advance.
Posted by Goryh. Last updated.
Post not yet marked as solved
0 Replies
128 Views
When importing over a USB cable connection (no cloud) from updated iPhones (11 Pro Max, 12 mini, and 13, with their updated iOSes) into the Photos app on updated macOSes (Ventura v13.x, Big Sur v11.x, and Mojave v10.14.x), I noticed that imports show already-imported media and are missing brand-new media. Others and I have noticed this problem across multiple Macs and iPhones: https://discussions.apple.com/thread/255565285 and https://talk.tidbits.com/t/does-anyone-have-problems-importing-iphones-medias-into-macos-photos-app/27406/. Thank you for reading and hopefully answering soon. :)
Posted by antdude. Last updated.
Post not yet marked as solved
0 Replies
273 Views
Context
So basically I've trained my model for object detection with more than 4,000 images. Under Preview I'm able to check the prediction for image "A", which detects two labels with 100% confidence, and its bounding boxes look accurate.
The problem itself
However, inside a Swift Playground, when I try to perform object detection using the same model and the same image, I don't get the same results.
What I expected
That after performing the request and processing the array of VNRecognizedObjectObservation, I would see the very same results that appear in the Create ML preview.
Notes:
I'm importing the model into the playground by drag and drop.
I trained on images in JPEG format.
The test image was rotated to look vertical using the macOS Finder rotation tool.
I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.
Swift Playground code
This is the code I'm using:

import UIKit
import Vision
import CoreML

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}

Additional Notes & Uncertainties
Not sure if this is relevant, but just in case: I trained the model using pictures I took with my iPhone in the 48MP HEIC format. All photos were in vertical position. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to be able to annotate the images with the CVAT tool and then convert them to the Create ML annotation format.
Assumption #1
I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the image dimensions, meaning I don't have to worry about using very large images to train my model. Is this correct?
Assumption #2
That also makes me assume the same resizing happens when I make a prediction?
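One thing worth double-checking here (a hedged suggestion, not a confirmed fix): VNImageRequestHandler(cgImage:) assumes upright pixels, so an image whose vertical look comes from EXIF orientation can score differently than in the Create ML preview. Passing the UIImage's orientation explicitly uses the standard mapping:

import UIKit
import Vision
import ImageIO

// Standard mapping from UIImage.Orientation to CGImagePropertyOrientation.
extension CGImagePropertyOrientation {
    init(_ orientation: UIImage.Orientation) {
        switch orientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

let image = UIImage(named: "TEST_IMAGE.HEIC")!
let handler = VNImageRequestHandler(cgImage: image.cgImage!,
                                    orientation: CGImagePropertyOrientation(image.imageOrientation))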
Posted by joe_dev. Last updated.
Post not yet marked as solved
0 Replies
198 Views
I am currently modernizing an application for macOS Sonoma (14.4) that triggers a Canon 60D via USB cable. Unlike what happened before in Mac OS X 10.6, the camera (ICCameraDevice) has a description that contains only 2 capabilities:

{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO

As you can see, ICCameraDeviceCanTakePicture is not present now, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
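If raw PTP turns out to be the only route, here is a heavily hedged sketch: the 12-byte buffer below follows the standard PTP operation-request container layout (4-byte length, 2-byte type where 1 means command, 2-byte opcode, 4-byte transaction ID), and 0x100E is the standard PTP InitiateCapture opcode. Whether PTPCamera.app and the 60D accept it this way is an assumption to verify:

import ImageCaptureCore

// Assumption: InitiateCapture (0x100E) sent via the PTP pass-through triggers the shutter.
func sendInitiateCapture(to camera: ICCameraDevice, transactionID: UInt32) {
    guard camera.capabilities.contains(ICCameraDeviceCanAcceptPTPCommands) else { return }
    var packet = Data()
    func append<T: FixedWidthInteger>(_ value: T) {
        withUnsafeBytes(of: value.littleEndian) { packet.append(contentsOf: $0) }
    }
    append(UInt32(12))       // container length: header only, no parameters
    append(UInt16(1))        // container type: command block
    append(UInt16(0x100E))   // operation code: InitiateCapture
    append(transactionID)
    camera.requestSendPTPCommand(packet, outData: Data()) { response, ptpResponse, error in
        if let error { print("PTP command failed: \(error)") }
    }
}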
Posted by da_gagnon. Last updated.
Post not yet marked as solved
0 Replies
216 Views
Why does PhotogrammetrySession.isSupported return true only if Object Capture is supported? It would be great if you could use PhotogrammetrySession on iOS devices without LiDAR and feed it a folder of pictures to make a 3D model. Thanks!
Posted. Last updated.
Post not yet marked as solved
2 Replies
382 Views
I'm working on a very simple app where I need to visualize an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2-encoded image, and I already have all the raw bytes saved in a Data object. After googling for a long time, I still have not figured out the correct way. My current understanding is to first create a CVPixelBuffer to properly represent the encoding information, then convert the CVPixelBuffer to a UIImage. The following is my current implementation:

public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data  // the original referenced rosImage.data; it should use the parameter
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_422YpCbCr16,
                                     baseAddress,
                                     bytesPerRow,
                                     nil, nil, nil,
                                     &pixelBuffer)
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
        return UIImage(ciImage: ciImage)
    }
}

However, when I execute the code, I get the following error:

-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.

So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg, but it is too complex for me to understand how to use. Any help is appreciated. Thank you!
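A possible direction, sketched with assumptions: YUY2 is 8 bits per component (16 bits per pixel), so the 16-bit-per-component v216 format and the 422CbYpCrYp16 converter are likely the wrong match. vImage's 8-bit 4:2:2 converter can go straight to ARGB8888; BT.601 video-range levels are assumed here and may need adjusting for your source:

import Accelerate
import UIKit

// A sketch under assumptions: YUY2 (Yp Cb Yp Cr, 8 bits/component) to ARGB8888 via vImage.
func yuy2ToUIImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    var info = vImage_YpCbCrToARGB()
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16, CbCr_bias: 128,
                                             YpRangeMax: 235, CbCrRangeMax: 240,
                                             YpMax: 255, YpMin: 0,
                                             CbCrMax: 255, CbCrMin: 0)  // BT.601 video range (assumed)
    guard vImageConvert_YpCbCrToARGB_GenerateConversion(
            kvImage_YpCbCrToARGBMatrix_ITU_R_601_4, &pixelRange, &info,
            kvImage422YpCbYpCr8, kvImageARGB8888,
            vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

    var source = data
    return source.withUnsafeMutableBytes { raw -> UIImage? in
        var src = vImage_Buffer(data: raw.baseAddress,
                                height: vImagePixelCount(height),
                                width: vImagePixelCount(width),
                                rowBytes: bytesPerRow)
        guard var dest = try? vImage_Buffer(width: width, height: height, bitsPerPixel: 32)
        else { return nil }
        defer { dest.free() }
        var permuteMap: [UInt8] = [0, 1, 2, 3]  // keep ARGB channel order
        guard vImageConvert_422YpCbYpCr8ToARGB8888(&src, &dest, &info, &permuteMap, 255,
                                                   vImage_Flags(kvImageNoFlags)) == kvImageNoError,
              let format = vImage_CGImageFormat(
                    bitsPerComponent: 8, bitsPerPixel: 32,
                    colorSpace: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue)),
              let cgImage = try? dest.createCGImage(format: format)
        else { return nil }
        return UIImage(cgImage: cgImage)
    }
}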
Posted by yaoyuh. Last updated.
Post not yet marked as solved
0 Replies
242 Views
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but that preset gives me a 16:9 photo with the surroundings cropped, and the 16:9 ratio unfortunately does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add an AVCapturePhotoOutput, I cannot turn off image stabilization:

self.capturePhotoOutput = AVCapturePhotoOutput()
self.captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    // Note: the original compared the optional Bool to nil, which always passed; test the value instead.
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print(error)
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil,
                                           shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0,
                            orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}

As a temporary solution, I added only an AVCaptureVideoDataOutput to the session, without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_:didOutput:from:) function. However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session that has an AVCapturePhotoOutput added:

self.captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if captureSession?.canAddOutput(videoDataOutput!) == true {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled again:
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print(error)
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // Try to get a CVImageBuffer out of the sample buffer.
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    guard let cgImage = ciContext.createCGImage(ciImage, from: rect) else { return }
    let uiImage = UIImage(cgImage: cgImage)
}
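One avenue to experiment with (a sketch under assumptions, not a confirmed fix): skip the presets entirely, select a 4:3 high-resolution format via activeFormat with the .inputPriority preset, and keep the photo connection's stabilization off. Whether this fully defeats still-image stabilization on a given device is worth testing:

import AVFoundation
import CoreMedia

// Assumption: a 4:3 format with width >= 4000 matches the "4K-class" still the post wants.
func configureFourByThreeCapture(session: AVCaptureSession,
                                 device: AVCaptureDevice,
                                 photoOutput: AVCapturePhotoOutput) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }
    session.sessionPreset = .inputPriority  // honor activeFormat instead of a preset
    if let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width * 3 == dims.height * 4 && dims.width >= 4000  // 4:3 ratio
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }
    if let connection = photoOutput.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
}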
Posted. Last updated.
Post not yet marked as solved
1 Replies
243 Views
Hello, we are embedding a PHPickerViewController with UIKit (adding the view controller as a child view controller, embedding its view, and calling didMoveToParent) in our app, using compact mode. We are disabling the following capabilities: .collectionNavigation, .selectionActions, .search. One of our users, on iOS 17.2.1 with an iPhone 12, encountered a crash with the following stack trace:

Crashed: com.apple.main-thread
0  libsystem_kernel.dylib 0x9fbc __pthread_kill + 8
1  libsystem_pthread.dylib 0x5680 pthread_kill + 268
2  libsystem_c.dylib 0x75b90 abort + 180
3  PhotoFoundation 0x33b0 -[PFAssertionPolicyCrashReport notifyAssertion:] + 66
4  PhotoFoundation 0x3198 -[PFAssertionPolicyComposite notifyAssertion:] + 160
5  PhotoFoundation 0x374c -[PFAssertionPolicyUnique notifyAssertion:] + 176
6  PhotoFoundation 0x2924 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140
7  PhotoFoundation 0x3da4 _PFAssertFailHandler + 148
8  PhotosUI 0x22050 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356
9  PhotosUI 0x22b74 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52
10 libdispatch.dylib 0x26a8 _dispatch_call_block_and_release + 32
11 libdispatch.dylib 0x4300 _dispatch_client_callout + 20
12 libdispatch.dylib 0x12998 _dispatch_main_queue_drain + 984
13 libdispatch.dylib 0x125b0 _dispatch_main_queue_callback_4CF + 44
14 CoreFoundation 0x3701c __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
15 CoreFoundation 0x33d28 __CFRunLoopRun + 1996
16 CoreFoundation 0x33478 CFRunLoopRunSpecific + 608
17 GraphicsServices 0x34f8 GSEventRunModal + 164
18 UIKitCore 0x22c62c -[UIApplication _run] + 888
19 UIKitCore 0x22bc68 UIApplicationMain + 340
20 WorkAngel 0x8060 main + 20 (main.m:20)
21 ??? 0x1bd62adcc (Missing)

Please share if you have any ideas as to what might have caused this, or what to look at in such a case. I haven't been able to reproduce it myself, unfortunately.
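For reference, a minimal sketch of the embedding described above, using the iOS 17 mode and disabledCapabilities APIs (names are illustrative); this does not by itself explain the PhotoFoundation assertion, which fires while connecting to the picker's remote view controller:

import PhotosUI
import UIKit

// Minimal sketch of embedding the compact picker as a child view controller.
func embedPicker(in parent: UIViewController, container: UIView) {
    var config = PHPickerConfiguration(photoLibrary: .shared())
    config.mode = .compact
    config.disabledCapabilities = [.collectionNavigation, .selectionActions, .search]
    let picker = PHPickerViewController(configuration: config)
    parent.addChild(picker)
    picker.view.frame = container.bounds
    container.addSubview(picker.view)
    picker.didMove(toParent: parent)
}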
Posted. Last updated.
Post not yet marked as solved
0 Replies
287 Views
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture.

I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporally closest to when the actual photo was taken.

Is there a better, simpler way of getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
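For completeness, a minimal sketch of the video-data-output route described above; the attachment is documented as a CFData wrapping a matrix_float3x3, and the byte-copy below assumes exactly that layout:

import AVFoundation
import simd

// Enable intrinsic-matrix delivery on the video data output's connection.
func enableIntrinsicDelivery(on output: AVCaptureVideoDataOutput) {
    guard let connection = output.connection(with: .video),
          connection.isCameraIntrinsicMatrixDeliverySupported else { return }
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// Read the 3x3 intrinsic matrix attachment off a delivered sample buffer.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data,
          data.count >= MemoryLayout<matrix_float3x3>.size else { return nil }
    var matrix = matrix_float3x3()
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    return matrix
}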
Posted. Last updated.
Post not yet marked as solved
0 Replies
303 Views
I am working on enabling users to save a video from a post in a social media app to their camera roll. I am trying to use PHPhotoLibrary to perform the task, similarly to how I implemented saving images and GIFs. However, when I try to perform the task with the code as is, I get the following error:

Error Domain=PHPhotosErrorDomain Code=-1 "(null)" The operation couldn’t be completed. (PHPhotosErrorDomain error -1.)

The implementation is as follows:

Button(action: {
    guard let videoURL = URL(string: media.link.absoluteString) else {
        print("Invalid video url.")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        print("Video URL: \(videoURL)")
    }) { (success, error) in
        if let error = error {
            debugPrint(error)
            print(error.localizedDescription)
        } else {
            print("Video saved to camera roll!")
        }
    }
}) {
    Text("Save Video")
    Image(systemName: "square.and.arrow.down")
}

The video URL is successfully fetched dynamically from the post, but there's an issue with storing it locally in the library. What am I missing?
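A likely cause, sketched below as an assumption: creationRequestForAssetFromVideo(atFileURL:) expects a local file URL, and media.link appears to be a remote URL, which is consistent with the opaque -1 error. Downloading to a temporary file first and then saving that file avoids handing Photos a non-file URL (the .mp4 extension is a guess; match your actual media type):

import Foundation
import Photos

// Download the remote video to a local file, then save the local file.
func saveRemoteVideoToCameraRoll(_ remoteURL: URL) {
    URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL else {
            print(error?.localizedDescription ?? "Download failed.")
            return
        }
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("mp4")  // assumption: adjust to the real container type
        do {
            try FileManager.default.moveItem(at: tempURL, to: localURL)
        } catch {
            print(error)
            return
        }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
        }) { success, error in
            print(success ? "Video saved to camera roll!" : String(describing: error))
        }
    }.resume()
}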
Posted. Last updated.
Post marked as solved
3 Replies
515 Views
I'm trying to add a video asset to my app's photo library, via drag and drop from the Photos app. I managed to get the video's URL from the drag, but when I try to create the PHAsset for it I get an error:

PHExternalAssetResource: Unable to issue sandbox extension for /private/var/mobile/Containers/Data/Application/28E04EDD-56C1-405E-8EE0-7842F9082875/tmp/.com.apple.Foundation.NSItemProvider.fXiVzf/IMG_6974.mov

Here's my code to add the asset:

let url = URL(string: videoPath)!
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
}) { saved, error in
    // Error !!
}

Additionally, this check is true in the debugger:

UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(videoPath) == true

Note that adding still images, much in the same way, works fine, and I naturally have photo library permissions enabled. Any idea what I'm missing? I'm seeing the same error on iOS 17.2 and iPadOS 17.2, with Xcode 15.2. Thanks for any tips ☺️
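One hedged workaround: the dragged file lives in NSItemProvider's temporary location, which the Photos daemon may not get a sandbox extension for. Copying it into the app's own temporary directory first, then saving the copy, sidesteps that; a sketch, assuming the drop is loaded via loadFileRepresentation:

import Photos
import UniformTypeIdentifiers

// Copy the provider's file synchronously inside the handler (the provider
// deletes its file when the handler returns), then save the local copy.
func saveDroppedVideo(from provider: NSItemProvider) {
    provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
        guard let url else {
            print(error?.localizedDescription ?? "No file representation.")
            return
        }
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(url.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        do {
            try FileManager.default.copyItem(at: url, to: localURL)
        } catch {
            print(error)
            return
        }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
        }) { saved, error in
            if let error { print(error) }
        }
    }
}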
Posted by manuelC. Last updated.
Post not yet marked as solved
1 Replies
293 Views
In my app I use PhotosPicker to select images. After selecting the images, the image data is saved in a Core Data entity; this works fine. However, when the user wants to add more images and goes back to PhotosPicker, how can I reference the already-added images and show them as selected in PhotosPicker? The imageIdentifier is not allowed to be used, so how can I get a reference to the selected images to display them as selected in PhotosPicker? Thanks for any hint!
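A possible approach, sketched under the assumption that persisting identifiers fits your use case: when PhotosPicker is created with photoLibrary: .shared(), each PhotosPickerItem exposes an itemIdentifier that can be stored alongside the image data and later used to rebuild the selection:

import SwiftUI
import PhotosUI

// `storedIdentifiers` is a hypothetical stand-in for identifiers loaded from Core Data.
struct PickerWithPreselection: View {
    let storedIdentifiers: [String]
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        PhotosPicker(selection: $selection,
                     matching: .images,
                     photoLibrary: .shared()) {
            Text("Add photos")
        }
        .onAppear {
            // Rebuild the selection so previously added images show as selected.
            selection = storedIdentifiers.map { PhotosPickerItem(itemIdentifier: $0) }
        }
    }
}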
Posted by frunny. Last updated.
Post not yet marked as solved
0 Replies
600 Views
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photos? I can save a live photo, but I can't set it as wallpaper; I get the error "Motion is not available in iOS 17". There are already applications that allow you to do this, such as VideoToLive and the like. What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
Posted by MRKIOS. Last updated.
Post not yet marked as solved
0 Replies
306 Views
I do not really know how this works, but hi, I am Philemon. For a school assignment I need to program an app; I have 2 years for this, and it is aimed at people who are interested in coding. I want to make an iOS app that can make 3D models from pictures (photogrammetry). I know that there are already apps for this, but I want to code it myself. I have a little bit of experience coding C# in Unity, but I really don't know where to start. Can someone help me? I know that Apple has RealityKit, but I want people without a LiDAR scanner to be able to use this too. So where do I start, and which language do I need to learn? Every comment is welcome! Kind regards, Philemon
Posted by Philemon. Last updated.
Post not yet marked as solved
1 Replies
536 Views
Hi! I am creating a document-based app, and I am wondering how I can save images in my model and integrate them into the document file. "Image" does not conform to "PersistentModel", and I am sure that "Data" will be saved in the document file. Any clue on how I can do this? Many thanks.
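A minimal sketch, assuming the model layer is SwiftData (where "PersistentModel" comes from): store the bytes as Data rather than an Image, and decode on demand for display; note that .externalStorage is only a hint and may behave differently in document-based containers:

import SwiftUI
import SwiftData
import UIKit

// Data persists into the document file; Image does not conform to PersistentModel.
@Model
final class PictureRecord {
    @Attribute(.externalStorage) var imageData: Data?
    init(imageData: Data? = nil) { self.imageData = imageData }
}

// Decode the stored bytes into a SwiftUI Image when rendering.
func displayImage(for record: PictureRecord) -> Image? {
    guard let data = record.imageData,
          let uiImage = UIImage(data: data) else { return nil }
    return Image(uiImage: uiImage)
}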
Posted. Last updated.
Post not yet marked as solved
0 Replies
224 Views
How can we change the image quality, size, camera, and cadence used during RoomPlan's scanning? We are getting the images from RoomCaptureSession.
Posted by Ramneet. Last updated.