Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

76 Posts
Sort by: Post / Replies / Boosts / Views / Activity
Show each picture's album name in its Info in the Photos app, derived from the user's self-created albums
I collect a lot of memes from the Internet and save them on my iPhone, naming and sorting them into albums. But when I tap a photo in "All Photos", its Info does not show which album I added it to, which is very frustrating. With this feature I could easily find and manage the memes that I did not add to the corresponding album.
Replies: 1 · Boosts: 0 · Views: 104 · Activity: 4d
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well suited for compression by video codecs, since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce the storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (a 5000%+ increase), on top of there being fatal errors when using async.
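One alternative worth considering, offered purely as a sketch and not as the poster's solution: keep the archive lossless by delta-encoding each group of similar images against a base frame and compressing the (mostly zero) deltas with a lossless algorithm from Apple's Compression framework. The function below assumes each `Data` holds the raw decoded pixel bytes of one image, all the same size; decoding is the symmetric XOR after decompression.

```swift
import Foundation

/// Delta-encodes a group of similar images' raw pixel buffers and
/// compresses each delta losslessly. XORing near-identical frames
/// against the first frame yields mostly-zero data, which lzfse
/// compresses very well -- and the round trip is bit-exact, so alpha
/// and every pixel value survive.
func compressGroup(_ frames: [Data]) throws -> [Data] {
    guard let base = frames.first else { return [] }
    return try frames.map { frame in
        precondition(frame.count == base.count, "frames must be equal-sized")
        var delta = Data(count: frame.count)
        for i in 0..<frame.count {
            // XOR against the base frame; identical regions become zeros.
            delta[i] = frame[i] ^ base[i]
        }
        return try (delta as NSData).compressed(using: .lzfse) as Data
    }
}
```

To restore an image, decompress its entry with `NSData.decompressed(using: .lzfse)` and XOR it against the base frame again. This trades the video codec's inter-frame prediction for a much cruder delta, but it is guaranteed lossless.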
Replies: 18 · Boosts: 0 · Views: 273 · Activity: 1d
Add 30 frames per second in AVAssetWriter
Hello, I have converted UIImage to CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to add 30 CVPixelBuffers per second, because for the video to work on social media it must be 30 frames per second. The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error, something like "Operation cannot be completed". Could you give me an example of a loop that adds 30 CVPixelBuffers per second to a video being written? Example:

while true {
    if videoInput.isReadyForMoreMediaData {
        break
    }
    if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
        adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
    }
}

I await your response.
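A minimal sketch of such a loop, assuming a hypothetical `nextFrame` closure that returns the current CVPixelBuffer (repeating it for as long as a frame should persist) and an already-configured writer input and pixel-buffer adaptor. Two things differ from the snippet above: the loop waits (rather than breaks) until the input is ready, and each appended frame gets its own presentation time at a 30 fps timescale instead of a constant CMTime, which is likely what trips up long videos.

```swift
import AVFoundation

/// Appends `totalFrames` pixel buffers at 30 fps.
/// Assumes `videoInput`, `adaptor`, and `nextFrame` are configured elsewhere.
func appendFrames(totalFrames: Int,
                  videoInput: AVAssetWriterInput,
                  adaptor: AVAssetWriterInputPixelBufferAdaptor,
                  nextFrame: () -> CVPixelBuffer?) {
    var frameIndex: Int64 = 0
    while frameIndex < Int64(totalFrames) {
        // Back off until the writer can accept more data instead of spinning.
        guard videoInput.isReadyForMoreMediaData else {
            Thread.sleep(forTimeInterval: 0.01)
            continue
        }
        guard let buffer = nextFrame() else { break }
        // Each frame gets its own timestamp: frameIndex / 30 seconds.
        let time = CMTime(value: frameIndex, timescale: 30)
        if !adaptor.append(buffer, withPresentationTime: time) {
            break // The writer hit an error; inspect assetWriter.error here.
        }
        frameIndex += 1
    }
    videoInput.markAsFinished()
}
```

A 50-minute, 30 fps video is roughly 90,000 appends, so checking `assetWriter.status` and `assetWriter.error` after a failed append (instead of appending blindly) is what usually reveals the real cause of "Operation cannot be completed".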
Replies: 0 · Boosts: 0 · Views: 146 · Activity: 1w
Not able to view custom stereo/spatial images in visionOS 2
Hello, I've been creating my own stereoscopic images on my laptop and AirDropping them to the Vision Pro to view them in 3D. My custom images have a left_eye.png and a right_eye.png and have been combined into one HEIF image (as is done natively on the headset). In the visionOS 1.x Photos app I was able to see my custom images in 3D, but in visionOS 2 the device no longer recognizes that my images should be shown stereoscopically and instead shows them in 2D. I see that it gives me the option to use the AI tool to convert 2D into 3D, but the original file that I AirDropped to myself (Mac --> AVP Photos album) already has a left and right image pair. Is this something that can be fixed?
Replies: 1 · Boosts: 0 · Views: 159 · Activity: 1w
Images retain memory usage
This is very simple code with only one button to start with. After you click the button, a list of images appears. The issue I have is that when I click the new button to hide the images, memory usage stays the same as when all the images first appeared. As you can see from the images below, when I start the app it begins at 18.5 MB; when I show the images it jumps to 38.5 MB and remains there forever. I have tried various ways to reduce the memory usage, but I just can't find a solution that works. Does anyone know how to solve this? Thank you!

import SwiftUI

struct ContentView: View {
    @State private var imagesBeingShown = false
    @State var listOfImages = ["ImageOne", "ImageTwo", "ImageThree", "ImageFour", "ImageFive", "ImageSix", "ImageSeven", "ImageEight", "ImageNine", "ImageTen", "ImageEleven", "ImageTwelve", "ImageThirteen", "ImageFourteen", "ImageFifteen", "ImageSixteen", "ImageSeventeen", "ImageEighteen"]

    var body: some View {
        if !imagesBeingShown {
            VStack {
                Button(action: {
                    imagesBeingShown = true
                }, label: {
                    Text("Turn True")
                })
            }
            .padding()
        } else {
            VStack {
                Button(action: {
                    imagesBeingShown = false
                }, label: {
                    Text("Turn false")
                })
                ScrollView {
                    LazyVStack {
                        ForEach(0..<listOfImages.count, id: \.self) { many in
                            Image(listOfImages[many])
                        }
                    }
                }
            }
        }
    }
}
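One plausible explanation, offered as a guess rather than a diagnosis: `Image("name")` resolves through `UIImage(named:)`, which keeps decoded images in the system image cache even after the views disappear; that cache is purged under memory pressure, so a flat memory graph is not necessarily a leak. A sketch that bypasses the cache by loading from a file path with `UIImage(contentsOfFile:)` (this assumes the images ship as loose PNG files in the bundle rather than in an asset catalog, and `UncachedImage` is a made-up helper name):

```swift
import SwiftUI
import UIKit

/// Loads a bundle image without going through the shared image cache,
/// so the decoded bitmap can be released once the view goes away.
struct UncachedImage: View {
    let name: String

    var body: some View {
        if let path = Bundle.main.path(forResource: name, ofType: "png"),
           let uiImage = UIImage(contentsOfFile: path) {
            Image(uiImage: uiImage)
        }
    }
}
```

Using `UncachedImage(name: listOfImages[many])` in place of `Image(listOfImages[many])` in the ForEach would show whether the retained 20 MB is the cache or something the view hierarchy actually holds.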
Replies: 1 · Boosts: 0 · Views: 205 · Activity: May ’24
The FOV is inconsistent when taking a photo if stabilization is applied on iPhone 15 Pro Max
Hi, I am developing an iOS camera app, and I noticed an issue related to user privacy. When AVCaptureVideoStabilizationModeStandard is set on an AVCaptureConnection whose session uses the 1920x1080 preset, after using the system API to take a photo, the FOV of the photo is larger than the preview stream and shows more content, especially on the iPhone 15 Pro Max rear camera. I think this inconsistency could cause a user-privacy issue. Can you show me a solution that doesn't require turning the stabilization mode OFF? I tried other devices and the issue doesn't occur there, but on the iPhone 15 Pro Max it is very obvious. Any suggestions are appreciated.
Replies: 1 · Boosts: 0 · Views: 279 · Activity: May ’24
Why Does CameraPicker Require Authorization While ImagePicker and PhotoPicker Do Not?
Why does using CameraPicker require user authorization through a pop-up, while ImagePicker and PhotoPicker don't require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not? Additionally, I thought that by configuring the picker I would theoretically not need any permissions. If permissions are still required, wouldn't it make more sense to directly request camera permissions and use the native camera functionality? What, then, are the advantages of using the picker?
Replies: 0 · Boosts: 0 · Views: 296 · Activity: Apr ’24
Problems importing iPhone media into the macOS Photos app via USB cable
With a USB-cable connection (no cloud), when importing from updated iPhones (11 Pro Max, 12 mini, and 13, with their updated iOSes) into the Photos app on updated macOSes (Ventura v13.x, Big Sur v11.x, and Mojave v10.14.x), I noticed the imports show already-imported media while missing brand-new media. Others and I have noticed this problem across multiple Macs and iPhones: https://discussions.apple.com/thread/255565285 and https://talk.tidbits.com/t/does-anyone-have-problems-importing-iphones-medias-into-macos-photos-app/27406/. Thank you for reading and hopefully answering soon. :)
Replies: 0 · Boosts: 0 · Views: 304 · Activity: Apr ’24
Object Detection using Vision performs differently than in Create ML preview
Context: So basically I've trained my model for object detection with 4k+ images. In the preview I can check the prediction for image "A", which detects two labels at 100%, and its bounding boxes look accurate.

The problem itself: However, inside a Swift Playground, when I perform object detection using the same model and the same image, I don't get the same results.

What I expected: That after performing the request and processing the array of VNRecognizedObjectObservation, it would show the very same results that appear in the Create ML preview.

Notes: I import the model into the playground by drag and drop. I trained the images using JPEG format. The test image is rotated so that it looks vertical, using the macOS Finder rotation tool. I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.

Swift Playground code: This is the code I'm using.

import UIKit
import Vision

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}

Additional notes & uncertainties: Not sure if this is relevant, but just in case: I trained the model using pictures I took with my iPhone in 48 MP HEIC format. All photos were in vertical orientation. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to annotate the images with the CVAT tool and then convert them to the Create ML annotation format.

Assumption #1: I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the image, meaning I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2: That also makes me assume the same resizing happens when I make a prediction?
Replies: 0 · Boosts: 0 · Views: 521 · Activity: Mar ’24
Cannot take picture using requestTakePicture
I am currently renovating an application for macOS Sonoma (14.4) which triggers a Canon 60D via USB cable. Unlike what happened before in macOS 10.6, the camera (ICCameraDevice) has a description that contains only 2 capabilities:

{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO

As you can see, ICCameraDeviceCanTakePicture is not present, and so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
Replies: 0 · Boosts: 0 · Views: 351 · Activity: Mar ’24
How do I disable video stabilization in a AVCaptureSession with AVCapturePhotoOutput added?
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and unfortunately the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add an AVCapturePhotoOutput, I cannot turn off image stabilization.

self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}

As a temporary solution, I added only an AVCaptureVideoDataOutput to the session, without adding AVCapturePhotoOutput, and I can capture in 4:3 format in the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function. However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try to get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
Replies: 0 · Boosts: 0 · Views: 434 · Activity: Mar ’24
PHPickerViewController crashing with _PFAssertFailHandler
Hello, we are embedding a PHPickerViewController with UIKit (adding the vc as a child vc, embedding the view, calling didMoveToParent) in our app using the compact mode. We are disabling the following capabilities: .collectionNavigation, .selectionActions, .search. One of our users on iOS 17.2.1 with an iPhone 12 encountered a crash with the following stack trace:

Crashed: com.apple.main-thread
0  libsystem_kernel.dylib 0x9fbc __pthread_kill + 8
1  libsystem_pthread.dylib 0x5680 pthread_kill + 268
2  libsystem_c.dylib 0x75b90 abort + 180
3  PhotoFoundation 0x33b0 -[PFAssertionPolicyCrashReport notifyAssertion:] + 66
4  PhotoFoundation 0x3198 -[PFAssertionPolicyComposite notifyAssertion:] + 160
5  PhotoFoundation 0x374c -[PFAssertionPolicyUnique notifyAssertion:] + 176
6  PhotoFoundation 0x2924 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140
7  PhotoFoundation 0x3da4 _PFAssertFailHandler + 148
8  PhotosUI 0x22050 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356
9  PhotosUI 0x22b74 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52
10 libdispatch.dylib 0x26a8 _dispatch_call_block_and_release + 32
11 libdispatch.dylib 0x4300 _dispatch_client_callout + 20
12 libdispatch.dylib 0x12998 _dispatch_main_queue_drain + 984
13 libdispatch.dylib 0x125b0 _dispatch_main_queue_callback_4CF + 44
14 CoreFoundation 0x3701c __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
15 CoreFoundation 0x33d28 __CFRunLoopRun + 1996
16 CoreFoundation 0x33478 CFRunLoopRunSpecific + 608
17 GraphicsServices 0x34f8 GSEventRunModal + 164
18 UIKitCore 0x22c62c -[UIApplication _run] + 888
19 UIKitCore 0x22bc68 UIApplicationMain + 340
20 WorkAngel 0x8060 main + 20 (main.m:20)
21 ??? 0x1bd62adcc (Missing)

Please share if you have any ideas as to what might have caused this, or what to look at in such a case.
I haven't been able to reproduce this myself unfortunately.
Replies: 1 · Boosts: 0 · Views: 400 · Activity: Feb ’24
Camera intrinsic matrix for single photo capture
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from a AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture. I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to a AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporarily closest to when the actual photo was taken. Is there a better, simpler way for getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
Replies: 0 · Boosts: 0 · Views: 477 · Activity: Feb ’24
Error When Saving Video To Camera Roll
I am working on enabling users to save a video from a post in a social media app to their camera roll. I am trying to use PHPhotoLibrary to perform the task, similarly to how I implemented saving images and GIFs. However, when I try to perform the task with the code as is, I get the following error:

Error Domain=PHPhotosErrorDomain Code=-1 "(null)" The operation couldn’t be completed. (PHPhotosErrorDomain error -1.)

The implementation is as follows:

Button(action: {
    guard let videoURL = URL(string: media.link.absoluteString) else {
        print("Invalid video url.")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        print("Video URL: \(videoURL)")
    }) { (success, error) in
        if let error = error {
            debugPrint(error)
            print(error.localizedDescription)
        } else {
            print("Video saved to camera roll!")
        }
    }
}) {
    Text("Save Video")
    Image(systemName: "square.and.arrow.down")
}

The video URL is successfully fetched dynamically from the post, but there's an issue with storing it locally in the library. What am I missing?
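One likely culprit, offered as a guess: `creationRequestForAssetFromVideo(atFileURL:)` expects a *local file* URL, and passing a remote https URL from a post tends to fail with exactly this opaque PHPhotosErrorDomain -1. A sketch that downloads the video into the app's tmp directory first and then hands the local file to Photos (the `remoteURL` parameter stands in for the post's video link):

```swift
import Photos
import Foundation

/// Downloads a remote video to a local temp file, then saves it to the
/// photo library. creationRequestForAssetFromVideo(atFileURL:) needs a
/// file URL on disk, not a remote https URL.
func saveVideo(from remoteURL: URL) {
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            print("Download failed: \(String(describing: error))")
            return
        }
        // Move the download to a stable path; the task's temp file is
        // deleted as soon as this completion handler returns.
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        try? FileManager.default.moveItem(at: tempURL, to: localURL)

        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
        }) { success, error in
            print(success ? "Video saved to camera roll!"
                          : "Save failed: \(String(describing: error))")
        }
    }
    task.resume()
}
```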
Replies: 0 · Boosts: 0 · Views: 479 · Activity: Feb ’24
PHExternalAssetResource: Unable to issue sandbox extension for file.mov
I'm trying to add a video asset to my app's photo library, via drag/drop from the Photos app. I managed to get the video's URL from the drag, but when I try to create the PHAsset for it I get an error:

PHExternalAssetResource: Unable to issue sandbox extension for /private/var/mobile/Containers/Data/Application/28E04EDD-56C1-405E-8EE0-7842F9082875/tmp/.com.apple.Foundation.NSItemProvider.fXiVzf/IMG_6974.mov

Here's my code to add the asset:

let url = URL(string: videoPath)!
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
}) { saved, error in
    // Error !!
}

Additionally, this check is true in the debugger:

UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(videoPath) == true

Note that adding still images, much in the same way, works fine. And I naturally have photo library permissions enabled. Any idea what I'm missing? I'm seeing the same error on iOS 17.2 and iPadOS 17.2, with Xcode 15.2. Thanks for any tips ☺️
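A pattern that often helps with this class of error, offered as a sketch rather than a confirmed fix: the `.com.apple.Foundation.NSItemProvider` path in the message is a short-lived, sandbox-restricted temp file, and Photos may no longer be able to read it by the time the change block runs. Copying the dragged file into the app's own tmp directory first, and handing that copy to Photos, sidesteps the sandbox extension (the function name and parameter are illustrative):

```swift
import Photos
import Foundation

/// Copies a dragged-in video into the app's own tmp directory before
/// handing it to Photos, since the NSItemProvider temp file may not be
/// readable from the photo library's side.
func addDraggedVideo(at providerURL: URL) {
    let localURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(providerURL.lastPathComponent)
    do {
        try? FileManager.default.removeItem(at: localURL)
        try FileManager.default.copyItem(at: providerURL, to: localURL)
    } catch {
        print("Copy failed: \(error)")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: localURL)
    }) { saved, error in
        // Clean up the temporary copy either way.
        try? FileManager.default.removeItem(at: localURL)
        print(saved ? "Saved" : "Error: \(String(describing: error))")
    }
}
```

Note also that `URL(string: videoPath)!` in the snippet above yields a URL without a `file://` scheme when given a plain path; `URL(fileURLWithPath: videoPath)` is the safer constructor for local files.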
Replies: 3 · Boosts: 0 · Views: 951 · Activity: Feb ’24
Motion not available: iOS 17 Live Photo
Hello. Does anyone have any ideas on how to work with the new iOS 17 Live Photos? I can save the live photo, but I can't set it as wallpaper; I get the error "Motion is not available in iOS 17". There are already applications that allow you to do this, such as VideoToLive and the like. What should I use to implement this in Swift? Most likely the metadata needs to be changed, but I'm not sure.
Replies: 0 · Boosts: 0 · Views: 900 · Activity: Feb ’24
PhotosPicker, how to select additional images later on?
In my app I use PhotosPicker to select images. After selecting the images, the image data is saved in a Core Data entity; this works fine. However, when the user wants to add more images and goes back to PhotosPicker, how can I reference the already-added images and show them as selected? The imageIdentifier is not allowed to be used, so how can I get a reference to the selected images in order to display them as selected in PhotosPicker? Thanks for any hint!
Replies: 1 · Boosts: 0 · Views: 438 · Activity: Feb ’24