Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

76 Posts

Cannot take picture using requestTakePicture
I am currently updating an application for macOS Sonoma (14.4) that triggers a Canon EOS 60D via USB cable. Unlike what happened in macOS 10.6, the camera's ICCameraDevice description now contains only two capabilities:

```
{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO
```

As you can see, ICCameraDeviceCanTakePicture is no longer present, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
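Since the device still reports ICCameraDeviceCanAcceptPTPCommands, one avenue is sending a raw PTP command. Below is a minimal sketch, not a confirmed solution: it builds a standard PTP InitiateCapture command block and sends it with requestSendPTPCommand(_:outData:completion:). The 0x100E opcode and its two zero parameters come from the PTP specification; Canon EOS bodies often require vendor-specific opcodes instead, so verify against the camera's actual PTP implementation. Here `camera` stands for the connected ICCameraDevice.

```swift
import ImageCaptureCore

/// Build a little-endian PTP command block: UInt32 length, UInt16 container
/// type (1 = command block), UInt16 operation code, UInt32 transaction ID,
/// followed by up to five UInt32 parameters.
func ptpCommand(code: UInt16, transactionID: UInt32, parameters: [UInt32] = []) -> Data {
    var data = Data()
    let length = UInt32(12 + parameters.count * 4)
    withUnsafeBytes(of: length.littleEndian) { data.append(contentsOf: $0) }
    withUnsafeBytes(of: UInt16(1).littleEndian) { data.append(contentsOf: $0) }
    withUnsafeBytes(of: code.littleEndian) { data.append(contentsOf: $0) }
    withUnsafeBytes(of: transactionID.littleEndian) { data.append(contentsOf: $0) }
    for parameter in parameters {
        withUnsafeBytes(of: parameter.littleEndian) { data.append(contentsOf: $0) }
    }
    return data
}

// 0x100E = standard PTP InitiateCapture (StorageID = 0, ObjectFormatCode = 0).
// Assumption: the camera honors the standard opcode; Canon models may not.
let command = ptpCommand(code: 0x100E, transactionID: 1, parameters: [0, 0])
camera.requestSendPTPCommand(command, outData: nil) { response, data, error in
    if let error {
        print("PTP command failed: \(error)")
    }
}
```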
Replies: 0 · Boosts: 0 · Views: 385 · Activity: Mar ’24
Object Detection using Vision performs different than in Create ML Preview
Context

So basically I've trained my model for object detection with 4k+ images. In the preview I'm able to check the prediction for image "A", which detects two labels at 100% confidence, and its bounding boxes look accurate.

The problem itself

However, inside the Swift Playground, when I try to perform object detection using the same model and the same image, I don't get the same results.

What I expected

That after performing the request, processing the array of VNRecognizedObjectObservation would show the very same results that appear in the Create ML preview.

Notes:

- I import the model into the playground by drag and drop.
- I trained the images using the JPEG format.
- The test image was rotated to look vertical using the macOS Finder rotation tool.
- I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.

Swift Playground code

This is the code I'm using:

```swift
import UIKit
import Vision

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}
```

Additional notes & uncertainties

Not sure if this is relevant, but just in case: I trained the model on pictures I took with my iPhone in 48 MP HEIC format, all in vertical orientation. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to annotate the images with the CVAT tool and then convert them to the Create ML annotation format.

Assumption #1: I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the input image, meaning I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2: Does the same resizing happen when I make a prediction?
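A common cause of this kind of mismatch is orientation: VNImageRequestHandler(cgImage:) sees only the raw pixels, while UIImage carries its orientation separately, so Vision may be evaluating a rotated image. A minimal sketch of passing the orientation explicitly; the mapping extension is a widely used helper, not part of Vision itself:

```swift
import UIKit
import Vision
import ImageIO

// Map UIImage.Orientation onto CGImagePropertyOrientation so Vision
// interprets the pixels the same way the Create ML preview does.
extension CGImagePropertyOrientation {
    init(_ orientation: UIImage.Orientation) {
        switch orientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

let image = UIImage(named: "TEST_IMAGE.HEIC")!
let requestHandler = VNImageRequestHandler(
    cgImage: image.cgImage!,
    orientation: CGImagePropertyOrientation(image.imageOrientation),
    options: [:]
)
```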
Replies: 0 · Boosts: 0 · Views: 575 · Activity: Mar ’24
Problems importing iPhone media into the macOS Photos app via USB cable
With a USB cable connection (no cloud), importing from updated iPhones (11 Pro Max, 12 mini, and 13, on their current iOS versions) into the Photos app on updated macOS versions (Ventura 13.x, Big Sur 11.x, and Mojave 10.14.x), I noticed that the import view shows media that has already been imported and misses brand-new media. Others and I have noticed this problem across multiple Macs and iPhones: https://discussions.apple.com/thread/255565285 and https://talk.tidbits.com/t/does-anyone-have-problems-importing-iphones-medias-into-macos-photos-app/27406/. Thank you for reading and hopefully answering soon. :)
Replies: 0 · Boosts: 0 · Views: 344 · Activity: Apr ’24
Why Does CameraPicker Require Authorization While ImagePicker and PhotoPicker Do Not?
Why does using CameraPicker require user authorization through a pop-up? Why don't ImagePicker or PhotoPicker require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not? Additionally, I thought that by configuring the picker I would theoretically not need any permissions. If permissions are still required, wouldn't it make more sense to request camera permission directly and use the native camera functionality? What, then, are the advantages of using the picker?
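For comparison, here is a minimal PHPickerViewController setup; it runs out of process and only hands the app what the user explicitly selects, which is why it needs no photo-library authorization prompt, whereas live camera capture always requires the user's camera consent. A sketch, assuming it is presented from a UIViewController that adopts PHPickerViewControllerDelegate:

```swift
import PhotosUI

// Configure the out-of-process picker: no photo-library permission needed,
// because the app never sees anything the user didn't pick.
var configuration = PHPickerConfiguration()
configuration.filter = .images
configuration.selectionLimit = 1

let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self // self: a PHPickerViewControllerDelegate
present(picker, animated: true)
```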
Replies: 0 · Boosts: 0 · Views: 355 · Activity: Apr ’24
The FOV is inconsistent when taking a photo with stabilization applied on iPhone 15 Pro Max
Hi, I am developing an iOS camera app, and I noticed an issue related to user privacy. When AVCaptureVideoStabilizationModeStandard is set on an AVCaptureConnection whose session preset is 1920x1080, a photo taken with the system API has a larger FOV than the preview stream and shows more content, especially on the iPhone 15 Pro Max rear camera. I think this inconsistency could cause a privacy issue for users. Can you show me a solution that doesn't require turning the stabilization mode off? I tried other devices and the issue is negligible there, but on the iPhone 15 Pro Max it is very obvious. Any suggestions are appreciated.
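For context, a minimal sketch of the configuration being described; the session and output names are placeholders. Standard stabilization crops the stabilized video buffers slightly, which is consistent with the preview showing a narrower field of view than the still:

```swift
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .hd1920x1080
// ... add the camera input, a video data output, and a photo output ...

if let connection = videoOutput.connection(with: .video), // videoOutput: placeholder
   connection.isVideoStabilizationSupported {
    // Stabilization needs margin pixels to shift the image around, so the
    // stabilized preview is a crop of what the photo pipeline captures.
    connection.preferredVideoStabilizationMode = .standard
}
```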
Replies: 1 · Boosts: 0 · Views: 336 · Activity: May ’24
Images retain memory usage
This is a very simple view with only one button to start with. After you click the button, a list of images appears. The issue is that when I click the new button to hide the images, memory usage stays the same as when all the images first appeared. As you can see from the images below, the app starts at 18.5 MB, jumps to 38.5 MB when the images are shown, and remains there forever. I have tried various ways to reduce the memory usage, but I just can't find a solution that works. Does anyone know how to solve this? Thank you!

```swift
import SwiftUI

struct ContentView: View {
    @State private var imagesBeingShown = false
    @State var listOfImages = ["ImageOne", "ImageTwo", "ImageThree", "ImageFour",
                               "ImageFive", "ImageSix", "ImageSeven", "ImageEight",
                               "ImageNine", "ImageTen", "ImageEleven", "ImageTwelve",
                               "ImageThirteen", "ImageFourteen", "ImageFifteen",
                               "ImageSixteen", "ImageSeventeen", "ImageEighteen"]

    var body: some View {
        if !imagesBeingShown {
            VStack {
                Button(action: {
                    imagesBeingShown = true
                }, label: {
                    Text("Turn True")
                })
            }
            .padding()
        } else {
            VStack {
                Button(action: {
                    imagesBeingShown = false
                }, label: {
                    Text("Turn false")
                })
                ScrollView {
                    LazyVStack {
                        ForEach(0..<listOfImages.count, id: \.self) { many in
                            Image(listOfImages[many])
                        }
                    }
                }
            }
        }
    }
}
```
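One likely factor: SwiftUI's Image(_:) initializer by name resolves through the same system image cache as UIImage(named:), which deliberately keeps decoded bitmaps around and releases them only under memory pressure, so the plateau may be cache retention rather than a leak. A hedged alternative that bypasses that cache, assuming the images ship as loose PNG files in the bundle rather than asset-catalog entries:

```swift
import SwiftUI
import UIKit

// UIImage(contentsOfFile:) skips the shared image cache, so the decoded
// bitmap becomes reclaimable once the view holding it disappears.
func uncachedImage(named name: String) -> Image? {
    guard let path = Bundle.main.path(forResource: name, ofType: "png"),
          let uiImage = UIImage(contentsOfFile: path) else { return nil }
    return Image(uiImage: uiImage)
}
```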
Replies: 1 · Boosts: 0 · Views: 281 · Activity: May ’24
Not able to view custom stereo/spatial images in visionOS 2
Hello, I've been creating my own stereoscopic images on my laptop and AirDropping them to the Vision Pro to view them in 3D. My custom images have a left_eye.png and a right_eye.png that are combined into one HEIF image (as is done natively on the headset). In the visionOS 1.x Photos app I was able to see my custom images in 3D, but in visionOS 2 the device no longer recognizes that my images should be shown stereoscopically and instead shows them in 2D. I see that it gives me the option to use the AI tool to convert 2D into 3D, but the original file I AirDropped to myself (Mac --> AVP Photos album) already has a left and right image pair. Is this something that can be fixed?
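For reference, stereo pairs are typically tagged when writing the HEIC via ImageIO's image-group properties. The sketch below reflects my understanding of those keys; double-check them against the current CGImageProperties.h headers, and note that visionOS 2 appears stricter than 1.x about accompanying spatial metadata (baseline, FOV), which may be why bare stereo pairs fall back to 2D:

```swift
import ImageIO
import UniformTypeIdentifiers

// A sketch: write two CGImages into one HEIC, tagged as a stereo pair.
func writeStereoHEIC(left: CGImage, right: CGImage, to url: URL) {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 2, nil) else { return }

    let leftProperties: [CFString: Any] = [
        kCGImagePropertyGroups: [
            kCGImagePropertyGroupIndex: 0,
            kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
            kCGImagePropertyGroupImageIsLeftImage: true,
        ]
    ]
    let rightProperties: [CFString: Any] = [
        kCGImagePropertyGroups: [
            kCGImagePropertyGroupIndex: 0,
            kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
            kCGImagePropertyGroupImageIsRightImage: true,
        ]
    ]
    CGImageDestinationAddImage(destination, left, leftProperties as CFDictionary)
    CGImageDestinationAddImage(destination, right, rightProperties as CFDictionary)
    CGImageDestinationFinalize(destination)
}
```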
Replies: 1 · Boosts: 0 · Views: 249 · Activity: Jun ’24
Add 30 frames per second in AVAssetWriter
Hello, I have converted UIImage to CVPixelBuffer, and I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to append 30 CVPixelBuffers per second, because for the video to work on social media it must be 30 frames per second. The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error along the lines of "Operation cannot be completed". Could you give me an example of a loop that appends 30 CVPixelBuffers per second to a video as it is being written? My attempt:

```swift
while true {
    if videoInput.isReadyForMoreMediaData {
        break
    }
    if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
        adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
    }
}
```

I await your response.
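Two bugs stand out in the loop above: it breaks out as soon as the input is ready, so nothing is ever appended, and every frame reuses the same presentation time of 1/30 s, while AVAssetWriter requires strictly increasing timestamps; violating that is a classic source of "Operation cannot be completed" failures. A hedged sketch of a corrected loop, reusing the poster's videoInput, videoProvider, and adaptor names and assuming a totalFrames count and an assetWriter reference:

```swift
import AVFoundation

let fps: Int32 = 30
var frameIndex: Int64 = 0

while frameIndex < totalFrames { // totalFrames: assumed frame count
    guard videoInput.isReadyForMoreMediaData else {
        Thread.sleep(forTimeInterval: 0.01) // back off briefly instead of spinning
        continue
    }
    guard let buffer = videoProvider.getNextFrame() else { break }
    // Strictly increasing timestamps: 0/30, 1/30, 2/30, ...
    let time = CMTime(value: frameIndex, timescale: fps)
    if !adaptor.append(buffer, withPresentationTime: time) {
        // Once the writer enters a failed state, every further append errors out.
        print("Append failed: \(String(describing: assetWriter.error))")
        break
    }
    frameIndex += 1
}
```

Note that for a frame held on screen for 2 seconds, it is also valid to append the buffer once and give the next frame a timestamp 2 seconds later; a constant 30 fps stream of duplicates is only needed if the target platform genuinely requires it.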
Replies: 0 · Boosts: 0 · Views: 250 · Activity: Jun ’24
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, or .lzbitmap, not even bz2, but I realized that they are well suited to compression by video codecs, since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce the storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were insane (a 5000%+ increase), on top of fatal errors when using it from async code.
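Before blaming the codec entirely, it's worth ruling out a retrieval bug: AVAssetImageGenerator snaps to the nearest keyframe unless its time tolerances are zeroed, so requests can come back decoded from the wrong position, which looks like severe artifacting. That won't make a lossy HEVC encode bit-exact (as far as I know, VideoToolbox exposes no lossless HEVC mode), but it is a cheap check. A sketch, with movURL standing in for the archive written earlier:

```swift
import AVFoundation

let asset = AVURLAsset(url: movURL) // movURL: placeholder for the archive .mov
let generator = AVAssetImageGenerator(asset: asset)

// Without zero tolerances, the generator may return the nearest keyframe
// rather than the exact frame requested.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero
generator.appliesPreferredTrackTransform = true

// e.g. retrieve the fourth stored image, assuming one image per 1/30 s slot.
let time = CMTime(value: 3, timescale: 30)
let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
```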
Replies: 18 · Boosts: 0 · Views: 595 · Activity: 3w
Add to each picture's info in the Photos app the name of the user-created album it belongs to
Well, I collect a lot of memes from the Internet and save them on my iPhone, where I name and classify them into albums. But when I tap a photo in "All Photos", its info does not show which album I added it to, which is very frustrating. With this feature I could easily manage the memes that I did not correctly add to the corresponding album.
Replies: 1 · Boosts: 0 · Views: 214 · Activity: 3w
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a Live Photo. I am utilizing the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to download the Live Photo to the gallery and generate the Live Photo. Here are the relevant code and the errors I am encountering.

Console prints:

```
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo
```

I have verified that the app has the necessary permissions to access the Photo Library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files.

```swift
struct WallpaperDetailView: View {
    var wallpaper: Wallpaper

    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                        ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL,
           let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
```
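For debugging, it may help to bypass the helper class and save the pair directly with PhotosKit; PHAssetCreationRequest fails with a descriptive error when the image and video are not a valid Live Photo pair (the two resources must carry matching content identifiers, which the GitHub LivePhoto class is responsible for writing into the files). A sketch:

```swift
import Photos

func savePair(imageURL: URL, videoURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, fileURL: imageURL, options: nil)
        // The video must embed the same content identifier as the photo's
        // Apple maker note for the pair to be accepted as a Live Photo.
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
    }) { success, error in
        print(success ? "Saved Live Photo" : "Save failed: \(String(describing: error))")
    }
}
```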
Replies: 0 · Boosts: 0 · Views: 125 · Activity: 1w
Really High Energy Use
I'm developing an app where users can select items to add to a screen, similar to creating a Canva presentation or choosing blocks in Minecraft. However, I'm encountering an issue with energy usage. When users click the arrows to browse different items, the energy use spikes significantly. Although it returns to normal after a while, continuous clicking causes the energy use to skyrocket. The images I'm using are 500x500 pixels. Ideally, I would like to avoid caching all the images, as the app might have up to 500 items and caching them all would consume too much memory. I have tried numerous ways to avoid this, but I just can't seem to make it work. Would anyone know how to avoid this problem? I have included a picture of the energy use when the app has just opened, one after about 10 seconds of continuously clicking an arrow to see more items, and a picture of how the app looks.

```swift
import SwiftUI
import UIKit

struct ContentView: View {
    struct babyBackground {
        var littleImage = ""
    }

    @State var firstSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape")
    ]

    @State var secondSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle")
    ]

    @State var thirdSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
    ]

    let columns: [GridItem] = Array(repeating: .init(.flexible()), count: 4)

    func createBackgroundGridView(for backgrounds: [babyBackground], columns: [GridItem]) -> some View {
        LazyVGrid(columns: columns, spacing: 10) {
            ForEach(0..<backgrounds.count, id: \.self) { index in
                Button(action: {
                }, label: {
                    if let path = Bundle.main.path(forResource: backgrounds[index].littleImage, ofType: "png"),
                       let uiImage = UIImage(contentsOfFile: path) {
                        Image(uiImage: uiImage)
                            .resizable()
                            .frame(width: 126, height: 96)
                    }
                })
            }
        }
        .padding()
    }

    @State var indexOn = 0

    var body: some View {
        HStack {
            Button(action: {
                indexOn = (indexOn == 0) ? 2 : indexOn - 1
            }) {
                Label("", systemImage: "arrowtriangle.left.fill")
                    .font(.system(size: 50))
            }
            Spacer()
            ScrollView {
                switch indexOn {
                case 0: createBackgroundGridView(for: firstSet, columns: columns)
                case 1: createBackgroundGridView(for: secondSet, columns: columns)
                case 2: createBackgroundGridView(for: thirdSet, columns: columns)
                case 3: createBackgroundGridView(for: thirdSet, columns: columns)
                default: createBackgroundGridView(for: firstSet, columns: columns)
                }
            }
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            Spacer()
            Button(action: {
                indexOn = (indexOn == 2) ? 0 : indexOn + 1
            }) {
                Label("", systemImage: "arrowtriangle.right.fill")
                    .font(.system(size: 50))
            }
        }
    }
}
```

Energy use when the app starts:
Energy use after clicking for about 10 seconds:
App UI:
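Each button press re-reads and re-decodes every visible PNG from disk, since UIImage(contentsOfFile:) is uncached; that repeated decoding is a plausible driver of the CPU/energy spikes. A middle ground between "cache everything" and "decode every time" is an NSCache of decoded, downsampled thumbnails, which evicts automatically under memory pressure. A hedged sketch using ImageIO downsampling:

```swift
import UIKit
import ImageIO

final class ThumbnailCache {
    static let shared = ThumbnailCache()
    private let cache = NSCache<NSString, UIImage>()

    /// Decode at most `maxPixel` pixels per side; NSCache evicts entries
    /// under memory pressure, so this won't pin all 500 images at once.
    func thumbnail(named name: String, maxPixel: CGFloat = 256) -> UIImage? {
        if let cached = cache.object(forKey: name as NSString) { return cached }
        guard let url = Bundle.main.url(forResource: name, withExtension: "png"),
              let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixel,
            kCGImageSourceCreateThumbnailWithTransform: true,
        ]
        guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
        else { return nil }
        let image = UIImage(cgImage: cgImage)
        cache.setObject(image, forKey: name as NSString)
        return image
    }
}
```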
Replies: 0 · Boosts: 1 · Views: 91 · Activity: 1w