Photos and Imaging

Integrate still images and other forms of photography into your apps.

70 Posts

How to edit gain map image to preserve HDR in Live Photo
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and the gain map image, like so:

```swift
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
```

I also support editing the still photo in Live Photos. To do this, you create a PHLivePhotoEditingContext, set the frameProcessor block (which gives you a CIImage that I edit when the frame.type is .photo), then create a PHContentEditingOutput and call saveLivePhoto. I'm not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don't see any difference between these images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
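
For reference, a minimal sketch of the Live Photo editing path described above (assuming `input` is the PHContentEditingInput and `edit(_:)` stands for the same hypothetical SDR edit applied elsewhere; note that only a single CIImage is exposed per frame, with no gain map in sight, which is the crux of the question):

```swift
import Photos
import CoreImage

guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: input) else { return }
context.frameProcessor = { frame, _ in
    guard frame.type == .photo else { return frame.image }
    return edit(frame.image) // hypothetical SDR edit; no .auxiliaryHDRGainMap equivalent appears here
}

let output = PHContentEditingOutput(contentEditingInput: input)
context.saveLivePhoto(to: output) { success, error in
    // hand `output` back to the Photos library via performChanges
}
```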

0 replies · 0 boosts · 99 views · 4d

LockedCameraCaptureExtension cannot get access to the photo library
I've requested authorization in my main app:

```swift
PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in }
```

I added the privacy description in both the main app and the extension. But no matter whether the device is locked or unlocked, when I call

```swift
let fetchResult = PHAsset.fetchAssets(with: .image, options: nil)
let count = fetchResult.count
```

the count is always zero, even after a new photo is saved to the album in the same session.

1 reply · 0 boosts · 111 views · 3d

writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'.
I have an app that edits photos in your library. When I call

```swift
try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)
```

the following is logged to the console:

writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.

What does that mean, and how can I resolve it?

Xcode Version 16.0 (16A242d)
iOS 18.1 (22B82)

0 replies · 2 boosts · 184 views · 6d

Core Image Tiling and ROI
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down, etc.

The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:

```objc
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
```

The kernel is very simple, sampling from the X coordinate equal to the src width minus the current coordinate:

```metal
float4 backwards(sampler image, destination dest) {
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
```

When this runs on an image that is wider than 4096, tiling happens, with the result that destRect is not the entire image, and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large. All of this makes sense to me.

What I'd like to know is whether there is a way to handle this filter's requirement of sampling from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits, but I wanted to see if there's anything we're missing.

I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that mirror either half of the source image or one quadrant of it. In those cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source being mirrored. We have not attempted that yet. Any thoughts/input appreciated, thanks!
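
One avenue the question leaves open (a sketch under assumptions, written in Swift rather than the Objective-C above): for the pure horizontal mirror, the source region a destination tile needs is exactly that tile reflected about the image's vertical center line, so the ROI can stay tile-sized instead of covering the whole extent:

```swift
import CoreImage

// mirrorsKernel and src as in the Objective-C snippet above.
let srcExtent = src.extent
let output = mirrorsKernel.apply(
    extent: srcExtent,
    roiCallback: { _, destRect in
        // Reflect destRect about the vertical center line of the source:
        // x' = minX + maxX - x, applied to the rect's horizontal span.
        let mirroredMinX = srcExtent.minX + srcExtent.maxX - destRect.maxX
        let roi = CGRect(x: mirroredMinX, y: destRect.minY,
                         width: destRect.width, height: destRect.height)
        return roi.insetBy(dx: -1, dy: -1)
    },
    arguments: [src])
```

The same idea would extend to the half- and quadrant-mirroring kernels mentioned at the end of the post, with the reflection applied per region.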

3 replies · 0 boosts · 185 views · 1w

Green Line Appears in Thumbnails in Photos but Not in Actual Photo
Hello, I've encountered an issue where a green line appears only in the thumbnail view of an image in the iPhone photo gallery. The green line is not present in the actual image itself. Here are some additional details:

• The issue occurs only when saving the image as a JPEG. When saved as PNG, the green line does not appear.
• The green line also shows up when using the PHAsset method requestImage(for:targetSize:). Depending on the targetSize, the resulting image may contain the green line.
• Interestingly, this issue does not appear on an iPhone Xs Max running iOS 15.2.1. However, the green line does appear on an iPhone 15 Pro Max running iOS 18.0.1 when viewing the same image.

I have attached the problematic image for your reference. The following images are screen captures that show the issue occurring on my iPhone:

iPhone 15 Pro Max (iOS 18.0.1)
iPhone Xs Max (iOS 15.2.1)

Could this be related to a display or gallery app issue on iOS? Any advice or solutions would be greatly appreciated. Thank you!

1 reply · 0 boosts · 181 views · 3d

Issues with ProRAW MAX(48) and stock camera app
Hello developer community. I recently purchased my new iPhone 16 Pro Max; it is a premium device with great quality overall. However, I am having big trouble shooting in ProRAW MAX (48 MP mode) with the native camera. To be clear, the problem I will describe does not happen in third-party apps such as ProCam; only with the native camera.

When I use ProRAW Max and take a photo, then view it in the gallery, the image can't load and render properly. When I zoom the image all the way in, I can see pixelated portions, defects, very low resolution, and excessive denoising. For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos with ProCam (same settings and configuration) on the 16 Pro Max: I take the photo, open the gallery, and see full detail when zoomed to 100%.

I tried formatting the phone and reinstalling the software via my Mac. I also looked through some forums to see whether anyone has the same issue; the information available so far is very sparse. I'm in contact with Apple support in my country (Portugal), and they escalated this problem to the engineers (that's what I've been told). They ran all the tests remotely (via Analytics & Improvements) and told me that my phone is perfect on the hardware side. I will wait to be contacted again in the coming days. I'm on iOS 18.0.1 (the latest software available at this time).

I tried multiple 16 Pro Max units from friends, family, and stores (roughly 10 units), and they all showed the exact same problem. I'm a professional photographer, so I find this frustrating and unacceptable. I would appreciate any additional suggestions or information. Thank you!

(Cannot add photos or files because they are bigger than 5 MB.)

1 reply · 0 boosts · 258 views · 2w

Get View Full HDR state from Settings > Photos to properly set preferredImageDynamicRange in editing extension
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. But you can turn off the option to view HDR photos in their full dynamic range in Settings > Photos ("View Full HDR"). When you do that, open a photo, and tap the edit button, the photo does not appear in the full range, as expected; but when you select my app from More > Extensions, it does appear in the full dynamic range, unexpectedly. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to read that setting from my PHContentEditingController.
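
The gap, in code form (a sketch; `isViewFullHDREnabled` is hypothetical, since no public API that exposes this setting to an extension is apparent):

```swift
// Hypothetical flag: the Settings > Photos > View Full HDR state is the
// piece PHContentEditingController does not seem to expose.
imageView.preferredImageDynamicRange = isViewFullHDREnabled ? .high : .standard
```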

1 reply · 0 boosts · 272 views · 3w

What format for writeHEIFRepresentation preserves HDR?
In the WWDC24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:

SDR + HDR:

```swift
writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
```

SDR + Gain:

```swift
writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
```

This won't compile because the format argument is missing. What format should be used? In the WWDC23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use. If relevant, I'm editing photos from the user's photo library, so the image was probably taken on an iPhone, but perhaps not. Thanks!
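
A hedged reading of the two sessions combined (an assumption, not a confirmed answer): the image passed in is the SDR base, so an 8-bit format plus the gain map option may suffice, while the .hdrImage variant would presumably want one of the deeper formats from the WWDC23 list:

```swift
import CoreImage

let context = CIContext()
let p3Space = CGColorSpace(name: CGColorSpace.displayP3)!

// SDR base + gain map: .RGBA8 for the SDR representation is an assumption,
// consistent with other gain-map saving examples on this tag.
try context.writeHEIFRepresentation(of: sdrImage,
                                    to: url,
                                    format: .RGBA8,
                                    colorSpace: p3Space,
                                    options: [.hdrGainMapImage: gainImage])
```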

1 reply · 0 boosts · 198 views · 1w

'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
```objc
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:videoOptions
                                          resultHandler:^(AVAsset *_Nullable avAsset, AVAudioMix *_Nullable audioMix, NSDictionary *_Nullable info) {
    if ([avAsset isKindOfClass:[AVURLAsset class]]) {
        AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
        NSURL *videoURL = urlAsset.URL;
        mediaInfo[@"path"] = videoURL.absoluteString;
    } else {
        // Failed to get video asset
        completion(nil);
    }
}];
```

Before iOS 18, I was able to access the AVAsset video using the method above with the URL, but starting from iOS 18 the following error appears: 'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
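
One hedged workaround sketch (not a confirmed fix): rather than persisting the sandboxed file URL for later playback, which appears to be what breaks here, hand the AVAsset itself to the player while the request's result is still valid. `player` and `videoOptions` are assumed from the surrounding code:

```swift
import Photos
import AVFoundation

PHImageManager.default().requestAVAsset(forVideo: asset, options: videoOptions) { avAsset, _, _ in
    guard let avAsset else { return }
    // Build the player item from the asset directly instead of a stored URL string.
    let item = AVPlayerItem(asset: avAsset)
    DispatchQueue.main.async {
        player.replaceCurrentItem(with: item)
    }
}
```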

2 replies · 0 boosts · 302 views · 3w

How to extract a stereo image pair from spatial photos generated by visionOS 2.0
Hi,

My app allows users to share and view spatial photos. For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs. For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.

However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviors in my app, while the same photos can be viewed correctly in the system Photos app:

• Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
• Even if the image pair have the same size, when viewed in my app there are artifacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.

Google Drive link here: https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa

I know that the Quick Look preview application can now support viewing spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons. Below is the code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo. Happy to submit a code-level support request if more information is needed.

```swift
// the data is from a photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1) else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)
        guard leftImage.size == rightImage.size else { return nil }

        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to the standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
```
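
A hedged alternative for the extraction step (a sketch, assuming the ImageIO stereo-pair group properties introduced alongside spatial photos): instead of assuming indices 0 and 1 are left and right, ask the container which images form the stereo pair:

```swift
import ImageIO

func stereoPairIndices(in source: CGImageSource) -> (left: Int, right: Int)? {
    guard let props = CGImageSourceCopyProperties(source, nil) as? [String: Any],
          let groups = props[kCGImagePropertyGroups as String] as? [[String: Any]] else {
        return nil
    }
    for group in groups {
        // Look for the group describing a stereo pair and read its indices.
        if let type = group[kCGImagePropertyGroupType as String] as? String,
           type == (kCGImagePropertyGroupTypeStereoPair as String),
           let left = group[kCGImagePropertyGroupImageIndexLeft as String] as? Int,
           let right = group[kCGImagePropertyGroupImageIndexRight as String] as? Int {
            return (left, right)
        }
    }
    return nil
}
```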

1 reply · 0 boosts · 380 views · 4w

UIKit in SwiftUI memory leak while displaying images
I'm using UIKit to display a long list of large images inside a SwiftUI ScrollView and LazyHStack using UIViewControllerRepresentable. When an image is loaded, I'm using SDWebImage to load the image from disk. As the user navigates through the list and continues to load more images, more memory is used and is never cleared, even as the images are unloaded by the LazyHStack. Eventually, the app reaches the memory limit and crashes. This issue persists if I load the image with UIImage(contentsOfFile: ...) instead of SDWebImage. How can I free the memory used by UIImage when the view is removed?

```swift
ScrollView(.horizontal, showsIndicators: false) {
    LazyHStack(spacing: 16) {
        ForEach(allItems) { item in
            TestImageDisplayRepresentable(item: item)
                .frame(width: geometry.size.width, height: geometry.size.height)
                .id(item.id)
        }
    }
    .scrollTargetLayout()
}
```

```swift
import UIKit
import SwiftUI
import SDWebImage

class TestImageDisplay: UIViewController {
    var item: TestItem

    init(item: TestItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
        imageView.center = view.center
        view.addSubview(imageView)
        imageView.sd_setImage(with: item.imageURL, placeholder: nil)
    }
}

struct TestImageDisplayRepresentable: UIViewControllerRepresentable {
    var item: TestItem

    func makeUIViewController(context: Context) -> TestImageDisplay {
        return TestImageDisplay(item: item)
    }

    func updateUIViewController(_ uiViewController: TestImageDisplay, context: Context) {
        uiViewController.item = item
    }
}
```
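
One thing worth ruling out (a hedged aside, not a diagnosis): SDWebImage keeps decoded images in an in-memory cache by design, which can look like a leak as a long list scrolls. Capping or clearing that cache (SDWebImage 5.x API) separates expected cache growth from a true leak:

```swift
import SDWebImage

// Cap the decoded-image memory cache (cost is in bytes of decoded pixels).
SDImageCache.shared.config.maxMemoryCost = 50 * 1024 * 1024 // ~50 MB, an arbitrary example

// Or drop the memory cache entirely, e.g. on a memory warning:
SDImageCache.shared.clearMemory()
```

Since the problem persists with UIImage(contentsOfFile:), something may also be retaining the view controllers or their image views; Instruments' Allocations/Leaks would show who holds them.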

0 replies · 0 boosts · 248 views · Sep ’24

Issues with @preconcurrency and AVFoundation in Swift 6 on Xcode 16.1/iOS 18 (Worked fine in Swift 5)
I'm working on a project in Xcode 16.1, using Swift 6 with iOS 18. My code works fine in Swift 5, but I'm running into concurrency issues when upgrading to Swift 6, particularly with the @preconcurrency attribute and AVFoundation. Here is the relevant part of my code:

```swift
import SwiftUI
@preconcurrency import AVFoundation

struct OverlayButtonBar: View {
    ...
    let audioTracks = await loadTracks(asset: asset, mediaType: .audio)
    ...

    // Tracks are extracted before crossing concurrency boundaries
    private func loadTracks(asset: AVAsset, mediaType: AVMediaType) async -> [AVAssetTrack] {
        do {
            return try await asset.load(.tracks).filter { $0.mediaType == mediaType }
        } catch {
            print("Error loading tracks: \(error)")
            return []
        }
    }
}
```

Issues: When using @preconcurrency, I get the warning

@preconcurrency attribute on module AVFoundation has no effect

and the fix suggested by Xcode is to remove @preconcurrency. But if I remove @preconcurrency, I get both a warning and an error:

Warning: Add '@preconcurrency' to treat 'Sendable'-related errors from module 'AVFoundation' as warnings.
Error: Non-sendable type [AVAssetTrack] returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary. (Class AVAssetTrack does not conform to the Sendable protocol (AVFoundation.AVAssetTrack)).

The error appears when I attempt to directly access the non-Sendable AVAssetTrack values in an async context:

```swift
let audioTracks = await loadTracks(asset: asset, mediaType: .audio)
```

How can I resolve this issue while staying compliant with Swift 6 concurrency rules? Is there a recommended approach to handling non-Sendable types like AVAssetTrack in concurrency contexts? I appreciate any guidance on making this work in Swift 6, especially considering it worked fine in Swift 5. Thanks in advance!
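
A common pattern for this class of error (a sketch, not the only answer): never return the non-Sendable AVAssetTrack values across an actor boundary; extract the Sendable facts you need while still inside the loading function. `TrackInfo` is a hypothetical type:

```swift
import AVFoundation

// Hypothetical Sendable snapshot of the track data the UI actually needs.
struct TrackInfo: Sendable {
    let trackID: CMPersistentTrackID
    let mediaTypeRawValue: String
}

private func loadTrackInfo(asset: AVAsset, mediaType: AVMediaType) async throws -> [TrackInfo] {
    // AVAssetTrack values never leave this function, so nothing
    // non-Sendable crosses an actor boundary.
    let tracks = try await asset.load(.tracks)
    return tracks
        .filter { $0.mediaType == mediaType }
        .map { TrackInfo(trackID: $0.trackID, mediaTypeRawValue: $0.mediaType.rawValue) }
}
```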

1 reply · 0 boosts · 635 views · Sep ’24

Handling Main Actor-Isolated Values with `PHPhotoLibrary` in Swift 6
Hello, I'm encountering an issue with the PHPhotoLibrary API in Swift 6 and iOS 18. The code I'm using worked fine in Swift 5, but I'm now seeing the following error:

Sending main actor-isolated value of type '() -> Void' with later accesses to nonisolated context risks causing data races

Here is the problematic code:

```swift
Button("Save to Camera Roll") {
    saveToCameraRoll()
}

...

private func saveToCameraRoll() {
    guard let overlayFileURL = mediaManager.getOverlayURL() else { return }
    Task {
        do {
            let status = await PHPhotoLibrary.requestAuthorization(for: .addOnly)
            guard status == .authorized else { return }
            try await PHPhotoLibrary.shared().performChanges({
                if let creationRequest = PHAssetCreationRequest.creationRequestForAssetFromVideo(atFileURL: overlayFileURL) {
                    creationRequest.creationDate = Date()
                }
            })
            await MainActor.run {
                saveSuccessMessage = "Video saved to Camera Roll successfully"
            }
        } catch {
            print("Error saving video to Camera Roll: \(error.localizedDescription)")
        }
    }
}
```

Problem description: The error message suggests that a main actor-isolated value of type () -> Void is being accessed in a nonisolated context, potentially leading to data races. The issue arises specifically at the call to PHPhotoLibrary.shared().performChanges.

Questions:

• How can I address the data race issues related to main actor isolation when using PHPhotoLibrary.shared().performChanges?
• What changes, if any, are required to adapt this code for Swift 6 and iOS 18 while maintaining thread safety and actor isolation?
• Are there any recommended practices for managing main actor-isolated values in asynchronous operations to avoid data races?

I appreciate any pointers or suggestions to resolve this issue effectively. Thank you!
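
One direction to try (a hedged sketch, not a verified fix): keep the change block free of main-actor state so the compiler doesn't infer it as main-actor isolated; capturing only the Sendable URL, and moving the work into a nonisolated function, is the usual shape:

```swift
import Photos

// Hypothetical: a nonisolated helper, so the performChanges closure is
// not formed in a main-actor context.
nonisolated func saveVideo(at url: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        PHAssetCreationRequest.creationRequestForAssetFromVideo(atFileURL: url)?
            .creationDate = Date()
    }
}
```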

1 reply · 0 boosts · 532 views · Sep ’24

Crashes in PHPickerViewController PFAssertionPolicyAbort
Hello! I'm getting crash reports in PHPickerViewController for iOS 17 users only. Can someone point me in the right direction as to what could be the root cause in my case, since it's related to PHPickerViewController?

```
Thread 0 name:
Thread 0 Crashed:
0   libsystem_kernel.dylib   0x00000001e7e9342c __pthread_kill + 8 (:-1)
1   libsystem_pthread.dylib  0x00000001fbc32c0c pthread_kill + 268 (pthread.c:1721)
2   libsystem_c.dylib        0x00000001a6d36ba0 abort + 180 (abort.c:118)
3   PhotoFoundation          0x00000001d2420280 -[PFAssertionPolicyAbort notifyAssertion:] + 68 (PFAssert.m:432)
4   PhotoFoundation          0x00000001d2420068 -[PFAssertionPolicyComposite notifyAssertion:] + 160 (PFAssert.m:259)
5   PhotoFoundation          0x00000001d242061c -[PFAssertionPolicyUnique notifyAssertion:] + 176 (PFAssert.m:292)
6   PhotoFoundation          0x00000001d241f7f4 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140 (PFAssert.m:169)
7   PhotoFoundation          0x00000001d2420c74 _PFAssertFailHandler + 148 (PFAssert.m:127)
8   PhotosUI                 0x0000000216b59e30 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356 (PHPicker.m:1502)
9   PhotosUI                 0x0000000216b5a954 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52 (PHPicker.m:1454)
```

Crash report: 2024-09-05_18-27-56.7526_+0500-a953eaee085338a690ac1604a78de86e3e49d182.crash

2 replies · 0 boosts · 281 views · 2w

Can I setup an AVCaptureSession exclusively for use with the new Camera Control APIs?
I have a third-party app for controlling Sony mirrorless cameras over WiFi. I'm really excited to integrate the new camera controls on the iPhone 16 Pro with the app. I've found the documentation around this, and it seems I need an AVCaptureSession set up in order to utilize them:

```swift
func configureControls(_ controls: [AVCaptureControl]) {
    // Verify the host system supports controls; otherwise, return early.
    guard captureSession.supportsControls else { return }

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Iterate over the passed-in controls.
    for control in controls {
        // Add the control to the capture session if possible.
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}
```

Can I just use a freshly initialized capture session for this, or does it need to be configured in any other way? Are there any downsides to creating a session (CPU usage, etc.) that I may experience from this?

Also, the scope of the controls is quite narrow. Something like shutter speed or aperture has quite a number of possible values, requires custom labels, and uses a non-linear scale (so AVCaptureIndexPicker seems to be the way to go). Will that picker support enough values to represent something like shutter speed or aperture? Is there any chance we may get non-linear float-based controls in the future, which may feel more natural from a UX perspective than index-based ones?

Apologies, lots of edits going on here as I think about this more. Is there any way (or would any way be considered) of putting these controls in a disabled state, like with other UI elements in iOS? There are times (during capture, for example) when a lot of these settings are unavailable to be changed by the user (as communicated by the Sony camera), and managing a queue of changes while the function is unavailable to be set is going to be a challenge. If there won't be, how will the controls behave if they are removed while being interacted with? Presumably they will disappear entirely from the UI?

Thanks!
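
For what it's worth, a minimal-session sketch under stated assumptions: controls belong to a session, and it is plausible the session also needs a camera input before controls do anything useful; that requirement is an assumption to verify, not behavior confirmed here:

```swift
import AVFoundation

// A bare session plus a camera input; whether controls require a
// running session (and what its CPU/power cost is) is worth profiling.
let captureSession = AVCaptureSession()
captureSession.beginConfiguration()
if let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: camera),
   captureSession.canAddInput(input) {
    captureSession.addInput(input)
}
captureSession.commitConfiguration()
```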

3 replies · 1 boost · 568 views · Sep ’24

Sonoma: Is It No Longer Possible to Fetch Wallpaper File Names?
Under Ventura, desktop wallpaper image names were stored in a sqlite database at ~/Library/Application Support/Dock/desktoppicture.db. This file is no longer used under Sonoma.

I have a process I built that fetches the desktop image file names and displays them, either as a service or on the desktop. I do this because I have many photos I've taken, and I like to know which one I'm viewing so I can make edits if necessary. I set these images across five spaces and have them randomly change every hour. I tried using AppleScript, but it would not pull the file names.

A few people have pointed me to ~/Library/Application Support/com.apple.wallpaper/Store/Index.plist. However, on my system this only reveals the source folder, not the image name itself. On one of my Macs, it shows 64 items, even though I have only five spaces!

Is there a way to fetch the image file names under Sonoma? Will Sequoia make this easier or harder?

2 replies · 0 boosts · 286 views · Aug ’24

AssistantIntent for Photos without library access
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering whether it would be possible to create intents that do not require full library access.

Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an app intent to allow interactions like "Open this image in BeCasso and apply preset X". Would that be possible without full library access?
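
For context, the shape of such an intent (a sketch based on the .photos.openAsset assistant schema; `AssetEntity` stands for the app's own entity conforming to the .photos.asset schema, and whether perform() can resolve the asset without full library access is exactly the open question):

```swift
import AppIntents

@AssistantIntent(schema: .photos.openAsset)
struct OpenAssetIntent: OpenIntent {
    // The app-defined entity conforming to the .photos.asset schema.
    var target: AssetEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the asset in the app's own UI; if the asset was
        // originally obtained through PHPicker, showing it again may not
        // need a library permission prompt -- an assumption to verify.
        return .result()
    }
}
```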

0 replies · 0 boosts · 381 views · Jul ’24