Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

84 Posts
Post not yet marked as solved
4 Replies
7.5k Views
Running on: iMac 27" 5K, late 2015, 64 GB RAM, and a 16 TB Promise Pegasus2 R4 RAID 5 via Thunderbolt.

After trying Big Sur I found issues with the Luminar photo app, so I decided to return the iMac to Catalina. I reformatted my internal drive, reinstalled Catalina 10.15.5, and reformatted the RAID. But I keep getting the following message on restart: "Incompatible Disk. This disk uses features that are not supported on this version of macOS", and my Pegasus2 R4 volume no longer appears on the desktop or in Disk Utility.

I looked into this and discovered it may be an issue of Mac OS Extended vs. APFS. The iMac was formatted as APFS prior to installing OS 11, so I reformatted to APFS when returning to Catalina. The issues persisted, so I re-reformatted from a bootable USB, this time to Mac OS Extended (Journaled), and the issues seem to be resolved. The iMac runs slower on Mac OS Extended, but it is running and the RAID is recognised. I'd love to go back to APFS but am afraid it will "break" things.

Any thoughts on this would be welcome. Thanks, Nick
Posted Last updated
.
Post marked as solved
2 Replies
1k Views
I have a 3D camera app that I'm working on, and I am wondering how to put the two videos side by side and save them to Photos as one video from this delegate method:

```swift
func fileOutput(_ output: AVCaptureFileOutput,
                didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection],
                error: Error?) {
```

Thank you!
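One possible route, sketched below under the assumption that each recording session produces one movie file per camera: build an AVMutableComposition containing both clips and an AVMutableVideoComposition whose layer instructions translate the second track to the right, then export with AVAssetExportSession (not shown). Function and helper names here are illustrative, not from any Apple sample.

```swift
import Foundation

// Hypothetical helper: horizontal offset of each eye's video in the combined frame.
func xOffset(forTrack index: Int, trackWidth: Double) -> Double {
    return Double(index) * trackWidth
}

#if canImport(AVFoundation)
import AVFoundation
import CoreGraphics

// Sketch: compose two movies side by side into one composition plus the
// video composition that an AVAssetExportSession would need.
func makeSideBySideComposition(leftURL: URL, rightURL: URL) throws
        -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    let videoComposition = AVMutableVideoComposition()
    var instructions: [AVMutableVideoCompositionLayerInstruction] = []
    var renderSize = CGSize.zero

    for (index, url) in [leftURL, rightURL].enumerated() {
        let asset = AVAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .video).first,
              let track = composition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
        else { continue }
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                  of: sourceTrack, at: .zero)
        let size = sourceTrack.naturalSize
        // Shift the second track to the right by one frame width.
        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        let dx = CGFloat(xOffset(forTrack: index, trackWidth: Double(size.width)))
        instruction.setTransform(CGAffineTransform(translationX: dx, y: 0), at: .zero)
        instructions.append(instruction)
        renderSize = CGSize(width: size.width * 2, height: size.height)
    }

    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    mainInstruction.layerInstructions = instructions
    videoComposition.instructions = [mainInstruction]
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    return (composition, videoComposition)
}
#endif
```

The exported file can then be saved to Photos with a PHAssetCreationRequest or UISaveVideoAtPathToSavedPhotosAlbum.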
Post marked as solved
1 Reply
422 Views
Inside func picker(_:didFinishPicking:) I am trying to obtain a UIImage from a Live Photo. I have parsed the results and filtered them into regular still photos and live photos. For the still photos I already have the UIImage, but for the live photos I do not understand how to get from a PHLivePhoto to a PHAsset.

```swift
if let livePhoto = object as? PHLivePhoto {
    DispatchQueue.main.async {
        // What code do I insert here to get from PHLivePhoto to a UIImage?
        // I need to extract a UIImage from a PHLivePhoto.
    }
}
```

Thanks for the help in advance!
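One approach, assuming only the still frame is needed: skip PHLivePhoto and PHAsset entirely and ask the NSItemProvider for a UIImage, which it can typically supply even for Live Photo assets. A sketch (helper and function names are mine, not API):

```swift
import Foundation

// Pure helper: detect a Live Photo among a result's registered type identifiers.
// "com.apple.live-photo" is the identifier behind UTType.livePhoto.
func containsLivePhoto(_ typeIdentifiers: [String]) -> Bool {
    return typeIdentifiers.contains("com.apple.live-photo")
}

#if canImport(PhotosUI)
import PhotosUI
import UIKit

// Sketch: load the still image for any picker result, Live Photo or not.
func stillImage(from result: PHPickerResult, completion: @escaping (UIImage?) -> Void) {
    let provider = result.itemProvider
    if provider.canLoadObject(ofClass: UIImage.self) {
        provider.loadObject(ofClass: UIImage.self) { object, _ in
            DispatchQueue.main.async { completion(object as? UIImage) }
        }
    } else {
        completion(nil)
    }
}
#endif
```

If you do need the PHAsset itself (for example to read resources), configure the picker with `PHPickerConfiguration(photoLibrary: .shared())` so each result carries an `assetIdentifier` you can pass to `PHAsset.fetchAssets(withLocalIdentifiers:options:)`.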
Posted by ninumedia.
Post not yet marked as solved
0 Replies
384 Views
I am using the default HelloPhotogrammetry app you made: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/

My system originally did not meet the specs to run this command-line tool because of a GPU issue. To solve this I bought the Apple-supported Blackmagic eGPU. Here is the error when I run it despite the eGPU:

apply_selection_policy_once: prefer use of removable GPUs (via (null):GPUSelectionPolicy->preferRemovable)

I have deduced that the application running it needs this key: https://developer.apple.com/documentation/bundleresources/information_property_list/gpuselectionpolicy

I tried modifying Terminal's plist to the updated value, but had no luck with it. I believe the command-line tool built by Xcode needs to carry the updated value; I need help with that aspect to allow the system to use the eGPU. I did create a property list within the macOS app and added GPUSelectionPolicy with preferRemovable, and I am still getting the same error as above. Please advise.

Also, to note: I did try temporarily turning off "Prefer External GPU" for Terminal, and it did do the photogrammetry processing, but it was taking a while (more than 30 minutes), so I ended up killing the task. Activity Monitor showed that my internal GPU was being used, not the eGPU I am trying to use. Previously, when the eGPU was not plugged in, I would get an error saying my specs did not meet the criteria, so it was interesting to see that it accepted the Mac as capable (which it technically was); it just did the processing on the less powerful GPU.
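For a command-line tool there is no app bundle to carry Info.plist keys, but an Info.plist can be embedded in the binary itself. A sketch of one possible setup (untested here; the key and value are from the GPUSelectionPolicy documentation linked above): create an Info.plist for the tool containing

```xml
<key>GPUSelectionPolicy</key>
<string>preferRemovable</string>
```

and then, in the command-line target's Build Settings, add `-sectcreate __TEXT __info_plist path/to/Info.plist` to Other Linker Flags, which embeds the plist in the executable's `__TEXT` segment so the system can read the policy at launch.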
Post not yet marked as solved
0 Replies
329 Views
I haven't been able to generate a HEIC photo with IPTC metadata; with JPEG I get it, but not with HEIC. I'm using the CIContext method

```swift
heifRepresentation(of: image, format: CIFormat.RGBA8, colorSpace: colorSpace, options: options)
```

to generate the photo data. image is a CIImage and has the IPTC metadata, but the final photo doesn't have it. If I use the CIContext method

```swift
jpegRepresentation(of: image, colorSpace: ColorSpace.deviceRGB, options: options)
```

the final JPEG photo has the IPTC information. Anyone with the same issue, or with an idea about what's going on?
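One workaround worth trying, sketched under the assumption that a second pass through ImageIO is acceptable: render the HEIC data first, then re-write it with CGImageDestinationAddImageFromSource, attaching the IPTC dictionary explicitly as image properties. The helper and function names are mine.

```swift
import Foundation

// Pure helper: wrap IPTC fields in the shape CGImageDestination expects.
// "{IPTC}" is the literal string behind kCGImagePropertyIPTCDictionary.
func propertiesWithIPTC(_ iptc: [String: Any]) -> [String: Any] {
    return ["{IPTC}": iptc]
}

#if canImport(ImageIO) && canImport(CoreImage)
import ImageIO
import CoreImage
import UniformTypeIdentifiers

// Sketch: CIImage -> HEIC data carrying an IPTC dictionary.
func heicData(from image: CIImage, iptc: [String: Any], colorSpace: CGColorSpace) -> Data? {
    let context = CIContext()
    guard let baseData = context.heifRepresentation(of: image, format: .RGBA8,
                                                    colorSpace: colorSpace, options: [:]),
          let source = CGImageSourceCreateWithData(baseData as CFData, nil)
    else { return nil }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output,
                                                             UTType.heic.identifier as CFString,
                                                             1, nil)
    else { return nil }
    // Copy the image across while merging in the IPTC properties.
    CGImageDestinationAddImageFromSource(destination, source, 0,
                                         propertiesWithIPTC(iptc) as CFDictionary)
    return CGImageDestinationFinalize(destination) ? output as Data : nil
}
#endif
```

I have not verified that every IPTC field survives this round trip in HEIC; it is a sketch of the re-write technique, not a guaranteed fix.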
Posted
by lcalmn.
Last updated
.
Post not yet marked as solved
1 Reply
308 Views
There seems to be a difference in the way iPadOS and macOS handle raw image files (tested with a Fujifilm X-T3 uncompressed RAF file, which is on the compatible list). Running the following code (with url set to the file location of the RAF file) on the iPad displays the preview JPEG embedded in the RAF file, whereas on the Mac the raw data are converted:

```swift
AsyncImage(url: item.url) { phase in
    if let image = phase.image {
        image.resizable().aspectRatio(contentMode: .fit)
    } else if phase.error != nil {
        Color.red
    } else {
        Color.blue
    }
}
```

Am I missing something, or is this the expected behaviour, and is it documented somewhere?
Post not yet marked as solved
2 Replies
493 Views
Hey all, I'm new to the world of Xcode and building apps. I am wondering if it is possible to have an app run in the background, listen to the photo library, and, if it finds that a new photo has been added, automatically upload it via some type of REST API call without the user having to interact or do it manually.

This will only be used in my household, so I'm not looking to get it into the App Store or go through any App Store approval process. So before I spend too much time looking around: is this possible with iOS 14+? The only thing I have come across that remotely sounds like it would work is PHPhotoLibraryChangeObserver, but I'm not sure whether the user has to interact with it in order for it to be used.

I'm open to any suggestions you more experienced Xcode programmers have about the above.
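PHPhotoLibraryChangeObserver needs no user interaction beyond the normal photo-library permission prompt, but it only fires while your app is running; iOS will not keep an arbitrary app alive in the background indefinitely, so a fully automatic uploader is limited by that. A sketch of the observer side (the upload step and filename helper are placeholders, not a real API):

```swift
import Foundation

// Hypothetical helper: a filename-safe name for uploading an asset.
// Local identifiers look like "UUID/L0/001", and slashes are not filename-safe.
func uploadFilename(forAssetIdentifier id: String) -> String {
    return id.replacingOccurrences(of: "/", with: "_") + ".jpg"
}

#if canImport(Photos)
import Photos

final class LibraryWatcher: NSObject, PHPhotoLibraryChangeObserver {
    private var lastFetch: PHFetchResult<PHAsset>

    override init() {
        // Snapshot the library so later changes can be diffed against it.
        lastFetch = PHAsset.fetchAssets(with: .image, options: nil)
        super.init()
        PHPhotoLibrary.shared().register(self)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        guard let changes = changeInstance.changeDetails(for: lastFetch) else { return }
        lastFetch = changes.fetchResultAfterChanges
        for asset in changes.insertedObjects {
            // Placeholder: this is where the REST upload call would go.
            print("would upload \(uploadFilename(forAssetIdentifier: asset.localIdentifier))")
        }
    }
}
#endif
```

Since this is for personal use only, another pragmatic option is simply leaving the app foregrounded on a spare device.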
Posted by Stealthrt.
Post not yet marked as solved
0 Replies
240 Views
I compared several options for getting auxiliary images from a CIImage. These options leak an AVSemanticSegmentationMatte when inspected with the debug memory graph:

```swift
CIImage(data: data, options: [.auxiliarySemanticSegmentationSkinMatte: true])
CIImage(data: data, options: [.auxiliarySemanticSegmentationHairMatte: true])
CIImage(data: data, options: [.auxiliarySemanticSegmentationTeethMatte: true])
```

The other options, .auxiliaryDisparity and .auxiliaryPortraitEffectsMatte, do not leak AVDepthData or AVPortraitEffectsMatte.
Post not yet marked as solved
0 Replies
184 Views
Hello, I would appreciate any help with an issue I am having writing Live Photos. Following the code samples shown in https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_still_and_live_photos/capturing_and_saving_live_photos, I cannot add a Live Photo to the camera roll; I get an error code -1. I am able to successfully write the individual image and movie, so I know it's not an issue with the files or a permission issue. I am writing a .jpeg and a .mov file. Do Live Photos need to be written with specific file types? Are there any other requirements for writing a Live Photo? If anyone can help I would greatly appreciate it. Thank you.
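For reference, a minimal save path looks like the sketch below (PHAssetCreationRequest is the documented API; the helper is mine). JPEG plus MOV is a valid combination, but to my understanding both files must also carry the same pairing identifier (the Apple maker-note entry in the image and the com.apple.quicktime.content.identifier metadata item in the movie); if that metadata is missing, the save fails even though each file is individually valid.

```swift
import Foundation

// Hypothetical helper: sanity-check the pair's container types before saving.
func looksLikeLivePhotoPair(photo: URL, video: URL) -> Bool {
    let photoOK = ["jpg", "jpeg", "heic"].contains(photo.pathExtension.lowercased())
    let videoOK = video.pathExtension.lowercased() == "mov"
    return photoOK && videoOK
}

#if canImport(Photos)
import Photos

func saveLivePhoto(photoURL: URL, videoURL: URL,
                   completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, fileURL: photoURL, options: nil)
        // The movie must be added as .pairedVideo, not .video.
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
    }, completionHandler: completion)
}
#endif
```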
Post not yet marked as solved
0 Replies
257 Views
I have a SwiftUI application that processes image files from Fujifilm cameras, both raw and JPEG. When the image files are imported into the Photos app they are stacked, so that you see only a single image when there are both raw and JPEG versions of the same image. Using Swift I cannot work out how to access both files. Using the following code you get the JPEG file, or the raw file if there is only a single file; but if there are both raw and JPEG files you only get the JPEG file.

```swift
import SwiftUI
import PhotosUI

struct PhotoPicker1: UIViewControllerRepresentable {
    typealias UIViewControllerType = PHPickerViewController

    @ObservedObject var mediaItems: PickedMediaItems
    var didFinishPicking: (_ didSelectItems: Bool) -> Void

    func makeUIViewController(context: Context) -> PHPickerViewController {
        var config = PHPickerConfiguration(photoLibrary: .shared())
        config.filter = .any(of: [.images])
        config.selectionLimit = 0
        config.preferredAssetRepresentationMode = .current

        let controller = PHPickerViewController(configuration: config)
        controller.delegate = context.coordinator
        return controller
    }

    func updateUIViewController(_ uiViewController: PHPickerViewController, context: Context) { }

    func makeCoordinator() -> Coordinator { Coordinator(with: self) }

    class Coordinator: PHPickerViewControllerDelegate {
        var photoPicker1: PhotoPicker1

        init(with photoPicker1: PhotoPicker1) {
            self.photoPicker1 = photoPicker1
        }

        func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
            photoPicker1.didFinishPicking(!results.isEmpty)
            guard !results.isEmpty else {
                return
            }

            for result in results {
                let itemProvider = result.itemProvider
                let typeIdentifier = itemProvider.registeredTypeIdentifiers.first ?? ""
                print("typeIdentifier: \(typeIdentifier)")
            }
        }
    }
}
```
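One avenue worth trying (a sketch; it assumes the picker was created with photoLibrary: .shared() as in the code above, and it additionally requires photo-library read authorization, which the picker alone does not grant): go through the result's asset identifier to the PHAsset and enumerate its PHAssetResources, where a stacked RAW+JPEG pair shows up as separate resources. The filename helper is mine.

```swift
import Foundation

// Pure helper: pick likely RAW files out of a list of resource filenames.
func rawCandidates(in filenames: [String]) -> [String] {
    let rawExtensions: Set<String> = ["raf", "dng", "nef", "cr2", "cr3", "arw"]
    return filenames.filter { name in
        rawExtensions.contains(URL(fileURLWithPath: name).pathExtension.lowercased())
    }
}

#if canImport(Photos) && canImport(PhotosUI)
import Photos
import PhotosUI

// Sketch: list every resource behind a picked asset, including the RAW half.
func logResources(for result: PHPickerResult) {
    guard let identifier = result.assetIdentifier,
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier],
                                          options: nil).firstObject
    else { return }
    for resource in PHAssetResource.assetResources(for: asset) {
        // For a stacked pair this typically reports .photo (the JPEG)
        // and .alternatePhoto (the RAW); fetch bytes with PHAssetResourceManager.
        print(resource.type.rawValue, resource.originalFilename,
              resource.uniformTypeIdentifier)
    }
}
#endif
```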
Post not yet marked as solved
0 Replies
218 Views
Just curious about the possibility of malware in images and videos on iPhones. A) Are there malware protections in place in the Photos app/library to prevent images and videos that contain malware from getting in? B) If not, is it possible for my app, which uses pickers, to pass infected images/videos through to our cloud storage?
Posted by coreyd303.
Post not yet marked as solved
0 Replies
305 Views
Hi everyone, I have an app that uses the front-facing TrueDepth camera; i.e., taking a photo is essential to its functionality. My problem, as quoted from the App Store review team, is as follows:

App crashed when we tapped to take a picture. Review device details: Device type: iPad. OS version: iOS 15.1

I have tested this app on iPhone 13, 12, and 11 successfully. I intended to make this an iPhone-only app (I changed the hardware requirements in Info.plist, but didn't change the deployment target info, i.e. iPad is still selected there). I have spent hours trying to figure out the following: is there any way to restrict an app to iPhone only for testing and publishing? I am a solo developer who has spent a lot of time trying to make this a reality, and am unfortunately stuck on this issue. All help is appreciated!
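On the device-family question: the Info.plist hardware requirements don't control this; the Targeted Device Family build setting does. A sketch of the relevant setting (xcconfig syntax):

```
// 1 = iPhone and iPod touch only; 2 = iPad; 1,2 = universal
TARGETED_DEVICE_FAMILY = 1
```

Note, though, that iPhone-only apps can still be installed on iPads in compatibility mode, and App Review may still exercise them there, so the iPad crash itself likely needs fixing regardless.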
Posted by jzooms.
Post not yet marked as solved
0 Replies
375 Views
I am trying to use the PHPickerViewController to select images from my photo gallery and am seeing the following error:

Could not create a bookmark: NSError: Cocoa 257 "The file couldn’t be opened because you don’t have permission to view it."

I am using Xcode 12.5 on a Mac M1 (macOS 11.2.3) and connecting to an iPhone 12 (iOS 14.8). The code in question:

```swift
private func getPhoto(from itemProvider: NSItemProvider) {
    print(itemProvider.registeredTypeIdentifiers)
    itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        if let error = error {
            print("Encountered Error in loadFileRepresentation \(error.localizedDescription)")
        }
        if url != nil {
            print("Obtained url: \(String(describing: url))")
        } else {
            print("Could not obtain url")
        }
        let imageData = try! Data(contentsOf: url!)
        DispatchQueue.main.async {
            self.parent.mediaItems.append(item: PhotoPickerItem(with: UIImage(data: imageData)!))
        }
    }
}
```

I can see that the itemProvider has the following representations: ["public.heic", "public.jpeg"]. I have tried UTType.jpeg.identifier as well.
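Two things worth knowing here: the url handed to loadFileRepresentation(forTypeIdentifier:) is only guaranteed to be valid for the duration of the completion handler (the underlying file is cleaned up afterwards), and the "Could not create a bookmark" console message is widely reported alongside PHPicker and appears to be harmless logging rather than the actual failure. A sketch that copies the file out before using it, with safe unwrapping instead of force-unwraps (the helper name is mine):

```swift
import Foundation

// Hypothetical helper: a stable destination in tmp for the copied file.
func temporaryCopyURL(for sourceURL: URL) -> URL {
    return FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(sourceURL.pathExtension)
}

#if canImport(UIKit)
import UIKit
import UniformTypeIdentifiers

func getPhoto(from itemProvider: NSItemProvider,
              completion: @escaping (UIImage?) -> Void) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        // The URL is only valid inside this closure, so copy the file out first.
        guard let url = url else {
            print("loadFileRepresentation failed: \(String(describing: error))")
            completion(nil)
            return
        }
        let copy = temporaryCopyURL(for: url)
        do {
            try FileManager.default.copyItem(at: url, to: copy)
            let image = UIImage(data: try Data(contentsOf: copy))
            DispatchQueue.main.async { completion(image) }
        } catch {
            print("copy/load failed: \(error)")
            completion(nil)
        }
    }
}
#endif
```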
Posted by pradnyc.
Post marked as Apple Recommended
744 Views
After a user complained that they could no longer load partially transparent PNG images in my photo compositing app, I eventually tracked this down to a bug in iOS 15.1 (tested on beta 2). When the user selects a PNG image in the PHPickerViewController, the returned NSItemProvider reports having a file for the "public.png" UTI (and only that one). However when requesting data for that UTI, the system actually returns JPEG image data instead. Just a heads up to other developers who might run into this. Hopefully it will get fixed before 15.1 ships. I reported it as FB9665280.
Posted by ppix.
Post not yet marked as solved
0 Replies
292 Views
The question says it all. I want transparent pixels to pass taps/clicks/gestures through, while opaque pixels catch them. Obviously, being able to control the behaviour would be even better, so I could ignore slightly translucent pixels too. Pre-processing is not possible (these are user images), so it's not easy. So far, the best I've thought of is a global gesture recognizer that tries to figure out where in my complex hierarchy the tap falls and whether an image is underneath. But that seems overly complicated for something so simple and basic, really.
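For a UIKit image view, a common pattern is to override point(inside:with:) and sample the alpha of just the touched pixel by rendering the layer into a one-pixel bitmap context. A sketch (the class and threshold property are my additions; the threshold also gives the translucency control mentioned above):

```swift
import Foundation

// Pure helper: byte offset of a pixel's alpha component in an RGBA8 buffer.
func alphaOffset(x: Int, y: Int, width: Int) -> Int {
    return (y * width + x) * 4 + 3
}

#if canImport(UIKit)
import UIKit

// Sketch: an image view that lets touches fall through transparent pixels.
final class AlphaHitImageView: UIImageView {
    // Raise this to also ignore nearly transparent pixels.
    var alphaThreshold: CGFloat = 0.1

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        var pixel = [UInt8](repeating: 0, count: 4)
        return pixel.withUnsafeMutableBytes { buffer in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return true }
            // Shift so the touched point lands on the context's single pixel,
            // then render the layer into it.
            context.translateBy(x: -point.x, y: -point.y)
            layer.render(in: context)
            let alpha = CGFloat(buffer[alphaOffset(x: 0, y: 0, width: 1)]) / 255
            return alpha > self.alphaThreshold
        }
    }
}
#endif
```

Because point(inside:with:) is part of normal hit testing, this works for taps, gestures, and subview routing alike, with no global recognizer needed.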
Post not yet marked as solved
0 Replies
370 Views
Hello, I am trying to create an animated sequence of HEIC images, but I cannot save the per-frame duration property. It seems this is a well-known bug: https://github.com/SDWebImage/SDWebImage/issues/3120. The kCGImagePropertyHEICSDictionary is never saved. Here's a sample project to reproduce the bug: ImageIOHEICSEncodeDecodeBug.zip. Has anybody managed to save this information in a HEIC sequence? Thanks! Here's how I am writing and reading the image sequence:

```objc
- (void)testHEICSBug {
    // First, load an animated image (GIF).
    // You can also change the type to PNG (animated PNG format); same result.
    NSData *GIFData = [NSData dataWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image1" ofType:@"gif"]];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)GIFData, nil);
    NSUInteger frameCount = CGImageSourceGetCount(source);
    NSAssert(frameCount > 1, @"GIF frame count > 1");

    // Split into frames, encode to HEICS.
    NSMutableData *heicsData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)heicsData, (__bridge CFStringRef)AVFileTypeHEIC, frameCount, nil);

    for (int i = 0; i < frameCount; i++) {
        // First get the GIF input image and duration.
        CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, i, nil);
        NSDictionary *inputProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, i, nil);
        NSDictionary *inputDictionary = inputProperties[(__bridge NSString *)kCGImagePropertyGIFDictionary];
        NSTimeInterval duration = [inputDictionary[(__bridge NSString *)kCGImagePropertyGIFUnclampedDelayTime] doubleValue];
        NSAssert(cgImage, @"CGImage not nil");
        NSAssert(duration > 0, @"Input duration > 0");

        // Then, encode into a HEICS animated image.
        NSMutableDictionary *outputDProperties = [NSMutableDictionary dictionary];
        outputDProperties[(__bridge NSString *)kCGImagePropertyHEICSDictionary] = @{(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime : @(duration)};
        CGImageDestinationAddImage(destination, cgImage, (__bridge_retained CFDictionaryRef)outputDProperties);
    }

    // Output the HEICS image data.
    BOOL result = CGImageDestinationFinalize(destination);
    NSAssert(result, @"Encode HEICS failed");

    // Next, try to use ImageIO to decode the HEICS and check the duration.
    CGImageSourceRef newSource = CGImageSourceCreateWithData((__bridge CFDataRef)heicsData, nil);
    frameCount = CGImageSourceGetCount(newSource);
    NSAssert(frameCount > 1, @"New HEICS should be an animated image");
    NSUInteger frameIndex = 1; // I pick the 2nd frame; actually any frame shows this issue.
    NSDictionary *newProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(newSource, frameIndex, nil);
    NSDictionary *newDictionary = newProperties[(__bridge NSString *)kCGImagePropertyHEICSDictionary];
    NSTimeInterval newDuration = [newDictionary[(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime] doubleValue];
    CGImageRef newImage = CGImageSourceCreateImageAtIndex(newSource, frameIndex, nil);

    // Now, check the HEICS frame duration; however, it's nil :(
    // Only the image is kept.
    NSAssert(newImage, @"frame image is not nil");
    NSAssert(newDuration > 0, @"Decoding the HEICS (encoded from GIF) loses the frame duration");
}
```
Posted by libe.
Post not yet marked as solved
1 Reply
1k Views
Hi! I'm using ImageJ v1.53g for work. It worked smoothly with all versions of Catalina. However, since I upgraded to Big Sur, ImageJ doesn't work properly, crashing when merely opening two images or switching between images. Does anyone have any suggestions to resolve this issue? Thanks!
Post not yet marked as solved
0 Replies
250 Views
I am trying to set the description of an image. The metadata tag necessary to add the description is of type alternateText. I create the child tag with the new description value:

```swift
let childTag = CGImageMetadataTagCreate(identifier.section.namespace, identifier.section.prefix, "[x-default]" as CFString, .string, value as CFTypeRef)
```

I then set the description tag like this:

```swift
let parentTag = CGImageMetadataTagCreate(identifier.section.namespace, identifier.section.prefix, identifier.tagKey, .alternateText, [childTag] as CFTypeRef)
let result = CGImageMetadataSetTagWithPath(metadata, nil, identifier.tagKey, parentTag)
```

However, when I write the image file, I get a runtime error message and the operation fails:

XMP Error: AltText array items must have an xml:lang qualifier

So I create the qualifier tag like this:

```swift
let qualifierTag = CGImageMetadataTagCreate("http://www.w3.org/XML/1998/namespace" as CFString, "xml" as CFString, "lang" as CFString, .string, "x-default" as CFTypeRef)
```

But I have not found a way to associate this qualifier tag with the child tag holding the description value. What is the way to do it?
Posted by Jorge-LNS.