Image I/O


Read and write most image file formats, manage color, access image metadata using Image I/O.

Image I/O Documentation

Posts under Image I/O tag

61 Posts
Post not yet marked as solved
0 Replies
363 Views
- (void)cameraDevice:(ICCameraDevice *)camera didReceiveMetadata:(NSDictionary * _Nullable)metadata forItem:(ICCameraItem *)item error:(NSError * _Nullable)error API_AVAILABLE(ios(13.0)) {
    NSLog(@"metadata = %@", metadata);
    if (item) {
        ICCameraFile *file = (ICCameraFile *)item;
        NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
        downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
        NSDictionary *downloadOptions = @{
            ICDownloadsDirectoryURL: downloadsDirectoryURL,
            ICSaveAsFilename: item.name,
            ICOverwrite: @YES,
            ICDownloadSidecarFiles: @YES
        };
        [self.cameraDevice requestDownloadFile:file
                                       options:downloadOptions
                              downloadDelegate:self
                           didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:)
                                   contextInfo:nil];
    }
}

- (void)didDownloadFile:(ICCameraFile *)file error:(NSError * _Nullable)error options:(NSDictionary<NSString *, id> *)options contextInfo:(void * _Nullable)contextInfo API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Download failed with error: %@", error);
    } else {
        NSLog(@"Download completed for file: %@", file);
    }
}

I don't know what's wrong, or whether this is the right way to download pictures from the camera. I hope someone can help me.

Posted by WCF
Post not yet marked as solved
1 Reply
451 Views
Is it impossible to write tEXt chunk data to a PNG on iOS? I successfully read the chunk data, modified it to add tEXt data, and saved the result as an image in the Gallery, but the tEXt data keeps disappearing when I read the chunk data back from the image in the Gallery. Does iOS prevent tEXt data from being preserved when saving an image to the Gallery?

Posted by
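One possible angle on the tEXt question above, offered as a sketch rather than a confirmed answer: Image I/O can write PNG text fields (title, author, description) through kCGImagePropertyPNGDictionary, and adding the finished file to the Photos library from a file URL avoids the re-encoding that UIImage-based saves perform, which is one plausible reason the chunks disappear. The file name and key values below are only illustrative.

    import ImageIO
    import Photos
    import UniformTypeIdentifiers

    // Sketch: write a PNG with tEXt-style metadata, then add the file itself to the Photos library.
    // Requires the photo-library "add" permission (NSPhotoLibraryAddUsageDescription).
    func savePNGWithTextChunks(_ cgImage: CGImage) throws {
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("tagged.png")

        let pngInfo: [CFString: Any] = [
            kCGImagePropertyPNGTitle: "My title",
            kCGImagePropertyPNGAuthor: "Me",
            kCGImagePropertyPNGDescription: "Stored as a PNG text chunk"
        ]
        let properties: [CFString: Any] = [kCGImagePropertyPNGDictionary: pngInfo]

        guard let dest = CGImageDestinationCreateWithURL(url as CFURL, UTType.png.identifier as CFString, 1, nil) else {
            throw CocoaError(.fileWriteUnknown)
        }
        CGImageDestinationAddImage(dest, cgImage, properties as CFDictionary)
        guard CGImageDestinationFinalize(dest) else { throw CocoaError(.fileWriteUnknown) }

        // Adding the original file (not a re-encoded UIImage) keeps the bytes written above.
        PHPhotoLibrary.shared().performChanges({
            PHAssetCreationRequest.forAsset().addResource(with: .photo, fileURL: url, options: nil)
        }, completionHandler: nil)
    }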
Post not yet marked as solved
0 Replies
305 Views
I have applied some filters (like applyingGaussianBlur) to a CIImage that was converted from a UIImage. The resulting image data gets corrupted, but only on lower-end devices. What could be the reason?

Posted by
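It is hard to diagnose the corruption above from the description alone, but one sketch worth trying, under the assumption that edge sampling or the default rendering color space is involved: clamp the image before blurring so the filter never samples undefined pixels at the edges, and render through an explicit sRGB CIContext.

    import CoreImage
    import UIKit

    // Sketch: blur a UIImage with clamped edges and an explicit rendering context.
    func blurred(_ image: UIImage, radius: Double) -> UIImage? {
        guard let input = CIImage(image: image) else { return nil }

        // Clamp so the blur has defined pixels beyond the edges, then crop back to the original extent.
        let output = input
            .clampedToExtent()
            .applyingGaussianBlur(sigma: radius)
            .cropped(to: input.extent)

        let context = CIContext(options: [.workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!])
        guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
    }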
Post not yet marked as solved
3 Replies
715 Views
Hello, I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work. My code is included below. I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them. Any hints or pointers would be appreciated.

func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil) else {
            finished?(false, nil)
            return
        }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil)
            return
        } else {
            finished?(true, path)
        }
    }
}

My adaptation of the above code for APNGs (doesn't work; outputs empty file):

func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil) else {
        fatalError("Failed")
    }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}

Posted by wmk
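One thing that stands out in the APNG adaptation above: it never calls CGImageDestinationFinalize, which would explain the empty output file (the GIF version does call it). A minimal sketch of an APNG writer follows; the function name and parameters are placeholders.

    import ImageIO
    import UIKit
    import UniformTypeIdentifiers

    // Sketch: write UIImages to an animated PNG. The destination must be finalized,
    // otherwise nothing is flushed to disk.
    func writeAPNG(images: [UIImage], delay: Double, loopCount: Int, to url: URL) -> Bool {
        let fileProperties: [CFString: Any] = [kCGImagePropertyPNGDictionary: [kCGImagePropertyAPNGLoopCount: loopCount]]
        let frameProperties: [CFString: Any] = [kCGImagePropertyPNGDictionary: [kCGImagePropertyAPNGDelayTime: delay]]

        guard let destination = CGImageDestinationCreateWithURL(url as CFURL, UTType.png.identifier as CFString, images.count, nil) else {
            return false
        }
        CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)

        for image in images {
            if let cgImage = image.cgImage {
                CGImageDestinationAddImage(destination, cgImage, frameProperties as CFDictionary)
            }
        }
        // Without this call the file stays empty.
        return CGImageDestinationFinalize(destination)
    }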
Post not yet marked as solved
1 Reply
532 Views
I want to read metadata of image files, such as copyright, author, etc. I did a web search and the closest thing is CGImageSourceCopyPropertiesAtIndex:

- (void)tableViewSelectionDidChange:(NSNotification *)notif {
    NSDictionary *metadata = [[NSDictionary alloc] init];

    //get selected item
    NSString *rowData = [fileList objectAtIndex:[tblFileList selectedRow]];

    //set path to file selected
    NSString *filePath = [NSString stringWithFormat:@"%@/%@", objPath, rowData];

    //declare a file manager
    NSFileManager *fileManager = [[NSFileManager alloc] init];

    //check to see if the file exists
    if ([fileManager fileExistsAtPath:filePath] == YES) {
        //escape all the garbage in the string
        NSString *percentEscapedString = (NSString *)CFURLCreateStringByAddingPercentEscapes(NULL, (CFStringRef)filePath, NULL, NULL, kCFStringEncodingUTF8);

        //convert path to NSURL
        NSURL *filePathURL = [[NSURL alloc] initFileURLWithPath:percentEscapedString];
        NSError *error;
        NSLog(@"%@", [filePathURL checkResourceIsReachableAndReturnError:error]);

        //declare a cg source reference
        CGImageSourceRef sourceRef;

        //set the cg source reference to the image by passing its url path
        sourceRef = CGImageSourceCreateWithURL((CFURLRef)filePathURL, NULL);

        //set a dictionary with the image metadata from the source reference
        metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
        NSLog(@"%@", metadata);
        [filePathURL release];
    } else {
        [self showAlert:@"I cannot find this file."];
    }
    [fileManager release];
}

Is there any better or easier approach than this?

Posted by
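For the "is there a better approach" question above, CGImageSource is still the canonical route; a shorter modern sketch in Swift looks like the following. Setting kCGImageSourceShouldCache to false avoids decoding pixel data just to read properties. The file path and the specific dictionary lookups are only illustrative.

    import ImageIO

    // Sketch: read image metadata without decoding the pixel data.
    func imageProperties(at url: URL) -> [CFString: Any]? {
        let options = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let source = CGImageSourceCreateWithURL(url as CFURL, options),
              let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, options) as? [CFString: Any] else {
            return nil
        }
        return properties
    }

    // Example: pull copyright/author out of the TIFF dictionary if present.
    let props = imageProperties(at: URL(fileURLWithPath: "/path/to/image.jpg"))
    let tiff = props?[kCGImagePropertyTIFFDictionary] as? [CFString: Any]
    let copyright = tiff?[kCGImagePropertyTIFFCopyright] as? String
    let artist = tiff?[kCGImagePropertyTIFFArtist] as? String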
Post not yet marked as solved
2 Replies
677 Views
This is my test code.

import SwiftUI

extension View {
    @MainActor
    func render(scale: CGFloat) -> UIImage? {
        let renderer = ImageRenderer(content: self)
        renderer.scale = scale
        return renderer.uiImage
    }
}

struct ContentView: View {
    @Environment(\.colorScheme) private var colorScheme
    @State private var snapImg: UIImage = UIImage()

    var snap: some View {
        Text("I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
            .foregroundStyle(colorScheme == .dark ? .red : .green)
    }

    @ViewBuilder
    func snapEx() -> some View {
        VStack {
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.pink)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.purple)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
        }
    }

    @ViewBuilder
    func snapView() -> some View {
        VStack {
            Text("Text")
            Text("Test2")
                .background(.green)
            snap
            snapEx()
        }
    }

    var body: some View {
        let snapView = snapView()
        VStack {
            snapView
            Image(uiImage: snapImg)
            Button("Snap") {
                snapImg = snapView.render(scale: UIScreen.main.scale) ?? UIImage()
            }
        }
    }
}

When using ImageRenderer, there are some problems with converting a View to an image. For example, Text does not automatically pick up the Dark Mode foreground color. This is just simple test code; the problem is not limited to Text. How should I solve it?

Posted by
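One possible explanation for the rendering issue above, offered as an assumption rather than a confirmed answer: ImageRenderer renders its content in a fresh environment, so environment values such as the color scheme of the presenting hierarchy are not carried over. Passing them explicitly before rendering is a common workaround; a sketch:

    import SwiftUI

    extension View {
        // Sketch: forward the current color scheme into the rendered content explicitly.
        @MainActor
        func render(scale: CGFloat, colorScheme: ColorScheme) -> UIImage? {
            let renderer = ImageRenderer(content: self.environment(\.colorScheme, colorScheme))
            renderer.scale = scale
            return renderer.uiImage
        }
    }

    // Hypothetical adaptation of the button action from the post above:
    // snapImg = snapView.render(scale: UIScreen.main.scale, colorScheme: colorScheme) ?? UIImage()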
Post not yet marked as solved
1 Reply
512 Views
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }

let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)

let context = CIContext()
guard let cgImage = context.createCGImage(ciImage, from: rect, format: .RGBA16, colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")

guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)

In iOS 16 this process is much faster than the same process in iOS 17. Is there a way to speed up the decoding?

Posted by
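I can't explain the iOS 17 regression above, but two things generally worth trying (assumptions, not a confirmed fix): reuse a single CIContext instead of creating one per image, and render straight into a caller-owned buffer with render(_:toBitmap:...) rather than going through createCGImage plus a CFData copy. A sketch:

    import CoreImage

    // Sketch: reuse one context and render RAW output directly into a preallocated buffer.
    let sharedContext = CIContext()   // creating a CIContext is expensive; keep it around

    func decodeRAW(_ data: Data) -> [UInt8]? {
        guard let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
              let image = rawFilter.outputImage else { return nil }

        let width = Int(image.extent.width)
        let height = Int(image.extent.height)
        let bytesPerRow = width * 8            // RGBA16 = 8 bytes per pixel
        var buffer = [UInt8](repeating: 0, count: bytesPerRow * height)

        buffer.withUnsafeMutableBytes { ptr in
            sharedContext.render(image,
                                 toBitmap: ptr.baseAddress!,
                                 rowBytes: bytesPerRow,
                                 bounds: image.extent,
                                 format: .RGBA16,
                                 colorSpace: CGColorSpaceCreateDeviceRGB())
        }
        return buffer
    }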
Post marked as solved
1 Reply
634 Views
I am able to create a UIImage from WebP data using UIImage(data: data) on iOS and iPadOS. When I try to do the same thing on watchOS 10, it fails. Is there a workaround for displaying WebP images on watchOS if this isn't expected to work?

Posted by
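As a possible workaround for the watchOS question above, offered as a sketch under the assumption that Image I/O's WebP decoder is available on the watch even where UIImage(data:) fails: decode through CGImageSource and wrap the resulting CGImage yourself.

    import ImageIO
    import UIKit

    // Sketch: decode WebP data via Image I/O instead of UIImage(data:).
    func decodeWebP(_ data: Data) -> UIImage? {
        guard let source = CGImageSourceCreateWithData(data as CFData, nil),
              let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }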
Post not yet marked as solved
0 Replies
841 Views
It appears I can't add a WebP image as an Image Set in an Asset Catalog. Is that correct? As a workaround, I added the WebP image as a Data Set. I'm then loading it as a CGImage with the following code:

guard let asset = NSDataAsset(name: imageName),
      let imageSource = CGImageSourceCreateWithData(asset.data as CFData, nil),
      let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
    return nil
}
// Use image

Is it fine to store and load WebP images in this way? If not, then what's best practice?

Posted by
Post not yet marked as solved
0 Replies
566 Views
I have written and used code to get colors from a CGImage, and it worked fine up to iOS 16. However, when I use the same code on iOS 17, Red and Blue in the RGB output are reversed. Is this a temporary bug in the OS that will be fixed in the future, or has the specification changed so that it will remain this way after iOS 17? Here is my code:

let pixelDataByteSize = 4

guard let cfData = image.cgImage?.dataProvider?.data else { return }
let pointer: UnsafePointer = CFDataGetBytePtr(cfData)

let scale = UIScreen.main.nativeScale
let address = (Int(image.size.width * scale) * Int(image.size.height * scale / 2) + Int(image.size.width * scale / 2)) * pixelDataByteSize

let r = CGFloat(pointer[address]) / 255
let g = CGFloat(pointer[address + 1]) / 255
let b = CGFloat(pointer[address + 2]) / 255

Posted by
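Whatever changed in iOS 17, reading bytes straight out of a CGImage's data provider assumes a specific channel order that Core Graphics does not guarantee; the reversal described above looks consistent with the source image simply being stored as BGRA. A more robust sketch (an approach, not a confirmed explanation) is to redraw into a bitmap context with a known RGBA8 layout and read from that buffer.

    import CoreGraphics
    import UIKit

    // Sketch: copy a CGImage into a known RGBA8 layout before reading pixel values.
    func pixelRGBA(in image: CGImage, x: Int, y: Int) -> (r: CGFloat, g: CGFloat, b: CGFloat, a: CGFloat)? {
        let width = image.width
        let height = image.height
        let bytesPerRow = width * 4
        var buffer = [UInt8](repeating: 0, count: bytesPerRow * height)

        let drawn: Bool = buffer.withUnsafeMutableBytes { ptr in
            guard let context = CGContext(data: ptr.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: bytesPerRow,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return false }
            context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        guard drawn else { return nil }

        // Bytes are now R, G, B, A (premultiplied by alpha) regardless of the source image's own layout.
        let offset = y * bytesPerRow + x * 4
        return (CGFloat(buffer[offset]) / 255,
                CGFloat(buffer[offset + 1]) / 255,
                CGFloat(buffer[offset + 2]) / 255,
                CGFloat(buffer[offset + 3]) / 255)
    }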
Post not yet marked as solved
0 Replies
417 Views
I am facing an issue with a blog post: I cannot view an image that is included in the post. The image is a BMP. I have read some earlier posts, and it seems that BMP was not supported on earlier iOS versions. My iOS version is 17.0.2.

Browser: Safari
OS: iOS 17.0.2
Device: iPhone 14
URL: https://www.optimabatteries.com/experience/blog/if-a-cars-charging-system-isnt-working-properly-why-cant-we-just-jump-start-it-with-an-external-booster-pack

Please let me know what could be the issue.

Posted by
Post not yet marked as solved
0 Replies
430 Views
Our iOS app can access the photo library when running on an M1 Mac. The app was built with Xcode in Objective-C. However, we cannot select a photo from the library, and we need the Objective-C code to accomplish this task. None of our attempts have been successful.

Posted by
Post not yet marked as solved
4 Replies
1.1k Views
Using the screencapture CLI on macOS Sonoma 14.0 (23A344) results in a 72dpi image file, no matter whether it was captured on a Retina display or not. For example, using screencapture -i ~/Desktop/test.png in Terminal lets me create a selective screenshot, but the resulting file does not contain any DPI metadata (checked using mdls ~/Desktop/test.png), nor does the image itself have the correct DPI information (it should be 144, but it's always 72; checked using Preview.app).

I noticed a (new?) flag option, -r, for which the documentation states:
-r Do not add screen dpi meta data to captured file.
Is that flag somehow automatically set? Setting it myself makes no difference and obviously results in a no-dpi-in-metadata and wrong-dpi-in-image file.

The only two ways I got the correct DPI information in a resulting image file were using the default options (forced by -p): screencapture -i -p, and making the capture go to the clipboard: screencapture -i -c. Sadly, I can't use those in my case.

Feedback filed: FB13208235

I'd appreciate any pointers,
Matthias

Posted by
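Not a fix for the screencapture behavior above, but a possible post-processing workaround, sketched under the assumption that rewriting the captured file afterwards is acceptable in your workflow: Image I/O can stamp the DPI back into the PNG. Note that this re-encodes the file and only carries over the properties you pass explicitly; the function name and destination handling are placeholders.

    import ImageIO
    import UniformTypeIdentifiers

    // Sketch: rewrite a PNG with explicit DPI metadata.
    func writeCopy(of sourceURL: URL, to destinationURL: URL, dpi: Double) -> Bool {
        guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
              let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
              let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.png.identifier as CFString, 1, nil) else {
            return false
        }
        let properties: [CFString: Any] = [
            kCGImagePropertyDPIWidth: dpi,
            kCGImagePropertyDPIHeight: dpi
        ]
        CGImageDestinationAddImage(destination, image, properties as CFDictionary)
        return CGImageDestinationFinalize(destination)
    }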
Post not yet marked as solved
2 Replies
753 Views
Hello All, I am trying to compress a PNG image by applying PNG filters (Sub, Up, Average, Paeth). I am applying the filters using the property kCGImagePropertyPNGCompressionFilter, but there is no change seen in the resulting images after trying any of the filters. What is the issue here? Can someone help me with this? Do I have to compress the image data after applying the filter? If yes, how do I do that? Here is my source code:

CGImageDestinationRef outImageDestRef = NULL;
long keyCounter = kzero;
CFStringRef dstImageFormatStrRef = NULL;
CFMutableDataRef destDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);

Handle srcHndl = //source image handle;
ImageTypes srcImageType = //'JPEG', 'PNGf', etc.;

CGImageRef inImageRef = CreateCGImageFromHandle(srcHndl, srcImageType);
if (inImageRef) {
    CFTypeRef keys[4] = {nil};
    CFTypeRef values[4] = {nil};

    dstImageFormatStrRef = CFSTR("public.png");

    long png_filter = IMAGEIO_PNG_FILTER_SUB; // IMAGEIO_PNG_FILTER_SUB, IMAGEIO_PNG_FILTER_UP, IMAGEIO_PNG_FILTER_AVG, IMAGEIO_PNG_FILTER_PAETH .. it is one of these at a time
    keys[keyCounter] = kCGImagePropertyPNGCompressionFilter;
    values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &png_filter);
    keyCounter++;

    outImageDestRef = CGImageDestinationCreateWithData(destDataRef, dstImageFormatStrRef, 1, NULL);
    if (outImageDestRef) {
        // keys[keyCounter] = kCGImagePropertyDPIWidth;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        //
        // keys[keyCounter] = kCGImagePropertyDPIHeight;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;

        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, keyCounter, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CGImageDestinationAddImage(outImageDestRef, inImageRef, options);
        CFRelease(options);

        status = CGImageDestinationFinalize(outImageDestRef);
        if (status == true) {
            UInt8 *destImagePtr = CFDataGetMutableBytePtr(destDataRef);
            destSize = CFDataGetLength(destDataRef);
            //using destImagePtr after this ...
        }
        CFRelease(outImageDestRef);
    }

    for (long cnt = kzero; cnt < keyCounter; cnt++)
        if (values[cnt]) CFRelease(values[cnt]);

    if (inImageRef) CGImageRelease(inImageRef);
}

Posted by
Post not yet marked as solved
0 Replies
418 Views
struct ContentView: View {
    @State var listOfImages: [String] = ["One", "Two", "Three", "Four"]
    @State var counter = 0

    var body: some View {
        VStack {
            Button(action: {
                counter += 1
            }, label: {
                Text("Next Image")
            })
        }
        .background(Image(listOfImages[counter]))
        .padding()
    }
}

When I click on the button, counter increases and the next image is displayed as the background. The memory usage of the app increases as each image changes. Is there any way to maintain steady memory use?

Posted by
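A sketch of one way to keep memory flatter, assuming the growth above comes from decoding full-size assets and assuming the images are available as bundled files rather than asset-catalog entries (the resource name, extension, and size below are placeholders): downsample with Image I/O's thumbnail API before display, so only a screen-sized bitmap is kept in memory.

    import ImageIO
    import UIKit

    // Sketch: decode an image at a bounded pixel size instead of full resolution.
    func downsampledImage(named name: String, maxPixelSize: Int) -> UIImage? {
        guard let url = Bundle.main.url(forResource: name, withExtension: "png"),
              let source = CGImageSourceCreateWithURL(url as CFURL, [kCGImageSourceShouldCache: false] as CFDictionary) else {
            return nil
        }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ]
        guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }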
Post not yet marked as solved
3 Replies
1.2k Views
Hello! After the recent WWDC2023 talk about HDR support and finding the documentation page on applying the Apple HDR effect to photos, I became very interested in the HDR Gain Map format. From the documentation page it is clear how we can restore the original HDR from the SDR and Gain Map representation, but my question is: how do we convert back from HDR to the SDR + Gain Map representation?

As I understand it right now, conversion from HDR to SDR + Gain Map involves two steps:
1. Tone map the HDR to get the correct SDR.
2. Once we have both HDR and SDR, calculate the Gain Map from the equation on the documentation page.

Am I correct? If so, what tone mapping algorithm is used for the HDR -> SDR conversion right now? I can't find any information about this on the internet :( I would be very grateful for your response!

Posted by
Post not yet marked as solved
1 Reply
590 Views
I would like to use a third-party app to edit the metadata of a photo to change its Caption, and then be able to search in the Photos app to find that image by the edited caption. I have managed to do this by duplicating the photo with the edited metadata: the Photos app recognizes it as a new photo and indexes it with the new caption, making it searchable. However, when editing the photo in place, the Photos app will not re-index it, so it will not be searchable. Is there a way to edit photos in place and have them searchable with the new metadata?

Posted by
Post not yet marked as solved
1 Reply
691 Views
I have been researching and enduring the most unbelievable attack you have ever heard of. You won't believe me. AT&T doesn't. Apple doesn't. They have not bothered to even take a look. I have multiple screen videos and screenshots that prove that when my phone was stolen, damaged, then returned, it was infected with a very resilient virus of some sort, so that it has been cloned and is duplicated on a Mac. The WiFi network I use is likely the jumping-off point, but I am not familiar with this level of invasion. I have restored the phone, but I have not deleted the eSIM, and I always restored from a backup. Well, once I did not, and still no change.

I am not a neophyte, and while I have limited knowledge of Apple code, Swift or Xcode or whatever, I do have some coding knowledge from other platforms. One thing I do know, though, is my iPhone. I bought the first one in 2007 and watched the keynote announcing the App Store on the first iPhone. I know when it is operating as it should. For instance, whenever I reset my Google password, which is often, or I just look through the Google account at devices, this iPhone 14 Plus running iOS 17 is always listed as a Mac OS X device from another region. Sometimes I can see this phone on there as well, but it is never the device I am using.

I also have screen videos of very strange errors, like certain options being shut off. Right now I cannot turn on voice assist. When the phone was stolen and damaged, the Face ID was damaged. Just now the screen suddenly got brighter, and it does that often, almost as often as the volume suddenly going up. My contacts are constantly deleted, and that is not because of switching accounts. As this has been occurring for over 4 months now, I have tirelessly investigated every explanation for the errors. I know it is a Mac connected to the same WiFi because occasionally the font will change in size and a small part of the text will be highlighted, and it is much smaller than the display of the phone. Every time I log in to my Apple ID the password has been changed, and this is with 2FA on. I also checked yesterday on the AT&T website, and my number has 5 text messages listed in just one day that I did not get. My roommates seem to know things about me I haven't told them. AT&T sent me a bill for a different number without a phone connected to it. I use an eSIM.

I have endless more details and screenshots, and I cannot seem to get a response from Apple, but my belongings have been stolen, my identity stolen, and my sanity also taken away. You begin to question reality, but just going over my body of evidence is enough to show I have something going on. I want to report this to someone, but I would like to get some info on how I could be sure what is happening. My crash reports show interesting stuff; I am actually going to check and make sure the IMEI numbers in those reports match the device. But please help. This is like super high-tech, invisible, terrorizing spyware, and it will be worse for the next person. When I unlock my phone it is never opened to the app I closed it on. NEVER. Email johnmichaelpowers@yahoo.com

Posted by