I have a custom app running on a Mac Studio with Ventura that grabs a snapshot image from a network camera. It then adds some extra information into the EXIF "MakerNote" field. However, the metadata cannot be read back out of the image when running Ventura; it can, however, be read out of the same image file on a Mac that is not running Ventura.
It would appear Apple has removed support for reading MakerNote in Ventura but still supports writing it.
This code is about 7 years old, written in Objective-C, and had worked without issue until Ventura came along.
Calls used:
CGImageDestinationAddImageFromSource(); // used to write the image to disk with the extra metadata - works on Ventura
CGImageSourceCopyPropertiesAtIndex(); // used to read the metadata from an image - does not return "MakerNote" data
Is there a new way to read EXIF "MakerNote" data from image files that was introduced with Ventura?
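For reference, the read side looks roughly like this (a minimal Swift sketch of the call in question; the shipping app is Objective-C, and the value type stored under MakerNote can vary):

import Foundation
import ImageIO

// Minimal sketch: read the EXIF dictionary via CGImageSourceCopyPropertiesAtIndex
// and look up the MakerNote entry. On pre-Ventura systems this key is populated;
// on Ventura it comes back missing, as described above.
func readMakerNote(from url: URL) -> Any? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let exif = properties[kCGImagePropertyExifDictionary] as? [CFString: Any] else {
        return nil
    }
    return exif[kCGImagePropertyExifMakerNote]
}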
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Under Sonoma 14.4 the compression option doesn't work with PNG images; it works for JPG/HEIF. Preview can export a PNG file to HEIC with a compression option, so what am I missing? This worked previously. I am trying 0.01 and 0.9 as the compression quality, and the resulting file size is the same for PNG.
Is Preview using some trick to convert the image, such as ciContext.createCGImage?
PS: A compression quality of 1.0 was broken under the 14.4 RC, and Preview created an empty file.
func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }

    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData as CFMutableData, "public.heic" as CFString, 1, nil) else { return nil }

    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)

    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
A simple view has misaligned localized content after being converted to an image using ImageRenderer.
This is still a problem on a real device and in TestFlight builds.
I'm not sure what the cause is; I'm assuming it's an ImageRenderer bug.
I tried UIGraphicsImageRenderer instead, but it captures the image at an inaccurate position, so the result is offset and ends up with a white border. I also don't know why, in some cases, it runs into circular references that result in blank images.
"(1) days" is also not converted to "1 day" properly.
I want to create a framework for this repo: https://github.com/BradLarson/GPUImage
but it failed.
1. I downloaded the repo and ran the commands below:
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage"

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS Simulator" \
  -archivePath "archivessimulator/GPUImage"

xcodebuild -create-xcframework \
  -archive archives/GPUImage.xcarchive -framework GPUImage.framework \
  -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework \
  -output xcframeworks/GPUImage.xcframework
There are errors: "'cryptexDiskImage' is an unknown content type" and "'com.apple.platform.xros' is an unknown platform identifier".
Topic: Developer Tools & Services
SubTopic: Xcode
Tags: Image I/O, Graphics and Games, visionOS, Frameworks
I am currently running iOS 18 Beta 3 and am working on enabling users to paste (and copy) custom emojis (AdaptiveImageGlyph, such as Memoji, Stickers, and soon GenMoji) into a text field.
I am looking for the UTI for AdaptiveImageGlyph—something similar to "public.adaptive-image-glyph". Does anyone know if such a UTI exists?
Here’s my situation: When typing AdaptiveImageGlyph using the system keyboard, everything functions correctly. However, if I copy some text containing AdaptiveImageGlyph from the Notes app and paste it into my playground app, it only pastes the text. The reverse is also true. In fact, if I copy some AdaptiveImageGlyph from the playground app and paste it, it only pastes the text.
Interestingly, copying AdaptiveImageGlyph from the Notes app and pasting it into iMessage works flawlessly, and vice versa. I am trying to achieve the same seamless functionality in my app.
Given that this feature works in iMessage and Notes, I am inclined to believe the issue might be on my side, though I recognize these are system apps and not third-party.
Example Code:
import SwiftUI
import UIKit

struct AdaptiveImageGlyphTextView: UIViewRepresentable {
    class Coordinator: NSObject, UITextViewDelegate {
        var parent: AdaptiveImageGlyphTextView

        init(parent: AdaptiveImageGlyphTextView) {
            self.parent = parent
        }

        func textViewDidChange(_ textView: UITextView) {
            parent.text = textView.text
        }

        func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
            // Handle insertion of adaptive image glyphs here if needed
            return true
        }
    }

    @Binding var text: String

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    func makeUIView(context: Context) -> UITextView {
        let textView = UITextView()
        textView.delegate = context.coordinator
        textView.supportsAdaptiveImageGlyph = true
        textView.isEditable = true
        textView.isSelectable = true
        textView.font = UIFont.systemFont(ofSize: 17)

        // Enable paste with NSAdaptiveImageGlyphs
        textView.pasteConfiguration = UIPasteConfiguration(acceptableTypeIdentifiers: [
            "public.text",
            "public.image",
            "public.adaptive-image-glyph" // Replace with the correct UTI if different
        ])

        return textView
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        if uiView.text != text {
            uiView.text = text
        }
    }
}

struct ContentView: View {
    @State private var text: String = ""

    var body: some View {
        AdaptiveImageGlyphTextView(text: $text)
            .frame(height: 200)
            .padding()
    }
}

#Preview {
    ContentView()
}
Hi,
Currently my app uses the ImageCaptureCore framework to work with a DSLR camera. But when I tested it on iOS 18, it turned out my camera can no longer connect to the iPhone over a wired connection.
It seems other developers have run into the same problem:
https://forums.developer.apple.com/forums/thread/756960
https://stackoverflow.com/questions/78618886/icdevicebrowser-fails-to-find-any-devices-after-ios-18-update
And it is reproducible in other apps that are expected to use the ImageCaptureCore framework.
I'd like to clarify:
Is this an iOS 18 bug?
Is there any plan by Apple to remove wired-connection support from the ImageCaptureCore framework?
Thank you.
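For reference, the discovery path that stops finding devices on iOS 18 is essentially the standard ICDeviceBrowser flow (a minimal sketch, not the full implementation):

import ImageCaptureCore

// Minimal sketch of the ICDeviceBrowser flow that no longer reports
// the wired DSLR camera on iOS 18.
final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()   // on iOS 17 the USB-connected camera arrives via didAdd
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device: \(device.name ?? "unknown")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreComing: Bool) {
        print("Removed device: \(device.name ?? "unknown")")
    }
}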
Hi, experts,
I am using PHPickerViewController to open photos on iPhone. It works well for most photos, such as JPEG and HEIF, but it fails for a RAW photo on an iPhone 14 Plus, whose type identifier is com.adobe.raw-image.
I use result.itemProvider.loadFileRepresentation(forTypeIdentifier: "com.adobe.raw-image") to load the RAW photo, and it always fails with "Error loading file representation: Cannot load representation of type com.adobe.raw-image".
I have tried some other parameters, such as forTypeIdentifier: public.image and public.camera-raw-image; neither of them worked.
How can I load this type of RAW photo?
Below are my code details:
// MARK: - PHPickerViewControllerDelegate
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true, completion: nil)

    var resultIndex = 0
    DDLogInfo("Pick \(results.count) photos")

    for result in results {
        resultIndex += 1
        DDLogInfo("Process \(resultIndex) photo")
        DDLogInfo("Registered type identifiers for itemProvider:")
        for typeIdentifier in result.itemProvider.registeredTypeIdentifiers {
            DDLogInfo("TypeIdentifier \(typeIdentifier)")
        }

        if result.itemProvider.hasItemConformingToTypeIdentifier(UTType.image.identifier) {
            DDLogInfo("Result \(resultIndex) is image")
        }

        if result.itemProvider.canLoadObject(ofClass: UIImage.self) {
            DDLogInfo("Can load \(resultIndex) image")
            // more code for photo
        } else {
            DDLogInfo("Load special image, such as raw")
            result.itemProvider.loadFileRepresentation(forTypeIdentifier: "com.adobe.raw-image") { url, error in
                if let error = error {
                    DDLogInfo("Error loading file representation: \(error.localizedDescription)")
                    return
                }
                // handle url here
            }
        }
    }
}
It seems that there’s still no way to get all TIFF tags from a TIFF image, is that right? I've got these GeoTIFF images that have a handful of specialized TIFF tags in them. Calling CGImageSourceCopyPropertiesAtIndex(), I can see basic properties common to all TIFF images, like dimensions and color/pixel information, but no others.
Short of including libtiff, is there another way to get at the metadata? I've tried all of the options in CGImageSourceCopyAuxiliaryDataInfoAtIndex.
I've written a few bugs about this since 2020, all ignored.
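For what it's worth, the call in question is roughly this (a sketch), and only the generic keys come back:

import Foundation
import ImageIO

// Sketch of the property dump mentioned above. For the GeoTIFFs in question,
// this returns only the common TIFF properties (dimensions, pixel/color info,
// kCGImagePropertyTIFFDictionary), not the specialized GeoTIFF tags.
func dumpTIFFProperties(at url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] else {
        return
    }
    for (key, value) in properties {
        print("\(key): \(value)")
    }
}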
Dear all,
I'm building my first macOS app.
I've created my app icon and added it to the AppIcon asset, but when I build the application the icon shows in the Dock with no rounded borders, unlike all the other apps.
I'm attaching the icon here and, as you can see, it has sharp edges. It appears the same way in the Dock.
Why? Has anybody experienced the same?
Thanks for the support in advance,
A.
So I have an iOS app using WKWebView (Ionic framework); my images (hosted remotely, not local) can be seen in my app on iOS 17 but not on iOS 18.
Are there changes in WKWebView for this particular thing, or am I missing something else?
I have a 3x3 matrix which I need to apply to a UIImage and save the result in the Documents folder. I successfully converted the 3x3 matrix (represented as [[Double]]) to a CATransform3D, and then I broke my head trying to figure out how to apply it to the UIImage.
The only property where I can apply it is a UIView's (or UIImageView's, when working with a UIImage) transform property. But that has nothing to do with the UIImage itself: I can't save a UIImage from the transformed UIImageView with all the transformations applied.
And the Core Graphics methods (like concatenate for CGContext) only work with affine transformations, which doesn't suit me.
Please give me a hint about which direction I should look in.
Does Apple have native methods, or do I have to use third-party frameworks for this functionality?
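The closest thing I've found so far is Core Image's perspective-transform filter, which takes four corner points instead of a matrix. A sketch of mapping the image corners through the 3x3 matrix (assuming a row-major [[Double]] homography) would look like this, but I'm not sure it's the intended approach:

import UIKit
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: apply a 3x3 homography to a UIImage by projecting the image corners
// through the matrix and feeding them to CIPerspectiveTransform.
// Note: Core Image uses a bottom-left origin, so depending on how the matrix
// was derived, the y-axis may need flipping first.
func apply(_ matrix: [[Double]], to image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // Project a point through the homography using homogeneous coordinates.
    func project(_ p: CGPoint) -> CGPoint {
        let x = Double(p.x), y = Double(p.y)
        let w = matrix[2][0] * x + matrix[2][1] * y + matrix[2][2]
        return CGPoint(x: (matrix[0][0] * x + matrix[0][1] * y + matrix[0][2]) / w,
                       y: (matrix[1][0] * x + matrix[1][1] * y + matrix[1][2]) / w)
    }

    let r = input.extent
    let filter = CIFilter.perspectiveTransform()
    filter.inputImage = input
    filter.bottomLeft  = project(CGPoint(x: r.minX, y: r.minY))
    filter.bottomRight = project(CGPoint(x: r.maxX, y: r.minY))
    filter.topLeft     = project(CGPoint(x: r.minX, y: r.maxY))
    filter.topRight    = project(CGPoint(x: r.maxX, y: r.maxY))

    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}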
Hi Team,
We have been working on an image-processing app developed using React. In this app we make XMLHttpRequests to the server and store the responses in a cache, which is around 200MB - 250MB in size. We track the memory footprint using the Xcode Instruments tool.
While downloading and rendering the data in the app, Instruments shows a memory footprint of around 800MB - 1000MB. We assume that garbage collection is not working as expected, or that some resources are not released after use, and because of this we see this high memory footprint for 200MB - 250MB of data. If the data changes, we remove the existing data from the cache and store the new data. But when we delete the data from the cache, the memory is not released immediately; it takes 3 seconds or more.
In the meantime, memory also gets allocated to the new data, which increases the overall memory footprint of the app, and in some cases the app crashes. The maximum memory we have seen is around 1.5GB on average, varying with the device configuration. When we try the same activity in the Safari browser, memory is released immediately. If the app released the initially acquired memory while loading new data, we would see far fewer crashes. We need help understanding whether there is a way to release the memory immediately to avoid the app crashing.
To reproduce this scenario, we created a simple app that allocates a 100MB array and checks the memory footprint with Instruments. When we create the 100MB array, it sometimes shows a memory footprint peak of around 700MB-800MB, and when we clear the array by assigning an empty array, the memory is released only after 2-3 seconds.
We created an array, removed it, and immediately created a new array of the same size and removed it again. Because the memory is not released in time, repeating these steps a few times increases the app's memory footprint and crashes the app.
My app dynamically loads different immersive furniture design scenes.
After each scene is loaded, I need to set an HDR image as the image-based light.
How can I load an EnvironmentResource dynamically?
That way I can set the ImageBasedLightComponent dynamically.
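What I have so far only works for HDRs bundled with the app. A minimal sketch of attaching the light once an EnvironmentResource is in hand (the resource name is a hypothetical example); what I'm missing is how to build the EnvironmentResource from an HDR fetched at runtime:

import RealityKit

// Minimal sketch: load a bundled environment texture and attach it as the
// image-based light for a scene's root entity. "furniture_showroom" is a
// hypothetical bundled resource name.
func applyImageBasedLight(to sceneRoot: Entity) async throws {
    let environment = try await EnvironmentResource(named: "furniture_showroom")
    sceneRoot.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
    sceneRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: sceneRoot))
}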
I'm using UIKit to display a long list of large images inside a SwiftUI ScrollView and LazyHStack using UIViewControllerRepresentable. When an image is loaded, I'm using SDWebImage to load the image from the disk.
As the user navigates through the list and continues to load more images, more memory is used and is never cleared, even as the images are unloaded by the LazyHStack. Eventually, the app reaches the memory limit and crashes. This issue persists if I load the image with UIImage(contentsOfFile: ...) instead of SDWebImage.
How can I free the memory used by UIImage when the view is removed?
ScrollView(.horizontal, showsIndicators: false) {
    LazyHStack(spacing: 16) {
        ForEach(allItems) { item in
            TestImageDisplayRepresentable(item: item)
                .frame(width: geometry.size.width, height: geometry.size.height)
                .id(item.id)
        }
    }
    .scrollTargetLayout()
}
import UIKit
import SwiftUI
import SDWebImage

class TestImageDisplay: UIViewController {
    var item: TestItem

    init(item: TestItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
        imageView.center = view.center
        view.addSubview(imageView)

        imageView.sd_setImage(with: item.imageURL, placeholder: nil)
    }
}

struct TestImageDisplayRepresentable: UIViewControllerRepresentable {
    var item: TestItem

    func makeUIViewController(context: Context) -> TestImageDisplay {
        return TestImageDisplay(item: item)
    }

    func updateUIViewController(_ uiViewController: TestImageDisplay, context: Context) {
        uiViewController.item = item
    }
}
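A possible mitigation to experiment with (not a fix for the memory never being released) is decoding a downsampled thumbnail with Image I/O instead of the full-size file; a sketch, with an arbitrary 1200-pixel cap:

import UIKit
import ImageIO

// Sketch: decode a downsampled version of the image on disk rather than the
// full-resolution bitmap, which reduces the per-image memory cost.
func downsampledImage(at url: URL, maxPixelSize: CGFloat = 1200) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }

    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}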
I'm adding support for Genmoji to my app, but it's unclear to me whether this is something that should be called manually in code to insert an adaptive image glyph, or whether it's something the system calls when the user inserts a Genmoji / adaptive image glyph object (i.e. a sticker) into the UITextView.
I have a user whose app keeps crashing on his iOS 18 device, and I need some help.
Exception 1, Code 26, Subcode 8 > Attempted to dereference garbage pointer 0x1a.

0   ImageIO                  IIOScanner::getVal32() + 36
1   ImageIO                  PSDReadPlugin::initialize(IIODictionary*) + 620
2   ImageIO                  PSDReadPlugin::initialize(IIODictionary*) + 620
3   ImageIO                  IIOReadPlugin::callInitialize() + 400
4   ImageIO                  IIO_Reader::initImageAtOffset(CGImagePlugin*, unsigned long, unsigned long, unsigned long) + 164
5   ImageIO                  IIOImageSource::makeImagePlus(unsigned long, IIODictionary*) + 832
6   ImageIO                  IIOImageSource::getPropertiesAtIndexInternal(unsigned long, IIODictionary*) + 72
7   ImageIO                  IIOImageSource::createThumbnailAtIndex(unsigned long, IIODictionary*, int*) + 1352
8   ImageIO                  CGImageSourceCreateThumbnailAtIndex + 740
9   Photos                   _createDecodedImageUsingImageIOWithFileUrlOrData + 856
10  Photos                   __91-[PHImageIODecoder decodeImageFromData:orFileURL:options:existingRequestHandle:completion:]_block_invoke_2 + 176
11  libdispatch.dylib        _dispatch_call_block_and_release + 32
12  libdispatch.dylib        _dispatch_client_callout + 20
13  libdispatch.dylib        _dispatch_continuation_pop + 596
14  libdispatch.dylib        _dispatch_async_redirect_invoke + 580
15  libdispatch.dylib        _dispatch_root_queue_drain + 392
16  libdispatch.dylib        _dispatch_worker_thread2 + 156
17  libsystem_pthread.dylib  _pthread_wqthread + 228
I am building an app about photos, and I want to create a photo-sharing feature like the one in Apple's Photos app.
Please see Steps to Reproduce and the attached project.
The current share method has the following issues:
The file name of the shared photo changes to "FullSizeRender".
The creation and update dates of shared photos change to the date they were edited or shared.
I want to ensure that the following conditions are definitely met:
Share the latest edited version.
The creation date should be when the original photo was first created.
How can I improve the code?
STEPS TO REPRODUCE
class PHAssetShareManager {
    static func shareAssets(_ assets: [PHAsset], from viewController: UIViewController, sourceView: UIView) {
        let manager = PHAssetResourceManager.default()
        var filesToShare: [URL] = []
        let group = DispatchGroup()

        for asset in assets {
            group.enter()
            getAssetFile(asset, resourceManager: manager) { fileURL in
                if let fileURL = fileURL {
                    filesToShare.append(fileURL)
                }
                group.leave()
            }
        }

        group.notify(queue: .main) {
            self.presentShareSheet(filesToShare, from: viewController, sourceView: sourceView)
        }
    }

    private static func getAssetFile(_ asset: PHAsset, resourceManager: PHAssetResourceManager, completion: @escaping (URL?) -> Void) {
        print("getAssetFile")

        let resources: [PHAssetResource]
        switch asset.mediaType {
        case .image:
            if asset.mediaSubtypes.contains(.photoLive) {
                // let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePairedVideo }
                // let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .pairedVideo }
                let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePhoto }
                let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .photo }
                resources = editedResources.isEmpty ? originalResources : editedResources
            } else {
                let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePhoto }
                let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .photo }
                resources = editedResources.isEmpty ? originalResources : editedResources
            }
        case .video:
            let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizeVideo }
            let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .video }
            resources = editedResources.isEmpty ? originalResources : editedResources
        default:
            print("Unsupported media type")
            completion(nil)
            return
        }

        guard let resource = resources.first else {
            print("No resource found")
            completion(nil)
            return
        }

        let fileName = resource.originalFilename
        let tempDirectoryURL = FileManager.default.temporaryDirectory
        let localURL = tempDirectoryURL.appendingPathComponent(fileName)

        // Delete existing files and reset cache
        if FileManager.default.fileExists(atPath: localURL.path) {
            do {
                try FileManager.default.removeItem(at: localURL)
            } catch {
                print("Error removing existing file: \(error)")
            }
        }

        let options = PHAssetResourceRequestOptions()
        options.isNetworkAccessAllowed = true

        resourceManager.writeData(for: resource, toFile: localURL, options: options) { (error) in
            if let error = error {
                print("Error writing asset data: \(error)")
                completion(nil)
            } else {
                completion(localURL)
            }
        }
    }

    private static func presentShareSheet(_ items: [Any], from viewController: UIViewController, sourceView: UIView) {
        print("presentShareSheet")
        let activityViewController = UIActivityViewController(activityItems: items, applicationActivities: nil)

        if UIDevice.current.userInterfaceIdiom == .pad {
            activityViewController.popoverPresentationController?.sourceView = sourceView
            activityViewController.popoverPresentationController?.sourceRect = sourceView.bounds
        }

        viewController.present(activityViewController, animated: true, completion: nil)
    }
}
Apps like Viber and ******** are not able to recognize or upload photos after the 18.01 installation.
Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which includes filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I’ve noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more advanced manner.
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
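For reference, the manual chaining I've tried looks roughly like this (a sketch of applying whatever autoAdjustmentFilters returns), which is what produces results that differ from Photos' "Auto":

import CoreImage

// Sketch of the approach described above: ask Core Image for its auto-adjustment
// filters and chain them over the input image. The result differs noticeably
// from the Photos app's "Auto" enhancement.
func autoEnhanced(_ input: CIImage) -> CIImage {
    var output = input
    for filter in input.autoAdjustmentFilters() {
        filter.setValue(output, forKey: kCIInputImageKey)
        output = filter.outputImage ?? output
    }
    return output
}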
I am a developer working on iOS apps.
In my demo, I planned to replace the local PNG images with HEIC images, but the actual test results showed abnormalities on this one device, while the other test devices displayed them normally.
The HEIC images were converted with the built-in image conversion feature on the Mac. I tested multiple HEIC images, but none of them were displayed and the image information returned nil, while PNG images are displayed normally.
Device information: