Due to its new UI with a transparent theme, there is sometimes lag when switching between apps.
Please add an AI eraser to the Photos app so that we can experience the new AI tool.
In the India region, we are unable to use Wallet options. Please add some options so that we can add cards and use this feature.
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under Image I/O tag
50 Posts
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I ran into the same issue as that poster, but the answer there doesn't help me here.
Briefly, I am using CGImageSource to extract a paired leftImage and rightImage from a fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
spatialAsset = asset
}
// other code show below
I can fetch left and right images from a native spatial photo (taken on Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched around and someone said that the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the imageSource.
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
let options = PHImageRequestOptions()
options.isNetworkAccessAllowed = true
options.deliveryMode = .highQualityFormat
options.resizeMode = .none
options.version = .original
PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
imageData, _, _, _ in
guard let imageData,
let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
else {
completion(nil)
return
}
let stereoImagePair = stereoImagePair(from: imageSource)
completion(stereoImagePair)
}
}
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
return nil
}
let imageCount = CGImageSourceGetCount(source)
print(String(format: "%d images found", imageCount))
guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
/// function returns here
print("no groups found")
return nil
}
guard
let stereoGroup = groups.first(where: {
let groupType = $0[kCGImagePropertyGroupType] as! CFString
return groupType == kCGImagePropertyGroupTypeStereoPair
})
else {
return nil
}
guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
else {
return nil
}
return (leftImage, rightImage, self.identifier)
}
Any suggestion? Thanks
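As a side note, one way to check whether the generated photo stores an auxiliary depth or disparity map rather than a stereo group is to query Image I/O for auxiliary data. This is only a hedged sketch of that probe, not a confirmed explanation of how the 2D -> 3D conversion is stored:
import ImageIO
// Hedged sketch: probe an image source for auxiliary depth/disparity data.
func auxiliaryDepthInfo(from source: CGImageSource) -> [CFString: Any]? {
    for auxType in [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth] {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [CFString: Any] {
            return info // contains kCGImageAuxiliaryDataInfoData, ...DataDescription, ...Metadata
        }
    }
    return nil
}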
visionOS 2.4
ImageIO encoding to HEICS fails in macOS 15.5.
log
writeImageAtIndex:1246: *** CMPhotoCompressionSessionAddImageToSequence: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')
This seems to be related to:
https://github.com/SDWebImage/SDWebImage/issues/3732
Affected versions:
iOS 18.4 (simulator and device), macOS 15.5
Unaffected versions:
iOS 18.3 (simulator and device), macOS 15.3
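For context, a minimal sketch of the kind of multi-frame encode that hits this error path; the "public.heics" type identifier and the frame setup are assumptions on my part, not a confirmed reproduction:
import CoreGraphics
import Foundation
import ImageIO
// Hedged sketch: encode several CGImage frames into an HEICS image sequence.
func encodeHEICS(frames: [CGImage]) -> Data? {
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        output, "public.heics" as CFString, frames.count, nil) else { return nil }
    for frame in frames {
        CGImageDestinationAddImage(destination, frame, nil)
    }
    // The kCMPhotoError_UnsupportedOperation (codec 'hvc1') error from the log above
    // reportedly appears while the sequence is written on iOS 18.4 / macOS 15.5.
    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}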
I have a wallpaper app and I need Photos access. When I try to add it in Capabilities, I can't find it. Is there any solution?
Since iOS 18.3, icons are no longer generated correctly with QLThumbnailGenerator.
No error is returned either.
But this error message now appears in the console:
Error returned from iconservicesagent image request: <ISTypeIcon: 0x3010f91a0>,Type: com.adobe.pdf - <ISImageDescriptor: 0x302f188c0> - (36.00, 36.00)@3x v:1 l:5 a:0:0:0:0 t:() b:0 s:2 ps:0 digest: B19540FD-0449-3E89-AC50-38F92F9760FE error: Error Domain=NSOSStatusErrorDomain Code=-609 "Client is disallowed from making such an icon request" UserInfo={NSLocalizedDescription=Client is disallowed from making such an icon request}
Does anyone know this error? Is there a workaround?
Are there new permissions to consider?
Here is the code showing how the icons are generated:
let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: scale, representationTypes: self.thumbnailType)
request.iconMode = true
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { [weak self] thumbnail, _, error in
}
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
In our web application, some features allow the user to upload multiple images (more than 25) on a single page.
It works fine on every OS and browser except iOS.
When the user tries to upload images directly from the camera, some images overlap, are duplicated, or go missing.
This happens in both Safari and Chrome. We checked our application thoroughly and everything is working fine on our end.
You can reproduce the issue by creating a web page that accepts more than 50 images (we tried the same in ASP MVC Core and PHP) and displays the images in order.
Access the page on your iPhone using Safari or Chrome.
Upload images directly from your camera; use sequential images (e.g., photos of a stopwatch) so that you can easily identify the order of the uploaded files.
Then check the listing page of uploaded images (try these steps multiple times).
You will find that some images are duplicated and some are missing.
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
This only happens when the app is built for Mac Catalyst, not on iOS, iPadOS, or "real" macOS (AppKit).
The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2.
Those images are rendered correctly in Photos and Preview on all platforms.
When displaying the image inside of a UIImageView or in a SwiftUI Image, they render correctly.
The issue only occurs when loading the image via Core Image.
When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result.
Mac (AppKit):
Catalyst:
Something seems to be off when Core Image tries to load an image with foreign color space in Catalyst.
We identified a workaround: By using a CGImageDestination to transcode the image using the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load the image correctly. However, one potentially loses fidelity this way.
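For reference, a hedged sketch of that transcoding workaround; the PNG output type is just a placeholder assumption, and any format Image I/O can write should do:
import CoreImage
import Foundation
import ImageIO
import UniformTypeIdentifiers
// Hedged sketch of the workaround: re-encode the image with
// kCGImageDestinationOptimizeColorForSharing so Image I/O converts it to sRGB (or similar),
// then hand the converted data to Core Image.
func ciImageWithSharedColor(from imageData: Data) -> CIImage? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil) else { return nil }
    let output = NSMutableData()
    // The output type is an assumption; pick whatever format fits your pipeline.
    guard let destination = CGImageDestinationCreateWithData(
        output, UTType.png.identifier as CFString, 1, nil) else { return nil }
    let options = [kCGImageDestinationOptimizeColorForSharing: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return CIImage(data: output as Data)
}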
Or might there be a better workaround?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
Core Image
Core Graphics
Hello Apple Developer Community,
I’m facing a recurring issue in my SwiftUI iOS app where a specific view fails to reload correctly after saving edits and navigating back to it. The failure happens consistently post-save, and I’m looking for insights from the community.
🛠 App Overview
Purpose
A SwiftUI app that manages user-created data, including images, text fields, and completion tracking.
Tech Stack:
SwiftUI, Swift 5.x
MSAL for authentication
Azure Cosmos DB (NoSQL) for backend data
Azure Blob Storage for images
Environment:
Xcode 15.x
iOS 17 (tested on iOS 18.2 simulator and iPhone 16 Pro)
User Context:
Users authenticate via MSAL.
Data is fetched via Azure Functions, stored in Cosmos DB, and displayed dynamically.
🚨 Issue Description
🔁 Steps to Reproduce
Open a SwiftUI view (e.g., a dashboard displaying a user’s saved data).
Edit an item (e.g., update a name, upload a new image, modify completion progress).
Save changes via an API call (sendDataToBackend).
The view navigates back, but the image URL loses its SAS token.
Navigate away (e.g., back to the home screen or another tab).
Return to the view.
❌ Result
The view crashes, displays blank, or fails to load updated data.
SwiftUI refreshes text-based data correctly, but AsyncImage does not.
Printing the image URL post-save shows that the SAS token (?sv=...) is missing.
❓ Question
How can I properly reload AsyncImage after saving, without losing the SAS token?
🛠 What I’ve Tried
✅ Verified JSON Structure
Debugged pre- and post-save JSON.
Confirmed field names match the Codable model.
✅ Forced SwiftUI to Refresh
Tried .id(UUID()) on NavigationStack:
NavigationStack {
ProjectDashboardView()
.id(UUID()) // Forces reinit
}
Still fails intermittently.
✅ Forced AsyncImage to Reload
Tried appending a UUID() to the image URL:
AsyncImage(url: URL(string: "\(imageUrl)?cacheBust=\(UUID().uuidString)"))
Still fails when URL query parameters (?sv=...) are trimmed.
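One detail worth noting: string-appending "?cacheBust=..." to a URL that already has a query (the SAS token) produces a second "?" separator. A hedged sketch of building the URL with URLComponents instead, so existing query items are preserved:
import Foundation
// Hedged sketch: append a cache-busting query item without dropping existing
// parameters such as the SAS token (?sv=...).
func cacheBustedURL(from urlString: String) -> URL? {
    guard var components = URLComponents(string: urlString) else { return nil }
    var items = components.queryItems ?? []
    items.append(URLQueryItem(name: "cacheBust", value: UUID().uuidString))
    components.queryItems = items
    return components.url
}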
I’d greatly appreciate any insights, code snippets, or debugging suggestions! Let me know if more logs or sample code would help.
Thanks in advance!
When I want to edit an image in the Preview app, the only options are to rotate left or right by 90 degrees. There is no option to rotate by an arbitrary angle. Please look into this and provide that option in a future update.
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Graphics and Games
App Review
Media
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it.
On iOS 16, the following code works fine:
import ImageIO
import UIKit
func getUIImage(from imageData: [UInt8]) -> UIImage? {
let data = Data(imageData)
guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
print("Failed to decode JP2 image!")
return nil
}
return UIImage(cgImage: cgImage)
}
However, on iOS 18, this fails with errors like:
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?
Additional Info:
The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 6A 50 ...).
The image works on iOS 16 but fails on iOS 18.
Tested on iPhone running iOS 18.0 beta.
Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
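One diagnostic that might help narrow this down (a hedged sketch; "public.jpeg-2000" is the assumed type identifier for JP2): list the type identifiers Image I/O reports as decodable on the running OS and check whether JPEG2000 is still among them.
import ImageIO
// Hedged sketch: check whether the current OS still advertises a JPEG2000 decoder.
func supportsJPEG2000Decoding() -> Bool {
    guard let identifiers = CGImageSourceCopyTypeIdentifiers() as? [String] else { return false }
    return identifiers.contains("public.jpeg-2000")
}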
In SwiftUI we have these nice properties that we can use for an Image: .saturation, .contrast, .brightness.
In my case, I have a backend where I want to apply exactly the same effects using OpenCV.
Can I find the mathematical formulas anywhere?
Any idea how I can achieve this?
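For what it's worth, Apple doesn't document the exact math, but these modifiers are commonly assumed to behave like Core Image's CIColorControls. A hedged per-pixel sketch under that assumption (the Rec. 709 luma weights are also an assumption), which should be straightforward to port to OpenCV:
import simd
// Hedged sketch: approximate per-channel math for saturation, contrast, and brightness
// on normalized RGB in [0, 1], assuming CIColorControls-like behavior.
func adjust(_ rgb: SIMD3<Float>,
            saturation: Float,
            contrast: Float,
            brightness: Float) -> SIMD3<Float> {
    // Saturation: blend between the pixel's luminance (grayscale) and the original color.
    let luma = simd_dot(rgb, SIMD3<Float>(0.2126, 0.7152, 0.0722))
    var c = simd_mix(SIMD3<Float>(repeating: luma), rgb, SIMD3<Float>(repeating: saturation))
    // Contrast: scale around mid gray (0.5).
    c = (c - 0.5) * contrast + 0.5
    // Brightness: additive offset.
    c += SIMD3<Float>(repeating: brightness)
    return simd_clamp(c, SIMD3<Float>(repeating: 0), SIMD3<Float>(repeating: 1))
}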
Thank you!
Hi, I have a problem when I want to attach my grayscale depth map to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data with the image. How do I align the depth map? I saw an article about it, but it isn't very detailed, so I might be missing a step.
I tried to:
convert my depth map into pixel buffer
create image destination ref and add the image there.
add the auxData (depth map dict)
This is the output:
There is some black space, and the colors of my RGB image change.
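In case it helps the discussion, a hedged sketch of the attach step described above; the depthAuxInfo dictionary (built from the depth pixel buffer) and the "public.heic" output type are assumptions:
import CoreGraphics
import Foundation
import ImageIO
// Hedged sketch: write the base image and attach the depth map as auxiliary data.
func writeImage(_ image: CGImage, depthAuxInfo: CFDictionary, to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, "public.heic" as CFString, 1, nil) else { return false }
    CGImageDestinationAddImage(destination, image, nil)
    // depthAuxInfo is assumed to carry kCGImageAuxiliaryDataInfoData and ...DataDescription.
    CGImageDestinationAddAuxiliaryDataInfo(destination, kCGImageAuxiliaryDataTypeDepth, depthAuxInfo)
    return CGImageDestinationFinalize(destination)
}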
A simple view has misaligned localized content after being converted to an image using ImageRenderer.
This is still problematic on a real device and in TestFlight builds.
I'm not sure what the problem is; I'm assuming it's an ImageRenderer bug.
I tried UIGraphicsImageRenderer instead, but it captures the image at an inaccurate position, and the resulting offset produces a white border. I also don't know why, in some cases, it runs into circular references that result in blank images.
"(1) days" is also not converted to "1 day" properly.
A specific image fails to load properly in UIImageView on iOS 16 and later, but loads normally on iOS 15 and earlier. Similarly, on the Mac, this image cannot be opened on macOS 13 and later, whereas it opens without issue on macOS 12 and earlier. I am curious about the reasons behind this differing behavior on both iPhone and Mac.
Based on the iPhone 14 Max camera, implement model recognition and draw a rectangular box around the recognized object. Calculate the width and height using LiDAR and display them in centimeters on the live-updating image.
Swift code
I’m experiencing significant performance and memory management issues in my SwiftUI application when displaying a large number of images using LazyVStack within a ScrollView. The application uses Swift Data to manage and display images.
Here’s the model I’m working with:
@Model
final class Item {
var id: UUID = UUID()
var timestamp: Date = Date.now
var photo: Data = Data()
init(photo: Data = Data(), timestamp: Date = Date.now) {
self.photo = photo
self.timestamp = timestamp
}
}
extension Item: Identifiable {}
The photo property is used to store images. However, when querying Item objects using Swift Data in a SwiftUI ScrollView, the app crashes if there are more than 100 images in the database.
Scrolling down through the LazyVStack loads all of the images into memory, leading to a crash once memory usage exceeds the device's limits.
Here’s my view: A LazyVStack inside a ScrollView displays the images.
struct LazyScrollView: View {
@Environment(\.modelContext) private var modelContext
@State private var isShowingPhotosPicker: Bool = false
@State private var selectedItems: [PhotosPickerItem] = []
@Query private var items: [Item]
var body: some View {
NavigationStack {
ScrollView {
LazyVStack {
ForEach(items) { item in
NavigationLink {
Image(uiImage: UIImage(data: item.photo)!)
.resizable()
.scaledToFit()
} label: {
Image(uiImage: UIImage(data: item.photo)!)
.resizable()
.scaledToFit()
}
}
}
}
.navigationTitle("LazyScrollView")
.photosPicker(isPresented: $isShowingPhotosPicker, selection: $selectedItems, maxSelectionCount: 100, matching: .images)
.onChange(of: selectedItems) {
Task {
for item in selectedItems {
if let data = try await item.loadTransferable(type: Data.self) {
let newItem = Item(photo: data)
modelContext.insert(newItem)
}
}
try? modelContext.save()
selectedItems = []
}
}
}
}
}
Based on this:
How can I prevent SwiftUI from loading all the binary data (photo) into memory when the view is scrolled all the way down to the last item?
Why does SwiftUI not free memory from the images that are not being displayed?
Any insights or suggestions would be greatly appreciated. Thank you!
I will put the full view code in the comments so anyone can test if needed.
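As an aside, one common mitigation (a hedged sketch, not a full fix for the Swift Data side) is to decode a downsampled thumbnail with Image I/O instead of building a full-resolution UIImage for every row:
import ImageIO
import UIKit
// Hedged sketch: decode a small thumbnail so each row keeps only a modest bitmap in memory.
func downsampledImage(from data: Data, maxPixelSize: CGFloat) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions) else { return nil }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}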
Hi,
I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2, ...). I want OCR to read the card and paste the info, but I just can't get it to work. I have two problems. First, the camera won't close: it stays open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card with possibly different results, and then which one is the right one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data it read), but the info is not pasted at all.
Any ideas? I know it's hard to tell what's wrong, but I've tried ChatGPT and what it suggests just doesn't work.
This is the code for the scan view:
import SwiftUI
import Vision
import VisionKit
struct ScanCardView: UIViewControllerRepresentable {
@Binding var scannedCoordinates: [String: String]
var useLettersForColumns: Bool
var numberOfColumns: Int
var numberOfRows: Int
@Environment(\.presentationMode) var presentationMode
func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
let scannerVC = VNDocumentCameraViewController()
scannerVC.delegate = context.coordinator
return scannerVC
}
func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}
func makeCoordinator() -> Coordinator {
return Coordinator(self)
}
class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
let parent: ScanCardView
init(_ parent: ScanCardView) {
self.parent = parent
}
func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
print("Escaneo completado, procesando imagen...")
guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
print("No se pudo obtener la imagen del escaneo.")
controller.dismiss(animated: true, completion: nil)
return
}
recognizeText(from: image)
DispatchQueue.main.async {
print("Finalizando proceso OCR y cerrando la cámara.")
controller.dismiss(animated: true, completion: nil)
}
}
func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
print("Escaneo cancelado por el usuario.")
controller.dismiss(animated: true, completion: nil)
}
func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
print("Error en el escaneo: \(error.localizedDescription)")
controller.dismiss(animated: true, completion: nil)
}
private func recognizeText(from image: CGImage) {
let request = VNRecognizeTextRequest { (request, error) in
guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
print("Error en el reconocimiento de texto: \(String(describing: error?.localizedDescription))")
DispatchQueue.main.async {
self.parent.presentationMode.wrappedValue.dismiss()
}
return
}
let recognizedStrings = observations.compactMap { observation in
observation.topCandidates(1).first?.string
}
print("Texto reconocido: \(recognizedStrings)")
let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
DispatchQueue.main.async {
print("Coordenadas detectadas después de filtrar: \(filteredCoordinates)")
self.parent.scannedCoordinates = filteredCoordinates
}
}
request.recognitionLevel = .accurate
let handler = VNImageRequestHandler(cgImage: image, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
do {
try handler.perform([request])
print("OCR completado y datos procesados.")
} catch {
print("Error al realizar la solicitud de OCR: \(error.localizedDescription)")
}
}
}
private func filterValidCoordinates(from strings: [String]) -> [String: String] {
var result: [String: String] = [:]
print("Texto antes de filtrar: \(strings)")
for string in strings {
let trimmedString = string.replacingOccurrences(of: " ", with: "")
if parent.useLettersForColumns {
let pattern = "^[A-J]\\d{1,2}$" // Letras de A-J seguidas de 1 o 2 dígitos
if trimmedString.range(of: pattern, options: .regularExpression) != nil {
print("Coordenada válida detectada (letras): \(trimmedString)")
result[trimmedString] = "Valor" // Asignación de prueba
}
} else {
let pattern = "^[1-9]\\d{0,1}$" // Solo números, de 1 a 99
if trimmedString.range(of: pattern, options: .regularExpression) != nil {
print("Coordenada válida detectada (números): \(trimmedString)")
result[trimmedString] = "Valor"
}
}
}
print("Coordenadas finales después de filtrar: \(result)")
return result
}
}
}
Hey, I'm building a camera app and I want to use the captured HDR gain map alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:
let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap)
let gainDict = NSDictionary(dictionary: gainmap)
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice.
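In case it's useful, a hedged sketch of one possible route (an assumption, not a documented approach): go through the photo's file data representation and read the gain map with Image I/O, the same way as for an on-disk HEIC.
import AVFoundation
import ImageIO
// Hedged sketch: extract the HDR gain map auxiliary data from a captured photo.
func hdrGainMapInfo(from photo: AVCapturePhoto) -> [CFString: Any]? {
    guard let data = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [CFString: Any]
    else { return nil }
    return info // kCGImageAuxiliaryDataInfoData, ...DataDescription, ...Metadata
}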
Thanks!