Hi, I have a problem when I try to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing a step.
I tried to:
convert my depth map into a pixel buffer,
create an image destination ref and add the image there,
add the auxData (the depth map dictionary), roughly as in the sketch below.
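Roughly, my write path looks like this (a simplified sketch; imageURL, outputURL, and depthData are placeholders for my actual values, and I build the AVDepthData from my grayscale map beforehand):
import AVFoundation
import ImageIO
import UniformTypeIdentifiers

func writeImage(at imageURL: URL, to outputURL: URL, attaching depthData: AVDepthData) {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.heic.identifier as CFString,
                                                            1, nil) else { return }

    // Copy the primary image into the destination unchanged.
    CGImageDestinationAddImageFromSource(destination, source, 0, nil)

    // Ask AVDepthData for its auxiliary-data dictionary (pixel buffer bytes,
    // description, metadata) and attach it to the destination.
    var auxType: NSString?
    if let auxInfo = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
       let type = auxType {
        CGImageDestinationAddAuxiliaryDataInfo(destination, type as CFString, auxInfo as CFDictionary)
    }

    if !CGImageDestinationFinalize(destination) {
        print("Failed to finalize the image destination")
    }
}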
This is the output:
There is some black space, and the colours of my RGB image change.
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under Image I/O tag
Since iOS 18.3, icons are no longer generated correctly with QLThumbnailGenerator.
No error is returned either.
But this error message now appears in the console:
Error returned from iconservicesagent image request: <ISTypeIcon: 0x3010f91a0>,Type: com.adobe.pdf - <ISImageDescriptor: 0x302f188c0> - (36.00, 36.00)@3x v:1 l:5 a:0:0:0:0 t:() b:0 s:2 ps:0 digest: B19540FD-0449-3E89-AC50-38F92F9760FE error: Error Domain=NSOSStatusErrorDomain Code=-609 "Client is disallowed from making such an icon request" UserInfo={NSLocalizedDescription=Client is disallowed from making such an icon request}
Does anyone know this error? Is there a workaround?
Are there new permissions to consider?
Here is the code that generates the icons:
let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: scale, representationTypes: self.thumbnailType)
request.iconMode = true
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { [weak self] thumbnail, _, error in
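// Handle the generated thumbnail or error here.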
}
I’m experiencing significant performance and memory management issues in my SwiftUI application when displaying a large number of images using LazyVStack within a ScrollView. The application uses SwiftData to manage and display images.
Here’s the model I’m working with:
@Model
final class Item {
var id: UUID = UUID()
var timestamp: Date = Date.now
var photo: Data = Data()
init(photo: Data = Data(), timestamp: Date = Date.now) {
self.photo = photo
self.timestamp = timestamp
}
}
extension Item: Identifiable {}
The photo property is used to store images. However, when querying Item objects with SwiftData in a SwiftUI ScrollView, the app crashes if there are more than 100 images in the database.
Scrolling down through the LazyVStack loads all images into memory, and the app crashes when memory usage exceeds the device’s limits.
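One change I'm considering (an assumption on my part, based on SwiftData's handling of large blobs, not something I've verified fixes this) is marking the property for external storage so the image bytes live outside the store record:
@Attribute(.externalStorage) var photo: Data = Data()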
Here’s my view: A LazyVStack inside a ScrollView displays the images.
struct LazyScrollView: View {
@Environment(\.modelContext) private var modelContext
@State private var isShowingPhotosPicker: Bool = false
@State private var selectedItems: [PhotosPickerItem] = []
@Query private var items: [Item]
var body: some View {
NavigationStack {
ScrollView {
LazyVStack {
ForEach(items) { item in
NavigationLink {
Image(uiImage: UIImage(data: item.photo)!)
.resizable()
.scaledToFit()
} label: {
Image(uiImage: UIImage(data: item.photo)!)
.resizable()
.scaledToFit()
}
}
}
}
.navigationTitle("LazyScrollView")
.photosPicker(isPresented: $isShowingPhotosPicker, selection: $selectedItems, maxSelectionCount: 100, matching: .images)
.onChange(of: selectedItems) {
Task {
for item in selectedItems {
if let data = try await item.loadTransferable(type: Data.self) {
let newItem = Item(photo: data)
modelContext.insert(newItem)
}
}
try? modelContext.save()
selectedItems = []
}
}
}
}
}
Based on this, my questions are:
How can I prevent SwiftUI from loading all the binary data (photo) into memory when the view is scrolled down to the last item?
Why doesn't SwiftUI free the memory of images that are no longer displayed?
Any insights or suggestions would be greatly appreciated. Thank you!
I will put the full view code in the comments so anyone can test if needed.
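In the meantime, here's a downsampling helper I've been experimenting with (a sketch: the assumption is that decoding a smaller thumbnail through Image I/O, instead of UIImage(data:) on the full-size photo, caps the memory per row; maxPixelSize is a placeholder):
import ImageIO
import UIKit

func downsampledImage(from data: Data, maxPixelSize: CGFloat) -> UIImage? {
    // Don't decode the full image when creating the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions) else {
        return nil
    }
    let thumbnailOptions: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    // Decode a bitmap no larger than maxPixelSize on its longest side.
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
The idea would be to call this in the ForEach label instead of force-unwrapping UIImage(data: item.photo), with a maxPixelSize around the displayed size.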
Hi,
I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2, ...). I want the OCR to read the card and paste the info, but I just can't get it to work. I have two problems. First, the camera won't close; it remains open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card, maybe with different results, and then which is the good one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data it read), but the info is not pasted at all.
Any ideas? I know it's hard to tell what's wrong, but I've tried ChatGPT and whatever it suggests just doesn't work.
This is the code for the scan view:
import SwiftUI
import Vision
import VisionKit
struct ScanCardView: UIViewControllerRepresentable {
@Binding var scannedCoordinates: [String: String]
var useLettersForColumns: Bool
var numberOfColumns: Int
var numberOfRows: Int
@Environment(\.presentationMode) var presentationMode
func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
let scannerVC = VNDocumentCameraViewController()
scannerVC.delegate = context.coordinator
return scannerVC
}
func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}
func makeCoordinator() -> Coordinator {
return Coordinator(self)
}
class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
let parent: ScanCardView
init(_ parent: ScanCardView) {
self.parent = parent
}
func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
print("Escaneo completado, procesando imagen...")
guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
print("No se pudo obtener la imagen del escaneo.")
controller.dismiss(animated: true, completion: nil)
return
}
recognizeText(from: image)
DispatchQueue.main.async {
print("Finalizando proceso OCR y cerrando la cámara.")
controller.dismiss(animated: true, completion: nil)
}
}
func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
print("Escaneo cancelado por el usuario.")
controller.dismiss(animated: true, completion: nil)
}
func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
print("Error en el escaneo: \(error.localizedDescription)")
controller.dismiss(animated: true, completion: nil)
}
private func recognizeText(from image: CGImage) {
let request = VNRecognizeTextRequest { (request, error) in
guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
print("Error en el reconocimiento de texto: \(String(describing: error?.localizedDescription))")
DispatchQueue.main.async {
self.parent.presentationMode.wrappedValue.dismiss()
}
return
}
let recognizedStrings = observations.compactMap { observation in
observation.topCandidates(1).first?.string
}
print("Texto reconocido: \(recognizedStrings)")
let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
DispatchQueue.main.async {
print("Coordenadas detectadas después de filtrar: \(filteredCoordinates)")
self.parent.scannedCoordinates = filteredCoordinates
}
}
request.recognitionLevel = .accurate
let handler = VNImageRequestHandler(cgImage: image, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
do {
try handler.perform([request])
print("OCR completado y datos procesados.")
} catch {
print("Error al realizar la solicitud de OCR: \(error.localizedDescription)")
}
}
}
private func filterValidCoordinates(from strings: [String]) -> [String: String] {
var result: [String: String] = [:]
print("Texto antes de filtrar: \(strings)")
for string in strings {
let trimmedString = string.replacingOccurrences(of: " ", with: "")
if parent.useLettersForColumns {
let pattern = "^[A-J]\\d{1,2}$" // Letras de A-J seguidas de 1 o 2 dígitos
if trimmedString.range(of: pattern, options: .regularExpression) != nil {
print("Coordenada válida detectada (letras): \(trimmedString)")
result[trimmedString] = "Valor" // Asignación de prueba
}
} else {
let pattern = "^[1-9]\\d{0,1}$" // Solo números, de 1 a 99
if trimmedString.range(of: pattern, options: .regularExpression) != nil {
print("Coordenada válida detectada (números): \(trimmedString)")
result[trimmedString] = "Valor"
}
}
}
print("Coordenadas finales después de filtrar: \(result)")
return result
}
}
}
Based on the iPhone 14 Max camera, implement model recognition and draw a rectangular box around the recognized object. Calculate the width and height using LiDAR, and display them in centimeters on the live-updating image.
Swift code
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For this I'm doing something like the following:
guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else { return }
let gainDict = gainmap as NSDictionary
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice.
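The only approach I've found so far is a sketch like the one below (an assumption on my part: round-tripping through the photo's file data representation, since I don't know of a direct gain map accessor on AVCapturePhoto):
import AVFoundation
import ImageIO

func hdrGainMapInfo(from photo: AVCapturePhoto) -> [AnyHashable: Any]? {
    // Build an image source from the encoded photo data, then read the
    // auxiliary gain map exactly as with an on-disk HEIC file.
    guard let data = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(data as CFData, nil) else {
        return nil
    }
    return CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [AnyHashable: Any]
}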
Thanks!
1 CoreFoundation _dataWrite + 144
2 CoreFoundation _CFWriteStreamWrite + 312
3 ImageIO IIOColorMap::writeToStream(__CFWriteStream*) + 128
4 ImageIO GlobalGIFInfo::writeToStream(__CFWriteStream*, CFRange const&) + 336
5 ImageIO GlobalGIFInfo::createDataRepresentation(CFRange const&) + 80
6 ImageIO IIO_Reader_GIF::createGlobalInfoData(IIOImageReadSession*) + 68
7 ImageIO IIOReadPlugin::callDecodeImage(IIODecodeParameter*, IIOImageType, __IOSurface**, __CVBuffer**, CGImageBlockSet**) + 608
8 ImageIO IIO_Reader::CopyImageBlockSetProc(void*, CGImageProvider*, CGRect, CGSize, __CFDictionary const*) + 696
9 ImageIO IIOImageProviderInfo::copyImageBlockSetWithOptions(CGImageProvider*, CGRect, CGSize, __CFDictionary const*) + 740
10 ImageIO IIOImageProviderInfo::CopyImageBlockSetWithOptions(void*, CGImageProvider*, CGRect, CGSize, __CFDictionary const*) + 920
11 QuartzCore CA::Render::copy_image(CGImage*, CGColorSpace*, unsigned int, double, double) + 3080
12 QuartzCore CA::Render::prepare_image(CGImage*, CGColorSpace*, unsigned int, double) + 24
13 QuartzCore CA::Layer::prepare_contents(CALayer*, CA::Transaction*) + 220
14 QuartzCore CA::Layer::prepare_commit(CA::Transaction*) + 284
15 QuartzCore CA::Context::commit_transaction(CA::Transaction*, double, double*) + 484
16 QuartzCore CA::Transaction::commit() + 648
17 QuartzCore CA::Transaction::flush_as_runloop_observer(bool) + 88
18 UIKitCore __UIApplicationFlushCATransaction + 52
19 UIKitCore ___setupUpdateSequence_block_invoke_2 + 332
20 UIKitCore __UIUpdateSequenceRun + 84
21 UIKitCore _schedulerStepScheduledMainSection + 172
22 UIKitCore _runloopSourceCallback + 92
23 CoreFoundation ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
24 CoreFoundation ___CFRunLoopDoSource0 + 176
25 CoreFoundation ___CFRunLoopDoSources0 + 244
26 CoreFoundation ___CFRunLoopRun + 840
27 CoreFoundation _CFRunLoopRunSpecific + 588
28 GraphicsServices 0x00000001ebf114c0 GSEventRunModal + 164
29 UIKitCore -[UIApplication _run] + 816
30 UIKitCore _UIApplicationMain + 340
Hi everyone,
I'm working on an iOS app that uses VisionKit and I'm exploring the .visualLookUp feature. Specifically, I want to extract the detailed information that Visual Look Up provides after identifying an object in an image (e.g., if the object is a flower, retrieve its name; if it’s a clothing tag, get the tag's content).
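For context, this is roughly where I've gotten (a sketch; imageView is a placeholder UIImageView, and as far as I can tell this only surfaces the built-in Visual Look Up UI rather than the identified data):
import UIKit
import VisionKit

@MainActor
func setUpVisualLookUp(for image: UIImage, on imageView: UIImageView) async throws {
    // Attach the interaction that drives the Visual Look Up UI.
    let interaction = ImageAnalysisInteraction()
    interaction.preferredInteractionTypes = [.visualLookUp]
    imageView.addInteraction(interaction)

    // Run the analyzer with visual look up enabled and hand the result to the interaction.
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    interaction.analysis = try await analyzer.analyze(image, configuration: configuration)
}
Is there a supported way to read the identification details (e.g., the flower's name) programmatically from the analysis?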
Hi Team,
We’re encountering an issue where Open Graph metadata (e.g., og:image) isn’t rendering properly on iOS/macOS platforms or WhatsApp previews. Here’s a brief summary of the problem:
SSL Configuration: Our SSL Labs report shows a grade of B due to:
Improper certificate chain setup.
Outdated cipher suites (e.g., TLS_RSA_WITH_3DES).
Support for deprecated TLS protocols (1.0/1.1).
Frontend Observations:
Metadata (e.g., og:image) is not reliably picked up on iOS/macOS crawlers.
Crawlers may have issues accessing assets due to CORS or TLS limitations.
What We Need:
Guidance on resolving Open Graph preview issues specific to iOS/macOS environments.
Best practices for ensuring compatibility with Apple’s crawlers and WhatsApp on iOS.
Suggestions for optimizing server-side SSL/TLS configurations and frontend setup to improve metadata visibility.
We’re currently using Next.js 14 for the frontend. Any insights or debugging tips are greatly appreciated!
Thanks in advance!
I am trying to create an empty metadata object and set the HDRGainMapHeadroom to ***. However, the final returned mutableMetadata contains neither HDRGainMap:HDRGainMapVersion nor HDRGainMap:HDRGainMapHeadroom, while iio:hasXMP exists.
Why? Is the reason that the HDRGainMap namespace is private?
func createHDRGainMapMetadata(version: Int, headroom: Double) -> CGImageMetadata? {
// Create a mutable metadata object
let mutableMetadata = CGImageMetadataCreateMutable()
// Define the namespace for HDRGainMap
let namespace = "HDRGainMap"
let xmpKeyPath = "iio:hasXMP"
let xmpValue = String(true)
// Set the HDRGainMapVersion item
let versionKeyPath = "\(namespace):HDRGainMapVersion"
let versionValue = String(version)
// Set the hasXMP value
let xmpSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, xmpKeyPath as CFString, xmpValue as CFString)
if xmpSetResult == false {
print("Failed to set xmp")
}
// Set the version value
let versionSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, versionKeyPath as CFString, versionValue as CFString)
if versionSetResult == false {
print("Failed to set HDRGainMapVersion")
}
// Set the HDRGainMapHeadroom item
let headroomKeyPath = "\(namespace):HDRGainMapHeadroom"
let headroomValue = String(headroom)
// Set the headroom value
let headroomSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, headroomKeyPath as CFString, headroomValue as CFString)
if headroomSetResult == false {
print("Failed to set HDRGainMapHeadroom")
}
return mutableMetadata
}
I'm trying to make an app that is able to quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio, using only on-device resources to determine if it might be a scam caller.
It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces and then refers to a known database of reported scam imagery. For audio scam calls, we rely on known techniques for detecting voice modulation in frequency and/or amplitude. Each video and/or audio result is relayed via a notification banner and recorded in-app. Crucially, if the results are uncertain, users have the option to submit them to a global collaborative cloud database for investigative teams: 60-second audio snippets, or a series of images in which faces were detected (the equivalent of 60 seconds).
In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users.
However, we have not been able to find a method that does this, and we have found no technical guidance describing how it could be done.
Please assist.
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC.
We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow.
I re-wrote the last part like this:
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
This doesn't cause any warnings.
An issue that might be related has been reported here.
I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
Hi, I'm working on a job.
I need to transfer files (photos in RAW format) from an SD card to an iPhone, using my iPad Pro.
I need to move the photos to the Files app first, then import them into Photos.
Now, if I do this job for a few files, there is no problem.
If I copy more than 300 files, the job is not completed.
When I select the RAW files in the Files app and tap Share > Save to Photos, there's no progress bar and no way to confirm that the copy succeeded. At least 298 photos are imported, without any error or warning.
Has anyone found a trick for this?
Is it possible to copy from Files to Files, or import from Files to Photos, with a progress bar displayed?
It's very strange that something this basic doesn't work.
Please help me.
Thank you very much in advance.
I extracted the gain map info from an image using
let url = Bundle.main.url(forResource: "IMG_1181", withExtension: "HEIC")
let source = CGImageSourceCreateWithURL(url! as CFURL, nil)
let portraitData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source!, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as! [AnyHashable : Any]
let metaData = portraitData[kCGImageAuxiliaryDataInfoMetadata] as! CGImageMetadata
Then I printed all the metadata tags
func printMetadataProperties(from metadata: CGImageMetadata) {
guard let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
return
}
for tag in tags {
if let prefix = CGImageMetadataTagCopyPrefix(tag) as String?,
let namespace = CGImageMetadataTagCopyNamespace(tag) as String?,
let key = CGImageMetadataTagCopyName(tag) as String?,
let value = CGImageMetadataTagCopyValue(tag) {
print("Namespace: \(namespace), Key: \(key), Prefix: \(prefix), value: \(value)")
}
}
}
//Namespace: http://ns.apple.com/ImageIO/1.0/, Key: hasXMP, Prefix: iio, value: True
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapVersion, Prefix: HDRGainMap, value: 131072
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapHeadroom, Prefix: HDRGainMap, value: 3.586325
I want to create a new CGImageMetadata and tags, but when it comes to the HDR tags, adding them to the metadata always fails.
let metadata = CGImageMetadataCreateMutable()
let tag = CGImageMetadataTagCreate(
"http://ns.apple.com/HDRGainMap/1.0/" as CFString,
"HDRGainMap" as CFString,
"HDRGainMapHeadroom" as CFString,
.default,
3.56 as CFNumber
)
let path = "HDRGainMap:HDRGainMapHeadroom" as CFString
let success = CGImageMetadataSetTagWithPath(metadata, nil, path, tag!) // always false
Setting hasXMP works fine.
Is the HDRGainMap namespace private to Apple?
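In case it helps, the next thing I plan to try is registering the namespace and prefix first (my assumption: CGImageMetadataSetTagWithPath may not be able to resolve a prefix it doesn't know about):
import ImageIO

let mutable = CGImageMetadataCreateMutable()
var registerError: Unmanaged<CFError>?
// Associate the "HDRGainMap" prefix with its namespace URI before setting tags by path.
let registered = CGImageMetadataRegisterNamespaceForPrefix(
    mutable,
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    &registerError
)
print("namespace registered:", registered)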
Hi, I have a problem that I can't solve, and I hope you can help me:
When I switch tabs or scroll down, the background color of the tab bar changes automatically.
This happened to me with Xcode 16.0.
Can you help me, please?
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:
func processHorizonImage(_ ciImage: CIImage) async {
let request = DetectHorizonRequest()
do {
let result = try await request.perform(on: ciImage)
print(result)
} catch {
print(error)
}
}
After calling the perform method, the result is nil. To ensure the request is set up correctly, I have verified the following:
The input CIImage is valid and contains a visible horizon.
No errors are being thrown.
The relevant frameworks are properly imported.
Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue.
Thank you for your support!
This is the image
Hello, during the development of my website, I discovered that when there are numerous or large WebP images on screen, the screen keeps flickering and the phone heats up. The page returns to normal after converting the images to PNG. The issue seems to be caused by the iOS 18 update. Could you please assist me in taking a look?
PNG (it works)
https://static.xdbbtswu.com/bbt_of/assets/test/good/#/
WebP (it doesn't work)
https://static.xdbbtswu.com/bbt_of/assets/test/nogood/#/
I am a developer working on iOS apps.
In a demo, I planned to replace the local PNG images with HEIC versions, but the actual test results showed abnormalities on this device, while the other test devices displayed them normally.
The HEIC images were converted with the built-in image conversion function on the Mac. I tested multiple HEIC images, but none of them were displayed and the image information returned nil; PNG images display normally.
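For what it's worth, this is the kind of decode check I ran ("sample" is a placeholder resource name):
import UIKit

if let url = Bundle.main.url(forResource: "sample", withExtension: "heic"),
   let data = try? Data(contentsOf: url) {
    // nil here means the HEIC data itself fails to decode on this device,
    // rather than something going wrong at display time.
    print("UIImage(data:) ->", UIImage(data: data) as Any)
}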
Device information:
Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which includes filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I’ve noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more advanced manner.
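For reference, this is how I'm applying them (a minimal sketch; inputImage stands in for my actual CIImage):
import CoreImage

func autoEnhanced(_ inputImage: CIImage) -> CIImage {
    var result = inputImage
    // autoAdjustmentFilters() returns preconfigured filters (red-eye, face
    // balance, vibrance, tone curve, ...) with their parameters already set.
    for filter in inputImage.autoAdjustmentFilters() {
        filter.setValue(result, forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            result = output
        }
    }
    return result
}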
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
Apps like Viber and WhatsApp are not able to recognize/upload photos after installing 18.01.