Hi,
I'm using Core Graphics to load a .DNG photo shot by a Leica Q3 camera.
The photo is shot in portrait, however the embedded preview is rotated 90 degrees to landscape.
I load the photo like this:
let options = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
let source = CGImageSourceCreateWithData(data as CFData, nil)
let cgimage = CGImageSourceCreateImageAtIndex(source, 0, options)
let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString : Any]
When doing this, I can see that the orientation property is 1, indicating 'Up', which it isn't.
If I don't specify the kCGImageSourceDecodeToHDR option (essentially passing nil for options), the orientation property is 8 (rotated 90 degrees).
What puzzles me is that a change to the CGImageSourceCreateImageAtIndex call can influence the later call to CGImageSourceCopyPropertiesAtIndex. I would expect these to work independently?
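For reference, one way to check whether the options dictionary alone changes what ImageIO reports (a diagnostic sketch of mine, not from the original post; kCGImagePropertyOrientation is the standard ImageIO key) is to read the properties twice, with and without the HDR options:

import ImageIO

// Diagnostic sketch: read the reported orientation with and without the
// HDR decode options, to see whether the options dictionary alone changes it.
func reportedOrientations(for data: CFData) -> (plain: UInt32?, hdr: UInt32?) {
    guard let source = CGImageSourceCreateWithData(data, nil) else { return (nil, nil) }

    func orientation(_ options: CFDictionary?) -> UInt32? {
        let props = CGImageSourceCopyPropertiesAtIndex(source, 0, options) as? [CFString: Any]
        return props?[kCGImagePropertyOrientation] as? UInt32
    }

    let hdrOptions = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
    return (plain: orientation(nil), hdr: orientation(hdrOptions))
}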
Cheers
Thomas
Hi,
Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass.
However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size.
Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries?
Any insights or alternative approaches would be greatly appreciated.
Thank you!
After the Mac application is launched, this error is logged:
CGSWindowShmemCreateWithPort failed on port 0
and when the application quits:
No error handler for XPC error: Connection invalid
These appear with Xcode 15.4 but not with 12.4, as reported by Steve4442 in "Can someone explain this message" https://Forums.Developer.Apple.com/Forums/Thread/727803
The code doesn't use "windowNumbersWithOptions".
Can I ignore these log messages?
Hello!
I’m experiencing a crash in my iOS/iPadOS app related to a CALayer rendering process. The crash occurs when attempting to render a UIImage on a background thread. The crashes are occurring in our production app, and while we can monitor them through Crashlytics, we are unable to reproduce the issue in our development environment.
Relevant Code
I have a custom view controller that handles rendering CALayers onto images. The method below creates a CALayer on the main thread and then starts a detached task to render that CALayer into a UIImage. The whole approach comes from this StackOverflow post: https://stackoverflow.com/a/77834613/9202699
Here are key parts of my implementation:
class MyViewController: UIViewController {
    @MainActor
    func renderToUIImage(size: CGSize, itemsToDraw: [MyDrawingItem], transform: CGAffineTransform) async -> UIImage? {
        // Create the CALayer and add it to the view.
        CATransaction.begin()
        let customLayer = MyDrawingLayer()
        customLayer.setupContent(itemsToDraw: itemsToDraw)
        // Position the frame off-screen to keep it hidden.
        customLayer.frame = CGRect(
            origin: CGPoint(x: -100 - size.width, y: -100 - size.height),
            size: size)
        customLayer.masksToBounds = true
        customLayer.drawsAsynchronously = true
        view.layer.addSublayer(customLayer)
        CATransaction.commit()

        // Render the CALayer to a UIImage on a background thread.
        let image = await Task.detached {
            customLayer.setNeedsDisplay()
            let renderer = UIGraphicsImageRenderer(size: size)
            let image = renderer.image { // CRASH happens on this line
                let cgContext = $0.cgContext
                cgContext.saveGState()
                cgContext.concatenate(transform)
                customLayer.render(in: cgContext)
                cgContext.restoreGState()
            }
            return image
        }.value

        // Remove the CALayer from the view.
        CATransaction.begin()
        customLayer.removeFromSuperlayer()
        CATransaction.commit()
        return image
    }
}

class MyDrawingLayer: CALayer {
    var itemsToDraw: [MyDrawingItem] = []

    func setupContent(itemsToDraw: [MyDrawingItem]) {
        self.itemsToDraw = itemsToDraw
    }

    override func draw(in ctx: CGContext) {
        for item in itemsToDraw {
            // Render the item into the context (example pseudo-code).
            // All items are thread-safe to use.
            // Items to draw may include CGPath, CGImage, UIImage, NSAttributedString, etc.
            item.draw(in: ctx)
        }
    }
}
Crash Log
The crash occurs at the following location:
Crashed: com.apple.root.default-qos.cooperative
0 MyApp 0x5cb300 closure #1 in closure #1 in MyViewController.renderToUIImage(size: CGSize, itemsToDraw: [MyDrawingItem], transform: CGAffineTransform) + 4313002752 (<compiler-generated>:4313002752)
1 MyApp 0x5cb300 closure #1 in closure #1 in MyViewController.renderToUIImage(size: CGSize, itemsToDraw: [MyDrawingItem], transform: CGAffineTransform) + 4313002752 (<compiler-generated>:4313002752)
2 MyApp 0x1a4578 AnyModifier.modified(for:) + 4308649336 (<compiler-generated>:4308649336)
3 MyApp 0x7b4e64 thunk for @escaping @callee_guaranteed (@guaranteed UIGraphicsPDFRendererContext) -> () + 4315008612 (<compiler-generated>:4315008612)
4 UIKitCore 0x1489c0 -[UIGraphicsRenderer runDrawingActions:completionActions:format:error:] + 324
5 UIKitCore 0x14884c -[UIGraphicsRenderer runDrawingActions:completionActions:error:] + 92
6 UIKitCore 0x148778 -[UIGraphicsImageRenderer imageWithActions:] + 184
7 MyApp 0x5cb1c0 closure #1 in MyViewController.renderToUIImage(size: CGSize, itemsToDraw: [MyDrawingItem], transform: CGAffineTransform) + 100 (FileName.swift:100)
8 libswift_Concurrency.dylib 0x60f5c swift::runJobInEstablishedExecutorContext(swift::Job*) + 252
9 libswift_Concurrency.dylib 0x62514 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 144
10 libdispatch.dylib 0x15ec0 _dispatch_root_queue_drain + 392
11 libdispatch.dylib 0x166c4 _dispatch_worker_thread2 + 156
12 libsystem_pthread.dylib 0x3644 _pthread_wqthread + 228
13 libsystem_pthread.dylib 0x1474 start_wqthread + 8
Questions
Is it safe to run UIGraphicsImageRenderer.image on the background thread?
Given that I want to leverage GPU rendering, what are some best practices for rendering images off the main thread while ensuring stability?
Are there alternatives to using UIGraphicsImageRenderer for background rendering that can still take advantage of GPU rendering?
It is particularly interesting that the crash logs indicate the error may be related to UIGraphicsPDFRendererContext (crash log line number 3). It would be very helpful if someone could explain the connection between starting and drawing on a UIGraphicsImageRenderer and UIGraphicsPDFRendererContext.
Any insights or guidance on this issue would be greatly appreciated. Thanks!!!
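For anyone comparing notes: one variant sometimes suggested (a sketch under my own assumptions, not a confirmed fix) is to skip CALayer entirely on the background thread and draw the items straight into the renderer's context, since UIGraphicsImageRenderer is generally described as usable off the main thread while CALayer is not:

import UIKit

// Sketch: draw the (thread-safe) items directly, avoiding CALayer.render(in:)
// off the main thread. Assumes MyDrawingItem can draw into a plain CGContext.
func renderImageDirectly(size: CGSize,
                         itemsToDraw: [MyDrawingItem],
                         transform: CGAffineTransform) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { rendererContext in
        let cgContext = rendererContext.cgContext
        cgContext.saveGState()
        cgContext.concatenate(transform)
        for item in itemsToDraw {
            item.draw(in: cgContext) // same draw call MyDrawingLayer uses
        }
        cgContext.restoreGState()
    }
}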
I am trying to simulate a paste command, but it doesn't paste. It worked at one point with the same code and is now causing issues.
My code looks like this:
func simulatePaste() {
    guard let source = CGEventSource(stateID: .hidSystemState) else {
        print("Failed to create event source")
        return
    }
    let keyDown = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: true)
    let keyUp = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: false)
    keyDown?.flags = .maskCommand
    keyUp?.flags = .maskCommand
    keyDown?.post(tap: .cgAnnotatedSessionEventTap)
    keyUp?.post(tap: .cgAnnotatedSessionEventTap)
    print("Simulated Cmd + V")
}
I know that there are some issues around permissions, so in my Info.plist I have this:
<string>NSApplication</string>
<key>NSAppleEventsUsageDescription</key>
<string>This app requires permission to send keyboard input for pasting from the clipboard.</string>
I have also disabled sandbox. It does ask me if I want to give the app permissions but after approving it, it still doesn't paste.
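One thing worth ruling out (my assumption, not from the original post): posting keyboard events with CGEvent is gated by the Accessibility permission (System Settings > Privacy & Security > Accessibility), which is separate from the Apple Events permission that NSAppleEventsUsageDescription covers. A minimal check that also prompts the user:

import ApplicationServices

// Returns true if the app is trusted for Accessibility; prompts if it isn't.
func ensureAccessibilityPermission() -> Bool {
    let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
    return AXIsProcessTrustedWithOptions(options)
}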
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing some step.
I tried to:
convert my depth map into a pixel buffer
create an image destination ref and add the image there
add the auxData (depth map dict)
This is the output:
There is some black space there, and my RGB image's colour changes.
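For reference, here is a minimal sketch of those three steps using AVDepthData (my assumption; if the depth map is a plain grayscale image, an AVDepthData would first have to be created from a dictionary representation). Letting AVDepthData build the auxiliary dictionary keeps whatever calibration metadata it carries, rather than hand-assembling the dict:

import AVFoundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: write an RGB image plus a depth auxiliary image into a single HEIC.
// `depthData` and `mainImage` are assumed to already exist.
func writeImage(_ mainImage: CGImage, depth depthData: AVDepthData, to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 1, nil) else { return false }

    CGImageDestinationAddImage(destination, mainImage, nil)

    // Let AVDepthData produce the auxiliary data dictionary and its type,
    // instead of assembling the depth dict by hand.
    var auxDataType: NSString?
    if let auxInfo = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxDataType),
       let auxDataType {
        CGImageDestinationAddAuxiliaryDataInfo(destination, auxDataType as CFString, auxInfo as CFDictionary)
    }
    return CGImageDestinationFinalize(destination)
}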
Problem Description
We are developing an app for iOS and iPadOS that involves extensive custom drawing of paths, shapes, text, etc. To improve drawing and rendering speed, we use CARenderer to generate cached images (CGImage) on a background thread. We adopted this approach based on this StackOverflow post: https://stackoverflow.com/a/75497329/9202699.
However, we are experiencing frequent crashes in our production environment that we can hardly reproduce in our development environment. Despite months of debugging and seeking support from DTS and the Apple Feedback platform, we have not been able to fully resolve this issue. Our recent crash reports indicate that the crashes occur when calling CATransaction.commit().
We suspect that CATransaction may not be functioning properly outside the main thread. However, based on feedback from the Apple Feedback platform, we were advised to use CATransaction.begin() and CATransaction.commit() on a background thread.
If anyone has any insights, we would greatly appreciate it.
Code Sample
The line CATransaction.commit() is causing the crash: [EXC_BREAKPOINT: com.apple.root.****-qos.cooperative]
private let transactionLock = NSLock() // to ensure one transaction at a time
private let device = MTLCreateSystemDefaultDevice()!

@inline(never)
static func drawOnCGImageWithCARenderer(
    layerRect: CGRect,
    itemsToDraw: [ItemsToDraw]
) -> CGImage? {
    // We have encapsulated everything related to CALayer and its
    // associated creations and manipulations within CATransaction
    // as suggested by engineers from the Apple Feedback Portal.
    transactionLock.lock()
    CATransaction.begin()

    // Create the root layer.
    let layer = CALayer()
    layer.bounds = layerRect
    layer.masksToBounds = true

    // Add one sublayer for each item to draw.
    itemsToDraw.forEach { item in
        // We have thousands or hundreds of thousands of drawing items to add.
        // Each drawing item may produce a CALayer, CAShapeLayer or CATextLayer.
        // This is also why we want to utilise CARenderer to leverage GPU rendering.
        layer.addSublayer(
            item.createCALayerOrCATextLayerOrCAShapeLayer()
        )
    }

    // Create the MTLTexture and CARenderer.
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: Int(layer.frame.size.width),
        height: Int(layer.frame.size.height),
        mipmapped: false
    )
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    let renderer = CARenderer(mtlTexture: texture)
    renderer.bounds = layer.frame
    renderer.layer = layer

    /* ********************************************************* */
    // From our crash report, this is where the crash happens.
    CATransaction.commit()
    /* ********************************************************* */

    transactionLock.unlock()

    // Render the layers onto the MTLTexture using CARenderer.
    renderer.beginFrame(atTime: 0, timeStamp: nil)
    renderer.render()
    renderer.endFrame()

    // Read the MTLTexture into a CIImage.
    guard
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
        let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace]) else {
        return nil
    }

    // Convert the CIImage to a CGImage.
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}
Description
We are developing an app for iOS and iPadOS that involves extensive custom drawing of paths, shapes, text, etc. To improve drawing and rendering speed, we use CARenderer to generate cached images (CGImage) on a background thread. We adopted this approach based on this StackOverflow post: https://stackoverflow.com/a/75497329/9202699.
However, we are experiencing frequent crashes in our production environment that we cannot reproduce in our development environment. Despite months of debugging and seeking support from DTS and the Apple Feedback platform, we have not been able to fully resolve this issue. Our recent crash reports indicate that the crashes occur when calling CATransaction.commit().
Crash traceback
The method names in this traceback are mapped to those in the code sample below. The app name has been masked.
Crashed: com.apple.root.user-initiated-qos.cooperative
0 MyApp 0x887408 specialized static CAUtils.commitCATransaction() + 4340151304 (<compiler-generated>:4340151304)
1 MyApp 0x887408 specialized static CAUtils.commitCATransaction() + 4340151304 (<compiler-generated>:4340151304)
2 MyApp 0x8874a4 specialized static CAUtils.addDrawingItemsToRenderer(xxx) + 250 (CAUtils.swift:250)
3 MyApp 0x887710 specialized static CAUtils.drawOnCGImageWithCARenderer(xxx) + 267 (CAUtils.swift:267)
4 MyApp 0x8878c0 specialized static CAUtils.drawOnCGImageWithCARendererWithRetry(xxx) + 315 (CAUtils.swift:315)
5 MyApp 0x736294 XXXManager.generateCGImages(xxx) + 570 (XXXManager.swift:570)
6 MyApp 0x73404c closure #1 in XXXManager.updateCachedCGImages(xxx) + 427 (XXXManager.swift:427)
7 libswift_Concurrency.dylib 0x61104 swift::runJobInEstablishedExecutorContext(swift::Job*) + 252
8 libswift_Concurrency.dylib 0x62514 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 144
9 libdispatch.dylib 0x15d8c _dispatch_root_queue_drain + 392
10 libdispatch.dylib 0x16590 _dispatch_worker_thread2 + 156
11 libsystem_pthread.dylib 0x4c40 _pthread_wqthread + 228
12 libsystem_pthread.dylib 0x1488 start_wqthread + 8
Code Sample
Below is a sample of our code. While the complete snippet is too long, the issue occurs in addDrawingItemsToRenderer. Please refer to the other methods for completeness and reference purposes.
private let transactionLock = NSLock()
private let deviceLock = NSLock()
private let device = MTLCreateSystemDefaultDevice()!

/// This is the method we call from outside.
@inline(never)
static func drawOnCGImageWithCARenderer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> CGImage? {
    guard
        let (texture, renderer) = addDrawingItemsToRenderer(
            layerRect: layerRect,
            drawingItems: drawingItems
        ) else {
        return nil
    }

    renderer.beginFrame(atTime: 0, timeStamp: nil)
    renderer.render()
    renderer.endFrame()

    guard
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
        let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace]) else {
        return nil
    }

    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}
/// This is the method where the crash happens.
@inline(never)
fileprivate static func addDrawingItemsToRenderer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> (MTLTexture, CARenderer)? {
    // We have encapsulated everything related to CALayer and its
    // associated creations and manipulations within CATransaction
    // as suggested by engineers from the Apple Feedback Portal.
    beginCATransaction()
    defer {
        commitCATransaction() // The crash happens here
    }
    let (layer, imageWidth, imageHeight) =
        addDrawingItemsToLayer(layerRect: layerRect, drawingItems: drawingItems)
    return createTextureAndRenderer(
        layer: layer,
        imageWidth: imageWidth,
        imageHeight: imageHeight
    )
}

// Below are all internal methods. We have split the method into very
// granular parts and marked them as @inline(never) to prevent the
// compiler from inlining our code, which may otherwise obscure
// traceback information in our crash reports.
@inline(never)
fileprivate static func beginCATransaction() {
    transactionLock.lock()
    CATransaction.begin()
}

@inline(never)
fileprivate static func commitCATransaction() {
    // From our crash report, we believe the crash happens on this line.
    CATransaction.commit()
    // It is unlikely that the lock causes the crash, as we added it only recently
    // to ensure that there is only one transaction on our background thread.
    // After we added this lock, the crash rate indeed lowered, but it still
    // did not fully disappear.
    transactionLock.unlock()
}
// --------------------------------
// The methods below are provided for reference and completeness. While
// they may have issues, they do not appear in our crash reports as
// frequently as the one caused by `CATransaction.commit()`.
@inline(never)
fileprivate static func addDrawingItemsToLayer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> (layer: CALayer, imageWidth: CGFloat, imageHeight: CGFloat) {
    let layer = CALayer()
    layer.isGeometryFlipped = SharedAppUtils.isIOS
    layer.anchorPoint = CGPoint.zero
    layer.bounds = layerRect
    layer.masksToBounds = true

    for drawingItem in drawingItems {
        // We have thousands or hundreds of thousands of drawing items to add.
        // Each drawing item may produce a CALayer, CAShapeLayer or CATextLayer.
        // This is also why we want to utilise CARenderer to leverage GPU rendering.
        let sublayerForDrawingItem =
            drawingItem.createCALayerOrCATextLayerOrCAShapeLayer()
        layer.addSublayer(sublayerForDrawingItem)
    }

    let imageWidth = max(1, layer.frame.size.width * UIScreen.main.scale)
    let imageHeight = max(1, layer.frame.size.height * UIScreen.main.scale)
    layer.transform = CATransform3DMakeScale(UIScreen.main.scale, UIScreen.main.scale, 1)
    layer.frame = .init(origin: .zero, size: .init(width: imageWidth, height: imageHeight))
    return (layer, imageWidth, imageHeight)
}

@inline(never)
fileprivate static func createTextureAndRenderer(
    layer: CALayer,
    imageWidth: CGFloat,
    imageHeight: CGFloat
) -> (MTLTexture, CARenderer)? {
    deviceLock.lock()
    defer {
        deviceLock.unlock()
    }

    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: Int(imageWidth),
        height: Int(imageHeight),
        mipmapped: false
    )
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    guard
        let texture = device.makeTexture(descriptor: textureDescriptor) else {
        return nil
    }

    let renderer = CARenderer(mtlTexture: texture)
    renderer.bounds = layer.frame
    renderer.layer = layer
    return (texture, renderer)
}
I have a CoreImage pipeline and one of my steps is to rotate my image about the origin (bottom left corner) and then translate it. I'm not seeing the behaviour I'm expecting, and I think my problem is in how I'm combining these two steps.
As an example, I start with an identity transform
(lldb) po transform333
▿ CGAffineTransform
- a : 1.0
- b : 0.0
- c : 0.0
- d : 1.0
- tx : 0.0
- ty : 0.0
I then rotate 1.57 radians (approx. 90 degrees, CCW)
transform333 = transform333.rotated(by: 1.57)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : 0.0
- ty : 0.0
I understand the current contents of the transform.
But then I translate by 10, 10:
(lldb) po transform333.translatedBy(x: 10, y: 10)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : -9.992033562211013
- ty : 10.007960096425679
I was expecting tx and ty to be 10 and 10.
I have noticed that when I reverse the order of these operations, the transform contents look correct. So I'll most likely just perform the steps in what feels to me like the incorrect order.
Is anyone willing/able to point me to an explanation of why the steps I'm performing are giving me these results?
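For what it's worth, my reading of the CGAffineTransform docs (the sketch below is mine, not from the post): translatedBy(x:y:) prepends the translation, so the translation is expressed in the transform's already-rotated local coordinate system. Rotating the vector (10, 10) by ~90° CCW gives roughly (-10, 10), which is exactly the tx/ty above. A minimal sketch of the two orders:

import CoreGraphics

let rotation = CGAffineTransform.identity.rotated(by: 1.57)

// Translation applied in the rotated (local) coordinate system:
let localTranslate = rotation.translatedBy(x: 10, y: 10)
// localTranslate.tx ≈ -9.992, localTranslate.ty ≈ 10.008

// Translation applied in the base coordinate system, after the rotation:
let baseTranslate = rotation.concatenating(CGAffineTransform(translationX: 10, y: 10))
// baseTranslate.tx == 10, baseTranslate.ty == 10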
thanks,
mike
Hi everyone, I'm in the process of delving deeper into Core Graphics with SwiftUI, but I am at a roadblock.
First, I would like to ask: what are some good resources for learning Core Graphics?
Secondly:
I currently have a circle view, and I want to make it modular so that any circle can be connected directly to another given circle by a line. How can I do this?
For example, I want my circles to represent nodes that can connect by lines to other related nodes.
Thanks in advance.
Here is my code for the circle view:
struct AnimatedCircleView: View { // struct declaration assumed; it was missing from the original snippet
    @State private var circleProgress: CGFloat = 0
    let timer = Timer.publish(every: 0.016, on: .main, in: .common).autoconnect()
    private let animationDuration: TimeInterval = 1.5
    @Binding var startPoint: CGPoint
    @Binding var endPoint: CGPoint

    var body: some View {
        GeometryReader { geometry in
            Canvas { context, size in
                // Circle parameters
                let circleSize: CGFloat = 50
                let circleOrigin = CGPoint(
                    x: size.width / 4,
                    y: size.height / 2 - circleSize / 2
                )
                let circleRect = CGRect(
                    origin: circleOrigin,
                    size: CGSize(width: circleSize, height: circleSize)
                )
                let circleCenter = CGPoint(
                    x: circleOrigin.x + circleSize / 2,
                    y: circleOrigin.y + circleSize / 2
                )
                // Animate circle creation
                var circlePath = Path()
                circlePath.addArc(
                    center: circleCenter,
                    radius: circleSize / 2,
                    startAngle: .degrees(0),
                    endAngle: .degrees(360 * circleProgress),
                    clockwise: false
                )
                context.addFilter(.shadow(color: .white.opacity(0.6), radius: 5, x: 1, y: 1)) // Add white shadow
                context.stroke(
                    circlePath,
                    with: .linearGradient(
                        Gradient(colors: [.purple, .white]),
                        startPoint: circleRect.origin,
                        endPoint: CGPoint(x: circleRect.maxX, y: circleRect.maxY)
                    ),
                    lineWidth: 5
                )
            }
            .frame(width: 300, height: 150)
            .onReceive(timer) { _ in
                // Update circle progress
                let progressChange = 0.02 / animationDuration
                if circleProgress < 1.0 {
                    circleProgress = min(circleProgress + progressChange, 1.0)
                } else {
                    circleProgress = 0.0 // Reset the circle to repeat the animation
                }
                // Get the starting and ending points of the Canvas view
                startPoint = CGPoint(x: geometry.frame(in: .global).minX, y: geometry.frame(in: .global).minY)
                endPoint = CGPoint(x: geometry.frame(in: .global).maxX, y: geometry.frame(in: .global).maxY)
                // Print the points for debugging
                print("Start Point: \(startPoint.x), \(startPoint.y)")
                print("End Point: \(endPoint.x), \(endPoint.y)")
            }
        }
        .frame(width: 300, height: 150)
    }
}
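As for connecting circles, one possible direction (a minimal sketch with made-up node positions, not a drop-in answer): have each node view publish its center point, then let a parent view stroke a Path between the two centers:

import SwiftUI

struct NodeLinkView: View {
    // Hypothetical node centers; in practice these would come from the
    // @Binding points each circle view updates via its GeometryReader.
    let nodeA = CGPoint(x: 60, y: 75)
    let nodeB = CGPoint(x: 240, y: 75)

    var body: some View {
        Canvas { context, _ in
            var link = Path()
            link.move(to: nodeA)
            link.addLine(to: nodeB)
            context.stroke(link, with: .color(.purple), lineWidth: 2)
        }
        .frame(width: 300, height: 150)
    }
}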
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import Photos
import CoreImage
import UIKit

struct ContentView: View {
    let context = CIContext()
    let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!

    var body: some View {
        VStack(spacing: 100) {
            Button("Save Photo From CGImage/UIImage") {
                savePhotoFromUIImage()
            }
            Button("Save Photo From CIImage") {
                savePhotoDirectFromCIImage()
            }
        }.padding(60)
    }

    // Convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF.
    private func savePhotoFromUIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
            let uiImage = UIImage(cgImage: outputCGImage)
            if let heicData = uiImage.heicData() {
                saveHEIFPhotoToLibrary(imageData: heicData)
            } else {
                print("Failed to convert UIImage to HEIC")
            }
        }
    }

    // Convert RAW with CIRAWFilter to CIImage, then to HEIF.
    private func savePhotoDirectFromCIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            do {
                let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
                saveHEIFPhotoToLibrary(imageData: heif)
            } catch {
                print("Failed to get HEIF representation from CIContext")
            }
        }
    }

    private func processRAW(url: URL) -> CIImage? {
        guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
        coreRawFilter.extendedDynamicRangeAmount = 2.0 // The issue persists whether this is not set, set to 0, or set to, say, 2.0.
        guard let ciImage = coreRawFilter.outputImage else { return nil }
        return ciImage
    }

    private func saveHEIFPhotoToLibrary(imageData: Data) {
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            let options = PHAssetResourceCreationOptions()
            creationRequest.addResource(with: .photo, data: imageData, options: options)
        }) { success, error in
            if let error = error {
                print("Error saving photo: \(error.localizedDescription)")
            } else {
                print("Photo saved.")
            }
        }
    }
}
macOS 15.2, MBP M1, built-in display.
The following code produces a line outside the bounds of my clipping region when drawing a clockwise arc to a CGLayer:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
Drawing other shapes such as rects or ellipses doesn't cause a problem.
I can work around the issue by bringing the path back to the start of the arc:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
// add a second arc back to the start
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -endRads, -startRads, 0);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
But this does appear to be a bug.
I’m trying to create a CGContext that matches my screen using the following code
let context = CGContext(
    data: pixelBuffer,
    width: pixelWidth, // 3456
    height: pixelHeight, // 2234
    bitsPerComponent: 10, // PixelEncoding = "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB"
    bytesPerRow: bytesPerRow, // 13824
    space: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    bitmapInfo: CGImageAlphaInfo.none.rawValue
        | CGImagePixelFormatInfo.RGBCIF10.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
But it fails with an error
CGBitmapContextCreate: unsupported parameter combination:
RGB
10 bits/component, integer
13824 bytes/row
kCGImageAlphaNone
kCGImageByteOrder16Little
kCGImagePixelFormatRGBCIF10
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
Why is it unsupported if it matches the 6th option? (32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little)
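For what it's worth, the arithmetic does seem to match the sixth combination, so the rejection presumably comes from another parameter (a quick sanity check; the idea that the extended color space might be the culprit is my guess, not confirmed):

let pixelWidth = 3456
let bytesPerRow = 13824
let bitsPerPixel = bytesPerRow / pixelWidth * 8 // = 32, i.e. 32 bits per pixel at 10 bits per component

// One variable left to test (an assumption on my part): a non-extended color
// space such as CGColorSpace(name: CGColorSpace.sRGB), since the valid-parameter
// list is stated for the RGB color space model in general.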
My app reports a lot of crashes from 18.2 users.
I have been able to narrow down the issue to this line of code:
CGImageDestinationFinalize(imageDestination)
The error is Thread 93: EXC_BAD_ACCESS (code=1, address=0x146318000)
But I have no idea why this suddenly started to crash.
Here is the code of the function:
private func estimateSizeUsingThumbnailMethod(fromImageURL url: URL, imageSettings: ImageSettings) -> (Int, Int) {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions),
          let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let imageWidth = imageProperties[kCGImagePropertyPixelWidth] as? CGFloat,
          let imageHeight = imageProperties[kCGImagePropertyPixelHeight] as? CGFloat else {
        return (0, 0)
    }

    let maxImageSize = max(imageWidth, imageHeight)
    let thumbMaxSize = min(2400, maxImageSize) // Use original size if possible, but not if larger than 2400; in that case we'll extrapolate from the thumbnail.

    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: thumbMaxSize as CFNumber,
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        DLog("CGImage thumb creation error")
        return (0, 0)
    }

    let data = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(data, UTType.jpeg.identifier as CFString, 1, nil) else {
        DLog("CGImage destination creation error")
        return (0, 0)
    }

    let destinationProperties = [
        kCGImageDestinationLossyCompressionQuality: imageSettings.quality.compressionRatio() // Set the JPEG compression ratio
    ] as CFDictionary

    CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
    CGImageDestinationFinalize(imageDestination) // <----- CRASHES HERE with EXC_BAD_ACCESS
    ...
}
So far, I'm stuck. Any idea that could help would be greatly appreciated, as I'm scared that this crash will propagate on the official release of 18.2
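One defensive variant that might be worth trying (purely a guess on my part; I have not verified it against this crash): wrap the ImageIO work in an explicit autoreleasepool so large intermediate buffers are released promptly, especially if this method runs many times or off the main thread:

// Same flow as above, with the thumbnail/destination work inside autoreleasepool.
// `source`, `downsampleOptions` and `destinationProperties` are as in the post.
let jpegData: Data? = autoreleasepool {
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    let data = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(
        data, UTType.jpeg.identifier as CFString, 1, nil) else { return nil }
    CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
    guard CGImageDestinationFinalize(imageDestination) else { return nil }
    return data as Data
}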
I'm trying to convert a view to vector graphics (PDF) using ImageRenderer. But the output has ALL coordinates rounded to integer values, which is not OK: some very small circles become ellipses, squares are no longer square, and very detailed or small shapes are distorted by the rounding. Is there an option to switch off the rounding?
I'm trying to generate a PDF from a view. The view is nice: there is a lot of transparency and many gradients (circular and linear). But if I use ImageRenderer the way the documentation suggests, all transparency and gradients disappear. Is this a bug or a feature? Is there a way to generate vector graphics from a view with transparency and gradients? PDF supports those features, so why not?
I'm looking at a case where a handler for NSWindowDidBecomeMain gets the NSWindow* from the notification object and verifies that window.isVisible == YES, window.windowNumber > 0 and window.screen != nil. However, window.windowNumber is missing from the array [NSWindow windowNumbersWithOptions: NSWindowNumberListAllSpaces] and from CGWindowListCopyWindowInfo( kCGWindowListOptionOnScreenOnly, kCGNullWindowID ), how can that be?
The window number is in the array returned by CGWindowListCopyWindowInfo( kCGWindowListOptionAll, kCGNullWindowID ).
I'm seeing this issue in macOS 15, maybe 14, but not 13.
I would like to implement the same kind of behavior as the Dropover app, specifically being able to perform a specific action when files are dragged (such as opening a window, for example).
I have written the code below to capture the mouse coordinates, drag, drop, and click globally. However, I don't know how to determine the nature of what is being dropped. Do you have any ideas on how to detect the nature of what is being dragged outside the application's scope?
Here is my current code:
import SwiftUI
import CoreGraphics

struct ContentView: View {
    @State private var mouseX: CGFloat = 0.0
    @State private var mouseY: CGFloat = 0.0
    @State private var isClicked: Bool = false
    @State private var isDragging: Bool = false
    private var mouseTracker = MouseTracker.shared

    var body: some View {
        VStack {
            Text("Mouse coordinates: \(mouseX, specifier: "%.2f"), \(mouseY, specifier: "%.2f")")
                .padding()
            Text(isClicked ? "Mouse is clicked" : "Mouse is not clicked")
                .padding()
            Text(isDragging ? "Mouse is dragging" : "Mouse is not dragging")
                .padding()
        }
        .frame(width: 400, height: 200)
        .onAppear {
            mouseTracker.startTracking { newMouseX, newMouseY, newIsClicked, newIsDragging in
                self.mouseX = newMouseX
                self.mouseY = newMouseY
                self.isClicked = newIsClicked
                self.isDragging = newIsDragging
            }
        }
    }
}

class MouseTracker {
    static let shared = MouseTracker()
    private var eventTap: CFMachPort?
    private var runLoopSource: CFRunLoopSource?
    private var isClicked: Bool = false
    private var isDragging: Bool = false
    private var callback: ((CGFloat, CGFloat, Bool, Bool) -> Void)?

    func startTracking(callback: @escaping (CGFloat, CGFloat, Bool, Bool) -> Void) {
        self.callback = callback
        let mask: CGEventMask = (1 << CGEventType.mouseMoved.rawValue) |
            (1 << CGEventType.leftMouseDown.rawValue) |
            (1 << CGEventType.leftMouseUp.rawValue) |
            (1 << CGEventType.leftMouseDragged.rawValue)

        // Pass 'self' via 'userInfo'
        let selfPointer = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())

        guard let eventTap = CGEvent.tapCreate(
            tap: .cghidEventTap,
            place: .headInsertEventTap,
            options: .defaultTap,
            eventsOfInterest: mask,
            callback: MouseTracker.mouseEventCallback,
            userInfo: selfPointer
        ) else {
            print("Failed to create event tap")
            return
        }

        self.eventTap = eventTap
        runLoopSource = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, eventTap, 0)
        CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, .commonModes)
        CGEvent.tapEnable(tap: eventTap, enable: true)
    }

    // Static callback function
    private static let mouseEventCallback: CGEventTapCallBack = { _, eventType, event, userInfo in
        guard let userInfo = userInfo else {
            return Unmanaged.passUnretained(event)
        }

        // Retrieve the instance of MouseTracker from userInfo
        let tracker = Unmanaged<MouseTracker>.fromOpaque(userInfo).takeUnretainedValue()
        let location = NSEvent.mouseLocation

        // Update the click and drag state
        switch eventType {
        case .leftMouseDown:
            tracker.isClicked = true
            tracker.isDragging = false
        case .leftMouseUp:
            tracker.isClicked = false
            tracker.isDragging = false
        case .leftMouseDragged:
            tracker.isDragging = true
        default:
            break
        }

        // Call the callback on the main thread
        DispatchQueue.main.async {
            tracker.callback?(location.x, location.y, tracker.isClicked, tracker.isDragging)
        }
        return Unmanaged.passUnretained(event)
    }
}
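One avenue for inspecting what is being dragged (an assumption on my part; I have not verified this is how Dropover does it): while a drag is in flight, the system drag pasteboard can be queried, e.g. from the leftMouseDragged branch above:

import AppKit

// Sketch: call this while isDragging is true to inspect the drag's contents.
func inspectDragPasteboard() {
    let dragPasteboard = NSPasteboard(name: .drag)
    if let urls = dragPasteboard.readObjects(forClasses: [NSURL.self], options: nil) as? [URL] {
        print("Dragged file URLs: \(urls)")
    }
    print("Available types: \(dragPasteboard.types ?? [])")
}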
I extracted the gain map info from an image using
let url = Bundle.main.url(forResource: "IMG_1181", withExtension: "HEIC")
let source = CGImageSourceCreateWithURL(url! as CFURL, nil)
let portraitData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source!, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as! [AnyHashable : Any]
let metaData = portraitData[kCGImageAuxiliaryDataInfoMetadata] as! CGImageMetadata
Then I printed all the metadata tags
func printMetadataProperties(from metadata: CGImageMetadata) {
    guard let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
        return
    }
    for tag in tags {
        if let prefix = CGImageMetadataTagCopyPrefix(tag) as String?,
           let namespace = CGImageMetadataTagCopyNamespace(tag) as String?,
           let key = CGImageMetadataTagCopyName(tag) as String?,
           let value = CGImageMetadataTagCopyValue(tag) {
            print("Namespace: \(namespace), Key: \(key), Prefix: \(prefix), value: \(value)")
        }
    }
}
//Namespace: http://ns.apple.com/ImageIO/1.0/, Key: hasXMP, Prefix: iio, value: True
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapVersion, Prefix: HDRGainMap, value: 131072
//Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapHeadroom, Prefix: HDRGainMap, value: 3.586325
I want to create a new CGImageMetadata with tags.
But when it comes to the HDR tags, adding them to the metadata always fails.
let tag = CGImageMetadataTagCreate(
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    "HDRGainMapHeadroom" as CFString,
    .default,
    3.56 as CFNumber
)
let path = "HDRGainMap:HDRGainMapHeadroom" as CFString
let success = CGImageMetadataSetTagWithPath(metadata, nil, path, tag) // always false
The hasXMP works fine.
Is the HDRGainMap namespace private to Apple?
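One thing that may be missing (an assumption based on the ImageIO API, not something I have confirmed for this namespace specifically): CGImageMetadataSetTagWithPath works on a mutable metadata object, and a non-standard namespace/prefix pair has to be registered on it before tags using that prefix can be set by path. A sketch, reusing the `tag` created above:

// Sketch: register the HDRGainMap namespace on a mutable metadata object
// before setting the tag by path.
let mutableMetadata = CGImageMetadataCreateMutable()

var registrationError: Unmanaged<CFError>?
let registered = CGImageMetadataRegisterNamespaceForPrefix(
    mutableMetadata,
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    &registrationError
)

if registered {
    let success = CGImageMetadataSetTagWithPath(
        mutableMetadata, nil, "HDRGainMap:HDRGainMapHeadroom" as CFString, tag)
    print("Tag set: \(success)")
}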
In genealogy software, I have an NSView that displays a family tree, where each person is in a subview. Before Sequoia, the subviews would draw in the order I added them to the parent NSView. For some reason, Sequoia calls drawRect in reverse order.
For example, a tree with cells for son, father, and mother used to be drawn in that order, but Sequoia draws them as mother, father, and son.
For static cell placement it would not matter, but these trees dynamically adjust as users change data or expand and contract branches from the tree. Drawing them in the wrong order messes up all the logic and many trees are corrupted.
Has the new drawing order been implemented for some reason and can it be changed? Is it documented?