Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.

Core Graphics Documentation

Posts under Core Graphics tag

53 Posts
Post not yet marked as solved
7 Replies
5.1k Views
On macOS 10.14 (Mojave) the behavior changed: the following code runs on 10.13 but fails on 10.14. The call to CGEventTapCreate is failing (returning NULL) on Mojave but worked before. Any thoughts? Thanks in advance!

```c
// alterkeys.c
// http://osxbook.com
//
// Compile using the following command line:
//     clang -Wall -o alterkeys alterkeys.c -framework ApplicationServices

#include <ApplicationServices/ApplicationServices.h>

// This callback will be invoked every time there is a keystroke.
CGEventRef myCGEventCallback(CGEventTapProxy proxy, CGEventType type,
                             CGEventRef event, void *refcon)
{
    // Paranoid sanity check.
    if ((type != kCGEventKeyDown) && (type != kCGEventKeyUp))
        return event;

    // The incoming keycode.
    CGKeyCode keycode = (CGKeyCode)CGEventGetIntegerValueField(
                            event, kCGKeyboardEventKeycode);

    // Swap 'a' (keycode=0) and 'z' (keycode=6).
    if (keycode == (CGKeyCode)0)
        keycode = (CGKeyCode)6;
    else if (keycode == (CGKeyCode)6)
        keycode = (CGKeyCode)0;

    // Set the modified keycode field in the event.
    CGEventSetIntegerValueField(event, kCGKeyboardEventKeycode, (int64_t)keycode);

    // We must return the event for it to be useful.
    return event;
}

int main(void)
{
    CGEventMask eventMask = CGEventMaskBit(kCGEventLeftMouseDown) |
                            CGEventMaskBit(kCGEventLeftMouseUp);
    CFMachPortRef eventTap = CGEventTapCreate(kCGSessionEventTap,
                                              kCGHeadInsertEventTap, 0,
                                              eventMask, myCGEventCallback, NULL);
    if (!eventTap) {
        fprintf(stderr, "failed to create event tap\n");
        exit(1);
    }

    // Create a run loop source.
    CFRunLoopSourceRef runLoopSource = CFMachPortCreateRunLoopSource(
                                           kCFAllocatorDefault, eventTap, 0);

    // Add to the current run loop.
    CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource,
                       kCFRunLoopCommonModes);

    // Enable the event tap.
    CGEventTapEnable(eventTap, true);

    // Set it all running.
    CFRunLoopRun();

    // In a real program, one would have arranged for cleaning up.
    exit(0);
}
```
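For what it's worth (a guess at the cause, not a confirmed answer): starting with Mojave, event taps that listen to keyboard events require the calling binary to be approved under System Preferences > Security & Privacy > Privacy > Accessibility, and CGEventTapCreate returns NULL until that approval is granted, which matches the symptom described. Independent of the permission question, the keycode-swap logic from the callback can be isolated as a pure function and unit-tested; a minimal sketch (the function name is made up here):

```swift
// The a/z swap from the callback above, isolated as a pure function
// (keycode 0 = 'a', keycode 6 = 'z' on an ANSI keyboard).
func swapped(_ keycode: UInt16) -> UInt16 {
    switch keycode {
    case 0: return 6   // 'a' -> 'z'
    case 6: return 0   // 'z' -> 'a'
    default: return keycode
    }
}
```

AXIsProcessTrusted() or AXIsProcessTrustedWithOptions() can be used to check for, and prompt for, that approval before attempting to create the tap.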
Post not yet marked as solved
1 Reply
1.4k Views
I needed an infinite canvas for my app, which is basically a drawing board where one can draw things with a pen. So I thought of having a very large custom UIView inside a UIScrollView, and in the custom view I could keep drawing things. But I ended up with a warning like the one below, and nothing drawn on screen:

```
[<CALayer: 0x5584190> display]: Ignoring bogus layer size (50000, 50000)
```

Which means I can't have such a big CALayer to draw into. Now, a solution or alternative? Then comes CATiledLayer. I made my large UIView backed by a CATiledLayer, and after setting proper levelsOfDetail and levelsOfDetailBias values, things worked like a charm. Until I faced another problem: since CATiledLayer caches drawings at different zoom levels, if I scale the view after changing the drawing content, the cached drawings appear first and only then does the new content get drawn. I can't find an option to invalidate the caches at the different levels. All the solutions I came across lead me to clear the entire contents of the CATiledLayer whenever the drawing content changes, which doesn't help either. Am I missing something here? Is there a way to clear the caches at the different levels, or is there another approach that could solve this? Can someone help me with this?
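As background for tiled drawing in general (a pure sketch under assumed conventions, not CATiledLayer's actual internals): selective invalidation comes down to mapping a dirty rect in content coordinates to the tile indices it touches, which is the kind of rect you would hand to setNeedsDisplay(in:). The mapping itself is simple arithmetic:

```swift
// Given a dirty rect (half-open on its max edges) and the side length of a
// tile in content units, return the inclusive ranges of tile columns and
// rows the rect touches. Names and tiling scheme are assumptions for
// illustration, not CATiledLayer API.
func tileRanges(minX: Double, minY: Double, maxX: Double, maxY: Double,
                tileSide: Double) -> (cols: ClosedRange<Int>, rows: ClosedRange<Int>) {
    let firstCol = Int((minX / tileSide).rounded(.down))
    // A rect ending exactly on a tile boundary does not touch the next tile.
    let lastCol = max(firstCol, Int((maxX / tileSide).rounded(.up)) - 1)
    let firstRow = Int((minY / tileSide).rounded(.down))
    let lastRow = max(firstRow, Int((maxY / tileSide).rounded(.up)) - 1)
    return (cols: firstCol...lastCol, rows: firstRow...lastRow)
}
```

For example, a change spanning x = 200 to x = 300 with 256-point tiles touches tile columns 0 through 1.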
Post not yet marked as solved
14 Replies
14k Views
Hi there, I just bought an IIYAMA G-MASTER GB3461WQSU-B1, which has a native resolution of 3440x1440, but my MacBook Pro (Retina, 15-inch, Mid 2014) doesn't recognise the monitor and I can't run it at its full resolution. It is currently recognised as a PL3461WQ 34.5-inch (2560 x 1440). Is there anything I can do to get it sorted, or will I have to wait until this monitor's driver is added to the Big Sur list? Thanks
Post marked as solved
2 Replies
1.4k Views
I'm trying to rotate a page 180° in a PDF file. I nearly get it, but the page is also mirrored horizontally. Some images to illustrate: the initial page; the result after rotation (see code), which is rotated 180° BUT mirrored horizontally as well; and the expected result, which is just as if it were rotated 180° around the z axis (perpendicular to the page). What I get instead looks like a 180° rotation around the x axis of the page. It is probably the result of writeContext!.scaleBy(x: 1, y: -1). I have tried a lot of changes to the transform, translate, and scale parameters, including removing calls to some of them, to no avail.

```swift
@IBAction func createNewPDF(_ sender: UIButton) {
    var originalPdfDocument: CGPDFDocument!
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = urls[0]

    // Read some PDF from the bundle for the test.
    if let path = Bundle.main.path(forResource: "Test", ofType: "pdf"),
       let pdf = CGPDFDocument(URL(fileURLWithPath: path) as CFURL) {
        originalPdfDocument = pdf
    } else {
        return
    }

    // Create the new PDF.
    let modifiedPdfURL = documentsDirectory.appendingPathComponent("Modified.pdf")
    guard let page = originalPdfDocument.page(at: 1) else { return } // Starts at page 1

    // Media box, which will set the height and width of the page.
    var mediaBox: CGRect = page.getBoxRect(CGPDFBox.mediaBox)
    let writeContext = CGContext(modifiedPdfURL as CFURL, mediaBox: &mediaBox, nil) // get the context

    var pageRect: CGRect = page.getBoxRect(CGPDFBox.mediaBox) // get the page rect
    writeContext!.beginPage(mediaBox: &pageRect)
    let m = page.getDrawingTransform(.mediaBox, rect: mediaBox, rotate: 0, preserveAspectRatio: true)
    // Because of rotate: 0, no effect; changing rotate to 180 gives an empty page.
    writeContext!.translateBy(x: 0, y: pageRect.size.height)
    writeContext!.scaleBy(x: 1, y: -1)
    writeContext!.concatenate(m)
    writeContext!.clip(to: pageRect)
    writeContext!.drawPDFPage(page) // draw content in page
    writeContext!.endPage()         // end the current page
    writeContext!.closePDF()
}
```

Note: this is a follow-up of a previous thread, https://developer.apple.com/forums/thread/688436
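As a pure-math sanity check (no PDF APIs involved): a true 180° rotation about the page center is the map (x, y) -> (w - x, h - y), which is what translateBy(x: w, y: h) followed by scaleBy(x: -1, y: -1) produces, and it preserves orientation. Flipping a single axis, which is what scaleBy(x: 1, y: -1) after a vertical translate does, is a mirror. A small sketch of both maps (helper names are mine):

```swift
// w, h: page width and height.

// 180° rotation about the page center: translate(w, h) then scale(-1, -1).
// Orientation-preserving (no mirror).
func rotate180(x: Double, y: Double, w: Double, h: Double) -> (x: Double, y: Double) {
    return (x: w - x, y: h - y)
}

// Vertical flip: translate(0, h) then scale(1, -1).
// Orientation-reversing (a mirror image).
func flipVertically(x: Double, y: Double, h: Double) -> (x: Double, y: Double) {
    return (x: x, y: h - y)
}
```

Applying rotate180 twice returns the original point, as a 180° rotation should. A single flipVertically mirrors, which is consistent with the mirrored output described above, so the fix presumably lies in replacing the single-axis flip with the two-axis version.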
Post not yet marked as solved
11 Replies
4.7k Views
Hi team, We have an iOS app. Since July 15, 2022, our users have been hitting a crash caused by an invalid memory fetch; that is when Apple released the iOS 16 beta. After September 12, when Apple released iOS 16 officially, the crash count started to increase drastically. The crash backtrace is as follows:

```
Thread 14 Crashed:
0   libsystem_platform.dylib  0x00000001f8810930 _platform_memmove + 96
1   CoreGraphics              0x00000001adb64104 CGDataProviderCreateWithCopyOfData + 20
2   CoreGraphics              0x00000001adb4cdb4 CGBitmapContextCreateImage + 172
3   VisionKitCore             0x00000001ed813f10 -[VKCRemoveBackgroundResult _createCGImageFromBGRAPixelBuffer:cropRect:] + 348
4   VisionKitCore             0x00000001ed813cc0 -[VKCRemoveBackgroundResult createCGImage] + 156
5   VisionKitCore             0x00000001ed8ab6f8 __vk_cgImageRemoveBackgroundWithDownsizing_block_invoke + 64
6   VisionKitCore             0x00000001ed881474 __63-[VKCRemoveBackgroundRequestHandler performRequest:completion:]_block_invoke.5 + 436
7   MediaAnalysisServices     0x00000001eec58968 __92-[MADService performRequests:onPixelBuffer:withOrientation:andIdentifier:completionHandler:]_block_invoke.38 + 400
8   CoreFoundation            0x00000001abff0a14 __invoking___ + 148
9   CoreFoundation            0x00000001abf9cf2c -[NSInvocation invoke] + 428
10  Foundation                0x00000001a6464d38 __NSXPCCONNECTION_IS_CALLING_OUT_TO_REPLY_BLOCK__ + 16
11  Foundation                0x00000001a64362fc -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 520
12  Foundation                0x00000001a6a10f44 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
13  libxpc.dylib              0x00000001f89053e4 _xpc_connection_reply_callout + 124
14  libxpc.dylib              0x00000001f88f8580 _xpc_connection_call_reply_async + 88
15  libdispatch.dylib         0x00000001b340205c _dispatch_client_callout3 + 20
16  libdispatch.dylib         0x00000001b341ff58 _dispatch_mach_msg_async_reply_invoke + 344
17  libdispatch.dylib         0x00000001b340956c _dispatch_lane_serial_drain + 376
18  libdispatch.dylib         0x00000001b340a214 _dispatch_lane_invoke + 436
19  libdispatch.dylib         0x00000001b3414e10 _dispatch_workloop_worker_thread + 652
20  libsystem_pthread.dylib   0x00000001f88a4df8 _pthread_wqthread + 288
21  libsystem_pthread.dylib   0x00000001f88a4b98 start_wqthread + 8
```

Last but not least, the users who hit this crash are all on iOS 16+, so we think the crash is related to the iOS 16 SDK. We would appreciate any clues on how to fix this kind of crash.
Post not yet marked as solved
0 Replies
1.2k Views
I'm using Core Graphics and Image I/O in a macOS command-line tool. My program works fine, but after the first drawing into a bitmap context, messages like the following are printed to the console:

```
2022-12-20 16:33:47.824937-0500 RandomImageGenerator[4436:90170] Metal API Validation Enabled
AVEBridge Info: AVEEncoder_CreateInstance: Received CreateInstance (from VT)
AVEBridge Info: connectHandler: Device connected (0x000000010030b520)
Assert - (remoteService != NULL) - f: /AppleInternal/Library/BuildRoots/43362d89-619c-11ed-a949-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/AppleAVEBridge/AppleAVEEncoder/AppleAVEEncoder.c l: 291
AVE XPC Error: could not find remote service
Assert - (err == noErr) - f: /AppleInternal/Library/BuildRoots/43362d89-619c-11ed-a949-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/AppleAVEBridge/AppleAVEEncoder/AppleAVEEncoder.c l: 1961
AVE ERROR: XPC failed
AVEBridge Info: stopUserClient: IOServiceClose was successful.
AVEBridge Error: AVEEncoder_CreateInstance: returning err = -12908
```

These messages get in the way of my own console output. How do I stop them from being displayed? This post on Stack Overflow (https://stackoverflow.com/questions/37800790/hide-strange-unwanted-xcode-logs) does not appear to be relevant to this issue.
Post not yet marked as solved
0 Replies
1.5k Views
In iOS 16, PDFs that include certain gradients are not displayed. A PDF displays normally if it uses a plain radial or linear gradient; complex shape gradients are not displayed. The PDF fails to display when loaded from the iOS application's resources, but opens normally in the Files app on iPhone. Attached is an example with a regular diamond gradient, created in Figma. iOS 15 displays the example file correctly. You can open the file by changing the extension: rename Example.json to Example.pdf. P.S. I can't attach pdf or zip files. Example.json
Post not yet marked as solved
2 Replies
665 Views
I'm trying to resize NSImages on macOS. I'm doing so with an extension like this:

```swift
extension NSImage {

    // MARK: Resizing

    /// Resize the image to the given size.
    ///
    /// - Parameter targetSize: The size to resize the image to.
    /// - Returns: The resized image.
    func resized(toSize targetSize: NSSize) -> NSImage? {
        let frame = NSRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
        guard let representation = self.bestRepresentation(for: frame, context: nil, hints: nil) else {
            return nil
        }
        let image = NSImage(size: targetSize, flipped: false, drawingHandler: { _ in
            return representation.draw(in: frame)
        })
        return image
    }
}
```

The problem is that, as far as I can tell, the image that comes out of the drawing handler has lost the color profile of the original image rep. I'm testing it with a wide color gamut image, attached; it becomes pure red when examining the resulting image. If this were UIKit, I guess I'd use UIGraphicsImageRenderer and select the right UIGraphicsImageRendererFormat.Range, so I suspect I need to use NSGraphicsContext here to do the rendering. But I can't see what I would set on it to make it use wide color, or how I'd use it.
Post not yet marked as solved
1 Reply
740 Views
I'm working on a toy Swift implementation of TeamViewer where users can collaborate. To achieve this, I'm creating a secondary, remotely controlled cursor on macOS using Cocoa. My goal is to allow this secondary cursor to manipulate windows and post mouse events below it. I've managed to create the cursor and successfully made it move and animate within the window. However, I'm struggling with enabling mouse events to be fired by this secondary cursor: when I post synthetic mouse events, they don't seem to have any effect. Here's the relevant portion of my code:

```swift
func click(at point: CGPoint) {
    guard let mouseDown = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                                  mouseCursorPosition: point, mouseButton: .left),
          let mouseUp = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                                mouseCursorPosition: point, mouseButton: .left) else {
        return
    }
    mouseDown.post(tap: .cgSessionEventTap)
    mouseUp.post(tap: .cgSessionEventTap)
}
```

I have enabled the Accessibility features, tried posting to specific PIDs, tried posting events twice in a row (to rule out a focus issue), and replaced .cgSessionEventTap with .cghidEventTap, all to no avail. Here's the full file if you'd like more context:

```swift
import Cocoa
import Foundation

class CursorView: NSView {
    let image: NSImage

    init(image: NSImage) {
        self.image = image
        super.init(frame: .zero)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        image.draw(in: dirtyRect)
    }
}

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {
    var window: NSWindow!
    var userCursorView: CursorView?
    var remoteCursorView: CursorView?
    var timer: Timer?
    var destination: CGPoint = .zero
    var t: CGFloat = 0
    let duration: TimeInterval = 2
    let clickProbability: CGFloat = 0.01

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let screenRect = NSScreen.main!.frame
        window = NSWindow(contentRect: screenRect, styleMask: .borderless,
                          backing: .buffered, defer: false)
        window.level = NSWindow.Level(rawValue: Int(CGWindowLevelForKey(.maximumWindow)))
        window.backgroundColor = NSColor.clear
        window.ignoresMouseEvents = true

        let maxHeight: CGFloat = 70.0
        if let userImage = NSImage(named: "userCursorImage") {
            let aspectRatio = userImage.size.width / userImage.size.height
            let newWidth = aspectRatio * maxHeight
            userCursorView = CursorView(image: userImage)
            userCursorView!.frame.size = NSSize(width: newWidth, height: maxHeight)
            window.contentView?.addSubview(userCursorView!)
        }
        if let remoteImage = NSImage(named: "remoteCursorImage") {
            let aspectRatio = remoteImage.size.width / remoteImage.size.height
            let newWidth = aspectRatio * maxHeight
            remoteCursorView = CursorView(image: remoteImage)
            remoteCursorView!.frame.size = NSSize(width: newWidth, height: maxHeight)
            window.contentView?.addSubview(remoteCursorView!)
            // Initialize remote cursor position and destination
            remoteCursorView!.frame.origin = randomPointWithinScreen()
            destination = randomPointWithinScreen()
        }

        window.makeKeyAndOrderFront(nil)
        window.orderFrontRegardless()
        NSCursor.hide()

        NSEvent.addGlobalMonitorForEvents(matching: [.mouseMoved, .leftMouseDragged, .rightMouseDragged]) { [weak self] event in
            self?.updateCursorPosition(with: event)
        }

        // Move the remote cursor every 0.01 second
        timer = Timer.scheduledTimer(withTimeInterval: 0.01, repeats: true) { [weak self] _ in
            self?.moveRemoteCursor()
        }

        // Exit the app when pressing the escape key
        NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
            if event.keyCode == 53 {
                NSApplication.shared.terminate(self)
            }
        }
    }

    func updateCursorPosition(with event: NSEvent) {
        var newLocation = event.locationInWindow
        newLocation.y -= userCursorView!.frame.size.height
        userCursorView?.frame.origin = newLocation
    }

    func moveRemoteCursor() {
        if remoteCursorView!.frame.origin.distance(to: destination) < 1 || t >= 1 {
            destination = randomPointWithinScreen()
            t = 0
            let windowPoint = remoteCursorView!.frame.origin
            let screenPoint = window.convertToScreen(NSRect(origin: windowPoint, size: .zero)).origin
            let screenHeight = NSScreen.main?.frame.height ?? 0
            let cgScreenPoint = CGPoint(x: screenPoint.x, y: screenHeight - screenPoint.y)
            click(at: cgScreenPoint)
        } else {
            let newPosition = cubicBezier(t: t, start: remoteCursorView!.frame.origin, end: destination)
            remoteCursorView?.frame.origin = newPosition
            t += CGFloat(0.01 / duration)
        }
    }

    func click(at point: CGPoint) {
        guard let mouseDown = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                                      mouseCursorPosition: point, mouseButton: .left),
              let mouseUp = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                                    mouseCursorPosition: point, mouseButton: .left) else {
            return
        }
        // Post the events to the session event tap
        mouseDown.post(tap: .cgSessionEventTap)
        mouseUp.post(tap: .cgSessionEventTap)
    }

    func randomPointWithinScreen() -> CGPoint {
        guard let screen = NSScreen.main else { return .zero }
        let randomX = CGFloat.random(in: 0...screen.frame.width / 2)
        let randomY = CGFloat.random(in: 100...screen.frame.height)
        return CGPoint(x: randomX, y: randomY)
    }

    func cubicBezier(t: CGFloat, start: CGPoint, end: CGPoint) -> CGPoint {
        let control1 = CGPoint(x: 2 * start.x / 3 + end.x / 3, y: start.y)
        let control2 = CGPoint(x: start.x / 3 + 2 * end.x / 3, y: end.y)
        let x = pow(1 - t, 3) * start.x + 3 * pow(1 - t, 2) * t * control1.x
              + 3 * (1 - t) * pow(t, 2) * control2.x + pow(t, 3) * end.x
        let y = pow(1 - t, 3) * start.y + 3 * pow(1 - t, 2) * t * control1.y
              + 3 * (1 - t) * pow(t, 2) * control2.y + pow(t, 3) * end.y
        return CGPoint(x: x, y: y)
    }

    func applicationWillTerminate(_ aNotification: Notification) {
        // Show the system cursor when the application is about to terminate
        NSCursor.unhide()
    }
}

extension CGPoint {
    func distance(to point: CGPoint) -> CGFloat {
        return hypot(point.x - x, point.y - y)
    }
}
```
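One detail worth double-checking in code like this (an observation, not a confirmed diagnosis): CGEvent positions use a top-left-origin global coordinate space, while Cocoa screen coordinates are bottom-left-origin, so the y flip must use the height of the screen that actually contains the point, not always NSScreen.main. The flip itself is pure arithmetic and easy to test in isolation:

```swift
// Hypothetical helper: convert a bottom-left-origin (Cocoa) screen point to a
// top-left-origin (Core Graphics) point, given the containing screen's height.
func cocoaToCG(x: Double, y: Double, screenHeight: Double) -> (x: Double, y: Double) {
    return (x: x, y: screenHeight - y)
}
```

The conversion is its own inverse, so applying it twice returns the original point; that makes it cheap to sanity-check against a misbehaving click location.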
Post not yet marked as solved
5 Replies
734 Views
I'm dynamically creating a UIImage from NSData using [UIImage imageWithData:data]. The call succeeds and returns a valid UIImage, but every time I make the call,

```
CGImageCopyImageSource:4692: *** CGImageGetImageSource: cannot get CGImageSourceRef from CGImageRef (CGImageSourceRef was already released)
```

is printed in the Console (not the Xcode console, but the Console app streaming from the device). I haven't determined that this is actually a problem, but I have customer reports of my app crashing after a long period of time in which this particular code path is called frequently. If I put a symbolic breakpoint on ERROR_CGImageCopyImageSource_WAS_CALLED_WITH_INVALID_CGIMAGE, it is hit. I'm not sure what I could be doing to cause this error, since I'm passing valid data in and getting what looks like valid output.
Post not yet marked as solved
0 Replies
543 Views
I want to rotate 10 bit/component, 3-component RGB CGImages (or NSImages) by 90-degree angles. The images are loaded from 10-bpc HEIF files. This is for a Mac app, so I don't have access to UIKit. My first thought was to use the Accelerate vImage framework; however, vImage does not seem to support 10-bpc images, and in fact I've tried this approach without success. I know I can do this using the CIImage.oriented() method, but I don't want the substantial overhead of creating a CIImage from a CGImage and then rendering back to a CGImage. Any suggestions? Efficiency/speed are important. Thanks.
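If a plain CPU fallback is acceptable, a 90° rotation of packed pixels is just an index remap, and it is independent of bit depth as long as each pixel is copied as one unit (for a 32-bit packed 10-bpc RGB format, each element would be one 32-bit word). A sketch, generic over the pixel type, with names of my own choosing:

```swift
// Rotate a width×height row-major pixel buffer 90° clockwise.
// Each array element stands for one whole packed pixel.
func rotated90CW<T>(_ pixels: [T], width: Int, height: Int) -> [T] {
    precondition(pixels.count == width * height)
    var out = [T]()
    out.reserveCapacity(pixels.count)
    // The rotated image is height wide and width tall; destination
    // (row r, col c) comes from source (row height-1-c, col r).
    for r in 0..<width {
        for c in 0..<height {
            out.append(pixels[(height - 1 - c) * width + r])
        }
    }
    return out
}
```

The real work for CGImage input would be copying the raw bytes out via CGDataProvider, remapping, and rebuilding a CGImage with the same bitmap parameters but swapped width and height.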
Post not yet marked as solved
3 Replies
908 Views
CGRequestScreenCaptureAccess() and CGPreflightScreenCaptureAccess() should return true or false depending on whether the app has screen-recording permission. CGRequestScreenCaptureAccess() brings up a dialog if there is no permission; the dialog directs the user to System Settings, where the permission can be set. CGRequestScreenCaptureAccess() returns immediately in any case. If it brings up a dialog, that happens from a separate process (Finder?), and that dialog stays up even if the app quits before it is dismissed. Since CGRequestScreenCaptureAccess() returns immediately, I tried tracking the permission state by setting up a timer to repeatedly call CGPreflightScreenCaptureAccess() until the permission is set. But if it returned false to begin with, it continues to return false even after the app has been given permission for screen capture. Why is that? When the permission to capture is set in System Settings, a dialog says, "(App Name) may not be able to record the contents of your screen until it is quit." I imagine that is related to CGPreflightScreenCaptureAccess() continuing to return false. But why "may not be able"? When does the permission change take effect immediately, and how can this be detected?
Post not yet marked as solved
2 Replies
1.3k Views
I used the following code to decode a PNG image into an allocated memory block *imageData:

```objc
- (void)decodeImage:(UIImage *)image {
    GLubyte *imageData = (GLubyte *)malloc(image.size.width * image.size.height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                      image.size.width, image.size.height,
                                                      8, image.size.width * 4, colorSpace,
                                                      kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(imageContext,
                       CGRectMake(0.0, 0.0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(colorSpace);

    int bytesPerRow = image.size.width * 4;
    // You can log the [330, 150] RGBA value here; it has the wrong alpha value.
    int targetRow = 330;
    int targetCol = 150;
    u_int32_t r = (u_int32_t)imageData[targetRow*bytesPerRow*4 + targetCol*4 + 0];
    u_int32_t g = (u_int32_t)imageData[targetRow*bytesPerRow*4 + targetCol*4 + 1];
    u_int32_t b = (u_int32_t)imageData[targetRow*bytesPerRow*4 + targetCol*4 + 2];
    u_int32_t a = (u_int32_t)imageData[targetRow*bytesPerRow*4 + targetCol*4 + 3];
    free(imageData);
}
```

The CGBitmapInfo used is kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault. The problem is that the alpha channel is lost in the decoded result. For example, the RGBA values of the target PNG at [row, col] = [330, 150] are R = 240, B = 125, G = 106, A = 80. If the decoding were correct, the expected premultiplied result would be R = 75, G = 39, B = 33, A = 80. However, after decoding the PNG on iOS 17, the result is R = 75, G = 39, B = 33, A = 255: the alpha values are all forced to 255. Xcode Version 15.0 beta (15A5160n), iPhone 14 Pro. The png file:
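Two small things may be worth separating here (observations, not a confirmed diagnosis of the iOS 17 behavior). First, with kCGImageAlphaPremultipliedLast each color channel is stored scaled by alpha, approximately round(c * a / 255), which is where the expected R = 75, G = 39, B = 33 come from. Second, bytesPerRow already includes the 4 bytes per pixel, so the byte offset of a pixel is row * bytesPerRow + col * 4; the snippet above multiplies by 4 twice. A pure sketch of both, with helper names of my own:

```swift
// Premultiply a single 8-bit channel by an 8-bit alpha,
// rounding to nearest: approximately round(channel * alpha / 255).
func premultiply(_ channel: Int, alpha: Int) -> Int {
    return (channel * alpha + 127) / 255
}

// Byte offset of an RGBA8 pixel in a buffer whose rows are bytesPerRow
// bytes long. bytesPerRow already accounts for the 4 bytes per pixel.
func pixelOffset(row: Int, col: Int, bytesPerRow: Int) -> Int {
    return row * bytesPerRow + col * 4
}
```

For instance, premultiply(240, alpha: 80) gives 75, matching the expected premultiplied red value quoted in the post.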
Post not yet marked as solved
2 Replies
505 Views
Hello, There is a problem when querying flag state using the API CGEventSourceFlagsState(kCGEventSourceStateHIDSystemState). When there is external mouse movement, the flags get reset by a low-level OS event corresponding to the mouse move. As a result, the modifier keys don't work as expected when using Java's Robot, which in turn uses Apple's native CGEvent APIs. The issue occurs at CRobot.m#L295: the flags get reset or cleared when the mouse is moved physically in unison with the Robot's key events. The difference can be seen in the logs with and without mouse movement while typing 'A' (logs attached). Because of this issue, applications that use Java's Robot on Mac don't work as expected; in particular, this behavior breaks the usability of the on-screen accessibility keyboard application TouchBoard. More details on this use case here: https://github.com/adoptium/adoptium-support/issues/710. The Robot is initialized with the following initial configuration: https://github.com/openjdk/jdk/blob/ac6af6a64099c182e982a0a718bc1b780cef616e/src/java.desktop/macosx/native/libawt_lwawt/awt/CRobot.m#L125. Are we missing anything during initialization of the Robot that causes this issue? Why does an external mouse movement cause the event flags to reset? Since a low-level OS event corresponding to mouse movement causes the flags to reset, there might be an issue within the CGEventSourceFlagsState() API. Is there a reason why an external mouse event causes the CGEventFlags state to reset to 0? Is there any known issue regarding CGEventSourceFlagsState(), and a workaround for it?
Post marked as solved
1 Reply
549 Views
I am writing a tool that tracks statistics about keystrokes. For that I create an event tap using CGEventTapCreate (docs). Since my code does not alter events, I create the tap with the kCGEventTapOptionListenOnly option. Do I still need to minimize the runtime of my event-handling callback for fast processing of keyboard events? I assume that a listen-only handler does not block the OS-internal event-handling queue, but I can't find anything assertive about that in the documentation. Many thanks in advance.
Post not yet marked as solved
1 Reply
435 Views
```swift
Rectangle()
    .fill(
        RadialGradient.radialGradient(
            colors: [.blue, .yellow],
            center: UnitPoint(x: 0.5, y: 0.5),
            startRadius: 0,
            endRadius: 50)
    )
    .frame(width: 100, height: 100)
```

In the above code I have a Rectangle with a simple radial gradient. I want to apply an arbitrary transformation matrix to the gradient so I can achieve the following effects. I tried the following, but it applies the transformation matrix to the frame instead of the shader/gradient:

```swift
Rectangle()
    .overlay(
        RadialGradient.radialGradient(
            colors: [.blue, .yellow],
            center: UnitPoint(x: 0.5, y: 0.5),
            startRadius: 0,
            endRadius: 50)
        .transformEffect(
            CGAffineTransform(
                -0.5000000596046448, 0.4999999403953552,
                -0.972577691078186, -0.9725777506828308,
                0.5000000596046448, 1.4725778102874756)
            .translatedBy(x: -50, y: -100)
        )
    )
    .frame(width: 100, height: 100)
```

It results in a transformation of the frame instead of the shader/gradient. Thanks in advance 🙌🏻
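While debugging transforms like this, it can help to compute by hand where the matrix sends a point: CGAffineTransform maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty). A minimal, framework-free sketch of that application (the struct is a stand-in for CGAffineTransform, with the same a, b, c, d, tx, ty layout):

```swift
// Minimal 2D affine transform, laid out like CGAffineTransform.
struct Affine {
    var a, b, c, d, tx, ty: Double

    // Apply the matrix to a point: x' = a*x + c*y + tx, y' = b*x + d*y + ty.
    func apply(x: Double, y: Double) -> (x: Double, y: Double) {
        return (x: a * x + c * y + tx, y: b * x + d * y + ty)
    }
}
```

For example, the identity matrix leaves points unchanged and a pure translation just shifts them; feeding the gradient's center through the matrix from the post shows where the transformed gradient would be anchored.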
Post not yet marked as solved
0 Replies
533 Views
I'm exploring using the CARemoteLayerClient/Server API to render a layer from another process, as described in the docs, but I can't seem to get a very simple example to work. Here's a minimal example of what I'd expect to work:

```swift
// Run with `swift file.swift`
import AppKit

let app = NSApplication.shared

class AppDelegate: NSObject, NSApplicationDelegate {
    let window = NSWindow(
        contentRect: NSMakeRect(200, 200, 400, 200),
        styleMask: [.titled, .closable, .miniaturizable, .resizable],
        backing: .buffered,
        defer: false,
        screen: nil
    )

    func applicationDidFinishLaunching(_ notification: Notification) {
        window.makeKeyAndOrderFront(nil)

        let view = NSView()
        view.frame = NSRect(x: 0, y: 0, width: 150, height: 150)
        view.layerUsesCoreImageFilters = true
        view.wantsLayer = true

        let server = CARemoteLayerServer.shared()
        let client = CARemoteLayerClient(serverPort: server.serverPort)
        print(client.clientId)
        client.layer = CALayer()
        client.layer?.backgroundColor = NSColor.red.cgColor // Expect red rectangle
        client.layer?.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)

        let serverLayer = CALayer(remoteClientId: client.clientId)
        serverLayer.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)
        view.layer?.addSublayer(serverLayer)
        view.layer?.backgroundColor = NSColor.blue.cgColor // Background blue to confirm parent layer exists

        window.contentView?.addSubview(view)
    }
}

let delegate = AppDelegate()
app.delegate = delegate
app.run()
```

In this example I'd expect a red rectangle to appear as the remote layer. If I inspect the server's layer hierarchy, I see the correct CALayerHost with the correct client ID being created, but it doesn't display the contents set on the client side. After investigating this thread, https://bugs.chromium.org/p/chromium/issues/detail?id=312462, and some demo projects, I've found that the workarounds previously found to make this API work no longer seem to work on my machine (M1 Pro, Ventura). Am I missing something glaringly obvious in my simple implementation, or is this a bug?
Post not yet marked as solved
1 Reply
855 Views
Hello there, I am building an app that's going to be keyboard-oriented, meaning that the UI will be minimal and only live in the menu bar; all core functions will be performed via keyboard hotkeys that should be available from anywhere in the system. I know about a Swift library called HotKey that does this, and it seems to work. However, it uses the Carbon API, which has been deprecated for many years; plus, its code is double Dutch to me, and since it relies on legacy code I wish I could at least understand it, so I could maintain my own version of it in case macOS finally sheds the Carbon API completely. Is there a way to implement global hotkeys in a more modern way?
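One modern, non-Carbon option (a sketch under assumptions, not a drop-in replacement for the HotKey library): observe key events with NSEvent.addGlobalMonitorForEvents(matching: .keyDown), which requires Accessibility approval and cannot consume the event, and keep the matching logic pure so it can be tested without AppKit. The constants below mirror the NSEvent.ModifierFlags raw values; the struct and its names are mine:

```swift
// Pure hotkey-matching logic, decoupled from NSEvent so it is testable.
struct Hotkey {
    var keyCode: UInt16
    var modifiers: UInt   // required modifier bits

    // relevantMask limits the comparison to the modifier bits we care
    // about, so stray flags like Caps Lock don't break the match.
    func matches(keyCode: UInt16, flags: UInt, relevantMask: UInt) -> Bool {
        return keyCode == self.keyCode && (flags & relevantMask) == modifiers
    }
}
```

In the app itself, the global monitor's handler would call matches(keyCode: event.keyCode, flags: event.modifierFlags.rawValue, relevantMask: ...) and trigger the action on a match; to actually swallow the keystroke system-wide, a CGEventTap is still needed instead.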
Post marked as solved
9 Replies
888 Views
Hi all: I have a macOS application which captures mouse events:

```objc
CGEventMask eventMask = CGEventMaskBit(kCGEventMouseMoved) |
                        CGEventMaskBit(kCGEventLeftMouseUp) |
                        CGEventMaskBit(kCGEventLeftMouseDown) |
                        CGEventMaskBit(kCGEventRightMouseUp) |
                        CGEventMaskBit(kCGEventRightMouseDown) |
                        CGEventMaskBit(kCGEventOtherMouseUp) |
                        CGEventMaskBit(kCGEventOtherMouseDown) |
                        CGEventMaskBit(kCGEventScrollWheel) |
                        CGEventMaskBit(kCGEventLeftMouseDragged) |
                        CGEventMaskBit(kCGEventRightMouseDragged) |
                        CGEventMaskBit(kCGEventOtherMouseDragged);

_eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                             kCGEventTapOptionDefault, eventMask,
                             &MouseCallback, nil);
_runLoopRef = CFRunLoopGetMain();
_runLoopSourceRef = CFMachPortCreateRunLoopSource(NULL, _eventTap, 0);
CFRunLoopAddSource(_runLoopRef, _runLoopSourceRef, kCFRunLoopCommonModes);
CGEventTapEnable(_eventTap, true);

CGEventRef MouseCallback(CGEventTapProxy proxy, CGEventType type,
                         CGEventRef event, void *refcon) {
    NSLog(@"Mouse event: %d", type);
    return event;
}
```

This mouse logger needs the Accessibility privilege granted in Privacy & Security. But I found that if Accessibility is turned off while the CGEventTap is running, left and right clicks are blocked unless I restart macOS. Although replacing kCGEventTapOptionDefault with kCGEventTapOptionListenOnly can fix this issue, I have other features which require kCGEventTapOptionDefault. So I tried to detect that Accessibility is disabled and remove the CGEventTap:

```objc
[[NSDistributedNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(didToggleAccessStatus:)
           name:@"com.apple.accessibility.api"
         object:nil
    suspensionBehavior:NSNotificationSuspensionBehaviorDeliverImmediately];
```

However, the notification won't be sent if the user didn't turn off Accessibility but instead removed the app from the list. Worse, AXIsProcessTrusted() continues to return true. Is there a way to fix the blocked mouse clicks, or to detect that the Accessibility entry was removed? Thanks!