Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.

Core Graphics Documentation

Posts under Core Graphics tag

59 Posts
Post not yet marked as solved
3 Replies
102 Views
I'm trying to integrate WKWebView into a Metal rendering pipeline and have hit two issues.

1. WKWebView only renders when the control is visible. I've looked through the WKWebView headers and configuration options but couldn't find a way to force WKWebView to render every single frame while the control is invisible or not part of the view hierarchy.

2. Accessing the CGImage pixel data returned by WKWebView's takeSnapshot is slow. To get the pixel data uploaded to an MTLTexture I ran the following tests.

Accessing the pixel data through CGDataProviderCopyData: CPU usage is very high and the main thread drops to 37 FPS on an iPhone 8 Plus simulator. Most CPU cycles are spent in vConvert_PermuteChannels_ARGB8888_CV_vec. I also tried rendering the CGImage into a bitmap context, but still can't get rid of vConvert_PermuteChannels_ARGB8888_CV_vec.

    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef rawData = CGDataProviderCopyData(provider);
    CFIndex length = CFDataGetLength(rawData);
    UInt8 *buf = (UInt8 *)CFDataGetBytePtr(rawData);
    MTLRegion region = {
        {0, 0, 0},
        {1280, 960, 1}
    };
    [_webviewTexture replaceRegion:region mipmapLevel:0 withBytes:buf bytesPerRow:bytesPerRow];
    CFRelease(rawData);

Another attempt was to create the Metal texture from the CGImage via MTKTextureLoader, but that fails with the error "image decoding failed":

    MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:_device];
    NSDictionary *textureLoaderOption = @{
        MTKTextureLoaderOptionTextureUsage : @(MTLTextureUsageShaderRead),
        MTKTextureLoaderOptionTextureStorageMode : @(MTLStorageModePrivate)
    };
    NSError *error = nil;
    _webviewTexture = [loader newTextureWithCGImage:cgImage options:textureLoaderOption error:&error];

Any ideas on how to force WKWebView to render, or a more efficient way to access the CGImage raw pixel data, would be greatly appreciated.
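One thing worth trying (a sketch, untested, and assuming `_webviewTexture` was created as `.bgra8Unorm` at 1280x960): the permute-channels cost suggests the snapshot's byte order doesn't match the destination. Redrawing the snapshot once into a CGBitmapContext whose `bitmapInfo` already matches a BGRA texture may confine the conversion to a single draw, after which the upload is a plain copy. `upload(_:to:)` is a hypothetical helper name.

```swift
import CoreGraphics
import Metal

/// Redraw a snapshot CGImage into a BGRA8 bitmap whose memory layout matches
/// a .bgra8Unorm MTLTexture, then upload without a per-access channel swizzle.
func upload(_ cgImage: CGImage, to texture: MTLTexture) {
    let width = texture.width, height = texture.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)

    // premultipliedFirst + byteOrder32Little = BGRA in memory,
    // which is the layout .bgra8Unorm expects.
    let bitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue |
                     CGBitmapInfo.byteOrder32Little.rawValue
    pixels.withUnsafeMutableBytes { buf in
        guard let context = CGContext(data: buf.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: bitmapInfo) else { return }
        // The one (unavoidable) conversion happens here, inside CG.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }

    texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0,
                    withBytes: pixels,
                    bytesPerRow: bytesPerRow)
}
```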
Posted by Joey_Yu.
Post not yet marked as solved
0 Replies
80 Views
As the title says CGDisplayCopyAllDisplayModes does not appear to return ALL of the display modes. I've attached a screenshot showing a list of the modes returned by CGDisplayCopyAllDisplayModes. Notice that a CGDisplayMode with the currently used mode ID# 13 is not in the list. My second screen, a non-Retina display, seems to behave as expected. How do you find 'all' of the CGDisplayModes for an Apple Studio Display?
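One documented option worth checking: by default CGDisplayCopyAllDisplayModes filters out many modes (including scaled/duplicate low-resolution variants on Retina-class panels such as the Studio Display). Passing the kCGDisplayShowDuplicateLowResolutionModes option surfaces the hidden entries; a sketch:

```swift
import CoreGraphics

// Ask for the modes that CGDisplayCopyAllDisplayModes hides by default.
let options = [kCGDisplayShowDuplicateLowResolutionModes as String: true] as CFDictionary
let displayID = CGMainDisplayID()

if let modes = CGDisplayCopyAllDisplayModes(displayID, options) as? [CGDisplayMode] {
    for mode in modes {
        // ioDisplayModeID is the mode ID# shown in the post's screenshot.
        print(mode.ioDisplayModeID, mode.width, mode.height, mode.refreshRate)
    }
}
```

If mode ID #13 still doesn't appear with the option set, that would be worth a feedback report.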
Posted by RickB.
Post not yet marked as solved
0 Replies
97 Views
I am getting this crash and am not able to understand it or reproduce it on my end. Any help will be appreciated. CoreGraphics RIPLayerCreateWithData
Post not yet marked as solved
1 Reply
495 Views
I am trying to develop an app that enables calligraphers to use their Apple Pencil as a calligraphy pen. The problem I am facing is that I don't know how to customize the strokes drawn by the Apple Pencil using the PencilKit framework. I also tried using UIKit and handling the Apple Pencil touches myself, but I am not sure how to achieve the desired effect. Can anyone guide me to a solution?
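A minimal sketch of the UIKit route (names and the fixed 45-degree nib angle are illustrative choices, not a definitive implementation): sample Pencil touches, including coalesced ones, and derive a nib width from force and azimuth so strokes thicken and thin like a broad-edge pen.

```swift
import UIKit

struct StrokePoint {
    let location: CGPoint
    let width: CGFloat
}

final class CalligraphyView: UIView {
    private var points: [StrokePoint] = []
    private let nibAngle: CGFloat = .pi / 4   // fixed broad-edge nib orientation

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.type == .pencil else { return }
        // Coalesced touches deliver the full 240 Hz Pencil sample stream.
        for t in event?.coalescedTouches(for: touch) ?? [touch] {
            // Width grows with force and shrinks as the stroke direction
            // aligns with the nib edge - the broad-edge pen effect.
            let alignment = abs(cos(t.azimuthAngle(in: self) - nibAngle))
            let width = max(1, t.force * 10 * alignment)
            points.append(StrokePoint(location: t.location(in: self), width: width))
        }
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setLineCap(.round)
        // Stroke each segment at its own width to approximate a variable-width line.
        for (a, b) in zip(points, points.dropFirst()) {
            ctx.setLineWidth((a.width + b.width) / 2)
            ctx.move(to: a.location)
            ctx.addLine(to: b.location)
            ctx.strokePath()
        }
    }
}
```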
Posted by mohsen98.
Post not yet marked as solved
1 Reply
109 Views
I am trying to build a drawing app for Pencil with features like layers, importing photos, and undo/redo, and initial tests of the responsiveness of the drawing engine are not so great. I'm experimenting based on the SpeedSketch example, except using a bitmap CGContext as a back-buffer to store a layer's pixels. I figure I have to use rasters because of the image-import feature, and for the same reason I don't think I can use PencilKit. I've done some debugging, and it seems the sluggishness may be because every time draw is called in the drawing view, the whole frame is updated. I've overridden the setNeedsDisplay functions to see which rects are invalidated, and that part seems fine; for whatever reason, UIKit is always calling draw with the full frame. Here is the code in my view:

    override func setNeedsDisplay(_ rect: CGRect) {
        print("Needs display \(rect)")
        super.setNeedsDisplay(rect)
    }

    override func setNeedsDisplay() {
        print("Called set needs display for whole frame")
        super.setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        print("Draw rectangle \(rect) in bounds \(bounds.size)")

        guard let context = UIGraphicsGetCurrentContext() else {
            print("Failed to get context")
            return
        }
        if bitmapGraphicsContext != nil {
            if cachedImage == nil {
                cachedImage = bitmapGraphicsContext!.makeImage()
            }
            let transformedRect = transformRect(rect)
            let image = cachedImage?.cropping(to: transformedRect)
            if let imageRef = image {
                context.draw(imageRef, in: rect)
            }
        }
        context.setBlendMode(CGBlendMode.clear)
        if let stroke = strokeToDraw {
            draw(stroke: stroke, in: rect)
        }
    }

The output is:

    Called set needs display for whole frame
    Draw rectangle (0.0, -0.17777, 557.5, 248.0) in bounds (557.6, 247.822)
    Needs display (177.8, 142.91, 40.0, 40.0)
    Draw rectangle (0.0, -0.17777, 557.5, 248.0) in bounds (557.6, 247.822)

Would love some help if anyone knows what to do.
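One pattern that sidesteps draw(_:) and the full-frame blit entirely (a sketch under the post's assumptions: the same `bitmapGraphicsContext` back-buffer, and a hypothetical `presentBackBuffer()` called after each mutation) is to hand the back-buffer's image straight to the layer:

```swift
import UIKit

final class CanvasView: UIView {
    // The bitmap back-buffer holding the canvas pixels, as in the post.
    var bitmapGraphicsContext: CGContext?

    // Instead of overriding draw(_:) and copying pixels on every pass,
    // assign the back-buffer's image to the layer directly. makeImage()
    // is typically copy-on-write against the context's buffer, so this
    // stays cheap until the buffer is next mutated.
    func presentBackBuffer() {
        guard let image = bitmapGraphicsContext?.makeImage() else { return }
        CATransaction.begin()
        CATransaction.setDisableActions(true)  // suppress implicit animations
        layer.contents = image
        CATransaction.end()
    }
}
```

The in-progress stroke can then be rendered into a small overlay view or CAShapeLayer on top, so per-touch work is limited to the stroke rather than re-compositing the whole canvas.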
Post not yet marked as solved
11 Replies
766 Views
I have an application which does screen recording. I am moving the screen recording feature into a standalone native XPC module for better performance, because the app is tied to an old library that cannot generate native code for Apple Silicon (it is Intel-only). My question: this new XPC module belongs to the app (it is launched on demand by the app). If I grant the screen recording permission to the app, will the XPC screen-scraping module also be granted that permission? Right now it looks like it is not: after I granted the application the screen recording permission, the display stream still won't produce frame data.
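A small diagnostic worth running inside the XPC service itself (a sketch, not a confirmed answer): TCC evaluates a "responsible process", and a service embedded in the app bundle normally inherits the app's Screen Recording grant, while a standalone binary is treated as its own TCC client.

```swift
import CoreGraphics

// Run inside the XPC service to see how TCC classifies this process.
if CGPreflightScreenCaptureAccess() {
    print("Screen capture access is granted to this process chain")
} else {
    // First call triggers the system prompt / a trip to System Settings,
    // which also reveals WHICH bundle TCC thinks is asking.
    let granted = CGRequestScreenCaptureAccess()
    print("Screen capture access granted:", granted)
}
```

If the prompt names the XPC service rather than the app, the service is not being attributed to the app bundle, which would explain the empty display stream.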
Posted by stang.
Post not yet marked as solved
0 Replies
137 Views
I use this method for screenshots, but it blocks the main thread. Is there an alternative way to take screenshots? I need your help.

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, window.center.x, window.center.y);
        CGContextConcatCTM(context, window.transform);
        CGContextTranslateCTM(context,
                              -window.bounds.size.width * window.layer.anchorPoint.x,
                              -window.bounds.size.height * window.layer.anchorPoint.y);
        if ([window respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
            [window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
        } else {
            [window.layer renderInContext:context];
        }
        CGContextRestoreGState(context);
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
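A sketch of one way to reduce the stall (names are illustrative): the view walk must stay on the main thread, but passing afterScreenUpdates: false avoids forcing a synchronous layout/commit, and the comparatively expensive encoding can move off the main thread. UIGraphicsImageRenderer also replaces the older begin/end context pair.

```swift
import UIKit

func captureScreens(completion: @escaping (Data?) -> Void) {
    let bounds = UIScreen.main.bounds
    let image = UIGraphicsImageRenderer(bounds: bounds).image { _ in
        for window in UIApplication.shared.windows {
            // false: don't block waiting for a pending render pass to commit.
            window.drawHierarchy(in: window.frame, afterScreenUpdates: false)
        }
    }
    // PNG encoding is the slow part; do it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        completion(image.pngData())
    }
}
```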
Post not yet marked as solved
0 Replies
162 Views
About a month ago I asked whether I could use HDR in SpriteKit. This wasn't a well-phrased question since HDR seems to mean different things in different questions, which probably led to my question going unanswered. What I meant to ask was whether it's possible to use assets that have a color gamut that many modern devices are capable of displaying (XDR is somewhat standard among mid- to high-end devices). In other words: Is SpriteKit keeping up with the hardware? If not, what framework options do I have that can quickly display large Rec. 2020 images? Do any of the Core frameworks offer this capability?
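On the last question: among the Core frameworks, Core Image can work end-to-end in a wide-gamut space. A sketch (the file path is illustrative), setting Rec. 2020 as the working space and rendering into Display P3 output, which XDR-class screens can display:

```swift
import CoreImage

let rec2020 = CGColorSpace(name: CGColorSpace.itur_2020)!
let p3 = CGColorSpace(name: CGColorSpace.displayP3)!

// All intermediate math happens in Rec. 2020.
let context = CIContext(options: [.workingColorSpace: rec2020])

if let image = CIImage(contentsOf: URL(fileURLWithPath: "/path/to/rec2020.png")),
   let rendered = context.createCGImage(image, from: image.extent,
                                        format: .RGBA16,   // 16-bit to avoid banding
                                        colorSpace: p3) {
    print("Rendered \(rendered.width)x\(rendered.height) in Display P3")
}
```

Whether SpriteKit preserves that gamut through its own pipeline is exactly the open question in the post; this only shows the Core-framework side is available.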
Posted by wmk.
Post not yet marked as solved
0 Replies
127 Views
Recently a bug related to CVDisplayLink has been affecting some users of our app. In our code we simply create a display link using CVDisplayLinkCreateWithActiveCGDisplays and do the rendering in the callback. The logs show that in some cases the interval between two display link callbacks lasts many hours, and in other cases the callback stops being called from the beginning or after a certain time. We haven't changed that code in a long time, and the bug only happens on macOS 12.3.0+ on M1 MacBooks. So we want to know whether this is a system bug, or whether there are changes to CVDisplayLink in recent macOS versions that require compatibility work on our side. (Since 12.1, there are some logs about CVDisplayLink in the Xcode output.)

    CVCGDisplayLink::setCurrentDisplay: 69734662
    CVDisplayLinkCreateWithCGDisplays count: 2 [displayID[0]: 0x4281106]
    [CVCGDisplayLink: 0x7fde5200ce20] CVDisplayLinkStart
    CVDisplayLink::start
    CVXTime::reset
    CVXTime::reset
    CVXTime::reset
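One defensive pattern to try (a sketch, not a confirmed fix for this bug): display reconfiguration events, common on M1 laptops around sleep/wake and monitor hot-plug, can leave a link created at launch pointing at a stale display. Retargeting and restarting the link on reconfiguration keeps it attached. The caller must keep the display link alive for as long as the callback is registered.

```swift
import CoreVideo
import CoreGraphics

func installReconfigurationHandler(for displayLink: CVDisplayLink) {
    CGDisplayRegisterReconfigurationCallback({ display, flags, userInfo in
        // Only react once a display has actually been (re)configured.
        guard flags.contains(.setModeFlag) || flags.contains(.addFlag) else { return }
        let link = Unmanaged<CVDisplayLink>.fromOpaque(userInfo!).takeUnretainedValue()
        // Re-point the link at the (possibly new) display and restart it.
        CVDisplayLinkSetCurrentCGDisplay(link, display)
        if !CVDisplayLinkIsRunning(link) {
            CVDisplayLinkStart(link)
        }
    }, Unmanaged.passUnretained(displayLink).toOpaque())
}
```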
Posted by Runze.
Post not yet marked as solved
1 Replies
429 Views
Hi all, I'm currently implementing a feature that performs customized behavior on each desktop (space). As far as I know, Apple does not provide an API that can enumerate all spaces under each screen. I've found a way to get all spaces, but I cannot find any method to determine which screen each space belongs to. Can somebody help me out? Thanks in advance.
Post not yet marked as solved
5 Replies
1.7k Views
I have a background process which is updating an IOSurface-backed CVPixelBuffer at 30fps. I want to render a preview of that pixel buffer in my window, scaled to the size of the NSView that's displaying it. I get a callback every time the pixel buffer/IOSurface is updated. I've tried using a custom layer-backed NSView and setting the layer contents to the IOSurface, which works when the view is created, but it's never updated unless the window is resized or another window moves in front of it. I've tried calling setNeedsDisplay() on both my view and my layer, I've tried changing the layerContentsRedrawPolicy to .onSetNeedsDisplay, and I've tried making sure all my content and update code happens on the UI thread, but I can't get it to update dynamically. Is there a way to bind my layer or view to the IOSurface once and then just have it reflect updates as they happen, or, if not, at least mark the layer as dirty each frame when it changes? I've pored over the docs but I don't see much about the relationship between IOSurface and CALayer.contents, or about when in the lifecycle to mark things dirty (especially when updates are happening outside the view). Here's example code:

    class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
        let testCard = TestCardHelper()

        override var wantsUpdateLayer: Bool {
            get { return true }
        }

        required init?(coder decoder: NSCoder) {
            super.init(coder: decoder)
            self.wantsLayer = true
            self.layerContentsRedrawPolicy = .onSetNeedsDisplay

            /* Scale the incoming data to the size of the view */
            self.layer?.transform = CATransform3DMakeScale(
                (self.layer?.contentsScale)! * self.frame.width / CGFloat(VideoSettings.width),
                (self.layer?.contentsScale)! * self.frame.height / CGFloat(VideoSettings.height),
                CGFloat(1))

            /* Register us with the content provider */
            VideoFeedBrowser.instance.registerConsumer(self)
        }

        deinit {
            VideoFeedBrowser.instance.deregisterConsumer(self)
        }

        override func updateLayer() {
            /* ideally we wouldn't need to do this */
            updateLayer(pixelBuffer: VideoFeedBrowser.instance.renderer.pixelBuffer)
        }

        /* This gets called every time our pixelbuffer is updated (30fps) */
        @objc
        func updateFrame(pixelBuffer: CVPixelBuffer) {
            updateLayer(pixelBuffer: pixelBuffer)
        }

        func updateLayer(pixelBuffer: CVPixelBuffer) {
            guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
                print("pixelbuffer isn't IOsurface backed! noooooo!")
                return
            }
            /* these don't have any effect */
            // self.layer?.setNeedsDisplay()
            // self.setNeedsDisplay(invalidRect: self.visibleRect)
            self.layer?.contents = surface
        }
    }
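A sketch of one commonly suggested workaround (names like `surfaces` and `present()` are illustrative, and this assumes the producer can target two buffers): Core Animation tends to treat reassigning the *same* contents object as "no change", so double-buffering hands the layer a different object each frame.

```swift
import AppKit
import IOSurface

final class SurfaceView: NSView {
    var surfaces: [IOSurface] = []   // two IOSurface-backed buffers, drawn alternately
    private var current = 0

    /// Call after each frame is written into the *other* surface.
    func present() {
        guard surfaces.count == 2 else { return }
        current = 1 - current
        CATransaction.begin()
        CATransaction.setDisableActions(true)   // no implicit crossfade
        layer?.contents = surfaces[current]     // a new object, so CA re-commits
        CATransaction.end()
        CATransaction.flush()   // push the commit without waiting for a runloop turn
    }
}
```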
Posted by awmcclain.
Post not yet marked as solved
0 Replies
137 Views
I am creating an image using the following code:

    UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat preferredFormat];
    format.opaque = false;
    NSError *error = nil;
    UIImage *image = nil;
    [[[UIGraphicsImageRenderer alloc] initWithSize:size format:format]
        runDrawingActions:^(UIGraphicsImageRendererContext *rendererContext) {
            // ...issue drawing commands into UIGraphicsGetCurrentContext()
        }
        completionActions:^(UIGraphicsImageRendererContext *rendererContext) {
            image = rendererContext.currentImage;
        }
        error:&error];

When executing the line image = rendererContext.currentImage; I'm seeing a call stack leading to vImageConvert_AnyToAny. If I understand correctly, an image format conversion is occurring, which is unexpected as I am trying to draw in the device's native format. Are there any workarounds? Or maybe it is the expected behavior and the conversion does not impact performance in any way?
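One thing worth trying (a sketch, not a confirmed diagnosis): on wide-gamut devices, preferredFormat may select an extended-range backing store, and reading currentImage can then trigger a vImage any-to-any conversion. Forcing the standard range changes the backing format:

```swift
import UIKit

let format = UIGraphicsImageRendererFormat.preferred()
format.opaque = false
format.preferredRange = .standard   // avoid an extended-range backing store

let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100),
                                       format: format)
let image = renderer.image { ctx in
    UIColor.systemBlue.setFill()
    ctx.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}
```

Profiling before and after the preferredRange change should show whether the conversion disappears from the currentImage call stack.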
Posted by AlexDS1.
Post not yet marked as solved
0 Replies
144 Views
Greetings, recently I want to make an application that is capable of rotating my external display. I found these related APIs: CGBeginDisplayConfiguration and CGDisplayRotation. The first one lets me change the height and width of the display, but not the rotation angle, and the second one only reports the current rotation angle. Is it possible to change the rotation of a display through an API? Thank you, Kuroame
Posted by Kuroame.
Post not yet marked as solved
1 Reply
210 Views
We have a process that takes high-resolution source PNG/JPG images and creates renditions of these images in various lower-resolution formats / cropped versions. This process works well and gets the results we want; we can re-run these functions on hundreds of images and get the same output every time, and we then commit these files to git repos. HOWEVER, every time we update macOS to a new version (such as updating to High Sierra, Monterey, etc.), running these functions gives different output with different hashes for ALL of the images, so git treats the images as changed even though the source images are identical. FURTHER, JPG images seem to have different output when run on an Intel Mac vs. an Apple M1 Mac. We have checked the head of the output images using a command like:

    od -bc banner.png | head

This results in the same head data in all cases, even though the actual image data doesn't match after version changes. We've also checked CGImageSourceCopyPropertiesAtIndex, such as:

    {
        ColorModel = RGB;
        Depth = 8;
        HasAlpha = 1;
        PixelHeight = 1080;
        PixelWidth = 1920;
        ProfileName = "Generic RGB Profile";
        "{Exif}" = {
            PixelXDimension = 1920;
            PixelYDimension = 1080;
        };
        "{PNG}" = {
            InterlaceType = 0;
        };
    }

which does not show any differences between versions of macOS or between Intel and M1. We don't want the hash to keep changing on us, causing extra churn in git, and we're hoping for feedback that may help us get consistent output in all cases. Any tips are greatly appreciated. Source code is attached because it was too long to be inline.
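One alternative, sketched under the assumption that the goal is change *detection* rather than byte-identical files: encoder output isn't guaranteed stable across macOS versions or architectures, but the decoded pixels usually are. Hashing the decoded bitmap in a single forced format gives a git-friendly stability check (color-managed drawing could still differ by a rounding step across versions, so this is a mitigation, not a guarantee).

```swift
import CoreGraphics
import ImageIO
import CryptoKit
import Foundation

/// Hash the decoded pixels of an image file in a canonical format
/// (sRGB, 8-bit RGBA premultiplied), ignoring encoder differences.
func pixelHash(of url: URL) -> String? {
    guard let src = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(src, 0, nil) else { return nil }

    let width = image.width, height = image.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)

    let drew = pixels.withUnsafeMutableBytes { buf -> Bool in
        guard let ctx = CGContext(data: buf.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drew else { return nil }

    return SHA256.hash(data: Data(pixels))
        .map { String(format: "%02x", $0) }
        .joined()
}
```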
Posted by dmi1011.
Post not yet marked as solved
0 Replies
195 Views
I used to have a project that used Quartz Composer and OpenGL, but Xcode 13 has deprecated these two components, which means I can no longer get off-screen images during video production. The previous code to create the OpenGL context is as follows:

    - (id)initOffScreenOpenGLPixelsWide:(unsigned)width pixelsHigh:(unsigned)height
    {
        // Check parameters - rendering at sizes smaller than 16x16 will likely produce garbage
        if ((width < 16) || (height < 16)) {
            [self release];
            return nil;
        }
        self = [super init];
        if (self != nil) {
            NSOpenGLPixelFormatAttribute pixattributes[] = {
                NSOpenGLPFADoubleBuffer,
                NSOpenGLPFANoRecovery,
                NSOpenGLPFAAccelerated,
                NSOpenGLPFADepthSize, 24,
                (NSOpenGLPixelFormatAttribute)0
            };
            _pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:pixattributes];

            // Create the OpenGL context to render with (with color and depth buffers)
            _openGLContext = [[NSOpenGLContext alloc] initWithFormat:_pixelFormat shareContext:nil];
            if (_openGLContext == nil) {
                DDLogInfo(@"Cannot create OpenGL context");
                [self release];
                return nil;
            }

            // Create the OpenGL pixel buffer to render into
            NSOpenGLPixelBuffer *glPixelBuffer = [[NSOpenGLPixelBuffer alloc]
                initWithTextureTarget:GL_TEXTURE_RECTANGLE_EXT
                textureInternalFormat:GL_RGBA
                textureMaxMipMapLevel:0
                pixelsWide:width
                pixelsHigh:height];
            if (glPixelBuffer == nil) {
                DDLogInfo(@"Cannot create OpenGL pixel buffer");
                [self release];
                return nil;
            }
            [_openGLContext setPixelBuffer:glPixelBuffer
                               cubeMapFace:0
                               mipMapLevel:0
                      currentVirtualScreen:[_openGLContext currentVirtualScreen]];

            // Destroy the OpenGL pixel buffer
            [glPixelBuffer release];

            NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
            [attributes setObject:[NSNumber numberWithUnsignedInt:k32BGRAPixelFormat]
                           forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
            [attributes setObject:[NSNumber numberWithUnsignedInt:width]
                           forKey:(NSString *)kCVPixelBufferWidthKey];
            [attributes setObject:[NSNumber numberWithUnsignedInt:height]
                           forKey:(NSString *)kCVPixelBufferHeightKey];

            // Create buffer pool to hold our frames
            OSErr theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                                                     (CFDictionaryRef)attributes, &_bufferPool);
            if (theError != kCVReturnSuccess) {
                DDLogInfo(@"CVPixelBufferPoolCreate() failed with error %i", theError);
                [self release];
                return nil;
            }
        }

        // A context is current on a per-thread basis. Multiple threads must
        // serialize calls into the same context object.
        [self.openGLContext makeCurrentContext];
        return self;
    }

This creates an NSOpenGLPixelBuffer object and sets it as the pixel buffer of the NSOpenGLContext, but in Xcode 13 the NSOpenGLPixelBuffer can no longer be created successfully. The help documentation recommends using GL_EXT_framebuffer_object instead, so I tried the following code:

    // RGBA8 renderbuffer, 24-bit depth renderbuffer
    glGenFramebuffersEXT(1, &fb);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

    // Create and attach a color buffer
    glGenRenderbuffersEXT(1, &color_rb);
    // We must bind color_rb before we call glRenderbufferStorageEXT
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
    // The storage format is RGBA8
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA, width, height);
    // Attach color buffer to FBO
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_RENDERBUFFER_EXT, color_rb);

    glGenRenderbuffersEXT(1, &depth_rb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);
    // Attach depth buffer to FBO
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth_rb);

    // Does the GPU support the current FBO configuration?
    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    switch (status) {
        case GL_FRAMEBUFFER_COMPLETE_EXT:
            DDLogInfo(@"gl no problem");
            break;
        default:
            DDLogInfo(@"error");
            break;
    }

    // And now you can render to the FBO (also called a renderbuffer)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

When running the program I get the "gl no problem" log. However, when reading the off-screen image data, although glGetError does not return an error code, I can only read a black image. In previous versions, a QCRenderer-rendered image could be obtained successfully. Reading off-screen images is implemented as follows:

    - (CVPixelBufferRef)readPixelBuffer
    {
        // Create pixel buffer from pixel buffer pool
        CVPixelBufferRef bufferRef;
        OSErr theError = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, _bufferPool, &bufferRef);
        if (theError) {
            DDLogInfo(@"CVPixelBufferPoolCreatePixelBuffer() failed with error %i", theError);
            return nil;
        }
        theError = CVPixelBufferLockBaseAddress(bufferRef, 0);
        if (theError) {
            DDLogInfo(@"CVPixelBufferLockBaseAddress() failed with error %i", theError);
            return nil;
        }
        void *bufferPtr = CVPixelBufferGetBaseAddress(bufferRef);
        size_t width = CVPixelBufferGetWidth(bufferRef);
        size_t height = CVPixelBufferGetHeight(bufferRef);
        size_t bufferRowBytes = CVPixelBufferGetBytesPerRow(bufferRef);

        CGLContextObj cgl_ctx = [_openGLContext CGLContextObj];
        CGLLockContext(cgl_ctx);

        // Read pixels back from the OpenGL buffer in ARGB 32-bit format.
        // For extra safety, we save / restore the OpenGL states we change.
        GLint save;
        glGetIntegerv(GL_PACK_ROW_LENGTH, &save);
        glPixelStorei(GL_PACK_ROW_LENGTH, (int)bufferRowBytes / 4);
        glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_BGRA,
                     GL_UNSIGNED_INT_8_8_8_8_REV, bufferPtr);
        flipImage(bufferPtr, width, height, bufferRowBytes);
        glPixelStorei(GL_PACK_ROW_LENGTH, save);

        CGLUnlockContext(cgl_ctx);

        GLenum code = glGetError();
        if (code) return nil;
        CVPixelBufferUnlockBaseAddress(bufferRef, 0);
        return bufferRef;
    }

Could an expert advise how to solve this problem?
Post not yet marked as solved
0 Replies
214 Views
Hi! I'm looking to create a system utility that applies an OpenGL (or Metal?) shader to the window the user is focused on (or to the whole screen), ideally with a keyboard shortcut. From what I can tell, applying OpenGL shaders or pixel-level modifications to the whole screen at a time is possible (e.g. BlackLight by Michel Fortin). Any pointers to this kind of thing would be great. Combining Automator workflows with some system-level code seems like it would do the trick, but I'm not sure where to start. Update: It looks like CGColorSpace might be helpful for applying color transforms to windows. Perhaps there's a way to make a Swift app similar to Rectangle that could modify these Core Graphics elements instead of the coordinate/transform ones? Thanks for the help, Jack
Post not yet marked as solved
1 Reply
277 Views
Hello, at runtime I'm being shown an Optimization Opportunity suggestion for some code I've written:

"The layer is using a simple layer with background color set as a mask. Instead, use a container layer of the same 'frame' and 'cornerRadius' as the mask, but with 'masksToBounds' set to YES."

I'm afraid I don't know what is meant here by a "container layer". This is the code that threw up the optimization opportunity:

    let maskView = UIView(frame: CGRect(x: 0, y: 0, width: 45, height: 90))
    maskView.backgroundColor = .black
    filledImage.mask = maskView

The code above masks half of a 90x90 UIImageView, filledImage. Any ideas how this could be refactored to use a "container layer"? Many thanks.
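A sketch of what the diagnostic appears to mean (sizes taken from the post, the zero cornerRadius is an assumption since the mask view sets none): instead of a plain black view used as a mask, clip with an ordinary parent view whose layer has the same frame and cornerRadius and has masksToBounds enabled. Clipping is cheaper than off-screen mask compositing.

```swift
import UIKit

// A "container": a parent whose layer clips its sublayers to its bounds.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 45, height: 90))
container.layer.cornerRadius = 0      // match the old mask's cornerRadius
container.layer.masksToBounds = true  // clip instead of masking

// The full 90x90 image view goes inside; only its left 45x90 half shows,
// the same result the black mask view produced.
let filledImage = UIImageView(frame: CGRect(x: 0, y: 0, width: 90, height: 90))
container.addSubview(filledImage)
```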