Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.

Core Graphics Documentation

Posts under Core Graphics tag

59 Posts
Post not yet marked as solved
3 Replies
721 Views
I wrote the code below to save an image from PHImageManager, but a crash occurs. The data comes from PHImageManager.default().requestImageDataAndOrientation.

Crashing code:

    guard let cgImage = UIImage(data: data)?.cgImage else { return }
    let metadata = ciImage.properties
    let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, uti as CFString, 1, nil)!
    CGImageDestinationAddImage(destination, cgImage, metadata as CFDictionary?)
    let success: Bool = CGImageDestinationFinalize(destination) // <- crashes

Non-crashing code:

    guard let cgImage = UIImage(data: data)?.cgImage else { return }
    let metadata = ciImage.properties
    let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, uti as CFString, 1, nil)!
    CGImageDestinationAddImage(destination, cgImage, nil)
    let success: Bool = CGImageDestinationFinalize(destination) // <- does not crash

The metadata dictionary:

    {
        ColorModel = RGB;
        DPIHeight = 72;
        DPIWidth = 72;
        Depth = 8;
        PixelHeight = 2160;
        PixelWidth = 2880;
        ProfileName = "sRGB IEC61966-2.1";
        "{Exif}" = {
            ApertureValue = "1.356143809255609";
            BrightnessValue = "0.1278596944592232";
            ColorSpace = 1;
            ComponentsConfiguration = (1, 2, 3, 0);
            CompositeImage = 2;
            DateTimeDigitized = "2021:12:28 08:38:28";
            DateTimeOriginal = "2021:12:28 08:38:28";
            DigitalZoomRatio = "1.300085984522786";
            ExifVersion = (2, 2, 1);
            ExposureBiasValue = "0.09803208290449658";
            ExposureMode = 0;
            ExposureProgram = 2;
            ExposureTime = "0.025";
            FNumber = "1.6";
            Flash = 16;
            FlashPixVersion = (1, 0);
            FocalLenIn35mmFilm = 33;
            FocalLength = "4.2";
            ISOSpeedRatings = (400);
            LensMake = Apple;
            LensModel = "iPhone 12 back camera 4.2mm f/1.6";
            LensSpecification = ("4.2", "4.2", "1.6", "1.6");
            MeteringMode = 5;
            OffsetTime = "+09:00";
            OffsetTimeDigitized = "+09:00";
            OffsetTimeOriginal = "+09:00";
            PixelXDimension = 2880;
            PixelYDimension = 2160;
            SceneCaptureType = 0;
            SceneType = 1;
            SensingMethod = 2;
            ShutterSpeedValue = "5.321697281908764";
            SubjectArea = (2011, 1509, 2216, 1329);
            SubsecTimeDigitized = 686;
            SubsecTimeOriginal = 686;
            WhiteBalance = 0;
        };
        "{IPTC}" = {
            DateCreated = 20211228;
            DigitalCreationDate = 20211228;
            DigitalCreationTime = 083828;
            TimeCreated = 083828;
        };
        "{JFIF}" = {
            DensityUnit = 0;
            JFIFVersion = (1, 0, 1);
            XDensity = 72;
            YDensity = 72;
        };
        "{TIFF}" = {
            DateTime = "2021:12:28 08:38:28";
            HostComputer = "iPhone 12";
            Make = Apple;
            Model = "iPhone 12";
            Orientation = 0;
            ResolutionUnit = 2;
            Software = "Snowcorp SODA 5.4.8 / 15.2";
            XResolution = 72;
            YResolution = 72;
        };
    }

What could the reasons be? If I use CGImageDestinationAddImageFromSource instead of CGImageDestinationAddImage, there is no crash even when I add the metadata. If I use PHImageManager.default().requestImage instead of PHImageManager.default().requestImageDataAndOrientation and extract the cgImage from that, there is also no crash when adding the metadata.
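The CGImageDestinationAddImageFromSource workaround described above can be sketched as follows. This is a hedged sketch, not a confirmed fix: `data`, `url`, `uti`, and `metadata` are assumed to be the same values as in the post, and re-encoding from the source avoids round-tripping the pixels and properties through UIImage/CIImage.

```swift
import ImageIO

// Re-encode directly from the original image data; the properties dictionary
// is applied to frame 0 of the source without decoding to a UIImage first.
func saveImage(data: Data, to url: URL, uti: String, metadata: [String: Any]) -> Bool {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            uti as CFString, 1, nil) else {
        return false
    }
    CGImageDestinationAddImageFromSource(destination, source, 0, metadata as CFDictionary)
    return CGImageDestinationFinalize(destination)
}
```

Since the crash only happens when the CIImage-derived properties are passed to CGImageDestinationAddImage, this path also sidesteps whatever key in that dictionary ImageIO is choking on.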
Posted by mj.lee123. Last updated.
Post not yet marked as solved
4 Replies
546 Views
I'm wondering if it is possible to get system-wide NSEvent.cursorUpdate (or cursor events in any other form):

    NSEvent.addGlobalMonitorForEvents(matching: .cursorUpdate, handler: cursorEventReceived(_:))

That isn't working, and all the documentation and examples I can find are local and involve the app's own views, but my app does not have any views and still needs these events if possible.
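A hedged sketch of the nearest workaround I can think of, under the assumption that .cursorUpdate is only ever dispatched to the view under the cursor and so may never arrive through a global monitor: watch global mouse movement instead and sample the system cursor yourself (NSCursor.currentSystem can be nil, and this is an approximation, not a confirmed substitute).

```swift
import AppKit

// Watch mouse movement system-wide and sample the cursor other apps have set.
let monitor = NSEvent.addGlobalMonitorForEvents(matching: [.mouseMoved]) { _ in
    // NSCursor.currentSystem reflects the current system cursor regardless of
    // which app set it; it may be nil.
    let cursor = NSCursor.currentSystem
    print("mouse at \(NSEvent.mouseLocation), cursor: \(String(describing: cursor))")
}
```

Depending on the event types monitored, global monitors may also require the user to grant the app input-monitoring or accessibility permission.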
Posted by gamakaze. Last updated.
Post not yet marked as solved
0 Replies
278 Views
I am trying to generate a GIF file from an array of images using the following code snippet, and am facing a crash on iOS 15. For iOS 15, updating the deprecated kUTTypeGIF to UTType.gif.identifier doesn't prevent the crash.

    func createGif(fromImages images: [UIImage], withSize size: CGSize) -> CFURL? {
        guard !images.isEmpty else { return nil }
        let fileProperties: CFDictionary = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: 0]] as CFDictionary
        let frameProperties: CFDictionary = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: 0.125]] as CFDictionary
        // gets the url
        let documentsDirectoryURL: URL? = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
        // add the filename
        let fileURL: URL? = documentsDirectoryURL?.appendingPathComponent("animatedImage.gif")
        if let url = fileURL as CFURL? {
            // iOS 15 -> updating to UTType.gif.identifier
            if let destination = CGImageDestinationCreateWithURL(url, kUTTypeGIF, images.count, nil) {
                CGImageDestinationSetProperties(destination, fileProperties)
                for image in images {
                    autoreleasepool {
                        let modifiedImage = image.scaled(to: size, scalingMode: .aspectFill)
                        if let cgImage = modifiedImage.cgImage {
                            CGImageDestinationAddImage(destination, cgImage, frameProperties)
                        }
                    }
                }
                if !CGImageDestinationFinalize(destination) {
                    print("Failed to finalize the image destination")
                    return nil
                } else {
                    // converted to gif successfully
                    return url
                }
            }
        }
        // something went wrong
        return nil
    }

The memory allocated by CGImageDestinationAddImage grows with each iteration and finally lands in applicationDidReceiveMemoryWarning, which terminates the app. The autoreleasepool added within the for loop fails to release the allocated memory, leading to the crash. Any thoughts on resolving the issue are much appreciated. Thanks!
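For reference, the iOS 15 spelling of the destination creation mentioned above, with UTType.gif.identifier in place of the deprecated kUTTypeGIF (a sketch assuming the same `url` and `images` as in the post; on its own it does not address the memory growth):

```swift
import ImageIO
import UniformTypeIdentifiers

// Same destination, created with the non-deprecated GIF type identifier.
if let destination = CGImageDestinationCreateWithURL(url,
                                                     UTType.gif.identifier as CFString,
                                                     images.count, nil) {
    // ... CGImageDestinationSetProperties / AddImage / Finalize as above ...
}
```

If the growth turns out to be encoder state held by the destination rather than autoreleased objects, possible (unverified) mitigations are reducing `size` before the loop, or splitting the output into several shorter GIFs so each destination is finalized and released sooner.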
Posted by dev_lapse. Last updated.
Post not yet marked as solved
2 Replies
275 Views
Hi, I get a bitmapData full of zeros after creating an NSImage with initWithContentsOfURL: on a HEIF RGB image with 10 bits per component (bitsPerPixel is 40). The image is correctly displayed by the Finder and Preview. If I convert the bitmap representation with:

    CGImageRef cgImage = imageRep.CGImage;
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef data = CGDataProviderCopyData(provider);

... the resulting CFData contains non-zero bytes. The call to CGDataProviderCopyData is not free and takes some time, as if there is some kind of unpacking process. I do not get such null content with any other kind of image. Is there something I forgot to do before calling bitmapData? Thanks.
Posted by iPerKard. Last updated.
Post not yet marked as solved
0 Replies
217 Views
I am seeing an issue where CGDisplayCopyDisplayMode returns NULL for a monitor that was recently plugged in. We call CGGetOnlineDisplayList to get the list of displays, then call CGDisplayCopyDisplayMode with the CGDirectDisplayID it provides. However, for a recently plugged-in monitor it returns NULL. This is strange because CGDisplayIsOnline returns true and CGDisplayBounds returns correct values. The following sample app reproduces the issue:

    void printDisplays() {
        [NSApplication sharedApplication];
        while (true) {
            sleep(1);
            CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false);
            CGDirectDisplayID displays[16];
            CGDisplayCount displayCount;
            CGError err = CGGetOnlineDisplayList(16, displays, &displayCount);
            if (err != kCGErrorSuccess) {
                return;
            }
            printf("cgdisplays cgdisplaycount \n\n");
            for (uint32_t i = 0; i < displayCount; i++) {
                CGDirectDisplayID cgDisplayID = displays[i];
                printf("cgdisplay id %d \n", cgDisplayID);
                CGRect rect = CGDisplayBounds(cgDisplayID);
                printf("cgrect origin x %.2f, y %.2f \n", rect.origin.x, rect.origin.y);
                printf("cgrect size width %.2f, height %.2f \n", rect.size.width, rect.size.height);
                CGDisplayModeRef displayMode = CGDisplayCopyDisplayMode(cgDisplayID);
                int pixelWidth = CGDisplayModeGetPixelWidth(displayMode);
                int pixelHeight = CGDisplayModeGetPixelHeight(displayMode);
                printf("cgdisplaymode pixelWidth %d, pixelHeight %d \n\n", pixelWidth, pixelHeight);
            }
            printf("\n\n\n");
        }
    }

Adding [NSApplication sharedApplication] and CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false) resolved the issue in the sample app, but did not resolve it in my real application. That is, the code above works because those lines were added, but adding the same lines in my application still returns NULL.
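Since the mode seems to become available some time after the display first reports online, one hedged mitigation is to treat a NULL result as transient and retry while pumping the run loop. This is a sketch, not a confirmed fix; the delay and attempt count are guesses.

```swift
import CoreGraphics
import CoreFoundation

// Retry CGDisplayCopyDisplayMode for a freshly connected display, giving the
// window server time to publish the mode between attempts.
func copyDisplayMode(_ display: CGDirectDisplayID,
                     attempts: Int = 20) -> CGDisplayMode? {
    for _ in 0..<attempts {
        if let mode = CGDisplayCopyDisplayMode(display) {
            return mode
        }
        CFRunLoopRunInMode(.defaultMode, 0.1, false)
    }
    return nil
}
```

Separately, note that the sample passes a possibly-NULL displayMode straight into CGDisplayModeGetPixelWidth; guarding against NULL there avoids a crash while the mode is unavailable.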
Posted. Last updated.
Post not yet marked as solved
1 Reply
296 Views
Hello, I want to order the BestWhoop HDMI Male to USB-C Female Cable Adapter with Micro USB Power Cable, but before I order it I have a question. I have a MacBook Pro (early 2013) with the following connections: 2x Thunderbolt 2 (which do not support a 3440 x 1440 resolution), 1x HDMI port, and 3x USB 3.0. I have bought a Dell S3422DWG gaming monitor and want to use the full 3440 x 1440 resolution, but it only gives me the 1080p option. I did some research on how to make it work, and everywhere on the internet it says to use a USB-C (Thunderbolt 3) to DisplayPort 1.4 cable. Since my MacBook does not have a USB-C (Thunderbolt 3) connection, I was looking for a solution and found this product. My thinking is that if I put this adapter in my HDMI port and run the USB-C (Thunderbolt 3) to DisplayPort 1.4 cable from there, it might work. Link: https://www.bestwhoop.com/products/bestwhoop-hdmi-male-to-usb-c-female-cable-adapter-with-micro-usb-power-cable?variant=40241638146207 Can somebody please help me?
Posted by Samvandop. Last updated.
Post not yet marked as solved
1 Reply
289 Views
Hopefully someone can help me explain what is happening. In short, at certain scales, when drawing an NSImage created from a bitmap, there is a stripe of black pixels near the right border of the image. This happens when the image is drawn at about half the size of the actual image. The artefact only occurs for certain scales; for smaller and larger scales it is not there, which suggests that the problem is not with the bitmap image itself. For a specific scale the artefact occurs consistently: it appears every time the image is drawn, and also when a newly created image is drawn. The artefact occurs for differently sized images (at similar relative drawing scales). To further complicate things, the problem occurs in my (relatively large) application project; when I run the exact same code in a toy project created to reproduce the problem, the artefact is not there. The toy project is built and tested on the same machine (a Mac mini with macOS Monterey 12.0 using Xcode 12.4). I tried to keep the project settings similar (e.g. both projects target macOS 10.9), but obviously something is different which impacts how images are drawn.
The code with the problem is as follows:

    - (NSImage *)createTestImageFromColor {
        NSSize size = NSMakeSize(640, 640);
        NSImage *image = [[[NSImage alloc] initWithSize: size] autorelease];
        [image lockFocus];
        [NSColor.blueColor drawSwatchInRect: NSMakeRect(0, 0, size.width, size.height)];
        [NSColor.whiteColor drawSwatchInRect: NSMakeRect(10, 10, size.width - 20, size.height - 20)];
        [image unlockFocus];
        return image;
    }

    - (NSImage *)createTestImageFromBitmap {
        int w = 640, h = 640;
        NSBitmapImageRep *bitmap;
        bitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
            pixelsWide: w pixelsHigh: h
            bitsPerSample: 8 samplesPerPixel: 3
            hasAlpha: NO isPlanar: NO
            colorSpaceName: NSDeviceRGBColorSpace
            bytesPerRow: 0 bitsPerPixel: 32];
        int bmw = (int)bitmap.bytesPerRow / sizeof(UInt32);
        for (int y = 0; y < h; ++y) {
            UInt32 *pos = (UInt32 *)bitmap.bitmapData + y * bmw;
            for (int x = 0; x < w; ++x) {
                *pos++ = 0xFF;
            }
        }
        NSImage *image = [[NSImage alloc] initWithSize: NSMakeSize(w, h)];
        [image addRepresentation: bitmap];
        [bitmap release];
        return [image autorelease];
    }

    - (void)drawScaledImages {
        if (testImage == nil) {
            testImage = [[self createTestImageFromColor] retain];
        }
        if (testImage2 == nil) {
            testImage2 = [[self createTestImageFromBitmap] retain];
        }
        [NSColor.whiteColor setFill];
        NSRectFill(self.bounds);
        [testImage drawInRect: NSMakeRect(50, 50, 320, 320)
                     fromRect: NSZeroRect
                    operation: NSCompositeCopy
                     fraction: 1.0];
        [testImage2 drawInRect: NSMakeRect(50, 100, 320, 320)
                      fromRect: NSZeroRect
                     operation: NSCompositeCopy
                      fraction: 1.0];
        [testImage drawInRect: NSMakeRect(450, 50, 480, 480)
                     fromRect: NSZeroRect
                    operation: NSCompositeCopy
                     fraction: 1.0];
        [testImage2 drawInRect: NSMakeRect(450, 100, 480, 480)
                      fromRect: NSZeroRect
                     operation: NSCompositeCopy
                      fraction: 1.0];
    }

It draws two images. Both have the same size, but are created differently. The artefact only occurs for the image created from the bitmap (testImage2), and only when drawn at size 320x320; at size 480x480 the artefact is not there. The black pixels at the right of the left red square are the artefact. It may be difficult to reproduce the problem, as the same code in a minimal project works fine. So does anyone have any pointers on how to troubleshoot this? I cannot step into the drawInRect code, so I am unable to determine where the code paths diverge and what causes this. Could it be that my application is somehow linking against a different (older) version of the framework that does the drawing, one that contains a bug with scaled image drawing? If so, how do I prevent that? Should anyone wish to see the project where the actual problem occurs, that's possible, as it's Open Source. The code is on the branch scaled-image-bug in the following git repository: https://git.code.sf.net/p/grandperspectiv/source
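One thing that might be worth ruling out (a guess, not a confirmed diagnosis): the fill loop writes exactly w pixels per row, while bytesPerRow can be rounded up for alignment, so the padding bytes at the end of each row stay uninitialized, and a scaler sampling just past the last column could pick them up at some scales. A sketch in Swift that initializes the entire buffer, padding included, before any per-pixel filling:

```swift
import AppKit

// Create the same 640x640 RGB rep as in the post and zero every byte of the
// backing buffer, including any row padding, before filling pixels.
if let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                              pixelsWide: 640, pixelsHigh: 640,
                              bitsPerSample: 8, samplesPerPixel: 3,
                              hasAlpha: false, isPlanar: false,
                              colorSpaceName: .deviceRGB,
                              bytesPerRow: 0, bitsPerPixel: 32),
   let base = rep.bitmapData {
    memset(base, 0, rep.bytesPerRow * rep.pixelsHigh)
}
```

In the Objective-C code above, the equivalent would be a single memset(bitmap.bitmapData, 0, bitmap.bytesPerRow * h) before the loop.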
Posted by eriban. Last updated.
Post not yet marked as solved
6 Replies
786 Views
So we have an app that has been working for a very long time. It generates a PDF that has a section for crew members, listing details of each crew member including a signature image. In iOS 15, the signature of the first crew member is drawn for all crew members; it still works fine on other versions of iOS. Here is the code looping through each crew member:

    for (ShiftCrew *crew in delegate.pcr.shift.shiftcrews) {
        NSString *tempSignatureFile;
        NSString *crewMemberName;
        tempSignatureFile = [Utils getFullPathForFile:crew.signature.fileName];
        crewMemberName = [NSString stringWithFormat:@"%@, %@", crew.lastName, crew.firstName];
        nextY = [self handleCrewMember:pdfContext andCrewMember:crewMemberName andSignatureFile:tempSignatureFile andPosition:position andBaseY:nextY];
    }

Code where the signatures are drawn:

    if ([signatureFile length] > 0) {
        UIImage *myUIImage;
        if (self.restricted) {
            myUIImage = [UIImage imageNamed:@"Restricted Signature Image.png"];
        } else {
            myUIImage = [EncryptionFunctions openEncryptedImage:signatureFile];
        }
        CGContextDrawImage(pdfContext, CGRectMake(238, nextY - 21, 114, 28), myUIImage.CGImage);
    }

Utils getFullPathForFile just appends the passed-in file name to the path to the Documents folder. When I debug, I have verified that the signatureFile string is the correct path to the individual signature image file. To troubleshoot, right before CGContextDrawImage I inserted the following code to output the image files to unencrypted PNG files:

    NSString *filePath = [Utils getFullPathForFile:[NSString stringWithFormat:@"%@.png", crewMember]];
    [UIImagePNGRepresentation(myUIImage) writeToFile:filePath atomically:YES];

The resulting files are correct and different from each other (one for "Administrator, admin", one for "account, Test"); the PDF, however, shows the same signature for both. A few things I have tried: converting to CIImage and then to CGImage; using drawInRect on the UIImages instead of drawing from the CGImages; hard-coding the different images based on the crew member names. It does print a different image if I hard-code drawing the Restricted Signature Image.png file that is in the bundle for one of the crew members, but that's not too helpful in figuring out how to make this work. I tried to create a new project that just generates a PDF drawing the two signature files; it works fine. In the same project I also had a separate function that similarly generates a PDF with the two signature files, and it also works fine. However, as this app is quite large and old, with a lot of legacy code, it is hard to extract and isolate code that reproduces the issue. Does anyone have any suggestions on troubleshooting this? Things to look into, or things to try? What is driving me crazy is how this code:

    NSString *filePath = [Utils getFullPathForFile:[NSString stringWithFormat:@"%@.png", crewMember]];
    [UIImagePNGRepresentation(myUIImage) writeToFile:filePath atomically:YES];
    CGContextDrawImage(pdfContext, signatureRect, myUIImage.CGImage);

saves two different images, but draws the same image twice. And only on iOS 15. Thanks for any help.
Posted by JTForte. Last updated.
Post not yet marked as solved
2 Replies
376 Views
I'm wondering which way I should go in my current app project. It is an app where the user can take a photo and place multiple 2D vector images on that photo. Some vector images show angles between lines, and the user can interact with the vectors to change the angles and make measurements on the photo. So you have multiple layers of vector images on top of a photo. You can also pinch to zoom for better control when setting accurate vectors/angles. The user can choose the layer to interact with, so I need control of all the gesture recognizers, for example to deactivate the pinch gestures on the scroll view. I'm wondering which technology I should use 🤔 SwiftUI, UIKit or Core Graphics? Does somebody have some recommendations?
Posted. Last updated.
Post not yet marked as solved
1 Reply
551 Views
I have an application coded in Objective-C that uses Core Graphics and CGPDFDocument; it's a PDF reader. With the release of iOS 15 I'm having problems with the rendering of certain pages in certain PDF files. The problem is not present with PDFKit. I have also downloaded the ZoomingPDFViewer example (https://developer.apple.com/library/archive/samplecode/ZoomingPDFViewer/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010281) from the official Apple documentation page, and I see that the same thing happens there.
Posted. Last updated.
Post not yet marked as solved
2 Replies
340 Views
I am porting an Android music score application over to iOS, using Swift and SwiftUI. One of the elements of music notation is a glissando, drawn as a wavy line (I could not see how to add an image here). When I draw the glissando in the Android app, I create a shape representing a single "wave" of this line and then use it as a stamp that the graphics system repeats along the defined path, in this manner:

    m_StampPath = new Path();
    m_StampPath.moveTo(...);
    m_StampPath.cubicTo(...);
    m_StampPath.cubicTo(...);
    ...
    m_StampPath.close();
    m_WavyLine = new PathDashPathEffect(m_StampPath, fStampOffset, 0.0f, PathDashPathEffect.Style.MORPH);
    // pt is a Paint object
    pt.setPathEffect(m_WavyLine);
    pt.setStyle(Paint.Style.STROKE);
    LinePath = new Path();
    LinePath.moveTo(...);
    LinePath.lineTo(...);
    canvas.drawPath(LinePath, pt);

How can I achieve the same thing in Swift, taking into account that the angle of the line is not always the same?
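Core Graphics has no direct PathDashPathEffect equivalent, but one way to port the idea is to rotate the drawing context to the line's angle and stamp a single wave shape repeatedly along its length. A hedged sketch under that assumption; `wave` stands in for the cubic stamp path built in the Android code:

```swift
import CoreGraphics
import Foundation

// Draw a glissando by stamping one wave shape along the line from p0 to p1.
func drawGlissando(in ctx: CGContext, from p0: CGPoint, to p1: CGPoint,
                   stampWidth: CGFloat, wave: CGPath) {
    let dx = p1.x - p0.x, dy = p1.y - p0.y
    let length = (dx * dx + dy * dy).squareRoot()
    ctx.saveGState()
    ctx.translateBy(x: p0.x, y: p0.y)
    ctx.rotate(by: atan2(dy, dx))        // handles any line angle
    var x: CGFloat = 0
    while x + stampWidth <= length {     // whole stamps only
        ctx.saveGState()
        ctx.translateBy(x: x, y: 0)
        ctx.addPath(wave)
        ctx.strokePath()
        ctx.restoreGState()
        x += stampWidth
    }
    ctx.restoreGState()
}
```

Unlike PathDashPathEffect.Style.MORPH, this does not bend the stamp around curves, but for a straight glissando line that should not matter.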
Posted. Last updated.
Post not yet marked as solved
1 Reply
365 Views
I compress and convert the format of a picture, but a crash sometimes occurs, only on iOS 15.2.

    CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef destinationRef = CGImageDestinationCreateWithData(destinationData, kUTTypeJPEG, 1, NULL);
    NSDictionary *options = @{
        (NSString *)kCGImageDestinationDateTime : createDate,
        (NSString *)kCGImageDestinationLossyCompressionQuality : @(compressQuality)
    };
    UIImage *tempImageWithData = [UIImage imageWithData:sourceImageData];
    CGImageDestinationAddImageAndMetadata(destinationRef,
                                          tempImageWithData.CGImage,
                                          imageMetadataRef,
                                          (__bridge CFDictionaryRef)options);
    if (CGImageDestinationFinalize(destinationRef)) {
        // code
    }

The crash:

    0 ImageIO 0x0000000182c6bb38 IIODictionary::containsKey(__CFString const*) + 8
    1 ImageIO 0x0000000182debe84 _IIOGetExifOrientation + 44
    2 ImageIO 0x0000000182e17174 AppleJPEGReadPlugin::IIORecodeAppleJPEG_to_JPEG(IIOImageDestination*, IIOImageSource*) + 652
    3 ImageIO 0x0000000182df283c IIOImageDestination::finalizeUsingAppleJPEGRecode() + 40
    4 ImageIO 0x0000000182c9b948 IIOImageDestination::finalizeDestination() + 416
    5 ImageIO 0x0000000182c73234 _CGImageDestinationFinalize + 128

I think CGImageDestinationFinalize is the problem, but what could the reasons be?
Posted by westye. Last updated.
Post not yet marked as solved
0 Replies
293 Views
Wondering if it is possible to post HID gamepad events to the system, similar to keyboard and mouse NSEvent or CGEvent. I am able to monitor gamepad events and get the usage value via IOHIDElementGetUsage (48 and 49 for the axes) and the value via IOHIDValueGetIntegerValue. I would like to generate these same events from code to simulate them without any actual controller attached to the system.
Posted. Last updated.
Post not yet marked as solved
3 Replies
366 Views
Doing a

    guard let cgImage = CGWindowListCreateImage(.null, [.optionIncludingWindow], cgID, [.nominalResolution]) else {
        print("problem!")
        continue
    }

where cgID is a CGWindowID for a Desktop background image almost always returns a CGImage of the Desktop (minus any icons on the Desktop, of course). However, under Monterey, there is a finite possibility of the returned image being simply gray or some chopped-up version of the actual Desktop. This usually happens when Spaces are changed and code is triggered to update the image from an NSWorkspace.shared.notificationCenter notification named NSWorkspace.activeSpaceDidChangeNotification. Is there a way to detect when the returned image is not correct? The else in the guard is never triggered, and the cgImage is the correct size, just with the wrong content. In fact, comparing a good cgImage to a bad cgImage, there doesn't appear to be any difference. The documentation for .optionIncludingWindow says: "You must combine this option with the optionOnScreenAboveWindow or optionOnScreenBelowWindow option to retrieve meaningful results." However, including either option (e.g. [.optionOnScreenBelowWindow, .optionIncludingWindow]) can still result in an incorrect image. As an aside, https://developer.apple.com/videos/play/wwdc2019/701/ at the 15:49 mark shows using only optionIncludingWindow, so I'm not sure which documentation is correct.
Posted by parker9. Last updated.
Post not yet marked as solved
0 Replies
293 Views
I'm working on an AppKit-based macOS application that has a transparent pane as part of the window. The view structure is an IB-based storyboard. The transparent pane has UI items embedded in it, one of which is an NSTableView (which also has a transparent background; the cell view backgrounds are transparent too). After the application launches, the table view leaves graphic artifacts (in some background layer) when the table is scrolled. If the window is resized, the artifacts disappear (resizing seems to force a redraw). If the table is then scrolled, the artifacts reappear (again, at the last position of the actual objects). The artifacts appear to be grayscale outlines or edges of the objects (text or images). This is only an issue when using transparency. Does anyone know what these artifacts are from, or perhaps how to get rid of them? I assume there is some redraw/refresh operation that is not occurring when it should (and is an issue with transparent backgrounds), but I've not yet been able to figure out how to trigger it properly. (macOS Monterey 12.0.1, Xcode 13.2)
Posted by lwilson. Last updated.
Post not yet marked as solved
4 Replies
3.5k Views
The DDC/CI application works well on MacBook Pro / Mac Pro (Big Sur), but it doesn't work on M1 Macs (both macOS 11.0.1 and 11.1). The M1's graphics are Apple's own, not Intel or AMD. Is this incompatibility related to the new graphics hardware or to a kernel change? Is there any alternative solution for M1?
Posted. Last updated.
Post not yet marked as solved
1 Reply
386 Views
Is there any way to send key/mouse events to unfocused windows? Currently my code looks like this:

    let src = CGEventSource(stateID: CGEventSourceStateID.hidSystemState)
    let key_d = CGEvent(keyboardEventSource: src, virtualKey: 0x12, keyDown: true)  // key "1" press
    let key_u = CGEvent(keyboardEventSource: src, virtualKey: 0x12, keyDown: false) // key "1" release
    key_d?.postToPid((Int32)(pid))
    key_u?.postToPid((Int32)(pid))

Unfortunately this only works for the application that owns the menu bar. I have tried different methods, but none of them works. I would love to send those events directly to an app by selecting a specific window ID instead of a pid, but anything that works with unfocused apps would be good.
Posted by Archont94. Last updated.
Post marked as solved
1 Reply
424 Views
I'm developing an application for macOS which requires screen recording. Each time I recompile the code, I have to manually add an exception for the application in Security & Privacy (Screen Recording tab). Is there a way to allow it only once? I was trying to check access using this code:

    if !CGPreflightScreenCaptureAccess() {
        print("not granted!")
        let result = CGRequestScreenCaptureAccess()
        if result == true {
            print("Screen recording granted, thank you.")
        } else {
            print("Not granted! Bye-bye...")
            exit(1)
        }
    }

but CGRequestScreenCaptureAccess does not wait for approval. I was also trying to poll the current status by calling CGPreflightScreenCaptureAccess in a loop, but it always returned false, even after manual approval. When I run this application from the terminal (which has permanent access to the screen), everything works fine. But that way, I cannot debug anything.
Posted by Archont94. Last updated.
Post not yet marked as solved
0 Replies
270 Views
Capturing an image of an off-screen window with CGWindowListCreateImage is a common way to create QuickLook-style zoom-in animations, but it seems to give the wrong background colour, slightly whiter than the actual window has when shown onscreen. This causes a flash at the end of the animation which rather ruins the effect. Does anyone have any idea why this happens and what can be done about it? If I set the window appearance to textured in Interface Builder the problem goes away, but then the window looks different (darker) from the other windows in the app. I can set the window background to a custom colour that makes it match the other windows, but then it still looks off on older macOS versions. I made a sample project that illustrates the problem here: https://github.com/angstsmurf/WindowCaptureTest.
Posted by Dalaplan. Last updated.