Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.

Core Graphics Documentation

Posts under Core Graphics tag

59 Posts
Post not yet marked as solved
10 Replies
6.2k Views
Hi there, I just bought an IIYAMA G-MASTER GB3461WQSU-B1, which has a native resolution of 3440x1440, but my MacBook Pro (Retina, 15-inch, Mid 2014) doesn't recognise the monitor and I can't run it at its full resolution. It is currently recognised as a PL3461WQ 34.5-inch (2560 x 1440). Is there anything I can do to get it sorted, or will I have to wait until this monitor is added to the Big Sur driver list? Thanks
Posted by
Post not yet marked as solved
1 Reply
495 Views
I am trying to develop an app that enables calligraphers to use their Apple Pencil as a calligraphy pen. The problem I am facing is that I don't know how to customize the strokes drawn by the Apple Pencil using the PencilKit framework. I also tried to use UIKit and handle the touches by Apple Pencil, but I am not sure how to achieve the desired effect. Can anyone guide me to solve this issue?
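For the UIKit route, a common starting point is to vary the stroke width with the pencil's reported pressure. This is only a minimal sketch (the class name is mine, and a real calligraphy pen would also use `altitudeAngle`/`azimuthAngle` and stroke each segment as its own path, since a single `UIBezierPath` has one `lineWidth` for the whole path):

```swift
import UIKit

class CalligraphyView: UIView {
    private var path = UIBezierPath()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        path.move(to: touch.location(in: self))
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.type == .pencil else { return }
        // Width follows pencil pressure; azimuth/altitude could shape the nib further.
        path.lineWidth = max(1, touch.force / touch.maximumPossibleForce * 10)
        path.addLine(to: touch.location(in: self))
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        path.stroke()
    }
}
```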
Posted by
Post not yet marked as solved
5 Replies
1.7k Views
I have a background process which is updating an IOSurface-backed CVPixelBuffer at 30fps. I want to render a preview of that pixel buffer in my window, scaled to the size of the NSView that's displaying it. I get a callback every time the pixelbuffer/IOSurface is updated. I've tried using a custom layer-backed NSView and setting the layer contents to the IOSurface -- which works when the view is created, but it's never updated unless the window is resized or another window is in front of it. I've tried calling setNeedsDisplay on both my view and my layer, I've tried changing the layerContentsRedrawPolicy to .onSetNeedsDisplay, and I've tried making sure all my content and update code is happening on the UI thread, but I can't get it to dynamically update. Is there a way to bind my layer or view to the IOSurface once and then just have it reflect the updates as they happen, or, if not, at least mark the layer as dirty each frame when it changes? I've pored over the docs but I don't see a lot about the relationship between IOSurface and CALayer.contents, and when in the lifecycle to mark things dirty (especially when updates are happening outside the view). Here's example code:

class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
    let testCard = TestCardHelper()

    override var wantsUpdateLayer: Bool {
        get { return true }
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.wantsLayer = true
        self.layerContentsRedrawPolicy = .onSetNeedsDisplay

        /* Scale the incoming data to the size of the view */
        self.layer?.transform = CATransform3DMakeScale(
            (self.layer?.contentsScale)! * self.frame.width / CGFloat(VideoSettings.width),
            (self.layer?.contentsScale)! * self.frame.height / CGFloat(VideoSettings.height),
            CGFloat(1))

        /* Register us with the content provider */
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    deinit {
        VideoFeedBrowser.instance.deregisterConsumer(self)
    }

    override func updateLayer() {
        /* ideally we wouldn't need to do this */
        updateLayer(pixelBuffer: VideoFeedBrowser.instance.renderer.pixelBuffer)
    }

    /* This gets called every time our pixelbuffer is updated (30fps) */
    @objc
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        updateLayer(pixelBuffer: pixelBuffer)
    }

    func updateLayer(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixelbuffer isn't IOSurface backed! noooooo!")
            return
        }

        /* these don't have any effect */
        // self.layer?.setNeedsDisplay()
        // self.setNeedsDisplay(invalidRect: self.visibleRect)
        self.layer?.contents = surface
    }
}
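One thing worth trying in setups like this: reassigning the *same* object to `layer.contents` is not guaranteed to trigger a new commit, so clear and reset the contents inside an explicit `CATransaction`. This is only a sketch of the idea, not a guaranteed fix (a more robust variant double-buffers two IOSurfaces and alternates between them each frame); it assumes the update callback already runs on the main thread:

```swift
import AppKit
import CoreVideo

func updateLayer(of view: NSView, with pixelBuffer: CVPixelBuffer) {
    guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else { return }
    CATransaction.begin()
    CATransaction.setDisableActions(true)   // suppress implicit animations
    // Clearing first forces Core Animation to treat the reassignment
    // as a real contents change and commit it.
    view.layer?.contents = nil
    view.layer?.contents = surface
    CATransaction.commit()
}
```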
Posted by
Post not yet marked as solved
4 Replies
3.5k Views
The DDC/CI application works well on a MacBook Pro / Mac Pro (Big Sur), but it doesn't work on M1 Macs (both macOS 11.0.1 and 11.1). The M1's graphics is Apple's own, not Intel or AMD. Is this incompatibility related to the new graphics hardware or to a kernel change? Is there any alternative solution for M1?
Posted by
Post not yet marked as solved
1 Reply
429 Views
Hi all, I'm currently implementing a feature that performs customized behavior in each desktop (space). As far as I know, Apple does not have an API that can enumerate all spaces under each screen. I've only found a way to get all spaces, but cannot find any way to determine which screen each space belongs to. Can somebody help me out? Thanks in advance.
Posted by
Post not yet marked as solved
1 Reply
811 Views
I have some uncontroversial code that used to work perfectly in iOS 12 (and I think 13, from memory):

let cropRect = mapVC.view.frame.inset(by: mapVC.view.safeAreaInsets).inset(by: mapVC.mapEdgeInsets)
let mapRenderer = UIGraphicsImageRenderer(bounds: cropRect)
let img = mapRenderer.image(actions: { _ in
    mapVC.view.drawHierarchy(in: mapVC.view.bounds, afterScreenUpdates: true)
})

Now, I'm getting a partially rendered image (only the top 8-10 px or so; the rest is white/blank), and this message in the debugger:

[Unknown process name] vImageConvert_AnyToAny - failed
width = 0 height = 1
dst component = 16bit float dstLayout = ByteOrder16Little dstBytesPerRow = 0
src component = 16bit integer srcLayout = ByteOrder16Little srcBytesPerRow = 0

There seems to be very little online about this error, and I have no clue where to start with it. The issue arises when calling the .jpegData or .pngData methods on mapRenderer. There have been no other changes to the view hierarchy or the rendered view since this was working just fine. Any suggestions? Xcode 12.4, iOS 14.3 running on an iPhone XS Max.
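The 16-bit-float destination in that vImage log suggests the renderer picked an extended-range backing. One workaround worth trying (an assumption based on the log, not a confirmed fix) is to pin the renderer to standard range explicitly via its format:

```swift
import UIKit

let format = UIGraphicsImageRendererFormat()
format.preferredRange = .standard   // avoid extended-range (16-bit float) backing
let mapRenderer = UIGraphicsImageRenderer(bounds: cropRect, format: format)
let img = mapRenderer.image { _ in
    mapVC.view.drawHierarchy(in: mapVC.view.bounds, afterScreenUpdates: true)
}
```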
Posted by
Post not yet marked as solved
2 Replies
919 Views
I want to get graphics card details using Objective-C. I used the IOServiceGetMatchingServices API for this; it works fine on an Intel machine, but doesn't return model info on an M1 machine. Here is the code I was using:

CFMutableDictionaryRef matchDict = IOServiceMatching("IOPCIDevice");
io_iterator_t iterator;
if (IOServiceGetMatchingServices(kIOMasterPortDefault, matchDict, &iterator) == kIOReturnSuccess) {
    io_registry_entry_t regEntry;
    while ((regEntry = IOIteratorNext(iterator))) {
        CFMutableDictionaryRef serviceDictionary;
        if (IORegistryEntryCreateCFProperties(regEntry, &serviceDictionary,
                                              kCFAllocatorDefault, kNilOptions) != kIOReturnSuccess) {
            IOObjectRelease(regEntry);
            continue;
        }
        const void *GPUModel = CFDictionaryGetValue(serviceDictionary, @"model");
        if (GPUModel != nil) {
            if (CFGetTypeID(GPUModel) == CFDataGetTypeID()) {
                NSString *modelName = [[NSString alloc] initWithData:(NSData *)GPUModel
                                                            encoding:NSASCIIStringEncoding];
                NSLog(@"GPU Model: %@", modelName);
                [modelName release];
            }
        }
        CFRelease(serviceDictionary);
        IOObjectRelease(regEntry);
    }
    IOObjectRelease(iterator);
}
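On Apple silicon the integrated GPU is not an IOPCIDevice, so the PCI match comes back empty. One architecture-independent alternative (a Swift sketch, not a drop-in replacement for the Objective-C above) is to ask Metal for the device name, which works on both Intel and Apple silicon:

```swift
import Metal

// The default GPU; `name` is e.g. "Apple M1" on Apple silicon.
if let device = MTLCreateSystemDefaultDevice() {
    print("GPU Model: \(device.name)")
}

// On Macs with more than one GPU, MTLCopyAllDevices() lists each one.
for device in MTLCopyAllDevices() {
    print("Found GPU: \(device.name)")
}
```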
Posted by
Post not yet marked as solved
0 Replies
291 Views
Hi. The following code creates and initialises a simple progress bar. How would I add a callback to update the progress bar?

float prog = 0.5;
CFNumberRef cfNum = CFNumberCreate(kCFAllocatorDefault, kCFNumberFloatType, &prog);
const void *keys[] = { kCFUserNotificationAlertHeaderKey, kCFUserNotificationProgressIndicatorValueKey };
const void *vals[] = { CFSTR("Progress Bar"), cfNum };
CFDictionaryRef dict = CFDictionaryCreate(0, keys, vals,
                                          sizeof(keys) / sizeof(*keys),
                                          &kCFTypeDictionaryKeyCallBacks,
                                          &kCFTypeDictionaryValueCallBacks);
SInt32 nRes = 0; /* declaration missing from the original snippet */
CFUserNotificationRef pDlg = NULL;
pDlg = CFUserNotificationCreate(kCFAllocatorDefault, 0,
                                kCFUserNotificationPlainAlertLevel,
                                &nRes, dict);
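CFUserNotificationUpdate exists for exactly this: it replaces the notification's dictionary, including the progress indicator value, while the panel stays on screen. A minimal Swift sketch (the helper name is mine; `dialog` stands for the CFUserNotificationRef created in the snippet above):

```swift
import CoreFoundation

// Hypothetical helper: push a new progress value into an existing dialog.
func setProgress(_ dialog: CFUserNotification, to value: Float) {
    let dict = [kCFUserNotificationAlertHeaderKey: "Progress Bar" as CFString,
                kCFUserNotificationProgressIndicatorValueKey: value as CFNumber] as CFDictionary
    // Replaces the dialog's contents in place, progress bar included.
    CFUserNotificationUpdate(dialog, 0, kCFUserNotificationPlainAlertLevel, dict)
}
```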
Posted by
Post not yet marked as solved
0 Replies
581 Views
Steps to reproduce:
Download and open the attached "ARKitTest" project
Build and deploy the project to iOS
Reproduced with: 2018.4.31f1, 2019.4.20f1, 2020.2.4f1, 2021.1.0b5, 2021.2.0a4
Reproducible with iPhone 12 Pro (iOS 14.2.1)
[Bug] iOS app crashes after some time (EXC_BAD_ACCESS) · Issue #716 · Unity-Technologies/arfoundation-samples: https://github.com/Unity-Technologies/arfoundation-samples/issues/716
Unity Issue Tracker - [iOS 14] EXC_BAD_ACCESS crash from com.apple.arkit.ardisplaylink: https://issuetracker.unity3d.com/issues/ios-14-exc-bad-access-crash-from-com-dot-apple-dot-arkit-dot-ardisplaylink
Posted by
Post not yet marked as solved
1 Reply
466 Views
Objective and steps: Use the device's front TrueDepth camera (iPhone 12 Pro Max) to capture image data, Live Photo data, and metadata (e.g. depth data and portrait effects matte) using AVFoundation capture principles into an AVCapturePhoto object, then save this captured object with its metadata to PHPhotoLibrary using the PHAssetCreationRequest API.
Result: Image data, live data, disparity depth data (640x480 px), and some metadata are stored with the image through the PHPhotoLibrary API, but the high quality portrait effects matte is lost.
Notes: Upon receiving the AVCapturePhoto object from the AVFoundation capture delegate API, I can verify that the AVCapturePhoto object contains a high quality portrait effects matte member object. Using the object's fileDataRepresentation() to obtain a Data blob, writing that to a test file URL, and reading it back, I can see that the flattened-data API writes and restores the portrait effects matte. However, it gets stripped from the data when writing through the PHPhotoLibrary asset creation request. When later picking the image, e.g. with PHPickerViewController + PHPickerResult, and peeking into the object's data with CGImageSourceCopyAuxiliaryDataInfoAtIndex(), I can see that there is a data dictionary only for the key kCGImageAuxiliaryDataTypeDisparity; kCGImageAuxiliaryDataTypeDepth and kCGImageAuxiliaryDataTypePortraitEffectsMatte are both missing. Does anyone have more detailed information on whether this is possible at all? Thanks!
Posted by
Post marked as solved
1 Reply
394 Views
I have an array of CGPoint containing various coordinates. I need to apply the filter to the x coordinates and y coordinates separately. I am not sure how to do this the Swift way, so I currently unpack the coordinates like this:

var xvalues: [CGFloat] = []
var yvalues: [CGFloat] = []
if observation1.count == 5 {
    for n in observation1 {
        xvalues.append(n.x)
        yvalues.append(n.y)
    }
    filter1 = convolve(xvalues, sgfilterwindow5_order2)
    filter2 = convolve(yvalues, sgfilterwindow5_order2)
}

I am sure there is a more elegant way to do this. How can I do it without unpacking the array by hand?
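The append loop above can be collapsed with `map`, which is the idiomatic Swift way to project one component out of an array of points (the sample points here are made up for illustration; `convolve` and `sgfilterwindow5_order2` are from the post above):

```swift
import CoreGraphics

let observation1: [CGPoint] = [CGPoint(x: 1, y: 2), CGPoint(x: 3, y: 4)]

// map projects each component out of the points, one pass apiece.
let xvalues = observation1.map { $0.x }   // [1.0, 3.0]
let yvalues = observation1.map { $0.y }   // [2.0, 4.0]
```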
Posted by
Post not yet marked as solved
1 Reply
518 Views
I am using AVCapturePhoto to capture an image. In didFinishProcessingPhoto I get the image data using fileDataRepresentation. But when I convert this data to a UIImage, it loses most of its metadata. I need to draw a bezier path on the UIImage and still maintain the metadata. Is there any way to do this?
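UIImage only carries pixels, so the container metadata has to be carried separately. One approach (a sketch under the assumption that you keep the original `Data` from fileDataRepresentation alongside the edited image) is to read the property dictionaries out of the original data with Image I/O and write them back when re-encoding:

```swift
import UIKit
import ImageIO

// originalData: Data from photo.fileDataRepresentation()
// editedImage: the UIImage you drew the bezier path onto
func encodePreservingMetadata(originalData: Data, editedImage: UIImage) -> Data? {
    guard let source = CGImageSourceCreateWithData(originalData as CFData, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil),
          let cgImage = editedImage.cgImage else { return nil }

    let out = NSMutableData()
    guard let dest = CGImageDestinationCreateWithData(out, "public.jpeg" as CFString, 1, nil)
    else { return nil }

    // Re-attach the original EXIF/TIFF/GPS dictionaries to the edited pixels.
    CGImageDestinationAddImage(dest, cgImage, props)
    guard CGImageDestinationFinalize(dest) else { return nil }
    return out as Data
}
```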
Posted by
Post marked as solved
1 Reply
545 Views
I am trying to rotate a page 180° in a PDF file. I nearly get it, but the page is also mirrored horizontally. Some images to illustrate:
Initial page:
Result after rotation (see code): it is rotated 180° BUT mirrored horizontally as well:
The expected result:
What I get is just as if it was rotated 180° around the x axis of the page, and I would need to rotate 180° around the z axis (perpendicular to the page). It is probably the result of writeContext!.scaleBy(x: 1, y: -1). I have tried a lot of changes to the transform, translate, and scale parameters, including removing some of the calls, to no avail.

@IBAction func createNewPDF(_ sender: UIButton) {
    var originalPdfDocument: CGPDFDocument!
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = urls[0]

    // read some pdf from bundle for test
    if let path = Bundle.main.path(forResource: "Test", ofType: "pdf"),
       let pdf = CGPDFDocument(URL(fileURLWithPath: path) as CFURL) {
        originalPdfDocument = pdf
    } else {
        return
    }

    // create new pdf
    let modifiedPdfURL = documentsDirectory.appendingPathComponent("Modified.pdf")
    guard let page = originalPdfDocument.page(at: 1) else { return } // Starts at page 1

    var mediaBox: CGRect = page.getBoxRect(CGPDFBox.mediaBox) // mediabox which will set the height and width of the page
    let writeContext = CGContext(modifiedPdfURL as CFURL, mediaBox: &mediaBox, nil) // get the context
    var pageRect: CGRect = page.getBoxRect(CGPDFBox.mediaBox) // get the page rect
    writeContext!.beginPage(mediaBox: &pageRect)

    let m = page.getDrawingTransform(.mediaBox, rect: mediaBox, rotate: 0, preserveAspectRatio: true)
    // Because of rotate: 0, no effect; with rotate: 180 I get an empty page
    writeContext!.translateBy(x: 0, y: pageRect.size.height)
    writeContext!.scaleBy(x: 1, y: -1)
    writeContext!.concatenate(m)
    writeContext!.clip(to: pageRect)
    writeContext!.drawPDFPage(page) // draw content in page
    writeContext!.endPage()         // end the current page
    writeContext!.closePDF()
}

Note: this is a follow-up of a previous thread, https://developer.apple.com/forums/thread/688436
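A rotation about the page's z axis can be expressed as translate-to-center, rotate by π, translate back, with no vertical flip involved. A sketch of just the transform (assuming it would replace the translateBy/scaleBy pair in the code above, with the drawing transform m still applied afterwards):

```swift
import CoreGraphics

// Builds a transform that rotates 180° about the center of `pageRect`
// (the z axis, in the post's terms), with no mirroring.
func halfTurn(about pageRect: CGRect) -> CGAffineTransform {
    CGAffineTransform(translationX: pageRect.midX, y: pageRect.midY)
        .rotated(by: .pi)
        .translatedBy(x: -pageRect.midX, y: -pageRect.midY)
}

// In the post's code this would replace translateBy/scaleBy:
//   writeContext!.concatenate(halfTurn(about: pageRect))
//   writeContext!.concatenate(m)
//   writeContext!.drawPDFPage(page)
```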
Posted by
Post marked as solved
1 Reply
663 Views
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image. The bottom left corner of an image is supposed to be (0,0). But my experiments show something else. I created a prototype that rotates a CIImage by pi/10 radians on each button click. Here is the code I wrote.

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}

private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi/10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}

But in the logs, I see that the extents of the images have negative origin.x and origin.y. What does that mean? Relative to what is it negative, and where exactly is (0,0) then? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
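The negative origins in those logs follow from the extent being the axis-aligned bounding box of the transformed image in Core Image's infinite working space: rotating a square about its own center pushes the box's lower-left corner below (0,0). The same geometry can be reproduced with plain CGRect math (the 800x800 source extent is an assumption chosen to roughly match the logged numbers):

```swift
import CoreGraphics

let extent = CGRect(x: 0, y: 0, width: 800, height: 800)  // assumed source extent
let angle = CGFloat.pi / 10

// Same transform as the post: rotate about the extent's center.
let transform = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    .rotated(by: angle)
    .translatedBy(x: -extent.midX, y: -extent.midY)

// CGRect.applying returns the bounding box of the transformed corners;
// it dips below (0,0) on both axes, which is exactly the negative
// origin Core Image reports (CI rounds the box out to integral pixels).
let rotatedBox = extent.applying(transform)
print(rotatedBox)   // origin ≈ (-104, -104), size ≈ 1008 x 1008
```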
Posted by
Post not yet marked as solved
2 Replies
667 Views
Since updating to iOS 15, our app is crashing in the first controller, right after the splash screen (log attached). To be mentioned: if the app goes to the background right after becoming active, it will only crash after I switch it back to the foreground. At the crash it takes me straight to int main() without any relevant info to help me debug, except (maybe) this line in the console: "[Unknown process name] CGBitmapContextCreateWithCallbacks: failed to create CGAutomaticBitmapContextInfo." I've tried using an all-exceptions breakpoint, but it didn't help debugging further. It seems to be an issue in the storyboard and not a specific line of code. At start, the app has a tabBarController which embeds some controllers. The first one goes through viewDidLoad, viewWillAppear, viewWillLayoutSubviews, and viewDidLayoutSubviews, and then dies before reaching viewDidAppear. Would appreciate some help. 2021-09-27_10-34-54.4864_+0300-9047901996b77362f454f5d9fb324f4ab72e4b5d.crash
Posted by
Post not yet marked as solved
6 Replies
786 Views
So we have an app that has been working for a very long time. It generates a PDF that has a section for crew members, listing the details of each crew member including a signature image. In iOS 15, the signature of the first crew member is being drawn for all crew members. It still works fine in other versions of iOS. Here is the code looping through each crew member:

for (ShiftCrew *crew in delegate.pcr.shift.shiftcrews) {
    NSString *tempSignatureFile;
    NSString *crewMemberName;
    tempSignatureFile = [Utils getFullPathForFile:crew.signature.fileName];
    crewMemberName = [NSString stringWithFormat:@"%@, %@", crew.lastName, crew.firstName];
    nextY = [self handleCrewMember:pdfContext andCrewMember:crewMemberName andSignatureFile:tempSignatureFile andPosition:position andBaseY:nextY];
}

Code where the signatures etc. are being drawn:

if ([signatureFile length] > 0) {
    UIImage *myUIImage;
    if (self.restricted) {
        myUIImage = [UIImage imageNamed:@"Restricted Signature Image.png"];
    } else {
        myUIImage = [EncryptionFunctions openEncryptedImage:signatureFile];
    }
    CGContextDrawImage(pdfContext, CGRectMake(238, nextY - 21, 114, 28), myUIImage.CGImage);
}

Utils getFullPathForFile just appends the passed-in file name to the path to the Documents folder. When I debug, I have verified that the signatureFile string is the correct path to the individual signature file image. To troubleshoot, right before CGContextDrawImage, I inserted the following code to output the image files to an unencrypted PNG file:

NSString *filePath = [Utils getFullPathForFile:[NSString stringWithFormat:@"%@.png", crewMember]];
[UIImagePNGRepresentation(myUIImage) writeToFile:filePath atomically:YES];

The resulting files are correct and different from each other.
Administrator, admin:
account, Test:
What actually shows in the PDF:
A few things I have tried:
Converting to CIImage and then to CGImage.
Using drawInRect on the UIImages instead of drawing from the CGImages.
Hard-coding the different images based on the crew member names.
It does print a different image if I hard-code drawing the Restricted Signature Image.png file that is in the bundle for one of the crew members, but that's not too helpful in figuring out how to make this work so far. I tried to create a new project that just generates a PDF drawing the two signature files; it works fine. In the same project I also had a separate function that similarly generates a PDF with the two signature files, and it also works fine. However, as this app is quite large and old, there is a lot of legacy code, so it is hard to extract and isolate the code that reproduces the issue. Does anyone have any suggestions for troubleshooting this? Things to look into, or things to try? What is driving me crazy is how this code:

NSString *filePath = [Utils getFullPathForFile:[NSString stringWithFormat:@"%@.png", crewMember]];
[UIImagePNGRepresentation(myUIImage) writeToFile:filePath atomically:YES];
CGContextDrawImage(pdfContext, signatureRect, myUIImage.CGImage);

saves two different images, but draws the same image twice. And only in iOS 15. Thanks for any help.
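One workaround worth trying for symptoms like this (an assumption about the cause, not a confirmed root-cause fix: the idea is that if the PDF context is caching or aliasing images, handing it a freshly re-rendered bitmap each time prevents two signatures from collapsing into one):

```swift
import UIKit

// Re-render `image` into a fresh bitmap so the PDF context cannot
// alias it with a previously drawn image (hypothetical workaround).
func freshCopy(of image: UIImage) -> CGImage? {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }.cgImage
}

// In the drawing code, pass freshCopy(of: myUIImage) to
// CGContextDrawImage instead of myUIImage.cgImage.
```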
Posted by
Post not yet marked as solved
0 Replies
216 Views
After updating to Beta 9, 12.0 Beta (21A5543b), the resolution can no longer be changed! In beta 8 it still worked without problems. I restarted the MacBook Pro without the resolution having changed. Does anyone have an idea, or is it a bug? I have already reported it as such.
Posted by