Core Graphics


Harness the power of Quartz technology to perform lightweight 2D rendering with high-fidelity output using Core Graphics.

Core Graphics Documentation

Posts under Core Graphics tag

59 Posts
Post not yet marked as solved
0 Replies
218 Views
I am seeing an issue where CGDisplayCopyDisplayMode returns NULL for a monitor that was recently plugged in. We call CGGetOnlineDisplayList to get the list of displays, then call CGDisplayCopyDisplayMode with the CGDirectDisplayID provided. However, for a monitor that was recently plugged in, it returns NULL. This is strange, because CGDisplayIsOnline returns true and CGDisplayBounds returns correct values. The following sample app reproduces the issue:

```objc
void printDisplays() {
  [NSApplication sharedApplication];
  while (true) {
    sleep(1);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false);

    CGDirectDisplayID displays[16];
    CGDisplayCount displayCount;
    CGError err = CGGetOnlineDisplayList(16, displays, &displayCount);
    if (err != kCGErrorSuccess) {
      return;
    }
    printf("cgdisplays cgdisplaycount \n\n");
    for (uint32_t i = 0; i < displayCount; i++) {
      CGDirectDisplayID cgDisplayID = displays[i];
      printf("cgdisplay id %d \n", cgDisplayID);
      CGRect rect = CGDisplayBounds(cgDisplayID);
      printf("cgrect origin x %.2f, y %.2f \n", rect.origin.x, rect.origin.y);
      printf("cgrect size width %.2f, height %.2f \n", rect.size.width, rect.size.height);
      CGDisplayModeRef displayMode = CGDisplayCopyDisplayMode(cgDisplayID);
      int pixelWidth = (int)CGDisplayModeGetPixelWidth(displayMode);
      int pixelHeight = (int)CGDisplayModeGetPixelHeight(displayMode);
      printf("cgdisplaymode pixelWidth %d, pixelHeight %d \n\n", pixelWidth, pixelHeight);
    }
    printf("\n\n\n");
  }
}
```

The addition of [NSApplication sharedApplication] and CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false) resolved the issue in this sample app, but did not resolve it in my real application. That is, the code above works because those lines were added, yet adding the same lines in my application still returns NULL.
Posted by
Post not yet marked as solved
1 Reply
297 Views
Hello, I want to order the BestWhoop HDMI Male to USB-C Female Cable Adapter with Micro USB Power Cable, but before I order it I have a question about it. I have a MacBook Pro (early 2013) with the following connections: 2x Thunderbolt 2 (which do not support a 3440 x 1440 resolution), 1x HDMI port, and 3x USB 3.0. I have now bought a Dell S3422DWG gaming monitor and want to use the full 3440 x 1440 resolution, but it only gives me the option of 1080p. I did some research into how to make it work, and everywhere on the internet it says to use a USB-C (Thunderbolt 3) to DisplayPort 1.4 cable. But I don't have a USB-C (Thunderbolt 3) connector on my MacBook, so I was looking for a solution and found this product. My thinking is that if I put this dongle into my HDMI port and use the USB-C (Thunderbolt 3) to DisplayPort 1.4 cable with it, then it might work. Link: https://www.bestwhoop.com/products/bestwhoop-hdmi-male-to-usb-c-female-cable-adapter-with-micro-usb-power-cable?variant=40241638146207 Can somebody please help me?
Post not yet marked as solved
1 Reply
290 Views
Hopefully someone can help me explain what is happening. In short, at certain scales, when drawing an NSImage created from a bitmap, there is a stripe of black pixels near the right border of the image. This happens when the image is drawn at about half the size of the actual image. The artefact only occurs at certain scales; for smaller and larger scales it is not there. This suggests that the problem is not with the bitmap image. For a specific scale the artefact occurs consistently. It occurs every time the image is drawn, and also when a newly created image is drawn. The artefact occurs for differently sized images (but similar relative drawing scales). To further complicate things, the problem occurs in my (relatively large) application project. When I run the exact same code in a toy project, created to reproduce the problem, the artefact is not there. This toy project is built and tested on the same machine (a Mac Mini with macOS Monterey 12.0 using Xcode 12.4). I tried to keep the project settings similar (e.g. both projects target macOS 10.9), but obviously something is different which impacts how images are drawn.
The code with the problem is as follows:

```objc
- (NSImage *)createTestImageFromColor {
  NSSize size = NSMakeSize(640, 640);
  NSImage *image = [[[NSImage alloc] initWithSize: size] autorelease];
  [image lockFocus];
  [NSColor.blueColor drawSwatchInRect: NSMakeRect(0, 0, size.width, size.height)];
  [NSColor.whiteColor drawSwatchInRect: NSMakeRect(10, 10, size.width - 20, size.height - 20)];
  [image unlockFocus];
  return image;
}

- (NSImage *)createTestImageFromBitmap {
  int w = 640, h = 640;
  NSBitmapImageRep *bitmap;
  bitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
    pixelsWide: w pixelsHigh: h
    bitsPerSample: 8 samplesPerPixel: 3
    hasAlpha: NO isPlanar: NO
    colorSpaceName: NSDeviceRGBColorSpace
    bytesPerRow: 0 bitsPerPixel: 32];
  int bmw = (int)bitmap.bytesPerRow / sizeof(UInt32);
  for (int y = 0; y < h; ++y) {
    UInt32 *pos = (UInt32 *)bitmap.bitmapData + y * bmw;
    for (int x = 0; x < w; ++x) {
      *pos++ = 0xFF;
    }
  }
  NSImage *image = [[NSImage alloc] initWithSize: NSMakeSize(w, h)];
  [image addRepresentation: bitmap];
  [bitmap release];
  return [image autorelease];
}

- (void)drawScaledImages {
  if (testImage == nil) {
    testImage = [[self createTestImageFromColor] retain];
  }
  if (testImage2 == nil) {
    testImage2 = [[self createTestImageFromBitmap] retain];
  }
  [NSColor.whiteColor setFill];
  NSRectFill(self.bounds);
  [testImage drawInRect: NSMakeRect(50, 50, 320, 320)
               fromRect: NSZeroRect
              operation: NSCompositeCopy
               fraction: 1.0];
  [testImage2 drawInRect: NSMakeRect(50, 100, 320, 320)
                fromRect: NSZeroRect
               operation: NSCompositeCopy
                fraction: 1.0];
  [testImage drawInRect: NSMakeRect(450, 50, 480, 480)
               fromRect: NSZeroRect
              operation: NSCompositeCopy
               fraction: 1.0];
  [testImage2 drawInRect: NSMakeRect(450, 100, 480, 480)
                fromRect: NSZeroRect
               operation: NSCompositeCopy
                fraction: 1.0];
}
```

It draws two images. Both have the same size, but are created differently. The artefact only occurs for the image created from a bitmap (testImage2), and only when drawn at size 320x320. At size 480x480 the artefact is not there. This results in the following view, where the black pixels at the right of the left red square are the artefact. It may be difficult to reproduce the problem, as the same code in a minimal project works fine. So does anyone have any pointers on how to troubleshoot this? I cannot step into the drawInRect code, so I am unable to determine where the code paths diverge and what causes this. Could it be that somehow my application is linking to a different (older) version of the framework that does the drawing, a version that contains a bug with scaled image drawing? If so, how do I prevent that? Should anyone wish to see the project where the actual problem occurs, that's possible, as it's Open Source. The code is on the branch scaled-image-bug in the following git repository: https://git.code.sf.net/p/grandperspectiv/source
Post not yet marked as solved
2 Replies
378 Views
I'm wondering which way I should go in my current app project. It is an app where the user can take a photo and place multiple 2D vector images on that photo. Some vector images show angles between lines. The user can interact with the vectors to change the angles, to make measurements on the photo. So you have multiple layers of vector images on top of a photo. You can also pinch to zoom, for better control when setting accurate vectors/angles. The user can choose the layer to interact with, so I need control of all gesture recognizers and need, for example, to deactivate the pinch gestures on the scroll view. I'm wondering which technology I should use 🤔 SwiftUI, UIKit or Core Graphics? Does somebody have some recommendations?
Post not yet marked as solved
2 Replies
341 Views
I am porting an Android music score application over to iOS, using Swift and SwiftUI. One of the elements of music notation is a glissando. The form of this line can be seen here (I could not see how to add an image here). When I draw the glissando in the Android app, I can create a shape which represents a single "wave" of this line and then use it as a stamp which is repeated by the graphics system along the defined path, in this manner:

```java
m_StampPath = new Path();
m_StampPath.moveTo(...);
m_StampPath.cubicTo(...);
m_StampPath.cubicTo(...);
...
m_StampPath.close();

m_WavyLine = new PathDashPathEffect(m_StampPath, fStampOffset, 0.0f, PathDashPathEffect.Style.MORPH);
// pt is a Paint object
pt.setPathEffect(m_WavyLine);
pt.setStyle(Paint.Style.STROKE);

LinePath = new Path();
LinePath.moveTo(...);
LinePath.lineTo(...);
canvas.drawPath(LinePath, pt);
```

How can I achieve the same thing in Swift, taking into account that the angle of the line is not always the same?
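Core Graphics has no direct equivalent of Android's PathDashPathEffect, so one approach (a sketch, not a definitive answer) is to compute how many copies of the "wave" stamp fit along the line and at what angle, then append a rotated-and-translated copy of the wave path for each stamp. The helper below keeps the placement math in plain Foundation so it is independent of any drawing code; `StampPlacement` and `stampPlacements` are my own names, not an Apple API.

```swift
import Foundation

// Placement of one stamp: its anchor point and the rotation of the line.
struct StampPlacement {
    let x: Double
    let y: Double
    let angle: Double   // radians, angle of the whole line
}

// Compute placements for a stamp of width `stampWidth` repeated along the
// straight line from (x0, y0) to (x1, y1). Each placement can then be turned
// into a CGAffineTransform (translate to (x, y), rotate by `angle`) and used
// with CGMutablePath.addPath(_:transform:) to append one copy of the wave.
func stampPlacements(x0: Double, y0: Double,
                     x1: Double, y1: Double,
                     stampWidth: Double) -> [StampPlacement] {
    let dx = x1 - x0, dy = y1 - y0
    let length = (dx * dx + dy * dy).squareRoot()
    let angle = atan2(dy, dx)              // handles any line orientation
    let count = Int(length / stampWidth)   // whole stamps that fit
    guard count > 0 else { return [] }
    return (0..<count).map { i in
        let t = Double(i) * stampWidth / length
        return StampPlacement(x: x0 + t * dx, y: y0 + t * dy, angle: angle)
    }
}
```

On the drawing side you would build one wave as a CGMutablePath, then for each placement call `path.addPath(wave, transform: CGAffineTransform(translationX: p.x, y: p.y).rotated(by: p.angle))` and stroke the combined path (for example via a CAShapeLayer).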
Post not yet marked as solved
1 Reply
366 Views
I compress and convert the format of a picture, but a crash sometimes occurs, only on iOS 15.2.

```objc
CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
CGImageDestinationRef destinationRef = CGImageDestinationCreateWithData(destinationData, kUTTypeJPEG, 1, NULL);
NSDictionary *options = @{
    (NSString *)kCGImageDestinationDateTime : createDate,
    (NSString *)kCGImageDestinationLossyCompressionQuality : @(compressQuality)
};
UIImage *tempImageWithData = [UIImage imageWithData:sourceImageData];
CGImageDestinationAddImageAndMetadata(destinationRef,
                                      tempImageWithData.CGImage,
                                      imageMetadataRef,
                                      (__bridge CFDictionaryRef)options);
if (CGImageDestinationFinalize(destinationRef)) {
    // code
}
```

The crash:

```
0 ImageIO 0x0000000182c6bb38 IIODictionary::containsKey(__CFString const*) + 8
1 ImageIO 0x0000000182debe84 _IIOGetExifOrientation + 44
2 ImageIO 0x0000000182e17174 AppleJPEGReadPlugin::IIORecodeAppleJPEG_to_JPEG(IIOImageDestination*, IIOImageSource*) + 652
3 ImageIO 0x0000000182df283c IIOImageDestination::finalizeUsingAppleJPEGRecode() + 40
4 ImageIO 0x0000000182c9b948 IIOImageDestination::finalizeDestination() + 416
5 ImageIO 0x0000000182c73234 _CGImageDestinationFinalize + 128
```

I think CGImageDestinationFinalize is the problem, but what could the reasons be?
Post not yet marked as solved
0 Replies
294 Views
Wondering if it is possible to post HID gamepad events to the system, similar to keyboard and mouse NSEvent or CGEvent. I am able to monitor gamepad events and get the usage value via IOHIDElementGetUsage (48 and 49 for the axes) and the value via IOHIDValueGetIntegerValue. I would like to generate these same events from code, to simulate them without any actual controller attached to the system.
Post not yet marked as solved
3 Replies
722 Views
I wrote the code below to save an image from PHImageManager, but a crash occurs. The data is from PHImageManager.default().requestImageDataAndOrientation.

Crashing code:

```swift
guard let cgImage = UIImage(data: data)?.cgImage else { return }
let metadata = ciImage.properties
let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, uti as CFString, 1, nil)!
CGImageDestinationAddImage(destination, cgImage, metadata as CFDictionary?)
let success: Bool = CGImageDestinationFinalize(destination) // <- crashed
```

Non-crashing code:

```swift
guard let cgImage = UIImage(data: data)?.cgImage else { return }
let metadata = ciImage.properties
let destination: CGImageDestination = CGImageDestinationCreateWithURL(url as CFURL, uti as CFString, 1, nil)!
CGImageDestinationAddImage(destination, cgImage, nil)
let success: Bool = CGImageDestinationFinalize(destination) // <- not crashed
```

metadata:

```
{
    ColorModel = RGB;
    DPIHeight = 72;
    DPIWidth = 72;
    Depth = 8;
    PixelHeight = 2160;
    PixelWidth = 2880;
    ProfileName = "sRGB IEC61966-2.1";
    "{Exif}" = {
        ApertureValue = "1.356143809255609";
        BrightnessValue = "0.1278596944592232";
        ColorSpace = 1;
        ComponentsConfiguration = (1, 2, 3, 0);
        CompositeImage = 2;
        DateTimeDigitized = "2021:12:28 08:38:28";
        DateTimeOriginal = "2021:12:28 08:38:28";
        DigitalZoomRatio = "1.300085984522786";
        ExifVersion = (2, 2, 1);
        ExposureBiasValue = "0.09803208290449658";
        ExposureMode = 0;
        ExposureProgram = 2;
        ExposureTime = "0.025";
        FNumber = "1.6";
        Flash = 16;
        FlashPixVersion = (1, 0);
        FocalLenIn35mmFilm = 33;
        FocalLength = "4.2";
        ISOSpeedRatings = (400);
        LensMake = Apple;
        LensModel = "iPhone 12 back camera 4.2mm f/1.6";
        LensSpecification = ("4.2", "4.2", "1.6", "1.6");
        MeteringMode = 5;
        OffsetTime = "+09:00";
        OffsetTimeDigitized = "+09:00";
        OffsetTimeOriginal = "+09:00";
        PixelXDimension = 2880;
        PixelYDimension = 2160;
        SceneCaptureType = 0;
        SceneType = 1;
        SensingMethod = 2;
        ShutterSpeedValue = "5.321697281908764";
        SubjectArea = (2011, 1509, 2216, 1329);
        SubsecTimeDigitized = 686;
        SubsecTimeOriginal = 686;
        WhiteBalance = 0;
    };
    "{IPTC}" = {
        DateCreated = 20211228;
        DigitalCreationDate = 20211228;
        DigitalCreationTime = 083828;
        TimeCreated = 083828;
    };
    "{JFIF}" = {
        DensityUnit = 0;
        JFIFVersion = (1, 0, 1);
        XDensity = 72;
        YDensity = 72;
    };
    "{TIFF}" = {
        DateTime = "2021:12:28 08:38:28";
        HostComputer = "iPhone 12";
        Make = Apple;
        Model = "iPhone 12";
        Orientation = 0;
        ResolutionUnit = 2;
        Software = "Snowcorp SODA 5.4.8 / 15.2";
        XResolution = 72;
        YResolution = 72;
    };
}
```

What are the reasons? If I use CGImageDestinationAddImageFromSource instead of CGImageDestinationAddImage, there is no crash even if I add metadata. If I use PHImageManager.default().requestImage instead of PHImageManager.default().requestImageDataAndOrientation, and extract the cgImage, there is no crash even if I add metadata.
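One detail worth noticing in the dump above is `Orientation = 0` in the `{TIFF}` dictionary: TIFF/Exif orientation is only defined for values 1 through 8, and another ImageIO crash on this page also passes through `_IIOGetExifOrientation`. A hedged guess, then, is that sanitizing the metadata before handing it to CGImageDestinationAddImage might avoid the crash. A minimal, dictionary-level sketch follows — `sanitizeMetadata` is my own name and plain-string keys stand in for `kCGImagePropertyTIFFDictionary`/`kCGImagePropertyTIFFOrientation` to keep it platform-independent; this is an assumption, not a confirmed fix.

```swift
import Foundation

// Drop a TIFF Orientation entry when it is outside the valid 1...8 range,
// leaving every other key untouched.
func sanitizeMetadata(_ metadata: [String: Any]) -> [String: Any] {
    var result = metadata
    if var tiff = result["{TIFF}"] as? [String: Any],
       let orientation = tiff["Orientation"] as? Int,
       !(1...8).contains(orientation) {
        tiff["Orientation"] = nil   // invalid value; better absent than wrong
        result["{TIFF}"] = tiff
    }
    return result
}
```

On iOS you would run `ciImage.properties` through a check like this (using the real `kCGImageProperty…` keys) before the `CGImageDestinationAddImage` call.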
Post not yet marked as solved
3 Replies
367 Views
Doing a

```swift
guard let cgImage = CGWindowListCreateImage(.null, [.optionIncludingWindow], cgID, [.nominalResolution]) else {
    print("problem!")
    continue
}
```

where cgID is a CGWindowID for a Desktop background image almost always returns a CGImage of the Desktop (minus any icons on the Desktop, of course). However, under Monterey, there is a finite possibility that the returned image is simply gray or some chopped-up version of the actual Desktop. This usually happens when Spaces are changed and code is triggered to update the image from an NSWorkspace.shared.notificationCenter notification named NSWorkspace.activeSpaceDidChangeNotification. Is there a way to detect when the returned image is not correct? The else branch of the guard is never triggered, and the cgImage is the correct size, just the wrong content. In fact, comparing a good cgImage to a bad cgImage, there doesn't appear to be any difference. The documentation for .optionIncludingWindow says: "You must combine this option with the optionOnScreenAboveWindow or optionOnScreenBelowWindow option to retrieve meaningful results." However, including either option (e.g. [.optionOnScreenBelowWindow, .optionIncludingWindow]) can still result in an incorrect image. As an aside, https://developer.apple.com/videos/play/wwdc2019/701/ at the 15:49 mark shows using only optionIncludingWindow, so I'm not sure which documentation is correct.
Post not yet marked as solved
0 Replies
295 Views
I'm working on an AppKit-based macOS application that has a transparent pane as part of the window. The view structure is an IB-based storyboard. The transparent pane has UI items embedded in it, one of which is an NSTableView (which also has a transparent background; the cell view backgrounds are transparent as well). After the application launches, the table view leaves graphic artifacts (in some background layer) when the table is scrolled. If the window is resized, the artifacts disappear (resizing seems to force a redraw). If the table is then scrolled, the artifacts re-appear (again, at the last position of the actual objects). The artifacts appear to be grayscale outlines or edges of the objects (text or images). This is only an issue when using transparency. Does anyone know what these artifacts are from, or perhaps how to get rid of them? I assume that there is some redraw/refresh operation that is not occurring when it should (and is an issue with transparent backgrounds), but I've not yet been able to figure out how to properly trigger it. (macOS Monterey 12.0.1, Xcode 13.2)
Post not yet marked as solved
1 Reply
387 Views
Is there any way to send key/mouse events to unfocused windows? Currently my code looks like this:

```swift
let src = CGEventSource(stateID: CGEventSourceStateID.hidSystemState)
let key_d = CGEvent(keyboardEventSource: src, virtualKey: 0x12, keyDown: true)  // key "1" press
let key_u = CGEvent(keyboardEventSource: src, virtualKey: 0x12, keyDown: false) // key "1" release
key_d?.postToPid(Int32(pid))
key_u?.postToPid(Int32(pid))
```

Unfortunately this works only for the application that owns the menu bar. I have tried different methods, but none of them works. I would love to send those events directly to an app by selecting a specific window ID instead of a pid, but anything that works with unfocused apps would be good.
Post marked as solved
1 Reply
426 Views
I'm developing an application for macOS which requires screen recording. Each time I recompile the code, I have to manually add an exception for the application in Security & Privacy (Screen Recording tab). Is there a way to grant it only once? I was trying to request access using this code:

```swift
if !CGPreflightScreenCaptureAccess() {
    print("not granted!")
    let result = CGRequestScreenCaptureAccess()
    if result == true {
        print("Screen recording granted, thank you.")
    } else {
        print("Not granted! Bye-bye...")
        exit(1)
    }
}
```

but CGRequestScreenCaptureAccess does not wait for approval. I was also trying to poll the current status by calling CGPreflightScreenCaptureAccess in a loop, but it always returned false, even after manual approval. When I run this application from the terminal (which has permanent access to the screen), everything works fine. But that way, I cannot debug anything.
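Since CGRequestScreenCaptureAccess only shows the prompt and returns immediately, one workaround to try is polling with a deadline rather than a bare loop. The sketch below (under the assumption that re-checking the preflight eventually reflects the grant, which the poster reports does not always hold — macOS may require the process to relaunch before the new permission is reported) takes the check as a closure, so the timing logic is separate from the Core Graphics call; `waitUntil` is my own name.

```swift
import Foundation

// Poll `check` every `interval` seconds until it returns true or `timeout`
// seconds have elapsed. Returns the final result of the check.
// Generic on purpose: on macOS you would pass { CGPreflightScreenCaptureAccess() }.
func waitUntil(timeout: TimeInterval,
               interval: TimeInterval = 0.5,
               check: () -> Bool) -> Bool {
    let deadline = Date().addingTimeInterval(timeout)
    while Date() < deadline {
        if check() { return true }
        Thread.sleep(forTimeInterval: interval)
    }
    return check()   // one last look after the deadline
}
```

Usage would be a single `CGRequestScreenCaptureAccess()` followed by `waitUntil(timeout: 30) { CGPreflightScreenCaptureAccess() }`; if that still reports false after a manual grant, a relaunch of the process is likely unavoidable, which matches what the poster observed.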
Post not yet marked as solved
0 Replies
270 Views
Capturing an image of an off-screen window with CGWindowListCreateImage is a common way to create QuickLook-style zoom-in animations, but it seems to give the wrong background colour, slightly whiter than the actual window has when it is shown onscreen. This causes a flash at the end of the animation which rather ruins the effect. Does anyone have any idea why this happens and what can be done about it? If I set the window appearance to textured in Interface Builder this problem goes away, but then I have the problem that the window looks different (darker) than other windows in the app. I can set the window background to a custom color that makes it match the other windows, but then it still looks off in older macOS versions. I made a sample project that illustrates the problem here: https://github.com/angstsmurf/WindowCaptureTest.
Post not yet marked as solved
2 Replies
402 Views
I'm using VNImageRequestHandler to recognize text using the camera. In my handler I'm using the topLeft, topRight, bottomLeft, bottomRight properties, which I'm scaling to the size of the canvas, to draw an outline around each text object. When I do this the Y position and Height are correct, but the Width is slightly smaller, and the X position centers the outline around the text. Any idea why this would be a different size?
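Vision reports those corner points in a normalized, lower-left-origin coordinate space, so a common source of exactly this kind of offset is the conversion to the canvas's upper-left-origin space. A plain-math sketch of that conversion (standing in for Vision's own `VNImagePointForNormalizedPoint`; the function name is mine):

```swift
import Foundation

// Convert a Vision normalized point (origin at bottom-left, both axes in
// 0...1) into a canvas whose origin is at the top-left, as UIKit expects.
func visionToCanvas(nx: Double, ny: Double,
                    width: Double, height: Double) -> (x: Double, y: Double) {
    // X scales directly; Y must be flipped because Vision's origin is at
    // the bottom-left while the drawing canvas's origin is at the top-left.
    return (x: nx * width, y: (1.0 - ny) * height)
}
```

Note that if the canvas's aspect ratio differs from the captured image's (for instance an AVCaptureVideoPreviewLayer using `.resizeAspectFill`), the X axis is cropped and scaled differently from Y, which would produce exactly the symptom described: correct Y and height, but a slightly wrong width with X centred on the text. Mapping through the preview layer's `layerRectConverted(fromMetadataOutputRect:)` instead of raw multiplication is the usual way to account for that.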
Post not yet marked as solved
1 Reply
552 Views
I have an application coded in Objective-C that uses Core Graphics and CGPDFDocument; it's a PDF reader. With the release of iOS 15 I'm having problems with the rendering of certain pages in certain PDF files. The problem is not present with PDFKit. I have also downloaded the ZoomingPDFViewer example (https://developer.apple.com/library/archive/samplecode/ZoomingPDFViewer/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010281) from the official Apple documentation page, and I see that the same thing happens there. See the problem
Post marked as solved
7 Replies
486 Views
Code to draw a graph of data. Each abscissa has an ordinate range to be displayed as a line segment. All data, i.e., scaled points, are verified to be within the declared analysisView.bounds. strokeColors are verified to be within the range 0...1. BTW, no, I don't need animation for this static data, but CALayer seemed to require more coding, and I found fewer code examples for it. The code below has two problems: 1) it doesn't draw into the window; 2) the weird behavior of min/max. The first is why I am posting. What am I missing?

```swift
import AppKit

class AnalysisViewController: NSViewController {
    @IBOutlet var analysisView: NSView!
    var ranges = [ClosedRange<Double>]()
    var ordinateMinimum = CGFloat()
    var ordinateMaximum = CGFloat()
    var ordinateScale = CGFloat()
    let abscissaMinimum: CGFloat = 1
    let abscissaMaximum: CGFloat = 92
    let abscissaScale: CGFloat = 800/92
    let shapeLayer = CAShapeLayer()
    var points = [CGPoint]() // created just to verify (in debugger area) that points are within analysisView.bounds

    func genrateGraph() {
        // ranges.append(0...0)     // inexplicably FAILS at ordinateMinimum/ordinateMaximum if it replaces "if N == 1" below
        // ranges.append(0.1...0.1) // non-zero range does not fail but becomes the min or max, therefore not useful
        for N in 1...92 {
            if let element = loadFromJSON(N) {
                if N == 1 { ranges.append(element.someFunction()) } // ranges[0] is an unused placeholder
                // if N == 1 { ranges.append(0...0) } // inexplicably FAILS at ordinateMinimum/ordinateMaximum if replacing the line above
                ranges.append(element.someFunction())
            } else {
                ranges.append(0...0) // some elements have no range data
            }
        }
        ordinateMinimum = CGFloat(ranges.min(by: { $0 != 0...0 && $1 != 0...0 && $0.lowerBound < $1.lowerBound })!.lowerBound)
        ordinateMaximum = CGFloat(ranges.max(by: { $0 != 0...0 && $1 != 0...0 && $0.upperBound < $1.upperBound })!.upperBound)
        ordinateScale = analysisView.frame.height / (ordinateMaximum - ordinateMinimum)
        for range in 1..<ranges.count {
            shapeLayer.addSublayer(CALayer()) // sublayer each abscissa range so that .strokeColor can be assigned to each
            // shapeLayer.frame = CGRect(x: 0, y: 0, width: analysisView.frame.width, height: analysisView.frame.height) // might be unnecessary
            let path = CGMutablePath() // a new path for every sublayer, i.e., range that is displayed as a line segment
            points.append(CGPoint(x: CGFloat(range) * abscissaScale, y: CGFloat(ranges[range].lowerBound) * ordinateScale))
            path.move(to: points.last!)
            points.append(CGPoint(x: CGFloat(range) * abscissaScale, y: CGFloat(ranges[range].upperBound) * ordinateScale))
            path.addLine(to: points.last!)
            path.closeSubpath()
            shapeLayer.path = path
            // shapeLayer.strokeColor = CGColor.white
            let r: CGFloat = 1.0 / CGFloat(range)
            let g: CGFloat = 0.3 / CGFloat(range)
            let b: CGFloat = 0.7 / CGFloat(range)
            // print("range: \(range)\tr: \(r)\tg: \(g)\tb: \(b)") // just to verify 0...1 values
            shapeLayer.strokeColor = CGColor(srgbRed: r, green: g, blue: b, alpha: 1.0)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.wantsLayer = true // one of these (view or analysisView) must be unnecessary
        view.frame = CGRect(x: 0, y: 0, width: 840, height: 640)
        analysisView.wantsLayer = true
        analysisView.frame = CGRect(x: 0, y: 0, width: 840, height: 640)
        genrateGraph()
    }
}
```
Post not yet marked as solved
0 Replies
290 Views
I have a CALayer with many sublayers. Those sublayers have multiple CABasicAnimations added to them. Now, I'd like to render the whole layer subtree to a UIImage at a specific point in the animation timeline. How could I achieve that? The only thing I found is the CALayer.render(in:) method, but the docs say that this method ignores Core Animations :<
Post not yet marked as solved
0 Replies
341 Views
I'm trying to add an animated CALayer over my video and export it with AVAssetExportSession. I'm animating the layer using a CABasicAnimation set to my custom property. However, it seems that func draw(in ctx: CGContext) is never called during an export for my custom layer, and no animation is played. I found out that animating standard properties like borderWidth works fine, but custom properties are ignored. Can someone help with that?

```swift
func export(standard: Bool) {
    print("Exporting...")
    let composition = AVMutableComposition()
    //composition.naturalSize = CGSize(width: 300, height: 300)

    // Video track
    let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: CMPersistentTrackID(1))!
    let _videoAssetURL = Bundle.main.url(forResource: "emptyVideo", withExtension: "mov")!
    let _emptyVideoAsset = AVURLAsset(url: _videoAssetURL)
    let _emptyVideoTrack = _emptyVideoAsset.tracks(withMediaType: .video)[0]
    try! videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: _emptyVideoAsset.duration), of: _emptyVideoTrack, at: .zero)

    // Root layer
    let rootLayer = CALayer()
    rootLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)

    // Video layer
    let video = CALayer()
    video.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(video)

    // Animated layer
    let animLayer = CustomLayer()
    animLayer.progress = 0.0
    animLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(animLayer)
    animLayer.borderColor = UIColor.green.cgColor
    animLayer.borderWidth = 0.0

    let key = standard ? "borderWidth" : "progress"
    let anim = CABasicAnimation(keyPath: key)
    anim.fromValue = 0.0
    anim.toValue = 50.0
    anim.duration = 6.0
    anim.beginTime = AVCoreAnimationBeginTimeAtZero
    anim.isRemovedOnCompletion = false
    animLayer.add(anim, forKey: nil)

    // Video composition
    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.renderSize = composition.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    // Animation tool
    let animTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: video, in: rootLayer)
    videoComposition.animationTool = animTool

    // Video instruction > Basic
    let videoInstruction = AVMutableVideoCompositionInstruction()
    videoInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    videoComposition.instructions = [videoInstruction]

    // Video instruction > Layer instructions
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    videoInstruction.layerInstructions = [layerInstruction]

    // Session
    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.shouldOptimizeForNetworkUse = true
    var url = FileManager.default.temporaryDirectory.appendingPathComponent("\(arc4random()).mov")
    url = URL(fileURLWithPath: url.path)
    exportSession.outputURL = url
    exportSession.outputFileType = .mov
    _session = exportSession
    exportSession.exportAsynchronously {
        if let error = exportSession.error {
            print("Fail. \(error)")
        } else {
            print("Ok")
            print(url)
            DispatchQueue.main.async {
                let vc = AVPlayerViewController()
                vc.player = AVPlayer(url: url)
                self.present(vc, animated: true) {
                    vc.player?.play()
                }
            }
        }
    }
}
```

CustomLayer:

```swift
class CustomLayer: CALayer {
    @NSManaged var progress: CGFloat

    override init() {
        super.init()
    }

    override init(layer: Any) {
        let l = layer as! CustomLayer
        super.init(layer: layer)
        print("Copy. \(progress) \(l.progress)")
        self.progress = l.progress
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        let needsDisplayKeys = ["progress"]
        if needsDisplayKeys.contains(key) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }

    override func display() {
        print("Display. \(progress) | \(presentation()?.progress)")
        super.display()
    }

    override func draw(in ctx: CGContext) {
        // Save / restore ctx
        ctx.saveGState()
        defer { ctx.restoreGState() }
        print("Draw. \(progress)")
        ctx.move(to: .zero)
        ctx.addLine(to: CGPoint(x: bounds.size.width * progress, y: bounds.size.height * progress))
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(40)
        ctx.strokePath()
    }
}
```

Here's a full sample project if someone is interested: https://www.dropbox.com/s/evkm60wkeb2xrzh/BrokenAnimation.zip?dl=0
Post not yet marked as solved
0 Replies
216 Views
After updating to Beta 9, 12.0 Beta (21A5543b), you can no longer change the resolution! In beta 8 it worked without problems. The MacBook Pro restarts without the resolution having been changed. Does anyone have an idea, or is it a bug?? I have already reported it as such.