MetalKit


Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

MetalKit Documentation

Posts under MetalKit tag

58 Posts
Post not yet marked as solved
4 Replies
2.9k Views
When I use Metal to render and the application switches to the background, Metal rendering fails on iOS 15. What can I do? Error: Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
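One common mitigation (a minimal sketch, assuming an MTKView-driven draw loop; the RendererHost type and property names are illustrative, not from the post) is to pause rendering before the app leaves the foreground so no GPU work is submitted while it is not permitted:

import UIKit
import MetalKit

final class RendererHost {
    let mtkView: MTKView
    private var observers: [NSObjectProtocol] = []

    init(mtkView: MTKView) {
        self.mtkView = mtkView
        // Stop the MTKView's internal draw loop before the app is backgrounded...
        observers.append(NotificationCenter.default.addObserver(
            forName: UIApplication.willResignActiveNotification,
            object: nil,
            queue: .main,
            using: { [weak self] _ in self?.mtkView.isPaused = true }))
        // ...and resume it once the app is active again.
        observers.append(NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main,
            using: { [weak self] _ in self?.mtkView.isPaused = false }))
    }
}

If command buffers may still be in flight when the app is backgrounded, waiting for them to complete inside the deactivation handler is also worth considering.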
Posted
by
Post not yet marked as solved
1 Reply
1.2k Views
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments but the program crashes. Here is my shader code, which compiles successfully:

extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent();
    float minX = innerRect.x + time*innerRect.z;
    float minY = innerRect.y + time*innerRect.w;
    float cropWidth = (1 - time) * innerRect.w;
    float cropHeight = (1 - time) * innerRect.z;
    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);
    if ( coord1.x > minX && coord1.x < minX + cropWidth &&
         coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}

And it crashes on initialization:

class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}

It crashes in the try line with the following error:

Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time) {
    return mix(s1, s2, time);
}
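One possible explanation (stated as an assumption, not a confirmed diagnosis): kernel functions that take coreimage::sampler arguments are general kernels, whereas CIColorKernel only accepts sample_t-style inputs, so constructing the sampler-based function as a CIColorKernel fails. A minimal sketch of creating it as a general CIKernel instead, reusing the metallib and function name from the post (the class name and ROI callback are illustrative):

import CoreImage

class CIWipeRendererGeneral: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    // Sampler-based kernel functions are general kernels, so build a CIKernel here.
    static var kernel: CIKernel = {
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage,
              let foregroundImage = foregroundImage else { return nil }
        // General kernels take a region-of-interest callback; passing the destination
        // rect straight through is the simplest (if conservative) choice.
        return CIWipeRendererGeneral.kernel.apply(extent: backgroundImage.extent,
                                                  roiCallback: { _, rect in rect },
                                                  arguments: [backgroundImage, foregroundImage, inputTime])
    }
}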
Posted
by
Post not yet marked as solved
3 Replies
862 Views
I downloaded this sample: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ

I commented out this line in AAPLViewController.mm:

//    _view.enableSetNeedsDisplay = YES;

I modified the presentDrawable line in AAPLRenderer.mm to add afterMinimumDuration:

[commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60];

I then added a presentedHandler before the above line that records the time between successive presents. Most of the time it correctly reports 0.166667s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333333s followed by the next frame after around 0.24s. Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make things consistent. Why would it present a frame early? This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0, and the latest public Xcode release, 13.1.
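For reference, a minimal sketch (in Swift, with illustrative names) of one way to record the interval between successive presents using the drawable's presented handler, which is roughly what the post describes doing in Objective-C:

import Metal
import QuartzCore

var lastPresentedTime: CFTimeInterval = 0

func presentMeasured(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    // The handler fires once the drawable actually reaches the screen;
    // presentedTime is the host time at which that happened.
    drawable.addPresentedHandler { presented in
        let t = presented.presentedTime
        if lastPresentedTime > 0 {
            print("Interval since previous present: \(t - lastPresentedTime) s")
        }
        lastPresentedTime = t
    }
    commandBuffer.present(drawable, afterMinimumDuration: 1.0 / 60.0)
}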
Posted
by
Post not yet marked as solved
5 Replies
3.9k Views
Hello guys. With the release of the M1 Pro and M1 Max in particular, the Mac has become a platform that could become very interesting for games in the future. However, since some features are still missing in Metal, it could be problematic for some developers to port their games to Metal. You can already see this tendency with Unreal Engine 5, since e.g. Nanite and Lumen are unfortunately not available on the Mac. As a Vulkan developer I wanted to ask about some features that are not yet available in Metal at the moment. These features are very interesting if you want to write a GPU-driven renderer for modern game engines. Furthermore, they could be used to emulate D3D12 on the Mac via MoltenVK, which would result in more games being available on the Mac.

Buffer device address: This feature allows the application to query a 64-bit buffer device address value for a buffer. It is very useful for D3D12 emulation and for compatibility with Vulkan, e.g. to implement ray tracing in MoltenVK.

DrawIndirectCount: This feature allows an application to source the number of draws for indirect drawing calls from a buffer. Also very useful in many GPU-driven situations.

Only 500,000 resources per argument buffer: Metal has a limit of 500,000 resources per argument buffer. To be equivalent to D3D12 Resource Binding Tier 2, you would need 1 million. This is also very important, as many DirectX 12 game engines could then be ported to Metal more easily.

Mesh shader / task shader: Two interesting new shader stages to optimize the rendering pipeline.

Are there any plans to implement these features in the future? Is there a roadmap for Metal? Is there a website where I can suggest features to the Metal developers? I hope to see at least the first three features in Metal in the future, and I think that many developers feel the same way. Best regards, Marlon
Posted
by
Post not yet marked as solved
1 Reply
1.9k Views
In my game project, there is a functions.data file at /AppData/Library/Caches/[bundleID]/com.apple.metal/functions.data. When we reboot the device and launch the game, this file is reset to about 40 KB; normally it is about 30 MB. This is done by Metal. Is there any way to avoid it?
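One possible way to reduce the dependence on the system shader cache (a minimal sketch, assuming your pipeline descriptors are available up front; archiveURL and the descriptor parameter are placeholders) is to persist compiled pipelines in an MTLBinaryArchive stored at a location the app controls:

import Metal

func buildArchive(device: MTLDevice, pipelineDescriptor: MTLRenderPipelineDescriptor, archiveURL: URL) throws {
    let archiveDescriptor = MTLBinaryArchiveDescriptor()
    // Reuse an existing archive if one was previously serialized.
    if FileManager.default.fileExists(atPath: archiveURL.path) {
        archiveDescriptor.url = archiveURL
    }
    let archive = try device.makeBinaryArchive(descriptor: archiveDescriptor)

    // Record the compiled functions for this pipeline into the archive,
    // then write it back out so future launches can reuse it.
    try archive.addRenderPipelineFunctions(descriptor: pipelineDescriptor)
    try archive.serialize(to: archiveURL)

    // At pipeline-creation time, list the archive so Metal can look up
    // precompiled binaries instead of recompiling from source.
    pipelineDescriptor.binaryArchives = [archive]
    _ = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
}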
Posted
by
Post not yet marked as solved
1 Reply
1.3k Views
Hi, I want to begin by saying thank you Apple for making the Spatial framework! Please add a million more features ;-) I'm using the following code to make an object "look at" another point, but at a particular rotation the object "flips" its rotations. See a video here: https://www.dropbox.com/s/5irxt0gxou4c2j6/QuaternionFlip.mov?dl=0 I shake the mouse cursor when it happens to make it obvious to you.

import Spatial

let lookAtRotation = Rotation3D(eye: Point3D(position),
                                target: Point3D(x: 0, y: 0, z: 0),
                                up: Vector3D(x: 0, y: 1, z: 0))
myObj.quaternion = lookAtRotation.quaternion

So my question is why is this happening, and how can I fix it? thx
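One common cause of this kind of flip (offered as an assumption, not a confirmed diagnosis) is that q and -q describe the same orientation, so a look-at solution can jump to the opposite-sign quaternion between frames, which upsets anything that interpolates or accumulates the values. A minimal sketch of keeping consecutive quaternions in the same hemisphere; previousQuaternion is a variable introduced for illustration:

import simd

var previousQuaternion: simd_quatd?

func continuous(_ q: simd_quatd) -> simd_quatd {
    // If the new quaternion lands in the opposite hemisphere from the previous
    // one, negate it; q and -q encode the same rotation, so this only removes the jump.
    var result = q
    if let previous = previousQuaternion, simd_dot(previous.vector, q.vector) < 0 {
        result = simd_quatd(vector: -q.vector)
    }
    previousQuaternion = result
    return result
}

Applied to the post's code, myObj.quaternion = continuous(lookAtRotation.quaternion) would at least reveal whether the discontinuity is a sign flip or a genuine change of direction (for example when the eye passes over the up vector).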
Posted
by
MKX
Post not yet marked as solved
4 Replies
1k Views
We are using AVAssetWriter to write videos using both HEVC and H.264 encoding. Occasionally, we get reports of choppy footage in which frames appear out of order when played back on a Mac (QuickTime) or iOS device (stock Photos app). This occurs extremely unpredictably, often not starting until 20+ minutes of filming, but occasionally happening as soon as filming starts. Interestingly, users have reported the issue goes away while editing or viewing on a different platform (e.g. Linux) or in the built-in Google Drive player, but comes back as soon as the video is exported or downloaded again. When this occurs in an HEVC file, converting to H.264 seems to resolve it. I haven't found a similar fix for H.264 files. I suspect an AVAssetWriter encoding issue but haven't been able to uncover the source.

Running a stream analyzer on HEVC files with this issue reveals the following error: Short-term reference picture with POC = [some number] seems to have been removed or not correctly decoded. However, running a stream analyzer on H.264 files with the same playback issue seems to show nothing wrong.

At a high level, our video pipeline looks something like this:

1. Grab a sample buffer in captureOutput(_ captureOutput: AVCaptureOutput!, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer!)
2. Perform some Metal rendering on that buffer
3. Pass the resulting CVPixelBuffer to the AVAssetWriterInputPixelBufferAdaptor associated with our AVAssetWriter

Example files can be found here: https://drive.google.com/drive/folders/1OjDZ3XaC-ubD5hyDiNvMQGl2NVqZbWnR?usp=sharing This includes a video file suffering this issue, the same file fixed after converting to mp4, and a screen recording of the distorted playback in QuickTime. Can anyone help point me in the right direction to solving this issue? I can provide more details as necessary.
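A minimal sketch of the last step of such a pipeline, assuming the rendered CVPixelBuffer and the original sample buffer's presentation timestamp are both available (the function and parameter names are illustrative, not from the post):

import AVFoundation
import CoreMedia
import CoreVideo

func append(pixelBuffer: CVPixelBuffer,
            from sampleBuffer: CMSampleBuffer,
            to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    // Carry the capture timestamp through unchanged so frames keep their
    // original presentation order after the Metal pass.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else {
        // Dropping (or queueing) here is a policy decision; appending while
        // the input is not ready is an error.
        return
    }
    if !adaptor.append(pixelBuffer, withPresentationTime: pts) {
        print("Pixel buffer append failed at \(pts.seconds)")
    }
}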
Posted
by
Post not yet marked as solved
2 Replies
1.6k Views
Hi, when I add a map view to my view controller I receive the log [Pipeline Library] Mapping the pipeline data cache failed, errno22. This happened after I updated Xcode to 14.3, and the log is only shown when I test on a real device, not on the simulator. (This is shown in every project I create, even without calling any function to modify the map view. I only add the MapKit and Core Location frameworks.) Xcode version: Version 14.3 (14E222b). iPhone: iPhone 8, iOS version: 16.4. I've also tested on an iPhone 12 with a lower iOS version and I receive the same error.
Posted
by
Post not yet marked as solved
1 Reply
1k Views
I am developing a macOS app which has a Metal-backed view, based on CAMetalLayer. It worked fine on macOS Monterey, but after I switched to an M2 Pro machine and upgraded macOS to Ventura (tried both 13.3 and 13.4), the view would randomly display a blank pink screen, as if Metal were indicating an error or empty state. I don't get any log messages in the Console, though. In particular, the pink screen appears after I select the app's "Enter Full Screen" command. It appears in only ~1 of 10 attempts, and the pink screen goes away after the view renders its next frame (e.g. on mouse move). It's worth mentioning that the CAMetalLayer-backed view in question is hosted in the toolbar, so the issue is probably related to the fact that when an app switches to full screen mode, macOS detaches the toolbar from the main window and attaches it to a separate toolbar window. I have tried to profile the app and catch the pink frame, particularly using -[MTLCaptureManager startCaptureWithDescriptor:error:]; however, when I analyze the trace, everything seems to be fine: all vertices, textures, and commands are being transmitted to the GPU correctly. Yet on the screen I still get the occasional pink frame. I wonder if anyone could give me hints as to why this might be happening and what other tools I can use to track down the problem? Or maybe it's a macOS Ventura bug? P.S. For brevity, I'm not attaching the code, which is too large and spread over multiple files to fit here. I'm interested in general advice.
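For reference, a minimal sketch of programmatic GPU capture to a trace file, which can make an intermittent frame like this easier to catch outside an interactive Xcode session; traceURL and the point at which the capture is triggered are placeholders:

import Metal

func captureOneFrame(device: MTLDevice, traceURL: URL) throws {
    let manager = MTLCaptureManager.shared()
    let descriptor = MTLCaptureDescriptor()
    descriptor.captureObject = device
    // Write the capture to a .gputrace document on disk rather than stopping in Xcode.
    descriptor.destination = .gpuTraceDocument
    descriptor.outputURL = traceURL
    try manager.startCapture(with: descriptor)
    // ... encode and commit the suspect frame's command buffers here ...
    manager.stopCapture()
}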
Posted
by
Post not yet marked as solved
3 Replies
1.2k Views
I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples I have found of such approaches use fixed scenes, where all the data is loaded before the first draw call into something like a MeshBuffer that holds all scene geometry in the form of Mesh objects (for instance). While I can assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be some clever tricks with pointers to update a MeshBuffer as needed, I would like to know if there is an established or optimal solution to this problem, or if these approaches are simply incompatible with dynamic geometry. Any example projects that do what I am asking that I may have missed would be appreciated, too.

* I know these are not the same, but they seem to share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.
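A minimal sketch of one pattern that can work for dynamic scenes (an assumption on my part, not an established canonical answer): keep a persistent, shader-visible table of per-mesh entries and rewrite only the slots whose meshes changed, rather than rebuilding everything. This uses Metal 3 GPU addresses; the MeshSlot layout is a placeholder that would have to match the shader-side struct, and the referenced buffers still need useResource/residency handling when encoding.

import Metal

// CPU-side mirror of a shader-visible struct; the layout is illustrative only.
struct MeshSlot {
    var vertexBufferAddress: UInt64
    var indexBufferAddress: UInt64
    var indexCount: UInt32
    var padding: UInt32 = 0
}

final class MeshTable {
    let slotBuffer: MTLBuffer
    let capacity: Int

    init(device: MTLDevice, capacity: Int) {
        self.capacity = capacity
        slotBuffer = device.makeBuffer(length: MemoryLayout<MeshSlot>.stride * capacity,
                                       options: .storageModeShared)!
    }

    // When a mesh is added or replaced, overwrite just its slot; untouched
    // meshes keep their existing entries, so no full rebuild is needed.
    func update(slot index: Int, vertexBuffer: MTLBuffer, indexBuffer: MTLBuffer, indexCount: Int) {
        precondition(index < capacity)
        let entry = MeshSlot(vertexBufferAddress: vertexBuffer.gpuAddress,
                             indexBufferAddress: indexBuffer.gpuAddress,
                             indexCount: UInt32(indexCount))
        let dst = slotBuffer.contents().advanced(by: MemoryLayout<MeshSlot>.stride * index)
        withUnsafeBytes(of: entry) { dst.copyMemory(from: $0.baseAddress!, byteCount: $0.count) }
    }
}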
Posted
by
Post not yet marked as solved
0 Replies
511 Views
Hello, Metal is a hard biscuit. The first part of my project is to draw wireframe Platonic solids; I replaced the triangle with a cube which I want to rotate with two sliders. But I don't know how: the MTKView is the view of the view controller, so will the sliders rotate too, or not? Would you developers use the XIP?
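A minimal sketch of how this is often wired up (assumptions: the sliders are ordinary subviews laid out on top of, or beside, the MTKView, so they are not rotated themselves; the angles are passed to the vertex shader as part of a model matrix each frame; all names below are illustrative):

import UIKit
import MetalKit
import simd

class SolidsViewController: UIViewController {
    var rotationX: Float = 0
    var rotationY: Float = 0

    // The slider actions only store angles; the cube is rotated by the model
    // matrix built each frame, so the sliders themselves never rotate on screen.
    @IBAction func xSliderChanged(_ sender: UISlider) { rotationX = sender.value }
    @IBAction func ySliderChanged(_ sender: UISlider) { rotationY = sender.value }

    func modelMatrix() -> simd_float4x4 {
        let rx = simd_float4x4(simd_quatf(angle: rotationX, axis: SIMD3<Float>(1, 0, 0)))
        let ry = simd_float4x4(simd_quatf(angle: rotationY, axis: SIMD3<Float>(0, 1, 0)))
        return ry * rx  // upload this as a uniform before each draw call
    }
}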
Posted
by
Post not yet marked as solved
1 Reply
888 Views
It seems like Apple Silicon devices (even M1/M2 Max) can only use a certain percentage of their total unified memory for GPU/Metal. This seems to be a limitation related to recommendedMaxWorkingSetSize. This is quite odd, because even M1 Mac minis or MacBook Airs run totally fine with 8GB of total memory for both the OS and GPU, so why limit this in the first place? It also seems like false advertising from Apple to not clearly state this limitation. I am asking this in regard to the following open source project (but of course more software will be impacted by the same limitation): https://github.com/ggerganov/llama.cpp/pull/1826 Another resource I've found: https://developer.apple.com/videos/play/tech-talks/10580/?time=546 If anyone has any ideas on how these limitations can be overcome and how to get apps to use more memory for the GPU (Metal), I (and the open source community) would be truly grateful! Thanks in advance!
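A minimal sketch of querying what the current device actually reports, so an app can size its allocations against the budget rather than against total RAM (recommendedMaxWorkingSetSize is available on macOS):

import Metal

func reportGPUMemoryLimits() {
    guard let device = MTLCreateSystemDefaultDevice() else { return }
    // recommendedMaxWorkingSetSize is the resident-memory budget Metal suggests;
    // on Apple Silicon it is typically a fraction of total unified memory.
    let budgetGiB = Double(device.recommendedMaxWorkingSetSize) / 1_073_741_824
    print("Unified memory: \(device.hasUnifiedMemory)")
    print("Recommended max working set: \(budgetGiB) GiB")
    print("Currently allocated by this device: \(device.currentAllocatedSize) bytes")
}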
Posted
by
Post marked as solved
1 Reply
527 Views
Hi, I was watching this WWDC23 video on Metal with xrOS (https://developer.apple.com/videos/play/wwdc2023/10089/?time=1222). However, when I tried it, the Compositor Services API wasn't available. Is it? Or when will it be released? Thanks.
Posted
by
Post not yet marked as solved
4 Replies
1.1k Views
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop & affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)
let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)
var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)
let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)
if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}
var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know if I am doing anything wrong and what could be done to lower CPU usage in Core Image?
Posted
by
Post not yet marked as solved
0 Replies
470 Views
I'm trying to create an app similar to PolyCam using LiDAR. I'm using SceneKit mesh reconstruction and am able to apply some random textures, but I need real-world textures in the generated output 3D model. The few examples I found are related to MetalKit and point clouds, which were not helpful. Can you help me out with any references/steps/tutorials on how to achieve this?
Posted
by
Post not yet marked as solved
0 Replies
489 Views
I am drawing an image in an MTKView using a Metal shader based on the 'pointCloudVertexShader' used in this sample code. The image can be moved with a 'dragGesture' in the MTKView, similarly to the 'MetalPointCloud' view in the sample code. I want to implement a 'UITapGestureRecognizer' that, when tapping in the MTKView, returns the appropriate 'vecout' (from the 'pointCloudVertexShader') value for the given location (see code below).

// Calculate the vertex's world coordinates.
float xrw = ((int)pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
float yrw = ((int)pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];
float4 xyzw = { xrw, yrw, depth, 1.f };
// Project the coordinates to the view.
float4 vecout = viewMatrix * xyzw;

The 'vecout' variable is computed in the Metal vertex shader; it would be associated with a coloured pixel. My idea is to use the 'didTap' method to evaluate the desired pixel data (3D coordinates).

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: mtkView)
    // Use the location to get the pixel value.
}

Is there any way to get this value directly from the MTKView? Thanks in advance!
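For what it's worth, a minimal sketch of mirroring the same math on the CPU for a single tapped point, assuming the depth value at that pixel, the camera intrinsics, and the view matrix are accessible from Swift (all names below are assumptions, and the tap location must first be converted from view points into the depth map's pixel coordinates):

import simd

// Reproduces the shader's unprojection for one pixel.
func worldPoint(atPixel pixel: SIMD2<Float>,
                depth: Float,
                cameraIntrinsics: simd_float3x3,
                viewMatrix: simd_float4x4) -> SIMD4<Float> {
    let xrw = (pixel.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0]
    let yrw = (pixel.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1]
    let xyzw = SIMD4<Float>(xrw, yrw, depth, 1)
    return viewMatrix * xyzw  // the same value the shader calls vecout
}

Reading the depth for that pixel on the CPU (for example from a shared-storage copy of the depth texture) is the part the MTKView itself will not give you directly.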
Posted
by
Post not yet marked as solved
9 Replies
944 Views
I'm trying to save Metal textures in a lossless compressed format. I've tried PNG and TIFF, but I run into the same problem: the pixel data changes after save and load when we have transparency. Here is the code I use to save a TIFF:

import ImageIO
import UIKit
import Metal
import MobileCoreServices

extension MTLTexture {
    func saveAsLosslessTIFF(url: URL) throws {
        guard let context = CIContext() else { return }
        guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB) else { return }
        guard let ciImage = CIImage(mtlTexture: self, options: [.colorSpace : colorSpace]) else { return }
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        // create a dictionary with TIFF compression options
        let tiffCompression_LZW = 5
        let options: [String: Any] = [
            kCGImagePropertyTIFFCompression as String: tiffCompression_LZW,
            kCGImagePropertyDepth as String: depth,
            kCGImagePropertyPixelWidth as String: width,
            kCGImagePropertyPixelHeight as String: height,
        ]
        let fileDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeTIFF, 1, nil)
        guard let destination = fileDestination else {
            throw RuntimeError("Unable to create image destination.")
        }
        CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)
        if !CGImageDestinationFinalize(destination) {
            throw RuntimeError("Unable to save image to destination.")
        }
    }
}

I can then load the texture like this:

func loadTexture(url: URL) throws -> MTLTexture {
    let usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue | MTLTextureUsage.shaderWrite.rawValue)
    return try loader.newTexture(URL: url, options: [MTKTextureLoader.Option.textureUsage: usage.rawValue, MTKTextureLoader.Option.origin: MTKTextureLoader.Origin.flippedVertically.rawValue])
}

After saving and then loading the texture again, I want to get back the exact same texture. And I do, if there is no transparency. Transparent pixels, however, are transformed in a way that I don't understand. Here is an example pixel: [120, 145, 195, 170] -> [144, 174, 234, 170]. My first guess would be that something is trying to undo a pre-multiplied alpha that never happened. But the numbers don't seem to work out. For example, if that were the case I'd expect 120 to go to (120 * 255) / 170 = 180, not 144. Any idea what I am doing wrong?
Posted
by
Post not yet marked as solved
1 Reply
397 Views
Here is a normal Xcode project. When it runs, there are no more than 5 wakes per second on the CPU. But when I create a class inheriting from MTKView in this project, there are more than 100 wakes per second on the CPU, which may make my device hot. I have also found other possible ways to create unusual CPU wakes, but I need to test more. Steps:

1. Create a new Xcode project using Swift.
2. Run the project and check the wakes using Xcode; expect fewer than 5.
3. Add the following code to the project.
4. Run the project and check the wakes using Xcode; expect more than 100.

import Foundation
import MetalKit

class MTKPlayerView: MTKView {
    init() {
        let dev = MTLCreateSystemDefaultDevice()!
        super.init(frame: .zero, device: dev)
    }

    required init(coder: NSCoder) {
        super.init(coder: coder)
    }
}

I haven't found any information about this thread named gputools_smt_poll. If you know anything about it, please share it with me.
Posted
by