MetalKit


Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.


Posts under MetalKit tag

61 Posts
Post not yet marked as solved
0 Replies
385 Views
I'm developing a drawing app. I use MTKView to render the canvas, but for some reason, and only for a few users, the pixels are not rendered correctly (pixels have different sizes); the majority of users have no problem with this. Here is my setup:

- Each pixel is rendered as two triangles.
- The MTKView's frame dimensions are always a multiple of the canvas size (a 100x100 canvas will have a frame size of 100x100, 200x200, and so on).
- There is a grid to indicate pixels (a SwiftUI Path) which displays correctly, and we can see that it doesn't align with the rendered pixels.
- There is also a checkerboard pattern in the background, rendered using another MTKView, which lines up with the pixels but not with the grid.

Previously I had a similar issue when the view's frame was not a multiple of the canvas size, but the setup above already fixes that. The issue worsens as the number of points representing one canvas pixel gets smaller; e.g. a 100x100 canvas on a 100x100 view is worse than a 100x100 canvas on a 500x500 view. The vertices have accurate coordinates, so this is a rendering issue. As you can see in the picture, some pixels are bigger than others. I tried changing the contentScaleFactor to 1, 2, and 3, but none of them solves the problem.

My MTKView setup:

clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
delegate = renderer
renderer.setup()
isOpaque = false
layer.magnificationFilter = .nearest
layer.minificationFilter = .nearest

Renderer's setup:

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineState = try? device.makeRenderPipelineState(descriptor: pipelineDescriptor)

Draw method of the renderer:

commandEncoder.setRenderPipelineState(pipelineState)
commandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(colorBuffer, offset: 0, index: 1)
commandEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: indexCount,
    indexType: .uint32,
    indexBuffer: indexBuffer,
    indexBufferOffset: 0
)
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()

Metal file:

struct VertexOut {
    float4 position [[ position ]];
    half4 color;
};

vertex VertexOut frame_vertex(constant const float2* vertices [[ buffer(0) ]],
                              constant const half4* colors [[ buffer(1) ]],
                              uint v_id [[ vertex_id ]]) {
    VertexOut out;
    out.position = float4(vertices[v_id], 0, 1);
    out.color = colors[v_id / 4];
    return out;
}

fragment half4 frame_fragment(VertexOut v [[ stage_in ]]) {
    half alpha = v.color.a;
    return half4(v.color.r * alpha, v.color.g * alpha, v.color.b * alpha, alpha);
}
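One possible cause, offered here as a guess rather than something from the post: on devices with non-integer display scales, the drawable can end up a non-integer multiple of the canvas even when the frame in points is an exact multiple, which makes some canvas pixels one device pixel wider than others. A minimal sketch that pins the drawable size explicitly (canvasSize is a hypothetical name; a square canvas is assumed):

import UIKit
import MetalKit

// Sketch: force the drawable to an exact integer multiple of the canvas size,
// so every canvas pixel maps to the same whole number of device pixels.
func pinDrawableSize(of view: MTKView, canvasSize: Int) {
    view.autoResizeDrawable = false
    let scale = view.window?.screen.scale ?? UIScreen.main.scale
    let devicePixels = Int(view.bounds.width * scale)
    let multiple = max(1, devicePixels / canvasSize)
    let side = CGFloat(multiple * canvasSize)
    view.drawableSize = CGSize(width: side, height: side)
}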
Posted
by Son Seo.
Last updated
.
Post not yet marked as solved
1 Reply
484 Views
language: Swift. I use Metal shaders to render an image whose size is (4096, 2304). The mtkView.frame.size is (414.0, 233.0). I convert drawable.texture to a UIImage and save it, but its size is (828, 466). How can I save an image at its full size of (4096, 2304)? Thanks for your help!
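One way to get the full 4096x2304 pixels, sketched below: instead of reading back the view's drawable (whose size tracks the view's frame and scale), render the same pass into an offscreen texture of the target size and convert that to a UIImage. The draw-call encoding is elided; device and the pass setup are the usual Metal boilerplate:

import Metal

// Sketch: render offscreen at full resolution instead of reading the drawable.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 4096,
                                                    height: 2304,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let target = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = target
pass.colorAttachments[0].loadAction = .clear
pass.colorAttachments[0].storeAction = .store
// ... encode the same shaders/draw calls used for the on-screen pass ...
// Afterwards, build the UIImage from `target` rather than drawable.texture.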
Posted
by BiStars.
Last updated
.
Post not yet marked as solved
0 Replies
478 Views
I've got the following code to generate an MDLMaterial from my own material data model:

public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x), green: CGFloat(color.y), blue: CGFloat(color.z), alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}

When exporting to OBJ, I get the expected material properties:

# Apple ModelI/O MTL File: testExport.mtl

newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

However, when exporting USD I just get:

#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }

    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}

There aren't any material properties. FWIW, this specifies a set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html (Note: there is no tag for ModelIO, so using SceneKit, etc.)
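For reference, a minimal sketch of the export path this implies (assuming mesh is the MDLMesh being exported and model is a MaterialModel; whether Model I/O writes these properties into USD output is exactly the open question here):

import ModelIO

// Sketch: attach the material to each submesh, then export the asset.
let asset = MDLAsset()
if let submeshes = mesh.submeshes {
    for case let submesh as MDLSubmesh in submeshes {
        submesh.material = model.mdlMaterial
    }
}
asset.add(mesh)
try asset.export(to: URL(fileURLWithPath: "testExport.usda"))  // same call as for .obj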
Posted
by Audulus.
Last updated
.
Post marked as solved
2 Replies
475 Views
I'm trying to find a way to reduce the synchronization time between two compute shader dispatches, where one dispatch depends on an atomic counter written by the other.

Example: I have two Metal kernels, select and execute. select looks through the numbers buffer and stores the index of every number < 10 in a new buffer, selectedNumberIndices, using an atomic counter. execute is then run counter number of times to do something with those selected indices.

kernel void select(device atomic_uint &counter,
                   device uint *numbers,
                   device uint *selectedNumberIndices,
                   uint id [[thread_position_in_grid]]) {
    if (numbers[id] < 10) {
        uint idx = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        selectedNumberIndices[idx] = id;
    }
}

kernel void execute(device uint *selectedNumberIndices,
                    uint id [[thread_position_in_grid]]) {
    // do something, `counter` number of times
}

Currently I can do this by using .waitUntilCompleted() between the dispatches to ensure I get accurate results, something like:

// select
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(.init(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

// wait
buffer.waitUntilCompleted()

// execute
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
// extract the value of the atomic counter
let counterValue = counterBuffer.contents().load(as: UInt32.self)
encoder.dispatchThreads(.init(width: Int(counterValue), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

My question is whether there is any way to keep this functionality without the costly buffer.waitUntilCompleted() call. Or am I going about this in completely the wrong way, or missing something else?
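One pattern that removes the CPU round trip entirely, sketched below with assumed names (prepareState, indirectArgsBuffer): encode everything into a single command buffer and let the GPU size the second dispatch via dispatchThreadgroups(indirectBuffer:). A tiny intermediate kernel turns the counter into the three threadgroup counts the indirect dispatch reads:

// Sketch of a hypothetical "prepare" kernel (MSL), shown here as a comment:
//   kernel void prepare(device atomic_uint &counter [[ buffer(0) ]],
//                       device uint *args [[ buffer(1) ]]) {
//       uint n = atomic_load_explicit(&counter, memory_order_relaxed);
//       args[0] = (n + 63) / 64;   // threadgroupsPerGrid.x for width-64 groups
//       args[1] = 1;
//       args[2] = 1;
//   }

let buffer = queue.makeCommandBuffer()!
let encoder = buffer.makeComputeCommandEncoder()!   // serial: dispatches are ordered

// 1. select, as before (counterBuffer assumed zeroed beforehand)
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(MTLSize(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))

// 2. prepare: write the indirect arguments (3 x UInt32) on the GPU
encoder.setComputePipelineState(prepareState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(indirectArgsBuffer, offset: 0, index: 1)
encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))

// 3. execute, sized by the GPU; no waitUntilCompleted in between
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(indirectBuffer: indirectArgsBuffer,
                             indirectBufferOffset: 0,
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

Since the grid is rounded up to whole threadgroups, the execute kernel then needs a bounds check against the counter (which can be bound to it as a buffer).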
Posted
by gopatrik.
Last updated
.
Post not yet marked as solved
9 Replies
986 Views
I'm trying to save Metal textures in a lossless compressed format. I've tried PNG and TIFF, but I run into the same problem: the pixel data changes after save and load when we have transparency. Here is the code I use to save a TIFF:

import ImageIO
import UIKit
import Metal
import MobileCoreServices

extension MTLTexture {
    func saveAsLosslessTIFF(url: URL) throws {
        let context = CIContext()
        guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB) else { return }
        guard let ciImage = CIImage(mtlTexture: self, options: [.colorSpace : colorSpace]) else { return }
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        // create a dictionary with TIFF compression options
        let tiffCompression_LZW = 5
        let options: [String: Any] = [
            kCGImagePropertyTIFFCompression as String: tiffCompression_LZW,
            kCGImagePropertyDepth as String: depth,
            kCGImagePropertyPixelWidth as String: width,
            kCGImagePropertyPixelHeight as String: height,
        ]
        let fileDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeTIFF, 1, nil)
        guard let destination = fileDestination else {
            throw RuntimeError("Unable to create image destination.")
        }
        CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)
        if !CGImageDestinationFinalize(destination) {
            throw RuntimeError("Unable to save image to destination.")
        }
    }
}

I can then load the texture like this:

func loadTexture(url: URL) throws -> MTLTexture {
    let usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue | MTLTextureUsage.shaderWrite.rawValue)
    return try loader.newTexture(URL: url, options: [
        MTKTextureLoader.Option.textureUsage: usage.rawValue,
        MTKTextureLoader.Option.origin: MTKTextureLoader.Origin.flippedVertically.rawValue
    ])
}

After saving and then loading the texture again, I want to get back the exact same texture, and I do, if there is no transparency. Transparent pixels, however, are transformed in a way that I don't understand. Here is an example pixel:

[120, 145, 195, 170] -> [144, 174, 234, 170]

My first guess would be that something is trying to undo a premultiplied alpha that never happened. But the numbers don't seem to work out. For example, if that were the case I'd expect 120 to go to (120 * 255) / 170 = 180, not 144. Any idea what I am doing wrong?
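If the goal is a bit-exact round trip, one workaround is to bypass Core Image and color management entirely and serialize the raw pixel bytes. A minimal sketch for a bgra8Unorm texture (no compression; the function name is an assumption):

import Metal
import Foundation

// Sketch: dump the raw BGRA8 bytes of a texture so the round trip is bit-exact.
// Assumes .bgra8Unorm and a CPU-readable texture (storageMode != .private).
func saveRawPixels(of texture: MTLTexture, to url: URL) throws {
    let bytesPerRow = texture.width * 4
    var data = Data(count: bytesPerRow * texture.height)
    data.withUnsafeMutableBytes { buf in
        texture.getBytes(buf.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
    }
    try data.write(to: url)
}

Loading back is the mirror image via texture.replace(region:mipmapLevel:withBytes:bytesPerRow:). This gives up the convenience of an image file, but no color management or alpha handling can alter the bytes.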
Posted Last updated
.
Post marked as solved
1 Reply
890 Views
Hello everyone! Here with another graphics API question, but slightly different. I'm currently looking at two physics SDKs, PhysX and Bullet. The game Asphalt 9 uses Metal and Bullet, and I would like to do the same. With Metal 3 out and seemingly stable, I would like to use one of these engines with my upcoming Metal rendering engine. But there's a catch: I wish to use Objective-C or C++ for both the rendering engine and the physics engine, not Swift (it's a good language, but I prefer C++ for game development). What do you think?
Posted Last updated
.
Post not yet marked as solved
1 Reply
409 Views
Here is a normal Xcode project. When it runs, there are no more than 5 wakes per second on the CPU. But when I create a class inheriting from MTKView in this project, there are more than 100 wakes per second on the CPU, which may make my device hot. I have also found several other possible ways to create unusual wakes on the CPU, but I need to test more. Steps:

1. Create a new Xcode project using Swift.
2. Run the project and check the wakes using Xcode; expect fewer than 5.
3. Add the following code to the project:

import Foundation
import MetalKit

class MTKPlayerView: MTKView {
    init() {
        let dev = MTLCreateSystemDefaultDevice()!
        super.init(frame: .zero, device: dev)
    }

    required init(coder: NSCoder) {
        super.init(coder: coder)
    }
}

4. Run the project and check the wakes using Xcode; expect more than 100.

I haven't found any information about the thread named gputools_smt_poll. If you know anything about it, please share.
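If the extra wakes come from MTKView's internal display link running continuously (a guess, not a confirmed cause), a possible mitigation is draw-on-demand mode:

import MetalKit

// Sketch: switch an MTKView to draw-on-demand so it doesn't wake every frame.
let playerView = MTKPlayerView()
playerView.isPaused = true               // stop the display-link-driven loop
playerView.enableSetNeedsDisplay = true  // draw only after setNeedsDisplay()
// Later, when the content actually changes (iOS spelling shown):
playerView.setNeedsDisplay()             // triggers exactly one redraw

As for gputools_smt_poll: the name suggests a polling thread belonging to Xcode's GPU tools, so it may also be worth comparing wake counts with the app running detached from the Xcode debugger.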
Posted
by rdHan.
Last updated
.
Post not yet marked as solved
3 Replies
885 Views
I downloaded this sample: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ

I commented out this line in AAPLViewController.mm:

//    _view.enableSetNeedsDisplay = YES;

I modified the presentDrawable line in AAPLRenderer.mm to add afterMinimumDuration:

[commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60];

I then added a presentedHandler before the above line that records the time between successive presents. Most of the time it correctly reports 0.0166667s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333333s, followed by the next frame after around 0.024s. Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make things consistent. Why would it present a frame early? This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0. Xcode is the latest public release, 13.1.
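For reference, a sketch of the kind of measurement described (Swift here, although the sample is Objective-C; lastPresentedTime is an assumed stored property on the renderer):

// Sketch: measure the time between successive presents.
drawable.addPresentedHandler { [weak self] presented in
    guard let self = self else { return }
    let t = presented.presentedTime        // 0 if the frame was never shown
    if self.lastPresentedTime > 0 {
        print("present interval: \(t - self.lastPresentedTime) s")
    }
    self.lastPresentedTime = t
}
commandBuffer.present(drawable, afterMinimumDuration: 1.0 / 60)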
Posted
by oviano.
Last updated
.
Post not yet marked as solved
4 Replies
3.0k Views
When I use Metal to render and the application switches to the background, Metal rendering fails on iOS 15. What can I do? Error:

Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
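iOS doesn't allow apps to submit GPU work while backgrounded, so one common pattern, sketched below, is to pause the render loop around the lifecycle notifications (assuming an MTKView named view owned by the current object):

import UIKit
import MetalKit

// Sketch: stop submitting GPU work while the app is in the background.
NotificationCenter.default.addObserver(
    forName: UIApplication.didEnterBackgroundNotification,
    object: nil, queue: .main
) { _ in
    view.isPaused = true       // MTKView stops its draw loop, so no GPU submissions
}
NotificationCenter.default.addObserver(
    forName: UIApplication.willEnterForegroundNotification,
    object: nil, queue: .main
) { _ in
    view.isPaused = false      // resume rendering once foregrounded again
}

Any command buffers committed outside the MTKView draw loop need the same gating.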
Posted Last updated
.
Post not yet marked as solved
0 Replies
513 Views
I am drawing an image in an MTKView using a Metal shader based on the 'pointCloudVertexShader' used in this sample code. The image can be moved with a drag gesture in the MTKView, similarly to the 'MetalPointCloud' view in the sample code. I want to implement a UITapGestureRecognizer that, when tapping in the MTKView, returns the appropriate 'vecout' value (from the 'pointCloudVertexShader') for the given location (see code below).

// Calculate the vertex's world coordinates.
float xrw = ((int)pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
float yrw = ((int)pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];
float4 xyzw = { xrw, yrw, depth, 1.f };

// Project the coordinates to the view.
float4 vecout = viewMatrix * xyzw;

The 'vecout' variable is computed in the Metal vertex shader; it would be associated with a coloured pixel. My idea is to use the 'didTap' method to evaluate the desired pixel data (3D coordinates).

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: mtkView)
    // Use the location to get the pixel value.
}

Is there any way to get this value directly from the MTKView? Thanks in advance!
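The MTKView itself only holds the rendered colors, so one approach is to redo the shader math on the CPU for the tapped point. A sketch, assuming access to the same depth data, intrinsics, and view matrix the shader uses (cameraIntrinsics, viewMatrix, and depthAt(_:) are hypothetical names mirroring the shader's inputs):

import simd
import MetalKit

// Sketch: reproduce the vertex shader's math on the CPU for a tapped location.
// None of these inputs is provided by MTKView itself.
func worldPoint(at location: CGPoint, in view: MTKView) -> simd_float4 {
    // Convert the tap from view points to depth-map pixel coordinates.
    let scale = view.drawableSize.width / view.bounds.width
    let pos = simd_float2(Float(location.x * scale), Float(location.y * scale))
    let depth = depthAt(pos)                         // hypothetical depth lookup

    // Same arithmetic as the pointCloudVertexShader.
    let xrw = (pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0]
    let yrw = (pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1]
    return viewMatrix * simd_float4(xrw, yrw, depth, 1)
}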
Posted Last updated
.
Post not yet marked as solved
2 Replies
1.6k Views
Hi, when I add a map view to my view controller I receive the log "[Pipeline Library] Mapping the pipeline data cache failed, errno 22". This started after I updated Xcode to 14.3, and the log only appears when I test on a real device, not on the simulator. (It is shown in every project I create, even without calling any function that modifies the map view; I only add the MapKit and Core Location libraries.) Xcode version: 14.3 (14E222b). iPhone: iPhone 8 on iOS 16.4. I've also tested on an iPhone 12 with a lower iOS version and I receive the same error.
Posted
by Marione.
Last updated
.
Post marked as solved
1 Reply
535 Views
Hi, I was watching this WWDC23 video on Metal with xrOS (https://developer.apple.com/videos/play/wwdc2023/10089/?time=1222). However, when I tried it, the Compositor Services API wasn't available. Is it available yet? Or when will it be released? Thanks.
Posted Last updated
.
Post not yet marked as solved
0 Replies
486 Views
I'm trying to create an app similar to PolyCam using LiDAR. I'm using SceneKit mesh reconstruction and I'm able to apply some random textures, but I need real-world textures in the generated 3D model output. The few examples I found relate to MetalKit and point clouds, and they were not helpful. Can you help me out with any references, steps, or tutorials on how to achieve this?
Posted
by muqaddir.
Last updated
.
Post not yet marked as solved
3 Replies
1.2k Views
I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples I have found of such approaches use fixed scenes, where all the data is baked before the first draw call into something like a MeshBuffer that holds all scene geometry in the form of Mesh objects (for instance). While I assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be some clever tricks with pointers to update a MeshBuffer as needed, I would like to know if there is an established or optimal solution to this problem, or if these approaches are simply incompatible with dynamic geometry. Any example projects that do what I am asking that I may have missed would be appreciated, too.

* I know these are not the same, but they share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but it also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.
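For what it's worth, one pattern sometimes used for this, sketched below under assumed names: keep per-mesh records in a fixed-capacity slot table that shaders index, so adding, removing, or changing a mesh rewrites one slot instead of rebuilding the whole buffer:

import Metal

// Sketch: a fixed-capacity slot table for dynamic GPU-driven scenes.
// Shaders index slots directly, so edits touch a single entry rather than
// rebuilding all scene data. All names here are assumptions.
struct MeshSlot {
    var vertexBase: UInt64   // e.g. an MTLBuffer.gpuAddress (Metal 3)
    var indexBase: UInt64
    var indexCount: UInt32
    var alive: UInt32        // 0 = free slot; GPU-side code skips it
}

final class SceneTable {
    let slots: MTLBuffer
    private var freeSlots: [Int]
    private let capacity = 4096

    init(device: MTLDevice) {
        slots = device.makeBuffer(length: MemoryLayout<MeshSlot>.stride * capacity,
                                  options: .storageModeShared)!
        freeSlots = Array((0..<capacity).reversed())
    }

    func add(_ slot: MeshSlot) -> Int {
        let index = freeSlots.removeLast()
        write(slot, at: index)
        return index
    }

    func remove(at index: Int) {
        write(MeshSlot(vertexBase: 0, indexBase: 0, indexCount: 0, alive: 0), at: index)
        freeSlots.append(index)
    }

    // Note: a real renderer must not write a slot the GPU may still be reading;
    // double-buffer the table or fence these writes against in-flight frames.
    private func write(_ slot: MeshSlot, at index: Int) {
        var s = slot
        slots.contents()
            .advanced(by: index * MemoryLayout<MeshSlot>.stride)
            .copyMemory(from: &s, byteCount: MemoryLayout<MeshSlot>.stride)
    }
}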
Posted
by spamheat.
Last updated
.
Post not yet marked as solved
1 Reply
929 Views
It seems like Apple Silicon devices (even M1/M2 Max) can only use a certain percentage of their total unified memory for GPU/Metal work. This appears to be a limitation related to recommendedMaxWorkingSetSize. It is quite odd, because even an M1 Mac mini or MacBook Air runs fine with 8 GB of total memory shared between the OS and the GPU, so why impose this limit in the first place? It also seems like false advertising to me, since Apple doesn't clearly state this limitation. I am asking in regard to the following open source project (though more software will be impacted by the same limitation): https://github.com/ggerganov/llama.cpp/pull/1826. Another resource I've found: https://developer.apple.com/videos/play/tech-talks/10580/?time=546. If anyone has ideas on how these limitations can be overcome and how to let apps use more memory for GPU (Metal) work, I (and the open source community) would be truly grateful! Thanks in advance!
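For anyone wanting to check the numbers on their own machine, a small sketch (macOS; recommendedMaxWorkingSetSize is an advisory residency figure, not necessarily a hard allocation cap):

import Metal

// Sketch: inspect how much memory Metal recommends keeping resident.
if let device = MTLCreateSystemDefaultDevice() {
    print("Unified memory:", device.hasUnifiedMemory)
    print("Recommended max working set:",
          device.recommendedMaxWorkingSetSize / (1024 * 1024), "MB")
    print("Currently allocated:",
          device.currentAllocatedSize / (1024 * 1024), "MB")
}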
Posted
by Jake88.
Last updated
.
Post not yet marked as solved
0 Replies
655 Views
The MAGMA API runs on NVIDIA or AMD GPUs via CUDA. Are there plans to bring Accelerate to Metal? Thanks
Posted Last updated
.
Post not yet marked as solved
0 Replies
531 Views
Hello, Metal is a hard biscuit. The first part of my project is to draw wireframe Platonic solids. I replaced the triangle with a cube, which I want to rotate with two sliders, but I don't know how. The MTKView is the view controller's view; will the sliders rotate too, or not? Would you developers use the XIP?
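The sliders won't rotate with the cube: they are ordinary subviews layered over (or next to) the MTKView and only feed angles to the renderer; they are never part of the Metal scene. A sketch under assumed names (a Renderer type exposing rotationX/rotationY in radians, consumed by the vertex shader as a rotation matrix):

import UIKit
import MetalKit

// Sketch: two sliders feed rotation angles to the renderer whenever they change.
class ViewController: UIViewController {
    let mtkView = MTKView()
    let renderer = Renderer()   // assumed: exposes rotationX/rotationY
    let sliderX = UISlider(frame: CGRect(x: 20, y: 60, width: 280, height: 30))
    let sliderY = UISlider(frame: CGRect(x: 20, y: 100, width: 280, height: 30))

    override func viewDidLoad() {
        super.viewDidLoad()
        mtkView.frame = view.bounds
        view.addSubview(mtkView)
        for slider in [sliderX, sliderY] {
            slider.maximumValue = 2 * Float.pi
            slider.addTarget(self, action: #selector(sliderChanged), for: .valueChanged)
            view.addSubview(slider)    // sibling of the MTKView, drawn on top
        }
    }

    @objc func sliderChanged() {
        renderer.rotationX = sliderX.value   // radians, read by the vertex shader
        renderer.rotationY = sliderY.value
        mtkView.setNeedsDisplay()            // request a redraw with the new angles
    }
}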
Posted Last updated
.
Post not yet marked as solved
1 Reply
1.9k Views
In my game project there is a functions.data file at /AppData/Library/Caches/[bundleID]/com.apple.metal/functions.data. When we reboot the device and launch the game, this file is reset to about 40 KB; normally its size is about 30 MB. This reset is done by Metal. Is there any way to avoid it?
Posted Last updated
.
Post not yet marked as solved
1 Reply
1k Views
I am developing a macOS app which has a Metal-backed view based on CAMetalLayer. It worked fine on macOS Monterey, but after I switched to an M2 Pro machine and upgraded to Ventura (I tried both 13.3 and 13.4), the view randomly displays a blank pink screen, as if Metal is indicating an error or empty state. I don't get any log messages in the Console, though. In particular, the pink screen appears after I select the app's "Enter Full Screen" command. It appears in roughly 1 of 10 attempts, and it goes away after the view renders its next frame (e.g. on mouse move). It's worth mentioning that the CAMetalLayer-backed view in question is hosted in the toolbar, so the issue is probably related to the fact that when an app switches to full-screen mode, macOS detaches the toolbar from the main window and attaches it to a separate toolbar window. I have tried to profile the app and catch the pink frame, in particular using -[MTLCaptureManager startCaptureWithDescriptor:error:]; however, when I analyze the trace, everything seems fine: all vertices, textures, and commands are transmitted to the GPU correctly. Yet on screen I still get the occasional pink frame. I wonder if anyone could give me hints as to why this might be happening and what other tools I can use to track down the problem? Or maybe it's a macOS Ventura bug? P.S. For brevity I'm not attaching the code, which is too large and spread over multiple files to fit here; I'm interested in general advice.
Posted
by Sheldon15.
Last updated
.