MetalKit

Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

MetalKit Documentation

Posts under MetalKit tag

61 Posts
Post not yet marked as solved
2 Replies
825 Views
I know OpenGL ES has been deprecated since iOS 12, but I have an old project that uses it, and I want to update some of its features and release the updated version. So I'm wondering: can I still release an app that uses OpenGL to the App Store today? (I know it would be better to move to Metal/MetalKit, but I'd like to avoid that cost if I can.)
Post marked as solved
1 Reply
886 Views
Hello everyone! Another graphics API question, but a slightly different one. I'm currently looking at two physics SDKs, PhysX and Bullet. The game Asphalt 9 uses Metal and Bullet, and I would like to do the same. With Metal 3 out and seemingly stable, I would like to use one of these engines for my upcoming Metal rendering engine. But there's a catch: I want to use Objective-C or C++ for both the rendering engine and the physics engine, and not Swift (it's a good language, but I prefer C++ for game development). What do you think?
Post marked as solved
2 Replies
474 Views
I'm trying to find a way to reduce the synchronization time between two compute shader dispatches, where one dispatch depends on an atomic counter written by the other.

Example: I have two Metal kernels, select and execute. select looks through the numbers buffer and stores the index of every number < 10 in a new buffer, selectedNumberIndices, using an atomic counter. execute is then run counter times to do something with those selected indices.

kernel void select(device atomic_uint &counter,
                   device uint *numbers,
                   device uint *selectedNumberIndices,
                   uint id [[thread_position_in_grid]]) {
    if (numbers[id] < 10) {
        uint idx = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        selectedNumberIndices[idx] = id;
    }
}

kernel void execute(device uint *selectedNumberIndices,
                    uint id [[thread_position_in_grid]]) {
    // do something, dispatched counter times
}

Currently I do this by calling .waitUntilCompleted() between the dispatches to ensure I get accurate results, something like:

// select
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(.init(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

// wait
buffer.waitUntilCompleted()

// execute
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)

var counterValue: UInt32 = 0
// extract the value of the atomic counter
counterBuffer.contents().copyBytes(to: &counterValue, count: MemoryLayout<UInt32>.stride)

encoder.dispatchThreads(.init(width: Int(counterValue), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

My question: is there any way to get the same functionality without the costly buffer.waitUntilCompleted() call? Or am I going about this in completely the wrong way, or missing something else?
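One way to avoid this CPU round trip is to keep everything in a single command buffer and size the second dispatch on the GPU with dispatchThreadgroups(indirectBuffer:indirectBufferOffset:threadsPerThreadgroup:). The sketch below is not from the post: it assumes an extra small kernel, called prepareDispatch here (a hypothetical name), that reads the atomic counter and writes MTLDispatchThreadgroupsIndirectArguments, and it assumes the execute kernel also reads the counter so it can ignore the padded threads in the last threadgroup.

// Hypothetical sketch: no waitUntilCompleted between the two passes.
let indirectArgsBuffer = device.makeBuffer(
    length: MemoryLayout<MTLDispatchThreadgroupsIndirectArguments>.stride,
    options: .storageModePrivate)!

let buffer = queue.makeCommandBuffer()!

// 1. select: same as before, fills selectedNumberIndicesBuffer and the atomic counter.
var encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(MTLSize(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()

// 2. prepareDispatch (hypothetical kernel): one thread converts the counter into
//    threadgroup counts, e.g. (counter + width - 1) / width groups of `width` threads.
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(prepareDispatchState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(indirectArgsBuffer, offset: 0, index: 1)
encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
encoder.endEncoding()

// 3. execute: the grid size is read from indirectArgsBuffer on the GPU.
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
encoder.setBuffer(counterBuffer, offset: 0, index: 1) // so the kernel can skip padded threads
encoder.dispatchThreadgroups(indirectBuffer: indirectArgsBuffer,
                             indirectBufferOffset: 0,
                             threadsPerThreadgroup: MTLSize(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()

buffer.commit() // no CPU wait between the passes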
Post not yet marked as solved
1 Reply
481 Views
Language: Swift. I use Metal shaders to render an image whose size is (4096, 2304). The mtkView.frame.size is (414.0, 233.0). I convert drawable.texture to a UIImage and save it, but its size is (828, 466). How can I save an image at the full (4096, 2304) size? Thanks for your help!
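The drawable texture is sized to the view's drawableSize (view points times screen scale), so saving it can never give more pixels than the view has. A common approach is to encode one extra pass into an offscreen texture of the desired size and read that back instead of the drawable. A minimal sketch, assuming a device, command queue, and the same draw calls used for the on-screen pass already exist:

// Hypothetical sketch: render into a 4096x2304 offscreen texture instead of the drawable.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 4096,
                                                    height: 2304,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let offscreen = device.makeTexture(descriptor: desc)!

let passDesc = MTLRenderPassDescriptor()
passDesc.colorAttachments[0].texture = offscreen
passDesc.colorAttachments[0].loadAction = .clear
passDesc.colorAttachments[0].storeAction = .store

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDesc)!
// ... encode the same draw calls you use for the on-screen pass ...
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
// Convert `offscreen` to a UIImage (for example via CIImage(mtlTexture:options:)) and save it.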
Post not yet marked as solved
0 Replies
475 Views
I've got the following code to generate an MDLMaterial from my own material data model:

public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x), green: CGFloat(color.y), blue: CGFloat(color.z), alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}

When exporting to OBJ, I get the expected material properties:

# Apple ModelI/O MTL File: testExport.mtl

newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

However, when exporting USD I just get:

#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }

    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}

There aren't any material properties. FWIW, this specifies the set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html

(Note: there is no tag for ModelIO, so using SceneKit, etc.)
Post not yet marked as solved
1 Reply
584 Views
The release notes for Xcode 14 mention a new AppleTextureConverter library: https://developer.apple.com/documentation/xcode-release-notes/xcode-14-release-notes "TextureConverter 2.0 adds support for decompressing textures, advanced texture error metrics, and support for reading and writing KTX2 files. The new AppleTextureConverter library makes TextureConverter available for integration into third-party engines and tools. (82244472)" Does anyone know how to include this library in a project and use it at runtime?
Post not yet marked as solved
0 Replies
383 Views
I'm developing a drawing app. I use an MTKView to render the canvas, but for some reason, and only for a few users, the pixels are not rendered correctly (pixels have different sizes); the majority of users have no problem with this. Here is my setup:

- Each pixel is rendered as 2 triangles.
- The MTKView's frame dimensions are always a multiple of the canvas size (a 100x100 canvas will have a frame size of 100x100, 200x200, and so on).
- There is a grid to indicate pixels (a SwiftUI Path), which displays correctly, and we can see that it doesn't align with the pixels.
- There is also a checkerboard pattern in the background, rendered using another MTKView, which lines up with the pixels but not the grid.
- Previously, I had a similar issue when the view's frame was not a multiple of the canvas size, but I already fixed that with the setup above.
- The issue worsens as the number of points representing one canvas pixel becomes smaller; e.g. a 100x100 canvas on a 100x100 view is worse than a 100x100 canvas on a 500x500 view.
- The vertices have accurate coordinates; this is a rendering issue.

As you can see in the picture, some pixels are bigger than others. I tried changing the contentScaleFactor to 1, 2, and 3, but none seems to solve the problem.

My MTKView setup:

clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
delegate = renderer
renderer.setup()
isOpaque = false
layer.magnificationFilter = .nearest
layer.minificationFilter = .nearest

Renderer's setup:

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineState = try? device.makeRenderPipelineState(descriptor: pipelineDescriptor)

Draw method of the renderer:

commandEncoder.setRenderPipelineState(pipelineState)
commandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(colorBuffer, offset: 0, index: 1)
commandEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: indexCount,
    indexType: .uint32,
    indexBuffer: indexBuffer,
    indexBufferOffset: 0
)
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()

Metal file:

struct VertexOut {
    float4 position [[ position ]];
    half4 color;
};

vertex VertexOut frame_vertex(constant const float2* vertices [[ buffer(0) ]],
                              constant const half4* colors [[ buffer(1) ]],
                              uint v_id [[ vertex_id ]]) {
    VertexOut out;
    out.position = float4(vertices[v_id], 0, 1);
    out.color = colors[v_id / 4];
    return out;
}

fragment half4 frame_fragment(VertexOut v [[ stage_in ]]) {
    half alpha = v.color.a;
    return half4(v.color.r * alpha, v.color.g * alpha, v.color.b * alpha, v.color.a);
}
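One thing that can cause uneven pixel sizes on only some devices is that the drawable is normally sized from the view's bounds times contentScaleFactor, which on devices with display zoom or non-integer scale factors may not land on an exact multiple of the canvas size. A hedged sketch of pinning the drawable size yourself; canvasSize and devicePixelsPerCanvasPixel are names I introduce here, not from the post:

// Hypothetical sketch: take control of the drawable size so one canvas pixel
// always maps to an integer number of device pixels.
let canvasSize = 100                   // assumed canvas width/height in canvas pixels
let devicePixelsPerCanvasPixel = 4     // assumed integer scale you choose

mtkView.autoResizeDrawable = false     // stop MTKView from deriving the size from bounds * scale
mtkView.drawableSize = CGSize(width: canvasSize * devicePixelsPerCanvasPixel,
                              height: canvasSize * devicePixelsPerCanvasPixel)
mtkView.layer.magnificationFilter = .nearest  // let Core Animation scale the result without blurring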
Post not yet marked as solved
1 Reply
550 Views
Hello, I've got a question about the Xcode Scene Editor, that is, the SceneKit one, NOT SpriteKit. According to this documentation: https://developer.apple.com/documentation/scenekit/scnnode/2873004-entity the entity property of a node serialized via Xcode's scene editor can be set. While Xcode's SpriteKit Scene Editor has this option, I cannot find anything similar in the SceneKit editor. So my question is: do *.scn files produced by Xcode contain GameplayKit information such as a GKEntity graph, or only SCNNode data? Do I have to parse the scene and programmatically create GKEntities? If that is the case, there must be an error in the documentation. Thank you!
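If the .scn file turns out to carry only SCNNode data, one workable pattern is to walk the loaded scene and build the entity graph in code, tying each node to its entity with GKSCNNodeComponent. A minimal sketch; the scene file name and the rule for which nodes get entities are assumptions of mine:

import SceneKit
import GameplayKit

// Hypothetical sketch: create GKEntities for nodes of a loaded .scn scene.
let scene = SCNScene(named: "Level.scn")!   // assumed file name
var entities: [GKEntity] = []

scene.rootNode.enumerateChildNodes { node, _ in
    guard node.name != nil else { return }  // assumption: only named nodes become entities
    let entity = GKEntity()
    entity.addComponent(GKSCNNodeComponent(node: node)) // associates the SceneKit node with this entity
    entities.append(entity)
}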
Post marked as solved
1 Reply
397 Views
Hi devs, does anyone know how to get the local mouse-click position in an MTKView (an AppKit view) that is wrapped in an NSViewRepresentable and drawn using SwiftUI? No matter what I do it always returns coordinates relative to the entire window, and I am after the location relative to the MTKView. I am overriding the MTKView's mouseDown function and want to get the local position from the mouseDown event. Using self.convert(event.locationInWindow, to: self) returns the same position as event.locationInWindow. The reason I need the local position inside the MTKView is that I will be sampling an idBuffer at that point. Any help would be greatly appreciated. Thanks, Simon
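For what it's worth, convert(_:to:) treats the point as already being in the receiver's coordinate space; to map window coordinates into the view, the usual call is convert(_:from:) with nil (nil means window coordinates). A small sketch of the override, assuming it lives in an MTKView subclass (the subclass name is mine):

import AppKit
import MetalKit

final class PickableMTKView: MTKView {   // hypothetical subclass name
    override func mouseDown(with event: NSEvent) {
        // locationInWindow is in window coordinates; convert(_:from: nil) maps
        // window coordinates into this view's local coordinate space.
        let local = convert(event.locationInWindow, from: nil)
        print("local click position:", local)
        // ... sample the idBuffer near `local` (flip y and scale by drawableSize/bounds as needed) ...
    }
}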
Post not yet marked as solved
0 Replies
532 Views
I am currently using Core Image to process YCbCr422/420 10-bit pixel buffers, but it is lacking performance at high frame rates, so I decided to switch to Metal. But with Metal I am getting even worse performance. I am loading both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:

let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}

And here is the shader code:

vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]]) {
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]]) {
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]]) {
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}

Is there anything fundamentally wrong in the code that makes it slow?
Post not yet marked as solved
0 Replies
467 Views
I understand that by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means that the color values received (or sampled from a sampler) in a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]

do we receive color values as they exist in the input texture (which may already have gamma correction applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
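For reference, if the samples do arrive gamma encoded (sRGB), undoing the encoding is just the standard sRGB transfer function; this is the textbook formula, not something taken from the post, sketched here in Swift so it can be ported into the Metal kernel if needed:

import Foundation

// Standard sRGB electro-optical transfer function: encoded value -> linear value.
func srgbToLinear(_ c: Double) -> Double {
    return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

// And the inverse, linear -> sRGB-encoded, if you need to re-encode after processing.
func linearToSRGB(_ c: Double) -> Double {
    return c <= 0.0031308 ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055
}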
Post not yet marked as solved
0 Replies
506 Views
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have some doubts, expressed below.

Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case?

I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as is, normalised to [0, 1]. But then it's possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader.

In short, how do you safely process HDR pixel buffers received from the camera (which are YCbCr 4:2:0 10-bit, and which I believe have gamma correction applied, so the Y in YCbCr is actually Y'. Can the AVFoundation team clarify this?)?
Post not yet marked as solved
0 Replies
397 Views
I have been using MTKView to display CVPixelBuffers from the camera. I use many options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise in functionality?
Post not yet marked as solved
1 Reply
539 Views
For better memory usage when working with MTLTextures (editing plus displaying in render passes, compute shaders, etc.): is it possible to save the texture to the app's Documents folder, and then use an UnsafeMutablePointer to access/modify the contents of the texture before displaying it in a render pass? And would this be performant (i.e. 60 fps)? That way the texture wouldn't have to stay resident in memory all the time, but its contents could still be edited and displayed when needed.
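As a point of reference, the plain way to move texture contents between memory and a file is getBytes / replace(region:...) on the texture; whether the extra copies still hit 60 fps depends on texture size and needs profiling. A minimal sketch, assuming a .bgra8Unorm texture with CPU-accessible storage and a file URL in Documents (the function names are mine):

import Foundation
import Metal

// Hypothetical sketch: page a texture's contents out to Documents and back.
// The texture must not use .private storage for getBytes/replace to work.
func save(texture: MTLTexture, to url: URL) throws {
    let bytesPerRow = texture.width * 4          // assumes .bgra8Unorm (4 bytes/pixel)
    var data = Data(count: bytesPerRow * texture.height)
    data.withUnsafeMutableBytes { raw in
        texture.getBytes(raw.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
    }
    try data.write(to: url)
}

func load(into texture: MTLTexture, from url: URL) throws {
    let bytesPerRow = texture.width * 4
    let data = try Data(contentsOf: url)
    data.withUnsafeBytes { raw in
        texture.replace(region: MTLRegionMake2D(0, 0, texture.width, texture.height),
                        mipmapLevel: 0,
                        withBytes: raw.baseAddress!,
                        bytesPerRow: bytesPerRow)
    }
}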
Post not yet marked as solved
0 Replies
370 Views
Hi everyone. When I try to load a model in binary .ply format, MDLAsset can't read the data (I tried debugging and checking the buffer, but I didn't see any data). But when I convert the model to ASCII format, it displays normally. If you know the reason, please tell me how to fix it. Thank you.
Post not yet marked as solved
0 Replies
373 Views
Hello! The aim of my project is as specified in the title, and the code I am currently trying to modify uses CVPixelBufferGetBaseAddress to acquire the depth data using LiDAR. For some context, I made use of the available "Capturing depth using the LiDAR camera" documentation using AVFoundation and edited the code after referring to a few Q&As on the Developer Forums. I have a few doubts and would be grateful for insights or a push in the right direction.

Regarding the LiDAR depth data:

- Where is the origin (0,0), and in what order is it stored? (Basically, how do the row and column correspond to the real-world scene?)
- How do I add a touch gesture to fill in the values of X and Y for "distanceAtXYPoint", so that I can acquire the depth data on a user touch rather than in real time?

The function for reference:

// new function to show the depth data value in meters
func depthDataOutput(syncedDepthData: AVCaptureSynchronizedDepthData) {
    let depthData = syncedDepthData.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
    let depthMapWidth = CVPixelBufferGetWidthOfPlane(depthData.depthDataMap, 0)
    let depthMapHeight = CVPixelBufferGetHeightOfPlane(depthData.depthDataMap, 0)
    CVPixelBufferLockBaseAddress(depthData.depthDataMap, .readOnly)
    if let rowData = CVPixelBufferGetBaseAddress(depthData.depthDataMap)?.assumingMemoryBound(to: Float16.self) {
        // need to find a way to get a specific depth point (using row data) on a touch gesture.
        // currently using the depth point at row = column = 0
        let depthPoint = rowData[0]
        for y in 0...depthMapHeight-1 {
            var distancesLine = [Float16]()
            for x in 0...depthMapWidth-1 {
                let distanceAtXYPoint = rowData[y * depthMapWidth + x]
            }
        }
        print("⭐️Depth value of (0,0) point in meters: \(depthPoint)")
    }
    CVPixelBufferUnlockBaseAddress(depthData.depthDataMap, .readOnly)
}

The current real-time console log output is as shown below. A slight concern is that the current output at (0,0) sometimes shows a value greater than 1 m even when the real distance is probably a few cm. Any experience and countermeasures on this would also be greatly helpful. Thanks in advance.
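On the touch-gesture part of the question, one approach (sketched below, with every name being an assumption of mine) is to attach a UITapGestureRecognizer to the preview view, convert the tap location into depth-map coordinates by scaling from the view size to the depth-map size, and stash that point in a property that depthDataOutput reads instead of index 0:

import UIKit

// Hypothetical sketch, assumed to live in the camera view controller.
// depthMapWidth/Height are assumed to be stored from the most recent depth frame.
final class DepthTapHandling: UIViewController {
    var requestedDepthPoint: (x: Int, y: Int)?   // read inside depthDataOutput instead of index 0
    var depthMapWidth = 0
    var depthMapHeight = 0

    func installTapGesture(on previewView: UIView) {
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        previewView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let view = gesture.view, depthMapWidth > 0, depthMapHeight > 0 else { return }
        let location = gesture.location(in: view)
        // Scale from view points to depth-map pixels; assumes the preview fills the view
        // in the same orientation as the depth map (rotation/aspect-fill need extra handling).
        let x = Int(location.x / view.bounds.width * CGFloat(depthMapWidth))
        let y = Int(location.y / view.bounds.height * CGFloat(depthMapHeight))
        requestedDepthPoint = (x, y)
    }
}

// Then, inside depthDataOutput, replace rowData[0] with something like:
//   if let p = requestedDepthPoint {
//       let depthPoint = rowData[p.y * depthMapWidth + p.x]
//   }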