MetalKit

Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

MetalKit Documentation

Posts under MetalKit tag

60 Posts
Post not yet marked as solved
0 Replies
394 Views
I have been working on this project for the last month and have gone through all the possibilities, but I have not found the expected output. I have gone through the following articles and links: https://ixtli.unam.mx/6280-2/ https://developer.apple.com/documentation/arkit/content_anchors/scanning_and_detecting_3d_objects?language=objc https://github.com/TokyoYoshida/ExampleOfiOSLiDAR https://stackoverflow.com/questions/61063571/arkit-3-5-how-to-export-obj-from-new-ipad-pro-with-lidar/61104855#61104855 I am getting plain gray 3D objects, but I am expecting objects with texture. If you know a way to solve this, please let me know. Thank you!
Posted
by darshantg.
Last updated
.
Post not yet marked as solved
0 Replies
300 Views
Hello everybody. I'm trying to port graphics code written in Cg for Unity to Metal. Also, I don't want to implement a scene graph manually, so I'm going to use SceneKit. That means I should use either SCNProgram or SCNNodeRendererDelegate, and I think SCNProgram is the more comfortable option. My real question is how to convert this Cg code:

Cull Front
ZTest LEqual
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha

I know how to set source-alpha blending in MTLRenderPipelineDescriptor, the depth test on the render command encoder, and the cull face as well. But when I use SCNProgram or SCNNodeRendererDelegate, I can't find these options. How do I change them? Help me.
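For what it's worth, when a material is shaded by an SCNProgram, those fixed-function states are still configured on SCNMaterial rather than on a Metal pipeline you own. A minimal sketch (using SceneKit's documented SCNMaterial properties; `myProgram` is assumed to be your SCNProgram) mapping the Cg states above:

```swift
import SceneKit

// Sketch: mapping the Cg render states onto SCNMaterial.
// These properties still apply when the material uses an SCNProgram.
let material = SCNMaterial()
material.program = myProgram            // your SCNProgram, assumed defined elsewhere
material.cullMode = .front              // Cull Front
material.writesToDepthBuffer = true     // ZWrite On
material.readsFromDepthBuffer = true    // ZTest (SceneKit picks the compare function)
material.blendMode = .alpha             // Blend SrcAlpha OneMinusSrcAlpha
```

With SCNNodeRendererDelegate, by contrast, you encode into the render command encoder yourself, so cull mode, depth-stencil state, and pipeline blending are set directly on the encoder and pipeline descriptor.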
Posted
by wonkieun.
Last updated
.
Post marked as solved
1 Reply
410 Views
Hello, everyone. I'm trying to use MetalKit together with SceneKit. SceneKit's scene graph is great, but I want to implement low-level Metal shaders. I want to use SCNNodeRendererDelegate without SCNProgram, because I want low-level control, for example passing extra custom MTLBuffers or doing multi-pass rendering. So I pass the model-view-projection matrix like this.

In the Metal shader:

struct NodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float2x3 boundingBox;
};

In the Swift code:

struct NodeMatrix: sizeable {
    var modelTransform = float4x4()
    var modelViewProjectionTransform = float4x4()
    var modelViewTransform = float4x4()
    var normalTransform = float4x4()
    var boundingBox = float2x3()
}

...

private func updateNodeMatrix(_ camNode: SCNNode) {
    guard let camera = camNode.camera else {
        return
    }

    let modelMatrix = transform
    let viewMatrix = camNode.transform
    let projectionMatrix = camera.projectionTransform

    let viewProjection = SCNMatrix4Mult(viewMatrix, projectionMatrix)
    let modelViewProjection = SCNMatrix4Mult(modelMatrix, viewProjection)
    nodeMatrix.modelViewProjectionTransform = float4x4(modelViewProjection)
}

...
public func renderNode(_ node: SCNNode,
                       renderer: SCNRenderer,
                       arguments: [String: Any]) {
    guard let renderTexturePipelineState = renderTexturePipelineState,
          let renderCommandEncoder = renderer.currentRenderCommandEncoder,
          let camNode = renderer.pointOfView,
          let texture = texture
    else { return }

    updateNodeMatrix(camNode)
    guard let nodeBuffer = renderer.device?.makeBuffer(bytes: &nodeMatrix,
                                                       length: NodeMatrix.stride,
                                                       options: [])
    else { return }

    renderCommandEncoder.setDepthStencilState(depthState)
    renderCommandEncoder.setRenderPipelineState(renderTexturePipelineState)
    renderCommandEncoder.setFragmentTexture(texture, index: 0)
    renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    renderCommandEncoder.setVertexBuffer(nodeBuffer, offset: 0, index: 1)
    renderCommandEncoder.drawIndexedPrimitives(type: .triangle,
                                               indexCount: indexCount,
                                               indexType: .uint16,
                                               indexBuffer: indexBuffer,
                                               indexBufferOffset: 0)
}

But I get the wrong model-view-projection matrix in the shader. I think SceneKit modifies some intermediate transform under the hood. I can't figure it out, help me...
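One likely culprit (an assumption on my part; the post doesn't show the node hierarchy): a view matrix is the inverse of the camera node's world transform, and a node's `transform` is only its local transform. A sketch of the conventional construction:

```swift
import SceneKit

// Sketch: building a model-view-projection matrix by hand for SceneKit.
// The view matrix is the INVERSE of the camera node's world transform,
// and the model matrix should be the node's WORLD transform, not its
// local `transform`, whenever the node is not a direct child of the root.
func modelViewProjection(for node: SCNNode, camNode: SCNNode) -> SCNMatrix4 {
    guard let camera = camNode.camera else { return SCNMatrix4Identity }
    let modelMatrix = node.worldTransform
    let viewMatrix = SCNMatrix4Invert(camNode.worldTransform)
    let projectionMatrix = camera.projectionTransform

    // SceneKit matrices are row-major with row vectors, so the
    // composition order is model * view * projection.
    let modelView = SCNMatrix4Mult(modelMatrix, viewMatrix)
    return SCNMatrix4Mult(modelView, projectionMatrix)
}
```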
Posted
by wonkieun.
Last updated
.
Post not yet marked as solved
3 Replies
909 Views
I'm using Xcode 13 after recently updating to macOS Monterey, and only after updating am I getting this error: [MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion `MTLResource 0x14a8a8cc0 (label: null), referenced in cmd buffer 0x149091400 (label: null) is in volatile or empty purgeable state at commit' I haven't changed my code at all since updating to the latest OS, and it worked perfectly before. How can I fix this? I don't see why I shouldn't be able to use a command buffer with a texture resource that is in a volatile/empty purgeable state.
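A common remedy (assuming the resource is a texture you allocated and later marked purgeable): Metal expects purgeable resources to be pinned as non-volatile before a command buffer that references them is committed, and the Monterey validation layer asserts on this. A sketch:

```swift
import Metal

// Sketch: pin a purgeable texture before encoding work that uses it.
// `texture` is a hypothetical MTLTexture created earlier in your code.
let previousState = texture.setPurgeableState(.nonVolatile)
if previousState == .empty {
    // The contents were discarded while volatile; re-upload them here.
}
// ... encode and commit the command buffer that reads `texture` ...
// Afterwards you can let the system reclaim it again:
// texture.setPurgeableState(.volatile)
```

`setPurgeableState(_:)` returns the prior state, which is how you detect that the contents were purged and need regenerating.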
Posted
by yyjhuj.
Last updated
.
Post not yet marked as solved
1 Reply
654 Views
I am getting this error while running the app with Xcode, especially when I am moving things in the view, like swiping with my finger, rotating the screen, etc. I have not been able to reproduce it when running the app as a normal user. I am just wondering whether Xcode is doing something unusual in debugging that is interfering with the app's animations.
Posted Last updated
.
Post not yet marked as solved
2 Replies
431 Views
Hey everyone :-) I'm trying to access the information in my ARFrame so I can colorize different parts of my scene depending on their classifications. In other words, I'm looking for a way to go from XYZ world coordinates or XY screen coordinates to what my AR session classifies at that position: all the floor in the scene colorized in red, the tables in blue, the walls in green, etc. How can I associate these classifications from my reconstructed ARMeshAnchor with my coordinates in real time, so I can access them in my Metal script? So far, I've been able to get the XYZ world coordinate for an XY screen coordinate by reading the following example: Building an Immersive Experience with RealityKit. Here is a scene painted in different shades of purple depending on the Z world coordinate. P.S. I want to be able to do it within RealityKit. Thanks in advance.
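For reference, the per-face classifications live in ARMeshGeometry's `classification` geometry source. A sketch of reading one (following the pattern in Apple's scene-semantics sample code; `faceIndex` is a hypothetical index you would get, e.g., from a raycast hit against the mesh anchor):

```swift
import ARKit

// Sketch: read the ARMeshClassification of one face of an ARMeshAnchor.
func classification(ofFace faceIndex: Int,
                    in geometry: ARMeshGeometry) -> ARMeshClassification {
    guard let source = geometry.classification else { return .none }
    // The classification source stores one UInt8 value per face.
    let address = source.buffer.contents()
        .advanced(by: source.offset + source.stride * faceIndex)
    let value = Int(address.assumingMemoryBound(to: UInt8.self).pointee)
    return ARMeshClassification(rawValue: value) ?? .none
}
```

To color fragments by classification on the GPU, the usual approach is to bake these per-face values into a vertex or per-primitive buffer and pass it to the shader alongside the mesh.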
Posted
by cabada.
Last updated
.
Post not yet marked as solved
0 Replies
367 Views
Hello everyone! How do I apply a custom shader, using depth, only to the floor? I'm trying to use the depth of the scene to put the shader exclusively on the floor, but apparently I'm doing something wrong.

Links
https://www.dropbox.com/s/4ghun92frlcg7hz/IMG_9960.PNG?dl=0
https://www.dropbox.com/home?preview=-2362988581602429186.MP4

PostProcess.metal

float linearizeDepth(float sampleDepth, float4x4 viewMatrix) {
    constexpr float kDepthEpsilon = 1e-5f;
    float d = max(kDepthEpsilon, sampleDepth);
    d = abs(-viewMatrix[3][2] / d);
    return d;
}

constexpr sampler textureSampler(address::clamp_to_edge, filter::bicubic);

float getDepth(float2 coords, constant InputArgs *args,
               texture2d<float, access::sample> inDepth,
               depth2d<float, access::sample> arDepth) {
    float2 arDepthCoords = args->orientationTransform * coords + args->orientationOffset;
    float realDepth = arDepth.sample(textureSampler, arDepthCoords);
    float virtualDepth = linearizeDepth(inDepth.sample(textureSampler, coords)[0], args->viewMatrix);
    bool realFragment = (virtualDepth <= FLT_EPSILON);
    if (realFragment) { virtualDepth = realDepth; }
    return min(virtualDepth, realDepth);
}

float3 getDirection(float2 screenCoord, constant InputArgs *args) {
    float3 top = mix(args->topLeft.xyz, args->topRight.xyz, screenCoord.x);
    float3 bottom = mix(args->bottomLeft.xyz, args->bottomRight.xyz, screenCoord.x);
    return normalize(mix(bottom, top, screenCoord.y));
}

float3 worldCoordsForDepth(float depth, float2 screenCords, constant InputArgs *args) {
    float3 centerDirection = getDirection(float2(0.5, 0.5), args);
    float3 direction = getDirection(screenCords, args);
    float depth2 = depth / dot(direction, centerDirection);
    return direction * depth2 + args->viewTranslation.xyz;
}

[[kernel]] void postProcess(uint2 gid [[thread_position_in_grid]],
                            texture2d<half, access::read> inputTexture [[texture(0)]],
                            texture2d<float, access::sample> inDepth [[texture(1)]],
                            texture2d<half, access::write> outputTexture [[texture(2)]],
                            depth2d<float, access::sample> arDepth [[texture(3)]],
                            constant InputArgs *args [[buffer(0)]]) {
    float2 screenCoords = float2(float(gid[0]) / float(outputTexture.get_width()),
                                 float(gid[1]) / float(outputTexture.get_height()));
    float rawDepth = getDepth(screenCoords, args, inDepth, arDepth);
    float3 worldCoords = worldCoordsForDepth(rawDepth, screenCoords, args);
    float depth = rawDepth;
    depth = 1 - pow(1 / (pow(depth, args->intensity) + 1), args->falloff);
    depth = clamp(depth, 0.0, 1.0);
    half4 nearColor = inputTexture.read(gid);
    float blend = pow(1 - depth, args->exponent);
    half4 color = half4(0.0);
    float2 frag = worldCoords.xz;
    frag *= 1.0 - 0.2 * cos(frag) * sin(3.14159 * 0.5 * inDepth.sample(textureSampler, float2(0.0)).x);
    frag *= 5.0;
    float random = rand(floor(frag));
    float2 black = smoothstep(1.0, 0.8, cos(frag * 3.14159 * 2.0));
    float3 finalColor = hsv2rgb(float3(random, 1.0, 1.0));
    finalColor *= black.x * black.y * smoothstep(1.0, 0.0, length(fract(frag) - 0.5));
    finalColor *= 0.5 + 0.5 * cos(random + random * args->time + args->time + 3.14159 * 0.5 * inDepth.sample(textureSampler, float2(0.7)).x);
    color = blend * nearColor + (1.0 - blend) * half4(half3(finalColor), 1.0);
    // Without this write the kernel has no visible effect.
    outputTexture.write(color, gid);
}

I really hope for help understanding this one.
Posted Last updated
.
Post marked as solved
1 Reply
311 Views
There is a write function documented in the Core Image Metal shader reference here: https://developer.apple.com/metal/MetalCIKLReference6.pdf But I'm not sure how to use it. I assumed one would be able to call it on the destination parameter, i.e. dest.write(...), but I get the error "no member named 'write' in 'coreimage::destination'". How do I use this function?
Posted Last updated
.
Post not yet marked as solved
2 Replies
404 Views
I've created a custom BoxBlur kernel that produces identical results to Apple's built-in box blur (CIBoxBlur), but my custom kernel is orders of magnitude slower. So naturally I am wondering what I'm doing wrong to get such poor performance. Below is my custom kernel in the Metal shading language. Can you spot why it's so slow? The built-in filter performs well, so I can only assume it's something I'm doing wrong.

#include <CoreImage/CoreImage.h>
#import <simd/simd.h>

extern "C" {
namespace coreimage {

float4 customBoxBlurFilterKernel(sampler src) {
    float2 crd = src.coord();
    int edge = 100;
    int minx = crd.x - edge;
    int maxx = crd.x + edge;
    int miny = crd.y - edge;
    int maxy = crd.y + edge;
    float4 sums = float4(0, 0, 0, 0);
    float cnt = 0;
    // compute average of surrounding rgb values
    for (int row = miny; row < maxy; row++) {
        for (int col = minx; col < maxx; col++) {
            float4 samp = src.sample(float2(col, row));
            sums[0] += samp[0];
            sums[1] += samp[1];
            sums[2] += samp[2];
            cnt += 1.;
        }
    }
    return float4(sums[0]/cnt, sums[1]/cnt, sums[2]/cnt, 1);
}

}
}
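The slowness here is algorithmic: with edge = 100 the kernel takes roughly 200 × 200 = 40,000 texture samples per pixel. A box blur is separable (one horizontal pass, then one vertical pass), and each 1-D pass can reuse a running window sum, which is the standard way fast implementations work (an assumption about CIBoxBlur's internals, but the technique itself is well established). A CPU sketch of the 1-D sliding window:

```swift
// Sketch (CPU, 1-D) of the sliding-window box blur: amortized O(1) work
// per sample instead of O(r), and two such passes replace the O(r^2) loop.
func slidingBoxBlur(_ x: [Float], radius r: Int) -> [Float] {
    let n = x.count
    guard n > 0, r >= 0 else { return x }
    // Clamp indices to the valid range (like clamp_to_edge sampling).
    func sample(_ i: Int) -> Float { x[min(max(i, 0), n - 1)] }

    var out = [Float](repeating: 0, count: n)
    let count = Float(2 * r + 1)
    // Prime the window centered on index 0.
    var sum: Float = 0
    for j in -r...r { sum += sample(j) }
    out[0] = sum / count
    // Slide: add the entering sample, drop the leaving one.
    for i in 1..<n {
        sum += sample(i + r) - sample(i - r - 1)
        out[i] = sum / count
    }
    return out
}
```

Applying the same idea on the GPU, a two-pass kernel with radius 100 would do about 400 samples per pixel rather than 40,000, which is where the orders-of-magnitude gap comes from.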
Posted Last updated
.
Post not yet marked as solved
0 Replies
403 Views
So I wanted to render video from a .mov file in an MTKView. The short algorithm: 1. Read CMSampleBuffers from the .mov using AVAssetReader and AVAssetReaderTrackOutput. 2. Convert each CMSampleBuffer from step 1 to an MTLTexture and pass it to the renderer. I did those two steps, but the resulting picture twitches and I don't know why. Link to the .mov: https://www.dropbox.com/s/gmzxd8j94pjhc1q/2.MOV?dl=0. Link to the result: https://www.dropbox.com/s/exgf1tk7oqvon25/result.mov?dl=0. The code: ViewController.swift, VideoReader.swift, SampleConverter.swift, TextureModel.swift, Renderer.swift, MyMetal.metal.
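For step 2, the usual route is a CVMetalTextureCache; a classic cause of twitching is releasing the CVMetalTexture before the GPU has finished reading it. A sketch (assuming the reader outputs BGRA pixel buffers; the class and method names here are illustrative, not from the linked code):

```swift
import AVFoundation
import CoreVideo
import Metal

// Sketch: CMSampleBuffer -> MTLTexture via CVMetalTextureCache.
// Keep the returned CVMetalTexture alive until the command buffer
// that samples the MTLTexture has completed, or frames can corrupt.
final class SampleTextureConverter {
    private var cache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device,
                                        nil, &cache) == kCVReturnSuccess
        else { return nil }
    }

    func texture(from sampleBuffer: CMSampleBuffer)
        -> (texture: MTLTexture, backing: CVMetalTexture)? {
        guard let cache = cache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return nil }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                                  pixelBuffer, nil, .bgra8Unorm,
                                                  width, height, 0, &cvTexture)
        guard let cvTexture = cvTexture,
              let texture = CVMetalTextureGetTexture(cvTexture)
        else { return nil }
        return (texture, cvTexture)
    }
}
```

The other common cause of stutter is draining the reader faster than the frames' presentation timestamps; pacing presentation by each sample's CMSampleBufferGetPresentationTimeStamp is worth checking.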
Posted
by Nils.
Last updated
.
Post marked as solved
2 Replies
382 Views
The documentation says the pixel format of the environmentTexture in an AREnvironmentProbeAnchor is bgra8Unorm_srgb. https://developer.apple.com/documentation/arkit/arenvironmentprobeanchor/2977511-environmenttexture However, when I inspect the pixelFormat property of the MTLTexture it says it's rgba16Float. I'm trying to read the texture out as a PNG, and because it's a 16-bit float image, I'm assuming its color space is CGColorSpace.displayP3, but I'm not 100% sure. The texture looks darker than what I expected. Could it be that the color space is sRGB, but it's 16-bit because it's actually an HDR texture stored as linear RGB? (Tested on iPhone 12, iOS 15)
Posted
by endavid.
Last updated
.
Post not yet marked as solved
0 Replies
263 Views
I'm trying to acquire a point cloud and then capture an image (the capture would stop the point-cloud acquisition session). The point-cloud acquisition seems easy; this is a well-discussed topic: https://developer.apple.com/forums/thread/658109 Furthermore, it would seem possible to store camera-pose information from the point cloud with the photo, so that data could be used to find a pose. Later, I'd align the 3D point cloud to that 2D image (preferably not using depth images, interpolation, or NNs). I am surprised that it seems really hard to find out whether such an approach is feasible.
Posted Last updated
.
Post not yet marked as solved
3 Replies
269 Views
I am working on rendering multiple objects in MetalKit, and I can see that objects flicker when they overlap each other. I'm passing the currentRenderPassDescriptor and the current drawable in the draw method, which is called every frame. Please refer to the screenshot. Please help me render the objects without flickering.
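Flicker exactly where objects overlap is usually Z-fighting from a missing or disabled depth test (an assumption, since the post doesn't show the pipeline setup). A minimal sketch of enabling depth testing with an MTKView:

```swift
import MetalKit

// Sketch: enable depth testing so overlapping objects resolve by depth.
// `view` and `device` are assumed to exist in the renderer's setup code.
func makeDepthState(view: MTKView, device: MTLDevice) -> MTLDepthStencilState? {
    view.depthStencilPixelFormat = .depth32Float   // give the view a depth buffer

    let descriptor = MTLDepthStencilDescriptor()
    descriptor.depthCompareFunction = .less        // nearer fragments win
    descriptor.isDepthWriteEnabled = true
    return device.makeDepthStencilState(descriptor: descriptor)
}
// In draw(in:): encoder.setDepthStencilState(depthState)
// The render pipeline descriptor must match the view:
//   pipelineDescriptor.depthAttachmentPixelFormat = view.depthStencilPixelFormat
```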
Posted Last updated
.
Post not yet marked as solved
0 Replies
214 Views
I downloaded this sample: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ I commented out this line in AAPLViewController.mm: //    _view.enableSetNeedsDisplay = YES; I modified the presentDrawable line in AAPLRenderer.mm to add afterMinimumDuration:     [commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60]; I then added a presentedHandler before the above line that records the time between successive presents. Most of the time it correctly reports 0.0166667s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333333s, followed by the next frame after around 0.24s. Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make things consistent. Why would it present a frame early? This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0 and the latest public Xcode release, 13.1.
Posted
by oviano.
Last updated
.
Post not yet marked as solved
0 Replies
319 Views
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:

extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent();
    float minX = innerRect.x + time * innerRect.z;
    float minY = innerRect.y + time * innerRect.w;
    float cropWidth = (1 - time) * innerRect.w;
    float cropHeight = (1 - time) * innerRect.z;
    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);
    if (coord1.x > minX && coord1.x < minX + cropWidth &&
        coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}

And it crashes on initialization:

class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent,
                                           arguments: [backgroundImage, foregroundImage, inputTime])
    }
}

It crashes on the try! line with the following error:

Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time) {
    return mix(s1, s2, time);
}
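For context, this matches the documented distinction: CIColorKernel functions take only coreimage::sample_t arguments (a single already-sampled pixel), while a kernel that takes coreimage::sampler arguments and samples at arbitrary coordinates must be loaded as a general CIKernel. A sketch of loading the same function that way (reusing the poster's "AppCIKernels" metallib name):

```swift
import CoreImage

// Sketch: a kernel with coreimage::sampler arguments is a general CIKernel,
// not a CIColorKernel — loading it with the wrong class fails at init.
func loadWipeKernel() throws -> CIKernel {
    let url = Bundle.main.url(forResource: "AppCIKernels",
                              withExtension: "ci.metallib")!
    let data = try Data(contentsOf: url)
    return try CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}
```

Applying a general CIKernel also requires a region-of-interest callback: kernel.apply(extent:roiCallback:arguments:).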
Posted Last updated
.
Post not yet marked as solved
1 Reply
278 Views
For running my app on devices with P3 screens, to take advantage of the P3 colors, which MTLPixelFormat is the proper one to use for my MTKView? My app is for iOS, but it’s nice to be able to debug on the simulator or as a Catalyst app. Right now, I’ve been using MTLPixelFormatBGR10_XR_sRGB and MTLPixelFormatBGRA10_XR_sRGB per the recommendation in WWDC 2016 session 605 for iOS support of P3 color. This appears to work fine on iOS devices, but on my Apple silicon Mac, when running as a Catalyst app, the colors sometimes look lighter than they should, but it doesn’t always happen. When I run in the simulator on this Mac, my app crashes. When I was using an Intel Mac (with an sRGB screen), I was having my app render in sRGB-only with MTLPixelFormat.bgra8Unorm. MTLPixelFormat.bgra8Unorm also works fine on my Apple silicon Mac, but of course, I’m not getting the P3 color space.  For Apple Silicon Macs, as the GPU hardware is more like an iOS device than like an Intel Mac, should the same pixel formats work on an Apple silicon Mac as on iOS devices? Or are the pixel formats that work with Apple Silicon Macs the same as those on Intel Macs?
Posted Last updated
.
Post marked as solved
2 Replies
377 Views
Hi, I have a SceneKit/ARKit app (with my own Metal shader) and I need to change the main texture format from .bgra8unorm_srgb to .bgra8unorm. Since it is sRGB, anything I draw is converted to sRGB again and shown too bright. Is there a way to change it, and how? I didn't find anything useful in the docs or examples. Thx
Posted
by AlgoChris.
Last updated
.
Post not yet marked as solved
1 Reply
745 Views
Friendly greetings! I'm on a MacBook Air M1, using Xcode 13.1 and Swift. The full source code is below, as a macOS console application:

import Foundation
import Metal
import MetalKit
import simd

guard let device: MTLDevice = MTLCreateSystemDefaultDevice() else {
    fatalError("could not create metal default device")
}
let queue = device.makeCommandQueue()
print("Hello, World!")

The full output, running in debug from Xcode, is:

2021-10-29 06:00:16.567089+0200 FraCompute[71702:11572598] Metal API Validation Enabled
2021-10-29 06:00:16.621413+0200 FraCompute[71702:11572598] flock failed to lock list file (/var/folders/s7/7m1rt8kx3jq7c02mwclmnvf80000gn/C//com.apple.metal/16777235_322/functions.list): errno = 35
2021-10-29 06:00:16.628053+0200 FraCompute[71702:11572598] +[MTLIOAccelDevice registerDevices]: Zero Metal services found
Hello, World!
Program ended with exit code: 0

Running the sample "macOS Game" default code (the one with the rotating cube) also gives this "Zero Metal services found". What's a "Metal service" anyway? Googling for it obviously gives me unrelated smithing service offers =^_^= It's not even "Metal service not found" but "I found zero Metal services", whatever that's supposed to mean. Is it "I didn't find any Metal service", or is there a thing called "Zero Metal"? I can query the Metal device name and it gives me "Apple M1". No problem, everything seems to work fine (so far). But what's the meaning of this message, please?
Posted
by ker2x.
Last updated
.
Post marked as solved
1 Reply
494 Views
Hi all, though I have the YCbCr texture of the camera feed, I still lack the right matrix to convert it to what ARKit is showing as the background texture. I tried it with this matrix (found here):

constant const float4x4 ycbcrToRGBTransform = float4x4(
    float4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
    float4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
    float4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
    float4(-0.7010f, +0.5291f, -0.8860f, +1.0000f)
);

But this one is too bright. Are there docs about this, or is the background_video_frag source available? (Xcode tells me it's not/needs to be loaded, but I don't know from where.) I tried to find out from the buffers used in background_video_frag, but without the source it's hard to tell what's what.
Posted
by AlgoChris.
Last updated
.