MetalKit


Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

Posts under MetalKit tag

55 Posts

Each post below lists its reply, boost, and view counts along with its latest activity date.

Metal Core Image passing sampler arguments
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:

extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent();
    float minX = innerRect.x + time * innerRect.z;
    float minY = innerRect.y + time * innerRect.w;
    float cropWidth = (1 - time) * innerRect.w;
    float cropHeight = (1 - time) * innerRect.z;
    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);
    if (coord1.x > minX && coord1.x < minX + cropWidth &&
        coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}

And it crashes on initialization:

class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}

It crashes in the try line with the following error:

Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time) {
    return mix(s1, s2, time);
}
Replies: 1 · Boosts: 0 · Views: 1.2k · Activity: Oct ’23
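A note on the crash above: kernel functions that take coreimage::sampler arguments are general Core Image kernels rather than color kernels, so one hedged sketch (an assumption, not a confirmed fix) is to load the same function as a CIKernel and supply a region-of-interest callback when applying it:

static var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    // Load as a general CIKernel, since CIColorKernel functions only take sample_t/float arguments.
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()

// General kernels are applied with a region-of-interest callback, for example:
// CIWipeRenderer.kernel.apply(extent: backgroundImage.extent,
//                             roiCallback: { _, rect in rect },
//                             arguments: [backgroundImage, foregroundImage, inputTime])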
MTKView, presentDrawable afterMinimumDuration seems flakey...
I downloaded this sample: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ

I commented out this line in AAPLViewController.mm:

//    _view.enableSetNeedsDisplay = YES;

I modified the presentDrawable line in AAPLRenderer.mm to add afterMinimumDuration:

    [commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60];

I then added a presentedHandler before the above line that records the time between successive presents. Most of the time it correctly reports 0.166667s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333333s, followed by the next frame after around 0.24s. Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make things consistent. Why would it present a frame early?

This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0, and the latest public Xcode release (13.1).
Replies: 3 · Boosts: 0 · Views: 938 · Activity: Aug ’23
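For reference, a minimal sketch (in Swift, not taken from the sample) of measuring the interval between successive presents with addPresentedHandler; lastPresentedTime is an assumed stored property on the renderer:

drawable.addPresentedHandler { [weak self] presented in
    guard let self = self else { return }
    if self.lastPresentedTime > 0 {
        // presentedTime is the host time at which the drawable actually appeared on screen.
        print("Interval since previous present: \(presented.presentedTime - self.lastPresentedTime) s")
    }
    self.lastPresentedTime = presented.presentedTime
}
commandBuffer.present(drawable, afterMinimumDuration: 1.0 / 60.0)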
Spatial framework, Rotation3D eye target up (look at), Quaternion flip
Hi, I want to begin by saying thank you Apple for making the Spatial framework! Please add a million more features ;-)

I'm using the following code to make an object "look at" another point, but at a particular rotation the object "flips" its rotation. See a video here: https://www.dropbox.com/s/5irxt0gxou4c2j6/QuaternionFlip.mov?dl=0 (I shake the mouse cursor when it happens to make it obvious.)

import Spatial

let lookAtRotation = Rotation3D(eye: Point3D(position), target: Point3D(x: 0, y: 0, z: 0), up: Vector3D(x: 0, y: 1, z: 0))
myObj.quaternion = lookAtRotation.quaternion

So my question is: why is this happening, and how can I fix it? Thanks.
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Oct ’23
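One possibility worth noting for the post above (an assumption about the cause, not a confirmed diagnosis): q and -q encode the same rotation, so successive look-at quaternions can land on opposite hemispheres and produce a visible flip when applied frame by frame. A minimal sketch that keeps each new quaternion on the same hemisphere as the previous one, where previousQuaternion is assumed stored state:

import simd
import Spatial

// Flip the sign of the new quaternion if it points away from the previous one,
// so consecutive frames stay on the same hemisphere.
func continuous(_ new: simd_quatd, relativeTo previous: simd_quatd) -> simd_quatd {
    simd_dot(new.vector, previous.vector) < 0 ? simd_quatd(vector: -new.vector) : new
}

let lookAtRotation = Rotation3D(eye: Point3D(position), target: Point3D(x: 0, y: 0, z: 0), up: Vector3D(x: 0, y: 1, z: 0))
myObj.quaternion = continuous(lookAtRotation.quaternion, relativeTo: previousQuaternion)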
Frames out of order using AVAssetWriter
We are using AVAssetWriter to write videos using both HEVC and H.264 encoding. Occasionally, we get reports of choppy footage in which frames appear out of order when played back on a Mac (QuickTime) or iOS device (stock Photos app). This occurs extremely unpredictably, often not starting until 20+ minutes of filming, but occasionally happening as soon as filming starts.

Interestingly, users have reported the issue goes away while editing or viewing on a different platform (e.g. Linux) or in the built-in Google Drive player, but comes back as soon as the video is exported or downloaded again. When this occurs in an HEVC file, converting to H.264 seems to resolve it. I haven't found a similar fix for H.264 files.

I suspect an AVAssetWriter encoding issue but haven't been able to uncover the source. Running a stream analyzer on HEVC files with this issue reveals the following error: "Short-term reference picture with POC = [some number] seems to have been removed or not correctly decoded." However, running a stream analyzer on H.264 files with the same playback issue seems to show nothing wrong.

At a high level, our video pipeline looks something like this:

Grab a sample buffer in captureOutput(_ captureOutput: AVCaptureOutput!, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer!)
Perform some Metal rendering on that buffer
Pass the resulting CVPixelBuffer to the AVAssetWriterInputPixelBufferAdaptor associated with our AVAssetWriter

Example files can be found here: https://drive.google.com/drive/folders/1OjDZ3XaC-ubD5hyDiNvMQGl2NVqZbWnR?usp=sharing This includes a video file suffering from this issue, the same file fixed after converting to mp4, and a screen recording of the distorted playback in QuickTime.

Can anyone help point me in the right direction to solving this issue? I can provide more details as necessary.
Replies: 4 · Boosts: 2 · Views: 1.1k · Activity: Nov ’23
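For context on the pipeline described above, a minimal sketch (not the poster's code; assetWriterInput, pixelBufferAdaptor, and renderedPixelBuffer are assumed names) of the append step, where the rendered buffer is appended with the presentation timestamp taken from the original sample buffer. Non-monotonic timestamps, or pixel buffers reused while still in flight, are two things worth ruling out for out-of-order playback:

import AVFoundation

// Append the Metal-rendered pixel buffer using the capture timestamp of the
// source sample buffer; the writer expects strictly increasing presentation times.
let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
if assetWriterInput.isReadyForMoreMediaData {
    let appended = pixelBufferAdaptor.append(renderedPixelBuffer, withPresentationTime: pts)
    assert(appended, "append failed; check AVAssetWriter.error")
}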
Mapkit: [Pipeline Library] Mapping the Pipeline data cache failed, errno22
Hi, when I add a map view to my view controller I receive the log: [Pipeline Library] Mapping the pipeline data cache failed, errno22. This started after I updated Xcode to 14.3, and the log is only shown when I test on a real device, not on the simulator. (This appears in every project I create, even without calling any function to modify the map view; I only add the MapKit and Core Location frameworks.)

Xcode version: 14.3 (14E222b)
iPhone: iPhone 8, iOS version: 16.4

I've also tested on an iPhone 12 with a lower iOS version and I receive the same error.
Replies: 2 · Boosts: 1 · Views: 1.7k · Activity: Jul ’23
Bindless/GPU-Driven approach with dynamic scenes?
I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples I have found of such approaches use fixed scenes, where all the data is fixed before the first draw call into something like a MeshBuffer that holds all scene geometry in the form of Mesh objects (for instance).

While I can assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be some clever tricks with pointers to update a MeshBuffer as needed, I would like to know if there is an established or optimal solution to this problem, or if these approaches are simply incompatible with dynamic geometry. Any example projects that do what I am asking that I may have missed would be appreciated, too.

* I know these are not the same, but they seem to share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but it also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.
Replies: 3 · Boosts: 1 · Views: 1.2k · Activity: Jun ’23
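For contrast with the GPU-driven question above, a minimal sketch of the CPU-driven loop the footnote refers to, assuming a hypothetical Mesh type that owns its own vertex and index buffers:

// One encoder call per mesh: trivially handles meshes being added or removed,
// but keeps draw submission on the CPU.
for mesh in meshes {
    encoder.setVertexBuffer(mesh.vertexBuffer, offset: 0, index: 0)
    encoder.drawIndexedPrimitives(type: .triangle,
                                  indexCount: mesh.indexCount,
                                  indexType: .uint32,
                                  indexBuffer: mesh.indexBuffer,
                                  indexBufferOffset: 0)
}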
High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)
var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}

if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know if I am doing anything wrong and what could be done to lower CPU usage with Core Image.
Replies: 4 · Boosts: 1 · Views: 1.3k · Activity: Oct ’23
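One alternative worth profiling for the Core Image path above (a sketch, not a confirmed fix): rendering through a CIRenderDestination, which lets Core Image start the work without blocking the calling thread the way the synchronous render(_:to:bounds:colorSpace:) call does:

import CoreImage

do {
    // Start an asynchronous render into the destination pixel buffer.
    let destination = CIRenderDestination(pixelBuffer: dstPixelBuffer!)
    let task = try _ciContext.startTask(toRender: dstImage, to: destination)
    // ... other CPU work can overlap with the render here ...
    _ = try task.waitUntilCompleted()   // wait only when the rendered buffer is actually needed
} catch {
    print("Core Image render failed: \(error)")
}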
Accessing vertex data from a MTKView
I am drawing an image in an MTKView using a Metal shader based on the pointCloudVertexShader used in this sample code. The image can be moved with a drag gesture in the MTKView, similarly to the MetalPointCloud view in the sample code.

I want to implement a UITapGestureRecognizer that, when tapping in the MTKView, returns the appropriate vecout value (from the pointCloudVertexShader) for the given location (see the code below).

// Calculate the vertex's world coordinates.
float xrw = ((int)pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
float yrw = ((int)pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];
float4 xyzw = { xrw, yrw, depth, 1.f };
// Project the coordinates to the view.
float4 vecout = viewMatrix * xyzw;

The vecout variable is computed in the Metal vertex shader and would be associated with a coloured pixel. My idea is to use the didTap method to evaluate the wanted pixel data (3D coordinates).

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: mtkView)
    // Use the location to get the pixel value.
}

Is there any way to get this value directly from the MTKView? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 533 · Activity: Jul ’23
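A sketch of one workaround for the question above (an assumption, not a confirmed approach): rather than reading vecout back from the vertex shader, recompute the same unprojection on the CPU for the tapped pixel. This assumes the tap location has already been converted into depth-map pixel coordinates, that cameraIntrinsics and viewMatrix match the shader inputs, and that depth(atColumn:row:) is a hypothetical helper that reads the depth value at that pixel:

import simd

func worldPoint(atPixel pos: SIMD2<Float>) -> SIMD4<Float> {
    // Same math as the pointCloudVertexShader, evaluated on the CPU.
    let z = depth(atColumn: Int(pos.x), row: Int(pos.y))   // hypothetical depth lookup
    let xrw = (pos.x - cameraIntrinsics[2][0]) * z / cameraIntrinsics[0][0]
    let yrw = (pos.y - cameraIntrinsics[2][1]) * z / cameraIntrinsics[1][1]
    return viewMatrix * SIMD4<Float>(xrw, yrw, z, 1)
}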
Save texture as Tiff
I'm trying to save Metal textures in a lossless compressed format. I've tried PNG and TIFF, but I run into the same problem: the pixel data changes after save and load when we have transparency. Here is the code I use to save a TIFF:

import ImageIO
import UIKit
import Metal
import MobileCoreServices

extension MTLTexture {
    func saveAsLosslessTIFF(url: URL) throws {
        let context = CIContext()
        guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB) else { return }
        guard let ciImage = CIImage(mtlTexture: self, options: [.colorSpace: colorSpace]) else { return }
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }

        // Create a dictionary with TIFF compression options.
        let tiffCompression_LZW = 5
        let options: [String: Any] = [
            kCGImagePropertyTIFFCompression as String: tiffCompression_LZW,
            kCGImagePropertyDepth as String: depth,
            kCGImagePropertyPixelWidth as String: width,
            kCGImagePropertyPixelHeight as String: height,
        ]

        let fileDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeTIFF, 1, nil)
        guard let destination = fileDestination else {
            throw RuntimeError("Unable to create image destination.")
        }
        CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)
        if !CGImageDestinationFinalize(destination) {
            throw RuntimeError("Unable to save image to destination.")
        }
    }
}

I can then load the texture like this:

func loadTexture(url: URL) throws -> MTLTexture {
    let usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue | MTLTextureUsage.shaderWrite.rawValue)
    return try loader.newTexture(URL: url, options: [MTKTextureLoader.Option.textureUsage: usage.rawValue, MTKTextureLoader.Option.origin: MTKTextureLoader.Origin.flippedVertically.rawValue])
}

After saving and then loading the texture, I want to get back the exact same texture. And I do, if there is no transparency. Transparent pixels, however, are transformed in a way that I don't understand. Here is an example pixel:

[120, 145, 195, 170] -> [144, 174, 234, 170]

My first guess would be that something is trying to undo a pre-multiplied alpha that never happened. But the numbers don't seem to work out. For example, if that were the case I'd expect 120 to go to (120 * 255) / 170 = 180, not 144. Any idea what I am doing wrong?
Replies: 9 · Boosts: 0 · Views: 1k · Activity: Aug ’23
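As a debugging aid for the round-trip problem above (a sketch, not a fix), the raw texture bytes can be read back and diffed directly, which takes Core Image and ImageIO out of the comparison. This assumes an RGBA8 texture whose storage mode allows CPU reads:

import Metal

func rawBytes(of texture: MTLTexture) -> [UInt8] {
    // Assumes a 4-bytes-per-pixel format and non-private storage.
    let bytesPerRow = texture.width * 4
    var data = [UInt8](repeating: 0, count: bytesPerRow * texture.height)
    data.withUnsafeMutableBytes { ptr in
        texture.getBytes(ptr.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
    }
    return data
}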
Unusual wakes on CPU
Here is a normal Xcode project. When it runs, there are no more than 5 wakes per second on the CPU. But when I create a class that inherits from MTKView in this project, there are more than 100 wakes per second on the CPU, which may make my device hot. I have also found other possible ways to create unusual wakes on the CPU, but I need to test more.

Steps:

Create a new Xcode project using Swift.
Run the project and observe the wakes using Xcode; expect fewer than 5.
Add the following code to the project:

import Foundation
import MetalKit

class MTKPlayerView: MTKView {
    init() {
        let dev = MTLCreateSystemDefaultDevice()!
        super.init(frame: .zero, device: dev)
    }

    required init(coder: NSCoder) {
        super.init(coder: coder)
    }
}

Run the project again and observe the wakes using Xcode; expect more than 100.

I haven't found any information about the thread named gputools_smt_poll. If you know anything about it, please share it with me.
Replies: 1 · Boosts: 0 · Views: 426 · Activity: Aug ’23
Physics Engine for Metal API rendering
Hello everyone! Here with another graphics API question, but a slightly different one. I'm currently looking at two physics SDKs, PhysX and Bullet. The game Asphalt 9 uses Metal and Bullet, and I would like to do the same. With Metal 3 out and seemingly stable, I would like to use one of these engines for my upcoming Metal rendering engine. But there's a catch: I wish to use Objective-C or C++ for both the rendering engine and the physics engine, and not Swift (it's a good language, but I prefer C++ for game development). What do you say about this?
Replies: 1 · Boosts: 0 · Views: 955 · Activity: Aug ’23
metal encoder dispatch based on value of an atomic counter
I'm trying to find a way to reduce synchronization time between two compute shader calls, where one dispatch depends on an atomic counter from the other.

Example: I have two Metal kernels, select and execute. select looks through the numbers buffer and stores the index of every number < 10 in a new buffer, selectedNumberIndices, by using an atomic counter. execute is then run counter number of times to do something with those selected indices.

kernel void select(device atomic_uint &counter,
                   device uint *numbers,
                   device uint *selectedNumberIndices,
                   uint id [[thread_position_in_grid]]) {
    if (numbers[id] < 10) {
        uint idx = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        selectedNumberIndices[idx] = id;
    }
}

kernel void execute(device uint *selectedNumberIndices,
                    uint id [[thread_position_in_grid]]) {
    // do something, #counter number of times
}

Currently I can do this by using .waitUntilCompleted() between the dispatches to ensure I get accurate results, something like:

// select
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(.init(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

// wait
buffer.waitUntilCompleted()

// execute
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)

// extract the value of the atomic counter
let counterValue = counterBuffer.contents().load(as: UInt32.self)

encoder.dispatchThreads(.init(width: Int(counterValue), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

My question is whether there is any way to get the same functionality without the costly buffer.waitUntilCompleted() call. Or am I going about this in completely the wrong way, or missing something else?
Replies: 2 · Boosts: 0 · Views: 501 · Activity: Aug ’23
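One technique that may fit the question above (a sketch under assumptions, not a confirmed answer): encode the execute pass with an indirect dispatch, so the threadgroup count is read from a GPU buffer instead of the CPU. A tiny intermediate kernel (not shown) would copy the atomic counter into an assumed indirectArgsBuffer as three uint32 values (threadgroups per grid along x, y, z, rounded up), and all passes can then live in one command buffer with no waitUntilCompleted(). The execute kernel would also need to skip thread IDs >= counter, since the threadgroup count is rounded up:

// execute, encoded in the same command buffer as select and the prepare kernel
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(indirectBuffer: indirectArgsBuffer,    // filled on the GPU from the counter
                             indirectBufferOffset: 0,
                             threadsPerThreadgroup: MTLSize(width: executeState.threadExecutionWidth,
                                                            height: 1, depth: 1))
encoder.endEncoding()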
Does ModelIO export materials to USD?
I've got the following code to generate an MDLMaterial from my own material data model:

public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x), green: CGFloat(color.y), blue: CGFloat(color.z), alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}

When exporting to OBJ, I get the expected material properties:

# Apple ModelI/O MTL File: testExport.mtl

newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

However, when exporting USD I just get:

#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }

    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}

There aren't any material properties. FWIW, this spec defines the set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html

(Note: there is no tag for ModelIO, so using SceneKit, etc.)
Replies: 0 · Boosts: 0 · Views: 513 · Activity: Sep ’23
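For reference, a sketch of the export path the question above presupposes (assumed setup, not the poster's code; mdlMesh, model, and outputURL are assumed names), where the material is attached to each submesh before MDLAsset.export(to:) writes the file:

import ModelIO

// Attach the generated MDLMaterial to every submesh of the mesh being exported.
if let submeshes = mdlMesh.submeshes as? [MDLSubmesh] {
    for submesh in submeshes {
        submesh.material = model.mdlMaterial
    }
}

let asset = MDLAsset()
asset.add(mdlMesh)
do {
    try asset.export(to: outputURL)   // the file extension (.obj / .usd / .usda) selects the exporter
} catch {
    print("Export failed: \(error)")
}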