Hello everyone,
Within the renderEncoder?.drawIndexedPrimitives(type: .line, …) call I can't render all the lines of the object; I can see only about 80% of them. Do you know what could be causing this? Other game engines, like C++ ones, handle this file just fine.
import MetalKit

class Renderer: NSObject, MTKViewDelegate {
    var parent: ContentView
    var metalDevice: MTLDevice!
    var metalCommandQueue: MTLCommandQueue!
    let allocator: MTKMeshBufferAllocator
    let pipelineState: MTLRenderPipelineState
    var scene: RenderScene
    let mesh: ObjMesh

    init(_ parent: ContentView) {
        self.parent = parent
        if let metalDevice = MTLCreateSystemDefaultDevice() {
            self.metalDevice = metalDevice
        }
        self.metalCommandQueue = metalDevice.makeCommandQueue()
        self.allocator = MTKMeshBufferAllocator(device: metalDevice)
        mesh = ObjMesh(device: metalDevice, allocator: allocator, filename: "cube")

        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        let library = metalDevice.makeDefaultLibrary()
        pipelineDescriptor.vertexFunction = library?.makeFunction(name: "vertexShader")
        pipelineDescriptor.fragmentFunction = library?.makeFunction(name: "fragmentShader")
        pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
        pipelineDescriptor.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(mesh.metalMesh.vertexDescriptor)
        do {
            pipelineState = try metalDevice.makeRenderPipelineState(descriptor: pipelineDescriptor)
        } catch {
            fatalError()
        }

        scene = RenderScene()

        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    }

    func draw(in view: MTKView) {
        // update
        scene.update()

        guard let drawable = view.currentDrawable else {
            return
        }
        let commandBuffer = metalCommandQueue.makeCommandBuffer()

        let renderPassDescriptor = view.currentRenderPassDescriptor
        renderPassDescriptor?.colorAttachments[0].clearColor = MTLClearColorMake(0, 0.5, 0.5, 1.0)
        renderPassDescriptor?.colorAttachments[0].loadAction = .clear
        renderPassDescriptor?.colorAttachments[0].storeAction = .store

        let renderEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor!)
        renderEncoder?.setRenderPipelineState(pipelineState)

        var cameraData = CameraParameters()
        cameraData.view = Matrix44.create_lookat(
            eye: scene.player.position,
            target: scene.player.position + scene.player.forwards,
            up: scene.player.up
        )
        cameraData.projection = Matrix44.create_perspective_projection(
            fovy: 45, aspect: 800/600, near: 0.1, far: 10
        )
        renderEncoder?.setVertexBytes(&cameraData, length: MemoryLayout<CameraParameters>.stride, index: 2)

        renderEncoder?.setVertexBuffer(mesh.metalMesh.vertexBuffers[0].buffer, offset: 0, index: 0)
        for cube in scene.cubes {
            var model: matrix_float4x4 = Matrix44.create_from_rotation(eulers: cube.eulers)
            model = Matrix44.create_from_translation(translation: cube.position) * model
            renderEncoder?.setVertexBytes(&model, length: MemoryLayout<matrix_float4x4>.stride, index: 1)
            for submesh in mesh.metalMesh.submeshes {
                renderEncoder?.drawIndexedPrimitives(
                    type: .line, indexCount: submesh.indexCount,
                    indexType: submesh.indexType, indexBuffer: submesh.indexBuffer.buffer,
                    indexBufferOffset: submesh.indexBuffer.offset
                )
            }
        }

        renderEncoder?.endEncoding()
        commandBuffer?.present(drawable)
        commandBuffer?.commit()
    }
}
====================
import MetalKit

class ObjMesh {
    let modelIOMesh: MDLMesh
    let metalMesh: MTKMesh

    init(device: MTLDevice, allocator: MTKMeshBufferAllocator, filename: String) {
        guard let meshURL = Bundle.main.url(forResource: filename, withExtension: "obj") else {
            fatalError()
        }

        let vertexDescriptor = MTLVertexDescriptor()
        var offset: Int = 0
        // position
        vertexDescriptor.attributes[0].format = .float3
        vertexDescriptor.attributes[0].offset = offset
        vertexDescriptor.attributes[0].bufferIndex = 0
        offset += MemoryLayout<SIMD3<Float>>.stride
        vertexDescriptor.layouts[0].stride = offset

        let meshDescriptor = MTKModelIOVertexDescriptorFromMetal(vertexDescriptor)
        (meshDescriptor.attributes[0] as! MDLVertexAttribute).name = MDLVertexAttributePosition

        let asset = MDLAsset(url: meshURL,
                             vertexDescriptor: meshDescriptor,
                             bufferAllocator: allocator)
        self.modelIOMesh = asset.childObjects(of: MDLMesh.self).first as! MDLMesh

        do {
            metalMesh = try MTKMesh(mesh: self.modelIOMesh, device: device)
        } catch {
            fatalError("couldn't load mesh")
        }
    }
}
===============
cube.obj
# Blender v2.91.0 OBJ File: ''
# www.blender.org
mtllib piece.mtl
o Cube_Cube.001
v -1.000000 1.000000 -1.000000
v -1.000000 1.000000 1.000000
v 1.000000 1.000000 -1.000000
v 1.000000 1.000000 1.000000
v -1.000000 -1.000000 -1.000000
v -1.000000 -1.000000 1.000000
v 1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 1.000000
vt 0.375000 0.000000
vt 0.625000 0.000000
vt 0.625000 0.250000
vt 0.375000 0.250000
vt 0.625000 0.500000
vt 0.375000 0.500000
vt 0.625000 0.750000
vt 0.375000 0.750000
vt 0.625000 1.000000
vt 0.375000 1.000000
vt 0.125000 0.500000
vt 0.125000 0.750000
vt 0.875000 0.500000
vt 0.875000 0.750000
vn 0.0000 1.0000 0.0000
vn 1.0000 0.0000 0.0000
vn 0.0000 -1.0000 0.0000
vn -1.0000 0.0000 0.0000
vn 0.0000 0.0000 -1.0000
vn 0.0000 0.0000 1.0000
usemtl None
s off
f 1/1/1 2/2/1 4/3/1 3/4/1
f 3/4/2 4/3/2 8/5/2 7/6/2
f 7/6/3 8/5/3 6/7/3 5/8/3
f 5/8/4 6/7/4 2/9/4 1/10/4
f 3/11/5 7/6/5 5/8/5 1/12/5
f 8/5/6 4/13/6 2/14/6 6/7/6
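A hedged side note on the question above: with type: .line, Metal rasterizes each consecutive pair of indices as an independent segment, and Model I/O will typically have triangulated the OBJ's quad faces, so a triangle index buffer traced as line pairs will not cover every edge. A minimal sketch of one alternative, reusing the renderEncoder and mesh from the draw(in:) method above (a sketch, not necessarily the fix for the missing 20%):
renderEncoder?.setTriangleFillMode(.lines) // rasterize triangle edges only
for submesh in mesh.metalMesh.submeshes {
    renderEncoder?.drawIndexedPrimitives(
        type: .triangle, // keep the topology the index buffer was built for
        indexCount: submesh.indexCount,
        indexType: submesh.indexType,
        indexBuffer: submesh.indexBuffer.buffer,
        indexBufferOffset: submesh.indexBuffer.offset
    )
}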
Hi, when I run my app, the console says:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed
I created an empty binary archive and tried to save it to disk, but it returned false without giving any error message.
The code is from
https://developer.apple.com/documentation/metal/shader_libraries/metal_binary_archives/creating_binary_archives_from_device-built_pipeline_state_objects?language=objc
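For reference, a rough Swift sketch of the same flow (the URL is a placeholder, and the pipeline-recording step is an assumption based on the linked page):
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Create an (initially empty) binary archive.
let archive = try device.makeBinaryArchive(descriptor: MTLBinaryArchiveDescriptor())

// Presumably at least one pipeline should be recorded before serializing, e.g.:
// try archive.addComputePipelineFunctions(descriptor: pipelineDescriptor)

// Write the archive to disk; in Swift this throws instead of returning false.
try archive.serialize(to: URL(fileURLWithPath: "/tmp/archive.metallib"))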
Hi folks,
I'm working on a tile-based deferred renderer, similar to this Apple example. I'm wondering how to add MSAA to the renderer, and I see two choices:
1. Copy the single-sampled texture at the end of the GBuffer/lighting render pass to a multisampled texture and resolve from that.
2. Make all render targets (GBuffer) multisampled and deal with sampling/resolving all the intermediate textures as well as the final, combined texture.
Which is the proper approach, and are there any examples of how to implement it?
Thanks!
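For what it's worth, a sketch of how the resolve step often looks on Apple GPUs (device, w, h, and drawable are assumed to exist; this is not a full tile-based implementation):
// Hypothetical multisampled color target that lives only in tile memory.
let msaaDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: w, height: h,
                                                        mipmapped: false)
msaaDesc.textureType = .type2DMultisample
msaaDesc.sampleCount = 4
msaaDesc.usage = [.renderTarget]
msaaDesc.storageMode = .memoryless // never backed by device memory
let msaaTarget = device.makeTexture(descriptor: msaaDesc)!

// Render the final pass into the MSAA target and resolve directly into the
// drawable, so no intermediate multisampled data is ever stored.
let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = msaaTarget
pass.colorAttachments[0].resolveTexture = drawable.texture
pass.colorAttachments[0].loadAction = .clear
pass.colorAttachments[0].storeAction = .multisampleResolve
Worth noting: only geometry that is actually rendered into a multisampled target gets antialiased, so resolving from a texture that was merely copied in would smooth nothing; that trade-off is the heart of the choice between the two options.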
There is a sample project from Apple here. It has a scene of a city at night and you can move in it.
It basically has two parts:
1. Application code written in what looks like Objective-C (I am more familiar with C++), which inherits from things like NSObject, MTKView, NSViewController, and so on. It processes input and all the app- and window-related stuff.
2. Rendering code that also looks like Objective-C. By the way, both parts are mostly in .mm files (Objective-C++, as far as I know). The application part directly uses only one class from the rendering part: AAPLRenderer.
I want to move the rendering part to C++ using metal-cpp, and for that I need to link metal-cpp into the project. I have done this successfully with blank projects several times before, using this tutorial. But with this sample project, Xcode can't find Foundation/Foundation.hpp (and the other metal-cpp headers). The error says:
Did not find header 'Foundation.hpp' in framework 'Foundation' (loaded from '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/System/Library/Frameworks')
Please help.
Hi, I have recently been writing Metal Shading Language to parallelize algorithms and speed them up.
I created a simple example to show the resulting acceleration. Since our algorithm uses Rust, I used metal-rs as the wrapper to execute the MSL kernels from the Rust side.
In this example I am adding two arrays, and the kernel looks like:
kernel void two_array_addition_2(
    constant uint* a [[buffer(0)]],
    constant uint* b [[buffer(1)]],
    device uint* c [[buffer(2)]],
    uint idx [[thread_position_in_grid]]
) {
    c[idx] = a[idx] + b[idx];
}
In main.rs you can see a function called execute_kernel(); this function has everything it needs to execute the MSL kernel (commandEncoder, pipelineState, etc.).
use core::mem;
use metal::{Buffer, MTLSize};
use objc::rc::autoreleasepool;
use std::time::Instant;
use two_array_addition::abstractions::state::MetalState;

fn execute_kernel(
    name: &str,
    state: &MetalState,
    input_a: &Buffer,
    input_b: &Buffer,
    output_c: &Buffer,
) -> Vec<u32> {
    // assert!(input_a.len() == input_b.len() && input_a.len() == output_c.len());
    // let len = input_a.len() as u64;
    let len = input_a.length() as u64 / mem::size_of::<u32>() as u64;

    // 1. Init the MetalState
    //    - we inited it
    // 2. Set up Pipeline State
    let pipeline = state.setup_pipeline(name).unwrap();

    // 3. Allocate the buffers for A, B, and C
    //    - we allocated outside of this function
    let mut result: &[u32] = &[];

    autoreleasepool(|| {
        // 4. Create the command buffer & command encoder
        let (command_buffer, command_encoder) = state.setup_command(
            &pipeline,
            Some(&[(0, input_a), (1, input_b), (2, output_c)]),
        );

        // 5. Dispatch the threadgroups and the number of threads per threadgroup
        let threadgroup_count = MTLSize::new((len + 256 - 1) / 256, 1, 1);
        let thread_per_threadgroup = MTLSize::new(256, 1, 1);
        // let grid_size = MTLSize::new(len, 1, 1);
        // let threadgroup_count = MTLSize::new(pipeline.max_total_threads_per_threadgroup(), 1, 1);

        command_encoder.dispatch_thread_groups(threadgroup_count, thread_per_threadgroup);
        command_encoder.end_encoding();
        command_buffer.commit();
        command_buffer.wait_until_completed();

        // 6. Copy the result back to the host
        let start = Instant::now();
        result = MetalState::retrieve_contents::<u32>(output_c);
        let duration = start.elapsed();
        println!("Duration for copying result back to host: {:?}", duration);
    });

    result.to_vec()
}
The performance is kinda interesting to me.
This is the result:
$ cargo run -r
This is expected to run for a while... please wait...
Generating input arrays...
Generating input arrays...
Generating output array...
Generating expected output...
Duration for allocating buffers: 2.015258s
Executing 1st kernel (1)...
Duration for copying result back to host: 5.75µs
Executing 1st kernel (2)...
Duration for copying result back to host: 542ns
Executing 2nd kernel (1)...
Duration for copying result back to host: 1µs
Executing 2nd kernel (2)...
Duration for copying result back to host: 458ns
Duration expected: 183.406167ms
Duration for 1st kernel (1): 1.894994875s
Duration for 1st kernel (2): 537.318208ms
Duration for 2nd kernel (1): 501.33275ms
Duration for 2nd kernel (2): 497.339916ms
You have successfully run the kernels!
The execution of the MSL kernel is slower, even though I reckon the dataset is quite big (2^29 elements).
The first kernel execution takes more time to launch.
Is there any way to optimize the MSL in this case? And in general, when you design an algorithm for parallelism, what are the main concerns?
The machine I am using is an M1 Pro with a 14-core GPU and 16 GB of memory.
Does anyone have an idea / explanation for why this happens?
Thank you
Hello everyone.
I got a texture-loading error on visionOS 2.0:
Can't create texture(Error Domain=MTKTextureLoaderErrorDomain Code=0 "Pixel format(MTLPixelFormatInvalid) is not valid on this device" UserInfo={NSLocalizedDescription=Pixel format(MTLPixelFormatInvalid) is not valid on this device, MTKTextureLoaderErrorKey=Pixel format(MTLPixelFormatInvalid) is not valid on this device}
But this texture loads correctly on visionOS 1.3.
I don't know what happened between visionOS 1.3 and visionOS 2.0.
The texture is a KTX file storing a cubemap encoded in astc6x6hdr, and it has a glInternalFormat of GL_COMPRESSED_RGBA_ASTC_6x6.
I wonder if visionOS 2.0 no longer supports the astc6x6hdr cubemap format, or if there is something wrong with my assets.
I am using Xcode 16 and SwiftUI on macOS 15. There is a problem: when I render a picture through an MTKView, it displays normally in a regular view. However, when the view is presented with .sheet, the image is not displayed, and there is no error message from Xcode.
import Foundation
import MetalKit
import SwiftUI

struct CIImageDisplayView: NSViewRepresentable {
    typealias NSViewType = MTKView

    var ciImage: CIImage

    init(ciImage: CIImage) {
        self.ciImage = ciImage
    }

    func makeNSView(context: Context) -> MTKView {
        let view = MTKView()
        view.delegate = context.coordinator
        view.preferredFramesPerSecond = 60
        view.enableSetNeedsDisplay = true
        view.isPaused = true
        view.framebufferOnly = false
        if let defaultDevice = MTLCreateSystemDefaultDevice() {
            view.device = defaultDevice
        }
        return view
    }

    func updateNSView(_ nsView: MTKView, context: Context) {
    }

    func makeCoordinator() -> RawDisplayRender {
        RawDisplayRender(ciImage: self.ciImage)
    }

    class RawDisplayRender: NSObject, MTKViewDelegate {
        // MARK: Metal resources
        var device: MTLDevice!
        var commandQueue: MTLCommandQueue!

        // MARK: Core Image resources
        var context: CIContext!
        var ciImage: CIImage

        init(ciImage: CIImage) {
            self.ciImage = ciImage
            self.device = MTLCreateSystemDefaultDevice()
            self.commandQueue = self.device.makeCommandQueue()
            self.context = CIContext(mtlDevice: self.device)
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let currentDrawable = view.currentDrawable,
                  let commandBuffer = commandQueue.makeCommandBuffer()
            else {
                return
            }
            let dSize = view.drawableSize
            let drawImage = self.ciImage
            let destination = CIRenderDestination(width: Int(dSize.width),
                                                  height: Int(dSize.height),
                                                  pixelFormat: view.colorPixelFormat,
                                                  commandBuffer: commandBuffer,
                                                  mtlTextureProvider: { () -> MTLTexture in
                return currentDrawable.texture
            })
            _ = try? self.context.startTask(toClear: destination)
            _ = try? self.context.startTask(toRender: drawImage, from: drawImage.extent,
                                            to: destination, at: CGPoint(x: (dSize.width - drawImage.extent.width) / 2, y: 0))
            commandBuffer.present(currentDrawable)
            commandBuffer.commit()
        }
    }
}

struct ShowCIImageView: View {
    let cii = CIImage(contentsOf: Bundle.main.url(forResource: "9-10", withExtension: "jpg")!)!

    var body: some View {
        CIImageDisplayView(ciImage: cii).frame(width: 500, height: 500).background(.red)
    }
}

struct ContentView: View {
    @State var showImage = false

    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
            ShowCIImageView()
            Button {
                showImage = true
            } label: {
                Text("showImage")
            }
        }
        .frame(width: 800, height: 800)
        .padding()
        .sheet(isPresented: $showImage) {
            ShowCIImageView()
        }
    }
}
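One hedged thing to try (an assumption, not a confirmed fix): with isPaused = true and enableSetNeedsDisplay = true, the MTKView only draws when it is flagged dirty, and the copy created inside the sheet may never be. Requesting a redraw whenever SwiftUI updates the wrapped view would look like:
func updateNSView(_ nsView: MTKView, context: Context) {
    // Flag the view dirty so the sheet-presented instance gets a draw(in:) call.
    nsView.setNeedsDisplay(nsView.bounds)
}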
In my project I need to do the following:
1. At runtime, create a Metal dynamic library from source.
2. At runtime, create a Metal executable library from source and link it with the previously created dynamic library.
3. Create a compute pipeline using the two libraries created above.
But I get the following error at the third step:
Error Domain=AGXMetalG15X_M1 Code=2 "Undefined symbols:
_Z5noisev, referenced from: OnTheFlyKernel
" UserInfo={NSLocalizedDescription=Undefined symbols:
_Z5noisev, referenced from: OnTheFlyKernel
}
import Foundation
import Metal

class MetalShaderCompiler {
    let device = MTLCreateSystemDefaultDevice()!
    var pipeline: MTLComputePipelineState!

    func compileDylib() -> MTLDynamicLibrary {
        let source = """
        #include <metal_stdlib>
        using namespace metal;

        half3 noise() {
            return half3(1, 0, 1);
        }
        """
        let option = MTLCompileOptions()
        option.libraryType = .dynamic
        option.installName = "@executable_path/libFoundation.metallib"
        let library = try! device.makeLibrary(source: source, options: option)
        let dylib = try! device.makeDynamicLibrary(library: library)
        return dylib
    }

    func compileExlib(dylib: MTLDynamicLibrary) -> MTLLibrary {
        let source = """
        #include <metal_stdlib>
        using namespace metal;

        extern half3 noise();

        kernel void OnTheFlyKernel(texture2d<half, access::read> src [[texture(0)]],
                                   texture2d<half, access::write> dst [[texture(1)]],
                                   ushort2 gid [[thread_position_in_grid]]) {
            half4 rgba = src.read(gid);
            rgba.rgb += noise();
            dst.write(rgba, gid);
        }
        """
        let option = MTLCompileOptions()
        option.libraryType = .executable
        option.libraries = [dylib]
        let library = try! self.device.makeLibrary(source: source, options: option)
        return library
    }

    func runtime() {
        let dylib = self.compileDylib()
        let exlib = self.compileExlib(dylib: dylib)

        let pipelineDescriptor = MTLComputePipelineDescriptor()
        pipelineDescriptor.computeFunction = exlib.makeFunction(name: "OnTheFlyKernel")
        pipelineDescriptor.preloadedLibraries = [dylib]

        pipeline = try! device.makeComputePipelineState(descriptor: pipelineDescriptor, options: .bindingInfo, reflection: nil)
    }
}
The following code runs fine when compiled with Swift 5, but crashes when compiled with Swift 6 (stack trace below). In the draw method, commenting out the addCompletedHandler line fixes the problem. I'm testing on iOS 18.0 and see the same behavior in both the simulator and on a device. What's going on here?
import Metal
import MetalKit
import UIKit

class ViewController: UIViewController {
    @IBOutlet var metalView: MTKView!

    private var commandQueue: MTLCommandQueue?

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let device = MTLCreateSystemDefaultDevice() else {
            fatalError("expected a Metal device")
        }
        self.commandQueue = device.makeCommandQueue()

        metalView.device = device
        metalView.enableSetNeedsDisplay = true
        metalView.isPaused = true
        metalView.delegate = self
    }
}

extension ViewController: MTKViewDelegate {
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let commandQueue,
              let commandBuffer = commandQueue.makeCommandBuffer()
        else { return }
        commandBuffer.addCompletedHandler { _ in } // works with Swift 5, crashes with Swift 6
        commandBuffer.commit()
    }
}
Here's the stack trace:
Thread 10 Queue : connection Queue (serial)
#0 0x000000010581c3f8 in _dispatch_assert_queue_fail ()
#1 0x000000010581c384 in dispatch_assert_queue ()
#2 0x00000002444c63e0 in swift_task_isCurrentExecutorImpl ()
#3 0x0000000104d71ec4 in closure #1 in ViewController.draw(in:) ()
#4 0x0000000104d71f58 in thunk for @escaping @callee_guaranteed (@guaranteed MTLCommandBuffer) -> () ()
#5 0x0000000105ef1950 in __47-[CaptureMTLCommandBuffer _preCommitWithIndex:]_block_invoke_2 ()
#6 0x00000001c50b35b0 in -[MTLToolsCommandBuffer invokeCompletedHandlers] ()
#7 0x000000019e94d444 in MTLDispatchListApply ()
#8 0x000000019e94f558 in -[_MTLCommandBuffer didCompleteWithStartTime:endTime:error:] ()
#9 0x000000019e95352c in -[_MTLCommandQueue commandBufferDidComplete:startTime:completionTime:error:] ()
#10 0x0000000226ef50b0 in handleMainConnectionReplies ()
#11 0x00000001800c9690 in _xpc_connection_call_event_handler ()
#12 0x00000001800cad90 in _xpc_connection_mach_event ()
#13 0x000000010581a86c in _dispatch_client_callout4 ()
#14 0x0000000105837950 in _dispatch_mach_msg_invoke ()
#15 0x0000000105822870 in _dispatch_lane_serial_drain ()
#16 0x0000000105838c10 in _dispatch_mach_invoke ()
#17 0x0000000105822870 in _dispatch_lane_serial_drain ()
#18 0x00000001058237b0 in _dispatch_lane_invoke ()
#19 0x00000001058301f0 in _dispatch_root_queue_drain_deferred_wlh ()
#20 0x000000010582f75c in _dispatch_workloop_worker_thread ()
#21 0x00000001050abb74 in _pthread_wqthread ()
Hi there, I'm trying to test "Drawing fully immersive content using Metal", but when I select Language: Swift, it still shows Objective-C code in some of the samples.
Please check and update the document's Swift code. Thank you.
I wanted to try the new logging feature for Metal but could not get it to work.
I modified the PerformingCalculationsOnAGPU example by adding os_log_default.log_debug("Hello thread: %d", index); to log the current thread id, but I never saw any messages, neither in the console nor in Xcode.
I also added the -fmetal-enable-logging flag. I am running the Sequoia release candidate 15.0 (24A335) on M1 Max and Xcode 16.0 (16A242).
What am I missing?
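A hedged guess at the missing piece, based on the macOS 15 / iOS 18 API surface (treat this sketch as an assumption): besides compiling with -fmetal-enable-logging, the command queue apparently needs an MTLLogState attached for messages to be delivered:
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Assumed setup for shader logging (macOS 15 / iOS 18 APIs).
let logDesc = MTLLogStateDescriptor()
logDesc.level = .debug
logDesc.bufferSize = 16 * 1024
let logState = try device.makeLogState(descriptor: logDesc)
logState.addLogHandler { subsystem, category, level, message in
    print("[shader log] \(message)")
}

// A queue created from this descriptor delivers the shader's os_log output.
let queueDesc = MTLCommandQueueDescriptor()
queueDesc.logState = logState
let commandQueue = device.makeCommandQueue(descriptor: queueDesc)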
I am trying to get a little game prototype up and running using Metal and the metal-cpp libraries, where I run everything natively at 120 Hz with a coupled renderer and vsync turned on, so that I have the absolute minimum physically possible input-to-photon latency.
// Create the metal view
SDL_MetalView metal_view = SDL_Metal_CreateView(window);
CA::MetalLayer *swap_chain = (CA::MetalLayer *)SDL_Metal_GetLayer(metal_view);
// Set up the Metal device
MTL::Device *device = MTL::CreateSystemDefaultDevice();
swap_chain->setDevice(device);
swap_chain->setPixelFormat(MTL::PixelFormat::PixelFormatBGRA8Unorm);
swap_chain->setDisplaySyncEnabled(true);
swap_chain->setMaximumDrawableCount(2);
I am using SDL3 just for creating the window. Now when I go through my game/render loop, I stall for a long time on getting the next drawable, which is understandable; my app runs in about 2-3 ms.
m_CurrentContext->m_Drawable = m_SwapChain->nextDrawable();
m_CurrentContext->m_CommandBuffer = m_CommandQueue->commandBuffer()->retain();
char frame_label[32];
snprintf(frame_label, sizeof(frame_label), "Frame %d", m_FrameIndex);
m_CurrentContext->m_CommandBuffer->setLabel(NS::String::string(frame_label, NS::UTF8StringEncoding));
m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal] = MTL::RenderPassDescriptor::alloc()->init();
MTL::RenderPassColorAttachmentDescriptor* cd = m_CurrentContext->m_RenderPassDescriptor[ERenderPassTypeNormal]->colorAttachments()->object(0);
cd->setTexture(m_CurrentContext->m_Drawable->texture());
cd->setLoadAction(MTL::LoadActionClear);
cd->setClearColor(MTL::ClearColor( 0.53f, 0.81f, 0.98f, 1.0f ));
cd->setStoreAction(MTL::StoreActionStore);
However, my ProMotion display does not reliably run at 120 Hz when fullscreen and using the direct-to-display path; it seems to run faster when windowed and composited, which is the opposite of what I would expect. The Metal HUD says 120 Hz, but the delay in getting the next drawable, and what Instruments shows, tell otherwise.
When I profile it, the game loop has completed and is sitting there waiting for the next drawable, but the screen does not want to complete in 8.33 ms, so the whole thing slows down for no discernible reason.
Also, as a game developer, it is very strange that the command buffer actually needs the drawable's texture to be free before it is allowed to encode commands; usually the command buffers and the swapping of the front and back render buffers are not directly dependent on each other. Usually you only need the render buffer texture to be free when you actually want to draw to it. I could give myself another drawable, but because I am completing in less than 3 ms, all that would do is add another frame of latency.
I also looked at the FramePacing example, and its behaviour is even worse at achieving a high framerate with low latency; its direct-to-display is always rejected for some reason.
Is this just a flaw in the Metal API? Or am I missing something important? I hope someone can help; the behaviour of the display is baffling.
Good day,
my project is simple: first I want to draw wireframe hexahedrons, tetrahedrons, and octahedrons.
I drew a cube with Metal, but I did not find rotation, translation, and scale.
I have searched for help; the examples I found are too complicated for me.
Kind regards,
VanceRegnet
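Metal itself has no built-in rotation, translation, or scale; you build 4x4 matrices yourself (for example with simd) and pass them to your vertex shader. A minimal sketch of hand-rolled helpers (the function names are made up; only simd APIs are used):
import simd
import Foundation

// Hypothetical helpers building column-major matrices, as Metal expects.
func translation(_ t: SIMD3<Float>) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return m
}

func scale(_ s: SIMD3<Float>) -> simd_float4x4 {
    simd_float4x4(diagonal: SIMD4<Float>(s.x, s.y, s.z, 1))
}

func rotationY(_ angle: Float) -> simd_float4x4 {
    let c = cos(angle), s = sin(angle)
    return simd_float4x4(rows: [
        SIMD4<Float>( c, 0, s, 0),
        SIMD4<Float>( 0, 1, 0, 0),
        SIMD4<Float>(-s, 0, c, 0),
        SIMD4<Float>( 0, 0, 0, 1),
    ])
}

// Scale first, then rotate, then translate (matrices apply right-to-left).
let model = translation([0, 0, -5]) * rotationY(.pi / 4) * scale([0.5, 0.5, 0.5])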
I am trying to convert a Three.js project to Metal for the Vision Pro. The issue is that Three.js doesn't do any color space conversion (when I output a color in a fragment shader and then read it using the Digital Color Meter in sRGB mode, I get the same value I put into the fragment shader). This is not the case with Metal. When setting up my LayerRenderer I set the colorFormat to rgba16Unorm, since it is the only non-sRGB color format supported in Vision Pro apps. However, switching between bgra8Unorm_srgb and rgba16Unorm seems to have no effect.
When I set up the renderPassDescriptor I use the drawable's color texture:
renderPassDescriptor.colorAttachments[0].texture = drawable.colorTextures[0]
and when I print its pixel format, it seems to be passed through from the configuration.
If there is any way to disable this behavior, or to apply an inverse function so that I get the original value out of the shader, that would be appreciated.
I'm trying to create a custom Metal-based visual effect as a UIView to be used inside an existing UIKit-based interface. (An example might be a view that applies a blur effect to what's behind it.) I need to capture the MTLTexture of what's behind the view so that I can feed it to MTLRenderCommandEncoder.setFragmentTexture(_:index:). Can someone show me how or point me to an example? Thanks!
Greetings! I have been battling a tough issue. My use case is running a pixelwise regression model on a 2D array of images, using CIImageProcessorKernel and a custom Metal shader.
It mostly works great, but if the regression calculation in Metal takes too long, an error occurs and the resulting output texture has strange artifacts.
The specific error is:
Error excuting command buffer = Error Domain=MTLCommandBufferErrorDomain Code=1 "Internal Error (0000000e:Internal Error)" UserInfo={NSLocalizedDescription=Internal Error (0000000e:Internal Error), NSUnderlyingError=0x60000320ca20 {Error Domain=IOGPUCommandQueueErrorDomain Code=14 "(null)"}} (com.apple.CoreImage)
There are multiple levels of concurrency here: Swift Concurrency calling the Core Image code (which shouldn't have an impact) and, of course, the Metal command buffer.
Is there any way to ensure the compute command encoder can complete its work?
Here is the full implementation of my CIImageProcessorKernel subclass:
class ParametricKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()!

    override class var outputFormat: CIFormat {
        return .BGRA8
    }

    override class func formatForInput(at input: Int32) -> CIFormat {
        return .BGRA8
    }

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String : Any]?, output: CIImageProcessorOutput) throws {
        guard
            let commandBuffer = output.metalCommandBuffer,
            let images = arguments?["images"] as? [CGImage],
            let mask = arguments?["mask"] as? CGImage,
            let fillTime = arguments?["fillTime"] as? CGFloat,
            let betaLimit = arguments?["betaLimit"] as? CGFloat,
            let alphaLimit = arguments?["alphaLimit"] as? CGFloat,
            let errorScaling = arguments?["errorScaling"] as? CGFloat,
            let timing = arguments?["timing"],
            let TTRThreshold = arguments?["ttrthreshold"] as? CGFloat,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture
        else {
            return
        }
        guard let kernelFunction = device.makeDefaultLibrary()?.makeFunction(name: "parametric") else {
            return
        }
        guard let commandEncoder = commandBuffer.makeComputeCommandEncoder() else {
            return
        }

        let imagesTexture = Texture.textureFromImages(images)
        let pipelineState = try device.makeComputePipelineState(function: kernelFunction)
        commandEncoder.setComputePipelineState(pipelineState)

        commandEncoder.setTexture(imagesTexture, index: 0)
        let maskTexture = Texture.textureFromImages([mask])
        commandEncoder.setTexture(maskTexture, index: 1)
        commandEncoder.setTexture(destinationTexture, index: 2)

        var errorScalingFloat = Float(errorScaling)
        let errorBuffer = device.makeBuffer(bytes: &errorScalingFloat, length: MemoryLayout<Float>.size, options: [])
        commandEncoder.setBuffer(errorBuffer, offset: 0, index: 1)
        // Other buffers omitted....

        let threadsPerThreadgroup = MTLSizeMake(16, 16, 1)
        let width = Int(ceil(Float(sourceTexture.width) / Float(threadsPerThreadgroup.width)))
        let height = Int(ceil(Float(sourceTexture.height) / Float(threadsPerThreadgroup.height)))
        let threadGroupCount = MTLSizeMake(width, height, 1)
        commandEncoder.dispatchThreadgroups(threadGroupCount, threadsPerThreadgroup: threadsPerThreadgroup)
        commandEncoder.endEncoding()
    }
}
The Metal feature set tables specify that, beginning with the Apple4 family, the "Maximum threads per threadgroup" is 1024. Given that a single threadgroup is guaranteed to run on the same GPU shader core, this means a shader core of any recent Apple GPU must be capable of running at least 1024/32 = 32 warps in parallel.
From the WWDC session "Scale compute workloads across Apple GPUs" (6:17):
For relatively complex kernels, 1K to 2K concurrent threads per shader core is considered a very good occupancy.
The cited sentence suggests that a single shader core is capable of running at least 2K (I assume this means 2048) threads in parallel, i.e. 2048/32 = 64 warps in parallel.
However, I am curious what the theoretical maximum number of warps running in parallel on a single shader core is (it sounds like it is more than 64). The WWDC session calls 2K only "very good" occupancy. How many threads would be "the best possible" occupancy?
Our app encountered the following error:
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
How many 32-bit variables can I use concurrently in a single thread of a Metal compute kernel without worrying about the variables being spilled into device memory? Alternatively: how many 32-bit registers does a single thread have available to itself?
Let's say each thread of my compute kernel needs to store and work with its own array of N float variables, where N can be 128, 256, 512, or more. To achieve the maximum possible performance, I do not want the local thread variables to be spilled into slow device memory; I want all N variables to be stored "on chip", in the thread address space.
To make the question more concrete, say there is an array thread float localArray[N]. Assuming an unrealistic hypothetical scenario where localArray is the only variable in the whole kernel, what is the maximum value of N for which no portion of localArray would be spilled into device memory?
I searched the Metal feature set tables but could not find any details.
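The tables indeed do not list register counts. One indirect probe (a sketch, with a hypothetical kernel name) is to compile the kernel and read back the occupancy ceiling Metal reports, since maxTotalThreadsPerThreadgroup shrinks as per-thread register usage grows:
import Metal

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!

// "spillKernel" is assumed to be a kernel declaring `thread float localArray[N]`.
let function = library.makeFunction(name: "spillKernel")!
let pipeline = try device.makeComputePipelineState(function: function)

// 1024 suggests no occupancy limit from register pressure; smaller values mean
// the compiler lowered the threadgroup ceiling, typically due to register use.
print("maxTotalThreadsPerThreadgroup:", pipeline.maxTotalThreadsPerThreadgroup)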