Is there a working example of imageblock_slice with implicit layout somewhere?
I get a compilation error when I write this:
imageblock_slice color_slice = img_blk.slice(frag->color);
Error:
No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'
It seems the syntax has changed since the Imageblocks presentation https://developer.apple.com/videos/play/tech-talks/603/
I tried supplying the struct type of the imageblock between the angle brackets, but it still does not work.
Metal Performance Shaders
Optimize graphics and compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family using Metal Performance Shaders.
Posts under the Metal Performance Shaders tag (26 posts)
I am working on a custom resolve tile shader for a client. I see a big difference in performance depending on where we write to:
1- the resolve texture of the color attachment
2- a rw tile shader texture set via [renderEncoder setTileTexture: myResolvedTexture]
Option 2 is more than twice as slow as option 1.
Our compute shader writes to 4 UAVs, so writing only to the resolve texture is not possible.
Why is there such a difference when no additional data is being written? Can option 2 be as fast as option 1?
I can demonstrate the issue in a modified version of the Multisample code sample.
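For reference, here is a minimal Swift sketch of the two configurations being compared. The texture sizes, pixel formats, and names (msaaColorTexture, resolvedTexture) are placeholders of mine, not the client project's actual setup:

import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!

// Placeholder textures standing in for the MSAA render target and the resolve target.
let msaaDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: 1024, height: 1024, mipmapped: false)
msaaDesc.textureType = .type2DMultisample
msaaDesc.sampleCount = 4
msaaDesc.usage = .renderTarget
msaaDesc.storageMode = .private
let msaaColorTexture = device.makeTexture(descriptor: msaaDesc)!

let resolveDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: 1024, height: 1024, mipmapped: false)
resolveDesc.usage = [.renderTarget, .shaderRead, .shaderWrite]
let resolvedTexture = device.makeTexture(descriptor: resolveDesc)!

let passDesc = MTLRenderPassDescriptor()
passDesc.colorAttachments[0].texture = msaaColorTexture
passDesc.colorAttachments[0].loadAction = .clear

// Option 1: write the resolved result through the attachment's resolve texture.
passDesc.colorAttachments[0].resolveTexture = resolvedTexture
passDesc.colorAttachments[0].storeAction = .multisampleResolve

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDesc)!
// Option 2: instead, bind a read-write texture to the tile stage and have the custom
// resolve tile shader write into it.
encoder.setTileTexture(resolvedTexture, index: 0)
// ... set the tile render pipeline state and dispatchThreadsPerTile(...) here ...
encoder.endEncoding()
commandBuffer.commit()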
I am running the same Python script using the TensorFlow Metal module on computers with M3 and M4 GPUs. While 1 epoch takes 5 minutes on the M3 device, it takes 15 minutes on the M4 device. What could be the reason for this? Could it be that TensorFlow Metal is not yet optimized for the M4 architecture?
When building the MLModel, it is set to use the NPU (Neural Engine). It seems that the GPU is used during inference, but it crashes during compilation.
The stack trace is as follows:
Project: I have some data which could be transformed by a shader, and the result can be kept in the RGB channels of an image. Great.
But how do I mix dozens of those results? Not one by one, image after image, but all at once, producing something like a "complicated average" color for each pixel across all of the delivered images.
Is that possible?
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.
func sliceUpdateDataTensor(
_ dataTensor: MPSGraphTensor,
update updateTensor: MPSGraphTensor,
starts: [NSNumber],
ends: [NSNumber],
strides: [NSNumber],
startMask: UInt32,
endMask: UInt32,
squeezeMask: UInt32,
name: String?
) -> MPSGraphTensor
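For illustration, here is a rough, untested sketch of how this might be used to paste a smaller tensor into a larger canvas. The shapes, placeholder names, and offsets below are made up, not taken from any sample:

import MetalPerformanceShadersGraph

let graph = MPSGraph()

// Canvas: 1 x 512 x 512 x 3 (NHWC); patch: 1 x 128 x 128 x 3, pasted at (y: 100, x: 200).
let canvas = graph.placeholder(shape: [1, 512, 512, 3], dataType: .float32, name: "canvas")
let patch  = graph.placeholder(shape: [1, 128, 128, 3], dataType: .float32, name: "patch")

let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts: [0, 100, 200, 0],   // where the patch begins in the canvas
    ends:   [1, 228, 328, 3],   // starts plus the patch extent along each dimension
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: nil)
// Feed canvas/patch as MPSGraphTensorData and fetch `pasted` via graph.run(...) as usual.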
Hello,
We are experimenting with Metal to accelerate some peculiar numerical computation. Our workloads are relatively small, so the ability to avoid moving data to and from the GPU's memory is very appealing. However, we are observing higher overhead compared to CUDA, which negates the benefits of avoiding data transfer.
In our tests using an empty kernel, CUDA completes in 0.001 ms (Intel i7 10700K, RTX 3080), while Metal's waitUntilCompleted takes 0.12 ms (M2 Max). As we do not have prior experience with Metal, we are wondering whether we are using the APIs correctly and this timing is expected, or whether there is a way to reduce it.
Thank you in advance for any comment!
test-metal.cpp
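For illustration, a minimal Swift sketch (separate from the attached C++ test) of timing an empty command buffer, separating the CPU round-trip latency of waitUntilCompleted from the GPU time the command buffer itself reports; all names here are my own:

import Metal
import QuartzCore

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Warm up so one-time lazy initialization is not included in the measurement.
for _ in 0..<10 {
    let warmup = queue.makeCommandBuffer()!
    warmup.commit()
    warmup.waitUntilCompleted()
}

let start = CACurrentMediaTime()
let commandBuffer = queue.makeCommandBuffer()!
// An empty compute pass (or nothing at all) would be encoded here.
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
let cpuRoundTrip = CACurrentMediaTime() - start
let gpuTime = commandBuffer.gpuEndTime - commandBuffer.gpuStartTime

print("CPU round trip: \(cpuRoundTrip * 1000) ms, GPU time: \(gpuTime * 1000) ms")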
Hi!
How do I define and call an inline function in Metal, or a simple function that returns some value?
Case:
// Assumes a shape struct along the lines of: struct _4D { uint N, C, H, W; };
inline uint index4D(constant _4D& shape,
                    constant uint& n,
                    constant uint& c,
                    constant uint& h,
                    constant uint& w) {
    return n * shape.C * shape.H * shape.W + c * shape.H * shape.W + h * shape.W + w;
}
When I call it from my kernel function I get a "No matching function for call" error.
Thanks in advance.
When generating large arrays of random numbers, NaNs show up. They also show up at the same indices when using the same seed, leading me to believe that this is a bug in MPSMatrixRandom's normally distributed Float32 generator.
This happens with both Philox and MTGP32.
Is this intentional, and how do I work around it?
See the original post for a MWE in Swift and Julia: https://github.com/JuliaGPU/Metal.jl/issues/474
Hey guys,
is it possible to implement mirror-like reflections, as in this project:
https://developer.apple.com/documentation/metal/metal_sample_code_library/rendering_reflections_in_real_time_using_ray_tracing
on visionOS? Or is the hardware not capable of Metal ray tracing?
Thanks in advance
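One way to avoid guessing about the hardware is to query the device at runtime; a small sketch (whether the visionOS device or simulator actually reports support is exactly the open question):

import Metal

if let device = MTLCreateSystemDefaultDevice() {
    // Reports whether the Metal ray tracing API (acceleration structures, intersection queries)
    // is available on this device and OS combination.
    print("Ray tracing supported: \(device.supportsRaytracing)")
    // The reflections sample also leans on newer GPU family features.
    print("Supports Apple7 family: \(device.supportsFamily(.apple7))")
}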
Screenshot:
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
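The assertion suggests checking the device's read-write texture tier before building the pipeline; a small sketch of that check (that the simulator reports a lower tier than hardware is my assumption here):

import Metal

let device = MTLCreateSystemDefaultDevice()!
switch device.readWriteTextureSupport {
case .tier2:
    // Tier 2 allows read-write access for more formats (e.g. rgba32Float, rgba16Float, rgba8Unorm).
    print("Read-write texture tier 2")
case .tier1:
    // Tier 1 limits read-write access to formats such as r32Float, r32Uint, r32Sint.
    print("Read-write texture tier 1")
default:
    print("No read-write texture support")
}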
We convert a .onnx file to an mpsgraphpackage for the iOS deployment platform with the command
"mpsgraphtool convert -deploymentPlatform iOS -minimumDeploymentTarget 17.0.0 model.onnx -path ."
When we open output.mpsgraphpackage with Xcode 16, there are only "generic" and "Apple M2 (MTLDevice)" options in the "Device" selection list. We cannot find any option for an iOS device.
How can we view an mpsgraph compiled for the iOS platform?
We use Xcode 16 on a MacBook Pro M2 with macOS 15.
We generated output.mpsgraphpackage from a sample.onnx file with the command "mpsgraphtool convert -onnx sample.onnx -path ./".
When we try to open the output.mpsgraphpackage with Xcode 16, Xcode shows the message "A newer macOS is required to view this file".
We have macOS version 14.7 and Xcode version 16.0 (16A242d).
What newer macOS is required?
I understand we can use MPSImageBatch as input to
[MPSNNGraph encodeBatchToCommandBuffer: ...]
method.
That being said, all inputs to the MPSNNGraph need to be encapsulated in MPSImage objects.
Suppose I have a machine learning application that trains/infers on thousands of inputs, where each input has 4 feature channels. Metal Performance Shaders is chosen as the primary AI backbone for real-time use.
Due to the nature of the encodeBatchToCommandBuffer method, I first have to create an MTLTexture as a 2D texture array. The texture has a pixel width of 1, a height of 1, and a pixel format of RGBA32Float.
The general setup will be:
#define NumInputDims 4

MPSImageBatch * infBatch = @[];
const uint32_t totalFeatureSets = N;

// Each slice is 4 (RGBA) channels.
const uint32_t totalSlices = (totalFeatureSets * NumInputDims + 3) / 4;

MTLTextureDescriptor * descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatRGBA32Float
                                                        width: 1
                                                       height: 1
                                                    mipmapped: NO];
descriptor.textureType = MTLTextureType2DArray;
descriptor.arrayLength = totalSlices;

id<MTLTexture> texture = [mDevice newTextureWithDescriptor: descriptor];

// Bytes per row is `4 * sizeof(float)` since we're writing one pixel of RGBA32Float.
[texture replaceRegion: MTLRegionMake3D(0, 0, 0, 1, 1, totalSlices)
           mipmapLevel: 0
             withBytes: inputFeatureBuffers[0].data()
           bytesPerRow: 4 * sizeof(float)];

MPSImage * infQueryImage = [[MPSImage alloc] initWithTexture: texture
                                              featureChannels: NumInputDims];
infBatch = [infBatch arrayByAddingObject: infQueryImage];
The training/inference will be:
MPSNNGraph * mInferenceGraph = /* some MPSNNGraph setup */;
MPSImageBatch * returnImage = [mInferenceGraph encodeBatchToCommandBuffer: commandBuffer
                                                              sourceImages: @[infBatch]
                                                              sourceStates: nil
                                                        intermediateImages: nil
                                                         destinationStates: nil];
// Commit and wait...
// Read the return image for the inferred result.
As you can see, the setup is really ad hoc: a lot of 1x1 pixels just for this sole purpose.
Is there any better way I can achieve the same result while staying on Metal Performance Shaders? I guess a further question would be: can MPS handle general machine learning cases other than CNNs? From both the online documentation and the header files, the APIs appear to revolve around convolutional networks.
Any response will be helpful, thank you.
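Not an authoritative answer, but for non-convolutional workloads MPSGraph may be worth a look, since it operates on arbitrarily shaped tensors rather than MPSImage batches. A rough sketch of a dense (matrix multiply) step, with made-up shapes and zero-filled placeholder data:

import Metal
import MetalPerformanceShadersGraph

let device = MTLCreateSystemDefaultDevice()!
let graph = MPSGraph()

// 1024 inputs with 4 feature channels each, multiplied by a 4x8 weight matrix.
let input   = graph.placeholder(shape: [1024, 4], dataType: .float32, name: "input")
let weights = graph.placeholder(shape: [4, 8],    dataType: .float32, name: "weights")
let output  = graph.matrixMultiplication(primary: input, secondary: weights, name: "dense")

// Wrap CPU-side Float32 buffers as tensor data and run the graph.
let gDevice = MPSGraphDevice(mtlDevice: device)
let inputBytes   = Data(count: 1024 * 4 * MemoryLayout<Float>.size)
let weightsBytes = Data(count: 4 * 8 * MemoryLayout<Float>.size)
let inputData   = MPSGraphTensorData(device: gDevice, data: inputBytes,   shape: [1024, 4], dataType: .float32)
let weightsData = MPSGraphTensorData(device: gDevice, data: weightsBytes, shape: [4, 8],    dataType: .float32)
let results = graph.run(feeds: [input: inputData, weights: weightsData],
                        targetTensors: [output],
                        targetOperations: nil)
print(results[output] as Any)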
Hi, I have recently been writing Metal Shading Language to parallelize our algorithms and speed them up.
I created a simple example to show the acceleration result. Since Rust is used in our algorithm, I used metal-rs as the wrapper to execute the MSL kernels from the Rust side.
In this example, I am computing the element-wise sum of two arrays, and the kernel looks like:
kernel void two_array_addition_2(
    constant uint* a [[buffer(0)]],
    constant uint* b [[buffer(1)]],
    device uint* c [[buffer(2)]],
    uint idx [[thread_position_in_grid]]
) {
    c[idx] = a[idx] + b[idx];
}
In main.rs, you can see a function called execute_kernel(); this function has everything it needs to execute the MSL kernel (commandEncoder, pipelineState, etc.).
use core::mem;
use metal::{Buffer, MTLSize};
use objc::rc::autoreleasepool;
use std::time::Instant;
use two_array_addition::abstractions::state::MetalState;
fn execute_kernel(
    name: &str,
    state: &MetalState,
    input_a: &Buffer,
    input_b: &Buffer,
    output_c: &Buffer,
) -> Vec<u32> {
    // assert!(input_a.len() == input_b.len() && input_a.len() == output_c.len());
    // let len = input_a.len() as u64;
    let len = input_a.length() as u64 / mem::size_of::<u32>() as u64;

    // 1. Init the MetalState
    //    - we inited it
    // 2. Set up Pipeline State
    let pipeline = state.setup_pipeline(name).unwrap();
    // 3. Allocate the buffers for A, B, and C
    //    - we allocated outside of this function
    let mut result: &[u32] = &[];

    autoreleasepool(|| {
        // 4. Create the command buffer & command encoder
        let (command_buffer, command_encoder) = state.setup_command(
            &pipeline,
            Some(&[(0, input_a), (1, input_b), (2, output_c)]),
        );

        // 5. Command encoder dispatches the threadgroup size and number of threads per threadgroup
        let threadgroup_count = MTLSize::new((len + 256 - 1) / 256, 1, 1);
        let thread_per_threadgroup = MTLSize::new(256, 1, 1);
        // let grid_size = MTLSize::new(len, 1, 1);
        // let threadgroup_count = MTLSize::new(pipeline.max_total_threads_per_threadgroup(), 1, 1);

        command_encoder.dispatch_thread_groups(threadgroup_count, thread_per_threadgroup);
        command_encoder.end_encoding();
        command_buffer.commit();
        command_buffer.wait_until_completed();

        // 6. Copy the result back to the host
        let start = Instant::now();
        result = MetalState::retrieve_contents::<u32>(output_c);
        let duration = start.elapsed();
        println!("Duration for copying result back to host: {:?}", duration);
    });

    result.to_vec()
}
The performance of the result is kinda interesting to me.
This is the result:
$ cargo run -r
This is expected to run for a while... please wait...
Generating input arrays...
Generating input arrays...
Generating output array...
Generating expected output...
Duration for allocating buffers: 2.015258s
Executing 1st kernel (1)...
Duration for copying result back to host: 5.75µs
Executing 1st kernel (2)...
Duration for copying result back to host: 542ns
Executing 2nd kernel (1)...
Duration for copying result back to host: 1µs
Executing 2nd kernel (2)...
Duration for copying result back to host: 458ns
Duration expected: 183.406167ms
Duration for 1st kernel (1): 1.894994875s
Duration for 1st kernel (2): 537.318208ms
Duration for 2nd kernel (1): 501.33275ms
Duration for 2nd kernel (2): 497.339916ms
You have successfully run the kernels!
Execution through the MSL kernels is slower than the expected duration, even though I reckon the dataset is quite big ($2^{29}$ elements).
The first kernel execution also takes noticeably longer to launch.
Is there any way to optimize the MSL in this case? And in general, when you design an algorithm for parallelism, what are the concerns?
The machine I am using is an M1 Pro with a 14-core GPU and 16 GB of memory.
Does anyone have an idea or explanation for why this happens?
Thank you
arScnView = ARSCNView(frame: CGRect.zero, options: nil)
arScnView.delegate = self
arScnView.automaticallyUpdatesLighting = true
arScnView.allowsCameraControl = true
addSubview(arScnView)
arSession = arScnView.session
arSession.delegate = self
config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .meshWithClassification
config.environmentTexturing = .automatic
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.forEach({ anchor in
        if let meshAnchor = anchor as? ARMeshAnchor {
            let node = meshAnchor.toSCNNode()
            self.arScnView.scene.rootNode.addChildNode(node)
        }
        if let environmentProbeAnchor = anchor as? AREnvironmentProbeAnchor {
            // Can I retrieve the texture map corresponding to ARMeshAnchor from the environment probe anchor?
            // Or how can I retrieve the texture map corresponding to ARMeshAnchor?
        }
    })
}
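Regarding the commented question above: as far as I know, AREnvironmentProbeAnchor does not carry a per-mesh texture map; it exposes a reflection cube map through its environmentTexture property. A small sketch of reading it (handleProbe is a hypothetical helper, and whether this texture is usable as a mesh texture is a separate question):

import ARKit

func handleProbe(_ probeAnchor: AREnvironmentProbeAnchor) {
    guard let cubeMap = probeAnchor.environmentTexture else { return }
    // `cubeMap` is an MTLTexture cube map centered on the probe. It is intended for
    // reflective / physically based materials, not as a UV-mapped texture for the mesh.
    print("Probe texture: \(cubeMap.width)x\(cubeMap.height), type: \(cubeMap.textureType.rawValue)")
}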
How can I scan a 3D scene and save it as USDZ?
I want to achieve the following scenario:
I'm making an app that reads a ProRes file, processes each frame through Metal to resize and scale it, then outputs a new ProRes file. In the future the app will support other codecs, but for now just ProRes. I'm reading the ProRes 422 buffers in the kCVPixelFormatType_422YpCbCr16 pixel format. This is what's recommended by Apple in this video https://developer.apple.com/wwdc20/10090?time=599.
When the MTLTexture is run through a Metal Performance Shaders kernel, the colorspace seems to be forced to RGB, or YCbCr textures are simply not supported, as the output is all green/purple. If you look at the render code, you will see there's a commented-out block that just blit-copies the outputTexture; if you perform the copy instead of scaling through MPS, the output colorspace is fine. So it appears the issue comes from Metal Performance Shaders.
Side note: I noticed that when using this format, the YpCbCr texture comes in as a single plane. I thought it was preferred to handle this as two separate planes? That said, two separate planes would make my app more complicated, as I would need to scale both planes or merge them to RGB. But I'm going for the best performance possible.
A sample project can be found here: https://www.dropbox.com/scl/fo/jsfwh9euc2ns2o3bbmyhn/AIomDYRhxCPVaWw9XH-qaN0?rlkey=sp8g0sb86af1u44p3xy9qa3b9&dl=0
Inside the supporting files, there is a test movie. For ease, I would move this somewhere easily accessible (i.e. the Desktop).
Load and run the example project.
Click 'Select Video'
Select that video you placed on your desktop
It will now output a new video next to the selected one, named "Output.mov"
The new video should just be scaled at 50%, but the colorspace is all wrong.
Below is a photo of before and after the metal performance shader.
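If the two-plane route is taken, one possible (not necessarily fastest) sketch is to scale each plane separately with MPSImageBilinearScale. The pixel formats and sizes below are my placeholders for what the CVPixelBuffer planes might map to, not values from the sample project:

import Metal
import MetalPerformanceShaders

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Placeholder plane textures; in the real app these would wrap the pixel buffer planes
// (e.g. a single-channel luma plane and an interleaved CbCr plane).
func makePlane(_ format: MTLPixelFormat, _ width: Int, _ height: Int) -> MTLTexture {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: format, width: width, height: height, mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    return device.makeTexture(descriptor: desc)!
}
let lumaIn    = makePlane(.r16Unorm, 1920, 1080)
let chromaIn  = makePlane(.rg16Unorm, 960, 1080)   // 4:2:2: chroma is half width
let lumaOut   = makePlane(.r16Unorm, 960, 540)
let chromaOut = makePlane(.rg16Unorm, 480, 540)

let commandBuffer = queue.makeCommandBuffer()!
let scaler = MPSImageBilinearScale(device: device)
// With no scale transform set, MPSImageScale fits the source into the destination texture.
scaler.encode(commandBuffer: commandBuffer, sourceTexture: lumaIn, destinationTexture: lumaOut)
scaler.encode(commandBuffer: commandBuffer, sourceTexture: chromaIn, destinationTexture: chromaOut)
commandBuffer.commit()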
Here is the test code, run in a macOS app (macOS 15 beta 3).
If the executable path does not contain Chinese characters, everything goes as we expect. Otherwise (simply place the executable in a directory with a Chinese name), the MTLLibrary we create with newLibraryWithSource: contains no functions, and we just get these logs:
"Library contains the following functions: {}"
"Function 'squareKernel' not found."
Note: macOS 14 works fine.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
if (!device) {
    NSLog(@"Metal is not supported.");
}

NSString *shaderSource = @
    "#include <metal_stdlib>\n"
    "using namespace metal;\n"
    "kernel void squareKernel(device float* data [[buffer(0)]], uint gid [[thread_position_in_grid]]) {\n"
    " data[gid] *= data[gid];\n"
    "}";

MTLCompileOptions *options = [[MTLCompileOptions alloc] init];
options.languageVersion = MTLLanguageVersion2_0;

NSError *error = nil;
id<MTLLibrary> library = [device newLibraryWithSource:shaderSource options:options error:&error];
if (error) {
    NSLog(@"New MTLLibrary error: %@", error);
}

NSArray<NSString *> *functionNames = [library functionNames];
NSLog(@"Library contains the following functions: %@", functionNames);

id<MTLFunction> computeShaderFunction = [library newFunctionWithName:@"squareKernel"];
if (computeShaderFunction) {
    NSLog(@"Found function 'squareKernel'.");

    NSError *pipelineError = nil;
    id<MTLComputePipelineState> pipelineState = [device newComputePipelineStateWithFunction:computeShaderFunction error:&pipelineError];
    if (pipelineError) {
        NSLog(@"Create pipeline state error: %@", pipelineError);
    }
    NSLog(@"Create pipeline state succeeded!");
} else {
    NSLog(@"Function 'squareKernel' not found.");
}
Unity 2022.3.33f1
For some reason, modifying a MeshRenderer material with SetVectorArray doesn't work on iOS, but it works in Android and Windows builds.
I was working on Fog Of War, using SimpleFOW by Revision3; it's a very simple FOW shader that manipulates the alpha based on UV coordinates.
This is the FogOfWarShaderControl.cs script
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace SimpleFOW
{
    [RequireComponent(typeof(MeshRenderer))]
    public class FogOfWarShaderControl : MonoBehaviour
    {
        public static FogOfWarShaderControl Instance { get; private set; }

        [Header("Maximum amount of revealing points")]
        [SerializeField] private uint maximumPoints = 512;

        [Header("Game Camera")]
        [SerializeField] private Camera mainCamera;

        private List<Vector4> points = new List<Vector4>();
        private Vector2 meshSize, meshExtents;
        private Vector4[] sendBuffer;
        private MeshRenderer meshRenderer;

        private void Awake()
        {
            Instance = this;
            Init();
        }

        // Initialize required variables
        public void Init()
        {
            meshRenderer = GetComponent<MeshRenderer>();
            meshExtents = meshRenderer.bounds.extents;
            meshSize = meshRenderer.bounds.size;
            points = new List<Vector4>();
            sendBuffer = new Vector4[maximumPoints];
        }

        // Transform world point to UV coordinate of FOW mesh
        public Vector2 WorldPointToMeshUV(Vector2 wp)
        {
            Vector2 toRet = Vector2.zero;
            toRet.x = (transform.position.x - wp.x + meshExtents.x) / meshSize.x;
            toRet.y = (transform.position.y - wp.y + meshExtents.y) / meshSize.y;
            return toRet;
        }

        // Show or hide FOW
        public void SetEnabled(bool on)
        {
            meshRenderer.enabled = on;
        }

        // Add revealing point to FOW renderer if amount of points is lower than MAX_POINTS
        public void AddPoint(Vector2 worldPoint)
        {
            if (points.Count < maximumPoints)
            {
                points.Add(WorldPointToMeshUV(worldPoint));
            }
        }

        // Remove FOW revealing point
        public void RemovePoint(Vector2 worldPoint)
        {
            if (worldPoint == new Vector2(0, 0))
            {
                return;
            }
            if (points.Contains(WorldPointToMeshUV(worldPoint)))
            {
                points.Remove(WorldPointToMeshUV(worldPoint));
            }
        }

        // Send any change to revealing point list to shader for rendering
        public void SendPoints()
        {
            points.ToArray().CopyTo(sendBuffer, 0);
            meshRenderer.material.SetVectorArray("_PointArray", sendBuffer);
            meshRenderer.material.SetInt("_PointCount", points.Count);
        }

        // Send new range value to shader
        public void SendRange(float range)
        {
            meshRenderer.material.SetFloat("_RadarRange", range);
        }

        // Send new scale value to shader
        public void SendScale(float scale)
        {
            meshRenderer.material.SetFloat("_Scale", scale);
        }
    }
}
And this is the FogOfWar.shader
Shader "Revision3/FogOfWar"
{
Properties
{
_MainTex ("Texture", 2D) = "black" {}
_PointCount("Point count", Range(0,512)) = 0
_Scale("Scale", Float) = 1.0
_RadarRange("Range", Float) = .5
_MaxAlpha("Maximum Alpha", Float) = 1.0
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 100
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// make fog work
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float _RadarRange;
uint _PointCount;
float _Scale;
float _MaxAlpha;
float2 _PointArray[512];
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
float getDistance(float2 pa[512], float2 uv) {
float cdist = 99999.0;
for (uint i = 0; i < _PointCount; i++) {
cdist = min(cdist, distance(pa[i]*_Scale, uv));
}
return cdist;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
i.uv *= _Scale;
if (_PointCount > 0)
col.w = min(_MaxAlpha, max(0.0f, getDistance(_PointArray, i.uv) - _RadarRange));
else
col.w = _MaxAlpha;
return col;
}
ENDCG
}
}
}
Now I create a GameObject called FogOfWar as follows.
Then, in the Unit.cs and Building.cs scripts, I add the following logic:
private Vector3 lastPos;

private void Update()
{
    if (lastPos != transform.position)
    {
        FogOfWarShaderControl.Instance.RemovePoint(lastPos);
        FogOfWarShaderControl.Instance.AddPoint(transform.position);
        lastPos = transform.position;
        FogOfWarShaderControl.Instance.SendPoints();
    }
}
Now this gives me the following FOW effect on iOS,
whereas the result should look as follows (as on other devices).
I don't know what causes this to happen only on iOS devices.
The logic works fine on Android/Windows/Linux/editor, but not on iOS devices.
So why does the Metal API not support setting a shader vector array?
What is best to pass to the options parameter of:
MTLDevice.makeBuffer(length:options:)
MTLDevice.makeBuffer(bytes:length:options:)
MTLDevice.makeBuffer(bytesNoCopy:length:options:deallocator:)
Basically I'm looking for a "plain English" explanation of the MTLResourceOptions doc page.
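As a rough rule of thumb (my summary, not the documentation's wording), the storage mode is the part of MTLResourceOptions that matters most; a sketch of typical choices:

import Metal

let device = MTLCreateSystemDefaultDevice()!

// Shared: visible to both CPU and GPU; the usual default on Apple silicon for data
// the CPU writes and the GPU reads (or vice versa).
let shared = device.makeBuffer(length: 4096, options: .storageModeShared)

// Private: GPU-only; good for data the CPU never touches. It must be filled with a
// blit from a shared staging buffer, so makeBuffer(bytes:...) cannot be used with it.
let gpuOnly = device.makeBuffer(length: 4096, options: .storageModePrivate)

// Write-combined CPU cache: for buffers the CPU only writes sequentially and never
// reads back (e.g. per-frame uniform uploads).
let upload = device.makeBuffer(length: 4096,
                               options: [.storageModeShared, .cpuCacheModeWriteCombined])

On Intel Macs there is also .storageModeManaged (separate CPU and GPU copies kept in sync), and makeBuffer(bytesNoCopy:...) additionally requires a page-aligned allocation, as far as I recall.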