Hello,
Question re: iOS RealityView postProcess. I've got a working postProcess kernel and I'd like to add some depth-based effects to it. Theoretically I should be able to just do:
encoder.setTexture(context.sourceDepthTexture, index: 1)
and then in the kernel:
texture2d<float, access::read> depthIn [[texture(1)]]
...
outTexture.write(depthIn.read(gid), gid);
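Put together, the kernel is essentially this (a simplified sketch; the kernel name and the scale factor in the remap are only for illustration, so that any non-zero depth becomes visible):
#include <metal_stdlib>
using namespace metal;

kernel void postProcessDepthDebug(texture2d<half, access::write> outTexture [[texture(0)]],
                                  texture2d<float, access::read> depthIn    [[texture(1)]],
                                  uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= outTexture.get_width() || gid.y >= outTexture.get_height()) {
        return;
    }
    // Scene depth is non-linear, so scale it purely for visualization;
    // if the depth texture is genuinely empty / NaN this still shows nothing useful.
    float d = depthIn.read(gid).r;
    half v = half(saturate(d * 10.0));
    outTexture.write(half4(v, v, v, 1.0h), gid);
}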
But I consistently see all black rendered to the view. The postProcess shader itself works, so that's not the issue; it just doesn't seem to be receiving actual depth information.
(If I set a breakpoint at the encoder setTexture step, I can preview the color texture of the scene, but the context's depthTexture looks all NaN / blank.)
I've looked at all the WWDC samples, but all of the depth sample code uses ARView, which has a different set of configuration options than RealityView. So far I haven't seen anywhere to explicitly tell RealityView to "include the depth information", so I'm not sure if I'm missing something there.
It appears that there is indeed a depth texture being passed, but it looks blank.
Is there a working example somewhere that we can reference?
I was encountering visual artifacts in my Unity game, only on iOS devices, and wanted to use the Metal Debugger to diagnose the issue. However, the capture seems to contain only static noise, which is not helpful, as shown below.
For reference, this is a frame showing the visual artifact, and also the location where I asked Metal to capture the frame.
Am I missing any settings in Unity/Xcode?
Fundamentally, my questions are: is there a known transform I can apply to a given (pixel) position (passed into a Metal fragment function) to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?
My goal is to highlight people in a Vision Pro app using Compositor Services.
To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:
minFrameDuration: 0.03
maxFrameDuration: 0.033333335
frameSize: (1920.0, 1080.0)
pixelFormat: 875704422 (= '420f', kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
cameraType: main
cameraPositions: [left, right]
cameraRectification: mono
From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer).
I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:
// NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake.
var request = VNGeneratePersonSegmentationRequest()
request.qualityLevel = .balanced
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
...
let lHandler = VNSequenceRequestHandler()
let rHandler = VNSequenceRequestHandler()
...
func processBuffers() async {
    try lHandler.perform([request], on: lBuffer)
    guard let lMask = request.results?.first?.pixelBuffer else { ... }
    try rHandler.perform([request], on: rBuffer)
    guard let rMask = request.results?.first?.pixelBuffer else { ... }
    appModel.latestPersonMasks = (lMask, rMask)
}
I store the two resulting CVPixelBuffers in my appModel. For both of these buffers aka grayscale masks:
width (in pixels) = 512
height (in pixels) = 384
bytes per row = 512
plane count = 0
pixel format type = 1278226488 (= 'L008', kCVPixelFormatType_OneComponent8)
I am using Compositor Services to render my content in an Immersive Space. My implementation of Compositor Services is based on the sample code from "Interacting with virtual content blended with passthrough".
Within Shaders.metal, the tint's fragment shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline).
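(The conversion helper looks roughly like this; a simplified sketch that assumes a CVMetalTextureCache created once up front, and that .r8Uint is the right match for the texture2d<uint> declaration below.)
func makeMaskTexture(from pixelBuffer: CVPixelBuffer,
                     cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // The masks are kCVPixelFormatType_OneComponent8, i.e. a single 8-bit channel.
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                              cache,
                                              pixelBuffer,
                                              nil,
                                              .r8Uint,
                                              width,
                                              height,
                                              0,
                                              &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}
The fragment shader that receives these textures: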
fragment float4 tintFragmentShader(
    TintInOut in [[stage_in]],
    ushort amp_id [[amplification_id]],
    texture2d<uint> leftMask [[texture(0)]],
    texture2d<uint> rightMask [[texture(1)]]
)
{
    if (in.color.a <= 0.0) {
        discard_fragment();
    }

    float2 uv;
    if (amp_id == 0) { // LEFT
        uv = ??????????????????????;
    } else { // RIGHT
        uv = ??????????????????????;
    }

    constexpr sampler linearSampler(mip_filter::linear, mag_filter::linear, min_filter::linear);

    // Sample the PersonSegmentation grayscale mask
    float maskValue = 0.0;
    if (amp_id == 0) { // LEFT
        if (leftMask.get_width() > 0) {
            maskValue = leftMask.sample(linearSampler, uv).r;
        }
    } else { // RIGHT
        if (rightMask.get_width() > 0) {
            maskValue = rightMask.sample(linearSampler, uv).r;
        }
    }

    if (maskValue > 250) {
        return float4(1.0, 1.0, 1.0, 0.5);
    }
    return in.color;
}
I need to correctly sample the masks for a given fragment.
The LayerRenderer.Layout is set to .layered. From the Developer Documentation: "A layout that specifies each view's content as a slice of a single texture."
Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56.
The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9).
The grayscale CVPixelBuffer returned by the VNPersonSegmentationRequest is 512x384 (4:3).
All of these aspect ratios are different.
My questions come down to: is there a known transform I can apply to a given (pixel) position to correctly sample a texture provided by the main cameras and processed by a Vision request? If so, what is it? If not, how can I accurately sample my masks?
Within the tint's vertex shader, after applying the modelViewProjectionMatrix, I have tried every version I could find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) and computes a clip-space position (maybe = pixel-space position.xy / (viewport size * 0.5)???) for sampling the grayscale masks, but nothing has worked. The "highlight" of the person segmentation is off: scaled a little too big, and offset a little too far up and off to the side.
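For concreteness, the simplest of those attempts looks like this when done directly in the fragment shader (a sketch; it assumes TintInOut.position carries the [[position]] pixel coordinate, and it ignores the aspect-ratio mismatch, which is presumably part of the problem):
// Naive mapping: normalize the rasterized pixel position by the per-eye
// viewport and flip Y before sampling the mask.
const float2 viewportSize = float2(1888.0, 1792.0);   // hard-coded only for illustration
float2 uv = in.position.xy / viewportSize;            // pixel -> [0, 1]
uv.y = 1.0 - uv.y;                                    // if the mask is top-left origin
// This does not account for the camera frame (16:9) or mask (4:3) aspect
// ratios relative to the render target (59:56), so the highlight ends up
// scaled and offset.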
Hi, I'm a beginner with Metal 4 and Model I/O 🥺.
I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen.
Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup.
Here's what I have so far:
https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
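For reference, this is the draw-loop pattern I've been trying to follow from the samples (a sketch using the classic MTKMesh / MTLRenderCommandEncoder API rather than the new Metal 4 encoder; buffer indices are assumptions):
for mesh in mtkMeshes {
    // Bind every vertex buffer the mesh provides (positions, normals, UVs, ...).
    for (index, vertexBuffer) in mesh.vertexBuffers.enumerated() {
        renderEncoder.setVertexBuffer(vertexBuffer.buffer,
                                      offset: vertexBuffer.offset,
                                      index: index)
    }
    // One draw call per submesh, each with its own index buffer and range.
    for submesh in mesh.submeshes {
        renderEncoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                            indexCount: submesh.indexCount,
                                            indexType: submesh.indexType,
                                            indexBuffer: submesh.indexBuffer.buffer,
                                            indexBufferOffset: submesh.indexBuffer.offset)
    }
}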
We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta.
In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue — for example, using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.
Reproduction
1. Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
2. Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
3. Observe correct, high-quality results.
4. Upgrade to macOS 26.1 Beta and run the same operation again.
5. The output becomes visually scrambled or corrupted.
We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.
Notes
The affected models are exported using standard CoreML conversion pipelines.
No custom operators or third-party CoreML runtime layers are used.
The issue reproduces consistently across multiple machines.
It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give it a shot and integrate the new MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I have to create a MTLLibrary according the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create an MTLLibrary from a .mtlpackage? Do I see this error because the .mtlpackage I am using is incorrect? How should I go about debugging this?
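In case it helps, here is a minimal variant of the same attempt (a sketch; it only adds a check that the package actually ends up in the app bundle and builds a file URL explicitly):
if let url = bundle.url(forResource: "my_model", withExtension: "mtlpackage") {
    print("package exists: \(FileManager.default.fileExists(atPath: url.path))")
    do {
        let library = try metalDevice.makeLibrary(URL: url)
        print("functions: \(library.functionNames)")
    } catch {
        print("makeLibrary error: \(error)")
    }
} else {
    print("my_model.mtlpackage not found in bundle")
}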
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
Hello,
I recently watched the WWDC2025 session titled “Combine Metal 4 machine learning and graphics” (https://developer.apple.com/videos/play/wwdc2025/262/ ), and I’m very excited about the new Metal 4 features that integrate machine learning with graphics—such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder.
While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven’t been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4.
Would Apple be able to provide a full, working example—such as an Xcode project—that shows how to:
Export a model to an .mlpackage,
Convert it to an .mtlpackage,
Use MTL4MachineLearningCommandEncoder alongside render passes,
Or embed small neural networks directly in shaders using Shader ML?
Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently.
Thank you very much for your time and support!
Best regards,
Hi
We're on TensorFlow 2.20, which now has support for Python 3.13 (finally!). tensorflow-metal still only supports TensorFlow 2.18, which is over a year old.
When can we expect to see support in tensorflow-metal for tf 2.20 (or later!) ?
I bought a Mac thinking I would be able to get great performance from the M-series processors, but here I am using my CPU for my ML projects.
If it's taking so long to release it, why not open source it so the community can keep it more up to date?
cheers
Matt
Also submitted as feedback (ID: FB20612561).
Tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1
In a new python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
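For now, pinning to the last combination that works (per the title) avoids the error:
# Workaround: pin TensorFlow to 2.18.1, which tensorflow-metal still loads correctly
pip install "tensorflow==2.18.1" tensorflow-metal
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"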
I have a Core Image filter in my app that uses Metal. I cannot compile it because it complains that the executable tool metal is not available, but I have installed it in Xcode.
If I go to the "Components" section of Xcode Settings, it shows it as downloaded. And if I run the suggested command, it also shows it as installed. Any advice?
Xcode Version
Version 26.0 beta (17A5241e)
Build Output
Showing All Errors Only
Build target Lessons of project StudyJapanese with configuration Light
RuleScriptExecution /Users/chris/Library/Developer/Xcode/DerivedData/StudyJapanese-glbneyedpsgxhscqueifpekwaofk/Build/Intermediates.noindex/StudyJapanese.build/Light-iphonesimulator/Lessons.build/DerivedSources/OtsuThresholdKernel.ci.air /Users/chris/Code/SerpentiSei/Shared/iOS/CoreImage/OtsuThresholdKernel.ci.metal normal undefined_arch (in target 'Lessons' from project 'StudyJapanese')
cd /Users/chris/Code/SerpentiSei/StudyJapanese
/bin/sh -c xcrun\ metal\ -w\ -c\ -fcikernel\ \"\$\{INPUT_FILE_PATH\}\"\ -o\ \"\$\{SCRIPT_OUTPUT_FILE_0\}\"'
'
error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
/Users/chris/Code/SerpentiSei/StudyJapanese/error:1:1: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Build failed 6/9/25, 8:31 PM 27.1 seconds
Result of xcodebuild -downloadComponent MetalToolchain (after switching Xcode-beta.app with xcode-select)
xcodebuild -downloadComponent MetalToolchain
Beginning asset download...
Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/4d77809b60771042e514cfcf39662c6d1c195f7d.asset/AssetData/Restore/022-19457-035.dmg
Done downloading: Metal Toolchain (17A5241c).
Screenshots from Xcode
Result of "Copy Information"
Metal Toolchain 26.0 [com.apple.MobileAsset.MetalToolchain: 17.0 (17A5241c)] (Installed)
Hello
Xcode 26.0.1 (17A400) is missing some Metal components.
When building a program that uses Metal, the build fails with an unexpected error:
“error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Command CompileMetalFile failed with a nonzero exit code”
which terminates the build.
The suggested fix, running "xcodebuild -downloadComponent MetalToolchain" (with sudo), does not work.
Did someone find a workaround or manage to resolve the issue?
Many thanks
Jean
MacBook Air M4;
macOS 26.0.1;
Xcode 26.0.1
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (an 8B Q4_K model provides around 9 tokens/second decoding speed).
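(For reference, the numbers above come from a quick on-device check along these lines; a minimal sketch:)
import Metal

if let device = MTLCreateSystemDefaultDevice() {
    let mib = 1024.0 * 1024.0
    // Working-set guidance plus a couple of related properties for context.
    print("recommendedMaxWorkingSetSize: \(Double(device.recommendedMaxWorkingSetSize) / mib) MiB")
    print("hasUnifiedMemory: \(device.hasUnifiedMemory)")
    print("currentAllocatedSize: \(Double(device.currentAllocatedSize) / mib) MiB")
}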
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is this limit strictly enforced for Metal allocations (heaps/buffers), or is it advisory for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
Hello,
I'm getting started with Xcode Cloud for my project, since I upgraded to the macOS Sequoia Beta and Xcode 16 now refuses to archive builds for TestFlight.
Somewhere very late in the build process I get the following error:
realitytool requires Metal for this operation and it is not available in this build environment
The log says this happens at:
Compile Skybox urban.skybox
My project uses RealityKit. How can I fix this issue?
Thanks!
I'm trying to find the installer for Metal Toolchain 26. It seems to fix an issue I have but I don't want to have to install Xcode 26 just to get the toolchain installed.
Is this possible?
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
Hi there,
We’re encountering this error in all of our builds when using the latest Xcode and macOS:
The Metal Toolchain was not installed and could not compile the Metal source files. Download the Metal Toolchain from Xcode > Settings > Components and try again.
In short, all builds are failing. I've tried fixing this by installing the Metal Toolchain and applying other suggested solutions, but none of them has worked reliably.
Is there a way to ensure that the Metal Toolchain is installed on the CI machine?
Hi,
In the iOS13 and macOS Catalina release notes it says:
Metal CIKernel instances now support arguments with arbitrarily structured data.
I've been trying to use this functionality in a CIKernel with mixed results. I'm particularly interested in passing data in the form of a dynamically sized array. It seems to work up to a certain size; beyond that threshold, the excess data is discarded and the kernel becomes unstable. I assume there is some kind of memory-alignment issue going on, but I've tried various types in my array and always get a similar result.
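Concretely, the pattern I'm using looks roughly like this on the Swift side (a simplified sketch; myKernel, computeWeights(), and the argument order are placeholders from my setup, not from any sample):
let values: [Float] = computeWeights()          // dynamically sized array (hypothetical helper)
let data = values.withUnsafeBufferPointer { Data(buffer: $0) }

// Pass the raw bytes plus the element count as kernel arguments.
let output = myKernel.apply(extent: inputImage.extent,
                            roiCallback: { _, rect in rect },
                            arguments: [inputImage, data, values.count])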
I have not found any documentation or sample code regarding this. It would be great to know how this is intended to work and what the limitations are.
In the forums there are two similar unanswered questions about data arguments, so I'm sure there are a few out there with similar issues.
Thanks!
Michael
Hi,
I’m using the latest iPad Pro (13-inch) and I can see that Metal offers an rgb10a2unorm texture for rendering, but when I render a grey ramp and measure the actual luminance, I get a pattern that I would expect from an 8-bit texture (see below). Before I start ripping apart all my code, is there anything else I need to do to convince iOS to render my texture in 10-bit?
I already tried setting the pixelFormat of my CAMetalLayer to rgb10a2unorm, but that didn't change anything.
Hello,
Shaders in our application are written in HLSL, and we rely on Metal Shader Converter to convert DXIL to Metal IR. We ran into an issue that causes Metal pipeline state creation to fail when a vertex stage-in function is used on AMD GPUs.
Here's the error reported by Metal in Xcode output:
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 4 try. This error suggests an unexpected interruption in the connection. Possible reasons: a crash in the compiler service, termination by the OS due to resource constraints (e.g., jetsam), a timeout in the service, or an issue with IPC. Verify system stability and check the logs for more details.
Compiler failed with XPC_ERROR_CONNECTION_INVALID
XPC_ERROR_CONNECTION_INVALID
MTLCompiler: Compiler encountered XPC_ERROR_CONNECTION_INVALID: failed to check-in, peer may have been unloaded: mach_error=10000003 (is the OS shutting down or process jetsammed?)
Compilation failed due to an interrupted connection: XPC_ERROR_CONNECTION_INTERRUPTED. This error occurred after multiple retries.
which seems to indicate an internal compiler error.
I have a minimal repro here: https://github.com/kcloudy0717/metal_pso_fail/tree/main, simply follow the instructions in README.
I've been getting intermittent failures in Xcode compiling my app on multiple platforms because it fails to compile a Metal shader.
The Metal Toolchain was not installed and could not compile the Metal source files. Download the Metal Toolchain from Xcode > Settings > Components and try again.
Sometimes if I re-run it, it works fine. Then I'll run it again, and it will fail.
If you tell me to file a feedback, please tell me what information would be useful and actionable, because this is all I have.