Render advanced 3D graphics and perform data-parallel computations on graphics processors using Metal.

Metal Documentation

Posts under Metal subtopic

Post

Replies

Boosts

Views

Activity

Why slower with larger threadgroup memory?
I'm implementing an optimized matmul on Metal: https://github.com/crynux-ai/metal-matmul/blob/main/metal/1_shared_mem.metal I notice that performance differs significantly depending on the threadgroup memory length set with [computeEncoder setThreadgroupMemoryLength:atIndex:]. All other lines are exactly the same; the only difference is this parameter. Matmul performance is roughly 250 GFLOPS if I set 32768 (the maximum bytes allowed on this M1 Max), but 400 GFLOPS if I set 8192. Why does this happen, and how can I optimize it? (See the sketch after this entry for the host-side call in question.)
2
0
128
Apr ’25
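A minimal host-side sketch (Swift) of the call in question; the kernel name, buffers, and grid sizes are illustrative and not taken from the linked metal-matmul project. One common reason a larger threadgroup allocation runs slower is occupancy: the more threadgroup memory each threadgroup reserves, the fewer threadgroups can be resident on a GPU core at once.

import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = device.makeDefaultLibrary()!
// "matmul_shared" is a hypothetical kernel name standing in for the one in the post.
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "matmul_shared")!)

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
// ... bind the A, B, and C matrix buffers here ...

// The parameter the post is asking about: bytes of threadgroup ("shared")
// memory bound at threadgroup-memory index 0. 8192 vs. the 32768 maximum
// changes how many threadgroups fit per GPU core, and therefore occupancy.
encoder.setThreadgroupMemoryLength(8192, index: 0)

encoder.dispatchThreadgroups(
    MTLSize(width: 64, height: 64, depth: 1),
    threadsPerThreadgroup: MTLSize(width: 16, height: 16, depth: 1))
encoder.endEncoding()
commandBuffer.commit()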
Physics bug in WWE 2K25 with GPTK2.1
The game physics work as expected using GPTK 2.0 with Crossover 24 or Whisky. However, with GPTK 2.1 and Crossover 25, the player and camera physics misbehave. See https://www.reddit.com/r/WWEGames/comments/1jx9mph/the_siamese_elbow/ and https://www.reddit.com/r/WWEGames/comments/1jx9ow4/camera_glitch/ A full video is also linked in the Reddit post. I have also submitted this bug via Feedback Assistant.
2
0
205
Apr ’25
WWDC25 Metal & game technologies group lab
Hello, Thank you for attending today’s Metal & game technologies group lab at WWDC25! We were delighted to answer many questions from developers and energized by the community engagement. We hope you enjoyed it and welcome your feedback. We invite you to carry on the conversation here, particularly if your question appeared in Slido and we were unable to answer it during the lab. If your question received feedback, let us know if you need clarification. You may want to ask your question again in a different lab, e.g. visionOS tomorrow. (We realize that this can be confusing when frameworks interoperate.) We have a lot to learn from each other, so let’s get to Q&A and make the best of WWDC25! 😃 Looking forward to your questions posted in new threads.
2
0
400
Jul ’25
MetalFX upscaler/denoiser and instant changes
Hi, What's the best way to handle drastic changes in scene characteristics with the new MTLFXTemporalDenoisedScaler? Let's say a visible object in the scene radically changes its material properties. I can modify the albedo and roughness textures accordingly, but I suspect the history will be corrupted: blending visual information between the new frame and the previous ones might be nonsense. I guess the problem is the same when objects appear or disappear instantly. Does the upscaler manage these events for us (by lowering blending), or should we use the reactive mask or the denoise strength mask or something like that to handle them?
2
0
146
Jul ’25
Unable to package in UE5.6
I'm new to the Mac side of things, but certainly not to UE. On Windows, packaging is a long process, but it can be done. Documentation from Epic and around the internet on exactly how to package a project within UE on Mac is basically nonexistent. I have Xcode installed, which makes sense, agreed to the terms, and installed it for macOS. I've been able to work on a project for several weeks now and want to package a test build for my friends to play on Windows. Now I just get this in the log:
UATHelper: Packaging (Mac): ERROR: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Trace written to file /Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.uba with size 12.6kb
UATHelper: Packaging (Mac): Total time in Unreal Build Accelerator local executor: 8.12 seconds
UATHelper: Packaging (Mac): Result: Failed (OtherCompilationError)
UATHelper: Packaging (Mac): Total execution time: 9.71 seconds
PackagingResults: Error: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Took 9.77s to run dotnet, ExitCode=6
UATHelper: Packaging (Mac): UnrealBuildTool failed. See log for more details. (/Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.txt)
UATHelper: Packaging (Mac): AutomationTool executed for 0h 0m 10s
UATHelper: Packaging (Mac): AutomationTool exiting with ExitCode=6 (6)
UATHelper: Packaging (Mac): RunUAT ERROR: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: Unknown Error
This absolutely makes no sense to me. Anyone have ideas?
2
0
282
Jul ’25
CustomMetalView sample uses deprecated functions - update?
The sample code here has code like:
// Create a display link capable of being used with all active displays
cvReturn = CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
But that function's documentation says it's deprecated and to use the NSView/NSWindow/NSScreen displayLink instead. That returns a CADisplayLink, not a CVDisplayLink. Also, the documentation for that displayLink method is completely empty. I'm not sure if I'm supposed to add it to a run loop, or what, after I get it. It would be nice to get an updated version of this sample project and/or have some documentation for NSView.displayLink. (A minimal sketch of the replacement API follows this entry.)
2
0
364
Jul ’25
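A minimal sketch (Swift) of the replacement API the deprecation notice points to, assuming macOS 14 or later; the view class and selector names are hypothetical. NSView's displayLink(target:selector:) returns a CADisplayLink, which, unlike a CVDisplayLink, does need to be added to a run loop before it fires.

import AppKit
import QuartzCore

final class RenderView: NSView {
    private var activeDisplayLink: CADisplayLink?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        activeDisplayLink?.invalidate()
        guard window != nil else { return }

        // Replacement for CVDisplayLinkCreateWithActiveCGDisplays: the link
        // automatically tracks whichever display this view is on.
        let link = displayLink(target: self, selector: #selector(step(_:)))
        // The CADisplayLink is inert until scheduled on a run loop.
        link.add(to: .main, forMode: .common)
        activeDisplayLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        // Encode and present a frame here; link.targetTimestamp is the
        // expected presentation time for the frame being prepared.
        needsDisplay = true
    }
}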
Problem running Unreal 5.6
I am using Unreal Engine 5.6 on a MacBook Pro with an M3 chip and macOS 15.5. I’ve installed Xcode and accepted the license, but Unreal is not detecting the latest Metal Shader Standard (Metal v3.0). The maximum version Unreal sees is Metal v2.4, even though the hardware and OS should support Metal 3.0. I’ve also run sudo xcode-select -s /Applications/Xcode.app and accepted the license via Terminal. Is there anything in Xcode settings, SDK availability, or system permissions that could be preventing access to Metal 3.0 features?
2
0
489
Jul ’25
Metal IR reference
Hello! I'm developing a GPU (shader) language, where I aim to target multiple backends with a common frontend. I wanted to avoid having to round-trip through Metal source and go straight to IR, just like I have with SPIR-V, in order to have a fast and efficient compilation process. I've been looking for a reference page where I can read about Metal's IR; as far as I'm aware it exists, but I can't seem to find it anywhere. Furthermore, if such a reference is available, is there also a toolkit where I can run validation on the output IR, and perhaps even run optimizations, much like SPIRV-Tools for SPIR-V? Any help would be appreciated! Thanks, Gustav
2
0
293
Jul ’25
Will warning code 00000006 affect background survival?
Our app integrates a 3D feature. To reduce the app's memory footprint in the background, we tear down the 3D content after the app enters the background. However, the teardown itself causes a problem: when it runs in the background, the app briefly triggers a background GPU rendering error with the warning code: OGPUMetalError: Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted) Execution of the command buffer was aborted due to an error during execution. Will this warning cause the system to tighten the app's background permissions, for example by limiting or shortening the app's background survival time? In addition, will the background refresh function stop working, causing Bluetooth iBeacon activation to fail? (A minimal sketch of pausing GPU work on backgrounding follows this entry.)
2
0
264
Aug ’25
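Background context on the error above: the system rejects GPU work submitted while an iOS app is in the background. A minimal sketch (Swift), assuming an MTKView-based renderer, of pausing rendering before the app leaves the foreground so that any teardown that encodes GPU work finishes first; the class and property names are illustrative.

import UIKit
import MetalKit

final class Renderer {
    let mtkView: MTKView
    private var observers: [NSObjectProtocol] = []

    init(view: MTKView) {
        mtkView = view

        // Stop scheduling new GPU work before the app is backgrounded.
        observers.append(NotificationCenter.default.addObserver(
            forName: UIApplication.willResignActiveNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.mtkView.isPaused = true
                // Commit and wait on any teardown command buffers here
                // (e.g. releasing 3D content), while still in the foreground.
        })

        observers.append(NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.mtkView.isPaused = false
        })
    }

    deinit {
        observers.forEach { NotificationCenter.default.removeObserver($0) }
    }
}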
Float64 (Double Precision) Support on MPS with PyTorch on Apple Silicon?
Hi everyone, This project uses PyTorch on an Apple Silicon Mac (M1/M2/etc.), and the goal is to use the MPS backend for GPU acceleration. However, the workflow depends on Float64 (double-precision) floating-point numbers for certain computations. I have encountered the error "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead". It seems that the MPS backend doesn't currently support Float64 for direct GPU computation. Questions for the community: Are there any known workarounds or best practices for handling Float64-dependent operations when using the MPS backend with PyTorch? For those working with high-precision tasks on Apple Silicon, what strategies are being used to balance performance with the need for Float64? Offloading to the CPU is an option; are there any specific techniques or libraries within the Apple ecosystem that could streamline this process while aiming for optimal performance? Any insights, tips, or experiences would be appreciated. Thanks in advance, Jonaid MacBook Pro M3 Max
2
1
431
Oct ’25
iPhone limited to 60hz frame rate
Just wondering if anyone knows what it takes to go above 60 Hz when targeting iPhone. If I set the preferredFramesPerSecond of an MTKView to 120, it works on the iPad, but on iPhone it never goes over 60 Hz, even with a simple hello-triangle sample app... is this a limitation of targeting iPhone? (See the sketch after this entry.)
2
0
190
Sep ’25
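For ProMotion iPhones, going above 60 Hz also requires opting in with the Info.plist key CADisableMinimumFrameDurationOnPhone set to YES; without it, Core Animation caps presentation at 60 Hz on iPhone regardless of preferredFramesPerSecond. A minimal sketch (Swift), with the rest of the view setup assumed:

import MetalKit

// Requires a ProMotion-capable iPhone and this Info.plist entry:
//   <key>CADisableMinimumFrameDurationOnPhone</key>
//   <true/>
// Without the key, CAMetalLayer presentation on iPhone stays at 60 Hz
// even when the view requests 120.
func configureFrameRate(for view: MTKView) {
    view.preferredFramesPerSecond = 120
}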
Metal HUD Display Value Range
Can't seem to get the Metal HUD to display value ranges (pre-26 Tahoe). The documented environment variable MTL_HUD_SHOW_VALUE_RANGE doesn't seem to work. https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance#Display-the-value-range-of-metrics Anyone having any luck?
2
0
327
Sep ’25
Pink screen on MTLCommandBuffer.presentDrawable.
I rewrote my graphics pipeline to make better use of load/store actions for the clear and don't-care cases. All my tests pass, and in the Metal debugger all the draw calls succeed. But when I present the drawable (before [commandBuffer commit]) I only get a pink screen. I've tried everything I can think of: making sure the pixel formats are the same for the back buffer and my render targets, etc. But it's still pink. Could you point me in the right direction so I can fix this, or help describe why it's pink? That would be really helpful. Thank you, Brian Hapgood (A baseline sketch of the present/commit ordering follows this entry.)
2
0
394
Sep ’25
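For comparison, a baseline sketch (Swift) of the load/store configuration and present/commit ordering being described, assuming an MTKView-driven frame; it is not the poster's pipeline. One common cause of a drawable that never updates is a color attachment whose storeAction is .dontCare, which leaves the drawable's contents undefined.

import MetalKit

// `view`, `queue`, and `pipelineState` are assumed to exist elsewhere.
func drawFrame(in view: MTKView, queue: MTLCommandQueue, pipelineState: MTLRenderPipelineState) {
    guard let drawable = view.currentDrawable,
          let descriptor = view.currentRenderPassDescriptor,
          let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
    else { return }

    // The drawable's color attachment must be stored, or the rendered result
    // never reaches the screen. For a non-multisampled MTKView, the
    // currentRenderPassDescriptor uses loadAction .clear and storeAction .store.
    encoder.setRenderPipelineState(pipelineState)
    // encoder.draw... (the pipeline's colorAttachments[0].pixelFormat must
    // match view.colorPixelFormat)
    encoder.endEncoding()

    // Present is scheduled on the command buffer before commit.
    commandBuffer.present(drawable)
    commandBuffer.commit()
}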
Xcode 26.x Metal 4 classes do not compile
Hi, I am using Xcode 26.x, but my Metal 4 classes are not compiling. I downloaded the sample code from Apple's website - https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function. For example, I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'". I have hit a deadline, so any recommendations are very welcome. I have downloaded the Metal Toolchain. When I run the following commands in the terminal - xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version - I get the following response:
Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData
Build Version: 17A321
Status: installed
Toolchain Identifier: com.apple.dt.toolchain.Metal.32023
Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded
/Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal
Apple metal version 32023.830 (metalfe-32023.830.2)
Target: air64-apple-darwin24.6.0
Thread model: posix
InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
2
0
1.1k
2w
Metal 4: When is it ok to dealloc a MTLBuffer's memory
I have something like this drawing in an MTKView (see below). I am finding it difficult to figure out when the Swift-land resources used in making the MTLBuffer(s) can be released. Below, for example, is it ok if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created? I have been reading through various articles such as Setting resource storage modes, Choosing a resource storage mode for Apple GPUs, and Copying data to a private resource, but it's a lot to absorb and I haven't really been able to find an authoritative description of the required lifetime of the resources in CPU land. I should mention that this is Metal 4 code. In previous versions of Metal, MTLCommandBuffer had the ability to add a completion handler to be called after the GPU has finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for the purpose I am interested in). Any advice and/or pointers to the definitive literature will be appreciated. (A sketch of the pre-Metal 4 completion-handler pattern follows this entry.)
guard let argsBuffer = device.makeBuffer(bytes: &args,...
argumentTable.setAddress(argsBuffer.gpuAddress, ...
encoder.setArgumentTable(argumentTable, stages: .vertex)
// encode drawing
renderEncoder.draw...
...
encoder.endEncoding() // 1
commandBuffer.endCommandBuffer() // 2
commandQueue.waitForDrawable(drawable)
commandQueue.commit([commandBuffer]) // 3
commandQueue.signalDrawable(drawable)
drawable.present()
2
0
154
1w
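For reference, a sketch (Swift) of the completion-handler pattern from earlier Metal versions that the post mentions, which is one way to keep CPU-side data alive until the GPU has finished with a buffer; it is not the Metal 4 answer being asked for. Worth noting too that makeBuffer(bytes:length:options:) copies the source bytes into the buffer's own storage at creation time.

import Metal

// Pre-Metal 4 pattern; `device` and `queue` are assumed to exist.
func encodeFrame(device: MTLDevice, queue: MTLCommandQueue, args: [Float]) {
    // makeBuffer(bytes:length:options:) copies `args` into the buffer's
    // storage immediately, so the Swift array is not needed afterwards.
    let argsBuffer = args.withUnsafeBytes { raw in
        device.makeBuffer(bytes: raw.baseAddress!,
                          length: raw.count,
                          options: .storageModeShared)!
    }

    let commandBuffer = queue.makeCommandBuffer()!
    // ... encode work that reads argsBuffer ...

    // Capturing argsBuffer here keeps the MTLBuffer alive until the GPU
    // has finished executing this command buffer.
    commandBuffer.addCompletedHandler { _ in
        _ = argsBuffer
    }
    commandBuffer.commit()
}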
In Metal compute kernels, when do thread variables get spilled into the device memory?
How many 32-bit variables can I use concurrently in a single thread of a Metal compute kernel without worrying about the variables getting spilled into device memory? Alternatively: how many 32-bit registers does a single thread have available to itself? Let's say that each thread of my compute kernel needs to store and work with its own array of N float variables, where N can be 128, 256, 512 or more. To achieve maximum possible performance, I do not want the local thread variables to get spilled into the slow device memory; I want all N variables to be stored "on-chip", in the thread memory space. To make my question more concrete, let's say there is an array thread float localArray[N]. Assuming an unrealistic hypothetical scenario where localArray is the only variable in the whole kernel, what is the maximum value of N for which no portion of localArray would get spilled into device memory? I searched the Metal feature set tables, but I could not find any details.
1
0
616
Mar ’25
Xcode Metal geometry inspector uses wrong NDC space?
When inspecting the geometry in Xcode's Metal debugger, I noticed that the "frustum box" it shows didn't make sense. Since Metal uses a depth range of 0 to 1 in NDC space, I would expect a vertex that is projected to z: 0 to be on the front clipping plane of the frustum shown in the geometry inspector. This is however not the case. A vertex with NDC z: 0 is shown halfway inside the frustum. Vertices with NDC z less than 0 are correctly culled during rendering, while the geometry inspector's frustum shows that the vertex is still inside the frustum. The image shows vertices that are visually in the middle of the frustum on the z axis, while at the same time the out position shows that they are projected to z: 0. How is this possible, unless there's a bug in the geometry inspector?
1
0
557
Feb ’25
Why is the depth/stencil buffer loaded/stored twice in Xcode GPU capture?
I used Xcode GPU capture to profile my game's render pipeline bandwidth. I found that the depth buffer and stencil buffer share the same buffer, whose format is Depth32Float_Stencil8. But why, in a single pass of the pipeline, was this buffer loaded twice, with the Load Attachment Size in Encoder Statistics doubled? Is there a bug in Xcode GPU capture, or did the pass really load the buffer twice?
1
0
366
Mar ’25
Implementing Scalable Order-Independent Transparency (OIT) in Metal
Hi, Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass. However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size. Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries? Any insights or alternative approaches would be greatly appreciated. Thank you!
1
0
564
Mar ’25