I'm using the WebGPU abstraction library wgpu to build an app with compute shaders that compiles to Metal (on macOS). In certain patterns where wgpu uses a staging buffer for initial data, that data is simply missing from the GPU capture, which breaks other workflows such as shader debugging or inspecting the completed results in the final buffer.
I wrote up details, including a repro project and screenshots of the issue, at https://github.com/gfx-rs/wgpu/issues/8111 . It seems like an Xcode bug. Any ideas? I'm happy to help investigate further if I can.
Metal
Render advanced 3D graphics and perform data-parallel computations on graphics processors using Metal.
I'm unable to download the Metal toolchain with Xcode. When trying to do it via the command line, I get the following:
> xcodebuild -downloadComponent metalToolchain -exportPath /tmp/MyMetalExport/
Beginning asset download...
2025-08-06 19:46:19.983 xcodebuild[1395:22024] Writing error result bundle to /var/folders/48/1k1jfsxn56zcs4qr_719rc1w0000gn/T/ResultBundle_2025-06-08_19-46-0019.xcresult
xcodebuild: error: Failed fetching catalog for assetType (com.apple.MobileAsset.MetalToolchain), serverParameters ({
RequestedBuild = 17A5295f;
})
From Console.app:
DVTDownloadsFetchAssetCatalog() complete assetType (com.apple.MobileAsset.MetalToolchain), options: (MADownloadOptions allowsCellular: 0 resourceTimeout: 60 canUseCacheServer: 0 discretionary: 0 disableUI: 0 sessionId: (null) additionalServerParams:{ RequestedBuild = 17A5295f; } allowsExpensiveAccess:1 requiresPowerPluggedIn: 0 prefersInfraWiFi: 1 liveServerOnly: -1 DownloadAuthorizationHeader: not present analyticsData: not present allowDaemonConnectionRetries: 0), result: (59), catalogError: (Download failed due to not being able to find the host. (Catalog download for com.apple.MobileAsset.MetalToolchain))
Note that I am online. I've tried restarting my computer and Xcode and using a different network.
Failed fetching catalog for assetType (com.apple.MobileAsset.MetalToolchain), serverParameters ({
RequestedBuild = 17A5295f;
})
Domain: DVTDownloadsUtilitiesErrorDomain
Code: -1
User Info: {
DVTErrorCreationDateKey = "2025-08-08 07:59:24 +0000";
}
Failed fetching catalog for assetType (com.apple.MobileAsset.MetalToolchain), serverParameters ({
RequestedBuild = 17A5295f;
})
Domain: DVTDownloadsUtilitiesErrorDomain
Code: -1
Download failed due to not being able to find the host. (Catalog download for com.apple.MobileAsset.MetalToolchain)
Domain: com.apple.MobileAssetError.Download
Code: 59
User Info: {
checkConfiguration = 1;
}
System Information
macOS Version 26.0 (Build 25A5327h)
Xcode 26.0 (24198.5) (Build 17A5295f)
Timestamp: 2025-08-08T08:59:24+01:00
With Xcode 26.0 beta 5 (17A5295f), when I run the following command
xcodebuild -downloadComponent metalToolchain
I get the following error:
xcodebuild[48851:12478851] Writing error result bundle to /var/folders/b_/g67r_tl557z244g20ncr_qmsd9wrz1/T/ResultBundle_2025-07-08_11-10-0012.xcresult
xcodebuild: error: Failed fetching catalog for assetType (com.apple.MobileAsset.MetalToolchain), serverParameters ({
RequestedBuild = 17A5295f;
})
I can't install the toolchain from the Xcode GUI either. Does anyone know a workaround?
Hi there,
Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad?
I would like to see things like compiled shaders for my apps on tvOS.
If I compile a compute kernel with a call to texture.read(), it fails with the following error: "Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}."
This error occurs on both macOS and iOS 26 Beta 5, but not when running in a simulator or in a playground, and it does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method.
A workaround would be to use a sampler (a sketch of this is at the end of this post), but according to the feature tables, all platforms support reading from textures of all formats.
Below is a minimal example which produces the error:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let computeFunction = library.makeFunction(name: "compute_test")!
do {
    let pipeline = try device.makeComputePipelineState(function: computeFunction)
    debugPrint(pipeline)
} catch {
    debugPrint("Metal 3 failed with error:\n\(error)")
}
#import <metal_stdlib>
using namespace metal;

kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                         texture2d<float, access::read> in [[texture(0)]],
                         texture2d<float, access::write> out [[texture(1)]]) {
    out.write(in.read(gid), gid);
}
I filed feedback FB19530049.
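For anyone who wants to try the sampler workaround mentioned above, here is a minimal sketch of how it could look. This is only an illustration, not a confirmed fix: the kernel name compute_test_sampled is made up, and the shader source is compiled at runtime with makeLibrary(source:options:) purely so the sketch is self-contained; in a real project the kernel would live in the .metal file and be loaded via makeDefaultLibrary() as in the example above.

import Metal

// Sketch of the sampler-based workaround: replace texture.read() with
// texture.sample() using a pixel-coordinate, nearest-filter sampler.
// Compiled at runtime here only so the whole sketch fits in one file.
let workaroundSource = """
#include <metal_stdlib>
using namespace metal;

kernel void compute_test_sampled(uint2 gid [[thread_position_in_grid]],
                                 texture2d<float, access::sample> in [[texture(0)]],
                                 texture2d<float, access::write> out [[texture(1)]]) {
    // Pixel coordinates + nearest filtering approximate read()-like behavior.
    constexpr sampler s(coord::pixel, filter::nearest);
    out.write(in.sample(s, float2(gid)), gid);
}
"""

let device = MTLCreateSystemDefaultDevice()!
do {
    let library = try device.makeLibrary(source: workaroundSource, options: nil)
    let function = library.makeFunction(name: "compute_test_sampled")!
    let pipeline = try device.makeComputePipelineState(function: function)
    debugPrint(pipeline)
} catch {
    debugPrint("Workaround pipeline failed with error:\n\(error)")
}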
We ran iOS packaging with UE 5.5 on Windows, and UE automatically launched the Metal Developer Tools for Windows. I tried two versions, 5.3 and 4.4, and found that metal.exe sometimes did not respond or produced empty output. The no-response problem can be alleviated by adding timeout retries, but retries do not seem to help with the empty-output problem. The actual command line invoked is: metal.exe -v --target=air64-apple-darwin18.7.0. UE relies on this output to determine the version number, and if it cannot get it, it crashes directly.
During our reproduction tests, we used a Python script to run multiple concurrent executions of the command metal.exe -v --target=air64-apple-darwin18.7.0. On different Windows machines it would get stuck at a certain concurrency level; some machines got stuck even when running just two tasks concurrently.
We would like to ask for solutions or known-good versions.
Hi!
I'm currently trying to render another XR scene in front of a RealityKit one.
Actually, I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't directly draw at z=0.
Is there a way to change the camera near plane? Or maybe there is a better solution to overlay image/texture for left/right eyes?
Ideally, I would layer some kind of CompositorLayer on RealityKit, but that's sadly not possible from what I know.
Thanks in advance and have a good day!
Hello!
I'm developing a GPU (shader) language, where I aim to target multiple backends with a common frontend. I wanted to avoid having to round-trip through Metal source and instead go straight to IR, just like I do with SPIR-V, in order to have a fast and efficient compilation process.
I've been looking for a reference page where I can read about Metal's IR; as far as I'm aware it exists, but I can't seem to find it anywhere.
Furthermore, if such a reference is available, is there also a toolkit where I can run validation on the output IR, and perhaps even run optimizations, much like SPIRV-Tools for SPIR-V?
Any help would be appreciated!
Thanks,
Gustav
Hello everyone,
I must have missed something, but why isn't there a depthAttachmentPixelFormat on the new Metal 4 MTL4RenderPipelineDescriptor, unlike the old MTLRenderPipelineDescriptor?
So how do you set the depth pixel format?
Thanks in advance!
I would love to use Background GPU Access to do some video processing in the background.
However the documentation of BGContinuedProcessingTaskRequest.Resources.gpu clearly states:
Not all devices support background GPU use. For more information, see Performing long-running tasks on iOS and iPadOS.
Is there a list available of currently released devices that do (or don't) support GPU background usage? That would help to understand what part of our user base can use this feature. (And what hardware we need to test this on as developers.)
For example, it seems that it isn't supported on an iPad Pro M1 with the current iOS 26 beta. The simulators also don't seem to support the background GPU resource. So it would be great to understand which hardware is capable of using this feature!
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by the render pass and then consumed by the compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away.
The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors.
Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource has MTLHazardTrackingModeTracked, but on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use them?
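For reference, here is a minimal Swift sketch of the fence-based ordering described above, assuming a texture that the render pass writes and a later compute pass in the same command buffer reads; the pipelines, draw call, and dispatch sizes are placeholders rather than my actual code.

import Metal

// Sketch: explicit MTLFence ordering between a render encoder that produces
// `texture` and a compute encoder that consumes it. Pipelines, descriptors,
// and sizes are placeholders for whatever the real app uses.
func encodeFrame(device: MTLDevice,
                 commandBuffer: MTLCommandBuffer,
                 renderPass: MTLRenderPassDescriptor,
                 renderPipeline: MTLRenderPipelineState,
                 computePipeline: MTLComputePipelineState,
                 texture: MTLTexture) {
    let fence = device.makeFence()!

    // Render pass writes the texture, then signals the fence once the
    // fragment stage has finished writing.
    let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPass)!
    renderEncoder.setRenderPipelineState(renderPipeline)
    renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    renderEncoder.updateFence(fence, after: .fragment)
    renderEncoder.endEncoding()

    // Compute pass waits on the fence before reading the texture.
    let computeEncoder = commandBuffer.makeComputeCommandEncoder()!
    computeEncoder.waitForFence(fence)
    computeEncoder.setComputePipelineState(computePipeline)
    computeEncoder.useResource(texture, usage: .read)
    computeEncoder.setTexture(texture, index: 0)
    computeEncoder.dispatchThreadgroups(MTLSize(width: 8, height: 8, depth: 1),
                                        threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
    computeEncoder.endEncoding()
}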
I have discovered that RemoteImmersiveSpace is limited to utilizing the structure of the CompositorContent protocol, precluding direct invocation of RealityView. Consequently, I am interested in understanding the appropriate method for integrating CompositorContent within RemoteImmersiveSpace. Thanks.
WWDC25: Combine Metal 4 machine learning and graphics
The session demonstrated a way to combine a neural network with the graphics pipeline directly through the shaders, using texture compression as an example. However, there is no mention of which ML technique the texture is compressed with.
Can anyone point me to some well-known models for this particular use case shown at WWDC25?
When I play an HDR video in the iPhone Photos app, I can clearly see the HDR effect. But if the HDR video plays continuously for more than 30-40 minutes, the HDR effect disappears and the brightness is compressed to the SDR range. This issue appears on any iPhone.
Depending on the phone, it may take 20-30 minutes, or 30-40 minutes, or even just a few minutes, such as on an iPhone 12 mini.
Similarly, if I use AVPlayer to play and preview an HDR video for more than 30-40 minutes, the HDR effect disappears and the screen brightness dims. The currentEDRHeadroom also gradually decreases to 1.
Note, test it with an HDR video longer than 1 hour, and if the video is short, please loop it.
My question is how to avoid losing the HDR effect after 30-40 minutes when I use CAMetalLayer to render any HDR video.
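For anyone else investigating this, below is a rough Swift sketch of one way to configure a CAMetalLayer for EDR and log the screen's EDR headroom each frame to observe the decay described above; the rgba16Float pixel format and extended linear Display P3 color space are assumptions, not a recommendation.

import UIKit
import QuartzCore
import Metal

// Sketch: configure a CAMetalLayer for EDR output and log the screen's
// EDR headroom each frame to observe it decaying toward 1.0 over time.
final class HDRMonitor {
    let layer = CAMetalLayer()
    private var displayLink: CADisplayLink?

    func configure(device: MTLDevice) {
        layer.device = device
        layer.pixelFormat = .rgba16Float                      // float format for EDR content
        layer.wantsExtendedDynamicRangeContent = true
        layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)

        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func tick() {
        // On iOS 16+, UIScreen exposes the current and potential EDR headroom.
        let screen = UIScreen.main
        print("EDR headroom:", screen.currentEDRHeadroom,
              "of potential:", screen.potentialEDRHeadroom)
    }
}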
When I use AVPlayer to obtain the video frames (CVPixelBufferRef) of an HDR video and use AVSampleBufferDisplayLayer to display them on the screen, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.
Steps to reproduce:
Create an AVPlayer to loop an HDR video, specify the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
Create a timer to get the video frame CVPixelBufferRef at 30 frames per second
Use AVSampleBufferDisplayLayer to display CVPixelBufferRef on the screen
Don't operate the phone; wait for a period of time (such as 40 minutes), and the HDR effect disappears and the screen darkens.
Note:
You need to use an iPhone device running iOS 18.5 or earlier.
You need to ensure that the HDR video plays in a loop, that is, that the screen keeps displaying HDR content; depending on the device, you may need to wait 20-40 minutes.
In the iPhone Photos app, the same problem occurs after playing an HDR video in a loop for a long time.
Expected Results:
When rendering HDR content for a long time, the HDR effect should always be preserved, and the HDR content and screen should not darken.
Current Results:
After about 20-40 minutes, the HDR effect disappears and the screen darkens.
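To make the reproduction steps above concrete, here is a rough Swift sketch of the per-frame path being described (pull a CVPixelBufferRef from an AVPlayerItemVideoOutput, wrap it in a CMSampleBuffer, and enqueue it on the AVSampleBufferDisplayLayer); the 30 fps timing and the omission of looping and error handling are simplifications.

import AVFoundation
import QuartzCore

// Sketch of the repro path: pull 10-bit HDR pixel buffers from an
// AVPlayerItemVideoOutput on a ~30 fps timer and enqueue them on an
// AVSampleBufferDisplayLayer. Looping and error handling are omitted.
final class HDRFramePump {
    let displayLayer = AVSampleBufferDisplayLayer()
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
    ])

    func attach(to item: AVPlayerItem) {
        item.add(output)
    }

    // Call this ~30 times per second (e.g. from a CADisplayLink or Timer).
    func pumpFrame() {
        let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                       itemTimeForDisplay: nil) else { return }

        // Wrap the CVPixelBuffer in a CMSampleBuffer so the layer can display it.
        var formatDesc: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                     imageBuffer: pixelBuffer,
                                                     formatDescriptionOut: &formatDesc)
        guard let format = formatDesc else { return }

        var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: 30),
                                        presentationTimeStamp: itemTime,
                                        decodeTimeStamp: .invalid)
        var sampleBuffer: CMSampleBuffer?
        CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescription: format,
                                                 sampleTiming: &timing,
                                                 sampleBufferOut: &sampleBuffer)
        if let sampleBuffer = sampleBuffer {
            displayLayer.enqueue(sampleBuffer)
        }
    }
}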
Hi,
My application hits the crash backtrace below at a very low repro rate among public users; I don't see it being tied to a specific iOS version or iPhone model. The last line of code in my application is the call to the CAMetalLayer nextDrawable API.
I did some basic investigation and suppose it may be related to a wrong CAMetalLayer configuration, such as:
frame property w or h <= 0.0
bounds property w or h <= 0.0
drawableSize w or h <= 0.0 or w or h > max value (like 16384)
I'm not sure whether my thinking above is right. Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash? (A small validation sketch follows the backtrace below.)
Thanks a lot
Main Thread - Crashed
libsystem_kernel.dylib
__pthread_kill
libsystem_c.dylib
abort
libsystem_c.dylib
__assert_rtn
Metal
MTLReportFailure.cold.1
Metal
MTLReportFailure
Metal
_MTLMessageContextEnd
Metal
-[MTLTextureDescriptorInternal validateWithDevice:]
AGXMetalA13
0x245b1a000 + 4522096
QuartzCore
allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long)
QuartzCore
get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool)
QuartzCore
CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*)
QuartzCore
-[CAMetalLayer nextDrawable]
SpaceApp
-[MetalRender renderFrame:] MetalRenderer.mm:167
SpaceApp
-[FrameBuffer acceptFrame:] VideoRender.mm:173
QuartzCore
CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&)
QuartzCore
CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long)
QuartzCore
CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int)
UIKitCore
_UIUpdateSequenceRun
UIKitCore
schedulerStepScheduledMainSection
UIKitCore
runloopSourceCallback
CoreFoundation
__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
CoreFoundation
__CFRunLoopDoSource0
CoreFoundation
__CFRunLoopDoSources0
CoreFoundation
__CFRunLoopRun
CoreFoundation
CFRunLoopRunSpecific
GraphicsServices
GSEventRunModal
UIKitCore
-[UIApplication _run]
UIKitCore
UIApplicationMain
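For what it's worth, here is a sketch of the kind of defensive check described above, validating the layer's bounds and drawableSize before calling nextDrawable; the 16384 cap is just the example maximum mentioned earlier, not a value queried from the device.

import QuartzCore
import Metal

// Sketch: validate the CAMetalLayer configuration before asking for a drawable,
// based on the guesses above (zero/negative sizes or an oversized drawableSize).
// The 16384 cap is only the example maximum mentioned above.
func safeNextDrawable(from layer: CAMetalLayer) -> CAMetalDrawable? {
    let maxDimension: CGFloat = 16384
    let size = layer.drawableSize

    guard layer.bounds.width > 0, layer.bounds.height > 0,
          size.width > 0, size.height > 0,
          size.width <= maxDimension, size.height <= maxDimension else {
        // Skip this frame instead of hitting the texture-descriptor assert.
        print("Skipping frame: invalid CAMetalLayer size \(size), bounds \(layer.bounds)")
        return nil
    }
    return layer.nextDrawable()
}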
Hi everyone,
We found a Metal-related crash. The app uses Metal APIs, and during performance testing we turned on Settings > Developer > Show HUD Graphics. After launching the app, the HUD displayed normally, but the app crashed shortly afterwards. The main information from the log is as follows:
Incident Identifier: 1F093635-2DB8-4B29-9DA5-488A6609277B
CrashReporter Key: 233e54398e2a0266d95265cfb96c5a89eb3403fd
Hardware Model: iPhone14,3
Process: waimai [16584]
Path: /private/var/containers/Bundle/Application/CCCFC0AE-EFB8-4BD8-B674-ED089B776221/waimai.app/waimai
Identifier:
Version: 61488 (8.53.0)
Code Type: ARM-64
Parent Process: ? [1]
Date/Time: 2025-06-12 14:41:45.296 +0800
OS Version: iOS 18.0 (22A3354)
Report Version: 104
Monitor Type: Mach Exception
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x000000014fffae00
Crashed Thread: 57
Thread 57 Crashed:
0 libMTLHud.dylib esfm_GenerateTriangesForString + 408
1 libMTLHud.dylib esfm_GenerateTriangesForString + 92
2 libMTLHud.dylib Renderer::DrawText(char const*, int, unsigned int) + 204
3 libMTLHud.dylib Overlay::onPresent(id<CAMetalDrawable>) + 1656
4 libMTLHud.dylib CAMetalDrawable_present(void (*)(), objc_object*, objc_selector*) + 72
5 libMTLHud.dylib invocation function for block in void replaceMethod<void>(objc_class*, objc_selector*, void (*)(void (*)(), objc_object*, objc_selector*)) + 56
6 Metal __45-[_MTLCommandBuffer presentDrawable:options:]_block_invoke + 104
7 Metal MTLDispatchListApply + 52
8 Metal -[_MTLCommandBuffer didScheduleWithStartTime:endTime:error:] + 312
9 IOGPU IOGPUNotificationQueueDispatchAvailableCompletionNotifications + 136
10 IOGPU __IOGPUNotificationQueueSetDispatchQueue_block_invoke + 64
11 libdispatch.dylib _dispatch_client_callout4 + 20
12 libdispatch.dylib _dispatch_mach_msg_invoke + 464
13 libdispatch.dylib _dispatch_lane_serial_drain + 368
14 libdispatch.dylib _dispatch_mach_invoke + 456
15 libdispatch.dylib _dispatch_lane_serial_drain + 368
16 libdispatch.dylib _dispatch_lane_invoke + 432
17 libdispatch.dylib _dispatch_lane_serial_drain + 368
18 libdispatch.dylib _dispatch_lane_invoke + 380
19 libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh + 288
20 libdispatch.dylib _dispatch_workloop_worker_thread + 540
21 libsystem_pthread.dylib _pthread_wqthread + 288
We tested several different device models, and the crash only occurs on the iPhone 13 Pro Max.
Q1: Why does this crash happen?
Q2: With the same logic, why does the crash only occur on the iPhone 13 Pro Max?
Looking forward to your answers.
Hello experts, I'm trying to implement a material with custom shader code, but I saw that visionOS doesn't allow you to inject custom Metal functions or use CustomMaterial like iOS/macOS, nor can you directly write Metal Shading Language (.metal) and use it through ShaderGraphMaterial. So my question is: if I want to implement my own shader code, how should I do it?
Hi all,
I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:
-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330:
failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`
Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression?
Thanks in advance!
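Not an official answer, but one generic way to avoid hardcoding 32x32 is to derive the threadgroup size from the pipeline's reported limits; below is a minimal Swift sketch of that pattern, assuming a compute pipeline and encoder already exist. It is a general Metal technique, not a Vision Pro specific fix.

import Metal

// Sketch: derive a threadgroup size from the pipeline's reported limits
// instead of hardcoding 32x32x1 (1024 threads), which exceeds the
// 832-thread limit reported in the assert above.
func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    let width = pipeline.threadExecutionWidth
    // Largest height that keeps width * height within the pipeline's limit.
    let height = pipeline.maxTotalThreadsPerThreadgroup / width
    let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)

    encoder.setComputePipelineState(pipeline)
    // dispatchThreads lets Metal handle grids that are not a multiple of the
    // threadgroup size, on GPUs that support non-uniform threadgroup sizes
    // (Apple silicon GPUs generally do).
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
}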