Hi, I'm a beginner with Metal 4 and Model I/O 🥺.
I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen.
Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup.
Here's what I have so far:
https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
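For reference, this is the pattern I'm trying to follow, pieced together from the MetalKit documentation (a sketch; mtkMesh, encoder, and the buffer index layout are placeholders for my own objects):
for (index, vertexBuffer) in mtkMesh.vertexBuffers.enumerated() {
    encoder.setVertexBuffer(vertexBuffer.buffer,
                            offset: vertexBuffer.offset,
                            index: index)
}
for submesh in mtkMesh.submeshes {
    encoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                  indexCount: submesh.indexCount,
                                  indexType: submesh.indexType,
                                  indexBuffer: submesh.indexBuffer.buffer,
                                  indexBufferOffset: submesh.indexBuffer.offset)
}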
Hi all,
I'm encountering an issue with Metal raytracing on my M5 MacBook Pro regarding Instance Acceleration Structure (IAS).
Intersection tests suddenly stop working after a certain point in the sampling loop.
Situation
I implemented an offline GPU path tracer that runs the same kernel multiple times per pixel (sampleCount) using metal::raytracing.
Intersection tests are performed using an IAS.
Since this is an offline path tracer, the geometries inside the IAS never change across samples (no transforms or updates).
As sampleCount increases, there comes a point where the number of intersections drops to zero, and remains zero for all subsequent samples.
Here's a code sketch:
let sampleCount: UInt16 = 1024
for sampleIndex: UInt16 in 0..<sampleCount {
    // ...
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Dispatch the intersection kernel.
        await commandBuffer.completed()
    }
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Use the intersection test results from the previous command buffer.
        await commandBuffer.completed()
    }
    // ...
}
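For completeness, the elided "Dispatch the intersection kernel" step expands to roughly the following sketch (the pipeline state, buffer indices, and the primitiveStructures array are stand-ins for my real objects); the kernel itself follows after it:
let commandBuffer = commandQueue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(intersectPipelineState)
encoder.setBuffer(rayBuffer, offset: 0, index: 0)
encoder.setBuffer(resultBuffer, offset: 0, index: 1)
encoder.setAccelerationStructure(instanceAccelerationStructure, bufferIndex: 2)

// The IAS references the primitive acceleration structures only indirectly,
// so they are explicitly marked resident for this dispatch.
for primitiveStructure in primitiveStructures {
    encoder.useResource(primitiveStructure, usage: .read)
}

encoder.dispatchThreads(MTLSize(width: rayCount, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()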
kernel void intersectAlongRay(
    const metal::uint32_t threadIndex [[thread_position_in_grid]],
    // ...
    const metal::raytracing::instance_acceleration_structure accelerationStructure [[buffer(2)]],
    // ...
)
{
    // ...
    const auto result = intersector.intersect(ray, accelerationStructure);
    switch (result.type) {
        case metal::raytracing::intersection_type::triangle: {
            // Write intersection result to device buffers.
            break;
        }
        default:
            break;
    }
}
Observations
Encoding both the intersection kernel and the subsequent result usage in the same command buffer does not resolve the problem.
Switching from IAS to Primitive Acceleration Structure (PAS) fixes the problem.
Rebuilding the IAS for each sample also resolves the issue.
Intersections produce inconsistent results even though the IAS and rays are identical — Image 1 shows a hit, while Image 2 shows a miss.
Questions
Am I misusing IAS in some way?
Could this be a Metal bug?
Any guidance or confirmation would be greatly appreciated.
My app has a number of heterogeneous GPU workloads that all run concurrently. Some of them should execute with the highest priority because the app's responsiveness depends on them, while others are triggered by file imports and the like and should run at a low priority. If this were running on the CPU, I'd assign the former the User Interactive QoS and the latter the Utility QoS. Is there an equivalent to this for GPU work?
I am puzzled by the setAddress(_:attributeStride:index:) of MTL4ArgumentTable. Can anyone please explain what the attributeStride parameter is for? The doc says that it is "The stride between attributes in the buffer." but why?
Who uses this, and for what? On the shader (C++) side, the stride is determined by the C++ type, as far as I know. What am I missing here?
Thanks!
Is the pseudocode below thread-safe? Imagine that the Main thread sets the CAMetalLayer's drawableSize to a new size meanwhile the rendering thread is in the middle of rendering into an existing MTLDrawable which does still have the old size.
Is the change of metalLayer.drawableSize thread-safe in the sense that I can present an old MTLDrawable which has a different resolution than the current value of metalLayer.drawableSize? I assume that setting the drawableSize property informs Metal that the next MTLDrawable offered by the CAMetalLayer should have the new size, right?
Is it valid to assume that "metalLayer.drawableSize = newSize" and "metalLayer.nextDrawable()" are internally synchronized, so it cannot happen that metalLayer.nextDrawable() would produce e.g. a MTLDrawable with the old width but with the new height (or a completely invalid resolution due to potential race conditions)?
func onWindowResized(newSize: CGSize) {
    // Called on the Main thread
    metalLayer.drawableSize = newSize
}

func onVsync(drawable: MTLDrawable) {
    // Called on a background rendering thread
    renderer.renderInto(drawable: drawable)
}
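In case CAMetalLayer makes no such guarantee, the fallback I have in mind is to serialize the two calls myself, roughly like this sketch (DrawableSource and drawableLock are my own names, not anything Metal provides):
import Foundation
import QuartzCore

final class DrawableSource {
    private let metalLayer: CAMetalLayer
    private let drawableLock = NSLock()

    init(metalLayer: CAMetalLayer) {
        self.metalLayer = metalLayer
    }

    // Called on the Main thread when the window resizes.
    func updateDrawableSize(_ newSize: CGSize) {
        drawableLock.lock()
        metalLayer.drawableSize = newSize
        drawableLock.unlock()
    }

    // Called on the rendering thread; the lock ensures nextDrawable() never
    // observes a half-updated size from a concurrent resize.
    func nextDrawable() -> CAMetalDrawable? {
        drawableLock.lock()
        defer { drawableLock.unlock() }
        return metalLayer.nextDrawable()
    }
}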
The maximumExtendedDynamicRangeColorComponentValue should report a value between 1.0 and maximumPotentialExtendedDynamicRangeColorComponentValue, depending on the available EDR headroom, whenever any on-screen content uses EDR.
This works fine in most scenarios, but in macOS 26 Tahoe (including 26.2) it seemingly breaks down when a third party external display is in HDR mode and the Mac goes to sleep and wakes up. After wake, the external display's NSScreen object reports only 1.0, no matter what, even though didChangeScreenParametersNotification fires when the SDR peak brightness is changed with the brightness slider, so the system should be providing an updated headroom value. This makes dynamic tone-mapping that adapts to the actual screen brightness impossible.
Everything works fine in Sequoia. In Tahoe the user has to turn off HDR, go through a sleep/wake cycle, and turn HDR back on to fix it, which is obviously not a sustainable workaround.
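For reference, the way I read and react to the headroom is roughly this (a sketch; HeadroomObserver is my own type, I use NSScreen.main here for brevity, and the actual tone-mapping call is elided):
import AppKit

final class HeadroomObserver {
    private var observer: NSObjectProtocol?

    init() {
        // The notification does fire after wake, but on Tahoe the value read
        // below stays at 1.0 for the third party external display.
        observer = NotificationCenter.default.addObserver(
            forName: NSApplication.didChangeScreenParametersNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.refreshHeadroom()
        }
        refreshHeadroom()
    }

    private func refreshHeadroom() {
        guard let screen = NSScreen.main else { return }
        let headroom = screen.maximumExtendedDynamicRangeColorComponentValue
        let potential = screen.maximumPotentialExtendedDynamicRangeColorComponentValue
        print("EDR headroom: \(headroom) of potential \(potential)")
        // ... feed headroom into the tone-mapping pass here.
    }
}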
I noticed that MTLPixelFormat has these cases:
case r32Float = 55
case rg32Float = 105
case rgba32Float = 125
But there is no rgb32Float case. What's the reason for this omission?
My goal is to print a debug message from a shader. I'm following the guide, which says to set the -fmetal-enable-logging Metal compiler flag and the following environment variables:
MTL_LOG_LEVEL=MTLLogLevelDebug
MTL_LOG_BUFFER_SIZE=2048
MTL_LOG_TO_STDERR=1
However, there's an issue with the guide: it only covers Xcode project setup, whereas I'm working on a Swift package. The package has a Metal-only target that's included in the main target like this:
targets: [
    // A separate target for shaders.
    .target(
        name: "MetalShaders",
        resources: [
            .process("Metal")
        ],
        plugins: [
            // https://github.com/schwa/MetalCompilerPlugin
            .plugin(name: "MetalCompilerPlugin", package: "MetalCompilerPlugin")
        ]
    ),
    // Main target
    .target(
        name: "MegApp",
        dependencies: ["MetalShaders"]
    ),
    .testTarget(
        name: "MegAppTests",
        dependencies: [
            "MegApp",
            "MetalShaders",
        ]
    )
]
So, to apply the compiler flag, I use MetalCompilerPlugin, which emits debug.metallib; it also allows defining a DEBUG macro for the shaders. This code compiles:
#ifdef DEBUG
logger.log_error("Hello There!");
os_log_default.log_debug("Hello thread: %d", gid);
// this proves that the code executes
result.flag = true;
#endif
The environment is set via the .xctestplan and validated to be in effect using ProcessInfo. However, nothing is printed to the Xcode console or to the Console app.
In an attempt to fix this, I'm trying to set up an MTLLogState, but makeLogState(descriptor:) fails with the following error:
if #available(iOS 18.0, *) {
    let logDescriptor = MTLLogStateDescriptor()
    logDescriptor.level = .debug
    logDescriptor.bufferSize = 2048
    // Error Domain=MTLLogStateErrorDomain Code=2 "Cannot create residency set for MTLLogState: (null)"
    // UserInfo={NSLocalizedDescription=Cannot create residency set for MTLLogState: (null)}
    let logState = try! device.makeLogState(descriptor: logDescriptor)
    commandBufferDescriptor.logState = logState
}
Some LLMs suggested that this is related to the Simulator, and indeed I was running the tests in the Simulator. However, the tests won't run on my iPhone... The workaround I found is running them on My Mac (Mac Catalyst). Surprisingly, logging works there, even without an MTLLogState. Still, the Simulator behaviour seems like a bug...
On a 14" MacBook Pro M3 I can profile the Metal app's performance by running it, then clicking on the M icon and choosing Profile after Replay.
On a Mac Studio M2 Ultra I cannot: the profiler starts and then crashes. I have tried everything, including reinstalling the OS, Xcode, and the Metal SDK, you name it.
The app uses the Metal 4 API. The content of the replayer errorinfo report is shown at the end.
Any ideas what is going on here and/or what else I can do to root-cause and fix this?
FWIW, it was worse on 26.1 (Xcode simply reported that Metal 4 profiling was not available). On 26.2 Xcode attempts to profile and invariably crashes.
=== Error summary: ===
1x DYErrorDomain (512) - guest app crashed (512)
1x com.apple.gputools.MTLReplayer (100) - Abort trap: 6
=== First Error ===
Domain: DYErrorDomain
Error code: 512
Description: guest app crashed (512)
GTErrorKeyPID: 26913
GTErrorKeyProcessName: GPUToolsReplayService
GTErrorKeyCrashDate: 2026-01-09 19:22:52 +0000
=== Underlying Error #1 ===
Domain: com.apple.gputools.MTLReplayer
Error code: 100
Description: Abort trap: 6
Call stack:
0 GPUToolsReplay 0x0000000249c25850 MakeNSError + 284
1 GPUToolsReplay 0x0000000249c26428 HandleCrashSignal + 252
2 libsystem_platform.dylib 0x00000001856c7744 _sigtramp + 56
3 libsystem_pthread.dylib 0x00000001856bd888 pthread_kill + 296
4 libsystem_c.dylib 0x00000001855c2850 abort + 124
5 libsystem_c.dylib 0x00000001855c1a84 err + 0
6 IOGPU 0x00000001a9ea60a8 -[IOGPUMetal4CommandQueue _commit:count:commitFeedback:].cold.1 + 0
7 IOGPU 0x00000001a9ea0df8 __77-[IOGPUMetal4CommandQueue commitFillArgs:count:args:argsSize:commitFeedback:]_block_invoke + 0
8 IOGPU 0x00000001a9ea1004 -[IOGPUMetal4CommandQueue _commit:count:commitFeedback:] + 148
9 AGXMetalG14X 0x00000001158a2c98 -[AGXG14XFamilyCommandQueue_mtlnext noMergeCommit:count:options:commitFeedback:error:] + 116
10 AGXMetalG14X 0x0000000115a45c14 +[AGXG14XFamilyRenderContext_mtlnext mergeRenderEncoders:count:options:commitFeedback:queue:error:] + 4740
11 AGXMetalG14X 0x00000001158a2b34 -[AGXG14XFamilyCommandQueue_mtlnext commit:count:options:] + 96
12 GPUToolsReplay 0x0000000249bf0644 GTMTLReplayController_defaultDispatchFunction_noPinning + 2744
13 GPUToolsReplay 0x0000000249befb10 GTMTLReplayController_defaultDispatchFunction + 1368
14 GPUToolsReplay 0x0000000249b7a61c _ZL16DispatchFunctionP21GTMTLReplayControllerPK11GTTraceFuncRb + 476
15 GPUToolsReplay 0x0000000249b8603c ___ZN35GTUSCSamplingStreamingManagerHelper19StreamFrameTimeDataEv_block_invoke + 456
16 Foundation 0x0000000186f6c878 __NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 24
17 Foundation 0x0000000186f6c740 -[NSBlockOperation main] + 96
18 Foundation 0x0000000186f6c6d8 __NSOPERATION_IS_INVOKING_MAIN__ + 16
19 Foundation 0x0000000186f6c308 -[NSOperation start] + 640
20 Foundation 0x0000000186f6c080 __NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 16
21 Foundation 0x0000000186f6bf70 __NSOQSchedule_f + 164
22 libdispatch.dylib 0x00000001855104d0 _dispatch_block_async_invoke2 + 148
23 libdispatch.dylib 0x000000018551aad4 _dispatch_client_callout + 16
24 libdispatch.dylib 0x00000001855056e4 _dispatch_continuation_pop + 596
25 libdispatch.dylib 0x0000000185504d58 _dispatch_async_redirect_invoke + 580
26 libdispatch.dylib 0x0000000185512fc8 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x0000000185513784 _dispatch_worker_thread2 + 180
28 libsystem_pthread.dylib 0x00000001856b9e10 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0x00000001856b8b9c start_wqthread + 8
Replayer breadcrumbs:
[
]
GTErrorKeyProcessSignal: SIGABRT
=== Setup ===
Capture device: star.localdomain (Mac14,14) - macOS 26.2 (25C56) - 0BA10D1D-D340-5F2E-934B-536675AF9BA1
Metal version: 370.64.2
Supported graphics APIs:
Metal device: Apple M2 Ultra
Supported GPU families: Apple1 Apple2 Apple3 Apple4 Apple5 Apple6 Apple7 Apple8 Mac1 Mac2 Common1 Common2 Common3 Metal3 Metal4
Replay device: star (Mac14,14) - macOS 26.2 (25C56) - 0BA10D1D-D340-5F2E-934B-536675AF9BA1
Metal version: 370.64.2
Supported graphics APIs:
Metal device: Apple M2 Ultra
Supported GPU families: Apple1 Apple2 Apple3 Apple4 Apple5 Apple6 Apple7 Apple8 Mac1 Mac2 Common1 Common2 Common3 Metal3 Metal4
Host: Mac14,14 - macOS 26.2 (25C56)
Tool: Xcode (17C52)
Known SDKs:
Hello!
I need to "draw" a set of particles into the texture. It would be trivial in render encoder of course. However, I would like to implement the task in compute kernel. Every particle draw operation is expected to set 5 texels - "center" one and left/right/upper/lower. Particles can and will overlap, so concurrent draws are to be expected.
I tried using texture atomics - atomic_store() to be more precise. This worked, albeit pretty slowly - too slow for my purpose.
Just to test what would happen, I tried using normal texture write(). I was expecting to see some kind of visual artefacts, but to my surprise, it worked very well (and much faster).
My question: is it safe? I understand that calling write() doesn't guarantee any ordering of the operations, so if multiple threads write to the same texel, the final value may come from any of those threads. But suppose all the threads were to write the very same color? Can I assume that the texel in question will have said color after the compute kernel finishes?
I am using an M2 Pro MacBook, but ideally I would love to get an answer for all Apple silicon devices. My texture format is R32Int (so as to be able to use atomics), but I could do with any single-channel format; the purpose of the texture is to be a binary mask of sorts.
Thanks!
Hello! I'm currently porting a video game console emulator to iOS and I'm trying to make the renderer (tested on macOS) work on iOS as well.
The emulator core is written in C++ and uses metal-cpp for rendering, whereas the iOS frontend is written in Swift with SwiftUI. I have an Objective-C++ bridging header for bridging the Swift and C++ sides.
On the Swift side, I create an MTKView. Inside the MTKView delegate, I run the emulator for 1 video frame and pass it the view's backing layer for it to render the final output image with. The emulator runs and returns, but when it returns I get a crash in Swift land (callstack attached below), inside objc_release, which indicates I'm doing something wrong with memory management.
My bridging interface (ios_driver.h):
#pragma once
#include <Foundation/Foundation.h>
#include <QuartzCore/QuartzCore.h>
void iosCreateEmulator();
void iosRunFrame(CAMetalLayer* layer);
Bridge implementation (ios_driver.mm):
#import <Foundation/Foundation.h>
extern "C" {
#include "ios_driver.h"
}
<...>
#define IOS_EXPORT extern "C" __attribute__((visibility("default")))

std::unique_ptr<Emulator> emulator = nullptr;

IOS_EXPORT void iosCreateEmulator() { ... }

// Runs 1 video frame of the emulator and
IOS_EXPORT void iosRunFrame(CAMetalLayer* layer) {
    void* layerBridged = (__bridge void*)layer;
    // Pass the CAMetalLayer to the emulator
    emulator->getRenderer()->setMTKLayer(layerBridged);
    // Runs the emulator for 1 frame and renders the output image using our layer
    emulator->runFrame();
}
My MTKView delegate:
class Renderer: NSObject, MTKViewDelegate {
    var parent: ContentView
    var device: MTLDevice!

    init(_ parent: ContentView) {
        self.parent = parent
        if let device = MTLCreateSystemDefaultDevice() {
            self.device = device
        }
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        let metalLayer = view.layer as! CAMetalLayer
        // Run the emulator for 1 frame & display the output image
        iosRunFrame(metalLayer)
    }
}
Finally, the emulator's render function that interacts with the layer:
void RendererMTL::setMTKLayer(void* layer) {
    metalLayer = (CA::MetalLayer*)layer;
}

void RendererMTL::display() {
    CA::MetalDrawable* drawable = metalLayer->nextDrawable();
    if (!drawable) {
        return;
    }
    MTL::Texture* texture = drawable->texture();
    <rest of rendering follows here using the drawable & its texture>
}
This is the Swift callstack at the time of the crash:
To my understanding, I shouldn't be violating ARC rules as my bridging header uses CAMetalLayer* instead of void* and Swift will automatically account for ARC when passing CoreFoundation objects to Objective-C. However I don't have any other idea as to what might be causing this. I've been trying to debug this code for a couple of days without much success.
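One thing I've been experimenting with (I'm not sure it's the right fix) is wrapping the per-frame call in an explicit autorelease pool on the Swift side, in case the crash is related to when the drawable obtained via nextDrawable() on the C++ side gets released:
func draw(in view: MTKView) {
    guard let metalLayer = view.layer as? CAMetalLayer else { return }
    // Drain autoreleased objects (e.g. the drawable) at the end of every frame.
    autoreleasepool {
        iosRunFrame(metalLayer)
    }
}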
If you need more info, the emulator code is also on GitHub:
Metal renderer: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/core/renderer_mtl/renderer_mtl.cpp#L58-L68
Bridge implementation: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/ios_driver.mm
Bridging header: https://github.com/wheremyfoodat/Panda3DS/blob/ios/include/ios_driver.h
Any help is more than appreciated. Thank you for your time in advance.
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large: around 1.5 GB uncompressed, and about 500 MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or the app installation process would handle compatibility. Does this compatibility extend across different GPU chips? I tried installing an IPA with a BinaryArchive collected only on an iPhone 12 onto an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for an MTLBinaryArchive to be distributed through App Thinning?
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there a rough estimate of how much that typically saves? Are there any ways to address the BinaryArchive size issue without impacting performance?
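For context, this is roughly how I create the archive from a URL today (a sketch; the resource name is a placeholder), which is why I need a plain file URL rather than an Asset Catalog entry:
func loadBinaryArchive(device: MTLDevice) throws -> MTLBinaryArchive? {
    guard let url = Bundle.main.url(forResource: "Shaders", withExtension: "binaryarchive") else {
        return nil
    }
    let descriptor = MTLBinaryArchiveDescriptor()
    descriptor.url = url
    return try device.makeBinaryArchive(descriptor: descriptor)
}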
Hello!
I have a question about how threadgroups work with tile shading. When running "traditional" compute, I get to choose both the threadgroup size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method, which controls how many threads run in each tile. So far so good, but what about threadgroups?
The examples in the video "Tile Shading on A11" seem to suggest that there will be only one threadgroup per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and is used to access the imageblock.
I assume this is the default configuration. So when one does the following:
Creates MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }
I understand that the result is a 1-to-1 mapping between the tile "pixels" and kernel threads. Now, what I would like is to have more than one threadgroup there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with a small threadgroup size; in fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set the tile size to 16x16, so that I am executing 256 threads there, only one SIMD group will be active in the threadgroup at a time, meaning that this SIMD group has to execute 8 times to cover the tile.
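For concreteness, the setup I'm describing looks roughly like this sketch (device, library, encoder, and the "tileKernel" function name are placeholders for my own objects):
func makeTilePipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let tileDescriptor = MTLTileRenderPipelineDescriptor()
    tileDescriptor.tileFunction = library.makeFunction(name: "tileKernel")!
    tileDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    tileDescriptor.threadgroupSizeMatchesTileSize = true
    return try device.makeRenderPipelineState(tileDescriptor: tileDescriptor,
                                              options: [],
                                              reflection: nil)
}

func encodeTilePass(encoder: MTLRenderCommandEncoder, tilePipeline: MTLRenderPipelineState) {
    // The render pass this encoder came from has tileWidth = 16, tileHeight = 16.
    encoder.setRenderPipelineState(tilePipeline)
    // One thread per tile "pixel": a 16x16 tile gets 256 threads.
    encoder.dispatchThreadsPerTile(MTLSize(width: 16, height: 16, depth: 1))
}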
Is this possible somehow? Or do the limitations of the API reflect limitations of the hardware itself, so that if I want to execute with SIMD-group-sized threadgroups I have to use a "traditional" compute encoder?
I will be grateful for any help.
Michał
Hi,
I am working on a large project. We compile each material to its own .metallib. They all include many common files full of inline functions, and at the end we link everything together into a single big pathtrace kernel. Everything works as expected, but the compile times have gotten completely out of hand: compiling to native code at runtime takes multiple minutes. I gather that I can do this offline by using metal-tt, but I'm wondering whether there is a way to reduce the compile times in such a scenario, and how to investigate the root cause of the problem. I suspect it could have to do with the fact that every material's metallib contains duplicates of all the inline functions. Any ideas on how to profile and debug this?
Thanks,
Rasmus
The code is pretty simple:
kernel void naive(
    constant RunParams *param [[ buffer(0) ]],
    const device float *A [[ buffer(1) ]],   // [N, K]
    device float *output [[ buffer(2) ]],
    uint2 gid [[ thread_position_in_grid ]]) {
    float val = 0.0f;
    // Walk one row of A (K contiguous floats) and accumulate.
    uint a_ptr = gid.x * param->K;
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];
    }
    output[gid.x] = val;
}
When uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS.
When uint a_ptr = gid.y * param->K, the code gets 860 GFLOPS.
param->K = 256;
threads per threadgroup: [16, 16]
I'd like to understand why the performance is so different, and how I can profile/diagnose this to guide further optimization.
I have run into an issue where I am trying to use atomic_float in a Swift package, but I cannot get things to compile because it appears that the Swift Package Manager doesn't support Metal 3 (atomic_float is Metal 3 functionality). Is there any way around this? I am using
// swift-tools-version: 6.1
and my Metal code includes:
#include <metal_stdlib>
#include <metal_geometric>
#include <metal_math>
#include <metal_atomic>
using namespace metal;
kernel void test(device atomic_float* imageBuffer [[buffer(1)]],
uint id [[ thread_position_in_grid ]]) {
}
But I get an error on the definition of atomic_float.
Any help, or more importantly a pointer to where I could have found information about this limitation, would be helpful.
-RadBobby
Hello!
I'm a developer working on a plugin for the Elgato Stream Deck called GPU Metrics. The plugin currently only works on Windows, but I'd like to bring it to macOS. However, based on forum posts I've read (and StackOverflow), there isn't a very clear path to querying GPU metrics like usage, temperature, used GPU memory, and power consumption. There are some tools out there that do similar things, but I wanted to ask what Apple's engineering team would recommend for getting this data via a public API.
Requirements:
Access GPU utilization, temperature, memory usage, power usage
C/C++ based API for querying the metrics so I can expose the data to JavaScript via Node Addon
No need for compatibility with Intel-based Macs; Apple silicon is fine for now
Plugin GitHub
Thank you!
Noah
Hi,
When analyzing our game using Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU column. The timing of "Drawable Present" seems to correspond to when the CPU calls present(_:) on the command buffer, rather than to when the actual encoding completes on the GPU. Also, what does "Drawable Presented" specifically mean? In our case, when a CPU stall occurs, it appears that the vsync interval changes in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K gives around 9 tokens/second decoding speed).
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is it strictly enforced for Metal allocations (heaps/buffers), or is it advisory, for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
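For reference, the check I run before attempting a load is roughly this (a sketch; canProbablyFit is my own helper and the headroom margin is arbitrary):
func canProbablyFit(modelBytes: UInt64, device: MTLDevice) -> Bool {
    // 8_589_934_592 (8192 MiB) is what the iPhone Air reports here.
    let workingSetLimit = device.recommendedMaxWorkingSetSize
    let alreadyAllocated = UInt64(device.currentAllocatedSize)
    print("working set limit: \(workingSetLimit), currently allocated: \(alreadyAllocated)")
    // Leave some headroom for the KV cache and intermediate buffers.
    return modelBytes + alreadyAllocated < workingSetLimit
}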
The following minimal snippet SEGFAULTS with SDK 26.0 and 26.1. It won't crash if I remove async from the enclosing function signature, but that's impractical in a real project.
import Metal
import MetalPerformanceShaders
let SEED = UInt64(0x0)
typealias T = Float16
/* Why run in an async context? Because of a global GPU object,
   an async makeMTLFunction,
   and an async makeMTLComputePipelineState.
   Nevertheless, the bug can be triggered without using the global:
@MainActor
let myGPU = MyGPU()
*/
@main
struct CMDLine {
    static func main() async {
        let ptr = UnsafeMutablePointer<T>.allocate(capacity: 0)
        async let future: Void = randomFillOnGPU(ptr, count: 0)
        print("Main thread is playing around")
        await future
        print("Successfully reached the end.")
    }

    static func randomFillOnGPU(_ buf: UnsafeMutablePointer<T>, count destbufcount: Int) async {
        // let (device, queue) = await (myGPU.device, myGPU.commandqueue)
        let myGPU = MyGPU()
        let (device, queue) = (myGPU.device, myGPU.commandqueue)
        // Init MTLBuffer, async let makeFunction, makeComputePipelineState, etc.
        let tempDataType = MPSDataType.uInt32
        let randfiller = MPSMatrixRandomMTGP32(device: device,
                                               destinationDataType: tempDataType,
                                               seed: Int(bitPattern: UInt(SEED)))
        print("randomFillOnGPU: successfully created MPSMatrixRandom.")
        // try await computePipelineState
        // ^ Crashes before this could return
        // Or in this minimal case, after randomFillOnGPU() returns
        // make encoder, set pso, dispatch, commit...
    }
}

actor MyGPU {
    let device: MTLDevice
    let commandqueue: MTLCommandQueue

    init() {
        guard let dev: MTLDevice = MPSGetPreferredDevice(.skipRemovable),
              let cq = dev.makeCommandQueue(),
              dev.supportsFamily(.apple6) || dev.supportsFamily(.mac2)
        else { print("Unable to get Metal Device! Exiting"); exit(EX_UNAVAILABLE) }
        print("Selected device: \(String(format: "%llX", dev.registryID))")
        self.device = dev
        self.commandqueue = cq
        print("myGPU: initialization complete.")
    }
}
See FB20916929. Apparently the Objective-C autorelease pool is releasing the wrong address during a context switch (across suspension points). I wonder why such an obvious case hasn't been caught before.