After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to try integrating the new MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary from a .mtlpackage.
Here is the code I have to create a MTLLibrary, following the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting this error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create a MTLLibrary from a .mtlpackage? Am I seeing this error because the .mtlpackage itself is invalid? How should I go about debugging this?
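One thing I still want to rule out is the URL construction itself: URL(string:) on a plain file path produces a URL with no file:// scheme. Here is the file-URL variant I'm testing next; this is just a minimal sketch of what I understand the session to suggest, not a confirmed fix (Bundle.main stands in for my bundle above):

// Build a proper file URL for the bundled package instead of URL(string:).
if let coreMLURL = Bundle.main.url(forResource: "my_model", withExtension: "mtlpackage") {
    do {
        let library = try metalDevice.makeLibrary(URL: coreMLURL)
        print("library functions: \(library.functionNames)") // sanity-check the contents
    } catch {
        print("makeLibrary failed: \(error)")
    }
}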
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
https://developer.apple.com/documentation/arkit/arkit_in_ios/specifying_a_lighting_environment_in_ar_quick_look
How can I disable it? Or at least use a custom texture that's just black?
I don't see the purpose of having the real-time environment probe that captures IBL when it always adds this fake studio IBL that you can't remove...
Hey everyone, I am currently developing a visionOS app and using Reality Composer Pro to create scenes to put into my app.
I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even where the opacity is zero. There is also some odd culling among the hair strands (in the left circle) and odd reflections on the hair cards (in the right circle).
Here are my settings for the materials.
Since all the hair strands are interconnected, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects.
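One approach I'm currently experimenting with is RealityKit's ModelSortGroupComponent, which lets me pin an explicit draw order for transparent meshes in code instead of relying on the default sorting. A minimal sketch with hypothetical names (hairStrands is my array of strand entities), and I haven't verified it fixes the hair-card artifacts:

import RealityKit

@MainActor
func applyHairSortOrder(to hairStrands: [Entity]) {
    // One sort group for all strands; a depth pre-pass helps layered transparency.
    let hairGroup = ModelSortGroup(depthPass: .prePass)
    for (index, strand) in hairStrands.enumerated() {
        // Lower `order` values draw earlier; order strands from innermost to outermost.
        strand.components.set(ModelSortGroupComponent(group: hairGroup, order: Int32(index)))
    }
}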
Please let me know if you know anything helpful, much appreciated!
I have something like this drawing in an MTKView (see at bottom).
I am finding it difficult to figure out when the Swift-side resources used in making the MTLBuffer(s) can be released. Below, for example, is it OK if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created?
I have been reading through various articles such as
Setting resource storage modes
Choosing a resource storage mode for Apple GPUs
Copying data to a private resource
but it's a lot to absorb and I haven't really been able to find an authoritative description of the required lifetime of the resources on the CPU side.
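For concreteness, here is the distinction I'm trying to pin down, as far as I understand it from the storage-mode articles (unverified, and pagePtr/pageLength below are hypothetical):

// makeBuffer(bytes:) copies the source data into the new buffer at creation
// time, so presumably the Swift-side storage is only needed until it returns:
let copied = device.makeBuffer(bytes: &args,
                               length: MemoryLayout.stride(ofValue: args),
                               options: .storageModeShared)

// makeBuffer(bytesNoCopy:) instead wraps existing page-aligned memory, so that
// memory must stay alive until the GPU has finished using the buffer:
let wrapped = device.makeBuffer(bytesNoCopy: pagePtr,
                                length: pageLength,
                                options: .storageModeShared,
                                deallocator: nil)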
I should mention that this is Metal 4 code. In previous versions of Metal, MTLCommandBuffer could register a completion handler to be called after the GPU finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for my purpose).
Any advice and/or pointers to the definitive literature will be appreciated.
guard let argsBuffer = device.makeBuffer(bytes: &args,...
argumentTable.setAddress(argsBuffer.gpuAddress, ...
encoder.setArgumentTable(argumentTable, stages: .vertex)
// encode drawing
renderEncoder.draw...
...
encoder.endEncoding() // 1
commandBuffer.endCommandBuffer() // 2
commandQueue.waitForDrawable(drawable)
commandQueue.commit([commandBuffer]) // 3
commandQueue.signalDrawable(drawable)
drawable.present()
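For what it's worth, the closest Metal 4 replacement for the old completion handler that I've found is signaling an MTLSharedEvent after the commit and observing it from the CPU. A minimal sketch, continuing the snippet above and assuming commandQueue is a MTL4CommandQueue (I haven't confirmed this is the intended pattern):

// Sketch: approximate the old addCompletedHandler with a shared event.
let completionEvent = device.makeSharedEvent()!
let listener = MTLSharedEventListener(dispatchQueue: .main)

commandQueue.commit([commandBuffer])
commandQueue.signalEvent(completionEvent, value: 1) // signaled once the GPU finishes

completionEvent.notify(listener, atValue: 1) { _, _ in
    // The GPU is done with this submission; any CPU-side resources tied to it
    // (beyond what makeBuffer(bytes:) already copied) can be recycled here.
}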
We are seeing crashes in the Xcode Organizer. So far we have not been able to reproduce them locally. They affect multiple app releases (some older ones built with Xcode 15.x, and newer ones built with Xcode 16.0). They only affect iOS 18.5.
Is there anything that changed in the latest iOS? It's hard to tell exactly what is causing this crash, because a symbolic breakpoint on CA::Render::Image::new_image(unsigned int, unsigned int, unsigned int, unsigned int, CGColorSpace*, void const*, unsigned long const*, void (*)(void const*, void*), void*) triggers all the time, but not necessarily with the preceding stack frames matching the crash report.
Is it a known issue?
(Crash report attached: crash.crash)
Thank you.
How can one match the walls and floor of a given CapturedRoom?
The transform.eulerAngles of a floor are always 0 in z and y!
And the polygons seem to have a different orientation than the walls.
So how do I figure out the rotation and match it with the one from the walls?
I am using the GameKit plugin for Unity to implement a turn-based game.
I want to make a UI in Unity to show all the games I can join.
I tried using
var matches = await GKTurnBasedMatch.LoadMatches();
to get all the open matches.
But it seems that I can only get the matches related to the current Apple account.
Can you help me get all the matches?
ALSO
I used
var match = await GKTurnBasedMatchmakerViewController.Request(request);
to show the Game Center matchmaker interface and, on exiting it, start a game (automatic matching, no one invited).
Another device used
var match = await GKTurnBasedMatch.Find(request);
to find the game, but it did not find it; instead it started a new game (automatic matching).
Can you help me solve these problems?
After launching a nearby experience from Quick Look or inside our app, all of the users in the group sometimes see the model blinking and becoming transparent...
...just one user doesn't have the issue: either the one who launched SharePlay or the user who force-aligned the immersive space in front.
Weird.
RealityKit spatial audio crackles and pops on iOS 26.0 beta 5.
It works correctly on iOS 18.6 and visionOS 26.0 beta 5.
The APIs used are AudioPlaybackController, Entity.prepareAudio, and Entity.play.
Videos of the expected and observed behavior are attached to the feedback FB19423059.
The audio should be a consistent, repeating sound, but it seems oddly abbreviated and the volume varies unexpectedly.
Thank you for investigating this issue.
I have two devices (an iPod and an iPhone), each using a different Apple ID. I have an existing game to which I'm adding turn-based matches (TBM). When the iPod invites the iPhone, it sends an iMessage invite to the iPhone; when I tap on that message, I get "Retrieving", and then Game Center in Settings is opened, not my app (the same version is installed on both devices). When I start my app on the iPhone, that match is not shown in the matchmaker view controller.
When I send an invite from the iPhone to the iPod and tap on the iMessage invite, the app starts but the match isn't listed in the matchmaker view controller on the iPod (though it is on the iPhone).
In addition, when I tap the info circle on the iPhone, it shows the two players and "App Store" under the Game Center name. However, when I do the same on the iPod, it shows "Play your turn" there.
Any ideas?
I implemented an EntityAction to change the baseColor tint, and I had it working on visionOS 2.x.
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if cgColor.numberOfComponents == 4,
           let c = cgColor.components {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}
struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIALISATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}
extension Entity {
    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard
            let modelComponent = components[ModelComponent.self],
            let material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else {
            return
        }

        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
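For context, this is how I register and trigger the action (a minimal usage sketch; myModelEntity is a placeholder for an entity from my scene):

@MainActor
func setUpAndRun(myModelEntity: Entity) {
    ColourAction.registerEntityAction()                      // register once at launch
    myModelEntity.changeColourTo(.systemRed, duration: 1.0)  // animate the tint over 1s
}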
This doesn't work on visionOS 26. My current fix is to set the material base colour directly, but this feels like the wrong approach:
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard
            let animationState = event.animationState,
            let entity = event.targetEntity,
            let modelComponent = entity.components[ModelComponent.self],
            var material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else { return }

        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
So before I raise this as a bug: was I doing something wrong in the earlier version and just got lucky? Is there a better approach?
The following minimal snippet segfaults with SDK 26.0 and 26.1. It won't crash if I remove async from the enclosing function signature, but that's impractical in a real project.
import Metal
import MetalPerformanceShaders

let SEED = UInt64(0x0)
typealias T = Float16

/* Why run in an async context? Because of the global GPU object,
   and async makeMTLFunction,
   and async makeMTLComputePipelineState.
   Nevertheless, the bug can be triggered without using the global:
@MainActor
let myGPU = MyGPU()
*/

@main
struct CMDLine {
    static func main() async {
        let ptr = UnsafeMutablePointer<T>.allocate(capacity: 0)
        async let future: Void = randomFillOnGPU(ptr, count: 0)
        print("Main thread is playing around")
        await future
        print("Successfully reached the end.")
    }

    static func randomFillOnGPU(_ buf: UnsafeMutablePointer<T>, count destbufcount: Int) async {
        // let (device, queue) = await (myGPU.device, myGPU.commandqueue)
        let myGPU = MyGPU()
        let (device, queue) = (myGPU.device, myGPU.commandqueue)
        // Init MTLBuffer, async let makeFunction, makeComputePipelineState, etc.
        let tempDataType = MPSDataType.uInt32
        let randfiller = MPSMatrixRandomMTGP32(device: device, destinationDataType: tempDataType, seed: Int(bitPattern: UInt(SEED)))
        print("randomFillOnGPU: successfully created MPSMatrixRandom.")
        // try await computePipelineState
        // ^ Crashes before this could return,
        //   or in this minimal case, after randomFillOnGPU() returns.
        // make encoder, set pso, dispatch, commit...
    }
}

actor MyGPU {
    let device: MTLDevice
    let commandqueue: MTLCommandQueue

    init() {
        guard let dev: MTLDevice = MPSGetPreferredDevice(.skipRemovable),
              let cq = dev.makeCommandQueue(),
              dev.supportsFamily(.apple6) || dev.supportsFamily(.mac2)
        else { print("Unable to get Metal Device! Exiting"); exit(EX_UNAVAILABLE) }
        print("Selected device: \(String(format: "%llX", dev.registryID))")
        self.device = dev
        self.commandqueue = cq
        print("myGPU: initialization complete.")
    }
}
See FB20916929. Apparently the Objective-C autorelease pool is releasing the wrong address during a context switch (across suspension points). I wonder why such an obvious case hasn't been caught before.
I use Unity 2020.3.48f1 to develop a game. To implement Apple services integration I use the Apple Unity plugins (https://github.com/apple/unityplugins). With the latest version of the plugins, I get an error in the Unity project after importing the plugin; it says "Not allowed platform VisionOS". When I tried an older version of the plugins, I got a runtime error when calling "var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();" on line 42: EXC_BAD_ACCESS (code=257, address=0x0000...). I tried different commits from the official repository and even custom branches of the Apple Unity plugins, such as https://github.com/muZZkat/unityplugins/tree/muzzkat/fix-fetch-items, but it did not help.
Here is the whole script that tries to use the Apple Unity plugins:
using System.Threading.Tasks;
using UnityEngine;
using System.Collections;
using System;
using Apple.GameKit;
using UnityEngine.UI;

public class TheScript : MonoBehaviour
{
    [SerializeField]
    InputField otp;

    string Signature;
    string TeamPlayerID;
    string Salt;
    string PublicKeyUrl;
    string Timestamp;

    void Start()
    {
        StartCoroutine(Call());
    }

    private IEnumerator Call()
    {
        yield return new WaitForSeconds(5);
        Login();
    }

    public async Task Login()
    {
        otp.text += $"Logging in... ";
        if (!Apple.GameKit.GKLocalPlayer.Local.IsAuthenticated)
        {
            try
            {
                var player = await GKLocalPlayer.Authenticate();
                var localPlayer = GKLocalPlayer.Local;
                TeamPlayerID = localPlayer.TeamPlayerId;

                var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();
                Signature = Convert.ToBase64String(fetchItemsResponse.GetSignature());
                PublicKeyUrl = fetchItemsResponse.PublicKeyUrl;

                otp.text += $"Team Player ID: {TeamPlayerID} ";
                otp.text += $"PublicKeyUrl: {PublicKeyUrl} ";
            }
            catch (Exception e)
            {
                otp.text += $"Error: " + e.Message;
            }
        }
        else
        {
            Debug.Log("AppleGameCenter player already logged in.");
        }
    }

    async Task SignInWithAppleGameCenterAsync(string signature, string teamPlayerId, string publicKeyURL, string salt, ulong timestamp)
    {
    }
}
My app has a number of heterogeneous GPU workloads that all run concurrently. Some of these should be executed with the highest priority because the app’s responsiveness depends on them, while others are triggered by file imports and the like which should have a low priority. If this was running on the CPU I’d assign the former User Interactive QoS and the latter Utility QoS. Is there an equivalent to this for GPU work?
The “Explore spatial accessory input on visionOS” presentation from WWDC25 interests me. I bought both the Logitech MUSE stylus and the PS VR2 Sense controllers to try out with the sculpting app presented by the author, engineer Amanda Han. Unfortunately the app itself was not included. Could the app be made available for download, along with the Xcode project? I appreciate any assistance the author and your team could provide. Thank you.
The code is pretty simple
kernel void naive(
    constant RunParams *param [[ buffer(0) ]],
    const device float *A [[ buffer(1) ]], // [N, K]
    device float *output [[ buffer(2) ]],
    uint2 gid [[ thread_position_in_grid ]]) {
    float val = 0.0f;
    uint a_ptr = gid.x * param->K;
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];
    }
    output[gid.x] = val;
}
When uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS.
When uint a_ptr = gid.y * param->K, the code gets 860 GFLOPS.
param->K = 256; threads per threadgroup: [16, 16].
I'd like to understand why the performance is so different, and how I can profile/diagnose this to guide further optimization.
I am trying to use the SVGF denoiser to denoise my ray-traced shadows (and other textures later). I do get a smoothed image, but with wonky denoising.
I need the depth-normal textures and motion textures for SVGF and assume that these are filled incorrectly in my case. However, neither the linked documentation nor the WWDC19 video explains how they should be defined. I am looking for answers to the following:
Is depth in red or alpha channel for the depth-normal texture?
Are the normals in screen space?
Is depth linear?
Is it distance or z coordinate in view space? Or even logarithmically scaled or something else?
Are the motion vectors supposed to be in pixels per frame?
What is the orientation of the axis? Is y up or down?
Are there are other restrictions on the formats?
The linked code did not help me either (I have not found any SVGF in it so far; also, all the code is in Objective-C++, not Swift, but that's a different topic).
So how should I fill these textures?
Can someone point me to the documentation where these kinds of questions are answered?
Hello!
I need to "draw" a set of particles into the texture. It would be trivial in render encoder of course. However, I would like to implement the task in compute kernel. Every particle draw operation is expected to set 5 texels - "center" one and left/right/upper/lower. Particles can and will overlap, so concurrent draws are to be expected.
I tried using texture atomics - atomic_store() to be more precise. This worked, albeit pretty slowly - too slow for my purpose.
Just to test what would happen, I tried using normal texture write(). I was expecting to see some kind of visual artefacts, but to my surprise, it worked very well (and much faster).
My question: is it safe? I understand that calling write() doesn't guarantee any ordering of the operations, so if multiple threads write to the same texel, the final value may come from any of those threads. But suppose all the threads write the very same color: can I assume that the texel in question will have that color after the compute kernel finishes?
I am using an M2 Pro MacBook, but ideally I would love to get the answer for all Apple Silicon devices. My texture format is R32Int (so as to be able to use atomics), but I could do with any single-channel format; the purpose of the texture is to be a binary mask of sorts.
Thanks!
Hello
I would like to know how to combine two animations with RealityKit (one animation for the arms and one for the legs, for example).
I saw this Apple demo that seems to explain it, but I don't understand at all how to do it...
Thanks
Can the "浮遊時計 Premium" (Floating Clock Premium) app on the App Store measure the refresh rate in 1 Hz or 10 Hz steps, or at three or more levels?