In my SceneKit game I'm able to connect two players with GKMatchmakerViewController. Now I want to support the scenario where one of them disconnects and wants to reconnect. I tried to do this with this code:
nonisolated public func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
    Task { @MainActor in
        switch state {
        case .connected:
            break
        case .disconnected, .unknown:
            let matchRequest = GKMatchRequest()
            matchRequest.recipients = [player]
            do {
                try await GKMatchmaker.shared().addPlayers(to: match, matchRequest: matchRequest)
            } catch {
            }
        @unknown default:
            break
        }
    }
}

nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    guard let viewController = GKMatchmakerViewController(invite: invite) else {
        return
    }
    viewController.matchmakerDelegate = self
    present(viewController)
}
But after presenting the view controller with GKMatchmakerViewController(invite:), nothing else happens. I would expect matchmakerViewController(_:didFind:) to be called; otherwise, how would I get an instance of GKMatch?
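For reference, one alternative on the invited side is to skip the matchmaker UI entirely and ask GKMatchmaker for the match directly from the accepted invite. A minimal, untested sketch (it assumes self conforms to GKMatchDelegate):

nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    Task { @MainActor in
        do {
            // Turn the accepted invite into a GKMatch without GKMatchmakerViewController.
            let match = try await GKMatchmaker.shared().match(for: invite)
            match.delegate = self
            // Resume the game with the re-established match here.
        } catch {
            // Handle or log the matchmaking error.
        }
    }
}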
Here is the code I use to reproduce the issue; the reproduction steps are below.
Code
1. Run the attached project on an iPad and a Mac simultaneously.
2. On both devices, tap the ship to connect to Game Center.
3. Create an automatched match by tapping the rightmost icon on both devices.
4. When the two devices are matched, close the dialog on the iPad and tap the ship to disconnect from Game Center.
5. Wait until the Mac detects the disconnect and automatically sends an invitation to rejoin.
6. When the notification arrives on the iPad, tap it, then tap the ship to connect to Game Center again. The iPad receives the player(_:didAccept:) call, but nothing else, so there is no way to get a GKMatch instance again.
The “Explore spatial accessory input on visionOS” presentation from WWDC25 interests me. I bought both the Logitech Muse stylus and the PlayStation VR2 Sense controllers to try out with the sculpting app presented by the author, engineer Amanda Han. Unfortunately, the app itself was not included. Could the app be made available for download, along with the Xcode project? I appreciate any assistance the author and your team could provide. Thank you.
Topic:
Graphics & Games
SubTopic:
RealityKit
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode removes the distortion around the edges, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enabled
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
I think I really have tried everything, and I did everything according to the official documentation to support Game Mode on iOS and iPadOS, but no matter what I do, it just doesn't get triggered. Funnily enough, it works during development when I install the app via Xcode, but as soon as it is live on the store and I install it from there, Game Mode doesn't get triggered anymore. What I have at the moment:
I have added (even though it is deprecated)
<key>GCSupportsGameMode</key>
<true/>
I have set this (though it seems to be supported only on macOS)
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>
I have added
<key>LSSupportsGameMode</key>
<true/>
It just doesn't work. Is there anything else that needs to be done? Shouldn't the LSSupportsGameMode flag normally be enough on its own?
The reason this is so annoying is that my app is a real-time streaming app, and I want to benefit from minimised background activity for smoother gameplay and more consistent frame rates, as mentioned in the documentation.
Hi everyone,
I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:
Kinematic Mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic Mode: I tried keeping it .dynamic and applying linear velocity toward the hand position, but the movement feels laggy and unresponsive.
Hybrid Approach: I tried switching to .kinematic during DragGesture.onChange and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.
Has anyone found a clean way to drag objects while maintaining solid collisions?
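For concreteness, here is a minimal, untested sketch of the dynamic-mode variant described above: the body stays .dynamic and is steered toward the drag target with a clamped proportional velocity each frame, so the solver still resolves collisions. The gain and speed values are illustrative, and converting the gesture location into a world-space target is assumed to happen elsewhere:

import RealityKit
import simd

struct DragDriver {
    let gain: Float = 15      // how aggressively to chase the target
    let maxSpeed: Float = 3   // clamp to reduce tunneling at high speed

    // Call from DragGesture.onChanged with a world-space target position.
    func update(entity: Entity, target: SIMD3<Float>) {
        let current = entity.position(relativeTo: nil)
        var velocity = (target - current) * gain
        let speed = simd_length(velocity)
        if speed > maxSpeed {
            velocity *= maxSpeed / speed
        }
        // Setting the motion component's velocity lets the physics engine
        // integrate the move instead of teleporting the entity.
        entity.components.set(PhysicsMotionComponent(linearVelocity: velocity))
    }

    // Call from DragGesture.onEnded to hand control back to the simulation.
    func end(entity: Entity) {
        entity.components.set(PhysicsMotionComponent())
    }
}

Tuning the gain trades responsiveness against stability; adding a damping term can help if the object oscillates around the hand.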
Thanks for your help!
My procedural animation using IKRig occasionally jitters and throws this warning. It completely breaks immersion.
getMoCapTask: Rig must be of type kCoreIKSolverTypePoseRetarget
Any idea where to start looking or what could be causing this?
Hello
I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value. My idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory. Finally, the very first thread of the tile would be tasked with writing the value to a texture with write access.
Now, I can add the threadgroup memory argument to the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength call, which I am using the following way:
[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];
Unfortunately, all I am getting is the following error (runtime assertion):
-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion 'Set Threadgroup Memory Length Validation: offset + length(16) must be <= threadgroupMemoryLength(0).'
What am I doing wrong? How can I get threadgroup memory in a fragment shader? I know I could use tile shading and a compute function, but the problem is that here I would really like to stay with the fragment path. I would be grateful for any help.
When I add a WebP image to the asset catalog, I get this warning:
/Users/..../Assets.xcassets: The image set "Card-Back" references a file "Card-Back.webp", but that file does not have a valid extension.
Everything works; I see all my images perfectly.
How can I fix the 200+ warnings?
Hi there,
I'm wondering if it's possible under the iOS 26 developer beta to enable the MetalFX scaling info in the Metal HUD (via the MTL_HUD_ENABLED=1 environment variable) for my app.
This information has been added on macOS, but it looks to be absent on iPhone and iPad.
Hi Apple,
In visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader, dispatched from the main thread, to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs require execution on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering?
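Concretely, the update path I have in mind looks roughly like this sketch, based on RealityKit's LowLevelTexture (visionOS 2+), which hands back an MTLTexture that can be filled from my own command buffer. Descriptor fields, sizes, and the copy pipeline are placeholder assumptions:

import Metal
import RealityKit

// Create once: a RealityKit-owned texture that Metal can write into.
func makeStreamingTexture() throws -> (LowLevelTexture, TextureResource) {
    let desc = LowLevelTexture.Descriptor(
        pixelFormat: .rgba8Unorm,
        width: 1024,
        height: 1024,
        textureUsage: [.shaderRead, .shaderWrite])
    let lowLevel = try LowLevelTexture(descriptor: desc)
    let resource = try TextureResource(from: lowLevel)  // bind this to a material once
    return (lowLevel, resource)
}

// Per update: encode a copy on any thread; RealityKit picks up the new
// contents when the command buffer completes.
func encodeUpdate(lowLevel: LowLevelTexture,
                  commandBuffer: MTLCommandBuffer,
                  copyPipeline: MTLComputePipelineState,
                  source: MTLTexture) {
    let dst = lowLevel.replace(using: commandBuffer)
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(copyPipeline)
    encoder.setTexture(source, index: 0)
    encoder.setTexture(dst, index: 1)
    encoder.dispatchThreads(MTLSize(width: dst.width, height: dst.height, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
    encoder.endEncoding()
}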
Thank you very much.
Post can be removed.
In my game 854159268 (com.1791entertainment.qugame), in my quMostRecent3 leaderboard, the top 2 entries have 'vanished'. They were there yesterday. I know these players have played today, as I see their scores on other leaderboards.
Any ideas how to get these back?
These two players (me and my tester) are both on TestFlight builds; I'm not sure if that changes things.
Topic:
Graphics & Games
SubTopic:
GameKit
Hi everyone,
I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture.
While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge:
I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.
const bool canBuildHzb = m_depthTexture
    && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    // Pass 1: initialize HZB mip 0 from the depth texture.
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }

    const uint32_t hzbWidth = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads = 8;
    MTL::Size tgSize = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads,
                                   1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    // Pass 2..N: downsample each mip from the previous one,
    // currently with one encoder per mip.
    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads,
                                       1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}
My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder with memoryBarrier(scope:) between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth downsampling on TBDR? (A sketch of the single-encoder variant follows after these questions.)
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
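For reference on question 1, the single-encoder variant I am weighing looks roughly like this (Swift for brevity; names mirror the C++ snippet above, and the 8x8 threadgroup size is illustrative):

import Metal

func encodeHZBChain(commandBuffer: MTLCommandBuffer,
                    downsamplePipeline: MTLComputePipelineState,
                    mipViews: [MTLTexture]) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    for mip in 1..<mipViews.count {
        if mip > 1 {
            // Wait for the previous dispatch's writes before sampling that mip.
            encoder.memoryBarrier(scope: .textures)
        }
        encoder.setTexture(mipViews[mip - 1], index: 0)
        encoder.setTexture(mipViews[mip], index: 1)
        let grid = MTLSize(width: mipViews[mip].width,
                           height: mipViews[mip].height,
                           depth: 1)
        encoder.dispatchThreads(grid,
                                threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
    }
    encoder.endEncoding()
}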
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
As GKGameCenterViewController has been deprecated, it seems that GKAccessPoint is now the correct way to present the Game Center leaderboard.
But the placement options for the GKAccessPoint are very limited and lead to a misaligned UI that looks clunky, as the GKAccessPoint does not align with the system navigation toolbar.
Am I missing something here, or am I just stuck with a lopsided UI now?
I much preferred how this worked previously, when I could present the GKGameCenterViewController in a sheet from my own button.
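For what it's worth, a minimal sketch of the programmatic route, which keeps a custom button but still goes through the access point (untested):

import GameKit

// Open the Game Center leaderboard UI from your own button,
// without showing the floating access point widget.
func showLeaderboards() {
    GKAccessPoint.shared.trigger(state: .leaderboards) {
        // Runs when the Game Center UI is dismissed.
    }
}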
Hi!
I hope everyone reading is doing well. I am working on developing a reinforcement learning agent that involves sending key events to a window, which I've been doing by posting virtual key codes with CGEventCreateKeyboardEvent per the docs. There is no event source when I send the keyboard events.
However, when many keyboard events are happening (with the keys 'q', 'w', 'e', 'r', 'f', 'd', 's', space, and the arrow keys) in quick succession (<250 ms), the enable-dictation popup or the Fn-key emoji popup appears for seemingly no reason. I have verified that I am using the correct key codes for these key presses, so I am wondering what else could cause this to happen. It is as if I were choosing to press F5 or Fn. It does not happen when 'a' is the only button being pressed in quick succession.
One thing I have not been able to find easily is the key codes for dictation or the Fn key. Do these key codes overlap somehow?
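For reference, here is a minimal sketch of the posting code in question, with an explicit event source attached (the setup described above passes nil); the key code 0x0C is kVK_ANSI_Q. Whether attaching a source suppresses the dictation/Fn popups is untested:

import CoreGraphics

// Post a 'q' key press with an explicit HID event source attached.
let source = CGEventSource(stateID: .hidSystemState)
if let keyDown = CGEvent(keyboardEventSource: source, virtualKey: 0x0C, keyDown: true),
   let keyUp = CGEvent(keyboardEventSource: source, virtualKey: 0x0C, keyDown: false) {
    keyDown.post(tap: .cghidEventTap)
    keyUp.post(tap: .cghidEventTap)
}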
Thank you all for the help!
Hunter
Updated my app to include turn-based matches. Beta tested through TestFlight, and all was well between iOS 18.x and 26.2 devices. One beta tester upgraded to 26.2 during beta testing, and now when the matchmaker view controller is opened, it does not show existing matches. Worse, he can create new matches and play his turn, but the new match won't even show up in the matchmaker view controller, even after the opponent takes a turn.
My app has been reviewed and is ready for release, but I'd like to know how to solve this before I release. He has tried re-installing the app, including an updated TestFlight build that is the same as the about-to-be-released reviewed version.
Topic:
Graphics & Games
SubTopic:
GameKit
I have an odd bug: if I use initWithFrame: as the init routine for an NSView subclass that uses layers, I don't see this bug.
But if I embed this view in a storyboard with a .nib file and use initWithCoder:, I need to return YES from
(BOOL)contentsAreFlipped
in the NSView subclass.
If I don't, the CALayer actually renders from 0,0 in the view upwards and off the window.
The frame sizes for the NSView and the CALayer are good when I see them in updateLayer.
Obviously I have a fix, but I would like to understand why.
Topic:
Graphics & Games
SubTopic:
General
I am trying to create a simple portal like the one in RealityKit, but using Metal instead of RealityKit. Has anyone been able to create a window or portal-like thing to show a skybox outside in mixed reality?
Topic:
Graphics & Games
SubTopic:
Metal
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
Hi. I'm a 3D designer, using Blender for most of my work. The most recent Blender conference discussed utilizing the Open Shading Language (OSL) in their latest versions, which allows designers to write custom shaders for their workflows.
At the moment, only NVIDIA GPUs (via the OptiX backend) can use this language for GPU rendering, from what I understand, but Blender developers stated they are waiting on other GPU manufacturers to implement this feature as well. I'm not sure if there are any licensing issues here, but would this be something Apple could implement in Metal to make their hardware more attractive to the 3D design community?
Any help or knowledge on this topic would be greatly appreciated.