I typically read an extended gamepad's capture() and get all of its state, but PSVR2 controllers seem to report nothing, so the sticks and other buttons don't do anything in a built app. They do register as left/right controllers. This is on visionOS 26, Xcode 26, etc.
They work correctly in the main icon view, although they don't honor inverted vertical and horizontal scrolling. Both of the default scroll directions just feel wrong: when I move left I want to scroll left, not right, and the same goes for up/down.
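For reference, here is roughly how I'm reading the state (a simplified sketch; it uses the standard GameController snapshot API):

import GameController

// Snapshot each connected controller and read its extended-gamepad state.
for controller in GCController.controllers() {
    let snapshot = controller.capture()
    guard let pad = snapshot.extendedGamepad else { continue }
    let x = pad.leftThumbstick.xAxis.value   // always 0 with PSVR2 controllers
    let y = pad.leftThumbstick.yAxis.value   // always 0 with PSVR2 controllers
    let aPressed = pad.buttonA.isPressed     // never true with PSVR2 controllers
    print("stick: (\(x), \(y)) A: \(aPressed)")
}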
I have published a number of games that use SpriteKit for everything important. Since the release of macOS Tahoe, I've had a lot of end user reports saying that sound effects have stopped working in many (but not all) of my titles.
I'm not doing anything unusual here – typical code is:
sndGameOver = [SKAction playSoundFileNamed:@"Audio/GameOver.wav" waitForCompletion:YES];
Then at the appropriate time:
[self runAction:sndGameOver];
Has anyone else encountered this? The code still works fine on previous operating systems, and appears to be fine on iOS too. Has something changed in macOS Tahoe?
I'm at a bit of a loss. There's nothing obviously different between the titles that do work and the titles that don't.
Suggestions welcomed!
Thanks
I am working on a custom resolve tile shader for a client. I see a big difference in performance depending on where we write to:
1. The resolve texture of the color attachment
2. A read-write tile shader texture set via [renderEncoder setTileTexture:myResolvedTexture]
Option 2 is more than twice as slow as option 1.
Our compute shader writes to 4 UAVs, so using only the resolve-texture entry is not possible.
Why is there such a difference when no additional data is being written? Can option 2 be made as fast as option 1?
I can demonstrate the issue in a modified version of the Multisample code sample.
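For clarity, the two write paths look roughly like this on the host side (a simplified sketch; in our real code the resolve is a custom tile pipeline rather than the built-in store action, and myResolvedTexture is our own texture):

// Option 1: write to the resolve texture of the color attachment
passDescriptor.colorAttachments[0].resolveTexture = myResolvedTexture
passDescriptor.colorAttachments[0].storeAction = .multisampleResolve

// Option 2: bind the same texture as a tile-stage texture and
// write to it from the tile shader instead
renderEncoder.setTileTexture(myResolvedTexture, index: 0)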
I just upgraded my macOS, Xcode, and Simulator to the newest beta, version 26.
I then found two issues when building my app with Xcode 26 and running it on simulator 26.
The Game Center access point no longer shows up in the app. This is how it has been configured in the past, and it still works on simulator 18.4:
func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without game center
        }
    }
}
After a game ends, the leaderboard won't load. This is how it has been implemented in the past, and it still works on simulator 18.4:
struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...

    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
Hi,
Introducing Swift Concurrency to my Metal app has been a bit challenging, as Swift Concurrency is limited by the cooperative thread pool.
GPU work is obviously not CPU-bound and can block forward progress, especially when using waitUntilCompleted on the command buffer. For concurrent render work this has the potential of underutilizing the CPU and even creating deadlocks.
My question is: what is the Metal team's general recommendation when it comes to concurrency? It seems to me that Dispatch or OperationQueues are still the preferred way to schedule Metal-bound tasks for maximum performance.
To integrate with Swift Concurrency, my idea is to use continuations that kick off render jobs via Dispatch or queues, as sketched below. Would this be the best way to bridge async tasks with Metal work?
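Concretely, the bridge I have in mind looks something like this (a minimal sketch, assuming the work is already encoded and we only need to await completion):

import Metal

// Await a command buffer without blocking a cooperative-pool thread
// in waitUntilCompleted.
func finish(_ commandBuffer: MTLCommandBuffer) async {
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        commandBuffer.addCompletedHandler { _ in
            continuation.resume()
        }
        commandBuffer.commit()
    }
}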
Thanks!
I'm updating our app to support Metal 4, but the Metal 4 types don't seem to be recognized when targeting the simulator. Is it known whether Metal 4 will be supported there in the near future, or am I setting up the app wrong?
Hi all,
I've encountered a potential issue with how the winding order of geometry is handled when its transformation involves negative scaling.
I created a simple test asset, a single triangle, to demonstrate this. The triangle's vertices are defined in a counter-clockwise ("right-handed") winding order, and its transform has a negative scale on the X-axis. According to the OpenUSD specification, this negative determinant in the transformation matrix should effectively reverse the winding order of the geometry:
However, any given gprim's local-to-world transformation can flip its effective orientation, when it contains an odd number of negative scales. This condition can be reliably detected using the (Jacobian) determinant of the local-to-world transform: if the determinant is less than zero, then the gprim's orientation has been flipped, and therefore one must apply the opposite handedness rule when computing its surface normals (or just flip the computed normals) for the purposes of hidden surface detection and lighting calculations.
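In code, the check the spec describes is just the sign of the determinant of the matrix's linear part. A minimal Swift sketch:

import simd

// Orientation is flipped when the upper-left 3x3 (linear) part of the
// local-to-world matrix has a negative determinant.
func isOrientationFlipped(_ m: simd_float4x4) -> Bool {
    let c0 = SIMD3<Float>(m.columns.0.x, m.columns.0.y, m.columns.0.z)
    let c1 = SIMD3<Float>(m.columns.1.x, m.columns.1.y, m.columns.1.z)
    let c2 = SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    return simd_float3x3(columns: (c0, c1, c2)).determinant < 0
}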
When I view the asset in tools like Blender or Preview on macOS, it behaves as expected. The triangle's effective orientation is flipped to CW.
However, when the same asset is viewed in Reality Composer Pro or with QuickLook on iOS, its effective orientation remains CCW. In other words, the triangle faces the opposite direction.
My questions for the community and Apple are:
Is this behavior in RealityKit a known issue?
If this is a known issue, is there official guidance for DCC tools on how to export USDZ assets to ensure they appear correctly in the Apple ecosystem?
Any insights or recommendations would be greatly appreciated.
Hi, developers,
I maintain a shipped app that uses string concatenation to construct Metal shaders and compiles them on-device. Beta 4 seems to have disabled the __asm keyword, resulting in compilation failure.
The error is:
v2/GEMMKernel.cpp:229: error: program_source:23:9: error: illegal string literal in 'asm'
__asm("air.simdgroup_async_copy_1d.p3i8.p1i8");
The relevant code is available at https://github.com/liuliu/ccv/blob/unstable/lib/nnc/mfa/v2/GEMMHeaders.cpp#L30 although any __asm will trip this.
Please give us guidance on whether this is a regression or something that will be enforced in the 26 release. Personally, I would consider this a bug, given that it won't impact already-"compiled" shaders.
Thanks for your patience reading this!
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating instancing is working:
To test, I made a base visionOS app with an immersive space, with the entity replaced by my test usdz file. I've been using the RealityKit Trace profiling template in Xcode Instruments, with the immersive space open and the volume closed. This gets consistent draw call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in Blender and export with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime (see the sketch below)
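For reference, my runtime attempt (the last item above) looks roughly like this. I'm quoting the Contents/Instance API from memory, so treat it as a sketch rather than working code:

import RealityKit

// Build a mesh whose contents contain one model and two instances of it.
func makeInstancedMesh() throws -> MeshResource {
    let sphere = MeshResource.generateSphere(radius: 0.1)
    var contents = sphere.contents
    guard let model = Array(contents.models).first else { return sphere }
    contents.instances = MeshInstanceCollection([
        MeshResource.Instance(id: "left", model: model.id, at: Transform(translation: [-0.2, 0, 0]).matrix),
        MeshResource.Instance(id: "right", model: model.id, at: Transform(translation: [0.2, 0, 0]).matrix),
    ])
    return try MeshResource.generate(from: contents)
}

Even with this, Instruments still shows one draw call per instance.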
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
I'm building a game with a client-server architecture. Using GKMatch.chooseBestHostingPlayer(_:) rarely works. When I started testing it today, it worked once at the very beginning, and since then it always succeeds on one client and returns nil on the other client. I'm testing with a Mac and an iPhone. Sometimes it fails on the Mac, sometimes on the iPhone. On the device that it succeeds on, the provided host can be the device itself or the other one.
I created FB9583628 in August 2021, but after the Feedback Assistant team replied that they are not able to reproduce it, the feedback never went forward.
import SceneKit
import GameKit

#if os(macOS)
typealias ViewController = NSViewController
#else
typealias ViewController = UIViewController
#endif

class GameViewController: ViewController, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
    var match: GKMatch?
    var matchStarted = false

    override func viewDidLoad() {
        super.viewDidLoad()
        GKLocalPlayer.local.authenticateHandler = authenticate
    }

    private func authenticate(_ viewController: ViewController?, _ error: Error?) {
        #if os(macOS)
        if let viewController = viewController {
            presentAsSheet(viewController)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            GKDialogController.shared().present(viewController)
        }
        #else
        if let viewController = viewController {
            present(viewController, animated: true)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            present(viewController, animated: true)
        }
        #endif
    }

    private func defaultMatchRequest() -> GKMatchRequest {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2
        request.defaultNumberOfPlayers = 2
        request.inviteMessage = "Ciao!"
        return request
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        print("cancelled")
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
        print(error)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
        self.match = match
        match.delegate = self
        startMatch()
    }

    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("\(player.gamePlayerID) changed state to \(String(describing: state))")
        startMatch()
    }

    func startMatch() {
        let match = match!
        if matchStarted || match.expectedPlayerCount > 0 {
            return
        }
        print("starting match with local player \(GKLocalPlayer.local.gamePlayerID) and remote players \(match.players.map({ $0.gamePlayerID }))")
        match.chooseBestHostingPlayer { host in
            print("host is \(String(describing: host?.gamePlayerID))")
        }
    }
}
I recently published my first game on the App Store. It uses SceneKit with a SpriteKit overlay. All crashes Xcode downloaded for it so far are related to some SpriteKit/SceneKit internals.
The most common crash is caused by SKCShapeNode::_NEW_copyRenderPathData. What could cause such a crash?
[Attached crash log: crash.crash]
While developing this game (and the BoardGameKit framework that appears in the crash log) over the years, I experienced many crashes presumably caused by the SpriteKit overlay (I opened a post, "SceneKit app randomly crashes with EXC_BAD_ACCESS in jet_context::set_fragment_texture", about such a crash in September 2024), and other people on the internet also mention crashes when using SpriteKit as a SceneKit overlay. Should I use a separate SKView laid on top of the SCNView rather than setting SCNView.overlaySKScene? That seemed to solve the crashes for someone on Stack Overflow, but is it also encouraged by Apple?
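For reference, the separate-SKView approach would look roughly like this (an untested sketch, iOS-style, inside the view controller that owns the SCNView; `view` is the controller's root view):

import SceneKit
import SpriteKit

// Instead of scnView.overlaySKScene, stack a transparent SKView on top.
let skView = SKView(frame: view.bounds)
skView.allowsTransparency = true          // let the SceneKit frame show through
skView.isUserInteractionEnabled = false   // keep touches going to the SCNView
let overlay = SKScene(size: skView.bounds.size)
overlay.backgroundColor = .clear
skView.presentScene(overlay)
view.addSubview(skView)                   // added after the SCNView, so it sits above it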
I know SceneKit is deprecated, but according to Apple critical bugs would still be fixed. Could this be considered a critical bug?
Hello!
I'm currently building an app where I feed images into a Photogrammetry session to create a USDZ. Pretty straightforward, and it works great. We've recently started some testing on older devices, and have discovered that photogrammetry requires devices that have LiDAR (we've seen some console logs referencing LiDAR when we stumble through a photogrammetry process without checking isSupported first).
Judging from @swredcam's posting about ReefScan from November 24 (https://developer.apple.com/forums/thread/769221), it looks like photogrammetry did work on those non-LiDAR devices. In my own testing on an iPhone 12 mini with iOS 17, PhotogrammetrySession says it's not supported.
Since we're only feeding in a sequence of photos that have never had depth data, and they process fine on Pro/Max devices, we're curious why this would require a LiDAR sensor when it seems to have worked without LiDAR in the past. Or is there some other limitation of non-Pro devices that causes photogrammetry to not be supported (especially on today's really powerful hardware)?
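For context, the gate we run is simply this (simplified; imagesFolderURL stands in for our real input):

import RealityKit

// On the iPhone 12 mini with iOS 17 this guard fails for us.
func makeSession(imagesFolderURL: URL) throws -> PhotogrammetrySession? {
    guard PhotogrammetrySession.isSupported else { return nil }
    return try PhotogrammetrySession(input: imagesFolderURL)
}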
Thanks!
We as a team of engineers work on an app intended to visualize medical images. The type of situations where the app is used involves time critical decision making for acute clinical conditions. Stability of the app and performance are of utmost importance and can directly help timely treatment action. The app we are developing uses multiple libraries and tools like vtk, webgl, opengl, webkit, gl-matrix etc.
The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier, we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application.
We have taken reference from the cornerstone.js code and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d
Steps to Reproduce:
Load the above-mentioned test example on an iPhone running iOS 18.2 using Safari.
Perform volume rendering using the provided dataset.
Measure the time taken by the volume for each rotate or drag action.
Repeat the same steps on an iPhone running iOS 18.1 for comparison.
Additional Information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)
I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know.
Thank you.
I am an AR developer working on Apple Silicon Macs. Currently, Reality Composer Pro does not allow exporting .reality files, and Reality Composer (classic) is not available for Apple Silicon. This creates a gap in the workflow for ARKit/RealityKit developers who need interactive .reality files for use in Xcode projects.
Having the ability to export .reality files directly from Reality Composer Pro on Mac would greatly streamline development and enable a fully native workflow on modern Macs. Alternatively, bringing Reality Composer (classic) to Apple Silicon would also resolve this issue.
I have submitted this as a feature request via Feedback Assistant (FB17900386). I encourage others with similar needs to reply or submit feedback as well.
Thank you!
I am integrating MetalFX FrameInterpolator into a custom Unity RenderGraph–based render pipeline (C++ native plugin + C# render passes), and I am hitting the following assertion at runtime:
/MetalFXDebugError.h:29: failed assertion `Color texture width mismatch from descriptor'
What makes this confusing is that all input/output textures have the correct width and height, and they exactly match the values specified in the MTLFXFrameInterpolatorDescriptor.
Setup
Input resolution: 1024 x 512
Output resolution: 2048 x 1024
MTLFXTemporalScaler is created first and then passed into MTLFXFrameInterpolator
The TemporalScaler and FrameInterpolator descriptors use the same input/output sizes and formats
All Metal textures:
Have no parentTexture
Are 2D textures
Match the descriptor sizes exactly (verified via logging)
Texture bindings at encode time
frameInterpolator.colorTexture = mtlTexColor; // 1024 x 512
frameInterpolator.prevColorTexture = mtlTexPrevColor; // 1024 x 512
frameInterpolator.motionTexture = mtlTexMotion; // 1024 x 512
frameInterpolator.depthTexture = mtlTexDepth; // 1024 x 512
frameInterpolator.uiTexture = mtlTexUI; // 2048 x 1024
frameInterpolator.outputTexture = mtlTexOutput; // 2048 x 1024
All widths/heights are logged and match:
Color : 1024 x 512 (input)
PrevColor : 1024 x 512 (input)
Motion : 1024 x 512 (input)
Depth : 1024 x 512 (input)
UI : 2048 x 1024 (output)
Output : 2048 x 1024 (output)
The TemporalScaler works correctly on its own.
The assertion only occurs when using FrameInterpolator.
Important detail about colorTexture
Originally, colorTexture was copied from BuiltinRenderTextureType.CurrentActive.
After reading that this might violate MetalFX semantics, I changed the pipeline so that:
colorTexture now comes from a dedicated private RenderGraph texture
It is not the backbuffer
It is not a drawable
It is not used as a final output
It is created before UI rendering
Despite this, the assertion still occurs.
Question
Can uiTexture for MTLFXFrameInterpolator legally come from a texture copied from BuiltinRenderTextureType.CurrentActive?
More generally:
Are there additional hidden constraints on colorTexture / prevColorTexture (such as Metal usage, storageMode, aliasing, or hazard tracking) that could cause this assertion, even when sizes match?
Does FrameInterpolator require colorTexture and prevColorTexture to be created in a very specific way (e.g. non-aliased, ShaderRead usage, identical Metal resource properties)?
Any clarification on the exact semantic requirements for colorTexture, prevColorTexture, or uiTexture in MetalFX FrameInterpolator would be greatly appreciated.
Hi all
I have two mysterious issues with saving and fetching data to and from iCloud. Both reproduce only on the first launch of the app.
1. [GKLocalPlayer fetchSavedGamesWithCompletionHandler:]
On the first attempt I see 0 saved games (but I know that there is at least one saved game) and there is no error. If I try to fetch one more time (without any additional actions), even immediately after the first attempt, I receive the saved games correctly (not 0).
2. [GKLocalPlayer saveGameData:withName:completionHandler:]
On the first attempt I get the error "The requested operation could not be completed because local player has not been authenticated." If I try to save one more time (without any additional actions), even immediately after the first attempt, I can save the data successfully without any error.
I found the same issue on Stack Overflow, but there are no fixes...
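For what it's worth, the obvious workaround, matching the behavior described above, is to retry once (a sketch):

import GameKit

// The first fetch after launch reports 0 games with no error;
// an immediate retry returns the real list.
func fetchSavedGames(retriesLeft: Int = 1) {
    GKLocalPlayer.local.fetchSavedGames { games, error in
        if error == nil, (games ?? []).isEmpty, retriesLeft > 0 {
            fetchSavedGames(retriesLeft: retriesLeft - 1)
        } else {
            // use `games` / handle `error` here
        }
    }
}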
Hi! I watched the WWDC25 session "Bring your SceneKit project to RealityKit" which seemed like a great resource for those of us transitioning from the now-deprecated SceneKit framework. The session mentioned that the full sample code for the project would be available to download, but I haven't been able to find it in the Code section of the video page or in the Sample Code Library.
Has the sample code been released yet? Having the project code would make it much easier to follow along with the RealityKit changes shown in the video. Thanks again for the great session.
In my project I need to do the following:
At runtime, create a Metal dynamic library from source.
At runtime, create a Metal executable library from source and link it with the previously created dynamic library.
Create a compute pipeline using the two libraries created above.
But I get the following error at the third step:
Error Domain=AGXMetalG15X_M1 Code=2 "Undefined symbols:
_Z5noisev, referenced from: OnTheFlyKernel
" UserInfo={NSLocalizedDescription=Undefined symbols:
_Z5noisev, referenced from: OnTheFlyKernel
}
import Foundation
import Metal

class MetalShaderCompiler {
    let device = MTLCreateSystemDefaultDevice()!
    var pipeline: MTLComputePipelineState!

    func compileDylib() -> MTLDynamicLibrary {
        let source = """
        #include <metal_stdlib>
        using namespace metal;

        half3 noise() {
            return half3(1, 0, 1);
        }
        """
        let option = MTLCompileOptions()
        option.libraryType = .dynamic
        option.installName = "@executable_path/libFoundation.metallib"
        let library = try! device.makeLibrary(source: source, options: option)
        let dylib = try! device.makeDynamicLibrary(library: library)
        return dylib
    }

    func compileExlib(dylib: MTLDynamicLibrary) -> MTLLibrary {
        let source = """
        #include <metal_stdlib>
        using namespace metal;

        extern half3 noise();

        kernel void OnTheFlyKernel(texture2d<half, access::read> src [[texture(0)]],
                                   texture2d<half, access::write> dst [[texture(1)]],
                                   ushort2 gid [[thread_position_in_grid]]) {
            half4 rgba = src.read(gid);
            rgba.rgb += noise();
            dst.write(rgba, gid);
        }
        """
        let option = MTLCompileOptions()
        option.libraryType = .executable
        option.libraries = [dylib]
        let library = try! self.device.makeLibrary(source: source, options: option)
        return library
    }

    func runtime() {
        let dylib = self.compileDylib()
        let exlib = self.compileExlib(dylib: dylib)

        let pipelineDescriptor = MTLComputePipelineDescriptor()
        pipelineDescriptor.computeFunction = exlib.makeFunction(name: "OnTheFlyKernel")
        pipelineDescriptor.preloadedLibraries = [dylib]

        pipeline = try! device.makeComputePipelineState(descriptor: pipelineDescriptor, options: .bindingInfo, reflection: nil)
    }
}
I am developing a macOS terminal app, running on an M4 Pro, and using Metal.
I am not able to use float8 or float16; both report "Variable has incomplete type 'float16' (aka '__Reserved_Name__Do_not_use_float16')".
Based on the system, I should be able to use these. Either it is because it is also compiling for Intel, where they are not allowed, or it's something else. Either way, I have not been able to figure out how to get past this.
Is there a compiler setting I need to set to make this work? If so, which one, and what value do I need? I only want to run this on M-series processors on the latest version of the OS, so I am not interested in an Intel version or backward compatibility.
Can I use them in SK and do the animations work?
Thanks, Patrick