Hello,
We are experiencing an issue with apparent lag or latency when interacting with objects in Unity using the XR Interaction Toolkit. Even when interactables are configured with the Movement Type set to Instantaneous, objects exhibit a noticeable delay when following hand movement. This issue is particularly apparent when the hand moves at higher speeds.
We would like to know: is any additional configuration required, or is there something else we need to do to resolve this?
Thank you very much for your assistance.
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the "launcher" app might use the Swift Process API or a similar method to invoke the java command-line tool (let's assume the user has installed Java themselves) and run the game.
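For reference, such a launcher would spawn the game roughly along these lines (a minimal sketch; the Java path and jar location are placeholders):

import Foundation

// A minimal sketch: a real launcher would locate the user's Java install
// and the game's jar instead of using hard-coded placeholder paths.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/java")
process.arguments = ["-jar", "/path/to/game.jar"]
do {
    try process.run()
} catch {
    print("Failed to launch Java: \(error)")
}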
I have seen How to Enable Game Mode. If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
The launcher itself can cause Game Mode to activate if the launcher is fullscreened. However, if the launcher starts a Java process that opens its own window, and that Java window is then fullscreened, Game Mode doesn't seem to activate. In this case activating Game Mode for the launcher itself is unnecessary, but you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems like the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow for this? Changing a sample app's bundle identifier to be the same as the official Minecraft launcher gets the behavior I want, but obviously this is not a practical solution.)
When using simd_inverse functions, the same input yields different results on different devices.
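For reference, a minimal sketch of the kind of call involved (the matrix values below are placeholders, not the actual input):

import simd

// Placeholder matrix; the real input comes from elsewhere in the app.
let m = simd_float4x4(rows: [
    SIMD4<Float>(1, 2, 3, 4),
    SIMD4<Float>(0, 1, 0, 1),
    SIMD4<Float>(2, 0, 1, 0),
    SIMD4<Float>(1, 1, 1, 2)
])
print(simd_inverse(m))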
On the Mac mini with M4 Pro and M3 Ultra, the result is
But on the Mac mini with M2 Ultra, the result is
Leaderboards work fine in iOS 26.1 but seem to be broken in 26.2 and in the 26.3 developer beta. Players cannot submit scores, nor can they view scores on Apple's default leaderboards. Custom leaderboards that rely on pulling information through GameKit APIs also fail.
Is there a workaround or patch for this?
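For context, score submission through GameKit typically goes through a call like the following (a minimal sketch; the score and leaderboard ID are placeholders):

import GameKit

// A minimal sketch of a GameKit score submission (placeholder values).
GKLeaderboard.submitScore(
    1234,
    context: 0,
    player: GKLocalPlayer.local,
    leaderboardIDs: ["com.example.highscores"]
) { error in
    if let error {
        print("Score submission failed: \(error)")
    }
}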
When previewing a gameplay recording, the buttons to exit or save are unclickable because they sit behind the clock and the Wi-Fi/5G status bar, which means you have to quit the game in order to continue.
Tested on multiple devices.
Does anyone have a solution to this? At the moment we have disabled it altogether for iOS 26 users.
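For context, the preview is presented through the standard ReplayKit flow, roughly like this (a minimal sketch, assuming RPScreenRecorder and RPPreviewViewController; the exact integration differs per game):

import ReplayKit
import UIKit

// A minimal sketch, assuming the standard ReplayKit preview flow.
func stopRecordingAndPreview(from presenter: UIViewController) {
    RPScreenRecorder.shared().stopRecording { previewController, error in
        guard let previewController else { return }
        // On iOS 26, the save/exit controls of this preview end up behind the status bar.
        presenter.present(previewController, animated: true)
    }
}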
Post can be removed.
Hi,
The metal-cpp distribution appears to only contain headers for Foundation and QuartzCore. The LearnMetalCPP download [1] provides a ZIP with a metal-cpp-extensions directory containing AppKit.hpp and MetalKit.hpp headers. First question: are these headers distributed anywhere else more publicly? Without them, only the renderer can be fully written in C++ as far as I can tell, i.e. there is no complete C++ NSApplication. Second question: will these headers, if needed, be maintained (e.g. updated and/or extended) by Apple alongside metal-cpp?
[1] https://developer.apple.com/metal/cpp/
Thank you and regards.
I notice some metal-cpp classes have static functions like
static URL* fileURLWithPath(const class String* pPath);
static class ComputePassDescriptor* computePassDescriptor();
static class AccelerationStructurePassDescriptor* accelerationStructurePassDescriptor();
which return a new object.
These classes also provide 'alloc' and 'init' functions to create objects.
For objects created with 'alloc' and 'init', I use something like NS::SharedPtr or call release directly to free the memory, because 'alloc' and 'init' were called explicitly. These static functions, however, involve no explicit 'alloc'/'init' call.
I wonder how to correctly free objects created by these static functions. Are they managed by an autorelease pool?
In Xcode version 16.2 (16C5032a) I created:
One [ScenekitApp] [File/New/Project/iOS/Game/Game Technology: SceneKit].
Three custom Frameworks [File/New/Project/iOS/Framework] [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework].
In the four projects the [Minimum Deployments=iOS 15.0], [Swift version=6.0] and [BUILD_LIBRARY_FOR_DISTRIBUTION=Yes].
I installed [Zip.framework] directly in [ASCSource.framework] without [pod init/pod install], and in the [General tab] under [Frameworks, Libraries and Embedded Content] set [Zip.framework] to [Embed & Sign].
I installed the three frameworks [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework] in [ScenekitApp] and everything works perfectly in Xcode version 16.2(16C5032a).
I updated Xcode version 16.2 (16C5032a) to Xcode version 16.3 (16E140), and errors were reported for the SDKs.
[Failed to build module 'ASCSource'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
[Failed to build module 'Zip'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
If I recompile all frameworks in Xcode version 16.3 (16E140) it works, but I don't think that is a good solution.
I found some discussions at links like the following, but without success.
https://github.com/firebase/firebase-ios-sdk/issues/13727
https://stackoverflow.com/questions/70556401/swift-version-conflict-this-sdk-is-not-supported-by-the-compiler-please-select
Any help is welcome.
Thanks everyone!!!
In my turn-based game, I receive the receivedTurnEventForMatch listener event and decode match.matchData. On occasion, the matchData is clearly stale and comes from the previous turn. If I bring up the matchmaker view controller and select that same match, the data is not stale, so it's not a matter of failing to call endTurn.
I have tried both loadMatchWithID and loadMatchesWithCompletionHandler after receiving the receivedTurnEventForMatch, but the data is still stale.
Advice?
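For reference, the reload attempted after the turn event arrives looks roughly like this (a minimal sketch; the helper name is mine), and both calls still return the previous turn's data in the failing case:

import GameKit

// A minimal sketch of reloading a turn-based match and its data.
func refreshMatch(withID matchID: String) {
    GKTurnBasedMatch.load(withID: matchID) { match, error in
        guard let match else { return }
        match.loadMatchData { data, error in
            guard let data else { return }
            print("Loaded \(data.count) bytes of match data")
            // Decode the game state from `data` here; on occasion it is still stale.
        }
    }
}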
Topic: Graphics & Games
SubTopic: GameKit
Hi,
How do you enable multitouch on an ARView?
Touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
Use SwiftUI .simultaneousGesture on top of an ARView representable
Position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView
Expected behavior:
ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled.
Here is what I tried, on iOS 26.1 and macOS 26.1:
ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multitouch doesn't work with this setup.
Note that multitouch wouldn't work either if the ARView is presented with a UIViewController instead of SwiftUI.
import RealityKit
import SwiftUI
struct ARViewMultiTouchView: View {
var body: some View {
ZStack {
ARViewMultiTouchRepresentable()
.ignoresSafeArea()
}
}
}
#Preview {
ARViewMultiTouchView()
}
// MARK: Representable ARView
struct ARViewMultiTouchRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> ARView {
let arView = ARViewMultiTouch(frame: .zero)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) { }
}
// MARK: ARView
class ARViewMultiTouch: ARView {
required init(frame: CGRect) {
super.init(frame: frame)
/// Enable multi-touch
isMultipleTouchEnabled = true
cameraMode = .nonAR
automaticallyConfigureSession = false
environment.background = .color(.gray)
/// Disable gesture recognizers to not conflict with touch events
/// But it doesn't fix the issue
gestureRecognizers?.forEach { $0.isEnabled = false }
}
required dynamic init?(coder decoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
/// # Problem
/// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch)
/// But it only fires for one touch at a time (single-touch)
print("Touch began at: \(touch.location(in: self))")
}
}
}
Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a way to make ARView handle multi-touch directly, right?
import SwiftUI
import RealityKit
struct MultiTouchOverlayView: View {
var body: some View {
ZStack {
MultiTouchOverlayRepresentable()
.ignoresSafeArea()
Text("Multi touch with overlay view")
.font(.system(size: 24, weight: .medium))
.foregroundStyle(.white)
.offset(CGSize(width: 0, height: -150))
}
}
}
#Preview {
MultiTouchOverlayView()
}
// MARK: Representable Container
struct MultiTouchOverlayRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> UIView {
/// The view that SwiftUI will present
let container = UIView()
/// ARView
let arView = ARView(frame: container.bounds)
arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
arView.cameraMode = .nonAR
arView.automaticallyConfigureSession = false
arView.environment.background = .color(.gray)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
/// The view that will capture touches
let touchOverlay = TouchOverlayView(frame: container.bounds)
touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
touchOverlay.backgroundColor = .clear
/// Pass an arView reference to the overlay for hit testing
touchOverlay.arView = arView
/// Add views to the container.
/// ARView goes in first, at the bottom.
container.addSubview(arView)
/// TouchOverlay goes in last, on top.
container.addSubview(touchOverlay)
return container
}
func updateUIView(_ uiView: UIView, context: Context) {
}
}
// MARK: Touch Overlay View
/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
weak var arView: ARView?
override init(frame: CGRect) {
super.init(frame: frame)
isMultipleTouchEnabled = true
isUserInteractionEnabled = true
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")
for touch in touches {
let location = touch.location(in: self)
/// Hit testing.
/// ARView and Touch View must be of the same size
if let arView = arView {
let entity = arView.entity(at: location)
if let entity = entity {
print("Touched entity: \(entity.name)")
} else {
print("Touched: none")
}
}
}
}
override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
}
}
Issue
When an Entity with a ViewAttachmentComponent is:
disabled using isEnabled = false
removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working.
Specifically:
Button actions inside the attached view do not fire
TapGesture closures on child views do not respond
Expected Behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.
Actual Behavior
After being disabled or removed once, all tap interactions stop responding.
Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur.
Removing and re-displaying the attachment still allows taps to work correctly.
Reproduction
Attached sample code reproduces the issue:
A RealityView with an Entity that has a ViewAttachmentComponent
The attached SwiftUI view contains a Toggle
The toggle updates isEnabled on the Entity
After toggling off and on, tap interactions stop responding
Environment
Xcode 26
visionOS 26
Question
Is this expected behavior of ViewAttachmentComponent, or a bug?
Is there a recommended way to temporarily hide or disable an Entity with ViewAttachmentComponent without breaking tap interactions?
import SwiftUI
import RealityKit
struct GestureTestView: View {
@State var sampleEnabled = true
@State var sampleEntity: Entity?
var body: some View {
RealityView { contents, attachments in
// After deleting and re-displaying it, taps no longer respond.
let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
// Executed successfully
//let sample = attachments.entity(for: "SampleView")!
contents.add(sample)
sample.position = [0, 1.2, -1]
sampleEntity = sample
let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
contents.add(toggleButton)
toggleButton.position = [0, 1, -1]
} update: { _, _ in
// run update closure
print(sampleEnabled)
// update sample entity enable
sampleEntity?.isEnabled = sampleEnabled
} attachments: {
Attachment(id: "SampleView") {
SampleView()
}
}
}
}
struct ToggleButtonView: View {
@Binding var isOn: Bool
var body: some View {
VStack {
Toggle(isOn: $isOn) {
Text("Toggle")
}
}
.padding()
.glassBackgroundEffect()
}
}
struct SampleView: View {
var body: some View {
VStack {
Button {
print("Hello, World!")
} label: {
Text("Hello, World!")
.padding()
}
}
.padding()
.glassBackgroundEffect()
}
}
#Preview(immersionStyle: .mixed) {
GestureTestView()
}
In swipe-driven games, a first downward swipe starting near the home indicator can trigger Reachability, even when using preferredScreenEdgesDeferringSystemGestures = .bottom and prefersHomeIndicatorAutoHidden = true. This causes the app to slide down to half screen and breaks gameplay. Please consider an API to temporarily defer Reachability while a custom gesture is active (similar to existing system gesture deferral), without disabling accessibility globally.
Environment:
Devices: iPhones with Home Indicator (Face ID)
Why this matters:
Bottom-origin swipes are core in many games (flick shots, slingshot, physics toss, bottom sheets). Current workarounds degrade UX and discoverability, and players still accidentally trigger Reachability.
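For reference, the deferral already applied looks roughly like this (a minimal sketch; the class name is a placeholder), and as described above it does not stop the very first bottom-edge swipe from triggering Reachability:

import UIKit

// A minimal sketch of the existing workaround: defer bottom-edge system
// gestures and hide the home indicator while gameplay is active.
class GameViewController: UIViewController {
    override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge { .bottom }
    override var prefersHomeIndicatorAutoHidden: Bool { true }

    override func viewDidLoad() {
        super.viewDidLoad()
        setNeedsUpdateOfScreenEdgesDeferringSystemGestures()
        setNeedsUpdateOfHomeIndicatorAutoHidden()
    }
}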
Feedback Assistant Post
Topic: Graphics & Games
SubTopic: General
Hi, I am trying to detect whether all screens are in fullscreen mode.
The current approach is to get all windows' information from CGWindowListCopyWindowInfo and then compare each window's frame and coordinates with the frames of NSScreen.screens.
However, there is a problem: the y position of the window seems to be relative to the screen. Since it is not an absolute position, I cannot compare it with the screen's coordinates.
Does anyone know if there is other information I can use? Or is there a way to retrieve the absolute position or the screen ID from the CGWindow info?
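One workable approach, sketched below under the assumption that a fullscreen window exactly covers its screen: the bounds reported by CGWindowListCopyWindowInfo are global coordinates with the origin at the top-left of the primary display and y increasing downward, so they can be flipped into Cocoa's bottom-left-origin space before comparing against NSScreen.frame.

import AppKit
import CoreGraphics

/// Convert a window rect from CGWindowListCopyWindowInfo (top-left origin of the
/// primary display, y down) into Cocoa screen coordinates (bottom-left origin,
/// y up) so it can be compared against NSScreen.frame.
func cocoaRect(fromWindowBounds bounds: CGRect) -> CGRect {
    let primaryHeight = NSScreen.screens[0].frame.height
    return CGRect(x: bounds.origin.x,
                  y: primaryHeight - bounds.origin.y - bounds.height,
                  width: bounds.width,
                  height: bounds.height)
}

func allScreensHaveFullscreenWindows() -> Bool {
    guard let info = CGWindowListCopyWindowInfo([.optionOnScreenOnly, .excludeDesktopElements],
                                                kCGNullWindowID) as? [[String: Any]] else {
        return false
    }
    let windowRects: [CGRect] = info.compactMap { entry in
        guard let boundsDict = entry[kCGWindowBounds as String] as? NSDictionary,
              let rect = CGRect(dictionaryRepresentation: boundsDict as CFDictionary) else { return nil }
        return cocoaRect(fromWindowBounds: rect)
    }
    // A screen counts as "fullscreen" if some on-screen window exactly matches its frame.
    return NSScreen.screens.allSatisfy { screen in
        windowRects.contains { $0.equalTo(screen.frame) }
    }
}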
Hi,
I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario:
I tap on the iPhone screen to select an Entity that I want to drag.
If an Entity was tapped, it should then be possible to drag it left, right, etc.
SceneKit solution:
func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
return view.unprojectPoint(locationWithz)
}
and then I was calling:
SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly:true])
The code was called inside the UIPanGestureRecognizer handler in my UIViewController.
Could I reuse that code, or should I go with the SwiftUI approach - something like this:
var body: some View {
RealityView {
....
} .gesture(TapGesture().onEnded {
})
?
I already have this code:
@State private var location: CGPoint?
.onTapGesture { location in
self.location = location
}
I'm trying to identify the entity that was tapped within the RealityView, like this:
RealityView { content in
let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
content.add(box)
let anchor = AnchorEntity(world: [0, 0, 0])
content.add(anchor)
_ = content.subscribe(to: SceneEvents.Update.self) { event in
//TODO: find tapped entity, so that it could be dragged inside of the DragGesture()
}
Any help would be appreciated.
I also noticed that if I create a TapGesture like this:
TapGesture(count: 1)
.targetedToAnyEntity()
and add it to my view using .gesture(), it is not triggered.
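For reference, a minimal sketch of the SwiftUI route (assuming RealityView gestures): entity-targeted gestures generally require the entity to carry both an InputTargetComponent and a CollisionComponent, otherwise .targetedToAnyEntity() never fires.

import SwiftUI
import RealityKit

// A minimal sketch: a tappable box inside a RealityView. A DragGesture can be
// composed the same way via .targetedToAnyEntity(), which avoids manual hit testing.
struct TapBoxView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .red, isMetallic: false)])
            // Both components are needed for entity-targeted gestures to hit this entity.
            box.components.set(InputTargetComponent())
            box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            content.add(box)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity that was tapped.
                    print("Tapped: \(value.entity.name)")
                }
        )
    }
}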
How can I paste a string into the findNavigator of a TextEditor?
Topic: Graphics & Games
SubTopic: General
I'm trying to get a video material to work on an imported 3D asset (a USDC file). There's actually an example in a WWDC video from Apple; you can see it running on the flag of the airplane, but there is no sample code for it, and I can't find other examples on the internet. Does anybody know how to do this?
You can look at 10:34 in this video.
https://developer.apple.com/documentation/realitykit/videomaterial
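For reference, the general pattern with VideoMaterial looks roughly like this (a minimal sketch; the entity and file names are placeholders, and how cleanly it maps onto a given USDC depends on that asset's mesh and material setup):

import RealityKit
import AVFoundation

// A minimal sketch: after loading the USDC elsewhere, find the mesh that should
// show the video and swap its material for a VideoMaterial.
func applyVideoMaterial(to root: Entity) {
    guard let url = Bundle.main.url(forResource: "flag", withExtension: "mp4") else { return }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // "Flag" stands in for whatever the target mesh is named inside the USDC.
    if let flag = root.findEntity(named: "Flag") as? ModelEntity {
        flag.model?.materials = [material]
    }
    player.play()
}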
I'm new to the Mac side of things, but definitely not to UE. Packaging on Windows is a long process, but it can be done. Documentation from Epic and around the internet on exactly how to package a project within UE is basically nonexistent. I have Xcode installed, which makes sense, agreed to the terms, and installed the macOS components. I've been working on a project for several weeks now and want to package a test run for my friends to play on Windows. Now I just get this in the log:
UATHelper: Packaging (Mac): ERROR: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Trace written to file /Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.uba with size 12.6kb
UATHelper: Packaging (Mac): Total time in Unreal Build Accelerator local executor: 8.12 seconds
UATHelper: Packaging (Mac): Result: Failed (OtherCompilationError)
UATHelper: Packaging (Mac): Total execution time: 9.71 seconds
PackagingResults: Error: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Took 9.77s to run dotnet, ExitCode=6
UATHelper: Packaging (Mac): UnrealBuildTool failed. See log for more details. (/Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.txt)
UATHelper: Packaging (Mac): AutomationTool executed for 0h 0m 10s
UATHelper: Packaging (Mac): AutomationTool exiting with ExitCode=6 (6)
UATHelper: Packaging (Mac): RunUAT ERROR: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: Unknown Error
This absolutely makes no sense to me. Anyone have ideas?
The code is downloaded from Apple's official Metal 4 sample:
[https://developer.apple.com/documentation/metal/drawing-a-triangle-with-metal-4?language=objc]
Enable Metal GPU trace in the macOS scheme and capture a frame in Xcode.
Xcode may show a segmentation fault in the app from some 'GTTrace' function when the capture button is clicked.
When replaying a .gputrace file, Xcode may crash, throw an internal error, or report an XPC error.
The example code using the old Metal renderer can be traced without any problem, and everything works fine.
Test Environment:
Xcode Version 26.2 (17C52)
macOS 26.2 (25C56)
M1 Pro 16GB A2442
Hi!
I hope everyone reading is doing well. I am working on a reinforcement learning agent that sends key presses to a window, which I've been doing by posting virtual key codes with CGEventCreateKeyboardEvent per the docs. There is no event source when I send the keyboard events.
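For reference, the events are posted roughly like this (a minimal sketch; the helper name is mine, and only 'q' is shown):

import CoreGraphics

// A minimal sketch of synthesizing a key press with no event source.
// 0x0C is the ANSI virtual key code for 'q'; the other keys are posted the same way.
func press(_ keyCode: CGKeyCode) {
    let down = CGEvent(keyboardEventSource: nil, virtualKey: keyCode, keyDown: true)
    let up = CGEvent(keyboardEventSource: nil, virtualKey: keyCode, keyDown: false)
    down?.post(tap: .cghidEventTap)
    up?.post(tap: .cghidEventTap)
}

press(0x0C) // 'q'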
However, when many keyboard events are happening (with the keys 'q', 'w', 'e', 'r', 'f', 'd', 's', space, and the arrow keys) in quick succession (<250 ms), the enable-dictation popup or the Fn-key emoji popup appears for seemingly no reason. I have verified that I am using the correct key codes for these presses, so I am wondering what else could cause this. It is as if I were pressing F5 or Fn. It does not happen when 'a' is the only key being pressed in quick succession.
One thing I have not been able to easily find is the key codes for dictation or the Fn key. Do these codes overlap somehow?
Thank you all for the help!
Hunter