Hi,
I'm struggling with visionOS window management and need help with closing child windows programmatically.
App Structure
My app has a Main-Sub window hierarchy:
AWindow (Home/Main)
BWindow (Main feature window)
CWindow (Tool window - child of BWindow)
Navigation flow:
AWindow → BWindow (switch, 1 window on screen)
BWindow → CWindow (opens child, 2 windows on screen)
I want BWindow and CWindow to be separate movable windows (not sheet/popover) so users can position them independently in space.
The Problem
CWindow doesn't close when BWindow is closed via the X button that appears below the window (next to the window bar)
User taps X on BWindow → BWindow closes, but CWindow remains
CWindow becomes orphaned on screen
I can close CWindow programmatically when switching from BWindow back to AWindow, but not when BWindow is closed with the X button
App launch issue
After closing both windows, CWindow is remembered as the last window
Reopening the app shows only CWindow instead of BWindow
User gets stuck in CWindow with no way back to BWindow
I've tried the @Environment dismissWindow action in cleanup, but it's not working:
// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}
My Current App Structure Code
// in MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") {
            AWindow()
        }
        WindowGroup(id: "bWindow") {
            BWindow()
        }
        WindowGroup(id: "cWindow") {
            CWindow()
        }
    }
}
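One direction I'm considering for the launch issue (an untested sketch using the visionOS 2 scene-restoration modifiers): keep cWindow out of state restoration entirely, so the app can't relaunch into the orphaned tool window:
// Sketch: in MyNameApp.swift, mark the tool window so it is never
// restored at launch and never chosen as the default launch scene.
WindowGroup(id: "cWindow") {
    CWindow()
}
.restorationBehavior(.disabled)      // don't restore cWindow after relaunch
.defaultLaunchBehavior(.suppressed)  // don't launch into cWindow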
// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()

    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:] // child -> parent

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        openWindows.contains(id)
    }

    func getParentWindow(of childId: String) -> String? {
        windowDependencies[childId]
    }

    func getChildWindows(of parentId: String) -> [String] {
        windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        let children = getChildWindows(of: parentId)
        for child in children {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child]
            )
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child]
                )
                markWindowAsClosed(child)
            }
        }
    }
}
// BWindow.swift
struct BWindow: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.scenePhase) private var scenePhase
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}
// CWindow.swift
import SwiftUI

struct CWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false
    @State private var observer: NSObjectProtocol?

    var body: some View {
        VStack {
            // Content
        }
        .onAppear {
            let parent = windowManager.getAndClearNextWindowParent()
            windowManager.markWindowAsOpen("cWindow", parent: parent)
            // Block-based observers must be removed via the returned token,
            // not via `self`, so the token is kept in @State.
            observer = NotificationCenter.default.addObserver(
                forName: Notification.Name("ForceCloseWindow"),
                object: nil, queue: .main) { notification in
                    if let windowId = notification.userInfo?["windowId"] as? String,
                       windowId == "cWindow" {
                        shouldClose = true
                    }
                }
        }
        .onDisappear {
            windowManager.markWindowAsClosed("cWindow")
            if let observer {
                NotificationCenter.default.removeObserver(observer)
            }
        }
        .onChange(of: shouldClose) { _, newValue in
            if newValue {
                dismissWindow()
            }
        }
    }
}
The logs show everything executes correctly, but CWindow remains visible on screen.
Questions
Why doesn't dismissWindow(id:) work in cleanup scenarios?
Is there a proper way to create parent-child window relationships in visionOS?
How can I ensure main windows open on app launch instead of tool windows?
What's the recommended pattern for dependent windows in visionOS?
Environment: Xcode 16.2, visionOS 2.0, SwiftUI
Hello everyone,
I'm encountering a MultipeerConnectivity connection issue while developing a visionOS app and would like to ask if other developers have experienced similar problems.
Problem Description
In visionOS 26.0 Beta 3 and Beta 4, when a visionOS device attempts to connect to an iPad via MultipeerConnectivity, the iPad side completely fails to receive connection requests, resulting in connection establishment failure.
Specific Symptoms
After executing serviceBrowser?.invitePeer(peerID, to: mcSession, withContext: nil, timeout: 10.0) on the visionOS side
The iPad side shows no response and receives no connection invitation
Connection request times out after 10 seconds and is automatically rejected
No error logs or exception information are generated
Environment Information
visionOS version: 26.0 Beta 3 and Beta 4
Development environment: macOS Tahoe 26.0 Beta (25A5306g)
Target device: iPad (iOS 17.x)
Network environment: Same WiFi network
Comparative Test Results
visionOS 2.6 (22O785): Functionality completely normal
visionOS 26.0 Beta 1/2: Functionality normal
visionOS 26.0 Beta 3/4: Exhibits the above problems
Attempted Solutions
Checked network configuration and firewall settings
Adjusted MultipeerConnectivity parameters
Reinitialized MCSession and MCNearbyServiceBrowser
Cleared app cache and reinstalled
Reset network settings
Temporary Workaround
Currently, the only solution is to downgrade the visionOS device to version 2.6.
Impact of the Problem
This issue severely affects the development of cross-device collaboration features in visionOS apps, particularly scenarios requiring peer-to-peer communication with iOS/iPadOS devices.
Questions for Help
Have other developers encountered similar issues?
Are there any known solutions or workarounds?
Is this a known issue with visionOS 26.0 Beta?
Are there other known issues related to MultipeerConnectivity?
Relevant Code Snippet
// Connection invitation code
private var serviceBrowser: MCNearbyServiceBrowser?
let mcSession: MCSession
// Execute connection invitation
serviceBrowser?.invitePeer(peerID, to: mcSession, withContext: nil, timeout: 10.0)
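For reference, here is a minimal sketch of the iPad-side receiver I use to check whether the invitation ever arrives (the service type and class name are placeholders; note that both sides also need NSLocalNetworkUsageDescription and matching NSBonjourServices entries in Info.plist for local-network discovery to work):
import MultipeerConnectivity
import UIKit

// Sketch: iPad-side advertiser. If this delegate never fires while the
// visionOS side is inviting, the invitation is being dropped before delivery.
final class InvitationReceiver: NSObject, MCNearbyServiceAdvertiserDelegate {
    let peerID = MCPeerID(displayName: UIDevice.current.name)
    lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
    lazy var advertiser = MCNearbyServiceAdvertiser(
        peer: peerID, discoveryInfo: nil, serviceType: "my-service") // placeholder service type

    func startAdvertising() {
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
    }

    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        print("Invitation received from \(peerID.displayName)")
        invitationHandler(true, session) // accept into the existing session
    }
}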
Thank you for your help and suggestions!
Development Environment: Xcode 15.x
Target Platform: visionOS
During the WWDC session "Design widgets for visionOS," the presenter says:
You can choose whether the background of your widget participates in tinting. If you opted out, for example to preserve a photo or illustration, make sure it still looks good alongside the selected color palette.
Unfortunately, this session has no example code. Can someone point me to the correct way to do this? Is there a modifier we can use on views?
(reposting this in the correct topic/subtopic)
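The closest thing I've found in the WidgetKit docs (untested, and I'm not sure it's what the presenter meant) is the per-image rendering mode plus widgetAccentable, roughly:
import SwiftUI
import WidgetKit

// Sketch: keep a photo at full color while the rest of the widget
// participates in the selected tint. View and asset names are made up.
struct PosterWidgetView: View {
    var body: some View {
        VStack {
            Image("poster")
                .resizable()
                .widgetAccentedRenderingMode(.fullColor) // opt the photo out of tinting
            Text("Now Playing")
                .widgetAccentable() // participates in the color palette
        }
        .containerBackground(for: .widget) { Color.clear }
    }
}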
Hey all,
I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right).
Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer:
Converting Side-by-Side 3D Video to MV-HEVC
My question:
Is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file?
I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side.
Any advice or insights would be greatly appreciated!
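For context, the direction I've been exploring (only a sketch, and the parameter names are placeholders) is a Reality Composer Pro material whose shader graph uses a Camera Index Switch node to route separate left/right textures to each eye, with the app updating those textures every frame:
import RealityKit

// Sketch: push fresh left/right eye textures into a Reality Composer Pro
// material whose shader graph selects the texture by camera index.
// "LeftImage"/"RightImage" are placeholder parameter names.
func updateStereoPlane(_ plane: ModelEntity,
                       left: TextureResource,
                       right: TextureResource) throws {
    guard var material = plane.model?.materials.first as? ShaderGraphMaterial else { return }
    try material.setParameter(name: "LeftImage", value: .textureResource(left))
    try material.setParameter(name: "RightImage", value: .textureResource(right))
    plane.model?.materials = [material]
}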
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the 'Presenting images in RealityKit' sample from the following link no longer runs due to an error. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit
Expected / Previous:
Application builds and runs on device, working as described in the documentation.
Reality:
Application builds, but does not run on device due to an error (shown in screenshot): "Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)". The application still runs in the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs with this error. When launching the app from the Home Screen, it does not load and immediately returns to the Home Screen.
This Xcode project previously ran with no code changes; the only change was updating the visionOS system software to the latest version, visionOS 26 Beta 4 (23M5300g).
Is anyone else experiencing this issue?
Hello everyone,
I am currently developing an experience for visionOS using RealityKit and I would like to achieve volumetric light effects, such as visible light rays or shafts through fog or dust.
I found this GitHub project: https://github.com/robcupisz/LightShafts, which demonstrates the kind of visual style I am aiming for. I would like to know if there is a way to create similar effects using RealityKit on visionOS.
So far, I have experimented with DirectionalLight, SpotLight, ImageBasedLight, and custom materials (e.g., additive blending on translucent meshes), but none of these approaches can replicate the volumetric light shaft look shown in the repository above.
Questions:
Is there a recommended technique or workaround in RealityKit to simulate light shafts or volumetric lighting?
Is creating a custom mesh (e.g., cone or volume geometry with gradient alpha and additive blending) the only feasible method?
Are there any examples, best practices, or sample projects from Apple or other developers that showcase a similar visual style?
Any advice or hints would be greatly appreciated. Thank you in advance!
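For reference, my current best attempt at the cone approach from question 2 looks roughly like this (constants are guesses, and a proper gradient falloff would still need a ShaderGraphMaterial authored in Reality Composer Pro):
import RealityKit
import UIKit

// Sketch: fake a light shaft with a translucent, unlit cone.
func makeLightShaft() -> ModelEntity {
    let mesh = MeshResource.generateCone(height: 2.0, radius: 0.5)
    var material = UnlitMaterial(color: UIColor.white.withAlphaComponent(0.1))
    material.blending = .transparent(opacity: .init(floatLiteral: 0.1))
    let shaft = ModelEntity(mesh: mesh, materials: [material])
    shaft.position = SIMD3<Float>(0, 1, -2) // hang it below a light source
    return shaft
}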
Hi!
I'm currently trying to render another XR scene in front of a RealityKit one.
Actually, I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't draw directly at z=0.
Is there a way to change the camera near plane? Or maybe there is a better solution to overlay image/texture for left/right eyes?
Ideally, I would layer some kind of CompositorLayer on RealityKit, but that's sadly not possible from what I know.
Thanks in advance and have a good day!
I've tried following Apple's documentation to apply a video material to a model entity, but I encountered a compile error while attempting to specify the spatial audio type.
It is a 360° video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:
"audioInputMode' is unavailable in visionOS
audioInputMode' has been explicitly marked unavailable here
RealityFoundation.VideoPlaybackController.audioInputMode)"
https://developer.apple.com/documentation/realitykit/videomaterial/
Code:
let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won’t compile.
visionOS 2.4, Xcode 16.4 (also tried Xcode 26 beta 2).
The videos are HEVC-encoded MPEG-4 files.
Is there any other way to do this, or is there a workaround available?
Thank you.
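The closest workaround I've found so far (unverified that it matches what audioInputMode used to do): on visionOS, the entity that carries the VideoMaterial can be given a RealityKit SpatialAudioComponent, which controls how its audio is spatialized and how loud it plays:
import AVFoundation
import RealityKit

// Sketch: configure spatial audio on the sphere entity that carries the
// VideoMaterial, instead of the unavailable controller.audioInputMode.
func configureVideo(on sphere: ModelEntity, url: URL) {
    let player = AVPlayer(url: url)
    sphere.model?.materials = [VideoMaterial(avPlayer: player)]
    // Gain is in decibels: 0 is unattenuated, negative values are quieter.
    sphere.components.set(SpatialAudioComponent(gain: .zero))
    player.play()
}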
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode: the distortion around the edges disappears, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enable
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
Prefetching logic for UICollectionView on visionOS does not work.
I have set up a standalone test repo to demonstrate this issue. This repo is basically a visionOS version of Apple's guide project on implementing prefetching logic.
In the repo you will see a simple ViewController that has a UICollectionView wrapped inside UIViewControllerRepresentable.
On scroll, it should print "🕊️ prefetch start" to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, it never happens on visionOS devices.
With the same code, it behaves correctly on iOS devices.
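For anyone who doesn't want to open the repo, the wiring is essentially this (a condensed sketch, not the exact repo code):
import UIKit

// Minimal wiring from the repro: the prefetch delegate is set and enabled,
// yet prefetchItemsAt never fires on visionOS.
final class GridViewController: UIViewController, UICollectionViewDataSource, UICollectionViewDataSourcePrefetching {
    private lazy var collectionView: UICollectionView = {
        let cv = UICollectionView(frame: .zero, collectionViewLayout: UICollectionViewFlowLayout())
        cv.dataSource = self
        cv.prefetchDataSource = self      // prefetching hook
        cv.isPrefetchingEnabled = true    // explicitly enabled
        cv.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        return cv
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.frame = view.bounds
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { 1000 }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start") // never printed on visionOS devices
    }
}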
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below our simple code displaying the ImagePresentationComponent and the images of the odd artifacts that appear briefly when dismissing the immersive space.
import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()
                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }
                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
In my visionOS app, I'm seeing this error in the console dozens of times. Anyone know what it means, or how to troubleshoot it?
Searching these forums and the usual other places hasn't come up with anything that seems relevant.
I would like to present information from a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a manner similar to loading a web page in a WKWebView?
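In case it helps frame the question: the fallback I'm considering (a sketch, assuming the three.js scene can be exported to USDZ, e.g. via glTF and Reality Converter) is loading the exported model natively in a volumetric window:
import RealityKit
import SwiftUI

// Sketch: a volumetric window showing "scene.usdz" (placeholder asset
// name) exported from the three.js scene.
@main
struct VolumetricModelApp: App {
    var body: some Scene {
        WindowGroup {
            Model3D(named: "scene") { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
        .windowStyle(.volumetric)
    }
}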
Hello, I'm adding a CollisionComponent to an entity in RealityView. CollisionComponent requires a mesh as a reference for collision detection. To achieve more accurate detection, I'd like that collision mesh to be the actual geometry of a USDZ model. Is there any way to make this happen? Thank you!
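The closest thing I've found (untested sketch; I believe ShapeResource can generate shapes from a MeshResource as of visionOS 2) is to pull the mesh out of the model's ModelComponent and build the collision shape from it:
import RealityKit

// Sketch: build collision geometry from the USDZ model's own mesh.
func addMeshAccurateCollision(to entity: Entity) async throws {
    guard let model = entity.components[ModelComponent.self] else { return }
    // generateStaticMesh follows the geometry exactly, but only suits
    // static colliders; use generateConvex(from:) if the entity moves.
    let shape = try await ShapeResource.generateStaticMesh(from: model.mesh)
    entity.components.set(CollisionComponent(shapes: [shape]))
}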
Hi,
I have downloaded the example to use the Main Camera with Apple Vision Pro.
I got the license file and the entitlements file, and I see Main Camera Access in the Capabilities view.
I have attached the AVP with the developer strap; however, I don't see the device as an option to install the app on.
I selected Managed Run Destinations and I see the AVP as connected.
Am I missing anything needed to install the app on my AVP?
Hello,
I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode.
Specifically:
Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.
My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals.
Has anyone else experienced similar issues?
Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0?
Any guidance would be greatly appreciated.
Thank you!
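For reference, my IBL adjustment attempt looks roughly like this (a sketch; "Sunlight" is a placeholder environment resource name):
import RealityKit

// Sketch: give world-space entities an explicit image-based light so
// shading doesn't jump when they cross the portal boundary.
func applyConsistentIBL(to worldEntity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Sunlight") // placeholder
    var ibl = ImageBasedLightComponent(source: .single(environment))
    ibl.inheritsRotation = true
    worldEntity.components.set(ibl)
    // Receivers opt in to that specific light source.
    worldEntity.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: worldEntity))
}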
I'm running into an issue in Xcode when working on a visionOS app. Whenever I try to drag a 3D model entity in my scene, the drag gesture doesn't work if there's a UITextView (or SwiftUI TextEditor) behind the 3D entity. It seems like the UITextView is intercepting the gesture or preventing the drag interaction from reaching the 3D content.
Interestingly, when the 3D entity is placed in front of the ScrollView, the drag works as expected.
Has anyone else experienced this behavior? Is this a known limitation or a bug in the current tooling? Any workarounds or fixes would be appreciated.
Thanks!
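The workaround I'm testing in the meantime (a sketch, and only viable because my text view is purely decorative) is to target the drag at entities explicitly and take the text view out of hit testing:
import RealityKit
import SwiftUI

// Sketch: entity-targeted drag, with the background text view removed
// from hit testing so it can't swallow the gesture.
struct ModelOverTextView: View {
    var body: some View {
        ZStack {
            TextEditor(text: .constant("Background notes"))
                .allowsHitTesting(false) // only if it doesn't need interaction
            RealityView { content in
                let box = ModelEntity(mesh: .generateBox(size: 0.1))
                box.components.set(InputTargetComponent())
                box.generateCollisionShapes(recursive: true)
                content.add(box)
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Move the entity with the drag (simplified).
                        if let parent = value.entity.parent {
                            value.entity.position = value.convert(
                                value.location3D, from: .local, to: parent)
                        }
                    }
            )
        }
    }
}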
Hello! I’m excited to see that Look to Scroll has been included in visionOS 26 Beta. I’m aiming to achieve a feature where the user’s gaze at a specific edge automatically scrolls to that position. However, I’ve experimented with ScrollView and haven’t been able to trigger this functionality. Could you advise if additional API modifiers are necessary? Thank you!
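For reference, this is the setup I've been experimenting with, based on my reading of the session (the modifier and case names below are my assumption of the API spelling, so please correct me if they're wrong):
import SwiftUI

// Sketch: opting a ScrollView into gaze-driven scrolling on visionOS 26.
struct GazeScrollList: View {
    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(0..<100, id: \.self) { i in
                    Text("Row \(i)").frame(height: 44)
                }
            }
        }
        .scrollInputBehavior(.enabled, for: .look) // enable look-to-scroll
    }
}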
I am trying to get the new PresentationComponent working in visionOS 26, as seen in this WWDC video:
https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into the video)
Here is some other example code, but it doesn't work either: https://stepinto.vision/devlogs/project-graveyard-devlog-002/
My simple Text view (that I am adding via a PresentationComponent) does not appear in my RealityView even though the entity is found. Here is a simple example built from an Xcode immersive-view default project:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                    content.add(materializedImmersiveContentEntity)
                    var presentation = PresentationComponent(
                        configuration: .popover(arrowEdge: .bottom),
                        content: Text("Hello, World!")
                            .foregroundColor(.red)
                    )
                    presentation.isPresented = true
                    materializedImmersiveContentEntity.components.set(presentation)
                }
            }
        }
    }
}
Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would.
With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.
Ball Physics Setup (Following Apple Forum Recommendations)
// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // seems too heavy for a 5 cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))
    return ballEntity
}
Ground Plane Physics
// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)
Wall Physics
// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)
Collision Detection
// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }
    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal
        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 { // wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 { // ground bounce
            print("Ground collision detected")
        }
        lastCollisionTime = currentTime
    }
}
Issues Observed
Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce
Wall Sliding: Ball tends to slide down walls instead of bouncing naturally
No Damping Control: Comments mention damping values but they don't seem to affect the physics
Changing the mass also doesn't do much.
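One thing I still need to rule out (a sketch; I haven't confirmed the default damping values on visionOS): explicitly zeroing the damping terms on the ball's body, since any non-zero damping would bleed energy every frame:
// Sketch: explicitly zero the damping terms on the ball's physics body.
var physics = PhysicsBodyComponent(
    massProperties: .init(mass: 0.624),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.8,
        dynamicFriction: 0.6,
        restitution: 1.0
    ),
    mode: .dynamic
)
physics.linearDamping = 0   // no velocity bleed between collisions
physics.angularDamping = 0  // no spin bleed
ballEntity.components.set(physics)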
Custom Physics System (Workaround)
I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:
// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}
Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural: it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.