visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post

Replies

Boosts

Views

Activity

Buttons become unresponsive after using .windowStyle(.plain) with auto-hiding menu
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:
- Show for 3 seconds when entering immersive mode
- Auto-hide after 3 seconds
- Reappear when the user taps anywhere (using SpatialTapGesture)
- Respond to gaze + pinch interaction

The problem: when I add .windowStyle(.plain) to achieve a transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).
Without .windowStyle(.plain): buttons work correctly with gaze + pinch, but I cannot achieve a transparent window background for hiding.
With .windowStyle(.plain): the window can be transparent, but buttons lose gaze + pinch interaction.

Code:

App.swift:

@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}

ContentView.swift (simplified):

struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze + pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}

ImmersiveView.swift:

struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}

Environment: Xcode 26.2, visionOS 26.3, Vision Pro device.

Questions:
1. Is .windowStyle(.plain) expected to affect button interaction behavior?
2. What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
3. Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?

Thank you for any guidance!
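One pattern that sidesteps the window question entirely, offered as a sketch rather than a confirmed fix: host the menu as a RealityView attachment inside the immersive space, so no separate window (and no .windowStyle(.plain)) is involved, and attachments keep normal gaze + pinch behavior. Names like ImmersiveViewWithMenu and the placement values are placeholders, not from the post.

import SwiftUI
import RealityKit

struct ImmersiveViewWithMenu: View {
    @State private var isMenuVisible = true

    var body: some View {
        RealityView { content, attachments in
            if let menu = attachments.entity(for: "controlMenu") {
                menu.position = [0, 1.3, -1.5] // roughly eye height, 1.5 m ahead
                content.add(menu)
            }
        } attachments: {
            Attachment(id: "controlMenu") {
                HStack {
                    Button("Exit") { /* dismiss the immersive space */ }
                        .buttonStyle(.bordered)
                }
                .padding()
                .glassBackgroundEffect()
                .opacity(isMenuVisible ? 1 : 0)     // fade for auto-hide
                .allowsHitTesting(isMenuVisible)    // don't intercept input while hidden
                .animation(.easeInOut(duration: 0.3), value: isMenuVisible)
            }
        }
        .task {
            try? await Task.sleep(for: .seconds(3)) // auto-hide after 3 s
            isMenuVisible = false
        }
    }
}

The SpatialTapGesture from the post would then simply set isMenuVisible back to true.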
5
0
1.1k
Feb ’26
🔥 Xcode 26 RC – visionOS App Icon Created with Icon Composer Appears Empty
App Icon created with Icon Composer is empty for visionOS app.

We are developing a universal app, and the app’s icon was created using Icon Composer (Xcode 26 RC, visionOS 26 and visionOS 2.5).

App Icons on macOS, iOS, and iPadOS are correct: we have archived the app for macOS and iOS and successfully uploaded it to the App Store. This strongly suggests that the App Icon configuration in our project settings is correct for these platforms.

However, the visionOS app icon is not working as expected:
- When testing on the Vision Pro simulator (versions 2.x and 26.0), the app icon appears empty.
- When archiving and submitting to the App Store, the process fails with the following error: "The app’s Info.plist file is missing the CFBundleIcons.CFBundlePrimaryIcon key for the visionOS App Icon."

This suggests that the project’s App Icon settings may not be correctly applied for visionOS builds.

We are preparing to release our app, one of the first to support Liquid Glass, and would greatly appreciate guidance on how to resolve this issue with the visionOS app icon. FB20184218
1
0
343
Sep ’25
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced it with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place". The cube disappears immediately.

Expected: cube remains visible after the window is locked.
Actual: cube disappears.

Minimal reproducible code:

var body: some View {
    RealityView { content in
        let cube = ModelEntity(
            mesh: .generateBox(size: 0.3),
            materials: [SimpleMaterial(color: .red, isMetallic: false)]
        )
        cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
        content.add(cube)
    }
}

Device: Apple Vision Pro
visionOS version: 26.2 (23N301)
Xcode version: 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
3
0
502
1w
Attaching a hand model to your hands
Hi, we have been working on an application that attaches a hand model to the user's hands. Apple's "Animating hand models in visionOS" sample project is a useful starting point: https://developer.apple.com/documentation/visionOS/animating-hand-models-in-visionOS

We have been trying to create our own hand model to attach, but have had some issues with how it attaches to the hand. For our hand model we want to include the forearm all the way up to the user's elbow. I have attached a sample project of what our code currently looks like so you can run it; just select "show immersive space" to attach the models. The left hand model is the space glove that we were trying to mirror. The right hand model is the one we have been using. I have mapped each of the joints to the corresponding joint name on our model.

The first issue seems to be the placement of the forearm: it attaches itself at the wrist. The second issue seems to be rotation. Our team is looking for some guidance on what needs to change in order to map this model correctly. Thanks in advance!
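For reference while debugging placement, a minimal sketch of how a forearm joint can be resolved to world space with ARKit hand tracking; the entity and function names here are placeholders, not from the attached project.

import ARKit
import RealityKit

// Pin an entity to the forearm joint of a tracked hand.
// .forearmArm sits toward the elbow; .forearmWrist sits at the wrist end.
func updateForearm(entity: Entity, anchor: HandAnchor) {
    guard let skeleton = anchor.handSkeleton else { return }
    let joint = skeleton.joint(.forearmArm)
    guard joint.isTracked else { return }
    // World transform = hand anchor's origin transform * joint's local transform.
    let world = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
    entity.setTransformMatrix(world, relativeTo: nil)
}

A mismatch between the model's bind-pose axes and ARKit's joint axes typically shows up as exactly the kind of rotation offset described here, so a fixed corrective rotation applied on top of this transform may be needed.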
2
0
258
Feb ’26
Website environment disappears suddenly
After I updated to visionOS 26.4, I noticed my website environment would suddenly turn off occasionally while I was watching YouTube in Safari. My M2 AVP was still warm after the update. Is turning off a website environment expected behavior when the headset gets warm (e.g., to reduce load)? If not, does anyone have an idea why this might happen?
1
0
249
1w
How to integrate Apple Immersive Video into the app you are developing.
Hello, let me ask you a question about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/

I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into the Apple Immersive Video format. First, I would like to know if it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app? It would be great if you could also share any helpful website resources.

I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.

As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements. I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?

I’ve asked several questions, and that’s all for now. Thank you in advance!
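On the "video as a background behind RealityKit entities" part, a minimal sketch of the general mechanism: RealityKit's VideoPlayerComponent plays an AVPlayer's output on an entity inside a mixed immersive space. Whether this path accepts the Apple Immersive Video format specifically is exactly the open question here, and the file URL below is a placeholder.

import AVFoundation
import RealityKit
import SwiftUI

struct VideoBackdropView: View {
    var body: some View {
        RealityView { content in
            // Placeholder asset; an Apple Immersive Video file would go here
            // if this playback path supports the format.
            let url = URL(fileURLWithPath: "/path/to/video.mov")
            let player = AVPlayer(url: url)

            let videoEntity = Entity()
            videoEntity.components.set(VideoPlayerComponent(avPlayer: player))
            videoEntity.position = [0, 1.5, -2] // 2 m in front of the viewer

            content.add(videoEntity)
            player.play()
        }
    }
}

Other RealityKit entities and spatial audio can then be added to the same RealityView content alongside the video entity.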
2
1
960
Nov ’25
fileImporter issue in visionOS with iPhone app (that can run on visionOS)
Happy new year to all! I have created an iOS app that also runs on Apple Vision Pro. On iOS, when you activate the fileImporter modal, you can swipe it down to dismiss. In visionOS, however, this same modal CANNOT be swiped down to cancel/dismiss. If you are drilled deep into a file hierarchy, you have to navigate back to the top level to tap X to dismiss. Is there a way to add swipe-down to the visionOS implementation of fileImporter, or any other workaround so the user doesn't have to navigate back to the top to dismiss? Again, this is not a visionOS app but an iOS app compatible with Vision Pro. Thanks!
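One workaround sketch, assuming it's acceptable to bypass fileImporter entirely: wrap UIDocumentPickerViewController yourself, so the app owns the presentation and can dismiss it from anywhere. The type and callback names are placeholders.

import SwiftUI
import UIKit
import UniformTypeIdentifiers

struct DocumentPicker: UIViewControllerRepresentable {
    var onPick: ([URL]) -> Void

    func makeUIViewController(context: Context) -> UIDocumentPickerViewController {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.item])
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ controller: UIDocumentPickerViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(onPick: onPick) }

    final class Coordinator: NSObject, UIDocumentPickerDelegate {
        let onPick: ([URL]) -> Void
        init(onPick: @escaping ([URL]) -> Void) { self.onPick = onPick }

        func documentPicker(_ controller: UIDocumentPickerViewController,
                            didPickDocumentsAt urls: [URL]) {
            onPick(urls)
        }
    }
}

Presented from a .sheet the app controls, a Cancel button (or programmatically clearing the sheet's binding) dismisses it regardless of how deep the user has navigated.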
2
0
948
Jan ’26
Extracting IP with Swift on visionOS
Hey everyone, I’m developing an app for visionOS where I need to display the Apple Vision Pro’s current IP address. For this I’m using the following code, which works for iOS, macOS, and visionOS in the simulator. Only on a real Apple Vision Pro does it fail to extract an IP. Could it be that visionOS currently doesn’t allow this? Have any of you had the same experience and found a workaround?

var address: String = "no ip"
var ifaddr: UnsafeMutablePointer<ifaddrs>? = nil
if getifaddrs(&ifaddr) == 0 {
    var ptr = ifaddr
    while ptr != nil {
        defer { ptr = ptr?.pointee.ifa_next }
        let interface = ptr?.pointee
        let addrFamily = interface?.ifa_addr.pointee.sa_family
        if addrFamily == UInt8(AF_INET) {
            // Only look at the Wi-Fi interface ("en0").
            if let ifaName = interface?.ifa_name, String(cString: ifaName) == "en0" {
                var hostname = [CChar](repeating: 0, count: Int(NI_MAXHOST))
                getnameinfo(interface?.ifa_addr,
                            socklen_t((interface?.ifa_addr.pointee.sa_len)!),
                            &hostname, socklen_t(hostname.count),
                            nil, socklen_t(0), NI_NUMERICHOST)
                address = String(cString: hostname)
            }
        }
    }
    freeifaddrs(ifaddr)
}
return address
}

Thanks in advance for any insights or tips! Best regards, David
2
1
176
Jun ’25
Launching a timeline on a specific model via notification
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code. However, I have several instances of the same model in my scene. Is it possible to make only one specific model respond to a notification? For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
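For reference, the mechanism from "Sending messages to the scene" takes a userInfo dictionary, so a sketch of wrapping it per instance looks like the following. The identifier string is a placeholder that must match the notification trigger name set in Reality Composer Pro. One caveat: entities added to the same RealityView share one scene, so the Scene entry only disambiguates across separately loaded scenes; a documented per-entity targeting key does not appear to exist.

import Foundation
import RealityKit

// Hypothetical helper: post the Reality Composer Pro notification trigger,
// scoped to the scene that contains one particular instance.
func triggerTimeline(near instance: Entity, identifier: String = "StartTimeline") {
    guard let scene = instance.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

If the timeline is also exposed as an animation on the entity, playing it directly (for example via availableAnimations and playAnimation) would be inherently per-instance, bypassing the notification trigger entirely.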
1
1
204
May ’25
Add Unity Project to existing visionOS App
Hello, as titled, my team is trying to find a way to add Unity projects to our current development. We have checked several posts and tutorials, but found they are all about porting to a brand-new project. Without modifying our current Swift code too much, we wonder if we can add the Unity part as a WindowGroup/ImmersiveSpace like the following? :)

struct TestVisionUnityApp: App {
    var body: some Scene {
        // from default template
        WindowGroup {
            ContentView()
            ....
        }
        // @TODO WindowGroup {...}
    }
}
0
1
316
Jul ’25
Bouncy ball in RealityKit - game
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would. With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.

Ball physics setup (following Apple forum recommendations):

// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for a 5 cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}

Ground plane physics:

// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)

Wall physics:

// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)

Collision detection:

// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }
    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal
        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 {
            // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 {
            // Ground bounce
            print("Ground collision detected")
        }
        lastCollisionTime = currentTime
    }
}

Issues observed:
- Energy loss: despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce.
- Wall sliding: the ball tends to slide down walls instead of bouncing naturally.
- No damping control: comments mention damping values, but they don't seem to affect the physics. Changing the mass also doesn't do much.

Custom physics system (workaround): I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:

// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}

Questions:
- Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
- Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
- Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?

Even in the last video here, https://stepinto.vision/example-code/collisions-physics-physics-material/, the bounce of the ball is very unnatural; it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post, https://developer.apple.com/forums/thread/759422, and the ball is still not bouncing naturally.
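One compensation sketch, building on the custom component above rather than replacing it: on each ground contact, apply an impulse that tops the vertical velocity back up to the desired restitution. The mass must match the PhysicsBodyComponent, and the 0.7 "simulated restitution" factor is only the post's observed estimate, not a RealityKit constant.

import RealityKit

func compensateBounce(ball: ModelEntity,
                      trackedVelocity: SIMD3<Float>,
                      desiredRestitution: Float = 0.85) {
    let mass: Float = 0.624                    // must match the physics body's mass
    let incomingSpeed = -trackedVelocity.y     // downward speed just before impact
    guard incomingSpeed > 0 else { return }

    let desiredUpSpeed = incomingSpeed * desiredRestitution
    let simulatedUpSpeed = incomingSpeed * 0.7 // observed ~30% loss per bounce
    let deltaV = max(0, desiredUpSpeed - simulatedUpSpeed)

    // Impulse = mass * delta-v, applied straight up at the contact.
    ball.applyLinearImpulse(SIMD3<Float>(0, mass * deltaV, 0), relativeTo: nil)
}

This would be called from the CollisionEvents.Began subscription above, with trackedVelocity taken from the BouncingBallComponent bookkeeping.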
9
0
946
Nov ’25
Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges, and it even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel. May I ask how visionOS implements this effect? Is there any approach to achieve it, or example code I can use in my own project? Thanks!
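Not a confirmed account of how Spatial Gallery does it, but one approach that can approximate the recessed, behind-glass look is a RealityKit portal: the video sits slightly behind a portal plane, and the parallax falls out of viewing the recessed content through the opening. Sizes and offsets here are placeholders.

import RealityKit

// Sketch: show a video entity recessed behind a portal opening.
func makeRecessedVideoPanel(videoEntity: Entity) -> Entity {
    // A world that is only visible through the portal.
    let world = Entity()
    world.components.set(WorldComponent())
    videoEntity.position = [0, 0, -0.1] // push the video 10 cm behind the opening
    world.addChild(videoEntity)

    // The portal plane; PortalMaterial renders the target world through it.
    let portal = ModelEntity(
        mesh: .generatePlane(width: 0.8, height: 0.45),
        materials: [PortalMaterial()]
    )
    portal.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portal)
    return root
}

The glowing edge would be a separate element, for example an emissive rim mesh or a SwiftUI attachment layered around the opening.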
3
1
253
Jun ’25
WebXR Consent Dialog
Based on the "Build immersive web experiences with WebXR"-Video for visionOS there is no way to disable the consent prompts for entering an immersive experience or consent hand-tracking. For the microphone it's possible to "greenlight" specific websites for mic input, which works great. I'd welcome it, if it were possible to add specific websites in the settings, in which those consent dialogs aren't shown each time. In my opinion, the user interaction through a button that launches the experience would be sufficient to not disorient.
0
1
130
Jun ’25
Guided Access - Detect when setup (Eyes + Hands) is done
Hello, I am building a kiosk-style app for visionOS which will be used in Guided Access mode and handed to various visitors. Each of them will do the hands + eyes setup, the standard Guided Access thing. I want my experience to auto-start playing content when setup is done. I looked everywhere but found no way to detect whether setup is complete. Adding any kind of interface to start the app manually is also risky, since buttons etc. remain visible and interactable WHILE setup takes place. A delay-based approach won't work either, since setup can be skipped, or fail, or be done quickly or slowly... it takes anywhere between 10 seconds and a few minutes. So the question is: is there any way to get a notification, or check some bool or something, that will tell me that the hands + eyes setup in Guided Access mode is complete (or skipped)? Thanks in advance!
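One heuristic worth testing, with the explicit caveat that it is unverified whether the enrollment UI affects scene phase at all: if the app's scene is reported inactive while setup is up, watching scenePhase would give a "setup ended (or was skipped)" moment to auto-start on. ContentView and startExperience are placeholders.

import SwiftUI

struct KioskRootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var hasStarted = false

    var body: some View {
        ContentView()
            .onChange(of: scenePhase) { _, newPhase in
                // Fire once, the first time the scene becomes active.
                if newPhase == .active && !hasStarted {
                    hasStarted = true
                    startExperience()
                }
            }
    }

    private func startExperience() {
        // begin playing content here
    }
}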
2
1
657
Feb ’26
Handling Z-Up Blender USDZ Models in RealityKit (visionOS) for Transform Updates
Hello everyone, I'm working on a visionOS application using RealityKit and am encountering a common coordinate-system challenge when integrating 3D models created in Blender. My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit.

The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis. When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context.

My questions are:
1. What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world?
2. Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?

Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated! Thank you.
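For the manual route the post already names, a sketch of the basis change, assuming the mesh data itself is still baked Z-up: premultiplying by the -90° rotation about X reinterprets a Z-up world pose as a Y-up one.

import RealityKit
import simd

// Rotation taking a Z-up frame to RealityKit's Y-up frame: -90° about X.
// It maps Blender's +Z (up) to RealityKit's +Y, and Blender's +Y to -Z.
let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))

// Convert a Transform authored in a Z-up convention into Y-up.
func convertZUpTransform(_ t: Transform) -> Transform {
    Transform(
        scale: t.scale, // scale applies in the model's own axes, so it passes through
        rotation: zUpToYUp * t.rotation,
        translation: zUpToYUp.act(t.translation)
    )
}

// Usage: entity.transform = convertZUpTransform(blenderAuthoredTransform)

If the mesh has instead already been reoriented to Y-up and only the incoming transform data remains Z-up, the rotation should be conjugated rather than premultiplied (zUpToYUp * t.rotation * zUpToYUp.inverse), and a non-uniform scale's Y and Z components swap along with the axes.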
1
1
494
Jul ’25
Displaying Gaussian splats in visionOS
Apple's new Spatial Personas use Gaussian Splatting, but I have not found any APIs for visionOS to display a Gaussian Splat like a PLY file. Am I just missing the Apple documentation? If not, are there common practices developers are using for displaying Gaussian Splats in visionOS?
1
2
625
Jan ’26
What is the reason the hand-tracking joints have these axes?
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and these axes make it a mess.
5
0
1.5k
Dec ’25
Curved/panorama window in visionOS 2?
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
5
0
1.5k
Nov ’25
App Store Connect Warning
Apple Vision Pro support issue. When I try to distribute a build to the App Store in Xcode, it comes up with this message: "The app contains the following UIRequiredDeviceCapabilities values, which aren’t supported in visionOS: [arkit]." I have not selected Vision Pro as part of my build; can anyone please help? Thanks!
0
1
146
Jun ’25
Gloves assets from wwdc2023-10111
Are the glove assets used in the sample from wwdc2023-10111 available somewhere? Thanks
4
1
612
Dec ’25