visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag (200 posts). Each post below is followed by its replies, boosts, views, and latest activity.

[WebXR] Support for AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR, but there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Any developer previews we could try?
7 replies · 6 boosts · 2k views · Jul ’25

Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later). I'm wondering if there are best practices for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:

I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with 6 built-in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

When exporting to USD from Blender, I haven't been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:

import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export the current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")

I have also been able to successfully export rigging armatures as a single skeleton, with each "bone" getting imported into Reality Composer Pro 2.0 when exporting/importing manually.

I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS, so I have placed all animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations. I apply materials/textures to each with Shader Graph so they appear the same. Then in SwiftUI I use notifications (as shown here: https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.

Two questions:

1. Is there a better way than having multiple models of the same skeleton, each with a different animation, in a scene to be able to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes from within RCP Timelines?

2. If I want to programmatically create custom animations and move parts of the skeleton/armatures, do I need to do this by defining custom components in RCP, using IKRig, and defining the movement of each of the "bones" in Xcode?

I'm looking for any tips/tricks/workflow from experienced engineers or 3D artists who can suggest a more efficient, optimized workflow using Blender, USD, RCP 2, and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
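
One way to wire up those per-animation USDC files in RealityKit is sketched below. This is a minimal sketch, not an established best practice; the entity name "Walk" and the load-on-demand flow are assumptions based on the export script above.

import RealityKit

// Show and play exactly one animation variant at a time, loading each
// exported USDC (e.g. "Walk.usdc" from the Blender script) on first use.
@MainActor
func play(_ animationName: String, under root: Entity) async throws {
    // Hide whichever animation variant is currently showing.
    root.children.forEach { $0.isEnabled = false }

    // Reuse a previously loaded variant, or load the USDC exported for
    // this Blender action.
    let variant: Entity
    if let existing = root.findEntity(named: animationName) {
        variant = existing
    } else {
        variant = try await Entity(named: animationName) // e.g. "Walk"
        variant.name = animationName
        root.addChild(variant)
    }
    variant.isEnabled = true

    // Each exported file carries a single clip, so play the first one.
    if let clip = variant.availableAnimations.first {
        variant.playAnimation(clip.repeat(), transitionDuration: 0.3)
    }
}
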
4 replies · 2 boosts · 1.1k views · Apr ’25

Not receiving onDisappear event on the first WindowGroup
Hi, I'm working on visionOS and find I can't get the onDisappear event for the first window opened after app launch. It goes like this:

WindowGroup(id: "WindowA") {
    MyView()
        .onDisappear {
            print("WindowA disappear")
        }
}

WindowGroup(id: "WindowB") {
    MyView()
        .onDisappear {
            print("WindowB disappear")
        }
}

WindowGroup(id: "WindowC") {
    MyView()
        .onDisappear {
            print("WindowC disappear")
        }
}

When the app first launches, it opens WindowA automatically, and then I open WindowB and WindowC programmatically. Then I tap the close button on the window bar below each window:

If I close WindowB or WindowC, I receive the onDisappear event.
If I close WindowA, I don't receive the onDisappear event.
If I reopen WindowA after it is closed and then close it again by tapping the close button below the window, I do receive the onDisappear event.

Is there any difference in logic for the first window at app launch? How can I get the onDisappear event for it? I'm using Xcode 16 beta 2.
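
A possible workaround is sketched below, under the assumption that the launch window's root view can watch its own scene phase instead of relying on onDisappear. This is untested; WindowAView is a placeholder name and MyView is from the post above.

import SwiftUI

// scenePhase read inside a scene reflects that scene's phase, so the root
// view of WindowA can detect the window going away even if onDisappear
// never fires for the launch window.
struct WindowAView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        MyView()
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    print("WindowA left the foreground (possibly closed)")
                }
            }
    }
}
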
5 replies · 0 boosts · 1.1k views · Mar ’25

visionOS console warning: Trying to convert coordinates between views that are in different UIWindows
Hello, I have an iOS app that uses SwiftUI, but the gesture code is written with UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures, I see this warning in the console:

Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.

I don't see any visible problems with the gestures, though. The warning is printed after the gesture takes place but before any of our gesture methods are called. So I'm wondering whether this is something we need to deal with or internal work that needs to happen in UIKit. Does anyone have any thoughts on this?
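
For reference, the coordinate-space conversion the warning recommends looks roughly like the sketch below. Whether it silences the warning in this case is not confirmed, since the conversion here appears to happen inside UIKit itself.

import UIKit

// UICoordinateSpace-based conversion is supported across windows, unlike
// convert(_:from:) with a plain UIView reference in another UIWindow.
func convert(_ point: CGPoint, from sourceView: UIView, to targetView: UIView) -> CGPoint {
    targetView.convert(point, from: sourceView as UICoordinateSpace)
}
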
3 replies · 0 boosts · 1.1k views · Dec ’24

Multiview HLS with HDR
I have an HDR10+ encoded video that plays back on Apple Vision Pro when loaded as a .mov, but when that video is encoded using the latest (1.23b) Apple HLS tools to generate an fMP4, the resulting m3u8 cannot be played back on Apple Vision Pro; I only get a "Cannot Open" error. To generate the m3u8, I'm just calling mediafilesegmenter (with -iso-fragmented) and then variantplaylistcreator. This completes with no errors, and the m3u8 plays back on the Mac using VLC, but not on Apple Vision Pro. The relevant part of the m3u8 is:

#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=40022507,BANDWIDTH=48883974,VIDEO-RANGE=PQ,CODECS="ec-3,hvc1.1.60000000.L180.B0",RESOLUTION=4096x4096,FRAME-RATE=24.000,CLOSED-CAPTIONS=NONE,AUDIO="audio1",REQ-VIDEO-LAYOUT="CH-STEREO"
{{url}}

Has anyone been able to use the HLS tools to generate fMP4s of MV-HEVC videos with HDR10?
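
As a quick diagnostic, AVFoundation can be asked whether it considers the stream playable before it ever reaches a player. A sketch, with a placeholder URL; a false here points at a manifest or codec-declaration problem rather than player setup.

import AVFoundation

// Load the async isPlayable property of the HLS playlist.
func checkPlayability() async throws {
    let asset = AVURLAsset(url: URL(string: "https://example.com/video.m3u8")!)
    let isPlayable = try await asset.load(.isPlayable)
    print("isPlayable:", isPlayable)
}
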
3 replies · 2 boosts · 1k views · Feb ’25

AVAudioSession gets interrupted when closing a window
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted, and attempting to restart the session and audio engine has no effect. I have to dismiss the app and reopen it, which reopens the main window, for audio to start playing again. This happens in all visionOS 2 betas. Note that I have background audio enabled for my app.
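
A minimal recovery sketch, assuming engine is the app's AVAudioEngine; whether reactivation succeeds after this particular window-close interruption is exactly what's in question here.

import AVFAudio

// Observe session interruptions and attempt to reactivate the session and
// restart the engine once the interruption ends. Keep the returned token
// alive for as long as you want to observe.
func observeInterruptions(engine: AVAudioEngine) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let rawType = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType),
              type == .ended else { return }
        try? AVAudioSession.sharedInstance().setActive(true)
        try? engine.start()
    }
}
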
3 replies · 1 boost · 831 views · Nov ’24

Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
3 replies · 1 boost · 540 views · Jul ’25

Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface (screenshot omitted). On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, whether in the Simulator or on device, ShaderGraphMaterial(named:from:in:) fails, and an error is logged to the console (log omitted). If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2. Is it a known issue that Reality Composer Pro 2 can't be used for visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]", "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]", or "We recommend [workaround]". Thank you.
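
One way to keep a visionOS 1 build working is to branch on availability and fall back to a plain material, as sketched below. The material path "/Root/MyMaterial", the scene file "Scene.usda", and the texture name "BaseColor" are assumptions.

import RealityKit
import RealityKitContent

// Load the RCP 2 shader graph on visionOS 2; on visionOS 1, approximate
// the unlit textured look with an UnlitMaterial built in code.
func makeMaterial() async throws -> any Material {
    if #available(visionOS 2.0, *) {
        return try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
    } else {
        var material = UnlitMaterial()
        let texture = try await TextureResource(named: "BaseColor")
        material.color = .init(texture: .init(texture))
        return material
    }
}
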
7 replies · 0 boosts · 1.4k views · May ’25

How to trigger actions with OnCollision in a Behaviors Component
This is about using notifications to trigger actions from RCP's new Timeline system, as covered in Compose interactive 3D content in Reality Composer Pro. I'm starting to wonder why Entity.applyTapForBehaviors is needed in code at all to trigger content in a Behaviors Component, since in the Behaviors Component we already chose OnTap to let a "Tap Notification" trigger our action (on a selected target object). By that logic, after selecting the OnCollision trigger I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which doesn't exist; and indeed a collision on my object won't trigger the action (because I only set things up in RCP, not in code). Setting aside that this post pointed out we could use the Behaviors Component's OnNotification trigger for now, I found that I could keep the OnTap trigger and call Entity.applyTapForBehaviors from my subscribed collision's begin event, and that actually works better than OnCollision. So what are the design principles here? And how can I fire a collision notification so that my Behaviors Component's OnCollision trigger actually works?
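
The workaround described above can be sketched as pairing a collision subscription in code with an OnNotification trigger in the Behaviors Component. The "OnCollision" identifier and myEntity are assumptions; the userInfo keys follow the notification-trigger pattern shown in the session and the linked thread.

import RealityKit

// Inside a RealityView make/update closure, where `content` is available:
// subscribe to collision-begin events and forward them to the Behaviors
// Component's OnNotification trigger via NotificationCenter.
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: myEntity) { event in
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": event.entityA.scene as Any,
            "RealityKit.NotificationTrigger.Identifier": "OnCollision"
        ]
    )
}
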
1 reply · 2 boosts · 578 views · Mar ’25

Compose interactive 3D content in Reality Composer Pro -- Build Error
Compilation of the project for the WWDC 2024 session titled Compose interactive 3D content in Reality Composer Pro fails. After applying the fix mentioned here (https://developer.apple.com/forums/thread/762030?login=true), the project still won't compile. Using Xcode 16 beta 7, I get these errors:

error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
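
The errors point at the platforms array in the RealityKitContent package's Package.swift. A sketch of that change follows; the package structure is abbreviated from the Xcode template, so treat the names as assumptions.

// swift-tools-version:6.0
import PackageDescription

// Raise the platform floor so xros 2.0-only components such as
// EnvironmentLightingConfiguration, AudioLibrary, and BlendShapeWeights compile.
let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2) // was .v1
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
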
4 replies · 0 boosts · 841 views · Oct ’24

Xcode 16 / Reality Composer Pro 2 segmentation fault issue
After installing macOS Sequoia, Xcode 16, and Reality Composer Pro 2, my Apple Vision Pro projects (which worked perfectly with Xcode 15) started giving me a Tool terminated by signal 'Segmentation fault: 11' error while compiling RealityKitContent assets. This happens only when I try to build the project with .usdz models exported from Blender; when I try with sample models from the Apple website, it works fine with no errors. Is there any solution?
2 replies · 0 boosts · 607 views · Dec ’24

GCControllerDidConnect notification not received in visionOS 2.0
I am unable to get visionOS 2.0 (simulator) to receive the GCControllerDidConnect notification, and thus am unable to set up support for a gamepad. It works in visionOS 1.2, however. For visionOS 2.0 I've tried adding:

the .handlesGameControllerEvents(matching: .gamepad) attribute to the view
Supports Controller User Interaction to Info.plist
Supported game controller types -> Extended Gamepad to Info.plist

...but the notification still doesn't fire. It does fire when the code is run on the visionOS 1.2 simulator; both simulators have the Send Game Controller To Device option enabled. Here is the example code, based on the Xcode project template. The only files updated were ImmersiveView.swift and Info.plist, as detailed above:

import SwiftUI
import GameController
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            NotificationCenter.default.addObserver(
                forName: NSNotification.Name.GCControllerDidConnect,
                object: nil,
                queue: nil
            ) { _ in
                print("Handling GCControllerDidConnect notification")
            }
        }
        .modify {
            if #available(visionOS 2.0, *) {
                $0.handlesGameControllerEvents(matching: .gamepad)
            } else {
                $0
            }
        }
    }
}

extension View {
    func modify<T: View>(@ViewBuilder _ modifier: (Self) -> T) -> some View {
        return modifier(self)
    }
}
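
One thing worth checking, as a hedged suggestion rather than a confirmed fix: a controller that was already paired before the observer is added may never produce a didConnect notification, so enumerating the currently connected controllers is a cheap sanity check.

import GameController

// List controllers that were connected before observation began.
for controller in GCController.controllers() {
    print("Already connected:", controller.vendorName ?? "unknown controller")
}
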
2 replies · 1 boost · 882 views · Dec ’24

visionOS 2.0 Main Camera Access Enterprise Entitlement Not Recognized in Xcode
I am working on a project that requires access to the main camera on Vision Pro. My main account holder applied for the necessary enterprise entitlement, and we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file, setting it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab. I am getting an error:

Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement.

I have checked the provisioning profile settings online: there is no manual option for adding the main camera access entitlement, and it does not seem to be picking up the approval from the license.
6 replies · 0 boosts · 1.5k views · 2w ago

'Segmentation fault: 11' error after upgrading to the latest macOS version
Today I updated my MacBook Pro to macOS Sequoia, and with it I also downloaded the latest Xcode and visionOS 2 packages. I had a working project, which worked with my Vision Pro updated to the latest visionOS 2. But now, whenever I click on preview in Xcode while editing a Swift file, I receive the following error:

(lots of lines here)
Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda
error: Tool terminated by signal 'Segmentation fault: 11'

I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this? Thank you!
8 replies · 0 boosts · 1.8k views · Nov ’24

Issue with tracking multiple images using ARKit on visionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images. Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
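
For context, a minimal sketch of the multi-image setup being described; the resource group name "TargetImages" is an assumption. The question above is whether anchorUpdates should deliver concurrent updates for all three images.

import ARKit

// Track all reference images in an AR resource group and log every anchor
// update, keyed by image name.
func trackImages() async throws {
    let images = ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
    let provider = ImageTrackingProvider(referenceImages: images)
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        print(update.anchor.referenceImage.name ?? "unnamed", update.event)
    }
}
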
4 replies · 1 boost · 454 views · Mar ’25

Skysphere flickering with attachment at initial display of scene
There is a flickering and slight dimming occurring on the skysphere, specifically at the initial load of the scene, when using an Attachment. This is observed in the Simulator and on the real device. Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it. To replicate the issue, follow these steps:

1. Create a new visionOS app using the Xcode template (see image).
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist) (see image).
3. Replace all Swift files with those you will find in the attached texts.
4. Add the skysphere image asset Skydome_8k found in the Apple sample app Presenting an artist's scene.
5. Launch the app in debug mode via Xcode, on the AVP device or Simulator.
6. Continuously open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
7. Observe the skysphere flicker and dim upon display of the skysphere.

The current workaround is commented in the file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines, and the flickering and dimming do not occur. Are we using attachments wrongly? Is this behavior known and documented? Or is there really a bug in visionOS?

Attached files: AppModel, InitialImmersiveView, MainImmersiveView, TestSkysphereAttachmentFlickerApp, ThreeSixtySkysphereRealityView
1 reply · 0 boosts · 408 views · Oct ’24

Availability check is incorrect on visionOS
With this sample code:

import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello world")
            .hoverEffect(isEnabled: true)
    }
}

private extension View {
    func hoverEffect(isEnabled: Bool) -> some View {
        if #available(iOS 17.0, *) { // visionOS 2.0 goes in here?
            return self
                .hoverEffect(.automatic, isEnabled: isEnabled)
        } else {
            return self
        }
    }
}

You would expect that if the destination is visionOS it would go into the else block, but it doesn't. That seems incorrect, since the condition should only be true if the platform is iOS 17.0+.

Also, I had similar code that was distributed via an XCFramework, and when that view was used in an app running on visionOS, there was a runtime crash (EXC_BAD_ACCESS). The crash could only be reproduced when using the view from the XCFramework, not from the local source code. The problem was fixed by adding visionOS 1.0 to the availability check, but this shouldn't have been a crash in the first place. Does anyone have thoughts on this, or possibly an explanation? Thank you!
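
For reference, the fix mentioned above looks roughly like this sketch; @ViewBuilder is used here (an addition, not from the original post) so the two branches can return different view types without an explicit return.

import SwiftUI

// Listing visionOS explicitly keeps the check from falling through the `*`
// wildcard with an unexpected result on visionOS destinations.
private extension View {
    @ViewBuilder
    func hoverEffect(isEnabled: Bool) -> some View {
        if #available(iOS 17.0, visionOS 1.0, *) {
            self.hoverEffect(.automatic, isEnabled: isEnabled)
        } else {
            self
        }
    }
}
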
5 replies · 2 boosts · 560 views · Oct ’24

Generating procedural textures sample code error.
Specific error message:

validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'

OS: visionOS 2.1 (22N5548c) simulator.
Link: https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
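
The assertion concerns read-write texture tier support, which differs between the simulator's GPU and real hardware. A diagnostic sketch for checking what the current device reports:

import Metal

// Print the GPU's read-write texture tier; tier 1 only allows r32 formats
// (r32Float/r32Uint/r32Sint) for read-write access, which the sample's
// pixel format may exceed.
if let device = MTLCreateSystemDefaultDevice() {
    switch device.readWriteTextureSupport {
    case .tierNone: print("No read-write texture support")
    case .tier1:    print("Tier 1: r32 formats only")
    case .tier2:    print("Tier 2: additional formats supported")
    @unknown default: print("Unknown tier")
    }
}
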
1 reply · 0 boosts · 589 views · Oct ’24