I have more than 1 TB of immersive videos and I want to play them back. Is there a way to connect an SSD to the Vision Pro via the Developer Strap? Or is it possible to connect a 10G Ethernet adapter, then use Ethernet to reach a disk/NAS and attach the drive over IP?
I want to record an animation with an entity and then export it to .usd without using Reality Composer Pro. How can I achieve that?
I am experiencing an issue where USDZ files exported from Blender do not display textures when opened in Apple Vision Pro Quick Look. Instead of the expected materials, the model appears completely white, as if the textures are missing or not being recognized by the rendering engine.
One of the most common ways to provide a window size in visionOS is to use the defaultSize scene modifier.
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
Starting in visionOS 26, using this has a side effect: visionOS 26 restores windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.
Manual resize respected:
• Leaving a room and returning later
• Taking the headset off and putting it back on later
Manual resize NOT respected:
• Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost. This is counter to how all other windows and widgets work.
I reported this last month (FB18429638), but haven't heard back if this is a bug or intended behavior.
Questions
• What is the best way to provide a default window size that is only used when opening new windows, and not during scene restoration?
• Should we try to keep track of window sizes after users adjust them and save that somewhere? (See the sketch after this list.)
• If this is intended behavior, can someone please update the docs accordingly?
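Regarding the second question, a minimal sketch of one possible workaround, assuming you persist the user's last size with AppStorage and feed it back into defaultSize (SomeApp, SomeView, and the storage keys are illustrative, not from the original post):

import SwiftUI

@main
struct SomeApp: App {
    // Hypothetical persistence: remember the last size the user chose.
    @AppStorage("lastWindowWidth") private var lastWidth: Double = 600
    @AppStorage("lastWindowHeight") private var lastHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            SomeView()
                // Record the content size whenever the user resizes the window.
                .onGeometryChange(for: CGSize.self, of: { $0.size }) { newSize in
                    lastWidth = newSize.width
                    lastHeight = newSize.height
                }
        }
        // On the next launch, the saved size becomes the default.
        .defaultSize(CGSize(width: lastWidth, height: lastHeight))
    }
}

struct SomeView: View {
    var body: some View { Text("Hello") }
}

This doesn't change the restoration behavior itself, since defaultSize is read when the scene is created, but after a device restart the window would at least reopen at the last size the user chose.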
It's much easier to use a custom material to bridge a Metal shader onto a RealityKit model than to do the same thing with LowLevelTexture. Why doesn't visionOS support this material?
Hi, I'm using a Vision Pro (M5) and Developer Strap 2. When I connect it to my Mac, it still shows 480M… all systems are on the latest firmware.
Does anyone know why?
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS.
You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it.
I personally know multiple people who are not buying the headset simply because you locked those features out.
No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
Hi,
I wanted to ask if you are familiar with a way of making the Logitech Muse sterile for operating room use?
The Section struct publicly exposes only the center property, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives their position and rotation.
When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not public? The transform in particular seems necessary to make any real use of the Sections.
Hi,
Is there a resource or sample code about how to draw an outline around a mesh in RealityKit?
Typically, this is useful for visualizing a selection, like in Reality Composer Pro.
How can such an effect be achieved? A shader material? A post-process effect in ARView or RealityRenderer?
Methods such as duplicating the entity mesh, scaling it, and using material.faceCulling = .front did not look good in my experiments.
Thank you.
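For reference, here is a minimal sketch of the inverted-hull variant mentioned above (the helper name and scale factor are illustrative, and it assumes a material that supports faceCulling, as the post describes); as noted, it may not look good on every mesh:

import RealityKit
import UIKit

// Clone the mesh, enlarge it slightly, and cull front faces so that only
// the back faces of the shell remain visible as an outline.
func addOutline(to entity: ModelEntity, color: UIColor = .systemYellow) {
    guard let model = entity.model else { return }
    var outlineMaterial = UnlitMaterial(color: color)
    outlineMaterial.faceCulling = .front // show only back faces

    let shell = ModelEntity(
        mesh: model.mesh,
        materials: Array(repeating: outlineMaterial,
                         count: model.mesh.expectedMaterialCount)
    )
    shell.scale = SIMD3<Float>(repeating: 1.02) // slightly larger shell
    entity.addChild(shell)
}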
The following sample code project does not seem to work as expected:
https://developer.apple.com/documentation/visionos/implementing-shareplay-for-immersive-spaces-in-visionos
We have tried to get this project working with a client, but while we were able to see nearby users and make FaceTime calls, the color-changing cube experience always remained a single color.
Are there step-by-step instructions that Apple has used to verify this sample code, so I can try to recreate its expected behavior for both nearby participants and those on a FaceTime call?
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough.
Is there any way to disable this boundary or extend its radius?
This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit.
If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments?
Any insights would be greatly appreciated!
Can you help me write code that picks up an element some distance away, brings it near me, lets me flick it a bit, and then sends it back to its original position when I release it?
Thanks a lot,
Christophe
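Not a full solution, but here is a minimal sketch of the bring-near-then-return part, assuming a RealityView with a drag gesture (the view, the sphere setup, and the animation duration are illustrative):

import SwiftUI
import RealityKit

struct PickAndReturnView: View {
    // Remember where the entity started so we can send it back.
    @State private var originalTransform: Transform?

    var body: some View {
        RealityView { content in
            // Hypothetical target: a sphere about 1.5 m in front of the user.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial()])
            sphere.position = [0, 1.2, -1.5]
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: false)
            content.add(sphere)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    if originalTransform == nil {
                        originalTransform = value.entity.transform
                    }
                    // Follow the pinch while dragging.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
                .onEnded { value in
                    // Animate the entity back to where it started.
                    if let home = originalTransform {
                        value.entity.move(to: home,
                                          relativeTo: value.entity.parent,
                                          duration: 0.4,
                                          timingFunction: .easeInOut)
                    }
                    originalTransform = nil
                }
        )
    }
}

The "flick" part would need a PhysicsBodyComponent and an applied impulse, which is omitted here.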
I am wondering: is it possible to configure a 3D object to respond to a person's gaze, e.g., change the colors of some parts of the 3D model where the person is looking, i.e., where their gaze lands on the surface of the model?
For example, imagine a 3D model of a cool dragon 🐉 in a person's physical space, seen through the mixed-reality view of a Vision Pro. It would be really cool to change the color, or make specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro?
Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.
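In case it helps: visionOS deliberately does not expose raw gaze data to apps, so per-region color changes driven by gaze aren't possible with the public API as far as I know. The closest supported mechanism is HoverEffectComponent, which has the system highlight an entity the user looks at without ever reporting the gaze location to the app. A minimal sketch (the helper name is illustrative):

import RealityKit

// The system applies the hover effect out-of-process; the app is never
// told where the user is looking.
func addGazeHighlight(to entity: ModelEntity) {
    entity.components.set(InputTargetComponent())
    entity.generateCollisionShapes(recursive: true)
    entity.components.set(HoverEffectComponent())
}

To highlight only parts of a model, you could split it into child entities and give each its own HoverEffectComponent.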
Hi,
When viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top right corner to transition from the window presenting the image as spatial3d to an immersive photo scene using spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image.
Thanks.
When developing a widget on the visionOS 26 beta, I found that it does not work when running on a real visionOS device; it fails with the error:
Please adopt container background api
It is worth mentioning that this problem does not occur in the visionOS simulator.
Does anyone know the reason and a solution, or whether this is a visionOS bug that needs a Feedback report? Thank you!
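For what it's worth, that error message matches the WidgetKit requirement on other platforms that every widget view declare its background via containerBackground(_:for:). A minimal sketch, assuming a standard timeline entry (SimpleEntry and the view are illustrative):

import WidgetKit
import SwiftUI

struct SimpleEntry: TimelineEntry {
    let date: Date
}

struct SomeWidgetEntryView: View {
    var entry: SimpleEntry

    var body: some View {
        Text(entry.date, style: .time)
            // Without this, WidgetKit rejects the view with
            // "Please adopt containerBackground API".
            .containerBackground(.fill.tertiary, for: .widget)
    }
}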
Hi everyone,
I’m trying to verify something mentioned in the WWDC session “Explore enhancements to your spatial business app.”
At timestamp 3:36, the presenter states:
“You can now access your enterprise license files directly within your Apple Developer account.”
I’ve checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files.
Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file.
Could someone from Apple or anyone who has seen this feature clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable.
Thanks,
We got very excited when we saw support for the PSVR2 at WWDC!
WebXR is particularly interesting to us, so we got the controllers to give it a try.
Unfortunately, they only register as gamepads in the navigator, not as XRInputDevices.
We went through the experimental flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?
Hi, I'm working on a visionOS app and would like to integrate Background Assets to download large files after the app is installed.
I'm wondering what happens if the user takes off the headset while a background asset is downloading: does the download continue, or is it stopped or paused?
I was looking for a way to download large assets while the user is not wearing the Vision Pro; is there any other alternative?
Thanks in advance.
Has anyone had success with MeshInstancesComponent? I tried to follow the sample code from What's New in RealityKit, but it wouldn't compile. I was able to use one of the init overloads to get it to compile, but using it crashes both my device and the simulator, even with a single instance.