visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,190 Posts
3 Replies · 2.5k Views
There was no mention of whether the Vision Pro can be used outside. Several of the other AR/VR systems out there prohibit this (their sensors get overloaded). Can the Vision Pro be used in sunlight? Thanks!
2 Replies · 984 Views
Hey everyone, I'm thinking about building a warehouse management app for the new Vision Pro, but I have some concerns and would love to get your opinions or feedback:

Comfort for extended use: How does the Vision Pro fare for all-day use? Is it comfortable without causing fatigue or nausea? This is crucial in a warehouse scenario.

Movement: Is the Vision Pro built for scenarios with plenty of movement, or is it more of a stationary device? Warehousing involves a lot of moving around.

Visual accuracy: Are the passthrough visuals accurate enough for precise tasks like opening boxes and handling items? This would be a make-or-break feature for a warehouse app. Since we rely heavily on visual feedback from our eyes to make fine adjustments to our movements, I imagine the passthrough has to be quite good for manipulating the physical world without being clumsy.

These are just some of the things on my mind. I'd really appreciate your thoughts and opinions. If any Apple folks are out there, your input would be super valuable too. Thanks in advance! Bastian
1 Reply · 773 Views
I'm new to Apple software development and need to get a few questions sorted before I can start building apps for visionOS. I don't own a Mac, so the only way I can run Xcode is through a macOS 13.0 VM on a Windows 10 host, but that doesn't support Xcode 15. Will Xcode 14.3.1 do? Apple says the SDK will be released later this month. Does that mean that by installing the SDK I can develop visionOS apps on other platforms, like Visual Studio Code, and still see how the app would look running on the Vision Pro? Thanks a lot; I just need to find a way to start coding ASAP.
0 Replies · 516 Views
I am interested in experimenting with different algorithms to support people with color vision deficiency. Can shaders and/or color matrix transformations be applied to the camera images themselves via Unity and, if so, can they be applied to each eye independently? Thanks, Tom
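For illustration, here is what such a color-matrix transform looks like in Swift with Core Image (not Unity, and not applied to the camera feed, which is the open question here); the protanopia-simulation coefficients are commonly cited but illustrative:

```swift
import CoreImage

// Minimal sketch of a color-matrix transform for color vision deficiency.
// Each output channel is a dot product of the input RGBA with a vector.
func simulateProtanopia(_ input: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 0.567, y: 0.433, z: 0, w: 0), forKey: "inputRVector")
    filter.setValue(CIVector(x: 0.558, y: 0.442, z: 0, w: 0), forKey: "inputGVector")
    filter.setValue(CIVector(x: 0, y: 0.242, z: 0.758, w: 0), forKey: "inputBVector")
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")
    return filter.outputImage
}
```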
1 Reply · 792 Views
One thing that was not very clear to me in the WWDC videos on visionOS app development: if I want to trigger an action (let's say, changing the scene) based on the user's relative position, am I going to be able to do it? Example: if the user comes too close to an object, it starts playing an animation. Reference video: wwdc2023-10080
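A minimal sketch of one way this could be done in an ImmersiveSpace, assuming ARKit's WorldTrackingProvider for the head pose; the target entity, 1 m threshold, and polling interval are placeholders:

```swift
import ARKit
import RealityKit
import QuartzCore
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Poll the device (head) pose and compare it to an entity's world position.
func watchProximity(to target: Entity) async throws {
    try await session.run([worldTracking])
    while !Task.isCancelled {
        if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
            let t = device.originFromAnchorTransform.columns.3
            let headPosition = SIMD3<Float>(t.x, t.y, t.z)
            if simd_distance(headPosition, target.position(relativeTo: nil)) < 1.0 {
                // Close enough: trigger the animation or scene change here.
            }
        }
        try await Task.sleep(for: .milliseconds(100))
    }
}
```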
2 Replies · 693 Views
Hi, in the "Run your iPad and iPhone apps in the Shared Space" session video, it is mentioned that all suitable iOS/iPadOS apps will be made available in the visionOS app store by default. I'm wondering if someone could share the criteria that will be used to determine if an iOS/iPadOS application is suitable for visionOS?
3 Replies · 808 Views
Has Apple worked out how WebXR-authored projects in Safari operate with visionOS? Quest already has support. And I imagine many cross-platform experiences (especially for professional markets, where apps run on Windows through the web) would be served well by this. Is there documentation for it?
4 Replies · 1k Views
In ARKit for iPad, I could (1) build a mesh on top of the real world and (2) request a people occlusion map for use with my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or automatically when passthrough is enabled? If it's not possible, is this something that might get a solution in future updates as the platform develops? Being able to combine virtual content with the real world, with people able to interact with that content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
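For reference, a sketch of the iPad-era setup the post describes, using ARKit's scene reconstruction and person segmentation; visionOS exposes no equivalent frame-semantics call, which is exactly the question here:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// (1) Build a mesh on top of the real world (LiDAR devices).
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
// (2) Ask for person segmentation with depth; the compositor then
// occludes virtual content behind people automatically.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
// arView.session.run(configuration)  // on an ARView in the iPad app
```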
0 Replies · 973 Views
On visionOS, is a combination of full passthrough, unbounded volumes, and my own custom 3D rendering in Metal possible? According to the RealityKit and Unity visionOS talk, towards the end, an unbounded volume mode allows you to create full passthrough experiences with 3D graphics rendering: essentially full 3D AR in which you can move around the space. It's also shown that you can get occlusion for the graphics.

This is all great; however, I don't want to use RealityKit or Unity in my case. I would like to render to an unbounded volume using my own custom Metal renderer and still get AR passthrough and the ability to walk around and composite virtual graphical content with the background. To reiterate, this is exactly what is shown in the video using Unity, but I'd like to use my own renderer instead of Unity or RealityKit. This doesn't require access to the video camera texture, which I know is unavailable.

Having the flexibility to create passthrough-mode content in a custom renderer is super important for making an AR experience in which I have control over rendering. One use case I have in mind is Wizard's Chess: you see the real world and can walk around a room-size chessboard with virtual chess pieces mixed into the real world, and you can see the other player through passthrough as well. I'd also like to render graphics on my living room couches using scene-reconstruction mesh anchors, for example, to change the atmosphere. The video already shows several nice use cases, like being able to interact with a tabletop fantasy world with characters. Is what I'm describing possible with Metal? Thanks!

EDIT: Also, if not volumes, then full spaces? I don't need access to the camera images that are off-limits. I would just like passthrough + composition with 3D Metal content + full ARKit tracking and occlusion features.
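For context, the route that does exist for custom Metal rendering on visionOS is CompositorServices: a CompositorLayer inside an ImmersiveSpace hands you a LayerRenderer to drive your own render loop. A minimal sketch follows; whether that loop can composite over passthrough rather than full immersion is the open question, and makeAndRunRenderer is a hypothetical engine entry point:

```swift
import SwiftUI
import CompositorServices

@main
struct CustomMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer { layerRenderer in
                // Drive the Metal render loop on a dedicated thread;
                // each frame queries timing, drawables, and device
                // anchors from the LayerRenderer.
                let renderThread = Thread {
                    makeAndRunRenderer(layerRenderer)
                }
                renderThread.name = "RenderThread"
                renderThread.start()
            }
        }
    }
}

// Placeholder: set up Metal state and loop over layerRenderer frames here.
func makeAndRunRenderer(_ layerRenderer: LayerRenderer) { /* ... */ }
```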
2 Replies · 1.5k Views
Hi, one point I would like to ask: what specs are needed, and which Mac would you recommend, for Apple Vision Pro development? We use Xcode, RealityKit, ARKit, Reality Composer Pro, the Unity editor that supports visionOS development, and MaterialX. If possible, what notebook and desktop models do you recommend? Best regards, Sadao Tokuyama https://1planet.co.jp/
2 Replies · 774 Views
Watching "Go beyond the window with SwiftUI" and the presenter is talking about immersive spaces phases (about 12:41 in the presentation) being automatically becoming inactive on the user stepping outside of the system boundary. I am curious about how this system boundary is defined? What happens if I have a mixed immersive space and want to allow the user to walk around a large room (ar enhanced art gallery experience for example) and explore, after a few steps will the space become inactive? Thanks for any clarifications.
0 Replies · 488 Views
Hey! I have a question about how large a volume's defined bounds (along with its corresponding 3D content) are allowed to be within the Shared Space. Also, can a user change/scale the dimensions specified for the volume's bounds? Thanks.
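For reference, a minimal sketch of where a volume's bounds are declared: a volumetric WindowGroup with a default size in physical units; whether the user can rescale those bounds at runtime is the open question:

```swift
import SwiftUI

@main
struct VolumeBoundsApp: App {
    var body: some Scene {
        WindowGroup {
            Text("3D content goes here")  // stand-in for a RealityView / Model3D
        }
        .windowStyle(.volumetric)
        // Declares the volume's bounds in physical units for the Shared Space.
        .defaultSize(width: 1.0, height: 0.5, depth: 0.5, in: .meters)
    }
}
```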