visionOS

RSS for tag

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,167 Posts
Sort by:
Post not yet marked as solved
1 Replies
48 Views
I am developing an app in mixed immersive native app on Vision Pro. In my RealityView, I add my scene by content.add(mainGameScene). Normally the anchored position (original coord) should be the device position but on the ground (with y == 0 on the ground). At least this is how I understand the RealityViewContent works. So if I place something at position (0, 0, -1.0), the object should be in the front of you but on the floor (z axis is pointing backwards) However recently I load a different scene and I add that with same code, content.add(mainGameScene), something has changed, my scene randomly anchored on the floor or ceiling, according to the places I stand or sit. When I open Visualizations of my anchoring point, I could see that anchor point I am using is on the ceiling. The correct one (around my foots) is left over there. How could I switch to the correct anchored position? Or does any setting can change the behavior of default RealityViewContent?
Posted
by milanowth.
Last updated
.
Post not yet marked as solved
4 Replies
615 Views
I am a Apple Developer member signed into my developer Apple ID on my Vision Pro and I‘m unable to access Beta Updates in Settings -> General -> Software Update. The option doesn’t even show up. I’ve tried restarting a few times and signing out and back in on my Vision Pro. I’ve been able to successfully deploy builds from Xcode to my Vision Pro. I’m also able to access Beta Updates from my other Apple devices on the same Apple ID. I’ve also noticed that my Apple ID avatar isn’t syncing—it shows the default initials in visionOS Settings and updating it there does not seem to sync across devices. Anyone have any ideas how I might fix this?
Posted
by n8chur.
Last updated
.
Post not yet marked as solved
3 Replies
283 Views
I am running VisionOS 1.0.3. On the software update page, there is no option to install a beta. I don't see any setting that would enable/disable this... anyone have any suggestions?
Posted
by JustMark.
Last updated
.
Post not yet marked as solved
0 Replies
42 Views
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window. ( Which I currently understand to be a maximum of 2m ) Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing. However, as far as I can tell, I'm quite limited in how I might be able to achieve this when using RealityKit on VisionOS. Some approaches I've seen to accomplish similar effects seem to use multiple passes of model geometries rendering into stencil buffers and using that to inform whether or not a cap should be drawn. However, afiact, if I have opted into using a RealityView and RealityKit, I don't have the level of control over my render pipeline such that I can render ModelEntities and also have multiple rendering passes over the set of contained entities to render into a stencil buffer that I then provide to a separate set of "capping planes" ( how I currently imagine I might accomplish this effect ). Alternatively ( due to the nature of the models I'm using ) I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to construct a height map of rendered entities seems similarly difficult using the VisionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I might then pass to other functions to use as an input; ShaderGraphMaterial seems biased to the fact that all image inputs and outputs are either literal files or the actual rendered buffer. Would anyone out there have already created an effect like this that might have some advice? Or, potentially correct any misunderstandings I have with regards to accessing the Metal pipeline for RealityView or using ShaderGraphMaterial to construct a height map?
Posted
by netshade.
Last updated
.
Post marked as solved
7 Replies
1.1k Views
Hello, When an iOS app runs on Vision Pro in compatible mode, is there a flag such as isiOSAppOnVision to determine the underlying OS at runtime? Just like the ProcessInfo.isiOSAppOnMac. It will be useful to optimize the app for visionOS. Already checked but not useful: #if os(xrOS) does not work in compatible mode since no code is recompiled. UIDevice.userInterfaceIdiom returns .pad instead of .reality. Thanks.
Posted
by Gong.
Last updated
.
Post not yet marked as solved
0 Replies
55 Views
Hello, I have an app that is using WorldAnchorProvider, basically soemthing similar to the obeject placement example. I'd like to show to the user a specific UI when no anchors where loaded. However, no matter where I move withing my house, they always load. So I was wondering, how far do I need to go in order to for the device not be able to load my placed world anchors? Thanks
Posted
by elmotron.
Last updated
.
Post not yet marked as solved
0 Replies
63 Views
For example, can I place items in vr in my living room, then walk into my bedroom and no longer see them as they are hidden behind a wall? Could I place something inside a cupboard?
Posted
by jinrui.
Last updated
.
Post not yet marked as solved
0 Replies
39 Views
Hi, I'm an indie developer trying to make a 2D prototype of a simple game where I have to drag and drop items from one box to another. I have so far implemented a working prototype with the .draggable (https://developer.apple.com/documentation/swiftui/view/draggable(_:)) function which works well in the simulator, but as soon as I use my vision pro, the finger pinch action doesn't register half the time. I can only select the object around 30% of the time. My code is as follows. DiskView(size: diskSize, rod: rodIndex) .draggable(DiskView(size: diskSize, rod: rodIndex)) .hoverEffect() I have also registered DiskView as a UTType and have it able to be transferred around. The business logic works, just the pinch gesture does not work half the time.
Posted Last updated
.
Post not yet marked as solved
1 Replies
87 Views
I am trying add Sign in with Apple but when I attempt to capability in my app nothing happens in the list does apple not able to provide this feature yet in Vision OS or is there any bug or may be ami missing something which does not seems?
Posted Last updated
.
Post not yet marked as solved
5 Replies
1.6k Views
Hi community, I have a pair of stereo images, one for each eye. How should I render it on visionOS? I know that for 3D videos, the AVPlayerViewController could display them in fullscreen mode. But I couldn't find any docs relating to 3D stereo images. I guess my question can be brought up in a more general way: Is there any method that we can render different content for each eye? This could also be helpful to someone who only has sight on one eye.
Posted Last updated
.
Post not yet marked as solved
0 Replies
83 Views
Hi, does anyone know how to capture audio input in vision os? I tried the sample code from official examples https://developer.apple.com/documentation/avfoundation/avcapturesession , but it did work.
Posted
by Kuoxen.
Last updated
.
Post not yet marked as solved
1 Replies
145 Views
Hello. I'm creating a fully immersive application and I need to use hand tracking. I've included the corresponding key (SHandsTrackingUsageDescription) in the info.plist. Everything works correctly. At the beginning, the application displays the corresponding permission. But now here's the question: if I make a mistake and you click on Don't Allow, the permission prompt won't appear again and the application stops working. How can I request permission again? Thanks.
Posted Last updated
.
Post not yet marked as solved
0 Replies
89 Views
Is there a way to display something on the front display (the one that normally displays EyeSight)? I want to be able to display some text there such as "Working, please do not disturb" or "on the call", or something funny. Can we as developers access it in any way from our app?
Posted
by masterov.
Last updated
.
Post not yet marked as solved
1 Replies
173 Views
ISSUE: In our code we are using the ImageTrackingProvider and ARKit similarly with the code provided from Apple documentation: https://developer.apple.com/documentation/visionos/tracking-images-in-3d-space However, when the application runs and we move the image in real space, the Image Tracking Provider send updates with a very low rate (about one frame per sec!) on the real Vision Pro device (please see the attached video). According to WWDC2023 (https://developer.apple.com/videos/play/wwdc2023/10091) the image anchors are updated as soon as they are available automatically by the system and they are not depended from camera frame rates. Therefore, why this is happening? We tried also to create an ImageAnchor by using the Reality Composer Pro in order to build a scene with it and check if we could have better tracking speed and updates. However, we found that Reality Composer Pro does not support image anchors like its predecessor Reality Composer! We also created the ImageAnchor on a Reality Composer Project and we tried to import the reality project / scene to out visionOS app. However, when the app builds we take an incompatibility message: “RealityKitContent - Tool terminated by signal 'Bus error: 10’ ” Other Reality Composer Projects that do not have image anchors are imported without any problems! We also tried to find if there is a frame rate setting on the real Vision Pro device (for reasons of battery saver), but we couldn’t find any. Finally, we tried to change asynchronous Tasks to synchronous in our code, but this couldn’t solve the problem. As the image detection and tracking in our code runs perfectly on iOS devices, and we want to build our apps to pure immersive space visionOS projects, what else can we do to have the same efficiency and performance like iOS?
Posted
by papadig.
Last updated
.
Post not yet marked as solved
2 Replies
120 Views
Hello, I would like to change the aspect (scale, texture, color) of a 3D element (Model Entity) when I hovered it with my eyes. What should I do If I want to create a request for this feature? And how would I know if it will ever be considered or when it will appear?
Posted Last updated
.
Post not yet marked as solved
0 Replies
136 Views
I have a simple example to demonstrate... struct MyView: View { var body: some View { Text("WOW") } } struct MyOtherView: View { var body: some View { NavigationStack { Text("WOW") } } } On VisionOS, MyOtherView has a glass background effect that cannot be disabled. glassBackgroundEffect(displayMode: .never) .background(.clear), .foregroundColor(.clear), none of them work. I then resorted to the SwiftUIIntrospect package to try set .clear on various child objects of the NavigationStack but nothing is working. I am in control of my own glass containers. I have a couple with space between them, but with the NavigationStack it sets a background behind both of them ruining the effect. This is what MyOtherView renders as: I'm looking for it to be completely transparent except the text. Like the below layout. For now I will have to roll my own navigation.
Posted
by lbennett.
Last updated
.
Post not yet marked as solved
0 Replies
111 Views
Firstly, everything is ok. I have been connected Apple Vision Pro device to the Xcode via wireless network, also build my app in past several weeks. But since yesterday, I can not connect Apple Vision Pro device to my Xcode anymore. The device did not listed in Devices and Simulators window. I have tried: Update my Xcode to 15.3 Reboot my Mac and Apple Vision Pro Reset Apple Vision Pro also erase all data Other Macs in the same network also did not list any Vision Pro device I'm sure Vision Pro and Mac are in the same network, and it worked before. I go to Settings - General - Remote Devices, and open Xcode's Devices and Simulators window, still can't see any Apple Vision Pro device.
Posted
by A00a.
Last updated
.
Post not yet marked as solved
1 Replies
497 Views
Hi there, I have some existing metal rendering / shader views that I would like to use to present stereoscopic content on the Vision Pro. Is there a metal shader function / variable that lets me know which eye we're currently rendering to inside my shader? Something like Unity's unity_StereoEyeIndex? I know RealityKit has GeometrySwitchCameraIndex, so I want something similar (but outside of a RealityKit context). Many thanks, Rich
Posted
by RichLogan.
Last updated
.
Post not yet marked as solved
1 Replies
277 Views
I tried to show spatial photo on my application by swiftUI's Image but it just show flat version of it even I Use Vision Pro, so, how can I show spatial photo to users, does there any options for this?
Posted
by tkgka.
Last updated
.