visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts
Presenting immersive content in UIKit app
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS. I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit. This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?

    struct VirtualApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
            .windowStyle(.volumetric)

            ImmersiveSpace(id: "ImmersiveSpace") {
                ImmersiveView()
            }
        }
    }

And in UIKit, how do I make the call to open the ImmersiveView?
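(Not part of the original post.) One commonly suggested direction is to keep the UIKit content but wrap it in a SwiftUI App via UIViewControllerRepresentable, so the ImmersiveSpace scene can be declared in SwiftUI and opened with the openImmersiveSpace environment action. A minimal sketch, assuming the existing UIKit root controller is called MainViewController (hypothetical name):

```swift
import SwiftUI
import UIKit

// Hosts the existing UIKit root view controller inside a SwiftUI scene.
struct MainViewControllerRepresentable: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> MainViewController {
        MainViewController() // hypothetical existing UIKit controller
    }
    func updateUIViewController(_ uiViewController: MainViewController, context: Context) {}
}

@main
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            MainViewControllerRepresentable()
        }
        .windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}

// Somewhere in the SwiftUI layer (e.g. an overlay the UIKit content can trigger),
// the space is opened through the environment action:
struct OpenImmersiveButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task { _ = await openImmersiveSpace(id: "ImmersiveSpace") }
        }
    }
}
```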
Replies: 5 · Boosts: 1 · Views: 1.3k · Activity: 2h
Error -34018 calling SecItemCopyMatching (no group)
Hi, I'm getting error code -34018 in the visionOS simulator when calling SecItemCopyMatching with this query:

    let getquery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: name,
        kSecReturnData as String: kCFBooleanTrue!,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]

The console says:

    copy_matching Error Domain=NSOSStatusErrorDomain Code=-34018 "Client has neither application-identifier nor keychain-access-groups entitlements" UserInfo={numberOfErrorsDeep=0, NSDescription=Client has neither application-identifier nor keychain-access-groups entitlements}

I'm NOT using groups. I've tried changing the bundle ID. Xcode version 15.0 beta 2 (15A5161b). Anyone have any ideas? Anyone using the keychain in their visionOS app? :-)
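(Not part of the original post.) Error -34018 corresponds to errSecMissingEntitlement, which typically means the process is running without an application-identifier or keychain-access-groups entitlement. A minimal sketch of the same lookup with explicit status handling, assuming `name` is the account string used above:

```swift
import Foundation
import Security

func readPassword(account name: String) -> Data? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: name,
        kSecReturnData as String: kCFBooleanTrue!,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]

    var item: CFTypeRef?
    let status = SecItemCopyMatching(query as CFDictionary, &item)

    switch status {
    case errSecSuccess:
        return item as? Data
    case errSecMissingEntitlement:
        // -34018: the running process has no keychain entitlements,
        // a common simulator / code-signing issue rather than a query problem.
        return nil
    default:
        return nil
    }
}
```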
Replies: 2 · Boosts: 0 · Views: 811 · Activity: Jul ’23
Switching between immersive spaces
I have 3 immersive spaces, and I'm trying to "jump" between them. Whenever I go from one space to the next, I try to dismiss the current one by executing await dismissImmersiveSpace() and, right after, await openImmersiveSpace(value: id). This is performed inside a Task, run on the click of a button. It seems like dismissImmersiveSpace returns before the space has actually been removed, so the next immersive space does not open. On the other hand, if I add a manual waiting time between dismissing an immersive space and showing the next one, everything works fine, which is why I suspect this is a lifecycle issue with dismissImmersiveSpace. That being said, is there any way to listen to the actual state of the dismissed immersive space, so that I know when I can present the next one? Is there any way around this without having to introduce a manual delay?
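(Not part of the original post.) A sketch of the jump logic that inspects the result returned by openImmersiveSpace, which reports whether the space actually opened, so a delay or retry can be applied only when the open fails; the space IDs are hypothetical:

```swift
import SwiftUI

struct JumpButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    let nextSpaceID: String // hypothetical, e.g. "SpaceB"

    var body: some View {
        Button("Jump") {
            Task {
                await dismissImmersiveSpace()

                switch await openImmersiveSpace(id: nextSpaceID) {
                case .opened:
                    break
                default:
                    // Fall back to a short wait, then retry once.
                    try? await Task.sleep(for: .milliseconds(200))
                    _ = await openImmersiveSpace(id: nextSpaceID)
                }
            }
        }
    }
}
```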
Replies: 6 · Boosts: 0 · Views: 1.3k · Activity: Jan ’24
How to display stereo images in Apple Vision Pro?
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about 3D stereo images. I guess my question can be put more generally: is there any method to render different content for each eye? This could also be helpful to someone who only has sight in one eye.
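(Not part of the original post.) One approach that is sometimes suggested, offered as an assumption rather than a confirmed answer: build a ShaderGraphMaterial in Reality Composer Pro that routes two texture inputs through a Camera Index Switch node (so each eye samples a different image), then feed the left/right textures from Swift. A sketch, where the material path, input names, and asset names are all hypothetical:

```swift
import RealityKit
import RealityKitContent

// Builds a flat plane whose material shows a different image to each eye.
func makeStereoPlane() async throws -> ModelEntity {
    // "StereoMaterial" is assumed to be authored in Reality Composer Pro with
    // "LeftImage"/"RightImage" inputs selected by a Camera Index Switch node.
    var material = try await ShaderGraphMaterial(named: "/Root/StereoMaterial",
                                                 from: "Immersive",
                                                 in: realityKitContentBundle)

    let left = try TextureResource.load(named: "left_eye")    // hypothetical assets
    let right = try TextureResource.load(named: "right_eye")

    try material.setParameter(name: "LeftImage", value: .textureResource(left))
    try material.setParameter(name: "RightImage", value: .textureResource(right))

    return ModelEntity(mesh: .generatePlane(width: 1.0, height: 0.5),
                       materials: [material])
}
```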
Replies: 7 · Boosts: 0 · Views: 2.6k · Activity: Jun ’24
How to get grounding shadow to work in visionOS?
Hi, I'm trying to replicate the grounding shadow shown in this video, but I couldn't get it to work in the simulator. My scene is rendered as an immersive space. The rocket object has the grounding shadow component with "cast shadow" set to true, but I couldn't see any shadow on the plane beneath it. Things I tried:
- Using code to add the grounding shadow component; didn't work.
- Re-using the IBL from the Hello World project to get some lighting for the objects. The IBL worked, but I still couldn't see the shadow.
- Adding a DirectionalLight, but I got an error saying that directional lights are not supported in visionOS (despite the docs saying the opposite).
A related question on lighting: the simulator definitely applies some scene lighting to objects, but it doesn't seem to do it perfectly. For example, in my screenshot I placed the objects under a transparent ceiling, which should receive a lot of light, but everything is still quite dark.
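(Not part of the original post.) For reference, a minimal sketch of setting the component in code, applied to the entity and all of its descendants, since the component only affects the entity it is attached to; the entity name is hypothetical:

```swift
import RealityKit

// Recursively enable grounding shadows on an entity hierarchy.
func enableGroundingShadow(on entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
    for child in entity.children {
        enableGroundingShadow(on: child)
    }
}

// Usage, assuming `rocket` is the loaded model entity:
// enableGroundingShadow(on: rocket)
```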
Replies: 6 · Boosts: 1 · Views: 2.3k · Activity: Aug ’23
RealityView update closure not executed upon state change
I have the following piece of code:

    @State var root = Entity()

    var body: some View {
        RealityView { content, _ in
            do {
                let _root = try await Entity(named: "Immersive", in: realityKitContentBundle)
                content.add(_root)

                // root = _root  <-- this doesn't trigger the update closure
                Task {
                    root = _root  // <-- this does
                }
            } catch {
                print("Error in RealityView's make: \(error)")
            }
        } update: { content, attachments in
            // NOTE: update not called when root is modified
            // unless the root modification is wrapped in a Task
            print(root)  // the intent is to use root for positioning attachments
        } attachments: {
            Text("Preview")
                .font(.system(size: 100))
                .background(.pink)
                .tag("initial_text")
        }
    } // end body

If I change the root state in the make closure by simply assigning it another entity, the update closure is not called; print(root) prints two empty entities. If I instead wrap the assignment in a Task, the update closure is called and I see the correct root entity being printed. Any idea why this is the case? In general, I'm unsure of the order in which the make, update, and attachments closures are executed. Is there more guidance on what we should expect the order to be and what we should typically do in each closure?
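(Not part of the original post.) A sketch of what the update closure might do with the attachment once root has been populated, using the same tag-based attachments builder as above; positions and names are assumptions:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct AttachmentPositioningView: View {
    @State var root = Entity()

    var body: some View {
        RealityView { content, _ in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
                Task { root = scene } // assign on a later turn so update re-runs
            }
        } update: { content, attachments in
            // Once root points at the loaded scene, hang the label 30 cm above it.
            if let label = attachments.entity(for: "initial_text") {
                if label.parent == nil { content.add(label) }
                label.position = root.position(relativeTo: nil) + [0, 0.3, 0]
            }
        } attachments: {
            Text("Preview")
                .font(.system(size: 100))
                .background(.pink)
                .tag("initial_text")
        }
    }
}
```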
Replies: 1 · Boosts: 0 · Views: 899 · Activity: Jul ’23
visionOS - Positioning and sizing windows
Hi,
In the visionOS documentation, "Positioning and sizing windows" says:
- Specify initial window position: "In visionOS, the system places new windows directly in front of people, where they happen to be gazing at the moment the window opens."
- Specify window resizability: "In visionOS, the system enforces a standard minimum and maximum size for all windows, regardless of the content they contain."
The first thing I don't understand is why the visionOS documentation talks about macOS at all. The second thing: what is this page for, if it only tells us that on visionOS we have no control over the position and size of 2D windows? It is precisely the opposite that would be interesting. I don't understand this limitation; it really limits the use of 2D windows on visionOS. I really hope this limitation will disappear in future betas.
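(Not part of the original post.) For context, the scene modifiers that do exist let an app suggest an initial size and constrain resizability, even though placement remains system-controlled on visionOS. A minimal sketch, with hypothetical type names:

```swift
import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        // Suggest an initial window size in points (the system may still clamp it).
        .defaultSize(width: 800, height: 600)
        // Bound resizing by the content's own minimum/maximum frame constraints.
        .windowResizability(.contentSize)
    }
}
```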
Replies: 10 · Boosts: 1 · Views: 3.2k · Activity: Jul ’23
How to gain full control over Apple Vision Pro's display and render a 2D graph plot on it
How can I achieve full control over the Vision Pro's display and effectively render a 2D graph plot on it? I would appreciate guidance on the necessary steps or code snippets. P.S. As per the Apple documentation, "For a more immersive experience, an app can open a dedicated Full Space where only that app’s content will appear." This still does not fulfill the "flat bounded 2D" requirement, as Spaces provide an unbounded 3D immersive view.
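(Not part of the original post.) One possible direction, offered as an assumption: a Full Space only has to contain what the app adds to it, so a flat, bounded 2D plot can be shown in one by placing it as a view attachment. A sketch, where PlotView is a hypothetical SwiftUI view that draws the graph (e.g. with Swift Charts):

```swift
import SwiftUI
import RealityKit

@main
struct GraphApp: App {
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        ImmersiveSpace(id: "PlotSpace") {
            RealityView { content, attachments in
                if let panel = attachments.entity(for: "plot") {
                    panel.position = [0, 1.4, -1.5] // roughly eye height, 1.5 m ahead
                    content.add(panel)
                }
            } attachments: {
                Attachment(id: "plot") {
                    PlotView() // hypothetical 2D plot view
                        .frame(width: 800, height: 600)
                        .glassBackgroundEffect()
                }
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
    }
}
```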
Replies: 2 · Boosts: 0 · Views: 713 · Activity: Jul ’23
Debugging Max Volume size - GeometryReader3D units?
Hello, I'm curious if anyone has useful debug tools for out-of-bounds issues with Volumes. I am opening a volume with a size of 1 m, 1 m, 10 cm and adding a RealityView with a ModelEntity that is 0.5 m tall, yet I see the model clip at the top and bottom. I find this odd, because I feel like it should fit within the size of the Volume. I was curious what size SwiftUI thinks the Volume is, so I tried using a GeometryReader3D:

    GeometryReader3D { proxy in
        VStack {
            Text("\(proxy.size.width)")
            Text("\(proxy.size.height)")
            Text("\(proxy.size.depth)")
        }
        .padding()
        .glassBackgroundEffect()
    }

Unfortunately I get 680, 1360, and 68. I'm guessing these units are in points, but that's not very helpful. The documentation says to use real-world units for Volumes, but none of the SwiftUI frame setters and getters appear to support different units. Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
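(Not part of the original post.) One way to relate the two unit systems is the PhysicalMetric property wrapper, which converts a real-world length into points for the current scene, so the GeometryReader3D values can be compared against meters. A sketch:

```swift
import SwiftUI

struct VolumeSizeDebugView: View {
    // 1 meter, expressed in points for this scene.
    @PhysicalMetric(from: .meters) private var oneMeterInPoints = 1.0

    var body: some View {
        GeometryReader3D { proxy in
            VStack {
                Text("width: \(proxy.size.width / oneMeterInPoints, specifier: "%.2f") m")
                Text("height: \(proxy.size.height / oneMeterInPoints, specifier: "%.2f") m")
                Text("depth: \(proxy.size.depth / oneMeterInPoints, specifier: "%.2f") m")
            }
            .padding()
            .glassBackgroundEffect()
        }
    }
}
```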
Replies: 1 · Boosts: 0 · Views: 607 · Activity: Oct ’23
Image Input for ShaderGraphMaterial
In RealityComposerPro, I've set up a Custom Material that receives an Image File as an input. When I manually select an image and upload it to RealityComposerPro as the input value, I'm able to easily drive the surface of my object/scene with this image. However, I am unable to drive the value of this "cover" parameter via shaderGraphMaterial.setParameter(name: , value: ) in Swift since there is no way to supply an Image as a value of type MaterialParameters.Value. When I print out shaderGraphMaterials.parameterNames I see both "color" and "cover", so I know this parameter is exposed. Is this a feature that will be supported soon / is there a workaround? I assume that if something can be created as an input to Custom Material (in this case an Image File), there should be an equivalent way to drive it via Swift. Thanks!
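(Not part of the original post.) On SDKs where MaterialParameters.Value includes a textureResource case, the image input can be driven by loading a TextureResource and handing it to setParameter. A sketch, assuming the parameter keeps the name "cover" and the image is bundled as "coverImage" (hypothetical asset name):

```swift
import RealityKit

// Drive the exposed "cover" input of a shader graph material with a bundled image.
func applyCoverImage(to material: inout ShaderGraphMaterial) throws {
    let texture = try TextureResource.load(named: "coverImage")
    try material.setParameter(name: "cover", value: .textureResource(texture))
}
```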
Replies: 1 · Boosts: 0 · Views: 911 · Activity: Aug ’23
Vision Pro Camera Access
Hello, I saw some things saying that developers will not have access to the external cameras on the Vision Pro. Does this mean that developers will not be able to use the cameras at all? Let's say I wanted to use the camera to place an object on a desk or in between some markers, would this not be possible? Or is there still a way of developing something like this?
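(Not part of the original post.) Placing an object on a desk does not require raw camera frames; visionOS exposes scene understanding through ARKit data providers instead. A minimal sketch using plane detection, assuming it runs inside an immersive space with world-sensing permission granted:

```swift
import ARKit

// Watch for horizontal surfaces (e.g. a desk) without ever touching the camera feed.
@MainActor
final class DeskFinder {
    private let session = ARKitSession()
    private let planes = PlaneDetectionProvider(alignments: [.horizontal])

    func start() async throws {
        try await session.run([planes])

        for await update in planes.anchorUpdates {
            if update.anchor.classification == .table {
                // originFromAnchorTransform is the table's pose in world space;
                // an entity can be positioned there in a RealityView.
                let transform = update.anchor.originFromAnchorTransform
                print("Found a table at \(transform.columns.3)")
            }
        }
    }
}
```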
Replies: 1 · Boosts: 1 · Views: 1.2k · Activity: Aug ’23