Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


How to present an alert in visionOS immersive space?
My visionOS app uses an immersive view. If the app encounters an error, I want to present an alert. I tried to present such an alert in a demo app, but it is not shown, even though nearly the same code presents an alert window on iOS. Here is my demo code, based on Apple's Immersive Environment App template:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ErrorInfo: LocalizedError, Equatable {
    var errorDescription: String?
    var failureReason: String?
}

struct ImmersiveView: View {
    @State private var presentAlert = false
    let error = ErrorInfo(
        errorDescription: "My error",
        failureReason: "No reason")

    var body: some View {
        RealityView { content, attachments in
            let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
            var material = UnlitMaterial()
            material.color.tint = .red
            let boardEntity = ModelEntity(mesh: mesh, materials: [material])
            boardEntity.transform.translation = [0, 0, -3]
            content.add(boardEntity)
        } update: { content, attachments in
            // …
        } attachments: {
            // …
        }
        .onAppear {
            presentAlert = true
        }
        .alert(isPresented: $presentAlert,
               error: error,
               actions: { error in },
               message: { error in
                   Text(error.failureReason!)
               })
    }
}
```

Since I cannot see any alert, is something wrong with my code? How should an alert be presented in an immersive space?
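One workaround sometimes suggested is to host the alert in an ordinary SwiftUI window rather than in the immersive space itself, since alerts are window-level presentations. A minimal sketch of that idea, reusing the ErrorInfo type from the post; the "ErrorWindow" id and ErrorAlertView are hypothetical names, not part of Apple's template:

```swift
import SwiftUI

// Sketch (an assumption, not Apple-documented behavior): present the alert
// from a small utility window opened when the immersive content hits an error.
// "ErrorWindow" and ErrorAlertView are hypothetical names.
struct ErrorAlertView: View {
    @State private var presentAlert = true
    let error = ErrorInfo(errorDescription: "My error", failureReason: "No reason")

    var body: some View {
        Color.clear
            .alert(isPresented: $presentAlert, error: error) { _ in
            } message: { error in
                Text(error.failureReason ?? "")
            }
    }
}

// Inside ImmersiveView, open the window instead of presenting the alert directly:
// @Environment(\.openWindow) private var openWindow
// .onAppear { openWindow(id: "ErrorWindow") }
//
// And register the scene in the App:
// WindowGroup(id: "ErrorWindow") { ErrorAlertView() }
```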
Replies: 1 · Boosts: 0 · Views: 528 · Nov ’24
Access main camera on Apple Vision Pro
Since visionOS 2.0 we can access the Apple Vision Pro's main camera, but only with an Enterprise account, as it is an enterprise-only API. I have a normal Developer account, and I want to use the main camera to build a video-call feature into my app. Is it possible to do this with a Developer account only? Currently, with that account, I am not able to create the entitlement certificate, as the option does not appear.
Replies: 2 · Boosts: 0 · Views: 569 · Nov ’24
Attachment always user facing
Hello, is there a way to have the attachments of a RealityView always face the user? For example, in a visionOS app, in an immersive space, we have an attachment. When the user walks around the attachment, or rotates the parent entity, we would like the attachment to automatically rotate to face the user. How do we do this? I expected this to be a trivial feature to implement, since I thought I remembered seeing it as a built-in/opt-in option for attachments, but I cannot find that feature. Any and all recommendations are appreciated, thanks.
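If targeting visionOS 2, one low-effort option appears to be RealityKit's BillboardComponent, which keeps an entity oriented toward the viewer. A minimal sketch, with "panel" as a hypothetical attachment id:

```swift
import SwiftUI
import RealityKit

// Sketch: give the attachment entity a BillboardComponent so it keeps
// facing the user as they move around it or as the parent rotates.
struct BillboardedAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "panel") {   // hypothetical id
                panel.components.set(BillboardComponent())
                panel.position = [0, 1.5, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Text("Always facing you")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```

On visionOS 1, where BillboardComponent is unavailable, a custom System that rotates the entity toward the device position each frame is the usual fallback.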
Replies: 2 · Boosts: 0 · Views: 507 · Nov ’24
Regarding real-time object tracking and real-time image recognition
We use real-time object tracking, and with enterprise permissions we can raise the refresh rate to 30 Hz, but there are still noticeable delays. First, we want to know why this delay occurs: is it due to performance considerations? We found that the delay in hand tracking, by contrast, is very low. Second, suspecting the delay might come from the complexity of 3D objects, we tried image tracking instead, but found that image tracking and QR-code tracking have even more serious delays. We hope this can be optimized: currently the recognition frame rate for image tracking appears to be about one frame per second, and we would like it increased, since object recognition and tracking are very smooth on other Apple platforms, such as iOS. Additionally, could appropriate interfaces for depth sensing be considered, so that we can obtain depth data? We want to know what accuracy Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering on the screen, and whether this is related to hardware such as the LiDAR scanner. Also, what accuracy can we achieve when tracking the distance an object has moved?
Replies: 1 · Boosts: 0 · Views: 445 · Nov ’24
Are there other ways to train the model? Training the visionOS tracking model using Create ML on the Mac takes too much time each time.
I am developing for Apple Vision Pro to implement object-tracking functionality, but each model needs to go through Create ML for training, and the training time is very long. Are there other ways to shorten training time while producing reference files in the same format? Additionally, can the delay in object tracking be further optimized? Although the refresh rate has been improved, there is still a noticeable delay.
Replies: 1 · Boosts: 0 · Views: 745 · Nov ’24
Parent is changed during gesture interaction, producing incorrect relative translation values.
I'm having trouble re-setting the position of a child entity on app re-load, even though I appear to be correctly obtaining and persisting the translation values after a drag gesture. The problem occurs when I drag a child element to a new location (persisting the new values) and then reload the app to force re-positioning from the persisted translation values. I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem. My thought process: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we persisting and setting the wrong values?

Project Link: Private

Steps to reproduce:
1. Run the app.
2. Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
3. Tap the button in the timeline to create a new project.
4. Drag the only visible element from the left panel onto the timeline (the element is labeled f_works_entity_1). There should now be a green 3D model added to the stage.
5. Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
6. Re-run the app to see that the green element is offset to a new location, not the last dragged location.
7. To reset and try again, delete the project canvas next to the project name (trash button), then restart the app.

Areas of concern: RealityKitView is the only file you may need. Line 119 is where we create new child entities. Lines 185-219 are where we persist and apply persisted values. You can also search FIXME in the file to see areas of concern. Tip: I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent, including IDs.
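One way to rule the parent change out of the persistence path: convert the dragged position into world (scene) space before saving, then restore with setPosition(_:relativeTo: nil), which ignores whatever parent the entity ends up under after reload. A minimal sketch; UserDefaults and the "savedPosition" key are hypothetical stand-ins for the app's own persistence:

```swift
import Foundation
import RealityKit

// Sketch: persist the dragged entity's position in world space so a parent
// change just before saving can't skew the stored value.
func persistPosition(of entity: Entity) {
    // Convert the entity's origin from its (possibly re-assigned) parent
    // space into world/scene space.
    let world = entity.convert(position: .zero, to: nil)
    UserDefaults.standard.set([Double(world.x), Double(world.y), Double(world.z)],
                              forKey: "savedPosition")
}

func restorePosition(of entity: Entity) {
    guard let stored = UserDefaults.standard.array(forKey: "savedPosition") as? [Double],
          stored.count == 3 else { return }
    // relativeTo: nil applies the position in world space, regardless of
    // which parent the entity was given during re-rendering.
    entity.setPosition(SIMD3<Float>(Float(stored[0]), Float(stored[1]), Float(stored[2])),
                       relativeTo: nil)
}
```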
Replies: 1 · Boosts: 0 · Views: 497 · Nov ’24
Game controller issues in visionOS
Hi, I'm building a windowed game in visionOS that requires a gamepad, and I'm using a PS5 controller for this. However, I found a few problems. First, when the user is looking at the window's close button and presses X on the gamepad, the window is closed; this can happen accidentally while playing. Second, when the PS button (the home button) is pressed, I want my app to handle it, e.g. to take the user to the title screen, but visionOS always captures this input and opens the Home Screen (App Library). How can I avoid these two issues? Or, in general, how can I make gamepad input available only to my app and not to the visionOS system?
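For reference, a minimal sketch of reading the controller directly through the GameController framework. This shows standard in-app input handling only; whether visionOS lets an app intercept the system-reserved home button or look-plus-press behavior is a separate question this sketch does not answer:

```swift
import Foundation
import GameController

// Sketch: observe controller connection and handle buttons in-app. This does
// not (and may not be able to) stop visionOS from also acting on system
// gestures such as look-plus-press on the window close button.
final class GamepadObserver {
    private var token: NSObjectProtocol?

    init() {
        token = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { note in
            guard let controller = note.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }

            // On a DualSense, the cross (X) button maps to buttonA.
            gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
                if pressed { print("X pressed") }
            }
            // buttonHome is optional; the system may consume it before
            // the app ever sees it.
            gamepad.buttonHome?.pressedChangedHandler = { _, _, pressed in
                if pressed { print("PS button pressed") }
            }
        }
    }
}
```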
Replies: 1 · Boosts: 0 · Views: 628 · Nov ’24
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window-jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor is continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
Replies: 0 · Boosts: 0 · Views: 401 · Nov ’24
RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot set topEntity as a child of shapeEntity. In Blender I set the correct origin of topEntity, but when I import the USD model into Reality Composer Pro it does not preserve the origin point, and there is no way to set the origin in Reality Composer Pro.

```swift
DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
```
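A common workaround is to insert a pivot entity at the desired rotation origin (here, shapeEntity's position) and reparent topEntity under it while preserving its world transform, so rotating the pivot rotates topEntity about that point. A sketch; entity names follow the post, and the shared ancestor `container` is an assumption:

```swift
import RealityKit

// Sketch: rotate `topEntity` about `shapeEntity`'s origin via a pivot entity.
// Assumes both entities share a common ancestor (`container`).
func makePivot(topEntity: Entity, shapeEntity: Entity, container: Entity) -> Entity {
    let pivot = Entity()
    // Place the pivot at shapeEntity's position, expressed in container space.
    pivot.position = shapeEntity.convert(position: .zero, to: container)
    container.addChild(pivot)
    // Reparent without visually moving topEntity.
    topEntity.setParent(pivot, preservingWorldTransform: true)
    return pivot
}

// Later, in the drag handler, rotate the pivot instead of topEntity:
// pivot.transform.rotation = simd_quatf(angle: clampedRotation * .pi / 180,
//                                       axis: [0, 0, 1])
```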
Replies: 1 · Boosts: 0 · Views: 399 · Nov ’24
How to use `unproject(_:from:to:ontoPlane:)` of Content of RealityView
When using ARView in RealityKit, I can write code like

```swift
let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any)
```

to get the 3D position of where I tap on a plane. In iOS 18 we can use RealityView, and I found that `unproject(_:from:to:ontoPlane:)` may implement the same function, but I don't know how to set the ontoPlane parameter. Can someone help me with some code snippets?
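A sketch of how the call might look, under two unverified assumptions to check against the current RealityKit documentation: that ontoPlane takes a float4x4 whose X-Z plane, expressed in scene space, defines the plane to intersect (so the identity matrix means the horizontal plane through the origin), and that the iOS 18 RealityView content type is RealityViewCameraContent:

```swift
import SwiftUI
import RealityKit
import simd

// Sketch - ontoPlane semantics and the content type are assumptions, not
// verified against the shipping API. Call from a gesture or update closure
// where the RealityView `content` is in scope.
func pointOnFloor(content: RealityViewCameraContent,
                  tapLocation: CGPoint) -> SIMD3<Float>? {
    let floorPlane = matrix_identity_float4x4   // assumed: X-Z plane at y = 0
    return content.unproject(tapLocation,
                             from: .local,      // the RealityView's 2D space
                             to: .scene,        // RealityKit scene coordinates
                             ontoPlane: floorPlane)
}
```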
Replies: 1 · Boosts: 0 · Views: 569 · Nov ’24
Limited Window Manipulation in Mixed Immersive Style in visionOS
In visionOS 2.1 and 2.2, I’m encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, i.e. the mixed immersive style. Here’s a breakdown of the behavior: In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches. However, in mixed immersive mode, using the exact same layout “inside” a 3D model (which doesn’t visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion. From a usability perspective, this restriction seems unnecessary, as the software should ideally allow similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a house in which the user is placed during the immersion, which is why I refer to the user being “inside” the scene. Has anyone else experienced this or found a workaround?
Replies: 2 · Boosts: 1 · Views: 428 · Nov ’24
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this function, my videoPlayer has no back action in the player, and we did not find any method provided by the system named addChildViewControllerAndView(form). Referencing this document also did not work: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as you add this line of code, there is no back button, only full screen and zoom out:

```swift
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
```
Replies: 8 · Boosts: 0 · Views: 975 · Nov ’24
Need to get the visionOS player's camera forward vector at runtime
How can I access the player’s camera vector in visionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
Replies: 1 · Boosts: 0 · Views: 669 · Nov ’24
How can I access the player’s camera vector in VisionOS, specifically using RealityKit?
How can I access the player’s camera vector in VisionOS, specifically using RealityKit? In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit. Is there a related API for this? Any guidance would be greatly appreciated. Thanks!
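A sketch of one approach on visionOS: query ARKit's WorldTrackingProvider for the DeviceAnchor (the head pose) and take the negated Z column of its transform as the forward vector, since RealityKit's convention is that a camera looks down -Z. This tracks the head rather than a literal render camera:

```swift
import ARKit
import QuartzCore
import simd

// Sketch: derive a "camera forward" vector from the device (head) pose.
@MainActor
final class HeadPoseTracker {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    func forwardVector() -> SIMD3<Float>? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        let m = anchor.originFromAnchorTransform
        // Column 2 of the transform is the device's +Z axis; forward is -Z.
        return -simd_normalize(SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z))
    }
}
```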
Replies: 1 · Boosts: 0 · Views: 513 · Nov ’24
Tracking Multiple Instances of the Same Object on the Apple Vision Pro
Hello everyone, I'm currently working through the example project Exploring object tracking with ARKit to learn how to use object tracking with visionOS. I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track one instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of one object? For example, I have a USDZ model of a box, I trained that box for object tracking, and I'm able to overlay a model of a chair onto the box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that when I wear the Vision Pro I can see the chairs arranged however I want. I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by training one object and having copies of it. If it helps, this is my current modification to overlay a virtual object on top of a detected object:

```swift
func loadReferenceObjects() async -> [ReferenceObject] {
    // Get a list of all reference object files in the app's main bundle and attempt to load each.
    var referenceObjectFiles: [String] = []
    if let resourcesPath = Bundle.main.resourcePath {
        print("resource path: \(resourcesPath)")
        try? referenceObjectFiles = FileManager.default.contentsOfDirectory(atPath: resourcesPath)
            .filter { $0.hasSuffix(".referenceobject") }
    }

    await withTaskGroup(of: Void.self) { group in
        for file in referenceObjectFiles {
            let objectURL = Bundle.main.bundleURL.appending(path: file)
            group.addTask {
                // Get the file name.
                let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                // Load each reference object as a task.
                await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
            }
        }
    }
    return self.referenceObjects
}

// Private helper method to load a single object and assign an entity to it.
private func loadSingleReferenceObject(url: URL, fileName: String) async {
    var referenceObject: ReferenceObject
    do {
        print("Loading reference object from \(url)")
        // Load the file as a `ReferenceObject` - this can take a while for larger objects.
        try await referenceObject = ReferenceObject(from: url)
    } catch {
        fatalError("Failed to load reference object with error \(error)")
    }
    // Add the reference object to the reference objects array.
    self.referenceObjects.append(referenceObject)

    // Entity for each file.
    var model: Entity = Entity()
    // Add the entity according to the file name.
    switch fileName {
    // Box1 reference object binds with Chair1.
    case "Box1":
        do {
            try await model = Entity(named: "chair1", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair1")
        }
    // Box2 reference object binds with Chair2.
    case "Box2":
        do {
            try await model = Entity(named: "chair2", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair2")
        }
    default:
        print("no model associated with this file name: \(fileName)")
    }
    // Map the entity to the reference object.
    usdzPerReferenceObject[referenceObject.id] = model
}
```

Any help or suggestions would be greatly appreciated. Thank you.
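Whether ObjectTrackingProvider ever reports more than one simultaneous anchor for a single reference object is something Apple would need to confirm, but keying the update loop on the anchor identifier (rather than assuming one anchor per reference object) at least makes the overlay side ready for multiple instances. A sketch, with the overlay template supplied by the caller:

```swift
import ARKit
import RealityKit

// Sketch: handle every ObjectAnchor independently, keyed by its id, so that
// if the system reports multiple anchors for one reference object, each
// copy gets its own cloned overlay entity.
@MainActor
final class MultiInstanceOverlays {
    private var overlays: [UUID: Entity] = [:]
    let root = Entity()   // add this to the RealityView content

    func consume(_ provider: ObjectTrackingProvider, template: Entity) async {
        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added:
                let overlay = template.clone(recursive: true)
                overlays[anchor.id] = overlay
                root.addChild(overlay)
                overlay.transform = Transform(matrix: anchor.originFromAnchorTransform)
            case .updated:
                overlays[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
            case .removed:
                overlays[anchor.id]?.removeFromParent()
                overlays[anchor.id] = nil
            }
        }
    }
}
```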
Replies: 1 · Boosts: 0 · Views: 565 · Nov ’24
Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or higher, I have had some problems with my app. Sometimes when I move my eyes over buttons or the TabView (a SwiftUI component many apps use), the hover effect doesn't trigger anymore; I can tap an icon in the TabView, but it doesn't expand when I hover over it to show the button's description. It's weird because it doesn't happen all the time, yet in visionOS 1.0 it doesn't happen at all. My second issue is that I have a navbar as an attachment, and this attachment has a draggable modifier we created. The buttons are not tappable until I drag the navbar a little; this also never happened in visionOS 1.0 but always happens in visionOS 2.0+ (I've tested different versions in the simulator). Is it possible these problems are related to the hand-tracking service we use from ARKit? Sometimes when we stop the trackers, the app works as intended.
Replies: 0 · Boosts: 0 · Views: 438 · Nov ’24