visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

CompositorLayer, Metal and Alert
Hello. I am creating an app for Apple Vision Pro in Metal. I have a CompositorLayer that launches a render thread. How can I detect the system environment alert so that I can pause the thread until the user accepts it? If I don't pause the thread, I have problems with the hand-tracking alert. Thank you.
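
A minimal sketch of one way to approach this, under the assumption that system alerts pause the compositor layer: LayerRenderer exposes a state property, and waitUntilRunning() blocks until the system resumes the layer. The renderFrame function is a hypothetical stand-in for the per-frame work.

import CompositorServices

func renderLoop(_ layerRenderer: LayerRenderer) {
    while true {
        if layerRenderer.state == .invalidated {
            return // layer destroyed; exit the thread
        } else if layerRenderer.state == .paused {
            // Blocks this thread until the system resumes the layer,
            // e.g. after the user dismisses a system alert.
            layerRenderer.waitUntilRunning()
        } else {
            renderFrame(layerRenderer) // hypothetical per-frame work
        }
    }
}
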
Replies: 1 · Boosts: 0 · Views: 63 · Activity: 5h
Screen Sleep Issue When Removing Vision Pro
Hello visionOS developer community, I am currently working on a demo application for visionOS, tailored for the Vision Pro. In my application, I have implemented a feature to prevent the screen from sleeping using the following code:

UIApplication.shared.isIdleTimerDisabled = true

This works perfectly on iOS, ensuring the screen remains active regardless of other system settings. However, I've run into a snag with visionOS: even with isIdleTimerDisabled set to true, the screen still sleeps when I take off the Vision Pro.
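
For reference, a minimal sketch of where the flag is typically set in a SwiftUI app (ContentView is a placeholder). Note the hedge: isIdleTimerDisabled only suppresses idle-timeout sleep; based on the behavior reported above, display sleep on headset removal appears to be a separate system mechanism that this flag does not override.

import SwiftUI
import UIKit

@main
struct DemoApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    // Suppresses idle-timeout sleep only; removal-triggered
                    // sleep on Vision Pro is separate system behavior.
                    UIApplication.shared.isIdleTimerDisabled = true
                }
        }
    }
}
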
Replies: 0 · Boosts: 0 · Views: 24 · Activity: 8h
Object Tracking Training: Objects to avoid
I'm working on training an object-tracking model in Create ML for visionOS. The object has fan blades on it, and I'd like the training to ignore that section of the geometry. Currently, the trained model can detect the object when the fan blades are in the same orientation as when it was scanned, but it struggles once they move. I see there is an "objects to avoid" data source that can be added, but from its description I don't think it does what I need, though I could be wrong. Is there any way to have the training ignore a part of the geometry that has a significant effect on the silhouette of the object?
Replies: 0 · Boosts: 0 · Views: 94 · Activity: 1d
Why is VisionOS Barcode Scanning an Enterprise API?
I'm seeking insight on why the new visionOS barcode-scanning API is categorized as an Enterprise API and restricted to proprietary, in-house apps. I understand Apple's focus on privacy, and I can see how this restriction makes sense for other Enterprise APIs like main camera access and passthrough screen capture. But why is barcode scanning withheld from public apps? What makes barcode scanning more of a privacy risk than the unrestricted APIs for object tracking, image tracking, or hand tracking?
Replies: 1 · Boosts: 0 · Views: 92 · Activity: 1d
Issue entering immersive mode with Metal
Hi. I am developing an immersive application in Metal. The problem is that when the application starts for the first time, it shows two system windows: one asking about the player's surrounding environment and the other requesting hand-tracking permission. The latter has focus, and when I select the option to grant permission, both windows disappear and the transition to the immersive view never occurs. I don't know how to handle this issue.
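
One mitigation to try, sketched under assumptions (ARKitSession.requestAuthorization is a real visionOS ARKit API; whether it resolves this particular transition issue is not confirmed): request the permissions explicitly from a regular window before opening the immersive space, so the dialogs are dismissed before the transition begins. The "ImmersiveSpace" identifier and button label are placeholders.

import ARKit
import SwiftUI

struct StartView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    let session = ARKitSession()

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                // Ask for permissions up front, while still in a window.
                _ = await session.requestAuthorization(
                    for: [.handTracking, .worldSensing])
                await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}
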
Replies: 0 · Boosts: 0 · Views: 90 · Activity: 1d
Geometry recognition and measurement from MeshAnchor
FYI: the source code of the FindSurface demo app for Apple Vision Pro (visionOS) is available now. The Swift package of the FindSurface™ library is required to build the source code into the demo app. https://github.com/CurvSurf/FindSurface-visionOS

After starting the app, floating panels will appear on your right side, and you will see wireframe meshes that approximately describe your environment. Performing a spatial tap (pinching your thumb and index finger) while gazing at a location on the meshes invokes FindSurface, with an indicator (a blue disk) appearing on the surface you gazed at.

Voice commands:
“Tap” – Spatial tap (gazing & pinching); invokes FindSurface.
“Tap plane” – Plane selection.
“Tap sphere” or “Tap ball” – Sphere selection.
“Tap cylinder” – Cylinder selection.
“Tap cone” – Cone selection.
“Tap torus” or “Tap donut” – Torus selection.
“Tap accuracy” or “Tap measurement accuracy” – Accuracy selection.
“Tap mean distance”, “Tap average distance”, or “Tap distance” – Avg. Distance selection.
“Tap touch radius” or “Tap seed radius” – Touch Radius selection.
“Tap inlier” – “Show inlier points” toggle.
“Tap outline” – “Show geometry outline” toggle.
“Tap clear” – “Clear Scene” click.
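
As a hedged illustration of the spatial-tap interaction described above (a generic visionOS pattern, not code from the FindSurface repository; the tapped entities are assumed to carry CollisionComponent and InputTargetComponent):

import SwiftUI
import RealityKit

struct MeshTapView: View {
    var body: some View {
        RealityView { content in
            // Scene setup (entities with CollisionComponent and
            // InputTargetComponent) would go here.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Pinch location on the tapped entity, converted
                    // into the scene's coordinate space.
                    let location = value.convert(
                        value.location3D, from: .local, to: .scene)
                    print("Tapped at \(location)")
                }
        )
    }
}
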
Replies: 2 · Boosts: 0 · Views: 129 · Activity: 9h
3DoF Tracking on Vision Pro
Hello everyone. Vision Pro appears to support 6DoF tracking, but is it possible to switch to 3DoF tracking? The reason for my question is that I would like to use it while riding in a car, where 6DoF tracking does not work well. I was wondering if switching to 3DoF tracking might solve the issue. Note: I am already using Travel Mode.
Replies: 0 · Boosts: 0 · Views: 88 · Activity: 2d
disable ATS
My app needs to send and receive messages from my server, but the server does not have SSL, so I can only disable ATS during development. If I keep ATS disabled when I submit the app, and the server still does not have SSL: will the app still build and package? Will Xcode warn about it or block it? Will it be rejected by App Review? Can it go on the App Store normally and be provided to all users? Note: my server is completely safe, without any security risks; I only skipped SSL because I didn't have enough funds for a certificate.
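
For context, a sketch of the standard Info.plist exception that disables ATS globally. The keys below are the documented ones; note that App Review asks for a justification when NSAllowsArbitraryLoads is set, so whether a blanket exception is accepted is decided case by case.

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Blanket ATS exception; App Review expects a justification. -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
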
Replies: 1 · Boosts: 0 · Views: 71 · Activity: 2d
Question about ARKit Object Tracking Capabilities
Hi everyone, I'm curious about the capabilities of ARKit's object-tracking feature. Specifically, I'd like to know:

Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?

Any insights or examples from your experiences would be greatly appreciated! Thanks in advance.
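
For anyone experimenting with these questions, a minimal hedged sketch of the visionOS object-tracking flow (ObjectTrackingProvider and ReferenceObject are real ARKit types; the "toy.referenceobject" file name is a placeholder for a Create ML-produced reference object):

import ARKit

func trackObject() async throws {
    let session = ARKitSession()

    // Load a reference object produced by Create ML's
    // object-tracking template (placeholder file name).
    guard let url = Bundle.main.url(forResource: "toy",
                                    withExtension: "referenceobject")
    else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // ObjectAnchor exposes the tracked object's transform.
        print(update.anchor.originFromAnchorTransform)
    }
}
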
Replies: 0 · Boosts: 0 · Views: 100 · Activity: 2d
UI Tab Bar
Anyone else get these warnings when using UITabBar in visionOS? Are they detrimental to pushing my visionOS app to the App Review team?

import SwiftUI
import UIKit

struct HomeScreenWrapper: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> UITabBarController {
        let tabBarController = UITabBarController()

        // Home view controller
        let homeVC = UIHostingController(rootView: HomeScreen())
        homeVC.tabBarItem = UITabBarItem(title: "Home", image: UIImage(systemName: "house"), tag: 0)

        // Brands view controller
        let brandsVC = UIHostingController(rootView: BrandSelection())
        brandsVC.tabBarItem = UITabBarItem(title: "Brands", image: UIImage(systemName: "bag"), tag: 1)

        tabBarController.viewControllers = [homeVC, brandsVC]
        return tabBarController
    }

    func updateUIViewController(_ uiViewController: UITabBarController, context: Context) {
        // Update the UI if needed
    }
}

struct HomeScreenWrapper_Previews: PreviewProvider {
    static var previews: some View {
        HomeScreenWrapper()
    }
}
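
A possible alternative worth noting, sketched on the assumption that UIKit wrapping isn't otherwise required: SwiftUI's native TabView renders as a tab-bar ornament on visionOS and avoids the UIViewControllerRepresentable layer entirely (HomeScreen and BrandSelection are the views from the post above).

import SwiftUI

struct RootView: View {
    var body: some View {
        TabView {
            HomeScreen()
                .tabItem { Label("Home", systemImage: "house") }
            BrandSelection()
                .tabItem { Label("Brands", systemImage: "bag") }
        }
    }
}
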
Replies: 1 · Boosts: 0 · Views: 104 · Activity: 1d
Receiving main camera stream
Hello, I received the entitlement for the Enterprise API this week. Despite adding the license file and the entitlement to the project, I can't get any frames from cameraFrameUpdates. Here are the logs of the authorization and of cameraFrameUpdates:

cameraAccess: allowed
CameraFrameUpdates(stream: Swift.AsyncStream<ARKit.CameraFrame>(context: Swift.AsyncStream<ARKit.CameraFrame>._Context))

Could anyone point out what I'm doing wrong in the process?
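
For comparison, a hedged sketch of the main-camera flow as documented for the Enterprise API (CameraFrameProvider and cameraFrameUpdates(for:) are the entry points). One assumption worth checking against the report above: the ARKitSession must be running with the provider before the frame stream yields anything.

import ARKit

func streamMainCamera() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // The session must be running with the provider before frames arrive.
    try await session.run([provider])

    guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first,
          let updates = provider.cameraFrameUpdates(for: format)
    else { return }

    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            // CVPixelBuffer containing the camera image.
            print(sample.pixelBuffer)
        }
    }
}
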
Replies: 0 · Boosts: 0 · Views: 87 · Activity: 4d
GroupSessionJournal attachment loading error on Vision Pro
Hi all, I'm currently working on a SharePlay feature where users pull data from a remote source and can share it in a volumetric window with others on the FaceTime call. However, the group activity/session throws an error on the recipient of the journal's attachment, with the description notSupported. As I understand it, GroupSessionJournal is meant for larger pieces of data like images (as in the Drawing Together example) and, in my case, 3D models.

The current flow goes as follows: the user launches the app and fetches a model from remote. The user can start a SharePlay instance, in which the system captures the volumetric window for users to join and see. At this point, only the original user can see the model. The user can press a button to share this model with the other participants using:

/// modelData is serialized `Data`
try await journal.add(modelData)

In the group session configuration, I already have a task listening for attachments:

for await attachments in journal.attachments {
    for attachment in attachments { ... }
}

This task attempts to load the data via the following code:

let modelData = try await attachment.load(Data.self)
/// this is where the error is thrown: `notSupported`

I expect the attachment.load(Data.self) call to deliver the model data, but instead I receive this error. I have also attempted to wrap the model data in an enclosing struct with name and data properties, conforming the struct to Transferable, but that continued to throw the notSupported error. Is there something I'm doing wrong, or is this simply a bug in GroupSessionJournal? Please let me know if more information is required for debugging and resolution. Thanks!
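
One commonly suggested shape for journal attachments, sketched as an assumption rather than a fix (the post reports that a similar wrapper still failed, so this is only a reference point): a Codable Transferable wrapper using CodableRepresentation.

import CoreTransferable
import Foundation
import UniformTypeIdentifiers

struct ModelAttachment: Codable, Transferable {
    var name: String
    var data: Data

    static var transferRepresentation: some TransferRepresentation {
        // Serializes the whole struct as generic data for transfer.
        CodableRepresentation(contentType: .data)
    }
}

// Hypothetical usage with the journal from the post:
// try await journal.add(ModelAttachment(name: "model", data: modelData))
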
Replies: 0 · Boosts: 0 · Views: 151 · Activity: 5d
Help with Virtual Hands
At the end of the WWDC 2023 session https://developer.apple.com/wwdc23/10111, the presenter talks about implementing virtual hands. However, the code shown in the video is no longer correct with the latest version of visionOS, and the "SpaceGloves" entity referenced in the video is not documented; there is no example project resource to reference either. I have been trying for the last two weeks to implement the same basic example shown in the video, including skinning and rigging my own hand meshes, but I have had significant issues doing so and found no working code or USDZ examples to reference across different internet resources. Is it possible for a working project/code example, including USDZ files for the hand meshes, to be provided as a starting point for building my own virtual hands? Thanks for your help.
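
While that sample remains unavailable, a minimal hedged sketch of the hand-tracking data that virtual hands are typically driven by (HandTrackingProvider and HandSkeleton are real visionOS ARKit types; mapping these joint transforms onto a skinned mesh is exactly the part the session left unspecified):

import ARKit

func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked,
              let skeleton = anchor.handSkeleton else { continue }

        // World-space transform of the index fingertip:
        // world origin -> hand anchor -> joint.
        let joint = skeleton.joint(.indexFingerTip)
        let worldTransform = anchor.originFromAnchorTransform
            * joint.anchorFromJointTransform
        print(anchor.chirality, worldTransform)
    }
}
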
Replies: 0 · Boosts: 3 · Views: 180 · Activity: 5d
Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and I have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging; more on that later). I'm wondering if there are best practices for this workflow, from Blender to USD to Reality Composer Pro 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:

I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with six built-in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

When exporting to USD from Blender, I haven't been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:

import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export the current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame
    # Export the scene as a USD file
    bpy.ops.wm.usd_export(filepath=output_path, export_animation=True)

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")

I have also been able to successfully export rigging armatures as a single skeleton, with each "bone" getting imported into Reality Composer Pro 2.0 when exporting/importing manually. I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS, so I have placed all the animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations. I apply materials/textures to each with Shader Graph so they appear the same. Then in SwiftUI I use notifications (as shown here: https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.

Two questions:

Is there a better way than having multiple models of the same skeleton, each with a different animation, in one scene in order to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes from within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to define custom components in RCP and use IKRig, defining the movement of each "bone" in Xcode? I'm looking for any tips, tricks, or workflow from experienced engineers or 3D artists that would make a more efficient, optimized pipeline using Blender, USD, RCP 2, and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
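
On the playback side, a hedged sketch of triggering animations on a loaded entity in RealityKit (availableAnimations and playAnimation are real APIs; the "Walk" file name is a placeholder). With one animation per USDZ, as in the script above, each file's single clip appears in availableAnimations:

import SwiftUI
import RealityKit

struct SkeletonView: View {
    var body: some View {
        RealityView { content in
            // Placeholder name; one exported USDZ per animation.
            if let entity = try? await Entity(named: "Walk") {
                content.add(entity)
                // Play the clip the USDZ carries, looping it.
                if let clip = entity.availableAnimations.first {
                    entity.playAnimation(clip.repeat(),
                                         transitionDuration: 0.3)
                }
            }
        }
    }
}
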
Replies: 0 · Boosts: 0 · Views: 124 · Activity: 6d
Apple Vision Pro app stops working after a while
Hello! I have developed an application using Unity that runs on the Vision Pro. I built it correctly, installed it on the device, and it works. The spatial computing application continues to work for several days: I launch the app and it runs (it doesn't use any external services). But after several weeks, a month or so, I launch the same app again and it no longer works. The only way I can make it work again is to rebuild and reinstall it. What am I missing here? Why does an application built and installed a few weeks ago suddenly stop working on the Vision Pro?
Replies: 2 · Boosts: 0 · Views: 107 · Activity: 6d
The app “SGKJ_3DRealisticModel” has been killed by the operating system because it is using too much memory.
Hello! I encountered a problem when developing a project for visionOS. I placed a model file of 294.8 MB in the project and tried to find and load it using the following code:

FileManager.default.contentsOfDirectory(atPath: resourcesPath).filter { $0.hasSuffix(".usdz") }

However, when the program loaded the model, the system shut it down, reporting that the app exceeded its memory limit. I would like to ask how to solve this problem?
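
A hedged sketch of loading one large USDZ at a time from a file URL with RealityKit's async initializer (the path handling mirrors the code above; the function name is a placeholder). One caveat stated as an assumption: a ~295 MB source file can expand to far more memory once meshes and textures are decoded, so memory pressure may persist regardless of how the file is loaded.

import Foundation
import RealityKit

func loadFirstModel(from resourcesPath: String) async throws -> Entity? {
    // List only the .usdz files in the resources directory.
    let files = try FileManager.default
        .contentsOfDirectory(atPath: resourcesPath)
        .filter { $0.hasSuffix(".usdz") }

    guard let first = files.first else { return nil }
    let url = URL(fileURLWithPath: resourcesPath)
        .appendingPathComponent(first)

    // Async initializer: loads without blocking the main thread,
    // and loading one model at a time limits peak memory.
    return try await Entity(contentsOf: url)
}
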
Replies: 0 · Boosts: 0 · Views: 61 · Activity: 6d