In WWDC24, visionOS hand tracking gained a new option that lets an entity track the hand faster (at the expense of some accuracy), but the session video only explains the ARKit implementation. Could someone explain how to achieve this with an AnchorEntity in a RealityView?
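A minimal sketch of the RealityView side, assuming the feature in question is the new trackingMode: parameter on AnchorEntity in visionOS 2, where .predicted trades a little accuracy for lower latency:

import SwiftUI
import RealityKit

struct HandAnchoredView: View {
    var body: some View {
        RealityView { content in
            // Anchor an entity to the left palm. `.predicted` asks RealityKit to
            // extrapolate the hand pose so the entity follows with less latency.
            let handAnchor = AnchorEntity(
                .hand(.left, location: .palm),
                trackingMode: .predicted
            )
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.02),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            handAnchor.addChild(sphere)
            content.add(handAnchor)
        }
    }
}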
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
When I first install and run the app, it requests authorization for hand-tracking data. But if I then go to Settings and disable hand tracking for the app, it never asks again; requestAuthorization(for:) just returns [handTracking: denied].
Any idea why the permission prompt only shows up once and then never again?
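This matches how authorization generally works on Apple platforms: the system prompt appears only while the status is still undetermined; once hand tracking has been denied (including via the Settings toggle), requestAuthorization(for:) just returns the stored decision. A sketch of how you might branch on it, assuming the visionOS ARKitSession API:

import ARKit

// Sketch: query the current status first; the system prompt can only appear while
// the status is still undetermined.
func ensureHandTrackingAuthorization(session: ARKitSession) async -> Bool {
    let current = await session.queryAuthorization(for: [.handTracking])
    switch current[.handTracking] {
    case .allowed:
        return true
    case .notDetermined:
        // This is the only path that can show the authorization prompt.
        let result = await session.requestAuthorization(for: [.handTracking])
        return result[.handTracking] == .allowed
    default:
        // Previously denied (for example, toggled off in Settings): the prompt will
        // not be shown again; guide the user to the app's page in Settings instead.
        return false
    }
}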
While trying to control the following two scenes in one ImmersiveSpace, we found a memory leak when we background the app while a stereoscopic video is playing.
ImmersiveView's two scenes:
Scene 1 has 1 toggle button
Scene 2 has the same toggle button plus a 180-degree skysphere playing a stereoscopic video
Attached are the files and images of the memory leak as captured in Xcode.
To replicate this memory leak, follow these steps:
1. Create a new visionOS app using the Xcode template, as illustrated below.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video with a large 3D 180-degree video of your own, bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display memory use by pressing Command+7 and selecting Memory to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it.
11. The first immersive space should appear.
12. Repeat steps 7, 8, 9, and 10 multiple times.
13. Observe the memory use going up; the graph should look similar to the illustration below.
In ImmersiveView, upon backgrounding the app, I:
call a reset method to clear the video's memory
dismiss the immersive space containing the video (even though visionOS then raises the purple warning "Unable to dismiss an Immersive Space since none is opened"; it appears visionOS dismisses any ImmersiveSpace upon backgrounding anyway, which makes sense)
Am I not releasing the memory correctly?
Or, is there really a memory leak in either SwiftUI's ImmersiveSpace or AVFoundation's AVPlayer when an app is backgrounded?
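One way to make sure AVPlayer resources are actually released on background is to tear down both the player item and the entity referencing the video material in the reset path. A rough sketch of that kind of cleanup (hypothetical names, not the attached files):

import AVFoundation
import RealityKit

// Hypothetical reset helper: stop playback, detach the item, and drop the
// skysphere entity so RealityKit and AVFoundation can release their buffers.
final class ImmersiveVideoModel {
    var player: AVPlayer? = AVPlayer()
    var skysphere: ModelEntity?

    func reset() {
        player?.pause()
        player?.replaceCurrentItem(with: nil)   // releases the AVPlayerItem and its buffers
        player = nil
        skysphere?.removeFromParent()           // drops the VideoMaterial's reference
        skysphere = nil
    }
}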
App file TestVideoLeakOneImmersiveView
First ImmersiveSpace file InitialImmersiveView
Second ImmersiveSpace File ImmersiveView
Skysphere Model File Immersive180VideoViewModel
File AppModel
I have been experimenting with some experiences in which I would like to use SharePlay so the app can be used by multiple users.
So far I have managed to share a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation.
I have been able to correctly share the volume and its content, with the animation playing without problems, but once I activate SharePlay, different users see different moments of the animation, or no animation at all.
Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time once someone joins?
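As far as I know there is no built-in animation sync for RealityKit entities in a SharePlay session, so some message exchange is needed. One lightweight pattern is to agree on a shared start date once and let every participant, including late joiners, derive the current animation time from it locally, instead of sending the current time to each new participant. A rough sketch with GroupSessionMessenger (the message type and helper names are hypothetical; the host would re-send the start date when session.$activeParticipants changes, since messages are not replayed to late joiners):

import Foundation
import GroupActivities
import RealityKit

// Hypothetical message: the wall-clock date at which the shared animation started.
struct AnimationSyncMessage: Codable, Sendable {
    var startDate: Date
}

// Whoever begins playback announces the start date to the group.
func startSharedAnimation<A: GroupActivity>(session: GroupSession<A>,
                                            entity: Entity,
                                            animation: AnimationResource) async {
    let messenger = GroupSessionMessenger(session: session)
    try? await messenger.send(AnimationSyncMessage(startDate: Date()))
    entity.playAnimation(animation.repeat())
}

// Every participant (including late joiners) listens and seeks to the shared time.
func listenForSharedAnimation<A: GroupActivity>(session: GroupSession<A>,
                                                entity: Entity,
                                                animation: AnimationResource) {
    let messenger = GroupSessionMessenger(session: session)
    Task {
        for await (message, _) in messenger.messages(of: AnimationSyncMessage.self) {
            let elapsed = Date().timeIntervalSince(message.startDate)
            let controller = entity.playAnimation(animation.repeat())
            controller.time = elapsed   // jump to the moment everyone else is at
        }
    }
}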
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Group Activities
Reality Composer Pro
visionOS
I’m currently working on a project where I need to display a transparent GIF at a specific 3D point within an AR environment. I’ve tried various methods, including using RealityKit with UnlitMaterial and TextureResource, but the GIF still appears with a black background instead of being transparent.
Does anyone have experience with displaying animated transparent GIFs on an AR plane or point? Any guidance on how to achieve true transparency for GIFs in ARKit or RealityKit would be appreciated.
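The black background usually means the material is not actually blending the texture's alpha. A sketch of one approach, assuming RealityKit's UnlitMaterial and ImageIO for decoding (the plane size is arbitrary, and animating the GIF would mean repeating this per frame and swapping the texture on a timer):

import ImageIO
import RealityKit
import UIKit

// Sketch: decode a GIF frame with ImageIO (which preserves its alpha channel),
// then put it on an UnlitMaterial configured to blend transparency.
func makeTransparentGIFPlane(gifURL: URL) throws -> ModelEntity {
    guard let source = CGImageSourceCreateWithURL(gifURL as CFURL, nil),
          let frame = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw CocoaError(.fileReadCorruptFile)
    }

    let texture = try TextureResource.generate(from: frame,
                                                options: .init(semantic: .color))

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    // Without blending, the alpha channel is ignored and the background renders black.
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))

    return ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.3),
                       materials: [material])
}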
To set the stage:
I made a prototype of an app for a company; the app is to be used internally for now. The prototype runs perfectly on iOS, so I got a Vision Pro to port the app to its final destination. The first thing I found is that image tracking on Vision Pro is useless for moving images (and that's the core of my app). Also, the distance at which an image is lost seems to be much shorter on Vision Pro. Now I'm trying to figure out whether it's possible to fix or work around this in any way, and I'm wondering if the Enterprise APIs would change anything.
So:
Is it possible to request Enterprise API access as a single person with basic Apple Developer subscription? I looked around the forum and only got more confused.
Does QR code detection and tracking work any better than image detection, or are the anchor updates the same?
Does the increased "object detection" frequency affect image/QR tracking in any way, or is it (as the name implies) only for object tracking?
Would increasing the CPU/GPU headroom make any change to image/QR detection frequency?
Is there something to disable to make anchor updates more frequent? I don't need complex models, shadows, physics, etc.
Greetings
Michal
Hi guys, I'm currently developing a game for the Vision Pro and I'm trying to figure out how hand tracking works so I can make a superpower appear when the user looks at their hand and opens it wide. But I'm really struggling to wrap my head around the whole concept and how to implement it in my code.
Is there anything out there (other than the Apple docs), or anyone who could help shed some light on the whole idea and how I could actually implement it?
It would be much appreciated.
Thanks
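A rough sketch of one way to detect an "open hand" with ARKit hand tracking on visionOS: measure the spread between the thumb tip and little-finger tip. The 0.12 m threshold is a made-up value to tune for your game, and this omits the "looking at the hand" part, which you would add with your own gaze or head-pose check.

import ARKit
import simd

func watchForOpenHand() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard hand.isTracked, let skeleton = hand.handSkeleton else { continue }

        let thumb = skeleton.joint(.thumbTip)
        let little = skeleton.joint(.littleFingerTip)
        guard thumb.isTracked, little.isTracked else { continue }

        // Joint transforms are relative to the hand anchor; positions are in meters.
        let thumbPos = thumb.anchorFromJointTransform.columns.3
        let littlePos = little.anchorFromJointTransform.columns.3
        let spread = simd_distance(SIMD3(thumbPos.x, thumbPos.y, thumbPos.z),
                                   SIMD3(littlePos.x, littlePos.y, littlePos.z))

        if spread > 0.12 {
            // Hand looks open: trigger the superpower effect here.
        }
    }
}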
Topic:
Spatial Computing
SubTopic:
ARKit
I"m trying to create a simple app for my students that will display .heic images taken with a nikon and them converted to .heic in the photos app. My attempts only result in the QuickLook viewer showing the images in 2d. Any guidance? Here is my ContentView:
import SwiftUI
import QuickLook

struct ContentView: View {
    @State private var showQuickLook = false
    @State private var previewURL: URL? = nil // State to store the URL for Quick Look

    var body: some View {
        VStack {
            Button("See it in 3D") {
                // Set the URL for the file from the bundle and toggle Quick Look presentation
                if let imageURL = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") {
                    previewURL = imageURL // Set the preview URL if the image is found
                    showQuickLook.toggle() // Toggle to trigger Quick Look presentation
                } else {
                    print("File not found") // Print error if the file is missing
                }
            }
            .quickLookPreview($previewURL) // Binding to the URL
        }
    }
}

#Preview {
    ContentView()
}
A popover presented from a view that is used as an attachment is displayed properly in preview mode in the canvas, but not at runtime. I was wondering whether it is supported at all.
We are currently using Apple's Object Capture module and wonder whether it would be possible to collect the following data:
Device information
Current translation / rotation
Focal length embedded in the image headers
GPS location information
Information about the exposure time
White balances and the color correction matrices
We also have 2 additional questions :
Is there an option to block close-up accommodation of the camera?
Is there a way for the Object Capture module to take a video instead of a series of pictures?
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
Because of this restriction, I can't use Metal to generate images and, at the same time, use Swift to add UI controls or RealityKit content (without using a window) in immersive mode.
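Not a fix for the sample itself, but one way to sidestep this assertion is to check the device's read-write texture tier and fall back to a pixel format the current device (or the simulator's device) supports. The simulator may report a lower tier than a physical Vision Pro, which could be why the sample's format trips the assertion there. A rough sketch:

import Metal

// Sketch: pick a read-write-capable pixel format based on the device's tier.
func makeComputeTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    // Tier 1 supports read-write access for r32Float, r32Uint, and r32Sint;
    // Tier 2 adds rgba8Unorm, rgba32Float, and others.
    let format: MTLPixelFormat =
        device.readWriteTextureSupport == .tier2 ? .rgba8Unorm : .r32Float

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: format, width: width, height: height, mipmapped: false)
    descriptor.usage = [.shaderRead, .shaderWrite]
    return device.makeTexture(descriptor: descriptor)
}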
Compilation of the project for the WWDC 2024 session titled "Compose interactive 3D content in Reality Composer Pro" fails.
After applying the fix mentioned here (https://developer.apple.com/forums/thread/762030?login=true), the project still won't compile.
Using Xcode 16 beta 7, I get these errors:
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
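The errors suggest the RealityKitContent package still declares visionOS 1.0 as its minimum platform, while EnvironmentLightingConfiguration, AudioLibrary, and BlendShapeWeights require visionOS 2.0. Assuming the project's structure matches the sample, bumping the platform in that package's Package.swift addresses this class of compatibility fault; a sketch (adjust names to the sample):

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)   // was .v1; the listed components require visionOS 2.0
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)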
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS
There is flickering and slight dimming occurring specifically on the skysphere, at the initial load of the scene, when using an Attachment. This is observed in the simulator and on the real device.
Since we cannot upload a video illustrating the undesirable behaviour, I have to describe how to set up the project for you to observe it.
To replicate the issue, follow these steps:
Create a new visionOS app using the Xcode template, see image.
Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
Replace all Swift files with those you will find in the attached texts.
Add the skysphere image asset Skydome_8k found in the Apple sample app Presenting an artist's scene.
Launch the app in debug mode via Xcode onto the AVP device or simulator.
Repeatedly open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
Observe the skysphere flicker and dim when it is displayed.
The current workaround is commented out in the file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines, and the flickering and dimming do not occur.
Are we using attachments wrongly?
Is this behavior known and documented?
Or, is there really a bug in visionOS?
AppModel
InitialImmersiveView
MainImmersiveView
TestSkysphereAttachmentFlickerApp
ThreeSixtySkysphereRealityView
I have created a portal and attached it to a wall using an AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. I initially attempted to find the relevant information in the demo code, but I had difficulty understanding certain sections. I would appreciate a step-by-step explanation or a pointer to the appropriate code. Thank you for your assistance.
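One possible approach, sketched below under the assumption that you can run ARKit plane detection (this requires world-sensing authorization and an immersive space): read the detected wall's extent and size the portal's plane to match. Exact placement may also need the extent's anchorFromExtentTransform, which this sketch omits.

import ARKit
import RealityKit

// Sketch: use plane detection to read a wall's extent, then resize the portal plane.
func trackWallSize(portal: ModelEntity) async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .wall else { continue }

        // Extent of the detected wall, in meters.
        let width = anchor.geometry.extent.width
        let height = anchor.geometry.extent.height

        // Resize the portal's plane mesh to fill the wall and move it onto the wall.
        portal.model?.mesh = .generatePlane(width: width, height: height)
        portal.transform.matrix = anchor.originFromAnchorTransform
    }
}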
In a visionOS app, I want to detect whether the user is in a room. My idea is as follows:
Check whether there are walls around the user
May I ask how to do it? Thanks!
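One possibility, assuming visionOS 2's ARKit room tracking and world-sensing authorization: the system reports a RoomAnchor for the room the user is currently in, which is a more direct signal than counting wall planes yourself (the wall idea would otherwise work with PlaneDetectionProvider filtered to .wall-classified anchors). A sketch:

import ARKit

func observeCurrentRoom() async throws {
    let session = ARKitSession()
    let roomTracking = RoomTrackingProvider()
    try await session.run([roomTracking])

    for await update in roomTracking.anchorUpdates {
        let room = update.anchor
        if room.isCurrentRoom {
            // The user is inside this room; room.geometry describes its bounds.
            print("User is in room \(room.id)")
        }
    }
}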
I just followed the documentation at https://developer.apple.com/documentation/shadergraph/realitykit/cube-image-(realitykit)
I have a .ktx file and use the Cube Image node to load it, then a Convert node, but the result shows black.
I checked it on my Vision Pro and it's still black. I don't know why; is something wrong?
PS: I also used the Image node to load the .ktx file, and it shows one image, so I believe the .ktx file is fine. I also checked that on the Vision Pro.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the Home Screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps
Open app
Open window via the push action
Press the Digital Crown
On the Home Screen, select the app's icon again
The pushed window will now be dismissed.
There is a sample project linked here that shows off the issue, including a video of the bug in progress
I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, like when someone moves too quickly or gets too close to a physical object. The content in front of them dims briefly to allow a clearer view of their surroundings. I'd like to know the specific distance at which the system begins to show the physical object, or what criteria are used for this adjustment.
Information is light on the new subdivision support for USD models in RealityKit, and I have been unable so far to get one of my models to actually subdivide within Reality Composer Pro or Quick Look (or when viewing on Vision Pro).
I've exported a few test models from Houdini and verified that they contain 'uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes.
But when viewing them, they simply look like polygonal meshes; there is no subdividing occurring at runtime.
Is there a trick to getting them to actually smooth-out?
Topic:
Spatial Computing
SubTopic:
General