How can I access the player’s camera vector in visionOS, specifically using RealityKit?
In Unity and other game engines, there’s often an API like Camera.main.transform.forward for this purpose. I’ve found the head anchor but haven’t identified a way to obtain the forward vector in RealityKit.
Is there a related API for this? Any guidance would be greatly appreciated.
Thanks!
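A minimal sketch of one approach, assuming this runs inside an ImmersiveSpace where ARKit data providers are allowed: query the device (head) anchor through WorldTrackingProvider and take the negated Z column of its transform as the forward vector (treat the sign convention as an assumption to verify):
import ARKit
import QuartzCore
import RealityKit
import simd

// Assumption: an ImmersiveSpace is open so the world-tracking provider can run.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Query the current head (device) pose and derive a forward vector from it.
func deviceForwardVector() -> SIMD3<Float>? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = deviceAnchor.originFromAnchorTransform
    // Cameras look down -Z, so the forward vector is the negated third column of the transform.
    let forward = -SIMD3<Float>(transform.columns.2.x, transform.columns.2.y, transform.columns.2.z)
    return simd_normalize(forward)
}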
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app with the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I connect USB to the battery, it only shows a Battery device ID.
Topic: Spatial Computing | SubTopic: General
Hi, are we allowed to raise the platform support in Package.swift up to iOS 18 to allow for the latest APIs?
And with the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
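For reference, a minimal sketch of how the platform requirement is typically declared in Package.swift; the package and target names here are placeholders:
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MyChallengeSubmission",        // hypothetical package name
    platforms: [
        .iOS(.v18)                        // raise the minimum deployment target to iOS 18
    ],
    targets: [
        .target(name: "MyChallengeSubmission")
    ]
)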
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":
private var realityView: RealityView!
as well as on this line, with the same error message:
private func setupPanoramaScene(for content: RealityView.Content)
What should I put as an argument for RealityView? It doesn't work without arguments either.
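In case it helps, RealityView is a SwiftUI view that you construct with a make closure rather than declare as a stored property; a minimal sketch (the panorama entity here is a placeholder):
import SwiftUI
import RealityKit

struct PanoramaView: View {
    var body: some View {
        // RealityView is built with a `make` closure that receives its content.
        RealityView { content in
            // Build the panorama sphere here and add it to the content.
            let panoramaEntity = Entity()   // placeholder for your textured sphere
            content.add(panoramaEntity)
        } update: { content in
            // Optional: react to state changes here.
        }
    }
}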
Topic: Spatial Computing | SubTopic: ARKit
Hello everyone,
I'm currently working through this example project Exploring object tracking with ARKit to learn how to use Object Tracking with visionOS.
I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track 1 instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of 1 object?
For example, I have a USDZ model of a box and trained that box for object tracking, so I'm able to overlay a model of a chair over that box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that when I wear the Vision Pro, I can see the chairs arranged however I want.
I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by just training one object and having copies of it.
If it helps, this is my current modification to overlay a virtual object on top of a detected object.
func loadReferenceObjects() async -> [ReferenceObject] {
    // Get a list of all reference object files in the app's main bundle and attempt to load each.
    var referenceObjectFiles: [String] = []
    if let resourcesPath = Bundle.main.resourcePath {
        print("resource path: \(resourcesPath)")
        referenceObjectFiles = (try? FileManager.default
            .contentsOfDirectory(atPath: resourcesPath)
            .filter { $0.hasSuffix(".referenceobject") }) ?? []
    }

    await withTaskGroup(of: Void.self) { group in
        for file in referenceObjectFiles {
            let objectURL = Bundle.main.bundleURL.appending(path: file)
            group.addTask {
                // Strip the extension to get the plain file name.
                let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                // Load each reference object in its own task.
                await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
            }
        }
    }
    return self.referenceObjects
}

// Private helper method to load a single reference object
// and assign an entity to it.
private func loadSingleReferenceObject(url: URL, fileName: String) async {
    let referenceObject: ReferenceObject
    do {
        print("Loading reference object from \(url)")
        // Load the file as a `ReferenceObject` - this can take a while for larger objects.
        referenceObject = try await ReferenceObject(from: url)
    } catch {
        fatalError("Failed to load reference object with error \(error)")
    }
    // Add the reference object to the reference objects array.
    self.referenceObjects.append(referenceObject)

    // Entity associated with this file.
    var model = Entity()
    // Choose the entity according to the file name.
    switch fileName {
    case "Box1":
        // Box1 reference object binds to Chair1.
        do {
            model = try await Entity(named: "chair1", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair1")
        }
    case "Box2":
        // Box2 reference object binds to Chair2.
        do {
            model = try await Entity(named: "chair2", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair2")
        }
    default:
        print("no model associated with this file name: \(fileName)")
    }
    // Map the entity to the reference object.
    usdzPerReferenceObject[referenceObject.id] = model
}
Any help or suggestions would be greatly appreciated.
Thank you.
Hello Apple Team,
I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters).
When enabling people occlusion, the 3D asset gets clipped when moved far away.
Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping?
I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously.
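For context, a minimal sketch of how people occlusion is typically enabled on iOS with an ARView; it is a frame semantic on the session configuration, so it applies scene-wide rather than per entity:
import ARKit
import RealityKit

func enablePeopleOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    // People occlusion is a frame semantic of the session configuration,
    // so it currently affects everything the session renders.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    arView.session.run(configuration)
}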
Thank you for your help!
Kind regards
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since the beta (late 2022) and live launch (late 2023) until now. But recently we have had reports from some users on Max-model phones (at least the 14 Pro Max and 15 Pro Max). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users, the controls along the top of the player, the X (close) button, the AR | Object toggle, and the share button, move too high up the phone screen and get stuck (untappable) behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view, they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why it happens only when opening a USDZ file from our app. So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested other 15 Pro Max devices on the same OS, as well as Pros, regular iPhones, and minis, and they are not experiencing the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move when you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
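For reference, the in-app launch goes through the standard Quick Look presentation; a rough sketch of that pattern (the class and file URL below are placeholders, not our exact code):
import QuickLook
import UIKit

final class USDZPreviewLauncher: NSObject, QLPreviewControllerDataSource {
    // Placeholder: a local file URL to a downloaded .usdz file.
    private let fileURL: URL

    init(fileURL: URL) {
        self.fileURL = fileURL
    }

    func present(from presenter: UIViewController) {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        presenter.present(previewController, animated: true)
    }

    // MARK: QLPreviewControllerDataSource
    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        fileURL as QLPreviewItem
    }
}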
Hello,
After watching the Work with Reality Composer Pro content in Xcode, I created the following custom component.
public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}
I registered the custom component in the App.init function, as suggested:
init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in the RealityView, it causes:
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but trying to display it in the RealityView crashes the app immediately. Is there a way to fix this?
Currently I want to recreate a window similar to the system window inside an ImmersiveSpace, but we can only use meters in RealityKit. I created a plane entity, and I don't know what size in meters to give it so that the plane's size is fully consistent with the system window.
Also, I want to know the z and y position of the system window in the immersive space.
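A minimal sketch of one possible approach, assuming the SwiftUI physicalMetrics environment converter and GeometryReader3D are available on visionOS; treat the exact convert call as an assumption to verify:
import SwiftUI

struct WindowSizeReader: View {
    // Converts between SwiftUI points and physical units on visionOS.
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        GeometryReader3D { proxy in
            // Assumption: converting the proxy's size to meters yields the physical
            // dimensions of the window, which can then be used to size a plane entity.
            let sizeInMeters = physicalMetrics.convert(proxy.size, to: .meters)
            Text("~\(sizeInMeters.width, specifier: "%.2f") m wide")
        }
    }
}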
I am new to the graph editor and was able to achieve some results. However, I am noticing that my graphs are getting very tangled, confusing, and hard to debug. I was wondering:
Is it possible to define variables to store the value of computations and refer to them in other parts of the graph, without having to link them graphically? This would help in tidying the tangled mess I created. In the "Explore materials in Reality Composer Pro" video, I saw that it is possible to create "instances", but I am not sure if that is what I need. For example, does the shader compiler optimize them so that there is no need to recompute each instance?
Is there any functionality to debug the graph, trying inputs and seeing what the numeric outputs would be?
What would be the pros and cons of using Unreal Engine on Apple Vision Pro?
What issues will Unreal face on Apple Vision Pro in the future?
What specifications are required for Unreal to work on Vision Pro?
Today I updated my MacBook Pro to macOS Sequoia. With it, I also downloaded the latest Xcode and visionOS 2 packages.
I had a working project that ran on my Vision Pro, which is updated to the latest visionOS 2. But now, whenever I click on Preview in Xcode while editing a Swift file, I receive the following error:
(lot of lines here)
Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda
error: Tool terminated by signal 'Segmentation fault: 11'
I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this?
Thank you!
Topic: Spatial Computing | SubTopic: General | Tags: Xcode Sanitizers and Runtime Issues, RealityKit, visionOS
In visionOS, once an immersive space is opened, the background color is solid black. Is it possible to make this background transparent?
FYI, the immersive space on visionOS uses Compositor Services for drawing the 3D content.
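A minimal sketch of the SwiftUI piece that usually controls this: the immersion style decides whether passthrough stays visible (.mixed) or the surroundings are fully replaced (.full). Whether this alone is sufficient when the content is drawn with Compositor Services is an assumption to verify:
import SwiftUI
import RealityKit

@main
struct ExampleImmersiveApp: App {          // hypothetical app name
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                // Add entities here; with .mixed immersion the background is passthrough,
                // not solid black.
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}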
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, but eventually visionOS as well. I'm challenged because much of the recent documentation, WWDC videos, etc., covers features that are visionOS only.
Right now, I would simply like to create some gesture functionality similar to the AR Quick Look defaults, meaning drag to reposition and two fingers to rotate or zoom. In the past, this would be implemented with something like:
arView.installGestures([.all], for: entity)
However, with RealityView I don't know how (or whether it is possible) to access an ARView.
In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
However, many of the features in that article are visionOS only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS.
I know reverting to an ARView is an option, but I want to use RealityView if at all possible as I see it as more forward-looking.
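A minimal sketch of the SwiftUI-gesture route that RealityView pushes you toward; the targeted entity gestures are documented for visionOS and, as I understand it, are also available on iOS 18+ with RealityView (treat the availability and the parent-space conversion as assumptions):
import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            // Placeholder entity; it needs collision shapes and an InputTargetComponent
            // so SwiftUI gestures can hit-test it.
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            box.generateCollisionShapes(recursive: true)
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location into the entity's parent space
                    // and move the entity there (assumes the entity has a parent).
                    let location = value.convert(value.location3D,
                                                 from: .local,
                                                 to: value.entity.parent!)
                    value.entity.position = location
                }
        )
    }
}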
Topic: Spatial Computing | SubTopic: General
My visionOS app requires access to the user's photos. The trigger mechanism is: when the user first opens a FooView, a task attached to that FooView calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so that visionOS presents a window requesting photo access.
However, the app crashes every time the user selects Limited Access and the system tries to present the limited photo library picker. By the way, I have set Prevent limited photos access alert to Yes, but I don't think that should affect the behavior here.
There was a debugger message:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
However, the window this view belongs to is a .plain style window (though there are 3D objects appearing in the other view of the same WindowGroup).
This is my code snippet, if it helps:
checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):
private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}
FooView then attaches a task that calls checkAndUpdatePhotoAuthorization():
var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}
Another thing worth mentioning is that SOMETIMES it won't crash when running a debug build, but it does crash in TF (TestFlight) builds.
Any other ideas? Big thanks in advance.
Xcode version: 16.2 beta 3
visionOS version: 2.2
How can I detect whether two entities collide in a RealityView and execute some instructions after the collision? It is assumed that both entities have collision components, and that one of them also has an anchor component (for example, it is bound to a hand).
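A minimal sketch of the usual pattern, assuming the subscription is created in the RealityView make closure and kept alive; the entities shown are placeholders, and whether collisions involving an anchored (hand-bound) entity are reported can depend on the physics setup, which I would verify:
import SwiftUI
import RealityKit

struct CollisionDemoView: View {
    // Keep the subscription alive for as long as the view needs it.
    @State private var collisionSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // Placeholder entity; both colliding entities need CollisionComponent.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)

            // Subscribe to collision-began events anywhere in the scene.
            collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: nil) { event in
                print("\(event.entityA.name) collided with \(event.entityB.name)")
                // Run your post-collision instructions here.
            }
        }
    }
}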
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed results, and the parameters used.
After running computeDiffuseReflectionUVs.py and mapping the textures and the reflection diffuse to objects, I noticed that the reflection diffuse does not produce any color.
The expected result is shown below: the diffused light has color.
Topic: Spatial Computing | SubTopic: Reality Composer Pro | Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
Is it possible to use a local Wi-Fi router to connect the Vision Pro and a Mac for development? I tried from both Unity and Xcode.
From Unity, the host app wouldn't open without Wi-Fi (an internet connection).
From Xcode, I can see the Vision Pro paired, but when I try to run, there's no device listed.
Any suggestions? Thanks a lot, /Ruiying
Topic: Spatial Computing | SubTopic: General
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.
let particleEmitterEntities = context.entities(matching: particleEmitterQuery,
                                               updatingSystemWhen: .rendering)
for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        // Trying to get the world-space position of the hand.
        // I also tried making it relative to particleEmitterEntity.
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}
The particle attraction center doesn't seem to update.
Another issue I am noticing is that my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center.
What is the right way to create an effect where the particles are attracted to my hand?
I’m currently using the RealityKit/ObjectCaptureSession API to develop my app, and I’ve noticed that Apple’s official Reality Composer app also uses the same API. However, both my app and the Reality Composer app crash if the device doesn’t have enough storage space (approximately 4 GB free). Here is the debug log I’m seeing:
Insufficient storage: required 4000000000
Switch to error state. Got error = insufficientStorage(requiredBytes: 4000000000)
fromState == toState so punting transition! from=disabled toState=disabled
Punting transition since states match: disabled
Got error starting session! insufficientStorage(requiredBytes: 4000000000)
I would like to request:
A fix for the crash in the official Reality Composer app.
Guidance on how to properly handle this crash or error when using the ObjectCaptureSession API in my own app.
Thank you!
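In the meantime, a minimal sketch of a pre-flight storage check that could avoid triggering the error at all; the 4 GB threshold is a guess based on the "required 4000000000" line in the log above, and the commented usage assumes the standard ObjectCaptureSession start call:
import Foundation
import RealityKit

// Rough threshold taken from the "required 4000000000" line in the log above.
let requiredFreeBytes: Int64 = 4_000_000_000

func hasEnoughFreeSpaceForCapture() -> Bool {
    let documentsURL = URL.documentsDirectory
    do {
        let values = try documentsURL.resourceValues(
            forKeys: [.volumeAvailableCapacityForImportantUsageKey]
        )
        let available = values.volumeAvailableCapacityForImportantUsage ?? 0
        return available >= requiredFreeBytes
    } catch {
        print("Could not read available capacity: \(error)")
        return false
    }
}

// Usage sketch: only start an ObjectCaptureSession when the check passes.
// if hasEnoughFreeSpaceForCapture() {
//     session.start(imagesDirectory: imagesURL, configuration: configuration)
// } else {
//     // Show an "insufficient storage" message instead of starting the session.
// }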