We are currently using Apple's Object Capture module and wonder if it would be possible to collect the following data:
Device information
Current translation / rotation
Focal length embedded in the image headers
GPS location information
Information about the exposure time
White balance and the color correction matrices
We also have two additional questions:
Is there an option to block close-up accommodation of the camera?
Is there a way for the Object Capture module to take a video instead of a series of pictures?
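For what it's worth, here is a minimal sketch of how we imagine reading some of these fields back out of the captured HEIC files with ImageIO once a capture session has written them to disk. The file URL is a placeholder, and whether Object Capture actually embeds every one of these fields is exactly what we are asking:

import Foundation
import ImageIO

// Sketch: inspect the EXIF/GPS metadata of one captured image.
// `url` points at a HEIC written by the capture session (placeholder).
func inspectCapturedImage(at url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
              as? [CFString: Any] else { return }

    let exif = properties[kCGImagePropertyExifDictionary] as? [CFString: Any]
    let gps  = properties[kCGImagePropertyGPSDictionary]  as? [CFString: Any]

    print("Focal length:",  exif?[kCGImagePropertyExifFocalLength]  ?? "n/a")
    print("Exposure time:", exif?[kCGImagePropertyExifExposureTime] ?? "n/a")
    print("GPS dictionary:", gps ?? "n/a")
}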
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. Recently, though, we have had reports from some users on Max-model phones (14 Pro Max and 15 Pro Max at least). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users, the controls along the top of the player (the X close button, the AR | Object toggle, and the share button) are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera indicator, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app. So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as on Pros, regular iPhones, and Minis, and they are not experiencing the issue. You would think that a USDZ file is a USDZ file, and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected user's iPhone 15 Pro Max.
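For context, here is a simplified sketch of the typical way an app hands a USDZ to Quick Look (not our exact code; the file URL and presenting view controller are placeholders). We mention it because how the preview controller is presented could plausibly influence how its top controls lay out relative to the status bar:

import QuickLook
import UIKit

// Simplified example: present a USDZ with QLPreviewController.
final class USDZPreviewPresenter: NSObject, QLPreviewControllerDataSource {
    private let fileURL: URL
    init(fileURL: URL) { self.fileURL = fileURL }

    func present(from presenter: UIViewController) {
        let preview = QLPreviewController()
        preview.dataSource = self
        // The presentation style (full screen vs. sheet, modal vs. pushed)
        // may affect how the preview lays out under the status bar.
        presenter.present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        fileURL as NSURL
    }
}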
With Xcode 16 and the visionOS 2.0 SDK, the results of consecutive ar_anchor_get_timestamp calls may differ by many seconds.
Is there any way to detect the timestamp jumping?
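For illustration, this is the kind of check we have in mind, a minimal sketch where the timestamps are plain Doubles read via ar_anchor_get_timestamp and the one-second threshold is an arbitrary placeholder:

// Flags a "jump" whenever two consecutive anchor timestamps differ by
// more than a chosen threshold.
struct TimestampJumpDetector {
    private var previous: Double?
    let threshold: Double = 1.0   // seconds; placeholder value

    mutating func check(_ timestamp: Double) -> Bool {
        defer { previous = timestamp }
        guard let previous else { return false }
        return abs(timestamp - previous) > threshold
    }
}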
Topic:
Spatial Computing
SubTopic:
ARKit
This restriction means I am unable to use Metal to create images and simultaneously use Swift to add UI controls or RealityKit content (without using a window) in immersive mode.
Information is light on the new subdivision support for USD models in RealityKit, and so far I have been unable to get one of my models to actually subdivide in Reality Composer Pro or Quick Look (or when viewing on Vision Pro).
I've exported a few test models from Houdini and verified that they contain 'uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes.
But when viewing them, they simply look like polygonal meshes; no subdividing occurs at runtime.
Is there a trick to getting them to actually smooth out?
Topic:
Spatial Computing
SubTopic:
General
Can we access a Vision Pro's spatial Persona in an application's view without using SharePlay or a group activity, like any other 3D avatar?
I want to use that Persona in the app without live rendering; I just want to pass some voice commands so the avatar appears to be speaking.
The documentation at https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro states:
Reality Composer Pro treats your imported assets as read-only.
This is a huge obstacle for me, as I need to do multiple adjustments to the scene.
I somehow managed to actually import one of my assets into the scene and can manipulate it directly, but now I can't figure out how I did this.
As I have to prepare further assets and would like to do this directly in Reality Composer Pro, I'm looking for a way to actually load them into the scene.
Any idea how this can be done?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I have an .rkassets package, from which I load my scene.
In the scene, I'm using entity.findEntity(named:"..") to find entities to activate/deactivate.
When I have entities deactivated in the *.usda, they are not found with this method. Further inspection suggests that the deactivated entities are not compiled into the build.
Is there anything I can set to prevent these deactivated entities from being skipped in the build?
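For reference, the loading and lookup path looks roughly like this (a sketch; the bundle and entity names are placeholders for what is in my .rkassets package):

import RealityKit
import RealityKitContent   // assumed name of the package providing realityKitContentBundle

func loadScene() async throws {
    let scene = try await Entity(named: "Scene", in: realityKitContentBundle)

    // Works for entities that are enabled in the .usda ...
    let lamp = scene.findEntity(named: "Lamp")
    print(lamp as Any)

    // ... but returns nil for entities deactivated in Reality Composer Pro,
    // apparently because they are stripped from the compiled .rkassets.
    let hiddenDoor = scene.findEntity(named: "HiddenDoor")
    print(hiddenDoor as Any)
}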
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insights into the respective advantages of RealityKit and CompositorServices.
In a visionOS app, I want to detect whether the user is in a room. My idea is as follows:
Check whether there are walls around the user
May I ask how to do this? Thanks!
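The kind of thing I have in mind is sketched below; authorization handling is omitted, and treating the .wall classification as the signal for "there is a wall here" is my own assumption:

import ARKit

// Rough sketch: run plane detection and observe vertical planes that the
// system classifies as walls.
func observeWalls() async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        guard update.anchor.classification == .wall else { continue }
        // Each wall anchor's pose could be compared with the device pose
        // (from a WorldTrackingProvider) to estimate whether walls enclose
        // the user on all sides.
        print("Wall at:", update.anchor.originFromAnchorTransform.columns.3)
    }
}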
I have created a portal and attached it to a wall using the AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. Initially, I attempted to locate relevant information within the demo code, but I encountered difficulties in comprehending certain sections. I would appreciate it if someone could provide a step-by-step explanation or a reference to the appropriate code. Thank you for your assistance.
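A rough sketch of the kind of approach I am asking about is below; it assumes the wall is obtained through ARKit plane detection rather than a plain AnchorEntity, and makePortal(width:height:) stands in for my own portal-building code:

import ARKit
import RealityKit

// Sketch: size a portal entity to the extent of a detected wall.
func attachPortals(using planeDetection: PlaneDetectionProvider,
                   under rootEntity: Entity,
                   makePortal: (Float, Float) -> Entity) async {
    for await update in planeDetection.anchorUpdates {
        guard update.anchor.classification == .wall else { continue }
        let extent = update.anchor.geometry.extent              // size in meters
        let portal = makePortal(extent.width, extent.height)
        portal.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        rootEntity.addChild(portal)
    }
}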
I just followed the documentation at https://developer.apple.com/documentation/shadergraph/realitykit/cube-image-(realitykit)
I have a .ktx file and use the CubeImage node to load it, then a Convert node, but the result is black.
I checked it on my Vision Pro and it is still black; I don't know why. Is something wrong?
PS: I also used an Image node to load the .ktx file, and it shows an image, so I believe the .ktx file is fine. I also checked that on the Vision Pro.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
Is it possible to manage the behavior of a timeline entirely from code?
I am exploring the Compose interactive 3D content in Reality Composer Pro sample project after seeing the related video, but the example only shows the use of Behaviors in Reality Composer Pro to trigger timeline actions.
I was wondering if it is possible to somehow retrieve some kind of timeline controller that gives me access to its information, just as AnimationPlaybackController does for single animations.
What I would like to achieve is being able to play/pause a timeline and retrieve its timestamp, in order to allow synchronization between different users over SharePlay.
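What I was hoping for is something along these lines, a sketch that assumes the timeline is surfaced through the entity's AnimationLibraryComponent under the name it has in Reality Composer Pro (which may not be how timelines are actually exposed):

import RealityKit

// Sketch: obtain a playback controller for a Reality Composer Pro timeline
// so it can be paused/resumed and its time read for SharePlay syncing.
func timelineController(named name: String,
                        on entity: Entity) -> AnimationPlaybackController? {
    guard let library = entity.components[AnimationLibraryComponent.self],
          let timeline = library.animations[name] else { return nil }

    let controller = entity.playAnimation(timeline)
    // controller.pause(), controller.resume() and controller.time would then
    // give the play/pause/timestamp control needed for synchronization.
    return controller
}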
Hello,
I am looking to create a shader to update an entity's rendering. As a basic example, say I want to recolour an entity but leave its original textures showing through.
I understand that on visionOS I need to use Reality Composer Pro to create the shader, but I'm lost as to how to reference the original colour that I'm trying to update in the node graph. All my attempts completely override the textures of the entity (and its sub-entities) that I want to affect. Also, the tutorials and examples I've looked at create materials rather than adding an effect on top of existing materials.
Any hints or pointers?
Assuming this is possible, I've been trying to load the material in code and apply it to an entity. But do I need to do this for all child entities, or just the topmost one?
do {
    // Loads the model from our bundle and performs configuration.
    let entity = MyAssets.createModelEntity(.plane)

    // Load the Shader Graph material authored in Reality Composer Pro.
    let material = try await ShaderGraphMaterial(named: "/Root/TestMaterial",
                                                 from: "Test",
                                                 in: realityKitContentBundle)

    // applyToChildren: our helper that visits this entity and its descendants
    // (sketched below).
    entity.applyToChildren {
        $0.components[ModelComponent.self]?.materials = [material]
    }

    root.addChild(entity)
} catch {
    fatalError(error.localizedDescription)
}
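For clarity, applyToChildren above is not a RealityKit API; it stands for a small recursive helper along these lines:

import RealityKit

extension Entity {
    /// Applies `body` to this entity and every descendant.
    func applyToChildren(_ body: (Entity) -> Void) {
        body(self)
        for child in children {
            child.applyToChildren(body)
        }
    }
}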
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
visionOS
Hi,
since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the Home View. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps:
Open the app
Open a window via the push action
Press the Digital Crown
On the home screen, select the app's icon again
The pushed window will now be dismissed.
There is a sample project linked here that shows the issue, including a video of the bug in progress.
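A minimal reproduction looks roughly like this (window IDs and view names are placeholders):

import SwiftUI

@main
struct PushWindowReproApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }
        WindowGroup(id: "pushed") {
            Text("Pushed window")
        }
    }
}

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        // After pushing, background the app with the Digital Crown, then
        // relaunch it from the Home View: the pushed window is dismissed.
        Button("Push window") {
            pushWindow(id: "pushed")
        }
    }
}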
I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, like when someone moves too quickly or gets too close to a physical object. The content in front of them dims briefly to allow a clearer view of their surroundings. And I'd like to know the specific distance at which the system begins to show the physical object, or what criteria are used for this adjustment.
Hi everyone,
I'm working on an app for visionOS that needs to recognize individual rooms in a hallway based on the person each room belongs to (using the name displayed on each office door). Is there any sample code or resource that can guide me in implementing this feature?
Thanks in advance for your help!
Hello Developers,
I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.
I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.
Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.
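To make the question more concrete, here is a rough sketch of one possible building block, SharePlay's GroupActivities framework, used to exchange device poses between participants. The activity and message types are made up for illustration, and establishing the shared coordinate frame itself (for example via a common marker or shared world anchor) is a separate problem not covered here:

import GroupActivities
import simd

// Hypothetical activity and message types for a shared machine-viewing session.
struct ViewMachineActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "View Machine Together"
        meta.type = .generic
        return meta
    }
}

struct PoseMessage: Codable {
    var position: SIMD3<Float>      // device position in the shared frame
    var rotation: SIMD4<Float>      // quaternion components
}

// Sends the local device pose and listens for the other participants' poses.
func exchangePoses(in session: GroupSession<ViewMachineActivity>,
                   localPose: PoseMessage) async {
    let messenger = GroupSessionMessenger(session: session)
    try? await messenger.send(localPose)

    for await (pose, _) in messenger.messages(of: PoseMessage.self) {
        print("Received remote pose:", pose.position)
    }
}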
Any advice or shared experiences would be greatly appreciated!
Best regards,
Revin
I am new to the graph editor and have been able to achieve some results. However, I am noticing that my graphs are getting very tangled, confusing, and hard to debug. I was wondering:
Is it possible to define variables to store the value of computations and refer to them in other parts of the graph, without having to link them graphically? This would help tidy the tangled mess I've created. In the "Explore materials in Reality Composer Pro" video, I saw that it is possible to create "instances", but I am not sure that is what I need. For example, does the shader compiler optimize them so that there is no need to recompute each instance?
Is there any functionality to debug the graph, trying inputs and seeing what the numeric outputs would be?