I was starting to test the visionOS SDK on an existing project that has been running fine on iPad (iOS 17) with Xcode 15. It can be configured to run on the visionOS simulator on a MacBook with an M1 chip without any change to the Xcode project's Build Settings.
However, the Apple Vision Pro simulator doesn't appear when I run Xcode 15 on an Intel MacBook Pro, unless I change the SUPPORTED_PLATFORMS key in the Xcode project's Build Settings to visionOS.
Although I can understand that a MacBook Pro with an M1 / M2 chip would be the ideal platform for running the visionOS simulator, it would be so much better if we could run the visionOS simulator on iPadOS: the iPad has the same arm64 architecture, and it has all the hardware needed for camera, GPS, and LiDAR.
The Mac is not a good simulator host, even though it has an M1 / M2 chip. First of all:
It doesn't have dual-facing cameras (front and back)
It doesn't have LiDAR
It doesn’t have GPS
It doesn’t have a 5G cellular radio
It’s not portable enough for developers to design use cases around spatial computing
Last but not least, it's not clear to me how to simulate ARKit with actual camera frames on a Vision Pro simulator, while I would expect this could be simulated perfectly on iPadOS.
My suggestion is to provide us developers with a simulator that can run on iPadOS. That would increase developer adoption and improve the design and prototyping phase of apps meant for the actual Vision Pro device.
I'm trying to load an ARWorldMap with roomCaptureView?.captureSession.arSession.run(arWorldTrackingConfig, options: []), but when I load the ARWorldMap, RoomPlan stops working: the screen goes black and nothing is displayed.
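In case it helps, here is roughly what I'm doing, as a minimal sketch (the restoreWorldMap function name and the savedWorldMap parameter are just for illustration; the run call is the same one quoted above):

import ARKit
import RoomPlan

// Restore a previously captured ARWorldMap on the ARSession that backs RoomPlan.
func restoreWorldMap(on roomCaptureView: RoomCaptureView, with savedWorldMap: ARWorldMap) {
    let arWorldTrackingConfig = ARWorldTrackingConfiguration()
    arWorldTrackingConfig.initialWorldMap = savedWorldMap

    // Same call as above; after this runs, the RoomPlan view goes black.
    roomCaptureView.captureSession.arSession.run(arWorldTrackingConfig, options: [])
}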
Does anyone know the cause and solution?
Now that we have the Vision Pro, I really want to start using Apple's Object Capture API to transform real objects into 3D assets. I watched the latest Object Capture video from WWDC 23 and noticed they were using a "sample app".
Does Apple provide this sample app to visionOS developers, or do we have to build our own iOS app?
Thanks and cheers!
I have been playing with RealityKit and ARKit. One thing I am not able to figure out is whether it's possible to place an object, say on the floor behind a couch, and not be able to see it when viewing the area it was placed in from the other side of the couch.
If that's confusing, I apologize. Basically I want to "hide" objects in a closet or behind other physical objects.
Are we just not there yet with this stuff? Or is there a particular way to do it I am missing?
It just seems odd that when I place an object, I then see it "on top of" the couch from the other side.
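For what it's worth, this is the only lead I've found so far, a minimal sketch assuming scene-reconstruction occlusion is the relevant feature (I haven't confirmed it gives the behavior I want):

import ARKit
import RealityKit

// Ask RealityKit to occlude virtual content behind reconstructed real-world geometry.
func enableOcclusion(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh   // requires a LiDAR-equipped device
    }
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)
}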
Thanks!
Brandon
The version of Reality Composer Pro is 1.0 from Xcode 15 beta 2. Every time I click the 'Particle Emitter' button, it crashes. I can't open Diorama, the demo project from the documentation, either.
SceneReconstructionProvider.isSupported and PlaneDetectionProvider.isSupported both return false when running in the simulator (Xcode 15b2).
There is no mention of this in the release notes. It seems this makes any kind of AR app that depends on scene understanding impossible to run in the sim.
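For reference, this is the entire check I'm running, nothing else assumed:

import ARKit

// Both of these report false in the visionOS simulator (Xcode 15b2).
print(SceneReconstructionProvider.isSupported)
print(PlaneDetectionProvider.isSupported)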
For example, the code described in this article cannot be run in the simulator: https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience
Am I missing something or is this really the current state of the sim?
Does this mean if we want to build mixed-immersion apps we need to wait to get access to Vision Pro hardware?
Will I be able to open an ARSession with ARFaceTrackingConfiguration on visionOS?
Will I be able to have the face blendshapes?
In full immersive (VR) mode on visionOS, if I want to use Compositor Services and a custom Metal renderer, can I still have the user's hands rendered with their real appearance, so my hands show up as they are in reality? If so, how?
If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons. I’d like to see my own hands, even in immersive mode.
I am learning visionOS, and in the tutorial the presenter uses Stylized Clouds.usdz. Where can I find this file to import into my Reality Composer Pro project?
While making an ARKit object-detection application, each scanned object (ARReferenceObject) ends up at 5–20 MB in order to detect the object smoothly.
Is there a way to reduce this size?
Why is it needed?
I have more than 200 objects to detect, and if each object takes 5 MB, then almost 1 GB will be occupied just by my application, which doesn't seem appropriate.
I have an older app that is a mix of Swift and Objective-C. I have two groups of storyboards, one for iPhone and one for iPad, using storyboard references.
There seems to be a bug: when using the simulator, it loads the storyboard specified by the key "Main storyboard file base name" instead of the one specified by "Main storyboard file base name (iPad)". When I changed the first key to use the iPad storyboard, it then worked as expected in the visionOS simulator.
The raw keys are:
UIMainStoryboardFile
UIMainStoryboardFile~ipad
What should I do?
Is there a way to move a rigged character via its armature bones in ARKit/RealityKit?
I am trying to do this
When I try to move the usdz robot provided in
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d
using JointTransform, it gives me the following:
I see the documentation on character rigging, etc. But is movement through armature bones only available through third-party software, or can it be done in RealityKit/ARKit/RealityView?
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/rigging_a_model_for_motion_capture
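To make the question concrete, here is roughly what I'm attempting, as a minimal sketch (the joint index and rotation are arbitrary, and I'm not sure jointTransforms is the intended API for armature-driven movement):

import RealityKit

// Rotate the first joint of a rigged ModelEntity loaded from a usdz file.
// jointNames and jointTransforms are parallel arrays on ModelEntity.
func nudgeFirstJoint(of character: ModelEntity) {
    guard !character.jointTransforms.isEmpty else { return }
    var joint = character.jointTransforms[0]
    joint.rotation = simd_quatf(angle: .pi / 8, axis: [0, 0, 1])
    character.jointTransforms[0] = joint
}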
Hello!
On iOS, ARView has a postProcess render callback that lets us apply image effects, including shaders, directly to the camera output.
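This is roughly the pattern I use on iOS today, as a minimal passthrough sketch (the real version encodes an actual shader pass instead of the blit copy):

import RealityKit
import Metal

// Minimal post-process callback: copy the rendered frame straight through.
func installPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        guard let blit = context.commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
    }
}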
Apparently, ARView isn't available on visionOS, at least for now, so I wonder whether there is any way to achieve the same thing there.
Thanks.
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode. But I couldn't find any docs relating to 3D stereo images.
I guess my question can be put more generally: is there any method by which we can render different content for each eye? This could also be helpful to someone who has sight in only one eye.
Does RealityKit have a way to do instanced rendering?
When I clone a ModelEntity and add it to the scene, I get one draw call per ModelEntity, even if I don't change any properties on the entity.
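Here's a minimal repro of what I mean (the sphere mesh, material, and anchor are arbitrary placeholders):

import RealityKit

// Clone one prototype many times; each clone still seems to cost a draw call.
func populate(_ anchor: AnchorEntity) {
    let mesh = MeshResource.generateSphere(radius: 0.05)
    let material = SimpleMaterial(color: .red, isMetallic: false)
    let prototype = ModelEntity(mesh: mesh, materials: [material])

    for i in 0..<100 {
        let copy = prototype.clone(recursive: true)
        copy.position = [Float(i % 10) * 0.2, 0, Float(i / 10) * 0.2]
        anchor.addChild(copy)
    }
}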
I was hoping to use RealityKit for a 3D .nonAR game, and think this would be required.
Hi all.
I am new to Swift and AR. I'm trying out an AR project and ran into a problem: I can't change the material on my models. With geometry such as a sphere or a cube, everything is simple.
Tell me, what am I doing wrong?
My simple code:
@IBOutlet var sceneView: ARSCNView!
var modelNode: SCNNode!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.showsStatistics = true

    // Load the model and grab the node I want to retexture.
    let scene = SCNScene(named: "art.scnassets/jacket.usdz")!
    modelNode = scene.rootNode.childNode(withName: "jacket", recursively: true)

    // Replace the material on the first child node's geometry.
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "art.scnassets/58.png")
    modelNode.childNodes[0].geometry?.materials = [material]

    sceneView.scene = scene
}
I'm still a little unsure about the various spaces and capabilities. I'd like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not the Shared Space), is that available? (I am pretty sure the answer is "yes," but I'd like to confirm.) What is this mode called in the system? A mixed-immersion Full Space?
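For context, this is the kind of thing I want to do once I'm in that space, as a minimal sketch (I'm assuming an ARKitSession with a HandTrackingProvider is the right entry point):

import ARKit

// Start hand tracking and read per-joint data as anchor updates arrive.
let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async {
    do {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let hand = update.anchor                 // HandAnchor
            _ = hand.handSkeleton?.allJoints         // full set of joint transforms
        }
    } catch {
        print("Hand tracking failed: \(error)")
    }
}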
Hi,
I have an existing iPhone/iPad app that I'm considering bringing to the Vision Pro. It's an informational app that shares info with the Watch.
What would be better:
Add a target for the Vision Pro.
Make a separate app so I can use all of the Vision Pro's features.
What are YOU doing to support the Vision Pro?
Thanks,
Dan