I started testing the visionOS SDK on an existing project that has been running fine on iPad (iOS 17) with Xcode 15. It can be configured to run on the visionOS simulator on an M1 MacBook without any change to the Xcode project's Build Settings.
However, the Apple Vision Pro simulator doesn't appear when I run Xcode 15 on an Intel MacBook Pro, unless I change the SUPPORTED_PLATFORMS key in the Xcode project's Build Settings to include visionOS.
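For reference, this is roughly what I set (shown here in .xcconfig form; the xros and xrsimulator platform identifiers are my understanding of the visionOS SDK's names, so treat this as a sketch rather than the canonical value):

SUPPORTED_PLATFORMS = iphoneos iphonesimulator xros xrsimulator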
Although I understand that a MacBook Pro with an M1/M2 chip would be the ideal platform for running the visionOS simulator, it would be much better if we could run the visionOS simulator on iPadOS, since the iPad has the same arm64 architecture and all the hardware needed: camera, GPS, and LiDAR.
The Mac is not a good simulator host, even though it has an M1/M2 chip:
It doesn't have dual cameras (front- and rear-facing)
It doesn't have LiDAR
It doesn’t have GPS
It doesn’t have a 5G cellular radio
It’s not portable enough for developers to design use cases around spatial computing
Last but not least, it is unclear to me how to simulate ARKit with actual camera frames on a Vision Pro simulator, whereas I would expect this could be simulated very well on iPadOS.
My suggestion is to provide us developers with a simulator that runs on iPadOS; that would increase developer adoption and improve the design and prototyping phase of apps targeting the actual Vision Pro device.
I'm trying to load an ARWorldMap with roomCaptureView?.captureSession.arSession.run(arWorldTrackingConfig, options: []), but when I load the ARWorldMap, RoomPlan stops working: the screen goes black and nothing is displayed.
Does anyone know the cause and a solution?
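For context, here is a minimal sketch of what I'm attempting (the file URL, the unarchiving approach, and the run options are assumptions; the crux is handing the saved map to the configuration via initialWorldMap before running it on RoomPlan's underlying ARSession):

import ARKit
import RoomPlan

func runWithSavedWorldMap(on roomCaptureView: RoomCaptureView, mapURL: URL) throws {
    // Unarchive an ARWorldMap that was saved in an earlier session.
    let data = try Data(contentsOf: mapURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }

    // Hand the saved map to a world-tracking configuration and run it on the
    // ARSession that RoomCaptureView's capture session exposes (iOS 17+).
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    roomCaptureView.captureSession.arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}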
I have been playing with RealityKit and ARKit. One thing I'm not able to figure out is whether it's possible to place an object, say on the floor behind a couch, and then not see it when viewing that area from the other side of the couch.
If that's confusing, I apologize. Basically, I want to "hide" objects in a closet or behind other physical objects.
Are we just not there yet with this stuff? Or is there a particular way to do it that I'm missing?
It just seems odd that when I place an object, I then see it "on top of" the couch from the other side.
Thanks!
Brandon
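In case it helps frame the question, this is the kind of setup I would expect to need, assuming a LiDAR-equipped device (the .occlusion scene-understanding option is my guess at the relevant switch, not a confirmed answer):

import ARKit
import RealityKit

func enableRealWorldOcclusion(in arView: ARView) {
    // Reconstruct the room mesh so RealityKit knows where real surfaces are.
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    configuration.planeDetection = [.horizontal]

    // Ask RealityKit to use that mesh to occlude virtual content behind real objects.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)

    arView.session.run(configuration)
}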
SceneReconstructionProvider.isSupported and PlaneDetectionProvider.isSupported both return false when running in the simulator (Xcode 15b2).
There is no mention of this in the release notes. It seems this makes any kind of AR app that depends on scene understanding impossible to run in the simulator.
For example, the code described in this article cannot be run in the simulator: https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience
Am I missing something, or is this really the current state of the simulator?
Does this mean that if we want to build mixed-immersion apps, we need to wait for access to Vision Pro hardware?
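For reference, this is the kind of check where the limitation shows up (a minimal sketch; the provider set and plane alignments are just an example):

import ARKit

func startSceneUnderstanding() async {
    guard SceneReconstructionProvider.isSupported, PlaneDetectionProvider.isSupported else {
        print("Scene reconstruction / plane detection unsupported here (e.g. the simulator)")
        return
    }
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    do {
        try await session.run([sceneReconstruction, planeDetection])
    } catch {
        print("ARKitSession failed to run: \(error)")
    }
}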
Will I be able to open an ARSession with ARFaceTrackingConfiguration on visionOS?
Will I be able to access the face blend shapes?
In full immersive (VR) mode on visionOS, if I want to use compositor services and a custom Metal renderer, can I still get the user’s hands texture so my hands appear as they are in reality? If so, how?
If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons. I’d like to see my own hands, even in immersive mode.
While making an ARKit object detection application,
the scanned object (ARReferenceObject) ends up being 5–20 MB if the object is to be detected smoothly.
Is there a way to reduce this size?
Why is it needed?
I have more than 200 objects to detect, and if each object takes 5 MB, then almost 1 GB will be occupied just by my application's reference objects, which doesn't seem appropriate.
How do I trace a map on the floor as the user walks through their house, like a trail or heatmap, and then save this trail to Core Data?
Would it be possible to load and view this map later in the same spot? Or re-scan the trail in the same area?
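To make the question concrete, here is a rough sketch of the capture side as I imagine it (the 10 cm sampling threshold and the delegate-based approach are assumptions; persisting the points to Core Data and relocalizing later is the part I'm unsure about):

import ARKit
import simd

final class TrailRecorder: NSObject, ARSessionDelegate {
    private(set) var trailPoints: [SIMD3<Float>] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera transform's translation column is the device position in world space.
        let t = frame.camera.transform.columns.3
        let position = SIMD3<Float>(t.x, t.y, t.z)
        // Only record a new point once the user has moved a noticeable distance.
        if let last = trailPoints.last, simd_distance(last, position) < 0.1 { return }
        trailPoints.append(position)
    }
}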
I'm trying to scan a real-world object with the Apple ARKit Scanner app.
Sometimes the scan is not perfect, so I'm wondering whether I can obtain an .arobject in other ways, for example with other scanning apps, and then merge all the scans into one single, more accurate scan. I know that merging is possible: during an ARKit Scanner session the app asks me whether I want to merge multiple scans, and in that case I can select a previous scan from the Files app. In this context, I would like to add scans from other sources.
Is it possible? And if yes, are there any other options for obtaining an .arobject, and is that a practical way to improve the quality of object detection?
Thanks
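For what it's worth, this is the programmatic merge I had in mind, assuming the other sources can produce valid .arobject archives (a sketch only; merging(_:) throws if the scans can't be aligned):

import ARKit

func mergeReferenceObjects(at firstURL: URL, _ secondURL: URL) throws -> ARReferenceObject {
    // Load two .arobject archives saved from earlier scanning sessions.
    let first = try ARReferenceObject(archiveURL: firstURL)
    let second = try ARReferenceObject(archiveURL: secondURL)
    // Combine them into a single, hopefully more accurate, reference object.
    return try first.merging(second)
}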
Is there a way to move a Rigged Character with its Armature Bones in ARKit/RealityKit?
I am trying to do this
When I try to move the USDZ robot provided in
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d
using JointTransform, it gives me the following:
I see the documentation on character rigging, etc. But is movement through armature bones only available through third-party software, or can it be done in RealityKit/ARKit/RealityView?
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/rigging_a_model_for_motion_capture
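This is roughly what I'm attempting; a minimal sketch where the joint name and the suffix matching are assumptions that depend on the model's skeleton:

import RealityKit
import simd

func rotateJoint(named jointName: String, of model: ModelEntity, by rotation: simd_quatf) {
    // Joint names come from the USDZ skeleton; print model.jointNames to inspect them.
    guard let index = model.jointNames.firstIndex(where: { $0.hasSuffix(jointName) }) else {
        print("Joint \(jointName) not found in \(model.jointNames)")
        return
    }
    var transform = model.jointTransforms[index]
    transform.rotation = rotation * transform.rotation
    model.jointTransforms[index] = transform
}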
Hello!
On iOS, with ARView we have postProcess to apply image effects, including shaders, directly onto the camera output.
Apparently ARView isn't available on visionOS, at least for now, so I wonder whether there is any way to achieve the same thing there?
Thanks.
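For context, this is the iOS-side API I mean; a minimal sketch where the Metal pass is just a plain copy of the rendered frame:

import RealityKit
import Metal

func installPassthroughPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        // Copy the rendered frame to the output; a real effect would encode a compute
        // or render pass between these two textures instead.
        guard let blit = context.commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
    }
}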
Does RealityKit have a way to do instanced rendering?
When I clone a ModelEntity and add it to the scene, I get one draw call per ModelEntity, even if I don't change any properties on the entity.
I was hoping to use RealityKit for a 3D .nonAR game, and I think instancing would be required for that.
Hi all.
I am new to Swift and AR. I'm trying an AR project and ran into a problem: I can't change the material on my models. With geometry such as a sphere or a cube, everything is simple.
Tell me, what am I doing wrong?
My simple code:
@IBOutlet var sceneView: ARSCNView!
var modelNode: SCNNode!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.showsStatistics = true

    // Load the USDZ model and grab the node I want to re-texture.
    let scene = SCNScene(named: "art.scnassets/jacket.usdz")!
    modelNode = scene.rootNode.childNode(withName: "jacket", recursively: true)

    // Swap in my own material on the first child node.
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "art.scnassets/58.png")
    modelNode.childNodes[0].geometry?.materials = [material]

    sceneView.scene = scene
}
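One variant I've considered (an assumption about the cause: the node named "jacket" may not be the node that actually owns the geometry) is applying the material to every geometry-bearing descendant instead:

modelNode.enumerateChildNodes { node, _ in
    // Assign the material to any descendant that actually carries geometry.
    node.geometry?.materials = [material]
}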
I'm still a little unsure about the various spaces and capabilities. I'd like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not the shared space), is that available? (I'm pretty sure the answer is "yes," but I'd like to confirm.) What is this mode called in the system? A mixed-immersion full space?
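Concretely, this is the kind of per-joint access I'm hoping for in that mode (a sketch of my understanding of the visionOS ARKit API; the joint choice is just an example):

import ARKit

func trackHands() async throws {
    guard HandTrackingProvider.isSupported else { return }
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        // anchorFromJointTransform is expressed relative to the hand anchor itself.
        if let indexTip = anchor.handSkeleton?.joint(.indexFingerTip) {
            print("\(anchor.chirality) index tip:", indexTip.anchorFromJointTransform)
        }
    }
}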
Currently I'm developing on visionOS. World tracking works, but I still don't know how to persist a WorldAnchor.
This is what I want:
User A: deploys 3D content with world tracking, gets WorldAnchors, and saves the anchors.
User B: comes to the same place, loads the anchors saved by A, and can then view what A has deployed.
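For the single-device part, this is my current understanding of the API (a sketch; storing the UUID in UserDefaults is just an example, and sharing anchors between two users' devices is exactly the part I don't see covered):

import ARKit

func placeAndRememberAnchor(at transform: simd_float4x4,
                            using worldTracking: WorldTrackingProvider) async throws -> UUID {
    // WorldAnchors added here are persisted by the system and re-delivered on later launches.
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    // Remember the identifier so saved content can be re-attached when the anchor reappears.
    UserDefaults.standard.set(anchor.id.uuidString, forKey: "savedWorldAnchorID")
    return anchor.id
}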
I'm embarking on a new project that will involve animating 3D faces and mouths. I'm looking at using ARFaceAnchors and blendShapes to capture data that will be used to animate the models' facial expressions.
I have a few basic questions:
(1) As far as I can tell, Apple has not supported exporting Memojis to rigged 3D models. Is this still the case?
(2) I did find one web site that said Apple’s AvatarKit is now public, but everywhere else I’ve checked, it is still a private framework (and Xcode complains). Is AvatarKit still private?
(3) It looks like all 52 blendShapes for an ARFaceAnchor are updated every frame, 60 times a second. That's 3,120 data points per second. Are there any best-practice guides for reducing the data? For example, "These 10 blendShapes capture the most important features for animating a face." (A sketch of sampling a reduced set follows below.)
(4) It appears that visionOS does not support ARFaceAnchor. If I want to present a remote user as a Memoji (or other rigged model) in a shared experience, is there any way to do that at the current time?
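Regarding (3), this is the kind of reduction I have in mind; a sketch where the chosen blend shapes are only my guess at a useful subset:

import ARKit

func sampleBlendShapes(from faceAnchor: ARFaceAnchor) -> [ARFaceAnchor.BlendShapeLocation: Float] {
    // A reduced set of coefficients for basic mouth, eye, and brow animation.
    let keys: [ARFaceAnchor.BlendShapeLocation] = [
        .jawOpen, .mouthSmileLeft, .mouthSmileRight,
        .eyeBlinkLeft, .eyeBlinkRight, .browInnerUp
    ]
    var result: [ARFaceAnchor.BlendShapeLocation: Float] = [:]
    for key in keys {
        result[key] = faceAnchor.blendShapes[key]?.floatValue ?? 0
    }
    return result
}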
Is there a method to extract the iPhone's position and orientation? Or to get the trajectory of the iPhone during the room scan?
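In case it clarifies the question, this is the kind of access I'm after, assuming iOS 17+ where RoomCaptureSession exposes its underlying ARSession; sampling it over time would give the trajectory:

import ARKit
import RoomPlan

func currentDevicePose(in roomCaptureView: RoomCaptureView) -> simd_float4x4? {
    // The camera transform encodes both position and orientation of the device.
    return roomCaptureView.captureSession.arSession.currentFrame?.camera.transform
}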