I am planning to build a visionOS app and need access to the user's Persona (avatar). I have not found any information about integration possibilities in the docs. Does anyone know if and how I can access the user's Persona?
Other applications like Zoom and Teams for visionOS use the Persona, so it seems to be possible in principle. Apparently (if it's not fake) there is also a chess game with an integrated Persona: https://www.youtube.com/watch?v=mMzK8C3t14I
Any help is very welcome, thanks.
In visionOS, I want to make a virtual watch. Once the watch is placed, the rendering of the user's hand occludes it, so the virtual watch cannot be seen.
How can I set up the watch so that it takes display priority over the hand?
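A hedged suggestion (my own addition, not from the original question): in an immersive space you can ask visionOS not to render the user's hands and arms over your content by setting the scene's upper-limb visibility to hidden, so a wrist-anchored entity is no longer covered by the passthrough hand. Note that this hides the hands entirely rather than making only the watch win the depth test. A minimal sketch, with the anchor location as an assumption:

import SwiftUI
import RealityKit

struct WatchImmersiveSpace: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "watch") {
            RealityView { content in
                // Hypothetical wrist anchor; the virtual watch entity would be
                // added as a child of this anchor.
                let wristAnchor = AnchorEntity(.hand(.left, location: .wrist))
                content.add(wristAnchor)
            }
        }
        // Hide the passthrough rendering of the user's hands and arms so
        // virtual content is not occluded by them.
        .upperLimbVisibility(.hidden)
    }
}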
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates.
I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant).
I register a callback using ar_world_tracking_provider_set_anchor_update_handler_f, and when updates arrive I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f.
Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position.
When this happens, anchor updates arrive. In most cases the anchor updates return the new transform (using ar_world_anchor_get_origin_from_anchor_transform), but sometimes I get an anchor update that reports the transform of the anchor from before the world origin was repositioned. That means that instead of staying in place in the physical world, the world anchor moves relative to me.
I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f which provides me with the correct transform, but this async method also adds some noticeable delay to the anchor updates.
Is this already a known issue?
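For reference, this is roughly the equivalent flow in the Swift ARKit API (a sketch of my own, not taken from the report above; the C API described there exposes the same data):

import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func runWorldTracking() async throws {
    try await session.run([worldTracking])
    for await update in worldTracking.anchorUpdates {
        let anchor = update.anchor
        // originFromAnchorTransform should already be expressed relative to
        // the current world origin; the issue described above is that it
        // occasionally reports the pre-recenter transform.
        print("Anchor \(anchor.id) \(update.event): \(anchor.originFromAnchorTransform)")
    }
}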
In my visionOS app I am using plane detection, and I want to create planes that have physics so that my RealityKit entities rest on real-world detected planes.
I was curious whether the code below, which I found in the samples, is the most efficient way of doing this.
func processPlaneDetectionUpdates() async {
    for await anchorUpdate in planeTracking.anchorUpdates {
        let anchor = anchorUpdate.anchor
        if anchorUpdate.event == .removed {
            planeAnchors.removeValue(forKey: anchor.id)
            if let entity = planeEntities.removeValue(forKey: anchor.id) {
                entity.removeFromParent()
            }
            // Skip to the next update; returning here would stop processing
            // all future plane updates.
            continue
        }
        planeAnchors[anchor.id] = anchor
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
        // Generate a mesh for the plane (for occlusion).
        var meshResource: MeshResource? = nil
        do {
            let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
            // Skip this anchor and keep processing updates.
            continue
        }
        var material = UnlitMaterial(color: .red)
        material.blending = .transparent(opacity: .init(floatLiteral: 0))
        if let meshResource {
            // Make this plane occlude virtual objects behind it.
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }
        // Generate a collision shape for the plane (for object placement and physics).
        var shape: ShapeResource? = nil
        do {
            let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
            shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                               faceIndices: anchor.geometry.meshFaces.asUInt16Array())
        } catch {
            print("Failed to create a static mesh for a plane anchor: \(error).")
            // Skip this anchor and keep processing updates.
            continue
        }
        if let shape {
            entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
            let physics = PhysicsBodyComponent(mode: .static)
            entity.components.set(physics)
        }
        let existingEntity = planeEntities[anchor.id]
        planeEntities[anchor.id] = entity
        contentEntity.addChild(entity)
        existingEntity?.removeFromParent()
    }
}
extension MeshResource.Contents {
    init(planeGeometry: PlaneAnchor.Geometry) {
        self.init()
        self.instances = [MeshResource.Instance(id: "main", model: "model")]
        var part = MeshResource.Part(id: "part", materialIndex: 0)
        part.positions = MeshBuffers.Positions(planeGeometry.meshVertices.asSIMD3(ofType: Float.self))
        part.triangleIndices = MeshBuffer(planeGeometry.meshFaces.asUInt32Array())
        self.models = [MeshResource.Model(id: "model", parts: [part])]
    }
}

extension GeometrySource {
    func asArray<T>(ofType: T.Type) -> [T] {
        assert(MemoryLayout<T>.stride == stride, "Invalid stride \(MemoryLayout<T>.stride); expected \(stride)")
        return (0..<count).map {
            buffer.contents().advanced(by: offset + stride * Int($0)).assumingMemoryBound(to: T.self).pointee
        }
    }

    // SIMD3 requires its scalar type to conform to SIMDScalar.
    func asSIMD3<T: SIMDScalar>(ofType: T.Type) -> [SIMD3<T>] {
        asArray(ofType: (T, T, T).self).map { .init($0.0, $0.1, $0.2) }
    }

    subscript(_ index: Int32) -> (Float, Float, Float) {
        precondition(format == .float3, "This subscript operator can only be used on GeometrySource instances with format .float3")
        return buffer.contents().advanced(by: offset + (stride * Int(index))).assumingMemoryBound(to: (Float, Float, Float).self).pointee
    }
}

extension GeometryElement {
    subscript(_ index: Int) -> [Int32] {
        precondition(bytesPerIndex == MemoryLayout<Int32>.size,
        """
        This subscript operator can only be used on GeometryElement instances with bytesPerIndex == \(MemoryLayout<Int32>.size).
        This GeometryElement has bytesPerIndex == \(bytesPerIndex)
        """
        )
        var data = [Int32]()
        data.reserveCapacity(primitive.indexCount)
        for indexOffset in 0 ..< primitive.indexCount {
            data.append(buffer
                .contents()
                .advanced(by: (Int(index) * primitive.indexCount + indexOffset) * MemoryLayout<Int32>.size)
                .assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asInt32Array() -> [Int32] {
        var data = [Int32]()
        let totalNumberOfInt32 = count * primitive.indexCount
        data.reserveCapacity(totalNumberOfInt32)
        for indexOffset in 0 ..< totalNumberOfInt32 {
            data.append(buffer.contents().advanced(by: indexOffset * MemoryLayout<Int32>.size).assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asUInt16Array() -> [UInt16] {
        asInt32Array().map { UInt16($0) }
    }

    public func asUInt32Array() -> [UInt32] {
        asInt32Array().map { UInt32($0) }
    }
}
I was also curious to know whether I can do this without ARKit, using SpatialTrackingSession. My understanding is that with SpatialTrackingSession in RealityKit I can only get the transforms of the AnchorEntities, but I won't have the geometry information needed to create the collision shapes.
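For comparison, here is a minimal sketch (my own, under the assumption stated above) of the RealityKit-only route: a SpatialTrackingSession plus a plane-targeted AnchorEntity gives you an anchored transform to parent content to, but no plane mesh to build collision shapes from.

import SwiftUI
import RealityKit

struct PlaneAnchorView: View {
    var body: some View {
        RealityView { content in
            // Ask RealityKit (rather than ARKit directly) to track planes.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)

            // An anchor that follows a detected horizontal plane; we get its
            // transform, but not the plane's extent or mesh geometry.
            let planeAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.3, 0.3]))
            planeAnchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1),
                                             materials: [SimpleMaterial(color: .green, isMetallic: false)]))
            content.add(planeAnchor)
        }
    }
}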
https://developer.apple.com/documentation/realitykit/model3d/ontapgesture(count:coordinatespace:perform:)
According to the linked page, this tap gesture API is deprecated in visionOS 2.0 and is now intended only for watchOS, right?
So how can I implement a double-tap gesture in visionOS?
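One way that works (a sketch of my own, not from the linked docs): use SwiftUI's TapGesture with count: 2, targeted to RealityKit entities; the entity needs an InputTargetComponent and collision shapes to receive input.

import SwiftUI
import RealityKit

struct DoubleTapView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            // Required so the entity can receive gestures.
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
        .gesture(
            TapGesture(count: 2)
                .targetedToAnyEntity()
                .onEnded { value in
                    print("Double-tapped \(value.entity.name)")
                }
        )
    }
}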
How can I create a 3D model of clothing that behaves like real fabric, with realistic physics? Is it possible to achieve this model by photogrammetry? I want to use this model in the Apple Vision Pro and interact with it using hand gestures.
I'm now developing a 3D motion-capture app using ARKit.
I tested this sample code, but on iOS 18 the orientations of the hands and legs seem to be wrong.
The following image shows the sample app's screen captures on iOS 17 and iOS 18.
I'm exploring face tracking and experimenting with ARKit's ARSCNFaceGeometry face mesh. I'm running a minimal demo application on the latest iPad Pro M4 11-inch, and I've provided the code below.
I've heard that Apple still offers some of the best face tracking technology on consumer devices, largely because they are one of the few that combine depth and image data. Both a colleague and I tested the demo, and while it works as well or better than some other solutions we tried, we weren’t particularly impressed compared to Google’s MediaPipe or Nvidia’s Maxine, both of which rely solely on image data without depth. In our case, the ARKit face mesh doesn’t always align perfectly with the chin, and as the face rotates, in some areas vertices shift by up to a centimeter from their original position.
This led us to question whether our demo app was using the TrueDepth sensor at all. To test this, we used a piece of cardboard with a small hole punched in it and taped it over the sensor array, leaving only the camera exposed. On the iOS lock screen, this prevents FaceID from working, but we still get a clear image from the camera. With the TrueDepth sensor blocked, the face mesh tracking in our app still functioned, but honestly, we couldn’t detect a significant difference in tracking performance with or without the TrueDepth sensor obscured.
Could we be setting up the face tracking configuration incorrectly? Or has face tracking in newer versions of iOS become less dependent on the TrueDepth sensor?
The controller:
import SwiftUI
import ARKit

struct FaceTrackingView1: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> FaceTrackingViewController1 {
        return FaceTrackingViewController1()
    }

    func updateUIViewController(_ uiViewController: FaceTrackingViewController1, context: Context) {
    }
}

class FaceTrackingViewController1: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView = ARSCNView(frame: view.bounds)
        sceneView.delegate = self
        sceneView.automaticallyUpdatesLighting = true
        view.addSubview(sceneView)

        let config = ARFaceTrackingConfiguration()
        sceneView.session.run(config)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor else { return nil }
        let faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
        let faceNode = SCNNode(geometry: faceGeometry)
        faceNode.geometry?.firstMaterial?.fillMode = .lines // Makes it a wireframe mesh
        return faceNode
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
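One hedged way to check whether the TrueDepth sensor is contributing at all (my own addition, not part of the original demo): set sceneView.session.delegate = self in viewDidLoad and inspect ARFrame.capturedDepthData, which is only populated from the front TrueDepth camera during face tracking.

// Add inside FaceTrackingViewController1 (which already conforms to
// ARSessionDelegate); remember to set `sceneView.session.delegate = self`
// in viewDidLoad.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let depth = frame.capturedDepthData {
        // Depth frames arrive at a lower rate than color frames.
        print("TrueDepth data at \(frame.timestamp), quality: \(depth.depthDataQuality)")
    } else {
        print("No TrueDepth data on this frame")
    }
}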
The view:
import SwiftUI

struct ContentView: View {
    @State private var isFaceTrackingActive = false

    var body: some View {
        VStack {
            Text("Face mesh tracking demo")
                .font(.title)
                .padding()
            Button(action: {
                isFaceTrackingActive.toggle()
            }) {
                Text("Start Face Tracking")
                    .font(.title2)
                    .padding()
                    .background(Color.blue)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }
            .fullScreenCover(isPresented: $isFaceTrackingActive) {
                FaceTrackingView1()
            }
        }
        .padding()
    }
}

#Preview {
    ContentView()
}
I am working on a React Native app, specifically on the iOS native module that uses RealityKit. An apparently unexplainable error keeps happening at runtime, as you can see from the following image:
Crash log from Xcode. When I try to retrieve the position of my AnchorEntity relative to world space (using relativeTo: nil), it triggers a runtime exception during one of its internal calls:
CoreRE: re::BucketArray<unsigned short*, 32ul>::operator[](unsigned long) + 204
As you can see from the code, my AnchorEntity is not nil, since there is a guard check. I also tried moving that code into an Objective-C static function so I could use @try/@catch to catch runtime exceptions, only to realise that RealityKit is not compatible with Objective-C. Do you have any ideas or suggestions on how to fix or prevent this?
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.
Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.
We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
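For reference, the basic main-camera capture flow looks roughly like this (a hedged sketch assuming the Enterprise entitlement is in place; the surrounding names and the pixel-buffer handling are my own illustration, not a complete capture pipeline):

import ARKit

let session = ARKitSession()
let cameraFrameProvider = CameraFrameProvider()

func captureMainCameraFrames() async throws {
    // Pick a supported format for the main (passthrough) camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first else { return }

    try await session.run([cameraFrameProvider])

    guard let updates = cameraFrameProvider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        // Pixel buffer containing the real-world view; process or save it
        // here (e.g. convert to a CIImage and write to disk for analysis).
        let pixelBuffer = sample.pixelBuffer
        _ = pixelBuffer
    }
}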
Hi guys, I'm currently developing a game for the Vision Pro and I'm trying to figure out how hand tracking works, so I can make a superpower appear when the user looks at their hand and spreads it open. But I'm really struggling to wrap my head around the whole concept and how to implement it in my code.
Is there anything out there (other than the Apple docs), or anyone who could help shed some light on the idea and how I could actually implement it?
It would be much appreciated.
Thanks
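For anyone in the same situation, here is a minimal sketch of the raw hand-tracking loop (my own illustration; detecting "looking at the hand and spreading it" would be built on top of joint positions like these):

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func runHandTracking() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        guard handAnchor.isTracked,
              let skeleton = handAnchor.handSkeleton else { continue }
        // World-space transform of the index finger tip:
        // origin_from_hand * hand_from_joint.
        let indexTip = handAnchor.originFromAnchorTransform *
            skeleton.joint(.indexFingerTip).anchorFromJointTransform
        print("\(handAnchor.chirality) index tip at \(indexTip.columns.3)")
    }
}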
I’m currently working on a project where I need to display a transparent GIF at a specific 3D point within an AR environment. I’ve tried various methods, including using RealityKit with UnlitMaterial and TextureResource, but the GIF still appears with a black background instead of being transparent.
Does anyone have experience with displaying animated transparent GIFs on an AR plane or point? Any guidance on how to achieve true transparency for GIFs in ARKit or RealityKit would be appreciated.
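A sketch of the pieces that usually need to be in place (my own, and hedged: I have not verified that this alone fixes the black background; RealityKit has no built-in GIF player as far as I know, so the GIF has to be decoded into CGImage frames first and the plane's texture swapped per frame):

import RealityKit
import CoreGraphics

// Build a plane whose material uses transparent blending, so the alpha
// channel of the frame texture is honored instead of rendering black.
func makeTransparentFramePlane(firstFrame: CGImage) throws -> ModelEntity {
    let texture = try TextureResource.generate(from: firstFrame,
                                               options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))
    return ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.3),
                       materials: [material])
}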
We are currently using Apple's Object Capture module and wonder if it would be possible to collect the following data:
Device information
Current translation / rotation
Focal length embedded to the image headers
GPS localisation information.
Information about the exposure time
White balances and the color correction matrices
We also have 2 additional questions:
Is there an option to block close-up accommodation (focusing) of the camera?
Is there a way for the Object Capture module to take a video instead of a series of pictures?
With Xcode 16 and the visionOS 2.0 SDK, the results of consecutive ar_anchor_get_timestamp calls may differ by many seconds.
Is there any way to detect the timestamp jumping?
I have created two scenes, one immersive and one volumetric using Reality Composer Pro. In my test app I can view both and they render correctly.
However, I would like to add entities programmatically. I am trying this:
var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
            viewModel.rootEntity = scene
            content.add(scene)

            let anchorEntity = AnchorEntity(world: [0, 0, -0.5])
            let sphere = MeshResource.generateSphere(radius: 2.0)
            let material = SimpleMaterial(color: .red, roughness: 0.5, isMetallic: true)
            let modelEntity = ModelEntity(mesh: sphere, materials: [material])
            anchorEntity.addChild(modelEntity)
            content.add(anchorEntity)
        }
    }
}
However, the sphere does not appear in the volume. I also tried it in the immersive space and it does not appear there either.
What am I missing?
If you check the code here,
https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough
var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }

            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }

            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }

                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()

                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }

            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
}
the only way it deals with the errors is fatalError, and I don't think I can throw anything or return anything else.
Is there a way I can gracefully handle this and show a message box in the UI?
I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure,
but I couldn't find a nice way to do so.
Let me know if you have any ideas.
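One possible pattern (a hedged sketch of my own, not from Apple's sample; appModel.renderSetupError is a hypothetical property added to the sample's AppModel): record the failure on the app model and return from the closure instead of calling fatalError, then let the SwiftUI side observe that property, call dismissImmersiveSpace, and present an alert.

CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
    let pathCollection: PathCollection
    let tintRenderer: TintRenderer
    do {
        pathCollection = try PathCollection(layerRenderer: layerRenderer)
        tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
    } catch {
        // Hypothetical property; surface the error to the UI instead of crashing.
        Task { @MainActor in
            appModel.renderSetupError = error
        }
        return
    }

    // ... continue exactly as in the sample: start the @RendererActor task,
    // hook up layerRenderer.onSpatialEvent, and so on ...
}

In the view that opened the space, an .onChange watching appModel.renderSetupError can then await dismissImmersiveSpace() and drive an alert. That at least gets you a message box, even though openImmersiveSpace itself still reports that the space opened.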
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit.
Specifically, I need to understand:
The exact rotation matrix applied to the front-facing camera's output in ARKit.
Whether this matrix is consistent across all iPhone models or if there are variations.
If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
How this rotation matrix relates to the transform property of ARFrame.camera.
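A small sketch for inspecting the per-frame camera pose (my own, using ARFaceTrackingConfiguration since that is what drives the front camera); it only shows where to read ARFrame.camera.transform and does not answer which fixed rotation ARKit bakes into the sensor output:

import ARKit

class FrontCameraInspector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Camera pose in world space; column 3 holds the translation.
        let transform = frame.camera.transform
        print("camera transform:", transform)
    }
}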
With the Enterprise API entitlements we can obtain the video frames captured by the Vision Pro's cameras, and we then process those frames to remove a certain object from the scene. But we have not found a way to insert the processed video frames back into the passthrough feed produced by the system camera.
So I would like to ask: is there any API that can insert processed video frames into the original data and present them to the user?
The desired effect is similar to turning the Digital Crown on the right side of the Vision Pro, which lets the physical world and digital space blend seamlessly. Is there a related API that can solve this problem?
STEPS TO REPRODUCE
Obtain video frames.
Process the obtained video frames.
Insert the processed video frames into the visionOS system camera feed.
System: visionOS 2.0
API used: Enterprise APIs, main camera access permissions
Hi all,
I am having trouble debugging an error where the wireframe object entity representation for the Object Tracking Demo: "Explore object tracking for visionOS" appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and the right eye. Could this be due to the new visionOS beta update (2.0 --> 2.2) ? I am currently using visionOS 2.2.
Thanks!
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .added, and subsequent detections of the same image are marked as .updated. However, after toggling the immersive space, the same image is detected with the status .added again. After updating to visionOS 2.1, the first detection status remains .added and subsequent detections of the same image remain .updated, even after toggling the immersive space. Could this be due to a change in the processing flow?
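For reference, a minimal sketch of the image-tracking flow in question (my own; the resource group name "photos" is an assumption), logging each update's event so the .added/.updated behavior can be observed across immersive space toggles:

import ARKit

let session = ARKitSession()

func runImageTracking() async throws {
    let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
    )
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        print("\(update.event) for image anchor \(update.anchor.id)")
    }
}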