Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

To what extent does ARKit face tracking still rely on the TrueDepth camera?
I'm exploring face tracking and experimenting with ARKit's ARSCNFaceGeometry face mesh. I'm running a minimal demo application on the latest iPad Pro M4 11-inch, and I've provided the code below. I've heard that Apple still offers some of the best face tracking technology on consumer devices, largely because they are one of the few that combine depth and image data.

Both a colleague and I tested the demo, and while it works as well as or better than some other solutions we tried, we weren't particularly impressed compared to Google's MediaPipe or Nvidia's Maxine, both of which rely solely on image data without depth. In our case, the ARKit face mesh doesn't always align perfectly with the chin, and as the face rotates, vertices in some areas shift by up to a centimeter from their original position.

This led us to question whether our demo app was using the TrueDepth sensor at all. To test this, we used a piece of cardboard with a small hole punched in it and taped it over the sensor array, leaving only the camera exposed. On the iOS lock screen this prevents Face ID from working, but we still get a clear image from the camera. With the TrueDepth sensor blocked, the face mesh tracking in our app still functioned, and honestly we couldn't detect a significant difference in tracking performance with or without the TrueDepth sensor obscured.

Could we be setting up the face tracking configuration incorrectly? Or has face tracking in newer versions of iOS become less dependent on the TrueDepth sensor?

The controller:

    import SwiftUI
    import ARKit

    struct FaceTrackingView1: UIViewControllerRepresentable {
        func makeUIViewController(context: Context) -> FaceTrackingViewController1 {
            return FaceTrackingViewController1()
        }

        func updateUIViewController(_ uiViewController: FaceTrackingViewController1, context: Context) {
        }
    }

    class FaceTrackingViewController1: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
        var sceneView: ARSCNView!

        override func viewDidLoad() {
            super.viewDidLoad()

            sceneView = ARSCNView(frame: view.bounds)
            sceneView.delegate = self
            sceneView.automaticallyUpdatesLighting = true
            view.addSubview(sceneView)

            let config = ARFaceTrackingConfiguration()
            sceneView.session.run(config)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }

        func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
            guard anchor is ARFaceAnchor else { return nil }

            let faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
            let faceNode = SCNNode(geometry: faceGeometry)
            faceNode.geometry?.firstMaterial?.fillMode = .lines // Makes it a wireframe mesh
            return faceNode
        }

        func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
            guard let faceAnchor = anchor as? ARFaceAnchor,
                  let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
            faceGeometry.update(from: faceAnchor.geometry)
        }
    }

The view:

    import SwiftUI

    struct ContentView: View {
        @State private var isFaceTrackingActive = false

        var body: some View {
            VStack {
                Text("Face mesh tracking demo")
                    .font(.title)
                    .padding()

                Button(action: {
                    isFaceTrackingActive.toggle()
                }) {
                    Text("Start Face Tracking")
                        .font(.title2)
                        .padding()
                        .background(Color.blue)
                        .foregroundColor(.white)
                        .cornerRadius(10)
                }
                .fullScreenCover(isPresented: $isFaceTrackingActive) {
                    FaceTrackingView1()
                }
            }
            .padding()
        }
    }

    #Preview {
        ContentView()
    }
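One hedged way to check whether TrueDepth data is reaching the session at all is to inspect ARFrame.capturedDepthData from the session delegate. The sketch below assumes it is added as a method of the FaceTrackingViewController1 above and that sceneView.session.delegate = self is set in viewDidLoad.

    // Hedged diagnostic sketch: ARFrame.capturedDepthData is only populated for
    // front-camera (TrueDepth) configurations, and only on a subset of frames,
    // so "sometimes non-nil" means depth data is flowing into the session.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let depth = frame.capturedDepthData {
            let map = depth.depthDataMap
            print("TrueDepth frame \(CVPixelBufferGetWidth(map))x\(CVPixelBufferGetHeight(map)), " +
                  "quality: \(depth.depthDataQuality == .high ? "high" : "low")")
        }
    }

Note that this only tells you the depth stream is being delivered, not how heavily the face-mesh solver weights it relative to the image data.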
Replies: 1 · Boosts: 0 · Views: 373 · Oct ’24

RealityKit HasTransform.position throws a runtime exception
I am working on a React Native app, specifically on the iOS native module with RealityKit. An apparently unexplainable error keeps happening at runtime, as you can see from the attached image (crash log from Xcode). When I try to retrieve the position of my AnchorEntity relative to world space (using relativeTo: nil), it triggers a runtime exception in one of its internal calls:

    CoreRE: re::BucketArray<unsigned short*, 32ul>::operator[](unsigned long) + 204

As you can see from the code, my AnchorEntity is not nil, as there is a guard check. I also tried to move that code into an Objective-C static function in order to use @try/@catch and catch runtime exceptions, only to later realize that RealityKit is not compatible with Objective-C. Do you have any idea or suggestion on how to fix or prevent this?
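A hedged sketch of a defensive wrapper (the anchorEntity parameter name is just illustrative): crashes inside position(relativeTo:) are often triggered by querying an entity that is not yet, or no longer, part of a scene, or by touching RealityKit state off the main thread, so checking scene membership and staying on the main actor is a reasonable first mitigation.

    import RealityKit

    // Hedged sketch: only read world-space transforms once the entity is actually
    // attached to a scene, and do it on the main actor, since RealityKit scene
    // state is not safe to query from arbitrary React Native callbacks.
    @MainActor
    func worldPosition(of anchorEntity: AnchorEntity?) -> SIMD3<Float>? {
        guard let anchorEntity, anchorEntity.scene != nil else {
            // Not anchored into a scene yet; position(relativeTo: nil) may not be valid.
            return nil
        }
        return anchorEntity.position(relativeTo: nil)
    }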
Replies: 0 · Boosts: 0 · Views: 252 · Oct ’24

Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.

Our specific use case involves:

1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.

We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
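For reference, here is a hedged sketch of the main-camera access pattern described in Apple's enterprise-API material for visionOS 2; names such as CameraFrameProvider and cameraFrameUpdates(for:) should be verified against the current SDK. It yields pixel buffers you can then encode and save yourself.

    import ARKit

    // Hedged sketch: stream left main-camera frames via the enterprise
    // CameraFrameProvider and hand each pixel buffer to your own saving code.
    func captureMainCameraFrames() async {
        let session = ARKitSession()
        let provider = CameraFrameProvider()

        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let format = formats.first else { return }

        do {
            try await session.run([provider])
        } catch {
            print("Failed to start camera access: \(error)")
            return
        }

        guard let updates = provider.cameraFrameUpdates(for: format) else { return }
        for await frame in updates {
            guard let sample = frame.sample(for: .left) else { continue }
            let pixelBuffer = sample.pixelBuffer
            // Convert/encode pixelBuffer (e.g. to HEIC) and save it for analysis here.
            _ = pixelBuffer
        }
    }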
Replies: 0 · Boosts: 0 · Views: 598 · Oct ’24

HandTracking
Hi guys, I'm currently developing a game for the Vision Pro and I'm trying to figure out how hand tracking works so I can make a superpower appear when the user looks at their hand and widens it. I'm really struggling to wrap my head around the whole concept and how to implement it in my code. Is there anything out there (other than the Apple documentation), or anyone who could help me shed some light on the whole idea and how I could actually implement it usefully? It would be much appreciated. Thanks!
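As a starting point, here is a hedged sketch of the visionOS hand-tracking flow with ARKit's HandTrackingProvider; the joint choice and the 12 cm "open hand" threshold are illustrative assumptions, not a ready-made gesture recognizer.

    import ARKit
    import simd

    // Hedged sketch: stream hand anchors and use the thumb-to-little-finger spread
    // as a rough "hand is open" signal.
    @MainActor
    final class HandTrackingModel {
        let session = ARKitSession()
        let handTracking = HandTrackingProvider()

        func start() async {
            guard HandTrackingProvider.isSupported else { return }
            do {
                try await session.run([handTracking])
            } catch {
                print("Failed to start hand tracking: \(error)")
                return
            }
            for await update in handTracking.anchorUpdates {
                let anchor = update.anchor
                guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
                // Joint transforms are expressed relative to the hand anchor.
                let thumb = skeleton.joint(.thumbTip).anchorFromJointTransform.columns.3
                let little = skeleton.joint(.littleFingerTip).anchorFromJointTransform.columns.3
                let spread = simd_distance(SIMD3(thumb.x, thumb.y, thumb.z),
                                           SIMD3(little.x, little.y, little.z))
                if spread > 0.12 { // ~12 cm, arbitrary threshold for illustration
                    // Trigger the superpower effect here.
                }
            }
        }
    }

Combining this with "the user looks at their hand" usually means checking the hand anchor's position against the device pose or using gaze-driven UI, which is a separate step not shown here.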
Replies: 1 · Boosts: 0 · Views: 422 · Oct ’24

How to Display Transparent GIF on a 3D Point in AR Environment?
I’m currently working on a project where I need to display a transparent GIF at a specific 3D point within an AR environment. I’ve tried various methods, including using RealityKit with UnlitMaterial and TextureResource, but the GIF still appears with a black background instead of being transparent. Does anyone have experience with displaying animated transparent GIFs on an AR plane or point? Any guidance on how to achieve true transparency for GIFs in ARKit or RealityKit would be appreciated.
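A hedged sketch of the material setup that usually fixes the black background (GIF frame decoding via ImageIO; per-frame animation, i.e. swapping textures on a timer, and the exact availability of these RealityKit properties on your target OS are left to verify):

    import Foundation
    import RealityKit
    import ImageIO
    import UIKit

    // Hedged sketch: decode the first GIF frame and show it on an unlit,
    // alpha-blended plane. Without transparent blending the alpha channel is
    // ignored and the GIF renders over black.
    func makeGIFPlane(from url: URL) throws -> ModelEntity {
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let firstFrame = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
            throw URLError(.cannotDecodeContentData)
        }
        let texture = try TextureResource.generate(from: firstFrame,
                                                   options: .init(semantic: .color))

        var material = UnlitMaterial()
        material.color = .init(tint: .white, texture: .init(texture))
        material.blending = .transparent(opacity: .init(floatLiteral: 1.0))

        return ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.3),
                           materials: [material])
    }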
Replies: 2 · Boosts: 0 · Views: 516 · Oct ’24

Apple's Object Capture
We are currently using Apple's Object Capture module and wonder if it would be possible to collect the following data:

- Device information
- Current translation / rotation
- Focal length embedded in the image headers
- GPS localisation information
- Information about the exposure time
- White balances and the color correction matrices

We also have two additional questions: Is there an option to block close-up accommodation of the camera? Is there a way for the Object Capture module to take a video instead of a series of pictures?
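Several of those values (focal length, exposure time, ISO, GPS) end up in the EXIF/GPS headers of the images that Object Capture writes, so a hedged fallback is to read them back with ImageIO; whether every field is populated depends on the capture pipeline.

    import ImageIO

    // Hedged sketch: dump per-shot metadata from a saved capture image.
    func printCaptureMetadata(for imageURL: URL) {
        guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
              let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] else {
            return
        }
        if let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any] {
            print("Focal length:", exif[kCGImagePropertyExifFocalLength] ?? "n/a")
            print("Exposure time:", exif[kCGImagePropertyExifExposureTime] ?? "n/a")
            print("ISO:", exif[kCGImagePropertyExifISOSpeedRatings] ?? "n/a")
            print("White balance:", exif[kCGImagePropertyExifWhiteBalance] ?? "n/a")
        }
        if let gps = props[kCGImagePropertyGPSDictionary] as? [CFString: Any] {
            print("GPS:", gps)
        }
    }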
Replies: 1 · Boosts: 0 · Views: 634 · Oct ’24

How to add entities to a volume or immersive view programmatically?
I have created two scenes, one immersive and one volumetric, using Reality Composer Pro. In my test app I can view both and they render correctly. However, I would like to add entities programmatically. I am trying this:

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                viewModel.rootEntity = scene
                content.add(scene)

                var anchorEntity = AnchorEntity(world: [0, 0, -0.5])
                let sphere = MeshResource.generateSphere(radius: 2.0)
                let material = SimpleMaterial(color: .red, roughness: 0.5, isMetallic: true)
                let modelEntity = ModelEntity(mesh: sphere, materials: [material])
                anchorEntity.addChild(modelEntity)
                content.add(anchorEntity)
            }
        }
    }

However, the sphere does not appear in the volume. I also tried it in the immersive space and it does not appear there either. What am I missing?
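One hedged suspicion, since the rest of the scene setup isn't shown: generateSphere(radius: 2.0) produces a sphere 4 m across, which is larger than a typical volume and, in an immersive space, can end up surrounding the viewer, where back-face culling hides it. Shrinking it is a quick test, reusing the same content and anchor pattern from the snippet above:

    // Minimal test sketch inside the same RealityView closure: a 10 cm sphere
    // half a metre in front of the origin. If this appears, the original sphere
    // was simply too large for the volume / enclosing the viewer.
    let anchorEntity = AnchorEntity(world: [0, 0, -0.5])
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .red, roughness: 0.5, isMetallic: true)]
    )
    anchorEntity.addChild(sphere)
    content.add(anchorEntity)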
Replies: 1 · Boosts: 0 · Views: 438 · Oct ’24

Proper way of handling opening an ImmersiveSpace?
If you check the code here, https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough

    var body: some Scene {
        ImmersiveSpace(id: Self.id) {
            CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
                let pathCollection: PathCollection
                do {
                    pathCollection = try PathCollection(layerRenderer: layerRenderer)
                } catch {
                    fatalError("Failed to create path collection \(error)")
                }

                let tintRenderer: TintRenderer
                do {
                    tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
                } catch {
                    fatalError("Failed to create tint renderer \(error)")
                }

                Task(priority: .high) { @RendererActor in
                    Task { @MainActor in
                        appModel.pathCollection = pathCollection
                        appModel.tintRenderer = tintRenderer
                    }

                    let renderer = try await Renderer(layerRenderer,
                                                      appModel,
                                                      pathCollection,
                                                      tintRenderer)
                    try await renderer.renderLoop()

                    Task { @MainActor in
                        appModel.pathCollection = nil
                        appModel.tintRenderer = nil
                    }
                }

                layerRenderer.onSpatialEvent = {
                    pathCollection.addEvents(eventCollection: $0)
                }
            }
        }
        .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
        .upperLimbVisibility(appModel.upperLimbVisibility)
    }

the only way it deals with errors is fatalError, and I don't think I can throw anything or return anything else. Is there a way I can handle this gracefully and show a message box in the UI? I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure, but I couldn't find a nice way to do so. Let me know if you have ideas.
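A hedged alternative to fatalError (not the sample's own approach): record the failure on the shared app model and let an ordinary window scene present it. setupError is a hypothetical property added to the sample's AppModel for illustration.

    // Hedged sketch: bail out of the CompositorLayer closure instead of crashing,
    // and surface the error from a regular window.
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        let pathCollection: PathCollection
        do {
            pathCollection = try PathCollection(layerRenderer: layerRenderer)
        } catch {
            Task { @MainActor in
                appModel.setupError = "Failed to create path collection: \(error)"
            }
            return // skip rendering; a window can show the message and dismiss the space
        }
        // ... create TintRenderer, Renderer, and run the render loop as in the sample ...
    }

    // In a regular window scene, something like:
    // .alert("Could not open the immersive space",
    //        isPresented: Binding(get: { appModel.setupError != nil },
    //                             set: { if !$0 { appModel.setupError = nil } })) {
    //     Button("OK") { Task { await dismissImmersiveSpace() } }
    // } message: {
    //     Text(appModel.setupError ?? "")
    // }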
Replies: 1 · Boosts: 0 · Views: 543 · Oct ’24

Front-Facing Camera Rotation Matrix in ARKit: Consistency, Transformations, and ARFrame.camera Alignment
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit. Specifically, I need to understand:

- The exact rotation matrix applied to the front-facing camera's output in ARKit.
- Whether this matrix is consistent across all iPhone models or if there are variations.
- If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
- How this rotation matrix relates to the transform property of ARFrame.camera.
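ARKit doesn't document a per-model front-camera rotation matrix directly; a hedged, practical way to investigate the items above is to log ARFrame.camera.transform together with the display transform for your interface orientation on each device you test.

    import ARKit
    import UIKit

    // Hedged sketch: inspect the camera-to-world pose and the 2D display
    // transform ARKit applies for a given orientation; the front camera's
    // sensor orientation and mirroring show up in the latter.
    func logFrontCameraTransforms(_ frame: ARFrame, viewportSize: CGSize) {
        // Camera pose in world space (column-major 4x4; upper-left 3x3 is rotation).
        let cameraToWorld = frame.camera.transform
        print("camera.transform:\n\(cameraToWorld)")

        // The transform ARKit expects you to apply when presenting the captured
        // image for a portrait interface orientation.
        let display = frame.displayTransform(for: .portrait, viewportSize: viewportSize)
        print("displayTransform (portrait): \(display)")
    }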
Replies: 0 · Boosts: 0 · Views: 613 · Oct ’24

How to insert modified video frames into the system camera to achieve an AI erase effect?
By applying for the Enterprise APIs, we can obtain the video frames captured by the Vision Pro, and we then process those frames to remove a certain object. However, we have not found a way to insert the processed video frames back into the data source of the system camera. Is there an API that can insert processed video frames into the original stream and present them to the user? The effect we are after is similar to turning the Digital Crown on the right side of the Vision Pro, which blends the physical world and digital space after rotation. Is there a related API that can solve this problem?

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera.

System: visionOS 2.0
API used: Enterprise APIs (main camera access permission)
Replies: 1 · Boosts: 0 · Views: 554 · Nov ’24

Eye Difference in Object Tracking
Hi all, I am having trouble debugging an issue where the wireframe object entity representation from the object tracking demo "Explore object tracking for visionOS" appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and the right eye. Could this be due to the new visionOS beta update (2.0 → 2.2)? I am currently using visionOS 2.2. Thanks!
Replies: 1 · Boosts: 0 · Views: 375 · Nov ’24

ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling immersiveSpace, the same image is detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add and subsequent detections of the same image remain .update, even after toggling immersiveSpace. Could this be due to a change in the processing flow?
Replies: 2 · Boosts: 0 · Views: 430 · Nov ’24

Access to raw LiDAR point cloud
Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines them with visual data? In low-light environments the LiDAR scanner should still work and provide depth information, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:

    guard let frame = arView.session.currentFrame,
          let depthData = frame.sceneDepth?.depthMap else {
        print("Depth data is unavailable.")
        return
    }

but this is the depth data after sensor fusion occurs, and it fails in low-light conditions.
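The pre-fusion LiDAR samples don't appear to be exposed publicly; the closest hedged workaround is to read the confidenceMap that ships with sceneDepth, which at least shows where ARKit itself distrusts the fused depth (typically the low-light regions).

    import ARKit

    // Hedged sketch: count low-confidence pixels in the fused depth map. This
    // does not recover raw LiDAR returns, but it flags which depth values ARKit
    // considers unreliable.
    func logDepthConfidence(for frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth,
              let confidenceMap = sceneDepth.confidenceMap else { return }

        CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

        let width = CVPixelBufferGetWidth(confidenceMap)
        let height = CVPixelBufferGetHeight(confidenceMap)
        let rowBytes = CVPixelBufferGetBytesPerRow(confidenceMap)
        guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return }

        // Each pixel is a UInt8 matching ARConfidenceLevel (0 = low, 1 = medium, 2 = high).
        var lowCount = 0
        for y in 0..<height {
            let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: UInt8.self)
            for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.low.rawValue) {
                lowCount += 1
            }
        }
        print("Low-confidence depth pixels: \(lowCount) of \(width * height)")
    }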
Replies: 1 · Boosts: 0 · Views: 952 · Nov ’24

ObjectCapture from ARKit
We are currently using ObjectCapture from ARKit, and we would like to fix the exposure time, white balance parameters and ISO. How can we do this? Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
Replies: 0 · Boosts: 0 · Views: 513 · Nov ’24

Info available from CMSampleBuffer
Which information will I get from CMSampleBuffer? Is there an option to block close-up accommodation of the camera? Is there a way for the Object Capture module to take a video instead of a series of pictures? It would be fantastic to have an answer to all of these questions so that we can move forward on new implementations.
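On the CMSampleBuffer question, here is a hedged sketch of what you can reliably pull out of a video sample buffer (timing, format, pixel data, attachments); per-shot exposure and white balance generally travel separately, e.g. via AVCaptureDevice observation or the EXIF of saved files.

    import CoreMedia
    import CoreVideo

    // Hedged sketch: dump the basics carried by a video CMSampleBuffer.
    func describe(_ sampleBuffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        print("Presentation time: \(CMTimeGetSeconds(pts)) s")

        if let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer) {
            let dims = CMVideoFormatDescriptionGetDimensions(formatDesc)
            print("Dimensions: \(dims.width) x \(dims.height)")
        }

        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            print("Pixel format: \(CVPixelBufferGetPixelFormatType(pixelBuffer))")
        }

        // Attachments sometimes carry extra per-frame info (e.g. EXIF-style metadata).
        if let attachments = CMCopyDictionaryOfAttachments(allocator: nil,
                                                           target: sampleBuffer,
                                                           attachmentMode: kCMAttachmentMode_ShouldPropagate) {
            print("Attachments: \(attachments)")
        }
    }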
Replies: 0 · Boosts: 0 · Views: 352 · Nov ’24

Post-processing in visionOS
WWDC21 had a cool demo project with fish that had a watery, misty look (Dive into RealityKit). It used post-processing in RealityKit, but the ARView class isn't available in visionOS. Can CompositorLayer be used instead for post-processing in full immersion?
Replies: 0 · Boosts: 0 · Views: 289 · Nov ’24

visionOS DockingRegion getting ignored
Hi, I added a DockingRegion to my scene from Reality Composer Pro, and I am able to load the scene, but the DockingRegion is getting ignored and the scene is rendered with no change in the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro. Is there anything else I should do other than loading the Entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion from within the usda file and somehow enable it?
Replies: 3 · Boosts: 0 · Views: 478 · Dec ’24