Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Retrieve AnchorEntity (Hand Locations) Position through update(context: Scene) function
Hello there, I'm currently working on a hand-tracking system. I've already placed spheres on some joint points of the left and right hand. Now I want to access the translation/position values of these entities in the update(context: Scene) function. My question is: is it possible to access them via .handAnchors(), and which .handSkeleton.joint(name) values reference the same entities? (E.g., is AnchorEntity(.hand(.right, location: .indexFingerTip)) the same as handSkeleton.joint(.indexFingerTip)?) The goal is to access the translation of the joints where a sphere has been placed, per hand, and to update that data every frame through the update(context:) function. I would very much appreciate any help! See the code example below:

ImmersiveView.swift

import SwiftUI
import RealityKit
import ARKit

struct ImmersiveView: View {
    public var body: some View {
        RealityView { content in
            /* HEAD */
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            /* LEFT HAND */
            let leftHandWristEntity = AnchorEntity(.hand(.left, location: .wrist))
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandWristSphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                                  materials: [SimpleMaterial(color: .red, isMetallic: false)])
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                                        materials: [SimpleMaterial(color: .orange, isMetallic: false)])

            leftHandWristEntity.addChild(leftHandWristSphere)
            content.add(leftHandWristEntity)
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            content.add(leftHandIndexFingerEntity)
        }
    }
}

TrackingSystem.swift

import SwiftUI
import simd
import ARKit
import RealityKit
import QuartzCore  // added for CACurrentMediaTime(); the original snippet left currentTime undefined

public class TrackingSystem: System {
    static let query = EntityQuery(where: .has(AnchoringComponent.self))

    private let arKitSession = ARKitSession()
    private let worldTrackingProvider = WorldTrackingProvider()
    private let handTrackingProvider = HandTrackingProvider()

    public required init(scene: RealityKit.Scene) {
        setUpSession()
    }

    private func setUpSession() {
        Task {
            do {
                try await arKitSession.run([worldTrackingProvider, handTrackingProvider])
            } catch {
                print("Error: \(error)")
            }
        }
    }

    public func update(context: SceneUpdateContext) {
        guard worldTrackingProvider.state == .running && handTrackingProvider.state == .running else { return }

        let currentTime = CACurrentMediaTime()  // timestamp for this frame
        let _ = context.entities(matching: Self.query, updatingSystemWhen: .rendering)

        if let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: currentTime) {
            let hands = handTrackingProvider.handAnchors(at: currentTime)
            // ...
        }
    }
}
2 replies · 0 boosts · 731 views · Aug ’24
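A minimal sketch for the hand-tracking question above, assuming visionOS and a running HandTrackingProvider like the one in the post: an AnchorEntity(.hand(.right, location: .indexFingerTip)) and handSkeleton.joint(.indexFingerTip) should follow the same fingertip, but the joint gives you a transform you can read every frame inside update(context:). The helper name worldPosition(of:using:) is illustrative, not API.

import ARKit
import QuartzCore
import simd

/// Returns the world-space position of one right-hand joint, or nil if it isn't tracked.
func worldPosition(of jointName: HandSkeleton.JointName,
                   using provider: HandTrackingProvider) -> SIMD3<Float>? {
    let hands = provider.handAnchors(at: CACurrentMediaTime())
    guard let hand = hands.rightHand,
          let skeleton = hand.handSkeleton else { return nil }

    let joint = skeleton.joint(jointName)
    guard joint.isTracked else { return nil }

    // World transform = hand anchor's origin transform * joint transform in anchor space.
    let world = hand.originFromAnchorTransform * joint.anchorFromJointTransform
    return SIMD3<Float>(world.columns.3.x, world.columns.3.y, world.columns.3.z)
}

// Inside TrackingSystem.update(context:), for example:
// if let tip = worldPosition(of: .indexFingerTip, using: handTrackingProvider) { ... }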
Vision Pro - Throw object by hand
Hello all, I'm desperate to find a solution and I need your help, please. I've created a simple cube in visionOS. I can grab it by hand (close my hand on it) and move it pretty much wherever I want. But I would like to throw it (for example, like a basketball): not push it, but hold it in my hand and throw it away from me with a velocity and direction matching my hand's movement (with fingers opened to release it). Please point me in the right direction. Cheers and thanks, Mathis
9 replies · 0 boosts · 1.5k views · Aug ’24
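A hedged sketch for the throw question above: estimate the hand's velocity from its recent positions and hand that to the cube's physics body at the moment of release. This assumes the cube already has a PhysicsBodyComponent and collision shapes; the function and parameter names are illustrative.

import RealityKit
import simd

/// Call when the fingers open: switch the cube to a dynamic body and give it the hand's velocity.
func throwEntity(_ cube: ModelEntity,
                 currentHandPosition: SIMD3<Float>,
                 previousHandPosition: SIMD3<Float>,
                 deltaTime: Float) {
    // Simple finite-difference velocity; averaging a few frames gives a smoother throw.
    let velocity = (currentHandPosition - previousHandPosition) / deltaTime

    if var body = cube.components[PhysicsBodyComponent.self] {
        body.mode = .dynamic        // let physics take over after release
        cube.components.set(body)
    }
    cube.components.set(PhysicsMotionComponent(linearVelocity: velocity))
}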
Object Tracking with RealityView
When I wanted to load the Reality Composer Pro scene containing Object Tracking, I tried the following code:

RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}

Obviously, this is wrong. We need to add some configuration that enables Object Tracking for the RealityView. What do we need to add? Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't know much about it.
3 replies · 1 boost · 1.1k views · Jun ’24
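A hedged sketch for the object-tracking question above, assuming the visionOS 2 APIs shown in the WWDC24 session linked in the post: a RealityView alone doesn't start tracking; you also run an ARKitSession with an ObjectTrackingProvider built from a .referenceobject file trained in Create ML. The asset name "MyObject.referenceobject" is an assumption for illustration.

import ARKit
import Foundation

let session = ARKitSession()

func runObjectTracking() async throws {
    // "MyObject.referenceobject" is an assumed asset bundled with the app.
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // update.anchor is an ObjectAnchor; use its originFromAnchorTransform to
        // position the Reality Composer Pro content added in the RealityView.
        print(update.event, update.anchor.originFromAnchorTransform)
    }
}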
Access persona on visionOS
I am planning to build a visionOS app and need access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
3 replies · 0 boosts · 1.1k views · Apr ’24
RoomPlan Framework v2 - Stairs missing
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
3 replies · 1 boost · 823 views · Mar ’24
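A hedged sketch related to the RoomPlan post above: one way to narrow the problem down is to check whether the CapturedRoom contains any objects with the .stairs category at all, which separates a detection change in v2 from a rendering issue in the app. This assumes you already receive a CapturedRoom from the capture delegate.

import RoomPlan

/// Logs whether the scan produced any stair objects.
func logStairs(in room: CapturedRoom) {
    let stairs = room.objects.filter { $0.category == .stairs }
    if stairs.isEmpty {
        print("No .stairs objects in this CapturedRoom")
    } else {
        for stair in stairs {
            print("Stairs detected, dimensions: \(stair.dimensions)")
        }
    }
}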
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
3 replies · 0 boosts · 1.2k views · Mar ’24
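A hedged sketch for the MeshAnchor question above: the classifications GeometrySource is commonly read as one UInt8 per face straight out of its Metal buffer and mapped back to MeshAnchor.MeshClassification. The per-face UInt8 layout is an assumption about the buffer, not something I can point to in the documentation.

import ARKit
import Metal

/// Returns the classification of one face of a MeshAnchor, if available.
func classification(ofFace faceIndex: Int,
                    in geometry: MeshAnchor.Geometry) -> MeshAnchor.MeshClassification? {
    guard let source = geometry.classifications,          // assumed: one value per face
          faceIndex < source.count else { return nil }

    let pointer = source.buffer.contents()
        .advanced(by: source.offset + source.stride * faceIndex)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(rawValue))
}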
How do we author a "reality file" like the ones on Apple's Gallery?
How do we author a Reality File like the ones under Examples with animations at https://developer.apple.com/augmented-reality/quick-look/ ? For example, "The Hab": https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer, and I don't see any way to export/compile to a "reality file" from within Xcode. How can I use multiple animations within a single GLTF file? How can I set up multiple "tap targets" on a single object, where each one triggers a different action? How do we author something similar? What tools do we use? Thanks
6 replies · 2 boosts · 1.9k views · Nov ’23
When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet streaming use case. However, it's impossible: as soon as ARKit is told to use one mode, the camera for the other side freezes/doesn't work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc A question to the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay? It's possible to do this with plain camera initialization without ARKit (there's an official example). With ARKit, it no longer works. It's strange that I cannot access the front feed via one of the other frameworks, but I guess that ARKit blocks that.
3 replies · 0 boosts · 1k views · Jul ’23
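A hedged sketch of the non-ARKit route the post above mentions: AVCaptureMultiCamSession can stream the front and back cameras at the same time (without world tracking), wired with explicit connections roughly along the lines of Apple's AVMultiCamPiP sample.

import AVFoundation

/// Builds a session with one wide camera per side, each connected to its own video output.
func makeDualCameraSession() throws -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    for position in [AVCaptureDevice.Position.back, .front] {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position) else { continue }
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input) else { continue }
        session.addInputWithNoConnections(input)

        let output = AVCaptureVideoDataOutput()
        guard session.canAddOutput(output) else { continue }
        session.addOutputWithNoConnections(output)

        // Connect this camera's video port to its own output explicitly.
        guard let port = input.ports(for: .video,
                                     sourceDeviceType: device.deviceType,
                                     sourceDevicePosition: device.position).first else { continue }
        let connection = AVCaptureConnection(inputPorts: [port], output: output)
        if session.canAddConnection(connection) {
            session.addConnection(connection)
        }
    }
    return session
}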
Is SceneKit deprecated?
Hi everyone! I am working on an AR app and wanted to implement object occlusion because it pretty much removes drift from the object. This works great with the RealityKit sample, but I am unable to replicate such behaviour with SceneKit, because SceneKit does not offer object occlusion. Can we say SceneKit is getting deprecated and that we should rewrite the app in RealityKit (which is obviously a big task)?
5 replies · 0 boosts · 1.5k views · Jan ’22
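A hedged sketch of the RealityKit occlusion setup the post above refers to, for an iOS ARView on a LiDAR device: scene reconstruction plus the .occlusion scene-understanding option. SceneKit has no direct equivalent of this option.

import ARKit
import RealityKit

/// Turns on mesh-based occlusion of virtual content by the real world.
func configureOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh   // LiDAR devices only
    }
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}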
What causes »ARSessionDelegate is retaining X ARFrames« console warning?
Hi, since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped« even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate what causes this warning and what can be done to avoid it? If I remember correctly I didn't even assign an ARSessionDelegate. Thank you!
4 replies · 1 boost · 3.3k views · Nov ’21
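A hedged note on the warning above: it is generally reported when ARFrame instances (or their pixel buffers) are kept alive too long, which starves the camera's fixed buffer pool. A sketch of a delegate that copies the values it needs instead of retaining frames:

import ARKit

final class FrameHandler: NSObject, ARSessionDelegate {
    // Store plain values; keeping `frame` itself in a property or array would
    // hold a camera buffer and can provoke the "retaining ARFrames" warning.
    private(set) var latestCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        latestCameraTransform = frame.camera.transform
        // Avoid dispatching `frame` to a slow queue or capturing it in long-lived closures.
    }
}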
How is the depth map aligned to the RGB image?
I want to know whether the depth map and the RGB image are perfectly aligned (do both have the same principal point)? If yes, how is the depth map created? The depth map on iPhone 12 has 256x192 resolution, as opposed to the RGB image (1920x1440). I am interested in exact pixel-wise depth. Is it possible to get a raw depth map at 1920x1440 resolution? How is the depth map created at 256x192 resolution? Behind the scenes, does the pipeline capture it at 1920x1440 and then resize it to 256x192? I have so many questions, as there are no intrinsics, extrinsics, or calibration data given for the LiDAR. I would greatly appreciate it if someone could explain the steps from a computer-vision perspective. Many thanks
6 replies · 2 boosts · 3.4k views · Aug ’21
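A hedged sketch for the depth question above: ARKit exposes the LiDAR depth as ARFrame.sceneDepth at 256x192, and a common way to relate it to the 1920x1440 RGB image is to scale the camera intrinsics down to depth-map resolution (both share the same 4:3 aspect ratio and principal-point ratio). Whether the hardware captures depth at a higher resolution first is not documented, so this only covers the public API.

import ARKit
import CoreVideo

/// Prints the depth-map width and the intrinsics rescaled into depth-map pixel units.
/// Requires ARWorldTrackingConfiguration.frameSemantics to include .sceneDepth.
func inspectDepth(of frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }
    let depthMap = sceneDepth.depthMap                      // CVPixelBuffer, Float32 metres

    let depthWidth = Float(CVPixelBufferGetWidth(depthMap))
    let rgbWidth = Float(frame.camera.imageResolution.width)
    let scale = depthWidth / rgbWidth                       // e.g. 256 / 1920

    var intrinsics = frame.camera.intrinsics                // given for the RGB resolution
    intrinsics[0][0] *= scale                               // fx
    intrinsics[1][1] *= scale                               // fy
    intrinsics[2][0] *= scale                               // cx
    intrinsics[2][1] *= scale                               // cy
    print("Depth map", Int(depthWidth), "px wide; intrinsics in depth pixels:", intrinsics)
}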