Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post

Replies

Boosts

Views

Activity

Is there any way I can use the object tracking capabilities in an iOS (iPhone) AR app?
I have been referencing the Object Tracking tutorial from WWDC 2024 for visionOS, which shows how Create ML is used to create a reference object that can then be tracked in an ARKit session. I would like to build this feature into an AR app for iPhone (I am using an iPhone 13 Pro Max), and I have already created a couple of reference objects with Create ML.
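For reference, on iOS the closest built-in path is ARKit's object detection with ARReferenceObject (.arobject files) rather than the visionOS ObjectTrackingProvider. A minimal sketch, assuming a resource group named "MyObjects" in the asset catalog (the group name is a placeholder):

import ARKit

func runObjectDetection(on session: ARSession) {
    // Load .arobject reference objects from an asset catalog resource group.
    // "MyObjects" is a placeholder name for illustration.
    guard let referenceObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "MyObjects", bundle: nil) else {
        print("Failed to load reference objects")
        return
    }

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = referenceObjects
    session.run(configuration)
}

// Detected objects then arrive as ARObjectAnchor values in the session delegate:
// func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
//     for case let objectAnchor as ARObjectAnchor in anchors {
//         print("Detected \(objectAnchor.referenceObject.name ?? "object")")
//     }
// }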
1
0
302
Mar ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears only to show how to recognize specific gestures. Could you please advise how to obtain the transform and rotation angle of the user's hand? Thank you.
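For context, a minimal sketch of reading a hand's transform and per-joint rotation from HandTrackingProvider (the joint chosen and the printed values are illustrative; hand-tracking authorization is required):

import ARKit
import RealityKit

func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked else { continue }

        // World-space transform of the hand anchor.
        let handTransform = Transform(matrix: anchor.originFromAnchorTransform)
        print("\(anchor.chirality) hand position: \(handTransform.translation), rotation: \(handTransform.rotation)")

        // Transform of an individual joint, composed into world space.
        if let skeleton = anchor.handSkeleton {
            let indexTip = skeleton.joint(.indexFingerTip)
            if indexTip.isTracked {
                let worldMatrix = anchor.originFromAnchorTransform * indexTip.anchorFromJointTransform
                print("Index tip rotation (quaternion): \(Transform(matrix: worldMatrix).rotation)")
            }
        }
    }
}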
1
0
415
Mar ’25
ARPlaneGeometry: negative triangle indices?
Hello. When processing an ARPlaneAnchor's geometry through its ARPlaneGeometry, the triangleIndices property is an array of Int16. It is supposed to be an index buffer, but a Metal index buffer can only be uint16 or uint32. What am I supposed to do with negative indices? They are rare, but they do appear sometimes. Thank you.
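Not an official answer, but one common workaround sketch, assuming the negative values are simply the same 16-bit pattern read as signed: reinterpret each Int16 as a UInt16 before building the Metal index buffer.

import ARKit
import Metal

func makeIndexBuffer(from geometry: ARPlaneGeometry, device: MTLDevice) -> MTLBuffer? {
    // ARPlaneGeometry.triangleIndices is [Int16]; Metal expects unsigned indices.
    // UInt16(bitPattern:) keeps the underlying bits, so e.g. -32768 becomes 32768.
    let unsignedIndices = geometry.triangleIndices.map { UInt16(bitPattern: $0) }
    return device.makeBuffer(bytes: unsignedIndices,
                             length: unsignedIndices.count * MemoryLayout<UInt16>.stride,
                             options: [])
}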
1
0
187
Mar ’25
ARMeshAnchors are very unreliable on iPad Pro (4th gen)
Hello, we are developing an AR app that requires the LiDAR meshes. Unfortunately, the ARMeshAnchors that allow us to retrieve the mesh data are very unreliable. Very often the ARSession removes all ARMeshAnchors, and they take anywhere from 5 s to 30 s to reappear. Plane detection (ARPlaneAnchor) keeps working fine and camera tracking also works normally. I tried a basic ARKit sample app and got the same behaviour as in our own app. Is this a known issue? Is there anything we can do to mitigate it? Thank you.
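For context, a minimal sketch of the kind of basic scene-reconstruction setup described above (not tied to any specific sample):

import ARKit

func runMeshSession(_ session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    configuration.planeDetection = [.horizontal, .vertical]
    session.run(configuration)
}

// Mesh updates arrive as ARMeshAnchor values via the session delegate:
// func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
//     let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
//     // meshAnchor.geometry holds the vertices, normals, and faces
// }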
1
0
285
Mar ’25
Adding reference image failed in visionOS
I am trying out ARKit image tracking on Vision Pro, but there seems to be a problem when adding a reference image. Here is my code:

let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
try await appState!.arkitSession.run([imageTracking])

It successfully prints the images, but sometimes it also prints an error message like this:

ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.

When this error message is printed, the corresponding image cannot be tracked. I do not understand why this happens, because sometimes the image is added successfully and other times it is not, even for the same image, which makes my app unstable. There are also some other error messages, and I do not know whether they are related:

ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
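For context, a minimal sketch of the surrounding visionOS setup, assuming images is the array loaded above (names are illustrative):

import ARKit
import RealityKit

func runImageTracking(session: ARKitSession, images: [ReferenceImage]) async throws {
    let imageTracking = ImageTrackingProvider(referenceImages: images)
    try await session.run([imageTracking])

    for await update in imageTracking.anchorUpdates {
        let anchor = update.anchor
        // The transform is only meaningful while the image is actively tracked.
        if anchor.isTracked {
            let transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Tracking image at \(transform.translation)")
        }
    }
}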
1
0
323
Mar ’25
VNDetectHumanBodyPose3DRequest failing on Vision Pro
Hi! I tried running the sample project for detecting human body poses in 3D with the Vision framework, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision. It works perfectly on my MacBook Pro M1 but fails on Apple Vision Pro. After selecting a photo, an endless loading screen is displayed and the following messages are produced in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

Is human pose detection expected to work on visionOS? Is there any special configuration required that I might be missing?
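For reference, a minimal sketch of what the request in that sample roughly boils down to (the joint name is chosen for illustration):

import Vision

func detect3DPose(in imageURL: URL) throws {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])

    guard let observation = request.results?.first else {
        print("No body detected")
        return
    }
    // Each recognized point carries a 3D position relative to the body's root joint.
    let root = try observation.recognizedPoint(.root)
    print("Root joint position: \(root.position)")
}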
1
0
85
Mar ’25
Convert entity to image
I am developing for VisionPro with Xcode. I want to get an image of a model that is displayed in a window. The model can change its material, so it does not have a single fixed appearance; it keeps changing. How can I convert this model into an image?
1
0
55
Apr ’25
[VisionPro] Placing virtual entities around my arm
Hi everyone, I'm developing a MR Vision Pro app where I’d like to anchor virtual objects (such as UI elements) around the user's arm. However, I’ve noticed that Vision Pro seems to mask out the area where the user’s real arm is, hiding virtual content in that region so that you see your real arm. Is there a way to render virtual elements on the user's arm—so that it looks like the object is placed directly on the arm despite the real-world passthrough? I was hoping there might be a way to adjust the depth or behavior of this masked-out region. Any insights or workarounds would be greatly appreciated! Thanks :)
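One thing that may be worth trying (a sketch, assuming the goal is simply to stop the system's hand and arm cutout from occluding virtual content): visionOS lets an immersive space ask for the passthrough upper limbs to be hidden.

import SwiftUI
import RealityKit

@main
struct ArmOverlayApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "ArmOverlay") {
            ArmOverlayView()
        }
        // Ask the system not to composite the user's hands/arms over virtual content.
        .upperLimbVisibility(.hidden)
    }
}

struct ArmOverlayView: View {
    var body: some View {
        RealityView { _ in
            // Add entities anchored near the arm here (e.g. driven by a HandTrackingProvider).
        }
    }
}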
1
0
69
Mar ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository, the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here's what I'm aiming for:
Realistic colors and textures that match what the camera sees during the scan.
Continuous texture rendering during the scanning process, not just in the final exported model.
Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
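For what it's worth, a rough sketch of one common approach, projecting each scanned point into the current camera image to look up its color (the orientation and sampling details are assumptions and will need adapting):

import ARKit
import UIKit

// Returns the pixel coordinate in the captured camera image for a world-space point,
// or nil if the point projects outside the image.
func imageCoordinate(for worldPoint: simd_float3, in frame: ARFrame) -> CGPoint? {
    let imageSize = frame.camera.imageResolution
    // capturedImage is delivered in landscape-right orientation.
    let point = frame.camera.projectPoint(worldPoint,
                                          orientation: .landscapeRight,
                                          viewportSize: imageSize)
    guard point.x >= 0, point.x < imageSize.width,
          point.y >= 0, point.y < imageSize.height else { return nil }
    return point
    // The RGB value can then be sampled at this coordinate after converting
    // frame.capturedImage (YCbCr) to an RGB bitmap, e.g. via CIImage/CIContext,
    // and stored per vertex to color the live preview and the exported mesh.
}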
1
0
77
May ’25
Can not remove final World Anchor
I've been having some issues removing anchors. I can add anchors with no issue, and they are there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I'm using is to call removeAnchor() on the data provider.

worldTracking.removeAnchor(forID: uuid) // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`

This works if there is more than one anchor in the scene. When I'm down to one remaining anchor, the removal appears to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor.

do {
    // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}

I posted a video on my website if you want to see it happening: https://stepinto.vision/labs/lab-051-issues-with-world-tracking/

Here is the full code. Can you see if I'm doing something wrong? Is this a bug?

struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)
        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
1
2
151
May ’25
RealityRenderer's Perspective Camera's FOV
Hi, I have been using RealityRenderer to render scenes on macOS as spatial videos and view them on Vision Pro, and it is awesome. I understand that it uses a PerspectiveCamera to render. What is the default FOV for this camera, and how far can we push it? Ideally I want to render a scene with a 180-degree FOV. Thanks.
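For context, the field of view on RealityKit's perspective camera is exposed through PerspectiveCameraComponent. A minimal sketch of a camera entity with a wider FOV follows; how RealityRenderer picks it up as the active camera is an assumption, and whether a given value is honored is exactly the open question here.

import RealityKit

// A camera entity with an explicit vertical field of view (in degrees).
let camera = PerspectiveCamera()
camera.camera = PerspectiveCameraComponent(near: 0.01,
                                           far: 1000,
                                           fieldOfViewInDegrees: 120)
// The entity would then be added to the rendered scene and used as its camera
// (assumption: the renderer treats this entity as the active camera).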
1
0
99
May ’25
I need to loop my videoMaterial.
I need to loop my videoMaterial and I don't know how to make it happen in my code. I have included an image of my videoMaterial code. Any help making this happen will be greatly appreciated. Thank you, Christopher
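Without seeing the code image, a common looping sketch with VideoMaterial uses AVQueuePlayer plus AVPlayerLooper (the file name below is a placeholder):

import AVFoundation
import RealityKit

final class LoopingVideo {
    private let player = AVQueuePlayer()
    private var looper: AVPlayerLooper?   // must be retained or looping stops
    let material: VideoMaterial

    init(url: URL) {
        let item = AVPlayerItem(url: url)
        looper = AVPlayerLooper(player: player, templateItem: item)
        material = VideoMaterial(avPlayer: player)
        player.play()
    }
}

// Usage (placeholder file name):
// let video = LoopingVideo(url: Bundle.main.url(forResource: "myClip", withExtension: "mp4")!)
// modelEntity.model?.materials = [video.material]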
1
0
91
Jun ’25
Tracking multiple ImageAnchors simultaneously on visionOS
Using the example code posted here: https://developer.apple.com/documentation/visionOS/tracking-images-in-3d-space, I can register multiple ReferenceImages with an ImageTrackingProvider, but only one updates at a time: to get real-time updates, I can only have one ImageAnchor in my field of view at a time. Is it possible to track multiple ImageAnchors at the same time in the same field of view, i.e. to have several ImageAnchors tracked and their entities updated to the anchors' transforms in the same frame on Apple Vision Pro?
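For reference, a sketch of how multiple anchors would typically be handled from the same update stream (whether the provider actually delivers simultaneous updates is the question being asked):

import ARKit
import RealityKit

@MainActor
final class ImageAnchorStore {
    // One entity per tracked image, keyed by the anchor's identifier.
    var entities: [UUID: Entity] = [:]

    func consume(_ provider: ImageTrackingProvider) async {
        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added:
                let entity = Entity()
                entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                entities[anchor.id] = entity
            case .updated:
                entities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
            case .removed:
                entities.removeValue(forKey: anchor.id)
            }
        }
    }
}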
1
1
124
3w
ARDepthData.confidenceMap only returns low confidence on certain devices
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option. Other LiDAR based apps have been tested with this device and the results are the same. No points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue. First reports of this new behavior occurred as early as iOS 18.4. Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
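For anyone reproducing this, a small sketch of how the confidence values can be inspected directly (assumes the session was configured with the .sceneDepth frame semantics):

import ARKit

// Counts how many pixels fall into each ARConfidenceLevel for a frame's confidence map.
func confidenceHistogram(for frame: ARFrame) -> [Int: Int] {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return [:] }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return [:] }

    var histogram: [Int: Int] = [:]
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width {
            // Each byte is an ARConfidenceLevel raw value (0 = low, 1 = medium, 2 = high).
            histogram[Int(row[x]), default: 0] += 1
        }
    }
    return histogram
}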
1
0
197
Jun ’25
I need to troubleshoot Transform Drift in ARKit
Hi all, I'm currently developing a real-time object reconstruction app using ARKit. The goal is to scan large objects using ARKit's depth and transform data and generate a point cloud. However, I'm facing a major challenge: transform drift / world alignment issues. The localToWorld transform provided by ARKit frequently seems to drift or become unstable across frames. This results in misaligned point clouds even when the device is moved slowly or kept relatively still. In some cases, a static surface scanned over a few seconds results in clearly misaligned fragments. This makes it difficult to accurately stitch a multi-frame point cloud. I have experimented with various lighting conditions and object textures, but the issue persists in all cases. At times, the relative error between frames reaches up to 20 cm, while in other instances the error is minimal; however, the drift gradually accumulates over time, leading to an overall enlargement of the reconstructed object. I have attached images of both cases here.
Questions:
Are there specific conditions under which ARKit's world transform is expected to drift?
Is there a way to detect or recover from this drift during runtime?
Any best practices for maintaining consistent tracking during scanning or measurement sessions?
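For the "detect or recover at runtime" part, a minimal sketch of watching ARKit's tracking state, which is the main signal the session exposes when tracking degrades or relocalizes (how the app reacts, e.g. pausing point accumulation, is up to the implementation):

import ARKit

final class TrackingMonitor: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("Tracking normal: safe to accumulate points")
        case .notAvailable:
            print("Tracking not available")
        case .limited(let reason):
            // Pause point accumulation while tracking is degraded.
            switch reason {
            case .excessiveMotion: print("Limited: excessive motion")
            case .insufficientFeatures: print("Limited: insufficient features")
            case .relocalizing: print("Limited: relocalizing, transforms may jump")
            case .initializing: print("Limited: initializing")
            @unknown default: print("Limited: unknown reason")
            }
        }
    }
}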
1
0
76
Jun ’25