Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.


Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Barcode Anchor Jitter in Vision Pro Due to Invalid Values from the Enterprise Barcode Scanning API
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.

import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")
                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)
        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
3 replies · 0 boosts · 501 views · Jan ’25
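For the jitter question above, one common mitigation is to smooth the incoming anchor transforms before applying them to the UI entity. Below is a minimal, hypothetical sketch (the AnchorSmoother type and the alpha value are assumptions, not part of ARKit) that low-pass filters the barcode anchor's translation per anchor ID; the smoothed position could then be fed into the entity's transform instead of the raw originFromAnchorTransform.

import Foundation
import simd

// Hypothetical helper: exponentially smooths anchor positions to damp jitter.
final class AnchorSmoother {
    private var smoothedPositions: [UUID: SIMD3<Float>] = [:]
    private let alpha: Float   // 0...1; lower values smooth more aggressively (0.2 is an assumed default)

    init(alpha: Float = 0.2) { self.alpha = alpha }

    // Returns a low-pass-filtered world position for the given anchor.
    func smoothedPosition(id: UUID, originFromAnchor: simd_float4x4) -> SIMD3<Float> {
        let raw = SIMD3<Float>(originFromAnchor.columns.3.x,
                               originFromAnchor.columns.3.y,
                               originFromAnchor.columns.3.z)
        let previous = smoothedPositions[id] ?? raw
        let next = previous + alpha * (raw - previous)
        smoothedPositions[id] = next
        return next
    }
}

Listening for .updated events as well as .added, and re-applying the filtered position on each update, may also help, since a single .added transform is never refined.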
WorldAnchors added and removed immediately
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back and then immediately removed:

dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))

It always leaves me with zero anchors in .allAnchors. The ARKitSession is still active at this point.
8 replies · 0 boosts · 665 views · Jan ’25
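For reference, a minimal sketch of the persistence flow being described above, assuming an immersive space is already open; the anchor ID would normally be stored (for example in UserDefaults) so that app content can be re-attached when the anchor streams back as .added on the next launch.

import ARKit
import simd

// Minimal sketch: add a WorldAnchor and log every update so added/removed
// pairs like the ones quoted above are easy to spot.
func runWorldTracking() async {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    do {
        try await session.run([worldTracking])

        // Place an anchor half a meter in front of the world origin (assumed pose).
        var transform = matrix_identity_float4x4
        transform.columns.3.z = -0.5
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)

        for await update in worldTracking.anchorUpdates {
            // On a later launch, persisted anchors stream back here as .added;
            // an immediate .removed may mean the device could not relocalize them.
            print(update.event, update.anchor.id, update.anchor.isTracked)
        }
    } catch {
        print("World tracking failed: \(error)")
    }
}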
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly, and nonlinearly, from the real distances. For distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values; for example, I read 1.9 m from the generated depthmap.tiff while the real distance is 3 meters. Below is my code for generating a TIFF file to record the depth map data:

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured with traditional AVFoundation on the same hardware device is much clearer, although the values do not seem to be in meters.
1 reply · 0 boosts · 469 views · Feb ’25
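The poster's TIFF-writing code did not survive above, but for anyone trying to reproduce the measurement, here is a minimal sketch (not the poster's code) that samples the centre pixel of ARFrame.sceneDepth, which is a Float32 buffer in meters; comparing that value against a tape measure is a quick way to quantify the deviation.

import ARKit

// Minimal sketch: sample the depth (in meters) at the centre of the LiDAR depth map.
func centerDepthInMeters(of frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    // sceneDepth is kCVPixelFormatType_DepthFloat32: one Float32 per pixel, in meters.
    let rowPointer = base.advanced(by: (height / 2) * rowBytes)
    let pixelPointer = rowPointer.assumingMemoryBound(to: Float32.self)
    return pixelPointer[width / 2]
}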
Tracking geographic locations in AR - Sample App Code
Hello, I was looking into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar. Unfortunately, the download link points to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project instead. The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth. I am wondering whether those links are deliberately broken because of possible deprecations. Thanks to any Apple engineer willing to look into this.
2 replies · 0 boosts · 411 views · Feb ’25
Entity HoverEffect Fired When inside Another Entity Collider
Hi folks, I’m new to the Vision Pro stack and still trying to learn all the nuances. Here is a problem I can’t seem to find an answer to. I placed Entity A (a small sphere with a 0.02 radius) inside Entity B (a box of size 0.1). Both entities have a HoverEffectComponent, and both input-target components are set to .direct. Entity A is NOT a child of Entity B. When I direct-touch Entity B, I noticed that Entity A’s hover effect fires as well. This only happens when Entity A’s position is inside Entity B. A gesture targeted only at Entity A doesn’t work either. I double-checked Entity A’s collider, which sits inside Entity B’s collider; my direct touch shouldn’t have triggered its hover effect. Having one collider inside another seems to produce unpredictable behavior? Thanks in advance 🙏🙏🙏 Context: I’m trying to create an invisible bound around Entity A, so that when my hand approaches the bound to grab Entity A, a nice spotlight hover effect fires on the bound before my hand reaches Entity A.
2 replies · 0 boosts · 315 views · Feb ’25
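A minimal sketch of the setup described above, useful for reproducing the behavior; the entity sizes match the post, while the materials and function name are assumptions.

import RealityKit
import SwiftUI

// Sketch: a small sphere (A) placed inside a larger box (B); both are direct-input targets with hover effects.
@MainActor
func makeNestedHoverEntities() -> (inner: ModelEntity, outer: ModelEntity) {
    let inner = ModelEntity(mesh: .generateSphere(radius: 0.02),
                            materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let outer = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .white.withAlphaComponent(0.2), isMetallic: false)])

    for entity in [inner, outer] {
        entity.generateCollisionShapes(recursive: false)
        entity.components.set(InputTargetComponent(allowedInputTypes: .direct))
        entity.components.set(HoverEffectComponent())
    }
    // Siblings at the same position, so A's collider is fully contained in B's.
    inner.position = .zero
    outer.position = .zero
    return (inner, outer)
}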
VisionOS hands replacement inside an Immersive Space
I still don't understand why no one is clarifying this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111. At the end of the video there is an incomplete tutorial about connecting a USDZ with a mesh and skeleton structure to the hand-tracking system. No example project is linked, and no one is giving the community any clarification. Can you please help us understand how to proceed?
4 replies · 0 boosts · 502 views · Mar ’25
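While waiting for an official sample, the rough idea from the session is to read joint poses from a HandTrackingProvider and write them into the rigged model's jointTransforms. The sketch below is a heavily simplified assumption of that flow: jointNameMap is hypothetical and has to match whatever joint names your USDZ rig actually uses, and a real implementation also needs to convert between the anchor's space and the skeleton's bind space.

import ARKit
import RealityKit

// Sketch: drive a rigged hand model from ARKit hand-tracking data on visionOS.
@MainActor
func updateHandRig(_ model: ModelEntity,
                   from handAnchor: HandAnchor,
                   jointNameMap: [HandSkeleton.JointName: String]) {
    guard let skeleton = handAnchor.handSkeleton else { return }

    // Keep the whole model glued to the hand anchor.
    model.transform = Transform(matrix: handAnchor.originFromAnchorTransform)

    var transforms = model.jointTransforms
    for jointName in HandSkeleton.JointName.allCases {
        let joint = skeleton.joint(jointName)
        // Look up which rig joint (if any) corresponds to this ARKit joint.
        guard joint.isTracked,
              let rigName = jointNameMap[jointName],
              let index = model.jointNames.firstIndex(where: { $0.hasSuffix(rigName) }) else { continue }
        // NOTE: assumes the rig's bind space matches the anchor space; a real rig
        // usually needs an extra parent-relative conversion here.
        transforms[index] = Transform(matrix: joint.anchorFromJointTransform)
    }
    model.jointTransforms = transforms
}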
A question about interacting with entity
I am a newbie to spatial computing, and I am using ARKit and RealityKit to develop a visionOS app. I want to accomplish the following goal: if the user's hand touches an object (an entity in a RealityView) on the table, the app should present a window. But I do not know how to handle the "user's hand touches the object" event. Should I use the hand-tracking feature and do the computation myself, or is there an API I can use directly? Thank you!
1 reply · 0 boosts · 510 views · Feb ’25
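One direct-touch route, sketched under the assumption that a WindowGroup with the ID "DetailWindow" exists elsewhere in the app: give the entity collision and input-target components, then open the window from a gesture targeted at it. Detecting a mere approach of the hand (rather than a touch or pinch) would indeed require HandTrackingProvider and your own distance check.

import SwiftUI
import RealityKit

struct TouchableObjectView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // Hypothetical object sitting on the table.
            let object = ModelEntity(mesh: .generateBox(size: 0.1),
                                     materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            object.generateCollisionShapes(recursive: false)
            object.components.set(InputTargetComponent())   // accepts direct and indirect input by default
            content.add(object)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    // "DetailWindow" is an assumed WindowGroup ID declared elsewhere in the app.
                    openWindow(id: "DetailWindow")
                }
        )
    }
}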
Difference in ARKit plane detection from iPhone 8 to iPhone 15
I am developing an ARKit-based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8 Plus. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using an iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before it detects the plane of the table. This is an awkward motion for a user seated at a table. To confirm that this is not a quirk of my own code, I verified that the same behavior occurs with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, do you have suggestions to improve the situation?
2 replies · 0 boosts · 467 views · Feb ’25
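For anyone comparing devices, a minimal horizontal plane-detection setup like the one below (assumed to be equivalent to what the sample app configures) is enough to reproduce the test, so any difference would come from the hardware or OS rather than the configuration.

import ARKit

// Minimal sketch: run world tracking with horizontal plane detection and log detected planes.
final class PlaneDetectionTest: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // Log center and extent to compare how quickly each device finds the tabletop.
            print("Plane detected, center: \(plane.center), extent: \(plane.planeExtent)")
        }
    }
}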
How to make a RealityKit `Entity` respond to Environment light
I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (a cabbage, for example), and I want the following effect: when I turn on a desk lamp in the real world near the Entity, the surface of the Entity responds correctly to the real-world light. I want an effect like this: https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/ I have looked up APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to write the code. It would also be better if the shadow responded to the real-world light correctly.
1 reply · 0 boosts · 493 views · Feb ’25
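Note that an ImageBasedLightComponent uses a fixed environment texture, so by itself it will not react to a lamp being switched on; reacting to live room lighting is what the environment-probe and light-estimation providers mentioned above are aimed at. Still, as a starting point, here is a minimal IBL sketch; "Sunlight" is an assumed environment resource name in the app bundle.

import RealityKit

// Sketch: light a loaded USDZ entity with an image-based light and give it a grounding shadow.
@MainActor
func applyImageBasedLight(to cabbage: Entity) async throws {
    // "Sunlight" is an assumed environment resource in the app bundle.
    let environment = try await EnvironmentResource(named: "Sunlight")

    var iblComponent = ImageBasedLightComponent(source: .single(environment))
    iblComponent.inheritsRotation = true
    cabbage.components.set(iblComponent)

    // The receiver points at the entity that carries the IBL component (itself, in this sketch).
    cabbage.components.set(ImageBasedLightReceiverComponent(imageBasedLight: cabbage))

    // A grounding shadow helps the entity sit visually in the real room.
    cabbage.components.set(GroundingShadowComponent(castsShadow: true))
}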
When to use an AnchorEntity or HandTrackingProvider in VisionOS
As I understand it, there are two ways I can track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other?
2 replies · 0 boosts · 392 views · Mar ’25
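For comparison, a compressed sketch of both approaches mentioned above; roughly speaking (this is a hedged summary, not an official rule), AnchorEntity gives you a transform to attach content to with almost no code, while HandTrackingProvider hands you the HandAnchor and per-joint data you can run your own logic on.

import ARKit
import RealityKit

// Approach 1: AnchorEntity — attach content to the left palm with no session code.
@MainActor
func palmAnchoredEntity() -> AnchorEntity {
    let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
    palmAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.01),
                                    materials: [SimpleMaterial(color: .cyan, isMetallic: false)]))
    return palmAnchor
}

// Approach 2: HandTrackingProvider — read the raw anchors and joints yourself.
// (A real app also needs the hand-tracking authorization/usage description.)
func observeLeftHand() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates where update.anchor.chirality == .left {
        // Full access to the anchor transform and skeleton joints here.
        print(update.anchor.originFromAnchorTransform)
    }
}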
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the Transform and rotation angle. I have reviewed Happy Beam, but it appears to only cover identifying the user's specific gestures. Could you please advise on how to obtain the Transform and rotation angle of the user's hand? Thank you.
1 reply · 0 boosts · 415 views · Mar ’25
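A minimal sketch of pulling a Transform and a rotation angle out of a HandAnchor (the wrist pose here; individual joints come from handSkeleton in the same way):

import ARKit
import RealityKit
import simd

// Sketch: convert a HandAnchor's wrist pose into a RealityKit Transform plus an axis-angle rotation.
func logHandPose(_ handAnchor: HandAnchor) {
    // originFromAnchorTransform is the wrist pose in the app's world coordinate space.
    let transform = Transform(matrix: handAnchor.originFromAnchorTransform)

    let rotation = transform.rotation            // simd_quatf
    let angleDegrees = rotation.angle * 180 / .pi
    print("position:", transform.translation)
    print("rotation axis:", rotation.axis, "angle:", angleDegrees, "degrees")

    // For a specific joint: handAnchor.handSkeleton?.joint(.indexFingerTip).anchorFromJointTransform
    // is anchor-relative; multiply by originFromAnchorTransform for a world-space pose.
}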
A question about adding grounding shadow in visionPro
I want to add a grounding shadow to my Entity in a RealityView on visionOS. However, it seems that the shadow can only appear on another Entity, so I use plane detection in ARKit and add a transparent plane to render the shadow on:

let planeEntity = ModelEntity(
    mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height),
    materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))

But sometimes there is a border around my Entity on the plane. I do not know why this happens, and I want to remove the border.
5 replies · 0 boosts · 372 views · Mar ’25
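Two things worth trying, sketched below as assumptions rather than a confirmed fix for the border: GroundingShadowComponent lets the system drop a shadow under the entity without a manual plane, and if the manual plane is kept, an OcclusionMaterial is one alternative to a fully transparent ModelEntity.

import RealityKit

// Sketch: ask the system to ground the entity with a shadow, no shadow-catcher plane needed.
@MainActor
func addGroundingShadow(to entity: ModelEntity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
}

// Sketch: if the ARKit plane is kept, an occlusion material stays invisible;
// whether it removes the border described above is untested here.
@MainActor
func makeShadowCatcherPlane(width: Float, depth: Float) -> ModelEntity {
    ModelEntity(mesh: .generatePlane(width: width, depth: depth),
                materials: [OcclusionMaterial()])
}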
Merge MeshAnchor from Scene Reconstruction for Vision Pro
Hi there, I'm trying to merge the mesh anchors into a single mesh but couldn't find any resources on this. Here is the code where I make a mesh from each mesh anchor and assign it to a model component with a shader-graph material.

func run(_ sceneRec: SceneReconstructionProvider) async {
    for await update in sceneRec.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // Get or create entity for this anchor
            let anchorEntity = anchors[update.anchor.id] ?? {
                let entity = ModelEntity()
                root?.addChild(entity)
                anchors[update.anchor.id] = entity
                return entity
            }()

            // Remove any existing children
            for child in anchorEntity.children {
                child.removeFromParent()
            }

            // Generate the mesh from the anchor
            guard let mesh = try? await MeshResource(from: update.anchor) else { return }
            guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
            print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

            // Get the material to use
            var material: RealityKit.Material
            if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                material = loadedMaterial
            } else {
                // Use a temporary material until the shader loads
                var tempMaterial = UnlitMaterial()
                tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                material = tempMaterial
            }

            await MainActor.run {
                anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)

                // Add collision component with static flag - required for spatial interactions
                anchorEntity.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: .default
                ))

                // Make entity interactive - enables spatial taps, drags, etc.
                anchorEntity.components.set(InputTargetComponent())

                let shadowComponent = GroundingShadowComponent(
                    castsShadow: true,
                    receivesShadow: true
                )
                anchorEntity.components.set(shadowComponent)
            }

I then use a spatial tap gesture to set the position parameter in the shader-graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh.

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let tappedEntity = value.entity
        // Check if the tapped entity is a child of tracking.meshAnchors
        if isChildOfMeshAnchors(entity: tappedEntity) {
            // Get local position (in the entity's coordinate space)
            let localPosition = value.location3D
            // Convert to world position (scene coordinate space)
            let worldPosition = value.convert(localPosition, from: .local, to: .scene)
            print("Tapped mesh anchor at local position: \(localPosition)")
            print("Tapped mesh anchor at world position: \(worldPosition)")
            // Update the material parameter with the tap position
            updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
        } else {
            print("Tapped entity is not a mesh anchor")
        }
    }

My issue is that because there are several mesh anchors, the gradient often gets cut off at the edge of the mesh generated from a single mesh anchor, as opposed to forming a continuous gradient across the entire reconstructed scene mesh. I couldn't find any documentation on how to merge the meshes from mesh anchors; any tips would be helpful. Thank you!
3 replies · 0 boosts · 321 views · Mar ’25
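One way to get a single continuous mesh is to pull the vertex and index buffers out of each MeshAnchor, transform the vertices into world space, and feed the combined arrays into one MeshDescriptor. The sketch below assumes packed float3 vertex positions and 32-bit triangle indices (what MeshAnchor's geometry currently provides) and skips normals; it is a starting point, not a drop-in replacement for the per-anchor entities above.

import ARKit
import RealityKit
import simd

// Sketch: combine every MeshAnchor into one MeshDescriptor in world space.
// Assumes packed float3 vertex positions and 32-bit triangle indices.
func mergedMeshDescriptor(from anchors: [MeshAnchor]) -> MeshDescriptor {
    var allPositions: [SIMD3<Float>] = []
    var allIndices: [UInt32] = []

    for anchor in anchors {
        let baseIndex = UInt32(allPositions.count)

        // Vertices: transform each position into world space so the merged mesh needs no per-anchor transform.
        let vertices = anchor.geometry.vertices
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset)
        for i in 0..<vertices.count {
            let raw = vertexPointer.advanced(by: i * vertices.stride)
                .assumingMemoryBound(to: (Float, Float, Float).self).pointee
            let world = anchor.originFromAnchorTransform * SIMD4<Float>(raw.0, raw.1, raw.2, 1)
            allPositions.append(SIMD3<Float>(world.x, world.y, world.z))
        }

        // Faces: re-base the triangle indices against the merged vertex array.
        let faces = anchor.geometry.faces
        let indexPointer = faces.buffer.contents().assumingMemoryBound(to: UInt32.self)
        for i in 0..<(faces.count * 3) {
            allIndices.append(baseIndex + indexPointer[i])
        }
    }

    var descriptor = MeshDescriptor(name: "mergedSceneMesh")
    descriptor.positions = MeshBuffers.Positions(allPositions)
    descriptor.primitives = .triangles(allIndices)
    return descriptor
}

// Usage (once anchors have been collected):
// let merged = try MeshResource.generate(from: [mergedMeshDescriptor(from: anchors)])

A merged mesh has to be rebuilt whenever anchors update, so in practice it may be simpler to keep the per-anchor entities and make the shader gradient purely world-space based, which avoids the visible seams without re-meshing.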
Human Body joint tracking in VisionOS
The goal is to achieve precise joint tracking for clinical assessment. The doctor is wearing the Apple Vision Pro and observing the patient's movement. Do you have any recommended best practices for integrating real-time joint tracking and displaying the joints on the patient within visionOS? We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking. If possible, a photo or video of the range-of-motion assessment would also be needed for the patient record. Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
3 replies · 0 boosts · 262 views · Mar ’25
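On the display side of the question above, once joint positions exist in world space (however they were obtained), showing them in an Immersive Space is plain RealityKit; a minimal sketch, with the positions array assumed to come from your own pose pipeline:

import RealityKit
import SwiftUI

// Sketch: render a set of world-space joint positions as small spheres in an Immersive Space.
struct JointOverlayView: View {
    // Assumed input: joint positions in scene/world coordinates, in meters.
    let jointPositions: [SIMD3<Float>]

    var body: some View {
        RealityView { content in
            for position in jointPositions {
                let marker = ModelEntity(mesh: .generateSphere(radius: 0.015),
                                         materials: [UnlitMaterial(color: .green)])
                marker.position = position
                content.add(marker)
            }
        }
    }
}

For live updates, the update: closure of RealityView (or a fixed set of reusable marker entities whose positions are mutated each frame) avoids recreating entities on every pose sample.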
VNDetectHumanBodyPose3DRequest failing on Vision Pro
Hi! I attempted to run a sample project for detecting human body poses in 3D with the Vision framework, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision. It works perfectly on my MacBook Pro M1 but fails on Apple Vision Pro. After selecting a photo, an endless loading screen is displayed and the following messages are produced in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

Is human pose detection expected to work on visionOS? Is there any special configuration required that I might be missing?
1 reply · 0 boosts · 85 views · Mar ’25
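For anyone wanting a smaller reproduction than the full sample, the request boils down to a few lines; if the console output above appears with this as well, the failure is in the framework's availability on the platform rather than in the sample's UI code. The image URL is an assumption.

import Foundation
import Vision

// Minimal repro sketch: run the 3D body-pose request on a single image file.
func detectBodyPose3D(at imageURL: URL) {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    do {
        try handler.perform([request])
        if let observation = request.results?.first {
            print("Detected a body observation:", observation)
        } else {
            print("No body detected")
        }
    } catch {
        // On Vision Pro this is where the com.apple.Vision Code=9 error quoted above surfaces.
        print("Request failed:", error)
    }
}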