Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Billboard Entity with AttachmentView
Hey everyone, Happy New Year! I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button. The attachment moves with the billboard, but the collision box around the attachment is not following it. If I position myself perfectly, it works. Video example: https://youtu.be/4d9Vx7K8MmU

//
//  ImmersiveView.swift
//  Billboard Attachment
//
//  Created by Justin Leger on 1/3/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {

    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and disable it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        } attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {
    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
5
0
851
Jan ’25
How to find the camera transform (or view matrix) in the world coordinate from a camera frame
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space on top of the camera frames captured by CameraFrameProvider. Here is what I have done:

1. Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
2. Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
3. Get the device anchor with worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
4. Set up a RealityKit.RealityRenderer to render virtual objects on the captured camera frames

let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()

let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105

realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity

Virtual objects, which should be visible in the camera frames, are clipped out by this camera transform. If I use deviceAnchor.originFromAnchorTransform as the camera transform instead, virtual objects are rendered on the camera frames, but at wrong positions (I think this is because the camera extrinsics aren't used to adjust the camera to the correct position).

My question is how to use the camera extrinsic matrix for this purpose. Do the camera extrinsics point in a similar orientation to the device anchor, with some minor rotation and position change? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y-axis and Z-axis are flipped by the extrinsics, so the camera points in the wrong direction.

simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0],    // X-axis
               [-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y-axis
               [-0.13066702, 0.10270659, -0.98609203, 0.0],   // Z-axis
               [0.024519, -0.019568002, -0.058280986, 1.0]])  // translation
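A hedged observation, not a confirmed answer: the flipped Y and Z rows in the matrix above look like a computer-vision camera convention (+Y down, +Z forward), while RealityKit cameras look down -Z with +Y up. If that assumption holds, appending a 180° rotation about X when composing the transform converts between the two conventions. The sketch below reuses the deviceAnchor, extrinsics, and cameraEntity from the code above and assumes extrinsics maps device space to camera space.

import simd

// ASSUMPTION (not verified): extrinsics = cameraFromDevice, with the camera axes
// in a computer-vision convention. Rotating 180° about X converts a RealityKit-style
// camera frame (+Y up, looking down -Z) into that convention, so the final product
// is worldFromRealityKitCamera.
let flipYZ = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))
let adjustedCameraTransform = deviceAnchor.originFromAnchorTransform
    * extrinsics.inverse   // deviceFromCamera (CV convention)
    * flipYZ               // cameraCV_from_cameraRealityKit
cameraEntity.setTransformMatrix(adjustedCameraTransform, relativeTo: nil)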
3
0
740
Jan ’25
ARKit: Prevent Asset Clipping
Hello Apple Team, I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters). When enabling people occlusion, the 3D asset gets clipped when moved far away. Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping? I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously. Thank you for your help! Kind regards
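Not a per-entity answer, but here is a minimal sketch of the runtime switch mentioned above: toggling the .personSegmentationWithDepth frame semantic on a running ARSession. Note that this is session-wide, so it cannot keep occlusion on for nearby assets while disabling it for distant ones.

import ARKit

// Toggles people occlusion at runtime by updating the session's frame semantics.
// Session-wide setting: it applies to every rendered asset at once.
func setPeopleOcclusion(_ enabled: Bool, on session: ARSession) {
    guard let configuration = session.configuration as? ARWorldTrackingConfiguration,
          ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else { return }
    if enabled {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        configuration.frameSemantics.remove(.personSegmentationWithDepth)
    }
    session.run(configuration)
}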
1
0
494
Jan ’25
RealityView argument type does not conform to protocol 'View'
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":

private var realityView: RealityView!

as well as on this line, with the same error message:

private func setupPanoramaScene(for content: RealityView.Content)

What should I put as an argument for RealityView? It doesn't work without arguments either.
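A minimal sketch, assuming the visionOS RealityKit API: RealityView is a SwiftUI view you declare inside body rather than store in a property, and the value passed to its closures is a RealityViewContent, so in ordinary use there is no generic argument to supply to the type itself. The sphere below is hypothetical placeholder geometry, not a complete panorama implementation.

import SwiftUI
import RealityKit

struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            // Build the panorama scene directly in the make closure.
            // A real panorama would map an equirectangular texture onto this sphere.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10))
            sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1) // flip so an applied texture faces inward
            content.add(sphere)
        }
    }
}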
3
0
462
Jan ’25
Barcode anchor jitter in Vision Pro due to invalid values from the enterprise barcode scanning API
We’re using the enterprise API for spatial barcode/QR code scanning in our Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.

import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")

                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)

        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity)
            )
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
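This doesn't address the invalid values coming back from the API, but a common mitigation for visible jitter is to low-pass filter the anchor pose before applying it to the entity. Below is a small sketch of exponential smoothing on the position; PositionSmoother is a hypothetical helper, not part of the enterprise API, and it presumes you also listen for .updated anchor events rather than only .added.

import simd

/// Exponentially smooths barcode anchor positions to reduce visible jitter.
/// `alpha` close to 1 follows the raw data tightly; closer to 0 smooths more.
struct PositionSmoother {
    private var smoothed: SIMD3<Float>?
    let alpha: Float

    init(alpha: Float = 0.2) { self.alpha = alpha }

    mutating func update(with transform: simd_float4x4) -> SIMD3<Float> {
        let raw = SIMD3<Float>(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        let next = smoothed.map { $0 + alpha * (raw - $0) } ?? raw
        smoothed = next
        return next
    }
}

// Usage inside the anchor update loop (sketch):
// var smoother = PositionSmoother()
// entity.position = smoother.update(with: anchor.originFromAnchorTransform)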
3
0
501
Jan ’25
WorldAnchors added and removed immediately
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back, and then is immediately removed:

dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))

It always leaves me with zero anchors in .allAnchors... the ARKitSession is still active at this point.
8
0
665
Jan ’25
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly from the real distances, even nonlinearly: for distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values. For example, I read 1.9 m from the generated depthmap.tiff while the real distance is 3 meters. Below is my code for generating the TIFF file to record the depth map data:

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured from traditional AVFoundation on the same hardware is much clearer, though the values don't seem to be in meters.
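For reference, here is a minimal sketch (not the poster's missing TIFF-generation code) of how per-pixel scene-depth values can be read from an ARFrame. It assumes a configuration with the .sceneDepth (or .smoothedSceneDepth) frame semantic enabled; the buffer contains Float32 distances in meters.

import ARKit

// Reads every depth value (in meters) from the frame's scene-depth buffer.
func depthValues(from frame: ARFrame) -> [Float] {
    guard let depthMap = frame.sceneDepth?.depthMap else { return [] }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    var values = [Float]()
    values.reserveCapacity(width * height)
    for y in 0..<height {
        // The buffer format is 32-bit float depth, one value per pixel.
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            values.append(row[x])
        }
    }
    return values
}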
1
0
469
Feb ’25
Tracking geographic locations in AR - Sample App Code
Hello, I was looking back into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar Unfortunately, the Download link points to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project. The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth I'm wondering if those links are deliberately broken because of possible deprecations. Thanks to any Apple engineer willing to look into that.
2
0
411
Feb ’25
Entity HoverEffect Fired When inside Another Entity Collider
Hi folks, I’m new to the Vision Pro stack and still trying to learn all the nuances. Here is a problem I can’t seem to find an answer to. I placed entity A (a small sphere of radius 0.02) inside entity B (a box of size 0.1). Both entities have a HoverEffectComponent, and both have their input component set to .direct. Entity A is NOT a child of entity B. When I direct-touch entity B, I notice that entity A’s hover effect fires as well. This only happens if entity A’s position is inside entity B. A gesture targeted only at entity A doesn’t work either. I double-checked: entity A’s collider sits inside entity B’s collider, so my direct touch shouldn’t have triggered its hover effect. Having one collider inside another seems to produce unpredictable behavior? Thanks in advance 🙏🙏🙏 Context: I’m trying to create an invisible bound around entity A, so that when my hand approaches the bound to grab entity A, a nice spotlight hover effect fires first on the bound before my hand reaches entity A.
2
0
315
Feb ’25
VisionOS hands replacement inside an Immersive Space
I still don't understand why no one has clarified this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111 At the end of this video, there's an incomplete tutorial about connecting a USDZ with a mesh and skeleton structure to the hand tracking system. No example project is linked, and no one is giving the community any clarification. Can you please help us understand how to proceed?
4
0
502
Mar ’25
A question about interacting with entity
I am a newbie to spatial computing, and I am using ARKit and RealityKit to develop a Vision Pro app. I want to accomplish the following goal: if the user's hand touches an object (an entity in a RealityView) on the table, the app presents a window. But I do not know how to handle the event "the user's hand touches the object". Should I use the hand tracking feature and do the computation myself, or is there an API I can use directly? Thank you!
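One possible direction, sketched under assumptions rather than as a confirmed recipe: attach a small invisible collision sphere to a hand AnchorEntity and listen for collision events against the target entity. Whether collision events fire for hand-anchored entities in your setup is worth verifying; the fallback is to compute proximity yourself from HandTrackingProvider joint transforms. The window id "detailWindow" is hypothetical and must match a WindowGroup declared in your App scene.

import SwiftUI
import RealityKit

struct TouchToOpenView: View {
    @State private var subscription: EventSubscription?
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // The object on the table that the hand should touch.
            let target = ModelEntity(mesh: .generateBox(size: 0.2))
            target.position = [0, 1.0, -0.5]
            target.generateCollisionShapes(recursive: false)
            content.add(target)

            // Invisible probe that follows the index finger tip.
            let fingertip = AnchorEntity(.hand(.right, location: .indexFingerTip))
            let probe = Entity()
            probe.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.01)]))
            fingertip.addChild(probe)
            content.add(fingertip)

            // When anything begins colliding with the target, open the window.
            subscription = content.subscribe(to: CollisionEvents.Began.self, on: target) { _ in
                openWindow(id: "detailWindow") // hypothetical window id
            }
        }
    }
}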
1
0
510
Feb ’25
Difference in ARKit plane detection from iPhone 8 to iPhone 15
I am developing an ARKit based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8+. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before detecting the plane of the table. This is an awkward motion for a user seated at a table. To validate that it was not necessarily a feature of my code, I determined that the same behavior results with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, have suggestions to improve the situation?
2
0
468
Feb ’25
How to make a RealityKit `Entity` respond to Environment light
I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (such as a cabbage), and I want the following effect: when I turn on a desk lamp in the real world near the entity, the surface of the entity correctly responds to the real-world light. I want an effect like this: https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/ I have looked up APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to write the code. It would also be better if the shadow responded to the light correctly.
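A hedged sketch of just the RealityKit wiring for image-based lighting: how ImageBasedLightComponent and ImageBasedLightReceiverComponent fit together. "Sunlight" is a hypothetical environment asset name, and because this uses a static EnvironmentResource it will not, by itself, react to a real desk lamp; reacting to real light would mean feeding in a texture derived from the environment-sensing APIs mentioned above, which is not shown here.

import RealityKit

@MainActor
func applyImageBasedLight(to cabbage: Entity) async throws {
    // "Sunlight" is a placeholder environment resource bundled with the app.
    let environment = try await EnvironmentResource(named: "Sunlight")

    // The entity that provides the light.
    cabbage.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))

    // The entity that receives the light (can be the same entity).
    cabbage.components.set(ImageBasedLightReceiverComponent(imageBasedLight: cabbage))

    // Grounding shadow so the model sits visually on real surfaces.
    cabbage.components.set(GroundingShadowComponent(castsShadow: true))
}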
1
0
493
Feb ’25
When to use an AnchorEntity or HandTrackingProvider in VisionOS
As I understand it, there are two ways I can track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other?
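A brief sketch of the shape of each approach, with assumptions noted in comments. Roughly: the AnchorEntity route is declarative and keeps content glued to the hand with no per-frame code, while HandTrackingProvider delivers HandAnchor updates whose transforms your own logic can read (gesture math, physics, networking).

import RealityKit
import ARKit

// 1) AnchorEntity: add the returned entity to your RealityView content and
//    RealityKit keeps it attached to the palm for you.
@MainActor
func makePalmFollower() -> Entity {
    let palm = AnchorEntity(.hand(.left, location: .palm))
    palm.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02)))
    return palm
}

// 2) HandTrackingProvider: you receive HandAnchor updates and can read the
//    transforms yourself (requires hand-tracking authorization).
func trackHands() async throws {
    let session = ARKitSession()
    let provider = HandTrackingProvider()
    try await session.run([provider])
    for await update in provider.anchorUpdates where update.event == .updated {
        let worldFromHand = update.anchor.originFromAnchorTransform
        _ = worldFromHand // use the 4x4 matrix in your own logic
    }
}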
2
0
392
Mar ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about users’ hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed Happy Beam, but it appears to only introduce a method for identifying the user’s specific gestures. Could you please advise on how to obtain the transform and rotation angle of the user’s hand? Thank you.
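A sketch of pulling a world-space transform and rotation out of HandTrackingProvider updates: originFromAnchorTransform is the hand (wrist) pose in world space, each joint's anchorFromJointTransform is relative to that, and multiplying the two gives the joint in world space, from which a quaternion rotation can be read.

import ARKit
import simd

func logHandPoses() async throws {
    let session = ARKitSession()
    let provider = HandTrackingProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates where update.event == .updated {
        let anchor = update.anchor
        let handInWorld = anchor.originFromAnchorTransform

        if let skeleton = anchor.handSkeleton {
            let tip = skeleton.joint(.indexFingerTip)
            let tipInWorld = handInWorld * tip.anchorFromJointTransform
            let rotation = simd_quatf(tipInWorld)  // rotation as a quaternion
            let position = tipInWorld.columns.3    // translation column
            print(anchor.chirality, position, rotation)
        }
    }
}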
1
0
415
Mar ’25
A question about adding grounding shadow in visionPro
I want to add a grounding shadow to my Entity in a RealityView on visionOS. However, it seems that the shadow can only appear on another entity, so I use plane detection in ARKit and add a transparent plane to render the shadow on:

let planeEntity = ModelEntity(
    mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height),
    materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))

But sometimes there is a border around my Entity on the plane. I do not know why this happens, and I want to remove the border.
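One alternative worth trying, assuming the goal is simply a shadow under the model in passthrough: GroundingShadowComponent renders a grounding shadow against real surfaces without a helper plane, which sidesteps the border artifact from the transparent plane.

import RealityKit

// Attach a grounding shadow directly to the model entity (no helper plane needed).
func addGroundingShadow(to entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
}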
5
0
372
Mar ’25
Merge MeshAnchor from Scene Reconstruction for Vision Pro
Hi there, I'm trying to merge the mesh anchors into a single mesh but couldn't find any resources on this. Here is the code where I make a mesh from each mesh anchor and assign it to a model component with a shader graph material.

func run(_ sceneRec: SceneReconstructionProvider) async {
    for await update in sceneRec.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // Get or create entity for this anchor
            let anchorEntity = anchors[update.anchor.id] ?? {
                let entity = ModelEntity()
                root?.addChild(entity)
                anchors[update.anchor.id] = entity
                return entity
            }()

            // Remove any existing children
            for child in anchorEntity.children {
                child.removeFromParent()
            }

            // Generate the mesh from the anchor
            guard let mesh = try? await MeshResource(from: update.anchor) else { return }
            guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }

            print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

            // Get the material to use
            var material: RealityKit.Material
            if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                material = loadedMaterial
            } else {
                // Use a temporary material until the shader loads
                var tempMaterial = UnlitMaterial()
                tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                material = tempMaterial
            }

            await MainActor.run {
                anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)

                // Add collision component with static flag - required for spatial interactions
                anchorEntity.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: .default
                ))

                // Make entity interactive - enables spatial taps, drags, etc.
                anchorEntity.components.set(InputTargetComponent())

                let shadowComponent = GroundingShadowComponent(
                    castsShadow: true,
                    receivesShadow: true
                )
                anchorEntity.components.set(shadowComponent)
            }
        case .removed:
            break
        }
    }
}

I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh.

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let tappedEntity = value.entity
        // Check if the tapped entity is a child of tracking.meshAnchors
        if isChildOfMeshAnchors(entity: tappedEntity) {
            // Get local position (in the entity's coordinate space)
            let localPosition = value.location3D
            // Convert to world position (scene coordinate space)
            let worldPosition = value.convert(localPosition, from: .local, to: .scene)
            print("Tapped mesh anchor at local position: \(localPosition)")
            print("Tapped mesh anchor at world position: \(worldPosition)")
            // Update the material parameter with the tap position
            updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
        } else {
            print("Tapped entity is not a mesh anchor")
        }
    }

My issue is that because there are several mesh anchors, the gradient often gets cut off by the edge of the mesh generated from a single mesh anchor, as opposed to a nice continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge meshes from mesh anchors; any tips would be helpful! Thank you!
3
0
321
Mar ’25