Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.


Posts under ARKit subtopic


Person Occlusion on the Vision Pro
I am currently creating an app where two people share an instance of an immersive space so that they are able to point to certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people), which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
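For reference, the person-occlusion feature that the linked article covers is enabled on iOS/iPadOS through ARKit frame semantics; a minimal sketch (helper name is mine, assuming an ARView-based session) looks like this. Whether visionOS exposes an equivalent is exactly the open question here.

    import ARKit
    import RealityKit

    func runSessionWithPersonOcclusion(on arView: ARView) {
        let configuration = ARWorldTrackingConfiguration()
        // Person segmentation with depth is only supported on devices that can provide it.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)
    }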
Replies: 2 · Boosts: 0 · Views: 103 · Activity: Mar ’25
ARKit Planes do not appear where expected on visionOS
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process to create an entity for each detected plane. Each one gets a random color for its material, and each plane is sized based on the bounds of the anchor provided by ARKit:

    let mesh = MeshResource.generatePlane(
        width: anchor.geometry.extent.width,
        depth: anchor.geometry.extent.height
    )

Then I'm using this to position each entity:

    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)

This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off. Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is. When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be. Can you see what I'm getting wrong here? Full code below.

    struct Example068: View {
        @State var session = ARKitSession()
        @State private var planeAnchors: [UUID: Entity] = [:]
        @State private var planeColors: [UUID: Color] = [:]

        var body: some View {
            RealityView { content in
            } update: { content in
                for (_, entity) in planeAnchors {
                    if !content.entities.contains(entity) {
                        content.add(entity)
                    }
                }
            }
            .task {
                try! await setupAndRunPlaneDetection()
            }
        }

        func setupAndRunPlaneDetection() async throws {
            let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
            if PlaneDetectionProvider.isSupported {
                do {
                    try await session.run([planeData])
                    for await update in planeData.anchorUpdates {
                        switch update.event {
                        case .added, .updated:
                            let anchor = update.anchor
                            if planeColors[anchor.id] == nil {
                                planeColors[anchor.id] = generatePastelColor()
                            }
                            let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                            planeAnchors[anchor.id] = planeEntity
                        case .removed:
                            let anchor = update.anchor
                            planeAnchors.removeValue(forKey: anchor.id)
                            planeColors.removeValue(forKey: anchor.id)
                        }
                    }
                } catch {
                    print("ARKit session error \(error)")
                }
            }
        }

        private func generatePastelColor() -> Color {
            let hue = Double.random(in: 0...1)
            let saturation = Double.random(in: 0.2...0.4)
            let brightness = Double.random(in: 0.8...1.0)
            return Color(hue: hue, saturation: saturation, brightness: brightness)
        }

        private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
            let mesh = MeshResource.generatePlane(
                width: anchor.geometry.extent.width,
                depth: anchor.geometry.extent.height
            )
            var material = PhysicallyBasedMaterial()
            material.baseColor.tint = UIColor(color)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            return entity
        }
    }
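One hedged suggestion: on visionOS the detected extent is generally not centered on the anchor's origin; the offset lives in anchor.geometry.extent.anchorFromExtentTransform, so that transform needs to be composed with originFromAnchorTransform when one entity is used per plane. A sketch of the adjusted helper follows; the use of height: and the assumption that the extent's local plane lies in X-Y follow Apple's plane-detection sample, but are worth double-checking against the current PlaneAnchor documentation.

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        // Mesh sized to the detected extent. Assumption: the extent's local plane is X-Y,
        // so generatePlane(width:height:) is used here; if results look rotated, try depth:.
        let mesh = MeshResource.generatePlane(
            width: anchor.geometry.extent.width,
            height: anchor.geometry.extent.height
        )
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        let entity = ModelEntity(mesh: mesh, materials: [material])
        // Compose the extent's own offset/rotation with the anchor transform so the
        // entity is centered on the detected extent rather than on the anchor origin.
        let originFromExtent = anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
        entity.transform = Transform(matrix: originFromExtent)
        return entity
    }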
Replies: 3 · Boosts: 0 · Views: 141 · Activity: Apr ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository, the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here's what I'm aiming for:
- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.
Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
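As a hedged starting point (not a full texturing pipeline, class name is mine): the pieces ARKit provides are the reconstructed mesh anchors plus the per-frame camera image and camera transforms, and projecting mesh vertices into the captured image is the usual basis for deriving per-vertex colors or texture coordinates during the scan. A minimal sketch of wiring those pieces together:

    import ARKit
    import UIKit

    final class ScanSession: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            let configuration = ARWorldTrackingConfiguration()
            if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
                configuration.sceneReconstruction = .mesh   // LiDAR mesh anchors
            }
            configuration.environmentTexturing = .automatic
            session.delegate = self
            session.run(configuration)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // The captured image plus the camera's view/projection matrices for this frame
            // are the inputs for projecting mesh vertices and sampling colors/UVs.
            let pixelBuffer = frame.capturedImage
            let viewMatrix = frame.camera.viewMatrix(for: .portrait)
            let projection = frame.camera.projectionMatrix(for: .portrait,
                                                           viewportSize: frame.camera.imageResolution,
                                                           zNear: 0.001, zFar: 100)
            _ = (pixelBuffer, viewMatrix, projection) // feed these into your texturing step
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let meshAnchor as ARMeshAnchor in anchors {
                _ = meshAnchor.geometry.vertices // the geometry to color/texture incrementally
            }
        }
    }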
Replies: 1 · Boosts: 0 · Views: 77 · Activity: May ’25
Transparent Material Turning into Occlusion Material
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both approaches work partially: they behave as intended in many cases, but seemingly at random they act like occlusion material and block any other immersive content behind them, showing the real world instead. Some notes:
- I am using RealityKit to render the semi-transparent object and an opaque object that sits behind it.
- I am on visionOS 2.1 and am updating the location of the semi-transparent object often.
- Both objects are ModelEntities.
I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
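For reference, a minimal sketch of the two approaches described above (OpacityComponent and a material with explicit transparent blending); the helper names are mine, and the sort-group part at the end is purely an assumption on my part, offered only as something worth trying to pin down draw order between the transparent and opaque entities.

    import RealityKit
    import UIKit

    func makeSemiTransparentSphere() -> ModelEntity {
        let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                 materials: [SimpleMaterial(color: .blue, isMetallic: false)])

        // Approach 1: opacity component on the entity.
        sphere.components.set(OpacityComponent(opacity: 0.5))

        // Approach 2: a material whose blending is explicitly transparent.
        var material = PhysicallyBasedMaterial()
        material.baseColor = .init(tint: .blue)
        material.blending = .transparent(opacity: .init(floatLiteral: 0.5))
        sphere.model?.materials = [material]

        return sphere
    }

    // Hypothetical extra step: force a deterministic draw order between the opaque
    // entity behind and the transparent entity in front, in case the artifact is a sorting issue.
    func applySortOrder(front: ModelEntity, back: ModelEntity) {
        let group = ModelSortGroup(depthPass: nil)
        back.components.set(ModelSortGroupComponent(group: group, order: 0))
        front.components.set(ModelSortGroupComponent(group: group, order: 1))
    }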
Replies: 3 · Boosts: 0 · Views: 129 · Activity: May ’25
ARKit sessionInterruptionEnded never called in Window Mode.
Hi 26 beta guys, I have apps that use ARKit. In the iPadOS 26 beta, ARKit stops working after switching to other apps. Steps to reproduce:
1. Enable Window Mode in iPadOS 26.
2. Launch my app and start an ARSession.
3. Switch to another app (the preferences app, etc.).
4. Switch back to my app.
AR stops updating the camera feed. I added debug prints to my ARSessionDelegate and found that after sessionWasInterrupted was called, sessionInterruptionEnded was never called. sessionInterruptionEnded is called if Window Mode is disabled. Is this just a bug in the 26 beta? I suspect there is a similar problem with the non-AR camera. Any ideas?
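For anyone trying to reproduce this, a minimal sketch of the delegate logging described above (class name is mine; the callbacks are the standard ARSessionObserver/ARSessionDelegate methods):

    import ARKit

    final class SessionLogger: NSObject, ARSessionDelegate {
        func sessionWasInterrupted(_ session: ARSession) {
            print("sessionWasInterrupted")
        }

        func sessionInterruptionEnded(_ session: ARSession) {
            // In the scenario above, this is reportedly never called while Window Mode is enabled.
            print("sessionInterruptionEnded")
        }

        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            print("tracking state: \(camera.trackingState)")
        }
    }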
Replies: 2 · Boosts: 0 · Views: 94 · Activity: Jun ’25
Projecting a Cube with a Number in ARKit
I'm a novice in RealityKit and ARKit. I'm using ARKit in SwiftUI to show a cube with a number, as shown below.

    import SwiftUI
    import RealityKit
    import ARKit

    struct ContentView: View {
        var body: some View {
            return ARViewContainer()
        }
    }

    #Preview {
        ContentView()
    }

    struct ARViewContainer: UIViewRepresentable {
        typealias UIViewType = ARView

        func makeUIView(context: UIViewRepresentableContext<ARViewContainer>) -> ARView {
            let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
            arView.enableTapGesture()
            return arView
        }

        func updateUIView(_ uiView: ARView, context: UIViewRepresentableContext<ARViewContainer>) {
        }
    }

    extension ARView {
        func enableTapGesture() {
            let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:)))
            self.addGestureRecognizer(tapGestureRecognizer)
        }

        @objc func handleTap(recognizer: UITapGestureRecognizer) {
            let tapLocation = recognizer.location(in: self)
            // print("Tap location: \(tapLocation)")
            guard let rayResult = self.ray(through: tapLocation) else { return }
            let results = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
            if let firstResult = results.first {
                let position = simd_make_float3(firstResult.worldTransform.columns.3)
                placeObject(at: position)
            }
        }

        func placeObject(at position: SIMD3<Float>) {
            let mesh = MeshResource.generateBox(size: 0.3)
            let material = SimpleMaterial(color: UIColor.systemRed, roughness: 0.3, isMetallic: true)
            let modelEntity = ModelEntity(mesh: mesh, materials: [material])
            var unlitMaterial = UnlitMaterial()
            if let textureResource = generateTextResource(text: "1", textColor: UIColor.white) {
                unlitMaterial.color = .init(tint: .white, texture: .init(textureResource))
                modelEntity.model?.materials = [unlitMaterial]
                let id = UUID().uuidString
                modelEntity.name = id
                modelEntity.transform.scale = [0.3, 0.1, 0.3]
                modelEntity.generateCollisionShapes(recursive: true)
                let anchorEntity = AnchorEntity(world: position)
                anchorEntity.addChild(modelEntity)
                self.scene.addAnchor(anchorEntity)
            }
        }

        func generateTextResource(text: String, textColor: UIColor) -> TextureResource? {
            if let image = text.image(withAttributes: [NSAttributedString.Key.foregroundColor: textColor], size: CGSize(width: 18, height: 18)),
               let cgImage = image.cgImage {
                let textureResource = try? TextureResource(image: cgImage, options: TextureResource.CreateOptions.init(semantic: nil))
                return textureResource
            }
            return nil
        }
    }

I tap the floor and get a cube with '1' as shown below. The background color of the cube is black, I guess. Where does this color come from, and how can I change it to, say, red? Thanks.
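A hedged guess at the black background: the text.image(withAttributes:size:) helper (presumably a custom String extension) draws the white glyph with no background fill, so the texture's empty pixels end up rendering as black on the unlit cube. One way to control this is to render the text yourself onto a solid fill; the helper below is hypothetical and just illustrates the idea:

    import UIKit
    import RealityKit

    // Hypothetical helper: renders a short string onto a solid background color,
    // so the resulting texture has a known background instead of empty pixels.
    func makeTextTexture(text: String,
                         textColor: UIColor = .white,
                         backgroundColor: UIColor = .red,
                         size: CGSize = CGSize(width: 256, height: 256)) -> TextureResource? {
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { context in
            backgroundColor.setFill()
            context.fill(CGRect(origin: .zero, size: size))

            let attributes: [NSAttributedString.Key: Any] = [
                .font: UIFont.boldSystemFont(ofSize: size.height * 0.6),
                .foregroundColor: textColor
            ]
            let textSize = (text as NSString).size(withAttributes: attributes)
            let origin = CGPoint(x: (size.width - textSize.width) / 2,
                                 y: (size.height - textSize.height) / 2)
            (text as NSString).draw(at: origin, withAttributes: attributes)
        }
        guard let cgImage = image.cgImage else { return nil }
        return try? TextureResource(image: cgImage, options: .init(semantic: .color))
    }

Passing backgroundColor: .red there should give a red face behind the numeral instead of black.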
Replies: 4 · Boosts: 0 · Views: 110 · Activity: Jul ’25
Keyboard Tracking
Hi! I'm currently experimenting on Apple Vision Pro with hand and head anchors. Is there a way to get an anchor linked to the Apple Magic Keyboard (the detection is already done to display inputs at the top)? Thanks in advance, have a good day!
Replies: 1 · Boosts: 0 · Views: 620 · Activity: Jul ’25
Perspective problem
Hi, I called it a "perspective problem", but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the extrinsics and the device anchor to calculate where to place an entity with a model. When I place an entity that overlaps with the physical object and start to look at it from different angles, the virtual object begins to move. Initially I thought something was wrong with my calculations, or that image distortion closer to the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same. Is that expected, because of some passthrough correction magic? And if so, can I somehow correct it back so the entity always overlaps with the object? I'm currently on v26 beta 5. I also don't quite understand the camera extrinsics, because it seems that I need to flip them around X by 180 degrees to make things work in deviceAnchor * extrinsics.inverse * tag (shouldn't it be in the same coordinates as all other RealityKit things?).
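For concreteness, here is the composition described above written out as a helper (the function and parameter names are mine, the coordinate conventions are assumptions, and the 180-degree flip about X is the workaround the post mentions rather than something I can confirm is required). It doesn't answer the parallax question, but it makes the transform chain explicit:

    import simd

    // Hedged sketch: world <- device <- camera <- tag, with the flip about X
    // that the post describes needing to reconcile camera-space conventions.
    func tagWorldTransform(originFromDevice: simd_float4x4,
                           cameraExtrinsics: simd_float4x4,
                           tagPoseInCameraSpace: simd_float4x4) -> simd_float4x4 {
        let flipX = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))
        return originFromDevice * cameraExtrinsics.inverse * flipX * tagPoseInCameraSpace
    }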
Replies: 3 · Boosts: 0 · Views: 200 · Activity: Aug ’25
visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection using visionOS/ARKit: I am using an ARKitSession with a PlaneDetectionProvider to detect walls in a visionOS immersive space. I recorded the position and rotation information of the first detected plane, but found that the rotation value depends on the direction the user is facing when they enter the space; there is a deviation in different directions. That is to say, even if the plane is located on the same wall, the rotation quaternion will be different. I want to obtain the real direction of the wall no matter from which direction the user starts the scan, so that the virtual content can be accurately aligned with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please provide a solution. Thank you!
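One hedged approach (function name is mine): compare walls by the direction of their normal in world space rather than by the anchor's full rotation quaternion, since a plane anchor's in-plane rotation is not guaranteed to be consistent across detections. This assumes the plane's local Y axis is its normal, which is worth verifying against the PlaneAnchor documentation:

    import ARKit
    import Foundation
    import simd

    // Hedged sketch: derive a wall's facing direction in world space from the anchor transform.
    func wallYaw(for anchor: PlaneAnchor) -> Float {
        let originFromAnchor = anchor.originFromAnchorTransform
        // Rotate the assumed local normal (0, 1, 0) into world space (w = 0 ignores translation).
        let normal4 = originFromAnchor * SIMD4<Float>(0, 1, 0, 0)
        let normal = simd_normalize(SIMD3<Float>(normal4.x, normal4.y, normal4.z))
        // Yaw of the normal projected onto the horizontal plane.
        return atan2(normal.x, normal.z)
    }

Note that this yaw is still expressed in the session's world coordinate space, which itself depends on how the space was started, so it is not an absolute compass direction; but two anchors on the same physical wall should produce matching normals even when their in-plane rotations differ.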
Replies: 1 · Boosts: 0 · Views: 428 · Activity: 3w
Rendering scene in RealityView to an Image
Is there any way to render a RealityView to an Image/UIImage, like we used to be able to do using SCNView.snapshot()? ImageRenderer doesn't work because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with the camera background and 3D scene content the way the user sees it. I tried UIHostingController and UIGraphicsImageRenderer like this:

    extension View {
        func snapshot() -> UIImage {
            let controller = UIHostingController(rootView: self)
            let view = controller.view
            let targetSize = controller.view.intrinsicContentSize
            view?.bounds = CGRect(origin: .zero, size: targetSize)
            view?.backgroundColor = .clear
            let renderer = UIGraphicsImageRenderer(size: targetSize)
            return renderer.image { _ in
                view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
            }
        }
    }

but that leads to the app freezing and an infinite loop of [CAMetalLayer nextDrawable] returning nil because allocation failed. The same thing happens when I try:

    return renderer.image { ctx in
        view.layer.render(in: ctx.cgContext)
    }

Now that SceneKit is deprecated, I didn't want to start a new app using deprecated APIs.
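One hedged alternative, if dropping from RealityView down to ARView is acceptable on iOS: RealityKit's ARView has a built-in snapshot API that captures the rendered frame, camera background included (the wrapper name below is mine). Whether RealityView itself exposes an equivalent is exactly the open question here.

    import RealityKit
    import UIKit

    func captureSnapshot(of arView: ARView, completion: @escaping (UIImage?) -> Void) {
        // Captures the currently rendered frame, including the AR camera background.
        arView.snapshot(saveToHDR: false) { image in
            completion(image)
        }
    }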
Replies: 3 · Boosts: 0 · Views: 984 · Activity: 2w
Access persona on visionOS
I am planning to build a visionOS app and need access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: Nov ’24
Potential bug in Anchor updates on visionOS using the ARKit C API
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates. I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant). I register a callback using ar_world_tracking_provider_set_anchor_update_handler_f, and when updates arrive I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f. Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position. When this happens, anchor updates arrive. In most cases the anchor updates return the new transform (using ar_world_anchor_get_origin_from_anchor_transform), but sometimes I get an anchor update that reports the transform of the anchor from before the world origin was repositioned, meaning that instead of staying in place in the physical world, the world anchor moves relative to me. I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f, which provides me with the correct transform, but this async method also adds some noticeable delay to the anchor updates. Is this already a known issue?
Replies: 0 · Boosts: 0 · Views: 377 · Activity: Sep ’24
Is this the easiest way to create scene planes that allow for collision with RealityKit entities?
In my visionOS app I am using plane detection, and I want to create planes that have physics so that my RealityKit entities rest on real-world detected planes. I was curious whether the code below, which I found in the samples, is the most efficient way of doing this.

    func processPlaneDetectionUpdates() async {
        for await anchorUpdate in planeTracking.anchorUpdates {
            let anchor = anchorUpdate.anchor
            if anchorUpdate.event == .removed {
                planeAnchors.removeValue(forKey: anchor.id)
                if let entity = planeEntities.removeValue(forKey: anchor.id) {
                    entity.removeFromParent()
                }
                return
            }
            planeAnchors[anchor.id] = anchor
            let entity = Entity()
            entity.name = "Plane \(anchor.id)"
            entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

            // Generate a mesh for the plane (for occlusion).
            var meshResource: MeshResource? = nil
            do {
                let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
                meshResource = try MeshResource.generate(from: contents)
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            var material = UnlitMaterial(color: .red)
            material.blending = .transparent(opacity: .init(floatLiteral: 0))

            if let meshResource {
                // Make this plane occlude virtual objects behind it.
                entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                                   faceIndices: anchor.geometry.meshFaces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
                let physics = PhysicsBodyComponent(mode: .static)
                entity.components.set(physics)
            }

            let existingEntity = planeEntities[anchor.id]
            planeEntities[anchor.id] = entity
            contentEntity.addChild(entity)
            existingEntity?.removeFromParent()
        }
    }

    extension MeshResource.Contents {
        init(planeGeometry: PlaneAnchor.Geometry) {
            self.init()
            self.instances = [MeshResource.Instance(id: "main", model: "model")]
            var part = MeshResource.Part(id: "part", materialIndex: 0)
            part.positions = MeshBuffers.Positions(planeGeometry.meshVertices.asSIMD3(ofType: Float.self))
            part.triangleIndices = MeshBuffer(planeGeometry.meshFaces.asUInt32Array())
            self.models = [MeshResource.Model(id: "model", parts: [part])]
        }
    }

    extension GeometrySource {
        func asArray<T>(ofType: T.Type) -> [T] {
            assert(MemoryLayout<T>.stride == stride, "Invalid stride \(MemoryLayout<T>.stride); expected \(stride)")
            return (0..<count).map {
                buffer.contents().advanced(by: offset + stride * Int($0)).assumingMemoryBound(to: T.self).pointee
            }
        }

        func asSIMD3<T>(ofType: T.Type) -> [SIMD3<T>] {
            asArray(ofType: (T, T, T).self).map { .init($0.0, $0.1, $0.2) }
        }

        subscript(_ index: Int32) -> (Float, Float, Float) {
            precondition(format == .float3, "This subscript operator can only be used on GeometrySource instances with format .float3")
            return buffer.contents().advanced(by: offset + (stride * Int(index))).assumingMemoryBound(to: (Float, Float, Float).self).pointee
        }
    }

    extension GeometryElement {
        subscript(_ index: Int) -> [Int32] {
            precondition(bytesPerIndex == MemoryLayout<Int32>.size, """
            This subscript operator can only be used on GeometryElement instances with bytesPerIndex == \(MemoryLayout<Int32>.size).
            This GeometryElement has bytesPerIndex == \(bytesPerIndex)
            """)
            var data = [Int32]()
            data.reserveCapacity(primitive.indexCount)
            for indexOffset in 0 ..< primitive.indexCount {
                data.append(buffer
                    .contents()
                    .advanced(by: (Int(index) * primitive.indexCount + indexOffset) * MemoryLayout<Int32>.size)
                    .assumingMemoryBound(to: Int32.self).pointee)
            }
            return data
        }

        func asInt32Array() -> [Int32] {
            var data = [Int32]()
            let totalNumberOfInt32 = count * primitive.indexCount
            data.reserveCapacity(totalNumberOfInt32)
            for indexOffset in 0 ..< totalNumberOfInt32 {
                data.append(buffer.contents().advanced(by: indexOffset * MemoryLayout<Int32>.size).assumingMemoryBound(to: Int32.self).pointee)
            }
            return data
        }

        func asUInt16Array() -> [UInt16] {
            asInt32Array().map { UInt16($0) }
        }

        public func asUInt32Array() -> [UInt32] {
            asInt32Array().map { UInt32($0) }
        }
    }

I was also curious to know if I can do this without ARKit using SpatialTrackingSession. My understanding is that using SpatialTrackingSession in RealityKit I can only get the transforms of the AnchorEntities, but it won't have geometry information to create the collision shapes.
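On the SpatialTrackingSession question, a hedged sketch of what that approach looks like (function name and parameters are mine, assuming the RealityKit SpatialTrackingSession API from visionOS 2). As noted above, this gives you anchored transforms via AnchorEntity, but it does not expose the plane's mesh geometry, so you cannot build a custom ShapeResource from the detected surface this way:

    import RealityKit

    // Hedged sketch: RealityKit-only plane anchoring via SpatialTrackingSession.
    @MainActor
    func startPlaneAnchoring(contentEntity: Entity) async {
        let session = SpatialTrackingSession()
        let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
        _ = await session.run(configuration)

        // Anchor an entity to any horizontal plane at least 0.5 m x 0.5 m.
        let planeAnchor = AnchorEntity(.plane(.horizontal,
                                              classification: .any,
                                              minimumBounds: [0.5, 0.5]))
        contentEntity.addChild(planeAnchor)
    }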
Replies: 2 · Boosts: 0 · Views: 587 · Activity: Oct ’24