Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic


How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app using Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble real-world visuals as captured by the camera, similar to the results shown in this repository. In the referenced repository, the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just at the final result. To clarify, I'm not focused on scanning individual objects but rather larger environments like rooms, houses, or outdoor spaces such as streets.
Here’s what I’m aiming for:
- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.
Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
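A minimal sketch of a starting point, assuming an iOS ARKit/RealityKit app: run LiDAR scene reconstruction alongside the normal camera feed so that mesh anchors and the frames needed for coloring are both available while the scan is in progress. ARKit does not texture the live mesh for you; the usual approach is to project mesh vertices into frame.capturedImage and keep the sampled colors in your own renderer or material.

import ARKit
import RealityKit

// Run LiDAR scene reconstruction together with the regular camera feed.
func startScan(in arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh        // ARMeshAnchor updates from LiDAR
    }
    configuration.frameSemantics.insert(.sceneDepth)     // fused per-pixel depth, if needed
    configuration.environmentTexturing = .automatic
    arView.session.run(configuration)

    // Wireframe preview of the reconstructed mesh while scanning. For real
    // camera colors, sample frame.capturedImage at
    // frame.camera.projectPoint(vertex, orientation: .portrait, viewportSize: imageSize)
    // for each ARMeshAnchor vertex and render the result yourself.
    arView.debugOptions.insert(.showSceneUnderstanding)
}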
Replies: 1 · Boosts: 0 · Views: 93 · Activity: May ’25
Transparent Material Turning into Occlusion Material
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work partially: they behave as intended in many cases, but seemingly at random the object acts like an occlusion material and blocks any other immersive content behind it, showing the real world instead. Some notes: I am using RealityKit to render the semi-transparent object and an opaque object behind it, I am on visionOS 2.1, I update the position of the semi-transparent object often, and both objects are ModelEntities. I would appreciate any guidance on how to implement this. Please let me know if you have any questions.
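Not a confirmed fix, but one thing worth ruling out is draw order between the two models: if the semi-transparent entity is sorted incorrectly relative to the content behind it, it can read as if it were occluding. A sketch using an explicit sort group, with illustrative parameter names:

import RealityKit

// Force the opaque entity to draw before the semi-transparent one.
func configureRenderOrder(transparent: ModelEntity, opaque: ModelEntity) {
    transparent.components.set(OpacityComponent(opacity: 0.5))

    let group = ModelSortGroup(depthPass: nil)
    opaque.components.set(ModelSortGroupComponent(group: group, order: 0))
    transparent.components.set(ModelSortGroupComponent(group: group, order: 1))
}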
Replies: 3 · Boosts: 0 · Views: 144 · Activity: May ’25
ARKit sessionInterruptionEnded never called in Window Mode.
Hi, a question for those on the 26 beta. I have apps that use ARKit, and in the iPadOS 26 beta ARKit stops working after switching to another app. Steps to reproduce:
1. Enable window mode in iPadOS 26.
2. Launch my app and start an ARSession.
3. Switch to another app (the Settings app, etc.).
4. Switch back to my app.
AR stops updating the camera feed. I added debug prints in my ARSessionDelegate and found that after sessionWasInterrupted is called, sessionInterruptionEnded is never called. sessionInterruptionEnded is called when window mode is disabled. Is this just a bug in the 26 beta? I suspect there is a similar problem with the non-AR camera. Any ideas?
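For reference, a sketch of the delegate logging described above, plus a workaround worth trying while this is investigated: if the interruption never formally ends, re-run the session with its existing configuration when the app becomes active again. Untested against the 26 beta behavior.

import ARKit

// Logs the interruption callbacks and re-runs the session if the interruption
// is never reported as ended.
final class SessionObserver: NSObject, ARSessionDelegate {
    private(set) var interrupted = false

    func sessionWasInterrupted(_ session: ARSession) {
        interrupted = true
        print("sessionWasInterrupted")
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        interrupted = false
        print("sessionInterruptionEnded")
    }

    // Call this from scenePhase / didBecomeActive handling.
    func recoverIfNeeded(_ session: ARSession) {
        guard interrupted, let configuration = session.configuration else { return }
        session.run(configuration, options: [.resetTracking])
        interrupted = false
    }
}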
Replies: 2 · Boosts: 0 · Views: 105 · Activity: Jun ’25
Projecting a Cube with a Number in ARKit
I'm a novice with RealityKit and ARKit. I'm using ARKit in SwiftUI to show a cube with a number, as shown below.

import SwiftUI
import RealityKit
import ARKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer()
    }
}

#Preview {
    ContentView()
}

struct ARViewContainer: UIViewRepresentable {
    typealias UIViewType = ARView

    func makeUIView(context: UIViewRepresentableContext<ARViewContainer>) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
        arView.enableTapGesture()
        return arView
    }

    func updateUIView(_ uiView: ARView, context: UIViewRepresentableContext<ARViewContainer>) {
    }
}

extension ARView {
    func enableTapGesture() {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:)))
        self.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func handleTap(recognizer: UITapGestureRecognizer) {
        let tapLocation = recognizer.location(in: self)
        // print("Tap location: \(tapLocation)")
        guard let rayResult = self.ray(through: tapLocation) else { return }
        let results = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
        if let firstResult = results.first {
            let position = simd_make_float3(firstResult.worldTransform.columns.3)
            placeObject(at: position)
        }
    }

    func placeObject(at position: SIMD3<Float>) {
        let mesh = MeshResource.generateBox(size: 0.3)
        let material = SimpleMaterial(color: UIColor.systemRed, roughness: 0.3, isMetallic: true)
        let modelEntity = ModelEntity(mesh: mesh, materials: [material])

        var unlitMaterial = UnlitMaterial()
        if let textureResource = generateTextResource(text: "1", textColor: UIColor.white) {
            unlitMaterial.color = .init(tint: .white, texture: .init(textureResource))
            modelEntity.model?.materials = [unlitMaterial]
            let id = UUID().uuidString
            modelEntity.name = id
            modelEntity.transform.scale = [0.3, 0.1, 0.3]
            modelEntity.generateCollisionShapes(recursive: true)

            let anchorEntity = AnchorEntity(world: position)
            anchorEntity.addChild(modelEntity)
            self.scene.addAnchor(anchorEntity)
        }
    }

    func generateTextResource(text: String, textColor: UIColor) -> TextureResource? {
        if let image = text.image(withAttributes: [NSAttributedString.Key.foregroundColor: textColor], size: CGSize(width: 18, height: 18)),
           let cgImage = image.cgImage {
            let textureResource = try? TextureResource(image: cgImage, options: TextureResource.CreateOptions.init(semantic: nil))
            return textureResource
        }
        return nil
    }
}

I tap the floor and get a cube with '1' as shown below. The background color of the cube is black, I guess. Where does this color come from, and how can I change it to, say, red? Thanks.
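A likely cause, offered as a guess: the texture produced by the String.image(withAttributes:size:) helper (not shown in the post) only draws the glyphs, so the rest of the image is empty and samples as black on the unlit material. One way to get a red cube is to fill the background before drawing the digit; the helper below is illustrative, not the poster's original extension.

import UIKit
import RealityKit

// Draw the digit onto a solid background so the texture has no empty pixels.
func makeTextTexture(_ text: String,
                     textColor: UIColor = .white,
                     background: UIColor = .systemRed,
                     size: CGSize = CGSize(width: 128, height: 128)) -> TextureResource? {
    let image = UIGraphicsImageRenderer(size: size).image { context in
        background.setFill()
        context.fill(CGRect(origin: .zero, size: size))

        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: size.height * 0.6),
            .foregroundColor: textColor
        ]
        let textSize = (text as NSString).size(withAttributes: attributes)
        let origin = CGPoint(x: (size.width - textSize.width) / 2,
                             y: (size.height - textSize.height) / 2)
        (text as NSString).draw(at: origin, withAttributes: attributes)
    }
    guard let cgImage = image.cgImage else { return nil }
    return try? TextureResource(image: cgImage, options: .init(semantic: .color))
}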
Replies: 4 · Boosts: 0 · Views: 132 · Activity: Jul ’25
Keyboard Tracking
Hi! I'm currently experimenting on Apple Vision Pro with hand and head anchors. Is there a way to get an anchor attached to the Apple Magic Keyboard (since the system already detects it in order to display text input above it)? Thanks in advance, and have a good day!
Replies: 1 · Boosts: 0 · Views: 637 · Activity: Jul ’25
Perspective problem
Hi, I'm calling this a "perspective problem", but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the camera extrinsics and the device anchor to work out where to place an entity with my model. When I place an entity so that it overlaps the physical object and then look at it from different angles, the virtual object appears to move. Initially I thought something was wrong with my calculations, or that image distortion toward the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so that the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same. Is that expected, because of some passthrough correction magic? And if so, can I somehow correct for it so the entity always overlaps the object? I'm currently on visionOS 26 beta 5. I also don't quite understand the camera extrinsics, because it seems I need to flip them around X by 180 degrees to make deviceAnchor * extrinsics.inverse * tag work (shouldn't they be in the same coordinates as everything else in RealityKit?).
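On the extrinsics question only, a sketch of the composition described above. A common reason a 180° flip about X is needed is the convention mismatch between ARKit-style camera space (+Y up, -Z forward) and the computer-vision convention most tag detectors use (+Y down, +Z forward). All names below are placeholders for the poster's own matrices, and this does not address the parallax question.

import simd

// Bridge between the CV camera convention and the ARKit camera convention.
let flipX = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))

// world ← device ← camera ← tag
func worldFromTag(worldFromDevice: simd_float4x4,    // deviceAnchor.originFromAnchorTransform
                  cameraFromDevice: simd_float4x4,   // the camera extrinsics
                  cvCameraFromTag: simd_float4x4) -> simd_float4x4 {
    worldFromDevice * cameraFromDevice.inverse * flipX * cvCameraFromTag
}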
Replies: 3 · Boosts: 0 · Views: 225 · Activity: Aug ’25
visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection in visionOS/ARKit: I am using ARKitSession's PlaneDetectionProvider to detect walls inside a visionOS immersive space. I recorded the position and rotation of the first detected plane, but found that the rotation value deviates depending on the direction the user is facing when the space opens. In other words, even when the plane lies on the same wall, the rotation quaternion comes out differently. I would like to obtain the true orientation of the wall no matter which direction the user is facing when the scan starts, so that virtual content can be aligned accurately with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. I would also like to know whether the user's initial orientation affects the position information, and if so, how to handle it. Thank you!
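A sketch of one way to compare wall orientation independently of the raw quaternion, under the assumption that a wall PlaneAnchor's local Y axis is the plane normal: project that normal onto the horizontal plane and derive a heading. If two anchors on the same wall still produce different headings with this, the transforms really are being expressed relative to different origins.

import ARKit
import simd

// Heading of the wall's normal around the world up axis, independent of how the
// anchor's in-plane axes happen to be oriented.
func wallHeading(for anchor: PlaneAnchor) -> Float {
    let transform = anchor.originFromAnchorTransform
    let normal = simd_normalize(SIMD3<Float>(transform.columns.1.x,
                                             transform.columns.1.y,
                                             transform.columns.1.z))
    return atan2(normal.x, normal.z)
}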
Replies: 1 · Boosts: 0 · Views: 466 · Activity: Sep ’25
Rendering scene in RealityView to an Image
Is there any way to render a RealityView to an Image/UIImage, the way we used to be able to with SCNView.snapshot()? ImageRenderer doesn't work because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with the camera background and 3D scene content exactly the way the user sees it. I tried UIHostingController and UIGraphicsImageRenderer, like this:

extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        }
    }
}

but that freezes the app and produces an endless stream of "[CAMetalLayer nextDrawable] returning nil because allocation failed" messages. The same thing happens when I try:

return renderer.image { ctx in
    view.layer.render(in: ctx.cgContext)
}

Now that SceneKit is deprecated, I didn't want to start a new app on deprecated APIs.
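Not an answer for RealityView itself, but if the project can host its content in an ARView instead, ARView has a dedicated snapshot API that captures the rendered frame including the camera background; a minimal sketch:

import RealityKit
import UIKit

// Capture the currently rendered ARView frame, camera background included.
func captureFrame(of arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}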
Replies: 3 · Boosts: 0 · Views: 1k · Activity: Sep ’25
How to best manage ARKitSession in concurrent code
I have a visionOS app in which I instantiate an ARKitSession and several providers (HandTrackingProvider and WorldTrackingProvider) in my app model. That way I can pass these providers to a Task that runs a gRPC server for sending their data to a client. When the user enters the app's immersive space, the ARKitSession runs the providers if they are not running already. I am now trying to adopt AccessoryTrackingProvider with the PSVR Sense controllers, but it does not fit my current setup because the controllers may not be connected when ARKitSession.run is called, so I need to find a new place to start the session. My questions: if I already have a session running the hand and world tracking providers, can I start another session to run the accessory tracking? Should they all run on the same session? Is there a way to stop the session and restart it when the controllers connect? When I tried that, I got an error saying "It is not possible to re-run a stopped data provider (<ar_hand_tracking_provider_t: ", but if I instantiate a new HandTrackingProvider, the one passed to the gRPC task is no longer the one running in the new session. Any advice on how best to manage the various providers and ARKit sessions would be greatly appreciated.
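A sketch of one pattern, not a confirmed recommendation: leave the long-lived session for hand and world tracking untouched, and start a separate ARKitSession later that runs only the accessory provider once controllers connect. Because a data provider cannot be re-run after its session stops, a fresh provider is created for each run; the AccessoryTrackingProvider initializer shown here is an assumption.

import ARKit

@MainActor
final class TrackingModel {
    let mainSession = ARKitSession()
    let handTracking = HandTrackingProvider()
    let worldTracking = WorldTrackingProvider()

    private var accessorySession: ARKitSession?

    // Run once when entering the immersive space; never restarted.
    func startMainTracking() async throws {
        try await mainSession.run([handTracking, worldTracking])
    }

    // Call whenever controllers (re)connect; uses a fresh session and provider each time.
    func startAccessoryTracking(with accessories: [Accessory]) async throws {
        accessorySession?.stop()   // stop any previous accessory-only session
        let session = ARKitSession()
        let provider = AccessoryTrackingProvider(accessories: accessories)   // assumed initializer
        try await session.run([provider])
        accessorySession = session
    }
}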
Replies: 1 · Boosts: 0 · Views: 149 · Activity: 2w
Access Main Camera not working in visionOS 26.1
I downloaded the official sample project “Accessing the Main Camera”, but I found that it’s not able to retrieve the camera feed on visionOS 26.1. After checking the debug logs, it seems the issue is caused by the system being unable to find the expected format. I tested on a device running visionOS 2, and the camera feed worked correctly, but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions. Has anyone managed to successfully access the camera feed on visionOS 26.1?
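For what it's worth, a sketch that queries whatever formats the current OS actually reports instead of assuming the one the sample hard-codes, and logs them so you can see what 26.1 returns. It assumes the Enterprise main-camera entitlement is in place and does not confirm the cause of the 26.1 behavior.

import ARKit

func startMainCameraFeed() async throws {
    // See what formats this OS version actually reports before picking one.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    print("Supported main camera formats:", formats)
    guard let format = formats.first else { return }

    let provider = CameraFrameProvider()
    let session = ARKitSession()
    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            _ = sample.pixelBuffer   // hand off to your processing pipeline
        }
    }
}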
Replies: 4 · Boosts: 0 · Views: 696 · Activity: 4d
Access to the raw LiDAR point cloud
Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth information, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:

guard let frame = arView.session.currentFrame,
      let depthData = frame.sceneDepth?.depthMap else {
    print("Depth data is unavailable.")
    return
}

but this is the depth data after sensor fusion occurs, and it fails in low-light conditions.
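As far as I know, ARKit does not expose the LiDAR samples before fusion; sceneDepth and smoothedSceneDepth are the only depth APIs. What it does expose is a per-pixel confidence map, which at least shows how much of the fused depth is trustworthy in a given frame. A minimal sketch:

import ARKit

// Inspect the fused depth and its per-pixel confidence (requires the
// .sceneDepth frame semantic on the session configuration).
func inspectDepth(of frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth,
          let confidence = sceneDepth.confidenceMap else { return }

    let depthMap = sceneDepth.depthMap   // 32-bit float, metres
    print("depth:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))
    print("confidence format:", CVPixelBufferGetPixelFormatType(confidence))
}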
Replies: 1 · Boosts: 0 · Views: 968 · Activity: Nov ’24
VisionOS DockingRegion getting ignored
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro. Is there anything else I should do besides loading the Entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion out of the usda file and somehow enable it?
Replies: 3 · Boosts: 0 · Views: 482 · Activity: Dec ’24
Plane Model Falling into Infinity in RealityKit Immersive Space
We have a plane model (baseCircle) without physics or rigid body components, and no gestures are implemented. However, when tapped, the model unexpectedly falls into infinity.

func fetchEnvResource() {
    var simpleMaterial = SimpleMaterial()
    env = try! Entity.loadModel(named: "bgMain5")
    env.position += [0, 0, -10]
    envTexture = PhysicallyBasedMaterial()
    envTextureUnlit = UnlitMaterial()
    envTexture.baseColor = .init(texture: .init(try! .load(named: "bgMain5")))
    envTextureUnlit.color = .init(texture: .init(try! .load(named: "bgMain5")))
    env.isEnabled = false

    let anchor = AnchorEntity(world: [0, 0, -3])
    baseCircle = ModelEntity(
        mesh: .generatePlane(width: 1.5, depth: 1.5, cornerRadius: 0.75),
        materials: [SimpleMaterial(color: .green, isMetallic: false)]
    )
    env.components.set(InputTargetComponent())

    baseMaterial = PhysicallyBasedMaterial()
    baseMaterialUnlit = UnlitMaterial()
    baseMaterial.baseColor = .init(texture: .init(try! .load(named: "groundTexture")))
    baseMaterial.baseColor.tint = UIColor(white: 1.0, alpha: CGFloat(textureOpacity))
    baseCircle.model?.materials = [baseMaterial]
    baseCircle.generateCollisionShapes(recursive: false)
    baseCircle.components.set(InputTargetComponent())
    baseCircle.components[PhysicsBodyComponent.self] = .init(
        PhysicsBodyComponent(massProperties: .default, mode: .static)
    )
    baseCircle.physicsBody = PhysicsBodyComponent(mode: .kinematic)
    anchorEntity.addChild(baseCircle)
    baseCircle.position = [0, 0, -3]
    baseCircle.isEnabled = false

    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 0.2, radius: 0.5),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    cylinder.position = [0, -0.1, 0]
    cylinder.generateCollisionShapes(recursive: false)
    cylinder.components[PhysicsBodyComponent.self] = .init(
        PhysicsBodyComponent(massProperties: .default, mode: .static)
    )
    cylinder.physicsBody = nil
    cylinder.scale = [500, 100, 100]
    anchor.addChild(cylinder)
}

The plane model in this issue is baseCircle. Any suggestions on how to solve this or potential fixes would be greatly appreciated.
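One thing that stands out in the snippet above, offered as a guess rather than a confirmed cause: baseCircle is given a physics body twice, first as .static via its components and then as .kinematic via physicsBody, so its final behavior is ambiguous. A sketch of a single, explicit static setup that should not respond to gravity:

import RealityKit

// One physics body, set exactly once, in static mode.
func makeStaticPlate() -> ModelEntity {
    let plate = ModelEntity(
        mesh: .generatePlane(width: 1.5, depth: 1.5, cornerRadius: 0.75),
        materials: [SimpleMaterial(color: .green, isMetallic: false)]
    )
    plate.generateCollisionShapes(recursive: false)
    plate.components.set(InputTargetComponent())
    plate.components.set(PhysicsBodyComponent(massProperties: .default, mode: .static))
    return plate
}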
Replies: 1 · Boosts: 0 · Views: 386 · Activity: Dec ’24
visionOS: passthrough through broadcast shows a black background
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement on the main project and the "Broadcast Upload" extension, too. The broadcast works except it returns a black screen. I am attaching some screenshots below of the entitlement file. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.

import Foundation
import AVFoundation
import ReplayKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345
        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")
        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self),
                                    maxLength: sizeData.count)
            }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self),
                                    maxLength: imageData.count)
            }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}

This is the broadcast picker view wrapper:

// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) { }
}

The extension SampleHandler:

override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}

Looking forward to any and all help.
Information property list:
Information property list for the extension:
The capabilities:
Replies: 1 · Boosts: 0 · Views: 500 · Activity: Dec ’24
Vision - Time travel door
Hello all, we want to build a scene that works like a time-travel door: when the user selects the scene, they pass through the door and the new scene is revealed. The transition in the middle needs to feel natural, and it would be even better if the user could walk through it in an immersive space. There is very little information about this. How can I get started? Is there any material I can refer to? Thanks.
Replies: 2 · Boosts: 0 · Views: 555 · Activity: Dec ’24
How to Implement a Curved Surface Effect for Video Playback and Allow Dynamic Width Adjustment in visionOS?
Dear Apple Engineers, I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface. I have the following specific questions:
- How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
- How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
- Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
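A minimal sketch of one approach to the first two questions, not an official recipe: generate a cylindrical strip mesh whose arc width is a parameter (regenerate the mesh when the user adjusts it) and map the video with evenly spaced UVs along the arc so the content is not stretched. Segment counts and sizes below are arbitrary.

import RealityKit
import AVFoundation

// Build a curved screen as a cylindrical strip and apply a VideoMaterial.
func makeCurvedVideoScreen(player: AVPlayer,
                           radius: Float = 2.0,
                           arcAngle: Float = .pi / 2,   // adjust to change the apparent width
                           height: Float = 1.0,
                           segments: Int = 64) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append(SIMD3(x, -height / 2, z))   // bottom edge
        positions.append(SIMD3(x,  height / 2, z))   // top edge
        // Evenly spaced along the arc; flip the V values if the video appears upside down.
        uvs.append(SIMD2(t, 1))
        uvs.append(SIMD2(t, 0))
    }
    for i in 0..<segments {
        let a = UInt32(i * 2), b = a + 1, c = a + 2, d = a + 3
        indices.append(contentsOf: [a, c, b,  b, c, d])
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}

Call player.play() separately; to let the user change the width, regenerate the mesh with a new arcAngle and swap it into the entity's ModelComponent.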
Replies: 0 · Boosts: 0 · Views: 521 · Activity: Dec ’24
Is it possible to create a sentence hover effect in Vision Pro?
I want a custom hover effect on a sentence, not on a button: when the user looks at one sentence out of many, that sentence should get a hover effect. I searched the reference videos https://youtu.be/DftRTx1oX6E and https://developer.apple.com/videos/play/wwdc2023/10110/ on Apple's YouTube channel and in the visionOS documentation, but I haven't gotten anywhere near the feature I want. I would be grateful for any help. :)
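A sketch of one way to approach this in SwiftUI on visionOS: hover effects apply to whole interactive views, so splitting the text into one view per sentence and giving each its own hover shape yields a per-sentence highlight. Layout and styling below are arbitrary, and a wrapping layout would be needed for true paragraph flow.

import SwiftUI

struct SentenceHoverText: View {
    let sentences: [String]

    var body: some View {
        VStack(alignment: .leading, spacing: 4) {
            ForEach(sentences, id: \.self) { sentence in
                Text(sentence)
                    .padding(4)
                    .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 6))
                    .hoverEffect(.highlight)
                    .onTapGesture { }   // hover effects only appear on interactive views
            }
        }
    }
}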
Replies: 1 · Boosts: 0 · Views: 417 · Activity: Jan ’25