Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic


Tracking multiple ImageAnchors simultaneously on visionOS
Using the example code posted here: https://developer.apple.com/documentation/visionOS/tracking-images-in-3d-space I can register multiple ReferenceImages with an ImageTrackingProvider, but only one updates at a time: to get real-time updates, I can only have one ImageAnchor in my field of view at a time. Is it possible to track multiple ImageAnchors at the same time in the same field of view? That is, can several ImageAnchors be tracked, with entities updated to the anchors' transforms, in the same frame/moment on the Apple Vision Pro?
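For reference, a minimal sketch of the bookkeeping that keeps one entity per ImageAnchor ID so several reference images can be handled from a single anchorUpdates stream. Whether the system actually delivers updates for more than one anchor per frame is the open question above; the function name and placeholder entities are illustrative assumptions.

```swift
import ARKit
import RealityKit

/// Mirror each detected ImageAnchor with its own entity, keyed by anchor ID.
func trackImages(session: ARKitSession, root: Entity, referenceImages: [ReferenceImage]) async throws {
    let provider = ImageTrackingProvider(referenceImages: referenceImages)
    try await session.run([provider])

    var entitiesByAnchorID: [UUID: Entity] = [:]

    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added:
            let entity = Entity()   // placeholder visual, one per detected image
            entitiesByAnchorID[anchor.id] = entity
            root.addChild(entity)
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        case .updated:
            // Nothing here limits updates to a single anchor; each ID is handled independently.
            entitiesByAnchorID[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
        case .removed:
            entitiesByAnchorID[anchor.id]?.removeFromParent()
            entitiesByAnchorID[anchor.id] = nil
        }
    }
}
```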
Replies: 2 · Boosts: 1 · Views: 213 · Activity: 2w
Enterprise API with Education Account
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student at a university, and I have two problems: If I want to use passthrough in screen capture (in visionOS), do I have to join the Apple Developer Enterprise Program to get the Enterprise API? And can I buy the Apple Developer Enterprise Program (Enterprise API) with my university account? Have any of you been able to do this? Thank you.
Replies: 2 · Boosts: 1 · Views: 227 · Activity: Jul ’25
AR session fails with "Required sensor failed"
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:

Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." NSLocalizedFailureReason="A sensor failed to deliver the required input." NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."

The underlying error seems to point to the CoreMotion framework: Domain=CMErrorDomain Code=102 "(null)"

Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience this issue. I also noticed that this issue happens far more often on the iPhone 16e than on any other device. Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
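A minimal sketch of one way to handle the failure without surfacing it to the user, assuming a UIKit ARSCNView-style setup; the delegate class, one-second retry delay, and reset options are illustrative choices, not a known fix for the underlying sensor issue.

```swift
import ARKit

/// Observes session failures and quietly re-runs the session on a sensor failure.
final class ARSessionRecoveryDelegate: NSObject, ARSessionDelegate {
    weak var session: ARSession?
    private let configuration: ARWorldTrackingConfiguration = {
        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravity   // matches the setup described above
        return config
    }()

    func session(_ session: ARSession, didFailWithError error: Error) {
        // Only react to the "Required sensor failed" case (ARError code 102).
        guard let arError = error as? ARError, arError.code == .sensorFailed else { return }
        print("Required sensor failed: \(error.localizedDescription)")

        // Attempt a quiet recovery: re-run the session instead of leaving a frozen view.
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
            guard let self, let session = self.session else { return }
            session.run(self.configuration, options: [.resetTracking, .removeExistingAnchors])
        }
    }
}
```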
Replies: 0 · Boosts: 1 · Views: 183 · Activity: Aug ’25
When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
I'd like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a picture-in-picture overlay. This would work great for an internet streaming use case. However, it's impossible: as soon as ARKit is told to use one mode, the camera for the other side freezes/doesn't work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc

A question to the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay? It's possible to do this with plain camera initialization without ARKit (there's an official example), but with ARKit it no longer works. It's strange that I cannot access the front feed via one of the other frameworks, but I guess ARKit blocks that.
Replies: 3 · Boosts: 0 · Views: 1.1k · Activity: Dec ’24
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to specific locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with an algorithm that uses the device GPS to line up the ARCamera with our objects.

Question: Does geo-tracking always provide accuracy greater than or equal to that of world tracking for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
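A minimal sketch of the fallback decision, using ARGeoTrackingConfiguration.checkAvailability to choose a configuration at runtime; the completion-based wrapper is an illustrative shape, and it does not answer the accuracy question above.

```swift
import ARKit

/// Pick geo tracking when it is available at the current location, otherwise world tracking.
func makeConfiguration(completion: @escaping (ARConfiguration) -> Void) {
    guard ARGeoTrackingConfiguration.isSupported else {
        completion(ARWorldTrackingConfiguration())
        return
    }
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        if available {
            completion(ARGeoTrackingConfiguration())
        } else {
            // Outside a geo-tracking region (or the availability check failed):
            // fall back to plain world tracking.
            if let error { print("Geo tracking unavailable: \(error.localizedDescription)") }
            completion(ARWorldTrackingConfiguration())
        }
    }
}
```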
Replies: 3 · Boosts: 0 · Views: 1k · Activity: Oct ’25
Vision Pro - Throw object by hand
Hello all, I'm desperate to find a solution and I need your help, please. I've created a simple cube in visionOS. I can grab it by hand (closing my hand on it) and move it pretty much wherever I want. But I would like to throw it (for example, like a basketball): not push it, but hold it in my hand and throw it away from me with a velocity and direction matching my hand's movement (fingers opening to release it). Please point me in the right direction. Cheers and thanks, Mathis
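A minimal sketch of the usual approach, assuming hand positions are sampled elsewhere (for example from hand tracking): estimate the release velocity from recent samples and hand it to the physics simulation. The function name, two-sample velocity estimate, and component setup are illustrative assumptions.

```swift
import RealityKit

/// Give the held entity the hand's velocity at the moment the fingers open.
func release(_ entity: Entity,
             previousHandPosition: SIMD3<Float>,
             currentHandPosition: SIMD3<Float>,
             deltaTime: Float) {
    // Velocity ≈ displacement / elapsed time between the two hand samples.
    let velocity = (currentHandPosition - previousHandPosition) / max(deltaTime, 0.001)

    // Switch the entity from kinematic (following the hand) to dynamic so gravity applies...
    if var body = entity.components[PhysicsBodyComponent.self] {
        body.mode = .dynamic
        entity.components.set(body)
    }

    // ...and give it the hand's velocity so it keeps flying after release.
    entity.components.set(PhysicsMotionComponent(linearVelocity: velocity))
}
```

In practice, averaging the velocity over several recent samples (rather than just the last two) tends to give a smoother, more predictable throw.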
Replies: 9 · Boosts: 0 · Views: 1.5k · Activity: Feb ’25
visionOS 2.0 Main Camera Access Enterprise Entitlement Not Recognized in Xcode
I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement; we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file and set it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab. I am getting this error:

Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement.

I have checked the provisioning profile settings online, and there is no manual option for adding the main camera access entitlement, and it does not seem to be getting approval from the license.
Replies: 6 · Boosts: 0 · Views: 1.5k · Activity: Sep ’25
Issues setting up the Enterprise API entitlements (Main Camera Access)
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.

What happens: when executing code from this WWDC tutorial, I get this error when trying to use a CameraFrameProvider:

ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}

What I've tried: I followed the instructions given by mail:
- added the .license file at the root of my project;
- added the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there);
- added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned);
- followed the advice from those posts.

When checking the account settings, I do see the capabilities under "Additional Capabilities". On first launch I'm also prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist is valid. What did I miss to get the entitlements working?

Here's the code:

```swift
import ARKit
import SwiftUI
import Vision
import RealityKit

class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}
```

Thanks in advance for the help,
Jeremy
Replies: 3 · Boosts: 0 · Views: 655 · Activity: Dec ’24
Crash: offlineFloorPlanGeneration
Lately I've been getting a lot of crashes on iOS 18 devices (mostly iPhone 13 Pro devices, but also an iPhone 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?

com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
Replies: 1 · Boosts: 0 · Views: 624 · Activity: Dec ’24
Image Anchoring Not Working Outside Reality Composer/Reality Composer Pro
Hi everyone, I'm having trouble with image anchoring when working on a project in Reality Composer and Reality Composer Pro. Here's the issue:

1. What I'm trying to achieve: I want to create an AR scene where an object anchors to an image I provide. I don't want to create an app for this, but just use the USDZ file the scene creates. The USDZ file should then be viewable via the various integrations of AR Quick Look across the Apple ecosystem. The image anchoring works perfectly when I preview the scene inside Reality Composer using AR mode.

2. The problem: When I export the project (tried both USDZ and Reality formats) and open it on my iPhone using the Files app (which uses AR Quick Look), the image anchoring no longer works. The object doesn't anchor to the provided image as expected; it just anchors to the first plane it recognizes, not the image.

3. What I've tried:
- Exporting the scene in USDZ format.
- Exporting the scene in Reality format.
- Both formats result in the same issue: no image anchoring outside of the Reality Composer environment.
- Trying different images, all with the same result: the image anchoring does not work.
- Trying different iOS versions, with the same issue.

4. Current setup:
- Reality Composer Pro version: 2.0
- iPhone model: iPhone 13 Pro
- iOS version: 18.1

5. What I need help with:
- Is there a way to ensure image anchoring works in exported files when opened via AR Quick Look?
- Do I need to configure something specific during the export process?
- Are there limitations in AR Quick Look that prevent image anchoring from functioning correctly?
- Do I need to create an app to make this work?

I'd appreciate any advice or insights from the community. If anyone has experience with similar issues or knows of a workaround, please let me know! Thanks in advance, Mav
Replies: 2 · Boosts: 0 · Views: 676 · Activity: Dec ’24
visionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics

I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display? What is the coordinate system in which this offset is defined: which axis is left, which one is up, which one is forward?

The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's 2 cm or 5 cm from the "middle"; it looks more like 5 cm, so I assume that the z axis points to the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm to the front of the user and 2 cm to the bottom; which of x and y is horizontal, and which one vertical?

How is the camera image indexed: is it row-major, and is the origin at the top left? I am looking forward to learning all the details of these extrinsics in order to make use of them.
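A minimal sketch of how the pieces are commonly combined, assuming a DeviceAnchor from a WorldTrackingProvider. Treating the extrinsics as the device-to-camera transform is an assumption here (it matches the combination quoted in a later post on this page), not a documented guarantee.

```swift
import ARKit
import simd

/// Combine the device anchor's world pose with the camera extrinsics to get the
/// physical camera's pose in world space, then read off its position.
func cameraWorldTransform(deviceAnchor: DeviceAnchor,
                          extrinsics: simd_float4x4) -> simd_float4x4 {
    // Assumption: extrinsics maps device-anchor space to camera space, so its
    // inverse goes camera -> device; chaining with the device anchor yields world-from-camera.
    let worldFromCamera = deviceAnchor.originFromAnchorTransform * extrinsics.inverse

    // The last column holds the camera position in world coordinates; comparing it
    // with the device anchor's position shows the physical offset discussed above.
    let position = SIMD3<Float>(worldFromCamera.columns.3.x,
                                worldFromCamera.columns.3.y,
                                worldFromCamera.columns.3.z)
    print("Physical camera position in world space: \(position)")
    return worldFromCamera
}
```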
Replies: 4 · Boosts: 0 · Views: 818 · Activity: Jan ’25
Create Anchor on Objects from 2D Data
We're developing a visionOS application where we would like to do product recognition (e.g. food items). We have enterprise entitlements and therefore also main camera access for visionOS. We send the live camera frames to a trained CoreML model, from which we receive 2D coordinates as the detection prediction. Now we would like to create a 3D anchor on the detected items so it is visible to the user; the 3D anchor will display the class name of the detected item. How do we transform this 2D coordinate from the model prediction into a 3D anchor?
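A minimal sketch of one common approach: back-project the detected pixel through the pinhole camera model into a camera-space ray, then move it into world space using a camera-to-world transform built from the device anchor and extrinsics. The depth estimate, axis sign conventions, and function name are assumptions to verify against your setup.

```swift
import ARKit
import simd

/// Turn a 2D detection (pixel coordinates) into a 3D world position, given a depth estimate.
func worldPosition(forPixel pixel: SIMD2<Float>,
                   intrinsics: simd_float3x3,
                   cameraToWorld: simd_float4x4,   // e.g. deviceAnchor.originFromAnchorTransform * extrinsics.inverse
                   depth: Float) -> SIMD3<Float> {
    // Pinhole model: fx, fy on the diagonal, principal point (cx, cy) in the last column.
    let fx = intrinsics.columns.0.x
    let fy = intrinsics.columns.1.y
    let cx = intrinsics.columns.2.x
    let cy = intrinsics.columns.2.y

    // Back-project the pixel into a camera-space ray. Image Y grows downward,
    // and the camera is assumed to look along -Z; flip signs if anchors end up behind the camera.
    let rayCamera = SIMD3<Float>((pixel.x - cx) / fx,
                                 -(pixel.y - cy) / fy,
                                 -1)

    // Scale the ray by the depth estimate and move the point into world space.
    let pointCamera = SIMD4<Float>(rayCamera * depth, 1)
    let pointWorld = cameraToWorld * pointCamera
    return SIMD3<Float>(pointWorld.x, pointWorld.y, pointWorld.z)
}
```

The depth could come from the scene mesh (e.g. raycasting along the same ray) rather than a fixed guess; the returned position can then seed a world anchor or an entity transform for the label.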
Replies: 4 · Boosts: 0 · Views: 735 · Activity: Feb ’25
Billboard Entity with AttachmentView
Hey everyone, Happy New Year! I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button: the attachment moves with the billboard, but the collision box around the attachment is not following it. If I position myself perfectly, it works. Video example: https://youtu.be/4d9Vx7K8MmU

```swift
//
//  ImmersiveView.swift
//  Billboard Attachment
//
//  Created by Justin Leger on 1/3/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                           materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and disable it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        } attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {
    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
```
Replies: 5 · Boosts: 0 · Views: 878 · Activity: Jan ’25
How to find the camera transform (or view matrix) in world coordinates from a camera frame
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space on the camera frames captured by CameraFrameProvider. Here is what I have done:

- Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
- Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
- Get the device anchor via worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
- Set up a RealityKit.RealityRenderer to render virtual objects on the captured camera frames:

```swift
let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()

let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105

realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity
```

Virtual objects, which should be visible in the camera frames, are clipped out by this camera transform. If I use deviceAnchor.originFromAnchorTransform as the camera transform instead, virtual objects are rendered on the camera frames but at the wrong positions (I think this is because the camera extrinsics aren't used to adjust the camera to the correct position).

My question is how to use the camera extrinsic matrix for this purpose. Do the camera extrinsics point in a similar orientation to the device anchor, with some minor rotation and position change? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y axis and Z axis are flipped by the extrinsics, so the camera points in the wrong direction.

```swift
simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0],     // X-axis
               [-0.0009778949, -0.9946325, -0.10346654, 0.0],  // Y-axis
               [-0.13066702, 0.10270659, -0.98609203, 0.0],    // Z-axis
               [0.024519, -0.019568002, -0.058280986, 1.0]])   // translation
```
Replies: 3 · Boosts: 0 · Views: 772 · Activity: Jan ’25
RealityView argument type does not conform to protocol 'View'
I'm working on creating a panorama view on Apple Vision Pro. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":

private var realityView: RealityView!

as well as on this line, with the same error message:

private func setupPanoramaScene(for content: RealityView.Content)

What should I put as an argument for RealityView? It doesn't work without arguments either.
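For reference, a minimal sketch of how RealityView is typically declared inside a SwiftUI body on visionOS rather than stored as a property; the panorama sphere below is an illustrative placeholder, not a complete panorama implementation.

```swift
import SwiftUI
import RealityKit

/// RealityView is a SwiftUI view: build it in `body` with a make/update closure
/// instead of storing it in a property like `private var realityView: RealityView!`.
struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            // Build the scene here, e.g. a large sphere viewed from the inside
            // and textured with the panorama image (placeholder material below).
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                                     materials: [UnlitMaterial(color: .gray)])
            sphere.scale *= .init(x: -1, y: 1, z: 1)   // flip so the inside faces the viewer
            content.add(sphere)
        } update: { content in
            // React to SwiftUI state changes here if needed.
        }
    }
}
```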
Replies: 3 · Boosts: 0 · Views: 474 · Activity: Jan ’25
Barcode anchor jitter on Vision Pro due to invalid values from the enterprise barcode scanning API
We're using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we're using is attached below.

```swift
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }

            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])

            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")

                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }

        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z),
                                 materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))

        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)

        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity)
            )
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))

            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])

            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)

            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
```
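One possible direction, sketched under the assumption that the jitter comes from applying raw per-update transforms: also consume .updated events (the code above reacts only to .added) and low-pass filter the pose before applying it. The class name, smoothing factor, and bookkeeping below are illustrative; this mitigates jitter in the UI rather than fixing invalid values from the API.

```swift
import ARKit
import RealityKit
import simd

/// Keeps one plane per barcode and blends its pose toward each new anchor update.
@MainActor
final class SmoothedBarcodeEntities {
    private var entities: [UUID: ModelEntity] = [:]
    private let smoothing: Float = 0.2   // 0 = frozen, 1 = raw anchor values

    func handle(_ update: AnchorUpdate<BarcodeAnchor>, root: Entity) {
        let anchor = update.anchor
        switch update.event {
        case .added:
            let extent = anchor.extent
            let plane = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z),
                                    materials: [UnlitMaterial(color: .green)])
            plane.transform = Transform(matrix: anchor.originFromAnchorTransform)
            entities[anchor.id] = plane
            root.addChild(plane)
        case .updated:
            guard let plane = entities[anchor.id] else { return }
            let target = Transform(matrix: anchor.originFromAnchorTransform)
            // Blend the previous pose toward the new one instead of snapping to it.
            plane.transform.translation += (target.translation - plane.transform.translation) * smoothing
            plane.transform.rotation = simd_slerp(plane.transform.rotation, target.rotation, smoothing)
        case .removed:
            entities[anchor.id]?.removeFromParent()
            entities[anchor.id] = nil
        }
    }
}
```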
Replies: 3 · Boosts: 0 · Views: 514 · Activity: Jan ’25
WorldAnchors added and removed immediately
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back, and then is immediately removed:

dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))

It always leaves me with zero anchors in .allAnchors... the ARKitSession is still active at this point.
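For reference, a minimal sketch of the add-and-observe flow being described here (names are illustrative; this does not work around the removal issue, it just shows where the events above come from).

```swift
import ARKit

/// Add a persistent WorldAnchor and watch its updates across launches.
func addAndObserveWorldAnchor(session: ARKitSession,
                              worldTracking: WorldTrackingProvider,
                              transform: simd_float4x4) async throws {
    try await session.run([worldTracking])

    // The anchor is persisted by the system; its UUID identifies it on relaunch.
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)

    for await update in worldTracking.anchorUpdates {
        print("\(update.event): \(update.anchor.id), tracked: \(update.anchor.isTracked)")
    }
}
```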
Replies: 8 · Boosts: 0 · Views: 712 · Activity: Jan ’25