Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post / Replies / Boosts / Views / Activity

(Each post below is followed by its reply count, boost count, view count, and latest activity date.)

How is the depth map aligned to the RGB image?
I want to know whether the depth map and the RGB image are perfectly aligned (do both have the same principal point)? If so, how is the depth map created? The depth map on the iPhone 12 has a 256x192 resolution, as opposed to the RGB image (1920x1440). I am interested in exact pixel-wise depth. Is it possible to get a raw depth map at 1920x1440 resolution? How is the depth map created at 256x192 resolution? Behind the scenes, does the pipeline capture it at 1920x1440 and then resize it to 256x192? I have so many questions, as there are no intrinsics, extrinsics, or calibration data given for the LiDAR. I would greatly appreciate it if someone could explain the steps from a computer-vision perspective. Many thanks
6
2
3.4k
Aug ’21
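A minimal sketch of where the relevant data lives, assuming a LiDAR device and a configuration whose frameSemantics include .sceneDepth: ARFrame.sceneDepth holds the 256x192 depth map, and ARCamera.intrinsics describes the full-resolution RGB image. Rescaling the intrinsics by the resolution ratio is one common way to relate depth pixels to RGB pixels; Apple does not document the exact capture pipeline, so treat this as an approximation, not a calibration.

    import ARKit
    import simd

    // Approximate intrinsics for the low-resolution depth map, derived by
    // rescaling the RGB intrinsics (assumes depth and RGB share the same
    // principal point and field of view).
    func approximateDepthIntrinsics(for frame: ARFrame) -> simd_float3x3? {
        guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
        let depthWidth  = Float(CVPixelBufferGetWidth(depthMap))    // typically 256
        let depthHeight = Float(CVPixelBufferGetHeight(depthMap))   // typically 192

        let rgbResolution = frame.camera.imageResolution            // e.g. 1920x1440
        let scaleX = depthWidth  / Float(rgbResolution.width)
        let scaleY = depthHeight / Float(rgbResolution.height)

        var intrinsics = frame.camera.intrinsics                    // fx, fy, cx, cy for RGB
        intrinsics[0][0] *= scaleX   // fx
        intrinsics[1][1] *= scaleY   // fy
        intrinsics[2][0] *= scaleX   // cx
        intrinsics[2][1] *= scaleY   // cy
        return intrinsics
    }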
What causes »ARSessionDelegate is retaining X ARFrames« console warning?
Hi, since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped«, even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it? If I remember correctly, I didn't even assign an ARSessionDelegate. Thank you!
4
1
3.3k
Nov ’21
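For context, this warning generally indicates that ARFrame objects are being kept alive past the delegate callback or past the closure/task that received them (each retained frame pins a full camera pixel buffer). A hedged sketch of the usual mitigation, copying out only the values you need instead of storing the frame:

    import ARKit
    import simd

    final class FrameConsumer: NSObject, ARSessionDelegate {
        // Keep lightweight copies, never the ARFrame itself.
        private var lastCameraTransform = matrix_identity_float4x4
        private var lastTimestamp: TimeInterval = 0

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Copy value types out of the frame; do not stash `frame` in a
            // property, a closure, or a long-running async task.
            lastCameraTransform = frame.camera.transform
            lastTimestamp = frame.timestamp
        }
    }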
Is SceneKit deprecated?
Hi everyone! I am working on an AR app and wanted to implement object occlusion because it pretty much removes drift from the object. This works great with the RealityKit sample, but I am unable to replicate that behaviour with SceneKit, because SceneKit does not offer object occlusion. Can we say SceneKit is getting deprecated, and that we should rewrite the app in RealityKit (which is obviously a big task)?
5
0
1.5k
Jan ’22
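Whatever SceneKit's long-term status, occlusion can be approximated there with a material that writes depth but no color. A hedged sketch of that material; building SCNGeometry from the reconstructed ARMeshAnchor mesh is the part SceneKit leaves to you and is not shown here:

    import SceneKit

    // Makes a node invisible while still occluding anything rendered behind it.
    func applyOcclusionMaterial(to node: SCNNode) {
        let occlusion = SCNMaterial()
        occlusion.colorBufferWriteMask = []      // write no color
        occlusion.writesToDepthBuffer = true     // but still write depth
        node.geometry?.materials = [occlusion]
        node.renderingOrder = -1                 // draw before visible content
    }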
When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet streaming use case. However, it’s impossible. As soon as ARKit is told to use one mode, the camera for the other side freezes/doesn’t work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc A question to the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay? It’s possible to do this with plain camera initialization without ARKit. (There’s an official example.) With ARKit, it no longer works. It’s strange that I cannot access the front feed via one of the other frameworks, but I guess that ARKit blocks that.
3
0
1k
Jul ’23
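Each ARKit configuration owns the camera pipeline, so only the feed for the active configuration is delivered; the "official example" mentioned above likely refers to Apple's AVCaptureMultiCamSession sample, which supports simultaneous front and back capture outside of ARKit on supported devices. A hedged sketch of that non-ARKit setup (no outputs, preview layers, or error handling shown):

    import AVFoundation

    // Simultaneous front + back capture without ARKit, on devices that support it.
    func makeMultiCamSession() -> AVCaptureMultiCamSession? {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
        let session = AVCaptureMultiCamSession()
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        for position in [AVCaptureDevice.Position.back, .front] {
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video,
                                                       position: position),
                  let input = try? AVCaptureDeviceInput(device: device),
                  session.canAddInput(input) else { continue }
            session.addInput(input)
        }
        return session
    }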
How do we author a "reality file" like the ones on Apple's Gallery?
How do we author a Reality File like the ones under Examples with animations at https://developer.apple.com/augmented-reality/quick-look/ ? For example, "The Hab": https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer, and I don't see any way to export/compile to a "reality file" from within Xcode. How can I use multiple animations within a single GLTF file? How can I set up multiple "tap targets" on a single object, where each one triggers a different action? How do we author something similar? What tools do we use? Thanks
6
2
1.9k
Nov ’23
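On the "multiple tap targets, each triggering a different action" part of the question above, one code-level alternative to authored behaviors is hit-testing taps in an ARView and playing one of the tapped entity's bundled animations. A hedged sketch; "ButtonA" is a hypothetical entity name and the tappable entities are assumed to have collision shapes:

    import UIKit
    import RealityKit

    final class TapHandler {
        let arView: ARView

        init(arView: ARView) {
            self.arView = arView
            let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
            arView.addGestureRecognizer(tap)
        }

        @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
            let point = recognizer.location(in: arView)
            // Requires collision shapes, e.g. entity.generateCollisionShapes(recursive: true).
            guard let entity = arView.entity(at: point) else { return }
            if entity.name == "ButtonA",                         // hypothetical tap target
               let animation = entity.availableAnimations.first {
                entity.playAnimation(animation)
            }
        }
    }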
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
3
0
1.2k
Mar ’24
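A hedged sketch of one way to read per-face classification values out of the GeometrySource's Metal buffer; it assumes the classifications source stores one UInt8 per face (mirroring the iOS ARMeshGeometry layout), which should be verified against the source's format before relying on it:

    import ARKit

    // Per-face classifications decoded from MeshAnchor.Geometry.classifications.
    func faceClassifications(for meshAnchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
        guard let source = meshAnchor.geometry.classifications else { return [] }
        let base = source.buffer.contents().advanced(by: source.offset)

        var result: [MeshAnchor.MeshClassification] = []
        result.reserveCapacity(source.count)
        for index in 0..<source.count {
            // Assumption: one UInt8 classification value per face.
            let value = base.advanced(by: index * source.stride)
                            .assumingMemoryBound(to: UInt8.self)
                            .pointee
            result.append(MeshAnchor.MeshClassification(rawValue: Int(value)) ?? .none)
        }
        return result
    }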
RoomPlan Framework v2 - Stairs missing
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
3
1
817
Mar ’24
Access persona on visionOS
I am planning to build a visionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
3
0
1.1k
Apr ’24
Object Tracking with RealityView
When I wanted to call the Reality Composer Pro scene containing Object Tracking, I tried the following code:

    RealityView { content in
        if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
            content.add(model)
        }
    }

Obviously, this is wrong. We need to add some configuration that enables Object Tracking for the RealityView. What do we need to add? Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't know much about it.
3
1
1.1k
Jun ’24
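From the WWDC24 session referenced above, the ARKit-side setup (as opposed to the Reality Composer Pro anchoring route) runs an ObjectTrackingProvider with a reference object trained in Create ML. A hedged sketch; the exact type and initializer names below are written from memory and should be checked against the visionOS 2 documentation:

    import ARKit
    import RealityKit

    @MainActor
    final class ObjectTracker {
        let session = ARKitSession()

        func start(referenceObjectURL: URL, target entity: Entity) async throws {
            // Load a .referenceobject produced by the Create ML object tracking template.
            let referenceObject = try await ReferenceObject(from: referenceObjectURL)
            let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
            try await session.run([provider])

            // Follow the tracked object with the RealityView entity.
            for await update in provider.anchorUpdates {
                let anchor = update.anchor
                guard anchor.isTracked else { continue }
                entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
            }
        }
    }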
Vision Pro - Throw object by hand
Hello All, I'm desperate to find a solution and I need your help please. I've created a simple cube in visionOS. I can grab it with my hand (close my hand on it) and move it pretty much wherever I want. But I would like to throw it (for example, like a basketball). Not push it: I want to have it in hand and throw it away from me with a velocity and direction matching my hand movement (and fingers opened to release it). Please point me in the right direction. Cheers and thanks Mathis
9
0
1.5k
Aug ’24
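The usual pattern is to sample the entity's (or hand's) position while it is held, estimate a velocity from the last ~150 ms of samples on release, and hand that to the physics system. A hedged sketch, assuming the cube already has collision shapes and the gesture code calls record(...) each frame and release(...) when the fingers open:

    import RealityKit
    import Foundation
    import simd

    struct ThrowTracker {
        private var samples: [(time: TimeInterval, position: SIMD3<Float>)] = []

        // Call every frame while the entity is held.
        mutating func record(position: SIMD3<Float>, at time: TimeInterval) {
            samples.append((time, position))
            samples.removeAll { time - $0.time > 0.15 }   // keep ~150 ms of history
        }

        // Call once when the hand opens.
        func release(_ entity: ModelEntity) {
            guard let first = samples.first, let last = samples.last,
                  last.time > first.time else { return }
            let velocity = (last.position - first.position) / Float(last.time - first.time)

            entity.components.set(PhysicsBodyComponent(massProperties: .default,
                                                       material: .default,
                                                       mode: .dynamic))
            entity.components.set(PhysicsMotionComponent(linearVelocity: velocity))
        }
    }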
Retrieve AnchorEntity (Hand Locations) Position through update(context: Scene) function
Hello there, I'm currently working on a hand tracking system. I've already placed some spheres on some joint points on the left and right hand. Now I want to access the translation/position value of these entities in the update(context: Scene) function. My question is: is it possible to access them via .handAnchors(), or which .handSkeleton.joint(name) references the same entity? (E.g. is AnchorEntity(.hand(.right, location: .indexFingerTip)) the same as handSkeleton.joint(.indexFingerTip)?) The goal would be to access the translation of the joints where a sphere has been placed per hand, and to be able to update the data every frame through the update(context) function. I would very much appreciate any help! See code example down below:

ImmersiveView.swift

    import SwiftUI
    import RealityKit
    import ARKit

    struct ImmersiveView: View {
        public var body: some View {
            RealityView { content in
                /* HEAD */
                let headEntity = AnchorEntity(.head)
                content.add(headEntity)

                /* LEFT HAND */
                let leftHandWristEntity = AnchorEntity(.hand(.left, location: .wrist))
                let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
                let leftHandWristSphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                                      materials: [SimpleMaterial(color: .red, isMetallic: false)])
                let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                                            materials: [SimpleMaterial(color: .orange, isMetallic: false)])

                leftHandWristEntity.addChild(leftHandWristSphere)
                content.add(leftHandWristEntity)
                leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
                content.add(leftHandIndexFingerEntity)
            }
        }
    }

TrackingSystem.swift

    import SwiftUI
    import simd
    import ARKit
    import RealityKit

    public class TrackingSystem: System {
        static let query = EntityQuery(where: .has(AnchoringComponent.self))

        private let arKitSession = ARKitSession()
        private let worldTrackingProvider = WorldTrackingProvider()
        private let handTrackingProvider = HandTrackingProvider()

        public required init(scene: RealityKit.Scene) {
            setUpSession()
        }

        private func setUpSession() {
            Task {
                do {
                    try await arKitSession.run([worldTrackingProvider, handTrackingProvider])
                } catch {
                    print("Error: \(error)")
                }
            }
        }

        public func update(context: SceneUpdateContext) {
            guard worldTrackingProvider.state == .running && handTrackingProvider.state == .running else { return }
            let _ = context.entities(matching: Self.query, updatingSystemWhen: .rendering)

            if let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: currentTime) {
                let hands = handTrackingProvider.handAnchors(at: currentTime)
                ...
            }
        }
    }
2
0
731
Aug ’24
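On the AnchorEntity vs. handSkeleton question above: the AnchorEntity is a RealityKit entity the system keeps in place for you, while HandAnchor/HandSkeleton is raw ARKit data from the provider; they describe the same physical joint but are separate objects, and only the ARKit path exposes the transform values directly. A hedged sketch of turning a handAnchors(at:) result into a world-space joint position inside update(context:):

    import ARKit
    import simd

    // World-space position of one hand joint from a HandTrackingProvider sample.
    func worldPosition(of jointName: HandSkeleton.JointName,
                       in handAnchor: HandAnchor) -> SIMD3<Float>? {
        guard let skeleton = handAnchor.handSkeleton else { return nil }
        let joint = skeleton.joint(jointName)
        guard joint.isTracked else { return nil }

        // anchor-in-world transform * joint-in-anchor transform
        let world = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        return SIMD3<Float>(world.columns.3.x, world.columns.3.y, world.columns.3.z)
    }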
RoomCaptureSession custom ARSession missing SceneDepth
Hello, we are exploring the iOS 17 RoomPlan updates that allow a custom ARSession to be passed into the RoomCaptureSession via the new initializer:

    let roomCaptureSession = RoomCaptureSession(arSession: myARSession)

Currently we use our ARSession to extract sceneDepth from the ARFrames via the delegate callback. This works prior to activation of the RoomCaptureSession via session.run(configuration). However, when we do call run on the RoomCaptureSession, sceneDepth is no longer present on the incoming ARFrames. Are these mutually exclusive? Should we expect ARFrame depth data to be present when a RoomCaptureSession is running with the shared ARSession?
1
0
731
Sep ’24
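One hedged way to see what actually happened is to inspect the shared session's active configuration from the frame callback after RoomCaptureSession starts; if .sceneDepth is no longer in frameSemantics, the room-capture configuration has replaced the one you set up:

    import ARKit

    // True if the currently running configuration still requests LiDAR scene depth.
    func sessionRequestsSceneDepth(_ session: ARSession) -> Bool {
        session.configuration?.frameSemantics.contains(.sceneDepth) ?? false
    }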
RealityKit/ARKit Environment Texturing broken on iOS 18
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing). Instead, just a default IBL is used by RealityKit. This happens with RealityView as well as ARView. It also happens when I explicitly opt in to environment texturing:

    let worldTrackingConfig = ARWorldTrackingConfiguration()
    worldTrackingConfig.environmentTexturing = .automatic
    arView.session.run(worldTrackingConfig)

Even the Xcode AR template has this issue. I'm attaching a screenshot of the sample app running on iOS 18, where it's broken, and on iOS 17, where it works as expected. I hope this can get resolved quickly since I see it as a major regression. Feedback ID: FB15091335

UPDATE: It works on my older iPhone XS (iOS 18 22A5282m). Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a)). Maybe it's related to LiDAR? Thank you!

iOS 17 (works): [screenshot]
iOS 18 (broken): [screenshot]
3
1
983
Sep ’24
Running multiple ARKitSessions in the same app?
I would like to implement the following, but I am not sure if this is a supported use case based on the current documentation:

1. Run one ARKitSession with a WorldTrackingProvider in Swift for mixed-immersion Metal rendering (to get the device anchor for the layer renderer drawable & view matrix).
2. Run another ARKitSession with a WorldTrackingProvider and a CameraFrameProvider in a different library (that is part of the same app) using the ARKit C API, and use the transforms from the anchors in that session to render objects in the Swift application part.

In general, is this a supported use case, or is it necessary to have one shared ARKitSession? Assuming this is supported, will the (device) anchors from both WorldTrackingProviders reference the same world coordinate system? Are there any performance downsides to having multiple ARKitSessions? Thanks
1
0
574
Sep ’24
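Whether two sessions are supported is the open question above, but for reference the Swift side of that setup is small. A hedged sketch that also makes it easy to compare the two device anchors empirically (if their transforms match, both providers are reporting in the same world origin for that run):

    import ARKit
    import Foundation

    @MainActor
    final class DualSessionProbe {
        let sessionA = ARKitSession()
        let sessionB = ARKitSession()
        let worldTrackingA = WorldTrackingProvider()
        let worldTrackingB = WorldTrackingProvider()

        func start() async throws {
            try await sessionA.run([worldTrackingA])
            try await sessionB.run([worldTrackingB])
        }

        func compareDeviceAnchors(at timestamp: TimeInterval) {
            let a = worldTrackingA.queryDeviceAnchor(atTimestamp: timestamp)
            let b = worldTrackingB.queryDeviceAnchor(atTimestamp: timestamp)
            // Comparing these transforms answers the "same world coordinate
            // system?" question for a given run.
            print(a?.originFromAnchorTransform as Any, b?.originFromAnchorTransform as Any)
        }
    }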
Overwhelming shutter sound when capturing HD Image from ARFrame
Hello, I'm using the ARSessionDelegate function

    func session(_ session: ARSession, didUpdate frame: ARFrame)

to extract an HD image:

    let hdframe = try? await session.captureHighResolutionFrame().capturedImage

which I later use to detect text on the image using Vision. I'm using the HD picture because the text bits I'm looking for can be very tiny.

    let requestHandler = VNImageRequestHandler(cgImage: image) //, orientation: .up, options: [:])
    let textRequest = VNRecognizeTextRequest()
    let vnRequests = [textRequest]
    try requestHandler.perform(vnRequests)

My issue is that each time a captured HD image is extracted from the AR scene, a shutter sound is played. I'm aware that shutter sounds are important for privacy, but I'm doing this at a very high frequency, which means that my app is currently unusable when not muted. My two questions are: Is there any way to disable the sound in this case? Is there a better way to constantly scan the AR video stream for text than this approach?
0
0
272
Sep ’24
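If the regular 1920x1440 capturedImage is sharp enough for the text sizes involved, running Vision directly on that pixel buffer avoids captureHighResolutionFrame() entirely, and with it the shutter sound. A hedged sketch; the fixed .right orientation is an assumption and should be derived from the actual device orientation:

    import ARKit
    import Vision

    // Text recognition on the regular (silent) camera frame instead of the HD capture.
    func recognizeText(in frame: ARFrame) throws -> [String] {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right,   // assumption: portrait device
                                            options: [:])
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        try handler.perform([request])

        return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    }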
Multiple floor levels in one story
In lots of houses there are different levels that are still on the same floor. What I mean is that there are things like stairs at the entrance that only have a few steps and would basically count as the same story. RoomPlan already does a nice job recognizing them during the scanning, but after the StructureBuilder or the optimization step the result is not really satisfying. Has anyone managed to handle those cases? Or do you have to scan in a specific way to capture such small differences within a level?
0
0
536
Sep ’24
Add new joint to ARKit skeleton
Hi everyone, I want to add a new joint in addition to the joints provided by ARKit. For example: extract the positions of the wrist and elbow, then add a new joint between them, in the middle of the arm. I can't find good documentation that explains ARKit very well. If there is other information I can use, please share it with me. Thanks.
0
0
392
Sep ’24
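The body-tracking skeleton itself can't be extended, but extra joints can be derived from existing ones. A hedged sketch that computes the midpoint of two skeleton joints in the body anchor's model space; the raw joint-name strings in the usage comment ("left_forearm_joint", "left_hand_joint") are assumptions and should be confirmed by printing bodyAnchor.skeleton.definition.jointNames:

    import ARKit
    import simd

    // Midpoint between two body-tracking joints, in the body anchor's model space.
    func midpoint(of jointA: String, and jointB: String,
                  in bodyAnchor: ARBodyAnchor) -> SIMD3<Float>? {
        let skeleton = bodyAnchor.skeleton
        let names = skeleton.definition.jointNames
        guard let indexA = names.firstIndex(of: jointA),
              let indexB = names.firstIndex(of: jointB) else { return nil }

        let a = skeleton.jointModelTransforms[indexA].columns.3
        let b = skeleton.jointModelTransforms[indexB].columns.3
        return (SIMD3<Float>(a.x, a.y, a.z) + SIMD3<Float>(b.x, b.y, b.z)) * 0.5
    }

    // Usage (assumed joint names):
    // let midArm = midpoint(of: "left_forearm_joint", and: "left_hand_joint", in: bodyAnchor)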