Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

What's the recommended workflow to recognize and track a 3D object and then anchor 3D models to it? Is there a tutorial for it?
What would be the best way to go about recognizing a physical 3D object and then anchoring digital 3D assets to it? I would also like to use occlusion shaders and masks on the assets. There's a lot of information out there, but the current best practices keep changing, and I'd like to start in the right direction. If there is a tutorial or demo file that someone can point me to, that would be great!
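For reference, a minimal sketch of the usual ARKit object-detection flow, assuming a reference object has already been scanned into an asset-catalog resource group (the group name "ScannedObjects" below is hypothetical):

import ARKit
import RealityKit

// Run world tracking with object detection enabled.
func startObjectDetection(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects", bundle: nil) {
        configuration.detectionObjects = referenceObjects
    }
    // On LiDAR devices, scene-understanding occlusion can mask the digital assets:
    // arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}

// ARSessionDelegate: each recognized object arrives as an ARObjectAnchor, and a
// RealityKit AnchorEntity pinned to it can hold the digital 3D assets.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let objectAnchor as ARObjectAnchor in anchors {
        let anchorEntity = AnchorEntity(anchor: objectAnchor)
        // anchorEntity.addChild(yourModelEntity)
        // arView.scene.addAnchor(anchorEntity)
    }
}

Scanning itself is done with ARObjectScanningConfiguration; Apple's "Scanning and Detecting 3D Objects" sample app covers that step.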
0 replies · 0 boosts · 546 views · Aug ’23
How to control a rigged 3D hand model via hand motion capture?
Hi, I want to control a hand model via hand motion capture. I know there is a sample project and an article, Rigging a Model for Motion Capture, in the ARKit documentation, but that solution is encapsulated in BodyTrackedEntity, and I can't find an appropriate entity for controlling just a hand model. By using VNDetectHumanHandPoseRequest from the Vision framework I can get hand-joint info, but I don't know how to use that info in RealityKit to control a 3D hand model. Do you know how to do that, or do you have any ideas on how it should be implemented? Thanks
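Not an answer to the rigging question itself, but a minimal sketch of the Vision side, assuming frames come from an ARSession and the hand is loaded as a rigged ModelEntity; the retargeting step is left as a hypothetical placeholder because it depends entirely on the rig:

import Vision
import CoreVideo
import RealityKit

let handPoseRequest = VNDetectHumanHandPoseRequest()

// Detect hand joints in a captured frame (results are 2D, normalized image coordinates).
func detectHandJoints(in pixelBuffer: CVPixelBuffer) throws -> [VNHumanHandPoseObservation.JointName: VNRecognizedPoint]? {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    try handler.perform([handPoseRequest])
    guard let observation = handPoseRequest.results?.first else { return nil }
    return try observation.recognizedPoints(.all)
}

// A rigged ModelEntity exposes its skeleton through jointNames / jointTransforms.
// Mapping Vision's 2D joints onto those 3D transforms (depth, bone rotations,
// retargeting) is the part that has no built-in equivalent of BodyTrackedEntity.
func apply(_ points: [VNHumanHandPoseObservation.JointName: VNRecognizedPoint], to hand: ModelEntity) {
    // Hypothetical placeholder: look up a joint's index in hand.jointNames, then
    // write a Transform derived from the detected points into hand.jointTransforms.
}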
0 replies · 0 boosts · 591 views · Aug ’23
Question regarding the coordinate space of bounding boxes reported by VNFaceObservation
I am trying to use VNDetectFaceRectanglesRequest to detect face bounding boxes on frames obtained from ARKit callbacks. My app is locked to portrait device orientation and I pass .right as the orientation to the perform method on VNSequenceRequestHandler, something like:

private let requestHandler = VNSequenceRequestHandler()
private var facePoseRequest: VNDetectFaceRectanglesRequest!
// ...
try? self.requestHandler.perform([self.facePoseRequest], on: currentBuffer, orientation: orientation)

I set .right for the orientation above in the hope that the Vision framework re-orients the buffer before running inference. I then try to draw the returned bounding box on top of the image. Here's my results-processing code:

guard let faceRes = self.facePoseRequest.results?.first as? VNFaceObservation else { return }

let currBufWidth = CVPixelBufferGetWidth(currentBuffer)
let currBufHeight = CVPixelBufferGetHeight(currentBuffer)

// Option 1: assume the reported bounding box is in the coordinate space of the
// orientation-adjusted pixel buffer, so width and height are swapped below.
// Observations: the bounding box turns into a square with equal width and height,
// and it does not cover the entire face, only from the chin to the eyes.
let flippedBB = VNImageRectForNormalizedRect(faceRes.boundingBox, currBufHeight, currBufWidth)

// Option 2: assume the reported bounding box is in the coordinate space of the
// original, un-oriented pixel buffer.
// Observations: the drawn box looks like a rectangle and covers most of the face,
// but it is not always centered on the face and moves around the screen when I
// tilt the device or my head.
let reportedBB = VNImageRectForNormalizedRect(faceRes.boundingBox, currBufWidth, currBufHeight)

In Option 1, the bounding box becomes a square with equal width and height. I noticed that the reported normalized box has the same aspect ratio as the input pixel buffer (1.33), which is why width and height become equal when I swap the width and height parameters in VNImageRectForNormalizedRect. In Option 2, the box seems to be roughly the right size, but it jumps around when I tilt the device or my head.

What coordinate system are the reported bounding boxes in? Do I need to adjust for the Vision framework's flipped y-axis before performing the operations above? And what is the best way to draw these bounding boxes on the captured frame and/or the ARView? Thank you
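For what it's worth, a sketch of one common interpretation (hedged, not verified against this exact setup): Vision reports normalized rects with a lower-left origin in the coordinate space of the orientation-corrected image, so with .right the upright image is bufferHeight wide and bufferWidth tall, and a y-flip is still needed before drawing in UIKit's top-left space:

let uprightWidth = CVPixelBufferGetHeight(currentBuffer)    // buffer height becomes image width after .right
let uprightHeight = CVPixelBufferGetWidth(currentBuffer)
var faceRect = VNImageRectForNormalizedRect(faceRes.boundingBox, uprightWidth, uprightHeight)
faceRect.origin.y = CGFloat(uprightHeight) - faceRect.origin.y - faceRect.height   // flip y for UIKit
// faceRect would then still need scaling from image pixels to the view's point size.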
0 replies · 0 boosts · 449 views · Aug ’23
PerspectiveCamera in portrait and landscape modes
I have an ARView in nonAR cameraMode and a PerspectiveCamera. When I rotate my iPhone from portrait to landscape mode, the size of the content shrinks. For example, the attached image shows the same scene with the phone in portrait and landscape modes; the blue cube is noticeably smaller in landscape. The size of the cube relative to the vertical space (i.e., the height of the view) is consistent in both situations. Is there a way to keep the scene (e.g., the cube) the same size whether I am in portrait or landscape mode?
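One workaround sketch, assuming fieldOfViewInDegrees on PerspectiveCameraComponent is the vertical field of view (which matches the observation that the cube stays a fixed fraction of the view height): scale the FOV with the view's height in points so the on-screen size stays roughly constant across rotation. The baseline numbers below are hypothetical.

import RealityKit
import CoreGraphics

let referenceHeight: CGFloat = 844   // hypothetical: the portrait view height the scene was tuned for
let baseFieldOfView: Float = 60      // vertical FOV chosen for that baseline

// Call this whenever the view's size changes (e.g. from updateUIView).
func updateCamera(_ camera: PerspectiveCamera, for viewSize: CGSize) {
    let scale = Float(viewSize.height / referenceHeight)
    camera.camera.fieldOfViewInDegrees = baseFieldOfView * scale   // small-angle approximation
}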
1 reply · 0 boosts · 534 views · Aug ’23
Detecting Tap Location on detected "PlaneAnchor"? Replacement for Raycast?
While PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I am brainstorming how to detect a tap on one of the planes. In an iOS ARKit application I could run a raycast query against existingPlaneGeometry and make an anchor from the raycast result's world transform, but I have not yet found the visionOS replacement for this. My hunch is that I need to install my own mesh-less plane entities for each PlaneAnchor returned by the PlaneDetectionProvider, target a TapGesture at those entities, and then build a WorldAnchor from the tap location on them (a sketch of that idea follows below). Does anyone have any ideas?
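A sketch of that hunch, assuming the visionOS data providers behave as documented (untested, since the simulator does not yet deliver PlaneAnchors); the axis mapping of the extent box may need adjusting via anchorFromExtentTransform:

import ARKit
import RealityKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

// Create an invisible, tappable entity for each detected plane.
func runPlaneDetection(addingTo root: Entity) async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        guard update.event == .added else { continue }
        let planeAnchor = update.anchor

        let tapTarget = Entity()
        tapTarget.name = "PLANE"
        let extent = planeAnchor.geometry.extent
        // Thin collision box roughly matching the plane's extent.
        tapTarget.components.set(CollisionComponent(shapes: [
            .generateBox(width: extent.width, height: extent.height, depth: 0.01)
        ]))
        tapTarget.components.set(InputTargetComponent())
        tapTarget.transform = Transform(matrix: planeAnchor.originFromAnchorTransform)
        root.addChild(tapTarget)
    }
}

A SpatialTapGesture targeted to those entities can then convert the tap location to scene space and seed a WorldAnchor from it.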
1 reply · 0 boosts · 717 views · Aug ’23
Is there any metadata to correct or rectify the depth maps?
Using the front TrueDepth camera of a fixed-position iPhone 13, I capture the depth map of a static scene through ARKit and then rectify (undistort) it using the inverseLensDistortionLookupTable. Without changing the position of the camera or the scene, the rendered depth maps are randomly different every time I launch the session. Given that I roughly know the ground truth: depending on the session, the depth maps are sometimes reasonably accurate and sometimes severely and non-linearly distorted (up to 10 cm of bias), i.e. one side of the depth frame is still accurate while the other side is off. Is there any solution, or is there any metadata I can use to correct or rectify the depth maps later on?
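Not a confirmed fix, but one thing worth checking is that the rectification always uses the calibration delivered with that session's own frames rather than values cached from an earlier run. A minimal sketch of reading it per frame, assuming ARKit attaches calibration to the front-camera depth data (which it normally does for face-tracking sessions):

import ARKit
import AVFoundation

// Read the TrueDepth calibration that accompanies an ARFrame's depth data.
func calibration(for frame: ARFrame) -> AVCameraCalibrationData? {
    guard let depthData = frame.capturedDepthData,
          let calibration = depthData.cameraCalibrationData else { return nil }
    _ = calibration.intrinsicMatrix                    // 3x3 intrinsics for this capture
    _ = calibration.lensDistortionCenter               // distortion center, in pixels
    _ = calibration.inverseLensDistortionLookupTable   // table for undistorting points
    return calibration
}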
0 replies · 0 boosts · 276 views · Aug ’23
Why does this entity appear behind spatial tap collision location?
I am trying to create a world anchor where a user taps a detected plane. How am I trying this? First, I add an entity to a RealityView like so:

let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)

This:
Declares an anchor that requires a 2 m by 2 m wall to appear in the scene, with continuous tracking
Makes an empty entity and gives it a 2 m by 2 m by 2 cm collision box
Attaches the collision entity to the anchor
Finally adds the anchor to the scene

It appears in the scene like this: it sits right on the wall. Great! I then add a tap gesture recognizer like this:

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }

I assume this:
Makes a world position from where the tap hits the collision entity
Combines that position with the collision plane's rotation to create a Pose3D
Makes a world anchor from that pose (so it can be persisted in a world tracking provider)
Then makes a basic cube entity and gives it that transform

The weird part: the cube doesn't appear on the plane, it appears behind it. Why? What have I done wrong? The x and y of the tap location appear spot on, but something is off about the z position. Also, is there a recommended way to debug this with the available tools? I'm guessing I'll have to file a DTS about this, because feedback on the forum has been pretty low since the labs started.
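A small debugging idea, not a confirmed fix: skip the Pose3D/WorldAnchor round trip for a moment and drop a marker directly at the converted point, which separates "the conversion is off" from "the anchor transform is off" (this assumes entities added to realityViewContent live in the same space that .scene refers to, as the code above already assumes):

// Inside the same .onEnded closure, after computing worldPosition:
let probe = ModelEntity(mesh: .generateSphere(radius: 0.02),
                        materials: [SimpleMaterial(color: .red, isMetallic: false)])
probe.position = worldPosition   // position only, no rotation involved
realityViewContent?.add(probe)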
2 replies · 0 boosts · 998 views · Aug ’23
USDZ Animation works on iPad with ARView but broken in RealityView on visionOS
I have a USDZ file with an animated 3D object. Loading the 3D object into an ARView on iPad, the animation shows and plays as intended. Loading the same 3D object in a RealityView on visionOS results in only part of the animation playing. The part that doesn't play is a mesh animation, and that animation doesn't play correctly when viewed in Reality Composer Pro either. Is this a limitation of the beta software? Will mesh animations be supported on visionOS and in RealityViews? ARView is unavailable on visionOS, so what other options do I have to support the existing animation?
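For reference, a minimal sketch of how the clips are typically driven in a RealityView, which can at least narrow down whether the missing part is a separate animation resource or genuinely unsupported (the file name "model" is hypothetical):

import SwiftUI
import RealityKit

struct AnimatedModelView: View {
    var body: some View {
        RealityView { content in
            // Load the USDZ and start every animation clip it exposes.
            if let entity = try? await Entity(named: "model") {
                content.add(entity)
                for animation in entity.availableAnimations {
                    entity.playAnimation(animation.repeat(), transitionDuration: 0, startsPaused: false)
                }
            }
        }
    }
}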
0 replies · 1 boost · 594 views · Aug ’23
ARKit ImageTracking Distance
Hello, I am developing a project that uses ARKit image tracking in Unity, with a 10 cm square marker. Image tracking only recognizes the image as a marker when the camera is very close to it. What is the maximum recognition distance for ARKit's image tracking? Or is there a way to extend the recognition distance?
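Apple does not document a fixed maximum distance; in practice the usable range scales with the physical size of the marker and with how accurately that size is declared to ARKit (ARFoundation exposes an equivalent physical-size field on its reference image library). A sketch of the native-ARKit side of that configuration, shown here only to illustrate the parameter:

import ARKit
import CoreGraphics

// Declare the marker's real-world width (0.10 m here) so ARKit can judge scale
// and distance correctly; an inaccurate physical size tends to hurt range.
func makeImageTrackingConfiguration(marker: CGImage) -> ARImageTrackingConfiguration {
    let reference = ARReferenceImage(marker, orientation: .up, physicalWidth: 0.10)
    reference.name = "marker"
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = [reference]
    configuration.maximumNumberOfTrackedImages = 1
    return configuration
}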
0 replies · 0 boosts · 244 views · Aug ’23
Vision Pro Camera | Outside view access?
Hi, allow me to explain the use case. I have a business app that uses the camera to scan QR codes, and it also uses a handheld scanner gun to scan barcodes. I need assistance on how I can use either of the above on Apple Vision Pro (visionOS). I have gone through the WWDC23 video and the documentation on what is and isn't available for the camera. Is it possible to access the outside (passthrough) view using RealityView or any other 3D component, or to use existing classes to see what's outside and render it in the UI? Is there any workaround to achieve this? Any help is appreciated. Thanks, Lokesh Sehgal
1 reply · 0 boosts · 1.5k views · Aug ’23
Override RoomPlan coaching prompts...
We want to be able to use our own prompts / coaching for RoomPlan. I see that I can implement the following RoomCaptureSessionDelegate method to observe the RoomCaptureSession.Instruction and then present our own UI to coach the customer:

func captureSession(_ session: RoomCaptureSession, didProvide instruction: RoomCaptureSession.Instruction) {
    Logger.log(.info, category: .roomplan, message: "RoomCaptureSession.Instruction(\(instruction))")
    // Show coaching prompts
}

However, I don't know how to remove the coaching UI provided by the OS. If I disable coaching,

sessionConfig = RoomCaptureSession.Configuration()
sessionConfig.isCoachingEnabled = false

then the callback above is no longer called. I really want a willProvide method from which I could return false to say that I want to show my own UI instead of the UI provided by RoomPlan. Is there a way to provide my own UI for these RoomCaptureSession.Instruction values? Thanks in advance
1 reply · 0 boosts · 394 views · Aug ’23
ARSkeleton3D and LiDAR
Does ARSkeleton3D use LiDAR data? If yes, for what purpose? For example, to place the person in the environment, to improve the estimation of the 3D joint positions, or something else? Is there any documentation on that?
0 replies · 0 boosts · 212 views · Aug ’23
Anchoring Objects to Surfaces in the Shared Space
Hi there! I am working on an app that is designed to be a 3D "widget" that could sit on someone's desk or shelf and stay there even when the user isn't actively interacting with it, as if it were a real object on a shelf. Is there any way to place objects on surfaces for apps running in the Shared Space? From what I understand, the position of a volume is determined by the user and cannot be changed programmatically. I understand Full Space apps have access to anchors and plane-detection data, but I don't want people to have to close everything else to use my app. A few questions:

In the simulator, it seems that when I drag the volume around to try to place it on a surface, geometry inside a RealityView can clip through "real" objects. Is this the expected behavior on a real device too?
If so, could using ARKit in a Full Space to position the volume, then switching back to a Shared Space, be an option?
Also, if the app is closed and reopened, will the volume maintain its position relative to the user's real-world environment?

Thanks!
1 reply · 1 boost · 609 views · Sep ’23
Augmented Reality (AR) measurement app using ARViewContainer: Value of type 'ARViewContainer' has no member 'parentSession'
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {
    @Binding var distance: Float?

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let configuration = ARWorldTrackingConfiguration()
        arView.session.run(configuration)
        arView.addGestureRecognizer(UITapGestureRecognizer(target: context.coordinator, action: #selector(context.coordinator.handleTap(_:))))
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: ARViewContainer

        init(_ parent: ARViewContainer) {
            self.parent = parent
            super.init()
            parent.parentSession.delegate = self   // this is the line the compiler rejects
        }
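For what it's worth, a sketch of one common arrangement: the ARSession belongs to the ARView, so the delegate can be wired up in makeUIView instead of reaching for a parentSession property that the struct never declares (the handleTap body below is only a hypothetical placeholder, using the same types as in the snippet above):

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    arView.session.delegate = context.coordinator   // assign the delegate here
    arView.session.run(ARWorldTrackingConfiguration())
    arView.addGestureRecognizer(UITapGestureRecognizer(target: context.coordinator,
                                                       action: #selector(Coordinator.handleTap(_:))))
    return arView
}

class Coordinator: NSObject, ARSessionDelegate {
    var parent: ARViewContainer

    init(_ parent: ARViewContainer) {
        self.parent = parent   // no parentSession access needed
        super.init()
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        // Hypothetical placeholder: raycast from the tap point and update parent.distance.
    }
}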
0 replies · 0 boosts · 407 views · Sep ’23