Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.


Turn a physical surface into a touchscreen in visionOS
In visionOS, is it possible to detect when a user is touching a physical surface in the real world, and also to project 2D graphics onto that surface? So imagine a windowless 2D app that is projected onto a surface, essentially turning a physical wall, table, etc. into a giant touchscreen? Kind of like this: https://appleinsider.com/articles/23/06/23/vision-pro-will-turn-any-surface-into-a-display-with-touch-control But I want every surface in the room to be touchable and able to display 2D graphics on the face of that surface, not floating in space. So essentially turning every physical surface in the room into a UIView. Thanks!
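Not an answer to the touch part, but for pinning 2D content flush to real surfaces the visionOS route would be plane detection in a Full Space; here is a minimal sketch under that assumption (combining it with hand tracking for the "touch" signal is left open):

import ARKit
import RealityKit

// Minimal sketch, assuming a Full Space app with world-sensing permission:
// detect horizontal and vertical planes and pin flat content onto each surface.
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

func runPlaneDetection(rootEntity: Entity) async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates where update.event == .added {
        let plane = update.anchor
        // Entity positioned flush with the detected surface; 2D content
        // (an attachment or a textured plane mesh) could be parented to it.
        let surfaceEntity = Entity()
        surfaceEntity.transform = Transform(matrix: plane.originFromAnchorTransform)
        rootEntity.addChild(surfaceEntity)
    }
}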
2 replies · 0 boosts · 717 views · Sep ’23
Verifying Image Tracking in Vision Pro Simulator
Hello! I am making a Vision Pro app. It is as simple an AR image-recognition-to-3D setup as possible: load 10 images as ARReferenceImages; a short piece of code saying "if you see this image, display this 3D model on top of the image"; open the AR camera. When I build to iPhone it works just as intended. When I build to the Vision Pro Simulator it, not surprisingly, does not work: only a white rectangle in the "Simulator environment". Note: I can build and run, and I get the "Successfully loaded 10 ARReferenceImages." console print. My question: can I verify somehow that this project will work on a Vision Pro? Best regards
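For what it's worth, a runtime support check is one way to tell whether the current environment (device vs. simulator) can do image tracking at all; a small sketch, with the visionOS branch assuming a native port to ImageTrackingProvider:

import ARKit

// Sketch: check image-tracking support at runtime instead of assuming the
// simulator behaves like the device.
func imageTrackingAvailable() -> Bool {
    #if os(visionOS)
    // Native visionOS path (assumption: the project is ported to the visionOS ARKit API).
    return ImageTrackingProvider.isSupported
    #else
    // iOS/iPadOS path used by the existing ARReferenceImage-based code.
    return ARImageTrackingConfiguration.isSupported
    #endif
}

The simulator is expected to return false on the visionOS branch, so a clean fallback message beats a blank white rectangle; only the headset itself will confirm the full image-to-3D flow.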
1 reply · 1 boost · 652 views · Sep ’23
ARKit Anchor
Hi, I've implemented an ARKit app that displays a USDZ object in the real world. In this scenario, the placement is via image recognition (Reality Composer scene). Obviously, when I don't see the image (QR marker), the app cannot detect the anchor and it will not place the object in the real world. Is it possible to recognize an image (QR marker) and, after placing the object on it, leave the object there? So basically: detect the marker, place the object, and leave the object there, no longer depending on the image (marker) recognition. Thanks
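One common pattern (a sketch, not tied to Reality Composer specifically) is to copy the image anchor's world transform into a plain world anchor the first time the marker is seen, so the model stays put even when the image is no longer tracked:

import ARKit
import RealityKit

// Sketch: when the marker is first detected, re-anchor the model at the image
// anchor's world transform so it no longer depends on continued image tracking.
final class MarkerPlacer: NSObject, ARSessionDelegate {
    weak var arView: ARView?
    var modelEntity: Entity?          // the usdz entity loaded elsewhere (placeholder)
    private var placed = false

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard !placed,
              let imageAnchor = anchors.compactMap({ $0 as? ARImageAnchor }).first,
              let arView = arView,
              let model = modelEntity else { return }
        placed = true

        // AnchorEntity(world:) keeps its pose regardless of marker visibility.
        let worldAnchor = AnchorEntity(world: imageAnchor.transform)
        worldAnchor.addChild(model)
        arView.scene.addAnchor(worldAnchor)
    }
}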
0 replies · 0 boosts · 538 views · Sep ’23
ObjectCaptureView/Session blocks ARSession sceneUnderstanding
I used ObjectCaptureView with an ObjectCaptureSession in different setups, for example nested in a UIViewController so that I was able to deallocate the view and the session after switching to another view. If I then use an ARSession with ARWorldTracking and SceneUnderstanding, the app won't show the overlay mesh anymore. Using SceneUnderstanding without opening the ObjectCaptureView beforehand works fine. Has anyone faced the same issue, or how could I report this to Apple? It seems like a problem with the ObjectCaptureView/Session itself. During the start of the ObjectCaptureSession there are also some logs in the metadata telling me "Wasn't able to pop ARFrame and Cameraframe at the same time"; it shows up 10 or 15 times at every start. I also nested it in an ARSCNView, but that didn't fix it.
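I haven't verified a fix, but one workaround sketch (assuming the issue is stale session state rather than a framework bug, which only Apple can confirm; Feedback Assistant with a sysdiagnose is the reporting route) is to re-run world tracking with full reset options after the ObjectCaptureSession is gone:

import ARKit

// Workaround sketch (unverified): restart world tracking from scratch,
// including the reconstructed mesh, once object capture has been torn down.
func restartWorldTracking(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    session.run(configuration,
                options: [.resetTracking, .removeExistingAnchors, .resetSceneReconstruction])
}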
0 replies · 0 boosts · 463 views · Sep ’23
visionOS Simulator with ARKit Features
I have a few problems with the visionOS Simulator. The visionOS Simulator does not show the world-sensing permission pop-up window, although I have entered the NSWorldSensingUsageDescription in the Info.plist. As a result, my app always crashes in the visionOS Simulator when I want to run the PlaneDetectionProvider or the SceneReconstructionProvider on my AR session, with the error 'ar_plane_detection_provider is not supported on this device.' So I can't test ARKit features in the simulator. But I have already seen a few videos in which ARKit functions, such as plane detection, work in the simulator, as in the video I have linked here. Have you had similar experiences, or does it work for you without any problems? https://www.youtube.com/watch?v=NZ-TJ8Ln7NY&list=PLzoXsQeVGa05gAK0LVLQhZJEjFFKSdup0&index=2
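The error text suggests the simulator simply does not ship these data providers; a defensive sketch is to request world-sensing authorization and check isSupported before running anything, and to treat unsupported providers as a device-only feature:

import ARKit

// Sketch: only run providers the current runtime supports, and request
// world-sensing authorization explicitly (the simulator may never show the prompt).
func runSceneUnderstanding(session: ARKitSession) async throws {
    let authorization = await session.requestAuthorization(for: [.worldSensing])
    guard authorization[.worldSensing] == .allowed else { return }

    var providers: [any DataProvider] = []
    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }
    guard !providers.isEmpty else { return }   // e.g. in the simulator
    try await session.run(providers)
}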
1 reply · 0 boosts · 383 views · Sep ’23
ARKit Pose deprecated in visionOS?
Different results in Xcode Beta 7 and Beta 8 on the visionOS Simulator.

Beta 7: the following compiles but crashes (on the Beta 8 simulator?):

private func createPoseForTiming(timing: LayerRenderer.Frame.Timing) -> Pose? {
    ...
    if let outPose = worldTracking.queryPose(atTimestamp: t) { // will crash (Beta 7 binary on Beta 8 simulator?)
    }
}

Beta 8: compile error "Value of type 'LayerRenderer.Drawable' has no member 'pose'":

private func createPoseForTiming(timing: LayerRenderer.Frame.Timing) -> OS_ar_pose? {
    ...
    if let outPose = worldTracking.queryPose(atTimestamp: t) { // "Value of type 'LayerRenderer.Drawable' has no member 'pose'"
    }
}

Changing Pose to OS_ar_pose for Beta 8 was recognized, but there is a new compile error regarding queryPose. I did notice that the docs say that pose prediction is expensive. I could have sworn there was a deprecation in the headers, but I could not find it. Is Pose deprecated for visionOS? What is the way forward?
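For what it's worth, in the released visionOS SDK this surface is DeviceAnchor queried from the WorldTrackingProvider rather than a Pose, and the drawable takes the anchor directly; a hedged sketch of that shape (the timestamp plumbing from LayerRenderer.Frame.Timing is omitted here):

import ARKit
import CompositorServices

// Hedged sketch: query a DeviceAnchor instead of a Pose and hand it to the drawable.
func updateDevicePose(worldTracking: WorldTrackingProvider,
                      drawable: LayerRenderer.Drawable,
                      presentationTime: TimeInterval) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: presentationTime) else {
        return
    }
    drawable.deviceAnchor = deviceAnchor
    // originFromAnchorTransform is the device pose to build view matrices from.
    _ = deviceAnchor.originFromAnchorTransform
}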
1 reply · 0 boosts · 421 views · Sep ’23
Reconstructing 3D data points from DepthScene
Hello, I am trying to add a feature to my app that allows the user to take a picture, open the image, and, by tapping on the screen, measure a linear distance on the image. According to this thread, by saving the cameraIntrinsicsInverse matrix and the localToWorld matrix, I should be able to get the 3D data points by using the location tapped on the screen and the depth from the SceneDepth API. I can't seem to find a formula using those parameters that allows me to compute the data I am looking for. Any help is appreciated! Thank you!
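For reference, the formula from that approach is essentially cameraPoint = depth * cameraIntrinsicsInverse * (u, v, 1), followed by worldPoint = localToWorld * cameraPoint. A sketch, assuming the tapped pixel is expressed in the resolution the intrinsics were captured at and that the saved localToWorld already includes the image-to-camera-space flip (as in the scene-depth point-cloud sample):

import simd

// Sketch: depth-map pixel + depth value -> world-space point.
func worldPoint(pixel: simd_float2,
                depth: Float,
                cameraIntrinsicsInverse: simd_float3x3,
                localToWorld: simd_float4x4) -> simd_float3 {
    // Unproject to camera space, then transform into world space.
    let cameraPoint = cameraIntrinsicsInverse * simd_float3(pixel.x, pixel.y, 1) * depth
    let world = localToWorld * simd_float4(cameraPoint, 1)
    return simd_float3(world.x, world.y, world.z) / world.w
}

The linear distance between two taps is then simd_distance of the two resulting points.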
0 replies · 0 boosts · 393 views · Sep ’23
iPhone 12 Pro ARKit raw data for VIO
I would like to perform visual-inertial odometry using ARKit raw data on an iPhone 12 Pro. I have two main questions regarding the data. 1. Extrinsic parameters of the IMU and camera: I am unsure which class or function in Swift contains the extrinsic matrix information. I am curious whether any documentation provides the extrinsic matrix, or if using Kalibr is the only way to obtain it. 2. Noise in gyro and accel data: I would like to know whether the accelerometer and gyroscope data from ARKit are filtered values or whether they include bias and noise.
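As far as I know, ARKit exposes the per-frame camera pose and intrinsics but neither the camera-IMU extrinsic matrix nor an unfiltered IMU stream; raw samples come from CoreMotion, and a tool like Kalibr would still be needed for the extrinsics. A sketch of logging both streams (synchronization of the two clocks is left out and would be an assumption):

import ARKit
import CoreMotion

// Sketch: log ARKit camera poses/intrinsics alongside raw CoreMotion IMU samples.
final class VIOLogger: NSObject, ARSessionDelegate {
    private let motion = CMMotionManager()

    func start(session: ARSession) {
        session.delegate = self
        motion.accelerometerUpdateInterval = 1.0 / 100.0
        motion.gyroUpdateInterval = 1.0 / 100.0
        motion.startAccelerometerUpdates(to: .main) { data, _ in
            guard let a = data?.acceleration else { return }
            print("accel", a.x, a.y, a.z)   // raw, unfused samples (include gravity and bias)
        }
        motion.startGyroUpdates(to: .main) { data, _ in
            guard let w = data?.rotationRate else { return }
            print("gyro", w.x, w.y, w.z)
        }
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        print("camera pose", frame.camera.transform,
              "K", frame.camera.intrinsics,
              "t", frame.timestamp)
    }
}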
1 reply · 0 boosts · 265 views · Sep ’23
Is ARKit's 3D object detection reliable enough to use?
Hi, I'm doing research on AR using a real-world object as the anchor. For this I'm using ARKit's ability to scan and detect a 3D object. Here's what I found so far after scanning and testing object detection on many objects: it works best on 8-10 inch objects (basically objects you can place on your desk), and it works best on objects with many visual features/details (which makes sense, just like plane detection). Although things seem to work well and it's exactly the behavior I need, I noticed issues in detection with: different lighting setups, meaning just the direction of the light. I always try to maintain bright room lighting, but I noticed that testing in the morning versus the evening sometimes, if not most of the time, makes detection harder or makes it fail. Different environments: simply moving the object from one place to another makes detection fail or become harder (it takes a significant amount of time to detect it). This isn't the scanning process; this is purely anchor detection from the same .arobject file on the same real-world object. These two difficulties make me wonder if scanning and detecting 3D objects will ever be reliable enough for real-world use. For example, you want to ship an AR app that contains the manual of your product, where the AR app detects the product and points out the location/explanation of its features. Has anyone tried this before? Does your research show the same behavior as mine? Does using LiDAR help with scanning and detection accuracy? So far there doesn't seem to be any information on what ARKit actually does when scanning and detecting; if anyone has more information, maybe I can learn how to make a better scan, or what not. Any help or information regarding this matter that any of you are willing to share will be really appreciated. Thanks
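For reference, this is the standard detection-side setup these observations apply to; "ScannedObjects" is a placeholder asset-catalog group name, and nothing here changes the lighting/environment sensitivity described above:

import ARKit

// Sketch: run world tracking with object detection from bundled .arobject files.
func runObjectDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects =
        ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects", bundle: nil) ?? []
    session.run(configuration)
    // ARObjectAnchor instances arrive via session(_:didAdd:) once a scanned
    // object is recognized in the current environment.
}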
5 replies · 0 boosts · 1.4k views · Sep ’23
AnchorEntity init(anchor:) doesn't exist anymore in Xcode 15?
I just updated my Xcode, as 15 was released today (I've been using the 15 beta for the last couple of months), and this part of the code doesn't compile anymore: let anchor = AnchorEntity(anchor: objectAnchor) The error says "no exact match in call to initializer". It seems like it doesn't accept any parameter, or is only expecting AnchoringComponent.Target. The code was okay and ran well before the update, and I've been using the 15 beta to write and test my current code. I checked the documentation, and it seems initializing AnchorEntity with an existing ARAnchor hasn't been deprecated: https://developer.apple.com/documentation/realitykit/anchorentity/init(anchor:) What is happening here?
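A hedged workaround sketch while this gets sorted out: double-check that the file imports ARKit (the ARAnchor-based RealityKit initializers are only visible with it; that this is the cause here is an assumption), and otherwise fall back to the world-transform initializer, which gives the same static placement but does not follow later anchor updates:

import ARKit        // the ARAnchor-based AnchorEntity initializer needs ARKit in scope
import RealityKit

// Sketch: equivalent static placement if init(anchor:) keeps failing to resolve.
func makeAnchor(for objectAnchor: ARAnchor) -> AnchorEntity {
    AnchorEntity(world: objectAnchor.transform)
}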
1 reply · 0 boosts · 396 views · Sep ’23
Collision handling between the entity and the real environment objects and planes in RealityKit
I'm trying to achieve behaviour similar to the native AR preview app on iOS, where we can place a model and, once we move or rotate it, it automatically detects the obstacles, gives haptic feedback, and doesn't go through the walls. I'm only using devices with LiDAR. Here is what I have so far:

Session setup:

private func configureWorldTracking() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.environmentTexturing = .automatic
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        configuration.sceneReconstruction = .meshWithClassification
    }
    let frameSemantics: ARConfiguration.FrameSemantics = [.smoothedSceneDepth, .sceneDepth]
    if ARWorldTrackingConfiguration.supportsFrameSemantics(frameSemantics) {
        configuration.frameSemantics.insert(frameSemantics)
    }
    session.run(configuration)
    session.delegate = self
    arView.debugOptions.insert(.showSceneUnderstanding)
    arView.renderOptions.insert(.disableMotionBlur)
    arView.environment.sceneUnderstanding.options.insert([.collision, .physics, .receivesLighting, .occlusion])
}

Custom entity:

class CustomEntity: Entity, HasModel, HasCollision, HasPhysics {
    var modelName: String = ""
    private var cancellable: AnyCancellable?

    init(modelName: String) {
        super.init()
        self.modelName = modelName
        self.name = modelName
        load()
    }

    required init() {
        fatalError("init() has not been implemented")
    }

    deinit {
        cancellable?.cancel()
    }

    func load() {
        cancellable = Entity.loadModelAsync(named: modelName + ".usdz")
            .sink(receiveCompletion: { result in
                switch result {
                case .finished:
                    break
                case .failure(let failure):
                    debugPrint(failure.localizedDescription)
                }
            }, receiveValue: { modelEntity in
                modelEntity.generateCollisionShapes(recursive: true)
                self.model = modelEntity.model
                self.collision = modelEntity.collision
                self.collision?.filter.mask.formUnion(.sceneUnderstanding)
                self.physicsBody = modelEntity.physicsBody
                self.physicsBody?.mode = .kinematic
            })
    }
}

Entity loading and placing:

let tapLocation = sender.location(in: arView)
guard let raycastResult = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .horizontal).first else { return }
let entity = CustomEntity(modelName: modelName)
let anchor = AnchorEntity(world: raycastResult.worldTransform)
anchor.name = entity.name
anchor.addChild(entity)
arView.scene.addAnchor(anchor)
arView.installGestures([.rotation, .translation], for: entity)

This loads my model properly and lets me move and rotate it, but I cannot figure out how to handle collisions with the real environment, like walls, and interrupt gestures once my model starts going through them.
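One missing piece, as far as I can tell, is subscribing to collision events between the kinematic model and the scene-understanding mesh; a hedged sketch of that subscription (assuming the collision filters above already let the two collide):

import Combine
import RealityKit
import UIKit

// Sketch: react when the placed model starts touching the reconstructed scene
// mesh, e.g. to fire haptics and stop or undo the active gesture.
var collisionSubscription: Cancellable?

func observeCollisions(in arView: ARView, for entity: Entity) {
    collisionSubscription = arView.scene.subscribe(to: CollisionEvents.Began.self, on: entity) { event in
        // event.entityB is the scene-understanding entity when the model hits a wall or floor.
        UIImpactFeedbackGenerator(style: .medium).impactOccurred()
        // Here one could cancel the translation gesture or restore the entity's
        // last valid transform to keep it out of the wall.
    }
}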
0 replies · 0 boosts · 425 views · Sep ’23
How to convert ARDepthData to AVDepthData with float32 or depth32 disparity. "No Auxiliary Depth Data found"
I want to create a 3D model with a Photogrammetry Session. It works fine with an AVCaptureSession with depth data, but I want to capture a series of frames from ARKit and sceneDepth, which is of type ARDepthData. The depth data is being stored as a TIFF, but I'm still getting the error.

if let depthData = self.arView?.session.currentFrame?.sceneDepth?.depthMap {
    if let colorSpace = CGColorSpace(name: CGColorSpace.linearGray) {
        let depthImage = CIImage(cvImageBuffer: depthData, options: [.auxiliaryDisparity: true, .auxiliaryDepth: true])
        depthMapData = context.tiffRepresentation(of: depthImage, format: .Lf, colorSpace: colorSpace, options: [.disparityImage: depthImage])
    }
}
if let image = self.arView?.session.currentFrame?.capturedImage {
    if let imageData = self.convertPixelBufferToHighQualityJPEG(pixelBuffer: image) {
        self.addCapture(Capture(id: photoId, photoData: imageData, photoPixelBuffer: image, depthData: depthMapData))
    }
}

I get "No Auxiliary Depth Data found" while running the photogrammetry session.
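One alternative that sidesteps the TIFF/AVDepthData conversion entirely is feeding PhotogrammetrySample values straight from the ARFrames, with the ARKit depth map as depthDataMap; a sketch, with the caveat that the sample-based PhotogrammetrySession initializer and the pixel-format requirements should be verified for the target platform (an assumption on my part):

import ARKit
import RealityKit

// Sketch: build photogrammetry samples directly from ARFrames instead of
// writing TIFF depth files to disk.
func makeSample(id: Int, frame: ARFrame) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: frame.capturedImage)
    sample.depthDataMap = frame.sceneDepth?.depthMap   // 32-bit float depth from ARKit
    return sample
}

// Later: try PhotogrammetrySession(input: samples, configuration: .init())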
0 replies · 0 boosts · 425 views · Sep ’23