Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit tag

377 Posts
Post not yet marked as solved
4 Replies
912 Views
I have built a simple Reality Composer project. It is one object, and on scene start it plays some music; when you tap the object, it plays a sound. It works exactly as expected in the preview area in Reality Composer, and it works as expected in Xcode. However, when I export it as a USDZ so that I can easily share it, it only lets me see the object and place and scale it in AR. It doesn't play the audio or respond to taps as expected. Should this work? I did notice on my iPad that there is a "tap to activate" button at the top of the screen, but it either doesn't do anything or it just brings up the area to view it as an object or in AR, or to share it. Thanks for any help or clarification. Dale
Post not yet marked as solved
6 Replies
1.5k Views
Traceback (most recent call last):
  File "/Applications/usdpython/usdzconvert/usdzconvert", line 17, in <module>
    usdUtils.printError("failed to import pxr module. Please add path to USD Python bindings to your PYTHONPATH.")
NameError: name 'usdUtils' is not defined

I kept getting that error and have been unable to solve it. In older versions of usdzconvert, such as 0.62, I just unzipped the tool, ran USD.command, then ran usdzconvert, and it simply worked. But with version 0.63, for a few months now, I have not been able to run the latest usdzconvert 😟 I have reported this to Apple many times and it has not been resolved. Maybe usdzconvert 0.64 can fix this? Or is there any way I can fix the path error myself? Thanks!
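The NameError is a downstream symptom: the `pxr` import fails first, so `usdUtils` never loads. A common workaround (a sketch; the exact paths below assume the default install location and may differ on your machine) is to export the USD Python bindings onto PYTHONPATH yourself before running the tool, which is essentially what USD.command is supposed to do:

```shell
# Assumed install locations -- adjust to wherever usdpython was unzipped.
export PATH="$PATH:/Applications/usdpython/usdzconvert"
export PYTHONPATH="$PYTHONPATH:/Applications/usdpython/USD/lib/python"
echo "$PYTHONPATH"
```

If `usdzconvert` then still cannot import `pxr`, the bindings were likely built for a different Python version than the one on your PATH.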
Post not yet marked as solved
0 Replies
449 Views
When loading multiple scenes (from Reality Composer) into an ARView, the scenes are not anchored in the same space. In this example, scene1 is loaded when the app starts. After the button is pressed, scene2 is added to the scene. In both scenes, the models are placed at the origin and are expected to overlap once scene2 is added to the view. However, the positions of scene1 and scene2 differ when they are added to the arView.

import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!
    @IBOutlet weak var button: UIButton!
    var scene1: Experience.Scene1!
    var scene2: Experience.Scene2!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Load the "Box" scenes from the "Experience" Reality File
        scene1 = try! Experience.loadScene1()
        scene2 = try! Experience.loadScene2()
        // Add the box anchor to the scene
        arView.scene.addAnchor(scene1)
    }

    @IBAction func buttonPressed(_ sender: Any) {
        arView.scene.addAnchor(scene2)
    }
}

Note: this issue does not happen when both scenes are added simultaneously. How can I make sure that both scenes are anchored at the same ARAnchor?
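Each Reality Composer scene loads as its own AnchorEntity, and anchors added at different times may each resolve against a different real-world plane. One way to guarantee a common origin (a sketch, not the only possible fix) is to reparent the second scene's contents under the first scene's already-placed anchor instead of adding a second anchor:

```swift
@IBAction func buttonPressed(_ sender: Any) {
    // Reparent scene2's contents under scene1 so both scenes share
    // one real-world anchor point. Copy the children into an Array
    // first, because addChild(_:) mutates scene2.children while iterating.
    for child in Array(scene2.children) {
        scene1.addChild(child)
    }
}
```

Since content in both scenes is placed at the origin, sharing scene1's anchoring should make the models overlap as expected.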
Post not yet marked as solved
1 Reply
1k Views
Greetings all, I am primarily used to working with ARKit/AR Foundation through Unity, but I am working with some colleagues looking for a lower technical hurdle for some AR, and Reality Composer seemed ideal. Part of the project would ideally bring volumetric captures (from depthkit or holocap, either of which can be exported as .obj sequences or their own Unity-friendly format) into Reality Composer, I presume as animated .usdz files? Anyone have suggestions for this workflow? I would be endlessly grateful! Cheers
*Edit: changed "Reality Capture" to "Reality Composer".*
Post not yet marked as solved
1 Reply
638 Views
For apps built specifically around the new LiDAR sensor, which have little to no use on devices without it, is there an appropriate Required Device Capabilities string?
Post marked as solved
9 Replies
1.1k Views
I am trying to do a hit test of sorts between a person in my ARFrame and a RealityKit Entity. So far I have been able to use the position value of my entity and project it to a CGPoint which I can match up with the ARFrame's segmentationBuffer to determine whether a person intersects with that entity. Now I want to find out if that person is at the same depth as that entity. How do I relate the SIMD3 position value for the entity, which is in meters I think, to the estimatedDepthData value?
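One way to make the two values comparable (a sketch; the `frame` and `entity` names are assumed for illustration) is to transform the entity's world position into the camera's coordinate space and take the negated z component. That gives the entity's distance in front of the camera in meters, the same unit the depth buffer reports:

```swift
// frame: the current ARFrame; entity: the RealityKit Entity being tested.
let worldPosition = entity.position(relativeTo: nil)      // world space, in meters
let worldToCamera = frame.camera.transform.inverse        // world -> camera space
let cameraSpace = worldToCamera * SIMD4<Float>(worldPosition, 1)
let entityDepth = -cameraSpace.z                          // meters in front of the camera

// Compare entityDepth against the estimatedDepthData value sampled at the
// pixel you already projected the entity to (within some tolerance).
```

ARKit's camera looks down its own -z axis, hence the negation; a point behind the camera would come out negative.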
Post not yet marked as solved
10 Replies
4.5k Views
I love how the LiDAR scan generates a beautifully colored mesh. Is it possible to retain that coloring when exporting (such as to an .OBJ file)? The examples I've seen so far convert the LiDAR scan and create the .OBJ file, but none of those files include any of the coloring from the original scan. Is this even feasible?
Post not yet marked as solved
2 Replies
524 Views
I'm hosting multiple AR models (.usdz) on my website. I've set them up so they can be accessed directly in the browser via Quick Look on iOS devices. One of the problems I run into is protecting the AR models: I want to prevent users from downloading the .usdz files, which is currently easy to do via macOS/Windows/Android. Is there a way to protect the files so they can be accessed in AR Quick Look, but not downloaded in any way?
Post not yet marked as solved
2 Replies
966 Views
When I import the robot.fbx file from Apple's developer documentation sample (https://developer.apple.com/sample-code/ar/Biped-Robot.zip) into Reality Converter, export it to USDZ, and bring the USDZ file into Xcode 12 beta or any earlier version, the USDZ robot character loses its skin bindings and the mesh has no skeleton attached. Has anyone experienced this? Have you been able to successfully convert a rigged FBX model to a USDZ model with all bones connected? I tried this using Reality Converter beta 3, and I did not edit the FBX robot character provided with Apple's documentation sample.
Post marked as solved
5 Replies
557 Views
I have been working on an app for the 2020 iPad Pro that captures scene depth from the LiDAR sensor using ARKit 4's sceneDepth property. It was working completely fine until iOS was updated today; now sceneDepth is always nil.
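It is worth ruling out the configuration side first: sceneDepth is only delivered when the frame semantic is both supported on the current device/OS and explicitly enabled on the configuration. A defensive setup sketch:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    // Only request scene depth where this hardware/OS combination supports it.
    configuration.frameSemantics.insert(.sceneDepth)
}
arView.session.run(configuration)
```

If `supportsFrameSemantics(.sceneDepth)` started returning false after the update, the OS itself is withholding depth on that device, which would be worth a bug report rather than an app-side fix.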
Post marked as solved
2 Replies
1.3k Views
import SwiftUI
import RealityKit
import AVFoundation
...
let anchor = AnchorEntity(plane: .vertical)

This shows errors in Xcode 12 beta 2 and beta 3:

Extraneous argument label 'plane:' in call
Type 'AnchoringComponent.Target' has no member 'vertical'

What might be wrong?
Post marked as solved
5 Replies
2.5k Views
I am playing around in RealityKit and would like an Entity to always look at the camera. I've seen similar questions on Stack Overflow and the developer forums and followed their suggestion to use the look(at:) method. I am aware that Reality Composer can achieve the same effect using a behavior, but I would later have to use the same technique to anchor an arrow to the camera using AnchorEntity and have it look at another entity in the scene for user navigation. I am orienting the entity toward the camera by calling look(at:) in the ARKit session(_:didUpdate:) method; however, it is broken:

entity.look(at: arView.cameraTransform.translation, from: entity.position, upVector: [0,0,0], relativeTo: nil)

This moves my entity to a new position in the scene, and since it is called in session(_:didUpdate:), it keeps moving further and further away every frame. I thought it was not supposed to move, since I've set from: to the object's current position, so it should always move to where it already was and hence stay static. My entity is an AnchorEntity that acts as the root entity for the model loaded from Reality Composer.
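Two details in that call stand out. `upVector: [0,0,0]` is degenerate (a zero-length up vector cannot define an orientation), and with `relativeTo: nil` the `from:` argument is interpreted in world space, while `entity.position` is relative to the entity's parent, so the two can disagree every frame. A corrected sketch:

```swift
// Sketch: orient `entity` toward the camera without moving it.
entity.look(at: arView.cameraTransform.translation,
            from: entity.position(relativeTo: nil),  // world space, matching relativeTo: nil
            upVector: [0, 1, 0],                     // a valid world-up axis
            relativeTo: nil)
```

Because `from:` now matches the entity's actual world position, repeated calls should change only its orientation.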
Post not yet marked as solved
1 Reply
605 Views
Can you apply face tracking to pre-recorded video from your iPhone 11 with ARKit? What format would you need to record in? I assume you'd need the depth data from your camera as well.
Post not yet marked as solved
4 Replies
942 Views
I'm trying to dynamically add a UIImage that is in memory (that is dynamically created at runtime and changed over time) to a RealityKit scene. I see with TextureResource that I can easily do this by creating a material from an image file in my bundle, but I don't see any way to do this with just a UIImage. Is there another way to go about this that I'm just missing? Any help would be appreciated!
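TextureResource can also be generated from a CGImage, and a UIImage created at runtime usually exposes one, so a bridge along these lines may work (a sketch using the Xcode 12-era SimpleMaterial API; the function name and error are illustrative):

```swift
import RealityKit
import UIKit

// Sketch: turn a runtime UIImage into a RealityKit material.
func material(from uiImage: UIImage) throws -> SimpleMaterial {
    guard let cgImage = uiImage.cgImage else {
        // A UIImage built from CIImage-only backing has no cgImage;
        // render it into a bitmap context first in that case.
        throw NSError(domain: "TextureError", code: 1)
    }
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    var material = SimpleMaterial()
    material.baseColor = .texture(texture)
    return material
}
```

For an image that changes over time, you would regenerate the texture and reassign the material on each update; there is no in-place texture mutation in this API generation.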
Post marked as solved
23 Replies
7.9k Views
I have seen this question come up a few times here on the Apple Developer Forums (recently noted here: https://developer.apple.com/forums/thread/655505), though I tend to find myself misunderstanding what technology and steps are required to achieve the goal. In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth sample project (https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth) from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but remain unsure how to do so. From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset and append an MDLObject for each point to, finally, end up with a model ready for export. How would one go about converting each "point" to an MDLObject to append to the MDLAsset? Or am I going down the wrong path?
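Rather than one MDLObject per point (hundreds of thousands of objects would be very slow), Model I/O can represent the whole cloud as a single MDLMesh with an empty submesh array, so the vertices stand alone. A sketch, assuming the unprojected points have already been collected into an array of SIMD3<Float>:

```swift
import ModelIO

// Sketch: pack a point array into one MDLMesh and export it as a model file.
func exportPointCloud(_ points: [SIMD3<Float>], to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()
    let data = points.withUnsafeBufferPointer { Data(buffer: $0) }
    let vertexBuffer = allocator.newBuffer(with: data, type: .vertex)

    // Describe the layout: one float3 position attribute per vertex.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3,
                                                  offset: 0,
                                                  bufferIndex: 0)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)

    // No submeshes: the mesh is just bare vertices, i.e. a point cloud.
    let mesh = MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: points.count,
                       descriptor: descriptor,
                       submeshes: [])

    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)   // choose .ply or .obj via the file extension
}
```

Per-point color would need a second vertex attribute (MDLVertexAttributeColor) and a format like .ply that preserves it; .obj discards per-vertex color in most pipelines.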
Post not yet marked as solved
1 Reply
678 Views
Hi! I've been working with AR Quick Look a lot recently and find it really useful. However, when I combine two .usdz files into one, the experience isn't as great. I'm working with 3D models that are supposed to be attached to vertical surfaces, and one by one they work flawlessly. But as soon as I add more models into the same file, the objects won't "stick" as close to the wall as they do when they are separate. It's like there is some sort of "margin" applied. To create nested .usdz files, I use Apple's command-line tool:

$ usdzcreateassetlib outputFile.usdz asset1.usdz [asset2.usdz [...]]

Any idea why this might be the case? Thanks!
Post not yet marked as solved
1 Reply
568 Views
I'm currently trying to project a transparent .png file onto a flat surface (table/paper), but the shadow is giving me a gray box. I am a huge fan of the shadows in Reality Composer; however, I'm searching for an option to adjust the opacity of the shadow or turn it off.