RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit tag

201 results found
Post not yet marked as solved
31 Views

Is there somewhere I can download images for Object Capture?

I have downloaded the CaptureSample project and run it on my iPhone 12 Pro Max. I took some photos with the demo app, but the USDZ file that I got from HelloPhotogrammetryCommandLineTool doesn't look good. So I was wondering if there is a link where I could download some official, high-quality images with depth and gravity data, such as the original images of the 3D models made with Object Capture, or some other Object Capture image sets? Attached is the USDZ file I generated.
Asked Last updated
.
Post marked as solved
69 Views

RealityKit ImageAnchor disappears after 12.5 Update

I am currently working on a RealityKit AR app for a school project. I am using an image anchor and a 3D model. Before the update to 12.5, the 3D model appeared and was placed in the 3D room on top of the card. When I moved the camera further away from the card, the model would still be there, even though it was no longer visible. But now, if I move the camera further away and the card is no longer visible, the 3D model disappears. I also created a new, simple RealityKit project with just the image anchor and the model, and it still doesn't behave as before. How do I fix this?
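One possible workaround (a sketch only, not a confirmed fix; modelEntity is a placeholder for the loaded model): once ARKit reports the image anchor, pin the model to a world-fixed anchor at the image's transform, so it no longer depends on the card staying tracked.

import ARKit
import RealityKit

// In the ARSessionDelegate; assumes access to `arView` and a preloaded `modelEntity`.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        // Capture the image's transform once and anchor the model in world space,
        // so it survives the card leaving the camera's view.
        let worldAnchor = AnchorEntity(world: imageAnchor.transform)
        worldAnchor.addChild(modelEntity)
        arView.scene.addAnchor(worldAnchor)
    }
}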
Asked Last updated
.
Post not yet marked as solved
138 Views

Reconstructing saved outdoor ARKit scenes

Hi, I am still working on an app to place simple 3D models in different places outdoors. I save the location (world map data) of the nearby environment and load and reconstruct the scene later on. I use the latest Apple device (iPhone 12 Pro) with the LiDAR scanner. The strange thing is that often you can't reconstruct the experience. Is the stored (LiDAR) data so accurate that the scene has to be exactly the same? For example, could it be a problem if a flower leaf was broken, making the scene impossible to reconstruct? In my case (example), I created two separate scenes. I placed one arrow model (.usdz) on a flowerpot and one on a statue. I saved both, checked by reloading (the model was still there), and came back the next day. It was rainy that day. I couldn't reproduce the AR scene around the flowerpot, but the statue was no problem. Is there a way to make the scene simpler to recognize? For example, is it better to add horizontal and vertical plane detection besides the meshes? Or change the way I use the world mapping status? Another solution could be to place more models (arrows), so that at least one of the anchors should match. Thanks in advance, Marc
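For reference, a minimal sketch of the save/load flow described above (the file URL and error handling are placeholders):

import ARKit

// Save the current world map to disk.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
        else { return }
        try? data.write(to: url)
    }
}

// Relocalize against a saved map; anchors reappear once ARKit recognizes the scene.
func restoreWorldMap(into session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    config.planeDetection = [.horizontal, .vertical] // extra detected features may help relocalization
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}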
Asked Last updated
.
Post not yet marked as solved
34 Views

RealityKit, placing object - remove placement rotation?

Hi! When I place an object in the scene, the object is always parallel (I think) to the rotation of the device (iPad). How can I disable that rotation and have it align with the world axes? I'm trying to place an object/anchor in the scene, but every time I do, it has a rotation. I cannot find where the rotation is set, as printing out the matrices gives me 0,0,0 for rotation, so I'm a little lost:

let point = CGPoint(x: frame.midX, y: frame.midY)
if let result = raycast(from: point, allowing: .estimatedPlane, alignment: .any).first {
    mLastObject = name // app-specific state
    var transform = matrix_identity_float4x4 // identity; the original used a custom toIdentity() helper
    transform.columns.3 = result.worldTransform.columns.3 // keep only the translation
    let resultAnchor = AnchorEntity(world: transform) // world pin
    let shadow = AnchorEntity(plane: .horizontal) // shadow pin
    resultAnchor.addChild(shadow)
    scene.anchors.append(resultAnchor)
    print("result : \(resultAnchor.transform)")
    print("result2 : \(resultAnchor.orientation)")
    print("shadow : \(shadow.transform)")
    print("shadow2 : \(shadow.orientation)")
}

Perhaps, while I'm at it: how can I align one object's rotation to another object's rotation, so they both face the same direction/axes? Regards, Dariusz

Just to clarify: the shadow object has the wrong rotation, and thus any of its children/geometry objects are rotated too. Setting the orientation to 0, etc. does not work.
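A possible approach (a sketch; entityA and entityB are placeholder names): set the orientation relative to world space by passing relativeTo: nil, which bypasses whatever rotation the parent anchor contributes.

// Force an entity's world-space orientation to identity (aligned with the world axes).
shadow.setOrientation(simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0)), relativeTo: nil)

// Make entityB face the same way as entityA by copying its world-space orientation.
let worldOrientation = entityA.orientation(relativeTo: nil)
entityB.setOrientation(worldOrientation, relativeTo: nil)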
Asked Last updated
.
Post marked as solved
173 Views

Can the Object Capture API be used to create building models?

Can RealityKit 2's Object Capture API be used to create a model of a building's interior? I've only found examples of creating models from pictures taken by walking around an object, not from inside of it 🤔 I know photogrammetry in general can be used for such cases, but I'm not sure the new RealityKit API supports it. I'd be grateful if someone who has tried it could share their results (I can't right now), and if someone at Apple could confirm whether this should be supported by design or not. Thank you for your time 🙇‍♂️
Asked Last updated
.
Post marked as solved
691 Views

PhotogrammetrySession cannot be created

Hi, I'm using the sample code to create a 3D object from photos using PhotogrammetrySession, but it returns this error:

Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)")

The sample code I've used is this and this. Any ideas? Thanks in advance!
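For context, a minimal creation flow (the input path is a placeholder). Note that PhotogrammetrySession has hardware requirements; per WWDC21, it needs roughly a recent Mac GPU with at least 4 GB of VRAM and support for barycentric coordinates, and session creation fails on unsupported machines, which may be what this error code indicates.

import RealityKit

let inputFolder = URL(fileURLWithPath: "/path/to/captured/images", isDirectory: true)
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .unordered // photos need not be in capture order

do {
    let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)
    // Subscribe to session.outputs and call session.process(requests:) as in the sample code.
} catch {
    // A cantCreateSession error here often points at unsupported hardware.
    print("Error creating session: \(error)")
}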
Asked
by Rubenfern.
Last updated
.
Post not yet marked as solved
464 Views

Rotate an entity in RealityKit

I am having trouble rotating, from code, an entity that was added in Reality Composer. I decided to follow the information in the developer documentation article "Manipulating Reality Composer Scenes from Code" (https://developer.apple.com/documentation/realitykit/creating_3d_content_with_reality_composer/manipulating_reality_composer_scenes_from_code). I am starting with a default ARKit program that just displays the steel box. When I add code to rotate the box 45°, it also shrinks the box along the x axis; the cube gets flattened about halfway. Is there a way to rotate without it scaling at the same time?

import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)

        if let box = boxAnchor.findEntity(named: "lid") {
            let radians = -45.0 * Float.pi / 180.0
            box.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))
        }
    }
}

I have tried moving the "if let box =" part above the "arView.scene.anchors.append(boxAnchor)" line, and that doesn't help. I decided to start from the default code and work my way up to triggering this with a tap on the box on the screen; touching anywhere else should not rotate the box. Setting up a rotation animation inside Reality Composer works, but I want one tap to rotate and another tap to un-rotate. It will be opening and closing a lid. I am on Catalina and Xcode 12. I have also noticed that the box is referred to as an object, entity, or element on different pages of the documentation, which makes it difficult to search for the issue I am having.
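The flattening is consistent with the += on the quaternion: simd_quatf's += adds the quaternion components, so the result is no longer a unit quaternion, and a non-unit rotation scales the entity when applied. A sketch of the likely fix is to compose rotations by multiplication instead:

// Compose rotations by quaternion multiplication, not addition.
// += adds components, yielding a non-unit quaternion that also scales the entity.
let radians = -45.0 * Float.pi / 180.0
let delta = simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))
box.transform.rotation = delta * box.transform.rotation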
Asked
by fid.
Last updated
.
Post marked as solved
49 Views

RealityKit, unable to rotate Entity/AnchorEntity object!

Hey, I'm out of ideas... here is my code:

let radians = val * Float.pi / 180

// object.mEntity is a USD model from Entity.loadModel()
object.mEntity.orientation += simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))
object.mEntity.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))
object.anchorParent.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))

The object is a struct that holds my entity plus the anchor it was attached to. I have a few anchors in general: the "initial click" position anchor, then a shadow anchor on top of it, and then the model entity after that, so that I can move the model with its shadow, etc. In any case, I can't rotate it; I always get a "weird" rotation as output. Have a look at the video > https://www.youtube.com/watch?v=9_WOfLivKvI&ab_channel=DariuszM%C4%85kowski
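The same quaternion pitfall as in the rotation question above may apply here: += adds quaternion components rather than composing rotations. A sketch of an incremental rotation (names taken from the post; mEntity is assumed to be a ModelEntity):

// Rotate the entity about its own X axis, composing by multiplication in local space.
let delta = simd_quatf(angle: radians, axis: SIMD3<Float>(1, 0, 0))
object.mEntity.setOrientation(delta, relativeTo: object.mEntity)

// Or animate the same rotation over one second via Entity.move(to:relativeTo:duration:).
var target = object.mEntity.transform
target.rotation = delta * target.rotation
object.mEntity.move(to: target, relativeTo: object.mEntity.parent, duration: 1.0)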
Asked Last updated
.
Post not yet marked as solved
79 Views

Sample transparent video texture

Are there any examples or other information showing video textures with transparency being used in RealityKit 2?
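Not a complete sample, but a sketch of the relevant API: RealityKit 2's VideoMaterial reportedly respects transparency when the source video is encoded as HEVC with an alpha channel (videoURL below is a placeholder for such a file).

import RealityKit
import AVFoundation

let player = AVPlayer(url: videoURL) // HEVC-with-alpha encoded video
let material = VideoMaterial(avPlayer: player)

// Apply the video texture to a simple horizontal plane.
let plane = ModelEntity(mesh: .generatePlane(width: 1.0, depth: 0.5),
                        materials: [material])
player.play() // the texture animates while the player is playing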
Asked Last updated
.
Post not yet marked as solved
35 Views

What is the proper way to pass custom metadata to PhotogrammetrySample?

Which of these would work / be best practice?

sample.metadata = ["FocalLengthIn35mmFilm": "28mm"]
sample.metadata = ["FocalLengthIn35mmFilm": 28.0]
sample.metadata = ["kCGImagePropertyExifFocalLenIn35mmFilm": "28MM"]
Asked
by sam598.
Last updated
.
Post not yet marked as solved
87 Views

Vertex position in ARKit's reconstructed mesh

I've got a quick question regarding ARKit's scene reconstruction. Is it possible to get the world coordinates of the faces/vertices that are part of the generated mesh, or to select them individually? After looking through the documentation at Apple and tinkering with the example apps, it does not seem possible using the faces property of ARMeshGeometry, but the vertices property does return coordinates. Here's Apple's code snippet for reading specific vertices:

func vertex(at index: UInt32) -> SIMD3<Float> {
    assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.")
    let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index)))
    let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
    return vertex
}

I've tried to place objects at those coordinates to see what they refer to, but they somehow end up in the middle of the room, far away from the mesh, leaving me a bit confused as to what the vertex coordinates actually refer to. I'd appreciate any answers on how to approach this!
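One likely explanation: ARMeshGeometry vertices are expressed in the mesh anchor's local coordinate space, not in world space, so they need to be transformed by the anchor's transform first. A sketch, assuming meshAnchor is the ARMeshAnchor the geometry came from and vertex(at:) is the helper extension above:

import ARKit

// Convert a vertex from the anchor's local space to world space.
func worldPosition(of index: UInt32, in meshAnchor: ARMeshAnchor) -> SIMD3<Float> {
    let local = meshAnchor.geometry.vertex(at: index)
    let world = meshAnchor.transform * SIMD4<Float>(local.x, local.y, local.z, 1)
    return SIMD3<Float>(world.x, world.y, world.z)
}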
Asked Last updated
.
Post marked as solved
51 Views

Orthographic projection in nonAR camera mode of RealityKit

Hello, I am considering using the RealityKit framework to create a 3D viewer. The viewer will be used to display virtual objects in AR and also in a fully virtual environment. I saw that I can initialize the ARView with the nonAR camera mode and set a perspective camera for it, but my viewer also needs an option to display the scene with an orthographic projection, and I couldn't find a way to do that. Is it possible to use RealityKit in a non-AR environment and display the scene with an orthographic projection? Any help will be highly appreciated! Yaniv
Asked
by uMoji.
Last updated
.
Post not yet marked as solved
59 Views

Is there a way to clip/cut objects?

When I take pictures of an object, I can convert it and export it to Reality Composer. But unfortunately I am not able to edit the usdz/obj/usda file with any shipped tool. I would need a feature like the one shown here to define a boundary: https://developer.apple.com/videos/play/wwdc2021/10076/ at 17:10
Asked
by m.bahl.
Last updated
.
Post not yet marked as solved
54 Views

Reality Composer scenes are ignored when added to ARView.scene after explicitly setting an ARConfiguration

When loading a scene from Reality Composer, it only appears to detect the anchor associated with it and add its child entities if arView.automaticallyConfigureSession is set to true.

// Automatic is the default and will work
let qrCodeScene = try! ExampleRC.loadExampleScene()
arView.scene.addAnchor(qrCodeScene)

But for scenarios where more control is needed, the ARConfiguration has to be set explicitly:

let config = ARWorldTrackingConfiguration()
config.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "ARImages", bundle: nil)
arView.session.run(config)

Now, whether added before or after this, any Reality Composer scenes added to arView.scene are ignored. I understand that the detection image set is loaded when the config is initialized, but perhaps it should be able to load the explicitly set images, and any ad hoc scenes that are added should also be detected throughout the session. Changing the session back to automaticallyConfigureSession = true does not seem to fix this. As a result, the API for loading scenes is not useful when explicitly setting the configuration, and the ARSessionDelegate is required:

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    if let imageAnchor = anchors.compactMap({ $0 as? ARImageAnchor }).first {
        let anchorEntity = AnchorEntity(anchor: imageAnchor)
        arView.scene.addAnchor(anchorEntity)
    }
}

Unless there is something I am missing, is this expected behavior?
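One possible workaround (a sketch, not confirmed expected usage): keep the explicit configuration and, in the delegate, point the loaded Reality Composer scene's anchoring at the ARAnchor that ARKit created, via the .anchor(identifier:) anchoring target.

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    guard let imageAnchor = anchors.compactMap({ $0 as? ARImageAnchor }).first else { return }
    let scene = try! ExampleRC.loadExampleScene()
    // Re-target the scene at the anchor ARKit just created for the detected image.
    scene.anchoring = AnchoringComponent(.anchor(identifier: imageAnchor.identifier))
    arView.scene.addAnchor(scene)
}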
Asked
by pg-obrien.
Last updated
.
Post not yet marked as solved
42 Views

Motion Capture with ARKit 5 on iOS 15 using processor A12 or A13

WWDC21 announced support for more body poses and gestures on devices with the A14 processor. Furthermore, rotations are more accurate, and you can track these movements from a larger distance. But what are the improvements for devices with an A12 or A13 processor? Do they also benefit from any improvements when using iOS 15?
Asked
by Uscher.
Last updated
.