RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit tag

452 Posts
Post not yet marked as solved
0 Replies
973 Views
Currently, I have a requirement to use models created in Unity in RealityKit, so I need to convert the models to the USDZ format. I used this approach (How to easily create AR content for iPhone using Unity), but the result was not as expected: converted models do not display correctly, and animations on objects do not appear in the converted files. I also noticed that objects made with the Unity particle system (e.g., confetti) were not converted by this approach. I also tried converting by selecting 'Export selected as USDZ' from Unity's main menu bar, but nothing worked. Is there an effective way to convert Unity models, including particle systems, to USDZ?
Post not yet marked as solved
0 Replies
741 Views
Hi everybody, I am an engineering student and at university we have to create a little AR app. In Xcode I want to do image tracking and show my 3D object above the tracked image. I followed this video: "https://www.youtube.com/watch?v=VmPHE8M2GZI" up to minute 39:18. After that it doesn't work: the simulator detects the image and shows a light grey plane above it, even if I move around, but the 3D model doesn't show up. What I tried:
- Imported the ns.obj file into art.scnassets
- Converted it to a SceneKit .scn file
- Changed the diffuse texture to green
- Tried to scale it, but still no result
- Also tried a 3D object downloaded from the internet
Long story short, it doesn't work. Does anyone know what the problem could be? Thank you very much. Greetings, Rosario
PS: I use Xcode version 14.3. This is my code in the ViewController.swift file:

```swift
import SwiftUI
import RealityKit
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!
    var nsNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.autoenablesDefaultLighting = true

        // Load the converted .scn model and keep its root node for later
        let nsScene = SCNScene(named: "art.scnassets/ns.scn")
        nsNode = nsScene?.rootNode
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARImageTrackingConfiguration()
        if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) {
            configuration.trackingImages = trackingImages
            configuration.maximumNumberOfTrackedImages = 2
        }
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            // Semi-transparent plane matching the physical size of the tracked image
            let size = imageAnchor.referenceImage.physicalSize
            let plane = SCNPlane(width: size.width, height: size.height)
            plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)
            plane.cornerRadius = 0.005
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)

            // Attach the loaded model to the image anchor's node
            if let shapeNode = nsNode {
                node.addChildNode(shapeNode)
            }
        }
        return node
    }
}
```
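One common reason the grey plane shows but the model does not is that the imported node's scale or position places it outside the view or behind the plane. A minimal sketch of explicitly scaling and lifting the node before adding it to the image anchor's node; the 0.01 values are assumed example numbers, not taken from the original post:

```swift
if let shapeNode = nsNode {
    // Assumed example values: shrink the imported model and lift it slightly above the image
    shapeNode.scale = SCNVector3(0.01, 0.01, 0.01)
    shapeNode.position = SCNVector3(0, 0.01, 0)
    node.addChildNode(shapeNode)
}
```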
Post not yet marked as solved
1 Reply
416 Views
I am trying to make a simple 2D overlay for the face anchor mesh (ARFaceAnchor). I am unsure how to get my graphic to line up with what the mesh is showing. Is there a template image I should paint against and then apply in Xcode? Any tutorials or links to point me in the right direction would be much appreciated.
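For reference, a minimal sketch of one way to map a 2D image onto the face mesh with SceneKit, assuming an ARSCNView-based face-tracking setup and a hypothetical faceTexture.png painted against the mesh's built-in UV layout (the UVs determine where the graphic lands on the face):

```swift
import ARKit
import Metal
import SceneKit
import UIKit

// Inside an ARSCNViewDelegate used with a running ARFaceTrackingConfiguration:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARFaceAnchor,
          let device = MTLCreateSystemDefaultDevice(),
          let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }

    // Hypothetical asset: a 2D texture painted to match the face mesh's UV mapping
    faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "faceTexture.png")
    faceGeometry.firstMaterial?.lightingModel = .physicallyBased
    return SCNNode(geometry: faceGeometry)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    // Keep the rendered mesh in sync with the tracked face each frame
    faceGeometry.update(from: faceAnchor.geometry)
}
```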
Post not yet marked as solved
1 Reply
809 Views
One thing that was not very clear to me in the WWDC videos on visionOS app development: if I want to trigger an action (let's say change the scene) based on the user's position relative to an object, will I be able to do that? Example: if the user comes too close to an object, it starts to play an animation. Reference video: wwdc2023-10080
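A minimal sketch of one way this might be approached on visionOS, assuming an immersive RealityKit scene: run a WorldTrackingProvider, query the device (head) pose, and compare its distance to the target entity. The 0.5 m threshold is an assumed example value:

```swift
import ARKit
import QuartzCore
import RealityKit
import simd

@MainActor
final class ProximityTrigger {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        // World tracking gives access to the device (head) pose in the immersive space
        try await session.run([worldTracking])
    }

    /// True when the user's head is closer than `threshold` metres to `entity`.
    func userIsNear(_ entity: Entity, threshold: Float = 0.5) -> Bool {
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return false
        }
        let devicePosition = device.originFromAnchorTransform.columns.3
        let delta = SIMD3<Float>(devicePosition.x, devicePosition.y, devicePosition.z)
                    - entity.position(relativeTo: nil)
        return simd_length(delta) < threshold
    }
}
```

Checking this in a per-frame update (for example, a RealityKit System or a timer) and starting the animation when it returns true would cover the "play an animation when the user gets close" case.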
Post not yet marked as solved
2 Replies
2.1k Views
Hello Dev Community, I've been thinking about Apple's preference for USDZ for AR and 3D content, especially given the widely used glTF. I'm keen to discuss and hear your insights on this choice. USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, glTF is also a popular format with its own merits, like being an open standard and offering flexibility. Here are some of my questions about the use of USDZ:
- Why did Apple choose USDZ over other 3D file formats like glTF?
- What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
- Are there any limitations of USDZ compared to other file formats?
- Could factors like compatibility, security, or ease of integration have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Post not yet marked as solved
4 Replies
1k Views
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people occlusion map for use in my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or automatically when passthrough is enabled? If it's not possible, is this something that might have a solution in future updates as the platform develops? Being able to combine virtual content and the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
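For context, a minimal sketch of the iPad/iPhone setup the post describes, which depends on per-frame camera access that visionOS does not give apps:

```swift
import ARKit
import RealityKit

// iPad/iPhone approach: ask ARKit for person segmentation with depth so RealityKit
// composites virtual content behind or in front of people automatically.
let arView = ARView(frame: .zero)
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
arView.session.run(configuration)
```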
Post marked as solved
16 Replies
1.9k Views
Apparently, shadows aren't generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out. On a similar note, it used to be that grounding shadows were not per-entity. I'd like to enable or disable them per entity. Is that possible? Since currently the only way to use passthrough AR on visionOS is RealityKit, more flexibility will be required; I can't simply apply my own preferences.
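On the per-entity question, a minimal sketch of opting an individual procedurally generated entity into a grounding shadow using GroundingShadowComponent (available on visionOS and recent OS releases); the generated box is an assumed stand-in for an arbitrary procedural mesh:

```swift
import RealityKit

func makeProceduralEntity() -> ModelEntity {
    // Stand-in for an arbitrary procedurally generated mesh
    let mesh = MeshResource.generateBox(size: 0.2)
    let material = SimpleMaterial(color: .gray, isMetallic: false)
    let entity = ModelEntity(mesh: mesh, materials: [material])

    // Opt this specific entity into a grounding shadow (per-entity control)
    entity.components.set(GroundingShadowComponent(castsShadow: true))
    return entity
}
```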
Post marked as solved
1 Reply
706 Views
I have been reading through the documentation and cannot find a way to alter the user's environment lighting. Is this not possible? Basically I would like to darken a room, or change the hue of the environment in the scene they are seeing. I can think of a few "hacks" to do this but figured there would be a proper RealityKit way to do so. If it is possible to dim or darken the environment, I could then light up my models with lights but still have the real environment all around.
Post not yet marked as solved
0 Replies
255 Views
Hello, in my app I'm trying to delete all but one chosen plane and do some raycasting on that plane. I noticed that whenever I tried to delete other planes, they would instantly reappear. Here is some sample code from the ARViewController I'm using that demonstrates the problem:

```swift
class ARViewController: UIViewController {

    var arView: ARView!

    // *** Bunch of stuff ***

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        // Iterate through the detected anchors
        for anchor in anchors {
            // Check if the detected anchor is an ARPlaneAnchor
            if let planeAnchor = anchor as? ARPlaneAnchor {
                plane_count += 1
                print("Plane added. Number of plane anchors = \(plane_count)")
            }
        }
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        for anchor in anchors {
            if let planeAnchor = anchor as? ARPlaneAnchor {
                plane_count -= 1
                print("SESSION CALLED: Plane Removed. Number of plane anchors = \(plane_count)")
            }
        }
    }

    func deletePlanes() {
        for anchor in arView.session.currentFrame?.anchors ?? [] {
            arView.session.remove(anchor: anchor)
        }
    }
}
```

When deletePlanes() is called, I'll see the following output populate instantly:

```
SESSION CALLED: Plane Removed. Number of plane anchors = 2
SESSION CALLED: Plane Removed. Number of plane anchors = 1
SESSION CALLED: Plane Removed. Number of plane anchors = 0
Plane added. Number of plane anchors = 1
Plane added. Number of plane anchors = 2
Plane added. Number of plane anchors = 3
```

This even occurs when the phone is face down after detecting a few planes. It appears that the planes are not actually being removed from the session. Please let me know if I'm doing anything wrong here! Thanks.
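One likely explanation: while plane detection is enabled in the running configuration, ARKit keeps re-adding plane anchors it detects, so removing them from the session is immediately undone. A hedged sketch of re-running the session with plane detection turned off before removing the unwanted anchors; chosenPlaneAnchor is a hypothetical reference to the plane being kept:

```swift
import ARKit
import RealityKit

extension ARViewController {
    /// Assumed approach: stop detecting new planes, then remove every plane anchor
    /// except the hypothetical `chosenPlaneAnchor`.
    func keepOnlyChosenPlane(_ chosenPlaneAnchor: ARPlaneAnchor) {
        // Re-run the session with plane detection off so removed planes are not re-detected
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = []
        arView.session.run(configuration)

        for anchor in arView.session.currentFrame?.anchors ?? [] {
            if let planeAnchor = anchor as? ARPlaneAnchor,
               planeAnchor.identifier != chosenPlaneAnchor.identifier {
                arView.session.remove(anchor: anchor)
            }
        }
    }
}
```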
Post not yet marked as solved
0 Replies
807 Views
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. As I understand from the session "Meet Object Capture for iOS", the API now accepts point cloud data from the iPhone LiDAR sensor to create 3D assets. However, I was not able to find any official Apple documentation for RealityKit and Object Capture that explains how to utilize point cloud data in the session. I have two questions about this API:
1. The original example in the documentation explains how to use the depth map from a captured image by embedding it into the HEIC file. This makes me assume that PhotogrammetrySession also uses point cloud data embedded in the photo. Is that correct?
2. I would also like to use the photos (and point cloud data) captured on iOS in a PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a PointCloud request result. Will using this output be the same as what is captured on-device by ObjectCaptureSession?
Thanks everyone in advance; it's been a real pleasure working with the updated Object Capture APIs.
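A minimal sketch of the macOS side under assumed paths: a PhotogrammetrySession created from the folder of HEIC images captured on the iPhone, with a full-detail model request (a .pointCloud request can be added to the same process(requests:) call). The session reads whatever depth data is embedded in the captures:

```swift
import RealityKit

// Assumed paths; the input folder contains the HEIC images captured on the iPhone
let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .normal

let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)

Task {
    // Observe progress and results as they arrive
    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("Finished \(request): \(result)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            print("All requests done")
        default:
            break
        }
    }
}

// Ask for the full-detail model reconstructed from the captures
try session.process(requests: [
    .modelFile(url: outputModel, detail: .full)
])
```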
Post not yet marked as solved
0 Replies
475 Views
Hi, suppose I have a table model imported from USDZ with an initial "wood" material. When a button is clicked, I want the table to switch to a "marble" material. I only know how to load the entity along with its assigned material from a USD file, but I want to load all the different materials, store them somewhere, and assign them dynamically when the button is clicked. Is there a way to do that? Thanks
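A minimal sketch of one way to do this, under assumed asset names: load each material from its own small "donor" USDZ, cache the materials, and reassign them to the table's ModelComponents when the button is tapped:

```swift
import RealityKit

// Assumed asset names: "Table" is the table model, "MarbleDonor" is a small USDZ whose
// only purpose is to carry the marble material so it can be cached and reassigned later.
let table = try Entity.load(named: "Table")
let marbleDonor = try ModelEntity.loadModel(named: "MarbleDonor")
let marbleMaterials: [any Material] = marbleDonor.model?.materials ?? []

/// Recursively assign the cached materials to every ModelComponent in the hierarchy.
func apply(_ materials: [any Material], to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = materials
        entity.components.set(model)
    }
    for child in entity.children {
        apply(materials, to: child)
    }
}

// Called from the button action:
apply(marbleMaterials, to: table)
```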
Post not yet marked as solved
2 Replies
642 Views
Hi, I am a student from London studying app development in Arizona. I have a few ideas for apps that I believe would add to the Apple AR experience. I was wondering where I should go to get started with the development process. Any guidance would be much appreciated :)