RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Post

Replies

Boosts

Views

Activity

Animated USD file | Starting position of the USD is not the same as first frame of animation
Hello all, I am building for visionOS with another engineer and using Reality Composer Pro to validate USD files. The starting position of my animated USDZ, i.e. its position when it's first loaded, is not the same as the first frame of the animation on the USDZ file. For testing, I am using the AR Quick Look asset 'toy_biplane_idle.usdz', which demonstrates the same 'error' we're currently getting with our own USDZ files. When the USDZ is loaded, it sits on the ground plane. But when the animation is played, the plane 'snaps' to the position of the first frame of the animation. This 'snapping' behavior is giving us problems. We want the user to see this plane in its static 'load' position with the option to play the animation, but we don't want it to snap when the user presses play. Is it possible to load the .usdz in the position specified by the first frame of the animation? What is the best way to fix this issue? Thanks!
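One possible workaround, sketched here under the assumption that the standard RealityKit animation API is in use: start the animation in a paused state right after loading, so the entity settles on the animation's first frame before it is ever shown, then resume playback when the user presses play. The helper function and its name are hypothetical.

import RealityKit

// Hypothetical helper: load the USDZ, then immediately start its first
// animation with startsPaused so the entity jumps to the animation's
// first frame instead of its authored rest pose.
func loadAlignedToFirstFrame(named name: String) async throws -> (Entity, AnimationPlaybackController?) {
    let entity = try await Entity(named: name)
    guard let animation = entity.availableAnimations.first else { return (entity, nil) }
    let controller = entity.playAnimation(animation, transitionDuration: 0, startsPaused: true)
    return (entity, controller)
}

// Later, when the user presses play:
// controller?.resume()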
1
0
666
Jan ’24
Reality Composer Pro - high cpu usage (even when hidden)
Activity Monitor reports that Reality Composer Pro uses 150% CPU and is always the number one energy user on my M3 Mac. Unfortunately the high CPU usage continues when the app is hidden or minimized. I can understand the high usage when a scene is visible and when interacting with the scene, but this appears to be a bug. Can anyone else confirm this or have a workaround? Can the scene processing at least be paused when the app is hidden? Or better yet, find out why the CPU usage is so high when the scene is not changing. Reality Composer Pro Version 1.0 (409.60.6) on Sonoma 14.3. Thanks
1
1
543
Jan ’24
Different composer entities with different behavior
In the Diorama project: let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle); viewModel.rootEntity = entity; content.add(entity); viewModel.updateScale(); entity.position = SIMD3<Float>(0, 0, -2) // Offset the scene so it doesn't appear underneath the user or conflict with the main window. The object doesn't move around with the camera: using the WASD keys in the simulator walkthrough I can move around the object. But with a different Reality Composer Pro file that I created: let entity = try await Entity(named: "ImmersiveScene", in: realityKitContentBundle); viewModel.rootEntity = entity; content.add(entity); viewModel.updateScale(). With the WASD keys in the simulator, the model moves with the camera. What configuration am I missing with the ImmersiveScene entity?
2
0
474
Feb ’24
Seeking Guidance on Extracting Point Cloud and Facial Measurements from Object Capture Scans
Hello Apple community, I am currently working with Object Capture and would appreciate some guidance on extracting specific data from the scans. I have successfully scanned objects, but I am now looking to obtain the point cloud and facial measurements from these scans. I have used https://developer.apple.com/documentation/RealityKit/guided-capture-sample as a reference for implementation. Point Cloud: How can I extract the point cloud data from my Object Capture scans? Are there any specific tools or methods recommended for this purpose? Facial Measurements: Is there a way to extract facial measurements accurately using Object Capture? Are there any built-in features or third-party tools that can assist with this? I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated! Thank you in advance.
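A hedged sketch of one way to request point-cloud output, under the assumption that the .pointCloud request and PointCloud result case added to PhotogrammetrySession in recent SDKs are available; the exact case and property names should be verified against the current documentation, and the input folder path is hypothetical.

import RealityKit

// Assumes a folder of images captured with the guided-capture sample.
let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)  // hypothetical path
let session = try PhotogrammetrySession(input: inputFolder)

Task {
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .pointCloud(let cloud)):
            print("Received point cloud with \(cloud.points.count) points")  // assumes a points array
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}

try session.process(requests: [.pointCloud])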
0
0
524
Feb ’24
ImageBasedLightComponent: cast shadows?
In my RealityKit-based app I was using DirectionalLightComponent and DirectionalLightComponent.Shadow to cast shadows. As far as I can see, on visionOS only ImageBasedLightComponent is currently supported, so I transitioned from DirectionalLightComponent to ImageBasedLightComponent. The lighting is working fine, but I'm not able to cast shadows onto other entities (in my case, casting a shadow from a Moon onto a planet). Looking at ImageBasedLightReceiverComponent, there's GroundingShadowComponent which isn't what I'm looking for. Is there any way with ImageBasedLightComponent & ImageBasedLightReceiverComponent to cast shadows from an entity onto another entity?
0
0
413
Feb ’24
Flutter app using RealityKit to highlight buttons on visionOS?
Hi! I have a Flutter project that targets Web and iOS. Overall, our app works quite well on Vision Pro, with the only issue being that our UI elements do not highlight when the user looks at them. (Our UI will highlight on mouseover, however. We have tried tinkering with the mouseover visuals, but this did not help.) We're considering writing some native Swift code to patch this hole in Flutter's visionOS support. However, after some amount of searching, the documentation doesn't provide any obvious solutions. The HoverEffectComponent ( https://developer.apple.com/documentation/realitykit/hovereffectcomponent ) in RealityKit seems like the closest there is to adding focus-based behavior. However, if I understand correctly, this means adding an Entity for every Flutter UI element the user can interact with, and then rebuilding the list of Entities every time the UI is repainted... doesn't sound especially performant. Is there some other method of capturing the user's gaze in the context of an iOS app?
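For reference, a minimal sketch of what a single gaze-highlightable entity looks like in RealityKit; HoverEffectComponent only takes effect together with collision and input-target components. The size below is an arbitrary assumption, and whether mirroring every Flutter widget with such an entity performs acceptably is the open question.

import RealityKit

// One invisible entity that receives the system hover (gaze) highlight.
let hoverTarget = Entity()
hoverTarget.components.set(InputTargetComponent())
hoverTarget.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.08, 0.01])]))
hoverTarget.components.set(HoverEffectComponent())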
1
1
585
Feb ’24
Lighting does not apply to model added to a USDZ file from an MDLAsset
Hi, I'm trying to display an STL model file in visionOS. I import the STL file using SceneKit's ModelIO extension, add it to an empty USDA scene and then export the finished scene into a temporary USDZ file. From there I load the USDZ file as an Entity and add it onto the content. However, the model in the resulting USDZ file has no lighting and appears as an unlit solid. Please see the screenshot below: The top one is created by directly importing a USDA scene with the model already added using Reality Composer, loaded into an Entity, and works as expected. The middle one is created by importing the STL model as an MDLAsset using ModelIO, adding it to the empty scene, exporting as USDZ, then importing the USDZ into an Entity. This is what I want to be able to do, and it is broken. The bottom one is just for me to debug the USDZ import/export. It was added to the empty scene using Reality Composer and works as expected, therefore the USDZ export/import is not broken as far as I can tell. Full code:
import SwiftUI
import ARKit
import SceneKit.ModelIO
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false
    @State private var showImmersiveSpace = false
    @State private var immersiveSpaceIsShown = false

    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var modelUrl: URL? = {
        if let url = Bundle.main.url(forResource: "Trent 900 STL", withExtension: "stl") {
            let asset = MDLAsset(url: url)
            asset.loadTextures()
            let object = asset.object(at: 0) as! MDLMesh
            let emptyScene = SCNScene(named: "EmptyScene.usda")!
            let scene = SCNScene(mdlAsset: asset)

            // Position node in scene and scale
            let node = SCNNode(mdlObject: object)
            node.position = SCNVector3(0.0, 0.1, 0.0)
            node.scale = SCNVector3(0.02, 0.02, 0.02)

            // Copy materials from the test model in the empty scene to our new object (doesn't really change anything)
            node.geometry?.materials = emptyScene.rootNode.childNodes[0].childNodes[0].childNodes[0].childNodes[0].geometry!.materials

            // Add new node to our empty scene
            emptyScene.rootNode.addChildNode(node)

            let fileManager = FileManager.default
            let appSupportDirectory = try! fileManager.url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
            let permanentUrl = appSupportDirectory.appendingPathComponent("converted.usdz")
            if emptyScene.write(to: permanentUrl, delegate: nil) {
                // We exported, now load and display
                return permanentUrl
            }
        }
        return nil
    }()

    var body: some View {
        VStack {
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(contentsOf: modelUrl!) {
                    // Displays middle and bottom models
                    content.add(scene)
                }
                if let scene2 = try? await Entity(named: "JetScene", in: realityKitContentBundle) {
                    // Displays top model using premade scene and exported as USDA.
                    content.add(scene2)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack(spacing: 12) {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .font(.title)
                Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
                    .font(.title)
            }
            .frame(width: 360)
            .padding(36)
            .glassBackgroundEffect()
        }
        .onChange(of: showImmersiveSpace) { _, newValue in
            Task {
                if newValue {
                    switch await openImmersiveSpace(id: "ImmersiveSpace") {
                    case .opened:
                        immersiveSpaceIsShown = true
                    case .error, .userCancelled:
                        fallthrough
                    @unknown default:
                        immersiveSpaceIsShown = false
                        showImmersiveSpace = false
                    }
                } else if immersiveSpaceIsShown {
                    await dismissImmersiveSpace()
                    immersiveSpaceIsShown = false
                }
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
To test this even further, I exported the generated USDZ and opened it in Reality Composer. The added model was still broken while the test model in the scene was fine. This further suggests that import/export is fine and RealityKit is not doing something weird with the imported model. I am convinced this has to be something with the way I'm using ModelIO to import the STL file. Any help is appreciated. Thank you
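One debugging step worth trying, sketched as an assumption rather than a confirmed fix: after loading the converted USDZ, replace every mesh's materials with a lit PhysicallyBasedMaterial to rule out the exported material (rather than the geometry or normals) as the reason the model renders unlit.

import RealityKit

// Recursively give every ModelComponent a basic lit PBR material.
func applyLitMaterial(to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        var material = PhysicallyBasedMaterial()
        material.baseColor = .init(tint: .gray)
        model.materials = [material]
        entity.components.set(model)
    }
    for child in entity.children {
        applyLitMaterial(to: child)
    }
}

// Usage, after Entity(contentsOf:) succeeds:
// applyLitMaterial(to: scene)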
0
0
666
Feb ’24
SpatialTap on NO entity?
How do I respond to a SpatialTapGesture in my RealityView when the tap is on no entity whatsoever? I tried just doing RealityView {} .gesture(SpatialTapGesture().onEnded { print("foo") }) but that doesn't get called. All I can find searching is advice to add Collision and Input components to entities, but I don't want this on an entity; I want it when the user is not looking at any specific entity.
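One workaround sketch, offered as an assumption rather than a documented pattern: add a large invisible "backstop" entity with collision and input-target components far behind the content, so a targeted SpatialTapGesture still fires when the tap lands on no visible entity.

import RealityKit

// Invisible catch-all entity placed well behind the visible content.
let backstop = Entity()
backstop.components.set(InputTargetComponent())
backstop.components.set(CollisionComponent(shapes: [.generateBox(size: [100, 100, 0.01])]))
backstop.position = [0, 0, -50]
// Add it in the RealityView make closure: content.add(backstop)
// Then SpatialTapGesture().targetedToEntity(backstop) fires for taps on "nothing".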
0
0
423
Feb ’24
Draw a 3D Helix using lines on the Vision Pro
I'm rebuilding a Unity app in Swift because Unity's Polyspatial library doesn't support LineRenderers yet, and that's like 90% of my app. So far I can draw 2D lines in the VisionOS "Hello World" project using paths and CGPoints in the body View of the Globe.swift file. I don't really know what I'm doing, just got some example lines from ChatGPT that work for a line. I can't make these 3D though. I haven't been able to find anything on drawing lines for the Vision Pro. Not just 2D lines. I need to draw helixes (helices?) Am I missing something? Thanks, Adam
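For what it's worth, a rough sketch of one way to approximate a 3D line in RealityKit, under the assumption that chaining thin box segments between sampled points is acceptable (RealityKit has no built-in line renderer); the dimensions and parameters below are arbitrary.

import Foundation
import RealityKit
import simd

// Build a helix out of short, thin box segments between sampled points.
func makeHelix(radius: Float = 0.1, height: Float = 0.3, turns: Float = 5, segments: Int = 200) -> Entity {
    let root = Entity()
    var previous: SIMD3<Float>?
    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = t * turns * 2 * .pi
        let point = SIMD3<Float>(radius * cos(angle), height * t, radius * sin(angle))
        if let p = previous {
            let segment = ModelEntity(
                mesh: .generateBox(size: [0.002, 0.002, simd_distance(p, point)]),
                materials: [SimpleMaterial(color: .white, isMetallic: false)]
            )
            // Center the segment between the two points and aim its Z axis along the line.
            segment.look(at: point, from: (p + point) / 2, relativeTo: root)
            root.addChild(segment)
        }
        previous = point
    }
    return root
}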
0
0
501
Feb ’24
Loading SwiftData @Model image as Texture for RealityKit ModelEntity: How can I convert the type 'Data' to the expected argument type 'URL'?
I can't figure this one out. I've been able to load image textures from a struct model but not a class model for my ModelEntity. This, for example, works for me; it is what I have been using up to now, without SwiftData, using a struct to hold my model: if let imageURL = model.imageURL { let picInBox2 = ModelEntity(mesh: .generateBox(size: simd_make_float3(0.6, 0.5, 0.075), cornerRadius: 0.01)) picInBox2.position = simd_make_float3(0, 0, -0.8) if let texture = try? TextureResource.load(contentsOf: imageURL) { var unlitMaterial = UnlitMaterial() var imageMaterial = UnlitMaterial() unlitMaterial.baseColor = MaterialColorParameter.texture(texture) picInBox2.model?.materials = [imageMaterial] } } However, when I try to use my SwiftData model it doesn't work: I need to convert Data to a URL and I am not able to do this. This is what I would like to use for my image texture, from my SwiftData model: @Attribute(.externalStorage) var image: Data? When I substitute if let imageURL = item.image { for the old if let imageURL = model.imageURL { it doesn't work. I get the error: Cannot convert value of type 'Data' to expected argument type 'URL'. How can I convert the type 'Data' to the expected argument type 'URL'? The original imageURL I am using here comes from the struct Model, where it's saved as a variable: var imageURL: URL? = Bundle.main.url(forResource: "cat", withExtension: "png") I am at my wit's end. Thank you for any pointers!
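A possible approach, sketched under the assumption that the SwiftData attribute holds the raw image bytes: write the Data to a temporary file and keep using TextureResource.load(contentsOf:) exactly as before. The property name item.image is taken from the post; the PNG extension is an assumption about the stored bytes.

import Foundation
import RealityKit

// Write the stored bytes to a temporary file, then load as a texture.
func textureFromData(_ data: Data) throws -> TextureResource {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("png")           // assumes the stored bytes are PNG
    try data.write(to: tempURL)
    return try TextureResource.load(contentsOf: tempURL)
}

// Usage with the SwiftData model from the post:
// if let data = item.image, let texture = try? textureFromData(data) {
//     var material = UnlitMaterial()
//     material.color = .init(texture: .init(texture))
//     picInBox2.model?.materials = [material]
// }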
1
0
966
Feb ’24
[VisionOS] Can't import RealityKit/RealityKit.h for custom shader materials
Version details: Xcode Version 15.3 beta (15E5178i), visionOS 1.0 (21N301) SDK + visionOS 1.0 (21N305) Simulator (Installed). I'm trying to make a ModelEntity with a CustomMaterial.GeometryModifier, for which I also created a Metal shader file. The shader file is extremely simple at this time:
#include <metal_stdlib>
#include <RealityKit/RealityKit.h>
using namespace metal;

[[visible]]
void ExpandGeometryModifier(realitykit::geometry_parameters params)
{
    // Nothing.
}
When trying to compile my project, I get the following error: 'RealityKit/RealityKit.h' file not found. Is this not supported on visionOS?
2
0
815
Feb ’24
Children of a dragged entity get left behind when moving slowly
Hey friends, I'm using a drag gesture to rotate a parent object that contains several child colliders. When I drag slowly, sometimes the child colliders don't rotate along with the parent. Any help would be appreciated, thanks!
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let currentLocation = value.convert(value.location3D, from: .local, to: .scene)
            let delta = currentLocation - startLocation
            let spinX = Double(delta.y)
            let spinY = Double(delta.x)
            let pitch = Transform(pitch: Float(spinX * -1)).matrix
            let roll = Transform(roll: Float(spinY * -1)).matrix
            value.entity.transform.matrix = roll * pitch
        })
1
0
522
Feb ’24
Skyboxes for progressive views in Apple Vision Pro
I've added the Starfield image from Apple's World sample code to the Progressive immersive project template, and I've experimented with a few other images I had around. I have a few questions: (1) Lighter shots look fairly pixelated. Does Apple recommend any minimum/maximum resolutions for images used for the giant sphere? (I noticed Starfield is 4096x4096) (2) I just put the other images in the 2x well for the image set. Should I put other images in their own 2x well no matter the DPI of the image? (3) Apple's Starfield image is square, but skybox images I've used before tend to be much wider (with the top and bottom areas distorted). Is there a particular aspect ratio I should be using? (4) In at least one case, I think the center of the image was rotated to the right by about 20 degrees. Is this expected? Could it have been an artifact of the image's size or aspect ratio?
1
1
685
Feb ’24
RealityView Attachments normal
I have a view attachment attached to a hand anchor. When the attachment is facing away I don't want it to render. I might be missing something obvious, but I've made a System that runs on every render loop. In the update call I'm getting a reference to the Attachment using components, and this is as far as I got: I can't figure out how to get the normal of an Entity I receive in the update function. My plan was to take the head anchor normal and compare it to the entity normal. If they are facing each other I render the view attachment, otherwise not. Is there a simpler way? And if not, how do I get the normal of an entity?
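A rough sketch of one way to get an entity's world-space facing direction, assuming the usual RealityKit convention that an entity faces along its local -Z axis; comparing that direction with the vector toward the head (dot product) is likewise an assumption about the intended test, and headPosition below is a hypothetical value obtained elsewhere.

import RealityKit
import simd

// World-space forward (-Z) direction of an entity.
func worldForward(of entity: Entity) -> SIMD3<Float> {
    let m = entity.transformMatrix(relativeTo: nil)
    // Column 2 of the transform is the local +Z axis expressed in world space.
    let zAxis = SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    return -normalize(zAxis)
}

// Hypothetical visibility test inside a System update:
// let toHead = normalize(headPosition - attachmentEntity.position(relativeTo: nil))
// attachmentEntity.isEnabled = dot(worldForward(of: attachmentEntity), toHead) > 0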
0
0
484
Feb ’24
Location of demo "World App"
Where's the Xcode project for the "World" app referenced in Build spatial experiences with RealityKit? At 3 minutes in, the World app is shown with a 2D window, and it seems to be the expected starting place for the 3-module series. I see the code snippets below the video, which seem to intend adjustments to the original project. I found it by searching GitHub; maybe I'm missing an obvious link on the page. It is available here: https://developer.apple.com/documentation/visionos/world under the documentation page. Hope this helps someone.
0
0
404
Feb ’24
VisionOS VideoMaterial on 3D Mesh
I'm trying to get video material to work on an imported 3D asset, and this asset is a USDC file. There's actually an example in a WWDC video from Apple: you can see it running on the flag of an airplane, but there is no sample code for it, and there are no other examples on the internet. Does anybody know how to do this? You can look at 10:34 in this video. https://developer.apple.com/documentation/realitykit/videomaterial
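A minimal sketch of applying a VideoMaterial to a mesh inside an imported USDC entity; "Airplane", "Flag", and "flag.mp4" are hypothetical names standing in for whatever the asset actually contains.

import AVFoundation
import RealityKit

// Load the imported asset, find the mesh to play video on, and swap in a VideoMaterial.
func applyVideoMaterial() async throws {
    let scene = try await Entity(named: "Airplane")
    guard let flag = scene.findEntity(named: "Flag") as? ModelEntity,
          let videoURL = Bundle.main.url(forResource: "flag", withExtension: "mp4") else { return }
    let player = AVPlayer(url: videoURL)
    flag.model?.materials = [VideoMaterial(avPlayer: player)]
    player.play()
}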
1
0
446
Feb ’24