RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Post

Replies

Boosts

Views

Activity

RealityKit add gestures and animations to Entity/ModelEntity
I have a problem: I want to load a usdz file, place it, and animate it in RealityKit. It should also be possible to apply the standard gestures via arView.installGestures(for: entity). As far as I know, I have two ways to load an entity, for example Apple's "toy_biplane.usdz" (which also contains a skeletal animation):

    let entity = try! Entity.load(named: "toy_biplane")

or

    let entity = try! Entity.loadModel(named: "toy_biplane")

When I use the first one, the usdz is loaded as an Entity, not a ModelEntity. I can play its animation with entity.playAnimation(entity.availableAnimations[0].repeat()), but I can't use arView.installGestures(for: entity) because Entity does not conform to HasCollision. I tried subclassing Entity and conforming to HasCollision and HasModel. It compiles, but even after calling generateCollisionShapes(recursive: true) the gestures don't work.

So I tried the loadModel approach, which returns a ModelEntity. There arView.installGestures works fine, exactly as expected. But when I play the animation, the airplane rotates around my camera in a very weird way. I also tried loading asynchronously, with no success.

After a lot of debugging I found that the Entity from the first approach contains many children from the usdz. Each of them is part of the skeleton and has its own animation. Not so with the ModelEntity: its children property is an empty set, so the animation (e.g. the rotation of the propeller) is not applied to the skeletal element it belongs to, but rather to the combined overall skeleton, causing a rotation of the whole plane, which is not what I want.

What am I doing wrong, or is this unintended behaviour in RealityKit? Thanks.
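One common workaround combines the two approaches: load the full hierarchy with Entity.load (so the skeletal animation survives), then wrap it in a parent that conforms to HasCollision so the gestures have something to install on. A minimal sketch, untested, assuming standard iOS RealityKit APIs; GestureContainer and place(_:in:at:) are illustrative names:

```swift
import RealityKit
import UIKit

// An Entity subclass that satisfies installGestures' HasCollision requirement.
final class GestureContainer: Entity, HasCollision {}

func place(_ named: String, in arView: ARView, at anchor: AnchorEntity) {
    // Entity.load keeps the child hierarchy, so skeletal animations stay intact.
    guard let loaded = try? Entity.load(named: named) else { return }

    let container = GestureContainer()
    container.addChild(loaded)
    // Generate collision shapes over the whole subtree so gestures can hit-test it.
    container.generateCollisionShapes(recursive: true)

    anchor.addChild(container)
    arView.scene.addAnchor(anchor)
    arView.installGestures([.translation, .rotation, .scale], for: container)

    // Play the skeletal animation on the loaded subtree, not on the container.
    if let animation = loaded.availableAnimations.first {
        loaded.playAnimation(animation.repeat())
    }
}
```

The key point is that gestures then move the container while the animation plays on the loaded child hierarchy, so the two no longer fight over the same transform.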
12
2
8.8k
Jul ’19
Reality Composer - Buttons don't work after exporting .reality file
Hello everyone, I had some problems with buttons and the tap trigger in AR Quick Look. If I place a button object and another object over the button (in my case it was a sphere, or an imported usdz object) and export the project from Reality Composer as a .reality file, the button loses its interactivity. It works in Reality Composer's play mode (in the example video I attached, the sphere starts moving when you tap the button), but nothing happens if I export the project and test it in AR Quick Look. Here is a small example of this problem (with the attached .rcproject, .reality file, and two videos of testing the scene in Reality Composer play mode and in AR Quick Look): https://drive.google.com/file/d/1eQa-pCEihRVtgP7jJUlpfhG5PjKZulJB/view?usp=sharing Do you have any ideas how to fix this problem?
2
1
891
Feb ’20
Subclassing / Modifying the built-in gesture recognizers
Hello, RealityKit offers an awesome interface to install gestures for the common interactions with a virtual object in 3D space. One of them is EntityTranslationGestureRecognizer, which moves the 3D object through 3D space. Checking the documentation, I found the velocity(in:) method, which I'd like to modify to limit the speed at which an object can be moved through 3D space: https://developer.apple.com/documentation/realitykit/entitytranslationgesturerecognizer/3255581-velocity I haven't found a straightforward way to subclass and install this gesture recognizer yet. Am I missing something? Best, Lennart
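RealityKit doesn't document a way to subclass its gesture recognizers, but installGestures(_:for:) returns the recognizers it creates, and they are ordinary UIGestureRecognizers, so an extra target/action can be attached. A sketch (untested) that clamps the per-frame displacement instead of overriding velocity(in:); maxSpeed, TranslationLimiter, and the clamping policy are assumptions for illustration:

```swift
import RealityKit
import UIKit

final class TranslationLimiter: NSObject {
    let maxSpeed: Float = 0.5  // metres per second — a hypothetical limit
    private var lastPosition: SIMD3<Float>?
    private var lastTime: CFTimeInterval?

    func install(for entity: Entity & HasCollision, in arView: ARView) {
        // installGestures returns the recognizers it creates, so we can
        // attach an additional target without subclassing.
        for recognizer in arView.installGestures([.translation], for: entity) {
            if let pan = recognizer as? EntityTranslationGestureRecognizer {
                pan.addTarget(self, action: #selector(handle(_:)))
            }
        }
    }

    @objc private func handle(_ recognizer: EntityTranslationGestureRecognizer) {
        guard let entity = recognizer.entity else { return }
        let now = CACurrentMediaTime()
        var current = entity.position(relativeTo: nil)
        if let previous = lastPosition, let previousTime = lastTime, now > previousTime {
            let dt = Float(now - previousTime)
            let delta = current - previous
            let speed = simd_length(delta) / dt
            if speed > maxSpeed {
                // Rescale this frame's displacement down to the speed limit.
                current = previous + delta * (maxSpeed / speed)
                entity.setPosition(current, relativeTo: nil)
            }
        }
        lastPosition = current
        lastTime = now
        if recognizer.state == .ended || recognizer.state == .cancelled {
            lastPosition = nil
            lastTime = nil
        }
    }
}
```

This runs after RealityKit's own handler each frame, so the clamp overwrites whatever position the built-in gesture just set.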
2
1
584
Jun ’20
Error: 'Texture Descriptor Validation'
My RealityKit app uses an ARView with camera mode .nonAR. Later it puts another ARView with camera mode .ar on top of this. When I apply layout constraints to the second view, the program aborts with the following messages. If both views are of type .ar this doesn't occur; it happens only when the first view is .nonAR and the second is presented over it. I have so far been unable to reproduce this behavior in a demo program to provide to you, and the original code is complex and proprietary. Does anyone know what is happening? I've seen other questions concerning this situation, but not under the same circumstances.

    2021-12-01 17:59:11.974698-0500 MyApp[10615:6672868] -[MTLTextureDescriptorInternal validateWithDevice:], line 1325: error 'Texture Descriptor Validation
    MTLTextureDescriptor has width (4294967295) greater than the maximum allowed size of 16384.
    MTLTextureDescriptor has height (4294967295) greater than the maximum allowed size of 16384.
    MTLTextureDescriptor has invalid pixelFormat (0).'
    -[MTLTextureDescriptorInternal validateWithDevice:]:1325: failed assertion `Texture Descriptor Validation
    MTLTextureDescriptor has width (4294967295) greater than the maximum allowed size of 16384.
    MTLTextureDescriptor has height (4294967295) greater than the maximum allowed size of 16384.
    MTLTextureDescriptor has invalid pixelFormat (0).
3
0
2.4k
Dec ’21
RealityKit MeshResource generated from SwiftUI shape
In SceneKit, using SCNShape we can create SCN geometry from SwiftUI 2D shapes/Béziers: https://developer.apple.com/documentation/scenekit/scnshape Is there an equivalent in RealityKit? Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
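There is no direct SCNShape equivalent, but as of iOS 15 a flat mesh can be built from a 2D outline with MeshDescriptor and MeshResource.generate(from:). A sketch, assuming a convex outline (it uses a simple triangle fan; concave shapes would need real triangulation, and extrusion would need side walls on top of this):

```swift
import RealityKit

// Build a flat, single-sided mesh in the XY plane from a 2D outline.
// flatMesh(from:) is an illustrative helper, not a RealityKit API.
func flatMesh(from outline: [SIMD2<Float>]) throws -> MeshResource {
    var descriptor = MeshDescriptor(name: "flatShape")
    // Lift the 2D points into 3D at z = 0.
    descriptor.positions = MeshBuffer(outline.map { SIMD3<Float>($0.x, $0.y, 0) })
    // Fan triangulation from vertex 0 — valid for convex outlines only.
    var indices: [UInt32] = []
    for i in 1..<(outline.count - 1) {
        indices += [0, UInt32(i), UInt32(i + 1)]
    }
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```

To feed it a SwiftUI shape, one would sample points along the shape's Path (e.g. via trimmedPath) and pass them as the outline.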
2
0
887
Mar ’22
RealityViewContent update
I am working on a project where changes in a window are reflected in a volumetric view which includes a RealityView. I have a shared data model between the window and the volumetric view, but it's unclear to me how I can programmatically refresh the RealityViewContent. Initially I tried holding the RealityViewContent passed from the RealityView closure in the data model, and I also tried embedding a .sink into the closure, but because the RealityViewContent is inout, neither of those works. And changes to the window's contents do not cause the RealityView's update closure to fire. Is there a way to notify the RealityViewContent to update?
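A pattern that usually works: instead of storing the RealityViewContent, keep the shared state in an observable model and read it inside the RealityView's update closure — reading an observed property there is what makes SwiftUI re-run the closure when that property changes. A sketch, with SharedModel and the scale property as hypothetical names:

```swift
import SwiftUI
import RealityKit

@Observable final class SharedModel {
    var scale: Float = 1.0  // mutated from the window's UI
}

struct VolumeView: View {
    @Environment(SharedModel.self) private var model
    @State private var root = Entity()

    var body: some View {
        RealityView { content in
            content.add(root)
        } update: { content in
            // Because model.scale is read here, changes made from the window
            // invalidate this view and re-trigger the update closure.
            root.scale = .one * model.scale
        }
    }
}
```

The update closure only fires on SwiftUI state changes it depends on, so the model must be observed (via @Observable/@Environment or ObservableObject) by the view that hosts the RealityView.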
4
0
786
Jun ’23
RealityKit visionOS anchor to POV
Hi, is there a way in visionOS to anchor an entity to the POV via RealityKit? I need an entity which is always fixed to the 'camera'. I'm aware that this is discouraged from a design perspective, as it can be visually distracting. In my case, though, I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.

Edit: ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform). How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable. An example use case: I would like to add an entity to the scene at my user's eye level, basically depending on their height. I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it. Appreciate any hints, thanks!
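For the invisible-collider case, visionOS's AnchorEntity(.head) keeps an entity fixed relative to the wearer's head, which is effectively the POV. A sketch (the sphere radius is an arbitrary illustrative value):

```swift
import RealityKit

// Build a head-anchored collider; add `headAnchor` to a RealityView's content.
func makeHeadCollider() -> AnchorEntity {
    let headAnchor = AnchorEntity(.head)
    let collider = Entity()
    // An invisible collision sphere roughly around the viewer's head.
    collider.components.set(
        CollisionComponent(shapes: [.generateSphere(radius: 0.15)])
    )
    headAnchor.addChild(collider)
    return headAnchor
}
```

Note that anchored entities' transforms aren't generally readable by the app on visionOS for privacy reasons, so this helps with collisions but not with reading the camera transform the way ARView.cameraTransform does; for eye-level placement, ARKit's WorldTrackingProvider with queryDeviceAnchor is the usual route in an immersive space.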
7
6
2.8k
Jun ’23
AVFoundation with lidar and this year's RealityKit Object Capture.
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation to HEIC, it only has Exif and TIFF metadata. But the HEIC image from this year's RealityKit Object Capture has not only Exif and TIFF but also HEIC metadata, including camera calibration data. What should I do so that the image exported from AVFoundation has the same metadata?
2
0
1k
Jun ’23
Object Capture With only manual capturing
Is it possible to capture only manually (automatic off) with the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API captures real-scale objects. Using AVFoundation or ARKit, I've tried capturing HEVC with lidar and creating a PhotogrammetrySample, but it doesn't create a real-scale object. I think that during Object Capture the API captures a point cloud and intrinsic parameters, which help the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing using AVFoundation for a real-scale mesh'?
2
0
1k
Jul ’23
Collision shapes for primitive shapes don't account for scale
Hello, I'm scaling a primitive cube to create a collision in RealityKit on iOS via Reality Composer Pro. However, when I export it as a usdz and then call generateCollisionShapes(recursive: true), it ignores the scaling I have set for the object. It appears to ignore the non-uniform scaling I'm using and just creates a big cube collision, scaled up proportionally. I have tried using bounding boxes, etc., to create my own collision ShapeResource, but it consistently has this problem. If I take Reality Composer Pro out of the equation, the same issue occurs when adding a collision to a primitive cube. Is this a bug in RealityKit's collision scaling?
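A workaround sketch, untested: bake the entity's (possibly non-uniform) scale into a manually generated box shape instead of relying on generateCollisionShapes(recursive:). addScaledBoxCollision(to:) is an illustrative helper name:

```swift
import RealityKit

// Build a collision box from the mesh bounds with the entity's own scale
// applied component-wise, then attach it as a CollisionComponent.
func addScaledBoxCollision(to entity: ModelEntity) {
    guard let bounds = entity.model?.mesh.bounds else { return }
    // Multiply the unscaled mesh extents by the entity's scale per axis,
    // preserving non-uniform scaling.
    let extents = bounds.extents * entity.scale
    let shape = ShapeResource.generateBox(size: extents)
        .offsetBy(translation: bounds.center * entity.scale)
    entity.components.set(CollisionComponent(shapes: [shape]))
}
```

If the scale comes from an ancestor rather than the entity itself, the accumulated scale along the hierarchy would have to be multiplied in instead of entity.scale alone.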
0
0
316
Jul ’23
RealityKit cannot load Entity
Very often when I try to load an Entity in a RealityView, the system gives me an error: The operation couldn't be completed. (Swift.CancellationError error 1.)

    RealityView { content in
        do {
            let scene = try await Entity(named: "Test_Scene", in: realityKitContentBundle)
            content.add(scene)
        } catch {
            debugPrint("error loading scene", error.localizedDescription.debugDescription)
        }
    }

The scene I created contains a basic square. This is the main file:

    struct first_app_by_appleApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }.windowStyle(.volumetric)
        }
    }

and inside the Info.plist file I set UIApplicationPreferredDefaultSceneSessionRole to UIWindowSceneSessionRoleVolumetricApplication. I think this could be a bug in the system, but I'm not surprised, since it's the first beta. If so, do you know any workarounds?
2
0
844
Jul ’23
How to get grounding shadow to work in VisionOS?
Hi, I'm trying to replicate the grounding shadow in this video. However, I couldn't get it to work in the simulator. My scene, rendered as an immersive space, looks like the following: the rocket object has the grounding shadow component with "cast shadow" set to true, but I couldn't see any shadow on the plane beneath it. Things I tried:

- using code to add the grounding shadow component — didn't work
- re-using the IBL from the Hello World project to get some lighting for the objects; the IBL worked, but I still couldn't see the shadow
- adding a DirectionalLight, but I got an error saying that directional lights are not supported on visionOS (despite the docs saying the opposite)

A related question on lighting: I can see that the simulator definitely applies some scene lighting to objects, but it doesn't seem to do it perfectly. For example, in the above screenshot I placed the objects under a transparent ceiling which is supposed to get a lot of light, but everything is still quite dark.
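For reference, the programmatic form of "cast shadow" is GroundingShadowComponent. One thing worth checking is that the component sits on the entities that actually own the model — it does not propagate down the hierarchy on its own. A sketch:

```swift
import RealityKit

// Attach a grounding shadow to an entity and every descendant, since loading
// a scene often nests the ModelComponent a few levels below the root.
func enableGroundingShadow(on root: Entity) {
    root.components.set(GroundingShadowComponent(castsShadow: true))
    for child in root.children {
        enableGroundingShadow(on: child)  // recurse through the hierarchy
    }
}
```

Whether the simulator renders the shadow at all is a separate question; grounding shadows have been reported to show up on device while staying invisible in some simulator builds, so testing on hardware is worth a try.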
6
1
2.1k
Jul ’23
RealityView update closure not executed upon state change
I have the following piece of code:

    @State var root = Entity()

    var body: some View {
        RealityView { content, _ in
            do {
                let _root = try await Entity(named: "Immersive", in: realityKitContentBundle)
                content.add(_root)
                // root = _root  <-- this doesn't trigger the update closure
                Task {
                    root = _root  // <-- this does
                }
            } catch {
                print("Error in RealityView's make: \(error)")
            }
        } update: { content, attachments in
            // NOTE: update is not called when root is modified
            // unless the modification is wrapped in a Task
            print(root)  // the intent is to use root for positioning attachments
        } attachments: {
            Text("Preview")
                .font(.system(size: 100))
                .background(.pink)
                .tag("initial_text")
        }
    }  // end body

If I change the root state in the make closure by simply assigning it another entity, the update closure is not called — print(root) prints two empty entities. If instead I wrap the assignment in a Task, the update closure is called: I see the correct root entity being printed. Any idea why this is the case? In general, I'm unsure of the order in which the make, update and attachments closures are executed. Is there more guidance on what we should expect the order to be, what we should typically do in each closure, etc.?
1
0
860
Jul ’23
Entity rotation animation doesn't go beyond 180 degree?
I use the simple transform-and-move method to animate my entity, something like this:

    let transform = Transform(scale: .one,
                              rotation: simd_quatf(angle: .pi, axis: SIMD3(x: 0, y: 0, z: 1)),
                              translation: .zero)
    myEntity.move(to: transform, relativeTo: myEntity, duration: 1)

All is well, but when I try to rotate by any more than 180 degrees, the rotation stays still. How do I animate something that needs to turn 360 degrees? Thanks
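The cause is that quaternion interpolation always takes the shortest path, so a single move(to:) of more than 180° collapses to the shorter direction, and exactly 360° is the identity rotation (no visible motion). The usual workaround is to split the turn into steps under 180° and chain the moves. The angle-splitting part can be sketched in plain Swift; steps(forDegrees:maxStep:) is a hypothetical helper, and each resulting step would then be animated with a relative move(to:relativeTo:duration:) as in the code above, starting the next step when the previous one finishes:

```swift
import Foundation

// Split a total rotation into equal steps no larger than maxStep degrees,
// so each chained move(to:) stays safely below the 180° shortest-path limit.
func steps(forDegrees total: Double, maxStep: Double = 120) -> [Double] {
    guard total != 0 else { return [] }
    let count = Int(ceil(abs(total) / maxStep))
    return Array(repeating: total / Double(count), count: count)
}
```

For example, a full 360° turn becomes three relative 120° rotations. Subscribing to AnimationEvents.PlaybackCompleted on the entity is one way to trigger each subsequent step.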
2
0
705
Jul ’23
How to position windows in the environment in VisionOS?
The below code is my entry point:

    import SwiftUI

    @main
    struct KaApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
            WindowGroup(id: "text-window") {
                ZStack {
                    TextViewWindow().background(.ultraThickMaterial).edgesIgnoringSafeArea(.all)
                }
            }.windowStyle(.automatic).defaultSize(width: 0.1, height: 0.1, depth: 1, in: .meters)
            WindowGroup(id: "model-kala") {
                ModelView()
            }.windowStyle(.volumetric).defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
            WindowGroup(id: "model-kala-2") {
                AllModelsView().edgesIgnoringSafeArea(.all)
            }.windowStyle(.volumetric).defaultSize(width: 1, height: 1, depth: 1, in: .meters)
        }
    }

I want to place the TextViewWindow exactly near a model that I have placed in the environment, but I'm unable to reposition the window to exactly where I want:

    if let Armor_Cyber = try? await ModelEntity(named: "Armor_Cyber"),
       let animation = Armor_Cyber.availableAnimations.first {
        Armor_Cyber.playAnimation(animation.repeat(duration: .infinity))
        Armor_Cyber.scale = [0.008, 0.008, 0.008]
        Armor_Cyber.position = [-4, -1, 0.15]
        let rotation = simd_quatf(angle: -.pi / 6, axis: SIMD3<Float>(0, 1, 0))
            * simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
            * simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 0, 1))
        Armor_Cyber.transform.rotation = rotation
        content.add(Armor_Cyber)
    }

How can I place the WindowGroup exactly at the top right of the above model?
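As far as I know, visionOS does not let an app position WindowGroups programmatically — window placement is reserved for the user and the system. The usual alternative for UI pinned to a model is a RealityView attachment, which is an ordinary entity you can position relative to the model. A sketch, reusing the Armor_Cyber name from the post (the offsets and label content are illustrative):

```swift
import SwiftUI
import RealityKit

struct ModelWithLabel: View {
    var body: some View {
        RealityView { content, attachments in
            if let model = try? await ModelEntity(named: "Armor_Cyber") {
                content.add(model)
                if let label = attachments.entity(for: "label") {
                    // Offset the label to sit above and to the right of the model.
                    label.position = model.position + SIMD3<Float>(0.3, 0.4, 0)
                    content.add(label)
                }
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Armor Cyber")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```

Because the attachment is an entity, it can also be added as a child of the model so it follows the model's transform automatically.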
2
0
989
Jul ’23