RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

444 Posts
Post marked as solved
2 Replies
396 Views
I am working with MeshAnchors, and I am having trouble getting the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with this as a property.
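For reference, a minimal sketch of one way to read per-face values out of that GeometrySource. It assumes the classifications are stored as one 8-bit value per face (mirroring ARKit's mesh classification layout); check source.format before relying on this, and the mapping of the raw byte onto MeshAnchor.MeshClassification cases is an assumption rather than confirmed API behavior:

import ARKit
import Metal

// Sketch only: assumes an 8-bit classification value per face at offset + stride * faceIndex.
func classificationRawValue(forFace faceIndex: Int, in source: GeometrySource) -> UInt8? {
    guard faceIndex >= 0, faceIndex < source.count else { return nil }
    let byteOffset = source.offset + source.stride * faceIndex
    return source.buffer.contents()
        .advanced(by: byteOffset)
        .assumingMemoryBound(to: UInt8.self)
        .pointee
}
// Map the returned raw value onto MeshAnchor.MeshClassification yourself once the
// buffer layout is confirmed.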
Posted by Todd2. Last updated.
Post not yet marked as solved
1 Reply
349 Views
I was developing my project much like HappyBeam, with a mechanism where, after a short game round, a menu pops up letting the player play again or go back to the main menu. The first game after installing it on my Vision Pro works fine; basically like HappyBeam, it counts down from 3 to 1 and then calls await openImmersiveSpace(id: "***") to enter my game. After the round finishes I call await dismissImmersiveSpace() and reset my game, ready for the next one, letting the player choose to play again or return to the menu. This time, however, when the counter runs from 3 to 1, the immersive view doesn't show up and the visionOS menu appears instead (I guess the immersive space cannot be opened). Some errors from my logger are below:

<FBSWorkspaceScenesClient:0x281820e00 com.apple.frontboard.systemappservices> scene request failed to return scene with error response : <NSError: 0x2839bc270; domain: FBSWorkspaceErrorDomain; code: 1 ("InvalidScene"); "scene invalidated before create completion">
------------------------------------------------
Unable to present an Immersive Space for id 'ImmersiveSpace': Error Domain=FBSWorkspaceErrorDomain Code=1 "scene invalidated before create completion" UserInfo={BSErrorCodeDescription=InvalidScene, NSLocalizedFailureReason=scene invalidated before create completion}
------------------------------------------------
Error: BSLogAddStateCaptureBlockWithTitle(EventDeferringState:com.milanow.mygame:SFBSystemService-C90B0828-4522-4098-9E6A-0D5968CFCEB8) state data format error: <NSError: 0x283947360; domain: BSSharedStateCapturing; code: 1; "Input generated no data"> { NSUnderlyingError = <__NSCFError: 0x2839451d0; domain: NSCocoaErrorDomain; code: 3851> { NSDebugDescription = Property list invalid for format: 200 (property lists cannot contain NULL); }; }

I wonder what is happening here, since I could not find anything helpful online. (The openImmersiveSpace code snippet is unremarkable, but I'll still post it here.)

var body: some Scene {
    Group {
        WindowGroup(id: "MainUI") {
            MainView()
        }
        .windowStyle(.plain)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            GameView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
    .onChange(of: gameModel.state) { oldValue, newValue in
        guard oldValue != newValue else { return }
        if case let .dismissingImmersiveSpace(finish) = newValue {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "MainUI")
                finish()
            }
        } else if case let .openingImmersiveSpace(startPlaying) = newValue {
            Task {
                await openImmersiveSpace(id: "ImmersiveSpace")
                dismissWindow(id: "MainUI")
                startPlaying()
            }
        } else if case .playing = oldValue {
            openWindow(id: "MainUI")
        } else if case .playing = newValue {
            dismissWindow(id: "MainUI")
        }
    }
}

You can see there's nothing magic here; it just follows what HappyBeam does. (I really wonder what makes a scene invalid.)
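One thing worth checking alongside this (a sketch, not a diagnosis of the InvalidScene error): openImmersiveSpace returns a result, so a failed open can be detected and the game state rolled back instead of assuming the space is up. The .menu state below is hypothetical; substitute whatever your game model actually uses.

Task {
    switch await openImmersiveSpace(id: "ImmersiveSpace") {
    case .opened:
        dismissWindow(id: "MainUI")
        startPlaying()
    case .userCancelled, .error:
        // Hypothetical recovery path: stay in the menu and reset state so the
        // next attempt starts cleanly.
        gameModel.state = .menu
    @unknown default:
        break
    }
}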
Posted by milanowth. Last updated.
Post not yet marked as solved
0 Replies
191 Views
Hello. I'm developing an app using ARKit and RealityKit. The purpose of the app is to scan the apartment and put furniture next to the walls. It works well, but if the AR session runs for more than about 3 minutes, the app crashes at some point. According to the crash report, it's not related to my code. I'm attaching the crash report (company data is hidden). Any help is appreciated. Thanks in advance.
Posted by volov3ly. Last updated.
Post not yet marked as solved
2 Replies
308 Views
I'm developing an app for Apple Vision Pro and have a question about RealityKit. Recently, I attempted to use drag gestures to manipulate two entities, A and B, with my left and right hands respectively. The two entities belong to the same RealityView. I anticipated that I could move Entity A with my left hand and Entity B with my right hand independently. However, I noticed that the movement of one hand affects both entities simultaneously. Presumably, DragGesture().onChanged is triggered twice for each entity. In an attempt to properly pair each hand with its corresponding entity, I investigated the platform.manipulatorGroup in the debugger. However, I encountered a compile error when trying to access the platform variable. Is it feasible to pair each hand with a specific entity and move both objects separately? Thank you in advance.
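A sketch of the usual way to keep the two drags independent, assuming each entity already has its own CollisionComponent and InputTargetComponent: use targetedToAnyEntity() and move only value.entity, the entity that particular gesture actually hit, instead of a captured reference to A or B. The entityA/entityB names are placeholders.

RealityView { content in
    content.add(entityA)
    content.add(entityB)
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity is the entity this particular drag started on, so a
            // left-hand drag on A never moves B, and vice versa.
            guard let parent = value.entity.parent else { return }
            value.entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
)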
Posted. Last updated.
Post not yet marked as solved
1 Reply
219 Views
Trying to find some answers on why billboarding isn't working when attaching to an entity that is a child of an anchor. I'm trying to billboard an attachment so that it stays pointed at the user wherever they view the content from, for example showing some context information over a dynamic 3D model on a table top, not baked into a Reality Composer Pro scene. I pulled the component and system used in the Apple example projects (Diorama) that have the billboarding system. Playing around with the system and component, I can add a simple model entity to the scene, tag it with the component, and it works perfectly, all the time. As the camera moves it tracks perfectly, even when nested under other empty entities, off center, or oddly rotated. Great! Then I wanted to apply this to an attachment shown from a model entity anchored to a horizontal plane, and all of a sudden it doesn't work at all. I create the anchor:

let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.01, 0.01]))
if let lookAtText = attachments.entity(for: "LookAtMe") {
    lookAtText.position = [0, 0.5, 0]
    lookAtText.components.set(BillboardComponent())
    lookAtText.name = "Look At Me"
    anchor.addChild(lookAtText)
}
content.add(anchor)

The attachment shows correctly above the anchor, as expected, and does rotate somewhat, but either totally wrong or it just stops; it does not billboard correctly, or even remotely correctly. If I switch the anchor.addChild to a content.add, it isn't in the correct place, but billboarding works. I don't understand why adding it as a child of the anchor entity suddenly breaks a completely unrelated system. Am I doing something wrong, or is this some sort of privacy issue? I can't find any documentation saying that using the look-at API from an anchored entity is somehow forbidden.
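For comparison, a sketch of a billboard update written purely in world space, so a parent AnchorEntity's transform is taken into account. It assumes you already have the viewer's position in world coordinates (for example from WorldTrackingProvider.queryDeviceAnchor) and that something calls it every frame:

func billboard(_ entity: Entity, towardViewerAt viewerPosition: SIMD3<Float>) {
    // relativeTo: nil means world space, so it should not matter whether the
    // entity is parented to an AnchorEntity or added directly to the content.
    let worldPosition = entity.position(relativeTo: nil)
    entity.look(at: viewerPosition, from: worldPosition, relativeTo: nil)
    // Depending on how the attachment is authored, you may need an extra 180°
    // yaw here so its front face points at the viewer.
}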
Posted. Last updated.
Post marked as solved
1 Reply
211 Views
Unity's PolySpatial supports a HoverEffect GameObject. As I understand it, even though I never learn exactly which entity the user is looking at, the developer can still give RealityKit something like an event callback saying "please change this entity's mesh to another color", just like the hoverEffect modifier on a SwiftUI component. So I wonder: is there a closure I can provide so that RealityKit, the system, fires my callback?
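As far as I know there is no gaze callback (the system applies hover effects out of process for privacy reasons), but a sketch of letting the system highlight an entity on hover looks like the following; the mesh and radius are just placeholders:

let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                         materials: [SimpleMaterial(color: .blue, isMetallic: false)])
// All three components are needed for the system-rendered hover highlight.
sphere.components.set(InputTargetComponent())
sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
sphere.components.set(HoverEffectComponent())
// There is no closure that fires when the user looks at the entity; for explicit
// feedback in code, attach a SpatialTapGesture to the RealityView instead.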
Posted by milanowth. Last updated.
Post not yet marked as solved
9 Replies
834 Views
Hello all, RealityKit on visionOS. I have a parent entity (surface in the code below) to which I'm adding a child entity that has all the required components for gestures (collision & input target). I'm setting up the TapGesture as expected (same for SpatialTapGesture). Result: the gesture is not working, and neither is the hover effect; the entity is not recognized as a tappable element. However, if I add the child entity to the content instead of to the parent entity, everything works. Code for both scenarios is below. Any idea? Many thanks, Dudi

Doesn't work:

RealityView { content, attachments in
    let scene = clonedEntity.clone(recursive: true)
    ...
    surface.addChild(scene) // <- Doesn't work
} attachments: {
    ...
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    openWindow(id: "detailed-window")
})
.hoverEffect()

Works:

RealityView { content, attachments in
    let scene = clonedEntity.clone(recursive: true)
    ...
    content.add(scene) // <- Works
} attachments: {
    ...
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    openWindow(id: "detailed-window")
})
.hoverEffect()
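A sketch of things worth double-checking in the first scenario (assumptions, not a confirmed fix): the parent chain has to end up in the RealityView content, and the clone needs its own collision shapes and input target:

let scene = clonedEntity.clone(recursive: true)
scene.components.set(InputTargetComponent())
scene.generateCollisionShapes(recursive: true) // collision shapes on the clone itself
surface.addChild(scene)
content.add(surface) // assumed: surface (or one of its ancestors) must be added to content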
Posted by DudiSG. Last updated.
Post not yet marked as solved
0 Replies
165 Views
We are developing an AR app which uses spatial audio. If we want to use RealityKit to create the app, will we need to use a MacBook Pro with Apple silicon?
Posted by Skynard. Last updated.
Post not yet marked as solved
1 Reply
298 Views
Hello, I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success. RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772

Validating that instancing is working with RealityKit Trace metrics: to test, I made a basic visionOS app with an immersive space and replaced the entity with my test USDZ file. I've been profiling with the RealityKit Trace template in Xcode Instruments, with the immersive space open and the volume closed, which gives consistent draw-call results. If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.

What I've tried:
- Create a test scene in Blender and export with instancing enabled
- Create a test scene in Reality Composer Pro using references
- Author .usda files by hand based on the OpenUSD spec
- Programmatically create a MeshResource with Contents at runtime

References:
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance

Thank you
Posted by VirtDev. Last updated.
Post marked as solved
2 Replies
309 Views
I'm trying to make a simple demo of using ShaderGraphMaterial in a USDZ file that I can preview on Mac and visionOS, but I'm having trouble. In Reality Composer, I make a sphere, then assign a ShaderGraphMaterial to the material, with a simple diffuse color (green) input. When I save the file as .usda, it displays as a gray sphere on Mac rather than the green sphere shown in Reality Composer. If I then convert to USDZ using Reality Converter, I get a warning on import: "Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source." And the exported .usdz also shows as a gray sphere. Is there a simple demo of a .usda file using ShaderGraphMaterial that displays on Mac, iOS, and visionOS that I can look at to see how it looks internally? My actual problem is creating usdz / usda files on visionOS for viewing on iOS / Mac / visionOS, but the first step is showing it's possible to even use ShaderGraphMaterial across all platforms. Thanks
Posted by cc4. Last updated.
Post not yet marked as solved
1 Reply
254 Views
I see example code converting the results of a SpatialTap to a SIMD3<Float> location. For example, from the WWDC session Meet ARKit for spatial computing:

let location3D = value.convert(value.location3D, from: .global, to: .scene)

What I really want is a simd_float4x4 that also includes the orientation of the surface that the tap gesture/ray cast collided with. My goal is to place an object with its Y-axis along the normal of the surface that was tapped. For example, in the referenced WWDC session they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., from the closest triangle approximating it). Is this possible? My planned fallback is to use only planes as collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information. I am hoping Apple has a simple function call that is more general than my fallback approach.
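A sketch of the fallback approach described above: build the placement transform from the tapped entity's world orientation plus the converted tap location. It assumes the tapped entity's +Y axis really is the surface normal (true for a plane-aligned collision entity, not for an arbitrary mesh), and placedEntity is a placeholder:

.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    // World-space hit location, as in the WWDC snippet.
    let location3D = value.convert(value.location3D, from: .global, to: .scene)
    // Fallback orientation: the tapped (plane) entity's world rotation.
    let orientation = value.entity.orientation(relativeTo: nil)
    placedEntity.transform = Transform(scale: .one,
                                       rotation: orientation,
                                       translation: location3D)
})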
Posted by Todd2. Last updated.
Post not yet marked as solved
0 Replies
343 Views
Is there any way to specify a clip volume or clipping planes on either a RealityView or the underlying RealityKit entity on visionOS? This was easy on SceneKit with shader modifiers, or in OpenGL, or WebGL, or with RealityKit on iOS or macOS with CustomMaterial surface shader, but CustomMaterial is not supported on visionOS.
Posted by avitzur. Last updated.
Post not yet marked as solved
2 Replies
299 Views
Hello, I'm currently building an app that implements the on-device object capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet: Can on-device object capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models - if there is an option to disable it, where would one specify that in code? Can models be exported to .obj instead of .usdz? From WWDC2021 at 3:00 it is mentioned that it is possible with the Apple Silicon API but what about with on-device scanning? I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
Posted. Last updated.
Post not yet marked as solved
0 Replies
200 Views
I'm trying to better understand how loading entities works. If I do this:

RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}

it returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity, it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and still have individual control?
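A sketch of one way to get individual control while still loading the root scene: look the children up by name, make sure each one is individually hit-testable, and then rely on value.entity inside the gesture. sphere_01/sphere_02 are the names from the post; the drag math is one common pattern, not the only option.

RealityView { content in
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
        for name in ["sphere_01", "sphere_02"] {
            if let sphere = scene.findEntity(named: name) {
                sphere.components.set(InputTargetComponent())
                sphere.generateCollisionShapes(recursive: false)
            }
        }
    }
}
.gesture(DragGesture().targetedToAnyEntity().onChanged { value in
    // value.entity is the specific sphere that was hit, so each moves on its own.
    guard let parent = value.entity.parent else { return }
    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
})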
Posted. Last updated.
Post not yet marked as solved
0 Replies
180 Views
Hi, I am investigating how to achieve emissive (glowing) materials like the following in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now I'm trying various things with Shader Graph in Reality Composer Pro, but from the official documentation and WWDC session videos I can't tell what the individual Shader Graph nodes do or what effects combinations of them produce, so I'm having a hard time. I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know. Thanks.
Posted. Last updated.
Post not yet marked as solved
2 Replies
513 Views
Hi, what are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.

Let's say you have some USDZ files stored in a cloud service, and there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while the app is running. Is it possible to load USDZ files at runtime from the network?

Is there a limit to how many objects can be visible at once? Let's say I am in an open space, with no walls, and I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, or 1000?

Is there a way to save the anchor point of an object? I want to open the app again and have the object in the same place I left it. I would like to arrange my space and have objects always in the same spots. How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would they behave like real objects?

Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is it possible?
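On the first question, loading a USDZ at runtime is possible; a sketch, assuming you download the file to local storage first (the URL below is a placeholder):

func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    // Entity loading expects a file URL, so download to disk first and give the
    // temporary file a .usdz extension.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)
    let localURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)
    return try await Entity(contentsOf: localURL)
}

// Usage (hypothetical URL):
// let entity = try await loadRemoteUSDZ(from: URL(string: "https://example.com/model.usdz")!)
// content.add(entity)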
Posted by Coderian. Last updated.
Post not yet marked as solved
7 Replies
413 Views
Hi, I'm trying to rotate an entity on Vision Pro. Most of the code is the same as the Diorama code from WWDC23. The problem I'm having is that the rotation occurs, but the axis of the rotation is not the center of my object. It seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to tell it to rotate around the entity, not the space? Is it even possible? This is the code; the rotation is at the end.

var body: some View {
    @Bindable var viewModel = viewModel

    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()

            // Offset the scene so it doesn't appear underneath the user or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)

            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: PointOfInterestComponent.self, { event in
                createLearnMoreView(for: event.entity)
            }))

            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
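One sketch of a workaround: rotate the entity itself, whose transform pivots around its own origin, instead of applying rotation3DEffect to the SwiftUI view, which pivots around the view/space origin. currentrotateByX and currentrotateByY are the angles from the code above; if the model's origin is not at its visual center, wrap it in a parent entity positioned at that center and rotate the parent instead.

// Drive this from the same gesture state that currently feeds rotation3DEffect.
if let entity = viewModel.rootEntity {
    let yaw = simd_quatf(angle: Float(currentrotateByX), axis: SIMD3<Float>(0, 1, 0))
    let pitch = simd_quatf(angle: Float(currentrotateByY), axis: SIMD3<Float>(1, 0, 0))
    entity.transform.rotation = yaw * pitch
}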
Posted by michelefu. Last updated.
Post not yet marked as solved
0 Replies
220 Views
We are porting an iOS Unity AR app to native visionOS. Ideally, we want to re-use our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it gets to the shaders. Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else? I've seen there's an FBX converter, but it would not handle shaders or particles. I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
Posted by waldgeist. Last updated.
Post marked as solved
1 Reply
412 Views
I have a custom material in Reality Composer. When I attach it to a cube and try loading the scene in Xcode, the material cannot be cast to a ShaderGraphMaterial because it has been changed to a PhysicallyBasedMaterial. The material was always a custom material; I did not change the type in Reality Composer. Does anyone know how to fix this?
Posted. Last updated.