RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

RealityView and Anchors
Hello,

In the documentation for ARView, a diagram shows that every Entity is connected to the Scene through an AnchorEntity: https://developer.apple.com/documentation/realitykit/arview

What happens when we are using a RealityView? There, the documentation suggests we add entities directly: https://developer.apple.com/documentation/realitykit/realityview/

Three questions:
1. Do we need to add entities to an AnchorEntity first and add that via content.add(...)?
2. Is an entity ignored by the physics engine if it is attached via an anchor?
3. If both an AnchorEntity and an entity attached to it are added via content.add(...), is the anchor's position ignored?
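For reference, a minimal sketch of the two patterns being asked about, assuming a visionOS RealityView (the entities here are illustrative, not from the post):

RealityView { content in
    // Pattern 1: add an entity directly; RealityView roots it in the scene
    // without an explicit anchor.
    let box = ModelEntity(mesh: .generateBox(size: 0.2))
    content.add(box)

    // Pattern 2: parent the entity to an AnchorEntity, then add the anchor.
    let headAnchor = AnchorEntity(.head)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
    headAnchor.addChild(sphere)
    content.add(headAnchor)
}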
Replies: 0 · Boosts: 0 · Views: 62 · Activity: 22h
GroundingShadowComponent tanks performance even when set to false
Working on a visionOS app, I've noticed that even when castsShadow is false, performance goes down the drain when more than a few dozen entities have a GroundingShadowComponent. I managed to hard-crash the Vision Pro with about 200 entities that each had two ModelEntities with GroundingShadowComponent attached but with castsShadow set to false. My workaround is to add and remove the GroundingShadowComponent from entities as needed, but I thought someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd expect turning the shadow off to effectively disable the component and not incur a performance penalty.
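A minimal sketch of the add-and-remove workaround described above (the helper name is illustrative):

func setGroundingShadow(enabled: Bool, on entity: Entity) {
    if enabled {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    } else {
        // Removing the component entirely avoids the cost observed when it
        // is merely present with castsShadow == false.
        entity.components.remove(GroundingShadowComponent.self)
    }
}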
Replies: 0 · Boosts: 0 · Views: 214 · Activity: 4d
glassBackgroundEffect not working in immersive view with a skybox
I seem to be running into an issue where the .glassBackgroundEffect modifier doesn't render correctly. The issue occurs when it is attached to a view shown in a RealityKit immersive view with a skybox texture. The glass effect is applied, but it doesn't let any of the colour of the skybox behind it through. I have created a sample project which is just the immersive space template with the addition of a skybox texture and an attachment with the glassBackgroundEffect modifier. The RealityView itself is:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                let attachment = attachments.entity(for: "foo")!
                let leftSphere = immersiveContentEntity.findEntity(named: "Sphere_Left")!
                attachment.position = [0, 0.2, 0]
                leftSphere.addChild(attachment)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                var skyboxMaterial = UnlitMaterial()
                let skyboxTexture = try! await TextureResource(named: "pano")
                skyboxMaterial.color = .init(texture: .init(skyboxTexture))
                let skyboxEntity = Entity()
                skyboxEntity.components.set(ModelComponent(mesh: .generateSphere(radius: 1000), materials: [skyboxMaterial]))
                skyboxEntity.scale *= .init(x: -1, y: 1, z: 1)
                content.add(skyboxEntity)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "foo") {
                Text("Hello")
                    .font(.extraLargeTitle)
                    .padding(48)
                    .glassBackgroundEffect()
            }
        }
    }
}

The effect is shown below. I've tried this both in the simulator and on a physical device and get the same behaviour. Not sure if this is an issue with RealityKit or if I'm just holding it wrong; I would greatly appreciate any help. Thanks.
Replies: 1 · Boosts: 0 · Views: 126 · Activity: 4d
How to get a thumbnail image for a selected usdz model using QuickLookThumbnailing
I am using the code below to get a thumbnail from a usdz model with QuickLookThumbnailing, but I don't get the proper output.

guard let url = Bundle.main.url(forResource: resource, withExtension: withExtension) else {
    print("Unable to create url for resource.")
    return
}
let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: 10.0, representationTypes: .all)
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { thumbnail, type, error in
    DispatchQueue.main.async {
        if thumbnail == nil || error != nil {
            print(error)
        } else {
            let tempImage = Image(uiImage: thumbnail!.uiImage)
            print(tempImage)
            self.thumbnailImage = Image(uiImage: thumbnail!.uiImage)
            print("=============")
        }
    }
}

Below is a screenshot of the selected model. The generated thumbnail does not show the guitar; it only shows the generic usdz icon.
Replies: 1 · Boosts: 0 · Views: 107 · Activity: 4d
EXC_BAD_ACCESS crash with .replace(withDrawables:) on MaterialParameters.Value.textureResource
I get a crash every time I try to swap this texture for a drawable queue. I have a DrawableQueue leftQueue created on the main thread, and I invoke this block on the main thread. Scene.usda contains a reference to a model cinema. It crashes on the line with the replace().

if let shaderMaterial = try? await ShaderGraphMaterial(named: "/Root/cinema/_materials/Screen", from: "Scene.usda", in: theaterBundle) {
    if let imageParam = shaderMaterial.getParameter(name: "image"),
       case let .textureResource(imageTexture) = imageParam {
        imageTexture.replace(withDrawables: leftQueue)
    }
}

One weird thing: imageParam has an invalid value when it crashes.

imageParam RealityFoundation.MaterialParameters.Value <invalid> (0x0)

Here is the stack trace:

* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x9)
    frame #0: 0x0000000191569734 libobjc.A.dylib`objc_release + 8
    frame #1: 0x00000001cb9e5134 CoreRE`re::SharedPtr<re::internal::AssetReference>::reset(re::internal::AssetReference*) + 64
    frame #2: 0x00000001cba77cf0 CoreRE`re::AssetHandle::operator=(re::AssetHandle const&) + 36
    frame #3: 0x00000001ccc84d14 CoreRE`RETextureAssetReplaceDrawableQueue + 228
    frame #4: 0x00000001acf9bbcc RealityFoundation`RealityKit.TextureResource.replace(withDrawables: RealityKit.TextureResource.DrawableQueue) -> () + 164
  * frame #5: 0x00000001006d361c Screenlit`TheaterView.setupMaterial(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:245:74
    frame #6: 0x00000001006e0848 Screenlit`closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:487
    frame #7: 0x00000001006f1658 Screenlit`partial apply for closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter at <compiler-generated>:0
    frame #8: 0x00000001004fb7d8 Screenlit`thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
    frame #9: 0x0000000100500bd4 Screenlit`partial apply for thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
Replies: 2 · Boosts: 0 · Views: 149 · Activity: 1w
Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a maximum texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Are they the same for models running in a volumetric window in the Shared Space and in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Replies: 1 · Boosts: 0 · Views: 162 · Activity: 4d
How to move the player in visionOS
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player in different ways: hand gestures, a game controller, and teleporting. I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how to achieve this in visionOS. Has anyone done something similar who could give me some advice or sample code? Any help appreciated. Guillermo
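visionOS doesn't expose a movable camera in a fully immersive space, so one common workaround is to move the environment's root entity in the opposite direction instead. A rough sketch of that idea with a game controller, assuming a worldRoot entity that parents the whole environment (names and speed are illustrative):

import GameController
import RealityKit

let moveSpeed: Float = 2.0  // meters per second, illustrative

// Call once per frame, e.g. from a RealityKit System's update.
func movePlayer(worldRoot: Entity, deltaTime: Float) {
    guard let pad = GCController.current?.extendedGamepad else { return }
    let x = pad.leftThumbstick.xAxis.value
    let y = pad.leftThumbstick.yAxis.value
    // Moving the world backwards reads as the player moving forwards.
    worldRoot.position -= SIMD3<Float>(x, 0, -y) * moveSpeed * deltaTime
}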
Replies: 1 · Boosts: 0 · Views: 155 · Activity: 1w
Possible to use DataProviders in GroupActivities on visionOS?
I built two parts of my app a bit disjointed: my physics component, which controls all SceneReconstruction, HandTracking, and WorldTracking; and my spatial GroupActivities component, which allows you to see the personas of those who join the activity. My problem: when trying to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers. My question: has anyone successfully found a workaround for this? I think it would be amazing to have one user be the "host" for the activity while the scene reconstruction provider continues to run for them.
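For what it's worth, the event in question can at least be observed from the session's event stream, which makes it easy to confirm exactly when the providers are torn down (a minimal sketch; the provider set is illustrative):

import ARKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func run() async throws {
    try await session.run([sceneReconstruction])
    for await event in session.events {
        // This is the event reported in the post; providers stop here.
        if case .dataProviderStateChanged(let providers, let newState, let error) = event {
            print("providers:", providers, "state:", newState, "error:", String(describing: error))
        }
    }
}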
Replies: 1 · Boosts: 0 · Views: 157 · Activity: 1w
Walking an entity around an immersive space in visionOS like the window drag bar
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems Apple quickly transitions from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement, and if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
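I'm not aware of anything out of the box; a rough sketch of the head-offset idea floated above, using a WorldTrackingProvider to subtract the wearer's own movement from the amplified drag (the gain and names are illustrative assumptions, not Apple's actual technique):

import ARKit
import QuartzCore

// Assumes worldTracking is already running in an ARKitSession.
let worldTracking = WorldTrackingProvider()
let dragGain: Float = 3.0  // illustrative amplification applied to the drag

// Called per frame during a drag; dragStartHeadPosition is captured at drag start.
func compensatedOffset(amplified: SIMD3<Float>,
                       dragStartHeadPosition: SIMD3<Float>) -> SIMD3<Float> {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return amplified
    }
    let head = device.originFromAnchorTransform.columns.3
    let headDelta = SIMD3<Float>(head.x, head.y, head.z) - dragStartHeadPosition
    // The amplified gesture includes the user's physical motion times the gain;
    // remove the excess so walking moves the window roughly 1:1 with the user.
    return amplified - headDelta * (dragGain - 1)
}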
Replies: 1 · Boosts: 0 · Views: 188 · Activity: 1w
RealityKit Memory Leak
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.

I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online) the memory leak becomes much more obvious.

When the immersive space is initially opened, I'm seeing roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads will continue to increase the used memory (the amount of increase will depend on the models that you add to the scene). Also note that I've seen similar memory increases when dynamically creating the entities; inside ViewModel.loadModels I've included some commented-out code that dynamically creates entities instead of loading a Reality Composer scene.

Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.

struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}

Add this above the body in ContentView:

@Environment(ViewModel.self) private var viewModel

The ContentView body should be:

VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()
    Button("Load Models") {
        viewModel.loadModels()
    }
    Button("Unload Models") {
        viewModel.unloadModels()
    }
}

ImmersiveView:

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}

ViewModel:

import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {
    }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }*/
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
Replies: 0 · Boosts: 0 · Views: 156 · Activity: 1w
The behavior of hands interacting with objects on Vision Pro
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered color of my hand looks like the hand color blended with the object's color, both semi-transparent. I wonder if it is possible to make my hand always "opaque", i.e. for the alpha value of the rendered hand (since it's video see-through) to always be 1, while the object's alpha value varies depending on whether it is interacting with the hand. (I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
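One knob that may be relevant: the upperLimbVisibility scene modifier controls whether the passthrough hands render over virtual content in an immersive space. A minimal sketch (note this applies to the whole space, which may be coarser than the per-object behavior being asked for):

ImmersiveSpace(id: "ImmersiveSpace") {
    ImmersiveView()
}
// Keep the user's hands fully visible on top of virtual content.
.upperLimbVisibility(.visible)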
Replies: 0 · Boosts: 0 · Views: 164 · Activity: 2w
visionOS: Simultaneous Drag & Rotate gestures
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794). It allows you to simultaneously rotate, magnify, and translate the entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like so:

Gestures:

var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}

RealityView modifiers:

.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)

RealityView update block:

entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
Replies: 1 · Boosts: 1 · Views: 163 · Activity: 2w
Is there a way with Xcode 15.2 or 15.3 to use RealityKit without visionOS?
I start a project for iPad/iPhone, set SwiftUI - RealityKit, and I can't get the build to compile. I do nothing but create a project and hit Run, so I am wondering if it's even possible to run RealityKit on just an iPad anymore. I then tried to use Reality Composer to import a basic cylinder shape into my project, and that wouldn't run either. So I am wondering how to get a 3D model into my iPad app so that the user can interact with it. Thanks for any help.
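RealityKit does still run on iOS; a minimal sketch of showing a model from SwiftUI via ARView (the box is a stand-in for your own model):

import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // A simple box anchored half a meter in front of the starting camera pose.
        let box = ModelEntity(mesh: .generateBox(size: 0.1),
                              materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        let anchor = AnchorEntity(world: [0, 0, -0.5])
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}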
Replies: 0 · Boosts: 0 · Views: 174 · Activity: 2w
Add child of entity to a RealityView
[visionOS question] I'm using the hierarchy of an entity loaded from a Reality Composer Pro project to drive the content of a NavigationSplitView. I'd like to render any of the child entities in a RealityView in the detail pane when a user selects the child entity's name from the list in the NavigationSplitView. I haven't been able to render the entity in the detail view yet. I have tried updating the position/scaling to no avail. I also tried adding an AnchorEntity and setting the child entity's parent to it. I'm starting to suspect that the way to do it is to create a scene for each individual child entity in the Reality Composer Pro project. I'd prefer to avoid this approach, as I want a data-driven approach. Is there a way to implement my idea in RealityKit in code?
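One data-driven approach worth trying: clone the selected child into the detail pane's RealityView so the original hierarchy stays intact (a sketch; DetailView and selection are illustrative names):

struct DetailView: View {
    let selection: Entity  // the child picked in the NavigationSplitView list

    var body: some View {
        RealityView { content in
            // Clone so the source scene keeps its hierarchy; recenter for display.
            let copy = selection.clone(recursive: true)
            copy.position = .zero
            content.add(copy)
        }
        // Recreate the RealityView whenever a different child is selected.
        .id(selection.id)
    }
}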
Replies: 0 · Boosts: 0 · Views: 153 · Activity: 2w