RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

How to Detect Gaze & Gesture on Entity
It's a common system interaction to look at an item in SwiftUI and tap to select it. I'm confused about how to do the same with ModelEntities. How do I use gaze to select a ModelEntity for context-based actions? For example, look at the green sphere and tap to pull up a menu, or look in a direction and clap to **** away virtual objects, and so on. If this is not possible, is there a workaround?
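
A minimal sketch of the system look-and-tap path on visionOS, assuming a RealityView and an entity that has been given the components the gesture system requires (the view, entity, and colors below are illustrative, not from the original post):

    import SwiftUI
    import RealityKit

    struct SelectableSphereView: View {
        var body: some View {
            RealityView { content in
                // Gaze-and-tap can only target entities that carry both a
                // CollisionComponent and an InputTargetComponent.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [SimpleMaterial(color: .green, isMetallic: false)])
                sphere.generateCollisionShapes(recursive: true)
                sphere.components.set(InputTargetComponent())
                content.add(sphere)
            }
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        // value.entity is the entity the user was looking at when they tapped;
                        // present the context menu (or other action) for it here.
                        print("Selected \(value.entity.name)")
                    }
            )
        }
    }

As far as I know, raw gaze data is not exposed directly; gaze only arrives through indirect gestures like the tap above, so a custom trigger such as a clap would need hand tracking via ARKit.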
Replies: 1 · Boosts: 0 · Views: 269 · Jun ’24
Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Is it the same for models running in a volumetric window in the Shared Space as in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Replies: 2 · Boosts: 0 · Views: 560 · Jun ’24
SceneKit or RealityKit for Non-AR Game Development
Hi everyone, I'm choosing a framework for developing a game that doesn't involve augmented reality (AR), and I'm unsure whether to use SceneKit or RealityKit. I would like to hear from Apple engineers on this matter. Which of these frameworks is better suited for creating non-AR games? Additionally, I'd like to know whether it's possible to disable AR in RealityKit using the updated RealityView. Thanks in advance for your insights and recommendations!
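
For reference, one established way to run RealityKit without AR on iOS is ARView's .nonAR camera mode together with a virtual PerspectiveCamera; a rough sketch (the scene contents and class name are illustrative):

    import RealityKit
    import UIKit

    final class NonARGameViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()

            // .nonAR renders RealityKit content without starting an AR session or camera feed.
            let arView = ARView(frame: view.bounds,
                                cameraMode: .nonAR,
                                automaticallyConfigureSession: false)
            view.addSubview(arView)

            let anchor = AnchorEntity(world: .zero)

            // A virtual camera stands in for the device camera.
            let camera = PerspectiveCamera()
            camera.position = [0, 0.5, 2]
            anchor.addChild(camera)

            let box = ModelEntity(mesh: .generateBox(size: 0.3),
                                  materials: [SimpleMaterial(color: .systemBlue, isMetallic: true)])
            anchor.addChild(box)

            arView.scene.addAnchor(anchor)
        }
    }

Whether the updated RealityView offers an equivalent non-AR mode is exactly the open question in this post.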
Replies: 2 · Boosts: 0 · Views: 550 · Jun ’24
RealityKit for macOS example
I would like to code some RealityViews to run on my Mac first (and then incorporate them into a visionOS project) so that my code/test loop is faster, but I have not been able to find a simple example that supports the Mac. Is it possible to have volumes on a Mac? Is there support for using a game controller to move around the RealityView, like in the visionOS simulator?
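
A minimal sketch of the Mac-first setup being asked about, assuming a deployment target new enough for RealityView to be available in SwiftUI on macOS (macOS 15 / Xcode 16 era); the placeholder entity stands in for whatever content would later move to the visionOS project:

    import SwiftUI
    import RealityKit

    struct MacPreviewView: View {
        var body: some View {
            RealityView { content in
                // Placeholder content; swap in the same RealityKit entities
                // you plan to reuse on visionOS.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [SimpleMaterial(color: .orange, isMetallic: false)])
                content.add(sphere)
            }
        }
    }

As far as I know, volumetric windows and the simulator's game-controller navigation are visionOS features, so on the Mac this runs in a regular window with whatever camera controls you add yourself.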
Replies: 1 · Boosts: 0 · Views: 272 · Jun ’24
Drag Gesture in Immersive Spaces with RealityKit
I've been trying to get the drag gesture up and running so I can move my 3D model around in my immersive space, but for some reason I am not able to move it. The model shows up in my visionOS 1.0 simulator, but I can't get it to move. I would love some help with this, and any useful resources too. Here's a snippet of my RealityView code:

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct GearRealityView: View {
        static var modelEntity = Entity()

        var body: some View {
            RealityView { content in
                if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                    GearRealityView.modelEntity = model
                    content.add(model)
                }
            }.gesture(
                DragGesture()
                    .targetedToEntity(GearRealityView.modelEntity)
                    .onChanged({ value in
                        GearRealityView.modelEntity.position = value.convert(value.location3D, from: .local, to: GearRealityView.modelEntity.parent!)
                    })
            )
        }
    }
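
For what it's worth, a common culprit is that the gesture system only targets entities that carry both a CollisionComponent and an InputTargetComponent. A hedged sketch of that setup, as a revised version of the same view (entity and bundle names come from the original post):

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct GearRealityView: View {
        var body: some View {
            RealityView { content in
                if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                    // Without these two components the drag gesture never targets the model.
                    model.components.set(InputTargetComponent())
                    model.generateCollisionShapes(recursive: true)
                    content.add(model)
                }
            }
            .gesture(
                DragGesture()
                    // Targeting any entity avoids binding the gesture to the empty
                    // placeholder entity that exists before the async load finishes.
                    .targetedToAnyEntity()
                    .onChanged { value in
                        value.entity.position = value.convert(value.location3D,
                                                              from: .local,
                                                              to: value.entity.parent!)
                    }
            )
        }
    }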
Replies: 2 · Boosts: 0 · Views: 345 · Jun ’24
With ARConfiguration.providesAudioData = true and a ModelEntity whose material is a VideoMaterial (its video contains audio) added to ARView, why doesn't the video play properly?
    import SwiftUI
    import RealityKit
    import ARKit
    import AVFoundation

    struct ContentView: View {
        var body: some View {
            ARViewContainer().edgesIgnoringSafeArea(.all)
        }
    }

    struct ARViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)
            arView.session.delegate = context.coordinator
            let worldConfig = ARWorldTrackingConfiguration()
            worldConfig.planeDetection = .horizontal
            // worldConfig.providesAudioData = true // open here -----> Error:
            arView.session.run(worldConfig)
            addTestEntity(arView: arView)
            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {}

        func makeCoordinator() -> Coordinator { Coordinator() }

        class Coordinator: NSObject, ARSessionDelegate, ARSessionObserver {
            func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
            }
        }
    }

    func addTestEntity(arView: ARView) {
        let mesh = MeshResource.generatePlane(width: 0.5, depth: 0.35)
        guard let url = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }
        let player = AVPlayer(url: url)
        let videoMaterial = VideoMaterial(avPlayer: player)
        let model = ModelEntity(mesh: mesh, materials: [videoMaterial])
        model.transform.translation.y = 0.05
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        player.play()
        arView.scene.anchors.append(anchor)
    }

Error:

    failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
    failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
    ......
    ARSession <0x125d88040>: did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedFailureReason=A sensor failed to deliver the required input., NSUnderlyingError=0x302922dc0 {Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}}, NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed.}

iOS 17.5.1, Xcode 15.4
Replies: 2 · Boosts: 0 · Views: 347 · Jun ’24
2D or 3D indoor mapping app in iOS
Hello! I want to create an indoor mapping application in Swift using the LiDAR scanner. I searched among frameworks and found that ARKit, RealityKit and RoomPlan would be useful. Which is the proper way to create a 2D indoor mapping app? And which is the proper way to create a 3D indoor mapping app? Are there any modifications I have to make to my code in order to have both?
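
As a point of reference for the 3D side, RoomPlan's capture flow is fairly compact; a hedged sketch, assuming iOS 16+ and a LiDAR-equipped device (the class name is illustrative):

    import UIKit
    import RoomPlan

    final class RoomScanViewController: UIViewController {
        private var captureView: RoomCaptureView!

        override func viewDidLoad() {
            super.viewDidLoad()

            // RoomCaptureView drives the LiDAR scanning UI and produces a CapturedRoom,
            // which can back a 3D export or be flattened into a 2D floor plan.
            captureView = RoomCaptureView(frame: view.bounds)
            view.addSubview(captureView)

            let configuration = RoomCaptureSession.Configuration()
            captureView.captureSession.run(configuration: configuration)
        }
    }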
Replies: 1 · Boosts: 0 · Views: 417 · Jun ’24
RealityKit on iOS: New anchor entity takes ages to show up
I'm implementing an AR app with image-tracking capabilities. I noticed that it takes a very long time for the entities I want to overlay on a detected image to show up in the video feed. When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly; the anchor origins show up almost immediately. I can also see that my code reacts by adding new anchors for my ModelEntities there. However, it takes ages for these ModelEntities to actually show up; they only appear after a while if I move the camera a lot. What might be the reason for this behaviour? I also noticed that for the first image target, a huge number of anchors are created. They start at the image and extend all the way up towards the user. This does not happen with subsequent (other) image targets.
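
For context, a hedged sketch of the declarative image-anchoring path being described, assuming an AR Resource Group named "AR Resources" containing an image named "poster" (both names are placeholders, and the overlay is illustrative):

    import RealityKit
    import UIKit

    func addImageOverlay(to arView: ARView) {
        // An anchor that activates when the reference image is detected.
        let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "poster"))

        // Content to overlay on the detected image.
        let overlay = ModelEntity(mesh: .generatePlane(width: 0.2, height: 0.3),
                                  materials: [SimpleMaterial(color: .white, isMetallic: false)])
        imageAnchor.addChild(overlay)

        arView.scene.addAnchor(imageAnchor)
    }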
Replies: 1 · Boosts: 0 · Views: 230 · Jun ’24
ImageBasedLighting: Failed to find EnvironmentResource with name in bundle
Hello. I am trying to load my own Image Based Lighting file in a visionOS RealityView. I used the code you get when creating a new project from scratch and setting the immersive space to Full. With the sample file Apple provides, it works. But when I put my image in PNG, HEIC or EXR format in the same location the example file was in, it doesn't load and the error states: Failed to find resource with name "SkyboxUpscaled2" in bundle. In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in the .exr format.

    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        do {
            let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
            let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
            immersiveContentEntity.components.set(iblComponent)
            immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
        } catch {
            print(error.localizedDescription)
        }
    }

Does anyone have an idea why the file is not found? Thanks in advance!
Replies: 1 · Boosts: 0 · Views: 373 · Jun ’24
Incorrect attachment bounds in visionOS 1.1
We have been using attachment.bounds.extents to determine the size of a RealityView attachment at run time. It had been working fine until the visionOS 1.1 update. I wonder if we are doing something wrong, as the release notes suggest a visual-bounds calculation issue was fixed in the latest release. The funny thing is we did not have an issue before. Below is how we access the height value:

    let height = attachmentEntity.attachment.bounds.extents.y

Previously it returned the correct value. Now it returns 0. I wonder if anyone else is having the same issue.
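
One cross-check worth trying (a hedged sketch, not a confirmed fix): measure the attachment entity's visual bounds directly instead of going through the cached attachment bounds. attachmentEntity here is the same entity from the snippet above.

    // Visual bounds computed on demand from the entity's current geometry.
    let bounds = attachmentEntity.visualBounds(relativeTo: nil)
    let height = bounds.extents.y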
Replies: 3 · Boosts: 1 · Views: 476 · Jun ’24
State-of-the-Art 3D (no AR) on macOS using RealityKit?
What is the current recommendation for creating high-quality 3D content? The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL). Somewhere down the line there might be an AR view, but at the moment, certainly for macOS, it's purely generated 3D visualization, all rendered content. So, starting with a rewrite of the 3D visualization in 2024 targeting macOS Sequoia/iPadOS 18, is RealityKit the suggested way forward? Cheers, Jay
Replies: 4 · Boosts: 0 · Views: 494 · Jun ’24
TextureResource.DrawableQueue broken in visionOS 2?
I have an input texture in a ShaderGraphMaterial. I use .replace(withDrawables:) to replace the texture with a drawable queue. When I present drawables to this queue, nothing happens in visionOS 2. The drawables are not presented, and I can't get any more via nextDrawable() because the unpresented ones are holding things up. This happens with both the bgra8Unorm_srgb and rgba16float formats. I have confirmed the material applied to my object has the modified texture resources on it. It was working in visionOS 1.2. What changed in visionOS 2 to cause this?
Replies: 3 · Boosts: 0 · Views: 281 · Jun ’24
Generating accurate CollisionComponent
Currently, in an app I am working on, we are adding collision shapes/components to objects by using the ShapeResource.generateConvex method to generate the shape from the mesh of our ModelEntity. Unfortunately, this does not result in a totally accurate collision shape. The following example is how the collision component looks currently. Is there any way to generate a collision shape that fits the exact bounds of the ModelEntity?
Replies: 2 · Boosts: 1 · Views: 431 · Jun ’24
RealityKit MeshResource generated from SwiftUI shape
In SceneKit, using SCNShape we can create SCN geometry from SwiftUI 2D shapes/Béziers: https://developer.apple.com/documentation/scenekit/scnshape Is there an equivalent in RealityKit? Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
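
On the generate(from:) idea: it takes MeshDescriptors rather than paths, so one workable (if manual) route is to flatten the 2D outline into points and triangulate them yourself. A hedged sketch for a convex outline (concave shapes would need a real triangulator; the function name is illustrative):

    import RealityKit

    // Builds a flat mesh in the XY plane from a convex 2D outline, e.g. points
    // sampled from a SwiftUI Path or CGPath.
    func makeFlatMesh(from outline: [SIMD2<Float>]) throws -> MeshResource {
        precondition(outline.count >= 3, "Need at least three points to triangulate")

        var descriptor = MeshDescriptor(name: "flatShape")
        descriptor.positions = MeshBuffer(outline.map { SIMD3<Float>($0.x, $0.y, 0) })

        // Simple fan triangulation around the first vertex (convex outlines only).
        var indices: [UInt32] = []
        for i in 1..<(outline.count - 1) {
            indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
        }
        descriptor.primitives = .triangles(indices)

        return try MeshResource.generate(from: [descriptor])
    }

Newer SDKs reportedly add a path-extrusion initializer to MeshResource as well, but I'd verify that against the current documentation rather than take it from here.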
Replies: 2 · Boosts: 0 · Views: 942 · Jun ’24