Every time I dismiss an ImmersiveSpace with the progressive ImmersionStyle and open another one, I get only about 30-40% immersion. Can the immersion level be set to 100% by default in the progressive style?
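For reference, this is roughly how the space is declared; a minimal sketch, with the app and view names as placeholders:

import SwiftUI

@main
struct ProgressiveApp: App {
    // Track the current style; .progressive lets the user adjust the level with the Digital Crown.
    @State private var immersionStyle: any ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "Progressive") {
            MyImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .progressive)
    }
}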
RealityKit
Simulate and render 3D content for use in your augmented reality apps using RealityKit.
Posts under RealityKit tag
What I want to do:
I want to turn only the walls of a room into RealityKit Entities that I can collide with, or turn into occlusion surfaces.
This requires adding and maintaining RealityKit entities built with mesh information from the RoomAnchor. It also requires creating a "collision shape" from the mesh information.
What I've explored:
A RoomAnchor can provide me with MeshAnchor.Geometry values that match only the "wall" portions of a room.
I can use this mesh information to create RealityKit entities and add them to my immersive view.
But those meshes don't come with UUIDs, so I'm not sure how I could know which entities' meshes need to be updated as the RoomAnchor is updated.
As such I just keep adding duplicate wall entities.
A RoomAnchor also provides me with the UUIDs of its plane anchors, but I haven't yet discovered a way to connect those to the provided meshes.
Here is how I add the green walls from the RoomAnchor wall meshes.
Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh. I could be stuck with it, though.
Warning: this code will keep adding walls, even if they are duplicates, and will likely cause performance issues :D.
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                                   faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true,
                                                                                filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on the plane.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }

            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
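One direction I'm considering, but haven't verified: keep a single container entity per room anchor and rebuild its wall children on every update, so stale walls are removed instead of piling up. A rough sketch (wallContainers is a hypothetical [UUID: Entity] dictionary I'd own alongside collisionEntities):

// Sketch only: rebuild the wall children for a RoomAnchor instead of appending new ones.
func rebuildWalls(for anchor: RoomAnchor) async {
    // Reuse (or lazily create) one container entity per room anchor.
    let container = wallContainers[anchor.id] ?? {
        let entity = Entity()
        wallContainers[anchor.id] = entity
        realityViewContent?.addEntity(entity)
        return entity
    }()
    container.transform = Transform(matrix: anchor.originFromAnchorTransform)

    // Remove the previous wall entities so duplicates don't pile up.
    for wall in Array(container.children) {
        wall.removeFromParent()
    }

    // Recreate one model entity per wall mesh from the latest anchor data.
    for mesh in anchor.geometries(of: .wall) {
        let contents = MeshResource.Contents(planeGeometry: mesh)
        guard let meshResource = try? MeshResource.generate(from: contents) else { continue }
        let wall = ModelEntity(mesh: meshResource, materials: [OcclusionMaterial()])
        container.addChild(wall)
    }
}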
In RealityKit in visionOS 1.0 I'm perplexed that PhysicallyBasedMaterial and CustomMaterial have faceCulling properties but ShaderGraphMaterial does not.
Is there some way to achieve front face culling in a shader graph without creating a separate mesh with reversed triangle vertex indices?
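For comparison, this is the kind of thing the other material types allow but ShaderGraphMaterial doesn't (a trivial sketch):

// Front-face culling is a one-liner on PhysicallyBasedMaterial (and CustomMaterial)...
var pbr = PhysicallyBasedMaterial()
pbr.faceCulling = .front

// ...but a ShaderGraphMaterial loaded from a Reality Composer Pro scene has no equivalent:
// var graph = try await ShaderGraphMaterial(named: "/Root/MyMaterial", from: "Scene.usda", in: myBundle)
// graph.faceCulling = .front   // no such property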
I was reading through the cube-image node docs, and they mention loading data from a cubemap file in KTX format. It wasn't clear whether that applies only to the original KTX format, or whether the node can also take advantage of the KTX2 format.
Is this shader node only relevant for files in the original (v1) KTX format?
SwiftUI in visionOS has a modifier called preferredSurroundingsEffect that takes a SurroundingsEffect.
From what I can tell, there is only a single effect available: .systemDark.
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.systemDark)
}
I'd like to create another effect to tint the color of passthrough video if possible.
Does anyone know how to create custom SurroundingsEffects?
I am working on an app where we are attempting to place large entities quite far away from the user. When trying to recognise a tap gesture on them, though, the gesture isn't picked up for part of the model.
It seems as though the larger and further a model is placed the more offset the collision shape seems to be. It responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device.
It's hard to explain in words; there's a video in the README for the repo here.
I've been able to replicate the issue in a simple sample app. Not sure if I might be using it wrong or if it is expected behaviour for tap gestures to be a bit off when placed a large distance from the user. Appreciate any help, thanks.
struct ImmersiveView: View {
    @State private var tapCount = 0

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 50), materials: [UnlitMaterial(color: .red)])
            sphere.setPosition([500, 0, 0], relativeTo: nil)
            sphere.components.set([
                InputTargetComponent(),
                CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
            ])
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    tapCount += 1
                    print(tapCount)
                }
        )
    }
}
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial.
I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float.
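For context, a simplified sketch of that setup; the dimensions and the player item are placeholders:

import AVFoundation
import Metal
import RealityKit

// Sketch: request 64RGBAHalf frames from the player and prepare a half-float drawable queue.
func makeHDRPipeline(for playerItem: AVPlayerItem) throws -> (AVPlayerItemVideoOutput, TextureResource.DrawableQueue) {
    // 1. Ask AVFoundation for 64RGBAHalf pixel buffers.
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_64RGBAHalf
    ])
    playerItem.add(output)

    // 2. A drawable queue with an rgba16Float pixel format for the ShaderGraphMaterial texture.
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .rgba16Float,
        width: 1920,
        height: 1080,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    return (output, queue)
}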
I don't believe it's displaying HDR properly or behaving like a raw AVPlayer.
Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough?
Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself?
Thank you
Has anyone measured the brightness of the Vision Pro?
It seems dimmer than I expected.
Also, is there any way to set the Vision Pro's brightness to maximum via a script?
Many thanks!
I'd like to enter a fully immersive scene without the grab bar and close icon.
The fully immersive app template that comes with Xcode doesn't exit the immersive state when the "x" is tapped, but the grab bar and the rest of the chrome disappear; if only I could do that programmatically!
I've tried conditionally removing the View that launches the ImmersiveSpace, but the WindowGroup seems to be the thing that produces the UI I'm trying to hide...
WindowGroup {
    if gameState.immersiveSpaceOpened {
        ContentView()
            .environmentObject(gameState)
    }
}
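One workaround I've been experimenting with (I'm not sure it's the intended approach) is to dismiss the launching window once the immersive space has opened, so there's nothing left to draw the grab bar and close button. A sketch, with the window and space ids as placeholders:

import SwiftUI

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter") {
            Task {
                // Open the fully immersive scene first...
                _ = await openImmersiveSpace(id: "ImmersiveSpace")
                // ...then dismiss the launching window so its grab bar and close button go away.
                dismissWindow(id: "Launcher")
            }
        }
    }
}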
Hello,
In the documentation for ARView, there is a diagram showing that all entities are connected to the Scene via AnchorEntity instances:
https://developer.apple.com/documentation/realitykit/arview
What happens when we are using a RealityView? Here the documentation suggests we directly add entities:
https://developer.apple.com/documentation/realitykit/realityview/
Three questions:
Do we need to add entities to an AnchorEntity first and then add that anchor via content.add(...)?
Is an Entity ignored by the physics engine if it is attached via an AnchorEntity?
If both the AnchorEntity and an attached Entity are added via content.add(...), is the anchor's position ignored?
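To make the question concrete, here's the kind of setup I'm asking about; a minimal sketch, with the plane target chosen just as an example:

RealityView { content in
    // An anchor that tracks a horizontal table surface.
    let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.2, 0.2]))

    let box = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
    anchor.addChild(box)

    // Is this the right pattern, or should the box be added to content directly?
    content.add(anchor)
}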
Does anyone have any guidance or experience using TriggerVolumes to detect collisions rather than the physics engine in RealityKit?
Aside from not participating in the physics engine, are there any other downsides or upsides to using them?
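For context, this is the pattern I mean; a minimal sketch inside a RealityView's make closure, assuming the other entity carries its own CollisionComponent (the shape, position, and handler are placeholders):

// A trigger volume takes part in collision detection but has no physics body.
let trigger = TriggerVolume(shape: .generateBox(size: [1, 1, 1]))
trigger.position = [0, 1, -2]
content.add(trigger)

// React when another collidable entity starts overlapping the volume.
// Keep the subscription alive for as long as you want the callback.
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: trigger, componentType: nil) { event in
    print("\(event.entityB.name) entered the trigger volume")
}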
I am using the code below to get a thumbnail from a USDZ model with QuickLookThumbnailing, but I don't get the proper output.
guard let url = Bundle.main.url(forResource: resource, withExtension: withExtension) else {
    print("Unable to create url for resource.")
    return
}

let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: 10.0, representationTypes: .all)
let generator = QLThumbnailGenerator.shared
generator.generateRepresentations(for: request) { thumbnail, type, error in
    DispatchQueue.main.async {
        if thumbnail == nil || error != nil {
            print(error)
        } else {
            let tempImage = Image(uiImage: thumbnail!.uiImage)
            print(tempImage)
            self.thumbnailImage = Image(uiImage: thumbnail!.uiImage)
            print("=============")
        }
    }
}
Below is a screenshot of the selected model:
Below is the thumbnail image; it doesn't show the guitar, only the generic USDZ icon.
I seem to be running into an issue where the .glassBackgroundEffect modifier doesn't render correctly. The issue occurs when it is attached to a view shown in a RealityKit immersive view with a skybox texture. The glass effect is applied, but it doesn't let any of the colour of the skybox behind it show through.
I have created a sample project which is just the immersive space template with the addition of a skybox texture and an attachment with the glassBackgroundEffect modifier. The RealityView itself is:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                let attachment = attachments.entity(for: "foo")!
                let leftSphere = immersiveContentEntity.findEntity(named: "Sphere_Left")!
                attachment.position = [0, 0.2, 0]
                leftSphere.addChild(attachment)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                var skyboxMaterial = UnlitMaterial()
                let skyboxTexture = try! await TextureResource(named: "pano")
                skyboxMaterial.color = .init(texture: .init(skyboxTexture))
                let skyboxEntity = Entity()
                skyboxEntity.components.set(ModelComponent(mesh: .generateSphere(radius: 1000), materials: [skyboxMaterial]))
                skyboxEntity.scale *= .init(x: -1, y: 1, z: 1)
                content.add(skyboxEntity)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "foo") {
                Text("Hello")
                    .font(.extraLargeTitle)
                    .padding(48)
                    .glassBackgroundEffect()
            }
        }
    }
}
The effect is shown below.
I've tried this both in the simulator and in a physical device and get the same behaviour.
Not sure if this is an issue with RealityKit or if I'm just holding it wrong, would greatly appreciate any help. Thanks.
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and these axes make it a mess.
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and I think for RealityKit on iOS in general).
Does Apple have any recommended max polygon counts for visionOS? Is it the same for models running in a Volumetric window in the shared space and in ImmersiveSpace?
What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now)
When developing a Vision Pro application, I need to first move and then rotate an Entity in a RealityView.
How can these two animations be executed sequentially? (I tested it, and executing them simultaneously results in incorrect animation positions.)
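The crude approach I've been sketching is to just wait out the first animation's duration before starting the second (there may well be a better way with the animation APIs):

import RealityKit

// Move first, then rotate, by waiting out the first animation's duration.
func moveThenRotate(_ entity: Entity) async {
    var target = entity.transform
    target.translation += [0.5, 0, 0]
    entity.move(to: target, relativeTo: entity.parent, duration: 1.0)

    // Naive sequencing: wait for the move to finish before starting the rotation.
    try? await Task.sleep(for: .seconds(1.0))

    target.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0]) * target.rotation
    entity.move(to: target, relativeTo: entity.parent, duration: 1.0)
}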
Working on a visionOS app. I've noticed that even when castsShadow is false, performance goes down the drain when there are more than a few dozen entities that have GroundingShadowComponent. I managed to hard crash the Vision Pro with about 200 or so entities that each had two ModelEntities with GroundingShadowComponent attached but set to castsShadow = false.
My solution is to add and remove the GroundingShadowComponent from entities as needed, but I thought maybe someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd think turning the shadow off would effectively disable the component and not incur a performance penalty.
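For reference, the add/remove workaround is just this (a trivial sketch; entity is whichever model needs the shadow):

// Attach the grounding shadow only while the entity actually needs it...
entity.components.set(GroundingShadowComponent(castsShadow: true))

// ...and later remove the component entirely rather than leaving it attached with castsShadow = false.
entity.components.remove(GroundingShadowComponent.self)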
I get a crash every time I try to swap this texture for a drawable queue.
I have a DrawableQueue leftQueue created on the main thread, and I invoke this block on the main thread. Scene.usda contains a reference to a model cinema. It crashes on the line with the replace().
if let shaderMaterial = try? await ShaderGraphMaterial(named: "/Root/cinema/_materials/Screen", from: "Scene.usda", in: theaterBundle) {
    if let imageParam = shaderMaterial.getParameter(name: "image"), case let .textureResource(imageTexture) = imageParam {
        imageTexture.replace(withDrawables: leftQueue)
    }
}
One weird thing: imageParam has an invalid value when it crashes.
imageParam RealityFoundation.MaterialParameters.Value <invalid> (0x0)
Here is the stack trace:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x9)
frame #0: 0x0000000191569734 libobjc.A.dylib`objc_release + 8
frame #1: 0x00000001cb9e5134 CoreRE`re::SharedPtr<re::internal::AssetReference>::reset(re::internal::AssetReference*) + 64
frame #2: 0x00000001cba77cf0 CoreRE`re::AssetHandle::operator=(re::AssetHandle const&) + 36
frame #3: 0x00000001ccc84d14 CoreRE`RETextureAssetReplaceDrawableQueue + 228
frame #4: 0x00000001acf9bbcc RealityFoundation`RealityKit.TextureResource.replace(withDrawables: RealityKit.TextureResource.DrawableQueue) -> () + 164
* frame #5: 0x00000001006d361c Screenlit`TheaterView.setupMaterial(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:245:74
frame #6: 0x00000001006e0848 Screenlit`closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter(self=Screenlit.TheaterView @ 0x000000011e2b7a30) at TheaterView.swift:487
frame #7: 0x00000001006f1658 Screenlit`partial apply for closure #1 in closure #1 in closure #1 in closure #1 in TheaterView.body.getter at <compiler-generated>:0
frame #8: 0x00000001004fb7d8 Screenlit`thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
frame #9: 0x0000000100500bd4 Screenlit`partial apply for thunk for @escaping @callee_guaranteed @Sendable @async () -> (@out τ_0_0) at <compiler-generated>:0
I am trying to build a website where I would like to render a USDZ 3D model in the browser without the AR feature. The user should be able to interact with the 3D model using a pointing device (mouse). If the user wants to see the 3D model in AR, they can do so by loading the page on a compatible device where the 3D model can be projected in AR.
I am looking for an answer to how to display the USDZ 3D model in the browser without the AR feature.
I want to transfer this video stream to another device and then view it there. But I did not see any developer information related to the camera in the visionOS documentation, so I would like to ask whether anyone knows how to do it.
Thank you.