I am trying to make a shader for a disco ball lighting effect for my app. I want the light to reflect on the scene mesh.
I was curious if anyone has pointers on how to do this in Shader Graph in Reality Composer Pro or by writing a surface shader.
The effect rotates the dots as the ball spins.
This is the effect in Apple's Clips app that applies the effect to the scene mesh.
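Not a full answer, but one way the rotation part might be driven from Swift: a sketch that assumes a Shader Graph material built in Reality Composer Pro that projects the dot pattern and exposes a float input. The "DiscoBall" entity name, the "SpinAngle" parameter name, and the 0.5 rad/s rate are all assumptions, not confirmed names.
import RealityKit
// Sketch: spin the dot pattern by advancing a shader-graph parameter each frame.
final class DiscoSpinSystem: System {
    private var angle: Float = 0
    init(scene: RealityKit.Scene) {}
    func update(context: SceneUpdateContext) {
        angle += Float(context.deltaTime) * 0.5   // radians per second of spin; tune to match the ball
        guard let ball = context.scene.findEntity(named: "DiscoBall"),
              var model = ball.components[ModelComponent.self],
              var material = model.materials.first as? ShaderGraphMaterial else { return }
        // "SpinAngle" is an assumed input defined on the graph in Reality Composer Pro.
        try? material.setParameter(name: "SpinAngle", value: .float(angle))
        model.materials = [material]
        ball.components.set(model)
    }
}
// Registered once at app start: DiscoSpinSystem.registerSystem()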
I have a plane that is stereoscopic, so it presents depth to the user that extends beyond the plane.
I would like the option either to render depth for those pixels or to write nothing into the depth buffer for the plane.
I cannot see any option in Shader Graph Material to affect the depth buffer during render. I also cannot see any way in RealityKit to not render to the depth buffer for an entity.
I'm open to any suggestions.
Hello,
I am currently working on a project where I am creating a bookstore visualization with racks and shelves (full immersive view). I have an array of names, each representing a USDZ object that is present in my working directory.
Here’s the enum I am trying to iterate over:
enum AssetName: String, Codable, Hashable, CaseIterable {
    case book1 = "B1"
    case book2 = "B2"
    case book3 = "B3"
    case book4 = "B4"
}
and here is the code I wrote for adding objects:
import SwiftUI
import RealityKit

struct LocalAssetRealityView: View {
    let assetName: AssetName

    var body: some View {
        RealityView { content in
            if let asset = try? await ModelEntity(named: assetName.rawValue) {
                content.add(asset)
            }
        }
    }
}
Now, when I try to add multiple objects on a button click, I get the error:
Unable to present another Immersive Space when one is already requested or connected
Please suggest any solutions. Also, please suggest whether anything can be done to set positions for the objects programmatically as well.
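The error text itself suggests a second Immersive Space is being requested; loading more models does not require opening another space. A minimal sketch of loading every asset from the enum into one RealityView and positioning each programmatically (the spacing and position values are arbitrary assumptions):
import SwiftUI
import RealityKit

struct BookshelfRealityView: View {
    var body: some View {
        RealityView { content in
            // Load every USDZ named in the enum and line the books up along X.
            for (index, asset) in AssetName.allCases.enumerated() {
                if let entity = try? await ModelEntity(named: asset.rawValue) {
                    entity.position = [Float(index) * 0.3, 1.0, -1.5] // meters, arbitrary
                    content.add(entity)
                }
            }
        }
    }
}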
I'm adding an AVPlayer as an attachment on the side using RealityKit. The video in it, though, is not aligned. Any thoughts on what could be going wrong?
RealityView { content, attachments in
    let url = self.video.resolvedURL
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)

    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    // entity.components[VideoPlayerComponent.self] = videoPlayerComponent
    entity.position = [0, 0, 0]
    entity.scale *= 0.50

    player.replaceCurrentItem(with: playerItem)
    player.play()

    content.add(entity)
} update: { content, attachments in
    // if content.entities.count < 2 {
    if showAnotherPlayer {
        if let attachment = attachments.entity(for: "Attachment") {
            playerModel.loadVideo(library.selectedVideo!, presentation: .fullWindow)
            // 4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [1.0, 0, 0]
            attachment.scale *= 1.0
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))
            let entity = content.entities.first
            attachment.setParent(entity)
            content.add(attachment)
        }
    }
    if showLibrary {
        if let attachment = attachments.entity(for: "Featured") {
            // 4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [0.0, -0.3, 0]
            attachment.scale *= 0.7
            //let radians = -45.0 * Float.pi / 180.0
            //attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))
            let entity = content.entities.first
            attachment.setParent(entity)
            viewModel.attachment = attachment
            content.add(attachment)
        }
    } else {
        if let scene = content.entities.first?.scene {
            let _ = print("found scene")
        }
        if let featuredEntity = content.entities.first?.scene?.findEntity(named: "Featured") {
            let _ = print("featured entity found")
        }
        if let attachment = viewModel.attachment {
            let _ = print("-- removing attachment")
            if let anchor = attachment.anchor {
                let _ = print("-- removing anchor")
                anchor.removeFromParent()
            }
            attachment.removeFromParent()
            content.remove(attachment)
        } else {
            let _ = print("the attachment is missing")
        }
    }
    // }
} attachments: {
    Attachment(id: "Attachment") {
        PlayerView()
            .frame(width: 2048, height: 1024)
            .environment(library)
            .environment(playerModel)
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    playerModel.play()
                }
            }
            .onDisappear {
            }
    }
    if showLibrary {
        Attachment(id: "Featured") {
            VideoListView(title: "Featured",
                          videos: library.videos,
                          cardStyle: .full,
                          cardSpacing: 20) { video in
                library.selectedVideo = video
                showAnotherPlayer = true
            }
            .frame(width: 2048, height: 1024)
        }
    }
}
Hello everyone, I have just started learning visionOS app development. I have a scene called Scene, and inside it is an object called Sphere. I want to add a drag interaction to this Sphere alone. I followed the code below to achieve it, but my Sphere cannot actually be dragged in the simulator. What is the reason?
struct ContentView: View {
    @State var enlarge = false
    @State var offset: Point3D = .zero
    @State var sphereEntity: Entity?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                sphereEntity = content.entities.first?.findEntity(named: "Sphere")
                sphereEntity?.components.set(InputTargetComponent(allowedInputTypes: .all))
            }
        }
        .gesture(DragGesture().targetedToEntity(sphereEntity ?? Entity()).onChanged({ value in
            print(value.location3D)
            sphereEntity?.position = value.convert(value.location3D, from: .local, to: sphereEntity?.parent! ?? Entity())
        }))
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ _ in
            print("Ssssssss")
        }))
        .onAppear() {
        }
    }
}
Please see also the video demo of the problem I'm encountering:
https://youtu.be/V0ZkF-tVgKE
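One thing worth checking for the drag issue above: gesture targeting in RealityKit requires a CollisionComponent in addition to the InputTargetComponent. A minimal sketch of setting both on the sphere inside the RealityView make closure (the 0.1 m radius is an assumption; adjust the shape to your model):
if let sphere = content.entities.first?.findEntity(named: "Sphere") {
    // Gestures only hit entities that have BOTH an input target and a collision shape.
    sphere.components.set(InputTargetComponent(allowedInputTypes: .all))
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    sphereEntity = sphere
}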
I've noticed that the custom Systems I've been creating for my RealityKit/visionOS app do not get updated every frame as the documentation (and common sense) would suggest. Instead, they appear to tick for a time after each UI interaction and then "stall". The systems will be ticked again after some interaction with the UI or sometimes with a large enough movement of the user. My understanding was that these Systems should not be tied to UI by default so I'm a bit lost as to why this is happening.
I've reproduced this by starting from a template project and adding a very simple couple of systems.
Here is the main System, which simply rotates the pair of spheres:
import RealityKit
import RealityKitContent
import SwiftUI

public struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RealityKitContent.WobblyThingComponent.self))

    public init(scene: RealityKit.Scene) {
    }

    public func update(context: SceneUpdateContext) {
        print("system update, deltaTime: \(context.deltaTime)")
        let entities = context.scene.performQuery(Self.query).map({ $0 })
        for entity in entities {
            let newRotation = simd_quatf(angle: Float(context.deltaTime * 0.5), axis: [0, 1, 0]) * entity.transform.rotation
            entity.transform.rotation = newRotation
        }
    }
}
The component (WobblyThingComponent) is attached to a parent of the two spheres in Reality Composer Pro, and both system and component are registered on app start in the usual way.
This system runs smoothly in the simulator, but not in the Xcode preview and not on the Vision Pro itself, which is kind of the whole point.
Here is a video of the actual behaviour on the Vision Pro:
https://youtu.be/V0ZkF-tVgKE
The log during this test confirms that the system is not being ticked often. You can see the very large deltaTime values, representing those long stalled moments:
system update, deltaTime: 0.2055550068616867
system update, deltaTime: 0.4999987483024597
I have not seen this problem when running the Diorama sample project, yet when comparing it side by side with my test projects I cannot for the life of me identify a difference that could account for this.
If anyone could tell me where I'm going wrong it would be greatly appreciated as I've been banging my head against this one for days.
Xcode: Version 15.3 (15E204a)
visionOS: 1.1 and 1.1.1
Hi,
I am implementing a player using RealityKit's VideoPlayerComponent and AVPlayer. When the app enters the immersive space, playback begins, but only the audio plays; I can't see the video. Do I need to specify the entity's position and size?
struct MyApp: App {
    @State private var playerImmersionStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .defaultSize(width: 800, height: 200)

        ImmersiveSpace(id: "playerImmersionStyle") {
            ImmersiveSpaceView()
        }
        .immersionStyle(selection: $playerImmersionStyle, in: playerImmersionStyle)
    }

    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        return UISceneConfiguration(name: "My Scene Configuration", sessionRole: connectingSceneSession.role)
    }
}

struct PlayerViewEx: View {
    let entity = Entity()

    var body: some View {
        RealityView() { content in
            let entity = makeVideoEntity()
            content.add(entity)
        }
    }

    public func makeVideoEntity() -> Entity {
        let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()

        var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
        videoPlayerComponent.isPassthroughTintingEnabled = true
        entity.components[VideoPlayerComponent.self] = videoPlayerComponent
        entity.scale *= 0.4

        player.replaceCurrentItem(with: playerItem)
        player.play()

        return entity
    }
}

#Preview {
    PlayerViewEx()
}
Is there any way to detect if an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
Hi, I am not sure what is going on...
I have been working on this model for a while in Reality Composer, and had no problem testing it that way... it always worked out perfectly.
So I imported the file into a brand new Xcode project... I created a new AR App and used SwiftUI.
I actually did it twice...
I also tested the version Apple provides with the box. In Apple's version, the app appears, but the whole part where it tries to detect planes didn't show up. So I am confused.
I found a question that mentions the error messages I am getting, but I am not sure how to get around them:
https://developer.apple.com/forums/thread/691882
//
//  ContentView.swift
//  AppToTest-02-14-23
//
//  Created by M on 2/14/23.
//

import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        //let boxAnchor = try! Experience.loadBox()
        let anchor = try! MyAppToTest.loadFirstScene()

        // Add the box anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
This is what I get at the bottom:
2023-02-14 17:14:53.630477-0500 AppToTest-02-14-23[21446:1307215] Metal GPU Frame Capture Enabled
2023-02-14 17:14:53.631192-0500 AppToTest-02-14-23[21446:1307215] Metal API Validation Enabled
2023-02-14 17:14:54.531766-0500 AppToTest-02-14-23[21446:1307215] [AssetTypes] Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
2023-02-14 17:14:54.716866-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.743580-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.744961-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/drPostAndComposition.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.745988-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.747245-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.748750-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.749140-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761189-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761611-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761983-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.762604-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.763575-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.764859-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 18: Json Deserialization; unknown member 'EnableARProbes' - skipping.
2023-02-14 17:14:54.764902-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 20: Json Deserialization; unknown member 'EnableGuidedFilterOcclusion' - skipping.
2023-02-14 17:14:55.531748-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534559-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534633-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534680-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534733-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534777-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534825-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534871-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534955-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:56.207438-0500 AppToTest-02-14-23[21446:1307383] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [2]
2023-02-14 17:17:15.741931-0500 AppToTest-02-14-23[21446:1307414] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
2023-02-14 17:22:07.075990-0500 AppToTest-02-14-23[21446:1308137] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
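For what it's worth, a sketch of how per-face values might be read out of that GeometrySource by walking its Metal buffer. This assumes the classifications source is optional, stores one UInt8 per face, and that MeshAnchor.MeshClassification can be built from that raw value; treat all three as assumptions rather than confirmed API behavior.
import ARKit
import Metal

// Hedged sketch: pull per-face classification values out of a MeshAnchor.
func faceClassifications(for anchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    // Assumption: classifications is an optional GeometrySource.
    guard let source = anchor.geometry.classifications else { return [] }

    // GeometrySource wraps an MTLBuffer; offset/stride describe its layout.
    let pointer = source.buffer.contents().advanced(by: source.offset)
    var result: [MeshAnchor.MeshClassification] = []
    result.reserveCapacity(source.count)

    for index in 0..<source.count {
        // Assumption: one UInt8 classification value per face.
        let rawValue = pointer.advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        // Assumption: MeshClassification is constructible from that raw value.
        result.append(MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? .none)
    }
    return result
}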
I was developing my project just like HappyBeam, with a mechanism where, after playing a small game round, a main menu pops up letting the player play again or go back to the main menu.
When I start my first game after installing it on my Vision Pro, it works fine. It's basically like HappyBeam: it counts down from 3 to 1, then calls await openImmersiveSpace(id: "***") to enter my game. After the round finishes, I call await dismissImmersiveSpace() and reset my game, ready for the next round, letting the player choose to play again or go back to the menu.
However, this time, when my counter counts down from 3 to 1, the immersive view doesn't show up and the visionOS menu shows up instead (I guess it's because the immersive view cannot be opened). Some errors shown in my logger are below:
<FBSWorkspaceScenesClient:0x281820e00 com.apple.frontboard.systemappservices> scene request failed to return scene with error response : <NSError: 0x2839bc270; domain: FBSWorkspaceErrorDomain; code: 1 ("InvalidScene"); "scene invalidated before create completion">
------------------------------------------------
Unable to present an Immersive Space for id 'ImmersiveSpace': Error Domain=FBSWorkspaceErrorDomain Code=1 "scene invalidated before create completion" UserInfo={BSErrorCodeDescription=InvalidScene, NSLocalizedFailureReason=scene invalidated before create completion}
------------------------------------------------
Error: BSLogAddStateCaptureBlockWithTitle(EventDeferringState:com.milanow.mygame:SFBSystemService-C90B0828-4522-4098-9E6A-0D5968CFCEB8) state data format error: <NSError: 0x283947360; domain: BSSharedStateCapturing; code: 1; "Input generated no data"> {
NSUnderlyingError = <__NSCFError: 0x2839451d0; domain: NSCocoaErrorDomain; code: 3851> {
NSDebugDescription = Property list invalid for format: 200 (property lists cannot contain NULL);
};
}
I wonder what is happening here, since no helpful info could be found online.
(I think the openImmersiveSpace code snippet is boring, but I'll still post it here.)
var body: some Scene {
    Group {
        WindowGroup(id: "MainUI") {
            MainView()
        }
        .windowStyle(.plain)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            GameView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
    .onChange(of: gameModel.state) { oldValue, newValue in
        guard oldValue != newValue else {
            return
        }
        if case let .dismissingImmersiveSpace(finish) = newValue {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "MainUI")
                finish()
            }
        } else if case let .openingImmersiveSpace(startPlaying) = newValue {
            Task {
                await openImmersiveSpace(id: "ImmersiveSpace")
                dismissWindow(id: "MainUI")
                startPlaying()
            }
        } else if case .playing = oldValue {
            openWindow(id: "MainUI")
        } else if case .playing = newValue {
            dismissWindow(id: "MainUI")
        }
    }
}
You can see that there's nothing magic here; it just follows what HappyBeam does.
(I wonder what really makes a scene invalid.)
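Not an explanation of what invalidates the scene, but it may help to check the result that openImmersiveSpace returns before dismissing the main window, so a failed open doesn't leave the app with no scene at all. A sketch of that defensive change inside the opening branch:
Task {
    // openImmersiveSpace reports whether the space actually opened.
    switch await openImmersiveSpace(id: "ImmersiveSpace") {
    case .opened:
        dismissWindow(id: "MainUI")
        startPlaying()
    case .userCancelled, .error:
        // Keep the main window around and surface the failure instead.
        print("Immersive space failed to open")
    @unknown default:
        break
    }
}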
Hello. I'm developing an app using ARKit and RealityKit. The purpose of the app is to scan an apartment and put furniture next to the walls. It works well, but if the AR session runs for more than 3 minutes, at some point the app crashes. According to the crash report, it's not related to my code. I'm attaching the crash report (company data is hidden). Any help is appreciated. Thanks in advance.
I'm developing an app for Apple Vision Pro and have a question about RealityKit. Recently, I attempted to use drag gestures to manipulate two entities, A and B, with my left and right hands respectively. The two entities belong to the same RealityView.
I anticipated that I could move Entity A with my left hand and Entity B with my right hand independently. However, I noticed that the movement of one hand affects both entities simultaneously.
Presumably, DragGesture().onChanged is triggered twice for each entity. In an attempt to properly pair each hand with its corresponding entity, I investigated the platform.manipulatorGroup in the debugger. However, I encountered a compile error when trying to access the platform variable.
Is it feasible to pair each hand with a specific entity and move both objects separately?
Thank you in advance.
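One pattern that may help, offered as a sketch rather than a verified fix: give each entity its own targeted drag gesture and attach them as simultaneous gestures, so each gesture's value.entity is pinned to one model. The entityA / entityB names stand in for however the two models are actually loaded.
RealityView { content in
    // entityA / entityB stand in for the two models already being loaded elsewhere.
    content.add(entityA)
    content.add(entityB)
}
// One targeted drag per entity, allowed to run at the same time,
// so each hand's drag only ever resolves to its own entity.
.simultaneousGesture(
    DragGesture()
        .targetedToEntity(entityA)
        .onChanged { value in
            value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
        }
)
.simultaneousGesture(
    DragGesture()
        .targetedToEntity(entityB)
        .onChanged { value in
            value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
        }
)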
I'm trying to find some answers on why billboarding isn't working when the attachment is a child of an anchor.
I'm trying to billboard an attachment so that it remains pointed at the user wherever they're viewing the content from, for example, showing some context information over a dynamic 3D model on the tabletop, not baked into a Reality Composer Pro scene. I pulled the component and system used in the various Apple example projects (Diorama) that have the billboarding system.
Playing around with the system and component I can add a simple model entity to the scene, tag it with the component and it works perfectly, all the time. As the camera moves it tracks it perfectly. Even when nested under other empty entities or off center or oddly rotated.
Great! Then I wanted to apply this to an attachment that is shown from a model entity anchored to a horizontal plane, and all of a sudden it doesn't work at all.
I create the anchor:
let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.01, 0.01]))

if let lookAtText = attachments.entity(for: "LookAtMe") {
    lookAtText.position = [0, 0.5, 0]
    lookAtText.components.set(BillboardComponent())
    lookAtText.name = "Look At Me"
    anchor.addChild(lookAtText)
}

content.add(anchor)
The attachment shows correctly above the anchor, as expected, and it does rotate some, just totally wrong, or it stops. It does not billboard correctly, or even remotely correctly.
If I switch the anchor.addChild to a content.add, it isn't in the correct place, but billboarding works.
I don't understand why adding it as a child to the anchor entity suddenly breaks completely unrelated systems.
Am I doing something wrong or is this some sort of privacy issue? I can't find any documentation that using the look at api from an anchored entity is somehow forbidden.
Unity's PolySpatial supports a HoverEffect on GameObjects. I think it basically means that even though the developer doesn't know the exact entity the user is looking at, they can still provide an event callback to RealityKit along the lines of "please change this entity's mesh to another color", just like hoverEffect on a SwiftUI component.
So I wonder: is there a closure that lets the RealityKit system fire my callback?
Hello all,
RealityKit. visionOS.
I have a parent entity (surface in the code below) to which I'm adding a child entity with all the required components for gestures (collision & input target).
I'm setting the TapGesture as expected (same for SpatialTapGesture).
Result: the gesture is not working, and neither is the hover effect. The entity is not recognized as a tappable element.
However, if I add the child entity to the content instead of to the parent entity, everything works.
Enclosed code below for both scenarios.
Any idea?
Many thanks,
Dudi
Doesn't work -
RealityView { content, attachments in
    let scene = clonedEntity.clone(recursive: true)
    ...
    surface.addChild(scene) // <- Doesn't work
} attachments: {
    ...
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    openWindow(id: "detailed-window")
})
.hoverEffect()
Works -
RealityView { content, attachments in
    let scene = clonedEntity.clone(recursive: true)
    ...
    content.add(scene) // <- Works
} attachments: {
    ...
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    openWindow(id: "detailed-window")
})
.hoverEffect()
We are developing an AR app that uses spatial audio. If we want to use RealityKit to create the app, will we need a MacBook Pro running Apple silicon?
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating instancing is working:
To test, I made a base visionOS app with an immersive space and replaced the entity with my test USDZ file. I've been using the RealityKit Trace profiling template in Xcode Instruments, in the immersive space with the volume closed. This gets consistent draw call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in Blender, export with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime (see the sketch at the end of this post)
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
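For reference, a sketch of that last approach: building one MeshResource whose Contents reference a single model several times via MeshResource.Instance. This is an assumption about how the instancing API is meant to be used, not a confirmed fix for the draw-call growth, and the exact initializer shapes should be checked against the current SDK.
import RealityKit

// Sketch: take an existing mesh and rebuild it so its Contents reference one
// model several times via MeshResource.Instance.
func makeInstancedMesh(from base: MeshResource, count: Int) throws -> MeshResource {
    var contents = base.contents
    guard let model = contents.models.map({ $0 }).first else { return base }

    var instances: [MeshResource.Instance] = []
    for index in 0..<count {
        // Offset each instance along X; the spacing is arbitrary.
        let transform = Transform(translation: [Float(index) * 0.5, 0, 0]).matrix
        instances.append(MeshResource.Instance(id: "instance-\(index)",
                                               model: model.id,
                                               at: transform))
    }
    contents.instances = MeshInstanceCollection(instances)
    return try MeshResource.generate(from: contents)
}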
I'm trying to make a simple demo of using ShaderGraphMaterial in a USDZ file that I can preview on Mac and visionOS, but I'm having trouble.
In Reality Composer, I make a sphere, then assign a ShaderGraphMaterial to the material, with a simple diffuse color (green) input. When I save the file as .usda, it displays as a gray sphere on Mac rather than the green sphere shown in Reality Composer. If I then convert to usdz using Reality Converter, I get a warning on import:
"Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source."
And the exported .usdz also shows as a gray sphere.
Is there a simple demo of a .usda file using ShaderGraphMaterial that displays on Mac, iOS, and visionOS that I can look at to see how it looks internally?
My actual problem is creating usdz / usda files on visionOS for viewing on iOS / Mac / visionOS... but the first step is showing that it's even possible to use ShaderGraphMaterial across all platforms.
Thanks
I see example code converting the results of a SpatialTap to a SIMD3 location. For example, from WWDC session Meet ARKit for spatial computing:
let location3D = value.convert(value.location3D, from: .global, to: .scene)
What I really want is a simd_float4x4 that includes the orientation of the surface that the tap gesture/cast collided with.
My goal is to place an object with its Y-axis along the normal of the surface that was tapped.
For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., the closest triangle approximating it).
Is this possible?
My planned fallback is to only use planes for collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information.
I am hoping Apple has a simple function call that is more general than my fallback approach.
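There may not be a single call that returns exactly that, but one approach worth sketching: raycast through the scene's collision shapes from the tap and build a rotation that aligns +Y with the hit normal. The ray origin, direction, and length below are placeholders for whatever you derive from the tap; treat this as a sketch under those assumptions.
import RealityKit

// Sketch: turn a collision hit into a transform whose Y-axis follows the surface normal.
func surfaceTransform(for entity: Entity,
                      rayOrigin: SIMD3<Float>,
                      rayDirection: SIMD3<Float>) -> simd_float4x4? {
    guard let scene = entity.scene,
          let hit = scene.raycast(origin: rayOrigin,
                                  direction: rayDirection,
                                  length: 5,
                                  query: .nearest).first else { return nil }

    // Rotation that maps the world up vector onto the surface normal.
    let rotation = simd_quatf(from: SIMD3<Float>(0, 1, 0), to: normalize(hit.normal))
    let transform = Transform(rotation: rotation, translation: hit.position)
    return transform.matrix
}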