Hi team, I don't know why this error has popped up. Any suggestions, please?
Thanks,
Zippy Games
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under the Reality Composer Pro tag (116 posts)
Looking for help getting "On Tap" to work inside RCP for my AVP project. I can get it to work with "On Added to Scene", but if I switch to "On Tap", the audio will not play when the audio is attached to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "On Added to Scene" setup that works correctly, to help troubleshoot my non-working "On Tap":
Behaviors: "on added to scene". action - timeline
Input target: check mark enabled, allowed all
Collision set to default
Audio library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added
This setup in RCP lets my AVP project launch correctly with audio "On Added to Scene". But when I switch the behavior to "On Tap", the audio no longer plays, and I cannot figure out why. I've tried several different options and nothing works. Please help!
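One direction that may be worth checking (a sketch, not a confirmed fix): on visionOS, an RCP "Tap" trigger generally needs the app to forward taps to the tapped entity. Assuming visionOS 2 and its Entity.applyTapForBehaviors() API, and a hypothetical scene name, the SwiftUI side could look roughly like this:
import SwiftUI
import RealityKit
import RealityKitContent

struct TapAudioView: View {
    var body: some View {
        RealityView { content in
            // Load the RCP scene containing the entity that has the Behaviors,
            // Input Target, and Collision components ("Scene" is a placeholder name).
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        // Forward taps to the tapped entity so its "On Tap" behavior
        // (and the timeline it triggers) can run.
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    _ = value.entity.applyTapForBehaviors()
                }
        )
    }
}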
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Audio
USDZ
Reality Composer Pro
visionOS
Hi, I have made this app with Xcode 15.2, but the current simulator is not able to open it properly, with the error showing.
Thanks,
zipzy Games
Hello,
After watching the "Work with Reality Composer Pro content in Xcode" session, I created the following custom component.
public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}
I registered the custom component, as suggested, in the App's init function:
init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the realityKitContentBundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in the RealityView, it causes
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but trying to show it in the RealityView crashes the app immediately. Is there a way to fix this?
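For reference, a minimal sketch of the load path I would expect to work, assuming the component is registered before the exported file is decoded and that the usdz is loaded by URL (the file name comes from the post; the function, URL, and entity name are hypothetical):
import RealityKit
import RealityKitContent

func loadExportedScene(from url: URL) async throws -> Entity {
    // Register the custom component before any scene that uses it is decoded.
    RealityKitContent.TestComponent.registerComponent()

    // Load the exported usdz (e.g. test_scene.usdz) by URL instead of by bundle name.
    let entity = try await Entity(contentsOf: url)

    // The component should survive the round trip through the exported file;
    // "Sphere" is a placeholder for the entity name used in the original scene.
    if let component = entity.findEntity(named: "Sphere")?.components[RealityKitContent.TestComponent.self] {
        print(component.text)
    }
    return entity
}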
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed results, and the parameters.
After running computeDiffuseReflectionUVs.py and mapping the textures and reflection diffuse to the objects, I noticed that the reflection diffuse does not produce any color.
The expected result is shown below; the diffused light has color.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction,
                                             duration: duration,
                                             delay: 5.0)
                    // Play the sequence animation that will play the actions.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
All the logs:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
But I think only the line Cannot find a BindPoint for any bind path: "", "" is relevant.
I would like to implement an effect where content pops out from a window into the immersive space, based on the following WWDC video.
The video introduces the new features of visionOS 2.0 by reworking Apple's sample app BOT-anist.
https://developer.apple.com/jp/videos/play/wwdc2024/10153/?time=1252
In the video, it looks like ImmersiveSpaceAppModel is newly implemented.
However, the key code is not mentioned anywhere.
The video passes appModel.robot as the from argument to the transform method of RealityViewContent.
It seems that BOT-anist has been updated once and can be downloaded from the following URL, but there is no class such as ImmersiveSpaceAppModel implemented in that app either.
https://developer.apple.com/documentation/visionos/bot-anist
Has it been updated again since then?
Frankly, I'm not sure if it is possible to proceed as per the WWDC video.
Translated with DeepL.com (free version)
Topic:
App & System Services
SubTopic:
Processes & Concurrency
Tags:
Swift
RealityKit
Reality Composer Pro
visionOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer Pro 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1),
                                   materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial",
                                                              from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4),
                                     materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight into how one can build just a simple macOS app that will import a scene from a .reality file?
The documentation suggests that simply bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
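To frame the question, here is a minimal sketch of the kind of app I have in mind. It sidesteps the generated loading methods and loads the file by URL with RealityKit's Entity(contentsOf:); the file name is hypothetical, and using RealityView on the Mac assumes macOS 15:
import SwiftUI
import RealityKit

@main
struct RealityFileViewerApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var loadedScene: Entity?

    var body: some View {
        VStack {
            Button("Open Scene") {
                Task {
                    // "MyScene.reality" is a placeholder; use the real file in the app bundle.
                    if let url = Bundle.main.url(forResource: "MyScene", withExtension: "reality") {
                        loadedScene = try? await Entity(contentsOf: url)
                    }
                }
            }
            RealityView { _ in
                // Starts empty; the update closure attaches the scene once it has loaded.
            } update: { content in
                if let loadedScene, loadedScene.parent == nil {
                    content.add(loadedScene)
                }
            }
        }
    }
}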
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Swift
AR / VR
RealityKit
Reality Composer Pro
Hi!
I'm trying to play a video on a monitor 3D model, as a material.
I want to know whether this can work. I searched for information about it, but I couldn't find enough. Thank you in advance.
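This is normally done with RealityKit's VideoMaterial. A minimal sketch, assuming the monitor model exposes a screen ModelEntity whose material can be replaced (the function and parameter names are hypothetical):
import AVFoundation
import RealityKit

func playVideo(on screen: ModelEntity, from videoURL: URL) {
    // Create a player for the video file and a material backed by it.
    let player = AVPlayer(url: videoURL)
    let videoMaterial = VideoMaterial(avPlayer: player)

    // Replace the screen's material with the video material.
    // This assumes the screen mesh has a single material slot; adjust as needed.
    screen.model?.materials = [videoMaterial]

    player.play()
}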
After installing macOS Sequoia, Xcode 16, and Reality Composer Pro 2, my Apple Vision Pro projects (which worked perfectly with Xcode 15) started giving me a Tool terminated by signal 'Segmentation fault: 11' error while compiling the RealityKitContent assets.
This happens only when I try to build the project with .usdz models exported from Blender; when I try with sample models from Apple's website, it works fine with no errors. Is there any solution?
Topic:
Developer Tools & Services
SubTopic:
Xcode
Tags:
Xcode
RealityKit
Reality Composer Pro
visionOS
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting, though... for example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) is seen as 0.21, 0, 0 in the shader. This basically makes it unusable if I'm trying to show scientific data... I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / RealityComposerPro. Anybody have any idea why it works like this?
Actual result (chart labels added in photoshop):
Expected:
Red > 0.1 Shader Graph
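One possible explanation (my assumption, not confirmed by the post): the PNG is being treated as sRGB color data, so the shader samples linearized values. Decoding 0x7f (≈ 0.498 in sRGB) to linear gives roughly 0.21, which matches the observed value:
import Foundation

// Standard sRGB-to-linear transfer function.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

let red = Double(0x7f) / 255.0   // ≈ 0.498, the value stored in the PNG
print(srgbToLinear(red))         // ≈ 0.212, close to the 0.21 seen in the shader
If that is the cause, treating the texture as raw (non-color) data rather than sRGB should preserve the stored values.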
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS
I have a VideoMaterial inside a RealityView and want to attach this to a DockingRegion inside an immersive environment.
It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video.
So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController?
The latter is not an option as I need custom controls.
I’m having issues getting collision shapes working in Reality Composer on iPadOS, and with Reality Composer Pro via Xcode on macOS.
I’ve posted a video recorded through my Vision Pro showing the issue.
The project I’m working on is a dice-rolling application. The dice don’t appear to work when set to Collision Shape = Automatic, which I assume takes into account the actual silhouette of the shape.
https://youtu.be/upPtQY4QOAk?si=yyx6rbSSmVkLxBLg
They also don’t rest on their face when they land.
Has anyone experienced this type of behavior and found a solution? I’m currently doing this with Reality Composer, but I will most likely also want to get it working properly in Reality Composer Pro.
Thx!
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from the realityKitContentBundle like:
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)
But this looks in the root of RealityKitContent.rkassets; if I have the models in some subfolder, then I have to add the complete path, like:
let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)
What I want is to be able to search all folders recursively for that file, as I have several subfolders with different models.
Any suggestion ?
Thanks in advance.
Guillermo
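One possible workaround (a sketch under my own assumptions, not a confirmed API; as far as I know Entity(named:in:) has no recursive search option) is to try a list of candidate subfolder paths in order. The subfolder names have to be listed by hand:
import RealityKit
import RealityKitContent

// Tries the bundle root first, then each listed subfolder, and returns the
// first entity that loads successfully.
func loadModel(named name: String, searchingSubfolders folders: [String]) async -> Entity? {
    let candidates = [name] + folders.map { "/\($0)/\(name)" }
    for path in candidates {
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}

// Example usage (folder names are hypothetical):
// let model = await loadModel(named: "testModel", searchingSubfolders: ["superModels", "props"])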
Hi guys, I have set the size of a cube to 20 cm. I then want to set the transform scale to x: 0.048, y: 1, z: 0.048, but strangely it resets all the values to x: 1, y: 1, z: 1. When I try other decimals, such as x: 1.2, y: 2, z: 1.2, the panel also automatically resets them to x: 1, y: 1, z: 1. My question is: how do I set a transform scale that accepts decimal numbers?
Topic:
Developer Tools & Services
SubTopic:
Xcode
Tags:
Reality Composer
AR / VR
Reality Composer Pro
Hey, I am having issues getting MaterialX shaders that I've authored in Houdini to work properly in Reality Composer Pro.
The shader is very simple. It starts with a tiled image node whose output is connected to the diffuse color of the preview surface node. This node is called mtxltileimage2.
When I create a tiled image node in RCP and configure it with the same parameter values, the texture shows up correctly. This node is called TiledImage.
One difference I can identify is that the second node has a grey icon whereas the first node has a blue icon. Could this be related to this issue?
Here is the USD viewer output for the two variants of the tiled image node.
Any pointers and help would be greatly appreciated, including corrections to any misconceptions. My goal is to be able to author these shaders in Houdini and import them into RCP; I'm trying to figure out the right pipeline for this workflow.
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro.
Is there anything else I should do besides loading the Entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion from within the usda file and somehow enable it?
Hello,
I have a usdz model created in Maya. It's supposed to have a splashing animation associated with it, and the animation can be viewed in Maya and Blender, but for some reason when I export it to usdz and then import it into my Reality Composer Pro project, the animation is missing. I expect an animation library to be created on the entity when I drag it in, like with any other usdz that has animation data, but that is not the case.
Any help would be appreciated on this issue.
Hi everyone,
I’m having trouble with image anchoring when working on a project in Reality Composer and Reality Composer Pro. Here’s the issue:
1. What I’m Trying to Achieve:
I want to create an AR scene where an object anchors to an image I provide. I don't want to create an app for this, just use the USDZ file the scene creates. The USDZ file should then be viewable via the various integrations of AR Quick Look across the Apple ecosystem. The image anchoring works perfectly when I preview the scene inside Reality Composer using AR mode.
2. The Problem:
When I export the project (tried both USDZ and Reality formats) and open it on my iPhone using the Files app (which uses AR Quick Look), the image anchoring no longer works. The object doesn’t anchor to the provided image as expected. It just anchors to the first plane it recognizes and not the image.
3. What I’ve Tried:
Exporting the scene in USDZ format.
Exporting the scene in Reality format.
Both formats result in the same issue: no image anchoring outside of the Reality Composer environment.
Trying different images, but all with the same result: the image anchoring does not work.
Trying different iOS versions, but with the same issue.
4. Current Setup:
Reality Composer Pro version: 2.0
iPhone model: iPhone 13 Pro
iOS version: 18.1.
5. What I Need Help With:
Is there a way to ensure image anchoring works in exported files when opened via AR Quick Look?
Do I need to configure something specific during the export process?
Are there limitations in AR Quick Look that prevent image anchoring from functioning correctly?
Do I need to create an app to make this work?
I’d appreciate any advice or insights from the community. If anyone has experience with similar issues or knows of a workaround, please let me know!
Thanks in advance, Mav
Topic:
Spatial Computing
SubTopic:
ARKit
Tags:
ARKit
AR Quick Look
Reality Composer
Reality Composer Pro