If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error: resourceNotFound("heart").
Could someone help me with this? Thank you so much
Code:
//
// TestttttttApp.swift
// Testtttttt
//
// Created by Zhendong Chen on 2/17/25.
//
import SwiftUI

@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
// ContentView.swift
// Testtttttt
//
// Created by Zhendong Chen on 2/17/25.
//
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Work
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
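Not a confirmed fix, but one variation sometimes worth testing against the failing case above: loading the compiled asset as a plain Entity from the RealityKitContent bundle instead of a ModelEntity. A minimal sketch, assuming heart.usdz sits inside RealityKitContent.rkassets and keeps the name "heart":

import SwiftUI
import RealityKit
import RealityKitContent

struct HeartFromBundleView: View {
    var body: some View {
        RealityView { content in
            do {
                // Load whatever entity hierarchy the .rkassets compiler produced for heart.usdz.
                let heart = try await Entity(named: "heart", in: realityKitContentBundle)
                content.add(heart)
            } catch {
                print("Failed to load heart from RealityKitContent:", error)
            }
        }
    }
}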
Hello,
I'm unable to activate a timeline in my application through an OnTap, OnAddedToScene or OnNotification.
In RCP I can test and play the timelines easily.
When running in the simulator or on device the timelines simply do not run, regardless of the method through which I try to call the API.
I have two questions:
How can I check that my timelines are in the RCP project that's loaded into the scene? I don't see timelines in the entity hierarchy when I debug with the RealityKit Debugger.
Is Behaviors a component I can manually set at runtime? I can very clearly see the behaviors component attached to my entity in RCP, but when running this code:
.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            if value.entity.applyTapForBehaviors() {
                print("Success!")
            } else {
                print("Failure.")
            }
        }
)
It prints "Failure." every time indicating to me that the entity does not have a Behavior attached to it (whether that's a component or however else the Behavior is associated with the entity)
I also have not had success using the Notification system or even the OnAddedToScene behavior trigger which should theoretically work if a behavior is attached to the entity which the tap experiment indicates it's not.
For context this is my notification trigger code:
private let notificationTrigger = NotificationCenter.default
    .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

@Environment(\.realityKitScene) var scene

Attachment(id: "home") {
    Button {
        NotificationCenter.default.post(
            name: NSNotification.Name("RealityKit.NotificationTrigger"),
            object: nil,
            userInfo: [
                "RealityKit.NotificationTrigger.Scene": scene,
                "RealityKit.NotificationTrigger.Identifier": "test"
            ]
        )
    } label: {
        Text("Test")
    }
    .padding(20)
    .glassBackgroundEffect()
}
.onReceive(notificationTrigger) { _ in
    print("test notification received")
}
I am receiving "test notification received" print statements as well.
I'm using Xcode 16.0 with visionOS 2.0 on macOS 15.3.1.
Dear developers, I am having some problems with Reality Composer Pro on Mac. Specifically, I don't understand how to manage some timeline functions. I have an object with two animations: an opening animation and a closing animation. On the first tap the object should open through animation 1, and on the second tap it should close through animation 2. However, the two animations conflict. In addition, animation 1 does not stop at the last frame but returns the object to its frame 0 position. Do you have any solutions? Thank you all.
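Not part of the original question, but a rough illustration of one code-side approach to an open/close toggle: pick between the two clips from availableAnimations on each tap. The entity name "Door", the clip ordering, and the tap setup are all assumptions here, and the entity is assumed to carry Input Target and Collision components so it can receive taps:

import SwiftUI
import RealityKit

struct DoorToggleView: View {
    @State private var isOpen = false

    var body: some View {
        RealityView { content in
            // "Door" is a placeholder for the animated model; it is assumed to carry
            // two clips (index 0 = open, index 1 = close).
            if let door = try? await Entity(named: "Door") {
                content.add(door)
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    let entity = value.entity
                    let clips = entity.availableAnimations
                    guard clips.count >= 2 else { return }
                    // Alternate between the opening and closing clip on each tap.
                    let clip = isOpen ? clips[1] : clips[0]
                    isOpen.toggle()
                    entity.playAnimation(clip, transitionDuration: 0.2, startsPaused: false)
                }
        )
    }
}

Holding the final pose instead of snapping back to frame 0 would still need handling, for example by pausing the returned AnimationPlaybackController once playback completes.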
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in RealityComposerPro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
I am at a loss. I have looked at examples, and I have used chat/cursor. I cannot figure out how to target the transform/position of a Reality Composer Pro project when adding it to an ARView in iOS.
I have a test red sphere working perfectly for raycast positioning. When I pass the same variables (tested with print out) to the Entity or Anchor position/transform nothing changes.
It seems that, no matter what, the content of the Reality Composer Pro project is placed where the camera view initialized.
How do I actually interact with its position? I just want to be able to tap the screen and place the RCP content wherever I want.
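For reference, a minimal sketch of the tap-to-place flow with ARView (not taken from the project above): raycast from the tap point and parent the loaded RCP content to a world anchor at the hit transform. "Scene" and the bundle it loads from are placeholders:

import UIKit
import ARKit
import RealityKit

class PlacementViewController: UIViewController {
    let arView = ARView(frame: .zero)
    var placedEntity: Entity?

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc func handleTap(_ sender: UITapGestureRecognizer) {
        let point = sender.location(in: arView)
        // Raycast against estimated planes under the tap.
        guard let result = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any).first else { return }

        Task { @MainActor in
            if placedEntity == nil {
                // "Scene" is a placeholder for the Reality Composer Pro scene
                // (loaded here from the main bundle; substitute the RCP package bundle if needed).
                placedEntity = try? await Entity(named: "Scene")
            }
            guard let entity = placedEntity else { return }
            entity.removeFromParent()

            // Anchor the content at the hit location instead of the session origin.
            let anchor = AnchorEntity(world: result.worldTransform)
            anchor.addChild(entity)
            arView.scene.addAnchor(anchor)
        }
    }
}

The key point is that the RCP content is positioned through the anchor (or the entity's own transform relative to it), not through where the camera happened to start.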
I am submitting a project to the Swift Student Challenge. I have created a RealityContent folder using Reality Composer Pro. How can I import this folder into a Swift Package Manager (.swiftpm) project hosted in Swift Playgrounds so that it becomes a usable package?
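Not a verified recipe, but since the folder Reality Composer Pro generates is itself a Swift package, one approach is adding it as a local package dependency in the .swiftpm manifest. A sketch of only the relevant wiring, assuming the package folder is named RealityContent and sits next to Package.swift; the product and target names are assumptions, the rest of the manifest that Swift Playgrounds generates (the app product and its settings) would stay as it is, and whether the .rkassets content builds outside Xcode is a separate question to verify:

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyChallengeApp",                      // placeholder app name
    dependencies: [
        // Local path to the Reality Composer Pro package folder,
        // assumed to sit next to this Package.swift.
        .package(path: "RealityContent")
    ],
    targets: [
        .executableTarget(
            name: "MyChallengeApp",
            dependencies: [
                // Product name assumed to match the package name RCP generated.
                .product(name: "RealityContent", package: "RealityContent")
            ]
        )
    ]
)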
I'm trying to build a Shader in "Reality Composer Pro" that updates from a start time. Initially I tried the following:
The idea was that when startTime was 0 the output would be 0; I would then set startTime from code, it would be compared with the current GPU time, and the difference would be used to drive another part of the shader graph:
if
    let testEntity = root.findEntity(named: "Test"),
    var shaderGraphMaterial = testEntity.components[ModelComponent.self]?.materials.first as? ShaderGraphMaterial
{
    let time = CFAbsoluteTimeGetCurrent()
    try! shaderGraphMaterial.setParameter(name: "StartTime", value: .float(Float(time)))
    testEntity.components[ModelComponent.self]?.materials[0] = shaderGraphMaterial
}
However, I haven't found a reference to the time the shader would be using.
So now I am trying to write an EntityAction to achieve the same effect. Instead of comparing a start time to the GPU's time, I'm trying to animate one of the shader's uniform inputs. However, I'm not sure how to specify the bind target. Here's my attempt so far:
import RealityKit

struct ShaderAction: EntityAction {
    let startValue: Float
    let targetValue: Float

    var animatedValueType: (any AnimatableData.Type)? { Float.self }

    static func registerEntityAction() {
        ShaderAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let value = simd_mix(event.action.startValue, event.action.targetValue, Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(value)
        }
    }
}

extension Entity {
    func updateShader(from startValue: Float, to targetValue: Float, duration: Double) {
        let fadeAction = ShaderAction(startValue: startValue, targetValue: targetValue)
        if let shaderAnimation = try? AnimationResource.makeActionAnimation(for: fadeAction, duration: duration, bindTarget: .material(0).customValue) {
            playAnimation(shaderAnimation)
        }
    }
}
Currently when I run this I get an assertion failure: 'Index out of range (operator[]:line 797) index = 260, max = 8'
Furthermore, even if it didn't crash, I don't understand how to pass a binding to the custom shader value "startValue".
Any clues on how to achieve this effect would be appreciated, even if it's a completely different way.
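Not from the thread, but as a sketch of "a completely different way": a custom RealityKit System that advances the value every frame and writes it into the ShaderGraphMaterial with setParameter, avoiding bind targets entirely. The component, the system, and the parameter name "StartValue" are all hypothetical names, and both types would need to be registered at startup (ShaderRampComponent.registerComponent(), ShaderRampSystem.registerSystem()):

import RealityKit

// Hypothetical component storing the ramp state for an entity whose first
// material is a ShaderGraphMaterial with an exposed "StartValue" parameter.
struct ShaderRampComponent: Component {
    var elapsed: Float = 0
    var duration: Float = 2
    var from: Float = 0
    var to: Float = 1
}

struct ShaderRampSystem: System {
    static let query = EntityQuery(where: .has(ShaderRampComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var ramp = entity.components[ShaderRampComponent.self],
                  var model = entity.components[ModelComponent.self],
                  var material = model.materials.first as? ShaderGraphMaterial
            else { continue }

            // Advance the ramp by the frame's delta time and interpolate the value.
            ramp.elapsed += Float(context.deltaTime)
            let t = min(ramp.elapsed / ramp.duration, 1)
            let value = ramp.from + (ramp.to - ramp.from) * t

            // Push the new value into the shader graph parameter.
            try? material.setParameter(name: "StartValue", value: .float(value))
            model.materials[0] = material
            entity.components.set(model)

            if t >= 1 {
                // Ramp finished; stop updating this entity.
                entity.components.remove(ShaderRampComponent.self)
            } else {
                entity.components.set(ramp)
            }
        }
    }
}

Starting the effect then just means attaching a ShaderRampComponent to the target entity.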
I have a huge sphere with the camera inside it, and I turn on front-face culling on the ShaderGraphMaterial applied to that sphere so I can place other 3D content inside it. However, when it comes to attachments, object occlusion never works the way I expect: my attachments are occluded by the sphere (some are not, so the behavior is not deterministic).
I then suspected a depth-testing issue and started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, the comments on one post suggest that ModelSortGroup simply doesn't work on attachments.
So how should I tackle this issue now, to make my attachments appear inside my sphere?
OS/Sys: visionOS 2.3 / Xcode 16.3
I implemented a ShaderGraphMaterial and tried to load it from my .usda scene with ShaderGraphMaterial.init(name: in: bundle). I want to set a TextureResource on that material dynamically, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But RCP's Shader Graph apparently doesn't support a texture input as a parameter, as the image shows.
At the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either; its parameterNames is an empty array if I don't set any custom input parameters. The texture comes from my backend, so it really can't be saved to a file and loaded again (that would be too weird).
Is there something I am missing?
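Not an answer from the thread, just a sketch of what the code side might look like under two assumptions: that an image input can be promoted to a material parameter (called "BaseTexture" here, hypothetically), and that the backend image arrives as in-memory data, so the texture is created from a CGImage rather than a file:

import Foundation
import CoreGraphics
import ImageIO
import RealityKit

// Hypothetical helper: build a TextureResource from downloaded image data and
// push it into a ShaderGraphMaterial parameter assumed to be named "BaseTexture".
@MainActor
func applyBackendTexture(_ data: Data, to entity: Entity) async throws {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return }

    // Create the texture directly from the CGImage; no file on disk is required.
    let texture = try await TextureResource(image: cgImage, options: .init(semantic: .color))

    guard var model = entity.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial else { return }

    try material.setParameter(name: "BaseTexture", value: .textureResource(texture))
    model.materials[0] = material
    entity.components.set(model)
}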
PLATFORM AND VERSION
visionOS
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.3 (On Real Device, Not simulator)
Please, someone confirm I'm not crazy and that this issue is actually out of my control.
I spent hours trying to fix my app and running profiles because I thought it was a performance issue in my app. I finally considered the chance that it was an issue with the API itself, made a sample app to isolate the problem, and the issue still existed there.
The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves. If you take off the headset while still in the space and put it back on, that fixes it and the entity then moves smoothly as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; this also makes the entity move smoothly.
If you launch a mixed immersion style instead of a full immersion style, this issue never arises. The issue only arises if you launch the space with either a full or progressive style while the system immersion level is turned up.
STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly.
Otherwise:
Create an immersive space with either the full or progressive immersion style.
Set up an entity in kinematic mode and apply a velocity to it so that it passes over your head when the space appears (see the sketch after these steps).
If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
If you take the headset off while still in the space and put it back on, the issue is fixed and the entity looks smooth.
Alternatively, if you open the space with the system immersion environment turned all the way down, you will also not run into the issue. Again, the issue also does not happen if the space is launched in the mixed style.
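For reference (this is not code from the linked repository), a minimal sketch of the setup described in the steps: a kinematic sphere given a constant velocity so it passes overhead once the immersive space appears. Sizes, positions, and the velocity are arbitrary:

import SwiftUI
import RealityKit

struct ChoppyTestView: View {
    var body: some View {
        RealityView { content in
            // Simple red sphere used only to make the motion easy to see.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            // Start a couple of meters behind the viewer and above head height.
            sphere.position = [0, 2, 2]

            // Kinematic body: driven by the motion component rather than by forces.
            sphere.components.set(PhysicsBodyComponent(
                shapes: [.generateSphere(radius: 0.1)],
                mass: 1,
                mode: .kinematic
            ))
            // Constant velocity along -Z so the sphere passes over the viewer's head.
            sphere.components.set(PhysicsMotionComponent(linearVelocity: [0, 0, -1]))

            content.add(sphere)
        }
    }
}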
Looking for help getting "On Tap" to work inside RCP for my AVP project. I can get it to work when using "on added to scene", but if I switch to "on tap", the audio will not play when the audio is attached to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "on added to scene" setup that works correctly, to help troubleshoot my non-working "on tap".
Behaviors: "on added to scene". action - timeline
Input target: check mark enabled, allowed all
Collision set to default
Audio library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added
This setup in RCP allows my AVP project to launch correctly with audio "on added to scene". But when I switch the behavior to "on tap", the audio no longer plays and I cannot figure out why. I've tried several different options and nothing works. Please help!
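Not a confirmed fix, but for comparison, the app-side wiring that an RCP "on tap" trigger generally relies on: a SwiftUI tap gesture that targets the entity (which keeps its Input Target and Collision components) and forwards the tap to its behaviors. "Scene" is a placeholder for the RCP scene name:

import SwiftUI
import RealityKit
import RealityKitContent

struct TapAudioView: View {
    var body: some View {
        RealityView { content in
            // "Scene" stands in for the RCP scene that contains the tappable entity.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Forwards the tap to the entity's RCP behaviors, which is what
                    // lets an "on tap" trigger fire its timeline (and play the audio).
                    _ = value.entity.applyTapForBehaviors()
                }
        )
    }
}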
Hi team, I don't know why this error has popped up. Any suggestions, please?
Thanks,
Zippy Games
Hi, I have made this app with Xcode 15.2, but the current Simulator is not able to open it properly and shows the error.
Thanks,
Zipzy Games
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything running and tracking, leaving the model in the same location I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the named entity from the Lake scene to the anchor
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
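One pattern worth trying here (a sketch, not a verified fix): keep the loading in the make closure and use RealityView's update closure to react to SwiftData changes by toggling isEnabled on the child entities, so the scene refreshes without an app restart. The fragment below assumes the surrounding view's items query and mountainLakeBundle, and omits the plane anchor and camera setup from the post for brevity:

RealityView { content in
    if let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle) {
        lakeScene.name = "LakeRoot"
        content.add(lakeScene)
    }
} update: { content in
    // Runs again whenever SwiftUI state this closure reads (including the
    // @Query results in `items`) changes.
    guard let lakeScene = content.entities.first(where: { $0.name == "LakeRoot" }) else { return }
    for item in items {
        lakeScene.findEntity(named: item.value)?.isEnabled = item.enabled
    }
}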
Hello,
After watching the "Work with Reality Composer Pro content in Xcode" session, I created the following custom component.
public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}
I registered the custom component, as suggested, in the App.init function:
init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in a RealityView, it causes:
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but trying to show it in the RealityView crashes the app immediately. Is there a way to fix this?
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as a screenshot, the Python script's computed result, and the parameters.
After running computeDiffuseReflectionUVs.py and mapping the textures and the diffuse reflection to objects, I noticed that the diffuse reflection does not produce any color.
The expected result is shown below; the diffused light has color.
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction,
                                             duration: duration,
                                             delay: 5.0)
                    // Play the sequence animation that will play the actions.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
All the logs:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
I would like to implement an effect where content pops out from a window into an immersive space, based on the following WWDC video.
The video introduces the new features of visionOS 2.0 by reworking Apple's sample app BOT-anist.
https://developer.apple.com/jp/videos/play/wwdc2024/10153/?time=1252
In the video, it looks like ImmersiveSpaceAppModel is newly implemented, but the key code is not shown anywhere. appModel.robot is passed as the from argument to the transform method of RealityViewContent.
It seems that BOT-anist has since been updated and can be downloaded from the following URL, but there is no class such as ImmersiveSpaceAppModel implemented in that app either.
https://developer.apple.com/documentation/visionos/bot-anist
Has it been updated again since then?
Frankly, I'm not sure whether it is possible to proceed as shown in the WWDC video.
Translated with DeepL.com (free version)
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple MacOS app that will import a scene from a Reality File?
The documentation suggests that the simple act of bringing a .Reality File in (What about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for MacOS.
I'd really love just the most generic template of an Xcode Project that compiles with a button that pops open a scene., Like the VisionOS default immersive project.
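Not an Apple template, but a minimal sketch of the kind of macOS app described: a window with a button that loads a .reality file bundled with the app, assuming macOS 15's RealityView and a placeholder file named Scene.reality added to the app target:

import SwiftUI
import RealityKit

@main
struct RealityFileViewerApp: App {
    var body: some Scene {
        WindowGroup {
            RealityFileView()
        }
    }
}

struct RealityFileView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }
            if showScene {
                RealityView { content in
                    // "Scene.reality" is a placeholder for a .reality file in the app bundle.
                    if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
                       let entity = try? await Entity(contentsOf: url) {
                        content.add(entity)
                    }
                }
            }
        }
        .frame(minWidth: 600, minHeight: 400)
    }
}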
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting, though. For example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) is seen as (0.21, 0, 0) in the shader. This basically makes it unusable if I'm trying to show scientific data. I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Does anybody have any idea why it works like this?
Actual result (chart labels added in photoshop):
Expected:
Red > 0.1 Shader Graph
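Not a confirmed explanation, but one thing worth checking in this situation is whether the PNG is being imported with a color semantic (which involves a color-space conversion) rather than as raw data. A sketch of creating the texture with a raw semantic in code; whether and how this applies to textures referenced from Reality Composer Pro is an assumption to verify:

import CoreGraphics
import RealityKit

// Import the PNG with a raw semantic so the sampled channel values are the
// stored values rather than color-managed ones.
func makeDataTexture(from cgImage: CGImage) async throws -> TextureResource {
    try await TextureResource(image: cgImage, options: .init(semantic: .raw))
}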