Hi!
I'm creating an app like this:
First, I use Image Tracking to set a world anchor in the real world.
The timeline in the Reality Composer Pro scene needs to play at the same time for all the people in the same place who are using the app.
People using the app will see the same content, in the same position, at the same time, in the same place.
I already got the Image Tracking feature working. But the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know whether these are the right frameworks for this project.
How do I solve this problem technically?
If you have ideas, please let me know. I really need help for this.
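If it helps, here is a minimal sketch of how Group Activities (SharePlay) could drive synchronized playback: one participant broadcasts a shared start date, and every device schedules the same Reality Composer Pro notification trigger locally. The activity type, message type, and the "PlayTimeline" identifier are assumptions for illustration, not a confirmed recipe.

import Foundation
import GroupActivities

// Hypothetical activity type; name and metadata are placeholders.
struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared RCP Scene"
        meta.type = .generic
        return meta
    }
}

// Message each participant receives so everyone starts the timeline together.
struct PlayTimelineMessage: Codable {
    let startDate: Date
}

func configure(session: GroupSession<SharedSceneActivity>) {
    let messenger = GroupSessionMessenger(session: session)
    session.join()

    // Listen for the shared "play" message and schedule local playback.
    Task {
        for await (message, _) in messenger.messages(of: PlayTimelineMessage.self) {
            let delay = message.startDate.timeIntervalSinceNow
            try? await Task.sleep(for: .seconds(max(0, delay)))
            // Post the RCP notification that triggers the timeline locally.
            // In a real app the scene also needs to be passed via the
            // "RealityKit.NotificationTrigger.Scene" key, as shown later in this thread.
            NotificationCenter.default.post(
                name: Notification.Name("RealityKit.NotificationTrigger"),
                object: nil,
                userInfo: ["RealityKit.NotificationTrigger.Identifier": "PlayTimeline"]
            )
        }
    }
}

// One participant broadcasts a start time slightly in the future.
func broadcastPlay(messenger: GroupSessionMessenger) async throws {
    try await messenger.send(PlayTimelineMessage(startDate: Date().addingTimeInterval(2)))
}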
Reality Composer Pro
Leverage the all new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or another Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
I created a custom component for Reality Composer Pro in which I have several variables I need an entity to have.
The idea is to add this component to some 3D models and save them as .usdz files, then load these .usdz files in code and do specific things depending on these variables.
The component shows up in Composer fine and I can set the variables there. The problem is that the values I set in Composer are different from what I see in code. Let's say in Composer I set canMove = true; when I read it in code, it is set to false.
I don't know if I'm missing something.
public struct MyObjectComponent: Component, Codable {
    public var affectAll: Bool = false
    public var affectFloor: Bool = false
    public var canMove: Bool = false
    public var moveX: Bool = false
    public var moveY: Bool = false
    public var moveZ: Bool = false
    public var canRotate: Bool = false
    public var rotateX: Bool = false
    public var rotateY: Bool = false
    public var rotateZ: Bool = false

    public init() {}
}
Any help appreciated.
Guillermo
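A common cause of this symptom (a guess here, since the full project isn't shown) is that the custom component type was never registered with RealityKit before the .usdz/scene is loaded, so the authored values can't be decoded and the Swift defaults from init() win. A minimal sketch of registering it at app launch, assuming a standard SwiftUI app entry point and a placeholder ContentView:

import SwiftUI
import RealityKit

@main
struct MyApp: App {
    init() {
        // Register the custom component before any scene that uses it is loaded;
        // otherwise values authored in Reality Composer Pro may fall back to the Swift defaults.
        MyObjectComponent.registerComponent()
    }

    var body: some Scene {
        WindowGroup {
            ContentView() // placeholder root view
        }
    }
}

It is also worth checking that the component used in code and the one in the Reality Composer Pro package are the same type from the same module, not two copies of the struct.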
I decided to use a club to kick a ball and let it roll on the turf in RealityKit, but right now I can only make it slide, not roll.
I added collision on the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
Using these parameters I calculate the linear damping and inertia; in addition, I use the time between frames and the club position to calculate the speed. The code looks like this:
let radius: Float = 0.025
let mass: Float = 0.04593 // mass, in kg
var inertia = 2/5 * mass * pow(radius, 2)

let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime

let C_d: Float = 0.47 // drag coefficient
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 is the air density)

entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping

// force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier
entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
I also tried applyImpulse instead of addForce, like:
let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static, dynamic) and restitution, and whether I use addForce or applyImpulse, the ball only slides and never rolls. How can I solve this problem?
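A hedged suggestion rather than a confirmed fix: for the ball to roll, the contact needs friction and the ball needs an angular velocity consistent with its linear velocity. One experiment is to generate an explicit physics material with friction for the turf and ball, and to seed the ball with a rolling spin via PhysicsMotionComponent. The entity names and numbers below are assumptions.

import RealityKit
import simd

// Friction is what converts sliding into rolling, so collide with a material that has it.
let rollingMaterial = PhysicsMaterialResource.generate(
    staticFriction: 0.8,
    dynamicFriction: 0.6,
    restitution: 0.3
)

// Give the ball spin consistent with rolling without slipping (omega = v / r).
// Assumes `ball` already has a dynamic PhysicsBodyComponent and a CollisionComponent.
func startRolling(ball: Entity, direction: SIMD3<Float>, speed: Float, radius: Float) {
    let travel = normalize(direction)
    // The spin axis is perpendicular to the travel direction on the ground plane.
    let spinAxis = normalize(cross(SIMD3<Float>(0, 1, 0), travel))
    ball.components.set(PhysicsMotionComponent(
        linearVelocity: travel * speed,
        angularVelocity: spinAxis * (speed / radius)
    ))
}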
If I put an alpha image texture on a model created in Blender and run it in RCP or visionOS, the rendering between the front and back due to alpha will result in an unintended rendering. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
I created a project using Reality Composer Pro. When I export to a .usdz file, it works well on iPhone 13, 14, 15, and 16 but not on iPhone 12 and 11.
In the timeline, I use a behavior with an "on added to scene" trigger to activate the intro animation and loop the background audio. But it does not work on older devices like the iPhone 12 and 11. Also, all interactive taps/touch points/audio don't seem to work either.
iPhone 13, 14, 15, and 16 are on iOS 18.1.
iPhone 11 and 12 are on iOS 17.6.1.
Here is a sample .usdz file exported from Reality Composer Pro 2.0 that has the problem above: https://drive.google.com/file/d/1sHZn9JABTswLq2flYjToTbWDuE5T7eNw/view?usp=sharing
I use this piece of code in Unity to get the length of my model that has entered another model. I set collision markers at the tip and end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS, so I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision editing feature like the one in Unity; I can only set the size of the collision body but cannot manually adjust or visualize its shape and size.
I want to achieve similar functionality using SwiftUI: to calculate and display the distance that my model A (like a needle or ruler) penetrates into another model or into a physical object's interior. Is similar functionality available, or are there other coding approaches to achieve this?
void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);

    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
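For reference, a rough RealityKit translation of the snippet above might look like the sketch below. Scene.raycast(from:to:) is a real RealityKit API, but the probe marker entities, how the scene is obtained, and the collision setup on the organ model are assumptions.

import RealityKit
import simd

// `probeBase` and `probeTip` are assumed marker entities at the ends of the probe;
// the organ model is assumed to have a CollisionComponent so the ray can hit it.
func lengthInsideOrgan(probeBase: Entity, probeTip: Entity, scene: RealityKit.Scene) -> Float {
    let basePosition = probeBase.position(relativeTo: nil)
    let tipPosition = probeTip.position(relativeTo: nil)
    let probeLength = distance(basePosition, tipPosition)

    // Cast from the base toward the tip and take the nearest hit on the organ surface.
    let hits = scene.raycast(from: basePosition, to: tipPosition, query: .nearest, relativeTo: nil)
    guard let firstHit = hits.first else { return 0 }

    // Everything past the entry point is inside the organ.
    return probeLength - firstHit.distance
}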
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content seems to appear in front of the visionOS window and block it, rather than being contained inside it. Do I need to switch to a volumetric view for this to work? My scene simply contains a flat display which renders 3D content (it has a material that sends different imagery to each eye).
Hi!
I'm trying to play video on a monitor 3D model, as a material.
I want to know if this is possible. I searched about it, but I couldn't find enough information. Thank you in advance.
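It is possible with RealityKit's VideoMaterial. A minimal sketch, assuming the monitor model has a child entity named "Screen" that represents the display surface (the entity name is an assumption about your asset):

import AVFoundation
import RealityKit

func attachVideo(to monitor: Entity, url: URL) {
    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)

    // Replace the material on the screen part of the monitor model with the video material.
    if let screen = monitor.findEntity(named: "Screen"),
       var model = screen.components[ModelComponent.self] {
        model.materials = [videoMaterial]
        screen.components.set(model)
    }

    player.play()
}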
Hi!
I'm using a timeline in Reality Composer Pro. I tried to use the "Enable Entities" action so that entities that are disabled at the beginning of the scene become enabled in the middle of the timeline playback. But it didn't work as I imagined (the entities kept appearing before the timeline started).
How do I solve this problem? Are there good solutions for it?
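One workaround that may help (a sketch, not a confirmed fix): explicitly disable the entities in code right after the scene loads, so the only thing that turns them on is the timeline's Enable Entities action. "HiddenGroup" is a placeholder entity name.

// Inside the RealityView make closure, after loading the RCP scene:
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
    // Ensure the timeline-controlled entities start out disabled.
    scene.findEntity(named: "HiddenGroup")?.isEnabled = false
    content.add(scene)
}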
Hi!
I want to know whether it's possible to load an Immersive Scene after scanning (recognizing) preregistered images or objects.
I tried to load the immersive scene after scanning images and objects, but it didn't work well.
Please let me know the solution if it's possible. Here is the ImmersiveView.swift code I tried.
// ImmersiveView.swift
import SwiftUI
import RealityKit
import RealityKitContent // Using the RealityKitContent module

struct ImmersiveView: View {
    @ObservedObject var viewModel: TrackingViewModel
    @State private var immersiveScene: Entity?
    @State private var isToggleOn: Bool = false // Variable for toggle state

    var body: some View {
        ZStack { // Overlay RealityView and UI elements
            RealityView { content in
                if let scene = immersiveScene {
                    content.add(scene)
                    print("Immersive scene successfully added.")
                    if let moneyGunsEntity = scene.findEntity(named: "MoneyGuns") {
                        NotificationCenter.default.post(
                            name: Notification.Name("RealityKit.NotificationTrigger"),
                            object: nil,
                            userInfo: [
                                "RealityKit.NotificationTrigger.Scene": scene,
                                "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
                            ]
                        )
                        print("PlayTimeline notification sent.")
                    } else {
                        print("MoneyGuns entity not found.")
                    }
                }
            }
            .onAppear {
                Task {
                    if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                        immersiveScene = scene
                    } else {
                        print("Failed to load immersive scene.")
                    }
                }
            }

            VStack {
                Spacer()
                Toggle(isOn: $isToggleOn) { // Add toggle button
                    Text("Toggle Option")
                        .foregroundColor(.white)
                }
                .padding()
                .background(Color.black.opacity(0.7))
                .cornerRadius(8)
                .padding()
            }
        }
    }
}
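One pattern that may help here (a sketch under assumptions, not a verified fix): run ARKit's ImageTrackingProvider in the immersive space, keep the Reality Composer Pro content disabled, and position and enable it only when an image anchor is detected. The AR resource group name "TargetImages" and how the root entity is supplied are assumptions.

import ARKit
import RealityKit

let session = ARKitSession()
let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
)

func runImageTracking(sceneRoot: Entity) async throws {
    try await session.run([imageTracking])

    for await update in imageTracking.anchorUpdates {
        guard update.event == .added, update.anchor.isTracked else { continue }
        // Place the immersive content at the detected image's pose and reveal it.
        sceneRoot.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        sceneRoot.isEnabled = true
    }
}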
Hi!
I'm making content using Room Tracking for Vision Pro these days.
I searched for information about it. Here are the links I visited, but I could not find the info I wanted to know:
Apple ARKit
Create enhanced spatial computing experiences with ARKit
RoomTrackingProvider
I want to know whether it's possible to remember the room structure that was recognized before, and to add content at a certain world anchor in the room space when the user enters the room again.
For example, a developer saves the room structure, room info (with a room ID), and a world anchor for the room using the Room Tracking feature.
After this, the developer adds entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users can see the content whenever they visit the room.
Is this possible?
If there are example codes or projects about this, please let me know.
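Yes, this is broadly what WorldTrackingProvider and WorldAnchor are for: anchors you add are persisted by the system and delivered again with the same UUID when the user returns to the space. A sketch under assumptions (how the UUIDs are stored and how content is placed is left to the app; `placeContent` is a placeholder):

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
// Run the session with the provider first: try await session.run([worldTracking])

// Save an anchor for a spot in the room; store the returned UUID with your own room data.
func saveRoomAnchor(at transform: simd_float4x4) async throws -> UUID {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor.id
}

// On a later visit, persisted anchors are delivered again; re-attach content to the
// ones whose UUIDs you stored.
func restoreContent(savedAnchorIDs: Set<UUID>, placeContent: (simd_float4x4) -> Void) async {
    for await update in worldTracking.anchorUpdates {
        if update.event == .added, savedAnchorIDs.contains(update.anchor.id) {
            placeContent(update.anchor.originFromAnchorTransform)
        }
    }
}

Note that RoomTrackingProvider itself reports the room's geometry and whether the user is in a known room, while content persistence is typically done with world anchors as above.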
I've been working on generating a KTX-format cubemap using the xcrun realitytool image --generate-cube-map command, but I'm encountering an issue with the file size. The cubemap file ends up being around 128 MB (at 2K resolution), which is too large for my needs. I'm hoping to get some advice on a few points:
Is there any way to reduce the file size of the KTX cubemap generated with this command? I’d appreciate any tips on compression settings or alternative approaches that could help shrink the file size while retaining good quality.
Is there any documentation available for this command? I've been exploring on my own, but a comprehensive guide would be helpful.
Are there alternative methods for converting textures to the KTX format? If anyone has experience with other tools or workflows that work well for this, please share!
Hi!
I'm making a project with Xcode and Reality Composer Pro. I'm trying to play a timeline in Reality Composer Pro from code, without setting Behaviors on entities. I also tried to send a notification from Xcode to the entities in Reality Composer Pro to play the timeline (I already set an "OnNotification" trigger with the Behaviors component). But it's not working well, and I couldn't figure out the problem. Are there solutions for this?
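For reference, the notification route needs the userInfo keys shown below, and the entity that owns the Behaviors component must already be added to the RealityView content when the notification is posted. The "PlayTimeline" identifier is an assumption; it has to match the name configured on the OnNotification trigger in Reality Composer Pro. Whether the "Scene" key should carry the entity itself or its scene property is handled differently across examples, so treat this as a sketch to experiment with rather than a definitive recipe.

import RealityKit

// `rcpRoot` is the loaded Reality Composer Pro entity whose Behaviors component
// has the OnNotification trigger.
func playTimeline(on rcpRoot: Entity) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            // The scene the entity belongs to; nil if the entity isn't in content yet.
            "RealityKit.NotificationTrigger.Scene": rcpRoot.scene as Any,
            // Must match the notification name set on the OnNotification trigger in RCP.
            "RealityKit.NotificationTrigger.Identifier": "PlayTimeline"
        ]
    )
}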
I have a VideoMaterial inside a RealityView and want to attach this to a DockingRegion inside an immersive environment.
It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video.
So essentially, how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as using AVPlayerViewController?
The latter is not an option as I need custom controls.
Hi, I'm working on a portfolio project for Vision Pro these days.
I have two projects: one is made with Unity and the other with Xcode (using ARKit and RealityKit tracking features). Is it possible to combine these two projects into one app?
For example, using buttons made with SwiftUI in a Reality Composer Pro scene to jump to a scene in Unity, and then going back from the Unity scene to a Reality Composer Pro scene, within one app.
As you can see, it is a transparent spherical shell model with a ball inside. Everything looks normal from the front, but there are strange mesh triangles in the side and back views. I don't know if this is expected, or what I need to do to remove these strange effects.
We are developing a mixed reality app for Vision Pro using Reality Composer Pro, but we're consistently encountering a Protobuf-related crash whenever we enter the immersive space. Our Reality Composer Pro package is quite complex, with numerous objects. Could the complexity of the package be contributing to the issue, or could something else be at play? No errors are being flagged in the code or Reality Composer itself.
Here’s the error log:
[libprotobuf FATAL /Library/Caches/com.apple.xbs/Sources/REKit/ThirdParty/protobuf/src/google/protobuf/io/zero_copy_stream_impl_lite.cc:276] CHECK failed: (count) >= (0):
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: (count) >= (0):
Any insight on what might be causing this would be appreciated.
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.
let particleEmitterEntities = context.entities(matching: particleEmitterQuery, updatingSystemWhen: .rendering)

for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        // trying to get the world space position of the hand
        // I also tried relative to particleEmitterEntity
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}
The particle attraction center doesn't seem to update.
Another issue I'm noticing is that my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center.
What is the right way to create an effect where the particles are attracted to my hand?
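A couple of hedged suggestions rather than a confirmed answer: attractionCenter may be interpreted in the emitter's local space, so converting the hand's world position into that space is worth trying; and writing the component back only when the value actually changes avoids resetting the emitter every frame, which could explain the placeholder-square rendering. A sketch based on the loop above:

// Convert the hand's world-space position into the emitter's local space (an assumption
// about how attractionCenter is interpreted), and avoid rewriting the component needlessly.
let handWorldPosition = rightHandEntity.position(relativeTo: nil)

for particleEmitterEntity in particleEmitterEntities {
    guard var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] else { continue }

    let localCenter = particleEmitterEntity.convert(position: handWorldPosition, from: nil)
    if particleEmitter.mainEmitter.attractionCenter != localCenter {
        particleEmitter.mainEmitter.attractionCenter = localCenter
        particleEmitterEntity.components.set(particleEmitter)
    }
}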
Hello,
I am looking to create a shader to update an entity's rendering. As a basic example, say I want to recolour an entity but leave its original textures showing through.
I understand that on visionOS I need to use Reality Composer Pro to create the shader, but I'm lost as to how to reference the original colour that I'm trying to update in the node graph. All my attempts appear to completely override the textures in the entity (and its sub-entities) that I want to affect. Also, the tutorials/examples I've looked at appear to create materials, rather than add an effect on top of existing materials.
Any hints or pointers?
Assuming this is possible, I've been trying to load the material in code and apply it to an entity. But do I need to do this for all child entities, or just the topmost one?
do {
    let entity = MyAssets.createModelEntity(.plane) // Loads from bundle and performs config
    let material = try await ShaderGraphMaterial(named: "/Root/TestMaterial", from: "Test", in: realityKitContentBundle)
    entity.applyToChildren {
        $0.components[ModelComponent.self]?.materials = [material]
    }
    root.addChild(entity)
} catch {
    fatalError(error.localizedDescription)
}
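On the child-entity question: the material generally has to be set on every descendant that has a ModelComponent, not just the topmost entity. applyToChildren in the snippet above looks like a custom helper; one possible implementation is sketched below. Note that plainly replacing the materials will override the original textures; to keep them showing through, the original textures would need to be wired into the ShaderGraph material's inputs in Reality Composer Pro.

import RealityKit

// Applies a material to an entity and all of its descendants that render geometry.
func applyMaterialRecursively(_ material: any Material, to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        // Replace every material slot on this model.
        model.materials = model.materials.map { _ in material }
        entity.components.set(model)
    }
    for child in entity.children {
        applyMaterialRecursively(material, to: child)
    }
}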