Hi,
I need to control standard and non-standard entity components over time.
For example, I want to change the opacity of a few entities over time with a timer.
To do that I have added an Opacity component to the entities whose opacity I want to change, created a system, and registered it.
The system's update method fires and inside it I am able to change the opacity, but after a few seconds it stops firing.
Once I move the window, the update method fires again for a few seconds, then stops.
Any idea why?
Any idea what to change in order to have it run continuously?
If that is by design, how can I access components at any time so I can change them when I need to?
I am using Windows, not Volumes or Immersive Spaces.
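For reference, my setup is essentially this minimal sketch (FadeComponent, FadeSystem, and the fade speed are placeholder names/values of my own, not from any Apple sample):

```swift
import RealityKit

// Hypothetical marker component for entities whose opacity should animate.
struct FadeComponent: Component {
    var speed: Float = 0.5
}

// A System whose update(context:) lowers OpacityComponent every frame.
struct FadeSystem: System {
    static let query = EntityQuery(where: .has(FadeComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let fade = entity.components[FadeComponent.self] else { continue }
            var opacity = entity.components[OpacityComponent.self] ?? OpacityComponent(opacity: 1)
            opacity.opacity = max(0, opacity.opacity - fade.speed * Float(context.deltaTime))
            entity.components.set(opacity)
        }
    }
}

// Registered once at app startup, e.g. in the App initializer:
// FadeComponent.registerComponent()
// FadeSystem.registerSystem()
```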
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under Reality Composer Pro tag
200 Posts
Hi all,
Up until a couple of days ago I was able to open and run Reality Composer Pro on my Intel-based Mac. I tried to open it again this morning and I now receive the notification "Reality Composer is not supported on this Mac".
I understand that I will eventually need a new computer with Apple silicon, but it was nice to be able to start exploring Shader Graphs with my existing computer for now.
Any suggestions? Perhaps go back to an earlier version of the beta Xcode - maybe the latest version disabled my ability to run RCP?
I'm running Xcode Version 15.1 beta (15C5042i) on an Intel i7 MacBook Pro.
Thanks, in advance!
Hello. I've started exploring the new features in Reality Composer Pro and noticed that it now supports adding custom scripts as components to any object in the scene. I'm curious about the following: will these scripts work if I export such a scene to a USDZ file and try to open it using Apple Quick Look? For instance, I want to add a 3D button and a cube model. When I press (touch) the button, I want to change the material or material color to another one using a script component. Is such functionality possible?
Hi,
I have a file in Reality Composer Pro that has a deep hierarchy. I downloaded it from an asset store, so I don't know how it is built.
As you can see from the screenshot, I'm trying to access the banana and banana_whole entities as ModelEntity, but I'm not able to load them as ModelEntity in Xcode.
I can load them as Entity and show them in the visionOS Simulator, but not as ModelEntity, which I need in order to do some operations.
What should I do?
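What I've been trying is roughly this sketch (the firstModelEntity helper is my own; the entity names come from the screenshot):

```swift
import RealityKit

// Depth-first search for the first descendant that is actually a ModelEntity.
// In RCP hierarchies the named node is often a plain transform Entity,
// with the mesh living on a child further down.
func firstModelEntity(in entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity { return model }
    for child in entity.children {
        if let model = firstModelEntity(in: child) { return model }
    }
    return nil
}

// Usage (names are from my scene and may differ in yours):
// let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
// if let node = scene.findEntity(named: "banana_whole"),
//    let banana = firstModelEntity(in: node) {
//     // banana is a ModelEntity; safe to touch banana.model, materials, etc.
// }
```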
Hey everybody,
I am quite new to developing on iOS, specifically in the AR section, and I have been struggling through documentation and can't find an answer for loading Reality Composer Pro scenes into an iOS app. There is a good amount of documentation on loading them into a visionOS app, but I haven't found it totally applicable. In the code block below I have been able to get my Reality Composer scene loaded, but I want the added functionality of Reality Composer Pro when developing my scenes and can't figure out how to get those to show up. How would I edit this code to load my Reality Composer Pro scene? My Reality Composer Pro project came over to Xcode as Package.realitycomposerpro when I dragged and dropped it in, but I don't know how I'd access a scene in it and the specific objects in that scene for iOS use. Thanks in advance!
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func loadRealityComposerScene(filename: String, fileExtension: String, sceneName: String) -> (Entity & HasAnchoring)? {
        guard let realitySceneURL = Bundle.main.url(forResource: filename, withExtension: fileExtension) else {
            return nil
        }
        let loadedAnchor = try? Entity.loadAnchor(contentsOf: realitySceneURL)
        return loadedAnchor
    }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Load the AR scene from ACT2.reality
        guard let anchor = loadRealityComposerScene(filename: "ACT2", fileExtension: "reality", sceneName: "Scene1") else {
            print("Failed to load the anchor from ACT2.reality")
            return arView
        }
        arView.scene.addAnchor(anchor)
        // Visualize collisions for debugging
        arView.debugOptions.insert(.showPhysics)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#Preview {
    ContentView()
}
I feel like I've heard of the Vision Pro supposedly being really great at object tracking / occlusion etc, but I can't find anything in the documentation or any actual examples.
Would love to find any clear information on this!
Creating an Entity and then changing between different animations (walk, run, jump, etc.) is pretty straightforward when you have individual USDZ files in your Xcode project, and then simply create an array of AnimationResource.
However, I'm trying to do it via the new Reality Composer Pro app because the docs state it's much more efficient versus individual files, but I'm having a heck of a time figuring out how exactly to do it.
Do you have one scene per USDZ file (does that erase any advantage over just loading individual files)? One scene with multiple entities? Something else all together?
If I try one scene with multiple entities within it, when I try to change animation I always get "Cannot find a BindPoint for any bind path" logged in the console, and the animation never actually occurs. This is with the same files that animate perfectly when just creating an array of AnimationResource manually via individual/raw USDZ files.
Anyone have any experience doing this?
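For reference, here is roughly how I'm gathering the animations from the loaded scene (the helper name is my own):

```swift
import RealityKit

// Walk the loaded scene and collect every animation together with the
// entity that owns it, so playAnimation can be called on the right entity.
func animations(in root: Entity) -> [(owner: Entity, resource: AnimationResource)] {
    var result: [(Entity, AnimationResource)] = []
    var stack: [Entity] = [root]
    while let entity = stack.popLast() {
        for animation in entity.availableAnimations {
            result.append((entity, animation))
        }
        stack.append(contentsOf: entity.children)
    }
    return result
}
```

As far as I can tell, playing an animation on an entity other than the one that owns its bind targets is what produces the "Cannot find a BindPoint for any bind path" warning, so keeping the owner alongside each resource seems important.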
Hello Everyone,
I'm facing a challenge related to resizing an entity built from a 3D model.
Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable.
Here's a snippet of my code:
let giftEntity = try await Entity(named: "gift")
I've come across an operator that allows for scaling the entity. However, I'm uncertain about the appropriate value to employ, especially since the realityView is encapsulated within an HStack, which is further nested inside a ScrollView.
Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable.
Thank you in advance for your assistance!
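For reference, what I've been experimenting with is roughly this sketch (the fit helper and the 0.3 m target are my own placeholders):

```swift
import RealityKit

// Scale an entity so its largest dimension matches a target size in meters,
// independent of how the source model was authored.
func fit(_ entity: Entity, toLargestDimension target: Float) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let extents = bounds.extents
    let largest = max(extents.x, extents.y, extents.z)
    guard largest > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: target / largest)
}

// Usage: make the gift roughly 0.3 m across.
// let giftEntity = try await Entity(named: "gift")
// fit(giftEntity, toLargestDimension: 0.3)
```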
I have a USDZ model with many animations in a single long clip. When I want to cut it via the AnimationView I can't tell at which moment I should trim it. Please add millisecond precision.
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR) and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application currently targets iOS and macOS (with AR only on iOS).
With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be supported, and isn't supported at all on visionOS.
We'd like to use RealityKit for 3D rendering on all platforms: macOS, iOS, and visionOS, in non-AR and AR mode where appropriate.
So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView.
However, now that we get to controlling shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive: there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something?
Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be for defining a custom material, whereas I guess we want to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint entry points.
On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with Reality Composer Pro?
We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a different, limited feature set on visionOS, as long as we can match our existing feature set on existing platforms.
What am I missing?
On Ventura -
We have a network extension (transparent proxy) which blocks IPv6 traffic as below:
override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
    // IPv6 gets blocked by the code below
    let error = NSError(domain: "", code: 0, userInfo: [NSLocalizedDescriptionKey: "Connection Refused"])
    flow.closeReadWithError(error)
    flow.closeWriteWithError(error)
    return true
}
On an IPv6-enabled client machine, when a client application (browser, curl, Teams, etc.) tries to send HTTP/S requests, it first tries the request over IPv6 and, if that fails, retries with IPv4 (the Happy Eyeballs algorithm).
In our case, as the network extension blocks IPv6 traffic, client applications fail to establish a connection over IPv6 and fall back to IPv4 per the Happy Eyeballs algorithm.
The above scenario works fine up to macOS Ventura.
On Sonoma, this behaviour seems to have changed.
When our network extension blocks IPv6 traffic, client applications do not fall back to IPv4.
They simply fail without trying IPv4. We tested with curl, the Google Chrome browser, and Microsoft Teams. All of these fail to load pages on Sonoma and work fine on Ventura.
Note: there is no change in our network extension code, curl, or browser versions. The only change is the macOS version.
Please find attached screenshots of running curl on Ventura and on Sonoma.
One other difference seen here is the error code received by client applications on Ventura versus Sonoma.
On Ventura, when IPv6 is blocked, the error is "Network is down" and the client application establishes a connection over IPv4.
On Sonoma, the error code is 22 (Invalid argument) and the client application does not retry with IPv4.
Curl_Ventura.jpg
Curl_Sonoma.png
Hi all,
I don't have a Vision Pro (yet), and I'm wondering if it is possible to preview my Reality Composer Pro project in AR using an iPad Pro or latest iPhones?
I am also interested in teaching others; I'm a college professor, and I don't believe that my students have Vision Pros either.
I could always use the iOS versions, as I have done in the past, but the Pro version is much more capable and it would be great to be able to use it.
Thanks for any comments on this!
This is for a basic product configurator. How would I provide a menu with 4-6 material swatches and then have those swap the materials on the model in the immersive scene?
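Roughly what I have in mind, as a sketch (SwatchMenu, the swatch list, and the target entity are all placeholders of my own):

```swift
import SwiftUI
import RealityKit

// A sketch of a configurator menu: tapping a swatch replaces every material
// slot on a ModelEntity with a solid-color material.
struct SwatchMenu: View {
    let model: ModelEntity
    let swatches: [(name: String, color: UIColor)] = [
        ("Red", .systemRed), ("Blue", .systemBlue), ("Green", .systemGreen)
    ]

    var body: some View {
        HStack {
            ForEach(swatches, id: \.name) { swatch in
                Button(swatch.name) {
                    let material = SimpleMaterial(color: swatch.color, isMetallic: false)
                    let count = model.model?.materials.count ?? 1
                    model.model?.materials = Array(repeating: material, count: count)
                }
            }
        }
    }
}
```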
The Apple documentation seems to say RealityKit should obey the autoplay metadata, but it doesn't seem to work. Is the problem with my (hand coded) USDA files, the Swift, or something else? Thanks in advance.
I can make the animations run with an explicit call to run, but what have I done wrong to get the one cube to autoplay?
https://github.com/carlynorama/ExploreVisionPro_AnimationTests
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        VStack {
            // A ModelEntity, not expected to autoplay
            Model3D(named: "cube_purple_autoplay", bundle: realityKitContentBundle)

            // An Entity, actually expected this to autoplay
            RealityView { content in
                if let cube = try? await Entity(named: "cube_purple_autoplay", in: realityKitContentBundle) {
                    print(cube.components)
                    content.add(cube)
                }
            }

            // Scene has one cube that should autoplay, one that should not.
            // Neither does, but both will start (as expected) with a click.
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    if enlarge {
                        for animation in scene.availableAnimations {
                            scene.playAnimation(animation.repeat())
                        }
                    } else {
                        scene.stopAllAnimations()
                    }
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .toggleStyle(.button)
            }.padding().glassBackgroundEffect()
        }
    }
}
No autoplay metadata
#usda 1.0
(
defaultPrim = "transformAnimation"
endTimeCode = 89
startTimeCode = 0
timeCodesPerSecond = 24
upAxis = "Y"
)
def Xform "transformAnimation" ()
{
def Scope "Geom"
{
def Xform "xform1"
{
float xformOp:rotateY.timeSamples = {
...
}
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]
over "cube_1" (
prepend references = @./cube_base_with_purple_linked.usd@
)
{
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate"]
}
}
}
}
With autoplay metadata
#usda 1.0
(
defaultPrim = "autoAnimation"
endTimeCode = 89
startTimeCode = 0
timeCodesPerSecond = 24
autoPlay = true
playbackMode = "loop"
upAxis = "Y"
)
def Xform "autoAnimation"
{
def Scope "Geom"
{
def Xform "xform1"
{
float xformOp:rotateY.timeSamples = {
...
}
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]
over "cube_1" (
prepend references = @./cube_base_with_purple_linked.usd@
)
{
quatf xformOp:orient = (1, 0, 0, 0)
float3 xformOp:scale = (2, 2, 2)
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
}
}
}
}
Surface screen position
Does it return the model's vertex XYZ positions, normalized?
The node graph needs more tutorials and explanations. I've made zero progress.
I have a blender project, for simplicity a black hole. The way that it is modeled is a sphere on top of a round plane, and then a bunch of effects on that.
I have tried multiple ways:
convert to USD from the file menu
convert to obj and then import
But all of them have resulted in just the body, not any effects.
Does anybody know how to do this properly? I seem to have no clue except for going through the Reality Converter Pro (which I planned on going through already - but modeling it there)
How would I make a some simple toggle buttons to hide or show specific entities within a scene created in Reality Composer Pro?
I'd imagine that within Reality Composer Pro, all entities would already be in place, and then from Xcode I would be turning them on or off.
Additionally I was curious about how I would go about swapping out colors / materials for specific entities.
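Something like this sketch is what I'm imagining (the entity name "Lamp" is a placeholder for whatever the entity is called in the Reality Composer Pro scene):

```swift
import RealityKit

// Toggle a named entity in a loaded Reality Composer Pro scene.
// Disabled entities (and their children) are not rendered.
func setVisibility(of name: String, in scene: Entity, visible: Bool) {
    scene.findEntity(named: name)?.isEnabled = visible
}

// Usage, driven by e.g. a SwiftUI Toggle binding:
// setVisibility(of: "Lamp", in: scene, visible: showLamp)
```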
Thread 1: Fatal error: No ObservableObject of type GlobalEnvironment found. A View.environmentObject(_:) for GlobalEnvironment may be missing as an ancestor of this view.
The error I get with visionOS simulator:
cannot migrate AudioUnit assets for current process
code:
guard let resource = try? AudioFileGroupResource.load(
    named: "/Root/AudioGroupDropStone",
    from: "Scene.usda",
    in: realityKitContentBundle
) else { return }
Any ideas how to debug this?
The audio files seem to work fine in Reality Composer Pro.