Hello there,
Can you please give me a complete roadmap to become a visionOS developer?
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under Reality Composer Pro tag
200 Posts
I'm creating an immersive experience with RealityView (think of it as a Fruit Ninja-like experience). Say I have some randomly generated fruits, created according to certain criteria in the System.update function, and I want to interact with these generated fruits using hand gestures.
Well, it simply doesn't work: the gesture's onChanged closure isn't fired as I expected. I added both an InputTargetComponent and a CollisionComponent to make the fruits detectable in an immersive view. It works fine if I set these fruits up in the scene with Reality Composer Pro before the app runs.
Here is what I did.
First, I load the fruit template:
let tempScene = try await Entity(named: "fruitPrefab.usda", in: realityKitContentBundle)
fruitTemplate = tempScene.findEntity(named: "fruitPrefab")
Then I clone it during the System.update(context) function. parent is an invisible entity placed at .zero in my loaded immersive scene:
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale
parent.addChild(fruitClone)
I attached my gesture to the RealityView with:
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            print("dragging")
        }
        .onEnded { tapEnd in
            print("dragging ends")
        }
)
I was wondering whether the runtime-generated entity is not tracked by RealityView, but since I have added it as a child of a placeholder entity in the scene, it should be fine... right?
Or do I just need to put a new AnchorEntity there?
Thanks in advance for any advice. I've been trying this out all day.
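For reference, a hedged sketch of the clone setup with the hit-testing components set explicitly on each runtime clone (the collision shape and radius are assumptions, not from the original post):

```swift
// Sketch: ensure every runtime clone carries the components RealityView
// hit-testing needs. The sphere radius is an assumed placeholder.
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale
fruitClone.components.set(InputTargetComponent())
fruitClone.components.set(
    CollisionComponent(shapes: [.generateSphere(radius: 0.05)])
)
parent.addChild(fruitClone)
```

If the template already has both components, clone(recursive: true) should copy them, so the sketch mainly rules out the case where they were attached to a different node in the prefab's hierarchy.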
With the latest software upgrades, such as Sonoma 14.0, Xcode 15.0 beta 8, and Reality Composer Pro, what is the alternative to Behaviours for a web-based AR app on iOS 17? Is there any example, documentation, or tutorial on how to use custom components to provide user interactivity, such as click events, and be able to export the scene as USDZ?
The error I get with visionOS simulator:
cannot migrate AudioUnit assets for current process
code:
guard let resource = try? AudioFileGroupResource.load(
    named: "/Root/AudioGroupDropStone",
    from: "Scene.usda",
    in: realityKitContentBundle
) else { return }
Any ideas how to debug this?
The audio files seem to work fine in Reality Composer Pro.
Thread 1: Fatal error: No ObservableObject of type GlobalEnvironment found. A View.environmentObject(_:) for GlobalEnvironment may be missing as an ancestor of this view.
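A minimal sketch of what that runtime error is asking for, assuming GlobalEnvironment is the app's own ObservableObject class and MyApp/ContentView stand in for the real app types:

```swift
import SwiftUI

// Assumed placeholder for the app's observable model.
final class GlobalEnvironment: ObservableObject {}

@main
struct MyApp: App {
    @StateObject private var global = GlobalEnvironment()

    var body: some Scene {
        WindowGroup {
            ContentView()
                // Injects the object so any descendant declaring
                // @EnvironmentObject var env: GlobalEnvironment can find it.
                .environmentObject(global)
        }
    }
}
```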
How would I make some simple toggle buttons to hide or show specific entities within a scene created in Reality Composer Pro?
I'd imagine that within Reality Composer Pro all entities would already be in place, and then from Xcode I would be turning them on or off.
Additionally, I was curious how I would go about swapping out colors/materials for specific entities.
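A sketch of one way this could look, assuming the Reality Composer Pro scene has been loaded into `scene` and the entity names ("Lamp", "Chair") are placeholders:

```swift
// Toggle visibility: disabled entities are not rendered.
if let lamp = scene.findEntity(named: "Lamp") {
    lamp.isEnabled.toggle()
}

// Swap a material on a ModelEntity found in the RCP hierarchy.
if let chair = scene.findEntity(named: "Chair") as? ModelEntity {
    chair.model?.materials = [SimpleMaterial(color: .red, isMetallic: false)]
}
```

Wiring `isEnabled` to a SwiftUI Toggle's binding gives the show/hide buttons described above.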
I have a blender project, for simplicity a black hole. The way that it is modeled is a sphere on top of a round plane, and then a bunch of effects on that.
I have tried multiple ways:
convert to USD from the file menu
convert to obj and then import
But all of them have resulted in just the body, without any of the effects.
Does anybody know how to do this properly? I seem to have no clue, other than going through Reality Converter Pro (which I had planned on going through already, but modeling it there).
Surface screen position
Does it return the model's vertices' XYZ positions, normalized?
The node graph needs more tutorials and explanations.
I've made zero progress.
The Apple documentation seems to say RealityKit should obey the autoplay metadata, but it doesn't seem to work. Is the problem with my (hand-coded) USDA files, the Swift, or something else? Thanks in advance.
I can make the animations run with an explicit call to play them, but what have I done wrong that keeps the one cube from autoplaying?
https://github.com/carlynorama/ExploreVisionPro_AnimationTests
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
@State var enlarge = false
var body: some View {
VStack {
//A ModelEntity, not expected to autoplay
Model3D(named: "cube_purple_autoplay", bundle: realityKitContentBundle)
//An Entity, actually expected this to autoplay
RealityView { content in
if let cube = try? await Entity(named: "cube_purple_autoplay", in: realityKitContentBundle) {
print(cube.components)
content.add(cube)
}
}
//Scene has one cube that should auto play, one that should not.
//Neither do, but both will start (as expected) with click.
RealityView { content in
// Add the initial RealityKit content
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
content.add(scene)
}
} update: { content in
// Update the RealityKit content when SwiftUI state changes
if let scene = content.entities.first {
if enlarge {
for animation in scene.availableAnimations {
scene.playAnimation(animation.repeat())
}
} else {
scene.stopAllAnimations()
}
let uniformScale: Float = enlarge ? 1.4 : 1.0
scene.transform.scale = [uniformScale, uniformScale, uniformScale]
}
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
enlarge.toggle()
})
VStack {
Toggle("Enlarge RealityView Content", isOn: $enlarge)
.toggleStyle(.button)
}.padding().glassBackgroundEffect()
}
}
}
No autoplay metadata
#usda 1.0
(
defaultPrim = "transformAnimation"
endTimeCode = 89
startTimeCode = 0
timeCodesPerSecond = 24
upAxis = "Y"
)
def Xform "transformAnimation" ()
{
def Scope "Geom"
{
def Xform "xform1"
{
float xformOp:rotateY.timeSamples = {
...
}
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]
over "cube_1" (
prepend references = @./cube_base_with_purple_linked.usd@
)
{
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate"]
}
        }
    }
}
With autoplay metadata
#usda 1.0
(
defaultPrim = "autoAnimation"
endTimeCode = 89
startTimeCode = 0
timeCodesPerSecond = 24
autoPlay = true
playbackMode = "loop"
upAxis = "Y"
)
def Xform "autoAnimation"
{
def Scope "Geom"
{
def Xform "xform1"
{
float xformOp:rotateY.timeSamples = {
...
}
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]
over "cube_1" (
prepend references = @./cube_base_with_purple_linked.usd@
)
{
quatf xformOp:orient = (1, 0, 0, 0)
float3 xformOp:scale = (2, 2, 2)
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
}
}
}
}
This is for a basic product configurator. How would I provide a menu with 4-6 material swatches and then have those swap the materials on the model in the immersive scene?
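One possible shape for this, sketched with made-up swatch names and a SimpleMaterial swap (a real configurator might load authored materials from the Reality Composer Pro scene instead):

```swift
import SwiftUI
import RealityKit

// Hypothetical swatch menu; names and colors are placeholders.
struct SwatchMenu: View {
    let swatches: [(name: String, color: UIColor)] = [
        ("Oak", .brown), ("Slate", .darkGray),
        ("Sky", .systemBlue), ("Ruby", .systemRed)
    ]
    var model: ModelEntity?

    var body: some View {
        HStack {
            ForEach(swatches, id: \.name) { swatch in
                Button(swatch.name) {
                    // Replaces every material slot; a per-slot swap would
                    // index into the materials array instead.
                    model?.model?.materials = [
                        SimpleMaterial(color: swatch.color, isMetallic: false)
                    ]
                }
            }
        }
    }
}
```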
Hi all,
I don't have a Vision Pro (yet), and I'm wondering if it is possible to preview my Reality Composer Pro project in AR using an iPad Pro or one of the latest iPhones.
I'm also interested in teaching others; I'm a college professor, and I don't believe that my students have Vision Pros either.
I could always use the iOS versions, as I have done in the past, but the Pro version is much more capable, and it would be great to be able to use it.
Thanks for any comments on this!
On Ventura:
We have a network extension (transparent proxy) which blocks IPv6 traffic as below.
override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
    // IPv6 gets blocked by the code below
    let error = NSError(domain: "", code: 0, userInfo: [NSLocalizedDescriptionKey: "Connection Refused"])
    flow.closeReadWithError(error)
    flow.closeWriteWithError(error)
    return true
}
On an IPv6-enabled client machine, when client applications (browsers, curl, Teams, etc.) try to send HTTP(S) requests, they first try to send the request over IPv6, and if that fails, they try IPv4 (the Happy Eyeballs algorithm).
In our case, as the network extension blocks IPv6 traffic, client applications fail to establish a connection over IPv6 and fall back to IPv4 per the Happy Eyeballs algorithm.
The above scenario works fine up to macOS Ventura.
On Sonoma, this behaviour seems to have changed.
When our network extension blocks IPv6 traffic, client applications do not fall back to IPv4.
They simply fail without trying IPv4. We tested with curl, the Google Chrome browser, and Microsoft Teams. All of these fail to load pages on Sonoma and work fine on Ventura.
Note: there is no change in our network extension code or in the curl and browser versions. The only change is the macOS version.
Please find attached screenshots of curl runs on Ventura and Sonoma.
One other difference seen here is the error code received by client applications on Ventura vs. Sonoma.
On Ventura, when IPv6 is blocked, the error is "Network is down" and the client application establishes a connection over IPv4.
On Sonoma, the error code is 22 ("Invalid argument") and the client application does not retry with IPv4.
Curl_Ventura.jpg
Curl_Sonoma.png
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR), and an AR "mode" the user can go into. Currently we use a SCNView and an ARSCNView to achieve this. Our application currently targets iOS and MacOS (with AR only on iOS).
With VisionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be supported, and isn't supported at all on VisionOS.
We'd like to use RealityKit for 3D rendering on all platforms; MacOS, iOS and VisionOS, in non-AR and AR mode where appropriate.
So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView.
However, now that we get to shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive: there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something?
Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be for defining a custom material, whereas what we want is to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint entry points.
On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with Reality Composer Pro?
We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a different, limited feature set on visionOS, as long as we can match our existing feature set on the existing platforms.
What am I missing?
I have a USDZ model with many animations in a single long clip. When I want to cut it via AnimationView, I can't tell at which moment I should trim it. So please add milliseconds.
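Until the editor shows finer timestamps, trims can at least be done in code with fractional seconds. A sketch, assuming the long clip is the entity's first available animation and the trim times are placeholders:

```swift
// Cut a sub-clip out of one long animation via AnimationView.
// trimStart/trimEnd take TimeInterval, so millisecond precision is
// available as fractional seconds.
let longClip = entity.availableAnimations[0]
let subClip = AnimationView(source: longClip.definition,
                            trimStart: 1.250,
                            trimEnd: 2.500)
if let resource = try? AnimationResource.generate(with: subClip) {
    entity.playAnimation(resource.repeat())
}
```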
Hello Everyone,
I'm facing a challenge related to resizing an entity built from a 3D model.
Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable.
Here's a snippet of my code:
let giftEntity = try await Entity(named: "gift")
I've come across an operator that allows for scaling the entity. However, I'm uncertain about the appropriate value to use, especially since the RealityView is encapsulated within an HStack, which is further nested inside a ScrollView.
Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable.
Thank you in advance for your assistance!
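Since `scale` is relative, one common approach is to derive it from the entity's visual bounds so the model fits a target size regardless of how it was authored. A sketch, where the 0.3 m target is an assumption:

```swift
// Fit the entity into a 0.3 m cube by scaling relative to its bounds.
let giftEntity = try await Entity(named: "gift")
let bounds = giftEntity.visualBounds(relativeTo: nil)
let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
if maxExtent > 0 {
    giftEntity.scale *= SIMD3<Float>(repeating: 0.3 / maxExtent)
}
```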
Creating an Entity and then changing between different animations (walk, run, jump, etc.) is pretty straightforward when you have individual USDZ files in your Xcode project, and then simply create an array of AnimationResource.
However, I'm trying to do it via the new Reality Composer Pro app because the docs state it's much more efficient versus individual files, but I'm having a heck of a time figuring out how exactly to do it.
Do you have one scene per USDZ file (does that erase any advantage over just loading individual files)? One scene with multiple entities? Something else all together?
If I try one scene with multiple entities within it, when I try to change animation I always get "Cannot find a BindPoint for any bind path" logged in the console, and the animation never actually occurs. This is with the same files that animate perfectly when just creating an array of AnimationResource manually via individual/raw USDZ files.
Anyone have any experience doing this?
I feel like I've heard that the Vision Pro is supposedly really great at object tracking/occlusion, etc., but I can't find anything in the documentation or any actual examples.
Would love to find any clear information on this!
Hey everybody,
I am quite new to developing on iOS, specifically in the AR space, and I have been struggling through documentation and can't find an answer for loading Reality Composer Pro scenes into an iOS app. There is a good amount of documentation on loading them into a visionOS app, but I haven't found it directly applicable. In the code block below I have been able to get my Reality Composer scene loaded, but I want the added functionality of Reality Composer Pro when developing my scenes and can't figure out how to get those to show up. How would I edit this code to load my Reality Composer Pro scene? My Reality Composer Pro project came over to Xcode as Package.realitycomposerpro when I dragged and dropped it in, but I don't know how I'd access a scene in it, and the specific objects in that scene, for iOS use. Thanks in advance!
import SwiftUI
import RealityKit
struct ContentView: View {
var body: some View {
ARViewContainer().edgesIgnoringSafeArea(.all)
}
}
struct ARViewContainer: UIViewRepresentable {
func loadRealityComposerScene(filename: String, fileExtension: String, sceneName: String) -> (Entity & HasAnchoring)? {
guard let realitySceneURL = Bundle.main.url(forResource: filename, withExtension: fileExtension) else {
return nil
}
let loadedAnchor = try? Entity.loadAnchor(contentsOf: realitySceneURL)
return loadedAnchor
}
func makeUIView(context: Context) -> ARView {
let arView = ARView(frame: .zero)
// Load the AR Scene from ACT2.reality
guard let anchor = loadRealityComposerScene(filename: "ACT2", fileExtension: "reality", sceneName: "Scene1") else {
print("Failed to load the anchor from ACT2.reality")
return arView
}
arView.scene.addAnchor(anchor)
// Visualize Collisions for Debugging
arView.debugOptions.insert(.showPhysics)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) {}
}
#Preview {
ContentView()
}
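For comparison, loading a Reality Composer Pro scene on iOS generally goes through the generated RealityKitContent package rather than Entity.loadAnchor. A sketch, where "Scene1" and the horizontal-plane anchor are assumptions:

```swift
import RealityKit
import RealityKitContent  // the Package.realitycomposerpro target

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    Task {
        // The name must match a scene inside the RCP package.
        if let scene = try? await Entity(named: "Scene1", in: realityKitContentBundle) {
            let anchor = AnchorEntity(plane: .horizontal)
            anchor.addChild(scene)
            arView.scene.addAnchor(anchor)
        }
    }
    return arView
}
```

Note that the Reality Composer Pro package must be added as a dependency of the iOS app target for `realityKitContentBundle` to resolve.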
Hi,
I have a file in Reality Composer Pro that has a deep hierarchy. I downloaded it from an asset store, so I don't know how it is built.
As you can see from the screenshot, I'm trying to access the banana and banana_whole entities as ModelEntity, but I'm not able to load them as ModelEntity in Xcode.
I can load them as Entity and show them in the visionOS simulator, but not as ModelEntity, which I need in order to do some operations.
What should I do?
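One hedged suggestion: in deep hierarchies the named prim is often a plain Entity wrapper whose mesh lives on a descendant, so searching below the named node for the nearest ModelEntity may help. `loadedScene` is a placeholder for the loaded root:

```swift
// Return the first ModelEntity in the subtree rooted at `root`.
func firstModelEntity(under root: Entity) -> ModelEntity? {
    if let model = root as? ModelEntity { return model }
    for child in root.children {
        if let found = firstModelEntity(under: child) { return found }
    }
    return nil
}

// findEntity(named:) returns a plain Entity; dig below it for the mesh.
if let bananaNode = loadedScene.findEntity(named: "banana"),
   let bananaModel = firstModelEntity(under: bananaNode) {
    // bananaModel is a ModelEntity you can operate on
}
```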