Hello. I've started exploring the new features in Reality Composer Pro and noticed that Composer now supports adding custom scripts as components to any object in the scene. I'm curious about the following: will these scripts work if I export such a scene to a USDZ file and open it with Apple Quick Look? For instance, I want to add a 3D button and a cube model. When I press (touch) the button, I want a script component to change the material, or its color, to another one. Is such functionality possible?
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps
Posts under Reality Composer Pro tag
200 Posts
Hi all,
Up until a couple of days ago I was able to open and run Reality Composer Pro on my intel-based Mac. I tried to open it again this morning and I now receive the notification "Reality Composer is not supported on this Mac".
I understand that I will eventually need a new computer with Apple silicon but it was nice to be able to start exploring Shader Graphs with my existing computer for now.
Any suggestions? Perhaps go back to an earlier version of the beta Xcode - maybe the latest version disabled my ability to run RCP?
I'm running Version 15.1 beta (15C5042i) of Xcode on an Intel i7 MacBook Pro.
Thanks, in advance!
Hi,
I need to control standard and non-standard entity components over time.
For example, I want to change the opacity of a few entities over time with a timer.
To do that, I added an Opacity component to the entities whose opacity I want to change, created a system, and registered it.
The system fires its update method, and inside it I am able to change the opacity, but after a few seconds it stops firing.
Once I move the window, the update method fires again for a few seconds, then stops.
Any idea why?
Any idea what to change in order to have it run continuously?
If that is by design, how can I access components at any time, so I can change them whenever I need to?
I am using Windows, not Volumes or Immersive Spaces.
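For reference, the pattern described above can be sketched as a custom component plus a System whose update drives OpacityComponent each frame. All names here (FadeComponent, FadeSystem) are hypothetical, not the poster's actual code; one assumption worth checking is the `updatingSystemWhen: .rendering` condition, since RealityKit may stop delivering updates when it decides nothing needs re-rendering:

```swift
import RealityKit

// Hypothetical component carrying fade state (an illustration, not the
// poster's actual component).
struct FadeComponent: Component {
    var duration: TimeInterval = 2.0
    var elapsed: TimeInterval = 0
}

// A System that updates OpacityComponent from FadeComponent every frame.
struct FadeSystem: System {
    static let query = EntityQuery(where: .has(FadeComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard var fade = entity.components[FadeComponent.self] else { continue }
            fade.elapsed += context.deltaTime
            let t = Float(min(fade.elapsed / fade.duration, 1.0))
            entity.components.set(OpacityComponent(opacity: 1.0 - t))
            entity.components.set(fade)
        }
    }
}

// Register once, e.g. in the App initializer:
// FadeComponent.registerComponent()
// FadeSystem.registerSystem()
```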
Hello all -
I'm experiencing a shading error when I have two UnlitSurface shaders that use images for color and opacity. When the shaders are applied to two mesh planes, one placed in front of the other, the plane in front renders but masks out and fails to render what is behind it.
Basically - it looks like the opacity map on the shader in front is creating a 'mask'.
I've attached some images here to help explain.
Has anyone experienced this error? And how can I go about fixing it? Thanks!
HELP!
Computer: MacBook Pro, macOS 13.6.1 (Intel)
1. First, my Xcode is Version 15.1 beta.
Second, Xcode shows
Third, I push the , but Xcode shows the error: Failed with HTTP status 400: bad request.
The detailed info:
Failed with HTTP status 400: bad request
Domain: DataGatheringNSURLSessionDelegate
Code: 1
User Info: {
DVTErrorCreationDateKey = "2023-11-17 13:03:58 +0000";
}
--
System Information
macOS Version 13.6.1 (Build 22G313)
Xcode 15.1 (22501) (Build 15C5028h)
Timestamp: 2023-11-17T21:03:58+08:00
2. Then I tried another way.
First, I downloaded the xrOS simulator (visionOS_1_beta_3_Simulator_Runtime.dmg),
and second, tried these commands in the terminal:
xcode-select -s /Applications/Xcode-beta.app
xcodebuild -runFirstLaunch
xcrun simctl runtime add "~/Downloads/watchOS 9 beta Simulator Runtime.dmg"
but it shows this error info:
D: F238A4FF-FF5B-4C87-B202-28EAE59C558A xrOS (1.0 - 21N5233f) (Unusable - Other Failure: Error Domain=SimDiskImageErrorDomain Code=5 "Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9" UserInfo={NSLocalizedDescription=Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9, unusableErrorDetail=})
Does anyone know the reason, and how I can solve it? Thank you!
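The "Duplicate of …" message suggests two copies of the runtime image got registered. One hedged approach, assuming the UUID in the error identifies the unusable copy, is to delete it with simctl and re-add the disk image (note that a `~` inside double quotes is not expanded by the shell, which may itself be part of the problem in the command above):

```shell
# List installed simulator runtimes and their identifiers.
xcrun simctl runtime list

# Delete the unusable duplicate (UUID taken from the error message above),
# then add the visionOS disk image again using an expandable path.
xcrun simctl runtime delete F238A4FF-FF5B-4C87-B202-28EAE59C558A
xcrun simctl runtime add "$HOME/Downloads/visionOS_1_beta_3_Simulator_Runtime.dmg"
```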
Hi! I'm having an issue creating a PortalComponent on visionOS.
I'm trying to anchor a Portal to a wall or floor anchor, but the portal always appears oriented opposite to the anchor:
If I use a vertical anchor (wall), the portal appears horizontal in the scene.
If I use a horizontal anchor (floor), the portal appears vertical in the scene.
I've tested on Xcode
15.1.0 beta 3
15.1.0 beta 2
15.0 beta 8
Any ideas? Thank you so much!
I have a Sphere Entity that has a VideoMaterial on it, and a floor; I want the reflection of the video material on that floor. Are there any possible ways to do this?
In my Reality Composer scene, I have added a spatial audio. How do I play this from my swift code?
I loaded the scene the following way:
myEntity = try await Entity(named: "grandScene", in: realityKitContentBundle)
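Given a scene loaded that way, one hedged sketch of playing the audio: look up the audio emitter entity and load the resource from the same bundle. The entity name "SpatialAudio" and the resource path are assumptions; they must match the names shown in Reality Composer Pro for your scene:

```swift
import RealityKit
import RealityKitContent

// Hypothetical names: "SpatialAudio" is the audio emitter entity in the
// scene, and "/Root/soundtrack_mp3" the audio resource path.
func playSceneAudio(in myEntity: Entity) async {
    guard let emitter = myEntity.findEntity(named: "SpatialAudio") else { return }
    do {
        let resource = try await AudioFileResource(
            named: "/Root/soundtrack_mp3",
            from: "grandScene.usda",
            in: realityKitContentBundle
        )
        _ = emitter.playAudio(resource)
    } catch {
        print("Failed to load audio resource: \(error)")
    }
}
```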
I know that CustomMaterial in RealityKit can update its texture using a DrawableQueue, but on the new visionOS, CustomMaterial no longer works. How can I do the same thing? Can ShaderGraphMaterial do it? I can't find an example of how to do that. Looking forward to your reply, thank you!
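As a hedged sketch: ShaderGraphMaterial does let you swap an exposed texture input at runtime via setParameter, though it does not accept a DrawableQueue the way CustomMaterial did. The material path and the promoted input name "DiffuseTexture" below are assumptions and must match what you expose in Reality Composer Pro:

```swift
import RealityKit
import RealityKitContent

// Assumed names: the material path "/Root/MyMaterial" and the promoted
// shader-graph input "DiffuseTexture" are illustrative only.
func updateTexture(on model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(
        named: "/Root/MyMaterial",
        from: "Scene.usda",
        in: realityKitContentBundle
    )
    // Load a replacement texture and bind it to the exposed input.
    let texture = try await TextureResource(named: "frame")
    try material.setParameter(name: "DiffuseTexture",
                              value: .textureResource(texture))
    model.model?.materials = [material]
}
```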
Can AR projects run on a visionOS simulator?
Hi guys,
has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies?
Basically, I would like to start with one or two of my apps that I've already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project.
After that I would like to use the dev kit on another project. I work on contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users utilizing their services. However, I cannot quite reveal details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do).
Is it worth applying for a dev kit as an individual dev? I have read some posts saying that people were rejected.
Is it better to start in the simulator and just wait for the actual hardware to show up in the store? I would prefer to just get the device, rather than start working with a device that I may need to return in the middle of an unfinished project.
Any info on when pre-orders might be possible?
Any idea what Mac specs are recommended for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96GB RAM, and I'm wondering if I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit?
Thanks.
How can I control which content is shown to the left and right eyes through code?
Hello Everyone,
I'm currently facing a challenge related to detecting taps on an entity that features video material.
Based on the information I found online, it appears that in order to enable touch functionality, the recommended approach is to clone the entity and add an InputTargetComponent while also enabling collision shapes.
Here's a snippet of my code:
RealityView { content, attachments in
    // The following code doesn't trigger the tapGesture
    let videoEntity = ImmersivePlayerEntity(configuration: configuration)
    content.add(videoEntity)

    if let attachment = attachments.entity(for: "player-controls") {
        anchorEntity.addChild(attachment)
        content.add(anchorEntity)
    }

    /* This code triggers the tapGesture
    let boxResource = MeshResource.generateBox(size: 2)
    let itemMaterial = SimpleMaterial(color: .red, roughness: 0, isMetallic: false)
    let entity = ModelEntity(mesh: boxResource, materials: [itemMaterial]).addTappable()
    content.add(entity)
    */
} update: { _, _ in
} attachments: {
    Attachment(id: "player-controls") {
        ImmersivePlayerControlsView(coordinator: coordinator)
            .frame(width: 1280)
            .opacity(areControlsVisible ? 1 : 0)
            .animation(.easeInOut, value: areControlsVisible)
    }
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            areControlsVisible.toggle()
        }
)
extension Entity {
    func addTappable() -> Entity {
        let newModelEntity = self.clone(recursive: true)
        newModelEntity.components.set(InputTargetComponent())
        newModelEntity.generateCollisionShapes(recursive: true)
        return newModelEntity
    }
}
I'm seeking guidance and assistance on how to enable touch functionality on the video entity. Your insights and suggestions would be greatly appreciated. Thank you in advance for your help!
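For comparison, a hedged alternative to cloning: set the components directly on the existing video entity and give it an explicit collision shape sized from its visual bounds. The thinking (an assumption, not a confirmed diagnosis) is that generateCollisionShapes can produce an empty shape when an entity has no ModelComponent of its own, which would explain why the video entity ignores taps:

```swift
import RealityKit

// Sketch: make an existing entity tappable in place, without cloning.
// Assumes the entity's visual bounds are non-empty.
func makeTappable(_ entity: Entity) {
    // Bounds in the entity's own coordinate space.
    let bounds = entity.visualBounds(relativeTo: entity)
    entity.components.set(InputTargetComponent())
    entity.components.set(
        CollisionComponent(shapes: [.generateBox(size: bounds.extents)])
    )
}
```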
When creating a USDA file in a DCC, I want RCP to import it as expected with materials assigned. However, I’m finding that the material is not imported correctly, despite it rendering correctly in the preview pane and the textures being pulled in.
The workaround is to recreate the material in the shader tree, but then it overrides any material changes I make to the original USDA. Please advise me on what I need to do here to correctly import materials into RCP.
Using USDZ files is not ideal, as I want to make sure changes can easily be made upstream.
Sorry about the link, but I can't seem to upload it to the post.
https://pasteboard.co/bmhl3t004APu.png
Any guidance here is much appreciated!
Hey guys,
how can I fit RealityView content inside a volumetric window?
I have the simple example below:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model has loaded.
Can we achieve the same result using a RealityView?
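One hedged approach, assuming the 0.6 m volume above: measure the entity's visualBounds after loading and scale it uniformly to fit. As far as I know there is no built-in .scaledToFit() equivalent for RealityView, so the fitting has to be done manually:

```swift
import RealityKit

// Sketch: uniformly scale an entity so its largest dimension fits
// within a target size in meters (0.6 m matches the volume above).
func fit(_ entity: Entity, into targetSize: Float = 0.6) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
    guard maxExtent > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: targetSize / maxExtent)
}
```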
Cheers
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode with the new ShaderGraphMaterial?
Hi,
I have a usdz asset of a torus / hoop shape that I would like to pass another RealityKit Entity (a cube-like object) through (without touching the torus) in visionOS, similar to how a basketball goes through a hoop.
Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, vs when the cube passes cleanly through the opening in the torus.
I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is in the fact that the collision shape of the torus is a rectangular box, and not the actual shape of the torus. I know that the collision shape is a rectangular box because I can see this in the vision os simulator by enabling "Collision Shapes"
Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up: can I create a torus in RealityKit directly, so I don't even have to use a .usdz asset?
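One hedged sketch for the collision-shape half of the question: build a concave (static mesh) collision shape from the torus geometry itself, so the hole is respected instead of being boxed over. This assumes the entity carries a ModelComponent whose mesh parts expose positions and triangle indices; note that static-mesh shapes are intended for non-moving geometry colliding with moving bodies:

```swift
import RealityKit

// Sketch: replace the default box collider with a collision shape built
// from the entity's actual mesh triangles.
func applyConcaveCollision(to entity: ModelEntity) async throws {
    guard let mesh = entity.model?.mesh else { return }

    // Gather all vertex positions and triangle indices from the mesh.
    var positions: [SIMD3<Float>] = []
    var faceIndices: [UInt16] = []
    for model in mesh.contents.models {
        for part in model.parts {
            let base = UInt16(positions.count)
            positions.append(contentsOf: part.positions.elements)
            if let triangles = part.triangleIndices {
                faceIndices.append(contentsOf: triangles.elements.map {
                    base + UInt16($0)
                })
            }
        }
    }

    // Build a static (concave) collision shape and install it.
    let shape = try await ShapeResource.generateStaticMesh(
        positions: positions,
        faceIndices: faceIndices
    )
    entity.components.set(CollisionComponent(shapes: [shape]))
}
```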
In my project, I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a node called Camera Index Switch that can do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to a float value, it changes back again. I don't know if this is a bug.
2. When I tested this node with an IF node, I found that its output is weird.
Below, zero should be the output, and it is black,
but when I change to an IF node it is grey; it is neither 0 nor 1 (my IF node outputs 1 for TRUE and 0 for FALSE).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Hello fellow developers,
I am currently exploring the functionality of the UsdPrimvarReader node in Shader Graph Editor and would appreciate some clarification on its operational principles. Despite my efforts to understand its functionality, I find myself in need of some guidance.
Specifically, I would appreciate insights into the proper functioning of the UsdPrimvarReader node, including how it should ideally operate, the essential data that should be specified in the Varname field, and the Primvars that can be extracted from a USD file. Additionally, I am curious about the correct code representation of a Primvar in USD file to ensure it can be invoked successfully.
If anyone could share their expertise or point me in the right direction to relevant documentation, I would be immensely grateful.
Thank you in advance for your time and consideration. I look forward to any insights or recommendations you may have.
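For what it's worth, here is a minimal hand-written example of how primvars can appear in a .usda file (the prim name "Plane" and the primvar name "myUv" are made up for illustration). As I understand it, the Varname field of the UsdPrimvarReader node should contain the name after the `primvars:` prefix, and the node variant (e.g. UsdPrimvarReader_float2) must match the primvar's type:

```usda
#usda 1.0

def Mesh "Plane"
{
    # A float2 primvar; a UsdPrimvarReader_float2 node with
    # Varname "myUv" would read this data per vertex.
    texCoord2f[] primvars:myUv = [(0, 0), (1, 0), (1, 1), (0, 1)] (
        interpolation = "vertex"
    )

    # A color primvar readable with UsdPrimvarReader_float3.
    color3f[] primvars:displayColor = [(1, 0, 0)] (
        interpolation = "constant"
    )
}
```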
Is there a way of integrating RealityKitContent into an app created with Xcode 12 using UIKit?
The non-AR parts are working OK on visionOS; the AR parts need to be rewritten in SwiftUI. To be able to do so, I need to access the RealityKit content and work with it seamlessly in Reality Composer Pro, but I'm unsure how to integrate RealityKitContent into such a pre-SwiftUI/visionOS project. I am using Xcode 15.
Thank you.
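A hedged sketch of one bridge between the two worlds: wrap a SwiftUI RealityView in a UIHostingController and embed it in the existing UIKit hierarchy. This assumes the RealityKitContent Swift package is added as a dependency so that realityKitContentBundle resolves, and the scene name "Scene" is a placeholder:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

// A SwiftUI view that loads a scene from the RealityKitContent bundle.
struct ContentRealityView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is a hypothetical scene name from Reality Composer Pro.
            if let scene = try? await Entity(named: "Scene",
                                             in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}

// Embedding in an existing UIKit view controller:
// let host = UIHostingController(rootView: ContentRealityView())
// addChild(host)
// view.addSubview(host.view)
// host.didMove(toParent: self)
```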