I created a RealityKitContent package in the Packages folder of my visionOS app project. At first I added a USDA model directly to its .rkassets folder and used Model3D(named: "ModelName", bundle: realityKitContentBundle); the model displayed normally. But when I add a folder inside .rkassets and put the USDA model in that folder, Model3D(named: "ModelName", bundle: realityKitContentBundle) can no longer display the model. What should I do?
If you know how to solve the above problem, please let me know. And if you also know how to solve the following problem, please share that answer as well. Thank you!
The USDA model mentioned above contains an animation, but when I used Model3D(named: "ModelName", bundle: realityKitContentBundle), I found that the animation does not play by default; additional code seems to be needed. Is there any documentation, video, or sample code on this?
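A sketch of both fixes, under assumptions: the subfolder name "Folder" below is hypothetical, and the pattern loads through Entity/RealityView rather than Model3D so that availableAnimations can be started manually, since animations authored in the USDA are not played automatically.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct AnimatedModelView: View {
    var body: some View {
        RealityView { content in
            // "Folder/ModelName" is a hypothetical path: when the USDA lives
            // in a subfolder of .rkassets, include the folder in the name.
            if let entity = try? await Entity(named: "Folder/ModelName",
                                              in: realityKitContentBundle) {
                content.add(entity)
                // Animations in the file don't start by themselves;
                // play each one on a repeating loop.
                for animation in entity.availableAnimations {
                    entity.playAnimation(animation.repeat())
                }
            }
        }
    }
}
```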
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps
Posts under Reality Composer Pro tag
200 Posts
Hey everyone, I'm running into an issue where my USDZ model doesn't show up in Reality Composer Pro. It was exported from Blender as a USD and converted in Reality Converter.
See attached image:
It's strange, because the USDZ model appears fine in Preview. But once it is brought into RCP, I receive this pop-up and the model does not appear.
Not sure how to resolve this multiple-root-level issue. If anyone can point me in the right direction or has any feedback, it's much appreciated! Thank you!
In Reality Composer Pro, when I import a USDZ model and insert it into the scene, RCP removes the model's own material by default, but I don't want it to do this. How can I stop Reality Composer Pro from removing the model's own material?
Is there a way of integrating RealityKitContent into an app created with Xcode 12 using UIKit?
The non-AR parts work fine on visionOS; the AR parts need to be rewritten in SwiftUI. To do so, I need to access the RealityKit content and work with it seamlessly in Reality Composer Pro, but I'm unsure how to integrate RealityKitContent into such a pre-SwiftUI/visionOS project. I am using Xcode 15.
Thank you.
Hello fellow developers,
I am currently exploring the functionality of the UsdPrimvarReader node in the Shader Graph Editor and would appreciate some clarification on its operating principles. Despite my efforts to understand it, I find myself in need of some guidance.
Specifically, I would appreciate insights into the proper functioning of the UsdPrimvarReader node, including how it should ideally operate, the essential data that should be specified in the Varname field, and the Primvars that can be extracted from a USD file. Additionally, I am curious about the correct code representation of a Primvar in USD file to ensure it can be invoked successfully.
If anyone could share their expertise or point me in the right direction to relevant documentation, I would be immensely grateful.
Thank you in advance for your time and consideration. I look forward to any insights or recommendations you may have.
In my project, I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a Camera Index Switch node that can do this. But when I tried it, I found that:
1. It can only output an Integer value; when I change it to Float, it changes back again. I don't know if this is a bug.
2. So I tested this node with an If node, and I found its output is weird. Below, zero should be the output, and it is black; but when I change to the If node, it is grey, which is neither 0 nor 1 (my If node returns 1 for the TRUE result and 0 for the FALSE result).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Hi,
I have a USDZ asset of a torus/hoop shape that I would like to pass another RealityKit entity (a cube-like object) through, without touching the torus, in visionOS. Similar to how a basketball goes through a hoop.
Whenever I pass the cube through, I get a collision notification, even when the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, versus when the cube passes cleanly through the opening in the torus.
I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is that the collision shape of the torus is a rectangular box, not the actual shape of the torus. I know the collision shape is a rectangular box because I can see it in the visionOS simulator by enabling "Collision Shapes".
Does anyone know how to programmatically create a torus-shaped collision shape in SwiftUI/RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit so I don't even have to use a .usdz asset?
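One workaround, sketched below with illustrative dimensions: RealityKit has no torus collision primitive, so the ring can be approximated with several small sphere shapes placed around its circumference, which leaves the hole open so a clean pass-through triggers no collision.

```swift
import RealityKit

// Approximate a torus collision volume with spheres arranged around the ring.
// ringRadius, tubeRadius, and segments are illustrative parameters.
func torusCollisionShapes(ringRadius: Float,
                          tubeRadius: Float,
                          segments: Int = 24) -> [ShapeResource] {
    (0..<segments).map { i in
        let angle = 2 * .pi * Float(i) / Float(segments)
        // Position each sphere on the ring in the entity's XZ plane.
        let center = SIMD3<Float>(cos(angle) * ringRadius,
                                  0,
                                  sin(angle) * ringRadius)
        return ShapeResource.generateSphere(radius: tubeRadius)
            .offsetBy(translation: center)
    }
}
```

Usage would be to replace the auto-generated box, e.g. `torusEntity.components.set(CollisionComponent(shapes: torusCollisionShapes(ringRadius: 0.5, tubeRadius: 0.05)))`, with radii matched to the actual asset.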
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode with the new ShaderGraphMaterial?
Hey guys
How can I fit RealityView content inside a volumetric window?
I have the simple example below:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model has loaded.
Can we achieve the same result using a RealityView?
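One approach, as a sketch: measure the loaded entity's visual bounds and scale it uniformly so its largest extent matches the volume's size (0.6 m in the example above). The function name is illustrative; it would be called right after content.add(entity).

```swift
import RealityKit

// Uniformly scale an entity so its largest dimension fits `targetSize` meters.
func fit(_ entity: Entity, into targetSize: Float) {
    // Bounds of the entity and all its children, in world space.
    let extents = entity.visualBounds(relativeTo: nil).extents
    let maxExtent = max(extents.x, max(extents.y, extents.z))
    guard maxExtent > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: targetSize / maxExtent)
}
```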
Cheers
When creating a USDA file in a DCC, I want RCP to import it as expected, with materials assigned. However, I'm finding that the material is not imported correctly, despite it rendering correctly in the preview pane and the textures being pulled in.
The workaround is to recreate the material in the shader graph, but that overrides any material changes I make to the original USDA. Please advise on what I need to do here to correctly import materials into RCP.
Using USDZ files is not ideal, as I want to make sure changes can easily be made upstream.
Sorry about the link, but I can't seem to upload it to the post.
https://pasteboard.co/bmhl3t004APu.png
Any guidance here is much appreciated!
Hello Everyone,
I'm currently facing a challenge related to detecting taps on an entity that has a video material.
Based on the information I found online, it appears that to enable touch functionality, the recommended approach is to clone the entity, add an InputTargetComponent, and enable collision shapes.
Here's a snippet of my code:
RealityView { content, attachments in
    // The following code doesn't trigger the tapGesture
    let videoEntity = ImmersivePlayerEntity(configuration: configuration)
    content.add(videoEntity)

    if let attachment = attachments.entity(for: "player-controls") {
        anchorEntity.addChild(attachment)
        content.add(anchorEntity)
    }

    /* This code triggers the tapGesture
    let boxResource = MeshResource.generateBox(size: 2)
    let itemMaterial = SimpleMaterial(color: .red, roughness: 0, isMetallic: false)
    let entity = ModelEntity(mesh: boxResource, materials: [itemMaterial]).addTappable()
    content.add(entity)
    */
} update: { _, _ in
} attachments: {
    Attachment(id: "player-controls") {
        ImmersivePlayerControlsView(coordinator: coordinator)
            .frame(width: 1280)
            .opacity(areControlsVisible ? 1 : 0)
            .animation(.easeInOut, value: areControlsVisible)
    }
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            areControlsVisible.toggle()
        }
)

extension Entity {
    func addTappable() -> Entity {
        let newModelEntity = self.clone(recursive: true)
        newModelEntity.components.set(InputTargetComponent())
        newModelEntity.generateCollisionShapes(recursive: true)
        return newModelEntity
    }
}
I'm seeking guidance and assistance on how to enable touch functionality on the video entity. Your insights and suggestions would be greatly appreciated. Thank you in advance for your help!
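One thing worth checking, as a sketch: the components can be set on the video entity itself instead of a clone, and since a video entity may have no mesh for generateCollisionShapes(recursive:) to work from, an explicit collision shape can be supplied as a fallback (the sphere radius here is an assumption to size to the actual geometry).

```swift
import RealityKit

// Make an entity tappable in place. If collision-shape generation produced
// nothing (e.g. no ModelComponent in the hierarchy), fall back to an
// explicit sphere shape; the radius is a placeholder.
func makeTappable(_ entity: Entity) {
    entity.components.set(InputTargetComponent())
    entity.generateCollisionShapes(recursive: true)
    if entity.components[CollisionComponent.self] == nil {
        entity.components.set(
            CollisionComponent(shapes: [.generateSphere(radius: 1.0)])
        )
    }
}
```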
How can I control the content shown to the left and right eye through code?
Hi guys,
has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies?
Basically, I would like to start with one or two of my apps that I have already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project.
After that, I would like to use the dev kit on another project. I work on a contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users utilizing their services. However, I cannot quite reveal details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do).
Is it worth applying for a dev kit as an individual dev? I have read posts saying some people were rejected.
Is it better to start in the simulator and just wait for the actual hardware to show up in the App Store? I would prefer to just get the device, rather than start working with a device that I may need to return in the middle of an unfinished project.
Any info on when pre-orders might be possible?
Any idea what Mac specs are needed for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96GB RAM, and I'm wondering if I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit?
Thanks.
Can AR projects run on a visionOS simulator?
I know that CustomMaterial in RealityKit can update a texture by using DrawableQueue, but on the new visionOS, CustomMaterial doesn't work anymore. How can I do the same thing? Can ShaderGraphMaterial do it? I can't find an example of how to do that. Looking forward to your reply, thank you!
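One replacement pattern, sketched under assumptions: a ShaderGraphMaterial authored in Reality Composer Pro with an exposed image input named "baseTexture" (hypothetical) can have that texture swapped at runtime with setParameter. "frameImage" is likewise a hypothetical asset name.

```swift
import RealityKit

// Swap the texture bound to a ShaderGraphMaterial parameter at runtime.
// "baseTexture" and "frameImage" are hypothetical names.
func updateTexture(on modelEntity: ModelEntity) async throws {
    guard var material = modelEntity.model?.materials.first
            as? ShaderGraphMaterial else { return }
    // Load the new image as a texture resource.
    let texture = try await TextureResource(named: "frameImage")
    try material.setParameter(name: "baseTexture",
                              value: .textureResource(texture))
    // Reassign the mutated material so the change takes effect.
    modelEntity.model?.materials = [material]
}
```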
In my Reality Composer scene, I have added a spatial audio. How do I play this from my swift code?
I loaded the scene the following way:
myEntity = try await Entity(named: "grandScene", in: realityKitContentBundle)
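For reference, a sketch of one way to do this, with assumptions flagged: the resource name "/Root/mySound_wav" and the source file "grandScene.usda" below are hypothetical and would need to match the audio actually authored in the Reality Composer Pro scene.

```swift
import RealityKit
import RealityKitContent

// Load an audio resource authored in the RCP scene and play it from an entity.
// The resource path and file name below are hypothetical placeholders.
func playSceneAudio(from myEntity: Entity) async throws {
    let resource = try await AudioFileResource(named: "/Root/mySound_wav",
                                               from: "grandScene.usda",
                                               in: realityKitContentBundle)
    myEntity.playAudio(resource)
}
```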
I have a sphere entity that has a VideoMaterial on it, and a floor. I want the reflection of the video material to appear on that floor. Are there any possible ways to do this?
Hi! I'm having an issue creating a PortalComponent on visionOS.
I'm trying to anchor a portal to a wall or floor anchor, but the portal always appears opposite to the anchor:
If I use a vertical anchor (wall), the portal appears horizontal in the scene.
If I use a horizontal anchor (floor), the portal appears vertical in the scene.
I tested on Xcode
15.1.0 beta 3
15.1.0 beta 2
15.0 beta 8
Any ideas ?? Thank you so much!
HELP!
Computer: MacBook Pro 13.6.1 (Intel)
1. First, my Xcode is Version 15.1 beta.
Second, Xcode shows
Third, when I push the button, Xcode shows the error <Failed with HTTP status 400: bad request>
the detail info:
Failed with HTTP status 400: bad request
Domain: DataGatheringNSURLSessionDelegate
Code: 1
User Info: {
DVTErrorCreationDateKey = "2023-11-17 13:03:58 +0000";
}
--
System Information
macOS Version 13.6.1 (Build 22G313)
Xcode 15.1 (22501) (Build 15C5028h)
Timestamp: 2023-11-17T21:03:58+08:00
2. Then I tried another way.
First, I downloaded the xrOS simulator (visionOS_1_beta_3_Simulator_Runtime.dmg),
and second, I tried these commands in the terminal:
xcode-select -s /Applications/Xcode-beta.app
xcodebuild -runFirstLaunch
xcrun simctl runtime add "~/Downloads/watchOS 9 beta Simulator Runtime.dmg"
but it shows this error info:
D: F238A4FF-FF5B-4C87-B202-28EAE59C558A xrOS (1.0 - 21N5233f) (Unusable - Other Failure: Error Domain=SimDiskImageErrorDomain Code=5 "Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9" UserInfo={NSLocalizedDescription=Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9, unusableErrorDetail=})
Does anyone know the reason, and how can I solve it? Thank you!
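For what it's worth, the "Duplicate of &lt;UUID&gt;" error suggests a stale runtime registration. A sketch of commands to inspect and remove it (the UUID below is taken from the error message and would need to match what `runtime list` reports); note also that the `add` command should point at the visionOS .dmg that was actually downloaded, unquoted so `~` expands:

```shell
# List registered simulator runtimes to find the duplicate entry.
xcrun simctl runtime list

# Delete the stale/duplicate runtime by its identifier.
xcrun simctl runtime delete 4813D2D0-7539-4306-8132-8E25C64EADD9

# Re-add the visionOS runtime image you downloaded (no quotes, so ~ expands).
xcrun simctl runtime add ~/Downloads/visionOS_1_beta_3_Simulator_Runtime.dmg
```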
Hello all -
I'm experiencing a shading error when I have two UnlitSurface shaders using images for color and opacity. When the shaders are applied to two mesh planes, one placed in front of the other, the shader in front renders, and its plane mesh masks out and does not render what is behind it.
Basically, it looks like the opacity map on the front shader is creating a 'mask'.
I've attached some images here to help explain.
Has anyone experienced this error? And how can I go about fixing it? Thanks!