Hi guys,
I thought I'd make a visionOS test app with Apple's native robot.usdz file.
My plan was to rotate the robot's limbs programmatically, but while I can see the bones in previous Xcode versions and in Blender, I somehow cannot reach them in Xcode 15.3 or Reality Composer Pro.
Does anyone have any experience with that?
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under Reality Composer Pro tag
Hi guys, I've been trying to make my model react to light in the visionOS Simulator by editing the component in Reality Composer Pro and also by modifying it in code, but I can only get the shadow if I load it as a USDZ file, and it's not as reflective as when I view it in Reality Converter or Reality Composer Pro. Does anyone else have this problem?
RealityView { content in
    if let bigDonut = try? await ModelEntity(named: "bigdonut", in: realityKitContentBundle) {
        print("LOADED")
        // Create an anchor for horizontal placement on a table
        let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0, 0]))
        // Configure scale and position
        bigDonut.setScale([1, 1, 1], relativeTo: anchor)
        bigDonut.setPosition([0, 0.2, 0], relativeTo: anchor)
        // Parent the model to the anchor, then add the anchor to the scene
        anchor.addChild(bigDonut)
        content.add(anchor)
        // Enable shadow casting, but this does not work
        bigDonut.components.set(GroundingShadowComponent(castsShadow: true))
    }
}
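A possible direction (a sketch, not a confirmed fix; the "Environment" resource name is an assumption): give the scene an ImageBasedLightComponent and make the model a receiver, so it picks up lighting and reflections from an environment map:

```swift
// Sketch: image-based lighting so the model picks up reflections.
// "Environment" is a hypothetical EnvironmentResource in the content bundle.
if let environment = try? await EnvironmentResource(named: "Environment", in: realityKitContentBundle) {
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    anchor.addChild(lightEntity)
    // The model must opt in to receive the image-based light
    bigDonut.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}
```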
I want to display a USDA model from Reality Composer Pro and play its Spatial Audio. I used RealityView to implement this:
RealityView { content in
    do {
        let entity = try await Entity(named: "isWateringBasin", in: RealityKitContent.realityKitContentBundle)
        content.add(entity)
        guard let audioEntity = entity.findEntity(named: "SpatialAudio"),
              let resource = try? await AudioFileResource(named: "/Root/isWateringBasinAudio_m4a",
                                                          from: "isWateringBasin.usda",
                                                          in: RealityKitContent.realityKitContentBundle) else { return }
        let audioPlaybackController = audioEntity.prepareAudio(resource)
        audioPlaybackController.play()
    } catch {
        print("Entity encountered an error while loading the model.")
        return
    }
}
But when I ran it, the model displayed normally while the Spatial Audio failed to play. I'd appreciate any guidance, thank you!
I created a simple primitive shape, and I want to give each face of the cube a different color. I was thinking of using Shader Graph, but I have no idea how to target each face with a different color. Any lead or help would be great. This tech is new, so documentation is still sparse.
How can I add .hoverEffect to a single entity in a RealityView with multiple entities?
I want selectable objects to highlight themselves when looked at or hovered over.
How can I easily make this happen?
There is a working example of this in the Swift Splash demo, but I don't know which part of the code creates that feature.
So far I've been able to get .hoverEffect to work on Model3D(), but I want to have multiple entities, only a few of which are selectable.
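For reference, a minimal sketch of the entity-level approach (the entity name is hypothetical): in RealityKit the highlight-on-gaze behavior comes from HoverEffectComponent, and the entity also needs to be an input target with collision shapes:

```swift
// Sketch: make one specific entity in a RealityView respond to gaze/hover.
// selectableEntity is a hypothetical entity already added to the scene.
selectableEntity.components.set(InputTargetComponent())
selectableEntity.components.set(HoverEffectComponent())
// Collision shapes are required for gaze targeting
selectableEntity.generateCollisionShapes(recursive: true)
```

Entities without these components are left alone, so only the ones you mark this way highlight.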
I want to play a RealityKitContent USDA model's Spatial Audio, using this code:
RealityView { content in
    do {
        let entity = try await Entity(named: "isWateringBasin", in: RealityKitContent.realityKitContentBundle)
        let audio = entity.spatialAudio
        entity.playAudio(audio)
        content.add(entity)
    } catch {
        print("Entity encountered an error while loading the model.")
        return
    }
}
entity.playAudio(audio) needs an AudioResource as its argument. What should the AudioResource be?
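For context, an AudioResource is a loaded audio asset rather than a component; a sketch under the assumption that the audio prim lives in the same USDA (the prim path here is hypothetical):

```swift
// Sketch: load an AudioFileResource from the USDA, then hand it to playAudio.
// "/Root/AudioFileName" is a placeholder prim path.
if let resource = try? await AudioFileResource(named: "/Root/AudioFileName",
                                               from: "isWateringBasin.usda",
                                               in: RealityKitContent.realityKitContentBundle) {
    entity.playAudio(resource)
}
```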
I used Model3D to display a model:
Model3D(named: "Model", bundle: realityKitContentBundle) { phase in
    switch phase {
    case .empty:
        ProgressView()
    case .failure(let error):
        Text("Error \(error.localizedDescription)")
    case .success(let model):
        model.resizable()
    }
}
However, when I ran it, the width and length were not stretched, but viewed from the side the depth was badly stretched. What should I do?
Note: For some reason, I can't use the Frame modifier.
[Images attached: "width and length", "error depth"]
I just updated to Xcode 15.2 and want to try Reality Composer Pro. The Apple developer video says it should be under Xcode -> Developer Tool -> Reality Composer Pro, but when I open that menu, Composer isn't there.
The Apple webpage for Reality Composer says "Reality Composer for macOS is bundled with Xcode, which is available on the Mac App Store."
Where can I find Reality Composer Pro?
Thanks
I created a RealityKitContent package in the Packages folder of my visionOS app project. At first I added a USDA model directly to its .rkassets, and Model3D(named: "ModelName", bundle: realityKitContentBundle) displayed the model normally. But when I created a folder inside .rkassets and moved the USDA model into it, the same Model3D(named: "ModelName", bundle: realityKitContentBundle) call could no longer display the model. What should I do?
If you know how to solve the above problem, please let me know, and if you also know how to solve the following one, please share that too. Thank you!
The USDA model mentioned above contains an animation, but when I used Model3D(named: "ModelName", bundle: realityKitContentBundle), the animation did not play by default; it apparently needs additional code. Is there any documentation, video, or sample code on this?
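One common approach (a sketch, not necessarily the only way): load the model with a RealityView instead of Model3D and start its first available animation manually:

```swift
// Sketch: play a USDA model's bundled animation after loading.
RealityView { content in
    if let entity = try? await Entity(named: "ModelName", in: realityKitContentBundle) {
        content.add(entity)
        // Animations are not started automatically; play the first one, looping.
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation.repeat())
        }
    }
}
```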
Hey everyone, I'm running into an issue where my USDZ model does not show up in Reality Composer Pro; it was exported from Blender as a USD and converted in Reality Converter.
See attached image:
It's strange, because the USDZ model appears fine in previews. But once it is brought into RCP, I receive this pop-up and the model does not appear.
I'm not sure how to resolve this multiple-root-level issue. If anyone can point me in the right direction or offer any feedback, it would be much appreciated! Thank you!
In Reality Composer Pro, when I import a USDZ model and insert it into the scene, RCP removes the model's own material by default, but I don't want that. How can I stop Reality Composer Pro from removing the model's own material?
Is there a way to integrate RealityKitContent into an app created with Xcode 12 using UIKit?
The non-AR parts work fine on visionOS; the AR parts need to be rewritten in SwiftUI. To do so, I need to access the RealityKit content and work with it seamlessly in Reality Composer Pro, but I'm unsure how to integrate RealityKitContent into such a pre-SwiftUI/visionOS project. I am using Xcode 15.
Thank you.
Hello fellow developers,
I am currently exploring the functionality of the UsdPrimvarReader node in Shader Graph Editor and would appreciate some clarification on its operational principles. Despite my efforts to understand its functionality, I find myself in need of some guidance.
Specifically, I would appreciate insights into how the UsdPrimvarReader node should ideally operate, the data that should be specified in its Varname field, and which Primvars can be extracted from a USD file. I am also curious about the correct representation of a Primvar in a USD file so that it can be read successfully.
If anyone could share their expertise or point me in the right direction to relevant documentation, I would be immensely grateful.
Thank you in advance for your time and consideration. I look forward to any insights or recommendations you may have.
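For what it's worth, a minimal sketch of how a primvar can be authored in a .usda file (the prim and primvar names here are made up); the Varname field of UsdPrimvarReader would then be the name after the primvars: prefix, e.g. myColor:

```usda
#usda 1.0

def Mesh "Card"
{
    # Primvars live in the "primvars:" namespace on the prim.
    # A UsdPrimvarReader with Varname "myColor" would read this value.
    color3f[] primvars:myColor = [(1, 0, 0)] (
        interpolation = "constant"
    )
}
```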
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode with the new ShaderGraphMaterial?
When creating a USDA file in a DCC, I want RCP to import it as expected with materials assigned. However, I'm finding that the material is not imported correctly, even though it renders correctly in the preview pane and the textures are pulled in.
The workaround is to recreate the material in the shader tree, but that overrides any material changes I make on the original USDA. Please advise on what I need to do to correctly import materials into RCP.
Using USDZ files is not ideal, as I want to make sure changes can easily be made upstream.
Sorry about the link, but I can't seem to upload it to the post.
https://pasteboard.co/bmhl3t004APu.png
Any guidance here is much appreciated!
Hey guys
How can I fit RealityView content inside a volumetric window?
I have the simple example below:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model loads.
Can we achieve the same result using a RealityView?
Cheers
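As far as I know there is no direct .scaledToFit() equivalent for RealityView content; a manual sketch (the 0.6 m target matches the defaultSize above) scales the entity based on its visual bounds:

```swift
// Sketch: uniformly scale a loaded entity so its bounding box fits a 0.6 m volume.
let bounds = entity.visualBounds(relativeTo: nil)
let maxExtent = max(bounds.extents.x, max(bounds.extents.y, bounds.extents.z))
if maxExtent > 0 {
    let scale = 0.6 / maxExtent
    entity.setScale([scale, scale, scale], relativeTo: nil)
}
```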
Hello Everyone,
I'm currently facing a challenge related to detecting taps on an entity that features video material.
Based on the information I found online, it appears that to enable touch functionality, the recommended approach is to clone the entity, add an InputTargetComponent, and enable collision shapes.
Here's a snippet of my code:
RealityView { content, attachments in
    // The following code doesn't trigger the tap gesture
    let videoEntity = ImmersivePlayerEntity(configuration: configuration)
    content.add(videoEntity)

    if let attachment = attachments.entity(for: "player-controls") {
        anchorEntity.addChild(attachment)
        content.add(anchorEntity)
    }

    /* This code triggers the tap gesture
    let boxResource = MeshResource.generateBox(size: 2)
    let itemMaterial = SimpleMaterial(color: .red, roughness: 0, isMetallic: false)
    let entity = ModelEntity(mesh: boxResource, materials: [itemMaterial]).addTappable()
    content.add(entity)
    */
} update: { _, _ in
} attachments: {
    Attachment(id: "player-controls") {
        ImmersivePlayerControlsView(coordinator: coordinator)
            .frame(width: 1280)
            .opacity(areControlsVisible ? 1 : 0)
            .animation(.easeInOut, value: areControlsVisible)
    }
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            areControlsVisible.toggle()
        }
)

extension Entity {
    func addTappable() -> Entity {
        let newModelEntity = self.clone(recursive: true)
        newModelEntity.components.set(InputTargetComponent())
        newModelEntity.generateCollisionShapes(recursive: true)
        return newModelEntity
    }
}
I'm seeking guidance and assistance on how to enable touch functionality on the video entity. Your insights and suggestions would be greatly appreciated. Thank you in advance for your help!
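One thing worth checking (a sketch; the collision shape size is a placeholder): components can be set on the original entity in place, so the video entity itself can be made tappable without cloning it:

```swift
// Sketch: make the existing video entity tappable in place.
videoEntity.components.set(InputTargetComponent())
// A collision shape roughly matching the video surface (placeholder dimensions).
videoEntity.components.set(CollisionComponent(shapes: [.generateBox(width: 2.0, height: 1.0, depth: 0.1)]))
```

Cloning produces a separate entity, so adding components to the clone has no effect on the original one in the scene unless the clone itself is added.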
How can I control, in code, which content is shown to the left and right eyes?
Hi guys,
has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies?
Basically I would like to start with one or two of my apps that I already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project.
After that I would like to use the dev kit on another project. I work on contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users of their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do).
Is it worth applying for the dev kit as an individual developer? I have read posts saying some people were rejected.
Is it better to start in the simulator and wait for the actual hardware to show up in the App Store? I would prefer to just get the device, rather than work with a device I may need to return in the middle of an unfinished project.
Any info on when pre-orders might be possible?
Any idea what Mac specs are needed for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96GB RAM, and I'm wondering whether I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit?
Thanks.
Can AR projects run on a visionOS simulator?