Hello,
I'm getting some strange behavior as I keep working with AVP in Xcode. The basic template used to behave normally: when I previewed ContentView, it would display the basic sphere (using the volume preset) right in the center of the screen.
Now, when I create any basic template, the preview window is elevated so far upward that it's basically going through the ceiling in the room. For the life of me, I cannot get Xcode to correct this behavior.
Also, while I can create a scene with assets in RealityKit and view it in my content bundle following this Apple tutorial (https://developer.apple.com/videos/play/wwdc2023/10203/),
I cannot get my new scene to show in an immersive view file. I made sure to use the SwiftUI template for the immersive view, and I had the correct setup for it. My naming conventions are correct and there are no typos or errors (that I know of).
I CAN get the basic template scene to show up, but not my custom scene with a pancake model in it.
I've gone through the video several times, but that still doesn't solve the issue of the initial template previewing way too high.
Does anyone have any advice on fixes for this?
Edit: I may have traced the issue to importing assets and using them in a new scene in Reality Composer Pro. I'm still not sure why it breaks after creating a scene in RCP. I haven't even used the assets in my project yet (I started a new one).
--> If I reset the project, it removes the errors. But I still can't see my pancake, even if I just replace the template scene with a basic scene containing only that asset, much like the sphere but a pancake instead.
Also, the initial scene may be sitting high because with the volume template the view starts at the user's feet. That might be why.
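If that's the cause, a possible workaround (just a sketch on my part, assuming the default template's "Scene" name from RealityKitContent; the offset value is a guess) is to shift the content vertically when adding it:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is the default template's scene name; adjust for your project.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                // The volumetric window's origin sits at the bottom of the volume,
                // so offset the content to compensate. The 0.25 m value is a
                // guess; tune it for your volume's size.
                scene.position.y -= 0.25
                content.add(scene)
            }
        }
    }
}
```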
Reality Composer Pro
RSS for tag: Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Activity Monitor reports that Reality Composer Pro uses 150% CPU and is always the number-one energy user on my M3 Mac. Unfortunately, the high CPU usage continues when the app is hidden or minimized. I can understand high usage when a scene is visible and when interacting with the scene, but this appears to be a bug. Can anyone else confirm this or suggest a workaround?
Can the scene processing at least be paused when the app is hidden?
Or better yet, figure out why the CPU usage is so high when the scene is not changing.
Reality Composer Pro Version 1.0 (409.60.6) on Sonoma 14.3
Thanks
Hello!
I have a few questions about using RealityKit on iPadOS and Mac Catalyst.
I currently have a model in Cinema 4D that has no materials added to it. I need to add a couple of simple materials to everything.
My goal is to move my model from Cinema 4D to my RealityKit app. The model can load fine in the app. However, I'm having issues with the materials.
First, when I export from Cinema 4D as a USD, the materials don't seem to come with it. I thought materials came with the model. I just get a pink-striped model (no materials), which is not ideal. So that rules out making materials in Cinema 4D for me.
Here's a test model with materials added:
Here's what it looks like when exported as a USDA in Reality Composer Pro. It looks the same when exported as a USDZ:
I checked the materials option when exporting from Cinema 4D, so I don't know what I'm doing wrong. So I thought of another idea instead of making materials in Cinema 4D.
What if I used Apple's new Reality Composer Pro app to add materials to my model? You can add physically based materials or even custom shader materials with nodes.
I thought that would work, and it does. When I export the model as a USDZ with physically based materials, they appear fine and work in my app.
However, what about custom shader materials?
Whenever I play with a custom shader material and apply it to my model, I am left with problems.
Look at this image. This is my model in Reality Composer Pro with two types of materials added from the app. The water and sand on the beach are created with physically based materials in the app. No nodes. The gold/metal ball is created with a custom shader material with nodes. Looks fine, right?
When I drag an exported USDZ of this into my Xcode project, it even looks good in the Xcode preview of the file under Resources. (Note that I am not adding the USDZ as an .rkassets bundle as Apple suggests, since .rkassets folders are only available for visionOS. This is an iPadOS + Catalyst app.)
However, when I run the actual app, only the physically based materials actually display correctly:
Besides the lighting in the scene, the physically based materials look good. However, the metal ball that used a custom shader material? It looks gray.
Why? How can I fix this?
I guarantee this is not a problem with my app or its lighting setup, etc. I tried loading a custom shader material in the visionOS simulator and it worked! But not here.
I know Reality Composer Pro seems to be very focused on visionOS right now, but this is still just a USDZ file. Why aren't these custom shaders working?
I've been working on this problem for more than 24 hours, so I've tried everything I could think of.
I thought Reality Composer Pro would be a good place to create materials, since it would be less error-prone when moving the model over to Xcode than moving materials from Cinema 4D to Xcode, and my second photo more or less proves that.
For RealityKit on iPadOS + Catalyst, how should I be applying materials? What am I doing wrong?
P.S. Yes, this is a non-AR project with a virtual camera, which RealityKit supports.
Thanks for the help! :)
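For anyone in the same spot: a fallback I'm considering is to rebuild the look in code after loading, since physically based materials do make it across. This is only a sketch; "GoldBall" is a hypothetical entity name, not from my actual scene:

```swift
import RealityKit
import UIKit

// Replace the material that didn't survive export with a physically based
// one created in code. Only PBR materials seem to transfer reliably to
// iPadOS / Catalyst, so the gold look is rebuilt here instead.
func applyGoldFallback(to root: Entity) {
    // "GoldBall" is a hypothetical name; use your entity's actual name.
    guard let ball = root.findEntity(named: "GoldBall") as? ModelEntity else { return }
    var gold = PhysicallyBasedMaterial()
    gold.baseColor = .init(tint: UIColor(red: 1.0, green: 0.84, blue: 0.0, alpha: 1.0))
    gold.metallic = 1.0
    gold.roughness = 0.2
    ball.model?.materials = [gold]
}
```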
I captured my office using a 3D scanner and got a USDZ file.
The file contains a 3D model and a physically based material.
I can view the file correctly, with its texture, in Xcode and Reality Composer Pro.
But when I use RealityView to present the model in an immersive space, the model renders completely black.
My guess is that my material doesn't have a shader graph?
Has anyone run into a similar issue? How did you solve it?
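If it helps narrow things down, a quick test I'd try (just a sketch) is to temporarily swap every material for an unlit one: if the model then shows up in flat white, the mesh and load path are fine and the problem is lighting in the immersive space rather than a missing shader graph:

```swift
import RealityKit

// Temporarily replace every material with UnlitMaterial. If the model then
// shows its shape in flat white, geometry and loading are fine and the
// black render is a lighting issue, not a broken mesh or material slot.
func makeUnlit(_ entity: Entity) {
    if let model = entity as? ModelEntity, let count = model.model?.materials.count {
        model.model?.materials = Array(repeating: UnlitMaterial(color: .white), count: count)
    }
    // Recurse so every ModelEntity in the hierarchy is covered.
    for child in entity.children {
        makeUnlit(child)
    }
}
```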
I'm using RealityKit to present an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and returns when I rotate my head. Interestingly, adding ".glassBackgroundEffect()" to the back of the window cures the issue, but I'd prefer not to use it as the UI's backdrop. How can I deal with this?
Here is a link to a GIF:
https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
Hi, I have a small question. Is it possible to place entities from a RealityView (in an immersive space) at eye level on the Y axis? Is it enough to set the position to (x, 0, z)?
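To illustrate what I mean (the 1.5 m figure is only my assumption for an average eye height): in an immersive space the origin is on the floor, so (x, 0, z) would sit at the user's feet rather than at eye level:

```swift
import RealityKit

// In an immersive space, y = 0 is the floor beneath the user,
// so an entity at (x, 0, z) appears at foot level.
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
// Roughly eye level: ~1.5 m up (assumed average eye height), 1 m in front.
sphere.position = SIMD3<Float>(0, 1.5, -1)
```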
Are there templates for coding on visionOS?
It would be nice to be able to copy and paste code that works, to build things out with ease rather than reinventing the wheel so many times.
When starting a new visionOS project in Xcode, there used to be a button to show the default immersive space in the default ContentView. Now as of some time recently, it is no longer a button but a toggle. When toggled, nothing happens. The two balls from the immersive space do not appear. Is this intentional?
There are also no templates for visionOS when adding a new file to an Xcode project; the visionOS option used to be there. This is along with various bugs I have encountered when trying to follow extremely simple starter-app tutorials posted by the community, albeit those tutorials were posted 3-6 months ago. What has changed so drastically that now it seems like nothing works? I actually factory-reset my Mac mini M2 because I could not find answers or solutions. That fixed some of the bugs I was having, but not most of them, including the big ones mentioned above.
So what's going on? These are two fresh installs of the latest Sonoma and Xcode beta, and I'm having issues on both. Is something not being downloaded or installed correctly? Any suggestions appreciated.
I am trying to create a simple custom shader with an image as material and a depth map as bump map information. I have followed the official procedure from "Explore materials in Reality Composer Pro" but the depth map is not processed.
What am I doing wrong?
(attached is a screenshot that shows the setup. I removed the image ref for clarity)
I'm trying to import the USDZ file of a model with multiple textures attached to each part of the model. When I preview the file by double-clicking on the USDZ, it views fine.
However, when I import it into Reality Composer Pro, it only shows the pink striped model.
I also get the message - "Multiple root level objects exist for HU_EVO_SPY-8.usdc".
There are so many components in the model that binding each texture to each component manually would be very difficult.
How can I fix the file so that when I import it into Reality Composer Pro, the textures stay attached to the model?
Closure containing control flow statement cannot be used with result builder 'ViewBuilder'
Below is the code:

let feet = 9
let inch = 45

var body: some View {
    VStack {
        for number in 1...10 {
            print(number)
        }
    }
}
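For context, the usual fix for this error is to replace the imperative for loop (and the print call, which produces no View) with ForEach, which the ViewBuilder accepts:

```swift
import SwiftUI

struct NumbersView: View {
    let feet = 9
    let inch = 45

    var body: some View {
        VStack {
            // A ViewBuilder closure can't contain a `for` statement;
            // ForEach builds one View per element instead.
            ForEach(1...10, id: \.self) { number in
                Text("\(number)")
            }
        }
    }
}
```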
Hi,
I'm trying to display an STL model file in visionOS. I import the STL file using SceneKit's ModelIO extension, add it to an empty scene USDA and then export the finished scene into a temporary USDZ file. From there I load the USDZ file as an Entity and add it onto the content.
However, the model in the resulting USDZ file has no lighting and appears as an unlit solid. Please see the screenshot below:
The top one is created by directly importing a USDA scene, with the model already added in an Entity using Reality Composer, and works as expected.
Middle one is created from importing the STL model as an MDLAsset using ModelIO, adding onto the empty scene, exporting as USDZ. Then importing USDZ into an Entity. This is what I want to be able to do and is broken.
Bottom one is just for me to debug the USDZ import/export. It was added to the empty scene using Reality Composer and works as expected, therefore the USDZ export/import is not broken as far as I can tell.
Full code:
import SwiftUI
import ARKit
import SceneKit.ModelIO
import RealityKit
import RealityKitContent
struct ContentView: View {
@State private var enlarge = false
@State private var showImmersiveSpace = false
@State private var immersiveSpaceIsShown = false
@Environment(\.openImmersiveSpace) var openImmersiveSpace
@Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace
var modelUrl: URL? = {
if let url = Bundle.main.url(forResource: "Trent 900 STL", withExtension: "stl") {
let asset = MDLAsset(url: url)
asset.loadTextures()
let object = asset.object(at: 0) as! MDLMesh
let emptyScene = SCNScene(named: "EmptyScene.usda")!
let scene = SCNScene(mdlAsset: asset)
// Position node in scene and scale
let node = SCNNode(mdlObject: object)
node.position = SCNVector3(0.0, 0.1, 0.0)
node.scale = SCNVector3(0.02, 0.02, 0.02)
// Copy materials from the test model in the empty scene to our new object (doesn't really change anything)
node.geometry?.materials = emptyScene.rootNode.childNodes[0].childNodes[0].childNodes[0].childNodes[0].geometry!.materials
// Add new node to our empty scene
emptyScene.rootNode.addChildNode(node)
let fileManager = FileManager.default
let appSupportDirectory = try! fileManager.url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
let permanentUrl = appSupportDirectory.appendingPathComponent("converted.usdz")
if emptyScene.write(to: permanentUrl, delegate: nil) {
// We exported, now load and display
return permanentUrl
}
}
return nil
}()
var body: some View {
VStack {
RealityView { content in
// Add the initial RealityKit content
if let scene = try? await Entity(contentsOf: modelUrl!) {
// Displays middle and bottom models
content.add(scene)
}
if let scene2 = try? await Entity(named: "JetScene", in: realityKitContentBundle) {
// Displays top model using premade scene and exported as USDA.
content.add(scene2)
}
} update: { content in
// Update the RealityKit content when SwiftUI state changes
if let scene = content.entities.first {
let uniformScale: Float = enlarge ? 1.4 : 1.0
scene.transform.scale = [uniformScale, uniformScale, uniformScale]
}
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
enlarge.toggle()
})
VStack (spacing: 12) {
Toggle("Enlarge RealityView Content", isOn: $enlarge)
.font(.title)
Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
.font(.title)
}
.frame(width: 360)
.padding(36)
.glassBackgroundEffect()
}
.onChange(of: showImmersiveSpace) { _, newValue in
Task {
if newValue {
switch await openImmersiveSpace(id: "ImmersiveSpace") {
case .opened:
immersiveSpaceIsShown = true
case .error, .userCancelled:
fallthrough
@unknown default:
immersiveSpaceIsShown = false
showImmersiveSpace = false
}
} else if immersiveSpaceIsShown {
await dismissImmersiveSpace()
immersiveSpaceIsShown = false
}
}
}
}
}
#Preview(windowStyle: .volumetric) {
ContentView()
}
To test this even further, I exported the generated USDZ and opened it in Reality Composer. The added model was still broken, while the test model in the scene was fine. This further proved that the import/export is fine and RealityKit is not doing something weird with the imported model.
I am convinced this has to be something with the way I'm using ModelIO to import the STL file.
Any help is appreciated. Thank you
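One idea I still want to rule out (this is an assumption on my part, not something I've confirmed): STL files carry no materials and often no usable vertex normals, and a mesh without normals can render as an unlit solid. ModelIO can generate smoothed normals before the mesh becomes a node:

```swift
import SceneKit.ModelIO

// STL meshes often lack vertex normals, which can make them render unlit.
// Ask ModelIO to generate smoothed normals before building the SCNNode.
func makeNode(from url: URL) -> SCNNode {
    let asset = MDLAsset(url: url)
    let mesh = asset.object(at: 0) as! MDLMesh
    // creaseThreshold 0.5 is a starting point; lower values smooth more.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)
    return SCNNode(mdlObject: mesh)
}
```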
Hi,
What are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.
Let's say you have some USDZ files stored in a cloud service, there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while an app is running. Is it possible to load USDZ files at runtime from the network?
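On the first question: from what I understand, yes, this is possible. Entity(contentsOf:) only reads file URLs, so the usual pattern is to download the USDZ to a local file first (a sketch; the remote URL would be whatever your cloud service provides):

```swift
import Foundation
import RealityKit

// Download a USDZ to a local file, then load it as an Entity.
// Entity(contentsOf:) requires a file URL, so the remote file is
// fetched to a temporary location first.
func loadRemoteModel(from remote: URL) async throws -> Entity {
    let (tempFile, _) = try await URLSession.shared.download(from: remote)
    // Keep the .usdz extension so the loader can infer the format.
    let localURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempFile, to: localURL)
    return try await Entity(contentsOf: localURL)
}
```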
Is there a limit to how many objects can be visible at once? Let's say I am in an open space, with no walls. I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, 1000?
Is there a way to save the anchor point of the object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.
How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would it behave like real objects?
Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is it possible?
Hello!
I'm having a very odd problem. I was trying to open a USD file in Xcode so I could then open it in Reality Composer Pro. I've been able to do that without a problem for a number of weeks. However, I can't do that now. Every time I try to open a USD, Xcode briefly opens and then crashes. Then, every time I try to open Reality Composer Pro from the Developer Tools menu in Xcode, the app bounces up and down, opens for one second (little dot on the dock) and then just doesn't open.
I have no idea what I did. I've been using Xcode 15.2 and all of a sudden it just doesn't work anymore. The only thing I can think of is that I used an online converter from GLB to USD and then tried opening that USD, but the website had worked for me before. Plus, when I try to open other files like USDA, it still doesn't work. So I don't think it's tied to one file type.
I tried updating to macOS Sonoma 14.3.1 but that didn't fix it. Xcode is downloaded from the Mac App Store. I am not using any beta software. I tried doing the usual restart, clean build folder etc. but nothing works.
I am really confused... all of a sudden it just stopped working. Any fixes? I am on a very tight deadline, and this app is crucial to my work.
Thanks! :)
How do I bind an MTLTexture to the Color input of a material?
I need something similar to VideoMaterial,
so I need to make a CustomMaterial.
But RealityKit's CustomMaterial is not available on visionOS; it's replaced by ShaderGraphMaterial.
So how do I bind a Metal resource, such as an MTLTexture, to a ShaderGraphMaterial directly?
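From what I can tell, the route on visionOS is TextureResource.DrawableQueue: attach a queue to the TextureResource that feeds the material's texture parameter, then blit the MTLTexture into each drawable. A sketch under those assumptions; "ColorImage" is a hypothetical parameter name from the shader graph:

```swift
import RealityKit
import Metal

// Back a ShaderGraphMaterial texture parameter with a DrawableQueue so it
// can be updated from Metal every frame.
func attachDrawableQueue(to material: inout ShaderGraphMaterial,
                         resource: TextureResource) throws -> TextureResource.DrawableQueue {
    let desc = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024, height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(desc)
    resource.replace(withDrawables: queue)
    // "ColorImage" is a hypothetical name; use your graph's parameter name.
    try material.setParameter(name: "ColorImage", value: .textureResource(resource))
    return queue
}

// Per frame: copy the source MTLTexture into the next drawable and present it.
func push(_ source: MTLTexture,
          to queue: TextureResource.DrawableQueue,
          commandQueue: MTLCommandQueue) throws {
    let drawable = try queue.nextDrawable()
    if let cmd = commandQueue.makeCommandBuffer(),
       let blit = cmd.makeBlitCommandEncoder() {
        blit.copy(from: source, to: drawable.texture)
        blit.endEncoding()
        cmd.commit()
    }
    drawable.present()
}
```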
Hello everyone, I want to develop an app for Vision Pro that aims to help people with vertigo and dizziness problems. The problem is that I cannot afford a Vision Pro. If I develop and test using a standard VR headset with an iPhone inside, would that cause issues when the app runs on a real Vision Pro?
Hi all, I'm trying to retrieve the name of an entity from the gesture that hits it, but it's not giving me the value I set when I created the entity.
I create the entity like:
class DrumPad: ObservableObject {
static func create(for audioFileName: String) -> Entity? {
do {
let padEntity = try Entity.load(named: "Geometry/pad-without-handle", in: tableDrummerContentBundle)
padEntity.name = "\(audioFileName)_pad"
return padEntity
} catch {
print("Could not load pad \(audioFileName)")
print(error.localizedDescription)
return nil
}
}
}
Then I get it from the gesture:
var body: some View {
RealityView { content in
for sampleName in audioSamples {
guard let pad = DrumPad.create(for: sampleName) else { continue }
content.add(pad)
}
}
.gesture(SpatialTapGesture()
.targetedToAnyEntity()
.onEnded { value in
print(value.entity.name)
})
}
}
In the gesture handler, print(value.entity.name) gives me the name of the entity's root transform, PadTransform, not the string "\(audioFileName)_pad" I set during instantiation. If I call print(padEntity.name) during instantiation, I get rock-kick-2_pad and the like. Any help would be much appreciated.
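The workaround I've found so far (a sketch): the gesture reports the inner transform that was actually hit, so walk up the parent chain until an ancestor carries the name that was set:

```swift
import RealityKit

// A gesture may deliver an inner transform of the loaded asset rather than
// the entity that was renamed. Walk up the hierarchy until an ancestor's
// name carries the "_pad" suffix set at creation time.
func padName(for hit: Entity) -> String? {
    var current: Entity? = hit
    while let entity = current {
        if entity.name.hasSuffix("_pad") {
            return entity.name
        }
        current = entity.parent
    }
    return nil
}
```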
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Hello World" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process, I encountered an unexpected error that halted my progress. The error message I received was [insert error message here].
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy games
Hi, I tried to change the default size for a volumetric window, but it looks like the window has a maximum width. Is that true?
WindowGroup(id: "id") {
ItemToShow()
}.windowStyle(.volumetric)
.defaultSize(width: 100, height: 0.8, depth: 0.3, in: .meters)
Here I set the width to 100 meters, but it still looks to be only about 2 meters wide.
Dear Apple Developer Forum Community,
I hope this message finds you well. I am writing to seek assistance regarding an error I encountered while attempting to create a "Tic Tac Toe" application using Xcode.
Upon launching Xcode and starting a new project, I followed the standard procedure for creating a simple iOS application. However, during the process I ran into a problem: the code shows an error whenever a player wins the match.
I have attempted to troubleshoot the issue (see the two attached images), but unfortunately I have been unsuccessful in resolving it.
I am reaching out to the community in the hope that someone might have encountered a similar issue or have expertise in troubleshooting Xcode errors. Any guidance, suggestions, or solutions would be greatly appreciated.
Thank you very much for your time and assistance.
Sincerely,
Zipzy games