In my Reality Composer Pro scene, I have added spatial audio. How do I play it from my Swift code?
I loaded the scene the following way:
myEntity = try await Entity(named: "grandScene", in: realityKitContentBundle)
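One approach, sketched here under assumptions: mirroring the AudioFileGroupResource API that appears elsewhere in these posts, the audio authored in Reality Composer Pro can be loaded by its resource path and played on the entity that carries the Spatial Audio component. "AudioEmitter" and "/Root/AudioEmitter/mySound_wav" are placeholder names; check the actual entity name and resource path in the Reality Composer Pro outline and Audio Library.

```swift
import RealityKit
import RealityKitContent

func playSceneAudio(on sceneRoot: Entity) {
    // "AudioEmitter" is a placeholder for the entity that has the
    // Spatial Audio component in the Reality Composer Pro scene.
    guard let emitter = sceneRoot.findEntity(named: "AudioEmitter"),
          let resource = try? AudioFileResource.load(
              named: "/Root/AudioEmitter/mySound_wav",  // placeholder resource path
              from: "grandScene.usda",
              in: realityKitContentBundle
          ) else { return }

    // playAudio(_:) returns a controller you can keep to stop or pause later.
    let controller = emitter.playAudio(resource)
    controller.gain = -6  // optional, in decibels
}
```

Because the emitter entity carries the Spatial Audio component, playback picks up its directivity and gain settings from the scene.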
Reality Composer Pro
Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under Reality Composer Pro tag: 200 Posts
I have a Sphere entity with a VideoMaterial on it, and a floor. I want the reflection of the video material to appear on that floor. Are there any possible ways to do this?
Hi! I'm having an issue creating a PortalComponent on visionOS.
I'm trying to anchor a portal to a wall or floor anchor, but the portal always appears perpendicular to the anchor:
If I use a vertical anchor (wall), the portal appears horizontal in the scene.
If I use a horizontal anchor (floor), the portal appears vertical in the scene.
I've tested on Xcode
15.1.0 beta 3
15.1.0 beta 2
15.0 beta 8
Any ideas? Thank you so much!
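One thing worth checking (a sketch, not a confirmed fix): the orientation of the portal's plane mesh. `MeshResource.generatePlane(width:height:)` produces a vertical plane, while `generatePlane(width:depth:)` produces a horizontal one, so a portal built with the wrong variant will appear rotated 90° relative to its anchor. `worldEntity` below is a placeholder for the entity carrying the `WorldComponent` the portal looks into.

```swift
import RealityKit

func makePortal(target worldEntity: Entity, onWall: Bool) -> Entity {
    let portal = Entity()
    // width/height -> vertical plane (wall anchors);
    // width/depth  -> horizontal plane (floor anchors).
    let mesh = onWall
        ? MeshResource.generatePlane(width: 1, height: 1)
        : MeshResource.generatePlane(width: 1, depth: 1)
    portal.components.set(ModelComponent(mesh: mesh, materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: worldEntity))
    return portal
}
```

Alternatively, a single vertical portal plane can be rotated 90° about the X axis when parented to a floor anchor.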
HELP!
Computer: MacBook Pro, macOS 13.6.1 (Intel)
1. First, my Xcode is Version 15.1 beta.
Second, Xcode shows an error.
Third, I push it, but Xcode shows the error <Failed with HTTP status 400: bad request>.
The detailed info:
Failed with HTTP status 400: bad request
Domain: DataGatheringNSURLSessionDelegate
Code: 1
User Info: {
DVTErrorCreationDateKey = "2023-11-17 13:03:58 +0000";
}
--
System Information
macOS Version 13.6.1 (Build 22G313)
Xcode 15.1 (22501) (Build 15C5028h)
Timestamp: 2023-11-17T21:03:58+08:00
2. Then I tried another way.
First, I downloaded the xrOS simulator <visionOS_1_beta_3_Simulator_Runtime.dmg>,
and second, tried these commands in the Terminal:
xcode-select -s /Applications/Xcode-beta.app
xcodebuild -runFirstLaunch
xcrun simctl runtime add "~/Downloads/watchOS 9 beta Simulator Runtime.dmg"
but it shows this error info:
D: F238A4FF-FF5B-4C87-B202-28EAE59C558A xrOS (1.0 - 21N5233f) (Unusable - Other Failure: Error Domain=SimDiskImageErrorDomain Code=5 "Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9" UserInfo={NSLocalizedDescription=Duplicate of 4813D2D0-7539-4306-8132-8E25C64EADD9, unusableErrorDetail=})
Does anyone know the reason, and how can I solve it? Thank you!
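The "Duplicate of <UUID>" message usually means a copy of that runtime image is already registered with CoreSimulator. A possible fix (a sketch; the UUID below is the one reported in the error above) is to delete the stale image and re-add the DMG:

```shell
# List the runtime disk images simctl knows about, with their UUIDs.
xcrun simctl runtime list

# Delete the duplicate by the UUID reported in the error message.
xcrun simctl runtime delete 4813D2D0-7539-4306-8132-8E25C64EADD9

# Re-add the visionOS runtime (note: don't quote a path starting with ~,
# or the shell won't expand it to your home directory).
xcrun simctl runtime add ~/Downloads/visionOS_1_beta_3_Simulator_Runtime.dmg
```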
The error I get with visionOS simulator:
cannot migrate AudioUnit assets for current process
code:
guard let resource = try? AudioFileGroupResource.load(
    named: "/Root/AudioGroupDropStone",
    from: "Scene.usda",
    in: realityKitContentBundle
) else {
    // handle the load failure
    return
}
Any ideas how to debug this?
The audio files seem to work fine in Reality Composer Pro.
Hi,
I need to control standard and non-standard entity components over time.
For example, I want to change the opacity of a few entities over time with a timer.
To do that I added an Opacity component to the entities whose opacity I want to change, created a system, and registered it.
The system fires its update method, and inside it I am able to change the opacity, but after a few seconds it stops firing.
Once I move the window, the update method fires again for a few seconds, then stops.
Any idea why?
Any idea what to change in order to have it run continuously?
If that is by design, how can I access components at any time so I can change them when I need to?
I am using Windows, not Volumes or Immersive Spaces.
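For reference, a minimal version of this setup looks like the sketch below, using the built-in `OpacityComponent` plus a hypothetical `FadeComponent` marker (both names chosen here for illustration). The pause itself may be RealityKit deciding the window's scene is static; a common workaround is to drive the change from a SwiftUI timer closure that touches the entity each tick instead of relying on the System's update cadence.

```swift
import RealityKit

// Hypothetical marker component: attach it to any entity that should fade.
struct FadeComponent: Component {
    var rate: Float = 0.2  // opacity units per second
}

struct FadeSystem: System {
    static let query = EntityQuery(where: .has(FadeComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard let fade = entity.components[FadeComponent.self] else { continue }
            var opacity = entity.components[OpacityComponent.self]
                ?? OpacityComponent(opacity: 1)
            opacity.opacity = max(0, opacity.opacity - fade.rate * Float(context.deltaTime))
            entity.components.set(opacity)
        }
    }
}

// Register once, e.g. in the App initializer:
// FadeSystem.registerSystem()
```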
Hello. I've started exploring the new features in Reality Composer PRO and noticed that Composer now supports adding custom scripts as components to any objects in the scene. I'm curious about the following: will these scripts work if I export such a scene to a USDZ file and try to open it using Apple Quick Look? For instance, I want to add a 3D button and a cube model. When I press the button (touch it), I want to change the material or material color to another one using a script component. Is such functionality possible?
Hi,
I have a file in Reality Composer Pro that has a deep hierarchy. I downloaded it from an asset store, so I don't know how it was built.
As you can see from the screenshot, I'm trying to access banana and banana_whole entities as ModelEntity but I'm not able to load them as ModelEntity in Xcode.
I can load them as Entity and show them in visionOS Simulator but not as ModelEntity which I need to do to do some operations.
What should I do?
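One thing that may explain this: entities loaded with `Entity(named:)` are often plain `Entity` instances even when they carry a `ModelComponent`, so a direct cast to `ModelEntity` can fail. A sketch of two workarounds follows; "banana" is the entity name from the outline shown in the screenshot.

```swift
import RealityKit

// Walk the hierarchy and return the first descendant that really is a ModelEntity.
func firstModelEntity(under entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity { return model }
    for child in entity.children {
        if let model = firstModelEntity(under: child) { return model }
    }
    return nil
}

// Usage, assuming the loaded scene root is `scene`:
// let banana = scene.findEntity(named: "banana")
//     .flatMap { firstModelEntity(under: $0) }

// Or skip the cast entirely and work with the component directly:
// let mesh = scene.findEntity(named: "banana")?
//     .components[ModelComponent.self]?.mesh
```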
I feel like I've heard of the Vision Pro supposedly being really great at object tracking / occlusion etc, but I can't find anything in the documentation or any actual examples.
Would love to find any clear information on this!
Hey everybody,
I am quite new to developing on iOS, specifically in the AR space, and I have been struggling through documentation and can't find an answer for loading Reality Composer Pro scenes into an iOS app. There is a good amount of documentation on loading them into a visionOS app, but I haven't found it fully applicable. In the code block below I have been able to get my Reality Composer scene loaded, but I want the added functionality of Reality Composer Pro when developing my scenes and can't figure out how to get those to show up. How would I edit this code to load my Reality Composer Pro scene? My Reality Composer Pro project came over to Xcode as Package.realitycomposerpro when I dragged and dropped it in, but I don't know how I'd access a scene in it, and the specific objects in that scene, for iOS use. Thanks in advance!
import SwiftUI
import RealityKit
struct ContentView: View {
var body: some View {
ARViewContainer().edgesIgnoringSafeArea(.all)
}
}
struct ARViewContainer: UIViewRepresentable {
func loadRealityComposerScene(filename: String, fileExtension: String, sceneName: String) -> (Entity & HasAnchoring)? {
guard let realitySceneURL = Bundle.main.url(forResource: filename, withExtension: fileExtension) else {
return nil
}
let loadedAnchor = try? Entity.loadAnchor(contentsOf: realitySceneURL)
return loadedAnchor
}
func makeUIView(context: Context) -> ARView {
let arView = ARView(frame: .zero)
// Load the AR Scene from ACT2.reality
guard let anchor = loadRealityComposerScene(filename: "ACT2", fileExtension: "reality", sceneName: "Scene1") else {
print("Failed to load the anchor from ACT2.reality")
return arView
}
arView.scene.addAnchor(anchor)
// Visualize Collisions for Debugging
arView.debugOptions.insert(.showPhysics)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) {}
}
#Preview {
ContentView()
}
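Loading from the Reality Composer Pro package generally looks like the sketch below. Assumptions to verify against your project: the package's library product must be added to the app target, "RealityKitContent" stands in for whatever your package's module is actually called, `realityKitContentBundle` is the bundle accessor that module generates, and "Scene" is the scene name inside the package.

```swift
import RealityKit
import RealityKitContent  // placeholder: your package's library product

func loadProScene(into arView: ARView) async {
    // "Scene" is a placeholder for the scene name inside the
    // Reality Composer Pro package.
    guard let scene = try? await Entity(named: "Scene",
                                        in: realityKitContentBundle) else {
        print("Failed to load the Reality Composer Pro scene")
        return
    }
    // On iOS the loaded entity isn't an anchor, so parent it to one.
    let anchor = AnchorEntity(.plane(.horizontal,
                                     classification: .any,
                                     minimumBounds: [0.2, 0.2]))
    anchor.addChild(scene)
    arView.scene.addAnchor(anchor)
}
```

You would call this from `makeUIView` (for example inside a `Task`) instead of `loadRealityComposerScene`.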
Hello Everyone,
I'm facing a challenge related to resizing an entity built from a 3D model.
Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable.
Here's a snippet of my code:
let giftEntity = try await Entity(named: "gift")
I've come across an operator that allows for scaling the entity. However, I'm uncertain about the appropriate value to use, especially since the RealityView is encapsulated within an HStack, which is further nested inside a ScrollView.
Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable.
Thank you in advance for your assistance!
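For what it's worth, scale in RealityKit is relative to the entity's authored size, which sidesteps the question of the surrounding layout. A sketch, reusing `giftEntity` from above (the 0.3 m target width is an arbitrary example value):

```swift
import RealityKit

// 1.0 leaves the model at its authored size; 0.5 halves it.
giftEntity.scale = SIMD3<Float>(repeating: 0.5)

// To hit an absolute size instead, derive the factor from the visual
// bounds (in meters, in world space when relativeTo is nil):
let bounds = giftEntity.visualBounds(relativeTo: nil)
let targetWidth: Float = 0.3  // assumption: desired width in meters
giftEntity.scale *= SIMD3<Float>(repeating: targetWidth / bounds.extents.x)
```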
Creating an Entity and then changing between different animations (walk, run, jump, etc.) is pretty straightforward when you have individual USDZ files in your Xcode project, and then simply create an array of AnimationResource.
However, I'm trying to do it via the new Reality Composer Pro app because the docs state it's much more efficient versus individual files, but I'm having a heck of a time figuring out how exactly to do it.
Do you have one scene per USDZ file (does that erase any advantage over just loading individual files)? One scene with multiple entities? Something else all together?
If I try one scene with multiple entities within it, when I try to change animation I always get "Cannot find a BindPoint for any bind path" logged in the console, and the animation never actually occurs. This is with the same files that animate perfectly when just creating an array of AnimationResource manually via individual/raw USDZ files.
Anyone have any experience doing this?
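For comparison, the single-scene route usually reads the clips off the loaded entity rather than building the array by hand. A sketch, assuming the animated character is one entity in the scene: the "Cannot find a BindPoint" log typically means a clip is being played on an entity whose skeleton paths don't match the resource, so each clip should be played on the same entity it was loaded from.

```swift
import RealityKit

// "character" is the entity in the Reality Composer Pro scene that owns the
// animations; partialName is e.g. "walk", "run", "jump".
func playAnimation(named partialName: String, on character: Entity) {
    guard let animation = character.availableAnimations.first(where: {
        $0.name?.contains(partialName) == true
    }) else { return }
    character.playAnimation(animation.repeat(), transitionDuration: 0.3)
}
```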
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR), and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application currently targets iOS and macOS (with AR only on iOS).
With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be actively developed, and isn't supported at all on visionOS.
We'd like to use RealityKit for 3D rendering on all platforms: macOS, iOS, and visionOS, in non-AR and AR mode where appropriate.
So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView.
However, now that we've gotten to shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive. There doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something?
Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be for defining a custom material, whereas I guess we want to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint values.
On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with Reality Composer Pro?
We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a different, limited feature set on visionOS, as long as we can match our existing feature set on existing platforms.
What am I missing?
I have a USDZ model with many animations in a single long clip. When I want to cut it in the animation view, I can't tell at which moment I should trim it. Please add millisecond precision.
On Ventura -
We have a network extension (transparent proxy) which blocks IPv6 traffic as below:
override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
    // IPv6 gets blocked by the code below
    let error = NSError(domain: "", code: 0,
                        userInfo: [NSLocalizedDescriptionKey: "Connection Refused"])
    flow.closeReadWithError(error)
    flow.closeWriteWithError(error)
    return true
}
On an IPv6-enabled client machine, when a client application (browser, curl, Teams, etc.) tries to send HTTP/S requests, it first tries the request over IPv6 and, if that fails, retries over IPv4 (the Happy Eyeballs algorithm).
In our case, as the network extension blocks IPv6 traffic, client applications fail to establish a connection over IPv6 and fall back to IPv4 per Happy Eyeballs.
The above scenario works fine through macOS Ventura.
On Sonoma, this behaviour seems to have changed.
When our network extension blocks IPv6 traffic, client applications do not fall back to IPv4.
They simply fail without trying IPv4. We tested with curl, Google Chrome, and Microsoft Teams. All of these fail to load pages on Sonoma, and they work fine on Ventura.
Note: no change in our network extension code, curl, or browser versions. The only change is the macOS version.
Please find attached screenshots of running curl on Ventura and on Sonoma.
One other difference seen here is the error code received by client applications on Ventura versus Sonoma.
On Ventura, when IPv6 is blocked, the error is "Network is down" and the client application establishes the connection over IPv4.
On Sonoma, the error code is 22 (Invalid argument) and the client application does not retry with IPv4.
Curl_Ventura.jpg
Curl_Sonoma.png
Surface screen position
Does it return the model's vertex XYZ positions, normalized?
The node graph needs more tutorials and explanations. I've made zero progress.
This is for a basic product configurator. How would I provide a menu with 4-6 material swatches and then have those swap the materials on the model in the immersive scene?
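The swap itself usually goes through the entity's `ModelComponent`. A sketch, assuming the swatches are materials you've prepared in code or loaded from the Reality Composer Pro scene, and that a SwiftUI button's action calls `apply(swatch:to:)` for the tapped swatch:

```swift
import RealityKit

// Replace the materials on one entity with the chosen swatch material.
func apply(swatch material: any Material, to entity: Entity) {
    guard var model = entity.components[ModelComponent.self] else { return }
    // Replace every material slot; index into the array for partial swaps.
    model.materials = model.materials.map { _ in material }
    entity.components.set(model)
}

// Example swatch (a plain SimpleMaterial; ShaderGraphMaterial loaded from
// the scene would work the same way):
let redSwatch = SimpleMaterial(color: .red, isMetallic: false)
```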
Hi all,
I don't have a Vision Pro (yet), and I'm wondering if it is possible to preview my Reality Composer Pro project in AR using an iPad Pro or latest iPhones?
I also am interested in teaching others - I'm also a college professor, and I don't believe that my students have Vision Pros either.
I could always use the iOS versions, as I have done in the past, but the Pro version is much more capable and it would be great to be able to use it.
Thanks for any comments on this!
How would I make some simple toggle buttons to hide or show specific entities within a scene created in Reality Composer Pro?
I'd imagine that within Reality Composer Pro, all entities would already be in place, and then from Xcode I would be turning them on or off.
Additionally, I was curious how I would go about swapping out colors and materials for specific entities.
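For the show/hide part, `isEnabled` fits this "authored in Reality Composer Pro, toggled from code" approach: it hides an entity and its children without removing them from the scene. A sketch, where "lamp" stands in for whatever the entity is named in your scene:

```swift
import RealityKit

// Toggle an entity authored in Reality Composer Pro by name.
// A SwiftUI Toggle's binding can drive the `visible` argument.
func setVisibility(of name: String, in sceneRoot: Entity, visible: Bool) {
    sceneRoot.findEntity(named: name)?.isEnabled = visible
}

// Usage: setVisibility(of: "lamp", in: loadedScene, visible: false)
```

For colors and materials, the path is the entity's `ModelComponent`: read it from `entity.components`, replace entries in its `materials` array, and set the component back on the entity.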