Reality Composer


Prototype and produce content for AR experiences using Reality Composer.

Reality Composer Documentation


Posts under Reality Composer tag

69 Posts
Post not yet marked as solved
2 Replies
953 Views
Attempting to load an entity via Entity.loadAsync(contentsOf: url) throws an error when passing a .realitycomposerpro file:

    Cannot determine file format for ...Package.realitycomposerpro
    AdditionalErrors=( "Error Domain=USDKitErrorDomain Code=3 "Failed to open layer" )

I have a visionOS app that references the same Reality Composer Pro file. That project automatically builds a .reality file out of the package and includes the Reality Composer Pro package as a library in the Link Binary build phase. I duplicated this setup with my iPhone app using RealityKit.

Attempting to include a .realitycomposerpro package in my existing app causes a build error:

    RealityAssetsCompile: Error: for --platform, value must be one of [xros, xrsimulator], not 'iphoneos'
    Usage: realitytool compile --output-reality [--schema-file ] [--derived-data ] --platform --deployment-target [--use-metal ]
    See 'realitytool compile --help' for more information.

Lastly, I extracted the .reality file generated by a separate, working visionOS app from its Reality Composer Pro package. Attempting to actually load an entity from this file results in an error:

    Reality File version 9 is not supported. (Latest supported version is 7.)
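A note on the first error: a .realitycomposerpro document is the editable source package, not a runtime asset, so Entity.loadAsync(contentsOf:) has nothing it can read directly. On visionOS the usual pattern is to link the package's generated Swift module and load the compiled scene by name from its bundle. A minimal sketch, assuming the Xcode template's package name RealityKitContent and a scene called "Scene" (both names are assumptions):

    import RealityKit
    import RealityKitContent   // Swift package generated alongside the .realitycomposerpro document (assumed name)

    /// Loads the compiled "Scene" entity from the package's resource bundle (visionOS).
    func loadComposerScene() async throws -> Entity {
        try await Entity(named: "Scene", in: realityKitContentBundle)
    }

For an iOS app, where realitytool currently refuses to compile the package for the iphoneos platform, one possible workaround is exporting the scene from Reality Composer Pro as a .usdz and loading that with Entity.loadAsync(contentsOf:).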
Posted by Natooka.
Post not yet marked as solved
2 Replies
886 Views
I am trying to do the following code:

    func withBackground() -> some View {
        #if os(visionOS)
        background { Material.thin }
        #else
        background { Color.offWhite.ignoresSafeArea() }
        #endif
    }

But Xcode 15 Beta 2 says the following:

    Unknown operating system for build configuration os

How can I change the background ONLY for visionOS, while keeping it as is on the other platforms I support? Thanks!
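A note for the same question: the conditional can live inside a @ViewBuilder function, with each branch producing a complete modifier expression. The sketch below assumes an SDK where os(visionOS) is a recognized platform condition; the early Xcode 15 betas reportedly used a different pre-release platform name, which would explain the "Unknown operating system" diagnostic. Color.offWhite from the question is swapped for a standard color so the sketch compiles on its own:

    import SwiftUI

    extension View {
        /// Applies a platform-appropriate background.
        @ViewBuilder
        func withBackground() -> some View {
            #if os(visionOS)
            // Glass-like material on visionOS.
            self.background(Material.thin)
            #else
            // Opaque background on other platforms; Color.white stands in
            // for the custom Color.offWhite used in the question.
            self.background { Color.white.ignoresSafeArea() }
            #endif
        }
    }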
Post not yet marked as solved
0 Replies
771 Views
Just getting familiar with Xcode. Using Reality Composer a lot now, and ready to try coding along with it. I saw the demo linked below, but I don't want to use a web server to retrieve banner information; I would prefer to embed this information directly into the USDZ file to be read with AR Quick Look.

Two questions:
1. How can you get a banner like this when you open a USDZ file and edit the banner information directly (within the file itself), without using a URL?
2. In place of the call-to-action button (for Apple Pay) in the demo below, I'd like to use that button to either call a phone number, send a text, or go to a web URL.

Link to Apple's example with Apple Pay (see the custom examples section, like the kids' slide example on that page): https://developer.apple.com/augmented-reality/quick-look/

Scraps are welcome, hungry to learn.
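A note on the first question: as far as I know, AR Quick Look banners (including Apple Pay and custom call-to-action buttons) are described by the web page that hosts the model rather than stored inside the USDZ itself, so some URL is still involved. When presenting a model from your own app, you can at least attach a canonical web page via ARQuickLookPreviewItem. A minimal sketch, with the file name and URL as placeholders:

    import UIKit
    import QuickLook
    import ARKit

    final class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {
        /// Presents the bundled USDZ in AR Quick Look.
        func showModel() {
            let preview = QLPreviewController()
            preview.dataSource = self
            present(preview, animated: true)
        }

        func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

        func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
            // "toy_slide" and the URL are placeholders for your own asset and page.
            let url = Bundle.main.url(forResource: "toy_slide", withExtension: "usdz")!
            let item = ARQuickLookPreviewItem(fileAt: url)
            item.canonicalWebPageURL = URL(string: "https://example.com/toy_slide")
            return item
        }
    }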
Post not yet marked as solved
0 Replies
365 Views
Hello, I'm trying to build a compound object using two cubes. I adjusted the z value of one cube to make them appear merged together. However, when I run the simulation the two objects separate. Is there something I need to set to make sure the z value doesn't change while it's running? Thanks, BVSdev

(Attached screenshots: the composed objects grouped together, and the scene while running.)
Posted by bvsdev.
Post marked as solved
1 Reply
706 Views
I have been reading through the documentation and cannot find a way to alter the user's environment lighting. Is this not possible? Basically, I would like to darken the room, or change the hue of the environment in the scene they are seeing. I can think of a few "hacks" to do this, but figured there would be a proper RealityKit way to do so. If it is possible to dim or darken the environment, I could then light up my models with lights while still having the real environment all around.
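A note for anyone searching: on iOS, RealityKit cannot darken the camera image itself, only the way virtual content is lit. What you can do is turn down the image-based lighting contributed by the camera feed and then light your models yourself, which gets part of the way toward the "dim the room, light my models" idea. A rough sketch, assuming an ARView-based app:

    import RealityKit
    import ARKit

    func setUpDimmedLighting(in arView: ARView) {
        // Reduce the influence of the captured environment lighting on
        // virtual content (this does not darken the camera image itself).
        arView.environment.lighting.intensityExponent = 0.3

        // Add our own light so models stay visible.
        let light = PointLight()
        light.light.intensity = 5000
        light.position = [0, 0.5, 0]

        let anchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
        anchor.addChild(light)
        arView.scene.addAnchor(anchor)
    }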
Post not yet marked as solved
2 Replies
789 Views
If the anchor picture is out of the camera view, the AR experience disappears. This happens both with USDZ files created with Reality Composer and opened directly on iPhone or iPad (with AR Quick Look), and with Adobe Aero. So I suppose the bug is due to ARKit.
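In a custom RealityKit app (unlike AR Quick Look, where the session configuration is out of your hands), one way around this is to run world tracking with image detection: once the content has been anchored, it should stay at its world position even after the reference image leaves the camera view. A rough sketch, with the resource group and image names as placeholders:

    import RealityKit
    import ARKit

    func startImageAnchoredSession(on arView: ARView) {
        let config = ARWorldTrackingConfiguration()
        // "AR Resources" is a placeholder for your reference-image group in the asset catalog.
        if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
            config.detectionImages = images
            config.maximumNumberOfTrackedImages = 1
        }
        arView.session.run(config)

        // Content anchored to the detected image ("poster" is a placeholder name).
        let anchor = AnchorEntity(.image(group: "AR Resources", name: "poster"))
        anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))
        arView.scene.addAnchor(anchor)
    }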
Posted by EMD24Fr.
Post not yet marked as solved
1 Reply
927 Views
I recently spent several days messing around in Reality Composer with the intention of creating a course we could teach to students to get them started using augmented reality to tell stories and play with digital assets. We would use it in combination with other apps like Tinkercad to teach them modeling, the Voice Memos recorder so they can record dialogue and interaction sounds, and iMovie to edit a demo reel of their application, as well as taking advantage of online asset libraries like Sketchfab, which has .usdz files and even some animations available for free. The focus would be on creating an interactive application that works in AR. Here are some notes I took while trying things out.

UI Frustrations:

- The behaviors tab doesn't go up far enough; I'd like to be able to drag it to take up more space on the screen. Several of the actions have sections that go just below the edge of the screen, and it's frustrating to have to constantly scroll to see all the information. I'll select an "Ease Type" on the Move, Rotate, Scale To action and buttons will appear at the very edge of my screen in such a way that I can't read them until I scroll down. This happens for so many different actions that it feels like I don't have enough space to see all the necessary information.
- The audio importing from the content library is difficult to navigate. First, I wish there was a way to import only one sound instead of having to import the entire category of sounds. Second, it would be nice to see all the categories of sounds in some kind of sidebar, similar to the "Add Object" menu that already exists.
- I wish there was a way to copy and paste position and rotation vectors easily, so we could make sure objects are in the same place, especially if we need to duplicate objects to get a second tap implementation. Currently you have to keep flipping back and forth between objects to get the numbers right.
- Is there a way to see all of the behaviors a selected object is referenced in? Since the "Affected Objects" list is inside all sorts of behaviors, actions, triggers, etc., it can be hard to find exactly where a behavior is coming from, especially if your scene has a lot of behaviors. I come from a Unity background, so I'm used to behaviors being put directly onto the object itself; not knowing which behaviors reference a given object makes it possible to accidentally have my physics response overwritten by an animation triggered from somewhere else, and then I have to search for it through all of the behaviors in my scene.
- Is there a way to see the result of my object scanning? Right now it's all behind the scenes, and it feels like the object scanning doesn't work unless the object is in the same position relative to the background as it was before. It's a black box, and it's hard to understand what I'm doing wrong with the scanning, because when I move the object everything stops working.
- I could use a scene hierarchy or a list of all objects in a scene. Sometimes I don't know where an object is but I know what it is called, and I'd like to be able to select it to make changes to it. Sometimes objects start right on top of each other in the scene (like a reset button for a physics simulation), which makes it frustrating to select one of them over the other, especially since it seems the only way to select "affected objects" is to tap on them, instead of choosing from a list of those available in the scene.
Feature Requests:

- One thing other apps have that makes it easy to add personality to a scene is characters with a variety of animations they can play depending on context. It would be nice to have some kind of character creator that came with a bunch of pre-made animations, or at least a library of characters with animations. For example, if we want to create a non-player character that waves to the player, then moves somewhere else, and talks again, we could switch the character's animation at the appropriate parts of the movement to make it feel more real. This is harder to do with .usdz files that only play one animation; although the movement is cool, it typically only fits one setting, so you have to juggle turning a bunch of objects off and on, even if you do find an importable character with a couple of animations (such as you might find on Mixamo in .fbx format). I believe it may be possible for a .usdz file to have more than one animation in it, but I haven't seen any examples of this.
- Any chance we'll have a version of the Reality Converter app that works on iPads and iPhones? We don't want to assume our students have access to a MacBook, and being able to convert .fbx or .obj files would open up access to a wider variety of online downloadable assets.
- Something that would really help make more complex scenes is the ability to add a second trigger to an object that relies on a condition being met first. The easiest example is being able to tap a tour guide a second time to move on to the next object. This gets a little deeper into code blocks, but possibly there could be an if block or a condition statement that checks whether something is in proximity before allowing a tap, or checks how many times the object has been tapped by storing it in an integer variable you could set and check the value of. The way I first imagined it, maybe you'd be able to add a trigger that enables AFTER the first action sequence has completed, so you can build longer chains. This also comes into play with physics interactions: let's say I want to tap a ball to launch it, but when it is no longer moving faster than some threshold I want it to automatically reset.
- I'd like the ability to make one object follow another with a block, or some kind of system similar to "parenting" objects together like you can in a 3D engine (see the sketch after this list). This way, you could separate the visuals of an object from its physics, letting you play animations on launched objects, spin them, and emphasize them, while still allowing the physics simulation to work.
- For physics simulations, is it possible to implement a feature where the direction of force points toward another object in the scene? Or better yet, away from it using negative values? Specifically, I'd like to be able to launch a projectile in the direction the camera is facing, or give the user some control over the direction during playtime.
- It would be nice to edit the material or color of an object with an action, give the user a little pulse of color when they tap as feedback, or even allow the user to customize their environment with different textures.
- If you want this app to be used in education, there must be a way for teachers to share their built experiences with each other, some kind of online repository where you can try out what others have made.
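Reality Composer itself has no parenting block, but for anyone who later moves a project into RealityKit code, the engine supports it directly: entities form a hierarchy, and a child follows its parent's transform, so a purely visual child can ride along with a physics-driven parent. A small illustrative sketch (the impulse value and mesh sizes are made up):

    import RealityKit
    import ARKit

    /// A physics-driven parent with a purely visual child that follows it.
    func makeProjectile() -> ModelEntity {
        let body = ModelEntity(mesh: .generateSphere(radius: 0.05))
        body.generateCollisionShapes(recursive: false)
        body.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                                material: .default,
                                                mode: .dynamic)

        let marker = ModelEntity(mesh: .generateBox(size: 0.02))
        marker.position = [0, 0.08, 0]   // offset relative to the parent
        body.addChild(marker)            // parenting: marker now moves with body
        return body
    }

    /// Launches the body along the camera's forward direction.
    func launchTowardCameraForward(_ body: ModelEntity, in arView: ARView) {
        let column = arView.cameraTransform.matrix.columns.2
        let forward = -SIMD3<Float>(column.x, column.y, column.z)
        body.applyLinearImpulse(forward * 0.1, relativeTo: nil)
    }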