Reality Composer

Prototype and produce content for AR experiences using Reality Composer.

Reality Composer Documentation

Posts under Reality Composer tag

77 Posts
Post not yet marked as solved
1 Reply
272 Views
Hi, I'm having a problem with orbits when I export data from Reality Composer. I am able to select an object and specify the center to orbit around, and dragging the orbital axis works in Reality Composer, but the orbit is lost when I export to USDZ. I tried to attach a .usdz and a .rcproject file, but I don't seem to be able to do that.
Posted. Last updated.
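Reality Composer behaviors such as orbit motion are generally not carried over into a USDZ export (a .reality export keeps them). One possible workaround is to drive the orbit in RealityKit code instead. A minimal sketch, assuming the model is already parented to an anchor in an ARView; OrbitDriver, orbitingModel, orbitCenter and degreesPerSecond are illustrative names, not part of any Reality Composer API:

```swift
import Combine
import RealityKit
import simd

final class OrbitDriver {
    private var updateSubscription: Cancellable?

    /// Parents `orbitingModel` to a pivot entity at `orbitCenter` and spins that
    /// pivot a little every frame, producing a continuous orbit around the point.
    func startOrbit(of orbitingModel: Entity,
                    around orbitCenter: SIMD3<Float>,
                    on anchor: AnchorEntity,
                    in arView: ARView,
                    degreesPerSecond: Float = 45) {
        let pivot = Entity()
        pivot.position = orbitCenter
        anchor.addChild(pivot)

        // Re-parent the model under the pivot; its existing offset from the pivot
        // becomes the orbit radius.
        pivot.addChild(orbitingModel, preservingWorldTransform: true)

        // Advance the rotation on every scene update.
        updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
            let angle = degreesPerSecond * .pi / 180 * Float(event.deltaTime)
            pivot.orientation = simd_quatf(angle: angle, axis: [0, 1, 0]) * pivot.orientation
        }
    }
}
```

Keeping the returned Cancellable alive is what keeps the orbit running; releasing the OrbitDriver stops it.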
Post not yet marked as solved
4 Replies
935 Views
Hello everyone, I created a 3D character animation in Blender and I would like to import it into Reality Composer. However, when I export the animation from Blender it won't show in Composer, just the static object. My USDZ scene has the character parented to an animated armature. Is there any way to import a 3D character animation (made in 3D software) into Reality Composer? Thanks.
Posted by Leka99. Last updated.
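Reality Composer itself is fairly limited with imported skeletal animation, but the same USDZ can often be loaded and played directly with RealityKit. A minimal sketch, assuming the exported file is bundled as character.usdz (a placeholder name) and that the USDZ actually retained the animation:

```swift
import RealityKit

/// Loads a USDZ exported from Blender and plays the first animation it contains.
func loadAndAnimateCharacter(into anchor: AnchorEntity) {
    guard let character = try? Entity.load(named: "character") else {
        print("Could not load character.usdz from the app bundle")
        return
    }
    anchor.addChild(character)

    // A skeletal animation survives the export only if the USDZ kept it;
    // availableAnimations is empty otherwise.
    if let animation = character.availableAnimations.first {
        character.playAnimation(animation.repeat(), transitionDuration: 0.3, startsPaused: false)
    } else {
        print("No animations found in the USDZ")
    }
}
```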
Post not yet marked as solved
7 Replies
1.4k Views
Hi, so I created a scene in Reality Composer, but when trying to export to USDZ, all of the animations, audio and interactions work, yet the textures have been completely changed (the map should be green, for example). Everything looks fine in Reality Composer. Unfortunately, exporting as a .reality file isn't an option in this project. Could I get some insight into this please? Thank you.
Posted by Vanta. Last updated.
Post not yet marked as solved
1 Reply
656 Views
Hello :), My name is Alec and I'm experimenting with the USD file format. Here's what I've done: I used my iPad to take a point scan of my own head (funny, I know). I brought the PLY cloud into Blender, where I used GeoNodes to instance objects on those points. I also loaded audio into my Blend file because I wanted my head to react to audio. I exported my model into Alembic format (.abc) and exported the animation data as well (.mdd). I followed this video to do it: https://www.youtube.com/shorts/awArQyvncmU I brought the .mdd and .abc files back into Blender and re-exported as a GLB. At this point I know I have a working GLB model and animation (see a link to the file here; warning, it's 500 MB: https://drive.google.com/file/d/15nQ2eIqhFQtHce5EbwH4_pXq-maR3rKo/view?usp=sharing ). I brought the GLB file into Reality Converter hoping there would be a seamless conversion to USDZ. For some reason, my animation data is getting lost :( I don't know what to do. Does anyone know what I might be doing wrong? Bonus question: is it even possible to put audio in a USDZ file in Reality Composer? Many thanks to everyone out there willing to help! All the best, Alec
Posted. Last updated.
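On the bonus question: I'm not sure a plain USDZ can carry audio in a way Reality Composer will pick up, but Reality Composer can attach a sound to an object through a behavior, and RealityKit can attach one in code. A minimal sketch of the code route; headModel and ambience.mp3 are placeholder names:

```swift
import RealityKit

/// Attaches looping spatial audio to an entity at runtime.
/// "headModel" and "ambience.mp3" are placeholder names.
func attachAudio(to headModel: Entity) {
    do {
        let resource = try AudioFileResource.load(named: "ambience.mp3",
                                                  loadingStrategy: .preload,
                                                  shouldLoop: true)
        let playback = headModel.prepareAudio(resource)
        playback.play()
    } catch {
        print("Audio could not be loaded: \(error)")
    }
}
```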
Post not yet marked as solved
0 Replies
266 Views
My app auto-generates USDZ files, and I want to make them interactive by adding some interactive dots into the file and letting the user tap them. One possible way is to embed clickable dots into the USDZ file and convert it to a .reality file. But how can I do this in code? Below is an example of what I want to achieve.
Posted by Xia_Su. Last updated.
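One way to approach the question above without baking the dots into the USDZ at all is to load the generated file in RealityKit and attach small tappable marker entities in code. A minimal sketch; generatedModel, the dot position and the dot name are all placeholders:

```swift
import RealityKit
import UIKit

final class DotsViewController: UIViewController {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(arView)
        arView.frame = view.bounds

        // Load the generated USDZ; "generatedModel" is a placeholder asset name.
        guard let model = try? Entity.load(named: "generatedModel") else { return }

        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(model)

        // Add a small tappable "dot" at a hard-coded offset (placeholder position).
        let dot = ModelEntity(mesh: .generateSphere(radius: 0.01),
                              materials: [SimpleMaterial(color: .red, isMetallic: false)])
        dot.position = [0, 0.1, 0]
        dot.name = "dot-1"
        dot.generateCollisionShapes(recursive: false)  // needed for hit testing
        model.addChild(dot)

        arView.scene.addAnchor(anchor)

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ sender: UITapGestureRecognizer) {
        let point = sender.location(in: arView)
        if let tapped = arView.entity(at: point) {
            print("Tapped entity: \(tapped.name)")  // react to the dot here
        }
    }
}
```

The collision shape is what makes entity(at:) return the dot; without it the tap falls through to nothing.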
Post marked as solved
1 Reply
307 Views
Hello, I'm working on building some automation and I've been unable to get my Reality Composer AR session recorded MOV file to play back for our AR test scene. I recorded the video on an iPhone 12 Pro on iOS 16.2 and uploaded it through the Xcode (14.1) scheme. Our plane finder discovers the planes, so the plane data is there, but the video recording itself is missing and only renders a black screen. I'm getting these two errors in Xcode:
2022-12-15 21:02:46.831232-0800 ARKit[47806:4130209] ⛔️⛔️⛔️ ERROR [MOVReaderInterface]: Error Domain=com.apple.AppleCV3DMOVKit.readererror Code=9 "CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}" UserInfo={NSLocalizedDescription=CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}} ⛔️⛔️⛔️
2022-12-15 21:02:47.002073-0800 ARKit[47806:4130208] ⛔️⛔️⛔️ ERROR [MOVReaderInterface]: Error Domain=com.apple.AppleCV3DMOVKit.readererror Code=9 "Location: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'Location'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'Location'.}" UserInfo={NSLocalizedDescription=Location: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'Location'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'Location'.}} ⛔️⛔️⛔️
I have tried toggling Location Services for both the device and for Reality Composer, and neither has worked. I have rolled back to a previous version (12) of Xcode and tried the newer version 14, but I still only get a black screen. I've taken long and short videos (more than and less than 15 seconds), but the issue persists. I've asked my team's developers and they're unsure what could be causing this location error either, so we are unfortunately stumped. Is there something we're missing on our end? Is anyone able to give us some advice? Huge thanks!
Posted. Last updated.
Post not yet marked as solved
1 Reply
566 Views
I am trying to run the application on the iPad M1 device pinned to the Xcode debugger. The scheme has Replay data enabled with a recording prerecorded on the very same iPad M1. Errors:
2022-09-19 19:21:52.790061+0100 ARPathFinder[1373:320166] ⛔️⛔️⛔️ ERROR [MOVReaderInterface]: Error Domain=com.apple.AppleCV3DMOVKit.readererror Code=9 "CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}" UserInfo={NSLocalizedDescription=CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}} ⛔️⛔️⛔️
2022-09-19 19:21:54.103813+0100 ARPathFinder[1373:320166] [Session] ARSession <0x104f77ec0>: did fail with error: Error Domain=com.apple.arkit.error Code=101 "Required sensor unavailable." UserInfo={NSLocalizedDescription=Required sensor unavailable., NSLocalizedFailureReason=A required sensor is not available on this device.}
Any help will be appreciated, thanks.
Posted by jmgawe. Last updated.
Post not yet marked as solved
2 Replies
702 Views
Hello everyone, I am working on a Quick Look project using model-viewer. I am trying to play a USDZ animation through a tap behavior set up in Reality Composer, and it looks good in Composer. Once exported, though, the on-click animation does not play. Any trick or hack for this? You can test what I want here: https://ti-hardikshah.github.io/UsExpo/index.html
Posted. Last updated.
Post not yet marked as solved
0 Replies
342 Views
Hello, I am learning Swift and was building a quick setup using a Reality Composer onAction trigger to change an object. Basically I want to flip a card, resulting in the object's image being changed. Example: boxAnchor.actions.flipcardA.onAction = FunctionA. I am not seeing this as a property... Is there an easy way to change the 'flipcardA' image from an .rcproject file? Thank you.
Posted by VoidOfOne. Last updated.
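As far as I can tell, the actions property only appears in the generated .rcproject code when the behavior includes a Notify action, and its handler receives the entity the behavior was attached to. A hedged sketch of swapping the card's texture from that handler; Experience, Box, flipcardA, CardA and cardBack are placeholders for whatever the generated code and assets actually contain:

```swift
import RealityKit
import UIKit

/// Hooks the Reality Composer "flipcardA" Notify action and swaps the tapped
/// card's texture. All names here (Experience, Box, flipcardA, CardA, cardBack)
/// are placeholders for what the generated .rcproject code actually exposes.
func setUpFlipCard(in arView: ARView) {
    guard let boxAnchor = try? Experience.loadBox() else { return }
    arView.scene.addAnchor(boxAnchor)

    boxAnchor.actions.flipcardA.onAction = { entity in
        // The handler receives the entity the action fired on; fall back to a
        // lookup by name. In generated scenes the visible mesh is often a child
        // entity rather than the named entity itself.
        let target = entity ?? boxAnchor.findEntity(named: "CardA")
        guard let card = target as? ModelEntity else { return }

        var material = SimpleMaterial()
        if let texture = try? TextureResource.load(named: "cardBack") {
            material.color = .init(tint: .white, texture: .init(texture))
        } else {
            material.color = .init(tint: .systemGreen)  // fallback if the image asset is missing
        }
        card.model?.materials = [material]
    }
}
```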
Post marked as solved
3 Replies
1.5k Views
The context is: ARSessionDelegate func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {}. Upon image detection, place an overlay of a Reality Composer entity at the image anchor's location. Using iOS 14, I had been reliably using mySceneAnchorEntity = AnchorEntity(anchor: imageAnchor) to render additive Reality Composer scene content, and the additive model entities rendered correctly in the field of view. When I upgraded to iOS 15, the same code that had been working for many (~12) months on all versions of iOS 14 failed to render any Reality Composer scene content. I get a ghosting of all of the rendered content in the proper location, but it is visible only for a moment and then disappears. I finally found the root cause of the issue: iOS 15 only renders correctly in my application using mySceneAnchorEntity = AnchorEntity(world: imageAnchor.transform). This led to many frustrating days of debugging. As a side note, iOS 14 renders Reality Composer scene entities correctly using both variants of AnchorEntity: AnchorEntity(anchor: imageAnchor) and AnchorEntity(world: imageAnchor.transform). So this leads me to believe there is an issue with the iOS 15 behavior of AnchorEntity(anchor: imageAnchor).
Posted by MikeNunez. Last updated.
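For anyone hitting the same thing, the workaround described above fits into the ARSessionDelegate callback roughly like this. A sketch only; overlayScene is a placeholder for the Reality Composer content actually being rendered:

```swift
import ARKit
import RealityKit

final class ImageOverlayCoordinator: NSObject, ARSessionDelegate {
    weak var arView: ARView?

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard let arView = arView else { return }

        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }

            // iOS 15 workaround from the post above: anchor to the world transform
            // of the detected image instead of wrapping the ARImageAnchor directly.
            let sceneAnchor = AnchorEntity(world: imageAnchor.transform)

            // "overlayScene" is a placeholder for the Reality Composer content
            // (e.g. the entity loaded from the generated .rcproject code).
            if let overlayScene = try? Entity.load(named: "overlayScene") {
                sceneAnchor.addChild(overlayScene)
            }
            arView.scene.addAnchor(sceneAnchor)
        }
    }
}
```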
Post not yet marked as solved
1 Reply
639 Views
Hi, I've been watching quite a lot of material about Object Capture over the past few months and how it works, and I'm wondering how I am supposed to interpret the ideal overlap between sequential photos. Apple says it should be at least 70% between each photo, but since the ideal overlap isn't a tangible concept, how am I supposed to take it into account when I take photos? Any help appreciated. By the way, here's the article mentioning the overlap, near its end: https://developer.apple.com/documentation/realitykit/capturing-photographs-for-realitykit-object-capture
Posted by Kwiky. Last updated.
Post not yet marked as solved
1 Reply
605 Views
Hi there, I am happy to have found Reality Composer. As I continue to create content, I have run into some issues. When exporting my model to .OBJ format to import into Reality Composer, I notice that the objects are not solid. I have gone through the export process to make sure that they are, but when viewed in Reality Composer the closest surfaces are not showing and the object looks see-through. Any idea what is going on?
Posted. Last updated.
Post not yet marked as solved
0 Replies
410 Views
Is it possible to add haptic feedback to a USDZ in Reality Composer? For example, if I want a phone to single-tap after knocking over a 3D model of a block in augmented reality, is that possible to do, and if so, how? If this is not possible to do within Reality Composer, is it possible to add haptic feedback to a USDZ in some other way? For example, if I have a USDZ model of a button in augmented reality view, can I have the phone single-tap when the button model is pushed? If so, how can I accomplish this? Thanks in advance for your help!
Posted by KDP. Last updated.
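Reality Composer itself does not appear to offer a haptics action, but if the scene is shown through an ARView the tap can be detected in code and a system haptic fired from there. A minimal sketch; buttonModel is a placeholder entity name, and the entity needs collision shapes (generateCollisionShapes) for the hit test to find it:

```swift
import RealityKit
import UIKit

/// Plays a haptic tap when the user touches a specific model in the ARView.
final class HapticTapHandler: NSObject {
    private let impact = UIImpactFeedbackGenerator(style: .medium)

    func install(on arView: ARView) {
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ sender: UITapGestureRecognizer) {
        guard let arView = sender.view as? ARView else { return }
        let point = sender.location(in: arView)

        // entity(at:) only returns entities that have collision shapes.
        if let tapped = arView.entity(at: point), tapped.name == "buttonModel" {
            impact.impactOccurred()  // single haptic tap
        }
    }
}
```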
Post not yet marked as solved
1 Reply
379 Views
Hi Apple Developer Team, Wanted to reach out to submit an idea for Reality Composer that I think would be a tremendous addition to the product. Adding a digital trackball to the top right corner of the viewport would greatly improve my user experience. If somehow the trackball was able to snap to an XYZ axis within the scene (top, bottom, left, right, front, back), that would be even more magnificent. Is there a place to see what features are coming on the product roadmap? Would love to see that as well. Thanks! All the best, Alec
Posted. Last updated.
Post not yet marked as solved
0 Replies
569 Views
Is anyone else having trouble with image targets in Reality Composer? The detection itself works fine, properly showing my intended .usdz file when the target is seen in the camera view. However, when the target is not seen, or when the camera is pushed up close to the target, the AR object is shown at an arbitrary location when it should be hidden. Any help is welcome. Thank you very much.
Posted by luorix. Last updated.
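One thing that may help with the question above: an ARImageAnchor reports isTracked, and the overlay can be disabled whenever tracking is lost so it stops floating at a stale position. A sketch, assuming image tracking is enabled in the session configuration and overlayEntity is the content placed on the target:

```swift
import ARKit
import RealityKit

/// Hides the overlay whenever the detected image is no longer tracked, so the
/// model does not linger at a stale location. "overlayEntity" is a placeholder.
final class ImageTargetVisibility: NSObject, ARSessionDelegate {
    var overlayEntity: Entity?

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            // isTracked turns false when the image leaves the view or can no longer be resolved.
            overlayEntity?.isEnabled = imageAnchor.isTracked
        }
    }
}
```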