Meet Reality Composer Pro


Discuss the WWDC23 Session Meet Reality Composer Pro


Posts under wwdc2023-10083 tag

6 Posts
Post not yet marked as solved
0 Replies
523 Views
I'm currently testing photogrammetry by capturing photos with the sample project https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture and then processing them on my laptop with https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app. It worked perfectly until the latest Sonoma beta updates. It started with console warnings saying my samples lacked a depthMap, and now it simply refuses to create samples from my HEIC files.

I created HEIC files both with and without depth data to check whether the depth data was badly formatted, but it seems the HEIC format itself is no longer accepted. I also imported HEIC files captured with the standard iOS Camera app and transferred via the Photos app, and they don't work either, so it isn't an issue of poorly formatted files. If I convert the files to PNG, it works again, but of course, as announced at WWDC 2023, I expect the photogrammetry pipeline to leverage the LiDAR data!

I check every beta update waiting for an improvement, and the photogrammetry logs change each time, so I assume the Apple teams are working on it. The Object Capture model workflow in Reality Composer Pro no longer accepts HEIC files either. If there are any workarounds, please advise!
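For reference, a batch conversion along these lines gets the pipeline running again, at the cost of the depth information. This is only a sketch using ImageIO: the folder paths are placeholders, and a plain re-encode like this copies only the primary image, so any depth data embedded in the HEIC is dropped.

```swift
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Placeholder paths -- point these at your capture folder and an output folder.
let heicFolder = URL(fileURLWithPath: "/path/to/Captures", isDirectory: true)
let pngFolder  = URL(fileURLWithPath: "/path/to/Captures-PNG", isDirectory: true)

try FileManager.default.createDirectory(at: pngFolder, withIntermediateDirectories: true)

let files = try FileManager.default.contentsOfDirectory(at: heicFolder, includingPropertiesForKeys: nil)
for file in files where file.pathExtension.lowercased() == "heic" {
    // Read only the primary image; embedded depth/auxiliary data is not carried over.
    guard let source = CGImageSourceCreateWithURL(file as CFURL, nil),
          let image  = CGImageSourceCreateImageAtIndex(source, 0, nil) else { continue }

    let destURL = pngFolder
        .appendingPathComponent(file.deletingPathExtension().lastPathComponent)
        .appendingPathExtension("png")

    guard let dest = CGImageDestinationCreateWithURL(destURL as CFURL,
                                                     UTType.png.identifier as CFString, 1, nil) else { continue }
    CGImageDestinationAddImage(dest, image, nil)
    CGImageDestinationFinalize(dest)
}
```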
Posted. Last updated.
Post marked as solved
1 Reply
879 Views
I saw at the WWDC23 session "Meet Object Capture for iOS" that the new tool released today along with Xcode 15 beta 2, called "Reality Composer Pro", will be capable of creating 3D models with Apple's PhotogrammetrySession. However, I don't see any of those features in the tool. Has anyone managed to find the feature for creating 3D models as shown in the session?
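In the meantime, models can still be generated programmatically with PhotogrammetrySession on macOS, along the lines of the command-line sample. A minimal sketch follows; the paths and detail level are placeholders and most error handling is omitted.

```swift
import Foundation
import RealityKit

// Placeholder paths: a folder of captured images in, a USDZ model out.
let imagesFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
let outputModel  = URL(fileURLWithPath: "/path/to/Model.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal

let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

// Watch the async output stream for progress, errors, and the finished model.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url.path)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            print("Processing complete.")
            exit(0)
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])
RunLoop.main.run()   // keep the command-line tool alive while the session works
```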
Posted by KKodiac. Last updated.
Post not yet marked as solved
1 Reply
1.4k Views
Do I understand correctly that to build fully immersive apps for Vision Pro with Unity (Universal Render Pipeline), we can use Unity's Shader Graph to make custom shaders as usual, but for immersive mixed-reality apps we cannot use Shader Graph anymore and instead have to create shaders in Reality Composer Pro? How does bringing a Reality Composer Pro shader into Unity work? Does it simply work in Unity, or does it require special adaptation? Are there cases where we should avoid Reality Composer Pro and use Unity's Shader Graph for immersive Vision Pro apps? For instance, we may lose real-time lighting adaptation for virtual objects, but on the other hand we would be able to use Shader Graph.
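For context on the native side (outside Unity), a Shader Graph material authored in Reality Composer Pro is referenced from RealityKit code roughly like the sketch below. The scene and material names are placeholders, and it assumes the RealityKitContent package that Xcode generates alongside a Reality Composer Pro project.

```swift
import SwiftUI
import RealityKit
import RealityKitContent   // Swift package generated next to the Reality Composer Pro project

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // "Scene.usda" and "/Root/MyShaderMaterial" are placeholder names for an
            // RCP scene file and a Shader Graph material defined inside it.
            if let material = try? await ShaderGraphMaterial(named: "/Root/MyShaderMaterial",
                                                             from: "Scene.usda",
                                                             in: realityKitContentBundle) {
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [material])
                content.add(sphere)
            }
        }
    }
}
```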
Posted by NikitaSh. Last updated.