Interesting. My previous test did all the things you mentioned, but the captureHighResolutionFrame call that failed was made immediately after calling session.run. It works when moved to a tap handler.
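For reference, a minimal sketch of the arrangement that works for me (view setup trimmed; the view and handler names are mine):

```swift
import ARKit
import UIKit

class ViewController: UIViewController {
    let arView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(arView)
        arView.session.run(ARWorldTrackingConfiguration())
        // Calling captureHighResolutionFrame here, immediately after run(_:),
        // failed in my tests -- the session hasn't produced frames yet.
        arView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    @objc func handleTap() {
        // By the time the user taps, the session is running and the call succeeds.
        arView.session.captureHighResolutionFrame { frame, error in
            if let frame = frame {
                print("Captured frame at \(frame.camera.imageResolution)")
            } else if let error = error {
                print("Capture failed: \(error)")
            }
        }
    }
}
```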
Thanks
Sweet, thanks for digging into that!
Thanks for the reply! I don't see anything having to do with export in UIDocumentPickerViewController class definition though, so how should the following be done in iOS 14?
UIDocumentPickerViewController(url: exportUrl, in: .exportToService)
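For anyone else migrating, the iOS 14 replacement appears to be the URL-array initializers. A sketch (the export URL is a placeholder):

```swift
import UIKit

// Placeholder file URL standing in for the real export target.
let exportUrl = URL(fileURLWithPath: "/tmp/export.usdz")

// iOS 14: the mode-based initializer is deprecated in favor of
// initializers that take an array of URLs.

// Moves the file to the destination the user picks:
let picker = UIDocumentPickerViewController(forExporting: [exportUrl])

// Copies the file instead, leaving the original in place:
let copyingPicker = UIDocumentPickerViewController(forExporting: [exportUrl], asCopy: true)
```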
Thanks again,
Jim
Thanks. Looking forward to the evolution of the RealityKit ecosystem!

Best regards,
Jim
One thing that's currently a blocker with RealityKit is the lack of support for custom geometry. We needed to show a shaded mesh during a reconstruction session, so we were driven back to ARSCNView, which worked great but of course requires abandoning ARView. Around 2:45 in this recently posted tech talk there's a comment: "We're overlaying the ARFrame image with a mesh being generated by ARKit using the LiDAR sensor... and the colors are based on a classification of what the mesh overlays."

https://developer.apple.com/videos/play/tech-talks/609/

It would be extremely helpful to know if this demo used ARSCNView. Could you tell us how this was done?

Best regards,
Jim Selikoff
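In case it helps anyone else, a minimal sketch of the configuration that drove us to ARSCNView, assuming an existing `sceneView: ARSCNView` (the delegate-based mesh rendering is omitted):

```swift
import ARKit

// Enable LiDAR scene reconstruction with per-face classification.
// ARSCNView can then be handed mesh geometry through its delegate;
// ARView/RealityKit currently offers no custom-geometry path for this.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    configuration.sceneReconstruction = .meshWithClassification
}
sceneView.session.run(configuration)
```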
Not sure if this is the only problem, but I think that indexCount should be faces.count * faces.indexCountPerPrimitive.
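To illustrate, a sketch of building an SCNGeometryElement from an ARMeshGeometry's faces using that index count (this helper is hypothetical, not the code from the thread):

```swift
import ARKit
import SceneKit

// Build renderable SceneKit geometry from a LiDAR mesh anchor.
func makeGeometry(from anchor: ARMeshAnchor) -> SCNGeometry {
    let mesh = anchor.geometry
    let vertices = mesh.vertices
    let faces = mesh.faces

    let vertexSource = SCNGeometrySource(
        buffer: vertices.buffer,
        vertexFormat: vertices.format,
        semantic: .vertex,
        vertexCount: vertices.count,
        dataOffset: vertices.offset,
        dataStride: vertices.stride)

    // faces.count is the number of primitives (triangles), so the total
    // number of indices is faces.count * faces.indexCountPerPrimitive.
    let indexCount = faces.count * faces.indexCountPerPrimitive
    let indexData = Data(
        bytes: faces.buffer.contents(),
        count: indexCount * faces.bytesPerIndex)
    let element = SCNGeometryElement(
        data: indexData,
        primitiveType: .triangles,
        primitiveCount: faces.count,
        bytesPerElement: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource], elements: [element])
}
```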
Some discussions on Twitter indicate that the LiDAR depth data will not be accessible. Can you say anything about this?
Pulled from GitHub, the project builds and runs, at least on an iPhone 7+.

https://github.com/Gruppio/SwiftShot.git
I will let you know if it builds after correcting all the calls to os_log!