Explore 3D body pose and person segmentation in Vision


Discuss the WWDC23 Session Explore 3D body pose and person segmentation in Vision


Posts under wwdc2023-111241 tag

4 Posts
Post not yet marked as solved · 0 Replies · 321 Views
According to Apple's WWDC23 tutorial, the origin of the coordinate system for an immersive space is located at the user's feet. Is there any way to retrieve the device's orientation in the real world, i.e., its horizontal and vertical angles?
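One way the poster might get at this, sketched with visionOS ARKit's `WorldTrackingProvider` (the API names are from ARKit; the angle math below is my own illustration, not a verified recipe):

```swift
import ARKit
import QuartzCore
import simd

// Hedged sketch: query the device pose relative to the world origin
// (which sits at the user's feet) and derive approximate horizontal
// and vertical angles from the device's forward direction.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func deviceAngles() async throws -> (yaw: Float, pitch: Float)? {
    try await session.run([worldTracking])
    guard let anchor = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return nil }
    let m = anchor.originFromAnchorTransform
    // The device looks down its local -Z axis.
    let forward = -simd_make_float3(m.columns.2)
    let yaw = atan2(forward.x, -forward.z)              // horizontal angle
    let pitch = asin(forward.y / simd_length(forward))  // vertical angle
    return (yaw, pitch)
}
```

This assumes an immersive space is open (world tracking only runs then) and that yaw/pitch relative to the world origin is what "horizontal and vertical angle" means here.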
Post marked as solved · 4 Replies · 902 Views
Hi, has anyone gotten the human body pose in 3D sample at the following link working? https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_3d_with_vision I installed iPadOS 17 on a 9th-gen iPad. The sample loads on the Mac and the iPad, but after I select an image it shows a spinning wheel and never returns anything. I'd like to play with and learn more about the sample, so any pointers or help would be greatly appreciated. Similarly, the Detecting animal body poses with Vision sample shows up blank for me: https://developer.apple.com/documentation/vision/detecting_animal_body_poses_with_vision Or do the samples require a device with LiDAR? Thank you in advance.
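For anyone reproducing this outside the sample app, a minimal sketch of the request the sample wraps (the `imageURL` and the joint lookup are illustrative; note the request runs on-device and can take a while on older hardware, which may look like a stuck spinner):

```swift
import Vision

// Hedged sketch: run VNDetectHumanBodyPose3DRequest on a still image.
// Requires iOS 17 / macOS 14 or later.
func detect3DPoses(at imageURL: URL) throws -> [VNHumanBodyPose3DObservation] {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])
    return request.results ?? []
}

// Illustrative usage: inspect the root joint of the first detected body.
// `imageURL` is a placeholder for your own file URL.
if let observation = try? detect3DPoses(at: imageURL).first,
   let root = try? observation.recognizedPoint(.root) {
    print(root.position)  // joint transform in model space
}
```

LiDAR should not be a hard requirement for this request on a still image, though results and timing can vary by device; the blank/spinning behavior the poster describes may be sample- or OS-build-specific rather than hardware-gated.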
Posted by jamesboo. Last updated.
Post not yet marked as solved · 0 Replies · 478 Views
It seems like the same pipeline that enables VNDetectHumanBodyPose3DRequest could be used to upgrade the hand-tracking model as well. Can we expect that upgrade this year? I assume Vision Pro currently uses only the 2020 2D workflow, correct?
Posted by NovGal. Last updated.