Body tracking AND People Occlusion at the same time?

Hello,


I've been able to run the body tracking code example with the skeleton tracking a person's movement. I would like to add People Occlusion to this scenario. The code example depends on the ARBodyTrackingConfiguration subclass of ARConfiguration.
After calling

ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentation)

or

ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)

I got the value 'false' for both.
To double-check, I have tried to turn on People Occlusion by setting the frameSemantics on the configuration:

let config = ARBodyTrackingConfiguration()
config.frameSemantics.insert(.personSegmentation)
// or config.frameSemantics.insert(.personSegmentationWithDepth)

But this leads to a runtime exception complaining about the frameSemantics options I've set.
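
(For what it's worth, guarding the insert behind the support check avoids the crash; a minimal sketch, though on current devices it simply leaves the semantic off because the check returns false:)

let config = ARBodyTrackingConfiguration()
if ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    // Never entered on current devices: the check above returns false.
    config.frameSemantics.insert(.personSegmentationWithDepth)
}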
-----
I've seen that ARWorldTrackingConfiguration supports .personSegmentation and .bodyDetection (according to the .supportsFrameSemantics() method), so I tried to achieve body tracking + people occlusion that way. I've noticed these two frameSemantics options cannot be turned on at the same time with an ARWorldTrackingConfiguration (it causes another runtime exception). Despite this, the method .supportsFrameSemantics() returns true for both .personSegmentation and .bodyDetection.

If I use the ARWorldTrackingConfiguration and only turn on the .bodyDetection frame semantic, there are no runtime exceptions, but the session isn't returning any ARBodyAnchors the way the original 3D example does (see the documentation quote below):
"When ARKit identifies a person in the back camera feed, it calls

session:didAddAnchors:
, passing you an
ARBodyAnchor
you can use to track the body's movement."
Source: https://developer.apple.com/documentation/arkit/arbodytrackingconfiguration
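
For reference, here's roughly how I'm running that variant; just a sketch, where session stands in for the ARSession of whatever view is rendering:

let config = ARWorldTrackingConfiguration()
config.frameSemantics.insert(.bodyDetection) // supported on its own
// Adding .personSegmentation here as well triggers the combined-options exception.
session.run(config) // 'session' is assumed to be the view's ARSession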

-----------------------------
Am I missing something obvious? Is it possible to somehow do People Occlusion and Body Tracking at the same time?
If I want to achieve body tracking, must I use the ARBodyTrackingConfiguration subclass, or is there some other way to turn on the .bodyDetection frame semantic using a different subclass of ARConfiguration?


EDIT: If it is not currently possible, is this something Apple intends to support in the future?

Replies

You're correct that this is not currently possible.


I'd assume the computation required to do each of those things means that, together, the experience would not be good enough.

I expect that if it ever becomes possible to do these two together, it will only be on an A13 chip or higher.

Any update on this? Would be nice to do both at the same time.

"Despite this, the method .supportsFrameSemantics() return true for both .personSegmentation and .bodyDetection"


While it reports true for each of these options individually, when these options are combined ARKit reports false.


ARWorldTrackingConfiguration.supportsFrameSemantics([.personSegmentation, .bodyDetection]) // false
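
In other words, test the exact combination you intend to enable rather than each option on its own. A minimal sketch:

let desired: ARConfiguration.FrameSemantics = [.personSegmentation, .bodyDetection]
if ARWorldTrackingConfiguration.supportsFrameSemantics(desired) {
    // Not reached on current devices; the combination is unsupported.
    let config = ARWorldTrackingConfiguration()
    config.frameSemantics = desired
}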


"EDIT: If it is not currently possible, is this something Apple intends to support in the future?"


You should file an enhancement request using Feedback Assistant.


"If I want to achieve body tracking, must I use the ARBodyTrackingConfiguration subclass or is there some other way to turn on the .bodyDetection frameSemantic enum using a different subclass of ARConfiguration?"


The only way to get an ARBodyAnchor (which provides 3D joints) is through ARBodyTrackingConfiguration. The bodyDetection frame semantic (when enabled on a configuration that supports it, for example ARWorldTrackingConfiguration) tells ARKit to populate the frame's detectedBody property (which provides 2D joints) whenever the frame is updated in session(_:didUpdate:).
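
For example, with a world-tracking session you'd read the 2D body off the frame like this (a minimal sketch; the class name is illustrative, and the delegate is assumed to be set on the session):

import ARKit

final class BodyFrameDelegate: NSObject, ARSessionDelegate {
    // With .bodyDetection enabled on an ARWorldTrackingConfiguration,
    // ARKit surfaces a 2D body on each frame instead of an ARBodyAnchor.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let body = frame.detectedBody else { return }
        // jointLandmarks are normalized image-space points (0...1).
        let joints = body.skeleton.jointLandmarks
        print("2D body with \(joints.count) joint landmarks")
    }
}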

I am also missing this feature, but I really hope there will be a solution in the near future. Apps like “Wanna Kicks” show that it is possible.