The set of active semantics on the frame.
- iOS 13.0+
You can choose whether ARKit reports information about a particular per-frame metric, or semantic. Before enabling a frame semantic, call
supportsFrameSemantics(_:) to ensure the device supports it.
Enable 2D Body Detection
To get information about the 2D location of a person that ARKit recognizes in a frame, you enable the
bodyDetection frame semantic.
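A minimal sketch of enabling 2D body detection follows; the configuration and support check use the standard ARKit API, while the `arView` name is an assumption standing in for your app's existing AR view:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Check support first; 2D body detection requires a recent device.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.bodyDetection) {
    configuration.frameSemantics.insert(.bodyDetection)
}
// `arView` is assumed to be an existing ARSCNView or ARView in your app.
arView.session.run(configuration)
```

Once the session is running, each ARFrame's detectedBody property carries the 2D skeleton that ARKit recognized.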
Enable People Occlusion
People occlusion is a feature that enables people in the camera feed to cover your app’s virtual content.
To indicate that a person should overlap your app's virtual content when the person is closer to the camera than the virtual content, add the
personSegmentationWithDepth option to your configuration's frame semantics.
To indicate that a person should overlap your app's virtual content regardless of the person's depth in the scene, use the
personSegmentation frame semantic instead. This option is particularly appropriate for green-screen scenarios.
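The two occlusion options above can be sketched together as follows; the fallback ordering is an illustrative choice, and `arView` again stands in for your app's AR view:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Prefer depth-aware occlusion, where people cover virtual content
// only when they are closer to the camera than that content.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
} else if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    // Fallback: people always cover virtual content, regardless of
    // distance (the green-screen style of occlusion).
    configuration.frameSemantics.insert(.personSegmentation)
}
arView.session.run(configuration)
```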
If you implement your own renderer, use
segmentationBuffer and estimatedDepthData to implement people occlusion yourself.
ARMatteGenerator helps you by providing mattes. For a sample app that demonstrates the matte generator and people occlusion, see Effecting People Occlusion in Custom Renderers.
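A sketch of generating a matte for use in a custom renderer; the matte generator calls are the real ARKit API, while `device`, `commandBuffer`, and `currentFrame` are assumed to come from your own Metal setup:

```swift
import ARKit
import Metal

// Create the generator once, alongside the rest of your Metal state.
// `device` is assumed to be your app's MTLDevice.
let matteGenerator = ARMatteGenerator(device: device, matteResolution: .full)

// Per frame: produce an alpha matte marking where ARKit sees people.
// `currentFrame` and `commandBuffer` are assumed to come from your
// session delegate and command queue, respectively.
let alphaTexture = matteGenerator.generateMatte(from: currentFrame,
                                                commandBuffer: commandBuffer)
// Sample alphaTexture in your fragment shader, together with the frame's
// estimated depth, to composite people over your virtual content.
```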
If you enable scene reconstruction, ARKit adjusts the mesh according to any people it detects in the camera feed. ARKit removes any part of the scene mesh that overlaps with people, as defined by the personSegmentation or personSegmentationWithDepth frame semantics. For more information about scene reconstruction, see Visualizing and Interacting with a Reconstructed Scene.
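Combining the two features can be sketched as follows; the support checks are the standard ARKit API, and `arView` is an assumed name for your app's AR view:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Scene reconstruction requires a LiDAR-equipped device.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
// With a person segmentation semantic enabled, ARKit omits people
// from the reconstructed scene mesh.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
arView.session.run(configuration)
```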