The set of active semantics on the frame.
SDK
- iOS 13.0+
Framework
- ARKit
Declaration
var frameSemantics: ARConfiguration.FrameSemantics { get set }
Discussion
A frame semantic represents 2D information that ARKit extracts from a frame. Set this property to tell ARKit to provide information about one or more ARConfiguration.FrameSemantics options in every frame.
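Not every device supports every frame semantic, so it's worth checking support before enabling one. The following is a minimal sketch using the supportsFrameSemantics(_:) class method; the session variable mySession is an assumed name matching the snippets below.

```swift
import ARKit

// Person segmentation with depth is only available on newer devices,
// so check for support before enabling the frame semantic.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    let config = ARWorldTrackingConfiguration()
    config.frameSemantics.insert(.personSegmentationWithDepth)
    mySession.run(config)
} else {
    // Fall back to 2D-only segmentation, or run without occlusion.
}
```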
Enable 2D Body Detection
To get information about the 2D location of a person that ARKit recognizes in a frame, you enable the bodyDetection frame semantic.
if let config = mySession.configuration as? ARBodyTrackingConfiguration {
config.frameSemantics.insert(.bodyDetection)
// Run the configuration to effect a frame semantics change.
mySession.run(config)
}
Enable People Occlusion
To indicate that a person should overlap your app's virtual content regardless of the person's depth in the camera's field of view, you enable the personSegmentation frame semantic.
if let config = mySession.configuration as? ARWorldTrackingConfiguration {
config.frameSemantics.insert(.personSegmentation)
// Run the configuration to effect a frame semantics change.
mySession.run(config)
}
This option works for standard renderers like ARView and ARSCNView, and is suitable for virtual reality or green screen scenarios.
Enable People Occlusion with Depth
To indicate that a person should overlap your app's virtual content only when the person is closer to the camera than the virtual content, you enable the personSegmentationWithDepth frame semantic.
if let config = mySession.configuration as? ARWorldTrackingConfiguration {
config.frameSemantics.insert(.personSegmentationWithDepth)
// Run the configuration to effect a frame semantics change.
mySession.run(config)
}
When you use standard renderers like ARView or ARSCNView, this operation is done pixel by pixel, according to the renderer's z-buffer.
Enable People Occlusion in Custom Renderers
If you implement your own renderer, you use the segmentationBuffer and estimatedDepthData properties of ARFrame to implement people occlusion. See ARMatteGenerator for more information.
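As a sketch of what a custom renderer consumes, both buffers arrive on every ARFrame through the session delegate. This minimal, illustrative example only reads the buffers; the delegate class name and the hand-off to a rendering pipeline are assumptions, not part of the ARKit API.

```swift
import ARKit

class CustomRendererDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // segmentationBuffer: a matte of the pixels ARKit classifies as a person.
        guard let segmentation = frame.segmentationBuffer,
              // estimatedDepthData: per-pixel depth estimates for those people.
              let depth = frame.estimatedDepthData else { return }
        // Pass both buffers to your rendering pipeline; ARMatteGenerator can
        // refine the matte to the full resolution of the camera image.
        _ = (segmentation, depth)
    }
}
```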