Sample Code

Occluding Virtual Content with People

Cover your app’s virtual content with people that ARKit perceives in the camera feed.

Overview

By default, virtual content covers anything in the camera feed. For example, when a person passes in front of a virtual object, the object is drawn on top of the person, which can break the illusion of the AR experience.

Screenshot of a virtual toaster drawn on top of a person.

To cover your app’s virtual content with people that ARKit perceives in the camera feed, you enable people occlusion. Your app can then render a virtual object behind people who pass in front of the camera. ARKit accomplishes the occlusion by identifying regions in the camera feed where people reside, and preventing virtual content from drawing into those regions’ pixels.

Screenshot of the virtual toaster behind the person.

This sample renders its graphics using RealityKit, but you can follow the same steps to use people occlusion with SceneKit. To enable people occlusion in Metal apps, see Effecting People Occlusion in Custom Renderers.
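
For example, here’s a minimal sketch of the same steps in a SceneKit-based app. The sceneView instance is an assumed name used only for illustration; the frame semantic and configuration APIs are the same ones this article describes.

import ARKit

// A SceneKit view that displays AR content; sceneView is an assumed name.
let sceneView = ARSCNView(frame: .zero)

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
sceneView.session.run(configuration)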

Verify Device Support for People Occlusion

People occlusion is supported on devices with an Apple A12 chip or later. Before attempting to enable people occlusion, verify that the user’s device supports it.

guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
    fatalError("People occlusion is not supported on this device.")
}

Enable People Occlusion

If the user’s device supports people occlusion, enable it by adding the personSegmentationWithDepth option to your configuration’s frame semantics.

config.frameSemantics.insert(.personSegmentationWithDepth)

Any time you change your session’s configuration, rerun the session to effect the configuration change.

arView.session.run(config)

The personSegmentationWithDepth option specifies that a person occludes a virtual object only when the person is closer to the camera than the virtual object.

Screenshot of people occlusion with depth.
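
Putting the support check, the frame semantic, and the session rerun together, a sketch of a single setup routine for a RealityKit app might look like the following. The function name setUpPeopleOcclusion and the arView parameter are illustrative assumptions, not part of the sample.

import ARKit
import RealityKit

func setUpPeopleOcclusion(for arView: ARView) {
    // Verify device support before enabling the frame semantic.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        print("People occlusion is not supported on this device.")
        return
    }

    let config = ARWorldTrackingConfiguration()
    config.frameSemantics.insert(.personSegmentationWithDepth)

    // Rerun the session to effect the configuration change.
    arView.session.run(config)
}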

Alternatively, the personSegmentation frame semantic gives you the option of always occluding virtual content with any people that ARKit perceives in the camera feed, regardless of their distance from the camera. This technique is useful, for example, in green-screen scenarios.

Screenshot of people occlusion with virtual background.
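
As an illustrative sketch, switching a running session to depth-independent segmentation might look like this, assuming the config and arView instances from the earlier snippets.

if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    // Replace the depth-based semantic with depth-independent segmentation.
    config.frameSemantics.remove(.personSegmentationWithDepth)
    config.frameSemantics.insert(.personSegmentation)
    arView.session.run(config)
}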

Disable People Occlusion

You might choose to disable people occlusion for performance reasons if, for example, no virtual content is present in the scene, or if the device has reached a serious or critical thermalState (see ProcessInfo.ThermalState). To temporarily disable people occlusion, remove that option from your app’s frameSemantics.

config.frameSemantics.remove(.personSegmentationWithDepth)

Then, rerun your session to effect the configuration change.

arView.session.run(config)
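
For example, one possible sketch of reacting to thermal-state changes by toggling people occlusion, again assuming the config and arView instances from the earlier snippets:

// Keep a reference to the observer token for as long as you need it.
let thermalObserver = NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    switch ProcessInfo.processInfo.thermalState {
    case .serious, .critical:
        // Shed the people-occlusion workload while the device is hot.
        config.frameSemantics.remove(.personSegmentationWithDepth)
    default:
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }
    }
    // Rerun the session to effect the configuration change.
    arView.session.run(config)
}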

See Also

Camera

class ARFrame

A video image captured as part of a session with position tracking information.

class ARCamera

Information about the camera position and imaging characteristics for a given frame.