Class

ARFrame

A video image captured as part of a session with position tracking information.

Declaration

@interface ARFrame : NSObject

Overview

A running session continuously captures video frames from the device camera while ARKit analyzes them to estimate the user's position in the world. ARKit delivers this information to you in the form of an ARFrame object, at the frequency of your app's frame rate.

Your app has two ways to receive an ARFrame:

- Implement the ARSessionDelegate method session:didUpdateFrame: to receive each frame as soon as ARKit captures it.

- Read the session's currentFrame property as needed, for example once per pass of your own render loop.
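In practice, the two delivery paths are the ARSessionDelegate callback and the session's currentFrame property. A minimal Objective-C sketch (the surrounding `session` object and delegate class are assumed):

```objc
// Option 1: adopt ARSessionDelegate and receive every frame as it is captured.
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    CVPixelBufferRef pixelBuffer = frame.capturedImage;
    // ... process the frame's image, camera, and anchors here ...
}

// Option 2: read the session's currentFrame property on demand,
// for example once per pass of your own render loop.
ARFrame *frame = session.currentFrame;  // nil until the session starts delivering frames
```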

Topics

Accessing Captured Video Frames

capturedImage

A pixel buffer containing the image captured by the camera.

timestamp

The time at which the frame was captured.

capturedDepthData

The depth map, if any, captured along with the video frame.

capturedDepthDataTimestamp

The time at which depth data for the frame (if any) was captured.

Checking World Mapping Status

worldMappingStatus

The feasibility of generating or relocalizing a world map for this frame.

ARWorldMappingStatus

Possible values describing how thoroughly ARKit has mapped the area visible in a given frame.
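For example, an app that saves world maps can gate the save operation on this status. A hedged sketch (error handling and archiving omitted; `session` is assumed):

```objc
// Offer to save a world map only once the visible area is fully mapped.
if (frame.worldMappingStatus == ARWorldMappingStatusMapped) {
    [session getCurrentWorldMapWithCompletionHandler:^(ARWorldMap *worldMap,
                                                       NSError *error) {
        // Archive worldMap for later relocalization.
    }];
}
```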

Examining Scene Parameters

camera

Information about the camera position, orientation, and imaging parameters used to capture the frame.

lightEstimate

An estimate of lighting conditions based on the camera image.

- displayTransformForOrientation:viewportSize:

Returns an affine transform for converting between normalized image coordinates and a coordinate space appropriate for rendering the camera image onscreen.
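For example, to bring a point from normalized image coordinates into the view's coordinate space (a sketch; `orientation`, `viewSize`, and `normalizedImagePoint` are illustrative locals):

```objc
// The transform maps normalized image coordinates (0..1 on each axis)
// to normalized coordinates for the given viewport and orientation.
CGAffineTransform transform =
    [frame displayTransformForOrientation:orientation viewportSize:viewSize];
CGPoint normalizedViewPoint =
    CGPointApplyAffineTransform(normalizedImagePoint, transform);
// Scale up to actual view coordinates.
CGPoint viewPoint = CGPointMake(normalizedViewPoint.x * viewSize.width,
                                normalizedViewPoint.y * viewSize.height);
```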

Tracking and Finding Objects

anchors

The list of anchors representing positions tracked or objects detected in the scene.

- hitTest:types:

Searches for real-world objects or AR anchors in the captured camera image.
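For example, to look for an existing detected plane under the center of the image (a sketch; coordinates are in normalized image space):

```objc
// Hit-test the center of the captured image against detected planes.
// (0,0) is the top-left of the image, (1,1) the bottom-right.
NSArray<ARHitTestResult *> *results =
    [frame hitTest:CGPointMake(0.5, 0.5)
             types:ARHitTestResultTypeExistingPlaneUsingExtent];
ARHitTestResult *nearest = results.firstObject;  // results are sorted nearest-first
```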

Debugging Scene Detection

rawFeaturePoints

The current intermediate results of the scene analysis ARKit uses to perform world tracking.

ARPointCloud

A collection of points in the world coordinate space of the AR session.

Finding Real-World Surfaces

- raycastQueryFromPoint:allowingTarget:alignment:

Creates a raycast query that originates from a point on the screen.
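A sketch of building a query from a frame and executing it on the session (`screenPoint` and `session` are illustrative locals; target and alignment values are one possible choice):

```objc
// Build a raycast query for a screen point, then execute it on the session.
ARRaycastQuery *query =
    [frame raycastQueryFromPoint:screenPoint
                  allowingTarget:ARRaycastTargetEstimatedPlane
                       alignment:ARRaycastTargetAlignmentAny];
NSArray<ARRaycastResult *> *results = [session raycast:query];
```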

Tracking Human Bodies in 2D

detectedBody

The screen position information of a body that ARKit recognizes in the camera image.

ARBody2D

The screen-space representation of a person ARKit recognizes in the camera feed.

Occluding Virtual Content with People

segmentationBuffer

A buffer that contains pixel information identifying the shape of objects from the camera feed that you use to occlude virtual content.

estimatedDepthData

A buffer that represents the estimated depth values from the camera feed that you use to occlude virtual content.

ARSegmentationClass

A categorization of a pixel that defines a type of content you use to occlude your app's virtual content.
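These buffers are populated only when person segmentation is enabled on the session's configuration. A hedged sketch of opting in (`session` is assumed):

```objc
// Enable people occlusion by requesting the person-segmentation-with-depth
// frame semantic, when the device supports it.
ARWorldTrackingConfiguration *config = [ARWorldTrackingConfiguration new];
if ([ARWorldTrackingConfiguration supportsFrameSemantics:
         ARFrameSemanticPersonSegmentationWithDepth]) {
    config.frameSemantics |= ARFrameSemanticPersonSegmentationWithDepth;
}
[session runWithConfiguration:config];
```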

Enabling Camera Grain

cameraGrainIntensity

A value that specifies the amount of grain present in the camera grain texture.

cameraGrainTexture

A tileable Metal texture created by ARKit to match the visual characteristics of the current video stream.

Relationships

Inherits From

NSObject

Conforms To

NSCopying

See Also

Camera

Occluding Virtual Content with People

Cover your app’s virtual content with people that ARKit perceives in the camera feed.

ARCamera

Information about the camera position and imaging characteristics for a given frame.