Configure input devices, output media, preview views, and basic settings before capturing photos or video.
AVCaptureSession is the basis for all media capture in iOS and macOS. It manages your app’s exclusive access to the OS capture infrastructure and capture devices, as well as the flow of data from input devices to media outputs. How you configure connections between inputs and outputs defines the capabilities of your capture session. For example, the diagram below shows a capture session that can capture both photos and movies and provides a camera preview, using the iPhone back camera and microphone.
Connect Inputs and Outputs to the Session
All capture sessions need at least one capture input and capture output. Capture inputs (AVCaptureInput subclasses) are media sources—typically recording devices like the cameras and microphone built into an iOS device or Mac. Capture outputs (AVCaptureOutput subclasses) use data provided by capture inputs to produce media, like image and movie files.
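As a concrete sketch of the input side, the code below creates a session and adds the built-in back camera as an input (names like `captureSession` and `videoInput` are illustrative, and error handling is reduced to a `fatalError` for brevity):

```swift
import AVFoundation

let captureSession = AVCaptureSession()

// Choose the built-in wide-angle back camera. On hardware without one,
// AVCaptureDevice.default(_:for:position:) returns nil.
guard let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video,
                                                position: .back),
      let videoInput = try? AVCaptureDeviceInput(device: videoDevice),
      captureSession.canAddInput(videoInput)
else {
    fatalError("Unable to add camera input.")
}
captureSession.addInput(videoInput)
```

Always check `canAddInput(_:)` before calling `addInput(_:)`; adding an input the session can’t accept raises a runtime exception.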
Next, add outputs for the kinds of media you plan to capture from the camera you’ve selected. For example, to enable capturing photos, add an AVCapturePhotoOutput to the session:
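A minimal sketch, assuming a `captureSession` that already has a camera input:

```swift
import AVFoundation

let captureSession = AVCaptureSession()  // assume a camera input was added earlier

let photoOutput = AVCapturePhotoOutput()

// The .photo preset configures the session for high-quality still capture.
captureSession.sessionPreset = .photo

if captureSession.canAddOutput(photoOutput) {
    captureSession.addOutput(photoOutput)
}
```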
A session can have multiple inputs and outputs. For example, to record both video and audio in a movie, add inputs for both the camera and microphone devices; to capture both photos and movies from the same camera, add both AVCapturePhotoOutput and AVCaptureMovieFileOutput to the session.
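The multiple-inputs-and-outputs case can be sketched as follows (again assuming a `captureSession` with a camera input already attached):

```swift
import AVFoundation

let captureSession = AVCaptureSession()  // assume a camera input was added earlier

// Add a microphone input so recorded movies include audio.
if let audioDevice = AVCaptureDevice.default(for: .audio),
   let audioInput = try? AVCaptureDeviceInput(device: audioDevice),
   captureSession.canAddInput(audioInput) {
    captureSession.addInput(audioInput)
}

// Add a movie file output alongside any photo output on the same session.
let movieOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(movieOutput) {
    captureSession.addOutput(movieOutput)
}
```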
Display a Camera Preview
It’s important to let the user see input from the camera before choosing to snap a photo or start video recording, much like the viewfinder of a traditional camera. You can provide such a preview by connecting an AVCaptureVideoPreviewLayer to your capture session; the layer displays a live video feed from the camera whenever the session is running.
AVCaptureVideoPreviewLayer is a Core Animation layer, so you can display and style it in your interface as you would any other CALayer subclass. The simplest way to add a preview layer to a UIKit app is to define a UIView subclass whose layerClass is AVCaptureVideoPreviewLayer, as shown below.
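One way to write such a view (the class name `PreviewView` and the convenience accessor are illustrative):

```swift
import UIKit
import AVFoundation

class PreviewView: UIView {
    // Back this view with an AVCaptureVideoPreviewLayer instead of a plain CALayer.
    override class var layerClass: AnyClass {
        AVCaptureVideoPreviewLayer.self
    }

    // Convenience accessor that exposes the backing layer with its real type.
    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        layer as! AVCaptureVideoPreviewLayer
    }
}
```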
Then, to use the preview layer with a capture session, set the layer’s session property to your capture session.
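For example, assuming a view named `previewView` backed by an AVCaptureVideoPreviewLayer and a configured `captureSession`:

```swift
// Connect the preview layer to the session; the layer shows live video
// whenever the session is running.
previewView.videoPreviewLayer.session = captureSession
```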
Run the Capture Session
After you’ve configured inputs, outputs, and previews, call startRunning() to let data flow from inputs to outputs, and stopRunning() to stop the flow.
With some capture outputs, running the session is all you need to begin media capture. For example, if your session contains an AVCaptureVideoDataOutput, you begin receiving delivered video frames as soon as the session is running.
With other capture outputs, you first start the session running, then use the capture output class itself to initiate capture. In a photography app, for example, running the session enables a viewfinder-style preview, but you use the AVCapturePhotoOutput capturePhoto(with:delegate:) method to snap a picture.
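A sketch of that call, assuming `photoOutput` is the session’s AVCapturePhotoOutput and `self` conforms to AVCapturePhotoCaptureDelegate:

```swift
// Default settings; customize AVCapturePhotoSettings for flash, format, etc.
let settings = AVCapturePhotoSettings()
photoOutput.capturePhoto(with: settings, delegate: self)
```

The delegate’s photoOutput(_:didFinishProcessingPhoto:error:) callback then delivers the captured image data.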