An object that manages capture activity and coordinates the flow of data from input devices to capture outputs.
SDKs
- iOS 4.0+
- macOS 10.7+
- Mac Catalyst 13.0+
Framework
- AVFoundation
Declaration
@interface AVCaptureSession : NSObject
Overview
To perform real-time capture, you instantiate an AVCaptureSession object and add appropriate inputs and outputs. The following code fragment illustrates how to configure a capture session to record audio.
// Create the capture session.
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];

// Look up the default audio device.
AVCaptureDevice *audioDevice =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

// Wrap the audio device in a capture device input.
NSError *error = nil;
AVCaptureDeviceInput *audioInput =
    [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];

if (audioInput) {
    // If the input can be added, add it to the session.
    if ([captureSession canAddInput:audioInput]) {
        [captureSession addInput:audioInput];
    }
} else {
    // Configuration failed. Handle error.
}
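The fragment above attaches only an input. To complete the pipeline described in the overview, you also attach a capture output, using the same can-add check. As a minimal sketch, assuming you want raw audio sample buffers and continuing with the captureSession from above:

// Create an audio data output.
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];

// If the output can be added, add it to the session.
if ([captureSession canAddOutput:audioOutput]) {
    [captureSession addOutput:audioOutput];
}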
You invoke startRunning to start the flow of data from the inputs to the outputs, and invoke stopRunning to stop the flow.
Important
The startRunning method is a blocking call that can take some time, so perform session setup on a serial dispatch queue so that the main queue isn’t blocked (which keeps the UI responsive). See AVCam: Building a Camera App for an implementation example.
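As a minimal sketch of this pattern (the queue label is illustrative; any private serial queue works):

// Create a private serial queue for session work.
dispatch_queue_t sessionQueue =
    dispatch_queue_create("com.example.sessionQueue", DISPATCH_QUEUE_SERIAL);

// Start the session off the main queue; -startRunning blocks until
// the session starts running or fails.
dispatch_async(sessionQueue, ^{
    [captureSession startRunning];
});

// Later, stop the flow on the same queue.
dispatch_async(sessionQueue, ^{
    [captureSession stopRunning];
});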
You use the sessionPreset property to customize the quality level, bitrate, or other settings for the output. Most common capture configurations are available through session presets; however, some specialized options (such as high frame rate) require directly setting a capture format on an AVCaptureDevice instance.
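The following sketch shows both approaches, continuing with the captureSession from above. The 240 fps target is an assumption for illustration; choose a rate the hardware actually supports.

// Presets cover most configurations; check before setting one.
if ([captureSession canSetSessionPreset:AVCaptureSessionPresetHigh]) {
    captureSession.sessionPreset = AVCaptureSessionPresetHigh;
}

// For specialized options such as high frame rate, search the device's
// formats for one whose frame rate range meets the (assumed) 240 fps target.
AVCaptureDevice *videoDevice =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *bestFormat = nil;
AVFrameRateRange *bestRange = nil;
for (AVCaptureDeviceFormat *format in videoDevice.formats) {
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        if (range.maxFrameRate >= 240.0) {
            bestFormat = format;
            bestRange = range;
        }
    }
}

// Lock the device, apply the format, and pin the frame duration to the
// range's fastest rate.
if (bestFormat) {
    NSError *error = nil;
    if ([videoDevice lockForConfiguration:&error]) {
        videoDevice.activeFormat = bestFormat;
        videoDevice.activeVideoMinFrameDuration = bestRange.minFrameDuration;
        videoDevice.activeVideoMaxFrameDuration = bestRange.minFrameDuration;
        [videoDevice unlockForConfiguration];
    }
}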