Capture photos and record video and audio; configure built-in cameras and microphones or external capture devices.
The AVFoundation Capture subsystem provides a common high-level architecture for video, photo, and audio capture services in iOS and macOS. Use this system if you want to:
- Build a custom camera UI to integrate shooting photos or videos into your app’s user experience.
- Give users more direct control over photo and video capture, such as focus, exposure, and stabilization options.
- Produce different results than the system camera UI, such as RAW format photos, depth maps, or videos with custom timed metadata.
- Get live access to pixel or audio data streaming directly from a capture device (see the sketch after this list).
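As an illustration of that last item, here is a minimal sketch of receiving live video frames. It assumes a capture session has already been configured with an AVCaptureVideoDataOutput whose sample buffer delegate is set to this object; `LiveFrameReceiver` is an illustrative name, not an AVFoundation type.

```swift
import AVFoundation

// Receives each captured video frame for live processing.
// Assumes an AVCaptureVideoDataOutput delegates to an instance of this class.
final class LiveFrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Called once per frame, on the dispatch queue passed to
    // setSampleBufferDelegate(_:queue:).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Extract the raw pixel buffer from the sample buffer.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        // Process the frame here, e.g. hand it to Metal, Vision, or Core Image.
        _ = (width, height)
    }
}
```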
The main parts of the capture architecture are sessions, inputs, and outputs:
- Capture sessions connect one or more inputs to one or more outputs.
- Inputs are sources of media, including capture devices like the cameras and microphones built into an iOS device or Mac.
- Outputs acquire media from inputs to produce useful data, such as movie files written to disk or raw pixel buffers available for live processing.
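A minimal sketch of that pipeline, assuming a device with a built-in camera and omitting authorization checks and error handling:

```swift
import AVFoundation

// Connect one input (the default camera) to one output (a movie file)
// through a capture session.
let session = AVCaptureSession()

// Input: a capture device wrapped in an AVCaptureDeviceInput.
if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

// Output: a movie file written to disk.
let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
}

// Starts the flow of data from inputs to outputs.
// (On iOS, call this off the main thread and only after the user
// has granted camera access.)
session.startRunning()
```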
The capture system also defines delegate methods for responding to events that occur while recording captured media to a file.
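For example, AVCaptureMovieFileOutput reports recording events through AVCaptureFileOutputRecordingDelegate. A sketch of a conforming type (`RecordingObserver` is an illustrative name):

```swift
import AVFoundation

final class RecordingObserver: NSObject, AVCaptureFileOutputRecordingDelegate {

    // Called when data is first written to the output file.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didStartRecordingTo fileURL: URL,
                    from connections: [AVCaptureConnection]) {
        print("Recording started: \(fileURL.lastPathComponent)")
    }

    // Called when recording finishes; error is non-nil if recording
    // stopped because of a failure or interruption.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        if let error = error {
            print("Recording failed: \(error)")
        } else {
            print("Movie saved to \(outputFileURL)")
        }
    }
}

// Usage with the movie output from the earlier sketch:
// movieOutput.startRecording(to: fileURL, recordingDelegate: observer)
```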