Configure and capture single or multiple still images, Live Photos, and other forms of photography.
Video captured on the iPhone 8, iPhone 8 Plus, and iPhone X running iOS 11 or later uses the HEVC codec by default. If your app shares the captured video using a system share sheet, the video will be automatically converted to a format compatible with the destination device.
AVFoundation supports many ways to capture photos. You can simply capture still HEIF or JPEG images, capture in RAW format for custom processing, snap several images in one shot, create Live Photos with motion and sound, and much more. In iOS, all photography workflows use the AVCapturePhotoOutput class.
Prepare for Photo Capture
First, set up an AVCaptureSession containing a supported camera device as one of its inputs and an AVCapturePhotoOutput as one of its outputs. (For details, see Choosing a Capture Device and Setting Up a Capture Session.) Each camera device supports a wide range of resolution and frame rate settings. To easily get the best photo quality for the user's device, you can use the photo session preset instead of directly choosing individual settings.
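As a rough sketch of that setup, the following configures a session with the back wide-angle camera and a photo output. The variable names (`session`, `photoOutput`) are illustrative, and the code assumes camera permission has already been granted:

```swift
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo  // best automatic still-image quality for this device

// Add the default wide-angle back camera as the video input.
guard
    let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
    let cameraInput = try? AVCaptureDeviceInput(device: camera),
    session.canAddInput(cameraInput)
else { fatalError("Unable to configure the camera input") }
session.addInput(cameraInput)

// Add a photo output for still capture.
let photoOutput = AVCapturePhotoOutput()
guard session.canAddOutput(photoOutput) else { fatalError("Unable to add the photo output") }
session.addOutput(photoOutput)

session.startRunning()
```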
Some capture options affect the internal configuration of the media capture pipeline. Because changing those options causes the pipeline to reconfigure itself, which takes time, enable them before offering the user the ability to shoot photos with those settings. Otherwise, the configuration delay could prevent the user from capturing a photo at the right moment.
For example, to configure the capture pipeline to support Live Photos, enable that capability on the photo output, as shown below. After you've enabled Live Photo capture, you can choose for each individual shot whether to use still or Live Photo capture (see Capturing and Saving Live Photos).
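A minimal sketch of enabling Live Photo capture up front, assuming `photoOutput` is an AVCapturePhotoOutput already added to a session:

```swift
// Do this during session configuration, before presenting the shutter UI,
// so the pipeline reconfiguration delay doesn't cost the user a shot.
if photoOutput.isLivePhotoCaptureSupported {
    photoOutput.isLivePhotoCaptureEnabled = true
}
```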
To capture a photo, first create an AVCapturePhotoSettings object describing the settings you want to use for that shot and the data format for the resulting still photo. For example:
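A sketch of choosing HEVC when available, assuming `photoOutput` is the session's AVCapturePhotoOutput:

```swift
let photoSettings: AVCapturePhotoSettings
if photoOutput.availablePhotoCodecTypes.contains(.hevc) {
    // HEIF container with HEVC-compressed image data
    photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
} else {
    photoSettings = AVCapturePhotoSettings()  // default initializer falls back to JPEG
}
```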
On supported devices, you can use the HEIF/HEVC format for improved image quality at smaller file sizes: use AVVideoCodecType.hevc for the video codec. On devices without HEVC support, use the default initializer init() to fall back to JPEG format.
After creating a photo settings object, you can choose other settings for the photo. For example, the code below creates a settings object for HEIF/HEVC shooting, with automatic flash and image stabilization.
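One way to express those choices, continuing from a `photoSettings` object created for HEVC as above:

```swift
// Fire the flash automatically based on scene lighting,
// and use still image stabilization when the hardware supports it.
photoSettings.flashMode = .auto
photoSettings.isAutoStillImageStabilizationEnabled =
    photoOutput.isStillImageStabilizationSupported
```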
Other possible photo settings include Live Photos, depth data capture, and multi-image (bracketed) capture, as well as options for embedding preview or thumbnail images in output image files. For more information, see Next Steps and More Capture Options below.
Capture the Photo
Pass your photo settings object to the capturePhoto(with:delegate:) method to trigger photo capture with the settings you've chosen.
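For example, assuming `photoOutput`, `photoSettings`, and a delegate object `photoCaptureDelegate` from the surrounding setup:

```swift
// Each AVCapturePhotoSettings instance is single-use:
// create a fresh settings object for every capture request.
photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
```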
Handle Capture Results
The delegate you pass to the capturePhoto(with:delegate:) method is an object that tracks the progress of, and handles results from, that photo capture. Capturing a photo is an asynchronous process with multiple steps that unfold over time. Because your app can trigger additional captures while earlier captures are still processing, your delegate implementation should be able to handle multiple captures at once. An easy way to handle concurrent captures is to define a class adopting the AVCapturePhotoCaptureDelegate protocol and create a separate instance of that class for each capture:
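A sketch of such a class; the name `PhotoCaptureProcessor` and the completion-handler design are illustrative, not part of the framework:

```swift
import AVFoundation

class PhotoCaptureProcessor: NSObject, AVCapturePhotoCaptureDelegate {
    // Called when the processor's capture finishes, so the owner can
    // remove this instance from whatever collection keeps it alive
    // (for example, a dictionary keyed by the settings' uniqueID).
    private let completionHandler: (PhotoCaptureProcessor) -> Void

    init(completionHandler: @escaping (PhotoCaptureProcessor) -> Void) {
        self.completionHandler = completionHandler
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Image data is ready here; see Handle Capture Results.
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings,
                     error: Error?) {
        // The entire capture is complete; release this processor instance.
        completionHandler(self)
    }
}
```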
When your captured image data is ready for use, the photo output calls your delegate's photoOutput(_:didFinishProcessingPhoto:error:) method. You can use the resulting AVCapturePhoto object there to display, process, or save the image.
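For instance, a delegate implementation might save the photo's file data to the Photos library. This sketch assumes the app already has photo library add permission:

```swift
import AVFoundation
import Photos

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil, let imageData = photo.fileDataRepresentation() else {
        print("Photo capture failed: \(String(describing: error))")
        return
    }
    // Create a Photos asset from the captured HEIF or JPEG data.
    PHPhotoLibrary.shared().performChanges({
        let creationRequest = PHAssetCreationRequest.forAsset()
        creationRequest.addResource(with: .photo, data: imageData, options: nil)
    }, completionHandler: nil)
}
```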