Record, edit, and play audio and video; configure your audio session; and respond to changes in the device audio environment. If necessary, customize the default system behavior that you implement with AVKit.


The AV Foundation framework provides an Objective-C interface for managing and playing audio-visual media in iOS and macOS applications. To learn more about AV Foundation, see AVFoundation Programming Guide.




AVAsset is an abstract, immutable class used to model timed audiovisual media such as videos and sounds. An asset may contain one or more tracks that are intended to be presented or processed together, each of a uniform media type, including but not limited to audio, video, text, closed captions, and subtitles.
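A minimal sketch in Swift (the file URL is a placeholder) that loads an asset's duration key without blocking the calling thread, using the asynchronous key-value loading interface that AVAsset adopts:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/movie.mov"))
asset.loadValuesAsynchronously(forKeys: ["duration"]) {
    var error: NSError?
    // Check that the requested key actually loaded before reading it.
    if asset.statusOfValue(forKey: "duration", error: &error) == .loaded {
        print("Duration: \(CMTimeGetSeconds(asset.duration)) seconds")
    }
}
```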


An AVAssetCache is used to inspect the state of an asset’s locally cached media data.


AVAssetDownloadTask is a URLSessionTask subclass used to download HTTP Live Streaming assets. Instances of this class are created using the makeAssetDownloadTask(asset:assetTitle:assetArtworkData:options:) method of AVAssetDownloadURLSession.


A subclass of URLSession used to support creating and executing instances of AVAssetDownloadTask.
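A minimal sketch of creating and starting a download task, assuming a placeholder HLS URL; DownloadDelegate is a hypothetical type, and all AVAssetDownloadDelegate methods are optional:

```swift
import AVFoundation

final class DownloadDelegate: NSObject, AVAssetDownloadDelegate {
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        print("Asset saved to \(location)")
    }
}

// Asset downloads require a background session configuration.
let configuration = URLSessionConfiguration.background(withIdentifier: "asset-download")
let session = AVAssetDownloadURLSession(configuration: configuration,
                                        assetDownloadDelegate: DownloadDelegate(),
                                        delegateQueue: .main)
let hlsAsset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
session.makeAssetDownloadTask(asset: hlsAsset,
                              assetTitle: "My Stream",
                              assetArtworkData: nil,
                              options: nil)?.resume()
```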


An AVAssetExportSession object transcodes the contents of an AVAsset source object to create an output of the form described by a specified export preset.
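For example, a minimal sketch (with placeholder paths) that transcodes an asset to MPEG-4 using a built-in preset:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/source.mov"))
if let export = AVAssetExportSession(asset: asset,
                                     presetName: AVAssetExportPresetMediumQuality) {
    export.outputURL = URL(fileURLWithPath: "/path/to/output.mp4")
    export.outputFileType = .mp4
    export.exportAsynchronously {
        print("Export finished with status: \(export.status.rawValue)")
    }
}
```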


An AVAssetImageGenerator object provides thumbnail or preview images of assets independently of playback.
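A minimal sketch (placeholder path) that copies a thumbnail at the 10-second mark:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/movie.mov"))
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true  // honor the track's rotation

let time = CMTime(seconds: 10, preferredTimescale: 600)
if let cgImage = try? generator.copyCGImage(at: time, actualTime: nil) {
    print("Generated a \(cgImage.width) x \(cgImage.height) thumbnail")
}
```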


You use an AVAssetReader object to obtain media data of an asset, whether the asset is file-based or represents an assemblage of media data from multiple sources (as with an AVComposition object).


AVAssetReaderAudioMixOutput is a concrete subclass of AVAssetReaderOutput that defines an interface for reading audio samples that result from mixing the audio from one or more tracks of an AVAssetReader object's asset.


AVAssetReaderOutput is an abstract class that defines an interface for reading a single collection of samples of a common media type from an AVAssetReader object.


The AVAssetReaderOutputMetadataAdaptor class defines an interface for reading metadata, packaged as instances of AVTimedMetadataGroup, from a single AVAssetReaderTrackOutput object.


AVAssetReaderSampleReferenceOutput is a concrete subclass of the AVAssetReaderOutput class that defines an interface for reading sample references from a single AVAssetTrack of an AVAsset instance contained in an AVAssetReader object.


AVAssetReaderTrackOutput defines an interface for reading media data from a single AVAssetTrack object of an asset reader's asset.
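A minimal sketch (placeholder path) that reads decoded linear-PCM audio samples from an asset's first audio track via an asset reader:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/movie.mov"))
if let track = asset.tracks(withMediaType: .audio).first,
   let reader = try? AVAssetReader(asset: asset) {
    // Request decompressed PCM output for the track.
    let output = AVAssetReaderTrackOutput(track: track,
                                          outputSettings: [AVFormatIDKey: kAudioFormatLinearPCM])
    reader.add(output)
    if reader.startReading() {
        while let buffer = output.copyNextSampleBuffer() {
            print("Read \(CMSampleBufferGetNumSamples(buffer)) samples")
        }
    }
}
```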


AVAssetReaderVideoCompositionOutput is a subclass of AVAssetReaderOutput you use to read video frames that have been composited together from the frames in one or more tracks of an AVAssetReader object's asset.


An AVAssetResourceLoader object mediates resource requests from an AVURLAsset object with a delegate object that you provide. When a request arrives, the resource loader asks your delegate if it is able to handle the request and reports the results back to the asset.


The AVAssetResourceLoadingContentInformationRequest class represents a query for essential information about a resource referenced by an asset resource loading request.


Use the AVAssetResourceLoadingDataRequest class to request data from a resource referenced by an AVAssetResourceLoadingRequest instance.


An AVAssetResourceLoadingRequest object encapsulates information about a resource request issued from a resource loader object.


The AVAssetResourceRenewalRequest class is a subclass of AVAssetResourceLoadingRequest that encapsulates information about a resource request issued by a resource loader for the purpose of renewing a request previously issued.


An AVAssetTrack object provides the track-level inspection interface for an asset’s media tracks.


The AVAssetTrackGroup class encapsulates a single group of related tracks in an asset.


An AVAssetTrackSegment object represents a segment of an AVAssetTrack object, comprising a time mapping from the source to the asset track timeline.


You use an AVAssetWriter object to write media data to a new file of a specified audiovisual container type, such as a QuickTime movie file or an MPEG-4 file, with support for automatic interleaving of media data for multiple concurrent tracks.


You use an AVAssetWriterInput to append media samples packaged as CMSampleBuffer objects (see CMSampleBuffer), or collections of metadata, to a single track of the output file of an AVAssetWriter object.
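A minimal sketch of the writer setup and teardown flow; appending real sample buffers (from capture or an asset reader) is elided:

```swift
import AVFoundation

func writeMovie(to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video,
                                   outputSettings: [AVVideoCodecKey: AVVideoCodecType.h264,
                                                    AVVideoWidthKey: 1280,
                                                    AVVideoHeightKey: 720])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    // ... append CMSampleBuffers with input.append(_:) while
    // input.isReadyForMoreMediaData is true ...
    input.markAsFinished()
    writer.finishWriting {
        print("Finished writing with status: \(writer.status.rawValue)")
    }
}
```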


The AVAssetWriterInputGroup class associates tracks corresponding to inputs with each other in a mutually exclusive relationship.


The AVAssetWriterInputMetadataAdaptor class defines an interface for writing metadata packaged as instances of AVTimedMetadataGroup to a single AVAssetWriterInput object.


The AVAssetWriterInputPassDescription class defines an interface for querying information about the requirements of the current pass, such as the time ranges of media data to append.


You use an AVAssetWriterInputPixelBufferAdaptor to append video samples packaged as CVPixelBuffer objects to a single AVAssetWriterInput object.


An AVAsynchronousCIImageFilteringRequest object provides for using Core Image filters to process an individual video frame in a video composition (an AVVideoComposition or AVMutableVideoComposition object).


An AVAsynchronousVideoCompositionRequest instance contains the information necessary for a video compositor to render an output pixel buffer.


The AVAudioBuffer class represents a buffer of audio data and its format.


The AVAudioChannelLayout class describes the roles of a set of audio channels.


The AVAudioEngine class defines a group of connected AVAudioNode objects, known as audio nodes. You use audio nodes to generate audio signals, process them, and perform audio input and output.
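A minimal sketch that plays an audio file through the engine's main mixer; the caller supplies the URL, and in a real app you would keep the engine and player node alive for the duration of playback:

```swift
import AVFoundation

func playFile(at url: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let file = try AVAudioFile(forReading: url)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)  // nil start time: play as soon as possible
    player.play()
}
```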


The AVAudioEnvironmentDistanceAttenuationParameters class specifies the attenuation distance and characteristics: the gradual loss in audio intensity as a sound source moves away from the listener.


The AVAudioEnvironmentNode class is a mixer node that simulates a 3D audio environment. Any node that conforms to the AVAudioMixing protocol (for example, AVAudioPlayerNode) can act as a source in this environment.


The AVAudioEnvironmentReverbParameters class encapsulates the parameters that you use to control the reverb of the AVAudioEnvironmentNode class.


The AVAudioFile class represents an audio file that can be opened for reading or writing.


The AVAudioFormat class wraps a Core Audio AudioStreamBasicDescription struct, with convenience initializers and accessors for common formats, including Core Audio’s standard deinterleaved 32-bit floating point format.


The AVAudioIONode class is the base class for nodes that connect to the system's audio input or output.


The AVAudioInputNode class represents a node that connects to the system's audio input.


An AVAudioMix object manages the input parameters for mixing audio tracks. It allows custom audio processing to be performed on audio tracks during playback or other operations.


An AVAudioMixInputParameters object represents the parameters that should be applied to an audio track when it is added to a mix.


The AVAudioMixerNode class represents a node that mixes its inputs to a single output.


The AVAudioNode class is an abstract class for an audio generation, processing, or I/O block.


The AVAudioOutputNode class represents an audio node that connects to the system's audio output.


The AVAudioPCMBuffer class is a subclass of AVAudioBuffer for use with PCM audio formats.


An instance of the AVAudioPlayer class, called an audio player, provides playback of audio data from a file or memory.
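A minimal sketch; the caller supplies the sound URL and must keep a strong reference to the returned player for as long as playback should continue:

```swift
import AVFoundation

func playSound(at url: URL) throws -> AVAudioPlayer {
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()  // preload buffers to minimize startup latency
    player.play()
    return player
}
```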


The AVAudioPlayerNode class plays buffers or segments of audio files.


An instance of the AVAudioRecorder class, called an audio recorder, provides audio recording capability in your application. Using an audio recorder, you can record until the user stops the recording, record for a specified duration, and pause and resume a recording.


An audio session is a singleton AVAudioSession object that you employ to set the audio context for your app and to express to the system your intentions for your app’s audio behavior.
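A minimal sketch (iOS) that configures the shared session for movie-style playback and activates it:

```swift
import AVFoundation

func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .moviePlayback, options: [])
    try session.setActive(true)
}
```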


The AVAudioSessionChannelDescription class provides descriptive information about a hardware channel on the current device. You typically do not create instances of this class yourself but can retrieve them from the port AVAudioSessionPortDescription object used to reference the intended input or output port.


The AVAudioSessionDataSourceDescription class defines a data source for an audio input or output, providing information such as the source’s name, location and orientation.


An AVAudioSessionPortDescription object describes a single input or output port associated with an audio route. You can use the information in this class to obtain information about the capabilities of the port and the hardware channels it supports.


An AVAudioSessionRouteDescription manages the input and output ports associated with the current audio route for a session.


The AVAudioTime class is used by AVAudioEngine to represent time. Instances of the class are immutable.


The AVAudioUnit class is a subclass of the AVAudioNode class that, depending on the type of the audio unit, processes audio either in real time or in non-real time.


The AVAudioUnitComponent class provides details about an audio unit, such as its type, subtype, manufacturer, and location. You can add user tags to an AVAudioUnitComponent object and query them later for display.


The AVAudioUnitComponentManager class is a singleton object that provides a way to find audio components that are registered with the system. It provides methods to search and query various information about the audio components without opening them. Currently, only audio components that are audio units can be searched.


The AVAudioUnitDelay class is an AVAudioUnitEffect subclass that implements a delay effect.


The AVAudioUnitDistortion class is an AVAudioUnitEffect subclass that implements a multi-stage distortion effect.


The AVAudioUnitEQ class is an AVAudioUnitEffect subclass that implements a multi-band equalizer.


The AVAudioUnitEQFilterParameters class encapsulates the parameters used by an AVAudioUnitEQ instance.


The AVAudioUnitEffect class is an AVAudioUnit subclass that processes audio in real time using audio units of type effect, music effect, panner, remote effect, or remote music effect. These effects process some number of audio input samples to produce some number of audio output samples; a delay unit is an example of an effect unit.


The AVAudioUnitGenerator is an AVAudioUnit subclass that generates audio output.


The AVAudioUnitMIDIInstrument class is an abstract class representing music devices or remote instruments.


The AVAudioUnitReverb class is an AVAudioUnitEffect subclass that implements a reverb effect.


The AVAudioUnitSampler class encapsulates Apple's Sampler Audio Unit. The sampler audio unit can be configured by loading instruments of various types, such as an “.aupreset” file, a DLS or SF2 sound bank, an EXS24 instrument, a single audio file, or an array of audio files. The output is a single stereo bus.


The AVAudioUnitTimeEffect class is an AVAudioUnit subclass that processes audio in non-realtime.


The AVAudioUnitTimePitch class is an AVAudioUnitTimeEffect subclass that provides good-quality playback rate and pitch shifting, each adjustable independently of the other.


The AVAudioUnitVarispeed class is an AVAudioUnitTimeEffect subclass that allows control of the playback rate.


You use an AVCaptureAudioChannel to monitor the average and peak power levels in an audio channel in a capture connection (see AVCaptureConnection).


AVCaptureAudioDataOutput is a concrete subclass of AVCaptureOutput that you use, via its delegate, to process audio sample buffers from the audio being captured.


AVCaptureAudioFileOutput is a concrete subclass of AVCaptureFileOutput that writes captured audio to any audio file type supported by Core Audio.


AVCaptureAudioPreviewOutput is a concrete subclass of AVCaptureOutput that you use to preview audio being captured.


The AVCaptureAutoExposureBracketedStillImageSettings class is a concrete subclass of the AVCaptureBracketedStillImageSettings class that is used when bracketing exposure target bias.


AVCaptureBracketedStillImageSettings is an abstract class that defines an interface for settings pertaining to a bracketed capture.


An AVCaptureConnection object represents a connection between capture input and capture output objects associated with a capture session.


An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You use a capture device to configure the properties of the underlying hardware. A capture device also provides input data (such as audio or video) to an AVCaptureSession object.


A query for finding and monitoring available capture devices.


An AVCaptureDeviceFormat object provides information about a media capture format for use with an AVCaptureDevice instance, such as video frame rates and zoom factors.


AVCaptureDeviceInput is a concrete subclass of AVCaptureInput you use to capture data from an AVCaptureDevice object.


An AVCaptureDeviceInputSource object represents a distinct input source on an AVCaptureDevice object.


AVCaptureInput is an abstract base class describing an input data source to an AVCaptureSession object.


An AVCaptureInputPort represents a stream of data from a capture input.


The AVCaptureManualExposureBracketedStillImageSettings class is a concrete subclass of the AVCaptureBracketedStillImageSettings class used when bracketing exposure duration and ISO.


An AVCaptureMetadataOutput object intercepts metadata objects emitted by its associated capture connection and forwards them to a delegate object for processing. You can use instances of this class to process specific types of metadata included with the input data. You use this class the way you do other output objects, typically by adding it as an output to an AVCaptureSession object.


AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput you use to capture data to a QuickTime movie.


AVCaptureOutput is an abstract base class describing an output destination of an AVCaptureSession object.


An AVCapturePhotoBracketSettings object describes desired features and settings for a photo capture request that involves capturing multiple images together with varied settings. To take a bracketed capture, you create and configure an AVCapturePhotoBracketSettings object, using AVCaptureBracketedStillImageSettings objects to describe the individual captures in the bracket, and then pass it to the AVCapturePhotoOutput capturePhoto(with:delegate:) method.


AVCapturePhotoOutput is a concrete subclass of AVCaptureOutput that provides a modern interface for most capture workflows related to still photography. In addition to basic capture of still images, a photo output supports RAW-format capture, bracketed capture of multiple images, Live Photos, and wide-gamut color. You can have images delivered in RAW format, a compressed format such as JPEG, or both. You can also enable automatic delivery of preview-sized images in addition to a main image. The AVCapturePhotoOutput class can also format captured photos for output in the JPEG/JFIF and DNG file formats.


An AVCapturePhotoSettings instance is a mutable object describing all the desired features and settings for a single photo capture request. To take a photo, you create and configure an AVCapturePhotoSettings object, then pass it to the AVCapturePhotoOutput capturePhoto(with:delegate:) method.
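A minimal sketch, assuming a photo output that is already attached to a running capture session and a caller-supplied capture delegate:

```swift
import AVFoundation

func takePhoto(with photoOutput: AVCapturePhotoOutput,
               delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.flashMode = .auto  // requires a device with flash support
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```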


An AVCaptureResolvedPhotoSettings object provides an immutable description of the photo settings for a photo capture request that is either in progress or has completed. When you request a photo capture using the AVCapturePhotoOutput capturePhoto(with:delegate:) method, you describe the settings for that capture request in an AVCapturePhotoSettings object. When the capture begins, the photo output calls your delegate methods and provides an AVCaptureResolvedPhotoSettings object detailing the settings that are in effect for that capture.


AVCaptureScreenInput is a concrete subclass of AVCaptureInput that provides an interface for capturing media from a screen or a portion of a screen.


You use an AVCaptureSession object to coordinate the flow of data from AV input devices to outputs.
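A minimal sketch of a camera-to-video-data pipeline; authorization checks and the usual canAddInput/canAddOutput checks are elided for brevity:

```swift
import AVFoundation

func makeSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .high

    guard let camera = AVCaptureDevice.default(for: .video) else {
        fatalError("No video capture device available")
    }
    session.addInput(try AVCaptureDeviceInput(device: camera))

    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "video.frames"))
    session.addOutput(output)

    session.startRunning()
    return session
}
```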


AVCaptureStillImageOutput is a concrete subclass of AVCaptureOutput that you use to capture a high-quality still image with accompanying metadata.


AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput you use to process uncompressed frames from the video being captured, or to access compressed frames.


AVCaptureVideoPreviewLayer is a subclass of CALayer that you use to display video as it is being captured by an input device.


An AVComposition object combines media data from multiple file-based sources in a custom temporal arrangement, in order to present or process media data from multiple sources together. All file-based audiovisual assets are eligible to be combined, regardless of container type. The tracks in an AVComposition object are fixed; to change the tracks, you use an instance of its subclass, AVMutableComposition.


An AVCompositionTrack object provides the low-level representation of a track in an AVComposition object, comprising a media type, a track identifier, and an array of AVCompositionTrackSegment objects, each comprising a URL, a track identifier, and a time mapping.


An AVCompositionTrackSegment object represents a segment of an AVCompositionTrack object, comprising a URL, a track identifier, and a time mapping from the source track to the composition track.


AVDateRangeMetadataGroup is used to represent a collection of metadata items that are valid for use within a specific range of dates.


An AVFrameRateRange object expresses a range of valid frame rates as minimum and maximum rate and minimum and maximum duration.


The AVMIDIPlayer class is a player for music file formats such as MIDI and iMelody.


An AVMediaSelection represents a complete rendition of media selection options on an AVAsset.


An AVMediaSelectionGroup represents a collection of mutually exclusive options for the presentation of media within an asset.


An AVMediaSelectionOption object represents a specific option for the presentation of media within a group of options.
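A minimal sketch (placeholder stream URL) that selects the first legible option, such as a subtitle track; production code should load the asset's selection groups asynchronously before querying them:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
let item = AVPlayerItem(asset: asset)
if let group = asset.mediaSelectionGroup(forMediaCharacteristic: .legible),
   let option = group.options.first {
    item.select(option, in: group)
}
```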


The AVMetadataFaceObject class is a concrete subclass of AVMetadataObject that defines the features of a single detected face. You can retrieve instances of this class from the output of an AVCaptureMetadataOutput object on devices that support face detection.


AVMetadataGroup is the common superclass for objects representing a collection of metadata items associated with a segment of a timeline.


An AVMetadataItem object represents an item of metadata associated with an audiovisual asset or with one of its tracks. To create metadata items for your own assets, you use the mutable subclass, AVMutableMetadataItem.


The AVMetadataItemFilter class is used to filter selected information from AVMetadataItem objects.


An AVMetadataItemValueRequest is used to respond to a request to load the value for an AVMetadataItem created using the init(propertiesOf:valueLoadingHandler:) method.


The AVMetadataMachineReadableCodeObject class is a concrete subclass of AVMetadataObject defining the features of a detected one-dimensional or two-dimensional barcode.


The AVMetadataObject class is an abstract class that defines the basic properties associated with a piece of metadata. These attributes reflect information either about the metadata itself or the media from which the metadata originated. Subclasses are responsible for providing appropriate values for each of the relevant properties.


An AVMutableAudioMix object manages the input parameters for mixing audio tracks. It allows custom audio processing to be performed on audio tracks during playback or other operations.


An AVMutableAudioMixInputParameters object represents the parameters that should be applied to an audio track when it is added to a mix.


AVMutableComposition is a mutable subclass of AVComposition you use when you want to create a new composition from existing assets. You can add and remove tracks, and you can add, remove, and scale time ranges.
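A minimal sketch that copies a source asset's first video track into a new composition, starting at time zero:

```swift
import AVFoundation

func makeComposition(from asset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard
        let sourceTrack = asset.tracks(withMediaType: .video).first,
        let track = composition.addMutableTrack(withMediaType: .video,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return composition }

    try track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                              of: sourceTrack,
                              at: .zero)
    return composition
}
```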


AVMutableCompositionTrack is a mutable subclass of AVCompositionTrack that lets you insert, remove, and scale track segments without affecting their low-level representation (that is, the operations you perform are nondestructive on the original).


AVMutableDateRangeMetadataGroup is a mutable subclass of AVDateRangeMetadataGroup used to represent a mutable collection of metadata items that are valid for use within a specific range of dates.


AVMutableMediaSelection is a mutable subclass of AVMediaSelection allowing for the selection of a media option.


AVMutableMetadataItem is a mutable subclass of AVMetadataItem that lets you build collections of metadata to be written to asset files using AVAssetExportSession.


You use an AVMutableTimedMetadataGroup object to represent a mutable collection of metadata items.


The AVMutableVideoComposition class is a mutable subclass of AVVideoComposition.


An AVMutableVideoCompositionInstruction object represents an operation to be performed by a compositor.


AVMutableVideoCompositionLayerInstruction is a mutable subclass of AVVideoCompositionLayerInstruction that is used to modify the transform, cropping, and opacity ramps to apply to a given track in a composition.


The AVOutputSettingsAssistant class specifies a set of parameters for configuring objects that use output settings dictionaries, so that the resulting media file conforms to specific criteria.


An AVPlayer is a controller object used to manage the playback and timing of a media asset. It provides the interface to control the player’s transport behavior such as its ability to play, pause, change the playback rate, and seek to various points in time within the media’s timeline. You can use an AVPlayer to play local and remote file-based media, such as QuickTime movies and MP3 audio files, as well as audiovisual media served using HTTP Live Streaming.
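A minimal sketch (placeholder URL); to display video, attach the player to an AVPlayerLayer or an AVKit player view controller:

```swift
import AVFoundation

let player = AVPlayer(url: URL(string: "https://example.com/movie.mp4")!)
player.play()
```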


AVPlayerItem models the timing and presentation state of an asset played by an AVPlayer object. It provides the interface to seek to various times in the media, determine its presentation size, identify its current time, and much more.


You use an AVPlayerItemAccessLog object to retrieve the access log associated with an AVPlayerItem object.


An AVPlayerItemAccessLogEvent object represents a single entry in an AVPlayerItem object’s access log.


You use an AVPlayerItemErrorLog object to retrieve the error log associated with an AVPlayerItem object.


An AVPlayerItemErrorLogEvent object represents a single item in an AVPlayerItem object’s error log.


The AVPlayerItemLegibleOutput class is a subclass of AVPlayerItemOutput that can vend media with a legible characteristic as an attributed string.


AVPlayerItemMediaDataCollector is the abstract base class for media data collectors such as AVPlayerItemMetadataCollector.


AVPlayerItemMetadataCollector is a subclass of AVPlayerItemMediaDataCollector used to capture the date range metadata defined for an HTTP Live Streaming (HLS) asset.


The AVPlayerItemMetadataOutput class is a subclass of AVPlayerItemOutput that vends collections of metadata items carried in metadata tracks.


The AVPlayerItemOutput class is an abstract class that defines the common interface for moving samples from an asset to an AVPlayer object. You do not create instances of this class directly but instead use one of the concrete subclasses that manage specific types of assets.


You use an AVPlayerItemTrack object to modify the presentation state of an asset track (AVAssetTrack) being presented by an AVPlayer object.


The AVPlayerItemVideoOutput lets you coordinate the output of content associated with a Core Video pixel buffer.


AVPlayerLayer is a subclass of CALayer to which an AVPlayer object can direct its visual output. It can be used as the backing layer for a UIView or NSView or can be manually added to the layer hierarchy to present your video content on screen.


AVPlayerLooper is a helper class used to simplify playing looping media content using AVQueuePlayer.
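A minimal sketch (placeholder path); keep strong references to both the queue player and the looper for looping to continue:

```swift
import AVFoundation

let item = AVPlayerItem(url: URL(fileURLWithPath: "/path/to/clip.mov"))
let queuePlayer = AVQueuePlayer()
let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
queuePlayer.play()
```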


The AVPlayerMediaSelectionCriteria class specifies the preferred languages and media characteristics for an AVPlayer instance.


AVQueuePlayer is a subclass of AVPlayer used to play a number of items in sequence. Using this class you can create and manage a queue of player items composed of local or progressively downloaded file-based media, such as QuickTime movies or MP3 audio files, as well as media served using HTTP Live Streaming.


The AVSampleBufferDisplayLayer class is a subclass of CALayer that displays compressed or uncompressed video frames.


The AVSampleBufferGenerator class is used to create CMSampleBuffer opaque objects.


An AVSampleBufferRequest instance describes a CMSampleBuffer creation request.


An AVSampleCursor instance is always positioned at a specific media sample in a sequence of samples as defined by a higher-level construct, such as an AVAssetTrack. It can be moved to a new position in that sequence either backwards or forwards, either in decode order or in presentation order. Movement can be requested according to a count of samples or according to a delta in time.


An AVSpeechSynthesisVoice object defines a distinct voice for use in speech synthesis. Voices are distinguished primarily by language and locale.


The AVSpeechSynthesizer class produces synthesized speech from text on an iOS device, and provides methods for controlling or monitoring the progress of ongoing speech.


An AVSpeechUtterance is the basic unit of speech synthesis. An utterance encapsulates some amount of text to be spoken and a set of parameters affecting its speech: voice, pitch, rate, and delay.
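A minimal sketch that speaks a sentence with an explicit voice and the default rate:

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello from AVFoundation.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
synthesizer.speak(utterance)
```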


AVSynchronizedLayer is a subclass of CALayer with layer timing that synchronizes with a specific AVPlayerItem.


An AVTextStyleRule object represents text styling rules that can be applied to text in a media item. You use text style objects to format subtitles, closed captions, and other text-related content of the item. The rules you specify can be applied to all or part of the text in the media item.


The AVTimedMetadataGroup class represents a collection of metadata items that are valid for use during a specific range of time.


AVURLAsset is a concrete subclass of AVAsset that you use to initialize an asset from a local or remote URL.


An AVVideoComposition object represents an immutable video composition.


You use an AVVideoCompositionCoreAnimationTool object to incorporate Core Animation in a video composition.


An AVVideoCompositionInstruction object represents an operation to be performed by a compositor.


An AVVideoCompositionLayerInstruction object represents the transform, opacity, and cropping ramps to apply to a given track.


The AVVideoCompositionRenderContext class defines the context within which custom compositors render new output pixel buffers.



The AVAssetDownloadDelegate protocol describes the methods that AVAssetDownloadURLSession objects call on their delegates to handle download-related events. Implement these methods to be notified of download progress and completion events.


The AVAssetResourceLoaderDelegate protocol defines a method that lets your code handle resource loading requests coming from an AVURLAsset object.


The AVAsynchronousKeyValueLoading protocol defines methods that let you use an AVAsset or AVAssetTrack object without blocking the calling thread. A “key” is any property of a class that implements this protocol. Using the protocol’s methods, you can find out the current status of a key (for example, whether the corresponding value has been loaded) and ask the object to load its values asynchronously, informing you when the operation has completed.


The AVAudio3DMixing protocol defines 3D mixing properties. Currently these properties are only implemented by the AVAudioEnvironmentNode mixer.


The AVAudioMixing protocol defines properties applicable to the input bus of a mixer node.


The delegate of an AVAudioPlayer object must adopt the AVAudioPlayerDelegate protocol. All of the methods in this protocol are optional. They allow a delegate to respond to audio interruptions and audio decoding errors, and to the completion of a sound’s playback.


The delegate of an AVAudioRecorder object must adopt the AVAudioRecorderDelegate protocol. All of the methods in this protocol are optional. They allow a delegate to respond to audio interruptions and audio decoding errors, and to the completion of a recording.


The use of the AVAudioSessionDelegate protocol is deprecated in iOS 6 and later. Instead, you should use the notifications declared in AVAudioSession.


The AVAudioStereoMixing protocol defines stereo mixing properties used by mixers.


The delegate of an AVCaptureAudioDataOutput object must adopt the AVCaptureAudioDataOutputSampleBufferDelegate protocol. The method in this protocol is optional.


The AVCaptureFileOutputDelegate protocol defines an interface for delegates of an AVCaptureFileOutput object to monitor and control recordings along exact sample boundaries.


The AVCaptureFileOutputRecordingDelegate protocol defines an interface for delegates of an AVCaptureFileOutput object to respond to events that occur in the process of recording a single file.


The AVCaptureMetadataOutputObjectsDelegate protocol must be adopted by the delegate of an AVCaptureMetadataOutput object. The single method in this protocol is optional; it allows a delegate to respond when a capture metadata output object receives relevant metadata objects through its connection.


You implement methods in the AVCapturePhotoCaptureDelegate protocol to be notified of progress and results when capturing photos with the AVCapturePhotoOutput class.


This protocol defines an interface for delegates of an AVCaptureVideoDataOutput object to receive captured video sample buffers and be notified of late sample buffers that were dropped.


The AVPlayerItemLegibleOutputPushDelegate protocol extends the AVPlayerItemOutputPushDelegate protocol to provide additional methods specific to attributed string output.


The AVPlayerItemMetadataCollectorPushDelegate protocol should be adopted by objects interested in receiving metadata callbacks from an AVPlayerItemMetadataCollector.


The AVPlayerItemMetadataOutputPushDelegate protocol extends the AVPlayerItemOutputPushDelegate protocol to provide additional methods specific to metadata output.


The AVPlayerItemOutputPullDelegate protocol defines the methods that are called by an AVPlayerItemVideoOutput object in response to pixel buffer changes.


The AVPlayerItemOutputPushDelegate protocol defines common delegate methods for objects participating in AVPlayerItemOutput push sample output acquisition.


The AVSpeechSynthesizerDelegate protocol defines methods that the delegate of an AVSpeechSynthesizer object may implement; all methods in this protocol are optional. You can implement these methods to respond to events that occur during speech synthesis.


The AVVideoCompositing protocol defines properties and methods that custom video compositors must implement.


The AVVideoCompositionInstruction protocol represents operations to be performed by a compositor. An AVVideoComposition object maintains an array of instructions to perform its composition.


The AVVideoCompositionValidationHandling protocol declares methods that you can implement in the delegate of an AVVideoComposition object to indicate whether validation of a video composition should continue after specific errors have been found.

Extended Types


The NSCoder abstract class declares the interface used by concrete subclasses to transfer objects and other values between memory and some other format. This capability provides the basis for archiving (where objects and data items are stored on disk) and distribution (where objects and data items are copied between different processes or threads). The concrete subclasses provided by Foundation for these purposes are NSArchiver, NSUnarchiver, NSKeyedArchiver, NSKeyedUnarchiver, and NSPortCoder. Concrete subclasses of NSCoder are referred to in general as coder classes, and instances of these classes as coder objects (or simply coders). A coder object that can only encode values is referred to as an encoder object, and one that can only decode values as a decoder object.


An NSValue object is a simple container for a single C or Objective-C data item. It can hold any of the scalar types such as int, float, and char, as well as pointers, structures, and object id references. Use this class to work with such data types in collections (such as NSArray and NSSet), key-value coding, and other APIs that require Objective-C objects. NSValue objects are always immutable.



Values identifying the general type of a capture device, used with the defaultDevice(withDeviceType:mediaType:position:) method and the AVCaptureDeviceDiscoverySession class.