AVAssetWriterInput

Inheritance

NSObject

Conforms To

NSObject

Import Statement


Swift

import AVFoundation

Objective-C

@import AVFoundation;

Availability


Available in iOS 4.1 and later.

You use an AVAssetWriterInput to append media samples packaged as CMSampleBuffer objects (see CMSampleBufferRef), or collections of metadata, to a single track of the output file of an AVAssetWriter object.

When there are multiple inputs, AVAssetWriter tries to write media data in an ideal interleaving pattern for efficiency in storage and playback. Each of its inputs signals its readiness to receive media data for writing according to that pattern via the value of readyForMoreMediaData. If readyForMoreMediaData is YES, an input can accept additional media data while maintaining appropriate interleaving. You can only append media data to an input while its readyForMoreMediaData property is YES.

  • If you’re writing media data from a non-real-time source, such as an instance of AVAssetReader, you should hold off on generating or obtaining more media data to append to an input when the value of readyForMoreMediaData is NO. To help control the supply of non-real-time media data, you can use requestMediaDataWhenReadyOnQueue:usingBlock: to specify a block that the input invokes whenever it is ready for more media data to be appended.

  • If you’re writing media data from a real-time source, such as an AVCaptureOutput object, you should set the input’s expectsMediaDataInRealTime property to YES to ensure that the value of readyForMoreMediaData is calculated appropriately. When expectsMediaDataInRealTime is YES, readyForMoreMediaData becomes NO only when the input cannot process media samples as quickly as the client provides them. If readyForMoreMediaData becomes NO for a real-time source, the client may need to drop samples or reduce the data rate of appended samples.

The value of readyForMoreMediaData often changes from NO to YES asynchronously, as previously supplied media data is processed and written to the output. It is possible for all of an asset writer’s inputs to temporarily return NO for readyForMoreMediaData.
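
As an illustration of this flow, the following sketch creates an asset writer, attaches a single video input, and begins a writing session before any samples are appended. The output URL, dimensions, and codec choice are assumptions made for the example.

    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL   // outputURL: an assumed file URL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];

    // Illustrative H.264 settings; any valid video settings dictionary may be used.
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @1280,
                                     AVVideoHeightKey : @720 };

    AVAssetWriterInput *videoInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];

    if ([writer canAddInput:videoInput]) {
        [writer addInput:videoInput];
    }

    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];
    // Samples can now be appended to videoInput whenever readyForMoreMediaData is YES.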

  • Returns a new writer input object initialized with the specified media type and output settings.

    Declaration

    Objective-C

    + (AVAssetWriterInput *)assetWriterInputWithMediaType:(NSString *)mediaType outputSettings:(NSDictionary *)outputSettings

    Parameters

    mediaType

    The type of samples to be accepted by the input object. For a list of media types, see AV Foundation Constants Reference.

    outputSettings

    The settings used for encoding the media appended to the output. Pass nil to specify that the appended samples should not be re-encoded.

    Audio output settings keys are defined in AV Foundation Audio Settings Constants. Video output settings keys are defined in AV Foundation Constants Reference. Video output settings with keys from <CoreVideo/CVPixelBuffer.h> are not currently supported.

    Return Value

    A new writer input object that can accept samples of the specified media type and write them to the output file.

    Discussion

    Each new input accepts data for a new track of the asset writer’s output file. You add an input to an asset writer using the AVAssetWriter method addInput:.

    Passing nil for outputSettings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format.
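
    For illustration, a sketch of creating an AAC audio input with this method and adding it to an existing writer; assetWriter and the specific settings values are assumptions:

    AudioChannelLayout stereoLayout = {0};
    stereoLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    NSDictionary *audioSettings = @{
        AVFormatIDKey         : @(kAudioFormatMPEG4AAC),
        AVSampleRateKey       : @44100.0,
        AVNumberOfChannelsKey : @2,
        AVEncoderBitRateKey   : @128000,
        AVChannelLayoutKey    : [NSData dataWithBytes:&stereoLayout length:sizeof(stereoLayout)]
    };

    AVAssetWriterInput *audioInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                           outputSettings:audioSettings];
    [assetWriter addInput:audioInput];   // assetWriter: an existing AVAssetWriter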

    Import Statement

    Objective-C

    @import AVFoundation;

    Availability

    Available in iOS 4.1 and later.

  • Returns a new writer input object initialized with the specified media type, output settings, and source format hint.

    Declaration

    Objective-C

    + (AVAssetWriterInput *)assetWriterInputWithMediaType:(NSString *)mediaType outputSettings:(NSDictionary *)outputSettings sourceFormatHint:(CMFormatDescriptionRef)sourceFormatHint

    Parameters

    mediaType

    The media type of the samples to be accepted by the input object. For a list of media types, see AV Foundation Constants Reference.

    outputSettings

    Specify a dictionary containing the settings used for encoding the media appended to the output. You may pass nil if you do not want the appended samples to be re-encoded.

    Audio output settings keys are defined in AV Foundation Audio Settings Constants. Video output settings keys are defined in AV Foundation Constants Reference. Video output settings with keys from <CoreVideo/CVPixelBuffer.h> are not currently supported.

    sourceFormatHint

    A hint about the format of the buffers to be appended. If you specify a value for this parameter, the writer input object may be able to fill in missing output settings or perform more upfront validation. If you do specify a value, make sure that the buffers you append are of the indicated format.

    Return Value

    A new writer input object that can accept samples of the specified media type and write them to the output file.

    Discussion

    Each new input accepts data for a new track of the asset writer’s output file. You add an input to an asset writer using the AVAssetWriter method addInput:.

    Passing nil for output settings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format. However, if you are not writing to a QuickTime Movie file (that is, the AVAssetWriter was initialized with a file type other than AVFileTypeQuickTimeMovie), AVAssetWriter supports passing through only a restricted set of media types and subtypes. To pass through media data to files other than AVFileTypeQuickTimeMovie, you must provide a non-NULL format hint.
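
    As a hedged sketch of pass-through with a format hint, the following takes a format description from an existing source track (sourceVideoTrack, an assumed AVAssetTrack) and creates an input that writes compressed samples unchanged:

    NSArray *descriptions = [sourceVideoTrack formatDescriptions];
    CMFormatDescriptionRef formatHint =
        descriptions.count > 0 ? (__bridge CMFormatDescriptionRef)descriptions[0] : NULL;

    AVAssetWriterInput *passthroughInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:nil              // pass through without re-encoding
                                         sourceFormatHint:formatHint];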

    Import Statement

    Objective-C

    @import AVFoundation;

    Availability

    Available in iOS 6.0 and later.

  • Initialize a writer input object with the specified media type and output settings.

    Declaration

    Swift

    init!(mediaType mediaType: String!, outputSettings outputSettings: [NSObject : AnyObject]!)

    Objective-C

    - (instancetype)initWithMediaType:(NSString *)mediaType outputSettings:(NSDictionary *)outputSettings

    Parameters

    mediaType

    The media type of the samples to be accepted by the input object. For a list of media types, see AV Foundation Constants Reference.

    outputSettings

    Specify a dictionary containing the settings used for encoding the media appended to the output. You may pass nil for this parameter if you do not want the appended samples to be re-encoded.

    Audio output settings keys are defined in AV Foundation Audio Settings Constants. Video output settings keys are defined in AV Foundation Constants Reference. Video output settings with keys from CVPixelBufferRef are not currently supported.

    Return Value

    An initialized writer input object that can accept samples of the specified media type and write them to the output file.

    Discussion

    Each new input accepts data for a new track of the asset writer’s output file. You add an input to an asset writer using the AVAssetWriter method addInput:.

    Passing nil for output settings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format. However, if you are not writing to a QuickTime Movie file (that is, the AVAssetWriter was initialized with a file type other than AVFileTypeQuickTimeMovie), AVAssetWriter supports passing through only a restricted set of media types and subtypes. To pass through media data to files other than AVFileTypeQuickTimeMovie, you must provide a non-NULL format hint using initWithMediaType:outputSettings:sourceFormatHint: instead of this method.

    When the mediaType parameter is AVMediaTypeAudio, the outputSettings dictionary does not support the AVEncoderAudioQualityKey and AVSampleRateConverterAudioQualityKey keys. When using this method, an audio settings dictionary must be fully specified, meaning that it must contain the AVFormatIDKey, AVSampleRateKey, and AVNumberOfChannelsKey keys. If no other channel layout information is available, a value of 1 for the AVNumberOfChannelsKey key results in mono output and a value of 2 results in stereo output. If the AVNumberOfChannelsKey key specifies a value greater than 2, the dictionary must also specify a value for the AVChannelLayoutKey key. For audio using the kAudioFormatLinearPCM format, include all relevant AVLinearPCM*Key keys. For the kAudioFormatAppleLossless format, include the AVEncoderBitDepthHintKey key. To avoid specifying values for each of those keys, use the initWithMediaType:outputSettings:sourceFormatHint: method instead.

    When the mediaType parameter is AVMediaTypeVideo, the outputSettings dictionary must request a compressed video format. This means that the values specified in the dictionary must follow the rules for compressed video output, as described in AVVideoSettings.h. When using this initializer, a video settings dictionary must be fully specified, meaning that it must contain the following keys: AVVideoCodecKey, AVVideoWidthKey, and AVVideoHeightKey. To avoid specifying values for each of those keys, use the initWithMediaType:outputSettings:sourceFormatHint: method.
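
    For illustration, a fully specified Linear PCM audio settings dictionary of the kind this initializer requires might look as follows; the sample rate, bit depth, and channel count are assumptions:

    NSDictionary *pcmSettings = @{
        AVFormatIDKey               : @(kAudioFormatLinearPCM),
        AVSampleRateKey             : @44100.0,
        AVNumberOfChannelsKey       : @2,
        AVLinearPCMBitDepthKey      : @16,
        AVLinearPCMIsBigEndianKey   : @NO,
        AVLinearPCMIsFloatKey       : @NO,
        AVLinearPCMIsNonInterleaved : @NO
    };

    AVAssetWriterInput *pcmInput =
        [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                       outputSettings:pcmSettings];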

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • Initialize a writer input object with the specified media type, output settings, and source hint.

    Declaration

    Swift

    init!(mediaType mediaType: String!, outputSettings outputSettings: [NSObject : AnyObject]!, sourceFormatHint sourceFormatHint: CMFormatDescription!)

    Objective-C

    - (instancetype)initWithMediaType:(NSString *)mediaType outputSettings:(NSDictionary *)outputSettings sourceFormatHint:(CMFormatDescriptionRef)sourceFormatHint

    Parameters

    mediaType

    The media type of the samples to be accepted by the input object. For a list of media types, see AV Foundation Constants Reference.

    outputSettings

    Specify a dictionary containing the settings used for encoding the media appended to the output. You may pass nil for this parameter if you do not want the appended samples to be re-encoded.

    Audio output settings keys are defined in AV Foundation Audio Settings Constants. Video output settings keys are defined in AV Foundation Constants Reference. Video output settings with keys from <CoreVideo/CVPixelBuffer.h> are not currently supported.

    sourceFormatHint

    A hint about the format of the buffers to be appended. If you specify a value for this parameter, the writer input object may be able to fill in missing output settings or perform more upfront validation. If you do specify a value, make sure that the buffers you append are of the indicated format.

    Return Value

    An initialized writer input object that can accept samples of the specified media type and write them to the output file.

    Discussion

    Each new input accepts data for a new track of the asset writer’s output file. You add an input to an asset writer using the AVAssetWriter method addInput:.

    Passing nil for output settings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format. However, if you are not writing to a QuickTime Movie file (that is, the AVAssetWriter was initialized with a file type other than AVFileTypeQuickTimeMovie), AVAssetWriter supports passing through only a restricted set of media types and subtypes. To pass through media data to files other than AVFileTypeQuickTimeMovie, you must provide a non-NULL format hint.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 6.0 and later.

  • Appends samples to the receiver.

    Declaration

    Swift

    func appendSampleBuffer(_ sampleBuffer: CMSampleBuffer!) -> Bool

    Objective-C

    - (BOOL)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer

    Parameters

    sampleBuffer

    The CMSampleBuffer to be appended.

    Return Value

    YES if sampleBuffer was appended successfully; otherwise NO.

    Discussion

    The timing information in the sample buffer, considered relative to the time passed to the asset writer’s startSessionAtSourceTime: method, is used to determine the timing of those samples in the output file.

    If NO is returned, clients can check the value of the AVAssetWriter status property to determine whether the writing operation completed, failed, or was canceled. If the status is AVAssetWriterStatusFailed, the AVAssetWriter error property contains an NSError object that describes the failure.

    Do not modify sampleBuffer or its contents after you have passed it to this method.
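
    A minimal sketch of handling a failed append; writerInput, assetWriter, and sampleBuffer are assumed to exist:

    if (![writerInput appendSampleBuffer:sampleBuffer]) {
        if (assetWriter.status == AVAssetWriterStatusFailed) {
            NSLog(@"Could not append sample buffer: %@", assetWriter.error);
        }
    }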

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • A Boolean value that indicates the readiness of the input to accept more media data. (read-only)

    Declaration

    Swift

    var readyForMoreMediaData: Bool { get }

    Objective-C

    @property(nonatomic, readonly, getter=isReadyForMoreMediaData) BOOL readyForMoreMediaData

    Discussion

    This property is observable using key-value observing (see Key-Value Observing Programming Guide). Observers should not assume that they will be notified of changes on a specific thread.
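
    A sketch of one way to observe this property; the observer object, context pointer, and resume logic are assumptions for illustration:

    static void *ReadinessContext = &ReadinessContext;

    [writerInput addObserver:self
                  forKeyPath:@"readyForMoreMediaData"
                     options:NSKeyValueObservingOptionNew
                     context:ReadinessContext];

    // In the observing object:
    - (void)observeValueForKeyPath:(NSString *)keyPath
                          ofObject:(id)object
                            change:(NSDictionary *)change
                           context:(void *)context
    {
        if (context == ReadinessContext && [object isReadyForMoreMediaData]) {
            // Resume supplying sample buffers on an appropriate queue.
        }
    }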

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • Tells the writer that no more buffers will be appended to this input.

    Declaration

    Swift

    func markAsFinished()

    Objective-C

    - (void)markAsFinished

    Discussion

    If you are monitoring each input's readyForMoreMediaData value to keep the output file well interleaved, it is important to call this method when you have finished adding buffers to a track. This prevents other inputs from stalling, as they may otherwise wait forever for that input's media data while attempting to complete the ideal interleaving pattern.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • Instructs the receiver to invoke a block repeatedly, at its convenience, in order to gather media data for writing to the output.

    Declaration

    Swift

    func requestMediaDataWhenReadyOnQueue(_ queue: dispatch_queue_t!, usingBlock block: (() -> Void)!)

    Objective-C

    - (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue usingBlock:(void (^)(void))block

    Parameters

    queue

    The queue on which block should be invoked.

    block

    The block the input should invoke to obtain media data.

    Discussion

    The block should append media data to the input either until the input’s readyForMoreMediaData property becomes NO or until there is no more media data to supply (at which point it may choose to mark the input as finished using markAsFinished). The block should then exit. After the block exits, if the input has not been marked as finished, then once the input has processed the media data it received and is ready for more media data again, it invokes the block again to obtain more.

    A typical use of this method, with a block that supplies media data to an input while respecting the input’s readyForMoreMediaData property, might look like this:

    [myAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
        while ([myAVAssetWriterInput isReadyForMoreMediaData])
        {
            CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
            if (nextSampleBuffer)
            {
                [myAVAssetWriterInput appendSampleBuffer:nextSampleBuffer];
                CFRelease(nextSampleBuffer);
            }
            else
            {
                [myAVAssetWriterInput markAsFinished];
                break;
            }
        }
    }];

    You should not use this method with a push-style buffer source, such as AVCaptureAudioDataOutput or AVCaptureVideoDataOutput, because such a combination will typically require intermediate queueing of buffers. Instead, this method is better suited to a pull-style buffer source such as an AVAssetReaderOutput object.

    When using a push-style buffer source, it is generally better to append each buffer to the asset writer input immediately, as it is received, using appendSampleBuffer:. With this strategy, it is often possible to avoid queueing buffers between the buffer source and the asset writer input. Note that many of these push-style buffer sources also produce buffers in real time, in which case you should set expectsMediaDataInRealTime to YES.
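
    A hedged sketch of the push-style approach described above, appending each buffer directly from an AVCaptureVideoDataOutput delegate callback; self.writerInput is an assumed property, and expectsMediaDataInRealTime is assumed to have been set to YES:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (self.writerInput.readyForMoreMediaData) {
            [self.writerInput appendSampleBuffer:sampleBuffer];
        }
        // Otherwise the sample is dropped, as suggested for real-time sources.
    }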

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • The collection of track-level metadata for association with the asset and for carriage in the output file.

    Declaration

    Swift

    var metadata: [AnyObject]!

    Objective-C

    @property(nonatomic, copy) NSArray *metadata

    Discussion

    The array contains AVMetadataItem objects representing the collection of track-level metadata to be written in the output file.

    You cannot set this property after writing has started.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • The transform specified in the output file as the preferred transformation of the visual media data for display purposes.

    Declaration

    Swift

    var transform: CGAffineTransform

    Objective-C

    @property(nonatomic) CGAffineTransform transform

    Discussion

    If no value is specified, the identity transform is used.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • Specifies the media time scale to be used for the track.

    Declaration

    Swift

    var mediaTimeScale: CMTimeScale

    Objective-C

    @property(nonatomic) CMTimeScale mediaTimeScale

    Discussion

    For file types that support media time scales, such as QuickTime Movie files, specifies the media time scale to be used. The default value of this property is 0, which indicates that the writer input object should choose an appropriate value.

    It is an error to assign a value other than 0 to this property if the object’s mediaType property is set to AVMediaTypeAudio.

    You cannot set this property after writing has started.

    To avoid inconsistencies between the track’s media time scale and the movie’s time scale (see the AVAssetWriter movieTimeScale property), set both to equal or compatible values.
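
    For example, a writer and a video input might use the same time scale; 600 is a common choice for video and is an assumption here:

    assetWriter.movieTimeScale = 600;
    videoInput.mediaTimeScale  = 600;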

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.3 and later.

  • Indicates whether the input should tailor its processing of media data for real-time sources.

    Declaration

    Swift

    var expectsMediaDataInRealTime: Bool

    Objective-C

    @property(nonatomic) BOOL expectsMediaDataInRealTime

    Discussion

    If you are appending media data to an input from a real-time source, such as an AVCaptureOutput object, you should set expectsMediaDataInRealTime to YES. This ensures that readyForMoreMediaData is calculated appropriately for real-time usage.

    You cannot set this property after writing has started.
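
    A brief sketch of configuring a capture-fed input before writing starts; videoSettings and assetWriter are assumed to exist:

    AVAssetWriterInput *captureInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];
    captureInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:captureInput];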

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • A Boolean value that indicates whether the receiver’s track is enabled.

    Declaration

    Swift

    var marksOutputTrackAsEnabled: Bool

    Objective-C

    @property(nonatomic) BOOL marksOutputTrackAsEnabled

    Discussion

    For file types that support enabled and disabled tracks, such as QuickTime Movie files, specifies whether the track corresponding to the receiver should be enabled by default for playback and processing.

    The default value is YES.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • The size specified in the output file as the natural dimensions of the visual media data for display.

    Declaration

    Swift

    var naturalSize: CGSize

    Objective-C

    @property(nonatomic) CGSize naturalSize

    Discussion

    If the default value, CGSizeZero, is specified, the naturalSize of the track corresponding to the receiver is set according to dimensions indicated by the format descriptions that are ultimately written to the output track.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • The preferred volume level to be stored in the output file.

    Declaration

    Swift

    var preferredVolume: Float

    Objective-C

    @property(nonatomic) float preferredVolume

    Discussion

    The value for this property should typically be in the range of 0.0 to 1.0.

    The default value is 1.0, which is equivalent to a “normal” volume level.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • Associates the track corresponding to the specified input with the track corresponding with the receiver.

    Declaration

    Swift

    func addTrackAssociationWithTrackOfInput(_ input: AVAssetWriterInput!, type trackAssociationType: String!)

    Objective-C

    - (void)addTrackAssociationWithTrackOfInput:(AVAssetWriterInput *)input type:(NSString *)trackAssociationType

    Parameters

    input

    The AVAssetWriterInput instance whose corresponding track is to be associated with the track corresponding to the receiver.

    trackAssociationType

    The type of track association to add. Common track association types, such as AVTrackAssociationTypeTimecode, are defined in Track Association Types.

    Discussion

    If the type of association requires tracks of specific media types that don't match the media types of the inputs, or if the output file type does not support track associations, an NSInvalidArgumentException is raised.
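
    A sketch of associating an assumed timecode input with a video input, guarded by canAddTrackAssociationWithTrackOfInput:type: to avoid the exception described above:

    if ([videoInput canAddTrackAssociationWithTrackOfInput:timecodeInput
                                                      type:AVTrackAssociationTypeTimecode]) {
        [videoInput addTrackAssociationWithTrackOfInput:timecodeInput
                                                   type:AVTrackAssociationTypeTimecode];
    }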

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • Returns a Boolean value that indicates whether an association between the tracks corresponding to a pair of inputs is valid.

    Declaration

    Swift

    func canAddTrackAssociationWithTrackOfInput(_ input: AVAssetWriterInput!, type trackAssociationType: String!) -> Bool

    Objective-C

    - (BOOL)canAddTrackAssociationWithTrackOfInput:(AVAssetWriterInput *)input type:(NSString *)trackAssociationType

    Parameters

    input

    The AVAssetWriterInput instance whose corresponding track is to be associated with the track corresponding to the receiver.

    trackAssociationType

    The type of track association to test. Common track association types, such as AVTrackAssociationTypeTimecode, are defined in Track Association Types.

    Return Value

    YES if the track association can be added; otherwise, NO.

    Discussion

    If the type of association requires tracks of specific media types that don't match the media types of the inputs, or if the output file type does not support track associations, this method returns NO.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • The media type of the samples that can be appended to the input. (read-only)

    Declaration

    Swift

    var mediaType: String! { get }

    Objective-C

    @property(nonatomic, readonly) NSString *mediaType

    Discussion

    The value of this property is one of the media type strings defined in AV Foundation Constants Reference.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • The settings used for encoding the media appended to the output. (read-only)

    Declaration

    Swift

    var outputSettings: [NSObject : AnyObject]! { get }

    Objective-C

    @property(nonatomic, readonly) NSDictionary *outputSettings

    Discussion

    A value of nil specifies that appended samples should not be re-encoded.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 4.1 and later.

  • A hint about the format of buffers that will be appended. (read-only)

    Declaration

    Swift

    var sourceFormatHint: CMFormatDescription! { get }

    Objective-C

    @property(nonatomic, readonly) CMFormatDescriptionRef sourceFormatHint

    Discussion

    An AVAssetWriterInput object may be able to use this hint to fill in missing output settings or perform more upfront validation.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 6.0 and later.

  • For file types that support writing sample references, such as QuickTime Movie files, specifies the base URL sample references are relative to.

    Declaration

    Swift

    @NSCopying var sampleReferenceBaseURL: NSURL!

    Objective-C

    @property(nonatomic, copy) NSURL *sampleReferenceBaseURL

    Discussion

    If the value of this property can be resolved as an absolute URL, the sample locations written to the file when appending sample references will be relative to this URL. The URL must point to a location that is in a directory that is a parent of the sample reference location.

    For example, setting the sampleReferenceBaseURL property to file:///User/johnappleseed/Movies/ and appending sample buffers with the kCMSampleBufferAttachmentKey_SampleReferenceURL attachment set to file:///User/johnappleseed/Movies/data/movie1.mov causes the sample reference data/movie1.mov to be written to the movie.
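
    A sketch mirroring that example; the paths are illustrative, and creation of the sample-reference buffer itself is omitted:

    writerInput.sampleReferenceBaseURL =
        [NSURL fileURLWithPath:@"/User/johnappleseed/Movies" isDirectory:YES];

    // The referenced movie's URL travels with the sample buffer as an attachment.
    NSURL *referencedMovieURL =
        [NSURL fileURLWithPath:@"/User/johnappleseed/Movies/data/movie1.mov"];
    CMSetAttachment(sampleReferenceBuffer,                      // an assumed CMSampleBufferRef
                    kCMSampleBufferAttachmentKey_SampleReferenceURL,
                    (__bridge CFTypeRef)referencedMovieURL,
                    kCMAttachmentMode_ShouldPropagate);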

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • The language tag to associate with the track corresponding to the receiver.

    Declaration

    Swift

    var extendedLanguageTag: String!

    Objective-C

    @property(nonatomic, copy) NSString *extendedLanguageTag

    Discussion

    The value is specified as an RFC 4646 language tag. It can be nil, in which case no tag is written to the track.

    Extended language tags are normally set only when an ISO 639-2/T language code by itself is ambiguous, as in cases in which media data should be distinguished not only by language but also by the regional dialect in use or the writing system employed.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

    See Also

    languageCode

  • The language to associate with the track corresponding to the receiver.

    Declaration

    Swift

    var languageCode: String!

    Objective-C

    @property(nonatomic, copy) NSString *languageCode

    Discussion

    The value is specified as an ISO 639-2/T language code. It can be nil, in which case no language code is written to the track.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 7.0 and later.

  • A Boolean value that indicates whether the input might perform multiple passes over appended media data. (read-only)

    Declaration

    Swift

    var canPerformMultiplePasses: Bool { get }

    Objective-C

    @property(nonatomic, readonly) BOOL canPerformMultiplePasses

    Discussion

    When the value of this property is YES, your source for media data should be configured for random access. After appending all of the media data for the current pass, as specified by the currentPassDescription property, invoke markCurrentPassAsFinished to start the process of determining whether additional passes are needed. Note that it is still possible in this case for the input to perform only the initial pass, if it determines that there will be no benefit to performing multiple passes.

    When the value of this property is NO, your source for media data only needs to support sequential access. In this case, append all of the source media once and call markAsFinished.

    In the default configuration of AVAssetWriterInput, the value of this property is NO. Currently, the only way for this property to become YES is to set performsMultiPassEncodingIfSupported to YES. The final value is available after startWriting is called, when a specific encoder has been chosen.

    This property supports key-value observing.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • Provides an object that describes the requirements, such as source time ranges to append or re-append, for the current pass. (read-only)

    Declaration

    Swift

    var currentPassDescription: AVAssetWriterInputPassDescription! { get }

    Objective-C

    @property(readonly) AVAssetWriterInputPassDescription *currentPassDescription

    Discussion

    If the value of this property is nil, there is no request to be fulfilled and markAsFinished should be called on the asset writer input.

    During the first pass, the request will contain a single time range from zero to positive infinity, indicating that all media from the source should be appended. This is also true when canPerformMultiplePasses is NO, in which case only one pass will be performed.

    The value of this property will be nil before startWriting is called on the attached asset writer. It will transition to an initial non-nil value during the call to startWriting. After that, the value of this property will change only after a call to markCurrentPassAsFinished. The respondToEachPassDescriptionOnQueue:usingBlock: method allows for notification at the beginning of each pass.

    This property supports key-value observing. Observers should not assume that they will be notified of changes on a specific thread.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • Instructs the receiver to analyze the media data that has been appended and determine whether the results could be improved by re-encoding certain segments.

    Declaration

    Swift

    func markCurrentPassAsFinished()

    Objective-C

    - (void)markCurrentPassAsFinished

    Discussion

    When the value of canPerformMultiplePasses is YES, call this method after you have appended all of your media data. After the receiver analyzes whether an additional pass is warranted, the value of currentPassDescription will change (usually asynchronously) to describe how to set up for the next pass. Although it is possible to use key-value observing to determine when the value of currentPassDescription has changed, it is typically more convenient to invoke respondToEachPassDescriptionOnQueue:usingBlock: in order to start the work for each pass.

    After re-appending the media data for all of the time ranges of the new pass, call this method again to determine whether additional segments should be re-appended in another pass.

    Calling this method effectively cancels any previous invocation of requestMediaDataWhenReadyOnQueue:usingBlock:, meaning that requestMediaDataWhenReadyOnQueue:usingBlock: can be invoked again for each new pass. The respondToEachPassDescriptionOnQueue:usingBlock: method provides a convenient way to consolidate these invocations in your code.

    After each pass, you have the option of keeping the most recent results by calling markAsFinished instead of this method. If the value of currentPassDescription is nil at the beginning of a pass, call markAsFinished to tell the receiver to not expect any further media data.

    If the value of canPerformMultiplePasses is NO, the value of currentPassDescription will immediately become nil after calling this method.

    Before calling this method, you must ensure that the receiver is attached to an AVAssetWriter instance via a prior call to addInput: and that startWriting has been called on the asset writer.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • A Boolean value that indicates whether the input should attempt to encode the source media data using multiple passes.

    Declaration

    Swift

    var performsMultiPassEncodingIfSupported: Bool

    Objective-C

    @property(nonatomic) BOOL performsMultiPassEncodingIfSupported

    Discussion

    The asset writer input may be able to achieve higher quality and/or lower data rate by performing multiple passes over the source media. It does this by analyzing the media data that has been appended and re-encoding certain segments with different parameters. In order to do this re-encoding, the media data for these segments must be appended again. See markCurrentPassAsFinished and the property currentPassDescription for the mechanism by which the input nominates segments for re-appending.

    When the value of this property is YES, the value of readyForMoreMediaData for other inputs attached to the same AVAssetWriter may be NO more often and/or for longer periods of time. In particular, the value of readyForMoreMediaData for inputs that do not (or cannot) perform multiple passes may start out as NO after the AVAssetWriter method startWriting has been called and may not change to YES until after all multi-pass inputs have completed their final pass.

    When the value of this property is YES, the input may store data in one or more temporary files before writing compressed samples to the output file. Use the AVAssetWriter property directoryForTemporaryFiles if you need to control the location of temporary file writing.

    The default value is NO, meaning that no additional analysis will occur and no segments will be re-encoded. Not all asset writer input configurations (for example, inputs configured with certain media types or to use certain encoders) can benefit from performing multiple passes over the source media. To determine whether the selected encoder can perform multiple passes, query the value of canPerformMultiplePasses after calling startWriting.

    It is an error to set this property to YES when the value of expectsMediaDataInRealTime is YES. It is also an error for an asset writer to contain an input with this property set to YES when any of its other inputs have expectsMediaDataInRealTime set to YES.

    This property cannot be set after writing on the receiver's AVAssetWriter has started.
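
    A sketch of opting in and then checking whether the selected encoder supports multiple passes; writer and videoInput are assumed to exist:

    videoInput.performsMultiPassEncodingIfSupported = YES;
    [writer addInput:videoInput];
    [writer startWriting];

    if (videoInput.canPerformMultiplePasses) {
        // Configure the media source for random access and drive each pass
        // with respondToEachPassDescriptionOnQueue:usingBlock:.
    } else {
        // Append the source media once and call markAsFinished when done.
    }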

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • For file types that support media chunk alignment, such as QuickTime Movie files, specifies the boundary for media chunk alignment in bytes.

    Declaration

    Swift

    var preferredMediaChunkAlignment: Int

    Objective-C

    @property(nonatomic) NSInteger preferredMediaChunkAlignment

    Discussion

    The default value is 0, which means that the receiver will choose an appropriate default value. A value of 1 implies that no padding should be used to achieve a particular chunk alignment. It is an error to set a negative value for chunk alignment.

    This property cannot be set after startWriting has been called for the receiver.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • For file types that support media chunk duration, such as QuickTime Movie files, specifies the duration to be used for each chunk of sample data in the output file.

    Declaration

    Swift

    var preferredMediaChunkDuration: CMTime

    Objective-C

    @property(nonatomic) CMTime preferredMediaChunkDuration

    Discussion

    Chunk duration can influence the granularity of the I/O performed when reading a media file, for example, during playback. A larger chunk duration can result in fewer reads from disk, at the potential expense of a higher memory footprint.

    A “chunk” contains one or more samples. The total duration of the samples in a chunk is no greater than this preferred chunk duration, or the duration of a single sample if the sample's duration is greater than this preferred chunk duration.

    The default value is kCMTimeInvalid, which means that the receiver will choose an appropriate default value. It is an error to set a chunk duration that is negative or non-numeric.

    This property cannot be set after startWriting has been called for the receiver.

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.

  • Instructs the receiver to invoke a client-supplied block whenever a new pass has begun.

    Declaration

    Swift

    func respondToEachPassDescriptionOnQueue(_ queue: dispatch_queue_t!, usingBlock block: dispatch_block_t!)

    Objective-C

    - (void)respondToEachPassDescriptionOnQueue:(dispatch_queue_t)queue usingBlock:(dispatch_block_t)block

    Parameters

    queue

    The queue on which the block should be invoked.

    block

    The block the receiver should invoke whenever a new pass has begun.

    Discussion

    A typical block passed to this method will perform the following steps:

    1. Query the value of the receiver's currentPassDescription property and reconfigure the source of media data, for example, the AVAssetReader instance, accordingly.

    2. Invoke requestMediaDataWhenReadyOnQueue:usingBlock: to begin appending data for the current pass.

    When all media data has been appended for the current request, invoke markCurrentPassAsFinished to begin the process of determining whether an additional pass is warranted. If an additional pass is warranted, the block passed to this method will be invoked to begin the next pass. If no additional passes are needed, the block passed to this method will be invoked one final time so the client can invoke markAsFinished in response to the value of currentPassDescription becoming nil.

    Before calling this method, you must ensure that the receiver is attached to an AVAssetWriter instance via a prior call to addInput: and that startWriting has been called on the asset writer.
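
    A hedged sketch of the typical block described above; writerInput, passQueue, and the copyNextSampleBufferToWrite helper are assumptions, and reconfiguring the media source for the requested time ranges is only outlined:

    [writerInput respondToEachPassDescriptionOnQueue:passQueue usingBlock:^{
        AVAssetWriterInputPassDescription *pass = writerInput.currentPassDescription;
        if (pass) {
            // 1. Reconfigure the media source here for pass.sourceTimeRanges.
            // 2. Append the requested data for this pass.
            [writerInput requestMediaDataWhenReadyOnQueue:passQueue usingBlock:^{
                while (writerInput.readyForMoreMediaData) {
                    CMSampleBufferRef buffer = [self copyNextSampleBufferToWrite];
                    if (buffer) {
                        [writerInput appendSampleBuffer:buffer];
                        CFRelease(buffer);
                    } else {
                        [writerInput markCurrentPassAsFinished];
                        break;
                    }
                }
            }];
        } else {
            // No further passes are requested.
            [writerInput markAsFinished];
        }
    }];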

    Import Statement

    Objective-C

    @import AVFoundation;

    Swift

    import AVFoundation

    Availability

    Available in iOS 8.0 and later.