AV Foundation Release Notes for iOS 5

This article summarizes some of the new features and changes in functionality in AV Foundation in iOS 5.

UTIs and MIME types

New class methods have been defined on AVURLAsset that provide information about file types and MIME types supported by AVFoundation.

Note that a definitive determination of whether a particular resource is playable or is suitable for any other purpose requires an examination of its contents; -[AVAsset isPlayable] and other methods in the category AVAssetUsability are available for that purpose.
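
For example, a minimal sketch of both checks (movieURL is a hypothetical NSURL; the class methods shown, +audiovisualTypes, +audiovisualMIMETypes, and +isPlayableExtendedMIMEType:, are the AVURLAsset additions in question):

// File types (UTIs) and MIME types that AVFoundation can open:
NSArray *fileTypes = [AVURLAsset audiovisualTypes];
NSArray *mimeTypes = [AVURLAsset audiovisualMIMETypes];

// A quick screen of an extended MIME type:
BOOL maybePlayable = [AVURLAsset isPlayableExtendedMIMEType:@"video/mp4; codecs=\"avc1.42E01E, mp4a.40.2\""];

// The definitive determination requires examining the resource itself:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"playable"] completionHandler:^{
    if ( [asset statusOfValueForKey:@"playable" error:NULL] == AVKeyValueStatusLoaded && [asset isPlayable] )
    {
        // the resource is suitable for playback
    }
}];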

Preparing an AVAsset for playback

The current revision of the AV Foundation Programming Guide recommends, for all file-based assets, that applications load the value of the tracks property of an asset before creating an AVPlayerItem with it and associating the AVPlayerItem with an instance of AVPlayer. While this recommendation remains the best practice for all applications with a deployment target of iOS 4.0-4.3, applications with a deployment target of iOS 5.0 or later need no longer load properties of an AVAsset before creating an AVPlayerItem with it and playing the AVPlayerItem. Instead, those applications need only load the values of the properties that they will examine or process themselves, e.g. to display in a user interface, to configure playback, or for any other purpose. Starting with iOS 5.0, AVPlayer takes care of its own loading needs.
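
A sketch of the iOS 5.0 pattern (movieURL is again hypothetical): create the AVPlayerItem immediately, and load only the values the application itself will examine, such as the duration for display in a user interface:

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];

// iOS 5.0 and later: no preparatory loading is required before playback.
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

// Load only the values the application will examine itself.
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"duration"] completionHandler:^{
    if ( [asset statusOfValueForKey:@"duration" error:NULL] == AVKeyValueStatusLoaded )
    {
        CMTime duration = [asset duration];
        // update the user interface with the duration
    }
}];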

Selection of audio and subtitle media according to language and other criteria

AVFoundation now offers features for the discovery of options that may be offered by audiovisual media resources to accommodate differing language preferences, accessibility requirements, custom application configurations, and other needs, and for selection of these options for playback. For example, a resource may contain multiple audible options, each with dialog spoken in a different language, to be selected for playback to the exclusion of the others. Similar options in multiple languages can also be provided for legible media, such as subtitles. Both file-based content and HTTP Live Streaming content can offer media options. To obtain information about the groups of options that are offered by an instance of AVAsset:
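
use -[AVAsset availableMediaCharacteristicsWithMediaSelectionOptions] to discover the media characteristics for which selection groups are offered, and -[AVAsset mediaSelectionGroupForMediaCharacteristic:] to obtain each group. A minimal sketch (the value of the availableMediaCharacteristicsWithMediaSelectionOptions property should be loaded first, e.g. via -loadValuesAsynchronouslyForKeys:completionHandler:):

for (NSString *characteristic in [asset availableMediaCharacteristicsWithMediaSelectionOptions])
{
    AVMediaSelectionGroup *group = [asset mediaSelectionGroupForMediaCharacteristic:characteristic];
    NSLog(@"characteristic %@ offers %lu options", characteristic, (unsigned long)[[group options] count]);
}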

To examine available options within a group and to filter them for selection for playback:
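
use the options property of AVMediaSelectionGroup together with its filtering class methods, and select an option with -[AVPlayerItem selectMediaOption:inMediaSelectionGroup:]. A sketch that lists the playable legible options and selects one (asset and playerItem are assumed to exist; a real application would match user preferences rather than take the first option):

AVMediaSelectionGroup *legibleGroup = [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicLegible];
if ( legibleGroup )
{
    // Consider only options that can actually be played.
    NSArray *playableOptions = [AVMediaSelectionGroup playableMediaSelectionOptionsFromArray:[legibleGroup options]];
    for (AVMediaSelectionOption *option in playableOptions)
        NSLog(@"legible option with locale %@", [[option locale] localeIdentifier]);

    if ( [playableOptions count] > 0 )
        [playerItem selectMediaOption:[playableOptions objectAtIndex:0] inMediaSelectionGroup:legibleGroup];
}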

Advice about subtitles

Special care should be taken when displaying the available legible options to the user and when making a selection among them according to user preferences. Some legible content contains "forced" subtitles, meaning that according to the content author's intent the subtitles should be displayed even when the user has neither indicated a preference for the display of subtitles nor made an explicit selection of a subtitle option. Forced subtitles are typically used to convey the meaning of spoken dialog or visible text in a language that the content provider assumes will not be commonly understood, when comprehension of the dialog or text is nevertheless considered essential. Be sure that your app allows forced subtitles to be displayed appropriately by following the advice below.

An AVMediaSelectionGroup for the characteristic AVMediaCharacteristicLegible can provide two types of legible options: 1) options for display of legible content that's considered elective along with content that's considered essential, and 2) options for display of essential legible content only. Legible AVMediaSelectionOptions that include essential content only have the media characteristic AVMediaCharacteristicContainsOnlyForcedSubtitles (defined in AVMediaFormat.h). When offering legible options for display to the end user in a selection interface, or when considering subtitle options for automatic selection according to a user preference for language, legible options with the characteristic AVMediaCharacteristicContainsOnlyForcedSubtitles should be excluded. Use +[AVMediaSelectionGroup mediaSelectionOptionsFromArray:withoutMediaCharacteristics:], specifying AVMediaCharacteristicContainsOnlyForcedSubtitles as a characteristic to exclude, to obtain the legible options that are suitable to offer to the end user in a selection interface or to consider for selection according to a user preference.

If the user indicates no preference for or makes no selection of legible content, the application should select one of the legible options for playback that has the characteristic AVMediaCharacteristicContainsOnlyForcedSubtitles, if any are present. For most resources containing legible options with forced-only subtitles, an appropriate selection among them can be made in accordance with the current audible selection. Use -[AVMediaSelectionOption associatedMediaSelectionOptionInMediaSelectionGroup:] to obtain the legible option associated with an audible option. If there is no other means available to choose among them, the first legible option with forced-only subtitles in the media selection group is an appropriate default.
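
A sketch of both recommendations, assuming no user preference for legible content is in effect (asset and playerItem as above):

AVMediaSelectionGroup *audibleGroup = [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicAudible];
AVMediaSelectionGroup *legibleGroup = [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicLegible];

// Options suitable for a selection UI exclude forced-only subtitles.
NSArray *userFacingOptions = [AVMediaSelectionGroup mediaSelectionOptionsFromArray:[legibleGroup options]
                                                       withoutMediaCharacteristics:[NSArray arrayWithObject:AVMediaCharacteristicContainsOnlyForcedSubtitles]];
// (present userFacingOptions in your selection interface)

// With no user selection, choose a forced-only option, preferably the one
// associated with the current audible selection.
NSArray *forcedOnlyOptions = [AVMediaSelectionGroup mediaSelectionOptionsFromArray:[legibleGroup options]
                                                          withMediaCharacteristics:[NSArray arrayWithObject:AVMediaCharacteristicContainsOnlyForcedSubtitles]];
if ( [forcedOnlyOptions count] > 0 )
{
    AVMediaSelectionOption *audibleOption = [playerItem selectedMediaOptionInMediaSelectionGroup:audibleGroup];
    AVMediaSelectionOption *forcedOption = [audibleOption associatedMediaSelectionOptionInMediaSelectionGroup:legibleGroup];
    if ( forcedOption == nil || ![forcedOption hasMediaCharacteristic:AVMediaCharacteristicContainsOnlyForcedSubtitles] )
        forcedOption = [forcedOnlyOptions objectAtIndex:0];  // default to the first forced-only option
    [playerItem selectMediaOption:forcedOption inMediaSelectionGroup:legibleGroup];
}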

AirPlay support in AVPlayer

Routing of audio and video to another device via AirPlay does not require the intervention or knowledge of AVFoundation clients. Users can choose AirPlay routing of any audio or video played by an AVPlayer either by interacting with an instance of MPVolumeView provided within an application's playback interface or by using the system AirPlay picker that's part of the standard multitasking interface. See the MPVolumeView class reference for details on how to offer AirPlay routing to users within your application's user interface.
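
A sketch of offering the AirPlay picker within a playback interface (playbackControlsView and pickerFrame are hypothetical):

#import <MediaPlayer/MediaPlayer.h>

MPVolumeView *volumeView = [[MPVolumeView alloc] initWithFrame:pickerFrame];
volumeView.showsRouteButton = YES;   // includes the AirPlay route button
[playbackControlsView addSubview:volumeView];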

Whether audio or video is playing on the local device or remotely via AirPlay, the existing interfaces for control of playback defined on AVPlayer, AVQueuePlayer, and AVPlayerItem still apply.

In addition, AVPlayer now defines properties that allow AVFoundation clients to control AirPlay behaviors and to respond to AirPlay routing.
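
These include allowsAirPlayVideo, the key-value observable airPlayVideoActive, and usesAirPlayVideoWhileAirPlayScreenIsActive. A sketch:

// Permit or forbid AirPlay video for this player.
player.allowsAirPlayVideo = YES;

// Observe routing changes so the UI can indicate remote playback.
[player addObserver:self forKeyPath:@"airPlayVideoActive" options:NSKeyValueObservingOptionNew context:NULL];

// While the screen is mirrored to an AirPlay device, play video on the
// remote display instead of mirroring the playback UI.
player.usesAirPlayVideoWhileAirPlayScreenIsActive = YES;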

Determining whether fast forward and fast reverse playback are available

AVPlayerItem now has properties that allow applications to determine whether playback is possible at forward rates greater than 1.0 and at reverse rates less than -1.0. While such playback rates are possible with all file-based content, they are possible with HTTP Live Streaming content only when the source playlist offers media that allows it. Applications may wish to customize playback controls offered to users to accord with the values of these properties for the currently playing item.
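
For example, using the new canPlayFastForward and canPlayFastReverse properties to configure hypothetical transport buttons:

AVPlayerItem *currentItem = [player currentItem];
fastForwardButton.enabled = [currentItem canPlayFastForward];  // rates greater than 1.0
fastReverseButton.enabled = [currentItem canPlayFastReverse];  // rates less than -1.0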

Seeking and, having sought, knowing that the seeking is done

None of the variants of -seekToTime: provided by AVPlayer and AVPlayerItem performs a seek operation synchronously, and initiating a new seek operation implicitly cancels any seek operation still in progress. Therefore, without a notification of some kind, it can be difficult to tell whether the full effect of a seek operation has occurred; observing the effect of a seek on the current time is not always sufficient, because a seek to a nearby time using typical tolerances may have no effect on the current time at all. To provide an indication when a seek operation has either finished or been cancelled, AVPlayer and AVPlayerItem now define methods for seeking that accept a client-specified block to be invoked as a notification.
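
For example (targetTime is a hypothetical CMTime):

[player seekToTime:targetTime completionHandler:^(BOOL finished) {
    if ( finished )
    {
        // the full effect of the seek has occurred
    }
    else
    {
        // the seek was cancelled, e.g. by a subsequent seek request
    }
}];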

Advice about scrubbing

Scrubbing is the common term for an operation in which the user jumps from time to time within an audiovisual media resource, in arbitrary increments of media time both backward and forward, at arbitrary intervals of real time, sometimes to locate a particular scene of interest and sometimes as a way of previewing the contents of the resource. Scrubbing is commonly implemented by AVFoundation clients as a succession of seek operations, in order to set the current time of an AVPlayerItem to a time indicated by the current or recent position of a UI affordance, such as a slider. As noted above, because each successive seek operation implicitly cancels a prior seek operation, it's important during scrubbing to allow at least some seek operations to finish instead of being cancelled by new ones, so that the user will be presented with a visual indication that scrubbing is in fact having an effect.

One way to ensure that a sufficient number of seek operations will finish and that the user will receive appropriate visual feedback is to chain them. Instead of initiating a new seek operation each time a UI affordance changes position, you can merely note the time that its new position indicates, and when the completion handler for a prior seek is invoked, you can seek to the time indicated by the most recent position of the UI affordance, if that time has changed since the last seek.
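
A sketch of this chaining pattern; chaseTime, seekInProgress, and player are hypothetical properties of the playback controller:

- (void)scrubberMovedToTime:(CMTime)time
{
    self.chaseTime = time;
    if ( !self.seekInProgress )
        [self seekToChaseTime];
}

- (void)seekToChaseTime
{
    self.seekInProgress = YES;
    CMTime seekTime = self.chaseTime;
    [self.player seekToTime:seekTime completionHandler:^(BOOL finished) {
        self.seekInProgress = NO;
        // If the scrubber moved while this seek was in flight, chase the new time.
        if ( CMTimeCompare(seekTime, self.chaseTime) != 0 )
            [self seekToChaseTime];
    }];
}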

Background and foreground transitions

To the background

In iOS 4.x, the playback of any instance of AVPlayer is automatically paused as the app that created it is sent to the background whenever the AVPlayer is associated with any instance of AVPlayerLayer, whether the AVPlayerLayer is being displayed onscreen or not, and whether the currently playing item has video media or not.

In iOS 5.0, playback of an AVPlayer is automatically paused as the app that created it is sent to the background only if the AVPlayer's current item is displaying video on the device's display.

To the foreground

In iOS 4.x, the playback of audio in the background by other applications is automatically interrupted as an AVFoundation client comes to the foreground whenever any instance of AVPlayerLayer exists within the application. The AVPlayerLayer need not be associated with any AVPlayer.

In iOS 5.0, the above behavior continues, except for applications that have a base SDK of iOS 5.0 or later. As those applications come to the foreground, the playback of audio in the background by other applications is automatically interrupted only if they have an AVPlayer with a current item that needs to update the display of video either on the device or on a connected display. Otherwise the playback of background audio will continue until a further user action requires an interruption.

Audio playback under the locked screen

(When the screen is locked while an app is the frontmost app.)

Applications that have a base SDK of iOS 5 or later:

Applications that have a base SDK prior to iOS 5.0:

Receiving rotated CVPixelBuffers from AVCaptureVideoDataOutput

Clients may now receive physically rotated CVPixelBuffers in their AVCaptureVideoDataOutput -captureOutput:didOutputSampleBuffer:fromConnection: delegate callback. In previous iOS versions, the front-facing camera would always deliver buffers in AVCaptureVideoOrientationLandscapeLeft and the back-facing camera would always deliver buffers in AVCaptureVideoOrientationLandscapeRight. All four AVCaptureVideoOrientation values are supported, and rotation is hardware accelerated. To request buffer rotation, a client calls -setVideoOrientation: on the AVCaptureVideoDataOutput's video AVCaptureConnection. Note that physically rotating buffers does come with a performance cost, so request rotation only if it's necessary. If, for instance, you want rotated video written to a QuickTime movie file using AVAssetWriter, it is preferable to set the -transform property on the AVAssetWriterInput rather than physically rotate the buffers in AVCaptureVideoDataOutput.
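
A sketch of requesting rotated buffers (videoDataOutput is assumed to be attached to a running AVCaptureSession):

AVCaptureConnection *connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if ( [connection isVideoOrientationSupported] )
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];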

Setting minimum and maximum video frame rate

Since iOS 4.0, clients have been able to adjust the maximum frame rate of video buffers delivered to the AVCaptureVideoDataOutput -captureOutput:didOutputSampleBuffer:fromConnection: delegate callback using [AVCaptureVideoDataOutput setMinFrameDuration:]. In iOS 5, AVCaptureVideoDataOutput's minFrameDuration property has been deprecated, and a new pair of properties, videoMinFrameDuration and videoMaxFrameDuration, has been introduced on AVCaptureConnection.
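
A sketch that caps the delivered frame rate at 15 fps and keeps it from falling below 10 fps; note that a shorter frame duration means a higher frame rate, and the ...Supported checks on AVCaptureConnection are assumed here (videoDataOutput is hypothetical):

AVCaptureConnection *connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if ( [connection isVideoMinFrameDurationSupported] )
    [connection setVideoMinFrameDuration:CMTimeMake(1, 15)];  // maximum rate: 15 fps
if ( [connection isVideoMaxFrameDurationSupported] )
    [connection setVideoMaxFrameDuration:CMTimeMake(1, 10)];  // minimum rate: 10 fps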

Discovering pixel formats supported by AVCaptureVideoDataOutput
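
AVCaptureVideoDataOutput now exposes the availableVideoCVPixelFormatTypes property, an array of NSNumbers giving the CoreVideo pixel format types that may be used as the value of kCVPixelBufferPixelFormatTypeKey in the output's videoSettings dictionary; the first entry in the array is the most efficient format. A sketch (videoDataOutput is hypothetical):

NSArray *pixelFormats = [videoDataOutput availableVideoCVPixelFormatTypes];
NSNumber *preferredFormat = [pixelFormats objectAtIndex:0];  // most efficient format first
[videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObject:preferredFormat
                                                              forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey]];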

Determining when a re-focus is necessary

Clients of AVCaptureDevice may lock focus, exposure, and/or white balance if the receiver supports it, but once locked, these properties stay locked, even if the subject area changes dramatically due either to substantial movement of the iOS device or to movement of the subjects within the capture device's field of view. A new opt-in mechanism has been added to AVCaptureDevice to allow clients to receive a notification when the subject area changes substantially.
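
The opt-in is the subjectAreaChangeMonitoringEnabled property, paired with AVCaptureDeviceSubjectAreaDidChangeNotification. A sketch (camera is a hypothetical AVCaptureDevice whose focus was previously locked):

NSError *error = nil;
// Enabling monitoring requires a configuration lock on the device.
if ( [camera lockForConfiguration:&error] )
{
    camera.subjectAreaChangeMonitoringEnabled = YES;
    [camera unlockForConfiguration];
}
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(subjectAreaDidChange:)
                                             name:AVCaptureDeviceSubjectAreaDidChangeNotification
                                           object:camera];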

Determining flash and torch availability

Clients of AVCaptureDevice may query whether the device has a flash (-hasFlash) or a torch (-hasTorch) and may turn the flash or torch on. Use of the LED torch generates a lot of heat, and continuous use of the torch or flash could cause the enclosing device to overheat, so under thermal duress the flash and torch turn off automatically. In iOS 5, AVCaptureDevice provides three new properties exposing the current state of the flash and torch.
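
These are flashAvailable, torchAvailable, and torchLevel. A sketch using them (camera is hypothetical):

if ( [camera hasFlash] && ![camera isFlashAvailable] )
    NSLog(@"flash is temporarily unavailable (possibly due to overheating)");
if ( [camera hasTorch] && [camera isTorchAvailable] )
    NSLog(@"torch may be used; current level: %f", [camera torchLevel]);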

Using the LED torch as a flashlight

Previous iOS releases required an AVCaptureSession to be running before a client could turn on the LED torch. Consequently, the full capture stack was allocated and running, resulting in unnecessary power consumption for applications using the LED torch as a flashlight. In iOS 5, it is no longer necessary to run an AVCaptureSession to turn the torch on. Flashlight applications may now simply call:

#import <AVFoundation/AVFoundation.h>

AVCaptureDevice *backCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ( [backCamera isTorchAvailable] && [backCamera isTorchModeSupported:AVCaptureTorchModeOn] )
{
    NSError *error = nil;
    // The device must be locked for configuration before its torch mode can be changed.
    if ( [backCamera lockForConfiguration:&error] )
    {
        [backCamera setTorchMode:AVCaptureTorchModeOn];
        [backCamera unlockForConfiguration];
    }
}

Finding connections with a given media type

The new AVCaptureOutput utility method, -connectionWithMediaType:, allows clients to find an output's first connection of a given media type without writing a function to iterate through each connection's input ports.
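
For example (stillImageOutput is a hypothetical AVCaptureStillImageOutput):

AVCaptureConnection *videoConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];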

Scaling and cropping still images

AVCaptureStillImageOutput now supports still image scale and crop to simulate a "digital zoom" effect. To set a scale and crop factor, clients call [AVCaptureConnection setVideoScaleAndCropFactor:] on the still image output's video connection. The value must be between 1.0 (no scaling/cropping) and the result of [AVCaptureConnection videoMaxScaleAndCropFactor]. When the video scale and crop factor is set to a value higher than 1.0, AVCaptureStillImageOutput scales the captured image by the specified factor and center crops the result back to its original size using hardware acceleration.
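
A sketch that applies a 2x digital zoom, clamped to the supported maximum (stillImageOutput as above):

AVCaptureConnection *videoConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
CGFloat desiredZoom = 2.0;
CGFloat maxZoom = [videoConnection videoMaxScaleAndCropFactor];
[videoConnection setVideoScaleAndCropFactor:MIN(desiredZoom, maxZoom)];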

Driving a camera shutter animation

AVCaptureStillImageOutput provides a new key-value observable property, -capturingStillImage, whereby a client can find out when a request to [AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] is being satisfied. -isCapturingStillImage changes to YES right before the picture is taken, and changes to NO right after it is taken. Clients may use this property to drive a camera shutter or iris animation.
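
A sketch of the observation:

// Registration, performed once, e.g. when the capture UI loads:
// [stillImageOutput addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:NULL];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ( [keyPath isEqualToString:@"capturingStillImage"] )
    {
        BOOL capturing = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];
        // start the shutter animation when capturing becomes YES; end it when it becomes NO
    }
}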

Enhancements for streaming video applications

Clients may use the new AVCaptureSessionPreset352x288 session preset to receive 352x288 (CIF) sized video buffers from the camera. This preset is supported by both the front and back cameras on all devices. Even so, clients should always call [AVCaptureDevice supportsAVCaptureSessionPreset:] to ensure that the input device supports their desired preset.
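
For example (videoDevice and session are hypothetical):

if ( [videoDevice supportsAVCaptureSessionPreset:AVCaptureSessionPreset352x288]
     && [session canSetSessionPreset:AVCaptureSessionPreset352x288] )
    [session setSessionPreset:AVCaptureSessionPreset352x288];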

Capturing movies for editing applications

AVCaptureSession provides two new session presets to aid in the creation of editable content: AVCaptureSessionPresetiFrame960x540 and AVCaptureSessionPresetiFrame1280x720.

Using either of these presets, AVCaptureMovieFileOutput captures Apple iFrame compatible movies (see http://support.apple.com/kb/HT3905) that work well in iMovie and in other editing applications.
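
A sketch of recording with one of these presets (session, movieFileOutput, and outputURL are hypothetical):

if ( [session canSetSessionPreset:AVCaptureSessionPresetiFrame1280x720] )
    [session setSessionPreset:AVCaptureSessionPresetiFrame1280x720];
[movieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];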