Optimizing Your App for Device Hardware

Using audio session properties, you can optimize your app’s audio behavior for device hardware at runtime. This lets your code adapt to the characteristics of the device it’s running on, as well as to changes made by the user (such as plugging in a headset or docking the device) as your app runs.

The audio session property mechanism lets you:

- Specify preferred hardware settings, such as sample rate and I/O buffer duration
- Query the hardware characteristics of the device your app is running on
- Register to be notified when property values change

The most commonly used property value change event is a route change, covered in Responding to Route Changes. You can also write callbacks to listen for changes in hardware output volume and changes in the availability of audio input.
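For instance, a minimal sketch of observing output volume changes with key-value observing; the observing class, and its eventual removal of the observer, are assumed to be handled elsewhere in your code:

// The outputVolume property of AVAudioSession is key-value observable.
AVAudioSession *session = [AVAudioSession sharedInstance];
[session addObserver:self
          forKeyPath:@"outputVolume"
             options:NSKeyValueObservingOptionNew
             context:NULL];
// Implement observeValueForKeyPath:ofObject:change:context: to respond.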

Choosing Preferred Audio Hardware Values

Use the audio session APIs to specify preferred hardware sample rate and preferred hardware I/O buffer duration. Table 4-1 describes benefits and costs of these preferences.

Table 4-1  Choosing preferred hardware values


|            | Preferred sample rate                                      | Preferred I/O buffer duration                                  |
| ---------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| High value | Example: 44.1 kHz; + high audio quality; – large file size | Example: 500 ms; + less-frequent disk access; – longer latency  |
| Low value  | Example: 8 kHz; + small file size; – low audio quality     | Example: 5 ms; + low latency; – frequent disk access            |

For example, as shown in the top-middle cell of the table, you might specify a preference for a high sample rate if audio quality is very important in your app, and if large file size is not a significant issue.

The default audio I/O buffer duration (about 0.02 seconds for 44.1 kHz audio) provides sufficient responsiveness for most apps. You can set a lower I/O buffer duration for latency-critical apps, such as live musical instrument monitoring, but most apps never need to change this value.
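Buffer duration relates to buffer size in sample frames through the hardware sample rate. The arithmetic below is a sketch; the 256-frame buffer size is assumed purely for illustration:

// Buffer duration (seconds) = frames per buffer / sample rate (Hz).
// For example, 256 sample frames at 44.1 kHz is about 5.8 ms per buffer.
double framesPerBuffer = 256.0;      // assumed buffer size, for illustration
double sampleRateHz = 44100.0;       // assumed hardware sample rate
NSTimeInterval bufferDuration = framesPerBuffer / sampleRateHz;  // ≈ 0.0058 s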

Setting Preferred Hardware Values

Set preferred hardware values before activating your audio session. A preferred value that you set does not take effect until the audio session is activated, so verify the selected values after activation. If you need to change a preferred value while your app is running, Apple recommends that you deactivate your audio session, set the new value, and then reactivate the session. Listing 4-1 shows how to set preferred hardware values and how to check the values actually in use.

Listing 4-1  Setting and querying hardware values

NSError *audioSessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Set the category before activating the session.
[session setCategory:AVAudioSessionCategoryPlayback error:&audioSessionError];
if (audioSessionError) {
    NSLog(@"Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
}

// Request a preferred I/O buffer duration of 5 ms.
NSTimeInterval bufferDuration = 0.005;
[session setPreferredIOBufferDuration:bufferDuration error:&audioSessionError];
if (audioSessionError) {
    NSLog(@"Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
}

// Request a preferred hardware sample rate of 44.1 kHz.
double sampleRate = 44100.0;
[session setPreferredSampleRate:sampleRate error:&audioSessionError];
if (audioSessionError) {
    NSLog(@"Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
}

// Register for route-change notifications before activating the session.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleRouteChange:)
                                             name:AVAudioSessionRouteChangeNotification
                                           object:session];

// Activate the session; the preferred values take effect now.
[session setActive:YES error:&audioSessionError];
if (audioSessionError) {
    NSLog(@"Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
}

// Query the values the hardware actually granted.
sampleRate = session.sampleRate;
bufferDuration = session.IOBufferDuration;
NSLog(@"Sample Rate: %0.0f Hz, I/O Buffer Duration: %f", sampleRate, bufferDuration);

Querying Hardware Characteristics

Hardware characteristics of an iOS device can change while your app is running, and can differ from device to device. For example, when you use the built-in microphone on an original iPhone, the recording sample rate is limited to 8 kHz; attaching a headset and using the headset microphone provides a higher sample rate. Newer iOS devices support higher hardware sample rates for the built-in microphone.

Your app’s audio session can tell you about many hardware characteristics of a device. These characteristics can change at runtime. For instance, input sample rate may change when a user plugs in a headset. See AVAudioSession Class Reference for a complete list of properties.

Before you specify preferred hardware characteristics, ensure that the audio session is inactive. After establishing your preferences, activate the session and then query it to determine the actual characteristics. This final step is important because in some cases, the system cannot provide what you ask for.

Two of the most useful audio session hardware properties are sampleRate and outputLatency. The sampleRate property contains the hardware sample rate of the device. The outputLatency property contains the playback latency of the device.
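For instance, a minimal sketch of reading these two properties from the shared session:

// Read the current hardware sample rate and playback latency.
AVAudioSession *session = [AVAudioSession sharedInstance];
double hardwareSampleRate = session.sampleRate;          // in hertz
NSTimeInterval playbackLatency = session.outputLatency;  // in seconds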

Specifying Preferred Hardware I/O Buffer Duration

Use the AVAudioSession class to specify a preferred hardware I/O buffer duration, as shown in Listing 4-2. Use similar code to set a preferred sample rate.

Listing 4-2  Specifying preferred I/O buffer duration using the AVAudioSession class

NSError *setPreferenceError = nil;
NSTimeInterval preferredBufferDuration = 0.005;
[[AVAudioSession sharedInstance]
            setPreferredIOBufferDuration: preferredBufferDuration
                                   error: &setPreferenceError];

After establishing a hardware preference, always ask the hardware for the actual value, because the system may not be able to provide what you ask for.
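For example, a minimal sketch of reading back the value actually granted after activation:

// The granted duration may differ from the preferred value you requested.
NSError *activationError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:YES error:&activationError];
NSTimeInterval grantedBufferDuration = session.IOBufferDuration;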

Obtaining and Using the Hardware Sample Rate

As part of setting up for recording audio, you obtain the current audio hardware sample rate and apply it to your audio data format. Listing 4-3 shows how. You typically place this code in the implementation file for a recording class. Use similar code to obtain other hardware properties, including the number of input and output channels.

Before you query the audio session for current hardware characteristics, ensure that the session is active.

Listing 4-3  Obtaining the current audio hardware sample rate using the AVAudioSession class

double sampleRate;
sampleRate = [[AVAudioSession sharedInstance] sampleRate];
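Similarly, a sketch of querying channel counts from the shared session; these are read-only properties of AVAudioSession:

// Query the number of input and output channels from the audio session.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSInteger inputChannels = session.inputNumberOfChannels;
NSInteger outputChannels = session.outputNumberOfChannels;
NSLog(@"Input channels: %ld, output channels: %ld",
      (long)inputChannels, (long)outputChannels);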

Running Your App in the Simulator

When you add audio session support to your app, you can run your app in the Simulator or on a device. However, the Simulator does not simulate audio session behavior and does not have access to the hardware features of a device. When running your app in the Simulator, you cannot:

- Invoke an interruption of your audio session
- Simulate plugging in or unplugging a headset, or other audio route changes

Because of the characteristics of the Simulator, you may want to conditionalize your code to allow partial testing in the Simulator.

One approach is to branch based on the return value of an API call. Ensure that you are checking and appropriately responding to the result codes from all of your audio session function calls; the result codes may indicate why your app works correctly on a device but fails in the Simulator.
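For example, a minimal sketch of branching on the result of an audio session call; the fallback behavior is an assumption about how your app might degrade gracefully:

// setActive:error: returns NO on failure; check rather than assume success.
NSError *activationError = nil;
BOOL success = [[AVAudioSession sharedInstance] setActive:YES error:&activationError];
if (!success) {
    // In the Simulator (or on a device error), fall back to a reduced code path.
    NSLog(@"Audio session activation failed: %@", activationError.localizedDescription);
}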

In addition to correctly using audio session result codes, you can employ preprocessor conditional statements to exclude certain code when compiling for the Simulator. Listing 4-4 shows how to do this.

Listing 4-4  Using preprocessor conditional statements

#include <TargetConditionals.h>   // defines TARGET_IPHONE_SIMULATOR

#if TARGET_IPHONE_SIMULATOR
#warning *** Simulator mode: audio session code works only on a device
    // Execute subset of code that works in the Simulator
#else
    // Execute device-only code as well as the other code
#endif