Ans: Solving 6S distorted audio - avoid bad audio format assumptions

Hi there,


I have the following distorted audio issue: I play a 48kHz AAC file using AVPlayer, and I experience distorted audio only on the iPhone 6S. So I'm trying to avoid bad assumptions about audio formats, with no luck so far, using the following code:



+ (void)setupAudioSpeaker
{
    NSError *audioSessionError = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // Check the BOOL return value; the NSError pointer is only valid on failure.
    if (![session setCategory:AVAudioSessionCategoryPlayback error:&audioSessionError]) {
        NSLog(@"AVAudioSession Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
    }

    NSTimeInterval bufferDuration = 0.005;
    if (![session setPreferredIOBufferDuration:bufferDuration error:&audioSessionError]) {
        NSLog(@"AVAudioSession Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
    }

    double sampleRate = 48000.0;
    if (![session setPreferredSampleRate:sampleRate error:&audioSessionError]) {
        NSLog(@"AVAudioSession Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
    }

    if (![session setActive:YES error:&audioSessionError]) {
        NSLog(@"AVAudioSession Error %ld, %@", (long)audioSessionError.code, audioSessionError.localizedDescription);
    }

    // Read back the actual values granted by the hardware.
    sampleRate = session.sampleRate;
    bufferDuration = session.IOBufferDuration;
    NSLog(@"Sample Rate:%0.0fHz I/O Buffer Duration:%f", sampleRate, bufferDuration);
}


Output:


Sample Rate:48000Hz I/O Buffer Duration:0.005333



Any help would be appreciated. Thanks in advance!

Julien



The internal speaker on the iPhone 6S models only supports a sample rate of 48kHz, while previous iPhone models supported a collection of sample rates.

Some developers are running into problems (generally classified as distorted or "bad" sounding audio) due to incorrect assumptions being made when the requested "preferred" sample rate ends up being different from the "current" actual hardware sample rate.

If you ignore these types of differences and, for example, set the client format to the hardware format expecting 44.1kHz when the actual sample rate is 48kHz, your application will suffer problems like audio distortion, with the further possibility of other failures.


Additionally, even if your application is specifying a client format of 44.1kHz, the render callback (in the case of AURemoteIO (kAudioUnitSubType_RemoteIO)) may call you for a varying number of frames when sample rate conversion is involved. Therefore, it is important that the application never assume it will always render a fixed number of audio frames.
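To illustrate the point above (this callback and its names are hypothetical, not from the original post): a RemoteIO render callback should size all of its work from the inNumberFrames value it is handed on each call, never from a precomputed constant.

```objc
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical render callback for an AURemoteIO instance. When sample rate
// conversion is in the chain, inNumberFrames can vary from one callback to
// the next -- never hard-code an expected frame count.
static OSStatus MyRenderCallback(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames, // may differ per call
                                 AudioBufferList            *ioData)
{
    // Produce exactly inNumberFrames frames. Here we write silence; a real
    // app would pull inNumberFrames frames from its source instead.
    // (Assumes non-interleaved Float32 buffers.)
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, inNumberFrames * sizeof(Float32));
    }
    return noErr;
}
```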

In the reported cases we've investigated, this specific issue was always related to developer code making bad assumptions about audio formats.

Audit your code if you are getting these types of reports from your users, especially if the internal speaker is mentioned.
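One way to audit for this, sketched below as an illustration (the format flags chosen here are assumptions, not a prescription): after activating the session, read back the actual sampleRate and build your client/stream format from that value rather than from the preferred rate you requested.

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Sketch: setPreferredSampleRate: is a request, not a guarantee. Derive the
// client format from the rate the hardware actually granted.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

[session setPreferredSampleRate:44100.0 error:&error]; // request only
[session setActive:YES error:&error];

double actualRate = session.sampleRate; // 48000 on the iPhone 6S speaker

AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = actualRate; // actual, never the preferred value
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
clientFormat.mChannelsPerFrame = 2;
clientFormat.mBitsPerChannel   = 32;
clientFormat.mBytesPerFrame    = sizeof(Float32); // per channel, non-interleaved
clientFormat.mBytesPerPacket   = sizeof(Float32);
clientFormat.mFramesPerPacket  = 1;
```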

Any chance a future iOS 9 update will fix this issue?


Thanks,

Julien

If you're using AVPlayer (or AVAudioPlayer, which is a totally different thing), you shouldn't have to do any of the AVAudioSession work above. Why set the hardware sample rate or the buffer duration when you're using a high-level object for playback? The AVFoundation framework takes care of the playback pipeline for you. All you should need to do is set the Playback category and activate the AVAudioSession. In fact, if you're using an AVPlayer you don't even need to do that: AVPlayer uses the per-process AVAudioSession singleton.
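A minimal sketch of that approach (the file URL is a placeholder): set the Playback category, activate the session, and let AVPlayer negotiate the hardware format internally.

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal playback setup: no preferred sample rate, no buffer-duration tweaks.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                       error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];

// Placeholder URL -- substitute your own asset.
NSURL *url = [NSURL fileURLWithPath:@"/path/to/audio.m4a"];
AVPlayer *player = [AVPlayer playerWithURL:url];
[player play];
```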


Sample code for playback can be found here:

https://developer.apple.com/library/ios/samplecode/AVFoundationSimplePlayer-iOS/Introduction/Intro.html
