How to obtain an AVAudioFormat for a canonical format?

I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode.

The player node's output format needs to be something that the next node can handle, and as far as I understand, most nodes can handle a canonical format.

The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports.

So the following:

  AVAudioEngine *engine = [[AVAudioEngine alloc] init];
  self.playerNode = [[AVAudioPlayerNode alloc] init];
  // Use the synthesizer voice's settings as the connection format.
  AVAudioFormat *format = [[AVAudioFormat alloc]
      initWithSettings:utterance.voice.audioFileSettings];
  [engine attachNode:self.playerNode];
  [engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

  Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer.

Since different platforms have different canonical formats, I imagine there should be some library way of obtaining this. Otherwise every developer would have to redefine it for each platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes.

I could not find any constant or function that can produce such a format, ASBD, or settings dictionary.

The smartest way I could think of, which does not work:

  AudioStreamBasicDescription toDesc;
  FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                     2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
  AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];
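If I read the CoreAudio header correctly, FillOutASBDForLPCM takes plain bools for the float/endian arguments rather than kAudioFormatFlag constants, and, as far as I can tell, float LPCM has to be 32-bit. So a corrected call would look more like the sketch below (Objective-C++ only, and whether the result counts as "canonical" is exactly what I am unsure about):

  // 32-bit native-endian float LPCM at the session's sample rate.
  AudioStreamBasicDescription toDesc;
  FillOutASBDForLPCM(toDesc,
                     [AVAudioSession sharedInstance].sampleRate,
                     2,      // channels per frame
                     32,     // valid bits per channel
                     32,     // total bits per channel
                     true,   // isFloat
                     false,  // isBigEndian (current devices are little-endian)
                     false); // isNonInterleaved
  AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];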

Even the iPhone example in the documentation linked above uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, both of which are deprecated.

So what is the correct way to do this?

In the first paragraph, I meant to write [AVSpeechSynthesizer writeUtterance:toBufferCallback:].
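For context, this is roughly how the buffer arrives; the utterance text is a placeholder and self.playerNode is the node from the snippet above:

  AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
  AVSpeechUtterance *utterance =
      [AVSpeechUtterance speechUtteranceWithString:@"Hello world"];

  [synthesizer writeUtterance:utterance
             toBufferCallback:^(AVAudioBuffer *buffer) {
    AVAudioPCMBuffer *pcmBuffer = (AVAudioPCMBuffer *)buffer;
    // A zero-length buffer appears to signal the end of synthesis.
    if (pcmBuffer.frameLength == 0) {
      return;
    }
    // pcmBuffer.format is whatever the synthesizer produces, which is the
    // format the mixer rejects in -connect:to:format: above.
    [self.playerNode scheduleBuffer:pcmBuffer completionHandler:nil];
  }];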

I found a workaround, though I am not sure it is the correct way. The following returns a format that mainMixerNode will accept:

[self.engine.mainMixerNode inputFormatForBus:0]
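For completeness, this is roughly how I wire things up with the workaround (still a sketch, and I am not sure it is the intended approach):

  AVAudioFormat *mixerFormat = [self.engine.mainMixerNode inputFormatForBus:0];

  [self.engine attachNode:self.playerNode];
  // Connect with the mixer's own input format instead of utterance.voice.audioFileSettings.
  [self.engine connect:self.playerNode
                    to:self.engine.mainMixerNode
                format:mixerFormat];
  // Buffers scheduled on the player node then have to be converted to mixerFormat
  // (e.g. with AVAudioConverter) before calling scheduleBuffer:.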

So to play through AVAudioEngine you have to use the same format returned by mainMixerNode.inputFormatForBus? If I have an audio buffer already constructed with the AVAudioPCMFormatInt16 format, can I not schedule it on AVAudioEngine?

So I tested converting the AVAudioPCMBuffer I have (which is AVAudioPCMFormatInt16) to a new AVAudioPCMFormatFloat32 buffer, and AVAudioEngine plays the float32 PCM buffer.
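Roughly what I did, in case it helps; int16Buffer and playerNode are my names, and this assumes the sample rate and channel count do not change (so the simple buffer-to-buffer conversion is enough) and that the player node is connected to the mixer with floatFormat:

  // Float32 version of the buffer's own format.
  AVAudioFormat *floatFormat =
      [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                       sampleRate:int16Buffer.format.sampleRate
                                         channels:int16Buffer.format.channelCount
                                      interleaved:NO];

  AVAudioConverter *converter =
      [[AVAudioConverter alloc] initFromFormat:int16Buffer.format toFormat:floatFormat];
  AVAudioPCMBuffer *floatBuffer =
      [[AVAudioPCMBuffer alloc] initWithPCMFormat:floatFormat
                                    frameCapacity:int16Buffer.frameLength];

  NSError *error = nil;
  if ([converter convertToBuffer:floatBuffer fromBuffer:int16Buffer error:&error]) {
    // The converted float32 buffer is what the engine actually plays.
    [self.playerNode scheduleBuffer:floatBuffer completionHandler:nil];
  }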

I assumed it was valid to pass in a PCM buffer in other formats (given that -connect:to:format: has a format parameter), but I can't seem to get it to play without using the float32 format. I think I'll open a new thread about this, since your question was related but different.
