AVAudioEngine offline render?

I'm looking for a way to do an offline render with AVAudioEngine. In Audio Toolbox terms, this means creating an AUGraph with AUGenericOutput at the end (rather than AURemoteIO or AUHAL) and then calling AudioUnitRender() on this unit to pull samples through the graph and get all the unit effects applied, rather than being connected to actual output hardware and calling AUGraphStart().
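For concreteness, the Audio Toolbox pattern I mean looks roughly like this (a compressed Swift sketch; error checking and the upstream source/effect nodes are elided, and the slice size and mono Float32 buffer layout are just placeholder choices):

    import AudioToolbox

    // Build a graph that terminates in AUGenericOutput instead of a
    // hardware I/O unit.
    var graph: AUGraph?
    NewAUGraph(&graph)

    var outputDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_GenericOutput,  // not AURemoteIO/AUHAL
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    var outputNode = AUNode()
    AUGraphAddNode(graph!, &outputDesc, &outputNode)
    // ... AUGraphAddNode/AUGraphConnectNodeInput for the source and effects ...

    AUGraphOpen(graph!)
    var outputUnit: AudioUnit?
    AUGraphNodeInfo(graph!, outputNode, nil, &outputUnit)
    AUGraphInitialize(graph!)  // note: no AUGraphStart(); we pull manually

    // Pull one slice through the graph; loop, advancing mSampleTime,
    // until the whole file has been rendered.
    let framesPerSlice: UInt32 = 4096
    let channelCount = 2
    var flags = AudioUnitRenderActionFlags()
    var timeStamp = AudioTimeStamp()
    timeStamp.mFlags = .sampleTimeValid
    timeStamp.mSampleTime = 0

    let bufferList = AudioBufferList.allocate(maximumBuffers: channelCount)
    for i in 0..<channelCount {
        bufferList[i].mNumberChannels = 1
        bufferList[i].mDataByteSize = framesPerSlice * 4  // Float32 samples
        bufferList[i].mData = malloc(Int(framesPerSlice) * 4)
    }
    AudioUnitRender(outputUnit!, &flags, &timeStamp, 0,
                    framesPerSlice, bufferList.unsafeMutablePointer)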


Looking at the AV Foundation audio API, I can't quite see a way to do it. The AVAudioEngine is effectively the graph, and there are AVAudioNodes to wrap the nodes within the graph, but AVAudioOutputNode (and its parent AVAudioIONode) don't expose any kind of render: method. AVAudioIONode exposes the underlying AudioUnit as a property, but it's read-only, so I assume this is just AURemoteIO or AUHAL as appropriate, and can't be used to insert an AUGenericOutput.


So… can this be done, or am I writing an enhancement request tonight?


Oh, since someone might ask: reason I need this is so that I can post-process some audio and then send it to a watch extension. Imagine, say, a podcast client that runs some cleanup filters (or maybe AUTimePitch to speed things up) on a downloaded file, and then sends the resulting file over to the watch extension, where it can be played on the watch without having the iPhone present.


Thanks!


—Chris

Please file that enhancement request if you haven't already. This is a very popular feature request for AVAudioEngine, and the more duplicate requests the better.

With access to the underlying AudioUnit, calling AudioUnitRender on it might do the trick, no?
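Something like this, maybe (untested Swift sketch; whether an AURemoteIO/AUHAL-backed output unit will actually honor a manual render call is exactly the open question here):

    import AVFoundation
    import AudioToolbox

    let engine = AVAudioEngine()
    // ... attach and connect nodes, then engine.start() ...

    // AVAudioIONode exposes its underlying AudioUnit (read-only).
    guard let ioUnit = engine.outputNode.audioUnit else {
        fatalError("output node has no underlying AudioUnit")
    }

    let frames: AVAudioFrameCount = 4096
    let format = engine.outputNode.outputFormat(forBus: 0)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else {
        fatalError("could not allocate buffer")
    }
    buffer.frameLength = frames

    var flags = AudioUnitRenderActionFlags()
    var timeStamp = AudioTimeStamp()
    timeStamp.mFlags = .sampleTimeValid
    timeStamp.mSampleTime = 0

    // Try to pull one slice from output bus 0; loop and advance
    // mSampleTime to pull more.
    let status = AudioUnitRender(ioUnit, &flags, &timeStamp, 0,
                                 frames, buffer.mutableAudioBufferList)
    print("AudioUnitRender returned \(status)")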

Major major bummer. I thought this was possible already, and I really need it. Don't make me use the C APIs again. 😟


Radar filed. I guess we'll have to wait until 10.12. *sigh*

I hate to suggest this, as it feels dirty: set the mixer's output volume to 0, and install a tap to save the data to memory or a file.


Unfortunately, the render will not be faster than real time.
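For reference, the tap workaround looks roughly like this (untested Swift sketch; assumes a simple player-to-main-mixer chain, and "source.m4a"/"processed.caf" are placeholder paths):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    let sourceFile = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/source.m4a"))

    let mixer = engine.mainMixerNode
    mixer.outputVolume = 0  // mute the hardware output; the tap still sees samples

    let tapFormat = mixer.outputFormat(forBus: 0)
    let outFile = try AVAudioFile(forWriting: URL(fileURLWithPath: "/path/to/processed.caf"),
                                  settings: tapFormat.settings)

    mixer.installTap(onBus: 0, bufferSize: 4096, format: tapFormat) { buffer, _ in
        // Called as buffers pass through during (real-time) playback.
        try? outFile.write(from: buffer)
    }

    try engine.start()
    player.scheduleFile(sourceFile, at: nil) {
        // Fires once the file has played through, i.e. after the full
        // real-time duration.
        mixer.removeTap(onBus: 0)
        engine.stop()
    }
    player.play()

Since the engine is still attached to real output hardware, the tap only receives buffers at playback speed, which is why this can't beat real time.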
