I'm looking for a way to do an offline render with AVAudioEngine. In Audio Toolbox terms, this means creating an AUGraph that ends in AUGenericOutput (rather than AURemoteIO or AUHAL), and then calling AudioUnitRender() on that unit to pull samples through the graph, with all of the units' effects applied, instead of connecting to actual output hardware and calling AUGraphStart().
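Just to make the Audio Toolbox version of this concrete, here's roughly what I mean (an untested Swift sketch; error checking, stream-format setup, and the upstream player/effect nodes are all omitted, and the buffer sizes are arbitrary):

    import AudioToolbox
    import Foundation

    // Build a graph that terminates in AUGenericOutput instead of AURemoteIO/AUHAL.
    var graph: AUGraph?
    NewAUGraph(&graph)

    var outputDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_GenericOutput,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    var outputNode = AUNode()
    AUGraphAddNode(graph!, &outputDesc, &outputNode)

    // ...add the upstream nodes (file player, cleanup effects, AUTimePitch, etc.)
    // and connect the last of them to input 0 of outputNode here...

    AUGraphOpen(graph!)
    var outputUnit: AudioUnit?
    AUGraphNodeInfo(graph!, outputNode, nil, &outputUnit)
    AUGraphInitialize(graph!)
    // Note: no AUGraphStart() -- we pull samples manually instead.

    let framesPerPull: UInt32 = 4096
    let channelCount = 2
    var timeStamp = AudioTimeStamp()
    timeStamp.mFlags = .sampleTimeValid
    timeStamp.mSampleTime = 0

    // Non-interleaved Float32 buffers, matching the canonical graph format.
    let bufferList = AudioBufferList.allocate(maximumBuffers: channelCount)
    for i in 0..<channelCount {
        let byteSize = framesPerPull * UInt32(MemoryLayout<Float32>.size)
        bufferList[i] = AudioBuffer(mNumberChannels: 1,
                                    mDataByteSize: byteSize,
                                    mData: malloc(Int(byteSize)))
    }

    // Each call pulls one buffer's worth of fully processed samples through
    // the graph; loop, writing the results to a file (e.g. ExtAudioFileWrite)
    // and advancing the timestamp, until the source runs dry.
    var flags = AudioUnitRenderActionFlags()
    let status = AudioUnitRender(outputUnit!, &flags, &timeStamp, 0,
                                 framesPerPull, bufferList.unsafeMutablePointer)
    timeStamp.mSampleTime += Float64(framesPerPull)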
Looking at the AV Foundation audio API, I can't quite see a way to do it. The AVAudioEngine is effectively the graph, and there are AVAudioNodes to wrap the nodes within the graph, but AVAudioOutputNode (and its parent, AVAudioIONode) don't expose any kind of render: method. AVAudioIONode does expose the underlying AudioUnit as a property, but it's read-only, so I assume it's just AURemoteIO or AUHAL as appropriate, and can't be swapped out for an AUGenericOutput.
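In other words, building the graph on the AV Foundation side is easy; it's the pull that I can't find. Something like this (Swift sketch, node choices arbitrary):

    import AVFoundation

    // Assembling the processing graph is straightforward...
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    // ...but the output end is fixed. The underlying unit is visible, read-only:
    if let ioUnit = engine.outputNode.audioUnit {
        print(ioUnit) // presumably AURemoteIO (iOS) or AUHAL (OS X)
    }

    // What I want, and can't find, is something like:
    //     engine.outputNode.render(into: buffer, frameCount: 4096)
    // i.e., a way to pull samples through the graph without starting hardware I/O.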
So… can this be done, or am I writing an enhancement request tonight?
Oh, since someone might ask: the reason I need this is so that I can post-process some audio and then send it to a watch extension. Imagine, say, a podcast client that runs some cleanup filters (or maybe AUTimePitch to speed things up) on a downloaded file, and then sends the resulting file over to the watch extension, so it can be played on the watch without the iPhone present.
Thanks!
—Chris