Analytics Audio Unit and Audio Unit Controlling Another

I am developing some generalized analytics functions (autocorrelation, etc.) to be performed on real-time audio. The analytics results will then be used to control particular audio processing functions (e.g., effects audio units). Does it make sense to define an audio unit type for performing analytics, i.e., an audio unit that doesn't actually modify the audio in-band, but only analyzes it, with the results used in real time to control the processing behavior of another audio unit?
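
For concreteness, here's a rough sketch of the kind of unit I mean: an AUv3 subclass whose render block pulls the input straight through to the output and only measures it. The class name and the RMS measurement are just placeholders for the real analytics (autocorrelation, etc.).

```swift
import AVFoundation

// Rough sketch, not a complete AU: a pass-through unit that analyzes the
// audio without modifying it.
class AnalyticsAudioUnit: AUAudioUnit {
    // Latest analysis result; a real implementation would publish this
    // without touching Objective-C state from the render thread.
    private(set) var latestRMS: Float = 0

    override var internalRenderBlock: AUInternalRenderBlock {
        return { [weak self] _, timestamp, frameCount, _, outputData, _, pullInputBlock in
            guard let pullInput = pullInputBlock else { return kAudioUnitErr_NoConnection }

            // Pull upstream audio directly into our output buffers:
            // in-band pass-through, no processing applied.
            var pullFlags = AudioUnitRenderActionFlags()
            let status = pullInput(&pullFlags, timestamp, frameCount, 0, outputData)
            guard status == noErr else { return status }

            // Measure the first channel in place (RMS as a stand-in for
            // whatever analytics actually run here).
            let buffers = UnsafeMutableAudioBufferListPointer(outputData)
            if let data = buffers.first?.mData {
                let samples = data.assumingMemoryBound(to: Float.self)
                var sum: Float = 0
                for i in 0..<Int(frameCount) { sum += samples[i] * samples[i] }
                self?.latestRMS = (sum / Float(max(frameCount, 1))).squareRoot()
            }
            return noErr
        }
    }
}
```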


If so, can one audio unit be used to control another? In particular, is there any well-defined design pattern or API that allows one audio unit to control another via control signals generated from analytics performed on real-time audio? Any insight into this question, or examples of code illustrating such a scenario, would be greatly appreciated. Thanks.

Sounds like a fun project. On iOS, I think the newer MIDI capabilities in the AUv3 API should enable something like this, but I'm still in the middle of putting together a prototype to test it out.


In your case, I think the host would have to know how and when to proxy MIDI control signals from one audio unit to another. Beyond "use MIDI", I don't think this sort of thing is very standardized. Depending on the details, you might be able to map your analytics parameters to MIDI CC outputs, or you might have to send blobs of SysEx data instead.
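
If it helps, here's a rough sketch of the proxying I mean, using the AUv3 hooks that do exist for this (`midiOutputEventBlock` on the emitting unit, `scheduleMIDIEventBlock` on the receiving one). The function names, the CC number, and the level-to-CC mapping are purely illustrative:

```swift
import AVFoundation

// Host-side glue: forward MIDI emitted by the analytics unit into the
// effect unit. Both are assumed to be already-instantiated AUAudioUnits
// with render resources allocated (scheduleMIDIEventBlock is only
// non-nil for units that accept MIDI input).
func wireControlPath(from analyticsUnit: AUAudioUnit, to effectUnit: AUAudioUnit) {
    guard let schedule = effectUnit.scheduleMIDIEventBlock else { return }

    // The analytics unit's render thread calls this block whenever it
    // has MIDI to emit; the bytes are passed straight through with their
    // original sample-time stamp so the control signal stays
    // sample-accurate relative to the audio.
    analyticsUnit.midiOutputEventBlock = { sampleTime, cable, length, bytes in
        schedule(sampleTime, cable, length, bytes)
        return noErr
    }
}

// AU-side: the analytics render thread could emit a CC derived from its
// analysis, e.g. CC 1 whose value tracks a measured level.
func emitLevelCC(via midiOut: AUMIDIOutputEventBlock,
                 at sampleTime: AUEventSampleTime,
                 level: Float) {
    let cc: [UInt8] = [0xB0, 0x01, UInt8(max(0, min(127, level * 127)))]
    _ = midiOut(sampleTime, 0, cc.count, cc)
}
```

Note that the emitting unit also has to advertise a MIDI output to the host by overriding `midiOutputNames`; without that, most hosts won't install a `midiOutputEventBlock` on it at all.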
