I am developing some generalized analytics functions (e.g., autocorrelation) to be performed on real-time audio. The analytics results will then be used to control particular audio processing functions (e.g., effects audio units). Does it make sense to define an audio unit type for performing analytics, i.e., an audio unit that doesn't actually process the audio in-band, but only performs analytics on the audio stream, with the results used in real time to control the processing behavior of another audio unit?
If so, can one audio unit be used to control another? In particular, is there a well-defined design pattern or API that allows one audio unit to control another via control signals generated as the result of analytics performed on real-time audio? Any insight into this question, or example code illustrating such a scenario, would be greatly appreciated. Thanks.
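For concreteness, here is a rough sketch of the kind of thing I have in mind, using `AVAudioEngine` with a stock `AVAudioUnitDelay` standing in for the controlled effect, and a simple RMS level measurement standing in for the real analytics. An input tap computes one analysis value per buffer and maps it onto a parameter of the downstream unit. I realize the tap callback runs off the render thread and is only control-rate (buffer granularity), not sample-accurate, which is part of why I'm asking whether a better-defined pattern exists:

```swift
import AVFoundation

let engine = AVAudioEngine()
let delay = AVAudioUnitDelay()  // stand-in for the "controlled" audio unit

// Build a simple graph: input -> delay effect -> main mixer.
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.attach(delay)
engine.connect(engine.inputNode, to: delay, format: inputFormat)
engine.connect(delay, to: engine.mainMixerNode, format: inputFormat)

// "Analytics" tap: runs off the render thread, once per buffer,
// so parameter updates are control-rate rather than sample-accurate.
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    guard frameCount > 0 else { return }

    // Placeholder analysis: RMS level of the buffer
    // (the real version would do autocorrelation, etc.).
    var sumOfSquares: Float = 0
    for i in 0..<frameCount {
        sumOfSquares += channel[i] * channel[i]
    }
    let rms = (sumOfSquares / Float(frameCount)).squareRoot()

    // Map the analysis result onto a parameter of the controlled unit;
    // the scaling here is arbitrary, chosen just for illustration.
    delay.wetDryMix = min(100, rms * 400)  // 0...100 percent
}

do {
    try engine.start()
} catch {
    print("engine failed to start: \(error)")
}
```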