I am working with analog medical (EEG) data. I need to record and scan through large files, roughly 20-30 minutes in length. Core Audio seems like a great fit for storing and playing these files; the catch is that I never want to send any sound to an audio device. Everything is rendered graphically as waveforms. None of the solutions I've considered seems satisfactory:
1. Setting the output node's AudioComponentDescription.componentSubType to kAudioUnitSubType_GenericOutput seems close to what I want, but since a generic output isn't tied to the timing of a hardware output device, it won't play files in real time at the same speed they were recorded.
2. Playing the file with the volume turned down, or with the mixer node disabled, is a possibility, but when I tried it the audio data I got back was zeroed out (logical, but unfortunate in my case).
3. Having an NSTimer callback repeatedly call ExtendedAudioFile.ExtAudioFileSeek is another possibility, but that seems very much not how ExtAudioFileSeek was intended to be used.
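For context, here is a sketch of the closest variant of option 3 I can get working: instead of seeking, a repeating timer pulls samples with ExtAudioFileRead at the recorded rate and hands them to the waveform view, with no audio device involved. The file path, sample rate, channel count, and tick interval are all assumptions for illustration:

```swift
import Foundation
import AudioToolbox

// Assumed path and format; a real reader would query the file's own format.
let url = URL(fileURLWithPath: "/tmp/recording.caf") as CFURL
var file: ExtAudioFileRef?
ExtAudioFileOpenURL(url, &file)

// Ask ExtAudioFile to convert to mono 32-bit float for easy plotting.
var clientFormat = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
    mChannelsPerFrame: 1, mBitsPerChannel: 32, mReserved: 0)
ExtAudioFileSetProperty(file!, kExtAudioFileProperty_ClientDataFormat,
                        UInt32(MemoryLayout<AudioStreamBasicDescription>.size),
                        &clientFormat)

// Every 0.1 s, read 0.1 s worth of frames, so playback tracks wall-clock time.
let framesPerTick: UInt32 = 4410
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    var samples = [Float](repeating: 0, count: Int(framesPerTick))
    var frameCount = framesPerTick
    samples.withUnsafeMutableBytes { raw in
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: 1,
                                  mDataByteSize: UInt32(raw.count),
                                  mData: raw.baseAddress))
        ExtAudioFileRead(file!, &frameCount, &bufferList)
    }
    // frameCount == 0 means end of file; otherwise hand
    // samples[0..<Int(frameCount)] to the waveform renderer.
}
RunLoop.current.run()
```

This avoids an output unit entirely, but the timer drifts relative to the audio clock over 20-30 minutes, which is exactly the timing concern from option 1, so I'd still prefer a proper Core Audio answer.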
Is there a simpler way to do this?