Speech Recognition not working with QuickTime iPhone mirroring

So I have a controller that uses AVFoundation (plus the Speech framework) in two ways: the first is an SFSpeechAudioBufferRecognitionRequest() to transcribe speech, and the second is an AVAudioRecorder object that basically drives an equalizer. My app works perfectly as expected, except while mirroring my iPhone through QuickTime on my MacBook Pro. The exact problem is that the SFSpeechAudioBufferRecognitionRequest() does not work while mirroring, but the AVAudioRecorder does. As a side note, while mirroring, the clock on my phone says "9:41" no matter what the actual time of day is.

Again, the app works while unplugged from my computer, and while plugged in (as long as I'm not mirroring through QuickTime). Another strange thing: while mirroring through QuickTime, with the same controller keyed/loaded in my application, if I try recording three times in a row it consistently works on the third attempt and every attempt after that. Of course, I need it to work on the first and second attempts too.

My question is: what does mirroring my iPhone through QuickTime change? Could it have something to do with my shared instance, since the AVAudioRecorder always works? Am I hopeless? All feedback is appreciated!
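For context, here is a minimal sketch of the kind of recognition setup I'm using. This is simplified and not my exact controller; the class and method names are just placeholders, and the AVAudioRecorder/equalizer side is left out.

```swift
import AVFoundation
import Speech

final class SpeechController {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func startRecording() throws {
        // Configure the shared audio session for recording.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: .duckOthers)
        try session.setActive(true, options: .notifyOthersOnDeactivation)

        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        self.request = request

        // Tap the mic input and feed audio buffers to the recognition request.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { [weak self] result, error in
            if let result = result {
                print("Transcription: \(result.bestTranscription.formattedString)")
            }
            if error != nil || (result?.isFinal ?? false) {
                self?.stopRecording()
            }
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task = nil
        request = nil
    }
}
```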