Hi
Has anyone found sample code or API documentation on how to access the audio stream coming from the microphone on Apple Watch, and how to control the microphone's gain?
Thanks in advance!
I'm in the same boat; it doesn't appear to be available in the frameworks. And looking through the device listings for AVAudioRecorder doesn't turn up any watch-type devices.
There also isn't any information on how to access heart rate data. Rather annoying, but I understand; docs take time to write.
There isn't direct access to a stream of audio from the microphone in watchOS. However, you can present a system interface to record audio to a file using:
- (void)presentAudioRecordingControllerWithOutputURL:(NSURL *)URL
                                              preset:(WKAudioRecordingPreset)preset
                                     maximumDuration:(NSTimeInterval)maximumDuration
                                         actionTitle:(nullable NSString *)actionTitle
                                          completion:(void (^)(BOOL didSave, NSError * __nullable error))completion;
Is this the same for watchOS 2? Or is it possible to access the audio stream there? I'm looking into the feasibility of having audio trigger a certain action in the watch app.
The API I mentioned is only available in watchOS 2. There is no access to the audio stream.
What should I send for the URL? I know it's the location to store the recorded output, but what should that location be? Is there a documents directory on watchOS 2?
I may have answered my own question. The following code seems to be working for me inside of my InterfaceController:
let path = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
let url = NSURL(fileURLWithPath: path.stringByAppendingPathComponent("dictation.wav"))
self.presentAudioRecordingControllerWithOutputURL(url, preset: WKAudioRecordingPreset.NarrowBandSpeech, maximumDuration: 30, actionTitle: "Save") { (didSave, error) -> Void in
    if let error = error {
        print("error: \(error)")
        return
    }
    if didSave {
        print("saved!")
    }
}
Does this mean there's no way to provide a custom UI while the user is speaking? I'm adding voice dictation that parses natural language into data, and I'd love to be able to create a custom UI (or overlay something on top of the AudioRecordingController). Can I customize anything there, or should I be looking at a different API?
You can't provide a custom UI. Once the user is finished recording, the API will pass you a URL to an audio file.
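Once you have that URL back, one option is to hand the file to the system media player sheet for playback. Here's a minimal sketch under the same watchOS 2 APIs; the controller class and method names are hypothetical, and the sheet UI, like the recording UI, is system-provided:

```swift
import WatchKit
import Foundation

class PlaybackInterfaceController: WKInterfaceController {
    // Present the system media player for the file that the audio
    // recording controller saved; the autoplay option starts playback
    // as soon as the sheet appears.
    func playRecordingAtURL(url: NSURL) {
        let options = [WKMediaPlayerControllerOptionsAutoplayKey: true]
        presentMediaPlayerControllerWithURL(url, options: options) { didPlayToEnd, endTime, error in
            if let error = error {
                print("playback error: \(error)")
            }
        }
    }
}
```

Since the recording controller gives you a plain file URL, you could also upload the file or ship it to the phone with WCSession file transfer instead of playing it back.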