getUserMedia

I am using the newly implemented getUserMedia and MediaStream APIs and want to record the captured audio stream.

I can capture the stream into blobs, but the data inside the blobs is either silence or in some format that I don't know.


I have tried processing it as though it were a WAV file, or as raw PCM, but I haven't had much luck. The best I can get is silence of the same length as the recording.

A hint as to what format the raw data inside the blob is in would be really helpful. Some sample code on how to return the captured audio in a playable format to an HTML5 audio player on the same page would be out of sight.
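
For reference, the playback side I have in mind is nothing fancier than handing a blob to an <audio> element, roughly like this (just a sketch; the element selector and the 'audio/wav' type are placeholders, and the hard part is producing a blob in a format Safari will actually play):

var audioElement = document.querySelector('audio');

function playBlob(blob) {
  // Point the <audio> element at an object URL for the blob and start playback
  var url = URL.createObjectURL(blob);
  audioElement.src = url;
  audioElement.play();
}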


Can anyone help with this?

I should say this is on Safari Version 11.0 (13604.1.38.1.6) on the High Sierra 10.13 beta.


On Mobile Safari on iOS 11.0 I am having less luck. I cannot get it to play back even silence.


J

Well, let me answer my own question. The issue was not what I thought it was. The data coming back was PCM, and it was no problem to convert it to WAV. The real issue was that there was no sound data in there. This is because the audio device that was auto-selected was the first device on the machine (not the "selected" audio input). In my case it was a Mac mini, and though it shows up as an audio input, there is no input connected to it. So it was capturing silence.
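
For anyone else going down this road, the PCM-to-WAV step I ended up with looks roughly like the sketch below. It assumes mono Float32 samples (as you get from getChannelData) and a known sample rate; my exact version differs in detail but the structure is the same:

// Rough sketch: turn an array of Float32 PCM samples into a 16-bit mono WAV blob.
function encodeWav(samples, sampleRate) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);

  function writeString(offset, str) {
    for (var i = 0; i < str.length; i++) {
      view.setUint8(offset + i, str.charCodeAt(i));
    }
  }

  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true);   // overall size minus RIFF header
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                        // fmt chunk size
  view.setUint16(20, 1, true);                         // PCM format
  view.setUint16(22, 1, true);                         // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true);            // byte rate
  view.setUint16(32, 2, true);                         // block align
  view.setUint16(34, 16, true);                        // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true);

  // Clamp each float sample to [-1, 1] and write it as signed 16-bit PCM
  for (var i = 0, offset = 44; i < samples.length; i++, offset += 2) {
    var s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }

  return new Blob([view], { type: 'audio/wav' });
}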


I could select another audio device and then capture and record the audio. But (I think) after I updated to the latest High Sierra beta, it seems to only ever capture from the first input device.
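
Before the update, explicitly asking for a particular input (instead of whatever Safari decided was first) did the trick. Roughly like this (a sketch; device labels may be empty until microphone permission has been granted, and I can't say whether every Safari build honours the deviceId constraint):

navigator.mediaDevices.enumerateDevices().then(function (devices) {
  // Keep only the audio inputs and log them so you can see what Safari thinks exists
  var inputs = devices.filter(function (d) { return d.kind === 'audioinput'; });
  inputs.forEach(function (d) { console.log(d.deviceId, d.label); });

  // Ask for a specific input rather than letting the browser pick the first one
  var chosen = inputs[inputs.length - 1];
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: chosen.deviceId } }
  });
}).then(function (stream) {
  // hand the stream to the recording/conversion code from above
});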


Honestly the whole thing is quite delicate, and it could be a combination of seemingly unrelated factors that are breaking it. I am able to capture the audio off iOS 11 Mobile Safari OK, but it is also quite fragile, and I may simply have been lucky to stumble upon the correct bag of settings and sequence.

Not trying to interrupt this conversation with yourself, but could you post your code?


I'm also receiving no audio. I realized this is because onaudioprocess is never triggered... Seems you got this part working, so your help would be appreciated.


To help with the discussion, here is my code (working on Android and in Chrome on Windows, but not on my iPhone):

var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function beginRecording(stream) {
  // Feed the microphone stream into a ScriptProcessorNode so the raw samples can be read
  var microphone = context.createMediaStreamSource(stream);
  var processor = context.createScriptProcessor(0, 1, 1); // 0 lets the browser pick a buffer size
  processor.onaudioprocess = function (event) {
         ...
  };
  microphone.connect(processor);
  processor.connect(context.destination);
}

navigator.getUserMedia({audio: true}, function (stream) {
   beginRecording(stream);
}, function (error) {
   ...
});
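
In case it matters, the handler body I elided isn't doing anything clever; a typical version that just collects the raw samples looks something like this (a sketch, not my exact code):

var recordedChunks = [];

processor.onaudioprocess = function (event) {
  // Copy the samples out, because the browser reuses the underlying buffer
  var input = event.inputBuffer.getChannelData(0);
  recordedChunks.push(new Float32Array(input));
};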


BTW, I hope everything I did is according to this forum's guidelines, this is my first post here. 🙂

It took me a good few days to work out that

new AudioContext()

has to be called from a user input handler, e.g. a button click. It also needs to be called before getUserMedia, because getUserMedia returns a Promise and every piece of code that runs from then on is treated as automatically scripted, which results in the context remaining suspended.
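
In practice the ordering that works for me looks like this (a sketch; the #record button id is just a placeholder, and beginRecording is the function from the code posted above):

var AudioContext = window.AudioContext || window.webkitAudioContext;
var context;

document.querySelector('#record').addEventListener('click', function () {
  // Create the context synchronously inside the user gesture, before any
  // Promise-based calls, otherwise Safari leaves it suspended
  context = new AudioContext();

  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
      beginRecording(stream);
    })
    .catch(function (error) {
      console.error(error);
    });
});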


I have a working demo here: https://github.com/danielstorey/webrtc-audio-recording which works on iPhone.


I hope that helps you guys.
