Live audio via socket.io 1.0 - html

From the socket.io website:
Binary streaming
Starting in 1.0, it's possible to send any blob back and forth: image, audio, video.
I'm now wondering if this could be the solution for something I've been trying to achieve recently.
I'm looking for a way to broadcast a live audio stream from a source (A, i.e., mic input) to all clients connected to my website. Is something like this possible? I've been messing with WebRTC examples (https://www.webrtc-experiment.com/), but I haven't been able to reach that goal for more than a few connected clients.
My idea is to take getUserMedia or any other audio source (PCM, whatever) on side A, chop it into chunks, and deliver those to the clients to be played, for example, by an HTML5 audio element or anything else. I need the stream to be as close to real time as possible; SHOUTcast/Icecast services weren't fast enough (indeed, they aren't a solution to my problem, even though they're meant to be used this way), and I don't really care about audio quality. Cross-platform compatibility would be awesome.
Is something like that possible, using socket.io as the way to deliver the data to clients?
I would be very grateful for any reference, hint or source that could help me achieve this.
Thanks a lot.

This example shows you how to use MediaRecorder to upload audio and then forward it using socket.io. This code will only broadcast after you've called mediaRecorder.stop(). You can choose to broadcast inside ondataavailable instead. If you do that, you might want to pass a timeslice to mediaRecorder.start() so that it doesn't trigger ondataavailable too often.
This solution isn't truly live, but I think it will help people who come back and find this question.
Client Code
// socket is assumed to be an already-connected socket.io client
var constraints = { audio: true };
navigator.mediaDevices.getUserMedia(constraints).then(function(mediaStream) {
    var mediaRecorder = new MediaRecorder(mediaStream);
    mediaRecorder.onstart = function(e) {
        this.chunks = [];
    };
    mediaRecorder.ondataavailable = function(e) {
        this.chunks.push(e.data);
    };
    mediaRecorder.onstop = function(e) {
        var blob = new Blob(this.chunks, { 'type' : 'audio/ogg; codecs=opus' });
        socket.emit('radio', blob);
    };
    // Start recording
    mediaRecorder.start();
    // Stop recording after 5 seconds and broadcast it to server
    setTimeout(function() {
        mediaRecorder.stop();
    }, 5000);
});

// When the client receives a voice message it will play the sound
socket.on('voice', function(arrayBuffer) {
    var blob = new Blob([arrayBuffer], { 'type' : 'audio/ogg; codecs=opus' });
    var audio = document.createElement('audio');
    audio.src = window.URL.createObjectURL(blob);
    audio.play();
});
Server Code
socket.on('radio', function(blob) {
    // can choose to broadcast it to whoever you want
    socket.broadcast.emit('voice', blob);
});
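As mentioned above, you can instead emit from inside ondataavailable and pass a timeslice to mediaRecorder.start(). A minimal sketch of that variant, assuming the same socket and a 1000 ms timeslice (tune it as needed):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(mediaStream) {
    var mediaRecorder = new MediaRecorder(mediaStream);
    // Emit each chunk as soon as it is available instead of waiting for stop()
    mediaRecorder.ondataavailable = function(e) {
        socket.emit('radio', e.data);
    };
    // The timeslice makes ondataavailable fire roughly once per second
    mediaRecorder.start(1000);
});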

In the client code you can use setInterval() instead of setTimeout(), and repeatedly call mediaRecorder.stop() followed by mediaRecorder.start(), so that a blob is emitted continuously every 5 seconds.
setInterval(function() {
    mediaRecorder.stop();
    mediaRecorder.start();
}, 5000);
Client Code
var constraints = { audio: true };
navigator.mediaDevices.getUserMedia(constraints).then(function(mediaStream) {
    var mediaRecorder = new MediaRecorder(mediaStream);
    mediaRecorder.onstart = function(e) {
        this.chunks = [];
    };
    mediaRecorder.ondataavailable = function(e) {
        this.chunks.push(e.data);
    };
    mediaRecorder.onstop = function(e) {
        var blob = new Blob(this.chunks, { 'type' : 'audio/ogg; codecs=opus' });
        socket.emit('radio', blob);
    };
    // Start recording
    mediaRecorder.start();
    // Every 5 seconds, stop (which emits the blob) and immediately restart
    setInterval(function() {
        mediaRecorder.stop();
        mediaRecorder.start();
    }, 5000);
});

// When the client receives a voice message it will play the sound
socket.on('voice', function(arrayBuffer) {
    var blob = new Blob([arrayBuffer], { 'type' : 'audio/ogg; codecs=opus' });
    var audio = document.createElement('audio');
    audio.src = window.URL.createObjectURL(blob);
    audio.play();
});
Server Code
socket.on('radio', function(blob) {
    // can choose to broadcast it to whoever you want
    socket.broadcast.emit('voice', blob);
});

Related

Changing the pitch of the sound of an HTML5 audio node

I would like to change the pitch of a sound file using the HTML5 Audio node.
I had a suggestion to use the setVelocity property, and I have found that this is a function of the PannerNode.
I have tried changing the call parameters, but with no discernible result.
Does anyone have any ideas, please?
I have the following code:
var gAudioContext = new AudioContext();
var gAudioBuffer;

var playAudioFile = function (gAudioBuffer) {
    // assumed setup: the source and gain nodes used below
    var source = gAudioContext.createBufferSource();
    source.buffer = gAudioBuffer;
    var gainNode = gAudioContext.createGain();
    var panner = gAudioContext.createPanner();
    gAudioContext.listener.dopplerFactor = 1000;
    source.connect(panner);
    panner.setVelocity(0, 2000, 0);
    panner.connect(gainNode);
    gainNode.connect(gAudioContext.destination);
    gainNode.gain.value = 0.5;
    source.start(0); // Play sound
};

var loadAudioFile = (function (url) {
    var request = new XMLHttpRequest();
    request.open('get', 'Sounds/English.wav', true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        gAudioContext.decodeAudioData(request.response,
            function (incomingBuffer) {
                playAudioFile(incomingBuffer);
            }
        );
    };
    request.send();
}());
I'm trying to achieve something similar and I also failed at using PannerNode.setVelocity().
One working technique I found uses the following package (an example is included in the README): https://www.npmjs.com/package/soundbank-pitch-shift
It is also possible with a biquad filter, which is available natively. See an example here: http://codepen.io/qur2/pen/emVQwW
I didn't find a simple sound to make the effect obvious (CORS restrictions with AJAX loading).
You can read more at http://chimera.labs.oreilly.com/books/1234000001552/ch04.html and https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode.
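For reference, a minimal sketch of wiring up the native BiquadFilterNode mentioned above (the filter type and frequency here are assumptions chosen to illustrate the node graph, not values from the linked demo):

var ctx = new AudioContext();

function playFiltered(buffer) {
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    // Route the source through a biquad filter before the output
    var biquad = ctx.createBiquadFilter();
    biquad.type = 'bandpass';      // assumed filter type
    biquad.frequency.value = 440;  // assumed center frequency in Hz
    source.connect(biquad);
    biquad.connect(ctx.destination);
    source.start(0);
}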
Hope this helps!

WebRTC audio heard without <audio> element (RTCMultiConnection)

Audio is being heard even though no audio element seems to be inserted into the DOM.
Scenario:
Create PeerConnection without streams
Add a stream but disable the code that adds media elements (audio, video) to the DOM
Issue:
After the stream gets across, audio can be heard from headphones (or speakers).
What should happen:
Since I'm not attaching anything to the DOM, I expect no audio to be heard.
Code for replicating the scenario
// <body>
// <script src="https://cdn.webrtc-experiment.com/RTCMultiConnection.js"></script>
// <button id="start">Start!</button>
// </body>
$('#start').click(function() {
    var NO_MEDIA_SESSION = { video: false, audio: false, oneway: true };

    var caller = new RTCMultiConnection('lets-try');
    caller.session = NO_MEDIA_SESSION;
    caller.dontAttachStream = true;
    caller.onstream = function() { console.log("Got stream but not attaching"); };

    var receiver = new RTCMultiConnection('lets-try');
    receiver.session = NO_MEDIA_SESSION;
    receiver.dontAttachStream = true;
    receiver.onstream = function() { console.log("Got stream but not attaching"); };

    caller.open();
    receiver.connect();
    receiver.onconnected = function() {
        console.log("Connected!");
        caller.addStream({ audio: true });
    };
});
I'm interested in how it is possible to hear a MediaStream without an audio DOM element.
If any RTCMultiConnection specialists are answering, could you point me to how to avoid the audio stream being made audible? (I want to get the stream and attach it later myself.)
RTCMultiConnection creates a media element on the fly to make sure the onstream event is fired only when the media stream has actually started flowing.
connection.onstream = function(event) {
    event.mediaElement.pause(); // or volume=0
    // or
    event.mediaElement = null;
    // or
    delete event.mediaElement;
};
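If you want to silence the auto-created element and attach the stream yourself later, a rough sketch (this assumes the onstream event also exposes the raw MediaStream as event.stream):

connection.onstream = function(event) {
    event.mediaElement.pause();        // silence the auto-created element
    window.savedStream = event.stream; // keep the raw MediaStream for later
};

// later, attach it to an element yourself:
var audio = document.createElement('audio');
audio.srcObject = window.savedStream;
document.body.appendChild(audio);
audio.play();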
Updated:
Use the following snippet:
var connection = new RTCMultiConnection();
connection.session = {
    data: true
};

btnOpenRoom.onclick = function() {
    connection.open('roomid');
};

btnJoinRoom.onclick = function() {
    connection.join('roomid');
};

btnAddAudioStream.onclick = function() {
    connection.addStream({
        audio: true
    });
};

btnAddAudioVideoStream.onclick = function() {
    connection.addStream({
        audio: true,
        video: true
    });
};

WebRTC SDP object (local description) by Firefox does not contain DataChannel info unlike Chrome?

I'm testing the WebRTC connection procedure step by step, for my own understanding.
I wrote a test site for serverless WebRTC:
http://webrtcdevelop.appspot.com/
In fact, a STUN server by Google is used, but no signalling server is deployed.
The Session Description Protocol (SDP) is exchanged manually, by hand; that is, copy-pasted between browser windows.
So far, here is the result I've got with the code:
'use strict';
var peerCon;
var ch;

$(document).ready(function() {
    init();
    $('#remotebtn2').attr("disabled", "");
    $('#localbtn').click(function() {
        offerCreate();
        $('#localbtn').attr("disabled", "");
        $('#remotebtn').attr("disabled", "");
        $('#remotebtn2').removeAttr("disabled");
    });
    $('#remotebtn').click(function() {
        answerCreate(
            new RTCSessionDescription(JSON.parse($('#remote').val())));
        $('#localbtn').attr("disabled", "");
        $('#remotebtn').attr("disabled", "");
    });
    $('#remotebtn2').click(function() {
        answerGet(
            new RTCSessionDescription(JSON.parse($('#remote').val())));
        $('#remotebtn2').attr("disabled", "");
    });
    $('#msgbtn').click(function() {
        msgSend($('#msg').val());
    });
});

var init = function() {
    //offer------
    peerCon = new RTCPeerConnection(
        {
            "iceServers": [{
                "url": "stun:stun.l.google.com:19302"
            }]
        },
        {
            "optional": []
        });
    var localDescriptionOut = function() {
        console.log(JSON.stringify(peerCon.localDescription));
        $('#local').text(JSON.stringify(peerCon.localDescription));
    };
    peerCon.onicecandidate = function(e) {
        console.log(e);
        if (e.candidate === null) {
            console.log('candidate empty!');
            localDescriptionOut();
        }
    };
    ch = peerCon.createDataChannel('ch1', {
        reliable: true
    });
    ch.onopen = function() {
        dlog('ch.onopen');
    };
    ch.onmessage = function(e) {
        dlog(e.data);
    };
    ch.onclose = function(e) {
        dlog('closed');
    };
    ch.onerror = function(e) {
        dlog('error');
    };
};

var msgSend = function(msg) {
    ch.send(msg);
};

var offerCreate = function() {
    peerCon.createOffer(function(description) {
        peerCon.setLocalDescription(description, function() {
            // wait for completion of peerCon.onicecandidate
        }, error);
    }, error);
};

var answerCreate = function(description) {
    peerCon.setRemoteDescription(description, function() {
        peerCon.createAnswer(function(description) {
            peerCon.setLocalDescription(description, function() {
                // wait for completion of peerCon.onicecandidate
            }, error);
        }, error);
    }, error);
};

var answerGet = function(description) {
    peerCon.setRemoteDescription(description, function() {
        console.log(JSON.stringify(description));
        dlog('local-remote-setDescriptions complete!');
    }, error);
};

var error = function(e) {
    console.log(e);
};

var dlog = function(msg) {
    var content = $('#onmsg').html();
    $('#onmsg').html(content + msg + '<br>');
};
Firefox (26.0):
RtpDataChannels
The onopen event is fired successfully, but send fails.
Chrome (31.0):
RtpDataChannels
The onopen event is fired successfully, and send also succeeds.
An SDP object generated by Chrome is as follows:
{"sdp":".................. cname:L5dftYw3P3clhLve
\r\
na=ssrc:2410443476 msid:ch1 ch1
\r\
na=ssrc:2410443476 mslabel:ch1
\r\
na=ssrc:2410443476 label:ch1
\r\n","type":"offer"}
where the information for ch1, defined in the code:
ch = peerCon.createDataChannel('ch1', {
    reliable: false
});
is bundled properly.
However, the SDP object (local description) generated by Firefox does not contain the DataChannel information at all; moreover, the SDP is much shorter than Chrome's, with less information bundled.
What am I missing?
I guess the reason send fails on the DataChannel is this lack of information in Firefox's SDP object.
How could I fix this?
I investigated the sources of various working libraries, such as PeerJS, EasyRTC, and SimpleWebRTC, but couldn't figure out the reason.
Any suggestion or recommended reading is appreciated.
[not an answer, yet]
I'll leave this here just to try to help you. I'm not much of a WebRTC developer, but I'm curious; this is quite new and very interesting to me.
Have you seen this?
DataChannels
Supported in Firefox today, you can use DataChannels to send peer-to-peer
information during an audio/video call. There is
currently a bug that requires developers to set up some sort of
audio/video stream (even a “fake” one) in order to initiate a
DataChannel, but we will soon be fixing that.
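If that bug is what you're hitting, here is a hedged sketch of the workaround it describes: set up a fake audio stream before negotiating so the DataChannel can be initiated (the fake constraint is Firefox-specific; peerCon and offerCreate come from your code):

// Firefox-only workaround sketch: attach a fake audio stream before
// creating the offer; 'fake: true' asks Firefox for a synthetic stream.
navigator.mozGetUserMedia({ audio: true, fake: true }, function(stream) {
    peerCon.addStream(stream); // give the connection an a/v stream
    offerCreate();             // then run the normal offer flow
}, function(e) {
    console.log(e);
});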
Also, I found this bug report, which seems to be related.
One last point: your version of adapter.js is different from the one served on code.google.com, and by a lot; the webrtcDetectedVersion part is missing from yours.
https://code.google.com/p/webrtc/source/browse/stable/samples/js/base/adapter.js
Try that, and come back to me with good news.
After the last update, I get this line in the console after clicking 'get answer':
Object { name="INVALID_STATE", message="Cannot set remote offer in state HAVE_LOCAL_OFFER", exposedProps={...}, more...}
but this might be useless info, since I copy-pasted the same browser's offer into the answer box...
...which made me notice that you are using jQuery v1.7.1.
Try updating jQuery (before I kill a kitten), and in the meantime make sure you are using up-to-date versions of all your scripts.
Whoops: after a quick read of https://developer.mozilla.org/en-US/docs/Web/Guide/API/WebRTC/WebRTC_basics and a comparison with your JavaScript, I see no shim.
Shims
As you can imagine, with such an early API, you must use the browser
prefixes and shim it to a common variable.
var PeerConnection = window.mozRTCPeerConnection || window.webkitRTCPeerConnection;
var IceCandidate = window.mozRTCIceCandidate || window.RTCIceCandidate;
var SessionDescription = window.mozRTCSessionDescription || window.RTCSessionDescription;
navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
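With that shim in place, the init() from the question could construct the connection portably; a sketch reusing the STUN config from your code:

peerCon = new PeerConnection(
    {
        "iceServers": [{
            "url": "stun:stun.l.google.com:19302"
        }]
    },
    {
        "optional": []
    });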

Choppy/inaudible playback with chunked audio through Web Audio API

I brought this up in my last post, but since it was off topic from the original question I'm posting it separately. I'm having trouble getting my transmitted audio to play back through Web Audio the same way it would sound in a media player. I have tried two different transmission protocols, binaryjs and socket.io, and neither makes a difference when playing through Web Audio. To rule out transportation of the audio data as the issue, I created an example that sends the data back to the server after it's received from the client and dumps the result to stdout. Piping that into VLC results in the listening experience you would expect.
To hear the result through VLC, which sounds the way it should, run the example at https://github.com/grkblood13/web-audio-stream/tree/master/vlc using the following command:
$ node webaudio_vlc_svr.js | vlc -
For whatever reason, though, when I try to play this same audio data through Web Audio, it fails miserably: the result is random noise with large gaps of silence in between.
What is wrong with the following code that makes the playback sound so bad?
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if (audioStack.length > 10 && init == 0) { init++; playBuffer(); }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function playBuffer() {
    var buffer = audioStack.shift();
    setTimeout(function() {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(context.currentTime);
        delayTime = source.buffer.duration * 1000; // Make the next buffer wait the length of the last buffer before being played
        playBuffer();
    }, delayTime);
}
Full source: https://github.com/grkblood13/web-audio-stream/tree/master/binaryjs
You really can't just call source.start(audioContext.currentTime) like that.
setTimeout() has long and imprecise latency - other main-thread stuff can be going on, so your setTimeout() calls can be delayed by milliseconds, even tens of milliseconds (by garbage collection, JS execution, layout...). Your code is trying to immediately play audio - which needs to be started within about 0.02 ms accuracy to avoid glitching - on a timer that has tens of milliseconds of imprecision.
The whole point of the web audio system is that the audio scheduler works in a separate high-priority thread, and you can pre-schedule audio (starts, stops, and audioparam changes) at very high accuracy. You should rewrite your system to:
1) track when the first block was scheduled in audiocontext time - and DON'T schedule the first block immediately, give some latency so your network can hopefully keep up.
2) schedule each successive block received in the future based on its "next block" timing.
e.g. (note I haven't tested this code, this is off the top of my head):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init != 0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // Make the next buffer wait the length of the last buffer before being played
    }
}

How to get media stream object from HTML5 video element in JavaScript

Hi all,
I'm working on peer-to-peer communication using WebRTC. We have a MediaStream object from getUserMedia, which is given as the input stream to the peer connection. Here I need a video stream from a video file on the local drive, selected by the user and playing in an HTML5 video element.
Is it possible to create a MediaStream object from the video tag?
thanks,
suri
For now you can't get a media stream from a video tag, but it should be possible in the future, as explained on MDN:
MediaStream objects have a single input and a single output. A MediaStream object generated by getUserMedia() is called local, and has as its source input one of the user's cameras or microphones. A non-local MediaStream may represent a media element, like <video> or <audio>, a stream originating over the network and obtained via the WebRTC PeerConnection API, or a stream created using the Web Audio API MediaStreamAudioSourceNode.
But you can use the Media Source Extensions (MSE) API to do what you want: you have to put the local file into a stream and append it to a MediaSource object. You can learn more about MSE here: http://www.w3.org/TR/media-source/
And you can find a demo and the source of that method here.
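A rough sketch of that MSE approach (the codec string and the fileInput element are assumptions for illustration; the codec string must match the actual file):

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
    // The MIME/codec string is an assumption; it must match the file
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
    var reader = new FileReader();
    reader.onload = function(e) {
        sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
    };
    // fileInput is a hypothetical <input type="file"> holding the local file
    reader.readAsArrayBuffer(fileInput.files[0]);
});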
2021 update: it is now possible using the MediaRecorder interface: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
Example from the same page:
// record, stop, soundClips and visualize come from the surrounding MDN example page
if (navigator.mediaDevices) {
    console.log('getUserMedia supported.');
    var constraints = { audio: true };
    var chunks = [];

    navigator.mediaDevices.getUserMedia(constraints)
        .then(function(stream) {
            var mediaRecorder = new MediaRecorder(stream);
            visualize(stream);

            record.onclick = function() {
                mediaRecorder.start();
                console.log(mediaRecorder.state);
                console.log("recorder started");
                record.style.background = "red";
                record.style.color = "black";
            }

            stop.onclick = function() {
                mediaRecorder.stop();
                console.log(mediaRecorder.state);
                console.log("recorder stopped");
                record.style.background = "";
                record.style.color = "";
            }

            mediaRecorder.onstop = function(e) {
                console.log("data available after MediaRecorder.stop() called.");
                var clipName = prompt('Enter a name for your sound clip');
                var clipContainer = document.createElement('article');
                var clipLabel = document.createElement('p');
                var audio = document.createElement('audio');
                var deleteButton = document.createElement('button');

                clipContainer.classList.add('clip');
                audio.setAttribute('controls', '');
                deleteButton.innerHTML = "Delete";
                clipLabel.innerHTML = clipName;

                clipContainer.appendChild(audio);
                clipContainer.appendChild(clipLabel);
                clipContainer.appendChild(deleteButton);
                soundClips.appendChild(clipContainer);

                audio.controls = true;
                var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });
                chunks = [];
                var audioURL = URL.createObjectURL(blob);
                audio.src = audioURL;
                console.log("recorder stopped");

                deleteButton.onclick = function(e) {
                    var evtTgt = e.target;
                    evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
                }
            }

            mediaRecorder.ondataavailable = function(e) {
                chunks.push(e.data);
            }
        })
        .catch(function(err) {
            console.log('The following error occurred: ' + err);
        });
}
MDN also has a detailed mini tutorial: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API/Recording_a_media_element
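For the original question of getting a stream from a <video> element, the linked tutorial builds on captureStream(); a minimal sketch, assuming a playing video element (Firefox historically prefixed it as mozCaptureStream):

var video = document.querySelector('video');
// captureStream() returns a MediaStream of whatever the element is playing
var stream = video.captureStream ? video.captureStream()
                                 : video.mozCaptureStream();
// the stream can then be recorded or attached to an RTCPeerConnection
var recorder = new MediaRecorder(stream);
recorder.start();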