I want to capture my webcam and broadcast the stream live to other clients.
I can easily show my webcam in a video tag on the same page with something like this:
function initialize() {
  video = $("#v")[0];
  width = video.width;
  height = video.height;
  var canvas = $("#c")[0];
  context = canvas.getContext("2d");
  // Legacy, vendor-prefixed getUserMedia
  navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
  navigator.getUserMedia({video: true}, startStream, function () {});
}

function startStream(stream) {
  video.src = URL.createObjectURL(stream);
  video.play();
  requestAnimationFrame(draw);
}

function draw() {
  var frame = readFrame();
  if (frame) {
    replaceGreen(frame.data);            // per-pixel processing on the current frame
    context.putImageData(frame, 0, 0);
  }
  requestAnimationFrame(draw);
}

function readFrame() {
  try {
    context.drawImage(video, 0, 0, width, height);
  } catch (e) {
    return null;                         // the video may not be ready yet
  }
  return context.getImageData(0, 0, width, height);
}
But how can I send that stream to a server, do some image processing, and then broadcast it to other clients?
Is Node.js the best way? Do you have any reading or libraries to recommend?
There are currently several working drafts on this, but I think the best way to broadcast live video in a browser right now is to use Flash/ActionScript. Of course, clients will still need to install a plugin to run the app in the browser. Note that you can't use the Apache web server to broadcast live video in that case; you will need a media server such as Red5 (free), Wowza, or Adobe Media Server.
Also, see if you can find some help here:
"WebRTC (where RTC stands for Real-Time Communications) is a technology that enables audio/video streaming and data sharing between browser clients (peers). As a set of standards, WebRTC provides any browser with the ability to share application data and perform teleconferencing peer to peer, without the need to install plug-ins or third-party software."
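For reference, here is a rough sketch of what that plugin-free WebRTC path looks like with the current APIs. The signalingSend callback is a placeholder for whatever signaling channel you build yourself (for example a WebSocket to a Node server); it is not part of any library.

async function startCall(signalingSend) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

  // Send every local track to the remote peer.
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Hand ICE candidates and the offer to whatever signaling channel you use.
  pc.onicecandidate = e => { if (e.candidate) signalingSend({ candidate: e.candidate }); };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalingSend({ sdp: pc.localDescription });

  return pc;
}

The remote side answers the offer over the same signaling channel and receives the tracks in its ontrack handler.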
I want to play stream from gstreamer in a web browser.
I played around with RTP, WebRTC and SDP files but, while VLC was able to connect to the stream from a simple SDP file, browsers were not. I later understood that WebRTC requires a secure connection, which only complicates things and is not needed for my purposes. I then stumbled upon the Media Source Extensions (MSE) of HTML5, which seem like they could help, but I'm not able to find a comprehensive tutorial or appropriate specs on how to get gstreamer to stream the correct data and then how to play it using MSE. I'm also not sure about the latency of using MSE.
So is there a way to play stream from gstreamer in a browser?
Thanks.
Using the node-webrtc project, I was able to combine output from gstreamer with a WebRTC call. For gstreamer, there is a project that enables its use from Node, gstreamer-superficial. So basically, you run the gstreamer pipeline from the Node process, which can then consume its output. On every gstreamer frame a callback is invoked, which takes the frame and can push it into the WebRTC call.
Then a WebRTC call needs to be implemented. Some signaling protocol is required for the calls. One side of the call is the server and the other is the client's browser, instead of two browsers. Then a video track is created, into which the frames from gstreamer-superficial are pushed.
const { RTCVideoSource } = require("wrtc").nonstandard;
const gstreamer = require("gstreamer-superficial");

const source = new RTCVideoSource();
// This is the WebRTC video track, which should be used with addTransceiver (see the example at the bottom)
const track = source.createTrack();
const frame = {
  width: 1920,
  height: 1080,
  data: null
};

const pipeline = new gstreamer.Pipeline("v4l2src ! videorate ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=25/1 ! videoconvert ! video/x-raw,format=I420 ! appsink name=sink");
const appsink = pipeline.findChild("sink");

const pull = function() {
  appsink.pull(function(buf, caps) {
    if (buf) {
      // Each pulled buffer is one raw I420 frame; hand it to the WebRTC track.
      frame.data = new Uint8Array(buf);
      try {
        source.onFrame(frame);
      } catch (e) {}
      pull();
    } else if (!caps) {
      console.log("PULL DROPPED");
      setTimeout(pull, 500);
    }
  });
};

pipeline.play();
pull();

// Example:
const useTrack = SomeRTCPeerConnection => SomeRTCPeerConnection.addTransceiver(track, { direction: "sendonly" });
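On the browser side of the call, the track pushed by the server shows up in the peer connection's ontrack handler. A minimal sketch (the element id and the signaling exchange are assumptions, not part of the code above):

const pc = new RTCPeerConnection();
pc.ontrack = (event) => {
  // Render whatever the server pushes into a <video> element.
  const videoEl = document.getElementById("remote-video"); // example element id
  videoEl.srcObject = event.streams[0];
  videoEl.play();
};
// The offer/answer and ICE candidates still travel over your own signaling channel.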
I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercept the network packets coming to the browser, e.g. using Wireshark, and then decode them, following this article: https://webrtchacks.com/video_replay/
Modify a browser to store the recording as a file, e.g. by modifying Chromium itself
Screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that would let me capture the packets or the encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's RTCPeerConnection. Here is an example:
In the extension's manifest, add the following content script:
"content_scripts": [
  {
    "matches": ["http://*/*", "https://*/*"],
    "js": ["payload/inject.js"],
    "all_frames": true,
    "match_about_blank": true,
    "run_at": "document_start"
  }
]
inject.js
var inject = '(' + function() {
  // Override the browser's default RTCPeerConnection.
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;

  // Make sure it is supported.
  if (origPeerConnection) {
    // Our own RTCPeerConnection.
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);

      // Proxy the original peer connection.
      var pc = new origPeerConnection(config, constraints);

      // Store the old addStream.
      var oldAddStream = pc.addStream;

      // addStream is called when a local stream is added.
      // arguments[0] is a local media stream.
      pc.addStream = function() {
        console.log("our addStream called!");
        // Our MediaStream object.
        console.dir(arguments[0]);
        return oldAddStream.apply(this, arguments);
      };

      // ontrack is called when a remote track is added.
      // The media stream(s) are located in event.streams.
      pc.ontrack = function(event) {
        console.log("ontrack got a track");
        console.dir(event);
      };

      window.ourPC = pc;
      return pc;
    };

    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // Override the constructors if they exist on the window object.
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;

        // Copy the static methods.
        Object.keys(origPeerConnection).forEach(function(x) {
          window[obj][x] = origPeerConnection[x];
        });
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
} + ')();';
var script = document.createElement('script');
script.textContent = inject;
(document.head||document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to receive the local media streams, and the event object in ontrack is an RTCTrackEvent which has a streams property, which I assume is what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their "srcObject" property to the media stream, e.g.:
pc.ontrack = function(event) {
  // Check if our element exists.
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // Create an audio element.
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // Set the srcObject to our stream. Not sure if you need to clone it.
  elm.srcObject = event.streams[0].clone();
  // Write the element to the body.
  document.body.appendChild(elm);
  // Fire a custom event so our content script knows the stream is available.
  // You could pass the id in the "detail" object, for example:
  // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // and then access it via e.detail.id in your event listener.
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then in your content script you can listen for that event/access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  var elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
});
With the captured stream available to your content script, you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another destination.
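As a rough sketch of the MediaRecorder route (the mime type and one-second timeslice are just example choices):

// Record a captured MediaStream into WebM chunks with MediaRecorder.
function recordStream(stream) {
  const chunks = [];
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" }); // example mime type
  recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: "video/webm" });
    // Do something with the blob here: upload it, or offer it as a download.
    console.log("recording available at", URL.createObjectURL(blob));
  };
  recorder.start(1000); // emit a chunk every second
  return recorder;      // call recorder.stop() when you are done
}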
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish a blank MediaStream, override navigator.mediaDevices.getUserMedia and, instead of returning the local MediaStream, return your own.
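Untested, but such an override in inject.js could look roughly like this; the canvas-based fake stream is just one way to fabricate a MediaStream and is purely illustrative:

// Keep a reference to the real implementation.
const origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

navigator.mediaDevices.getUserMedia = function(constraints) {
  console.log("getUserMedia intercepted", constraints);
  // Example replacement: a 640x480 video stream captured from a blank canvas.
  const canvas = document.createElement("canvas");
  canvas.width = 640;
  canvas.height = 480;
  const fakeStream = canvas.captureStream(25);
  return Promise.resolve(fakeStream);
  // Or fall through to the real camera instead: return origGetUserMedia(constraints);
};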
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target page's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record the videos for you.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.
When I use getUserMedia() for screen share, I don't get audio.
Things I would like to do, but couldn't find any relevant material on:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, the tray shown in the screenshot below appears. What is it called, and how can I modify it (e.g. its looks)?
Screenshot:
If you want one stream made of your screen sharing for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia with constraints set to screen and audio, respectively. Then you will have to put the tracks into a common stream (see the sketch below). Eventually, you can attach that stream to a peer connection.
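A rough sketch of that approach with today's APIs (getDisplayMedia for the screen track, getUserMedia for the microphone; older Chrome versions needed the chromeMediaSource/desktop constraints instead):

// Grab the screen and the microphone separately, then merge the tracks
// into one MediaStream that can be attached to a single peer connection.
async function getScreenWithAudio() {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  return new MediaStream([
    ...screen.getVideoTracks(),
    ...mic.getAudioTracks()
  ]);
}

// Usage: attach every track of the combined stream to the peer connection.
// getScreenWithAudio().then(stream =>
//   stream.getTracks().forEach(t => pc.addTrack(t, stream)));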
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
You will not have synchronization between audio and video (not so important for screen sharing).
You will need two connections => twice the number of ports => more chances of failure. That is more likely to be a problem.
This is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of any way to manipulate it.
It's possible with the msr (MediaStreamRecorder) package from npm, on Chrome.
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({audio: true}, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both the screen and audio in the same peer; you have to open two peers.
I'm pretty new to the HTML5 Audio API: I've read some of the related articles at HTML5 Rocks, but it can be a little tricky flipping between JavaScript and Dart at times.
In any case, I've been experimenting with HTML5 Audio in Dart. To produce sound effects for a simple game, I created a class as follows. I created an AudioContext, loaded sound data into SoundBuffers, and when the sound needed to be played, created an AudioBufferSourceNode via which to play the data stored in the buffers:
class Sfx {
  AudioContext audioContext;
  List<Map> soundList;
  int soundFiles;

  Sfx() {
    audioContext = new AudioContext();
    soundList = new List<Map>();
    var soundsToLoad = [
      {"name": "MISSILE", "url": "SFX/missile.wav"},
      {"name": "EXPLOSION", "url": "SFX/explosion.wav"}
    ];
    soundFiles = soundsToLoad.length;
    for (Map sound in soundsToLoad) {
      initSound(sound);
    }
  }

  bool allSoundsLoaded() => (soundFiles == 0);

  void initSound(Map soundMap) {
    HttpRequest req = new HttpRequest();
    req.open('GET', soundMap["url"], true);
    req.responseType = 'arraybuffer';
    req.on.load.add((Event e) {
      audioContext.decodeAudioData(
        req.response,
        (var buffer) {
          // successful decode
          print("...${soundMap["name"]} loaded...");
          soundList.add({"name": soundMap["name"], "buffer": buffer});
          soundFiles--;
        },
        (var error) {
          print("error loading ${soundMap["name"]}");
        }
      );
    });
    req.send();
  }

  void sfx(AudioBuffer buffer) {
    AudioBufferSourceNode source = audioContext.createBufferSource();
    source.connect(audioContext.destination, 0, 0);
    source.buffer = buffer;
    source.start(0);
  }

  void playSound(String sound) {
    for (Map m in soundList) {
      print(m);
      if (m["name"] == sound) {
        sfx(m["buffer"]);
        break;
      }
    }
  }
}
(The sound effects are in a folder "SFX". Now that I look at the code, there are probably a million better ways to organise the data, but that's beside the point right now.) I am able to play sound effects by creating an instance of Sfx and calling the method playSound.
e.g.
#import('dart:html');
#source('sfx.dart');

Sfx sfx;

void main() {
  sfx = new Sfx();
  window.on.keyUp.add((KeyboardEvent keX) {
    sfx.playSound("MISSILE");
  });
}
(Edit: added code to play sound when a key is hit.)
The problem is: although the sound effects play as expected in Safari (with the dart2js JavaScript), they are distorted when played in Dartium or in Chrome (with the dart2js JavaScript). (In Firefox, there are even worse problems!)
Is there anything obvious that I have neglected to do or that I need to take into account? Otherwise, are there any references or tutorials, preferably in a Dart context, that might help?
Thanks for trying Dart!
First off, Firefox doesn't support the Web Audio API (yet?); Chrome and Safari do. You can track adoption of the Web Audio API here: http://caniuse.com/#feat=audio-api
Second, please try this Web Audio API sample in Dartium: https://github.com/dart-lang/dart-html5-samples/tree/master/web/webaudio/intro You will need to clone the repo first and run it locally. This sample works for me locally.
This sounds more like a bug report. If the sample from dart-html5-samples works for you, but your above code continues to be distorted, please open a bug at http://dartbug.com/new so we can take a look.
One thing to consider is waiting until the specific MISSILE sound is loaded before hooking up the keyUp handler.
I want to write a basic script using the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files end up playing at the same time, which is not what I wanted.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
  // Note: this loads asynchronously
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";

  // Our asynchronous callback
  request.onload = function() {
    var audioData = request.response;
    audioGraph(audioData);
  };

  request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
  // create a sound source
  soundSource = context.createBufferSource();

  // The Audio Context handles creating source buffers from raw binary
  soundBuffer = context.createBuffer(audioData, true /* make mono */);

  // Add the buffered data to our object
  soundSource.buffer = soundBuffer;

  // Plug the cable from one thing to the other
  soundSource.connect(context.destination);

  // Finally
  soundSource.noteOn(context.currentTime);
}

// Stop all of the sounds
function stopSounds() {
  // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
  function () {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
  }
);

document.querySelector('.next').addEventListener('click',
  function() {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
  }
);
You should be pre-loading the sounds into buffers once, at launch, and simply creating a new AudioBufferSourceNode for the buffer whenever you want to play it back (source nodes are one-shot).
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the respective buffer lengths.
To stop sounds, use noteOff.
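A rough sketch of that pattern, written against the same prefixed/legacy API the question uses (in current browsers webkitAudioContext, noteOn and noteOff have become AudioContext, start and stop):

var context = new webkitAudioContext();
var buffers = {};          // name -> decoded AudioBuffer
var currentSource = null;  // the node that is currently playing

// Pre-load and decode each file once, at launch.
function loadSound(name, url) {
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";
  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      buffers[name] = buffer;
    });
  };
  request.send();
}

function stopSounds() {
  if (currentSource) {
    currentSource.noteOff(0);  // stop(0) in the current spec
    currentSource = null;
  }
}

function playSound(name) {
  stopSounds();
  // A buffer source node is single-use: create a fresh one for every playback.
  currentSource = context.createBufferSource();
  currentSource.buffer = buffers[name];
  currentSource.connect(context.destination);
  currentSource.noteOn(0);   // start(0) in the current spec
}

loadSound("hello", "http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3");
loadSound("nokia", "http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3");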
It sounds like you are missing some fundamental Web Audio concepts. This (and more) is described in detail and shown with samples in this HTML5Rocks tutorial and in the FAQ.