getUserMedia() Screen share with Audio and update tray - google-chrome

When I use getUserMedia() for screen share, I don't get audio.
Things which I would like to do, but couldn't find any relevant stuff:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, the tray below appears. What is it called, and how can I modify it (e.g., its looks)?
Screenshot:

If you want one stream made of your screen sharing for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia, with constraints set to screen and audio respectively. Then you will have to put the tracks in a common stream. Finally, you can attach that stream to a peer connection.
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
You will not have synchronization between audio and video (not so important for screen sharing).
You will need two connections => twice the number of ports => a higher chance of failure. That is more likely to be a problem.
The tray is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of any way to modify it.
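For illustration, here is a minimal sketch of the combined-stream approach described above. It uses the modern navigator.mediaDevices promise API and getDisplayMedia for the screen track (the original answer predates getDisplayMedia and relied on screen-sharing extensions); the function and variable names are my own.
// Sketch: combine a screen-capture video track and a microphone audio track
// into a single MediaStream, then attach it to a peer connection.
async function getScreenWithMicAudio() {
  // 1st call: screen video (getDisplayMedia replaces the old screen-constraint hack)
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  // 2nd call: microphone audio
  const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  // Put both tracks into one common stream
  return new MediaStream([
    ...screenStream.getVideoTracks(),
    ...micStream.getAudioTracks(),
  ]);
}

// Usage: attach the combined stream to a peer connection
getScreenWithMicAudio().then((stream) => {
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
});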

It's possible with the msr (MediaStreamRecorder) npm package on Chrome.
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({audio: true}, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});

Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both screen and audio in the same peer; you have to open two peers.


Autoplay stops after some time html5 video taken from google drive

I have an mp4 video inside an HTML video tag. The src is set to the Google Drive video.
See my code below:
<video autoplay muted loop id="video" class="" src="https://drive.google.com/uc?export=download&id=1qSC6ySf6ZZldFRpuBx9EXvDsD-mfve1Z" type="video/mp4"> </video>
Note: when I serve the mp4 video directly from my server it does not pause, but when the video is loaded from Google Drive it gets paused after a few minutes. As I am not showing any controls, I need the video to play continuously.
The issue I'm getting is reproduced in this Codepen: https://codepen.io/5salman/pen/bGRMyKj
There's nothing inherent in Google Drive (that I can think of) that would cancel autoplay. More likely you're hitting an error event. The video will be loaded via "206 Partial Content" byte-range requests. If it doesn't end up being locally cached, that could make it more susceptible to network issues than a server that doesn't support byte-range requests. Also look at the Network tab in DevTools for any network errors.
Mitigation options:
Reduce the size of your video! It's 14 MB for 20 seconds of video. Use Handbrake or something similar to re-encode the video at a smaller bitrate/size. That will reduce network traffic and make it less likely to choke the viewing device.
Add an error event handler on the <video> tag - this can help you with diagnostics, and you can also trigger the video to (try to) play again.
function reloadVideo(vidElement, preserveTime = true) {
  if (preserveTime) {
    // wait to set the current time again until after the video is playable
    const position = vidElement.currentTime;
    const cb = () => {
      if (!isNaN(position) && position < vidElement.duration) {
        // seek just a frame or so ahead, in case there was a decode error at that exact time
        vidElement.currentTime = position + 1 / 30;
      }
      vidElement.removeEventListener('canplay', cb);
    };
    vidElement.addEventListener('canplay', cb);
  }
  const src = vidElement.currentSrc;
  vidElement.src = "";
  vidElement.src = src;
}
var vidElement = document.querySelector('#video');
// retry on recoverable errors
var retryErrorCodes = [MediaError.MEDIA_ERR_NETWORK, MediaError.MEDIA_ERR_DECODE];
var SECONDS_BEFORE_RETRY = 5;
vidElement.addEventListener('error', evt => {
  if (retryErrorCodes.includes(vidElement.error.code)) {
    setTimeout(reloadVideo, SECONDS_BEFORE_RETRY * 1000, vidElement);
  }
});
(Not Recommended) If you need it to keep running, you can add a setInterval function that restarts the video if it's paused. Keep in mind that these "zombie" functions can get annoying if you don't manage them carefully.
// use setInterval to keep retrying the playback
var SECONDS_BETWEEN_PLAYING_CHECKS = 10;
var keepPlayingTimer = setInterval(function () {
  var vidElement = document.querySelector('#video');
  // make sure the element is still there
  if (vidElement && typeof vidElement.play === 'function') {
    // if it's not playing, start it playing again
    if (vidElement.paused) {
      vidElement.play();
    }
  } else {
    // don't call this function again if the video element is gone
    clearInterval(keepPlayingTimer);
  }
}, SECONDS_BETWEEN_PLAYING_CHECKS * 1000);

Properly using chrome.tabCapture in a manifest v3 extension

Edit:
As the end of the year and the end of Manifest V2 are approaching, I did a bit more research on this and found the following workarounds:
The example here that uses the desktopCapture API:
https://github.com/GoogleChrome/chrome-extensions-samples/issues/627
The problem with this approach is that it requires the user to select a capture source via some UI which can be disruptive. The --auto-select-desktop-capture-source command line switch can apparently be used to bypass this but I haven't been able to use it with success.
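For reference, a rough sketch of that desktopCapture flow, run from an extension page such as the popup (the source list, the recording step, and the variable names are assumptions of mine, not code from the linked sample):
// Sketch: ask the user to pick a capture source, then open it with getUserMedia.
// chrome.desktopCapture is only available to extension pages, not content scripts.
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window', 'tab'], (streamId) => {
  if (!streamId) {
    return; // the user cancelled the picker
  }
  navigator.mediaDevices.getUserMedia({
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: streamId
      }
    }
  }).then((stream) => {
    // record or forward the stream here
    const recorder = new MediaRecorder(stream);
    recorder.start();
  });
});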
The example extension here works around tabCapture not working in service workers by creating its own inactive tab from which to access the tabCapture API and record the currently active tab:
https://github.com/zhw2590582/chrome-audio-capture
So far this seems to be the best solution I've found in terms of UX. The background page provided in Manifest V2 is essentially replaced with a phantom tab.
The roundabout nature of the second solution also seems to suggest that the tabCapture API is essentially not intended for use in Manifest V3, or else there would have been a more straightforward way to use it. I am disappointed that Manifest V3 is being enforced while essentially leaving behind Manifest V2 features such as this one.
Original Post:
I'm trying to write a manifest v3 Chrome extension that captures tab audio. However as far as I can tell, with manifest v3 there are some changes that make this a bit difficult:
Background scripts are replaced by service workers.
Service workers do not have access to the chrome.tabCapture API.
Despite this, I managed to get something that nearly works, as popup scripts still have access to chrome.tabCapture. However, there is a drawback: the audio of the tab is muted and there doesn't seem to be a way to unmute it. This is what I have so far:
Query the service worker for the current tab from the popup script.
let tabId;
// Fetch tab immediately
chrome.runtime.sendMessage({command: 'query-active-tab'}, (response) => {
  tabId = response.id;
});
This is the service worker, which responds with the current tab ID.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    // Popup asks for current tab
    if (request.command === 'query-active-tab') {
      chrome.tabs.query({active: true}, (tabs) => {
        if (tabs.length > 0) {
          sendResponse({id: tabs[0].id});
        }
      });
      return true;
    }
    ...
Again in the popup script, from a keyboard shortcut command, use chrome.tabCapture.getMediaStreamId to get a media stream ID to be consumed by the current tab, and send that stream ID back to the service worker.
// On command, get the stream ID and forward it back to the service worker
chrome.commands.onCommand.addListener((command) => {
  chrome.tabCapture.getMediaStreamId({consumerTabId: tabId}, (streamId) => {
    chrome.runtime.sendMessage({
      command: 'tab-media-stream',
      tabId: tabId,
      streamId: streamId
    });
  });
});
The service worker forwards that stream ID to the content script.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    ...
    // Popup sent back media stream ID, forward it to the content script
    if (request.command === 'tab-media-stream') {
      chrome.tabs.sendMessage(request.tabId, {
        command: 'tab-media-stream',
        streamId: request.streamId
      });
    }
  }
);
The content script uses navigator.mediaDevices.getUserMedia to get the stream.
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // Once we're here, the audio in the tab is muted
    // However, recording the audio works!
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
Here is the code that implements the above: https://github.com/killergerbah/-test-tab-capture-extension
This actually does produce a MediaStream, but the drawback is that the sound of the tab is muted. I've tried playing the stream through an audio element, but that seems to do nothing.
Is there a way to obtain a stream of the tab audio in a manifest v3 extension without muting the audio in the tab?
I suspect that this approach might be completely wrong as it's so roundabout, but this is the best I could come up with after reading through the docs and various StackOverflow posts.
I've also read that the tabCapture API is going to be moved for manifest v3 at some point, so maybe the question doesn't even make sense to ask - however if there is a way to still properly use it I would like to know.
I found your post very useful in progressing my implementation of an audio tab recorder.
Regarding the specific muting issue you were running into, I resolved it by looking here: Original audio of tab gets muted while using chrome.tabCapture.capture() and MediaRecorder()
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // To resolve the original audio muting, route the captured stream back to the speakers
    const context = new AudioContext();
    const audio = context.createMediaStreamSource(stream);
    audio.connect(context.destination);
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
This may not be exactly what you are looking for, but perhaps it may provide some insight.
I've tried playing the stream through an audio element, but that seems to do nothing.
Ironically, this is how I managed to get around the issue: by creating an <audio> element in the popup itself. When using tabCapture in the popup script, it returns the stream, and I set the audio element's srcObject to that stream.
HTML:
<audio id="audioObject" autoplay> No source detected </audio>
JS:
chrome.tabCapture.capture({audio: true, video: false}, function(stream) {
  var audio = document.getElementById("audioObject");
  audio.srcObject = stream;
});
According to this post on Manifest V3, chrome.capture will be the new namespace for tabCapture and the like, but I haven't seen anything beyond that.
I had this problem too, and I resolved it by using the Web Audio API. Just create a new context and connect it to a media stream source using the captured MediaStream. Here is an example:
avoidSilenceInTab: (desktopStream: MediaStream) => {
  var contextTab = new AudioContext();
  contextTab
    .createMediaStreamSource(desktopStream)
    .connect(contextTab.destination);
}

how to getUserMedia and record the video mixing the mp3 with javascript? mp3 can play pause and stop

I am using getUserMedia and the MediaRecorder API to record a video from the webcam.
I am using Chrome version 80.
How can I use getUserMedia and record the video while mixing in an mp3 with JavaScript? The mp3 can play, pause, and stop.
I don't know how to mix the mp3 into the video stream live.
When I removeTrack and addTrack, MediaRecorder fails on stop.
It shows: Error: Failed to execute 'stop' on 'MediaRecorder': The MediaRecorder's state is 'inactive'.
My code is on CodePen: https://codepen.io/zhishaofei3/pen/eYNrYGj
Here are the main parts:
function getFileBuffer(filepath) {
  return fetch(filepath, {method: 'GET'}).then(response => response.arrayBuffer())
}

function mp3play() {
  getFileBuffer('song.mp3')
    .then(buffer => context.decodeAudioData(buffer))
    .then(buffer => {
      console.log(buffer)
      const source = context.createBufferSource()
      source.buffer = buffer
      let volume = context.createGain()
      volume.gain.value = 1
      source.connect(volume)
      dest = context.createMediaStreamDestination()
      volume.connect(dest)
      // volume.connect(context.destination)
      source.start(0)
      const _audioTrack = stream.getAudioTracks();
      if (_audioTrack.length > 0) {
        _audioTrack[0].stop();
        stream.removeTrack(_audioTrack[0]);
      }
      console.log(dest.stream)
      console.log(dest.stream.getAudioTracks()[0])
      stream.addTrack(dest.stream.getAudioTracks()[0])
    })
}
Thank you!
Many containers don't support adding/removing tracks like that, and it's doubtful the Media Recorder API does at all. It's an unusual thing to do.
You need to create the stream you're going to record before instantiating Media Recorder, with all of the tracks you want. Therefore, you need to do things in this order:
Set up your AudioContext.
Call getUserMedia(). (And while you're at it, set audio: false in your constraints. No need to open a microphone if you're not using one.)
Use videoStream.getVideoTracks() and dest.stream.getAudioTracks() to get all of the tracks.
Create a new MediaStream with those tracks: new MediaStream([audioTrack, videoTrack]).
Now, run your MediaRecorder on this new MediaStream and you'll have what you want.
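As a minimal sketch of that order, assuming the same kind of global context and dest variables as in the question (the names here are illustrative, and the mp3 wiring is elided):
// Sketch: build the final stream first, then hand it to MediaRecorder.
const context = new AudioContext();
const dest = context.createMediaStreamDestination();
// ... decode the mp3 and connect its source/gain nodes to `dest`, as in the question ...

navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then((videoStream) => {
    // Combine the camera video track with the Web Audio output track
    const mixed = new MediaStream([
      ...videoStream.getVideoTracks(),
      ...dest.stream.getAudioTracks(),
    ]);
    // Only now create the recorder, on the already-complete stream
    const recorder = new MediaRecorder(mixed);
    recorder.start();
  });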

Ways to capture incoming WebRTC video streams (client side)

I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercept network packets coming to the browser, e.g. using Wireshark, and then decode them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself.
Screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that could let me capture the packets or the encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest, add the following content script:
"content_scripts": [
  {
    "matches": ["http://*/*", "https://*/*"],
    "js": ["payload/inject.js"],
    "all_frames": true,
    "match_about_blank": true,
    "run_at": "document_start"
  }
]
inject.js
var inject = '('+function() {
  // override the browser's default RTCPeerConnection.
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  // make sure it is supported
  if (origPeerConnection) {
    // our own RTCPeerConnection
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);
      // proxy the original peer connection
      var pc = new origPeerConnection(config, constraints);
      // store the old addStream
      var oldAddStream = pc.addStream;
      // addStream is called when a local stream is added.
      // arguments[0] is a local media stream
      pc.addStream = function() {
        console.log("our add stream called!")
        // our mediaStream object
        console.dir(arguments[0])
        return oldAddStream.apply(this, arguments);
      }
      // ontrack is called when a remote track is added.
      // the media stream(s) are located in event.streams
      pc.ontrack = function(event) {
        console.log("ontrack got a track")
        console.dir(event);
      }
      window.ourPC = pc;
      return pc;
    };
    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // Override objects if they exist in the window object
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;
        // Copy the static methods
        Object.keys(origPeerConnection).forEach(function(x){
          window[obj][x] = origPeerConnection[x];
        })
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
}+')();';
var script = document.createElement('script');
script.textContent = inject;
(document.head||document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to receive the local media streams, and the event object in ontrack is an RTCTrackEvent which has a streams property, which I assume is what you are looking for.
To access these streams from your extension's content script you will need to create audio elements and set their "srcObject" property to the media stream, e.g.:
pc.ontrack = function(event) {
  // check if our element exists
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // create an audio element
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // set the srcObject to our stream. not sure if you need to clone it
  elm.srcObject = event.streams[0].clone();
  // write the element to the body
  document.body.appendChild(elm);
  // fire a custom event so our content script knows the stream is available.
  // you could pass the id in the "detail" object, for example:
  // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // then access it via e.detail.id in your event listener.
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then in your content script you can listen for that event/access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  var elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
})
With the capture stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s) or you could use something like peer.js or maybe binary.js to stream to another source.
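For example, a rough recording sketch that extends the listener above with MediaRecorder (the mime type and the download step are illustrative choices of mine, not part of the original answer):
// Sketch: record the captured stream with MediaRecorder and download the result.
window.addEventListener("remoteStreamAdded", function () {
  const elm = document.getElementById("remoteStream");
  const stream = elm.captureStream();
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: "audio/webm" });
    const a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "capture.webm";
    a.click();
  };
  recorder.start();
  // stop after 10 seconds, just for the sake of the example
  setTimeout(() => recorder.stop(), 10000);
});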
I haven't tested this but it should also be possible to override the local streams. For example, in the inject.js you could establish some blank mediastream, override navigator.mediaDevices.getUserMedia and instead of returning the local mediastream return your own mediastream.
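A hypothetical sketch of that idea, substituting a canvas-based stream for the real camera (the canvas source and the frame rate are assumptions of mine, not tested code):
// Sketch: intercept getUserMedia and return our own stream instead of the camera.
const origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
const fakeCanvas = document.createElement('canvas');
fakeCanvas.width = 640;
fakeCanvas.height = 480;
const fakeStream = fakeCanvas.captureStream(30); // 30 fps video track drawn from the canvas

navigator.mediaDevices.getUserMedia = async function (constraints) {
  console.log('getUserMedia intercepted with constraints', constraints);
  // return our own stream instead of (or merged with) the real one
  return fakeStream;
};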
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target's libraries is key to making this work.
Capturing packets will only give you the network packets which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.

Broadcast a live webcam stream

I want to record my webcam and broadcast the stream live to other clients.
I can easily show my webcam in a video tag on the same page with something like this:
function initialize() {
  video = $("#v")[0];
  width = video.width;
  height = video.height;
  var canvas = $("#c")[0];
  context = canvas.getContext("2d");
  nav.getUserMedia({video: true}, startStream, function () {});
}

function startStream(stream) {
  video.src = URL.createObjectURL(stream);
  video.play();
  requestAnimationFrame(draw);
}

function draw() {
  var frame = readFrame();
  if (frame) {
    replaceGreen(frame.data);
    context.putImageData(frame, 0, 0);
  }
  requestAnimationFrame(draw);
}

function readFrame() {
  try {
    context.drawImage(video, 0, 0, width, height);
  } catch (e) {
    return null;
  }
  return context.getImageData(0, 0, width, height);
}
But how do I send that stream to a server, do some image processing, and then broadcast it to other clients?
Is Node.js the best way? Do you have any readings/libs to recommend?
There are currently several working drafts on this, but I think the best way to broadcast live video in a browser today is to use Flash ActionScript. Of course, the clients will still need to install a plugin to run the app in the browser. Note that you can't use the Apache web server to broadcast the live videos in that case; you will need a media server such as Red5 (free), Wowza, or Adobe Media Server.
Also, see if you can find some help here:
'WebRTC (where RTC stands for Real-Time Communications) is a technology that enables audio/video streaming and data sharing between browser clients (peers). As a set of standards, WebRTC provides any browser with the ability to share application data and perform teleconferencing peer to peer, without the need to install plug-ins or third-party software. '
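As a minimal illustration of the WebRTC route, attaching the webcam stream to a peer connection looks roughly like this; the signalling part (exchanging the offer/answer and ICE candidates with a server or other peers) is omitted and left as an assumption:
// Sketch: publish the webcam over WebRTC; the signalling channel is left out.
async function publishWebcam() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection();
  // add every local track to the connection
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // send pc.localDescription to the media server / other peers over your signalling channel
}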