WebRTC configuration to reduce sent traffic

I am developing audio/video call functionality using WebRTC. I ran into a case where one of the participants is on an unstable network (a mobile network). In this case the participant's audio and video start to freeze, there are delays, and so on. I think that to solve this problem the application needs to be configured to send less traffic.
What WebRTC configuration options exist to reduce sent traffic?

Most of the heavy traffic will be video. One solution is to limit the video quality or disable video completely. You can limit video quality with code like this:
const displayMediaStream = await getDisplayMediaStream(); // your helper around getDisplayMedia()
const supports = navigator.mediaDevices.getSupportedConstraints();
if (!supports["width"] || !supports["height"] || !supports["frameRate"] || !supports["facingMode"]) {
  // We're missing needed properties, so handle that error.
} else {
  const constraints = {
    width: { min: 640, ideal: 1920, max: 1920 },
    height: { min: 400, ideal: 1080 },
    aspectRatio: 1.777777778,
    frameRate: { max: 30 }
  };
  // applyConstraints() returns a promise, so wait for it to settle.
  await displayMediaStream.getVideoTracks()[0].applyConstraints(constraints);
}
return displayMediaStream.getVideoTracks()[0];
You can play with the values.
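If constraining the capture is not enough, you can also act on the connection itself: disable the video track completely (as mentioned above), or cap the encoder bitrate with RTCRtpSender.setParameters(). A minimal sketch, assuming an existing RTCPeerConnection named pc and a browser that supports setParameters(); the bitrate number is only illustrative:
const videoSender = pc.getSenders().find(s => s.track && s.track.kind === 'video');

// Option 1: disable the outgoing video track entirely (almost no video traffic).
videoSender.track.enabled = false;

// Option 2: keep video but cap its bitrate via RTCRtpSender.setParameters().
const params = videoSender.getParameters();
if (!params.encodings || !params.encodings.length) params.encodings = [{}];
params.encodings[0].maxBitrate = 300000; // ~300 kbps; tune for your network
await videoSender.setParameters(params);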
The problem could also be the browser's codec choice. For example, Firefox, in the case of screen sharing, uses a codec that produces a high-quality video stream, which is good for static pictures such as shared documents; the problem appears when users broadcast dynamic video, like YouTube videos, through screen sharing. In such a case, Firefox can overload the network by sending streams of ~7 Gb. Google Chrome, meanwhile, is more intelligent and can adapt the traffic by using better codecs. I would test with different browsers, and if the problem lies in Firefox, you can try to force it to use better codecs (such as the ones Chrome uses) by modifying the SDP when you detect a Firefox browser, as described here: How can I change the default Codec used in WebRTC?
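The SDP-munging approach from that answer looks roughly like this; a sketch, assuming H264 is the codec you want to prefer (preferCodec is a hypothetical helper name):
// Move the payload types of the preferred codec to the front of the m=video line.
function preferCodec(sdp, codecName) {
  const lines = sdp.split('\r\n');
  const mIndex = lines.findIndex(l => l.startsWith('m=video'));
  if (mIndex === -1) return sdp;
  // Lines look like "a=rtpmap:<payload> <codec>/<clockrate>"; collect matching payloads.
  const payloads = lines
    .filter(l => l.startsWith('a=rtpmap:') && l.toLowerCase().includes(codecName.toLowerCase() + '/'))
    .map(l => l.slice('a=rtpmap:'.length).split(' ')[0]);
  if (payloads.length === 0) return sdp;
  // "m=video <port> <proto> <pt> <pt> ...": reorder the payload list.
  const parts = lines[mIndex].split(' ');
  const rest = parts.slice(3).filter(pt => !payloads.includes(pt));
  lines[mIndex] = parts.slice(0, 3).concat(payloads, rest).join(' ');
  return lines.join('\r\n');
}

// Usage: munge the offer before applying it.
const offer = await pc.createOffer();
offer.sdp = preferCodec(offer.sdp, 'H264');
await pc.setLocalDescription(offer);
On newer browsers the standardized RTCRtpTransceiver.setCodecPreferences() achieves the same reordering without string manipulation.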

Related

MediaRecorder captured on Chrome not playable on Mobile or Safari

Goal: use the MediaRecorder (or another) API to produce video files that are viewable cross-platform.
Fail: the current API falls back to a container/codec on Google Chrome that is only viewable on Chrome and in advanced desktop media players, but not on Safari or mobile devices.
Note: the same code, when run on Safari, generates a video file that works on all platforms.
const mimeType = 'video/webm;codecs=H264'
rec = new MediaRecorder(stream.current, { mimeType })
rec.ondataavailable = e => blobs.push(e.data)
rec.onstop = async () => {
  saveToFile(new Blob(blobs, { type: mimeType }))
}
Tried all the different combinations of containers and codecs.
Also tried to override the mimeType of the Blob with the MP4 file container.
No success whatsoever.
also tried:
https://github.com/streamproc/MediaStreamRecorder
https://github.com/muaz-khan/RecordRTC
Same issues. It seems like Chrome's container/codec combinations always fall back to a format that is only viewable out of the box on Chrome or in a powerful desktop video player like VLC.
The only cross-platform working video for me is the one recorded in the Safari browser.
What is the correct container/codec to use with the MediaCapture API to make the output file playable cross-platform?
Edit:
We ended up building a transcoding pipeline with AWS Elastic Transcoder, which takes the uploaded video and transcodes it with a general preset that is playable on all platforms, thus creating a converted video file.
Unfortunately the bounty I offered expired, but if someone answers the original question I would gladly award the bounty again.
I think your problem may be in the first line:
const mimeType = 'video/webm;codecs=H264'
The container you are using is webm, which typically uses the VP8 or VP9 codecs. H264 is a codec used in the mp4 container.
Chrome supports webm. Safari does not (and all iOS browsers are based on Safari - hence your mobile issue).
You say that, when run on Safari, this outputs a playable video. Use ffprobe to see what codec/container Safari outputs; I am guessing that the container/codec differs.
Since your video is H264, you can simply change the container to mp4, and it will play everywhere. This is a 'copy' from one container to the other, not a transcoding, but you'll still need ffmpeg :)
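For reference, that container copy could look something like this (a sketch; the file names are placeholders, and if Chrome recorded the audio as Opus you may need to transcode just the audio for mp4 compatibility):
ffmpeg -i input.webm -c:v copy -c:a aac output.mp4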
Here's a post that might help: Recording cross-platform (H.264?) videos using WebRTC MediaRecorder

HTML5 record moderate video quality for upload to be playable by Safari

I am creating a web-based mobile app where it should be possible to upload video-recordings.
There are two ways to achieve this:
Use input:
<input type="file" name="video" accept="video/*" capture>
Use RTC MediaRecorder:
var recordedBlobs = [];
function handleDataAvailable(event) {
  if (event.data && event.data.size > 0) {
    recordedBlobs.push(event.data);
  }
}
var options = {
  mimeType: 'video/webm',
  audioBitsPerSecond: 128000,
  videoBitsPerSecond: 2500000
};
mediaRecorder = new MediaRecorder(window.stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.start(10); // emit a dataavailable event every 10 ms
While the first option always works, the main problem is that it uses the built-in mobile camera application, leaving us no control over quality, which in turn leads to potentially very large files (especially on Android).
The second version gives us full control over quality and lets us create moderate file sizes that are acceptable as content in the application. iOS/Safari does not support this feature yet, but this is OK, since iPhones record small files by default when recording is started from the browser. So I can activate option 1 when the user agent is iOS.
Now the problems:
The first option would be fine if I could:
Control the video recording quality of the mobile application
Post-process the recording to change the resolution before upload
The problem with option 2 is that only the .webm container type is supported, and Safari does not support this type.
So I'm a little stuck: right now it seems like my only option is to post-convert the incoming .webm files to .mp4 on the server as they are uploaded. But that seems to be a very CPU-costly process on the server.
Any good ideas?
You can record as H.264 into a webm container. This is supported by Chrome.
var options = {mimeType: 'video/webm;codecs=h264'};
media_recorder = new MediaRecorder(stream, options);
Although it is an unusual combination of video format and container, it is valid.
Now you could change H.264/webm into H.264/mp4 without transcoding the video stream, using ffmpeg (-vcodec copy).
You could also try re-wrapping from webm to mp4 client side in JavaScript using ffmpeg.js.
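A rough sketch of that client-side rewrap, assuming ffmpeg.js's synchronous MEMFS interface and a build that includes the mp4 muxer (recordedBlob stands in for your MediaRecorder output):
// Convert the recorded blob to bytes, run ffmpeg in memory, and read the result.
const buffer = await recordedBlob.arrayBuffer();
const result = ffmpeg({
  MEMFS: [{ name: 'in.webm', data: new Uint8Array(buffer) }],
  arguments: ['-i', 'in.webm', '-c:v', 'copy', '-c:a', 'aac', 'out.mp4']
});
const mp4Blob = new Blob([result.MEMFS[0].data], { type: 'video/mp4' });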

Capture system sound from browser

I am trying to build a web app that captures both local and remote audio from a WebRTC call, but I can't record the remote audio (using RecordRTC).
I was wondering if I could capture the system sound somehow.
Is there a way to capture the system sound (not just the mic) from the browser? Maybe an extension?
In Chrome, the chrome.desktopCapture extension API can be used to capture the screen, which includes system audio (but only on Windows and Chrome OS and without plans for OS X or Linux). E.g.
chrome.desktopCapture.chooseDesktopMedia([
  'screen', 'window' // ('tab' is not supported; use chrome.tabCapture instead)
], function(streamId) {
  navigator.webkitGetUserMedia({
    audio: {
      mandatory: {
        chromeMediaSource: 'system',
        chromeMediaSourceId: streamId
      }
    },
    video: false // We only want audio for now.
  }, function(stream) {
    // Do what you want with this MediaStream.
  }, function(error) {
    // Handle error
  });
});
I'm not sure whether Firefox can capture system sound, but at the very least it is capable of capturing some output (tab/window/browser/OS?).
First you need to visit about:config and set media.getusermedia.audiocapture.enabled to true (this could be automated through a Firefox add-on). Then the stream can be captured as follows:
navigator.mozGetUserMedia({
  audio: {
    mediaSource: 'audioCapture'
  },
  video: false // Just being explicit; we only want audio for now.
}, function(stream) {
  // Do what you want with this MediaStream.
}, function(error) {
  // Handle error
});
This was implemented in Firefox 42; see https://bugzilla.mozilla.org/show_bug.cgi?id=1156472
This is possible with the new Screen Capture API, but browser support is still limited.
See the "Browser compatibility" section in the above-linked MDN page for details. Some browsers currently don't yet support audio capture, and some others currently only allow audio capture from a specific tab, rather than the operating system as a whole.
Example code:
videoElem.srcObject = await navigator.mediaDevices.getDisplayMedia({ audio: true, video: true });
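If you only need the system sound, a minimal sketch (assuming the browser lets you capture audio for the chosen source) is to keep just the audio tracks and record those:
const displayStream = await navigator.mediaDevices.getDisplayMedia({ audio: true, video: true });
// Build an audio-only stream from the captured audio tracks.
const audioOnly = new MediaStream(displayStream.getAudioTracks());
const recorder = new MediaRecorder(audioOnly);
recorder.ondataavailable = e => { /* collect e.data blobs for saving */ };
recorder.start();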

Howto: Save screencast to video file on ChromeOS?

Two Chrome apps/extensions have caught my eye on the webstore:
Screencastify
Snagit
I am aware of chrome.desktopCapture and how I can use getUserMedia() to capture a live stream of a user's desktop.
Example:
navigator.webkitGetUserMedia({
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: desktop_id,
      minWidth: 1280,
      maxWidth: 1280,
      minHeight: 720,
      maxHeight: 720
    }
  }
}, successCallback, errorCallback);
I'd love to create my own screencast app that allows audio recording as well as embedding webcam capture in a given corner of the video like Screencastify.
I understand capturing the desktop and the audio and video of the user, but how do you put it all together and make it into a video file?
I'm assuming there is a way to create a video file from a getUserMedia() stream on ChromeOS. Something that only ChromeOS has implemented?
How is it done? Thanks in advance for your answers.
The actual encoding and saving of the video file isn't something that's been implemented in Chrome as of yet. Mozilla has it in a nascent form at the moment. I'm unsure of its state in ChromeOS. I can give you a little information I've gleaned during development with the Chrome browser, however.
The two ways to encode, save, and distribute a media stream as a video are client-side and server-side.
Server-side:
Requires a media server of some kind. The best free/open-source solution that I've found so far is Kurento. The media stream is uploaded (in chunks or whole) or streamed to the media server, where it is encoded and saved for later use. This also works peer-to-peer, with the server acting as a middleman and recording as the data streams through.
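To give a flavor of the server-side route, here is a rough sketch with the kurento-client Node.js library; the endpoint names follow Kurento's documented API, but the connection URI and file path are placeholders:
const kurentoClient = require('kurento-client');

// Connect to the Kurento Media Server and wire a WebRTC endpoint to a recorder.
const client = await kurentoClient('ws://localhost:8888/kurento');
const pipeline = await client.create('MediaPipeline');
const webRtcEndpoint = await pipeline.create('WebRtcEndpoint');
const recorder = await pipeline.create('RecorderEndpoint', { uri: 'file:///tmp/call.webm' });
await webRtcEndpoint.connect(recorder);
await recorder.record(); // writes the incoming stream to disk on the server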
Client-side:
This is all about browser-based encoding. There are currently two working options that I've tested successfully in Chrome.
Whammy.js:
This method uses a canvas hack to save arrays of webp images and then encode them into a webm container. While slow, it works well with video. No audio support. I'm working on that at the moment.
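A rough sketch of that canvas approach, following Whammy's documented add()/compile() usage (assumes whammy.js is loaded and that video is a playing <video> element being painted onto a canvas):
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const encoder = new Whammy.Video(15); // target 15 fps

// Periodically paint the current frame and hand it to the encoder as WebP.
const timer = setInterval(() => {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  encoder.add(canvas);
}, 1000 / 15);

function stopRecording() {
  clearInterval(timer);
  const webmBlob = encoder.compile(); // mux the WebP frames into a WebM blob
  // ...offer webmBlob for download or upload it
}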
videoconverter.js (was ffmpeg-asm.js):
This is a straight port of ffmpeg to JavaScript using Emscripten. It works with both audio and video. It's also gigantic, script-wise, at around 25MB uncompressed. The other reason I'm not using it in production is the shaky licensing ground that ffmpeg is on at the moment.
It has not been optimized as much as it could be. It would probably be quite a project to make it reliably production-ready.
Hopefully that at least gives you avenues of research.

knitr mp4 movie embedding does not work on Windows XP

I knit an Rmd file (to HTML) with a chunk producing an mp4 movie:
```{r clock, fig.width=7, fig.height=6, fig.show='animate'}
par(mar = rep(3, 4))
for (i in seq(pi/2, -4/3 * pi, length = 12)) {
plot(0, 0, pch = 20, ann = FALSE, axes = FALSE)
arrows(0, 0, cos(i), sin(i))
axis(1, 0, "VI"); axis(2, 0, "IX")
axis(3, 0, "XII"); axis(4, 0, "III"); box()
}
```
knitr generates the following HTML code for embedding the mp4 movie:
<p><video controls="controls" loop="loop"><source src="figure/clock.mp4" type="video/mp4" />video of chunk clock</video></p>
The mp4 movie is correctly created in the figure subfolder, but it does not appear in the HTML output when I open it on a Windows XP machine using Chrome, Firefox, or Explorer.
Here is a (temporary) example: http://stla.overblog.com/ellipse-chart-test - this is not the "clock" example, but it is exactly the same rendering problem. I see the movie with Chrome on a Windows Vista machine, but not on my Windows XP machine.
What is the explanation? Is it really a problem with the OS, or with the browser versions?
tl;dr: Browsers really do use the OS to perform some media decoding tasks. Work around it by a) providing alternate media streams, b) using the most compatible media format for your audience, c) using a plugin (e.g. Flash), or d) recommending that users install an MP4 plugin.
This is in fact a 'problem' with the OS. Many browsers, like some other programs on a particular platform, use operating system resources to accomplish a given task. This is particularly true when it comes to procedures protected by intellectual property rights.
Your codec (H.264, aka "MP4") happens to be a particularly fiercely fought-over piece of IP. Thus, browsers don't go to great lengths to license the IP at hand, but rather use the licensed codecs of the host system.
In your case, Windows XP happens not to be able to decode the media format of your video, and the browser doesn't seem to be able to do it by itself.
Your alternatives now:
Provide additional media streams with your video tag (see Wikipedia for an example, and the sketch after this list)
Try to find out which browser the majority of your users run on XP, then choose a natively supported format (either webm for Chrome or ogg for Firefox)
Just use Flash to play the MP4 (as in the pre-HTML5 days)
Tell users to install an OS-level plugin to play H.264; you could even do that in the fallback text. I won't recommend a specific product, but there are many.
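The alternate-streams option could look like this (a sketch; the file names assume you have encoded the same animation in each format alongside the mp4 that knitr produced):
<video controls="controls" loop="loop">
  <source src="figure/clock.mp4" type="video/mp4" />
  <source src="figure/clock.webm" type="video/webm" />
  <source src="figure/clock.ogv" type="video/ogg" />
  Your browser cannot play these formats; consider installing an H.264 codec.
</video>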