I have a simple site with an HLS stream from a m3u8 playlist in an autoplay video tag. If the stream stops for more than 10s or so it will not "catch up" and start again when the stream is restarted - I need to manually refresh the page to get it to play again.
Is there a way, with JS (or something else), to automatically refresh the page after the video has been buffering for X seconds (say, 5)?
I managed to get this working with some help from VC.One's answer. The code below reloads the page after the video has been buffering for 5 seconds, but only if the video was previously playing; otherwise it would get stuck in a reload loop whenever the stream never starts. I am still looking for a way to check whether the stream is live without actually reloading the page. video.load() and video.play() are giving me errors, but I will update this post when I figure it out.
var reloadCheck;
var reloadThisTime;
var video = document.getElementById("videotag");
var sourcetag = document.getElementById("sourcetag");

video.addEventListener('waiting', (event) => {
  console.log("No connection");
  // reload the page if we are still buffering 5 seconds from now
  reloadCheck = setTimeout(function () {
    if (reloadThisTime) {
      location = '';
    }
  }, 5000);
});

video.addEventListener('playing', (event) => {
  console.log("Connected");
  clearTimeout(reloadCheck);
  // only allow a reload once the video has actually played
  reloadThisTime = true;
});
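For the open question above (checking whether the stream is live without reloading), here is one possible approach, offered as an assumption rather than something from the original post: poll the playlist URL with fetch and reload only once it responds again. The waitForStream name and the 5-second interval are illustrative.

function waitForStream(playlistUrl, intervalMs) {
  // probe the playlist periodically without reloading the page
  var poll = setInterval(function () {
    fetch(playlistUrl, { cache: 'no-store' })
      .then(function (response) {
        if (response.ok) {
          clearInterval(poll);
          location.reload(); // the stream is back; reload once
        }
      })
      .catch(function () { /* still down; keep polling */ });
  }, intervalMs || 5000);
}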
I have an mp4 video inside an HTML video tag, with the src attribute pointing to a Google Drive file.
See my code below:
<video autoplay muted loop id="video" class="" src="https://drive.google.com/uc?export=download&id=1qSC6ySf6ZZldFRpuBx9EXvDsD-mfve1Z" type="video/mp4"> </video>
Note: when I serve the mp4 video directly from my own server it never pauses, but when the video is loaded from Google Drive it pauses after a few minutes. Since I am not showing any controls, I need the video to play continuously.
Codepen link: https://codepen.io/5salman/pen/bGRMyKj
There's nothing inherent in Google Drive (that I can think of) that would cancel autoplay. More likely you're hitting an error event. The video will be loaded via "206 Partial Content" byte-range requests; if it doesn't end up being locally cached, that could make it more susceptible to network issues than a server that doesn't support byte-range requests. Also check the Network tab in DevTools for any network errors.
Mitigation options:
Reduce the size of your video! It's 14 MB for 20 seconds of video. Use HandBrake or a similar tool to re-encode the video at a lower bitrate/size. That will reduce network traffic and make it less likely to choke the viewing device.
Add an error event handler on the <video> tag. This can help you with diagnostics, and you can also use it to trigger the video to (try to) play again.
function reloadVideo(vidElement, preserveTime = true) {
  if (preserveTime) {
    // wait to set the current time again until after the video is playable
    const position = vidElement.currentTime;
    const onCanPlay = () => {
      if (!isNaN(position) && position < vidElement.duration) {
        // seek just a frame or so ahead, in case there was a decode error on the video
        vidElement.currentTime = position + 1 / 30;
      }
      vidElement.removeEventListener('canplay', onCanPlay);
    };
    vidElement.addEventListener('canplay', onCanPlay);
  }
  // resetting the src forces the element to reload the media
  const src = vidElement.currentSrc;
  vidElement.src = "";
  vidElement.src = src;
}
var vidElement = document.querySelector('#video');

// retry on network and decode errors
var retryErrorCodes = [MediaError.MEDIA_ERR_NETWORK, MediaError.MEDIA_ERR_DECODE];
var SECONDS_BEFORE_RETRY = 5;

vidElement.addEventListener('error', evt => {
  if (retryErrorCodes.includes(vidElement.error.code)) {
    setTimeout(reloadVideo, SECONDS_BEFORE_RETRY * 1000, vidElement);
  }
});
(Not recommended) If you need the video to keep running no matter what, you can add a setInterval function that restarts the video whenever it's paused. Keep in mind that these "zombie" functions can get annoying if you don't manage them carefully.
// use setInterval to keep retrying the playback
var SECONDS_BETWEEN_PLAYING_CHECKS = 10;

var keepPlayingTimer = setInterval(function () {
  var vidElement = document.querySelector('#video');
  // make sure the element is still there
  if (vidElement && typeof vidElement.play === 'function') {
    // if it's not playing, start it playing again
    if (vidElement.paused) {
      vidElement.play();
    }
  } else {
    // don't call this function again if the video element is gone
    clearInterval(keepPlayingTimer);
  }
}, SECONDS_BETWEEN_PLAYING_CHECKS * 1000);
I am using the getUserMedia and MediaRecorder APIs to record video from the webcam.
I am using Chrome version 80.
How can I record video from getUserMedia while mixing in an mp3 with JavaScript? The mp3 should be able to play, pause, and stop. I don't know how to mix the mp3 into the video stream live.
When I removeTrack and addTrack, MediaRecorder fails with: Failed to execute 'stop' on 'MediaRecorder': The MediaRecorder's state is 'inactive'.
My code is on codepen: https://codepen.io/zhishaofei3/pen/eYNrYGj
The main code:
function getFileBuffer(filepath) {
  return fetch(filepath, { method: 'GET' }).then(response => response.arrayBuffer())
}

function mp3play() {
  getFileBuffer('song.mp3')
    .then(buffer => context.decodeAudioData(buffer))
    .then(buffer => {
      console.log(buffer)
      const source = context.createBufferSource()
      source.buffer = buffer
      let volume = context.createGain()
      volume.gain.value = 1
      source.connect(volume)
      dest = context.createMediaStreamDestination()
      volume.connect(dest)
      // volume.connect(context.destination)
      source.start(0)
      const _audioTrack = stream.getAudioTracks();
      if (_audioTrack.length > 0) {
        _audioTrack[0].stop();
        stream.removeTrack(_audioTrack[0]);
      }
      console.log(dest.stream)
      console.log(dest.stream.getAudioTracks()[0])
      stream.addTrack(dest.stream.getAudioTracks()[0])
    })
}
Thank you!
Many containers don't support adding/removing tracks like that, and it's doubtful the MediaRecorder API supports it at all. It's an unusual thing to do.
You need to create the stream you're going to record, with all of the tracks you want, before instantiating the MediaRecorder. Therefore, you need to do things in this order (see the sketch after this list):
1. Set up your AudioContext.
2. Call getUserMedia(). (And while you're at it, set audio: false in your constraints. No need to open a microphone if you're not using one.)
3. Use videoStream.getVideoTracks() and dest.stream.getAudioTracks() to get all of the tracks.
4. Create a new MediaStream with those tracks: new MediaStream([audioTrack, videoTrack]).
5. Now run your MediaRecorder on this new MediaStream and you'll have what you want.
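A minimal sketch of those steps, assuming a song.mp3 file is available; the startRecording and mixed names are illustrative, not from the original post:

const context = new AudioContext();

async function startRecording() {
  // 1. Decode the mp3 and route it into a MediaStream destination.
  const response = await fetch('song.mp3');
  const audioBuffer = await context.decodeAudioData(await response.arrayBuffer());
  const source = context.createBufferSource();
  source.buffer = audioBuffer;
  const dest = context.createMediaStreamDestination();
  source.connect(dest);

  // 2. Camera video only; no microphone.
  const videoStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });

  // 3 & 4. Combine the mp3 audio track and the camera video track into one stream.
  const mixed = new MediaStream([
    dest.stream.getAudioTracks()[0],
    videoStream.getVideoTracks()[0],
  ]);

  // 5. Record the combined stream.
  const recorder = new MediaRecorder(mixed);
  recorder.ondataavailable = (e) => { /* collect e.data chunks here */ };
  source.start(0);
  recorder.start();
}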
I took an mp4 video, encoded it for HTTP Live Streaming (HLS) using ffmpeg — resulting in a series of myvideo.ts files and a myvideo.m3u8 playlist — and am attempting to play it using the HTML <video> tag in Safari, with the native HLS capabilities of that browser:
<video id="myvideo" src="myvideo.m3u8" loop="loop"></video>
It plays, once. But despite the "loop" attribute in the video tag, it doesn't loop. It stays frozen on the last frame of the video.
If I try to detect the end of the video using an event listener as described here:
Detect when an HTML5 video finishes
… that event never seems to fire.
The "paused" property in javascript (document.getElementById('myvideo').paused) evaluates to false, even after the video has played once and stopped.
How can I get the video to loop in Safari?
HLS is intended to be a live stream, so it never actually "finishes" in order to automatically loop. I used a JavaScript timer as a hack to get around this:
var LOOP_WAIT_TIME_MS = 1000,
    vid = document.getElementById("myvideo"),
    loopTimeout;

vid.addEventListener('play', function () {
  if (!/\.m3u8$/.test(vid.currentSrc)) return;
  // schedule a pause/play cycle for just after the stream should have ended
  loopTimeout = window.setTimeout(function () {
    loopTimeout = null;
    vid.pause();
    vid.play();
  }, (vid.duration - vid.currentTime) * 1000 + LOOP_WAIT_TIME_MS);
});

vid.addEventListener('pause', function () {
  if (!/\.m3u8$/.test(vid.currentSrc)) return;
  if (loopTimeout) {
    clearTimeout(loopTimeout);
    loopTimeout = null;
  }
});
When I use getUserMedia() for screen share, I don't get audio.
Things I would like to do, but couldn't find any relevant material on:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, the tray below appears. What is it called, and how can I modify it (e.g. its looks)?
Screenshot:
If you want one stream made of your screen share for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia, with constraints set to screen and audio respectively. Then you will have to put the tracks into a common stream (see the sketch below). Eventually, you can attach that stream to a peer connection.
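A minimal sketch of that approach in a modern browser; it uses getDisplayMedia for the screen track (the standardized replacement for the old screen-constrained getUserMedia call this answer predates), and the function name is illustrative:

function getScreenWithMicAudio() {
  // one call for the screen video, one for the microphone audio
  return Promise.all([
    navigator.mediaDevices.getDisplayMedia({ video: true }),
    navigator.mediaDevices.getUserMedia({ audio: true }),
  ]).then(([screenStream, micStream]) =>
    // put both tracks into a common stream
    new MediaStream([
      screenStream.getVideoTracks()[0],
      micStream.getAudioTracks()[0],
    ])
  );
}

The resulting stream can then be attached to an RTCPeerConnection with addTrack.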
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
You will not have synchronization between audio and video (not so important for screen sharing).
You will need two connections => twice the number of ports => more chances of failure. That is more likely to be a problem.
As for the tray: it is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of any way to manipulate it.
It's possible with npm's msr (MediaStreamRecorder) on Chrome.
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({ audio: true }, function (audioStream) {
      // add the microphone audio track to the screen stream
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both screen and audio in the same peer; you have to open two peers.
I want to write a basic script with the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files play at the same time, which is not what I want.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
  // Note: this loads asynchronously
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";

  // Our asynchronous callback
  request.onload = function () {
    var audioData = request.response;
    audioGraph(audioData);
  };

  request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
  // create a sound source
  soundSource = context.createBufferSource();

  // The Audio Context handles creating source buffers from raw binary
  soundBuffer = context.createBuffer(audioData, true /* make mono */);

  // Add the buffered data to our object
  soundSource.buffer = soundBuffer;

  // Plug the cable from one thing to the other
  soundSource.connect(context.destination);

  // Finally
  soundSource.noteOn(context.currentTime);
}

// Stop all sounds
function stopSounds() {
  // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
  function () {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
  }
);

document.querySelector('.next').addEventListener('click',
  function () {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
  }
);
You should be pre-loading sounds into buffers once, at launch, and simply creating a fresh AudioBufferSourceNode whenever you want to play one back (source nodes are one-shot and cannot be reused).
To play multiple sounds in sequence, schedule them using noteOn(time), one after the other, based on the respective buffer lengths.
To stop sounds, use noteOff. A sketch of this pattern follows.
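A minimal sketch of the stop/start pattern, using the legacy noteOn/noteOff names from the question's code (the modern equivalents are start/stop); the currentSource and playBuffer names are illustrative:

var currentSource = null;

function playBuffer(buffer) {
  stopSounds();
  // AudioBufferSourceNodes are one-shot, so create a fresh one per playback
  currentSource = context.createBufferSource();
  currentSource.buffer = buffer;
  currentSource.connect(context.destination);
  currentSource.noteOn(0); // start(0) in the modern API
}

function stopSounds() {
  if (currentSource) {
    currentSource.noteOff(0); // stop(0) in the modern API
    currentSource = null;
  }
}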
Sounds like you are missing some fundamental web audio concepts. This (and more) is described in detail and shown with samples in this HTML5Rocks tutorial and the FAQ.