MediaRecorder timeslice segments - only the first segment plays

I have the following on the latest Chrome:
var options = { mimeType: "video/webm;codecs=vp8" };
internalMediaRecorder = new MediaRecorder(internalStream, options);
internalMediaRecorder.ondataavailable = function (blob) {
    // put blob.data into an array
    var src = URL.createObjectURL(blobData.segment);
    const $container = $("body");
    const $video = $("<video id='" + blobData.ts + "-" + blob.data.size + "' controls src='" + src + "'></video>").css("max-width", "100%");
    $container.prepend($video);
}
internalMediaRecorder.start(segmentLengthInMs); // every 5s
I then compile an array of 5s segments - the blob data is available. However, when I create a URL for each of these segments:
URL.createObjectURL(videoSegment)
Only the first video plays. Why is this?
UPDATE
If I stop/start the recorder in onDataAvailable, I get playable segments here, separated by unplayable mini-segments from onDataAvailable because I call stop right after processing a video. I can "approximate" the desired behavior by doing this and then ignoring blobs that are smaller than some threshold to skip the "dead gap" segments. This smells like feet though, and I'd like to get proper segmentation working if possible.

It's expected, as per the spec:
The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.
The resulting blobs are not standalone video files; they are chunks of a single stream encoded with the requested MIME type, and only the first chunk contains the WebM headers needed for playback on its own. So you need to merge all the blobs in the correct order to generate a playable video file.
var options = { mimeType: "video/webm;codecs=vp8" };
var recordedBlobs = [];
internalMediaRecorder = new MediaRecorder(internalStream, options);
internalMediaRecorder.ondataavailable = function (event) {
    if (event.data && event.data.size > 0) {
        recordedBlobs.push(event.data);
    }
};
internalMediaRecorder.start(segmentLengthInMs); // every 5s

function play() {
    var superBuffer = new Blob(recordedBlobs, { type: 'video/webm' });
    videoElement.src = window.URL.createObjectURL(superBuffer);
}
See the demo
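If you really do need each ~5s piece to be independently playable (as in the UPDATE above), an alternative to thresholding out the "dead gap" blobs is to drop the timeslice argument and restart the recorder for every segment, so each completed recording is a full, playable WebM file (which is what the quoted spec guarantees). A minimal sketch, reusing internalStream and segmentLengthInMs from the question; handleSegment is a hypothetical callback for whatever you do with each finished segment:
var options = { mimeType: "video/webm;codecs=vp8" };

function recordSegment() {
    var recorder = new MediaRecorder(internalStream, options);
    var chunks = [];
    recorder.ondataavailable = function (event) {
        if (event.data && event.data.size > 0) {
            chunks.push(event.data);
        }
    };
    recorder.onstop = function () {
        // each completed recording is a playable WebM file on its own
        handleSegment(new Blob(chunks, { type: 'video/webm' }));
        recordSegment(); // immediately start the next segment
    };
    recorder.start();
    setTimeout(function () { recorder.stop(); }, segmentLengthInMs);
}

recordSegment();
Expect tiny gaps between segments while the recorder restarts; whether that is acceptable depends on your use case.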

Related

playback of array of base64 audio data

I am using JavaScript to parse an SWF file and display the contents in an HTML5 canvas.
I am having an issue with playing back the audio data from the audio stream SWF tags. The audio is split up per frame, and I am able to get the audio data into an array of base64 data, in the same order as the frames. Creating/destroying audio elements on each frame does not seem like the best way to go about it, but it is the only way I can think of. Is there a better way to do this?
Note: There are rewind/fast-forward/pause buttons in the SWF file as well, so the audio will need to stay aligned with the frames when the user seeks back, so I don't believe I can just create one long audio file from the smaller bits of data.
You will want to load these audio files as AudioBuffers and play them through the Web Audio API.
What you currently have are data URLs, each of which represents a full audio file (with metadata).
Loading all of these into Audio elements may indeed not be a good idea: for a start because some browsers may not let you do so, and also because HTMLMediaElements are not meant for precise timing.
So you will first need to fetch all these data URLs to get their actual binary content as ArrayBuffers; then you'll be able to decode the raw PCM audio data from these audio files.
// would be the same with data-URLs
const urls = [
    "kbgd2jm7ezk3u3x/hihat.mp3",
    "h2j6vm17r07jf03/snare.mp3",
    "1cdwpm3gca9mlo0/kick.mp3",
    "h8pvqqol3ovyle8/tom.mp3"
].map( (path) => 'https://dl.dropboxusercontent.com/s/' + path );

const audio_ctx = new AudioContext();

Promise.all( urls.map( toAudioBuffer ) )
    .then( activateBtn )
    .catch( console.error );

async function toAudioBuffer( url ) {
    const resp = await fetch( url );
    const arr_buffer = await resp.arrayBuffer();
    return audio_ctx.decodeAudioData( arr_buffer );
}

function activateBtn( audio_buffers ) {
    const btn = document.getElementById( 'btn' );
    btn.onclick = playInSequence;
    btn.disabled = false;

    // simply play one after the other
    // you could add your own logic of course
    async function playInSequence() {
        await audio_ctx.resume(); // to make noise we need to be allowed by a user gesture
        let current = 0;
        while( current < audio_buffers.length ) {
            // create a bufferSourceNode, no worry, it weighs nothing
            const source = audio_ctx.createBufferSource();
            source.buffer = audio_buffers[ current ];
            // so it makes noise
            source.connect( audio_ctx.destination );
            // [optional] promisify
            const will_stop = new Promise( (res) => source.onended = res );
            source.start(0); // start playing now
            await will_stop;
            current ++;
        }
    }
}
<button id="btn" disabled>play all in sequence</button>
I ended up making an array in JavaScript indexed by the sound id, which corresponds to the frame id; each entry contains an audio element created as I parse the tags. The elements are not added to the DOM, and they are created up front, so they persist for the life of the frame handler (they are stored in a sounds array inside the object), so there is no create/destroy cost.
This way, when I play the frames (the visuals) I can call play() on the audio element corresponding to the active frame. As the frames control which audio is played, the rewind/fast-forward functionality is retained.
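A minimal sketch of that pattern (the handler and function names are hypothetical, and the stream data is assumed to be MP3):
// built once while parsing the SWF tags; never added to the DOM
const sounds = [];

function onAudioStreamTag(frameId, base64Data) {
    // assumed MP3 stream data, wrapped in a data URL
    sounds[frameId] = new Audio("data:audio/mpeg;base64," + base64Data);
}

// called by the frame handler whenever a frame becomes active
function playFrameAudio(frameId) {
    const audio = sounds[frameId];
    if (audio) {
        audio.currentTime = 0; // restart in case it was already played
        audio.play();
    }
}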

Audio recorded with MediaRecorder on Chrome missing duration

I am recording audio (oga/vorbis) files with MediaRecorder. When I record these files through Chrome I get problems: I cannot edit the files with ffmpeg, and when I try to play them in Firefox it says they are corrupt (they do play fine in Chrome though).
Looking at their metadata with ffmpeg I get this:
Input #0, matroska,webm, from '91.oga':
Metadata:
encoder : Chrome
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
[STREAM]
index=0
codec_name=opus
codec_long_name=Opus (Opus Interactive Audio Codec)
profile=unknown
codec_type=audio
codec_time_base=1/48000
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
sample_fmt=fltp
sample_rate=48000
channels=1
channel_layout=mono
bits_per_sample=0
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
TAG:language=eng
[/STREAM]
[FORMAT]
filename=91.oga
nb_streams=1
nb_programs=0
format_name=matroska,webm
format_long_name=Matroska / WebM
start_time=0.000000
duration=N/A
size=7195
bit_rate=N/A
probe_score=100
TAG:encoder=Chrome
As you can see there are problems with the duration. I have looked at posts like this:
How can I add predefined length to audio recorded from MediaRecorder in Chrome?
But even trying that, I got errors when trying to chop and merge files. For example, when running:
ffmpeg -f concat -i 89_inputs.txt -c copy final.oga
I get a lot of this:
[oga # 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57612, current: 1980; changing to 57613. This may result in incorrect timestamps in the output file.
[oga # 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57613, current: 2041; changing to 57614. This may result in incorrect timestamps in the output file.
DTS -442721849179034176, next:42521 st:0 invalid dropping
PTS -442721849179034176, next:42521 invalid dropping st:0
[oga # 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57614, current: 2041; changing to 57615. This may result in incorrect timestamps in the output file.
[oga # 00000000006789c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
DTS -442721849179031296, next:42521 st:0 invalid dropping
PTS -442721849179031296, next:42521 invalid dropping st:0
Does anyone know what we need to do to audio files recorded from Chrome for them to be useful? Or is there a problem with my setup?
Recorder js:
if (navigator.getUserMedia) {
    console.log('getUserMedia supported.');
    var constraints = { audio: true };
    var chunks = [];

    var onSuccess = function(stream) {
        var mediaRecorder = new MediaRecorder(stream);

        record.onclick = function() {
            mediaRecorder.start();
            console.log(mediaRecorder.state);
            console.log("recorder started");
            record.style.background = "red";
            stop.disabled = false;
            record.disabled = true;
            var aud = document.getElementById("audioClip");
            start = aud.currentTime;
        }

        stop.onclick = function() {
            console.log(mediaRecorder.state);
            console.log("Recording request sent.");
            mediaRecorder.stop();
        }

        mediaRecorder.onstop = function(e) {
            console.log("data available after MediaRecorder.stop() called.");
            var audio = document.createElement('audio');
            audio.setAttribute('controls', '');
            audio.setAttribute('id', 'audioClip');
            audio.controls = true;
            var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs="vorbis"' });
            chunks = [];
            var audioURL = window.URL.createObjectURL(blob);
            audio.src = audioURL;
            sendRecToPost(blob); // this just sends the audio blob to the server by POST
            console.log("recorder stopped");
        }

        // collect the recorded data as it becomes available
        mediaRecorder.ondataavailable = function(e) {
            chunks.push(e.data);
        }
    }

    navigator.getUserMedia(constraints, onSuccess, function(err) {
        console.log('The following error occurred: ' + err);
    });
}
I found in the ffmpeg documentation that we can set metadata during the conversion using this option:
//-metadata[:metadata_specifier] key=value (output,per-metadata)
//Set a metadata key/value pair.
ffmpeg -i in.avi -metadata title="my title" out.flv
You can also test whether limiting the duration works in your case:
//-t duration (input/output)
//When used as an input option (before -i), limit the duration of data read from the input file.
//When used as an output option (before an output url), stop writing the output after its duration reaches duration.

MediaSource API - append/concatenate multiple videos together into a single buffer

UPDATE:
So I was able to get this to work by using the timestampOffset property (incrementing it after appending each video).
My questions now:
1) Why isn't this done properly when setting SourceBuffer.mode to 'sequence'?
2) Why is my MediaSource.duration always "Infinity" and not the correct duration?
I'm trying to use the MediaSource API to append multiple video files and play them seamlessly as if it were 1 video.
I've properly transcoded my videos according to the spec (MPEG-DASH) and when playing them individually, they work fine.
However, when I try to append multiple, I run into issues where the segments overwrite one another, the duration is incorrect, etc., even though everything seems to be executing as expected.
I've tried playing around with timestampOffset, but according to the documentation, setting SourceBuffer.mode to 'sequence' should handle this automatically. Also, for some reason, MediaSource.duration always seems to be Infinity even after appending a segment.
Here is my code:
<script>
function downloadData(url, cb) {
    console.log("Downloading " + url);
    var xhr = new XMLHttpRequest();
    xhr.open('get', url);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        cb(new Uint8Array(xhr.response));
    };
    xhr.send();
}

if (MediaSource.isTypeSupported('video/mp4; codecs="avc1.64001E"')) {
    console.log("mp4 codec supported");
}

var videoSources = [
    "{% static 'mp4/ff_97.mp4' %}",
    "{% static 'mp4/ff_98.mp4' %}",
    "{% static 'mp4/ff_99.mp4' %}",
    "{% static 'mp4/ff_118.mp4' %}"
];

var mediaSource = new MediaSource();

mediaSource.addEventListener('sourceopen', function(e) {
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
    sourceBuffer.mode = 'sequence';
    console.log('SourceBuffer mode set to ' + sourceBuffer.mode);

    sourceBuffer.addEventListener('updateend', function(e) {
        console.log('Finished updating buffer');
        console.log('New duration is ' + String(mediaSource.duration));
        if (videoSources.length == 0) {
            mediaSource.endOfStream();
            video.currentTime = 0;
            video.play();
            return;
        }
        downloadData(videoSources.pop(), function(arrayBuffer) {
            console.log('Finished downloading buffer of size ' + String(arrayBuffer.length));
            console.log('Updating buffer');
            sourceBuffer.appendBuffer(arrayBuffer);
        });
        console.log('New duration: ' + String(mediaSource.duration));
    });

    downloadData(videoSources.pop(), function(arrayBuffer) {
        console.log('Finished downloading buffer of size ' + String(arrayBuffer.length));
        console.log('Updating buffer');
        sourceBuffer.appendBuffer(arrayBuffer);
    });
}, false);

var video = document.querySelector('video');
video.src = window.URL.createObjectURL(mediaSource);
And here are the logs:
mp4 codec supported
(index):78 SourceBuffer mode set to sequence
(index):45 Downloading /static/mp4/ff_118.mp4
(index):103 Finished downloading buffer of size 89107
(index):104 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_99.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 46651
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_98.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 79242
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_97.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 380070
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
2) Why is my MediaSource.duration always "Infinity" and not the correct duration?
You need to call MediaSource.endOfStream() in order for the MediaSource object to calculate the actual duration of the segments in its SourceBuffer. I see that you are doing this, but it looks like you're trying to access MediaSource.duration before calling endOfStream(). I suggest you read up on the end of stream algorithm in the MSE spec; you'll notice that it leads to invoking the duration change algorithm.
If you want your <video> element to report a duration before calling MediaSource.endOfStream(), you can actually set MediaSource.duration yourself based on your own estimate of the segments appended.
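One way to do that (a minimal sketch, reusing the sourceBuffer and mediaSource from your code) is to report the end of what has been buffered so far after each successful append:
sourceBuffer.addEventListener('updateend', function () {
    if (mediaSource.readyState === 'open' && !sourceBuffer.updating &&
        sourceBuffer.buffered.length > 0) {
        // report the appended content seen so far as the duration estimate
        mediaSource.duration = sourceBuffer.buffered.end(sourceBuffer.buffered.length - 1);
    }
});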
1) Why isn't this done properly when setting SourceBuffer.mode to 'sequence'?
As far as I know, it should. But I have preferred the explicit timestampOffset approach myself, as it provides more flexibility when wanting to append segments far ahead in the buffer (i.e. if the user seeks way ahead of the current buffer end, you'll want to start loading and appending after a gap). Although I appreciate that seeking may not be a requirement in your use case.
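For reference, a minimal sketch of that explicit timestampOffset approach, leaving the SourceBuffer in its default 'segments' mode and assuming each transcoded file's timestamps start at zero; appendNext is a hypothetical helper that should only be called while the buffer is not updating (e.g. from your 'updateend' handler):
function appendNext(sourceBuffer, arrayBuffer) {
    if (sourceBuffer.buffered.length > 0) {
        // shift the new segment so it starts right where the previous one ended
        sourceBuffer.timestampOffset =
            sourceBuffer.buffered.end(sourceBuffer.buffered.length - 1);
    }
    sourceBuffer.appendBuffer(arrayBuffer);
}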
Why isn't this done properly when setting SourceBuffer.mode to 'sequence'?
You have to set it later in the code. When exactly, I'm not sure, but when I set it right before appending, it works:
sourceBuffer.mode = 'sequence';
sourceBuffer.appendBuffer(chunk);

HTML5 <audio> poor choice for LIVE streaming?

As discussed in a previous question, I have built a prototype (using MVC Web API, NAudio and NAudio.Lame) that is streaming live low quality audio after converting it to mp3. The source stream is PCM: 8K, 16-bit, mono and I'm making use of html5's audio tag.
On both Chrome and IE11 there is a 15-34 second delay (high latency) before audio is heard in the browser, which, I'm told, is unacceptable for our end users. Ideally the latency would be no more than 5 seconds. The delay occurs even when using the preload="none" attribute within my audio tag.
Looking more closely at the issue, it appears as though both browsers will not start playing audio until they have received ~32K of audio data. With that in mind, I can affect the delay by changing Lame's MP3 'bitrate' setting. However, if I reduce the delay (by sending more data to the browser for the same length of audio), I will introduce audio drop-outs later.
Examples:
If I use Lame's V0 encoding the delay is nearly 34 seconds which requires almost 0.5 MB of source audio.
If I use Lame's ABR_32 encoding, I can reduce the delay to 10-15 seconds but I will experience pauses and drop-outs throughout the listening session.
Questions:
Any ideas how I can minimize the start-up delay (latency)?
Should I continue investigating various Lame 'presets' in hopes of picking the "right" one?
Could it be that MP3 is not the best format for live streaming?
Would switching to Ogg/Vorbis (or Ogg/OPUS) help?
Do we need to abandon HTML5's audio tag and use Flash or a java applet?
Thanks.
You cannot reduce the delay, since you have no control over the browser's code or its buffering size. The HTML5 specification does not enforce any constraint on buffering, so I don't see any reason why this would improve.
You can, however, implement a solution with the Web Audio API (it's quite simple), where you handle the streaming yourself.
If you can split your MP3 into fixed-size chunks (so that each chunk's size is known beforehand, or at least at receive time), then you can have live streaming in 20 lines of code. The chunk size will be your latency.
The key is to use AudioContext::decodeAudioData.
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var offset = 0;
var byteOffset = 0;
var minDecodeSize = 16384; // This is your chunk size

var request = new XMLHttpRequest();
request.onprogress = function(evt)
{
    if (request.response)
    {
        var size = request.response.length - byteOffset;
        if (size < minDecodeSize) return;
        // In Chrome, XHR stream mode gives text, not ArrayBuffer.
        // In Firefox, you can get an ArrayBuffer as is
        var ab, buf;
        if (request.response instanceof ArrayBuffer)
        {
            ab = request.response;
            buf = new Uint8Array(ab);
        }
        else
        {
            ab = new ArrayBuffer(size);
            buf = new Uint8Array(ab);
            for (var i = 0; i < size; i++)
                buf[i] = request.response.charCodeAt(i + byteOffset) & 0xff;
        }
        byteOffset = request.response.length;
        context.decodeAudioData(ab, function(buffer) {
            playSound(buffer);
        }, onError);
    }
};
request.open('GET', url, true);
request.responseType = expectedType; // 'stream' in Chrome, 'moz-chunked-arraybuffer' in Firefox, 'ms-stream' in IE
request.overrideMimeType('text/plain; charset=x-user-defined');
request.send(null);

function playSound(buffer) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                    // tell the source which sound to play
    source.connect(context.destination);       // connect the source to the context's destination (the speakers)
    source.start(offset);                      // schedule this chunk right after the previous one
    // note: on older systems, you may have to use the deprecated noteOn(time);
    offset += buffer.duration;
}

Web Audio API: How to load another audio file?

I want to write a basic script using the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files end up playing at the same time, which is not what I want.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
    // Note: this loads asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";

    // Our asynchronous callback
    request.onload = function() {
        var audioData = request.response;
        audioGraph(audioData);
    };

    request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
    // create a sound source
    soundSource = context.createBufferSource();

    // The Audio Context handles creating source buffers from raw binary
    soundBuffer = context.createBuffer(audioData, true/* make mono */);

    // Add the buffered data to our object
    soundSource.buffer = soundBuffer;

    // Plug the cable from one thing to the other
    soundSource.connect(context.destination);

    // Finally
    soundSource.noteOn(context.currentTime);
}

// Stop all of the sounds
function stopSounds(){
    // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
    function () {
        stopSounds();
        playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
    }
);

document.querySelector('.next').addEventListener('click',
    function() {
        stopSounds();
        playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
    }
);
You should be pre-loading the sounds into buffers once, at launch, and simply creating a new AudioBufferSourceNode whenever you want to play one back (source nodes are single-use).
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the buffers' respective lengths.
To stop sounds, use noteOff.
Sounds like you are missing some fundamental web audio concepts. This (and more) is described in detail and shown with samples in this HTML5Rocks tutorial and the FAQ.
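A minimal sketch of that pattern, using the current API names (decodeAudioData and start/stop in place of the deprecated createBuffer/noteOn/noteOff) and keeping a reference to the current source node so stopSounds() has something to stop; the URLs are the ones from the question:
var context = new (window.AudioContext || window.webkitAudioContext)();
var buffers = {};        // url -> decoded AudioBuffer, filled once at launch
var currentSource = null;

function preload(url) {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = function() {
        context.decodeAudioData(request.response, function(buffer) {
            buffers[url] = buffer;
        });
    };
    request.send();
}

function stopSounds() {
    if (currentSource) {
        currentSource.stop(0);   // noteOff(0) on older implementations
        currentSource = null;
    }
}

function playSound(url) {
    stopSounds();
    // source nodes are single-use, so create a fresh one for every playback
    currentSource = context.createBufferSource();
    currentSource.buffer = buffers[url];
    currentSource.connect(context.destination);
    currentSource.start(0);      // noteOn(0) on older implementations
}

["hello.mp3", "nokia.mp3"].forEach(function(name) {
    preload("http://thelab.thingsinjars.com/web-audio-tutorial/" + name);
});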