Playback of an array of base64 audio data - HTML

I am using JavaScript to parse an SWF file and display its contents in an HTML5 canvas.
I am having an issue with playing back the audio data from the SWF's audio stream tags. The audio is split up per frame, and I am able to get the audio data into an array of base64 data-URLs, in the same order as the frames. Creating/destroying audio elements on each frame does not seem like the best way to go about it, but it is the only way I can think of. Is there a better way?
Note: there are rewind/fast-forward/pause buttons in the SWF file as well, so the audio will need to stay aligned with the frames when playback jumps around, so I don't believe I can just create one long audio file from the smaller bits of data.

You will want to load these clips as AudioBuffers and play them through the Web Audio API.
What you currently have are data-URLs, each of which represents a full audio file (with metadata).
Loading all of these into Audio elements may indeed not be a good idea, first because some browsers may not let you create that many, and then because HTMLMediaElements are not meant for precise timing.
So you will first need to fetch all these data-URLs to get back their actual binary content as ArrayBuffers; then you'll be able to extract the raw PCM audio data from these audio files.
// would be the same with data-URLs
const urls = [
  "kbgd2jm7ezk3u3x/hihat.mp3",
  "h2j6vm17r07jf03/snare.mp3",
  "1cdwpm3gca9mlo0/kick.mp3",
  "h8pvqqol3ovyle8/tom.mp3"
].map( (path) => 'https://dl.dropboxusercontent.com/s/' + path );

const audio_ctx = new AudioContext();

Promise.all( urls.map( toAudioBuffer ) )
  .then( activateBtn )
  .catch( console.error );

async function toAudioBuffer( url ) {
  const resp = await fetch( url );
  const arr_buffer = await resp.arrayBuffer();
  return audio_ctx.decodeAudioData( arr_buffer );
}

function activateBtn( audio_buffers ) {
  const btn = document.getElementById( 'btn' );
  btn.onclick = playInSequence;
  btn.disabled = false;

  // simply play one after the other
  // you could add your own logic of course
  async function playInSequence() {
    await audio_ctx.resume(); // making noise requires a user gesture first
    let current = 0;
    while( current < audio_buffers.length ) {
      // create a bufferSourceNode; don't worry, it weighs nothing
      const source = audio_ctx.createBufferSource();
      source.buffer = audio_buffers[ current ];
      // so it makes noise
      source.connect( audio_ctx.destination );
      // [optional] promisify the 'ended' event
      const will_stop = new Promise( (res) => source.onended = res );
      source.start( 0 ); // start playing now
      await will_stop;
      current ++;
    }
  }
}
<button id="btn" disabled>play all in sequence</button>
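The snippet above fetches remote files, but as its first comment notes, the same flow works with data-URLs. If you'd rather skip fetch() and decode the base64 payload directly, here is a minimal sketch; it assumes each entry is a full base64 data-URL (e.g. "data:audio/mpeg;base64,...") and reuses the audio_ctx from above.
// hedged sketch: decode a base64 data-URL straight into an AudioBuffer
async function dataURLToAudioBuffer( data_url ) {
  // strip the "data:audio/...;base64," header (assumes a base64-encoded data-URL)
  const base64 = data_url.slice( data_url.indexOf( ',' ) + 1 );
  const binary = atob( base64 );
  const bytes = new Uint8Array( binary.length );
  for ( let i = 0; i < binary.length; i++ ) {
    bytes[ i ] = binary.charCodeAt( i );
  }
  return audio_ctx.decodeAudioData( bytes.buffer );
}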

I ended up making an array in JavaScript indexed by the sound id, which corresponds to the frame id; each entry contains an audio element created as I parse the tags. The elements are not added to the DOM, and they are created up front, so they persist for the life of the frame handler (as they are stored in a sounds array inside the object) and there is no create/destroy cost.
This way, when I play the frames (the visuals), I can call play() on the audio element corresponding to the active frame. As the frames control which audio is played, the rewind/fast-forward functionality is retained.
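A minimal sketch of that pattern, purely as an illustration; onAudioStreamTag and onFrameShown are hypothetical hooks standing in for the actual parser and frame handler:
// hedged sketch: one persistent Audio element per frame, never attached to the DOM
const sounds = []; // index === frame id

function onAudioStreamTag( frameId, dataURL ) { // called while parsing the SWF tags
  sounds[ frameId ] = new Audio( dataURL );
}

function onFrameShown( frameId ) { // called by the frame handler, including after seeks
  const audio = sounds[ frameId ];
  if ( audio ) {
    audio.currentTime = 0;
    audio.play();
  }
}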

Related

Chrome produces no audio after reaching 50 audio output streams

During my testing, I found that reaching 50 audio output streams (as displayed in the chrome://media-internals/ Audio tab) on a single tab causes the audio output to disappear. Does Chrome have a maximum number of audio output streams allowed per tab? If so, is there some workaround for that? The Chrome version I am using is 87.0.4280.141.
Whenever we mute/unmute the audio (second function below) or adjust the mic volume (first function below), we create a new AudioContext. Could too many AudioContext instances have caused the issue?
private setLocalStreamVolume(stream: MediaStream | undefined) {
  const context = new AudioContext()
  const destination = context.createMediaStreamDestination()
  const gainNode = context.createGain()
  if (stream) {
    for (const track of stream.getTracks()) {
      const sourceStream = context.createMediaStreamSource(new MediaStream([track]));
      sourceStream.connect(gainNode)
      gainNode.connect(destination)
      gainNode.gain.value = this._micVolume
    }
  }
  return destination.stream
}
export function mixStreams(streams: Iterable<(MediaStream | undefined)>) {
  const context = new AudioContext()
  const mixedOutput = context.createMediaStreamDestination()
  for (const stream of streams) {
    if (stream) {
      for (const track of stream.getTracks()) {
        const sourceStream = context.createMediaStreamSource(new MediaStream([track]));
        sourceStream.connect(mixedOutput);
      }
    }
  }
  return mixedOutput.stream.getTracks()[0]
}
Could too many AudioContext interactions have caused the issue?
Too many AudioContext instances certainly will. In fact, on some systems you can only use a single AudioContext.
I'm not sure what your specific use case is, but you probably only need one AudioContext. All your MediaStreamSourceNodes can live in the same context.
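As an illustration only (not the poster's actual code), both helpers could share one module-level context instead of constructing a new one per call; a hedged plain-JS sketch:
// hedged sketch: a single AudioContext reused for every volume adjustment
const sharedContext = new AudioContext();

function setLocalStreamVolume(stream, micVolume) {
  const destination = sharedContext.createMediaStreamDestination();
  const gainNode = sharedContext.createGain();
  gainNode.gain.value = micVolume;
  gainNode.connect(destination);
  if (stream) {
    for (const track of stream.getTracks()) {
      sharedContext.createMediaStreamSource(new MediaStream([track])).connect(gainNode);
    }
  }
  return destination.stream;
}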

Embed every video in a directory on webhost

This may sound silly... but is there any way to embed all the videos in a directory into a webpage? I'm hosting some videos on my website, but right now you have to manually browse the directory and click a link to a video.
I know I can just embed those videos in an HTML page, but is there any way to make it adapt automatically when I add new videos?
How you do this will depend on how you are building your server code and web page code, but the example below, which is Node and Angular based, does exactly what you are asking:
// GET: route to return the list of uploaded videos
router.get('/video_list', function(req, res) {
  // Log the request details
  console.log(req.body);

  // Get the path for the uploaded_videos directory
  var _p = path.resolve(__dirname, 'public', 'uploaded_videos');

  // Find all the files in the directory and add them to a JSON list to return
  var resp = [];
  fs.readdir(_p, function(err, list) {
    // Check if the list is undefined or empty first and if so just return
    if (typeof list == 'undefined' || !list) {
      return;
    }
    for (var i = list.length - 1; i >= 0; i--) {
      // For each file in the directory add an id and filename to the response
      resp.push({
        "index": i,
        "file_name": list[i]
      });
    }
    // Send the response
    res.json(resp);
  });
});
This code is old in web years (i.e. about 3 years old), so the way Node handles routes etc. is likely different now, but the concept remains the same regardless of language:
go to the video directory
get the list of video files in it
build them into a JSON response and send it to the browser
the browser extracts and displays the list
The browser code corresponding to the above server code in this case is:
$scope.videoList = [];

// Get the video list from the Colab Server
GetUploadedVideosFactory.getVideoList().then(function(data) {
  // Note: should really do some type checking etc here on the returned value
  console.dir(data.data);
  $scope.videoList = data.data;
});
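To actually embed the videos, the returned list just needs to be turned into video elements. A framework-free sketch of that last step; the /uploaded_videos/ URL prefix and the container id are assumptions about how the files are served and where they should appear:
// hedged sketch: render one <video> per file name returned by /video_list
function renderVideoList(videoList) {
  var container = document.getElementById('videos'); // assumes <div id="videos"></div> exists
  videoList.forEach(function(entry) {
    var video = document.createElement('video');
    video.controls = true;
    video.src = '/uploaded_videos/' + entry.file_name; // assumed static route to the directory
    container.appendChild(video);
  });
}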
You may find some way to automatically generate a web page index from a directory, but the type of approach above will likely give you more control - you can exclude certain file names or types quite easily, for example.
The full source is available here: https://github.com/mickod/ColabServer

Ways to capture incoming WebRTC video streams (client side)

I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercept the network packets coming to the browser, e.g. using Wireshark, and then decode them, following this article: https://webrtchacks.com/video_replay/
Modify a browser to store the recording as a file, e.g. by modifying Chromium itself
Any screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that could let me capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest, add the following content script:
"content_scripts": [
  {
    "matches": ["http://*/*", "https://*/*"],
    "js": ["payload/inject.js"],
    "all_frames": true,
    "match_about_blank": true,
    "run_at": "document_start"
  }
]
inject.js
var inject = '('+function() {
  // override the browser's default RTCPeerConnection.
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  // make sure it is supported
  if (origPeerConnection) {
    // our own RTCPeerConnection
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);
      // proxy the original peer connection
      var pc = new origPeerConnection(config, constraints);
      // store the old addStream
      var oldAddStream = pc.addStream;
      // addStream is called when a local stream is added.
      // arguments[0] is a local media stream
      pc.addStream = function() {
        console.log("our addStream called!");
        // our MediaStream object
        console.dir(arguments[0]);
        return oldAddStream.apply(this, arguments);
      }
      // ontrack is called when a remote track is added.
      // the media stream(s) are located in event.streams
      pc.ontrack = function(event) {
        console.log("ontrack got a track");
        console.dir(event);
      }
      window.ourPC = pc;
      return pc;
    };
    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // Override objects if they exist in the window object
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;
        // Copy the static methods
        Object.keys(origPeerConnection).forEach(function(x) {
          window[obj][x] = origPeerConnection[x];
        })
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
}+')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to receive the local media streams, while the event object in ontrack is an RTCTrackEvent with a streams property; those remote streams are, I assume, what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their srcObject property to the media stream, e.g.:
pc.ontrack = function(event) {
  // check if our element exists
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // create an audio element
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // set the srcObject to our stream. not sure if you need to clone it
  elm.srcObject = event.streams[0].clone();
  // append the element to the body
  document.body.appendChild(elm);
  // fire a custom event so our content script knows the stream is available.
  // you could pass the id in the "detail" object, for example:
  // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // then access it via e.detail.id in your event listener.
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then in your content script you can listen for that event and access the MediaStream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  var elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
});
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another source.
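For instance, a hedged sketch of recording that captured stream with MediaRecorder; the mime type and what you do with the resulting Blob are assumptions:
// hedged sketch: record the captured stream into a single webm Blob
var recordedChunks = [];
var recorder = new MediaRecorder(stream, { mimeType: "audio/webm" }); // 'stream' from the listener above

recorder.ondataavailable = function(event) {
  if (event.data && event.data.size > 0) {
    recordedChunks.push(event.data);
  }
};
recorder.onstop = function() {
  // combine the chunks into one playable file
  var recording = new Blob(recordedChunks, { type: "audio/webm" });
  var url = URL.createObjectURL(recording); // e.g. to download or upload it
  console.log("recording available at", url);
};

recorder.start();
// later: recorder.stop();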
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could create a blank MediaStream, override navigator.mediaDevices.getUserMedia, and instead of returning the real local MediaStream, return your own.
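A hedged sketch of that idea (untested, as noted above); replacementStream stands in for whatever MediaStream you want to hand to the page:
// hedged sketch: hand the page a custom stream instead of the real microphone/camera
var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function(constraints) {
  console.log("getUserMedia intercepted with constraints", constraints);
  return Promise.resolve(replacementStream); // 'replacementStream' is assumed to exist
  // or fall back to the real devices with: return origGetUserMedia(constraints);
};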
This method should work in Firefox and possibly other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target page's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.

MediaRecorder timeslice segments - only the first segment plays

I have the following on the latest Chrome:
var options = { mimeType: "video/webm;codecs=vp8" };
var segments = [];
internalMediaRecorder = new MediaRecorder(internalStream, options);
internalMediaRecorder.ondataavailable = function (blob) {
  // put blob.data into an array, together with a timestamp
  var blobData = { segment: blob.data, ts: Date.now() };
  segments.push(blobData);
  // render each segment as its own <video> for debugging
  var src = URL.createObjectURL(blobData.segment);
  const $container = $("body");
  const $video = $("<video id='" + blobData.ts + "-" + blob.data.size + "' controls src='" + src + "'></video>").css("max-width", "100%");
  $container.prepend($video);
  // see the UPDATE below about stopping/restarting the recorder here instead
}
internalMediaRecorder.start(segmentLengthInMs); // every 5s
I then compile an array of 5s segments - the blob data is available. However when I create a URL for each of these segments:
URL.createObjectURL(videoSegment)
Only the first video plays. Why is this?
UPDATE
If I stop/start the recorder in ondataavailable, I get playable segments, separated by unplayable mini-segments produced because I call stop right after processing a video. I can "approximate" the desired behavior by doing this and then ignoring blobs below some size threshold to skip the "dead gap" segments. This smells like feet though, and I'd like to get proper segmentation working if possible.
It's expected, as per the spec:
The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.
The resulting blobs are not standalone video files; they are encoded with the requested MIME type, and typically only the first one contains the container's initialization data, which is why only the first segment plays on its own. So you need to merge all the blobs, in the correct order, to generate a playable video file.
var options = { mimeType: "video/webm;codecs=vp8" };
var recordedBlobs = [];
internalMediaRecorder = new MediaRecorder(internalStream, options);
internalMediaRecorder.ondataavailable = function (event) {
  if (event.data && event.data.size > 0) {
    recordedBlobs.push(event.data);
  }
}
internalMediaRecorder.start(segmentLengthInMs); // every 5s

function play() {
  var superBuffer = new Blob(recordedBlobs, { type: 'video/webm' });
  videoElement.src = window.URL.createObjectURL(superBuffer);
}
See the demo
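If you really need independently playable segments rather than one combined recording, here is a hedged sketch of the stop/restart approach described in the question's UPDATE, using a fresh recorder per segment instead of timeslice (note there will be small gaps while each new recorder spins up, and the stop condition is omitted):
// hedged sketch: one MediaRecorder per segment, so every blob is a complete webm file
function recordSegments(stream, segmentLengthInMs, onSegment) {
  function recordOne() {
    var recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8" });
    recorder.ondataavailable = function (event) {
      if (event.data && event.data.size > 0) {
        onSegment(event.data); // each blob has its own headers and plays on its own
      }
    };
    recorder.start();
    setTimeout(function () {
      recorder.stop();   // emits the finished segment via ondataavailable
      recordOne();       // start the next segment
    }, segmentLengthInMs);
  }
  recordOne();
}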

Web Audio API: How to load another audio file?

I want to write a basic script with the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files end up playing at the same time, which is not what I want.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
  // Note: this loads asynchronously
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";

  // Our asynchronous callback
  request.onload = function() {
    var audioData = request.response;
    audioGraph(audioData);
  };

  request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
  // create a sound source
  soundSource = context.createBufferSource();

  // The Audio Context handles creating source buffers from raw binary
  soundBuffer = context.createBuffer(audioData, true/* make mono */);

  // Add the buffered data to our object
  soundSource.buffer = soundBuffer;

  // Plug the cable from one thing to the other
  soundSource.connect(context.destination);

  // Finally
  soundSource.noteOn(context.currentTime);
}

// Stop all of the sounds
function stopSounds() {
  // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
  function () {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
  }
);

document.querySelector('.next').addEventListener('click',
  function() {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
  }
);
You should be pre-loading the sounds into buffers once, at launch, and simply creating a fresh AudioBufferSourceNode whenever you want to play one back.
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the buffers' respective lengths.
To stop a sound, use noteOff.
It sounds like you are missing some fundamental Web Audio concepts. This (and more) is described in detail and shown with samples in the HTML5Rocks tutorial and the FAQ.
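A minimal sketch of that advice applied to the question's code, assuming the two files have already been decoded into buffers at launch (in current browsers noteOn/noteOff are named start/stop):
// hedged sketch: track the currently playing source so it can be stopped before the next one
var currentSource = null;

function playBuffer(buffer) {
  stopSounds(); // never let two sources overlap
  currentSource = context.createBufferSource();
  currentSource.buffer = buffer;
  currentSource.connect(context.destination);
  currentSource.noteOn(context.currentTime); // start(0) in current browsers
}

function stopSounds() {
  if (currentSource) {
    currentSource.noteOff(context.currentTime); // stop(0) in current browsers
    currentSource = null;
  }
}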