WebAudio streaming with fetch : DOMException: Unable to decode audio data - google-chrome

I'm trying to play an infinite stream coming from the fetch API using Chrome 51 (a webcam audio stream as Microsoft PCM, 16-bit, mono, 11025 Hz).
The code works almost OK with MP3 files, aside from some glitches, but it does not work at all with WAV files: for some reason I get "DOMException: Unable to decode audio data".
The code is adapted from this answer: Choppy/inaudible playback with chunked audio through Web Audio API
Any idea if it's possible to make it work with WAV streams?
function play(url) {
    var context = new (window.AudioContext || window.webkitAudioContext)();
    var audioStack = [];
    var nextTime = 0;

    fetch(url).then(function(response) {
        var reader = response.body.getReader();
        function read() {
            return reader.read().then(({ value, done }) => {
                if (done) {
                    // the final read delivers no chunk, so stop before trying to decode
                    console.log('done');
                    return;
                }
                context.decodeAudioData(value.buffer, function(buffer) {
                    audioStack.push(buffer);
                    if (audioStack.length) {
                        scheduleBuffers();
                    }
                }, function(err) {
                    console.log("err(decodeAudioData): " + err);
                });
                return read();
            });
        }
        read();
    });

    function scheduleBuffers() {
        while (audioStack.length) {
            var buffer = audioStack.shift();
            var source = context.createBufferSource();
            source.buffer = buffer;
            source.connect(context.destination);
            if (nextTime == 0)
                nextTime = context.currentTime + 0.01; // add a little latency (10 ms here) to work well across systems - tune this if you like
            source.start(nextTime);
            nextTime += source.buffer.duration; // make the next buffer wait the length of the previous buffer before playing
        }
    }
}
Just use play('/path/to/mp3') to test the code. (The server needs to have CORS enabled, or be on the same domain you run the script from.)

AudioContext.decodeAudioData just isn't designed to decode partial files; it's intended for "short" (but complete) files. Due to the chunking design of MP3, it sometimes works on MP3 streams, but wouldn't on WAV files. You'll need to implement your own decoder in this case.
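Implementing your own decoder is feasible in the question's case, since the stream is plain 16-bit PCM. Below is a minimal sketch (the helper name is mine; it assumes the 44-byte WAV header has already been stripped and that each chunk holds an even number of bytes of little-endian 16-bit mono samples at 11025 Hz - chunk boundaries are not guaranteed to fall on sample boundaries, so a real implementation would need to carry odd bytes over to the next chunk):
// Hypothetical helper: turn one raw PCM chunk (a Uint8Array from reader.read())
// into an AudioBuffer without using decodeAudioData.
function pcmChunkToAudioBuffer(context, chunk, sampleRate) {
    sampleRate = sampleRate || 11025;
    var samples = new Int16Array(chunk.buffer, chunk.byteOffset, chunk.byteLength >> 1);
    var audioBuffer = context.createBuffer(1, samples.length, sampleRate);
    var channelData = audioBuffer.getChannelData(0);
    for (var i = 0; i < samples.length; i++) {
        channelData[i] = samples[i] / 32768; // scale signed 16-bit integers to [-1, 1)
    }
    return audioBuffer;
}
Buffers produced this way can be pushed onto audioStack and scheduled exactly as in scheduleBuffers() above.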

Making the WAV stream play correctly involves adding WAV headers to the chunks, as Raymond suggested, plus some Web Audio glue and packet-ordering checks.
Some cool guys helped me set up this module to handle just that, and it works beautifully on Chrome: https://github.com/revolunet/webaudio-wav-stream-player
It now works on Firefox 57+ with some config flags turned on: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/getReader#Browser_compatibility
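To make "adding WAV headers to the chunks" concrete: prepend a standard 44-byte RIFF/WAVE header describing each chunk, so that decodeAudioData sees a small but complete WAV file. A sketch (the helper name is mine; the defaults match the question's format, 16-bit mono PCM at 11025 Hz; the linked module automates roughly this plus the scheduling and ordering work):
// Build a 44-byte WAV (RIFF) header for a raw PCM chunk so it decodes as a complete file.
function withWavHeader(pcmChunk, sampleRate, numChannels, bytesPerSample) {
    sampleRate = sampleRate || 11025;
    numChannels = numChannels || 1;
    bytesPerSample = bytesPerSample || 2;

    var blockAlign = numChannels * bytesPerSample;
    var header = new ArrayBuffer(44);
    var view = new DataView(header);

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
    }

    writeString(0, 'RIFF');
    view.setUint32(4, 36 + pcmChunk.byteLength, true);  // RIFF chunk size = file size - 8
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                       // fmt sub-chunk size
    view.setUint16(20, 1, true);                        // audio format 1 = PCM
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * blockAlign, true);  // byte rate
    view.setUint16(32, blockAlign, true);
    view.setUint16(34, bytesPerSample * 8, true);       // bits per sample
    writeString(36, 'data');
    view.setUint32(40, pcmChunk.byteLength, true);      // data sub-chunk size

    var out = new Uint8Array(44 + pcmChunk.byteLength);
    out.set(new Uint8Array(header), 0);
    out.set(pcmChunk, 44);
    return out.buffer; // pass this ArrayBuffer to context.decodeAudioData()
}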

Related

Upload HTML5 MediaRecorder's recorded video to AWS S3 in realtime

I want to record video and upload it to AWS S3 in real time.
Here is what I have done so far.
As soon as the user clicks the Record Audio/Video button, the following code snippet gets called:
navigator.getUserMedia({ audio: true, video: true }, function (stream) {
    mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    mediaRecorder.onstop = handleStop;
    mediaRecorder.ondataavailable = handleDataAvailable;
    mediaRecorder.start();
}, function (error) {
    console.log(error); // getUserMedia error callback
});
When the Stop record Audio/Video button is clicked, I upload the video to AWS S3:
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: // . Enter your identity pool
    RoleArn: // . Enter RoleArn
});
AWS.config.credentials.get(function(err) {
    if (err) alert(err);
    console.log(AWS.config.credentials);
});

var bucketName = ''; // Enter your bucket name
var bucket = new AWS.S3({
    params: {
        Bucket: bucketName
    }
});

mediaRecorder.stop();

var blob = new Blob(recordedBlobs, { type: 'video/webm' });
var file = new File([blob], 'testVideo.webm');
var objKey = 'testing/' + file.name;
var params = {
    Key: objKey,
    ContentType: file.type,
    Body: file,
    ACL: 'public-read'
};

bucket.putObject(params, function(err, data) {
    if (err) {
        console.log(" Error while UPLOADING Video :");
    } else {
        console.log(" Success UPLOADING Video :");
    }
});
Everything works fine: the video gets uploaded successfully when Stop Recording is clicked.
Video size varies from 100 MB to 3 GB.
Now the problem: while the video is being uploaded, if the user closes the browser, the upload fails.
So is there a way to upload the video to S3 in real time, i.e. have it upload during the recording phase?
Or is there any other way to upload it before the user closes the browser?
You're using the wrong technology to achieve this. If you need to make sure the recording is uploaded/recorded server-side simultaneously, you should be looking at WebRTC (Web Real-Time Communications).
MediaRecorder output will nearly always be of higher (lossless) quality, and the user's upload speed will often not keep up, so data will increasingly be buffered for upload even if you start uploading chunks immediately. WebRTC, on the other hand, should adjust the quality of the uploaded stream to match the network conditions it encounters (lossy).
For an out-of-the-box solution, try something like OpenTok or Twilio; both make this pretty straightforward.
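For illustration, here is a rough sketch of what the browser side of a WebRTC upload looks like. It is not a complete solution - the signaling exchange and the recording server (sendOfferToServer below is a placeholder) are exactly the parts that OpenTok and Twilio provide for you:
// Minimal client-side sketch: push the camera/mic stream to a recording server over WebRTC.
async function streamToServer() {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

    // Send every captured track to the peer (the server that records the stream).
    stream.getTracks().forEach(track => pc.addTrack(track, stream));

    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);

    // Hypothetical signaling: send the offer to the server, apply its answer.
    const answer = await sendOfferToServer(pc.localDescription);
    await pc.setRemoteDescription(answer);
}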

progressive load and play video from base64 pieces

I have many pieces of a video in base64.
What I want is to play the video progressively as I receive them.
var fileInput = document.querySelector('input#theInputFile'); // multiple
fileInput.addEventListener('change', function(e) {
    var files = fileInput.files;
    for (var i = 0; i < files.length; i++) {
        var file = fileInput.files[i];
        fileLoaded(file, 0, 102400, file.size); // read the file in 100 KB slices
    }
    e.preventDefault();
});

videoA = [];

function fileLoaded(file, ini, end, size) {
    if (end > size) { end = size; }
    var fr = new FileReader();
    fr.onloadend = function(e) {
        if (e.target.readyState == FileReader.DONE) {
            var piece = e.target.result;
            display(piece.replace('data:video/mp4;base64,', ''));
        }
    };
    var blob = file.slice(ini, end, file.type);
    fr.readAsDataURL(blob);

    // advance to the next slice of the same length
    var chunkSize = end - ini;
    var init = end;
    var endt = init + chunkSize;
    if (end < size) {
        fileLoaded(file, init, endt, size);
    }
}
Trying to display the video by chunks:
var a = 0;
function display(vid, ini, end) {
    videoA.push(vid);
    $('#video').attr('src', 'data:video/mp4;base64,' + videoA[a]);
    a++;
}
I know this is not the right way, but I've been searching and no existing answer matches what I'm looking for.
I'm not even sure it is possible.
Thanks!
EDIT
I've tried to play the chunks one by one; the first one plays fine, but the rest of them give the error:
"Uncaught (in promise) DOMException: Failed to load because no supported source was found".
If I could create the base64 chunks correctly, that would be enough for me.
OK, so the solution is to figure out how to create, in the browser, base64 pieces of the original uploaded file that an HTML5 player can actually play.
So I've posted another question asking exactly that:
Chunk video mp4 file into base64 pieces with javascript on browser
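For context on why the later chunks fail: only the first byte-range slice of a plain MP4 contains the file headers (the ftyp/moov boxes), so the remaining slices are not valid standalone files no matter how they are base64-encoded. One standard way to feed a video to an HTML5 player piece by piece is the Media Source Extensions API; the sketch below is my own (not from this thread) and assumes the chunks come from a fragmented MP4 and that the codec string matches the actual file:
// Sketch: progressive playback of MP4 chunks via Media Source Extensions.
// Only works with a *fragmented* MP4 (e.g. produced with
// ffmpeg -movflags frag_keyframe+empty_moov); plain byte-range slices will not play.
function playBase64Chunks(videoElement, base64Chunks) {
    var mediaSource = new MediaSource();
    videoElement.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
        // The codec string is an assumption - it must match the file being played.
        var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
        var queue = base64Chunks.slice();

        function appendNext() {
            if (!queue.length) {
                if (mediaSource.readyState === 'open') mediaSource.endOfStream();
                return;
            }
            var bytes = Uint8Array.from(atob(queue.shift()), function (c) { return c.charCodeAt(0); });
            sourceBuffer.appendBuffer(bytes);
        }

        sourceBuffer.addEventListener('updateend', appendNext); // append chunks one at a time
        appendNext();
    });
}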

ActionScript 3 FFMPEG losing Metadata of video

I am converting video files to the .flv format using FFMPEG so that I can use LoaderMax (GreenSock) to play them. The issue is that when the video is converted with FFMPEG, the metadata is lost, so I cannot later get the duration or current play time with LoaderMax using the code below.
video.getTime();
video.duration();
I could get the duration of the video before converting it with FFMPEG easily enough, but this doesn't solve the issue of getting the current play time. My goal is to allow the user to click on the seek bar and jump to any point in the video, which works, but for obvious reasons I need to be able to show the current time and video length.
I'm now attempting to use FFMPEG together with something called flvtool2, which should rebuild the metadata?
My code currently for this:
nativeProcessInfo = new NativeProcessStartupInfo();
nativeProcessInfo.executable = File.applicationDirectory.resolvePath(ffmpegPath); // path to ffmpeg (included in project files)
//nativeProcessInfo.executable = File.applicationDirectory.resolvePath(flvtool2Path); // path to flvtool2 (included in project files)

var processArgument:Vector.<String> = new Vector.<String>(); // holds command line arguments for converting video

processArgument.push("-i"); // input filename
processArgument.push(filePath);
processArgument.push("-s"); // size
processArgument.push("640x480");
processArgument.push("-b:v"); // bitrate - video
processArgument.push("4800k");
processArgument.push("-b:a"); // bitrate - audio
processArgument.push("6400k");
processArgument.push("-ar"); // audio sampling frequency
processArgument.push("44100");
processArgument.push("-ac"); // audio channels
processArgument.push("2");
processArgument.push("-ab"); // audio bitrate
processArgument.push("160k");
processArgument.push("-f"); // force format
processArgument.push("flv");
processArgument.push("-");

/*processArgument.push("|");
processArgument.push("flvtool2");
processArgument.push("-U");
processArgument.push("stdin");
processArgument.push(filePath);*/

nativeProcessInfo.arguments = processArgument;

if (NativeProcess.isSupported) {
    nativeProcess = new NativeProcess();
    nativeProcess.start(nativeProcessInfo); // start video conversion
    nativeProcess.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, ProgressEventOutputHandler);
    nativeProcess.addEventListener(ProgressEvent.STANDARD_ERROR_DATA, ProgressEventErrorHandler);
    nativeProcess.addEventListener(NativeProcessExitEvent.EXIT, NativeProcessExitHandler);
    nativeProcess.addEventListener(IOErrorEvent.STANDARD_OUTPUT_IO_ERROR, standardIOErrorHandler);
    nativeProcess.addEventListener(IOErrorEvent.STANDARD_ERROR_IO_ERROR, standardIOErrorHandler);
} else {
    trace("!NativeProcess.isSupported");
}
I've uploaded an example project to download which should help explain the problem. To use it you will need to point the ActionScript Properties to the location of Greensock to use LoaderMax and have a video somewhere on your computer to test with. The link is: http://www.prospectportal.co.uk/example.zip
Take this example of working code that converts a video (an AVI in my case) to an FLV video file using ffmpeg via AIR's NativeProcess:
var loader:VideoLoader,
    exe:File = File.applicationDirectory.resolvePath('ffmpeg.exe'),
    video_in:File = File.applicationDirectory.resolvePath('video.avi'),
    video_out:File = File.applicationDirectory.resolvePath('video.flv');

var args:Vector.<String> = new Vector.<String>();
args.push("-i", video_in.nativePath, "-b:v", "800k", "-ar", "44100", "-ab", "96k", "-f", "flv", video_out.nativePath);

var npsi:NativeProcessStartupInfo = new NativeProcessStartupInfo();
npsi.executable = exe;
npsi.arguments = args;

var process:NativeProcess = new NativeProcess();
process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
process.addEventListener(ProgressEvent.STANDARD_ERROR_DATA, onErrorData);
process.addEventListener(IOErrorEvent.STANDARD_OUTPUT_IO_ERROR, onIOError);
process.addEventListener(IOErrorEvent.STANDARD_ERROR_IO_ERROR, onIOError);
process.addEventListener(NativeProcessExitEvent.EXIT, onExit);
process.start(npsi);

function onOutputData(event:ProgressEvent):void
{
    trace("Got: ", process.standardOutput.readUTFBytes(process.standardOutput.bytesAvailable));
}

function onErrorData(event:ProgressEvent):void
{
    trace("ERROR -", process.standardError.readUTFBytes(process.standardError.bytesAvailable));
}

function onExit(event:NativeProcessExitEvent):void
{
    playFLV();
}

function onIOError(event:IOErrorEvent):void
{
    trace(event.toString());
}

function playFLV():void
{
    loader = new VideoLoader(
        video_out.nativePath,
        {
            container: this,
            width: 400,
            height: 300,
            scaleMode: "proportionalInside",
            bgColor: 0x000000,
            autoPlay: true,
            volume: 0.5
        }
    );
    loader.addEventListener(LoaderEvent.COMPLETE, onVideoLoad);
    loader.load();
}

function onVideoLoad(e:LoaderEvent):void
{
    trace(loader.duration); // gives for example: 67.238
    loader.playVideo();
}
Hope that can help.

WinRT MediaElement not working with InMemoryRandomAccessStream

We loaded a video as a byte array, created an InMemoryRandomAccessStream over this array, and tried MediaElement.SetSource. In the UI we get the message "Invalid Source" on the MediaElement. We tried saving this stream to a file and reading a new stream from that file - that works perfectly. Both streams are identical (we checked using SequenceEqual).
What is the problem?
Part of our code:
var stream = await LoadStream();
mediaElement.SetSource(stream, @"video/mp4");
...
public async Task<IRandomAccessStream> LoadStream()
{
    ...
    var writeStream = part.ParentFile.AccessStream.AsStreamForWrite();
    foreach (var filePart in part.ParentFile.Parts)
    {
        writeStream.Write(filePart.Bytes, 0, filePart.Bytes.Length);
    }
    writeStream.Seek(0, SeekOrigin.Begin);
    return part.ParentFile.AccessStream;
}
P.S. - the MIME type is correct for sure.
Thanks!

Choppy/inaudible playback with chunked audio through Web Audio API

I brought this up in my last post, but since it was off topic from the original question I'm posting it separately. I'm having trouble getting my transmitted audio to play back through Web Audio the same way it would sound in a media player. I have tried two different transmission protocols, binaryjs and socketio, and neither makes a difference when playing through Web Audio. To rule out transportation of the audio data as the issue, I created an example that sends the data back to the server after it's received from the client and dumps the result to stdout. Piping that into VLC results in the listening experience you would expect.
To hear the results when playing through vlc, which sounds the way it should, run the example at https://github.com/grkblood13/web-audio-stream/tree/master/vlc using the following command:
$ node webaudio_vlc_svr.js | vlc -
For whatever reason though when I try to play this same audio data through Web Audio it fails miserably. The results are random noises with large gaps of silence in between.
What is wrong with the following code that is making the playback sound so bad?
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if (audioStack.length > 10 && init == 0) { init++; playBuffer(); }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function playBuffer() {
    var buffer = audioStack.shift();
    setTimeout(function() {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(context.currentTime);
        delayTime = source.buffer.duration * 1000; // make the next buffer wait the length of the last buffer before being played
        playBuffer();
    }, delayTime);
}
Full source: https://github.com/grkblood13/web-audio-stream/tree/master/binaryjs
You really can't just call source.start(audioContext.currentTime) like that.
setTimeout() has a long and imprecise latency - other main-thread stuff can be going on, so your setTimeout() calls can be delayed by milliseconds, even tens of milliseconds (by garbage collection, JS execution, layout...) Your code is trying to immediately play audio - which needs to be started within about 0.02ms accuracy to not glitch - on a timer that has tens of milliseconds of imprecision.
The whole point of the web audio system is that the audio scheduler works in a separate high-priority thread, and you can pre-schedule audio (starts, stops, and audioparam changes) at very high accuracy. You should rewrite your system to:
1) track when the first block was scheduled in audiocontext time - and DON'T schedule the first block immediately, give some latency so your network can hopefully keep up.
2) schedule each successive block received in the future based on its "next block" timing.
e.g. (note I haven't tested this code, this is off the top of my head):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init != 0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the previous buffer before playing
    }
}