Using Node to stream video to HTML5

I've been playing around with Node and websockets and built a small test app that streams audio using websockets. The server breaks apart the mp3 using createReadStream, throttles the stream using node-throttle, and sends the binary data using the "ws" module.
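Roughly, the server side looks like this (a sketch from memory, not verbatim code; the Throttle constructor argument and the bitrate value are assumptions):

var fs = require('fs');
var Throttle = require('throttle'); // node-throttle
var WebSocketServer = require('ws').Server;

var wss = new WebSocketServer({ port: 8080 });
wss.on('connection', function (socket) {
  fs.createReadStream('song.mp3')
    .pipe(new Throttle(320 * 1024 / 8)) // throttle to roughly the mp3 bitrate, in bytes/sec
    .on('data', function (chunk) {
      socket.send(chunk, { binary: true });
    });
});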
On the client side I pick up the chunks over the websocket and use decodeAudioData (http://www.html5rocks.com/en/tutorials/webaudio/intro/) to decode and play each chunk. It all works reasonably well.
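And the client side, again roughly (the WebSocket URL is a placeholder, and noteOn(0) is the older Web Audio call; newer browsers use start(0)):

var audioContext = new webkitAudioContext();
var ws = new WebSocket('ws://localhost:8080');
ws.binaryType = 'arraybuffer';
ws.onmessage = function (event) {
  audioContext.decodeAudioData(event.data, function (buffer) {
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.noteOn(0);
  });
};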
What I wanted to try next was streaming video to the HTML5 video tag in the same way. But I can't really find any reference material on the web for achieving this in the same manner as my audio test above.
Is there a video equivalent for "decodeAudioData"?
Can I feed chunks of data into a video tag?
I've got a similar sample running that I picked up from...
https://gist.github.com/paolorossi/1993068
But this isn't really what I am looking for. For one thing, it doesn't really seem to be streaming: the client buffers everything before playing it.
Also, as in my audio test, I want the stream to be throttled on the server side so that when a new client connects they join the video at whatever point it is currently at, i.e. 30 minutes in or whatever.
Thanks

OK,
I found a solution to this after much searching.
The MediaSource API is what I was looking for...
var mediaSource = new MediaSource();
var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
sourceBuffer.append(new Uint8Array(data));
This link provided the solution...
http://html5-demos.appspot.com/static/media-source.html
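For anyone else trying this, here is a rough sketch of wiring WebSocket chunks into a MediaSource (the element lookup, the WebSocket URL and the queueing are my own assumptions; newer builds use appendBuffer rather than append, and only one append can be in flight at a time):

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
  var queue = [];

  sourceBuffer.addEventListener('updateend', function () {
    // Feed the next queued chunk once the previous append has finished
    if (queue.length > 0) {
      sourceBuffer.appendBuffer(new Uint8Array(queue.shift()));
    }
  });

  var ws = new WebSocket('ws://localhost:8080'); // assumed endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = function (event) {
    if (sourceBuffer.updating || queue.length > 0) {
      queue.push(event.data);
    } else {
      sourceBuffer.appendBuffer(new Uint8Array(event.data));
    }
  };
});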

Related

Android: using MediaMuxer to combine H.264 + AAC streams, but I ran into some problems

Here is my question:
I use the Android MediaMuxer API to combine an H.264 stream and an AAC stream into an MP4 file. When I want to stop recording I call mMediaMuxer.stop(), and the MP4 file plays fine.
But sometimes something unexpected happens, like the power being cut suddenly, so there is no time to call mMediaMuxer.stop(), and then the file cannot be played at all...
Does anybody know how to fix this problem? I want to be able to play the video even though mMediaMuxer.stop() was never called... or is there another API or SDK that can combine H.264 + AAC streams reliably?

HTML5 & Web audio api: Streaming microphone data from browser to server. Ideal transports and data compression

I am looking to take the audio input from the browser and stream it to multiple listeners. The intended use is for music, so the quality must be MP3 standard or thereabouts.
I have attempted two ways, both yielding unsuccessful results:
WebRTC
Streaming audio directly between browsers works fine, but the audio quality seems to be non-customisable from what I have seen. (It appears to be using the Opus audio codec, but doesn't seem to expose any controls.)
Does anyone have any insight into how to increase the audio quality in WebRTC streams?
Websockets
The issue is the transportation from the browser to the server. The PCM audio data I can acquire via the method below has proven too large to repeatedly stream to the server via websockets. The stream works perfectly in high-speed internet environments, but on slower wifi it is unusable.
var context = new webkitAudioContext();
navigator.webkitGetUserMedia({ audio: true }, gotStream);

function gotStream(stream) {
  var source = context.createMediaStreamSource(stream);
  var proc = context.createScriptProcessor(2048, 2, 2);

  source.connect(proc);
  proc.connect(context.destination);

  proc.onaudioprocess = function (event) {
    // Float32Array of raw PCM samples from the first channel
    var audio_data = event.inputBuffer.getChannelData(0) || new Float32Array(2048);
    console.log(audio_data);
    // send audio_data to server
  };
}
So the main question is, is there any way to compress the PCM data in order to make it easier to stream to the server? Or perhaps there is an easier way to go about this?
There are lots of ways to compress PCM data, sure, but realistically, your best bet is to get WebRTC to work properly. WebRTC is designed to do this - adaptively stream media - although you don't define what you mean by "multiple" listeners (there's a huge difference between 3 listeners and 300,000 simultaneous listeners).
There are several possible ways of resampling and/or compressing your data, though none of them are native. I resampled the data to 8 kHz mono (your mileage may vary) with the xaudio.js lib from the speex.js project. You could also compress the stream using Speex, though that is usually used for voice audio only. In your case, I would probably send the stream to a server, compress it there and stream it to your audience. I really don't believe a plain browser is good enough to serve data to a large audience.
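If you want to stay library-free, a naive sketch of the idea is to decimate the Float32 samples to a lower rate and pack them into 16-bit integers before sending (this is my own illustration, not the xaudio.js code; the 8 kHz target and the ws WebSocket are assumptions, and there is no anti-aliasing filter here):

function downsampleTo16Bit(float32Data, inputRate, targetRate) {
  var ratio = inputRate / targetRate;
  var outLength = Math.floor(float32Data.length / ratio);
  var out = new Int16Array(outLength);
  for (var i = 0; i < outLength; i++) {
    // Clamp to [-1, 1] and scale to the signed 16-bit range
    var sample = Math.max(-1, Math.min(1, float32Data[Math.floor(i * ratio)]));
    out[i] = sample < 0 ? sample * 0x8000 : sample * 0x7FFF;
  }
  return out;
}

// Inside onaudioprocess, instead of sending the raw Float32Array:
// ws.send(downsampleTo16Bit(audio_data, context.sampleRate, 8000).buffer);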
WebRTC seems to default to one mono channel at around 42 kb/s; it seems to be primarily designed for voice.
You can disable the audio processing features using constraints to get a more consistent input from the browser:
navigator.mediaDevices.getUserMedia({
  audio: {
    autoGainControl: false,
    channelCount: 2,
    echoCancellation: false,
    latency: 0,
    noiseSuppression: false,
    sampleRate: 48000,
    sampleSize: 16,
    volume: 1.0
  }
});
Then you should also set the stereo and maxaveragebitrate params on the SDP:
let answer = await peer.conn.createAnswer(offerOptions);
answer.sdp = answer.sdp.replace('useinbandfec=1', 'useinbandfec=1; stereo=1; maxaveragebitrate=510000');
await peer.conn.setLocalDescription(answer);
This should output a string which looks like this:
a=fmtp:111 minptime=10;useinbandfec=1; stereo=1; maxaveragebitrate=510000
This could increase the bitrate up to 510 kb/s for stereo (matching the maxaveragebitrate above), i.e. 255 kb/s per channel. The actual bitrate depends on the speed of your network and the strength of your signal, though.

Get the raw audio in getUserMedia success callback

I'm trying to get the raw audio in getUserMedia() success callback and post it to the server.
The success callback receives the LocalMediaStream object.
var onSuccess = function(s) {
  var m = s.getAudioTracks();
  // m[0] contains the MediaStreamTrack object for audio
  // get the raw audio and do the stuff
}
But there is no attribute or method to get the raw audio from channels in MediaStreamTrack.
How can we access the raw audio into this callback which is called on success of getUserMedia()?
I found the Recorder.js library-- https://github.com/mattdiamond/Recorderjs
But it is recording blank audio in Chrome: Version 26.0.1410.64 m.
It works fine on Chrome: Version 29.0.1507.2 canary SyzyASan.
I think there is an issue with the Web Audio API as used by Recorder.js.
I'm looking for a solution without the Web Audio API that works at least on Chrome's official build.
Check out the MediaStreamAudioSourceNode. You can create one of those (via the AudioContext's createMediaStreamSource method) and then connect the output to RecorderJS or a plain old ScriptProcessorNode (which is what RecorderJS is built on).
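Something along these lines (a rough sketch of the above, not tested; the buffer size and channel count are arbitrary):

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var onSuccess = function (stream) {
  var source = audioContext.createMediaStreamSource(stream);
  var processor = audioContext.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = function (event) {
    // Float32Array of raw PCM samples from the first channel
    var raw = event.inputBuffer.getChannelData(0);
    // post `raw` (or a copy of it) to the server here
  };
  source.connect(processor);
  processor.connect(audioContext.destination);
};
navigator.webkitGetUserMedia({ audio: true }, onSuccess, function (err) { console.log(err); });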
Edit: Just realized you're asking if you can access the raw audio samples without the Web Audio API. As far as I know, I don't think that's possible.

Using local file for Web Audio API in Javascript

I'm trying to get sound working on my iPhone game using the Web Audio API. The problem is that this app is entirely client side. I want to store my mp3s in a local folder (and without being user input driven) so I can't use XMLHttpRequest to read the data. I was looking into using FileSystem but Safari doesn't support it.
Is there any alternative?
Edit: Thanks for the below responses. Unfortunately the Audio API is horribly slow for games. I had this working and the latency just makes the user experience unacceptable. To clarify, what I need is something like -
var request = new XMLHttpRequest();
request.open('GET', 'file:///./../sounds/beep-1.mp3', true);
request.responseType = 'arraybuffer';
request.onload = function() {
  context.decodeAudioData(request.response, function(buffer) {
    dogBarkingBuffer = buffer;
  }, onError);
}
request.send();
But this gives me the errors -
XMLHttpRequest cannot load file:///sounds/beep-1.mp3. Cross origin requests are only supported for HTTP.
Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101
I understand the security risks of reading local files, but surely within your own domain it should be OK?
I had the same problem and I found this very simple solution.
audio_file.onchange = function(){
  var files = this.files;
  var file = URL.createObjectURL(files[0]);
  audio_player.src = file;
  audio_player.play();
};
<input id="audio_file" type="file" accept="audio/*" />
<audio id="audio_player"></audio>
You can test here:
http://jsfiddle.net/Tv8Cm/
Ok, it's taken me two days of prototyping different solutions and I've finally figured out how I can do this without storing my resources on a server. There are a few blogs that detail this, but I couldn't find the full solution in one place, so I'm adding it here. This may be considered a bit hacky by seasoned programmers, but it's the only way I can see this working, so if anyone has a more elegant solution I'd love to hear it.
The solution was to store my sound files as a Base64 encoded string. The sound files are relatively small (less than 30kb) so I'm hoping performance won't be too much of an issue. Note that I put 'xxx' in front of some of the hyperlinks as my n00b status means I can't post more than two links.
Step 1: create Base 64 sound font
First I need to convert my mp3 to a Base64 encoded string and store it as JSON. I found a website that does this conversion for me here - xxxhttp://www.mobilefish.com/services/base64/base64.php
You may need to remove return characters using a text editor but for anyone that needs an example I found some piano tones here - xxxhttps://raw.github.com/mudcube/MIDI.js/master/soundfont/acoustic_grand_piano-mp3.js
Note that in order to work with my example you'll need to remove the header part data:audio/mpeg;base64,
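For instance (a hypothetical one-liner of my own, assuming the piano font above has been loaded):

var cNote = acoustic_grand_piano.C2.replace('data:audio/mpeg;base64,', '');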
Step 2: decode sound font to ArrayBuffer
You could implement this yourself but I found an API that does this perfectly (why re-invent the wheel, right?) - https://github.com/danguer/blog-examples/blob/master/js/base64-binary.js
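If you'd rather not pull in that library, a plain atob-based version (my own sketch) does the same job:

function base64ToArrayBuffer(base64) {
  var binary = atob(base64);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes.buffer;
}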
Step 3: Adding the rest of the code
Fairly straightforward
var cNote = acoustic_grand_piano.C2;
var byteArray = Base64Binary.decodeArrayBuffer(cNote);
var context = new webkitAudioContext();
context.decodeAudioData(byteArray, function(buffer) {
  var source = context.createBufferSource(); // creates a sound source
  source.buffer = buffer;
  source.connect(context.destination); // connect the source to the context's destination (the speakers)
  source.noteOn(0);
}, function(err) { console.log("err(decodeAudioData): " + err); });
And that's it! I have this working through my desktop version of Chrome and also running on mobile Safari (iOS 6 only of course, as Web Audio is not supported in older versions). It takes a couple of seconds to load on mobile Safari (vs. less than 1 second on desktop Chrome), but this might be because it spends time downloading the sound fonts. It might also be the fact that iOS prevents any sound playing until a user interaction event has occurred. I need to do more work looking at how it performs.
Hope this saves someone else the grief I went through.
Because iOS apps are sandboxed, the web view (basically Safari wrapped in PhoneGap) allows you to store your MP3 file locally. I.e., there is no "cross domain" security issue.
This is as of iOS 6, as previous iOS versions didn't support the Web Audio API.
Use the HTML5 audio tag for playing the audio file in the browser.
Ajax requests work over the HTTP protocol, so when you try to get an audio file using file://, the browser marks the request as a cross-domain request. Set the following header in the response -
header('Access-Control-Allow-Origin: *');

Expression Encoder 4 live stream consumed by HTML 5 <video>

I'm trying to serve up a live stream (i.e. completely buffered in memory, cannot access the past) and am having trouble with Expression Encoder 4.
Ideally, I'd like to just stream a bare H.264 byte stream to the client consumed by:
<video id="mainVideoWindow">
<source src='http://localhost/path/to/my/stream.mp4' type='video/mp4' />
</video>
I figured I could stream it to the client just like any other byte stream over HTTP. However, I'm having trouble figuring out the code required to do that (it's my first day with Expression Encoder and I'm not sure how to get at the raw byte stream), nor do I know if it would work in the first place.
An alternative was to use the IIS Live Streaming server:
var source = job.AddDeviceSource(device, null);
job.ActivateSource(source);
job.ApplyPreset(LivePresets.VC1IISSmoothStreaming720pWidescreen);
var format = new PushBroadcastPublishFormat();
format.PublishingPoint = new Uri("http://localhost/test.isml");
job.PublishFormats.Add(format);
job.StartEncoding();
// Let's listen for a keypress or error message to know when to stop encoding
while (Console.ReadKey(true).Key != ConsoleKey.X) ;
// Stop our encoding
Console.WriteLine("Encoding stopped.");
job.StopEncoding();
However, I'm having trouble getting the client-side markup to display the video in Chrome, and I haven't seen anything to indicate that it would work in Chrome (though http://learn.iis.net/page.aspx/854/apple-http-live-streaming-with-iis-media-services indicates how it would work with an iOS device).
Anyone have any insights?
You are trying to consume (with your second example) a Smooth Streaming feed (HTTP Adaptive Streaming by Microsoft) through HTML5, which is not supported.
This could work on iOS devices if you enable Apple HTTP Live Streaming to transmux the fragments into an MPEG-2 Transport Stream. This will also generate an Apple HTTP Live Streaming manifest, which can then be referenced through the video tag.
...I saw that you have the IIS link. Apple HTTP Live Streaming needs to be enabled on the IIS server (IIS Media Services). This will work for iOS devices. QuickTime will come into play...
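If it helps, once Apple HTTP Live Streaming is enabled the markup would look something along these lines (the manifest URL format is an assumption based on the publishing point in your code, and this only plays where HLS is supported natively, e.g. Safari on iOS):

<video id="mainVideoWindow" controls>
  <source src='http://localhost/test.isml/manifest(format=m3u8-aapl)' type='application/vnd.apple.mpegurl' />
</video>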