Two Chrome apps/extensions have caught my eye on the webstore:
Screencastify
Snagit
I am aware of chrome.desktopCapture and how I can use getUserMedia() to capture a live stream of a user's desktop.
Example:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: desktop_id,
            minWidth: 1280,
            maxWidth: 1280,
            minHeight: 720,
            maxHeight: 720
        }
    }
}, successCallback, errorCallback);
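For context, the desktop_id above comes from the chrome.desktopCapture API. A minimal sketch of tying the two together (assuming an extension with the "desktopCapture" permission; callback names are mine):

chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], function (desktop_id) {
    navigator.webkitGetUserMedia({
        audio: false,
        video: {
            mandatory: {
                chromeMediaSource: 'desktop',
                chromeMediaSourceId: desktop_id
            }
        }
    }, function (stream) {
        // Preview the capture in a <video> element
        document.querySelector('video').src = URL.createObjectURL(stream);
    }, function (err) {
        console.error('getUserMedia failed:', err);
    });
});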
I'd love to create my own screencast app that allows audio recording as well as embedding webcam capture in a given corner of the video like Screencastify.
I understand capturing the desktop and the audio and video of the user, but how do you put it all together and make it into a video file?
I'm assuming there is a way to create a video file from a getUserMedia() stream on ChromeOS. Something that only ChromeOS has implemented?
How is it done? Thanks in advance for your answers.
The actual encoding and saving of the video file isn't something that's been implemented in Chrome as of yet. Mozilla has it in a nascent form at the moment. I'm unsure of its state in ChromeOS. I can give you a little information I've gleaned during development with the Chrome browser, however.
The two ways to encode, save, and distribute a media stream as a video are client-side and server-side.
Server-side:
Requires a media server of some kind. The best free/open-source solution I've found so far is Kurento. The media stream is uploaded (in chunks or whole) or streamed to the media server, where it is encoded and saved for later use. This also works peer-to-peer, with the server acting as a middleman and recording the data as it streams through.
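For a sense of what that looks like, here is a rough sketch of recording with Kurento's Node client (the WebSocket URI and output path are placeholders, error handling is trimmed, and the SDP negotiation with the browser is omitted):

var kurento = require('kurento-client');

kurento('ws://localhost:8888/kurento', function (error, client) {
    client.create('MediaPipeline', function (error, pipeline) {
        pipeline.create('WebRtcEndpoint', function (error, webRtc) {
            pipeline.create('RecorderEndpoint', {uri: 'file:///tmp/recording.webm'}, function (error, recorder) {
                webRtc.connect(recorder, function (error) {
                    // Once the browser's peer connection is negotiated with webRtc,
                    // everything streaming through it is written to the file.
                    recorder.record(function (error) {});
                });
            });
        });
    });
});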
Client-side:
This is all about browser-based encoding. There are currently two working options that I've tested successfully in Chrome.
Whammy.js:
This method uses a canvas hack to save arrays of WebP images and then encode them into a WebM container. While slow, it works well for video. No audio support; I'm working on that at the moment.
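To give an idea of the flow, a minimal sketch of the Whammy approach (frame rate, dimensions, and element lookup are my assumptions):

var video = document.querySelector('video'); // plays the getUserMedia stream
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
canvas.width = 1280;
canvas.height = 720;

var encoder = new Whammy.Video(15); // target 15 fps

var timer = setInterval(function () {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // grab a frame
    encoder.add(canvas);                                     // stored as a WebP image
}, 1000 / 15);

function stopRecording() {
    clearInterval(timer);
    var webmBlob = encoder.compile(); // stitch the WebP frames into a WebM blob
    // upload the blob, or hand it to the user as a download
}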
videoconverter.js (formerly ffmpeg-asm.js):
This is a straight port of ffmpeg to JavaScript using Emscripten. It works with both audio and video. It's also gigantic, script-wise: around 25 MB uncompressed. The other reason I'm not using it in production is the shaky licensing ground that ffmpeg is on at the moment.
It has not been optimized as much as it could be. It would probably be quite a project to make it reliably production-ready.
Hopefully that at least gives you avenues of research.
Related
I'm trying to figure out whether web-based audio streaming sites use the Web Audio API for playback or whether they rely on the audio element or something else.
Since the user of an audio streaming service typically doesn't need more functionality than starting and stopping the audio, I'd guess that the audio element is enough. If a VU meter is required, I would guess the Web Audio API would be used, since it has a built-in analyser node. But since IE doesn't support the API, I suppose you'd rather use the audio element and reach the IE users than offer fancy extras such as a VU meter.
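For reference, a minimal VU-meter sketch with the Web Audio API's analyser node might look like this (the averaging approach and variable names are mine):

var context = new (window.AudioContext || window.webkitAudioContext)();
var audio = document.querySelector('audio');
var source = context.createMediaElementSource(audio);
var analyser = context.createAnalyser();

source.connect(analyser);
analyser.connect(context.destination); // keep the audio audible

var data = new Uint8Array(analyser.frequencyBinCount);
function vuLevel() {
    analyser.getByteFrequencyData(data);
    var sum = 0;
    for (var i = 0; i < data.length; i++) sum += data[i];
    return sum / data.length; // crude 0-255 level, poll from requestAnimationFrame
}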
I've been looking at the source code for Spotify's web player, Grooveshark, BBC radio and Polish public radio, but I find neither audio elements nor use of the Web Audio API. I did find that Swedish public radio (sr.se) makes use of the audio element, though.
I'm not asking for anyone to go through the JavaScript source code for me, but rather if someone who is familiar with the subject could point me in the right direction.
I don't know of any internet radio services playing back their streams with the Web Audio API currently, but I wouldn't be surprised to find one. I've been working on one myself using Audiocog's excellent Aurora.js library, which enables codecs in-browser that wouldn't normally be available, by decoding the audio with JavaScript. However, for compatibility reasons as you have pointed out, this would be considered a bit experimental today.
Most internet radio stations use progressive HTTP streaming (SHOUTcast/Icecast style) which can be played back within an <audio> element or Flash. This works well but can be hard to get right, especially if you use SHOUTcast servers as they are not quite 100% compatible with HTTP, hurting browser support in some versions of Firefox and a lot of mobile browsers. I ended up writing my own server called AudioPump Server to get better browser and mobile browser support with HTTP progressive.
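Playback of such a stream needs nothing exotic; a minimal sketch with a made-up stream URL:

var audio = new Audio();
audio.src = 'http://stream.example.com:8000/stream.mp3'; // placeholder Icecast-style mount
audio.play(); // the browser keeps pulling the progressive HTTP stream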
Depending on your Flash code and ActionScript version available, you might also have to deal with memory leaks in creative ways, since by default Flash will keep all of your stream data in memory indefinitely as it was never built to stream over HTTP. Many use RTMP with Flash (with Wowza or similar on the server), which Flash was built to stream with to get around this problem.
iOS supports HLS, which is basically a collection of static files served by an HTTP server. The encoder writes a chunk of the stream to each file as the encoding occurs, and the client just downloads them and plays them back seamlessly. The benefit here is that the client can choose a bitrate to stream, raising quality up and down as network conditions change. This also means you can completely switch networks (say, from WiFi to 3G) and still maintain the stream, since chunks are downloaded independently and statelessly. Android "supports" HLS, but it is buggy. Safari is the only browser currently supporting HLS.
Compatibility detection is not something you need to solve yourself. There are many players, such as jPlayer and JW Player, which wrangle HTML5 audio support detection and codec support detection, and provide a common playback API across HTML5 audio and Flash. They also provide an optional UI if you want to get up and running quickly.
Finally, most stations do offer a link to allow you to play the stream in your own media player. This is done by linking to a playlist file (usually M3U or PLS) which is downloaded and often immediately opened (as configured by the user and their browser). The player software loads this playlist and then connects directly to the streaming server to begin playback. On Android, you simply link to the stream URL. It will detect the Content-Type response header, disconnect, and open its configured media player for playback. These days you have to hunt to find these direct links, but they are out there.
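Such a playlist file is tiny; an M3U is little more than a list of stream URLs (the URL below is a placeholder):

#EXTM3U
http://stream.example.com:8000/stream.mp3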
If you ever want to know what a station is using without digging around in their compiled and minified source code, simply use a tool like Fiddler or Wireshark and watch the traffic. You will find that it is very straightforward under the hood.
We use Web Audio for streaming via Aurora.js using a protocol very similar to HTTP Live Streaming. We did this because we wanted the same streaming backend to serve iPhone, Android and the web.
It was all a very long and painful process that took over 6 months of effort, but now that it's all finished, it's all good.
Have a look at http://radioflote.com and feel free to shoot questions or ask for clarifications about anything. Go ahead and disassemble the code if you want to. Not a problem.
I use JWPlayer to play videos from the server. The videos are encoded using the h.264 codec. If I open them in a browser with h.264 support, the video plays nicely and I can seek within it, because the server returns a 206 header and the browser understands that it's partial content. But if I try to play the same video in Opera, for example, the Flash player is used instead and it receives 200 OK! Two problems here:
I can't seek within the video until that part of it has been downloaded
If the video is not "properly" encoded, the player can't even start playing it until the file is fully downloaded
Is there something wrong with Flash properly asking for/understanding HTTP headers? (I've never worked with Flash before, so maybe my question is a bit silly and I just don't know Flash's limitations.)
1) You need to have pseudo-streaming enabled for Flash: http://www.longtailvideo.com/support/jw-player/28855/pseudo-streaming-in-flash. If you can provide a link, though, I will take a look at exactly what is going on here; I am more or less guessing about this one. HTML5 does not require a pseudo-streaming module to be installed on the server side, though. In Flash the default is progressive download, so you can only seek within the downloaded parts; in HTML5 this is not the case. (There is a rough configuration sketch after point 2.)
2) Yes, that is because of the encoding. If your MP4 files cannot be seeked before they are completely downloaded, you will have to fix the MOOV atom (it contains the seeking information), which sits at the end of your video. Use this little application to parse your videos and add the necessary cue points: http://renaun.com/blog/2010/06/qtindexswapper-2/
Also, encoding via HandBrake (http://handbrake.fr/) can fix this as well, so the above tool wouldn't be necessary. You can encode using HandBrake and enable "web optimized", which does the same thing as the Index Swapper tool. HandBrake also has command-line encoding options.
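For reference, a JW Player setup with Flash pseudo-streaming enabled looked roughly like this (the provider and startparam values are from memory and should be checked against the guide linked in point 1):

jwplayer('player').setup({
    file: 'http://example.com/video.mp4', // placeholder URL
    provider: 'http',                     // Flash HTTP pseudo-streaming provider (assumed)
    startparam: 'start'                   // query parameter the server module expects (assumed)
});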
How does one go about streaming video in HTML5? I can live with targeting a single browser of the latest version if I have to. I need to be able to start playing from any location in the movie, even if the entire video has not been loaded by the browser.
WebRTC?
I've already seen this question, and no one has answered it.
Does not allow the viewer to skip to the middle of a video in any browser. They must watch the video straight through start to finish, which is not ideal.
This is the main point for streaming.
Currently if you want to use pure HTML5 and work cross-browser you are limited to progressive streaming with the <video> element.
While that still allows the user to skip ahead via the scrubber, or programmatically by setting .currentTime, there will still be some buffering while the browser reloads enough content to be comfortable playing smoothly.
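A programmatic seek is a one-liner; a minimal sketch:

var video = document.querySelector('video');
video.currentTime = 120; // jump to the 2-minute mark
video.play();            // the browser buffers from the new position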
Solutions like Smooth Streaming and HLS do not work across browsers today, so you would require a Flash or Silverlight plugin, though with MPEG-DASH being recognized by the W3C there is some hope for the future, as samples like this demonstrate: http://dash-mse-test.appspot.com/release-notes.html
For today, however, if you want to stick with an HTML5 solution and you have source material in a format that works with the browser, then you should be fine.
Chrome has implemented the Media Source API in the meantime. Hoping the rest will follow.
http://www.w3.org/TR/media-source/
Abstract
This specification extends HTMLMediaElement to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.
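A minimal sketch of what that enables, per the spec linked above (vendor prefixes of the day aside; the segment URL and codec string are my assumptions):

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'segment0.webm'); // placeholder segment URL
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        // feed the downloaded media segment to the decoder
        sourceBuffer.appendBuffer(new Uint8Array(xhr.response));
    };
    xhr.send();
});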
I would like to clarify certain things with what I found, and raise certain questions about things that I don't know:
Capturing the cam/mic through the browser can be done with getUserMedia().
Is there anything for iOS devices? getUserMedia() doesn't seem to work on iOS devices.
How could I trap the actual audio from a web browser application? (E.g. if I play an audio file and forward it 2 mins, I would like to capture the actual audio stream from the HTML5 player so that I hold the actual audio data.)
You need to use Flash if you are not going to support mobile devices. One good solution is to use wami-recorder.
From their website:
The Problem
As of this writing, most browsers still do not support WebRTC's getUserMedia(), which promises to give web developers microphone access via Javascript. This project achieves the next best thing for browsers that support Flash. Using the WAMI recorder, you can collect audio on your server without installing any proprietary media server software.
The Solution
The WAMI recorder uses a light-weight Flash app to ship audio from client to server via a standard HTTP POST. Apart from the security settings to allow microphone access, the entire interface can be constructed in HTML and Javascript.
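Server-side, that POST is an ordinary upload; a rough sketch of an endpoint receiving it (Node/Express here purely for illustration, as WAMI does not mandate any particular stack):

var express = require('express');
var fs = require('fs');
var app = express();

app.post('/audio', function (req, res) {
    var chunks = [];
    req.on('data', function (chunk) { chunks.push(chunk); }); // raw audio bytes from the Flash app
    req.on('end', function () {
        fs.writeFile('recording.wav', Buffer.concat(chunks), function () {
            res.send('OK');
        });
    });
});

app.listen(3000);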
Hope this helps.
When you play audio using the audio element in Chrome, you get annoying clicks and cracks. At least under my 64-bit Linux installation, even after I formatted and installed a new Fedora version. (Firefox and Opera are fine, and so is even IE9 in a VirtualBox Windows 7.)
But demos using the Web Audio API instead of the audio element have perfect sound. So I was wondering if I could use the Web Audio API like the audio element? But there are some things you don't seem to be able to do with this API. Or am I missing something? The things I couldn't find were:
starting to play a file before it is completely loaded
getting buffer progress updates (depends on the previous point)
getting play progress updates
seeking
Is there a way to do this with the Web Audio API?
This is where I would use it: http://tinyurl.com/magnatune-player
I think you should still use <audio> for streaming at the very least. You can treat it as a MediaElementAudioSourceNode in web audio if you'd like:
var mediaSourceNode = context.createMediaElementSource(audioElement);
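A self-contained version of that hookup (variable names assumed):

var context = new (window.AudioContext || window.webkitAudioContext)();
var audioElement = document.querySelector('audio');
var mediaSourceNode = context.createMediaElementSource(audioElement);
mediaSourceNode.connect(context.destination); // without this, the element goes silent
audioElement.play();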
AFAIK, there is no way to stream web audio directly. In fact, the Web Audio API spec suggests that you don't:
4.9. The AudioBuffer Interface
This interface represents a memory-resident audio asset (for one-shot sounds and other short audio clips). Its format is non-interleaved IEEE 32-bit linear PCM with a nominal range of -1 -> +1. It can contain one or more channels. Typically, it would be expected that the length of the PCM data would be fairly short (usually somewhat less than a minute). For longer sounds, such as music soundtracks, streaming should be used with the audio element and MediaElementAudioSourceNode.
Unless you use a MediaElementAudioSourceNode (which I would assume suffers from the same issues you're having, since it's just using an <audio> tag), AFAIK the answers to your questions are:
starting to play a file before it is completely loaded: No.
getting buffer progress updates (depends on the previous point): Possibly; you could check for progress events on the XHR that loads the audio (see the sketch after this list)
getting play progress updates: No.
seeking: No.
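A sketch of tracking load progress on the XHR that fetches the audio (the URL is a placeholder):

var context = new (window.AudioContext || window.webkitAudioContext)();
var xhr = new XMLHttpRequest();
xhr.open('GET', 'track.mp3', true);
xhr.responseType = 'arraybuffer';
xhr.onprogress = function (e) {
    if (e.lengthComputable) {
        console.log('loaded ' + Math.round(100 * e.loaded / e.total) + '%');
    }
};
xhr.onload = function () {
    context.decodeAudioData(xhr.response, function (buffer) {
        // buffer can now be played through an AudioBufferSourceNode
    });
};
xhr.send();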
In the meantime, Chrome has fixed the audio playback issues, so I don't need any workarounds anymore.