I would like to know if a Flash fallback (MediaElement.js or any other) can support frame-accurate seeking for an H.264 video in an MP4 container, transferred over HTTP from an IIS or Apache web server (HTTP pseudo-streaming or progressive download). In HTML5, we can seek to any frame number and the browser will decode whatever is needed to display the requested frame index.
Currently, what I observe with every Flash player is a seek to the closest GOP or keyframe. Unfortunately, this is not enough for my project, as I really need to be able to reach any arbitrary frame.
Do you know if there is a way to get frame-accurate seeking in Flash using HTTP pseudo-streaming? (I know this is possible with FMS and RTMP streaming.)
I am attempting to write a web service (using C# and Web API, though the actual server technology likely isn't important) that hosts dynamic video files for consumption by a basic HTML5 video element.
Here is my situation and my requirements:
The video is being transcoded on the fly and so the final file size is not known. I cannot provide a Content-Length. However, the time duration of the video is known in advance.
The video must be immediately playable when the page loads, even if it's not done transcoding.
The user must be able, via the HTML5 video controls, to seek backwards and forwards in the video. If they attempt to seek forward to a point that is not transcoded, the request will block until the transcoding catches up.
I tried to use a 206 Partial Content response with the Content-Range header, but since I am unable to provide a content length, the player in Chrome seems unable to seek beyond the first chunk of video it gets, and the player in Firefox doesn't even attempt to download more than the first chunk. It is also invalid to respond with a range outside what the client asked for, but the video player always asks for bytes 0 onwards.
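For reference, the general shape of the 206 response I'm describing, with illustrative values (the Content-Range grammar does allow a * in place of the total size when it's unknown, though I don't know how well players handle that form):

    HTTP/1.1 206 Partial Content
    Content-Type: video/mp4
    Accept-Ranges: bytes
    Content-Range: bytes 0-1048575/*
    Content-Length: 1048576

Here the Content-Length covers only the part being sent, and the * stands for "complete length unknown".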
Without a content-length, I considered using Transfer-Encoding: chunked and chunking the output. However, Chrome does not let you seek through a video if the server does not support ranged requests.
I have also considered specifying a fake content length of 1TB or something ridiculous just so I can do the ranged response option but I do not know if that will affect the progress bar or seek capabilities. Does the HTML5 video player determine the progress bar's dimensions based on file size or duration?
So what are my options? I am sure this is a problem that's been solved before. Can ranged responses be used in conjunction with chunked encoding?
Is there any way to stream MKV files on a webpage using JavaScript/HTML or any other technology? I found many questions about this, but I really want to know the answer: is this possible in any way? Maybe AJAX, JavaScript, PHP, HTML? Maybe some external libraries? Anything?
I was also wondering how YouTube works. Is it possible to upload an MKV file? If so, how are those videos streamed to the end user?
I know that browsers don't support MKV natively, but maybe there is some way to force HTML to do it?
Any help will be appreciated.
YouTube most probably works using the DASH protocol format. On the server side, the source audio and source video are separately divided into segments of different bitrates/qualities. A manifest keeps an index of all available segments and their locations, which allows the player to switch quality during playback.
On the client side, the DASH manifest (it should be the same with the other main technology, HLS) is used by the player to locate the segments to load, in order to feed the content into two separate SourceBuffers, one for audio and one for video, both of which are played back synchronously through the MediaSource. For an example player that handles this, see the Shaka Player developed by Google.
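As a rough sketch of that mechanism using the Media Source Extensions API (the MIME/codec strings and segment URLs are illustrative; a real DASH player derives them from the manifest, and each track also needs an initialization segment appended first, omitted here for brevity):

    var video = document.querySelector('video');
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
      // One SourceBuffer per track; the codec strings must match the segments.
      var videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
      var audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');

      // A real player loops over the manifest's segment list and watches
      // buffer levels; here we just append a single segment of each.
      appendSegment(videoBuffer, 'video-segment-0.m4s');
      appendSegment(audioBuffer, 'audio-segment-0.m4s');
    });

    function appendSegment(sourceBuffer, url) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        sourceBuffer.appendBuffer(xhr.response);
      };
      xhr.send();
    }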
In conclusion, there is no need to use a container like MKV; each channel (video, audio) just needs to point to segments encoded with a browser-supported codec.
You don't need anything special for streaming pre-recorded media files. A normal HTTP/1.1 or HTTP/2 server will work just fine. The browser is generally capable of seeking into the file using range requests.
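For example, a typical seek turns into an exchange along these lines (byte values illustrative):

    GET /video.webm HTTP/1.1
    Range: bytes=1000000-

    HTTP/1.1 206 Partial Content
    Content-Type: video/webm
    Content-Range: bytes 1000000-4999999/5000000
    Content-Length: 4000000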
Matroska (MKV) is a container format, and it actually is widely supported because it's basically the same as WebM. WebM is a subset of Matroska... the key difference being that WebM suggests specific codecs for use. (Matroska itself supports almost anything.)
Your audio and video tracks in the file can use a variety of codecs... the key is to use codecs compatible with browsers. Opus for audio and VP8 for video will take you far.
From there, simply reference your video file in a <video> tag.
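For example (the file name is a placeholder):

    <video controls>
      <source src="my-video.webm" type="video/webm">
    </video>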
I'm trying to figure out whether web-based audio streaming sites use the Web Audio API for playback, or whether they rely on the audio element or something else.
Since the user of an audio streaming service typically doesn't need more functionality than starting and stopping the audio, I guess that the audio element is enough. If a VU meter is required, then I would guess the Web Audio API would be used, since it has a built-in analyser node. But since IE doesn't support the API, I suppose you'd rather use the audio element and reach the IE users than add fancy extras such as a VU meter.
I've been looking at the source code for Spotify's web player, Grooveshark, BBC Radio, and Polish public radio, but I found neither audio elements nor use of the Web Audio API. I did find that Swedish public radio (sr.se) makes use of the audio element, though.
I'm not asking for anyone to go through the JavaScript source code for me, but rather if someone who is familiar with the subject could point me in the right direction.
I don't know of any internet radio services playing back their streams with the Web Audio API currently, but I wouldn't be surprised to find one. I've been working on one myself using Audiocogs' excellent Aurora.js library, which enables codecs in-browser that wouldn't normally be available by decoding the audio with JavaScript. However, for compatibility reasons, as you have pointed out, this would be considered a bit experimental today.
Most internet radio stations use progressive HTTP streaming (SHOUTcast/Icecast style) which can be played back within an <audio> element or Flash. This works well but can be hard to get right, especially if you use SHOUTcast servers as they are not quite 100% compatible with HTTP, hurting browser support in some versions of Firefox and a lot of mobile browsers. I ended up writing my own server called AudioPump Server to get better browser and mobile browser support with HTTP progressive.
Depending on your Flash code and the ActionScript version available, you might also have to deal with memory leaks in creative ways, since by default Flash will keep all of your stream data in memory indefinitely; it was never built to stream over HTTP. To get around this problem, many use RTMP with Flash (with Wowza or similar on the server), which is what Flash was actually built to stream with.
iOS supports HLS, which is basically a collection of static files served by an HTTP server. The encoder writes a chunk of the stream to each file as the encoding occurs, and the client just downloads them and plays them back seamlessly. The benefit here is that the client can choose a bitrate to stream, raising quality up and down as network conditions change. This also means that you can completely switch networks (say from WiFi to 3G) and still maintain the stream, since chunks are downloaded independently and statelessly. Android "supports" HLS, but it is buggy. Safari is the only browser currently supporting HLS.
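To give a sense of how simple this is, a live HLS media playlist is just a text file along these lines (segment names and durations illustrative); the client re-fetches it periodically and downloads the listed chunks:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-MEDIA-SEQUENCE:8642
    #EXT-X-TARGETDURATION:10
    #EXTINF:10.0,
    segment8642.ts
    #EXTINF:10.0,
    segment8643.ts
    #EXTINF:10.0,
    segment8644.ts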
Compatibility detection is not something you need to solve yourself. There are many players, such as jPlayer and JW Player, which wrangle HTML5 audio support detection and codec support detection, and provide a common API over both HTML5 audio and Flash playback. They also provide an optional UI if you want to get up and running quickly.
Finally, most stations do offer a link to allow you to play the stream in your own media player. This is done by linking to a playlist file (usually M3U or PLS) which is downloaded and often immediately opened (as configured by the user and their browser). The player software loads this playlist and then connects directly to the streaming server to begin playback. On Android, you simply link to the stream URL. It will detect the Content-Type response header, disconnect, and open its configured media player for playback. These days you have to hunt to find these direct links, but they are out there.
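Such a PLS file is only a few lines; for example (the URL and title are placeholders):

    [playlist]
    NumberOfEntries=1
    File1=http://stream.example.com:8000/stream
    Title1=Example Station
    Length1=-1

Length1=-1 marks a stream of indefinite length.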
If you ever want to know what a station is using without digging around in their compiled and minified source code, simply use a tool like Fiddler or Wireshark and watch the traffic. You will find that it is very straightforward under the hood.
We use Web Audio for streaming via Aurora.js using a protocol very similar to HTTP Live Streaming. We did this because we wanted the same streaming backend to serve iPhone, Android and the web.
It was all a very long and painful process that took over 6 months of effort, but now that it's all finished, it's all good.
Have a look at http://radioflote.com and feel free to shoot questions or clarifications regarding anything. Go ahead and disassemble the code if you want to. Not a problem.
I use JWPlayer to play videos from the server. The videos are encoded with the H.264 codec. If I open them in a browser with H.264 support, the video plays nicely and I can seek it: the server returns a 206 header, so the browser understands that it's partial content. But if I try to play the same video in Opera, for example, the Flash player is used and it receives 200 OK! Two problems here:
I can't seek to a part of the video until it has been downloaded.
If the video is not "properly" encoded, the player can't even start playing it until the file is fully downloaded.
Is there something wrong with Flash properly asking for or understanding HTTP headers? (I've never worked with Flash before, so maybe my question is a bit silly and I just don't know Flash's limitations.)
1) You need to have pseudo-streaming enabled for Flash - http://www.longtailvideo.com/support/jw-player/28855/pseudo-streaming-in-flash. If you can provide a link, though, I will take a look at exactly what is going on here; I am more or less guessing about this one. HTML5 does not require a pseudo-streaming module to be installed on the server side. In Flash, the default is progressive download, so you can only seek within downloaded parts; in HTML5, this is not the case.
2) Yes, that is because of encoding. If your MP4 files cannot be seeked before they are completely downloaded, you will have to fix the MOOV atom (it contains the seeking information), which is sitting at the end of your video instead of at the beginning, where players need it. Use this little application to parse your videos and add the necessary cue points - http://renaun.com/blog/2010/06/qtindexswapper-2/
Also, encoding via HandBrake - http://handbrake.fr/ - can fix this as well, so the above tool wouldn't be necessary. You can encode with HandBrake and enable "web optimized", which does the same thing as the Index Swapper tool. HandBrake also has command-line encoding options.
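If you have FFmpeg available, the same MOOV relocation can be done from the command line without re-encoding (file names are placeholders):

    ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4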
When you play audio using the audio element in Chrome, you get annoying clicks and cracks. At least under my 64-bit Linux installation, even after I formatted and installed a new Fedora version. (Firefox and Opera are fine, as is IE9 in a VirtualBox Windows 7.)
But demos using the Web Audio API instead of the audio element have perfect sound. So I was wondering if I could use the Web Audio API like the audio element. But there are some things you don't seem to be able to do with this API, or am I missing something? The things I couldn't find were:
starting to play a file before it is completely loaded
getting buffer progress updates (depends on the previous point)
getting play progress updates
seeking
Is there a way to do this with the Web Audio API?
This is where I would use it: http://tinyurl.com/magnatune-player
I think you should still use <audio> for streaming at the very least. You can treat it as a MediaElementAudioSourceNode in web audio if you'd like:
var mediaSourceNode = context.createMediaElementSource(audioElement);
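A slightly fuller sketch of that pattern, with an AnalyserNode inserted so you could drive something like a VU meter (the element ID is a placeholder; older Chrome builds used webkitAudioContext instead of AudioContext):

    var audioElement = document.getElementById('player'); // <audio> whose src is the stream URL
    var context = new AudioContext();
    var mediaSourceNode = context.createMediaElementSource(audioElement);

    // Once the element is routed into the graph, it must be connected
    // through to the destination or you will hear nothing.
    var analyser = context.createAnalyser();
    mediaSourceNode.connect(analyser);
    analyser.connect(context.destination);

    audioElement.play();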
AFAIK, there is no way to stream web audio directly. In fact, the Web Audio API spec suggests that you don't:
4.9. The AudioBuffer Interface
This interface represents a memory-resident audio asset (for one-shot sounds and other short audio clips). Its format is non-interleaved IEEE 32-bit linear PCM with a nominal range of -1 -> +1. It can contain one or more channels. Typically, it would be expected that the length of the PCM data would be fairly short (usually somewhat less than a minute). For longer sounds, such as music soundtracks, streaming should be used with the audio element and MediaElementAudioSourceNode.
Unless you use a MediaElementAudioSourceNode (which I would assume suffers from the same issues you're having, since it's just using an <audio> tag), AFAIK the answers to your questions are:
starting to play a file before it is completely loaded: No.
getting buffer progress updates (depends on the previous point): Possibly (you could check for progress events on the XHR; see the sketch after this list)
getting play progress updates: No.
seeking: No.
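A minimal sketch of that XHR progress idea (the URL is a placeholder). Note that decodeAudioData needs the complete file, which is exactly why the answer to the streaming question above is "no":

    var context = new AudioContext();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/audio/track.mp3');
    xhr.responseType = 'arraybuffer';

    // Download progress, not play progress.
    xhr.onprogress = function (event) {
      if (event.lengthComputable) {
        console.log('downloaded ' + Math.round(100 * event.loaded / event.total) + '%');
      }
    };

    xhr.onload = function () {
      context.decodeAudioData(xhr.response, function (buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0); // playback can only begin after the full download and decode
      });
    };

    xhr.send();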
In the meantime, Chrome has fixed the audio playback issues, so I don't need any workarounds anymore.