How does the media range request work? - html

I have a video in my HTML, with a video tag like <video src='xxxx.mp4'>, but sometimes the video loads very slowly.
I checked the media request and found that it tries to load about 1 MB of data in the first video request. The request headers are below, with no Range header set.
For some other videos the first request is very small, so the first frame shows quickly. How does the video media request work, and can I influence the request size each time?
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2
Connection: keep-alive
Cookie: UM_distinctid=17e6bd4dd5c30-0f869b25cdf7158-4c3e207f-151800-17e6bd4dd5d5ce
Host: concert-cdn.jzurl.cn
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
TE: trailers
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0

If you are using just the video tag then the particular browser implements the logic to decide what size ranges to request. Some browsers actually request the entire file first, then abort that request and follow with individual range requests. You will also see requests from a browser with no end range as it 'looks' through the file - the logic again seems to be that a request can simply be cancelled if the rest of the data is not needed.
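For illustration, one of the open-ended range requests described above, and a typical response to it, might look like this (the file name and byte counts are made up):

GET /xxxx.mp4 HTTP/1.1
Host: example.com
Range: bytes=0-

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Range: bytes 0-52428799/52428800
Content-Type: video/mp4

The browser can read as much of the body as it currently needs (for example, enough to render the first frame), then simply abort the connection and later issue a new request such as Range: bytes=1048576- when it wants more data.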
If you are using a Javascript player, such as video.js, the player can in theory control this type of thing, but in practice for mp4 files, I think, many players just leverage the browser's HTML video tag functionality anyway.
Focusing on what you are trying to achieve, there are a couple of things you can do to speed initial playback.
Firstly, check that your server accepts range requests, which it sounds like you have already done.
Next, assuming you are streaming an mp4 file, make sure the metadata is at the start - the 'moov' atom, as it is known. There are several tools that will let you make this change, including ffmpeg:
https://ffmpeg.org/ffmpeg-formats.html#toc-mov_002c-mp4_002c-ismv
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback by adding faststart to the movflags, or using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
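For example (file names are placeholders), a copy remux with ffmpeg that moves the moov atom to the front of the file could look like this:

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4

The -c copy avoids re-encoding, so only the container is rewritten.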
If the above does not address your needs then you may want to look at using an Adaptive Bit Rate streaming protocol. Nearly all major streaming services use this approach, but it does require more work on the server side and generally a special streaming packager server, although there are open source ones available (e.g. https://github.com/shaka-project/shaka-packager).
ABR creates multiple different-bandwidth versions of the video and breaks each into equal-duration chunks, e.g. 2-second chunks. The client device or player downloads the video one chunk at a time and selects the next chunk from the bit rate most appropriate to the current network conditions. It can choose a low-bandwidth chunk to allow a quick start, and you will often see this on commercial streaming services, where the video quality is lower at start-up and then improves as higher-bandwidth chunks are requested.
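As a purely illustrative sketch of such a bit rate ladder, an HLS master playlist might look like this (bandwidths, resolutions and paths are made up):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8

The player starts on whichever rendition it judges safest and switches up or down between chunk boundaries as the measured bandwidth changes.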
More info here: https://stackoverflow.com/a/42365034/334402

Related

Streaming HTML5 Video chunks via multiple HTTP/2 TCP sockets

I'm trying to optimize the load time of an HTML5 video. Is there any way to make a browser treat each webm video chunk as a single TCP stream, to utilize HTTP/2's improved parallelisation?
You cannot directly configure whether a browser reuses the same HTTP/2 connection for making another request or whether it uses a new connection. That is up to the browser to decide.
In theory, just using one HTTP/2 connection should give you optimal performance, since it avoids the overhead of having to open new connections. In practice it might sometimes be worse than using multiple HTTP/1.1 connections, due to suboptimal flow-control windows or stream prioritization in some HTTP/2 implementations.
One workaround to force multiple connections might be to serve some of the chunks through a different URL (pointing towards the same server), which prevents the browser from reusing the connection. That will, however, require some additional effort to set up.
Another option could be to try disabling HTTP/2 for the server which serves those chunks.
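As one sketch of that last option: if the chunks happened to be served by nginx, HTTP/2 could be switched off by dropping the http2 flag from the listen directive (host name and paths are placeholders):

server {
    listen 443 ssl;                              # was: listen 443 ssl http2;
    server_name media.example.com;               # placeholder
    ssl_certificate     /etc/ssl/example.crt;    # placeholder
    ssl_certificate_key /etc/ssl/example.key;    # placeholder
    root /var/www/chunks;                        # placeholder
}

Without the http2 flag the browser falls back to HTTP/1.1 and will typically open several parallel connections for the chunk requests.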

Should I use multiple files or combine pages into one file?

This question betrays how much of a novice I am. I'm making a website, and I'm wondering - is it okay to have separate html files for the distinct pages of my website, or should I try to combine them into one html file? I'm curious about the general way of doing things.
You have to take into account two things here:
#1 HTTP Request Header (A single file is better)
For each request the client makes to display your website, some information is sent in addition to the content (for example, headers).
It looks like this (from here):
GET /tutorials/other/top-20-mysql-best-practices/ HTTP/1.1
Host: net.tutsplus.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: PHPSESSID=r2t5uvjq435r4q7ib3vtdjq120
Pragma: no-cache
Cache-Control: no-cache
So every new file (HTML, CSS, images, JS, ...) adds more headers, i.e. more metadata, and slows down the requests.
#2 Browser cache (Multiple files are better)
Files that never change on your website (like the logo or the main CSS file) do not need to be reloaded on every page and can be kept in the browser cache.
So creating separate files for "global" code is a good way to avoid reloading that code on every page.
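For example, a rarely changing asset such as the logo or the main CSS file is often served with a long-lived cache header along these lines (the lifetime shown is just an example):

Cache-Control: public, max-age=31536000

With that header the browser can reuse its cached copy on every page for up to a year without re-requesting it.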
Conclusion
Both approaches are good; every case has its own specific solution.
As a proxy administrator, I am sometimes astounded by how many separate files are requested for a single web page. In the old days, that could be trouble. Good browsers today use an HTTP CONNECT to establish a single TCP tunnel through which to pass those requests. Also, it's quite common for services to distribute content across multiple servers, each of which would require its own CONNECT. And for more heavily accessed services online, a content delivery network will often be used to better manage the load and abuse from malicious sources. These days, it's best to organize content across files according to the structure of the document, along with version control and other management considerations.
The answer to this is more an art than a science. You can break your content up in whatever seems to you a logical fashion, but as much as possible you want to try to view the site through the eyes of your users. That can mean actual research (surveys, focus groups, etc.) if you can afford it, or trying to draw inferences from Google Analytics or some other tracking system.
There's also an SEO angle. More pages can equate to greater presence in search engines, but if you overdo it you wind up with what Google terms "thin content" — pages with very little meat to them which don't convey much information.
This isn't really a question with a right or wrong answer. It's very much dependent on what you wish to accomplish both aesthetically and in terms of usability, performance, etc. The best answer comes with experience, but you can always copy sites you like until you get a feel for it.

playing a growing mp3 file on a web page

need your advice.
I have a web service which generates mp3 files out of wavs.
The process of conversion takes time, and I want visitors to be able to start listening right away, while the conversion is still going on.
Having tried the HTML5 audio tag, I found that I can only play the part of the mp3 file that was ready at the moment the file was fetched. It doesn't seem to care that the file might have grown since it was fetched.
What is the right way to approach this situation?
Thanks in advance for any info.
I believe that you can use jPlayer to play them. One of its features is that it does not preload.
EDIT: The HTML5 audio element also has the preload attribute, which can take the following values:
"none": will not prefetch anything
"metadata": will fetch only basic information such as duration and sample rate
"auto": will prefetch the entire mp3
You need to get a bit more control over the serving process, rather than just leaving it up to your web server.
When your web server responds to an HTTP request, it includes a Content-Length header that tells the client how big the requested resource is, in bytes. Your web server will only send up to the length available at the time of the request, because it doesn't know the file is about to be appended to. The client will download all of that data, but from the client's perspective it will have downloaded the entire file, when really the file wasn't even done being encoded yet.
To work around this issue, you need to pipe the output of your encoder to both a file and your client at the same time. For the response data to the client, do not include a Content-Length header at all. Most clients will work with chunked encoding, allowing you to remain compliant with HTTP/1.1. Some clients (early Android, old browsers, old VLC) cannot handle chunked encoding and will just stream the data as it comes in.
How you do this specifically depends entirely on what you're using server-side, which you didn't specify in your question. Personally, I use Node.js for this as it makes the process very easy. You can simply pipe to both streams. Be careful that if you use the multiple pipe method, the pipes only run as fast as the slowest. Some streaming clients (such as VLC) will lower the TCP window size to not allow too much data to have to be buffered client-side. When this occurs, your writes to disk will run at the speed of the client. This may not matter to you, but it is something to be aware of.
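The answer above describes doing this with Node.js pipes; purely as an illustration of the same idea in a servlet environment instead (all paths are hypothetical, and the ".done" marker file is an assumed convention for the encoder to signal completion), the key points are to set no Content-Length and to keep reading the growing file until the encoder has finished:

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only: streams an mp3 that the encoder is still writing.
// No Content-Length is set, so an HTTP/1.1 container falls back to chunked encoding.
// (Older Tomcat uses javax.servlet.*; newer containers use jakarta.servlet.*.)
public class GrowingMp3Servlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File mp3 = new File("/data/clips/clip.mp3");       // hypothetical path
        File done = new File("/data/clips/clip.mp3.done"); // hypothetical "encoder finished" marker
        resp.setContentType("audio/mpeg");                 // note: no setContentLength()

        try (RandomAccessFile in = new RandomAccessFile(mp3, "r");
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            while (true) {
                int n = in.read(buf);
                if (n > 0) {
                    out.write(buf, 0, n);
                    out.flush();                           // push this chunk to the client now
                } else if (done.exists()) {
                    break;                                 // encoder finished and we have caught up
                } else {
                    try {
                        Thread.sleep(200);                 // wait for the encoder to append more data
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
    }
}

Note that this ties up a request thread for the whole duration of the clip, which is fine for a sketch but worth keeping in mind on a busy server.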

Stream audio and video data in network in html5

How do I stream audio and video data and pass it over the network? I have gone through a good article here, but it did not go into depth. I want to build a chat application in HTML5.
There are mainly the questions below:
How to stream the audio and video data
How to send it to a particular IP address
How to receive that data and pass it to the video and audio controls
If you want to serve a stream, you need a server that does so, either by downloading and installing one or by coding your own.
Streams only work in one direction; there is no responding or "retrieving back". Streaming is almost the same as downloading, with slight differences depending on the service and use case.
Most streams are downstreams, but there are also upstreams. Have you heard about BufferStreams in PHP, Java, whatever? It's basically the same: data -> direction -> cursor.
Streams work over many protocols, even via different network layers, for example:
network/subnet broadcast, peer-to-peer, HTTP, DLNA, even FTP streams, ...
The basic nature of a stream is nothing more than data being sent to an audience.
You need to decide:
which protocol do you want to use for streaming
which server software
which media / source / live or with selectable start/end
which clients
The most popular HTTP streaming server is Shoutcast by Nullsoft (Winamp).
There is also DLNA which afaik is not HTTP based.
To provide more information, you need to be more specific regarding your basic requirements and decisions.

Serving audio data from a Java servlet for an HTML5 audio control

This may be a Servlets question or an HTML5 question, depending on what the solution turns out to be... :)
I've got a (Tomcat) Servlet serving up short clips of audio which are then picked up by an HTML5 audio element. The audio plays correctly, but on some browsers only once (so that attempting to "rewind" or replay the audio does not then work). I suspect that this is because my Servlet is not reporting that it supports range requests: I notice that, for a static audio file on the same server, Apache adds the "Accept-Ranges" response header, and replaying the file then works in such cases. So I am supposing that on browsers where I'm having the issue, in order to replay the file, the browser makes an HTTP range request rather than either buffering the entire file or re-requesting the entire file. (On Safari at least, replaying the audio served from my Servlet works fine: I'm guessing because Safari buffers the entire audio.)
So my questions:
is there a way in HTML to request that the browser buffer the entire audio file on playing in order to allow replays rather than the server needing necessarily to support range requests?
if not, does anyone have any experience of responding to range requests from a Servlet? I'm assuming it's just a case of (a) sending an "Accept-Ranges" response header in response to the initial request, then (b) looking out for the "Range" request header on subsequent requests and only serving the relevant portion of the audio? Are there any pitfalls I should be aware of?
is there a way in HTML to request that the browser buffer the entire audio file on playing in order to allow replays rather than the server needing necessarily to support range requests?
Even if there is, it will not be supported on all browsers, especially mobiles/ipads/etc.
does anyone have any experience of responding to range requests from a Servlet?
There's an implementation provided by BalusC. I have ported it to my environment and, apart from some minor issues (which are most likely related not to the implementation but to client-side specifics), it works great.
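For reference, a deliberately simplified sketch of single-range handling (not the BalusC implementation, which also deals with multi-range requests, If-Range, ETags and 416 validation) might look like this, with the file path as a placeholder:

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Simplified sketch: single-range requests only, no validation of malformed ranges.
public class AudioClipServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File audio = new File("/data/clips/clip.mp3");  // hypothetical path
        long length = audio.length();
        long start = 0;
        long end = length - 1;

        resp.setHeader("Accept-Ranges", "bytes");       // advertise range support
        String range = req.getHeader("Range");          // e.g. "bytes=1024-2047" or "bytes=1024-"
        if (range != null && range.startsWith("bytes=")) {
            String[] parts = range.substring(6).split("-", 2);
            if (parts[0].isEmpty()) {
                // suffix range, e.g. "bytes=-500" means the last 500 bytes
                start = length - Long.parseLong(parts[1]);
            } else {
                start = Long.parseLong(parts[0]);
                if (parts.length > 1 && !parts[1].isEmpty()) {
                    end = Math.min(Long.parseLong(parts[1]), length - 1);
                }
            }
            resp.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);  // 206
            resp.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + length);
        }

        resp.setContentType("audio/mpeg");
        resp.setContentLengthLong(end - start + 1);

        try (RandomAccessFile in = new RandomAccessFile(audio, "r");
             OutputStream out = resp.getOutputStream()) {
            in.seek(start);
            byte[] buf = new byte[8192];
            long remaining = end - start + 1;
            int n;
            while (remaining > 0
                    && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
                out.write(buf, 0, n);
                remaining -= n;
            }
        }
    }
}

A browser that wants to replay or seek can then send something like Range: bytes=0- again and receive a 206 with just the requested slice.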