Icecast HTTP connection latency over 30 seconds - html5-audio

I'm running an Icecast server (2.4.3) and experiencing a very long "time to first byte". It's strange because this does not seem to happen with players (like mplayer), only when using HTML5 audio. It takes anywhere from 30 to 120 seconds to start playing the audio.
I'm thinking it's not a buffering issue because I don't seem to be getting ANY bytes back during this time. For instance, if I run a curl command with the verbose flag:
~ben ~: curl http://radio.example.com:8000/radio.mp3 -v
* Trying XX.XX.XX.XXX...
* TCP_NODELAY set
* Connected to radio.example.com (XX.XX.XX.XXX) port 8000 (#0)
> GET /radio.mp3 HTTP/1.1
> Host: radio.example.com:8000
> User-Agent: curl/7.51.0
> Accept: */*
>
It will sit like this for at least 28 seconds before I see any bytes coming in. Conversely, if I run mplayer:
~ben ~: mplayer http://radio.example.com:8000/radio.mp3
MPlayer 1.3.0-4.2.1 (C) 2000-2016 MPlayer Team
Can't init Apple Remote.
Playing http://radio.example.com:8000/radio.mp3.
Resolving radio.example.com for AF_INET6...
Couldn't resolve name for AF_INET6: radio.example.com
Resolving radio.example.com for AF_INET...
Connecting to server radio.example.com[XX.XX.XX.XXX]: 8000...
Cache size set to 320 KBytes
Cache fill: 0.00% (0 bytes)
ICY Info: StreamTitle='';
Cache fill: 5.00% (16384 bytes)
ICY Info: StreamTitle='';
Cache fill: 10.00% (32768 bytes)
ICY Info: StreamTitle='';
Cache fill: 15.00% (49152 bytes)
ICY Info: StreamTitle='';
ICY Info: StreamTitle='';
Audio only file format detected.
==========================================================================
Requested audio codec family [mpg123] (afm=mpg123) not available.
Enable it at compilation.
Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
libavcodec version 57.24.102 (internal)
AUDIO: 44100 Hz, 2 ch, floatle, 128.0 kbit/4.54% (ratio: 16000->352800)
Selected audio codec: [ffmp3float] afm: ffmpeg (FFmpeg MPEG layer-3 audio)
==========================================================================
AO: [coreaudio] 44100Hz 2ch floatle (4 bytes per sample)
Video: no video
Starting playback...
It buffers and starts playing within a couple of seconds.
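For what it's worth, the delay can be quantified from the command line with curl's timing variables (a rough sketch; --max-time simply stops curl since the stream never ends):
# time_starttransfer is the time to first byte, in seconds
curl -s -o /dev/null --max-time 150 \
  -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s\n' \
  http://radio.example.com:8000/radio.mp3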
Icecast config
Here are what I think are the relevant parts of the config:
<icecast>
  <limits>
    <clients>100</clients>
    <sources>2</sources>
    <queue-size>102400</queue-size>
    <client-timeout>30</client-timeout>
    <header-timeout>15</header-timeout>
    <source-timeout>1</source-timeout>
    <burst-size>943718</burst-size>
    <mp3-metadata-interval>4096</mp3-metadata-interval>
  </limits>
  <mount>
    <mount-name>/radio.mp3</mount-name>
    <password>*****************</password>
    <bitrate>128</bitrate>
    <type>audio/mpeg</type>
    <subtype>mp3</subtype>
    <hidden>0</hidden>
    <fallback-mount>/whitenoise.mp3</fallback-mount>
    <fallback-override>1</fallback-override>
  </mount>
</icecast>
Attempts to fix
I've tried this on a few different versions, including 2.3.3, 2.3.3-kh-11 and 2.4.0-kh4. I've gotten this to work properly with a kh branch in the past, but I was not able to get the fallback mounts to work with the kh branch. I might just give up and go down that rabbit hole instead. I've also tried fiddling with all of the burst and buffer configs, but this problem doesn't seem to be related to those.

After exhausting my options on this, I attempted to stream from another computer and noticed the problem was not happening. I asked some others to try it and they confirmed they were not seeing this problem on the stream. This led me to believe that the problem is somewhere on my computer (the streaming client in this case). I suspect that it's trying to upgrade the connection to a TLS-encrypted connection (this has happened with this domain before), but I'm not sure. Either way, it's not a problem with Icecast.

I'm going to guess that the computer having the delay problem is running some kind of network antivirus, or perhaps sits behind a reverse proxy. Those can mess with the headers and often want to "scan" as much of the HTTP connection as they can, so they buffer the HTTP response, and that buffering causes the delay.
I've seen this happen with corporate web filtering in the past.
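If that's the case, it should be reproducible from the affected machine with curl while bypassing any explicitly configured proxy (a sketch; the URL is the one from the question, and note that a transparent proxy or antivirus would still intercept this):
env | grep -i proxy   # is an http_proxy / https_proxy variable set?
# force a direct connection, ignoring any configured proxy, and time the first byte
curl --noproxy '*' -s -o /dev/null --max-time 150 \
  -w 'first byte: %{time_starttransfer}s\n' \
  http://radio.example.com:8000/radio.mp3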

Related

JPEG live stream in HTML is slow

From a raw video source I'm trying to stream JPEG images to HTML as fast as possible on an embedded platform/board running Linux.
On the GStreamer side I can see that the JPEG image is updated at ~37 fps; the pipeline looks like this:
appsrc -> videoconvert -> jpegenc -> multifilesink
Based on this question I created the following embedded HTML:
<!DOCTYPE html>
<html>
  <head>
    <meta charset='UTF-8' />
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script src="app.js"></script>
  </head>
  <body>
    <img id="snapshot" src="snapshot.jpeg"/>
  </body>
</html>
and the JavaScript:
$(function() {
    function refreshImage() {
        // append a random query string so the browser doesn't serve the image from cache
        $("#snapshot").attr("src", 'snapshot.jpeg?' + Math.random());
        setTimeout(refreshImage, 20);
    }
    refreshImage();
});
Opening a web browser on a PC and typing the platform/board IP, I can see the video stream, but the problem is that the image is updated too slowly; I would expect smoother, faster video given the source frame rate (37 fps).
Does anyone know what could be the reason why the update is slow?
I think this deserves proper analysis since it is an interesting subject (at least for me).
Testing environment
I completely replicated the scenario on 2 PCs within the same LAN.
PC 1 creates JPEG images from a live stream with the following pipeline:
gst-launch-1.0 -v rtspsrc location="rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" \
! rtph264depay ! avdec_h264 \
! timeoverlay halignment=right valignment=bottom \
! videorate ! video/x-raw,framerate=37000/1001 ! jpegenc ! multifilesink location="snapshot.jpeg"
and serves index.html, app.js and the (endlessly updated) snapshot.jpeg with Python's simple HTTP server:
python -m SimpleHTTPServer 8080
PC 2 accesses index.html using the Chrome browser (with the developer tools window open) and displays the images.
For testing purposes:
I added a timeoverlay to the GStreamer pipeline that stamps a timestamp on each image in the bottom-right corner.
I increased the refresh period in the JS function to 1000 ms.
Analysis of test results
Here is the browser's network log:
The Time column shows the periods (in ms) that the browser spends fetching (downloading) one image from the server. Those periods are not always the same, averaging ~100 ms for images of ~87 KB.
The fetch time interval actually includes:
the interval the HTTP GET request needs to reach the server from the browser,
the interval the server needs to read the image from disk and send it back as the HTTP response,
the interval the HTTP response needs to reach the browser.
The 1st and 3rd intervals depend directly on the "internet" environment: the "farther" the browser is from the server, the greater the interval.
The 2nd interval is proportional to the server's "speed": how fast the server can read images from disk and handle the HTTP request/response.
There is another interval, proportional to the "speed" of the PC running the browser: how fast that PC can handle the HTTP GET request/response and re-render the image.
Conclusion
There are many unavoidable delay intervals that depend on the testing environment - the internet and the capabilities of the server machine and of the client machine running the browser - and your code in the browser is executing as fast as it possibly can.
In any case, 37 fps sounds like live video. There are specialized protocols for streaming video that can be shown in a browser (e.g. MPEG-DASH or HLS) by serving the video chunk by chunk (where each chunk contains many video frames).
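As a hedged illustration of that route (hlssink lives in gst-plugins-bad; the exact element names and properties are assumptions to check against your GStreamer version), a pipeline along these lines can turn a test source into an HLS playlist that any static HTTP server can serve:
# encode a test source to H.264, mux to MPEG-TS and write 2-second HLS segments plus a playlist
gst-launch-1.0 -v videotestsrc is-live=true \
  ! videoconvert ! x264enc tune=zerolatency bitrate=1000 ! h264parse \
  ! mpegtsmux ! hlssink playlist-location=playlist.m3u8 location=segment%05d.ts target-duration=2
The resulting playlist.m3u8 can then be played in the browser with a player such as hls.js (or natively in Safari).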

High-availability replicated servers, Tomcat session lost: Firefox and Chrome use 60 seconds as TTL and don't respect the DNS-defined TTL

I have 4 servers for an HTTP service defined on my DNS servers:
app.speednetwork.in. IN A 63.142.255.107
app.speednetwork.in. IN A 37.247.116.68
app.speednetwork.in. IN A 104.251.215.162
app.speednetwork.in. IN A 192.121.166.40
For all of them the DNS server specifies a TTL (time to live) of more than 10 hours:
$ttl 38400
speednetwork.in. IN SOA plugandplay.click. info.plugandplay.click. (
1454402805
3600
3600
1209600
38400 )
Firefox ignores the TTL and makes a new DNS query every 60 seconds, as seen in
about:config -> network.dnsCacheExpiration (60) and in about:networking -> DNS.
Chrome shows, at chrome://net-internals/#dns, a correctly cached DNS entry with more than 10 hours until it expires:
apis.google.com IPV4 216.58.210.174 2016-04-12 11:07:07.618 [Expired]
app.speednetwork.in IPV4 192.121.166.40 2016-04-12 21:45:36.592
but it ignores this entry and re-queries the DNS every minute, as discussed at https://groups.google.com/a/chromium.org/forum/#!topic/chromium-discuss/655ZTdxTftA and seen in chrome://net-internals/#events.
The conclusion and the problem: every minute both browsers query the DNS again, receive a new IP from the 4 configured in DNS, go to a new IP/server and LOSE THE TOMCAT SESSION.
As configuring every user's browser is not an option, my questions are:
1) Is there some other DNS config I can use for high availability?
2) Is there some HTTP header I can use to instruct the browsers to keep using the same IP/server for the day?
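As a quick sanity check, the TTL your resolver is actually handing out can be inspected with dig (the name is the one from the question; the second column of each answer line is the remaining TTL in seconds):
dig +noall +answer app.speednetwork.in A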
The DNS TTL value is the maximum time the information may be cached. There is no minimum time, nor any requirement to cache at all. The browser behavior you describe is entirely within the DNS specs, and the browsers are doing nothing wrong. If your server solution depends on the clients remembering a DNS lookup for a certain time, then you need to redesign it. As you have already discovered, it does not work.
Building a load-balancing cluster of Tomcat servers is hardly rocket science these days, and you can easily google a solution yourself.
The Keep-Alive header can do the trick. With a large value such as 65 seconds, browsers reuse the HTTP connection for the whole session and don't make a new DNS query. This holds in my app, where there is a piggyback XMLHttpRequest to the server every minute; you may need a bigger value. The Apache default is 5 seconds.
When using Tomcat directly:
response.setHeader("Keep-Alive", " timeout=65");
When using Apache (and mod_ajp) in front of Tomcat:
nano /etc/apache2/apache2.conf:
MaxKeepAliveRequests 0
KeepAliveTimeout 65
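A quick way to confirm the header actually reaches clients (a sketch using the hostname from the question; the response headers should include a Keep-Alive line with timeout=65):
curl -s -D - -o /dev/null http://app.speednetwork.in/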
But this was not a total solution. After a disconnect the HTTP connection is closed, and under several concurrent requests to the server each one may be opened against a different server, so the results do not end up in the same server session.
Finally I solved this by implementing CORS (cross-domain), fixing one server to work with (app1, app2, etc.) and using it until that server fails.
CORS headers on both server and client let me exchange data even though the initial file download was from app. (i.e. another domain).

Bug when a WebSocket receives more than 2^15 bytes on Chrome: Received a frame that sets compressed bit while another decompression is ongoing

I tried to pass JSON via WebSocket to an HTML GUI. When the size is greater than 32768 bytes, Chrome raises this exception:
WebSocket connection to 'ws://localhost:8089/events/' failed: Received a frame that sets compressed bit while another decompression is ongoing
on the line where the WebSocket is instantiated:
this._websocket = new WebSocket(url);
However it works fine on Firefox. I used Jetty 9.1.3 on the server side and I tried with Chrome 33 and 34 beta.
I forgot to mention that if I send a message longer than 32768 bytes, Chrome's network debugging tools show a length of 32768 bytes instead of the real message length.
Any ideas?
When using Jetty 9.1.2.v20140210 I don't have any problems with the connection, but with the later 9.1.3.v20140225 version it fails and I get the error using Opera or Chrome. Firefox works fine on all versions.
I submitted a bug report to Jetty about this: https://bugs.eclipse.org/bugs/show_bug.cgi?id=431459
This might be a bug in Jetty.
permessage-deflate requires the compression bit to be set on the first frame of a fragmented message - and only on that frame.
It might be that Jetty fragments outgoing messages into 32 KB fragments and sets the compression bit on all frames. If so, that's a bug.
I have just tested current Chrome 33 using Autobahn|Testsuite: everything works as expected, including messages of 128 KB.
You can test Jetty using the above test suite. It'll catch the bug if there is one.
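For reference, a rough sketch of running the fuzzing client against a local endpoint (the pip package name is the historical one, and the spec file contents are an assumption; point it at your ws://localhost:8089/events/ URL):
pip install autobahntestsuite
# fuzzingclient.json lists the servers to test, e.g. {"servers": [{"url": "ws://localhost:8089/events/"}]}
wstest -m fuzzingclient -s fuzzingclient.json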

How to receive an RTP, RTCP or UDP stream from GStreamer in HTML5 video?

I'm trying to get an RTP/RTCP video stream using HTML5; the stream was generated by GStreamer. I followed the GStreamer examples, so I can send RTP on port 5000 and RTCP on port 5001, and I can receive the streams using GStreamer. But I could not receive them using HTML5. So I read a bit about HTML5 and saw that it can receive Theora/Ogg, WebM/VP8 and MP4/AVC, and that the protocols may be HTTP, RTP, RTCP, UDP and others, but I could not use RTP, RTCP or UDP; I only managed to receive over HTTP. However, I had a very satisfactory result using the VLC plugin for Mozilla Firefox with the UDP protocol. I wonder if anyone has any tips. I do not want to use source files such as src="/tmp/test.avi"; it needs to be a video stream that can be UDP, RTP or RTCP. Thank you!
If you don't need to stream at low fps, you can use GStreamer to transcode your stream to MJPEG and stream it over TCP, then use VLC to take this TCP stream and re-stream it over HTTP. It works very well (0.5 s of latency), but if you decrease the fps (to 1 fps) VLC introduces a latency of around 11 s.
Here are some test commands that should work out of the box, using the GStreamer videotestsrc:
GStreamer :
gst-launch -v videotestsrc horizontal-speed=1 ! deinterlace ! videorate ! videoscale \
  ! video/x-raw-yuv,framerate=15/1,width=256,height=144 \
  ! jpegenc quality=20 ! multipartmux boundary="--videoboundary" \
  ! tcpserversink host=localhost port=3000
VLC :
vlc -vvv -I rc tcp://localhost:3000 --sout \
  '#standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=localhost:8081}'
Then open a browser at http://localhost:8081 (or create an HTML page with an img tag whose "src" attribute is http://localhost:8081).
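As a quick check that VLC is really serving the MJPEG stream (a sketch; --max-time just stops curl after a few seconds since the stream never ends):
# expect a Content-Type: multipart/x-mixed-replace header with the boundary configured above
curl -s -D - -o /dev/null --max-time 3 http://localhost:8081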

Server response gets cut off halfway through

I have a REST API that returns json responses. Sometimes (and what seems to be at completely random), the json response gets cut off half-way through. So the returned json string looks like:
...route_short_name":"135","route_long_name":"Secte // end of response
I'm pretty sure it's not an encoding issue because the cut-off point keeps changing position, depending on the JSON string that's returned. I haven't found a particular response size at which the cut-off happens either (I've seen a 65 KB response not get cut off, whereas a 40 KB one would).
Looking at the response header when the cut off does happen:
{
"Cache-Control" = "must-revalidate, private, max-age=0";
Connection = "keep-alive";
"Content-Type" = "application/json; charset=utf-8";
Date = "Fri, 11 May 2012 19:58:36 GMT";
Etag = "\"f36e55529c131f9c043b01e965e5f291\"";
Server = "nginx/1.0.14";
"Transfer-Encoding" = Identity;
"X-Rack-Cache" = miss;
"X-Runtime" = "0.739158";
"X-UA-Compatible" = "IE=Edge,chrome=1";
}
Doesn't ring a bell either. Anyone?
I had the same problem:
Nginx cut off some responses from the FastCGI backend. For example, I couldn't generate a proper SQL backup from PhpMyAdmin. I checked the logs and found this:
2012/10/15 02:28:14 [crit] 16443#0: *14534527 open()
"/usr/local/nginx/fastcgi_temp/4/81/0000004814" failed (13: Permission
denied) while reading upstream, client: *, server: , request:
"POST / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host:
"", referrer: "http://*/server_export.php?token=**"
All I had to do to fix it was to give proper permissions to the /usr/local/nginx/fastcgi_temp folder, as well as client_body_temp.
Fixed!
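For reference, a sketch of the permission fix (the worker user www-data and the paths are assumptions taken from the log above; check the user directive in your nginx.conf):
# make the temp directories writable by the nginx worker user
chown -R www-data:www-data /usr/local/nginx/fastcgi_temp /usr/local/nginx/client_body_temp
chmod -R 700 /usr/local/nginx/fastcgi_temp /usr/local/nginx/client_body_temp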
Thanks a lot samvermette, your Question & Answer put me on the right track.
Looked up my nginx error.log file and found the following:
13870 open() "/var/lib/nginx/tmp/proxy/9/00/0000000009" failed (13: Permission denied) while reading upstream...
It looks like nginx's proxy was trying to save the response content (passed in by thin) to a file. It only does so when the response size exceeds proxy_buffers (64 KB by default on 64-bit platforms). So in the end the bug was connected to my response size.
I ended up fixing my issue by setting proxy_buffering to off in my nginx config file, instead of increasing proxy_buffers or fixing the file permission issue.
I'm still not sure about the purpose of nginx's buffering. I'd appreciate it if anyone could expand on that. Is disabling the buffering completely a bad idea?
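For what it's worth, one way to confirm that responses really are being buffered to disk before changing anything (the log path is an assumption; adjust it to your setup):
# nginx logs a warning each time an upstream response spills to a temporary file
tail -f /var/log/nginx/error.log | grep --line-buffered 'temporary file'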
I had a similar problem with the server cutting off responses.
It happened only when I added the JSON header before returning the response: header('Content-Type: application/json');
In my case gzip caused the issue.
I solved it by specifying gzip_types in nginx.conf and adding application/json to the list before turning on gzip:
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
gzip on;
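A quick way to verify the change took effect (a sketch; replace the URL with your own JSON endpoint):
# the response headers should now include Content-Encoding: gzip
curl -s -H 'Accept-Encoding: gzip' -D - -o /dev/null http://example.com/your/json/endpoint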
It's possible you ran out of inodes, which prevents NginX from using the fastcgi_temp directory properly.
Try df -i and if you have 0% inodes free, that's a problem.
Try find /tmp -mtime +10 (older than 10 days) to see what might be filling up your disk.
Or maybe it's another directory with too many files. For example, go to /home/www-data/example.com and count the files:
find . -print | wc -l
Thanks for the question and the great answers, they saved me a lot of time. In the end, the answers of clement and sam helped me solve my issue, so the credit goes to them.
Just wanted to point out that, after reading a bit about the topic, it seems it is not recommended to disable proxy_buffering, since it could make your server stall if the clients (the users of your system) have a bad internet connection, for example.
I found this discussion very useful to understand more.
The example of Francis Daly made it very clear for me:
Perhaps it is easier to think of the full process as a chain of processes.
web browser talks to nginx, over a 1 MB/s link.
nginx talks to upstream server, over a 100 MB/s link.
upstream server returns 100 MB of content to nginx.
nginx returns 100 MB of content to web browser.
With proxy_buffering on, nginx can hold the whole 100 MB, so the
nginx-upstream connection can be closed after 1 s, and then nginx can
spend 100 s sending the content to the web browser.
With proxy_buffering off, nginx can only take the content from upstream at
the same rate that nginx can send it to the web browser.
The web browser doesn't care about the difference -- it still takes 100
s for it to get the whole content.
nginx doesn't care much about the difference -- it still takes 100 s to
feed the content to the browser, but it does have to hold the connection
to upstream open for an extra 99 s.
Upstream does care about the difference -- what could have taken it 1
s actually takes 100 s; and for the extra 99 s, that upstream server is
not serving any other requests.
Usually: the nginx-upstream link is faster than the browser-nginx link;
and upstream is more "heavyweight" than nginx; so it is prudent to let
upstream finish processing as quickly as possible.
We had a similar problem. It was caused by our REST server (DropWizard) having SO_LINGER enabled. Under load, DropWizard was disconnecting from NGINX before it had a chance to flush its buffers. The JSON was >8 KB and the front end would receive it truncated.
I've also had this issue: JSON parsing client-side was faulty, the response was being cut off or, worse still, the response was stale and was read from some random memory buffer.
I went through some guides – Serving Static Content Via POST From Nginx, as well as Nginx: Fix to “405 Not Allowed” when using POST serving static – while trying to configure nginx to serve a simple JSON file.
In my case, I had to use:
max_ranges 0;
so that the browser doesn't get any funny ideas when nginx adds Accept-Ranges: bytes to the response header, as well as
sendfile off;
in my server block for the proxy which serves the static files. Adding it to the location block which would finally serve the found JSON file didn't help.
Another pro tip for serving static JSON files is not to forget the response type:
charset_types application/json;
default_type application/json;
charset utf-8;
Other searches yielded folder permission issues (nginx is cutting the end of dynamic pages and caching it) or proxy buffering issues (Getting a chunked request through nginx), but that was not my case.