Server-Sent Events API - Default reconnection time - html

I'm currently studying SSEs. The W3C specification says:
A reconnection time, in milliseconds. This must initially be a user-agent-defined value, probably in the region of a few seconds.
Source: http://www.w3.org/TR/eventsource/#concept-event-stream-reconnection-time
My understanding is that each web browser has a default reconnection time, but I can't seem to find the exact values?

From the footnote on p.65 of Data Push Apps with HTML5 SSE:
At the time of writing, it is 3 seconds in Chrome and Safari (see
core/page/EventSource.cpp in the WebKit or Blink source code) and 5
seconds in Firefox (see content/base/src/EventSource.cpp in the
Mozilla source code).
(I just checked and those are still the current values.)
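Note that the stream itself can override that browser default: per the spec, a "retry:" field in the event stream sets the client's reconnection time in milliseconds. Below is a minimal sketch of serializing such a message on the server side (the helper name sseMessage is mine, not part of any API):

```typescript
// Serialize one Server-Sent Events message. A "retry:" line tells the
// browser how many milliseconds to wait before reconnecting, overriding
// its built-in default (3 s in Chrome/Safari, 5 s in Firefox).
function sseMessage(data: string, retryMs?: number): string {
  const lines: string[] = [];
  if (retryMs !== undefined) lines.push(`retry: ${retryMs}`);
  // Each line of the payload becomes its own "data:" field.
  for (const part of data.split("\n")) lines.push(`data: ${part}`);
  return lines.join("\n") + "\n\n"; // a blank line terminates the event
}

console.log(sseMessage("hello", 10000));
// retry: 10000
// data: hello
```

On the client, a plain new EventSource(url) will pick the retry value up automatically from the stream.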

Related

Chrome (and chromecast) playback stops after a few seconds

I have no idea where I can actually get help for this, so I'm hoping to get some pointers on where I should post my issue.
So we're supporting a radio player application that lets you play streams on Chromecast devices. We've received more and more reports of streams stopping after a few seconds.
Inspecting the chromecast receiver application we found an error:
error: MediaError {code: 3, message: "PIPELINE_ERROR_DECODE: Format conversion failed."}
OK, so it has an issue with decoding the stream. Just to keep the fun in funeral, it turns out that it works on a Chromecast Gen 1, but not on a Chromecast Gen 3 or a Home Mini.
So we did what every normal developer would do: gave up and went to work at McDonald's... no, we created a sample webpage with the streams, to leave out all the mess that comes with casting. We tested this page in different browsers and browser versions: it works everywhere except Chrome. And not just any Chrome: if your Chrome version is 66 or below, happy days; if it's 67 or newer, playback stops. In the browser we get a slightly different error message, but since it stops at pretty much the very same point where the Chromecast does... I seem to see a common factor there.
Here is the sample page: http://chromecast.radioplayer.aerian.org/test.html
Telling a couple of radio stations to go and fix their streams would be feasible... but we're talking about potentially 50-90 stations, whose streams otherwise work everywhere except on a Chromecast.
Is this a bug? Is this a feature?
If it's a bug, where should I raise it? If it's a feature, would you like onions with your burger?
I've raised a bug report with chromium, turns out it's a bug. It's being fixed in V75. More details here: https://bugs.chromium.org/p/chromium/issues/detail?id=956027#c7

Confusing HTTP/2 protocol information in Chrome debugger Network tab

I see some of them show "h2" and some "http/2+quic/43" but never "h2+quic/43". What's the difference between h2 and http/2 in this case? And what's the "43" in "quic/43"? Protocol version or port number?
Well basically QUIC is still being worked on and is not standardised. Google, as the inventors, have their own implementation (sometimes called gQUIC) which is only available in Chromium based browsers and on a few server implementations. It is based on HTTP/2 (well actually it was based on SPDY which then got standardised into HTTP/2).
It doesn't really use HTTP/2 any more but a modified version of it. So whether you call it h2 or http/2 doesn't really matter - it's neither. But at a high level h2 and http/2 can be treated the same in this context.
When QUIC is formally standardised later this year (or possibly even next year) by the IETF, it will use the name HTTP/3 to reflect its divergence from HTTP/2, and so the label should change to h3. That is currently being worked on, but no browser supports it yet. It is known as iQUIC for now, but I imagine it will just become QUIC after it becomes a formal standard and Google migrates to it and stops using gQUIC (in a similar way that Google deprecated SPDY once HTTP/2 was formalised). gQUIC and iQUIC are already quite different.
The number 43 is a version number, not a port. Google used to iterate QUIC quite quickly as they were in charge of both ends (browser and server), though that seems to have slowed down now (hopefully reflecting its maturity and the fact that fewer changes are needed). There used to be a change log in the Chromium source code showing what changed in each version, but I can't find it now...
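For reference, a page can read its own negotiated protocol string via performance.getEntriesByType("navigation")[0].nextHopProtocol. To make the string format concrete, here is a purely illustrative parser (my own sketch, not any browser's actual logic) for DevTools-style protocol strings:

```typescript
interface ProtocolInfo {
  application: string;       // e.g. "h2" or "http/2"
  transport: "tcp" | "quic"; // what the application protocol runs over
  quicVersion?: number;      // e.g. 43: a gQUIC version number, not a port
}

// Split a DevTools-style protocol string like "http/2+quic/43" into
// its application-protocol and transport parts.
function parseProtocol(s: string): ProtocolInfo {
  const quicMatch = s.match(/^(.+)\+quic\/(\d+)$/);
  if (quicMatch) {
    return {
      application: quicMatch[1],
      transport: "quic",
      quicVersion: Number(quicMatch[2]),
    };
  }
  // Plain "h2" or "http/1.1" means HTTP over TCP (with TLS for h2).
  return { application: s, transport: "tcp" };
}

console.log(parseProtocol("http/2+quic/43")); // transport "quic", gQUIC version 43
console.log(parseProtocol("h2"));             // plain HTTP/2 over TCP
```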

Chrome parse date with 11 minute difference

I saw a similar answer here Parsed date has minute difference but it's not exactly the same
I have a problem with Google Chrome. I have an application developed with GWT. This application sends an RPC to a server and gets some data in return.
In this data there are some Date objects. Viewing these dates in Edge and Firefox everything is OK, but in Chrome they are 11 minutes behind.
I don't think it's a "code parsing" problem... because if I inspect the RPC answer in Firefox and Chrome, I can see that the answer is already wrong in Chrome.
In Firefox I see the object as "jsdate: Date 1800-01-01T07:30:00.000Z" and this is what I expect.
In Chrome I see the object as "jsdate: 1800 08:19:56 GMT+0049".
You can see DevTools screenshots via the links below:
firefox
chrome
In Chrome version 69 I get this.
In older versions of Chrome (for example 63) I get the same as Firefox.
Note that the time in Italy in 1800 was a bit different than now -
https://www.timeanddate.com/time/zone/italy/rome (pick 1800 - 1849 in "Time zone changes for:")
https://www.timeanddate.com/time/zone/italy
In Italy, standard time was introduced in 1893. Until then, the
country had been using solar mean time, based on Italy's longitude. It
was 49 minutes and 56 seconds ahead of GMT, then the world's time
standard.
In 1893, Italy advanced its clocks by 10 minutes and 4 seconds, so the
local time was exactly 1 hour ahead of GMT. The country still uses
this local time as standard time today.
So I am not sure this is a bug in Chrome; it may be a feature. I think Chrome simply updated its time zone data to include older periods, and it is now more accurate. Other browsers may follow suit at some point.
Since new Date(...) converts the input to the local time, the time zone is taken into account.
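The arithmetic checks out against the question: Rome's pre-1893 local mean time was UTC+00:49:56, so 07:30 UTC becomes 08:19:56 local. A sketch reproducing this in a V8-based engine with full ICU data (Node behaves like recent Chrome here; engines without historical time zone data will print 08:30:00 instead):

```typescript
// 07:30 UTC on 1800-01-01, the timestamp from the question.
const d = new Date(Date.UTC(1800, 0, 1, 7, 30, 0));

// Format it in the Europe/Rome zone. Engines that ship the historical
// tz database apply local mean time (UTC+00:49:56) for dates before
// standard time was adopted, so this should print 08:19:56 rather than
// the naive 08:30:00.
const fmt = new Intl.DateTimeFormat("en-GB", {
  timeZone: "Europe/Rome",
  hour: "2-digit",
  minute: "2-digit",
  second: "2-digit",
  hour12: false,
});
console.log(fmt.format(d));
```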

where is the chrome dev tools "continuous page repainting" option?

There used to be an option in dev tools under Render right next to the FPS Meter that said "Enable continuous page repainting". Now it's not there. Where did it go?
Version 60.0.3112.113 (Official Build) (64-bit)
It's been removed. See Crbug issue #523040.
The broader question here is, what workflow can you use to replace Continuous Painting Mode (CPM)?
CPM was before my time, but it looks like it gave you a realtime estimate of painting cost for the entire page. There's nothing equivalent to that anymore, but Performance recordings can definitely help you gauge how much painting your page is doing. The general idea is to start recording, interact with your page, and then analyze the recording results to see how many paint events occurred, and how much time each one took. See Get Started With Analyzing Runtime Performance to get familiar with the Performance panel.
2 Nov 2017 Update
There's a new feature coming to DevTools in Chrome 64 (which is currently in Canary) that's close to CPM called the Performance Monitor. It shows a realtime view of FPS, Layouts Per Second, and Style Recalculations Per Second.
The "continuous page repainting" debugging option was removed from Chrome quite a few versions ago. However, you can still get to paint instrumentation in the Performance tab of the developer tools:
Developer tools -> Performance -> Settings -> Enable advanced paint instrumentation
This will not enable continuous repaint since, as far as I can tell, Chrome no longer supports it, but it will let you see a profile of how your page actually behaved during a recording, which can be very useful for tracking down performance problems. It is integrated with the other performance profile data as well.
I personally have found this article: https://blog.algolia.com/performant-web-animations/ to be useful if you're working on animations, but I'm not going to summarize it here since it's quite long and I'm not sure you're specifically looking to improve animation performance anyway. (No association with the author; just useful info.)

Chrome extension architecture for real time update: websocket vs. regular fetch

I'm building an app which is made of a webserver (currently using NodeJS but doesn't really matter here) and a Chrome extension.
The point of the Chrome extension is to display some data from the server. For this, I need the extension to be aware of changes on the server-side. To do that, I see 3 possible strategies :
1. Websocket
Having a websocket in the background that triggers updates in real time.
2. Regular fetch
But I can also have a timer that fetches new data at a defined frequency (something like every minute)
3. Event-triggered fetch
Another solution would be to detect browser activity and trigger a server call during that activity (e.g. when the active tab changes)
The trade-off I see is browser memory/network usage vs. real-time updates.
To judge how real-time it needs to be, I'd compare it to an unread-email indicator: it doesn't matter if the user is made aware a minute later, but conceptually it means always displaying outdated data, which doesn't sound great to me.
On the other hand, if websockets or event listeners come at too high a cost and would eventually slow down the browser too much, I'd rather have a minute's delay. :)
But maybe polling the server every minute, with no update to fetch most of the time, may be even more expensive than keeping a websocket alive...
Well, any insights on this trade-off and how to choose would be highly appreciated!
Special thanks to @Rob-w for helping me to reformulate the question.
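Whichever transport you choose, you will want a scheduler that backs off when the server is unreachable, so a dropped websocket (or a failing poll) doesn't hammer the server. A minimal sketch of capped exponential backoff (all names are mine; the websocket wiring is shown only in comments since it is browser-side):

```typescript
// Delay before reconnect attempt n (0-based): base * 2^n, capped at
// maxMs. Quick to recover from a blip, gentle during a long outage.
function backoffMs(attempt: number, baseMs = 1000, maxMs = 60000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Sketch of how it would drive a reconnecting client:
// let attempt = 0;
// function connect() {
//   const ws = new WebSocket("wss://example.invalid/updates");
//   ws.onopen = () => { attempt = 0; };          // reset after success
//   ws.onclose = () => setTimeout(connect, backoffMs(attempt++));
// }

console.log([0, 1, 2, 6, 10].map((n) => backoffMs(n)));
// 1000, 2000, 4000, then capped at 60000
```

The same helper works for the polling variant: on a failed fetch, schedule the next poll with backoffMs instead of the fixed one-minute interval.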