Thanks in advance,
Safari and Chrome are randomly timing out / freezing / hanging. This happens on random pages. If I click refresh about 5 times, the pages then load quickly. If I just let a page keep trying to load, I eventually get a "server is not responding" error. The same pages always load very fast in Firefox. I typically see the problem after I am logged in to the backend, which involves sessions / cookies. Cookies are enabled in both browsers. IE also works, but is slower than Firefox. I am not sure what is wrong in the code that is causing the problem.
Safari 6.0
Chrome 21.0
Firefox 15.0
on Mac 10.8.1
I have also had problems on Windows XP with Chrome. A few times I lost my session and had to log in again on IE 6.
Try adding an EXPIRES header:
Web pages are becoming increasingly complex with more scripts, style
sheets, images, and Flash on them. A first-time visit to a page may
require several HTTP requests to load all the components. By using
Expires headers these components become cacheable, which avoids
unnecessary HTTP requests on subsequent page views. Expires headers
are most often associated with images, but they can and should be used
on all page components including scripts, style sheets, and Flash.
and configuring Entity Tags:
Entity tags (ETags) are a mechanism web servers and the browser use to
determine whether a component in the browser's cache matches one on
the origin server. Since ETags are typically constructed using
attributes that make them unique to a specific server hosting a site,
the tags will not match when a browser gets the original component
from one server and later tries to validate that component on a
different server.
I got the information above using YSlow on Chrome.
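If the site happens to run on Apache (the question doesn't say, so treat this purely as a sketch), both suggestions could be tried with something like the following in .htaccess; the MIME types and lifetimes are only illustrative:

# .htaccess sketch - assumes Apache with mod_expires and mod_headers enabled
<IfModule mod_expires.c>
    ExpiresActive On
    # Far-future Expires headers for static components (example lifetimes)
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
</IfModule>

# If the same components can come from more than one server, the ETags will
# not match across servers; a common remedy is to stop emitting them entirely.
FileETag None
<IfModule mod_headers.c>
    Header unset ETag
</IfModule>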
Related
I have this 1 Mb file for testing purposes (though the issue applies to any file hosted at lptoronto.com): https://lptoronto.com/sandbox/test.png
It always loads instantly in Internet Explorer (both via http and https).
In Google Chrome (currently Version 63.0.3239.132 (Official Build) (64-bit) on Windows 10) the situation is as follows.
When fetched via http, this file loads instantly without an issue.
When fetched via https, the same file takes about 40 (!) seconds to load (with very rare and irregular exceptions, when it also loads fast via https on occasion).
Chrome's network monitor shows that for all of those 40 seconds the image is being slowly but steadily downloaded at low speed, i.e. it is not stuck waiting for a server response or anything like that.
Here's the screencast showing IE and Chrome side-by-side loading the same image:
https://www.youtube.com/watch?v=M4cUuhG1YuM
From time to time the issue disappears for a few minutes or an hour, but then reappears, without me doing anything on my side.
The same behavior is observed on at least one other computer, my colleague's (different ISP, different location).
Needless to say, I'm testing in a clean environment: cache cleared, extensions, firewall and antivirus disabled, connection verified and measured, etc.
No Chrome issue whatsoever with any other website, be it http or https.
The hosting provider has so far been unable to troubleshoot this on their end, but they are still trying to help (it takes some time). They tried disabling mod_deflate, re-installing SSL and disabling caching rules, but to no avail.
The same issue was observed once before, about 2 months ago. That time I asked the hosting provider to disable SSL completely just so I could keep working on my website content. When they re-enabled SSL less than a day later, the issue was gone; but now it has re-appeared, and there is no clue as to what is going on.
The bottom line:
the issue appears only in Chrome, only with this particular site, and only over https
changing only the browser solves the issue
changing only the protocol (https to http) solves the issue
I have honestly tried to google for anything similar, but failed.
I would appreciate it if you tried the link above in an incognito Chrome window and reported the load/refresh time, and of course any ideas are more than welcome.
Whenever I send a GET request to my web app using Chrome, according to my Apache access log two identical requests are sent to the server (not always, but most of the time; I can't reproduce it reliably, and it's not for the favicon), although only one is shown in the Chrome dev tools. I deactivated all extensions and it's still happening.
Is this (https://news.ycombinator.com/item?id=1872177) true? Is it a Chrome feature, or should I dig deeper into my app to find the bug?
I think it's even worse than that. My experience here (developing with Google App Engine) is that Chrome is making all kinds of extra requests.
This is possibly due to the option that is in the Settings, checked by default:
Predict network actions to improve page load performance
Here is a really weird example: my website's page runs a notifications check every 15 seconds (done in JavaScript). Even after closing all tabs related to my website, I see requests coming from my IP: some random pages, but also the notification check request. To me that means Chrome has a page of my website running in the background and is even evaluating its JavaScript.
When I request a page, I pretty much always get another request for one of the links on that page. And it also requests the resources of those extra pages (.css, .js, .png files). Lots of requests going on.
I have seen the same behavior with the development server that runs locally.
Happens also from another computer / network.
Doesn't happen with Firefox.
Also, see What to do with chrome sending extra requests?
This Yahoo Developer Network article says that browsers handle non-cacheable resources that are referenced more than once in a single HTML page differently. I didn't find any rule about this in the HTTP/1.1 caching RFC.
I made some experiments in Chrome, but I couldn't figure out the exact rules. A duplicated non-cacheable script tag was loaded only once. Then I referenced the same script in 3 iframes: the first one triggered a network request, but the others were served from the cache. I tried referencing the same URL as the src of an image, and that triggered a network request again.
Is there any documentation about this behavior? How does this differ between browsers?
When a client decides to retrieve a resource, it's RFC 2616 that governs the rules of whether that resource can be returned from a cache or needs to be revalidated/reloaded from the origin server (mostly section 14.9, but you really need to read the whole thing).
However, when you have multiple copies of the same resource on the same page, after the first copy has been retrieved following the rules of RFC2616, the decision as to whether to retrieve additional copies is now covered by the HTML5 spec (mostly specified in the processing model for fetching resources).
In particular, note step 10:
If the resource [...] is already being downloaded for other reasons (e.g. another invocation of this algorithm), and this request would be identical to the previous one (e.g. same Accept and Origin headers), and the user agent is configured such that it is to reuse the data from the existing download instead of initiating a new one, then use the results of the existing download instead of starting a new one.
This clearly describes a number of factors that could come into play in deciding whether a resource may be reused or not. Some key points:
Same Accept and Origin headers: While most browsers use the same Accept headers everywhere, in Internet Explorer they're different for an image vs a script vs HTML. And every browser sends a different Referer when frames are involved, and while Referer isn't directly mentioned, Accept and Origin were only given as examples.
Already being downloaded: Note that that is something quite different from already downloaded. So if the resource occurs multiple times on the page, but the first occurrence is finished downloading before the second occurrence is encountered, then the option to reuse may not be applicable.
The user agent is configured to reuse the data: That implies to me that the decision to reuse or re-retrieve the data is somewhat at the discretion of the user-agent, or at least a user option.
The end result is that every single browser handles caching slightly differently. And even in a particular browser, the results may differ based on timing.
I created a test case with three nested frames (i.e. a page containing an iframe, which itself contained an iframe) and 6 copies of the same script, 2 on each page (using Cache-Control: no-cache to make them non-cacheable, though I also tested other variations, including max-age=0).
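A sketch of such a page structure (the file names are hypothetical; the middle page repeats the same pattern with the innermost page as its iframe, and the innermost page has the two script tags but no iframe):

<!-- outer.html (sketch); test.js is served with Cache-Control: no-cache -->
<!DOCTYPE html>
<html>
  <head>
    <!-- same script referenced twice on this page -->
    <script src="test.js"></script>
    <script src="test.js"></script>
  </head>
  <body>
    <iframe src="middle.html"></iframe>
  </body>
</html>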
Chrome loaded only 1 copy.
Internet Explorer tended to vary, presumably based on load, but was between 1 and 3.
Safari loaded 3 copies, one for each frame (i.e. with different Referer headers).
Opera and Firefox loaded all 6 copies.
When I reused the same resource in a couple of images (one on the root page, one in the first iframe) and a couple of other images for reference, the behaviour changed.
Chrome now loaded 5 copies, 1 of each type on each page. While Accept headers in Chrome are the same for images and scripts, the header order is different, which suggests they may be treated differently, and potentially cached differently.
Internet Explorer loaded 2 copies, 1 of each type, which was to be expected for them. Presumably that could have varied, though, given their behaviour when it was just scripts.
Safari was still only 3 copies, one per frame.
Opera was inexplicably still at 6. I couldn't tell what portion of those were scripts and which were images. But possibly this is also something that could vary based on load or timing.
Firefox loaded 8 copies, which was to be expected for them. The 6 scripts, plus the 2 new images.
Now this was what happened when viewing the page normally, i.e. just entering the page URL into the address bar. Forcing a reload with F5 (or whatever the equivalent is on Safari) produced a whole different set of results. And in general, the whole concept of reloading, F5 vs Ctrl-F5, what headers get sent by the client, etc. also differs wildly from one browser to the next. But that's a subject for another day.
The bottom line is caching is very unpredictable from one browser to the next, and the specs somewhat leave it up to the implementors to decide what works best for them.
I hope this has answered your question.
Additional note: I should mention that I didn't go out of my way to test the latest copy of every browser (Safari in particular was an ancient v4, Internet Explorer was v9, but the others were probably fairly up to date). I doubt it makes much difference though. It is highly unlikely that all browsers have suddenly converged on consistent behaviour in this regard.
Molnarg, if you read the article properly it will become clear why this happens.
Unnecessary HTTP requests happen in Internet Explorer, but not in
Firefox. In Internet Explorer, if an external script is included twice
and is not cacheable, it generates two HTTP requests during page
loading. Even if the script is cacheable, extra HTTP requests occur
when the user reloads the page.
This behavior is unique to Internet Explorer. If you ask me why this happens, I would say that the IE developers chose to ignore the HTTP/1.1 caching RFC, or at least could not implement it. Maybe it is a work in progress. But then again there are a lot of aspects in which IE differs from most other browsers (JavaScript, HTML5, CSS). This can't be helped unless the devs update it.
The Yahoo Dev article you linked lists best practices for high performance. These must accommodate all the IE users, who are impaired by this behavior. That is why including the same script multiple times, though OK for other browsers, hurts IE users and should be avoided.
Update
Non-cacheable resources will generate a network request, whether they are referenced once or multiple times.
From section 2, Overview of Cache Operation, of the HTTP/1.1 caching RFC:
Although caching is an entirely OPTIONAL feature of HTTP, we assume
that reusing the cached response is desirable and that such reuse
is the default behavior when no requirement or locally-desired
configuration prevents it.
So using a cache means attempting to reuse, and non-cacheable means the opposite. Think of it like this: a non-cacheable request is like an HTTP request with the cache turned off (a fallback from HTTP with the cache on).
Cache-Control: max-age=n does not prevent the cache from storing the response; it merely states that the cached item is stale after n seconds. To prevent the cache from being used at all, send these headers for the image:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
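If the images are served by Apache, one way to emit those headers is a mod_headers block like the sketch below; the file pattern is just an example, and this assumes mod_headers is enabled:

<IfModule mod_headers.c>
    <FilesMatch "\.(png|jpe?g|gif)$">
        Header set Cache-Control "no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires 0
    </FilesMatch>
</IfModule>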
I'm building a photography site, which ideally should work offline. Caching the required css/js files is pretty straightforward. The issue is what to do with the actual photographs.
I'm currently loading them from Flickr, both a thumbnail version and a full-res one. This brings me to two questions:
Is it possible to cache files that are delivered from an external source, or does everything have to come from the same domain?
It is probably too much to cache all pictures in the app-cache, because it will cause a big download the first time a user hits the site. What are the recommendations here? Is it possible to have the user explicitly turn on the full version of the app cache?
If you're not serving anything over https, you can include resources from as many origins as you want in the CACHE section. But cross-origin appcaching with https only works in Chrome today. I doubt Flickr serves images over https, so you should be fine.
Some browsers will prompt the user the first time an appcache would be downloaded, but not all (I know Firefox does, but Chrome doesn't). For more control you would have to implement some logic in your application. Maybe let users make a choice within your application, store it as a per-user setting and then only serve your page with a manifest to users who opted in.
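As a rough sketch of that setup (the file names and the Flickr URL are placeholders), the manifest would list the cross-origin images in its CACHE section:

CACHE MANIFEST
# v1 - change this comment whenever a listed file changes, to force an update

CACHE:
css/site.css
js/site.js
# cross-origin entries are allowed here as long as the page is served over http
http://farm1.staticflickr.com/123/456789_t.jpg

NETWORK:
*

Users who opted in would then get the page served as <html manifest="photos.appcache">, while everyone else gets the same page without the manifest attribute.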
What is new in HTML 5’s “offline web applications” feature which was not already available in all browsers?
Offline caching is the job of the browser — how did it become a job of HTML?
A web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.
As written in Wikipedia’s article for Web cache.
And this is what the W3C website says about the offline web cache:
In order to enable users to continue interacting with Web applications and documents even when their network connection is unavailable — for instance, because they are traveling outside of their ISP's coverage area — authors can provide a manifest which lists the files that are needed for the Web application to work offline and which causes the user's browser to keep a copy of the files for use offline.
What is HTML 5 doing better and different in caching?
Is it similar to the offline mode in Internet Explorer 5? And can we cache data beyond the limit of the amount of space set in the browser?
Please give me an example so that I can understand the difference between the HTML5 offline cache and browser caches.
Web browser caching is when browsers decide to store files locally to improve performance. HTTP allows web servers to suggest to browsers how long to store the files for, and allows browsers to ask the server whether a file has changed (so that they can avoid re-downloading it).
However, it’s not designed to reliably store assets required by an offline application. It’s ultimately up to the browser whether, and for how long, it caches the files. And browsers will often stop using their cached version if they can’t contact the server to check that it’s up-to-date.
The HTML5 offline web applications spec provides web authors with the ability to tell browsers what to store for offline access, and requires browsers to keep those files up-to-date when it is online. It also provides a DOM property that tells the developer whether the browser is online or offline, and events that fire when the online status changes.
As Peeter describes in his answer, this allows web app developers to store user-inputted data whilst the user is offline, then sync it with the server when they’re online again. The developer has to do this storage and syncing manually, as the browser only provides the events indicating online status, but if the browser also supports localStorage, the developer can store the data there.
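For example, the signal the browser gives the developer is just this (plain JavaScript; the log messages are only illustrative):

// navigator.onLine reflects the current status; the window fires
// "online" / "offline" events whenever it changes.
console.log('Currently ' + (navigator.onLine ? 'online' : 'offline'));

window.addEventListener('online', function () {
  // a good moment to push locally stored changes back to the server
  console.log('back online');
});

window.addEventListener('offline', function () {
  // from here on, read and write to localStorage instead of the network
  console.log('connection lost');
});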
I can do no better than point you to the relevant chapter of Dive into HTML5: http://diveintohtml5.ep.io/offline.html
You can now cache dynamic data, instead of just js/css/html files / images.
Let's say you've got a todo list application open in your browser. You're connected to the internet and you're adding a bunch of stuff you have to do.
Boom, you're on an airplane without a connection. You've got 6 hours of time to kill so you decide to get some work done. You finish all of the things on your todo list (the list was still open in your browser). You select all of the items and change their state to "finished".
Your plane lands, you open up your laptop and refresh the page. All the changes you made without a connection are now synced to the server, as you have an internet connection again.
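A minimal sketch of how such a todo list could queue changes offline and push them when the connection returns; the "pendingChanges" localStorage key and the "/sync" endpoint are made up for illustration:

// Record a change locally so it survives being offline (hypothetical schema).
function markFinished(todoId) {
  var pending = JSON.parse(localStorage.getItem('pendingChanges') || '[]');
  pending.push({ id: todoId, state: 'finished' });
  localStorage.setItem('pendingChanges', JSON.stringify(pending));
  if (navigator.onLine) {
    syncPending();
  }
}

// Push every queued change to the server in one batch (hypothetical endpoint).
function syncPending() {
  var pending = JSON.parse(localStorage.getItem('pendingChanges') || '[]');
  if (pending.length === 0) return;
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/sync');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    if (xhr.status === 200) {
      localStorage.removeItem('pendingChanges'); // server accepted the batch
    }
  };
  xhr.send(JSON.stringify(pending));
}

// When the plane lands and the connection is back, sync automatically.
window.addEventListener('online', syncPending);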