In one of the online documents about the HTML5 appcache, it says that the cached files get updated once an offline user reconnects. I checked the original HTML5 appcache definition from the W3C, and I am not able to find anything that supports this statement.
Does anyone know if this is true?
Thanks in advance
MDN says the following, although if you scroll up on that page it says it's being deprecated.
If an application cache exists, the browser loads the document and its associated resources directly from the cache, without accessing the network. This speeds up the document load time.
The browser then checks to see if the cache manifest has been updated on the server.
If the cache manifest has been updated, the browser downloads a new version of the manifest and the resources listed in the manifest. This is done in the background and does not affect performance significantly.
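For context, the cache manifest is just a plain-text file referenced from the <html manifest="..."> attribute; a minimal, made-up one looks something like this:

CACHE MANIFEST
# v1 - bump this comment to make browsers refetch everything on the next online check
CACHE:
index.html
styles.css
app.js
NETWORK:
*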
And logic tells me it would also depend on the app you're using, the server you're trying to connect to and any special settings it might have, how long your browser keeps its history and what it keeps, and, if you saved the page to view offline, whether or not you have all the code/images saved in the right location(s).
Example:
Imagine you saved a page to view offline, and that page has a JS event handler running a loop that makes an ajax request every n seconds to do something, like update a number on the page, for as long as you're online... If you suddenly connect to the internet while that loop is running, and it makes the request to the proper url with the right arguments, then it should go through, even though the url in your browser might say something like file:///C:/Users/you/Desktop/....
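A rough sketch of the kind of loop I mean (the url, element id and interval are made up for illustration):

// polls the server every n seconds; the request only succeeds once the network is actually reachable
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://example.com/api/counter'); // hypothetical endpoint
  xhr.onload = function () {
    document.getElementById('counter').textContent = xhr.responseText;
  };
  xhr.send();
}, 5000); // n = 5 seconds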
I've done this before, even though my url was like the one above. One time I was using braintree's drop-in javascript on a website, and using its API on my backend. Trying to load the page while offline = nothing. Online = it updated the spot on the page just fine when I had the required arguments and it was pointing to the right url. If I went offline again, I could refresh the page and see the same images loaded in the <div>, but I couldn't send any data with it.
I ran into an interesting problem today using Chrome and I'm hoping there is a better way to fix it than what I ended up doing.
The issue starts with an invalid SSL certificate on a site that I'm configuring. In Chrome it's possible to advance past this screen using a link which adds a security exception for the current domain so that you don't have to view this warning message again.
It's also possible to clear this warning by going to the site with the exception then clicking the Not secure text and choosing the Re-enable warnings option.
Now my problem: I have a couple of different redirects in place on the site that will redirect my .com and .bank domains to the primary .net domain. While developing I added security exceptions for all three of these domains. This becomes an issue when testing that my SSL certificate is configured properly. I want to clear out Chrome's stored exception for the .com domain - but I cannot do so using the Re-enable warnings option, because as soon as I arrive at the page Chrome sees that an exception is already stored and proceeds to load the page normally, which then gets redirected to the .net domain. Because of this there is no point where I can actually clear out the bypassed security warning in Chrome...
The only way I've been able to find to clear out these exceptions is to use the Reset option in Chrome's settings, which is not something I want to do regularly. I'm wondering if there is a hidden settings page in Chrome that lists all of the bypassed security warnings so that I may clear them out individually.
To "Re-enable warnings" for all SSL warnings if you don't want to clear your history (or if you dont know all the exemptions you have in place), you can close Chrome and edit:
"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Default\Preferences"
and set "ssl_cert_decisions":{},
Stored in the JSON-path:
profile > content_settings > exceptions > ssl_cert_decisions
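In other words, inside that Preferences file the cleared entry sits roughly like this (surrounding keys trimmed):

"profile": {
  "content_settings": {
    "exceptions": {
      "ssl_cert_decisions": {}
    }
  }
}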
Or you can change the decision_expiration_time of the specific exemption to be equal to the last_modified time
Example: "ssl_cert_decisions":{"https://expired.badssl.com:443,*":{"last_modified":"13235055329485008","setting":{"cert_exceptions_map":{"-201cgaDTf2DD6Cj0N6/tKvudkzDuRBA3GwKd8T9hE7mHhQ=":1},"decision_expiration_time":"13235055329485008","version":1}}}
You will have to clear the browsing data for that site. The easiest way I found to do this is to press Ctrl+Shift+Del to bring up the Clear browsing data window, set the time range to 1 hour, choose Browsing history only, then click Clear data. Hope this is useful.
I know this was asked before but this is what I'm experiencing -
I'm working on a Chrome extension that needs to persist some data, and I'm using localStorage for that. When I go to Settings -> Tools -> Clear Browsing Data and check everything (including 'since the beginning of time'), I would expect the localStorage of my background page to clear.
However, everything stays put. The localStorage wasn't deleted!
It's not that I don't like that behavior - it's actually pretty great for my app - but is this normal? Shouldn't localStorage be deleted once the user tries to clear everything, just like cookies should be?
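For reference, the persistence is nothing exotic; my background page just uses plain localStorage calls, something like this (the key name is made up):

// background page script
localStorage.setItem('savedData', JSON.stringify({ count: 42 }));
var restored = JSON.parse(localStorage.getItem('savedData'));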
P.S.
I found this nice blog post that asks and tries to answer the same question:
http://sharonminsuk.com/blog/2011/03/21/clearing-cache-has-no-effect-on-html5-localstorage-or-sessionstorage/
Seems like the behavior changes from browser to browser. The behavior I talked about happens on Chrome 28.0.1500.71 m
This bug is not normal behavior (to answer your question).
I'm calling this a bug because someone might be using a computer at a library with some type of locally hosted application. There is a clear expectation that data is not retained in any way after a purge going back to the "beginning of time".
Firefox purges localStorage data when you clear all browser data. It does this if the file is stored locally or hosted on a web domain.
Chrome purges localStorage data only if your code is hosted on a web domain.
I made a video of this bug..
https://youtu.be/CgojKg4v7X0
Save the HTML/JS from this URL to a local drive to reproduce the bug...
https://html5dataprivacy.github.io/
steps:
- load a local web page containing JavaScript HTML5 storage code
- interact with the page in a way that changes the data it stores
- clear everything in history back to the beginning of time
- give the keyboard and mouse to another user in the library or public cafe...
result: the JavaScript storage is retained; another person can see your data...
expected result: the data is purged for the new person at the keyboard
notes: This bug does not exist in the current version of Firefox as of April 19th, 2017. It does not occur if Chrome is working off a hosted domain.
Workaround: After you clear things to the beginning of time you must open up the console and type "localStorage.clear()"
ps: please be kind. This is my first attempt to answer on Stack Overflow :)
I've recently added HTTP headers to my site to inform the browser to check with the server every time it comes across a given JS/CSS URL. I've tested it and it works perfectly; all browsers now make conditional GET requests.
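Headers along these lines are what I mean by "check with the server every time" (the values here are illustrative, not necessarily my exact config):

Cache-Control: no-cache
ETag: "abc123"

With no-cache plus a validator like an ETag or Last-Modified, the browser revalidates with a conditional GET (If-None-Match / If-Modified-Since) before reusing its cached copy.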
Here's the problem though -- people still have the old headers cached; headers which more or less told the browser "cache this forever; don't bother asking for an update!". This can be busted with a hard refresh. I don't want to have to communicate to everyone to please hit F5 on any buggy pages after we push out code.
Are there any HTTP header(s)/HTML meta tag(s) I could put on the HTML document itself to say "Browser, ignore the headers you have on the JS/CSS files and download the latest version of all the included files on this page"?
Eventually this problem will work itself out as more and more people clear their cache or learn to refresh on their own. But, I'd rather fix it now. Then in a month or so, I'll remove the HTML-level headers to get caching where I want -- on a per resource basis.
EDIT: I do not want to rename the resources or add on query parameters. That's what we used to use (?v=18, ?v=19, etc.) and it was a chore to increment that number every time we updated resources. Even doing that programmatically isn't the ideal solution; especially now that our server is configured correctly. It makes more sense to do it on the HTTP level so it works regardless of how you're accessing it -- included on a page, directly from the address bar, or otherwise.
Pass a parameter on the script source which will force a reload of the script... in fact you could do it by version or similar.
<script src="/test/script/myawesomescript.js?ver=1.0&pwn=yes" ...>
That would work and be seamless to the other users... when you feel like it has been long enough, go back to the old way. But this will work if you want to force a refresh from users.
This method is used by some frameworks to prevent caching of webpages. Let me know if you were successful.
http://css-tricks.com/can-we-prevent-css-caching/ -- here is a link to the concept for css (should work for js too) -- the biggest difference is that you don't want it to never be cached, so don't use a timestamp; use a version style like the one above :) enjoy!
Basically the only way is to get the browser not to use the cached URL.
One method is to use a cache-busting dummy parameter on the end of the URL.
some-name.css?q=1
That will force the browser to reload that file (because that URL isn't in the cache), and the downloaded file won't be cached because of your new headers. However: you may need to use this new name indefinitely, because you can't guarantee that the old cached version won't be used again once you leave off the dummy parameter.
The other method is to completely rename the file.
my-new-name.css
Google chrome sends multiple requests to fetch a page, and that's -apparently- not a bug, but a feature. And we as developers just have to deal with it.
As far as I could dig out in five minutes, Chrome does that just to make surfing faster, so if one connection gets lost, the second will take over.
I guess if the website is well developed, its functionality won't be broken by this, because multiple requests are nothing new.
But I'm just not sure if I have accounted for all the situations this feature can produce.
Would there be any special situations? Any best practices to deal with them?
Update 1: Now I see why my bank's page throws an error when I open the page with chrome! It says: "Only one window of the browser should be open." That's their solution to security threats?!!
Your best bet is to follow standard web development best practises: don't change application state as a result of a GET call.
If you're worried I recommend updating your data layer unit tests for GET calls to be duplicated & ensure they return the same data.
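A rough sketch of what I mean, assuming a Jest-style test runner and a hypothetical data-layer function getAccount() backing one of your GET endpoints:

test('duplicated GETs return the same data and change nothing', async () => {
  const first = await getAccount(42);  // hypothetical data-layer call behind a GET
  const second = await getAccount(42); // simulate Chrome repeating the request
  expect(second).toEqual(first);       // a safe GET must be repeatable
});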
(I'm not seeing this behaviour with Chrome 8.0.552.224, by the way - is it very new?)
I saw the behavior in question while writing a server application and found that the earlier answers are probably not true.
Chrome splits a page load into multiple HTTP requests to fetch resources in parallel. In this case, it is an image which it fetches as a separate HTTP GET.
I have attached a screenshot of a packet capture taken with Wireshark.
It is for a simple GET request to port 8080, for which the server returns a hello message.
Chrome sends the second GET request to obtain the favicon that you see at the top of every open tab. It is NOT a second GET to handle a timeout or anything like that.
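If your response is an HTML page and the extra favicon request gets in the way while testing, one common trick (not part of the capture above) is to declare an empty data-URI icon so the browser doesn't ask for /favicon.ico:

<link rel="icon" href="data:,">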
It should be considered another element that differs across browsers. However, making multiple HTTP requests in parallel is kind of a standard thing in browsers as of 2018.
Here is a reference question that I found later:
Chrome sends two requests SO
Chrome issue on google code
It can also be caused by link tags with empty href attributes, at least in Chromium (v41). For example, each of the following lines will generate an additional request for the page:
<link rel="shortcut icon" href="" />
<link rel="icon" type="image/x-icon" href="" />
<link rel="icon" type="image/png" href="" />
It seems that looking for empty attributes in the page, either href or src, is a good starting point.
This behavior can be caused by SRC='' or SRC='#' in an IMG or (as in my case) IFRAME tag. Replacing '#' with 'about:blank' fixed the problem.
Here http://forums.mozillazine.org/viewtopic.php?f=7&t=1816755 they say that SCRIPT tags can be the issue as well.
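For instance, the iframe fix from my case looks like this:

<!-- before: the '#' src triggers the extra request -->
<iframe src="#"></iframe>
<!-- after -->
<iframe src="about:blank"></iframe>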
I observe this characteristic (bug/feature/whatever) when I am typing a URL and the autocomplete lands on a match while I'm still typing.
Chrome takes that match and fetches the page, I assume for the caching benefits that would occur when loading the page yourself....
I have just implemented a single-use Guid token (asp.net/TSQL) which is generated when the first form in a series of two (plus a confirmation page) is generated. The token is recorded as "pending" in the DB when it is generated. The Guid token accompanies posts as a hidden field, and is finally marked as closed when the user operation is completed (payment). This mechanism does work, and prevents any of the forms being resubmitted after the payment is made.
However, I see 2 or 3 (!?) additional tokens generated by additional requests arriving quickly one after the other. The first request is what ends up in front of the user (localhost - so i.e., me); where the generated content ends up for the other two requests I have no idea. I wondered initially why Page_Load handlers were firing multiple times for one page impression, so I tried a flag in Http.Context.Current - but found to my dismay that the subsequent requests come in on the same URL but with no post data and empty Http.Context.Current arrays - i.e., completely (for practical purposes) separate http requests.
How do I handle this? Some sort of token and logic to refuse subsequent page body content requests while the first is still processing? I guess this could take place in a global context?
This only happens when I enable the "webug" extension (which is a FirePHP replacement for Chrome). If I disable the extension, the server only gets one request.
I just want to add an update on this one. I've encountered the same problem, but with a CSS style.
I've looked at all my src, href and script tags and none of them had an empty string. The offending entry was this:
<div class="Picture" style="background-image: url('');"> </div>
Make sure you also check your styles for an empty url() string.
I was having this problem, but none of the solutions here were the issue. For me, it was caused by the APNG extension in Chrome (support for animated PNGs). Once I disabled that extension, I no longer saw double requests for images in the browser. I should note that regardless of whether the page was outputting a PNG image, disabling this extension fixed the issue (i.e., APNG seems to cause the issue for images regardless of image type, they don't have to be PNG).
I had many other extensions as well (such as "Web Developer" which many have suggested is the issue), and those were not the problem. Disabling them did not fix the issue. I'm also running in Developer Mode and that didn't make a difference for me at all.
In my case, it was Chrome (v65) making a second GET /favicon.ico, even though the response was text/plain and thus clearly had no <link in it referring to the icon. It stopped doing that after I replied with a 404.
Firefox (v59) was sending 2 requests for favicon; again it stopped doing this after the 404.
I'm having the same bug, and as in the previous answer, the issue is that I've installed the Validator Chrome extension.
Once I disable the extension, it works normally.
In my case I have an endpoint (JSON data) on a different server, and the browser first makes an empty request (Request Method: OPTIONS) to check whether the endpoint accepts requests from my server - the same-origin policy. Also good to know: it is an Angular 1 app.
In conclusion, I make requests from localhost to online fake JSON data.
I had an empty TCP packet sent by Chrome to my simple server before the normal HTML GET query, and a /favicon request after. The favicon wasn't a problem, but the empty TCP connection was, since my server was waiting either for data or for the connection to be closed. It got no data and the connection wasn't released for 2 minutes, so a thread was hanging for 2 minutes.
Jrummell's link in a comment on the original post helped me. It says empty TCP packets can be caused by the "Predict network actions to improve page load performance" setting. I tried turning the prediction settings off one by one, and in Chrome version 73.0.3683.86 (Official Build) (64-bit) this behavior was caused by the "Use a prediction service to load pages more quickly" setting being turned on.
So in Chrome ~73 you can try going to Settings -> Advanced -> Privacy and security -> Use a prediction service to load pages more quickly and turn it OFF.
It could be a situation where Chrome first sends a request with the OPTIONS method, and only the second one is the real request with the GET method. Usually in code we deal only with GET (or POST/PUT/DELETE...) but not with OPTIONS. Check whether the first request has the OPTIONS method.
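A minimal sketch of spotting that preflight on the server side, assuming a plain Node.js HTTP server (an illustration, not part of the original answer):

const http = require('http');

http.createServer(function (req, res) {
  if (req.method === 'OPTIONS') {
    // CORS preflight: answer it and return; the real GET/POST arrives as a separate request
    res.writeHead(204, {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
      'Access-Control-Allow-Headers': 'Content-Type'
    });
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ hello: 'world' }));
}).listen(8080);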