I'm looking for a way to make the browser ignore a particular site's no-cache setting and always serve the page from the cache when I use the "back" button. A normal forced reload (Ctrl+R) should behave as usual.
Ideally the site wouldn't even know that I viewed the cached version of the page, and the page would still show a form filled in just before submission (barring any JavaScript communication, of course).
The issue is that every article on Google deals with how to clear or ignore the cache, while I want the exact opposite.
Any ideas?
Currently, I am facing an issue with an HTML page. When a user visits the site, the browser caches the entire HTML page, so when the user hits the URL again, the browser takes the HTML from its cache instead of requesting it from the server. A team member forgot to add the meta tags that would force the browser to fetch the content from the server each time. Is there any way to resolve this? Since the page request never reaches the server, users will not see the refreshed content of the website unless they do Ctrl+F5. I went through many sites and Stack Overflow questions and did find solutions for forcing an HTML page to load its content from the server using meta tags, but those tags only help future visitors. Is there any fix we can apply for existing users?
The problem is that the page never calls the server for content; it just loads from the cache.
There's nothing you can do.
You've previously instructed the browser to cache the file (presumably for a long time) and not to check for updates (via ETags or If-Modified-Since), so it is going to use the cached version until that cache expires, whether automatically or through user intervention (which might happen sooner than your caching instructions specified).
There's no way to provide new caching instructions to the browser without it requesting them from the server (which it won't do because of the existing rules).
Is there a way to create a Chrome extension to trick a site loaded in an iFrame into thinking it's not in a frame?
We load clients' sites into an iframe for demos, but some resources get blocked due to them disallowing being loaded in an iFrame. We'd like to load these sites into a frame as though you were browsing directly to the site in a standalone tab.
You should use Chrome's webRequest API to intercept the server response. See the API docs. You want the onHeadersReceived event, where you are in control of the response headers: remove the X-Frame-Options header from the response.
That's pretty much it, if this is the only problem in loading those sites.
However, for the sake of completeness: to fully trick the page (which you most likely do not need), you would also have to inject a script into every page that cleans up things like window.parent (by simply removing them from the window object) and a few others such as origin. In practice, removing the header alone covers 99.9999% of use cases.
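A sketch of that listener, assuming a Manifest V2 extension with the webRequest and webRequestBlocking permissions; the URL filter is a placeholder you should narrow to your clients' sites. The header filtering is a pure function so it can be tested on its own. Note that sites can also block framing with a Content-Security-Policy frame-ancestors directive, which this sketch does not handle:

```javascript
// Pure helper: drop the X-Frame-Options header from a response header list.
function stripFrameHeaders(headers) {
  return headers.filter(
    (h) => h.name.toLowerCase() !== 'x-frame-options'
  );
}

// Extension wiring (Manifest V2, "webRequest" + "webRequestBlocking" permissions).
if (typeof chrome !== 'undefined' && chrome.webRequest) {
  chrome.webRequest.onHeadersReceived.addListener(
    (details) => ({ responseHeaders: stripFrameHeaders(details.responseHeaders) }),
    { urls: ['<all_urls>'] }, // placeholder: narrow to the demo sites you load
    ['blocking', 'responseHeaders']
  );
}
```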
I've noticed that both Firefox and Chrome issue a new HTTP request when you view the source for a web page that you've already loaded. It's particularly annoying when the page itself is slow to load or if it won't load at all.
Why is that? Wouldn't they have the existing source for the originally received page cached already? Is it based on Cache-Control headers?
This has been on my mind for a while (usually, comes up when looking at what's behind slow web apps).
In the context of Chrome, according to this link it is indeed based on Cache-Control headers.
...view-source grabs the html source from the http cache
and pretty-prints it, but for the page NOT in http cache, it's 'forced to'
make a new request.
To me, this makes sense. You wouldn't want to use the currently rendered DOM as the source of truth, since the HTML can obviously be manipulated dynamically. Failing that, the HTTP cache is the next likely candidate for the source. If the source is unavailable from the cache, a fresh GET seems to be the only alternative.
This does, however, introduce another interesting dilemma, raised here:
Requesting the URL again doesn't make sense as there is no guarantee that source received during the second request will match what was received during the first request.
I would imagine this was a conscious trade-off that was made to ensure that a view-source request is always satisfied in some form or another.
Use "Inspect Element" to see the live state of the page. View-source reloads the page to show the source code without modification.
To improve the performance/responsiveness of my website I have implemented a partial page load using AJAX, replaceState, pushState, and a popstate listener.
I essentially store the central part of my page (HTML) as my state object in the history. When a link is clicked I request just the central bit of the page from the server (identifying these requests with a different Accept header) and replace it with JavaScript. On popstate I grab the previous central part and push it back into the DOM.
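For reference, the pattern described can be sketched like this; the endpoint handling, element id, and Accept value are assumptions, not the poster's actual code:

```javascript
// Swap only the central content back into the page.
function applyState(doc, state) {
  doc.getElementById('main').innerHTML = state.main;
}

// Browser-only wiring (skipped outside a browser environment).
if (typeof window !== 'undefined') {
  async function navigate(url) {
    const resp = await fetch(url, {
      headers: { Accept: 'application/x-partial' }, // marks a partial request
    });
    const main = await resp.text();
    history.pushState({ main }, '', url); // remember the fragment in history
    applyState(document, { main });
  }

  window.addEventListener('popstate', (e) => {
    if (e.state) applyState(document, e.state); // restore on back/forward
  });
}
```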
This mostly works fine, however I have found a particular issue which I am stuck on. It is a little complicated to explain, so my apologies if this isn't very clear.
There is a search form on most of our pages. The partial page loading via ajax is only on GET requests, and the form performs a POST which results in a full page load.
If I navigate the following sequence of pages, I end up on a malformed partial page consisting of ONLY the central content, without any of the surrounding DOM:
1. Start on the Home Page (via full page load), then perform a Search (POST-redirect-GET).
2. This takes you to the Search Results (via full page load); click Home.
3. This returns you to the Home Page (via dynamic GET); click browser back.
4. Search Results appear (from the popstate listener); click browser back again.
5. Malformed Home Page.
When the malformed home page appears, my popstate listener is not present at all.
What I think is happening, is that the second load (dynamic, partial) of the home page is being cached by the browser, and then when the full page back occurs, the browser is merely showing the cached partial response rather than the full page.
To try to remedy this I have added a Vary: Accept header to the response, to let browsers know that the content may change based on the Accept header. I have also added Cache-Control: max-age=0, Pragma: no-cache, and a past Expires date to the partially loaded content to try to stop the browser from caching it, but none of this solves it.
Unfortunately my company does not allow external traffic to our dev servers, so I cannot show you the problem. I have looked at various similar questions on here, but none of them seem quite the same, nor do the solutions suggested seem to work.
If I add a pointless parameter (blah=blah) to my dynamic GET requests, this solves the problem. However this is an ugly hack that I'd rather not do. This seems like it should be solvable with headers, since I think it is a caching problem. Can anyone explain what's going on?
That's a caching issue. With the response header Cache-Control set to no-cache or max-age=0, the problem doesn't happen in FF (as you said), but it persists in Chrome.
The header that worked for me is Cache-Control: no-store. That's not consistently implemented by all the browsers (you can find questions asking what is the difference between no-cache and no-store), but has the result you expect in Chrome as well.
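The difference can be sketched as a small header-choosing helper, assuming partial (AJAX) responses are identified by a custom Accept value (a placeholder here): no-cache lets the browser store the response but forces revalidation before reuse, while no-store forbids keeping it at all, which is what stops Chrome from resurrecting the partial page on back navigation:

```javascript
// Choose caching headers per response: partial fragments must never be
// stored, or the browser may replay one as a full page on back/forward.
function cacheHeadersFor(acceptHeader) {
  const isPartial = acceptHeader === 'application/x-partial'; // placeholder value
  return {
    'Cache-Control': isPartial ? 'no-store' : 'no-cache',
  };
}
```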
I had a similar issue. I'm building a web-based wizard and using jQuery AJAX to load each step (ASP.NET MVC on the back end).
The initial route is something like /Questions/Show, which loads the whole page and displays the first question (question 0). When the user clicks the Next image, it does a jQuery .load() with the URL /Questions/Save/0. This saves the answer and returns a partial view with the next question. The next save does a jQuery .load() with /Questions/Save/1, and so on.
I implemented History.js so that the user can go back and forward (forth?). It stores the question number in the state data. When it detects a statechange (and the state's question number is different from what's on the page), it does a jquery .load() to load the correct question.
At first I was using the same route as when the initial page is loaded (/Questions/Show/X, where X is the question number). On the back end I detected whether it was an AJAX request, and if so returned a partial view instead of the full view. This is where an issue similar to yours came in: say I was on question 3, went back, then forward, then went to www.google.com, and then hit the back button. It showed question 3, but as the partial view, because the browser had cached the partial for that route.
My solution was to create a separate route for the ajax call: /Questions/Load/X (it used common code on the back end). Now with two different routes (/Questions/Show for non-ajax and /Questions/Load for ajax), the browser shows the full page correctly in the above situation.
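The route split can be sketched as a small dispatcher; the paths mirror the ones named above, while the return shape is hypothetical:

```javascript
// Distinct routes for full vs partial views, so the browser's cache
// never stores a partial under the same URL as the full page.
function viewFor(path) {
  const show = path.match(/^\/Questions\/Show(?:\/(\d+))?$/);
  if (show) return { kind: 'full', question: Number(show[1] || 0) };
  const load = path.match(/^\/Questions\/Load\/(\d+)$/);
  if (load) return { kind: 'partial', question: Number(load[1]) };
  return null; // unknown route
}
```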
So maybe a similar solution would work for you... i.e. two different routes for your home page - one for the full page and one for a partial. Hope that helps.
When a link is clicked I request just the central bit of the page from the server (identifying these requests with a different Accept header) and replace it with javascript.
Awesome. That's the RESTful way to do it. But there's one thing left to do to make it work: add a Vary header to the response.
Vary: Accept
That tells the browser that a request with a different Accept header might get a different response. Because the two requests use different Accept headers, the browser (and any caching proxies) will cache the responses separately.
Unlike setting Cache-Control: no-store, this will still allow you to use caching.
I have a situation where my page loads some information from a database, which is then modified through AJAX.
I click a link to another page, then use the 'back' button to return to the original page.
The changes I previously made through AJAX don't appear, because the browser has the unchanged page stored in its cache.
Is there a way of fixing this without setting the page not to cache at all?
Thanks :)
Imagine that each request to the server for information, including the initial page load and each AJAX request, is a distinct entity. Each one may or may not be cached anywhere between the server and the browser.
You are modifying the initial page that was served to you (and, in most cases, cached by the browser) with arbitrary requests to the server and dynamic DOM manipulation. The browser has no capacity to track these changes.
You will have to maintain state, perhaps using a cookie, in order to reconstruct the page. In fact, it seems to me that a dynamically generated document you wish to navigate to and from should definitely have a defined workflow that persists and retrieves its state.
Perhaps set a cookie for each manipulated element with the key that was sent to the server to get the data?
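That idea can be sketched as a pair of helpers (the names and cookie key are hypothetical): after each AJAX update you record the changed element's key and value, and on page load you replay the recorded state so that back navigation restores the changes. sessionStorage would work similarly where available.

```javascript
// Serialize a map of { elementKey: value } into a single cookie-safe
// string, and parse it back. encodeURIComponent keeps '=', ';', and '&'
// in keys or values from breaking the encoding.
function encodeState(state) {
  return Object.entries(state)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
}

function decodeState(raw) {
  const state = {};
  if (!raw) return state;
  for (const pair of raw.split('&')) {
    const [k, v] = pair.split('=');
    state[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return state;
}

// In the page (not runnable here): after each AJAX update,
//   document.cookie = 'ajaxState=' + encodeState(state) + '; path=/';
// and on load, reapply decodeState(...) to the matching elements.
```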