I'm learning about web performance.
As I understand it, when you request a resource that was served with an "Expires" or "Cache-Control" header that is still valid, the browser will not make a conditional GET to ask the server whether the resource has been modified.
So why does the browser always make a conditional GET when I request this URL: https://www.debian.org/Pics/debian.png?
(screenshot of the request information)
Take a close look at the request headers: "Cache-Control: max-age=0".
When you refresh a URL, your browser always adds this header to make sure it is "refreshed". If you want to see your browser's real caching behavior, try navigating to another URL and then clicking the back button; Chrome should get the image from its cache.
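To see what the server side of that revalidation looks like, here is a minimal Node.js sketch of the handshake; the file name "debian.png" and the port are placeholders for illustration, not anything Debian actually runs:

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var lastModified = fs.statSync('debian.png').mtime.toUTCString();

  // A refresh sends "Cache-Control: max-age=0" plus If-Modified-Since,
  // which turns the request into a conditional GET.
  if (req.headers['if-modified-since'] === lastModified) {
    res.writeHead(304); // Not Modified: no body, the browser reuses its cached copy
    res.end();
    return;
  }

  res.writeHead(200, {
    'Content-Type': 'image/png',
    'Last-Modified': lastModified,
    'Cache-Control': 'max-age=86400' // fresh for a day unless revalidation is forced
  });
  fs.createReadStream('debian.png').pipe(res);
}).listen(8080);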
I've just tried it, and it does!
I noticed that whenever you download a PDF in Chrome, it consistently makes two requests and then cancels one of them. This causes the request to be registered twice in my web app, which I don't want. Is there a way to get Chrome to make only one request for PDFs?
I've researched this topic quite a bit now and have not found a sufficient answer. Closely related answers suggest that Chrome is looking for a favicon, but the network tab shows that it is actually making the same request twice and then cancelling the second one.
Is there a way to prevent Chrome from making the second request?
Below is a link to a random PDF file I found through Google which, when clicked, should demonstrate the behavior. I would have posted a picture of my network tab in DevTools, but this is my first post on Stack Overflow and the site won't let me upload a picture yet.
https://www.adobe.com/enterprise/accessibility/pdfs/acro6_pg_ue.pdf
It looks like a bug in Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=587709
The problem is that when Chrome loads an iframe that returns a PDF stream, it writes an "embed" tag inside that iframe which again contains the same URL as the iframe. This triggers a second request for that URL, which Chrome immediately cancels (see the network tab).
But by that time, the damage is done.
We have the same issue here, and it does not occur in Firefox or IE.
We're still looking for a good solution to this problem.
I'm still trying to find a proper solution, but as a partial "fix" for now you have two options:
1) Set the Content-Disposition to "attachment" in the header.
Setting it to "inline" causes Chrome to make the second, cancelled call.
So, for example, you can do something like this (a Node.js response in the example):
res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Access-Control-Allow-Origin': '*',
  'Content-Disposition': 'attachment; filename=print.pdf'
});
Unfortunately this solution forces the browser to download the PDF straight away instead of rendering it inline, which may not be desirable.
2) Add an "Expires" header.
With this solution the second, cancelled call still fires, but it can be answered from the cache, so it is ignored by the server.
So, for example, you can do something like this (a Node.js response in the example):
res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Access-Control-Allow-Origin': '*',
  'Content-Disposition': 'inline; filename=print.pdf',
  // HTTP date headers must be strings in the format toUTCString() produces
  'Expires': new Date(Date.now() + 60000).toUTCString()
});
I had the same problem in an iframe. I turned off the PDF Viewer extension and the problem disappeared. I suspect the extension downloads the file twice: the first time to get the size, and the second time to download it with a progress bar (using the size gathered in the first request).
I've tried the other solutions and none worked for me. I'm a little late, I know, but just for the record, I solved this in the following manner:
Adding the download attribute.
In my case I was using a form, so it goes like this:
<form action="/package.zip" method="POST" download>
This worked on Brave and Safari, which previously showed the same problem, so I think it will work on Chrome too.
In my case the problem wasn't browser related. I noticed that DOM manipulations by our scrollbar plugin (OverlayScrollbars) were reloading the embedded PDF data and calling the controller more than once, triggered by the plugin's construct and destroy events. Initializing the scrollbar before the DOM was ready solved the problem.
Are schemeless URLs like
//blog.flowl.info/
valid in HTTP (per the RFCs), i.e. in plain HTTP requests and responses, or are they only valid in HTML attributes and content?
HTTP/1.1 302 Found
Location: //blog.flowl.info
GET //blog.flowl.info
Update:
I now have two contradictory answers. Which one is correct?
Side question:
Why does the browser even resolve those to:
//blog.flowl.info/
->
http://blog.flowl.info/
instead of:
//blog.flowl.info/
->
http://blog.flowl.info///blog.flowl.info/
They are valid in the Location header field (http://greenbytes.de/tech/webdav/rfc7231.html#header.location).
They are not valid in the request line of an HTTP request.
The browser resolves it this way because this is how relative reference resolution works (http://greenbytes.de/tech/webdav/rfc3986.html#reference-resolution).
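To make the resolution rule concrete, here is a small sketch using the WHATWG URL API (available in modern browsers and Node.js); the base URL is made up for illustration:

// A scheme-relative ("network-path") reference replaces everything after the
// scheme, so only the scheme is inherited from the base URL.
var base = 'http://example.com/some/page';
console.log(new URL('//blog.flowl.info/', base).href);
// -> http://blog.flowl.info/

// An ordinary relative reference, by contrast, resolves against the base path:
console.log(new URL('blog.flowl.info/', base).href);
// -> http://example.com/some/blog.flowl.info/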
As far as I understand, the protocol/scheme is a mandatory part of a URL and is used by the server and intermediate proxies/gateways to infer how to handle communication on top of plain TCP/IP. If you are not using http/https but some other well-known or even custom protocol, you will have to specify it.
Browsers were created for browsing HTML pages served over the HTTP protocol, so if you don't specify a scheme the browser defaults it to http. There is also the concept of absolute vs. relative URLs, which you will need to look into to see how subsequent URLs are resolved by the browser.
I am using the Google Chrome developer tools to try to see the response of some AJAX URLs.
The problem is that when I click on the Network tab, then on the request, then on Response, I see the text "This request has no response data available".
I have been using Firebug and I am 100% sure there is a response from that page.
Can somebody help with this?
Thank you!
You can try manually checking whether there is a response or not.
Generally, when dealing with AJAX we use POST in most cases. You can create a page with the same structure that handles the same input/response but uses the GET method and prints the output data normally.
This way you can very easily see any response or errors from your script.
We have an ecommerce website that displays groups of products by category using a URL format that maps almost exactly to the REST URL format we would like to use for our forthcoming API.
e.g. example.com/products/latest or example.com/products/hats
Is it a valid pattern to use the same URL for visible (HTML) and invisible (JSON) results, and to use the Accept HTTP request header to determine what should be returned?
I.e. if you call example.com/products/latest with Accept: application/json you get just the product data, but with text/html you get the full HTML page (header, footer, site chrome, etc.).
And if so, is this a good idea? Will we run into problems if, for instance, the website needs to change but the API needs to stay stable?
UPDATE: some helpful resources: here is an article [1] by Peter Williams discussing the use of the HTTP Accept header to version APIs, and I have also referenced an SO question [2] that reveals some of the problems with this approach. Perhaps it is better to use a custom HTTP header?
[1] Making the case for using Accept: http://barelyenough.org/blog/2008/05/versioning-rest-web-services/
[2] Problems with jQuery (& IE): Cannot properly set the Accept HTTP header with jQuery
[3] Making the case for using Accept: http://blog.steveklabnik.com/2011/07/03/nobody-understands-rest-or-http.html
[4] Sitting on the fence: http://www.informit.com/articles/article.aspx?p=1566460
Using HTTP headers is generally becoming the accepted way of determining this.
In ASP.NET MVC, for example, there is an IsAjaxRequest method that checks for the X-Requested-With header; if it is equal to "XMLHttpRequest", the request is deemed to be an AJAX request.
The last time I tried to do that (a few years ago now), I found I could not override the Accept header of an XMLHttpRequest object in Opera. If that isn't a worry for you, then go for it; that is how HTTP was designed to work.
I recommend giving your HTML response a higher q value than your JSON response, though, since some browsers send Accept: */*.
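For what the server side of that negotiation can look like, here is a hedged sketch using Express's res.format(); the route and the placeholder data are assumptions, not from the question:

var express = require('express');
var app = express();

app.get('/products/:category', function (req, res) {
  var products = [{ name: 'Fedora', price: 25 }]; // placeholder data

  // res.format() dispatches on the request's Accept header.
  res.format({
    'text/html': function () {
      // A real app would render the full page template here.
      res.send('<h1>' + req.params.category + '</h1>');
    },
    'application/json': function () {
      res.json(products); // just the product data
    },
    default: function () {
      res.status(406).send('Not Acceptable');
    }
  });
});

app.listen(3000);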
I have no experience with this, but Restful Web Services recommends that you version your API via the URL (e.g. api.example.com/v1/products/hats); I'm not sure that would fit with using the same URLs for the website and the API.
I wonder if there is some way to do something like this:
When I'm on a specific site, I want some of its JavaScript files to be loaded directly from my computer (e.g. file:///c:/test.js), not from the server.
For that, I was thinking of an extension that could change the HTML code in a response the browser gets, right before displaying it. The whole process would look like this:
1) a request is made
2) the browser gets the response from the server
3) the response is changed - this is the part where the extension comes in
4) the browser parses the changed response and displays the page with the new response
It doesn't even have to be a Chrome extension; anything that does the job described above would do. It could block the original file and serve another one (DNS/proxy?), or filter all the HTTP traffic on my computer and replace specific code in a matched response.
You can use the webRequest API to achieve that. For example, you can add an onBeforeRequest listener and redirect some requests:
chrome.webRequest.onBeforeRequest.addListener(function(details) {
  // Build a replacement page and serve it via a data: URL redirect.
  var responseData = "<div>Some text</div>";
  return {redirectUrl: "data:text/html," + encodeURIComponent(responseData)};
}, {urls: ["https://www.google.com/"]}, ["blocking"]);
This will display a <div> element with the text "Some text" instead of the Google homepage. Note that you can only redirect to URLs that the web server itself would be allowed to redirect to. This means that redirecting to file:/// URLs is not possible, and you can only redirect to files inside your extension if they are web accessible. data: and http: URLs work fine, however.
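Since the question was about loading a script from your own copy instead of the server, here is a variant of the same idea as a sketch; the match pattern and the test.js file are assumptions, and the file must be listed under web_accessible_resources in the manifest:

// Redirect a specific script on a site to a copy bundled with the extension.
// Requires "test.js" to be declared under web_accessible_resources.
chrome.webRequest.onBeforeRequest.addListener(function(details) {
  return {redirectUrl: chrome.runtime.getURL("test.js")};
}, {urls: ["*://example.com/js/test.js"]}, ["blocking"]);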
On Windows you can use the Proxomitron (proxomitron.info), a local proxy that can intercept any page or file being loaded into your browser and change it however you want with regular expressions (no DOM parsing), before it is rendered by the browser.