Given 2 links:
<a href="/somepage.html">link1</a>
<a href="somepage.html">link1</a>
On a page at this domain:
www.site.com/index.html.
Are both header requests going to look the same? Or can I expect a "/somepage.html" and a "somepage.html" request to the site.com domain?
EDIT: Here is what I see in the raw header in the Chrome console:
GET /resources/css/jquery-ui.css HTTP/1.1
Host: site.com
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
Accept: text/css,*/*;q=0.1
Referer: http://site.com/
So to clarify the question: do rewrite engines parse only the first line of the request? If so, can I use ^/resource, for instance, to target it?
Both will look the same: /somepage.html
The browser creates /somepage.html, knowing that it is relative to the current document, which is in /. The browser always requests the full path from the server. This is required because the server doesn't know which page the browser is currently on; HTTP is inherently stateless.
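The resolution the browser performs can be reproduced with the WHATWG URL API; here is a JavaScript sketch using the page and link paths from the question:

```javascript
// Resolve both link forms against the current document's URL.
// Both yield the same absolute URL, hence identical request lines.
const base = "http://www.site.com/index.html";

const rootRelative = new URL("/somepage.html", base);
const docRelative  = new URL("somepage.html", base);

console.log(rootRelative.href); // http://www.site.com/somepage.html
console.log(docRelative.href);  // http://www.site.com/somepage.html

// The request line the server sees is built from the resolved path:
console.log(`GET ${rootRelative.pathname} HTTP/1.1`); // GET /somepage.html HTTP/1.1
```

If the document were instead at /sub/index.html, the two forms would diverge: "/somepage.html" still resolves to /somepage.html, while "somepage.html" resolves to /sub/somepage.html. As for the rewrite-engine part of the question: rewrite rules generally match against the path from the request line, so an anchor like ^/resources is the usual way to target it (note that in Apache's per-directory/.htaccess context the leading slash is stripped, so ^resources would be needed there).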
I have an AngularJS WebAPI application.
As far as I can understand, the OPTIONS request is constructed automatically by the browser.
POST http://localhost:3048/Token HTTP/1.1
Host: localhost:3048
Connection: keep-alive
Content-Length: 78
Accept: application/json, text/plain, */*
Origin: http://localhost:2757
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Referer: http://localhost:2757/Auth/login
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
grant_type=password&username=xxx%40live.com&password=xxx
Response:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 971
Content-Type: application/json;charset=UTF-8
Expires: -1
Server: Microsoft-IIS/8.0
Access-Control-Allow-Origin: *
Set-Cookie: .AspNet.Cookies=CpvxrR1gPFNs0vP8GAmcUt0EiKuEzLS1stLl-70O93wsipJkLUZuNdwC8tZc5M0o1ifoCjvnRXKjEBk3nLRbFlbldJLydW2BWonr5JmBjRjXZyKtcc29ggAVhZlc2E-3gGDlyoZLAa5Et8zrAokl8vsSoXmHnsjrxZw0VecB_Ry98Ln84UuKdeHlwSBnfaKKJfsN-u3Rsm6MoEfBO5aAFEekhVBWytrYDx5ks-iVok3TjJgaPc5ex53kp7qrtH3izbjT7HtnrsYYtcfPtmsxbCXBkX4ssCBthIl-NsN2wObyoEqHMpFEf1E9sB86PJhTCySEJoeUJ5u3juTnPlQnHsk1UTcO0tDb39g-_BD-I4FWS5GMwxLNtmut3Ynjir0GndwqsvpEsLls1Y4Pq7UuVCTn7DMO4seb64Sy8oEYkKZYk9tU4tsJuGD2CAIhdSc-lAmTAA78J5NOx23klkiuSe_SSiiZo5uRpas_1CFHjhi1c8ItEMpgeTsvgTkxafq5EOIWKPRxEHbCE8Dv106k5GlKK5BaH6z7ESg5BHPBvY8; path=/; HttpOnly
X-SourceFiles: =?UTF-8?B?QzpcR1xhYmlsaXRlc3Qtc2VydmVyXFdlYlJvbGVcVG9rZW4=?=
X-Powered-By: ASP.NET
Date: Tue, 13 Jan 2015 04:54:55 GMT
{"access_token":"TkJ2trqT ....
Now logged in
I log out, which is nothing more than removing the token, and log in again. Something happens differently this time: before, the browser did not send the OPTIONS request, but now it does. Is there something resulting from a previous request/response that would influence the browser to act differently the second time I log in?
OPTIONS http://localhost:3048/Token HTTP/1.1
Host: localhost:3048
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: http://localhost:2757
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36
Access-Control-Request-Headers: accept, authorization, content-type
Accept: */*
Referer: http://localhost:2757/Auth/login
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Response:
HTTP/1.1 400 Bad Request
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Expires: -1
Server: Microsoft-IIS/8.0
X-SourceFiles: =?UTF-8?B?QzpcR1xhYmlsaXRlc3Qtc2VydmVyXFdlYlJvbGVcVG9rZW4=?=
X-Powered-By: ASP.NET
Date: Tue, 13 Jan 2015 04:56:32 GMT
{"error":"unsupported_grant_type"}
If I do a browser reset and reload the page, then it goes back to the way it was before: the browser does not send OPTIONS the first time and I am able to log in.
Probably I need to change something on the server so it handles OPTIONS.
BUT why does my browser (Chrome) not send OPTIONS the first time?
Whether Chrome (or any other browser) sends an OPTIONS request is specified exactly by the CORS specification:
When the cross-origin request algorithm is invoked, these steps must be followed:
...
2. If the following conditions are true, follow the simple cross-origin request algorithm:
The request method is a simple method and the force preflight flag is unset.
Each of the author request headers is a simple header or author request headers is empty.
3. Otherwise, follow the cross-origin request with preflight algorithm.
Note: Cross-origin requests using a method that is simple with author request headers that are not simple will have a preflight request to ensure that the resource can handle those headers. (Similarly to requests using a method that is not a simple method.)
Your OPTIONS request contains the following request header:
Access-Control-Request-Headers: accept, authorization, content-type
This means that your Angular app has inserted the non-simple Authorization request header, probably as part of an authentication scheme. Non-simple "author request headers" trigger the OPTIONS request, as you can see in the quote above.
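That decision can be sketched as a small JavaScript predicate (simplified: the real algorithm also checks header values, e.g. only certain Content-Type values count as simple):

```javascript
// Rough sketch of the "does this cross-origin request need a preflight?" decision.
// Simplified: real browsers also check Content-Type values and header value bytes.
const SIMPLE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SIMPLE_HEADERS = new Set(["accept", "accept-language", "content-language", "content-type"]);

function needsPreflight(method, headerNames) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  return headerNames.some(h => !SIMPLE_HEADERS.has(h.toLowerCase()));
}

// First login: only simple headers, so no OPTIONS request is sent.
console.log(needsPreflight("POST", ["Accept", "Content-Type"]));                  // false
// Second login: Authorization is not a simple header, so OPTIONS is sent.
console.log(needsPreflight("POST", ["Accept", "Authorization", "Content-Type"])); // true
```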
To allow the request to succeed, your server should handle the OPTIONS request and respond with:
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Headers: authorization
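As a minimal sketch, assuming the origin http://localhost:2757 from the question and that you simply echo back the method and headers the browser asked about (the helper name is hypothetical):

```javascript
// Build the headers for a minimal preflight (OPTIONS) response.
// The inputs come from the Access-Control-Request-Method/-Headers request headers.
function preflightResponseHeaders(origin, requestedMethod, requestedHeaders) {
  return {
    "Access-Control-Allow-Origin": origin,            // or "*" if no credentials are involved
    "Access-Control-Allow-Methods": requestedMethod,
    "Access-Control-Allow-Headers": requestedHeaders, // e.g. "accept, authorization, content-type"
    "Access-Control-Max-Age": "600",                  // let the browser cache the preflight result
  };
}

const h = preflightResponseHeaders(
  "http://localhost:2757", "POST", "accept, authorization, content-type");
console.log(h["Access-Control-Allow-Headers"]); // "accept, authorization, content-type"
```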
To learn more about CORS, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS.
When you first log in, you most likely set the Authorization HTTP header somewhere in your login procedure. However, you forgot to remove this header when the user logs out.
When you try to log in again, the Authorization HTTP header is still present. This triggers the browser to perform a preflight request (see the explanation by Rob W: https://stackoverflow.com/a/27924344/548020). Considering that you are trying to log in with the grant type password, it does not make sense to send an Authorization header, as this implies that you are already authorized (= logged in). You are basically asking your backend to log you in while at the same time telling it that you are already authorized (= logged in).
This can be fixed by simply removing the Authorization HTTP header when the user logs out.
You can also clean up your headers when you log in, before sending your POST request:
delete $http.defaults.headers.common['Authorization'];
I have seen a lot of URLs lately that seem to have hijacked a portion of the site 'www.example.com'. In the example URL "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html", 'wp-includes' is WordPress and 'js' is JavaScript. What has (typically) been done to these sites, and what is to be done (aside from notifying example.com or their host)?
Thank you.
Apart from the domain, the rest of the URL is the "path".
The link you entered is translated by the browser into an HTTP request with a header like this (example):
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2024.2 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: el-GR,el;q=0.8
Check this line:
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
It's the server's job to reply to the request, so if the server knows what
//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html
means, it's OK.
Most of the time this is translated to a path in a folder, but probably not in this case.
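The double slash is preserved as part of the path rather than the host, which can be verified with the WHATWG URL parser:

```javascript
// The URL from the question: note that the "//" after the host stays in the path.
const u = new URL("http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html");

console.log(u.host);     // "www.example.com"
console.log(u.pathname); // "//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html"
```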
The page you entered returns a Status Code: 404 Not Found, so your requested page was not found, and the server responds with this error page, which for some reason reports no error to the user. (We all know this is an example page.)
I have Amazon S3 objects, and for each object, I have set
Cache-Control: public, max-age=3600000
That is roughly 41 days.
And I have an Amazon CloudFront distribution set with a Minimum TTL of 3600000 as well.
This is the first request after clearing the cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
And the response is:
HTTP/1.1 200 OK
Content-Type: application/x-javascript
Content-Length: 226802
Connection: keep-alive
Date: Wed, 28 Aug 2013 10:37:38 GMT
Cache-Control: public, max-age=3600000
Last-Modified: Wed, 28 Aug 2013 10:36:42 GMT
ETag: "124752e0d85461a16e76fbdef2e84fb9"
Accept-Ranges: bytes
Server: AmazonS3
Age: 342557
Via: 1.0 6eb330235ca3971f6142a5f789cbc988.cloudfront.net (CloudFront)
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: 92Q2uDA4KizhPk4TludKpwP6Q6uEaKRV0ls9P_TIr11c8GQpTuSfhw==
Even though Amazon clearly sends Cache-Control, Chrome still makes a second request instead of reading it from the cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
If-None-Match: "124752e0d85461a16e76fbdef2e84fb9"
If-Modified-Since: Wed, 28 Aug 2013 10:36:42 GMT
Question:
Why does Chrome make a second request?
Expires
This behavior changes when I put an explicit Expires attribute in the headers. The browser does not send a subsequent request when an Expires header is present, but for Cache-Control: public it does. My S3 objects will never change; they are immutable. When we change a file, we upload it as a new object with a new URL.
In Page Script Reference
Chrome makes subsequent requests only sometimes. I did this test by actually typing the URL in the browser. When the script is referenced by an HTML page, Chrome loads the cached script for a few subsequent requests, but after a while it sends a request to the server again. There is no disk-size issue here; Chrome has sufficient cache space.
The problem is that we get charged for every request, and I want the S3 objects to be cached forever: they should be loaded from the cache and should never hit the server again.
When you press F5 in Chrome, it will always send requests to the server. These will be made with the Cache-Control: max-age=0 header. The server will usually respond with a 304 (Not Modified) status code.
When you press Ctrl+F5 or Shift+F5, the same requests are performed, but with the Cache-Control:no-cache header, thus forcing the server to send an uncached version, usually with a 200 (OK) status code.
If you want to make sure that you're utilizing the local browser cache, simply press Enter in the address bar.
If the HTTP response contains an ETag entry, the conditional request will always be made. ETag is a cache validator tag: the client always sends the ETag to the server to check whether the element has been modified.
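The server side of that validation can be sketched like this (a hypothetical helper; real servers and frameworks implement this for you):

```javascript
// Decide whether a conditional GET can be answered with 304 Not Modified.
// `ifNoneMatch` is the If-None-Match value sent by the client (or undefined).
function revalidate(currentEtag, ifNoneMatch) {
  if (ifNoneMatch !== undefined && ifNoneMatch === currentEtag) {
    return { status: 304 };                  // client's cached copy is still valid
  }
  return { status: 200, etag: currentEtag }; // send the full body (and the ETag again)
}

const etag = '"124752e0d85461a16e76fbdef2e84fb9"'; // the ETag from the S3 response above
console.log(revalidate(etag, etag).status);        // 304 -- cheap, but still a billable request
console.log(revalidate(etag, undefined).status);   // 200
```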
If the Chrome Developer Tools are open (F12), Chrome usually disables caching.
This is controllable in the Developer Tools settings (the gear icon at the top right of the DevTools bar).
If you hit the refresh button to load a particular page or resource, the If-Modified-Since request header is sent every time. If you instead request the page/resource as a separate request in a new tab, or via a link in a script or HTML page, it is loaded from the browser cache itself.
This is what happened in my case; it may be the general case. I am not completely sure, but this is what I gathered from my digging.
Chrome adds the Cache-Control: max-age=0 header when you use a self-signed certificate. Switching from HTTPS to HTTP will remove this header.
Firefox doesn't add this header.
I'm trying to develop an HTML5 game. To this end I have the PHP built-in server running on my local machine to work on, then I "compile" it (combine/minify JS files, minify CSS, etc) and upload them to the production server.
I've run into an issue with the sound effects. For some reason, Chrome won't recognise that it has reached the end of the sound, and therefore gets stuck in a "buffering" state. This happens in spite of a valid Content-Length header. IE and Firefox don't seem to have this problem. It also affects the loop attribute, and it fails to fire ended events as well.
These problems are only present on the PHP built-in server. It works fine in production. What am I missing?
EDIT: Here's an example request/response on the PHP server:
GET /bin/aud/rightwhereiwantyou.mp3 HTTP/1.1
Host: 10.0.0.110:8000
Connection: keep-alive
Accept-Encoding: identity;q=1, *;q=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept: */*
Referer: http://10.0.0.110:8000/snake.php
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Range: bytes=0-
HTTP/1.1 200 OK
Host: 10.0.0.110:8000
Connection: close
X-Powered-By: PHP/5.4.13
Content-Type: audio/mpeg
Content-Length: 1685058
My wild guess is that Chrome is expecting a partial content response (a byte range), since the MP3 file could be large and you might want to stream it instead.
Have you tried adding the Accept-Ranges and Content-Range headers to your response? See the answer to this question for an example.
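For illustration, here is a sketch of the header logic for satisfying the Range: bytes=0- request above (a hypothetical helper, using the Content-Length from the question):

```javascript
// Build the headers for a 206 Partial Content response to "Range: bytes=<start>-<end?>".
function rangeResponseHeaders(rangeHeader, totalLength) {
  const m = /^bytes=(\d+)-(\d*)$/.exec(rangeHeader);
  if (!m) return { status: 416 }; // Range Not Satisfiable
  const start = Number(m[1]);
  const end = m[2] === "" ? totalLength - 1 : Number(m[2]); // open-ended range: serve to EOF
  if (start > end || end >= totalLength) return { status: 416 };
  return {
    status: 206,
    "Accept-Ranges": "bytes",
    "Content-Range": `bytes ${start}-${end}/${totalLength}`,
    "Content-Length": String(end - start + 1),
  };
}

const r = rangeResponseHeaders("bytes=0-", 1685058);
console.log(r["Content-Range"]); // "bytes 0-1685057/1685058"
```

The PHP built-in server answered this request with a plain 200 and no Accept-Ranges header, which matches the symptom described.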
This appears to be a known bug with Google Chrome.
I am having problems doing some cross-origin requests with Firefox and the application cache.
The error handler of my XHR request gets called, and the status of the XHR request is 0.
When I look at the network logs with Firebug, I see an OPTIONS request that looks fine:
OPTIONS /foo.bar HTTP/1.1
Host: localhost:1337
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Origin: http://localhost:8080
Access-Control-Request-Method: GET
Access-Control-Request-Headers: content-type
Connection: keep-alive
To which the server responds with something that looks OK:
HTTP/1.1 200 OK
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Access-Control-Allow-Origin: http://localhost:8080
Access-Control-Allow-Methods: GET, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: content-type
Date: Thu, 14 Mar 2013 17:55:22 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Then the GET itself gets no response:
GET /foo.bar HTTP/1.1
Host: localhost:1337
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:19.0) Gecko/20100101 Firefox/19.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Origin: http://localhost:8080
Connection: keep-alive
(When looking at the server logs, the server never receives the request)
I am using the HTML5 application cache mechanism, and here is the NETWORK section of my manifest:
NETWORK:
default.php
resource.php
http://localhost:1337/
Here is what I tried :
Replace http://localhost:1337/ with * in the manifest file: it works, but I don't like it; I find blocking non-explicit network requests handy for detecting missing CACHE entries.
Replace the GET method with a POST method: it works, but I don't like it, as it is semantically wrong (I am trying to get a resource, not to post data).
Replace the GET method with a custom-but-semantically-correct READ method: it doesn't work, but it was fun.
It is my understanding that what I am trying to do falls under step 3 of the "Changes to the networking model" section of the W3C spec and should work as is.
So, after all this, my questions are :
What am I doing wrong ?
Is this a bug in Firefox? (I forgot to mention: my site works like a charm in Chrome and IE10 (yes, IE10, as in Microsoft Internet Explorer version 10).)
If I have to use a workaround to make it work with Firefox, which one should I use? Is there a better solution than the two bad ones I found?
Although the spec says that http://localhost:1337 in the NETWORK section of your cache manifest should be sufficient, it might be worth trying the full URL (http://localhost:1337/foo.bar) to see if there's a bug in Firefox's implementation.
If that doesn't do the trick and all else fails, I would just go with putting * in your NETWORK section, at least until you figure out what's causing the problem. Value code that works for your users over code that works for you. Besides, there are other ways to find missing entries in the manifest.
That problem was mentioned in A List Apart: Application Cache is a Douchebag. See Gotcha #9.
You have to listen to each response and then filter for success or error on your own.
$.ajax( url ).always( function(response) {
    // Exit if this request was deliberately aborted
    if (response.statusText === 'abort') { return; }
    // Does this smell like an error?
    if (response.responseText !== undefined) {
        if (response.responseText && response.status < 400) {
            // Not a real error, recover the content
        } else {
            // This is a proper error, deal with it
            return;
        }
    }
    // do something with 'response'
});
There is an open defect in Firefox (see also the linked duplicate) whereby any cross-domain resource referenced in the manifest gets blocked on subsequent refreshes. There is not much you can do at this point except vote and wait.
Note that this issue should be resolved in Firefox 33 onwards.