Chrome - Confirm Form Resubmission - Different behaviors

In a commercial web application that uses POST requests frequently, it was necessary to call history.go(-1) to trigger back navigation. As many others have experienced, I am getting a
Confirm Form Resubmission ERR_CACHE_MISS
error in Chrome. However, the same works just fine in Firefox (this is not a duplicate, please read on).
It is true that rendering content from POST requests (without the Post/Redirect/Get design pattern) is the cause of the above problem.
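For reference, the pattern in question looks roughly like this; a minimal sketch in Flask, where the route names, template, and save() helper are hypothetical stand-ins, not the original application's code:

from flask import Flask, redirect, render_template, request, url_for

app = Flask(__name__)

def save(form):
    pass  # hypothetical persistence step, stands in for the real business logic

@app.route("/post3", methods=["POST"])
def handle_form():
    # The POST handler never renders content; it redirects, so the history
    # entry the user navigates back to is a safe, repeatable GET.
    save(request.form)
    return redirect(url_for("result"), code=303)  # 303 See Other -> GET

@app.route("/result")
def result():
    return render_template("result.html")  # back/refresh now replay a GET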
However, while looking for an alternative solution, I observed that on certain web sites/apps it is possible to navigate back in Chrome (cache hit), whereas on other sites it fails. I have inspected all the HTTP headers from the successful sites, and the HTTP headers do not seem to make any difference.
Could the browser be acting differently based on the SSL certificate used by the web application, or what else could be the reason that back navigation works on certain web sites?
Example working web applications:
http://gmail.com/ - Enter some random email. Click next. Enter incorrect password multiple times and use browser back button to navigate back.
https://support.hostgator.com/ - Enter some random text in search box (do this several times). Use browser back button to navigate back.
POST request used in the failing web application:
POST /post3.jsp HTTP/1.1
Host: 192.168.1.111
Connection: keep-alive
Content-Length: 18
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Origin: https://192.168.1.111
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Referer: https://192.168.1.111/post2.jsp
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8

I identified that Chrome's back navigation fails in the above workflow unless the SSL certificate used for the HTTPS communication is a valid, trusted certificate.
If you are using a self-signed certificate, add the CA certificate to the Trusted Root Certification Authorities store and everything should work as expected.
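As an aside, the same trust decision applies to any HTTP client, not just the browser. A quick sketch with Python's requests library (the URL is the one from the question; the CA bundle path is a hypothetical placeholder):

import requests

# Point the client at the private CA bundle instead of disabling verification;
# this mirrors importing the CA into the browser's trusted root store.
resp = requests.get("https://192.168.1.111/post2.jsp", verify="/path/to/ca.pem")
print(resp.status_code)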

Related

Fiddler Echo Service error while accessing localhost in my chrome browser

I want to run an apache-tomcat-6.0.44 server on my laptop. I have configured the Classpath, Path and CATALINA_HOME variables. I am able to run the server from the command prompt (using the "startup" command). But the problem is, when I type "http://localhost:8080" in the URL bar of my Chrome browser and press Enter, I get this error:
Fiddler Echo Service
GET / HTTP/1.1
Host: localhost:8080
Proxy-Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Cookie: _ga=GA1.1.1441246844.1443724804
To configure Fiddler as a reverse proxy instead of seeing this page, see Reverse Proxy Setup. You can download the FiddlerRoot certificate.
I have gone through some of the suggested solutions:
1) Fiddler Echo Service blocking oracle homepage in browser
2) Fiddler not capturing traffic from browsers
But I am still not able to fix the issue. Could anyone please help me out with this?
Along with using the proxy server, check whether your LAN setting is set to detect automatically.
Go to the following section of Chrome:
"Settings > Advanced settings > Network > Change proxy settings". A wizard will open.
In it, click "LAN Settings" and select the checkbox "Automatically detect settings".
Another possibility is a mismatch in the certificate installed.
The server name used in Apache (httpd.conf) must be the same as the server name in Apache (httpd-ssl.conf). For example, a mismatch would be ServerName localhost:8080 in httpd.conf while httpd-ssl.conf has ServerName www.example.com:8080.

What does a double forward slash mean in a URL, aside from the protocol separator at the beginning?

I have seen a lot of URLs lately that seem to have hijacked a portion of the site 'www.example.com'. In the example URL "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html", 'wp-includes' is WordPress and 'js' is JavaScript. What has (typically) been done to these sites, and what is to be done (aside from notifying example.com or their host)?
Thank you.
Apart from the domain, the rest is the "path".
The link you entered is translated by the browser into an HTTP request with headers like these (example):
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2024.2 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: el-GR,el;q=0.8
Check this line:
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
It's the server's job to reply to the request, so as long as the server knows what
//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html
means, it's OK.
Most of the time this is translated to a path in a folder, but probably not in this case.
The page you entered returns a Status Code: 404 Not Found, so your requested page was not found, and the server responds with this error page, which for some reason reports no error to the user. (We all know this is an example page.)
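To make the "the rest is the path" point concrete, here is a small sketch using Python's standard urllib.parse; it shows that the doubled slash survives verbatim as part of the path:

from urllib.parse import urlsplit

url = "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html"
parts = urlsplit(url)
print(parts.netloc)  # www.example.com
print(parts.path)    # //wp-includes/js/... -- the double slash is kept as-is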

Why does the browser still send a request for Cache-Control: public with max-age?

I have Amazon S3 objects, and for each object, I have set
Cache-Control: public, max-age=3600000
That is roughly 41 days.
And I have an Amazon CloudFront distribution with a Minimum TTL, also 3600000.
This is the first request after clearing cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
And Response is
HTTP/1.1 200 OK
Content-Type: application/x-javascript
Content-Length: 226802
Connection: keep-alive
Date: Wed, 28 Aug 2013 10:37:38 GMT
Cache-Control: public, max-age=3600000
Last-Modified: Wed, 28 Aug 2013 10:36:42 GMT
ETag: "124752e0d85461a16e76fbdef2e84fb9"
Accept-Ranges: bytes
Server: AmazonS3
Age: 342557
Via: 1.0 6eb330235ca3971f6142a5f789cbc988.cloudfront.net (CloudFront)
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: 92Q2uDA4KizhPk4TludKpwP6Q6uEaKRV0ls9P_TIr11c8GQpTuSfhw==
Even though Amazon clearly sends Cache-Control, Chrome still makes a second request instead of reading from its cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
If-None-Match: "124752e0d85461a16e76fbdef2e84fb9"
If-Modified-Since: Wed, 28 Aug 2013 10:36:42 GMT
Question:
Why does Chrome make a second request?
Expires
This behavior changes when I put an explicit Expires attribute in the headers. The browser will not send subsequent requests when an Expires header is present, but for Cache-Control: public it does send them. All my S3 objects never change; they are immutable. When we change a file, we upload it as a new object with a new URL.
In-Page Script Reference
Chrome makes subsequent requests only sometimes. I did this test by actually typing the URL in the browser. When the script is referenced by an HTML page, Chrome loads the cached script for a few subsequent requests, but after some time, once in a while it does send a request to the server. There is no disk-size issue here; Chrome has sufficient cache space.
The problem is that we get charged for every request, and I want the S3 objects to be cached forever, loaded from the cache, and never fetched from the server again.
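For context, this is roughly how such objects' headers are set at upload time. A minimal sketch with boto3 (the bucket name is a hypothetical placeholder; the key and headers match the question):

from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

# Upload with the long-lived Cache-Control from the question, plus an explicit
# Expires header, which the question found stops the revalidation requests.
with open("web-atoms.js", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",                      # hypothetical bucket name
        Key="1.0.8/web-atoms.js",
        Body=f,
        ContentType="application/x-javascript",
        CacheControl="public, max-age=3600000",
        Expires=datetime(2030, 1, 1, tzinfo=timezone.utc),
    )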
When you press F5 in Chrome, it will always send requests to the server. These will be made with the Cache-Control:max-age=0 header. The server will usually respond with a 304 (Not Changed) status code.
When you press Ctrl+F5 or Shift+F5, the same requests are performed, but with the Cache-Control:no-cache header, thus forcing the server to send an uncached version, usually with a 200 (OK) status code.
If you want to make sure that you're utilizing the local browser cache, simply press Enter in the address bar.
If the HTTP response contains an ETag entry, a conditional request will always be made. The ETag is a cache validator tag: the client sends the ETag back to the server to check whether the element has been modified.
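A quick way to see this validator round-trip, sketched with Python's requests library (the URL is the one from the question; whether it still resolves is not guaranteed):

import requests

url = "https://d3bhjcyci8s9i2.cloudfront.net/1.0.8/web-atoms.js"
first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidation: send the validator back; 304 means "use your cached copy".
second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
print(second.status_code)  # 304 if unchanged, 200 otherwise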
If Chrome Developer Tools are open (F12), Chrome usually disables caching.
It is controllable in the Developer Tools settings - the Gear icon to the right of the dev-tools top bar.
If you are hitting the refresh button to load the particular page or resource, the If-Modified-Since request header is sent every time. If you instead request the page/resource as a separate request in a new tab, or via a link in a script or HTML page, it is loaded from the browser cache itself.
This is what happened in my case; maybe it is the general case. I am not completely sure, but this is what I gathered from my digging.
Chrome adds a Cache-Control: max-age=0 header when you use a self-signed certificate. Switching from HTTPS to HTTP removes this header.
Firefox does not add this header.

Google Chrome issuing Head Request to random domains

Today I was investigating something with Fiddler when I noticed that, whenever I launch Google Chrome, there are always 3 HEAD requests to domains which seem to be randomly chosen.
Here is a sample :
HEAD http://fkgrekxzgo/ HTTP/1.1
Host: fkgrekxzgo
Proxy-Connection: keep-alive
Content-Length: 0
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept-Encoding: gzip,deflate,sdch
Do you have any idea why Google Chrome behaves this way?
Thanks guys
This is Chrome checking to see if your ISP converts non-resolving domains into your.isp.com/search?q=randomnonresolvingdomain
See https://mikewest.org/2012/02/chrome-connects-to-three-random-domains-at-startup
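The detection idea can be sketched in a few lines of Python (the probe names are generated on the fly, just as Chrome's are; this illustrates the technique, not Chrome's actual code):

import random
import socket
import string

def isp_hijacks_nxdomain(probes=3):
    # Random hostnames should not resolve; if they all do, the resolver or
    # ISP is likely rewriting NXDOMAIN responses to a landing page.
    hits = 0
    for _ in range(probes):
        name = "".join(random.choices(string.ascii_lowercase, k=10))
        try:
            socket.gethostbyname(name)
            hits += 1
        except socket.gaierror:
            pass
    return hits == probes

print(isp_hijacks_nxdomain())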
This algorithm seems unusable with forward proxy servers. The browser asks for a random page, and the proxy definitely returns some page: an error (5xx), a masked error (5xx or 4xx), or a nice "you are lost" page with HTTP code 200.

Audio problems in Chrome

I'm trying to develop an HTML5 game. To this end I have the PHP built-in server running on my local machine to work on; then I "compile" the game (combine/minify JS files, minify CSS, etc.) and upload it to the production server.
I've run into an issue with the sound effects. For some reason, Chrome won't recognise that it has reached the end of the sound, and therefore gets stuck in a "buffering" state. This is in spite of a valid Content-Length header. IE and Firefox don't seem to have this problem. It also affects the loop attribute, and it fails to fire ended events.
These problems are only present on the PHP built-in server. It works fine in production. What am I missing?
EDIT: Here's an example request/response on the PHP server:
GET /bin/aud/rightwhereiwantyou.mp3 HTTP/1.1
Host: 10.0.0.110:8000
Connection: keep-alive
Accept-Encoding: identity;q=1, *;q=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept: */*
Referer: http://10.0.0.110:8000/snake.php
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Range: bytes=0-
HTTP/1.1 200 OK
Host: 10.0.0.110:8000
Connection: close
X-Powered-By: PHP/5.4.13
Content-Type: audio/mpeg
Content-Length: 1685058
My wild guess is that Chrome is expecting a partial content response (a byte range), since the MP3 file could be large and it may want to stream it instead.
Have you tried adding the Accept-Ranges and Content-Range headers to your response? See the answer to this question for an example.
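For illustration, here is a minimal sketch of honoring a "Range: bytes=N-" request the way Chrome expects (a simplified Python handler with hypothetical names; a real server must also handle bounded and invalid ranges):

import os

def serve_range(path, range_header):
    # Parse a simple "bytes=N-" range (open-ended start offset only).
    size = os.path.getsize(path)
    start = int(range_header.replace("bytes=", "").split("-")[0] or 0)
    with open(path, "rb") as f:
        f.seek(start)
        body = f.read()
    headers = {
        "Accept-Ranges": "bytes",
        "Content-Range": f"bytes {start}-{size - 1}/{size}",
        "Content-Length": str(size - start),
        "Content-Type": "audio/mpeg",
    }
    return 206, headers, body  # 206 Partial Content instead of a plain 200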
This appears to be a known bug with Google Chrome.