Today I was investigating something with Fiddler when I noticed that every time I launch Google Chrome, it always sends three HEAD requests to domains that appear to be randomly chosen.
Here is a sample:
HEAD http://fkgrekxzgo/ HTTP/1.1
Host: fkgrekxzgo
Proxy-Connection: keep-alive
Content-Length: 0
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept-Encoding: gzip,deflate,sdch
Do you have any idea why Google Chrome behaves this way?
Thanks guys
This is Chrome checking to see if your ISP converts non-resolving domains into your.isp.com/search?q=randomnonresolvingdomain
See https://mikewest.org/2012/02/chrome-connects-to-three-random-domains-at-startup
This algorithm seems unusable behind forward proxy servers: the browser asks for a random page, and the proxy always returns some page, whether an error (5xx), a masked error (5xx or 4xx), or a friendly "you are lost" page served with HTTP code 200.
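Chrome's startup probe can be approximated with a short script. This is purely an illustration (the hostname generation and logic here are my own sketch, not Chrome's actual implementation):

```python
import random
import socket
import string

def resolver_rewrites_nxdomain():
    """Probe a random, almost certainly unregistered hostname.

    If the bogus name resolves anyway, the resolver (often the ISP's
    DNS) is rewriting NXDOMAIN failures into a landing page.
    """
    host = "".join(random.choices(string.ascii_lowercase, k=10))
    try:
        socket.gethostbyname(host)
    except socket.gaierror:
        return False  # NXDOMAIN surfaced normally: DNS is honest.
    return True  # The bogus name "resolved": failures are being rewritten.
```

Chrome issues three such probes at startup and compares the answers; if they all "resolve", it knows non-existent domains are being intercepted and adjusts its omnibox behaviour accordingly.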
Related
I want to run an apache-tomcat-6.0.44 server on my laptop. I have configured the Classpath, Path, and CATALINA_HOME variables. I am able to run the server from the command prompt (using the "startup" command). But the problem is, when I type "http://localhost:8080" into the URL bar of my Chrome browser and press Enter, I get this error:
Fiddler Echo Service
GET / HTTP/1.1
Host: localhost:8080
Proxy-Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Cookie: _ga=GA1.1.1441246844.1443724804
To configure Fiddler as a reverse proxy instead of seeing this page, see Reverse Proxy Setup. You can download the FiddlerRoot certificate.
I have gone through some of the solutions, which are:
1) Fiddler Echo Service blocking oracle homepage in browser
2) Fiddler not capturing traffic from browsers
But I am still not able to fix the issue. Could anyone please help me out with this?
Along with using a proxy server, check whether your LAN settings are set to detect automatically.
Go to the following section of Chrome:
"Settings > Advanced settings > Network > Change proxy settings". You will get a wizard.
In there, click "LAN Settings" and select the checkbox "Automatically detect settings".
Another possibility is a mismatch in the installed certificate.
The server name used in Apache's httpd.conf must be the same as the server name in httpd-ssl.conf. For example, if httpd.conf contains ServerName www.example.com:8080, then httpd-ssl.conf should likewise contain ServerName www.example.com:8080.
In a commercial web application that uses POST requests frequently, it was necessary to perform history.go(-1) to trigger back navigation. As many others have experienced, I am getting a
Confirm Form Resubmission ERR_CACHE_MISS
error in Chrome. However, the same works just fine in Firefox (this is not a duplicate; please read on).
It is true that using POST requests to render content (without following the Post/Redirect/Get design pattern) is the reason for the above problem.
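For reference, the Post/Redirect/Get pattern just mentioned can be sketched as a minimal WSGI app (the endpoint names are illustrative, not from the actual application):

```python
# Minimal sketch of Post/Redirect/Get: the POST handler never renders
# content itself; it redirects so the result page is reached via GET.
def app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST" and environ["PATH_INFO"] == "/submit":
        # Process the form data here, then answer with 303 See Other so
        # the browser re-fetches the result page with GET.
        start_response("303 See Other", [("Location", "/result")])
        return [b""]
    # The result page is a plain GET: reloading it or navigating back to
    # it never triggers a form-resubmission prompt.
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<p>Result page, reached via GET.</p>"]
```

Because the history entry for the result page is a GET, back navigation and reloads hit the cache instead of prompting "Confirm Form Resubmission".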
However, while looking for an alternative solution, I observed that on certain web sites/apps it is possible to navigate back in Chrome (cache hit), whereas on some sites it fails. I have inspected all the HTTP headers from the successful sites, and it looks like the HTTP headers are not making any difference.
Could it be the browser acting differently based on the SSL certificate used by the web application, or what else could be the reason for back navigation working on certain web sites?
Example working web applications:
http://gmail.com/ - Enter some random email. Click next. Enter incorrect password multiple times and use browser back button to navigate back.
https://support.hostgator.com/ - Enter some random text in search box (do this several times). Use browser back button to navigate back.
POST request used in the failing web application:
POST /post3.jsp HTTP/1.1
Host: 192.168.1.111
Connection: keep-alive
Content-Length: 18
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Origin: https://192.168.1.111
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Referer: https://192.168.1.111/post2.jsp
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
I identified that in Chrome, back navigation fails in the above workflow unless the SSL certificate used for the HTTPS communication is a valid, trusted certificate.
If you are using a self-signed certificate, add the CA certificate to the Trusted Root Certification Authorities store, and everything should work as expected.
I have seen a lot of URLs lately that seem to have hijacked a portion of a site such as www.example.com. In the example URL "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html", wp-includes is WordPress and js is JavaScript. What has (typically) been done to these sites, and what is to be done (aside from notifying example.com or their host)?
Thank you.
Apart from the domain, the rest is the "path".
The link you entered is translated by the browser into an HTTP request with headers like this (example):
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2024.2 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: el-GR,el;q=0.8
Check this line:
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
It's the server's job to reply to the request, so if the server knows what
//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html
means, that's fine.
Most of the time such a path maps to a folder on disk, but probably not in this case.
The page you entered returns a Status Code: 404 Not Found, so your requested page was not found and the server responds with this error page, which for some reason reports no error to the user. (We all know this is an example page.)
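The split between domain and path described above can be checked with Python's standard urllib (purely an illustration; the thread itself involves no code):

```python
from urllib.parse import urlsplit

# The URL from the question: everything after the host, including the
# double slash, is carried verbatim in the request path.
url = "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html"
parts = urlsplit(url)

print(parts.netloc)  # www.example.com
print(parts.path)    # //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html
```

Note that the path is opaque to the client: only the server decides whether it maps to a real resource.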
I'm trying to develop an HTML5 game. To this end I have the PHP built-in server running on my local machine to work on, then I "compile" it (combine/minify JS files, minify CSS, etc) and upload them to the production server.
I've run into an issue with the sound effects. For some reason, Chrome won't recognise that it has reached the end of the sound, and therefore will get stuck in a "buffering" state. This is in spite of a valid Content-Length header. IE and Firefox don't seem to have this problem. It also affects the loop attribute, and it also fails to fire ended events.
These problems are only present on the PHP built-in server. It works fine in production. What am I missing?
EDIT: Here's an example request/response on the PHP server:
GET /bin/aud/rightwhereiwantyou.mp3 HTTP/1.1
Host: 10.0.0.110:8000
Connection: keep-alive
Accept-Encoding: identity;q=1, *;q=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept: */*
Referer: http://10.0.0.110:8000/snake.php
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Range: bytes=0-
HTTP/1.1 200 OK
Host: 10.0.0.110:8000
Connection: close
X-Powered-By: PHP/5.4.13
Content-Type: audio/mpeg
Content-Length: 1685058
My wild guess is that Chrome is expecting a partial-content response (a byte range), since the MP3 file could be large and you might want to stream it instead of downloading it all at once.
Have you tried adding the Accept-Ranges and Content-Range headers to your response? See the answer to this question for an example.
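To illustrate what a range-aware reply would look like, here is a Python sketch of building a 206 response for a simple single-range request (an assumption-laden sketch, not a full RFC 7233 implementation; the function name is mine):

```python
def build_range_response(data, range_header):
    """Return (status, headers, body) for a simple "bytes=START-END"
    Range header; falls back to a plain 200 when no range is asked for.
    Handles only a single range, unlike a full RFC 7233 implementation."""
    headers = {"Accept-Ranges": "bytes", "Content-Type": "audio/mpeg"}
    if not range_header or not range_header.startswith("bytes="):
        headers["Content-Length"] = str(len(data))
        return "200 OK", headers, data
    start_s, _, end_s = range_header[len("bytes="):].partition("-")
    start = int(start_s or 0)
    end = int(end_s) if end_s else len(data) - 1  # "bytes=0-" means "to the end"
    body = data[start:end + 1]
    headers["Content-Range"] = "bytes %d-%d/%d" % (start, end, len(data))
    headers["Content-Length"] = str(len(body))
    return "206 Partial Content", headers, body
```

A server that answers `Range: bytes=0-` with a plain 200 and no Accept-Ranges header is telling Chrome it cannot seek, which may be why playback gets stuck on the PHP built-in server but not in production.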
This appears to be a known bug with Google Chrome.
Given 2 links:
<a href="/somepage.html">link1</a>
<a href="somepage.html">link1</a>
On a page at this domain:
www.site.com/index.html.
Are both requests going to look the same? Or can I expect a "/somepage.html" and a "somepage.html" request to the site.com domain?
EDIT: Here is what i see in the raw header in Chrome console:
GET /resources/css/jquery-ui.css HTTP/1.1
Host: site.com
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
Accept: text/css,*/*;q=0.1
Referer: http://site.com/
So to clarify the question: is only the first line of the request parsed by rewrite engines? If so, can I use ^/resource, for instance, to target it?
Both will look the same: /somepage.html
The browser creates /somepage.html, knowing that it is relative to the current document, which is in /. The browser always requests the full path from the server. This is required, as the server doesn't know what page the browser is currently on; HTTP is inherently stateless.
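You can confirm this resolution with Python's urljoin, which implements the same relative-URL rules browsers use:

```python
from urllib.parse import urljoin

base = "http://www.site.com/index.html"

# A relative reference is resolved against the base document's directory.
print(urljoin(base, "somepage.html"))   # http://www.site.com/somepage.html

# A root-relative reference is resolved against the site root; since the
# base document already sits at the root, the result is identical.
print(urljoin(base, "/somepage.html"))  # http://www.site.com/somepage.html
```

Since both resolve to the same absolute path, a rewrite rule such as ^/somepage matches requests produced by either form of the link.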