Chrome ignoring hosts file for subdomains of localhost

When I attempt to visit http://mysubdomain.localhost, Chrome resolves to [::1]:80, even though there is an explicit entry for this domain in the hosts file. No other browser behaves this way: Firefox, Safari, and curl all resolve to the IP address given in my hosts file. This is the entirety of my hosts file at the moment:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
192.168.88.88 mysubdomain.localhost
And yet, when I attempt to visit http://mysubdomain.localhost in Chrome, it does not resolve to 192.168.88.88. This is problematic for me, because 192.168.88.88 is a virtual machine running on my computer. I could change the domain to http://mysubdomain.local or http://mysubdomain.dev, but that would require me to update a configuration file used by many people on the project, which I'd rather avoid, because I may break some aspect of their workflow.
Firefox (working as desired)
curl (working as desired)
Chrome (not working as desired)
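For anyone comparing, a quick way to see what the system resolver (which Firefox, Safari, and curl appear to use here) returns, independently of Chrome, is to query it from the terminal. A minimal check on macOS, assuming the hosts entry above:
# both go through the OS resolver / hosts file, not Chrome's internal resolver
dscacheutil -q host -a name mysubdomain.localhost
ping -c 1 mysubdomain.localhost
If these return 192.168.88.88 while Chrome still goes to [::1], the problem is specific to Chrome's own name handling rather than to the hosts file.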
Some things I've already tried:
I am not using a proxy
I have cleared browser cache many times
I have cleared dns cache from chrome://net-internals/#dns
I have restarted the machine several times
I have cleared the system DNS cache with the terminal command sudo dscacheutil -flushcache;sudo killall -HUP mDNSResponder
I have tried incognito mode several times
I have tried creating a new Chrome user account
System information:
Chrome version: 53.0.2785.116
OS Version: Mac OS 10.11.6 (El Capitan)

Upon further review, I think this is unfortunately working as designed. From the Chromium issue tracker:
This was done as a security mitigation, as OS X's resolver does not properly ensure that .localhost domains are not queried on the network, which is a key security property of ensuring .localhost is truly local. Because we can't trust the resolver to do the secure thing, we unfortunately can't trust the resolver (even when it may be secure)...
The security risk is not about properly configured server vs improperly configured server. It's that a DNS resolver should never send foo.localhost requests out to the network. If it does, a network attacker could make "foo.localhost" point to any IP of their choosing. This is bad, because "localhost" (and "*.localhost") have special privileges (c.f. http://www.w3.org/TR/powerful-features/#is-origin-trustworthy ), and because they have those special privileges, they need to be secure.
In fact, it seems that Chrome may be the only tool in the bunch properly implementing RFC 6761, which states in part:
Name resolution APIs and libraries SHOULD recognize localhost
names as special and SHOULD always return the IP loopback address
for address queries and negative responses for all other query
types. Name resolution APIs SHOULD NOT send queries for
localhost names to their configured caching DNS server(s).
So it seems there is no way to fix this. I will change the domain of my virtual machine to http://mysubdomain.local
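For reference, that just means pointing the new name at the VM in the hosts file, e.g.:
192.168.88.88 mysubdomain.local
(One thing to test carefully: on macOS, .local names are normally handled by Bonjour/mDNS, so a suffix like .test, which is also reserved for local use, may be the safer long-term choice.)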

After playing around with this and using Firefox for a while, I found a workaround by accident. Instead of changing your development environments, you can simply install Fiddler: https://www.telerik.com/download/fiddler.
Fiddler bypasses Chrome's DNS handling, I believe, so you are left with a perfectly working system without having to change all your environments.
I have tested this on Windows 10 with Hyper-V and Vagrant.
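If I remember right, Fiddler can also remap hosts itself (Tools > HOSTS...), using entries in the same format as the OS hosts file; for the setup in the question that would presumably be something like:
192.168.88.88 mysubdomain.localhost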

Related

Burp Interception does not work for localhost in Chrome

I can't intercept requests made by Chrome version 73.0.3683.86 to my localhost site.
The local site is running on IIS at http://127.0.0.3:80
The Burp proxy listener is the default one on 127.0.0.1:8080
Interception rules are the defaults as well
In my LAN settings, "Bypass proxy server for local addresses" is not enabled
When Interception is turned ON and I reload the page in Chrome, no request is "caught" by Burp; my local site loads and only the external requests are intercepted, such as loading external scripts from a CDN.
Also, under "Proxy" > "HTTP History" there are only requests to external sites, and requests to http://127.0.0.3:80 are not recorded.
When I reload the same page in Internet Explorer 11, the initial GET request is intercepted by Burp, as expected. "Proxy" > "HTTP History" also shows all the requests to the local site http://127.0.0.3:80.
What is the problem with Chrome? Thanks!
Found the solution late yesterday. I am using the Chrome extension ProxySwitchy, but it doesn't matter if you use that or the system proxy configuration. The solution works the same way.
You can solve this problem by adding an entry in /etc/hosts file like below
127.0.0.1 localhost
127.0.0.1 somehostname
Now Burp will intercept requests to somehostname
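One caveat, since the site in the question is bound to 127.0.0.3: the loopback address in the hosts entry would presumably need to match whatever address IIS is actually listening on, e.g.:
127.0.0.3 somehostname
Then browse to http://somehostname/ instead of http://127.0.0.3/, so the URL host is no longer a bare loopback literal that Chrome bypasses implicitly (see the bypass rules quoted below).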
Which version of Chrome are you using?
Have you tried using the FoxyProxy Chrome extension?
As a workaround, you could modify the hosts file on your machine.
I experienced the same issue when I upgraded from Opera 58.0 to 60.0. I think this is Chrome related, because I've also experienced it in other Chromium-based browsers. Opera 58 uses Chrome 71.0.3578.98; Opera 60 uses Chrome 73.0.3683.103. Something was definitely changed in Chrome between these versions to cause this problem.
You have to subtract the implicit bypass rules defined in Chrome (https://chromium.googlesource.com/chromium/src/+/master/net/docs/proxy.md#Implicit-bypass-rules)
Requests to certain hosts will not be sent through a proxy, and will
instead be sent directly.
We call these the implicit bypass rules. The implicit bypass rules
match URLs whose host portion is either a localhost name or a
link-local IP literal. Essentially it matches:
localhost
*.localhost
[::1]
127.0.0.1/8
169.254/16
[FE80::]/10
https://chromium.googlesource.com/chromium/src/+/master/net/docs/proxy.md#Bypass-rule_Subtract-implicit-rules
Whereas regular bypass rules instruct the browser about URLs that
should not use the proxy, Subtract Implicit Rules has the opposite
effect and tells the browser to instead use the proxy.
In order to be able to proxy through the loopback interface, you have to add the entry
<-loopback>
in the list of hosts for which you don't want to use a proxy. It is a bit confusing, indeed.
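As a concrete sketch, the same subtraction rule can also be passed when launching Chrome from the command line (the proxy address here is Burp's default from the question):
# assumes "chrome" launches the Chrome binary; on Windows use the full path to chrome.exe
chrome --proxy-server="127.0.0.1:8080" --proxy-bypass-list="<-loopback>"
With <-loopback> in the bypass list, requests to localhost names and 127.0.0.0/8 addresses are sent through the proxy instead of going direct.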
Make sure you haven't enabled the SOCKS proxy option. The same thing happened to me, and the problem went away once I disabled it.

Forget client certificate setting for specific domain (Chrome)

Using: Chrome 67.0.3396.99
Our webserver implements X.509 client authentication. The certificates are offered through the PKCS#11 interface; we connect a smartcard (in this case: Yubikey 4), the browser prompts for the certificate selection and PIN.
We disconnected the smartcard and visited the authenticated domain (say, localhost:8000) to observe the behavior of the webserver (in a local development environment).
The webserver correctly refused to serve the request.
However, Chrome now never sends the certificate when visiting localhost:8000, even when the smartcard is connected.
The following did not resolve the problem:
Clearing all site data through the developer console;
Resetting site preferences to their defaults (through chrome://settings/content/siteDetails?);
Rebuilding the webserver.
Any pointer to where I can clear this state in Chrome would be greatly appreciated. As a temporary fix, we run the server on a different port, but this is not an option in the long term, as this scenario is quite likely to happen in production as well.

Using getUserMedia() on insecure origins in Chrome

I am developing a webpage that uses the camera. When I test it in Chrome on my local network, the camera doesn't work and I get this warning in the console:
getUserMedia() no longer works on insecure origins. To use this feature, you should consider switching your application to a secure origin, such as HTTPS. See link for more details.
In the link provided there is an instruction to set some flags in Chrome. So I tried. My command looks like this:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --unsafely-treat-insecure-origin-as-secure="192.168.0.15" --user-data-dir=c:\chrome-dev-profile
But when I run Chrome I get this message:
You are using an unsupported command-line flag: --unsafely-treat-insecure-origin-as-secure. Stability and security will suffer.
What am I doing wrong?
Is there another way I can test in local network without setting up https server? I need this just for development.
Luka,
I ran into this bug just yesterday. I have not yet found out how to get Chrome to honor that flag on the command line, but I did find a workaround that works for my case.
I'm running my web services on a Linux machine that runs an SSH server. I'm testing on Windows with Chrome, so I used PuTTY to connect to the Linux box from Windows and created a "local port forward" to make the remote Linux box's ipaddress:port appear as localhost:port on Windows. Depending on your platform this workaround may work for you; it isn't too cumbersome if you only have a few ports to forward.
In my particular case my setting for putty looked like
L8080 localhost:8080
To see more about port forwarding and ssh see: https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding
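For reference, the same local forward with plain OpenSSH (user and host below are placeholders) would be:
# forward local port 8080 to port 8080 on the remote machine's loopback interface
ssh -L 8080:localhost:8080 user@remote-linux-box
Browsing to http://localhost:8080 then works, because Chrome treats localhost as a secure origin, so getUserMedia() is allowed without HTTPS.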

Chrome localhost does not work

I have defined some virtual servers that were working fine until a few days ago.
Now they don't work in Chrome, though there are no problems in Firefox or Safari.
I get this:
This webpage is not available
ERR_ICANN_NAME_COLLISION
Hide details
This site is using a new generic top-level domain (gTLD). If you have
used loc.dev to access an internal site in the past, contact your
network administrator.
I found this suggested as a solution:
Set "Built-in Asynchronous DNS" to "Disabled" in chrome://flags, but there is no such flag in my Chrome version (43.0.2357.81).
Do you know a solution for this?
Later edit: If I move the site into the htdocs folder and go to http://localhost, it works. It seems the problem is only with virtual hosts.
Got the same issue after updating to the latest Chrome version last night. I was getting an ERR_NAME_NOT_RESOLVED error only in Google Chrome for all of my virtual hosts. (Screenshot: DNS name not resolved error)
Here's the fix I made.
Clear the Chrome DNS cache by opening this in the browser:
chrome://net-internals/#dns
(Screenshot: flushing the Chrome DNS cache)
You will see a "Clear Host Cache" button. Press it and the DNS cache will be flushed.
Keep this DNS window open. Now access the virtual host in the browser; for me it was http://api.localhost. Once you do, you will see a new entry in the DNS window. For me it was "localhost." showing an error; notice the period "." at the end of localhost.
The last step is simply to add this entry to your hosts file. Your hosts file should be updated with an entry that resolves localhost. (with the trailing dot) to 127.0.0.1:
# don't forget the trailing . !!!
127.0.0.1 localhost.
The hosts file is located at:
for Linux: /etc/hosts
for Windows: C:\Windows\System32\drivers\etc\hosts
Another solution for your case might be to ditch the .dev at the end of your local virtual host domain.
This has to do with some new changes by Google: ".dev" is now a Google-owned TLD (in the corner of the internet where people care about DNS, there is a bit of an uproar at Google's application for over a hundred new top-level domains, including .dev).
Try this: use a domain name you own. Possibly using the full name like "localhost.dev.$yourdomain" could help you here, depending on your setup.
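As a sketch, with example.com standing in for a domain you actually own, the hosts entry would then look something like:
127.0.0.1 localhost.dev.example.com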
With Chrome I faced the same issue because I had mistakenly commented out 127.0.0.1 localhost in the hosts file, while Firefox kept working. Just make sure your hosts file includes
127.0.0.1 localhost
This fixes the ERR_ICANN_NAME_COLLISION error ("Try contacting your system administrator").
If you are using Magento and getting this error, go to your database and search for the core_config_data table.
Open it, check your web store name, and change the store name.
Restart your WAMP server and it's fixed.
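If it helps, the rows that usually matter for the domain are the base URLs; a quick way to inspect them (database name and credentials below are placeholders) would be:
mysql -u root -p your_magento_db -e "SELECT scope, path, value FROM core_config_data WHERE path LIKE 'web/%/base_url';"
Update the value column to the new domain, then clear the Magento cache before restarting WAMP.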
Worked for me:
chrome://net-internals/#hsts -> Domain Security Policy -> Delete domain security policies -> enter localhost there and press Delete
Here is another catch for you: my virtual hosts in the Windows hosts file were defined as:
127.0.0.1 bla.bla.bla.localhost
127.0.0.1 bla2.bla2.localhost
and the corresponding virtual host directives in the XAMPP Apache vhosts file made it all work nicely in every browser but Chrome!
A simple fix: don't end the names with the full "localhost" word. Rename the vhosts to end with anything else; just "loc" did it in my case, and everything works in Chrome now.
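In other words, the hosts entries above end up as something like:
127.0.0.1 bla.bla.bla.loc
127.0.0.1 bla2.bla2.loc
with the matching ServerName changes in the Apache vhosts file.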
I've been having this problem with Chrome Version 56.0.2924.87 (64-bit): attempting to access a VM via gset.localhost just would not work.
I changed the name in the hosts file to gset.loc and it works fine.
The answer seems to be: do not use localhost in your hosts file names when trying to access a virtual machine running on your machine using Chrome.
All browsers (Chrome, Firefox, Safari) were not resolving my virtual host and kept redirecting to www.mysite.dev.
After pulling my hair out for hours, it turned out I just needed to change mysite.dev to www.mysite.dev in the /etc/hosts file.
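i.e. the /etc/hosts line ended up as something like (assuming the vhost is served locally on 127.0.0.1):
127.0.0.1 www.mysite.dev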

Virtual hosts in nginx on a chromebook

I develop a few webpages on my local computer. My setup is one virtual server in nginx for each site, plus an entry in /etc/hosts for each site pointing a domain name to 127.0.0.1.
However, I switched computers to a Chromebook and can't keep working this way.
I use nginx in a crouton-created Debian chroot and it works fine. However, Chrome OS won't let me edit /etc/hosts. I can still reach nginx at 127.0.0.1, but I can't reach any of my virtual servers anymore.
What's your solution to this problem? (I know I can force-edit /etc/hosts on Chrome OS if I disable automatic updates, but I would like to avoid that.)
You can try a Google Chrome extension like this one to override DNS settings: https://chrome.google.com/webstore/detail/dns-overrider/acmhaiiijfheggcaanjlgpampclpbnoh/