I am creating a new web site where I was redirecting HTTP traffic on port 80 to port 443, using certificates created by Certbot. I was using Nginx as a reverse proxy for Apache2, so all requests for PHP scripts were to be served by Apache.
I encountered a problem, and decided to remove the HTTPS redirection, stop the Apache server, and start again from the beginning. In other words, I now had Nginx working on its own, and just on port 80.
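For context, a minimal sketch of the kind of Nginx server block in the original setup, before I stripped it back (sub.domain.com, the web root, and the Apache backend port 8080 are placeholders, not my actual values):

server {
    listen 80;
    server_name sub.domain.com;

    root /var/www/html;           # placeholder web root
    index index.php index.html;

    # PHP requests proxied to Apache (assumed here to listen on 127.0.0.1:8080)
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}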
When testing in Google Chrome 62.0.3202.75, I dutifully cleared the cache. Many times. However, Chrome continued to redirect my requests for http://sub.domain.com/index.php to https://sub.domain.com/index.php, which of course failed. Other browsers were happy to download the index.php file, with no complaints.
It was only when I decided to restore the original default settings for Chrome that it started to behave correctly again.
How is it that Chrome was determined to unilaterally perform a redirect that was no longer valid, even after emptying the cache? Is there a more powerful way (other than restoring settings to their original defaults) of getting Chrome to let go of a page?
Related
I searched everywhere and found no information about this specific redirect.
I have an app that needs to use HTTP to function. Recently Chrome started redirecting my app to HTTPS automatically, and if I put HTTPS-to-HTTP redirect code in my app it causes an infinite loop.
My app is not on the HSTS preload domain list, and my app and server have no code that redirects to HTTPS.
Request URL: http://4444.com/z.txt
Request Method: GET
Status Code: 307 Internal Redirect (from disk cache)
Referrer Policy: strict-origin-when-cross-origin
Cross-Origin-Resource-Policy: Cross-Origin
Location: https://4444.com/z.txt
Non-Authoritative-Reason: DNS
This does not happen in any browser other than Chrome.
Does that mean Chrome is targeting my host's DNS to make sure every website hosted on it uses HTTPS?
If so, I think this is a very bad move from Google, as I can't find any announcement by Google that they would start forcing HTTPS on websites. This could break many non-HTTPS sites without prior warning.
If not, what can I do to fix this issue?
Thanks
I encountered the same problem, and it didn't happen all the time; sometimes when I opened another window in incognito mode, the redirect disappeared! Sooooo annoying :(
I just tried another way, and it seems to work fine for me!
Go to chrome://net-internals/#dns, click "Clear host cache", then refresh your page; the redirect will be gone!
Even if "Always use secure connections" (chrome://settings/security) is disabled, Chrome will still try to use HTTPS if it finds HTTPS records in DNS, as per the #dns-https-svcb flag, "Support for HTTPS records in DNS" (chrome://flags/#dns-https-svcb), which is enabled by default.
This causes the loop leading to ERR_TOO_MANY_REDIRECTS with Non-Authoritative-Reason: DNS.
Either remove any HTTPS record from the host's zone file or disable the aforementioned flag in Chrome.
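To check whether a zone actually publishes an HTTPS record, you can query it directly (a sketch; recent versions of dig accept the HTTPS mnemonic, older ones need the raw TYPE65 form):

dig example.com HTTPS +short
# on older dig versions:
dig example.com TYPE65 +short

If the query returns anything, that record is what Chrome is acting on.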
Did you access the website through a VPN? A VPN server seems to be able to force HTTP to HTTPS.
I'm a front-end developer working on an application where the login response puts a session cookie on the client. Later requests are then authorized, since the user is "logged in".
Starting from Chrome 80
All cookies without a SameSite attribute will be treated as if they had SameSite=Lax specified. In other words, they will be restricted to first-party only (server and client on the same domain).
If you need third-party cookies (server and client on different domains), then they must be marked with SameSite=None.
Restricted to first-party by default
Set-Cookie: cname=cvalue; SameSite=Lax
Allowed in third-party contexts
Set-Cookie: cname=cvalue; SameSite=None; Secure
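As a server-side illustration (not from the Chrome announcement; Express and the cookie names here are just assumptions for the sketch), the two variants look like this:

import express from "express";

const app = express();

app.post("/login", (_req, res) => {
  // First-party only: the Chrome 80 default described above
  res.cookie("sid_lax", "opaque-session-id", { sameSite: "lax" });

  // Usable in third-party contexts: Chrome also requires Secure (HTTPS)
  res.cookie("sid_none", "opaque-session-id", { sameSite: "none", secure: true });

  res.sendStatus(204);
});

app.listen(5678);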
For my application, I want the default behavior: my client and server run on the same domain in production. But in development I'm working from localhost (a different domain).
Up until now, Chrome had a special flag under chrome://flags, "SameSite by default cookies". I could Disable this flag on my development machine and the login passed. And in production I didn't need the flag, because I wanted the default behavior.
Starting from Chrome 91
The "SameSite by default cookies" flag was removed. This means that from this version I can't log into my app without deploying it to production.
Does anybody know how I can get the session cookie while working from localhost, while still keeping the security of SameSite=Lax? If possible with client-only changes, but if needed also with server changes.
(Screenshot: Chrome DevTools SameSite error message)
(Screenshot: Chrome 80 flags menu; these flags were removed in Chrome 91)
Update
I tried to solve this by making the server use SameSite=None (development only).
This causes a different error: "Connection isn't secure". This is because when using SameSite=None you are required to add the Secure attribute, and of course use an HTTPS connection.
A secure connection has its own problems, like having to pay for a certificate in development.
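As an aside, a locally generated self-signed certificate can also satisfy the Secure requirement in development. A minimal sketch of serving the client over HTTPS in Node/Express, where key.pem and cert.pem are assumed to be a self-signed pair created beforehand:

import fs from "fs";
import https from "https";
import express from "express";

const app = express();
app.use(express.static("dist")); // hypothetical client build output

// key.pem / cert.pem: a locally generated self-signed pair (an assumption)
https
  .createServer(
    { key: fs.readFileSync("key.pem"), cert: fs.readFileSync("cert.pem") },
    app
  )
  .listen(443);

Chrome will warn about the self-signed certificate, but it can be trusted locally for development.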
Workaround: Downgrade Chrome
This is not a solution! Just a temporary workaround for anybody who, like me, got their work halted by this update.
Uninstall Chrome
Go to "Add or remove programs" and uninstall Chrome. Notice that user data like cookies and saved browser passwords may be lost.
Download Chrome v90 from slimjet.com, or from any other site. Then install Chrome.
Prevent auto-update Chrome, according to this StackOverflow solution: open C:\Program Files (x86)\Google\Update
rename the file GoogleUpdate.exe to GoogleUpdate2.exe.
This will cause Chrome to not find the update package.
Update Flags - Open Chrome and type: chrome://flags
Search #same-site-by-default-cookies and Disable the flag
I have found a way to fix it and I'm sharing it with everyone :-)
The description appears in the DevTools Issues section:
Specify SameSite=None and Secure if the cookie should be sent in
cross-site requests. This enables third-party use.
In Developer Tools, go to the Application tab, and on the left side open Cookies.
For the cookie that you want to share with other domains, tick the Secure checkbox and set SameSite to None. Reload the local site tab and you will be able to use the cookie, which can now be sent across from the origin domain.
I hope this brightens your day
As of Chrome v107 (Nov 2022)
I had a similar issue and spent a few hours digging, and what I found is that the only solution for Chrome is to make your front-end connection secure, i.e. HTTPS (using a proxy, for instance): Link
An alternative solution is to use Firefox and set: about:config > network.cookie.sameSite.noneRequiresSecure=false. This allows SameSite=None; Secure=false
In our case, we are able to also run our server locally on a different port and point our client app to that localhost address for development purposes.
For example, I have the client app running on localhost:1234 and sending requests to a local copy of the server running on localhost:5678. This ensures that cookies are set successfully since the client and server are now "SameSite".
Admittedly, this is perhaps more of a workaround than a solution, but I hope it helps in the short term.
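A sketch of the client side of that setup (the /api/me endpoint is hypothetical; the ports are the ones from my example):

// Client dev server on http://localhost:1234, local API copy on http://localhost:5678.
// Different ports still count as the same *site*, so SameSite=Lax cookies are sent,
// but the request is cross-origin, so credentialed CORS is still required.
const res = await fetch("http://localhost:5678/api/me", {
  credentials: "include",
});
// The server must also reply with Access-Control-Allow-Origin: http://localhost:1234
// and Access-Control-Allow-Credentials: true for the browser to accept this.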
If you want to perform "unsafe" CORS requests (meaning POST/PUT/DELETE requests), you will need to modify the Tomcat conf/context.xml file to set sameSiteCookies to "none" instead of "lax".
...
<!-- default samesite cookies configuration, for CORS set sameSiteCookies to "none" and configure bundle for HTTPS -->
<CookieProcessor sameSiteCookies="none" />
...
You can manually set the SameSite attribute to "None" and tick "Secure" inside DevTools for development.
That way you would not have to modify your production environment (keep the cookies as SameSite=Lax).
Any time I have DevTools open on localhost, my cookies are deleted and I am redirected to the login page on every page load, which means I cannot use DevTools to debug or get insight into my site. I have localhost set up with a valid (self-signed) SSL cert, and the site works normally until I open DevTools. How do I fix or disable this new "security" setting in Chrome?
After lots of issues and trying out many different things, I came across this post/answer:
When adding a Javascript library, Chrome complains about a missing source map, why?
It turns out that when I opened DevTools, it would request a CSS source map, and the request was being sent to a different firewall, causing my application to require re-authentication every time this resource was requested. Turning off the CSS source map option fixed the issue.
I can't intercept requests made by Chrome version 73.0.3683.86 to my localhost site.
The localhost site is running on IIS at http://127.0.0.3:80.
The Burp proxy listener is the default one, on 127.0.0.1:8080.
The interception rules are the default ones as well.
In my LAN settings, "Bypass proxy server for local addresses" is not enabled
When interception is turned ON and I reload the page in Chrome, no request is "caught" by Burp; my local site loads and only the external requests are intercepted, such as loading external scripts from a CDN.
Also, under "Proxy" > "HTTP History" there are only requests to external sites, and the requests to http://127.0.0.3:80 are not recorded.
When I reload the same page in Internet Explorer 11, the initial GET request is intercepted by Burp, as expected. "Proxy" > "HTTP History" also shows all the requests to the local site http://127.0.0.3:80.
What is the problem with Chrome? Thanks!
Found the solution late yesterday. I am using the Chrome extension ProxySwitchy, but it doesn't matter whether you use that or the system proxy configuration; the solution works the same way.
You can solve this problem by adding an entry in /etc/hosts file like below
127.0.0.1 localhost
127.0.0.1 somehostname
Now Burp will intercept requests to somehostname.
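Note that you then have to browse to http://somehostname/ rather than the raw IP address, so that the hostname no longer matches Chrome's implicit bypass rules for local addresses (see the next answer for details).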
Which version of Chrome are you using?
Have you tried using the FoxyProxy Chrome extension?
As a workaround, you could modify the hosts file on your machine.
I experienced the same issue when I upgraded from Opera 58.0 to 60.0. I think this is Chrome related, because I've also experienced it in other Chromium-based browsers. Opera 58 uses Chromium 71.0.3578.98; Opera 60 uses Chromium 73.0.3683.103. Something was definitely updated in Chromium between these versions that causes this problem.
You have to subtract the implicit bypass rules defined in Chrome (https://chromium.googlesource.com/chromium/src/+/master/net/docs/proxy.md#Implicit-bypass-rules)
Requests to certain hosts will not be sent through a proxy, and will
instead be sent directly.
We call these the implicit bypass rules. The implicit bypass rules
match URLs whose host portion is either a localhost name or a
link-local IP literal. Essentially it matches:
localhost
*.localhost
[::1]
127.0.0.1/8
169.254/16
[FE80::]/10
https://chromium.googlesource.com/chromium/src/+/master/net/docs/proxy.md#Bypass-rule_Subtract-implicit-rules
Whereas regular bypass rules instruct the browser about URLs that
should not use the proxy, Subtract Implicit Rules has the opposite
effect and tells the browser to instead use the proxy.
In order to be able to proxy through the loopback interface, you have to add the entry
<-loopback>
to the list of hosts for which you don't want to use a proxy. It is a bit confusing, indeed.
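For example, when setting the proxy on Chrome's command line (both switches are standard Chromium ones; the port matches the Burp listener from the question):

google-chrome --proxy-server="127.0.0.1:8080" --proxy-bypass-list="<-loopback>"

With <-loopback> in the bypass list, requests to 127.0.0.1 and localhost are no longer sent directly and show up in Burp.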
Make sure you haven't enabled the SOCKS proxy option. It happened to me too, and I found the solution when I disabled the SOCKS proxy option; just make sure it's disabled!
I'm having a strange problem with a separately hosted subdomain I have. I'm running an application on Engine Yard, let's call it mysite.com. I have a wildcard SSL certificate installed there which covers all the subdomains (things like api.mysite.com). We recently decided to migrate our blog to be hosted independently (right now it lives on wordpress.com). Because I can't run the blog alongside our Rails app with ease on Engine Yard, I decided to grab some cheap hosting space from Dreamhost to host our Wordpress blog there. I set up the server there to fully host our subdomain (let's call it blog.mysite.com), and updated the DNS A record on Hover (our DNS provider) to point blog.mysite.com to the Dreamhost server. So here's the issue:
If I go to blog.mysite.com via Firefox or Safari on my Mac I see the basic Wordpress install which I set up. However, if I try to view things with Chrome I get the following error:
This webpage is not available
Error 118 (net::ERR_CONNECTION_TIMED_OUT): The operation timed out.
This happens on all Macs running Chrome that I could get my hands on. I tried both clearing the cache and flushing the DNS, but nothing changed. The weirdest part is that Chrome keeps looking for https://blog.mysite.com instead of http://blog.mysite.com. There is no SSL cert installed for the blog subdomain on Dreamhost because it's not necessary.
Has anyone ever come across this before? And in case anyone wants to try the actual address is http://blog.frestyl.com.
It sounds like Chrome has registered a 301 permanent redirect from http://blog.frestyl.com to https://blog.frestyl.com. Besides clearing the cache, I'm not sure what else can be done.
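One way to confirm the redirect is cached in the browser rather than still sent by the server is to fetch the headers outside Chrome, for example:

curl -sI http://blog.frestyl.com/
# No "Location: https://..." header in the output means the 301
# lives in Chrome's cache, not on the Dreamhost server.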