Chrome keeps redirecting because of HSTS

I have set up a simple static server like this in /etc/nginx/sites-available/default that serves a bunch of files:
server {
    listen 80;
    server_name www.x.app x.app;
    root /usr/share/app/front-end/build;

    location / {
        index index.html;
        autoindex on;
        autoindex_exact_size off;
    }
}
but when I browse http://www.x.app I immediately get redirected to https://www.x.app, while I want to browse it over plain HTTP. I searched a lot and found that Chrome itself performs the redirect (a 307 Internal Redirect with the response header Non-Authoritative-Reason: HSTS), so the redirect happens because of HSTS.
The answers I found said to use add_header Strict-Transport-Security "max-age=0"; in the nginx configuration, but it didn't work.
P.S. 1: I cleared my Chrome cache, and it still doesn't work.
P.S. 2: Querying the HSTS/PKP domain in Chrome gives:
Found:
static_sts_domain: app
static_upgrade_mode: FORCE_HTTPS
static_sts_include_subdomains: true
static_sts_observed: 1613773712
static_pkp_domain:
static_pkp_include_subdomains:
static_pkp_observed:
static_spki_hashes:
dynamic_sts_domain:
dynamic_upgrade_mode: UNKNOWN
dynamic_sts_include_subdomains:
dynamic_sts_observed:
dynamic_sts_expiry:

When Google launched the .app top-level domain, they announced it would only be available over HTTPS: HSTS for .app is preloaded in Chrome's (and other browsers') code, rather than depending on sites configuring it in their webserver:
https://blog.google/technology/developers/introducing-app-more-secure-home-apps-web/
A key benefit of the .app domain is that security is built in—for you and your users. The big difference is that HTTPS is required to connect to all .app websites, helping protect against ad malware and tracking injection by ISPs, in addition to safeguarding against spying on open WiFi networks. Because .app will be the first TLD with enforced security made available for general registration, it’s helping move the web to an HTTPS-everywhere future in a big way.
So you need to use another domain name if you do not want to use HTTPS. Note that .dev is in the same situation.
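If you stay on the .app domain, the only option is to serve the site over HTTPS. A minimal sketch of the corresponding server block (the certificate paths are placeholders; everything else is taken from the question):

server {
    listen 443 ssl;
    server_name www.x.app x.app;
    ssl_certificate     /etc/ssl/certs/x.app.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/x.app.key; # placeholder path
    root /usr/share/app/front-end/build;

    location / {
        index index.html;
        autoindex on;
        autoindex_exact_size off;
    }
}

A free certificate from, for example, Let's Encrypt satisfies the HTTPS requirement.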

Related

CORS on Firefox, and potential help with Chrome's Private Network Access

I am currently using the Flask development HTTP server, and I am trying to build a local service (running on localhost) that serves files for a remote visualization website.
Here is the code for the Python side:
import flask

app = flask.Flask(__name__)

@app.route('/task/<path:path>', methods=['GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'CONNECT', 'OPTIONS', 'TRACE', 'PATCH'])
def static_file1(path):
    p = "./task/" + path  # serve files from ./task/; conditional=True enables range requests
    return flask.send_file(p, conditional=True)
For Safari, it just works like a charm.
As the screenshot indicated, the Flask development HTTP server can serve files partially.
However, on Firefox it worked for one request but not for the other.
Here are the headers for the first, failed request and for the successful request (screenshots omitted).
So I do believe the CORS header (Access-Control-Allow-Origin) is set correctly; otherwise the second request would fail as well.
Then what did I do incorrectly?
Second part:
It also doesn't work in Chrome; both requests failed. But I found the article below explaining the new security feature:
https://developer.chrome.com/blog/private-network-access-preflight/#:~:text=%23%20What%20is%20Private%20Network%20Access,to%20make%20private%20network%20requests.
But even with "Access-Control-Allow-Private-Network" set to "true" (see screenshot above), both requests still failed in Chrome, with this error message:
Access to XMLHttpRequest at 'http://localhost:10981/task/a5c8616777d000499ff0cd5dbb02c957/datahub.json' from origin 'https://somepublic.website' has been blocked by CORS policy: The request client is not a secure context and the resource is in more-private address space `local`.
Any suggestion would be helpful!
Thanks!
Update 1:
After enabling the ad-hoc SSL context (a self-signed certificate) on the Flask side, using https on both localhost and "the public website", and setting the "#allow-insecure-localhost" flag in Chrome to true, it works in Chrome now. But it still doesn't in Firefox.
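For reference, a minimal sketch of what the Flask side can look like with the Private Network Access opt-in header and an ad-hoc TLS context. The route, port, and remote origin are from the question; the after_request hook and everything else is an assumption about how the setup could be wired:

import flask

app = flask.Flask(__name__)

@app.route('/task/<path:path>')
def static_file1(path):
    # conditional=True enables HTTP range requests (partial content)
    return flask.send_file("./task/" + path, conditional=True)

@app.after_request
def add_private_network_headers(response):
    # CORS header for the remote site (origin taken from the error message above)
    response.headers['Access-Control-Allow-Origin'] = 'https://somepublic.website'
    # Opt-in header from Chrome's Private Network Access draft; it must appear
    # on the preflight response, and after_request also runs for OPTIONS
    response.headers['Access-Control-Allow-Private-Network'] = 'true'
    return response

if __name__ == '__main__':
    # 'adhoc' generates a throwaway self-signed certificate (requires pyOpenSSL)
    app.run(port=10981, ssl_context='adhoc')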
If you check the specification you will see that it is a "Draft Community Group Report" and
This specification was published by the Web Platform Incubator Community Group. It is not a W3C Standard nor is it on the W3C Standards Track.
The contributor list is made up entirely of people working for Google.
I can't find any mention of it in Firefox's bug tracker.
It looks like this is a highly experimental specification, which Firefox simply doesn't implement.
There doesn't appear to be any way to persuade Firefox to allow access from a secure, public origin to an insecure private origin.

Chrome cookies not working after Tomcat web server reboot

I noticed recently that when I reboot my Tomcat web server, Chrome can no longer store cookies. That is, Tomcat uses cookies for HTTP sessions, and the browser can no longer get its HTTP session; the cookie we use to store the logged-in user also fails, and the user does not remain logged in.
This seems to be a new issue with Chrome, perhaps from a recent update; I do not remember seeing it before. If I close the Chrome browser and then reopen it, it is fine again (until the server is rebooted again).
The issue does not happen on Firefox, so it seems like a bug in Chrome.
Has anyone else noticed this issue, or know of a solution?
I found some posts about Chrome/Tomcat cookie issues and the suggestion to set
sessionCookiePathUsesTrailingSlash=false in the context.xml
but this does not fix the issue.
It seems it might be related to the website supporting both https and http, and switching between the two (although it did occur on a website that did not support https as well...)
Okay, I can now recreate the issue. The steps are:
1. Connect to the website via HTTPS
2. Log out / log in
3. Connect to the website via HTTP
4. The Tomcat JSESSIONID cookie can no longer be stored (oddly, user/password cookies are stored)
This only happens in Chrome, and only since the Chrome update that added the "insecure" flag on login pages that use HTTP.
Okay, I added this to my web.xml:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
This did not fix the issue; it made the issue always occur over HTTP, i.e. HTTP is now never able to store the JSESSIONID cookie.
I tried <secure>false</secure>, but I still get the old issue.
So, it is related to this setting at least. Anyone have any ideas?
I logged a bug on Chrome:
https://bugs.chromium.org/p/chromium/issues/detail?id=698741
I was able to reproduce your problem with Chrome: you just need to create the HttpSession from the HTTPS zone. Any subsequent HTTP request will not send the session cookie, and any attempt to Set-Cookie: JSESSIONID= through HTTP is ignored by Chrome.
The problem occurs when the user switches from HTTPS to HTTP. The HTTPS session cookie is maintained even if the server is restarted, and works properly. (I tested with Tomcat 6, Tomcat 9, and using an Apache proxy for SSL.)
This is the response header sent by Tomcat when the session is created over HTTPS:
Set-Cookie: JSESSIONID=CD93A1038E89DFD39F420CE0DD460C72;path=/cookietest;Secure;HttpOnly
and this is the one for HTTP (note that Secure is missing):
Set-Cookie: JSESSIONID=F909DBEEA37960ECDEA9829D336FD239;path=/cookietest;HttpOnly
Chrome ignores the second Set-Cookie. On the other hand, Firefox and Edge replace the Secure cookie with the non-secure one. To determine what the correct behaviour should be, I reviewed RFC 2109:
4.3.3 Cookie Management
If a user agent receives a Set-Cookie response header whose NAME is the same as a pre-existing cookie, and whose Domain and Path attribute values exactly (string) match those of a pre-existing cookie, the new cookie supersedes the old.
So it is clearly a Chrome bug, as you supposed in the question: the HTTP cookie should replace the one set by HTTPS.
Removing the cookie manually from Chrome, or invalidating the session on the server side, makes it work again (if, after these actions, the session is created using HTTP).
By default, the JSESSIONID cookie is created with Secure when it is requested over HTTPS. I guess this is the reason Chrome does not allow the cookie to be overwritten. Even if you set <secure>false</secure> in web.xml, Tomcat ignores it and the Set-Cookie header is sent with Secure:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
Changing the cookie name, setting sessionCookiePathUsesTrailingSlash, or removing HttpOnly had no effect.
I could not find a workaround for this issue other than invalidating the server session when the logged-in user switches from HTTPS to HTTP.
Finally, I opened a bug in Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=698839
UPDATED
The issue has finally been marked as Won't Fix because it is an intentional change. See https://www.chromestatus.com/feature/4506322921848832
Strict Secure Cookies
This adds restrictions on cookies marked with the 'Secure' attribute. Currently, Secure cookies cannot be accessed by insecure (e.g. HTTP) origins. However, insecure origins can still add Secure cookies, delete them, or indirectly evict them. This feature modifies the cookie jar so that insecure origins cannot in any way touch Secure cookies. This does leave a carve out for cookie eviction, which still may cause the deletion of Secure cookies, but only after all non-Secure cookies are evicted.
I remember seeing this a couple of times, and as far as I can remember this was the only recommendation on the matter, as you mentioned:
A possible solution to this might be adding sessionCookiePathUsesTrailingSlash=false in the context.xml and seeing how that goes.
Some info on the matter from here
A discussion here (same solution)
Hope I didn't confuse the issues and that this helps you. Let me know with a comment if I need to edit / if it worked / if I should delete, thanks!
There is a draft document, submitted by Google, to deprecate the modification of 'secure' cookies from non-secure origins. It specifies recommended amendments to the HTTP State Management Mechanism document (RFC 6265).
Abstract of the document:
This document updates RFC6265 by removing the ability for a non-secure origin to set cookies with a 'secure' flag, and to overwrite cookies whose 'secure' flag is set. This deprecation improves the isolation between HTTP and HTTPS origins, and reduces the risk of malicious interference.
Chrome already implemented this feature in v52, and Mozilla implemented the same feature a few days ago.
To solve this issue, I think you should connect to the website via HTTPS only.
The bad way, I think, is to set sessionCookieName="JSESSIONIDForHttp" in context.xml, so that the browser's cookie jar keeps the two apart:
In the secure HTTPS case, the default "JSESSIONID" is used.
In the non-secure HTTP case, "JSESSIONIDForHttp" is used.
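For illustration, a minimal sketch of that context.xml change (sessionCookieName is a standard Tomcat Context attribute; note that a single Context applies to both schemes, so in practice the HTTP and HTTPS sites would need separate contexts, and the rename shown here would go in the HTTP one):

<Context sessionCookieName="JSESSIONIDForHttp">
    <!-- rest of the context configuration unchanged -->
</Context>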

Why does Chrome sometimes ask for basic auth a second time and Firefox not?

I am running a React frontend and a Laravel backend on an Nginx server (a Homestead Vagrant box) behind basic auth. The Nginx configuration for that looks like:
server {
    ...
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        auth_basic "Restricted";
        auth_basic_user_file /home/vagrant/Code/project/.htpasswd;
    }
}
This is basically running all right, but Chrome (v52, Mac OS X) "sometimes" asks for the auth again on subsequent requests, for example to load an image which is defined as a CSS background on a button hover. This behaviour (at least in my research so far) is not consistent and I can't reproduce it reliably; it occurs from time to time, and I can't find a reason for the subsequent auth request.
In Firefox (v47.0, Mac OS X) I get one auth prompt and then it works as expected.
Do you have any idea how to debug this specific behaviour in Chrome, or how to make sure that the first auth prompt is the only one?
Note: The frontend sends some further XHR calls to the backend, which also have the "Authorization" header set to fulfil basic auth without showing the prompt.
I suspect the issue here is with how you're storing the authorization token locally and the amount of time for which it's valid. Browsers will handle local storage a little differently from one another, so if you're using local storage or session storage, it may simply be a difference in how the data is persisted.
I believe this SO post would probably help answer the question: How persistent is localStorage?
Basically, Chrome allows the data to have a set timeout period, while in Firefox "it is not possible to specify an expiration period for any of your data".
If you're using Chrome frequently and clear your cache for other reasons, you're likely also clearing your auth token. If you're only using Firefox for testing, you likely have a cached auth token that's not expiring.

Loading an external widget in widgets-config.xml

I am unable to load an iWidget externally on the Communities page.
This is my widget def:
<widgetDef defId="qmiWidget" primaryWidget="false" modes="view fullpage edit search"
url="http://questionmine.com/app1/widgets/index/publishProject_iWidget"/>
But it rewrites the http URL and tries to load the widget through the internal AJAX proxy:
"NetworkError: 403 Forbidden - https://connectionsww.demos.ibm.com/communities/ajaxProxy/http/questionmine.com/app1/widgets/index/publishProject_iWidget"
Any idea how I can do this?
Since your widget resides on another domain, you have to configure the "Ajax Proxy" to allow this.
Take a look at this here:
http://www-10.lotus.com/ldd/lcwiki.nsf/xpDocViewer.xsp?lookupName=IBM+Connections+4.5+Documentation#action=openDocument&res_title=Configuring_the_AJAX_proxy_ic45&content=pdcontent
For testing purposes (ONLY testing) it would be safe to allow "*", but for a production environment it is strongly advised to be more specific, in your case something like "questionmine.com/app1/*".
You can even configure specific proxy rules per application (Communities, Profiles, Homepage,...)
http://www-10.lotus.com/ldd/lcwiki.nsf/xpDocViewer.xsp?lookupName=IBM+Connections+4.5+Documentation#action=openDocument&res_title=Configuring_the_AJAX_proxy_for_a_specific_application_ic45&content=pdcontent
BTW: If you ever tried to enable feeds in a community, the same applies. Without further configuration, only same-domain feeds would be allowed.

Can a URL have multiple parts of subdomain to it?

I have a domain name abc.mydomain.com
This is an https URL (http redirects to the https version).
However, I now need to handle www.abc.mydomain.com and redirect it to abc.mydomain.com.
How can I do this? Is it a webserver-level redirect, or something to be done at DNS resolution?
I know my URL already has "abc" as its subdomain and I don't need a "www"; however, we noticed that "www.news.google.com" resolves to "news.google.com", hence I am wondering if I can achieve that too.
Thank you!
In short, yes.
DNS works on a hierarchy: the DNS server for .com delegates down to the nameserver for your domain, which can delegate further or just answer the requests. Getting the name to resolve needs to be your first step.
If you use Bind-style zone files, you can do something like this (where 123.45.67.89 is your webserver's IP address):
* IN A 123.45.67.89
Then you also need your webserver to map that name to the right virtual host and redirect as desired.
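For the webserver half, a minimal sketch in Nginx (assuming Nginx is the webserver; the certificate paths are placeholders, and the certificate must actually cover www.abc.mydomain.com, since a *.mydomain.com wildcard only matches one label):

server {
    listen 443 ssl;
    server_name www.abc.mydomain.com;
    ssl_certificate     /etc/ssl/certs/mydomain.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/mydomain.key; # placeholder path
    return 301 https://abc.mydomain.com$request_uri;
}

The 301 preserves the requested path and query string via $request_uri, so deep links under the www name land on the same page of the bare name.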