Content-Security-Policy Blocking Allowlisted Domains - google-chrome

I have been using a Content-Security-Policy-Report-Only header for several weeks and have been seeing violations for multiple domains that are allowlisted in that same CSP header. I placed all the domains I want to allowlist in default-src and have no other directives (other than style-src for a nonce). I see other traffic that seems to pass, and when I test the URL in my own browser it succeeds without any violations. I have been looking into possible reasons, which led me to posts such as Content-Security-Policy Blocking Whitelisted Domains, Why is script-src-elem not using values from script-src as a fallback?, and https://csplite.com/csp277/

These links say that a status code of 0 (or empty) indicates the request was blocked while the browser was trying to load it, which can happen with ad blockers. I was seeing those status code 0 violation reports as well and have since filtered out reports whose status code is 0 or empty. But even after that change I still see a few violations on allowlisted domains with status code 200. Could these also be caused by ad blockers?
I did notice that some violations caused by extensions list chrome-extension in source-file, so I am unsure whether these could also be due to extensions.

One thing I did notice: if a report had a blocked URI that was not in the default-src allowlist, the violated directive would be frame-src (which falls back to default-src, where the domain is absent, so a violation is expected). But for a report whose blocked URI was in the allowlist, the violated directive and effective directive would be img-src (which should also fall back to default-src, but perhaps it is not seeing the allowlisted domain there).

Example report:

{
  "linenumber": "",
  "request": "",
  "documenturi": "mysite.com",
  "originalpolicy": "default-src 'self' mysite.com *.redirectsite.com redirectsite.com; style-src 'nonce-d93e18cc'; report-uri /csp-reports",
  "violateddirective": "img-src",
  "statuscode": "200",
  "referrer": "",
  "scriptsample": "",
  "effectivedirective": "img-src",
  "columnnumber": "",
  "requestheaders": "",
  "blockeduri": "https://x.redirectsite.com/s........",
  "sourcefile": ""
}

Does anyone have any experience with this?

I ended up switching from Content-Security-Policy-Report-Only to the Content-Security-Policy header, even though I was still seeing these status code 200 violations. After switching, I still saw status code 0 violations, but the status code 200 ones disappeared. Perhaps it is a bug in how browsers support the Content-Security-Policy-Report-Only header, but it worked out for this use case. Hope this helps someone else.
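For reference, the switch amounts to changing the header name while keeping the same policy value. A minimal sketch, again assuming Flask and reusing the policy from the report above (in production the nonce must be regenerated per request; a fixed nonce defeats its purpose):

from flask import Flask

app = Flask(__name__)

POLICY = ("default-src 'self' mysite.com *.redirectsite.com redirectsite.com; "
          "style-src 'nonce-d93e18cc'; report-uri /csp-reports")

@app.after_request
def add_csp(response):
    # Report-only mode: observe and report violations without blocking.
    # response.headers["Content-Security-Policy-Report-Only"] = POLICY
    # Enforcing mode: block violations (reports are still sent).
    response.headers["Content-Security-Policy"] = POLICY
    return response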

Related

CSP Violations from *.demdex.net & *.tt.omtrdc.net while these domains are in whitelist

I'm encountering a problem with the CSP header set on my page. As per https://docs.adobe.com/content/help/en/id-service/using/reference/csp.html I've added the relevant domains with the wildcard (*) character, but I am still seeing violation reports arriving at my backend saying these domains violate the policy. Is this something familiar?
My CSP header looks like this:
Content-Security-Policy-Report-Only: connect-src 'self' *.demdex.net *.doubleclick.net *.forter.com *.tt.omtrdc.net dpm.demdex.net; report-uri <my_report_uri>
Tried to re-create the issue myself on a macOS system, but with no luck: I can see requests going to the mentioned domains with no violations.
You can see I also explicitly added dpm.demdex.net.
Also, the browser reports show that the effective policy is indeed the one shown above under the original-policy param, so it doesn't look like a collision.
This happens almost entirely on Windows machines, across all browsers. Any ideas? Has anyone experienced anything similar?

How to disable CSP protection in chrome?

What am I doing?
I wrote a script that runs every second and sends a POST request with some data to a server running locally. The page I am trying this on has CSP response headers attached to it, so the request is blocked by Chrome's CSP enforcement.
Here is the error I get
Refused to connect to 'https://domain.in/api/users' because it
violates the following Content Security Policy directive: "connect-src
'self' https://*.whatsapp.net https://www.facebook.com
https://*.giphy.com https://*.tenor.co blob:
https://crashlogs.whatsapp.net/wa_clb_data
https://crashlogs.whatsapp.net/wa_fls_upload_check
https://www.bingapis.com/api/v6/images/search
https://*.google-analytics.com wss://*.web.whatsapp.com
wss://web.whatsapp.com https://dyn.web.whatsapp.com"
What do I want?
Since I am just testing a few things, I can tolerate an insecure environment, so I would like to disable CSP in Chrome. Please tell me how to do that.
What have I tried/read so far?
I have searched for this online, including several questions on SO, but none of them gave a satisfying and completely working answer. Some suggested using this plugin, but I can't see it working in my case.
Please suggest all that you can.
Using the Disable CSP plugin actually works, but it sometimes fails. This may be because the server sends the CSP header as soon as the connection between client and server completes; I needed to toggle the plugin just before that moment for it to work.
That is how I got it working.
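If the testing is driven through Selenium anyway, another option (not mentioned in the original answers) is the Chrome DevTools Protocol command Page.setBypassCSP, which tells the renderer to ignore the page's CSP. A minimal Python sketch:

from selenium import webdriver

driver = webdriver.Chrome()
# Page.setBypassCSP must be sent before the target page loads
# for the bypass to take effect on that page.
driver.execute_cdp_cmd("Page.setBypassCSP", {"enabled": True})
driver.get("https://web.whatsapp.com/")  # the page whose CSP blocked the POST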

ERR_BLOCKED_BY_XSS_AUDITOR when downloading file using selenium

I'm trying to download a file with Selenium by simulating a click on the download button, but Chrome reports ERR_BLOCKED_BY_XSS_AUDITOR. If I use the --disable-xss-auditor argument to bypass it, the page reloads and nothing gets downloaded. What seems strange is that when I download the file with my mouse, in a Chrome session that is even controlled by Selenium, the file downloads fine.
Please help me understand what the XSS auditor does. Why can't I download the file with Selenium?
BTW, I'm using Python, if it matters.
Thanks
X-XSS-Protection
The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. Although these protections are largely unnecessary in modern browsers when sites implement a strong Content-Security-Policy that disables the use of inline JavaScript ('unsafe-inline'), they can still provide protections for users of older web browsers that don't yet support CSP.
Header type: Response header
Forbidden header name: no
Syntax
X-XSS-Protection: 0 - Disables XSS filtering.
X-XSS-Protection: 1 - Enables XSS filtering (usually the default in browsers). If a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts).
X-XSS-Protection: 1; mode=block - Enables XSS filtering. Rather than sanitizing the page, the browser will prevent rendering of the page if an attack is detected.
X-XSS-Protection: 1; report=<reporting-uri> - (Chromium only) Enables XSS filtering. If a cross-site scripting attack is detected, the browser will sanitize the page and report the violation. This uses the functionality of the CSP report-uri directive to send a report.
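If you control the server, one way to rule the auditor out (an assumption on my part, not from the original answers) is to disable it for the affected response. A minimal sketch, assuming a Flask server and a hypothetical file name:

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/download")
def download():
    response = send_file("report.pdf")  # stand-in for the real file
    # X-XSS-Protection: 0 turns the built-in auditor off for this
    # response, which rules out false positives on echoed parameters.
    response.headers["X-XSS-Protection"] = "0"
    return response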
Background
As per Intent to Ship: Changes to the XSS Auditor, the Chromium team made two changes:
Change the default behavior to X-XSS-Protection: 1; mode=block, which blocks the page load by navigating to a unique origin when XSS is detected, rather than filtering out specific scripts.
Deprecate the filter mode, with the intent to remove it completely at some future date.
Implementation Status
XSS Auditor blocks by default: Chrome's XSS Auditor should block pages by default, rather than filtering out suspected reflected XSS. Moreover, we should remove the filtering option, as breaking specific pieces of page's script has been an XSS vector itself in the past.
As per XSS Auditor: Block by default, remove filtering, this issue was discussed and a fix was attempted. More discussion happened in False positives with ERR_BLOCKED_BY_XSS_AUDITOR, and finally, in ERR_BLOCKED_BY_XSS_AUDITOR on bona fide site when posting to a forum, the Chromium team decided Status: WontFix.
Solution
You need to induce WebDriverWait for the desired element to be clickable. Here are some examples of the WebDriverWait implementation:
Java:
new WebDriverWait(driver, 20).until(ExpectedConditions.elementToBeClickable(By.linkText("text_within_the_link"))).click();
Python:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "text_within_the_link"))).click()
C#:
new WebDriverWait(driver, TimeSpan.FromSeconds(10)).Until(ExpectedConditions.ElementToBeClickable(By.LinkText("text_within_the_link"))).Click();
Reference
Event 1046 - Cross-Site Scripting Filter
The misunderstood X-XSS-Protection
I slowed down the clicks (two clicks are needed to download; I added a sleep between them) and it works! I have no idea what happened...
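For reference, the slowed-down sequence might look something like this (a sketch with hypothetical locators and URL):

import time

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/downloads")  # hypothetical page

driver.find_element(By.ID, "prepare-download").click()  # first click
time.sleep(2)  # let the page settle before the second click
driver.find_element(By.ID, "confirm-download").click()  # triggers the file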
The XSS Auditor is a built-in feature of Chrome and Safari designed to mitigate cross-site scripting (XSS) attacks. It aims to identify whether query parameters contain malicious JavaScript and to block the response if it believes the payloads were injected into the server response.
XSS is a vulnerability in which data gets (mis)interpreted as code and executed in a victim's browser. The idea is to use a browser-automation tool like Selenium WebDriver and inject XSS payloads alongside functional and user-interaction tests.
Python doesn't have anything to do with this; I think it might be the Chrome version or something similar.
I have shared a link below that should help you understand better.
Chrome: ERR_BLOCKED_BY_XSS_AUDITOR details

Chrome cookies not working after tomcat web server reboot

I noticed recently that when I reboot my Tomcat web server, the Chrome browser can no longer store cookies. That is, Tomcat uses cookies for HTTP sessions, so the browser can no longer get its HTTP session; the cookie we use to store the logged-in user also fails, and the user does not remain logged in.
This seems to be a new issue with Chrome, perhaps from a recent update; I do not remember seeing it before. If I close the Chrome browser and reopen it, it is fine again (until the server is rebooted again).
The issue does not happen in Firefox, so it seems like a bug in Chrome.
Has anyone else noticed this issue, or does anyone know of a solution?
I found some posts about Chrome/Tomcat cookie issues and the suggestion to set
sessionCookiePathUsesTrailingSlash=false in context.xml,
but this does not fix the issue.
It might be related to the website supporting both HTTPS and HTTP and switching between the two (although it also occurred on a website that did not support HTTPS).
Okay, I can now recreate the issue. The steps are:
1. Connect to the website via HTTPS.
2. Log out / log in.
3. Connect to the website via HTTP.
4. The Tomcat JSESSIONID cookie can no longer be stored (oddly, the user/password cookies are stored).
This only happens in Chrome, and only since the Chrome update that added the "insecure" flag on login pages that use HTTP.
Okay I added this to my web.xml
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
This did not fix the issue, but it made the issue occur consistently over HTTP, i.e., HTTP could no longer store the JSESSIONID cookie at all.
I tried <secure>false</secure> but still got the old issue.
So it is related to this setting at least. Anyone have any ideas?
Logged bug on Chrome,
https://bugs.chromium.org/p/chromium/issues/detail?id=698741
I was able to reproduce your problem with Chrome: you just need to create the HttpSession from the HTTPS zone. Any subsequent HTTP request will not send the session cookie, and any attempt to Set-Cookie: JSESSIONID=... over HTTP is ignored by Chrome.
The problem arises when the user switches from HTTPS to HTTP. The HTTPS session cookie is maintained even if the server is restarted and keeps working properly. (I tested with Tomcat 6, Tomcat 9, and using an Apache proxy for SSL.)
This is the response header sent by Tomcat when the session is created over HTTPS:
Set-Cookie: JSESSIONID=CD93A1038E89DFD39F420CE0DD460C72;path=/cookietest;Secure;HttpOnly
and this is the one for HTTP (note that Secure is missing):
Set-Cookie: JSESSIONID=F909DBEEA37960ECDEA9829D336FD239;path=/cookietest;HttpOnly
Chrome ignores the second Set-Cookie. Firefox and Edge, on the other hand, replace the Secure cookie with the non-secure one. To determine what the correct behaviour should be, I reviewed RFC 2109:
4.3.3 Cookie Management
If a user agent receives a Set-Cookie response header whose NAME is the same as a pre-existing cookie, and whose Domain and Path attribute values exactly (string) match those of a pre-existing cookie, the new cookie supersedes the old.
So it is clearly a Chrome bug, as you supposed in the question: the HTTP cookie should replace the one set by HTTPS.
Removing the cookie manually from Chrome, or invalidating the session on the server side, makes it work again (provided that, after these actions, the session is created over HTTP).
By default, the JSESSIONID cookie is created with Secure when it is requested over HTTPS. I guess this is why Chrome does not allow the cookie to be overwritten. But if you try to set <secure>false</secure> in web.xml, Tomcat ignores it and the Set-Cookie header is still sent with Secure:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
Changing the cookie name, setting sessionCookiePathUsesTrailingSlash, or removing HttpOnly had no effect.
I could not find a workaround for this issue other than invalidating the server session when a logged-in user switches from HTTPS to HTTP.
Finally I opened a bug in chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=698839
UPDATED
The issue was finally marked as Won't Fix because it is an intentional change. See https://www.chromestatus.com/feature/4506322921848832
Strict Secure Cookies
This adds restrictions on cookies marked with the 'Secure' attribute. Currently, Secure cookies cannot be accessed by insecure (e.g. HTTP) origins. However, insecure origins can still add Secure cookies, delete them, or indirectly evict them. This feature modifies the cookie jar so that insecure origins cannot in any way touch Secure cookies. This does leave a carve out for cookie eviction, which still may cause the deletion of Secure cookies, but only after all non-Secure cookies are evicted.
I remember seeing this a couple of times, and as far as I can remember this was the only recommendation on the matter, as you mentioned:
A possible solution might be adding sessionCookiePathUsesTrailingSlash=false in context.xml and seeing how that goes.
Some info on the matter from here.
A discussion here (same solution).
Hope I didn't confuse the issues and this helps you; let me know with a comment if I need to edit, if it worked, or if I should delete. Thanks!
There is a draft document (submitted by Google) that deprecates the modification of 'secure' cookies from non-secure origins. It specifies recommendations to amend the HTTP State Management Mechanism document.
Abstract of the document:
This document updates RFC6265 by removing the ability for a non-secure origin to set cookies with a 'secure' flag, and to overwrite cookies whose 'secure' flag is set. This deprecation improves the isolation between HTTP and HTTPS origins, and reduces the risk of malicious interference.
Chrome already implemented this feature in version 52, and Mozilla implemented the same feature a few days ago.
To solve this issue, I think you should connect to the website via HTTPS only.
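In Tomcat this can be enforced declaratively; the following web.xml snippet is a sketch of that idea (not from the original answer). CONFIDENTIAL makes Tomcat redirect plain HTTP requests to the configured HTTPS port:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>everything</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <!-- Redirect any plain-HTTP request to HTTPS -->
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>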
A bad way, I think, is to set sessionCookieName="JSESSIONIDForHttp" in context.xml, so that the browser's cookie jar sees:
the default "JSESSIONID" under the secure HTTPS condition;
"JSESSIONIDForHttp" under the non-secure HTTP condition.
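As a sketch, that rename would look like this in context.xml (the cookie name is the poster's example; note the attribute applies to the whole context, so splitting the two names would in practice require two differently configured contexts):

<Context sessionCookieName="JSESSIONIDForHttp">
    <!-- rest of the context configuration unchanged -->
</Context>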

HTML5 Application Cache Manifest - index fallback without catchall

I'm delivering a manifest for my app that looks something like this
CACHE MANIFEST
#1359542586541
NETWORK:
*
FALLBACK:
/ /index.offline
When offline, it works correctly by returning index.offline for the index path, but it has the side effect of returning index.offline for every other resource as well; i.e., the / pattern acts as a catch-all.
Is there any way of matching the index page without everything else, so that only the homepage uses the fallback?
One irritation this causes is that it seems to return index.offline whenever a request returns a 500 status.
No, the first URL in a FALLBACK entry is a prefix match. The only way is to always use the explicit index page rather than rely on default documents:
FALLBACK:
/index.html /index.offline
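Applied to the manifest from the question, the whole file would then read:

CACHE MANIFEST
#1359542586541
NETWORK:
*
FALLBACK:
/index.html /index.offline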
There was a discussion of the behaviour for 500 errors on the HTML5 Help mailing list last February, including several responses by the spec editor; this message specifically talks about FALLBACK sections.