What does "origin-based security model" mean?

I am studying the WebSocket RFC 6455, where the security model of WebSocket is stated to be an "origin-based security model". It is also mentioned that this security model is used by web browsers. So what is this origin-based security model about?

CORS does not apply to WebSocket. JavaScript on a page can connect to any WebSocket server. It's just that browser WebSocket clients send an Origin header, which your server may or may not use to deny the client. However, non-browser clients can fake that header, so it's of limited use.
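For illustration, here is a minimal sketch of such a check in a Node server. It assumes the third-party ws package; the port and allowed origin are hypothetical:

import { WebSocketServer } from "ws"; // assumption: the third-party "ws" package

// Origins we consider trusted (hypothetical).
const ALLOWED_ORIGINS = new Set(["https://myapp.example.com"]);

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, req) => {
  const origin = req.headers.origin;
  // Browsers set Origin automatically; non-browser clients can forge it,
  // so treat this as a policy hint, not authentication.
  if (!origin || !ALLOWED_ORIGINS.has(origin)) {
    socket.close(1008, "Origin not allowed"); // 1008 = policy violation
    return;
  }
  socket.send("welcome");
});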

Essentially, data/script is classified as trusted or not based on where it is loaded from. If you know about the same-origin policy or cross-origin resource sharing (CORS), then you know that browsers put some restrictions on JavaScript that is loaded from different domains.
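For example, a page's script attempting a cross-origin request only gets to read the response if the other site opts in (both sites here are hypothetical):

// Script running on a page loaded from https://site-a.example (hypothetical):
fetch("https://site-b.example/api/data")
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((err) => {
    // Unless site-b.example responds with a matching
    // Access-Control-Allow-Origin header, the browser withholds the
    // response from this script and the promise rejects.
    console.error("Blocked by the same-origin policy / CORS:", err);
  });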

What happens:
Client connects to Server, setting up a TCP connection with HTTP layered on top.
In the case of HTTPS, there is also agreement on the cryptographic protocol to use, a key exchange, and possibly a certificate exchange. If a certificate exchange happens:
The Client may ascertain that the Server is what it claims to be by verifying the Server's certificate and checking that the Server holds the matching private key (generally done to make sure there is no man-in-the-middle attack or DNS spoofing going on, etc.)
The Server may ascertain that the Client is what it claims to be by verifying the Client's certificate the same way (only done in cases where the use case demands that the Client's identity is important)
Connection established! From here on, anything that goes over the TCP connection is considered healthy. "Going over the connection" means "same origin": it comes from the same client (or from the same server).
It might well be that there is an evil hack on the client (or even the server) that borks the existing connection at the TCP or HTTP level and injects its own packets, data, requests or XML blocks. Too bad! There is no way this can be precluded in the described approach. One would need additional checks in the protocol, e.g. a separate signature on each individual request, signed by mutually trusted hardware modules installed by ${company representative}, or something similarly complex.
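As a rough sketch of the optional client-certificate step above, using Node's tls module (all file names are placeholders):

import * as fs from "fs";
import * as tls from "tls";

// Minimal sketch of a server that authenticates clients by certificate.
const server = tls.createServer(
  {
    key: fs.readFileSync("server-key.pem"),
    cert: fs.readFileSync("server-cert.pem"),
    ca: [fs.readFileSync("client-ca.pem")], // CA trusted to sign client certs
    requestCert: true, // ask the client for a certificate
    rejectUnauthorized: true, // refuse clients whose certificate doesn't verify
  },
  (socket) => {
    // From here on, everything arriving on this socket counts as
    // "coming from the same (authenticated) client".
    socket.write("hello, authenticated client\n");
  }
);

server.listen(8443);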

Related

Secure communication between FIWARE orion and context-provider/IoT agent

I have to think about an architecture using the FIWARE Orion context broker and several IoT agents/context providers. The documentation has a section describing how to secure the communication from an IoT agent/context provider to Orion. But how do you secure the other side?
What I understand so far is that a context provider has to expose a REST endpoint (/op/query) on which it accepts incoming traffic. But how can it make sure that these requests are valid?
In the case of a subscription you can use httpCustom instead of http in the provider section when you create the subscription. With this it is possible to use a static token which Orion will send when making requests to the given URL. This isn't possible for registrations. Any suggestions how a context provider/IoT agent can decide whether an incoming request is a valid one?
With NGSIv2 Subscription/Notification and Register/Forwarding you will receive an X-Auth-Token header carrying the token used in the initial update operation. You should be able to check it against the IDM (Keystone in our stack).
As a workaround you may use the value itself to send some kind of API key along with the real value.
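For illustration, a minimal sketch of a context provider that rejects /op/query requests lacking a pre-shared token; the header name, port and token are placeholders you would agree on out of band:

import * as http from "http";

// Placeholder shared secret, configured on both sides out of band.
const SHARED_TOKEN = process.env.CP_TOKEN ?? "change-me";

http
  .createServer((req, res) => {
    if (req.url === "/op/query" && req.method === "POST") {
      if (req.headers["x-shared-token"] !== SHARED_TOKEN) {
        res.writeHead(401);
        return res.end("invalid token");
      }
      // ...handle the NGSIv2 query here...
      res.writeHead(200, { "Content-Type": "application/json" });
      return res.end("[]");
    }
    res.writeHead(404);
    res.end();
  })
  .listen(8081);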
Network security may apply as well: it is common to use firewalls and restrict IPs/ports, or to establish APN/VPN links in distributed architectures (at least with unsecured devices or external networks).
Lastly, if synchronous communication is not a must for your use case (registrations are sync, sub/notif are async), it is not a big deal to use the Subs/Notif mechanism to communicate with a Context Adapter. We do sometimes; registrations are tricky and troublesome.
Best.

Chrome and Safari not honoring HPKP

I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting up a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also tested it using an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate an OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. I have includeSubDomains in the HPKP header, so I think this is not the issue.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known-good pinset. It's mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion of it in a security considerations section.
More to the point, the proxy you set up engaged the override. It does not matter if it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make reporting of the broken pinset a MUST NOT or SHOULD NOT. It means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view but still get access to the channel to verify its parameters. Also see OWASP's Certificate and Public Key Pinning.
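As a sketch of what "perform the pinning yourself" can look like outside the browser, Node's tls module lets you hook the server identity check; the pin values and hostname below are placeholders:

import * as crypto from "crypto";
import * as tls from "tls";

// Base64 SHA-256 hashes of the SPKI we are willing to accept (placeholders).
const PINNED_KEYS = new Set(["bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="]);

const socket = tls.connect(
  443,
  "authserver.example.com",
  {
    checkServerIdentity: (host, cert) => {
      // Keep the default hostname verification.
      const err = tls.checkServerIdentity(host, cert);
      if (err) return err;
      // cert.pubkey is the DER-encoded SubjectPublicKeyInfo.
      const pin = crypto.createHash("sha256").update(cert.pubkey).digest("base64");
      // Enforce our own pin; a locally installed proxy CA cannot pass this.
      return PINNED_KEYS.has(pin)
        ? undefined
        : new Error("Server key does not match any pinned key");
    },
  },
  () => {
    console.log("connected with a pinned key");
    socket.end();
  }
);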
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by proxies such as the one you say you are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection of HPKP, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.

How to use WebRTC without an answer?

In the absence of a signalling server for coordinating the initial exchange, does WebRTC provide any way to allow the responder to send information freely to the caller, if the responder has only received an offer and has no other methods of communication with the caller?
(There's no signalling server because the web app must be usable offline. Any method to establish a connection with only one exchange of information would also be useful.)
Sorry, it's a long and weird question.
I guess by offline you mean that you have two parties that will connect through a network not connected to the internet.
Signalling is just a way to transmit information between the two parties. For the sake of example, it can even be manual copy and paste. One of the parties can even play the role of a server if the other has a way of connecting to it (doable within the same network).
Without some kind of signaling mechanism, a WebRTC connection is not possible. And signaling is not part of the WebRTC specification, nor of any implementation.
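To make the copy-and-paste signalling idea concrete, here is a rough browser-side sketch; every name in it is illustrative:

// Offerer side: create an offer, wait for ICE gathering, then hand the
// resulting SDP to the other party by any means (chat, file, paper...).
const pc = new RTCPeerConnection();
pc.createDataChannel("chat"); // a channel so the offer has something to negotiate

async function makeOfferText(): Promise<string> {
  await pc.setLocalDescription(await pc.createOffer());
  await new Promise<void>((resolve) => {
    if (pc.iceGatheringState === "complete") return resolve();
    pc.addEventListener("icegatheringstatechange", () => {
      if (pc.iceGatheringState === "complete") resolve();
    });
  });
  return JSON.stringify(pc.localDescription); // copy this to the answerer
}

// The answerer pastes the offer, produces an answer the same way, and the
// offerer completes the handshake with it:
async function acceptAnswerText(answerJson: string): Promise<void> {
  await pc.setRemoteDescription(JSON.parse(answerJson));
}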
WebRTC needs a signalling system to establish a peer-to-peer connection. Now, the thing to notice is why it needs signalling.
In the process of establishing a peer connection, the two parties exchange SDP, which contains information such as the IP and port at each end where the media/data packets will be exchanged. Similarly, it contains the codecs to be used for encoding/decoding, plus many other useful things. Without the exchange of these packets between the two parties, no communication is possible.
That is why, at least in the case of WebRTC, a peer connection cannot be established without communication from both sides.

With CORS, why do servers declare which clients may trust it, instead of clients declaring what servers they trust? [duplicate]

There is something about Cross Origin Resource Sharing (CORS) that I have never truly understood, namely that with a cross-origin HTTP request, it is not the client that gets to decide which server(s) it wants to trust; instead, the server declares (in the Access-Control-Allow-Origin response header) that one or more particular clients (origins) trust it. A CORS-enabled browser will only deliver the server's response to the application if the server says that the client trusts the server. This seems like a reverse way of establishing a trust relationship between two HTTP parties.
What would make more sense to me is a mechanism similar to the following: The client declares a list of origins that it trusts; for example, via some fictional <meta allow-cross-origin="https://another-site:1234"/> element in the <head>. (Of course a browser would have to ensure that these elements are read-only and cannot be removed, modified, or augmented via scripts.)
What am I misunderstanding about CORS? Why would a client-side declaration of trusted origins not work? Why is it that the servers get to confirm which clients (origins) may trust their responses? Who is actually protected from whom by CORS? Does it protect the server, or the client?
(These are a lot of questions. I hope it's clear that I am not expecting an answer to each of these, but rather just an answer that points out my fundamental misunderstanding.)
The client has nothing to do with it. With a CORS header, the server is telling the client which other origins it trusts; pages from those origins may then use its resources, and the client won't mind.
For example, if you have two domains, you tell the client "let my resources be used by my second website"; you don't say "I trust you" to the client.
So you're protecting the server, not the client. You don't want your AJAX API endpoints to be usable by scripts hosted anywhere in the world.
A client has nothing to gain or lose from this. It's only a protection for servers, because with AJAX all the URLs are clearly visible to anyone, and were it not for this protection anybody could go ahead and run their own front end on top of your API. Only servers have something to lose from this, so they get to decide who can use their resources.
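To make it concrete, here is a minimal sketch of a server opting in; the trusted origin is hypothetical:

import * as http from "http";

// Origins whose pages may read this API's responses (hypothetical).
const TRUSTED_ORIGINS = new Set(["https://myapp.example.com"]);

http
  .createServer((req, res) => {
    const origin = req.headers.origin;
    if (origin && TRUSTED_ORIGINS.has(origin)) {
      // Without this header, a CORS-enforcing browser fetches the response
      // but refuses to hand it to the requesting page's scripts.
      res.setHeader("Access-Control-Allow-Origin", origin);
    }
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(8080);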

When should one use a 'www' subdomain?

While browsing the internet over the last few years, I have seen more and more pages getting rid of the 'www' subdomain.
Are there any good reasons to use or not to use the 'www' subdomain?
There are a ton of good reasons to include it, the best of which is here:
Yahoo Performance Best Practices
Due to the dot rule with cookies, if you don't have the 'www.' then you can't set two-dot cookies or cross-subdomain cookies a la *.example.com. There are two pertinent impacts.
First, it means that any user you're giving cookies to will send those cookies back with every request that matches the domain. So even if you have a subdomain, images.example.com, the example.com cookie will always be sent with requests to that domain. This creates overhead that wouldn't exist if you had made www.example.com the authoritative name. Of course you can use a CDN, but that depends on your resources.
Also, you then don't have the ability to set a cross-subdomain cookie. This seems evident, but it means allowing authenticated users to move between your subdomains is more of a technical challenge.
So ask yourself some questions. Do I set cookies? Do I care about potentially needless bandwidth expenditure? Will authenticated users be crossing subdomains? If you're really concerned with inconveniencing the user, you can always configure your server to take care of the www/no www thing automatically.
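As a sketch of the two cookie scopes discussed above (hostnames and values are placeholders):

import * as http from "http";

// Sketch: with www.example.com as the canonical host, the session cookie
// can be kept off of images.example.com entirely.
http
  .createServer((req, res) => {
    res.setHeader("Set-Cookie", [
      // No Domain attribute: host-only, sent back to www.example.com alone.
      "session=abc123; Path=/; Secure; HttpOnly",
      // Domain attribute: shared, sent to example.com and every subdomain.
      "theme=dark; Domain=example.com; Path=/",
    ]);
    res.end("ok");
  })
  .listen(8080);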
See dropwww and yes-www (saved).
Just after asking this question I came across the no-www page, which says:
...Succinctly, use of the www subdomain is redundant and time consuming to communicate. The internet, media, and society are all better off without it.
Take it from a domainer: use both www.domainname.com and the plain domainname.com, otherwise you are just throwing your traffic away to the browser's search engine ("DNS error").
Actually, it is amazing how many domains out there, especially amongst the top 100, correctly resolve for www.domainname.com but not domainname.com.
There are MANY reasons to use the www sub-domain!
When writing a URL, it's easier to handwrite and type "www.stackoverflow.com" than "http://stackoverflow.com". Most text editors, email clients, word processors and WYSIWYG controls will automatically recognise both of the above and create hyperlinks. Typing just "stackoverflow.com" will not result in a hyperlink; after all, it's just a domain name. Who says there's a web service there? Who says the reference to that domain is a reference to its web service?
What would you rather write/type/say: "www." (4 chars) or "http://" (7 chars)?
"www." is an established shorthand way of unambiguously communicating the fact that the subject is a web address, not a URL for another network service.
When verbally communicating a web address, it should be clear from the context that it's a web address, so saying "www" is redundant. Servers should be configured to return HTTP 301 (Moved Permanently) responses forwarding all requests for stackoverflow.com (the root of the domain) to the www subdomain.
In my experience, people who think www should be omitted tend to be people who don't understand the difference between the web and the internet and use the terms interchangeably, as if they were synonymous. The web is just one of many network services.
If you want to get rid of www, why not change your HTTP server to use a different port as well? TCP port 80 is sooo yesterday. Let's change it to port 1234. YAY, now people have to say and type "http://stackoverflow.com:1234" (aitch tee tee pee colon slash slash stack overflow dot com colon one two three four), but at least we don't have to say "www", eh?
There are several reasons; here are some:
1) The person wanted it this way on purpose
People use DNS for many things, not only the web. They may need the main DNS name for some other service that is more important to them.
2) Misconfigured dns servers
If someone does a lookup of www against your DNS server, your DNS server needs to be able to resolve it.
3) Misconfigured web servers
A web server can host many different web sites. It distinguishes which site you want via the Host header. You need to specify which host names you want to be used for your website.
4) Website optimization
It is better not to handle both, but to forward one with a Moved Permanently (301) HTTP status code (see the sketch after this list). That way the two addresses won't compete for inbound link ranks.
5) Cookies
To avoid problems with cookies not being sent back by the browser. This can also be solved with the moved permanently http status code.
6) Client side browser caching
Web browsers may not reuse a cached image if you request it once via www and once without. This can also be solved with the Moved Permanently HTTP status code.
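Here is a minimal sketch of the Moved Permanently forwarding mentioned in points 4-6; hostnames are placeholders and TLS handling is omitted:

import * as http from "http";

// Sketch: permanently forward bare-domain requests to the www host.
http
  .createServer((req, res) => {
    const host = req.headers.host ?? "example.com";
    if (!host.startsWith("www.")) {
      res.writeHead(301, { Location: `https://www.${host}${req.url ?? "/"}` });
      return res.end();
    }
    res.end("hello from the canonical www host");
  })
  .listen(80);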
There is no huge advantage to including it or not including it, and no single objectively best strategy. "no-www.org" is a silly load of old dogma trying to present itself as definitive fact.
If the “big organisation that has many different services and doesn't want to have to dedicate the bare domain name to being a web server” scenario doesn't apply to you (and in reality it rarely does), which address you choose is a largely cultural matter. Are people where you are used to seeing a bare “example.org” domain written on advertising materials, would they immediately recognise it as a web address without the extra ‘www’ or ‘http://’? In Japan, for example, you would get funny looks for choosing the non-www version.
Whichever you choose, though, be consistent. Make both www and non-www versions accessible, but make one of them definitive, always link to that version, and make the other redirect to it (permanently, status code 301). Having both hostnames respond directly is bad for SEO, and serving any old hostname that resolves to your server leaves you open to DNS rebinding attacks.
Apart from the load optimization regarding cookies, there is also a DNS-related reason for using the www subdomain: you can't point a CNAME at the naked domain. On yes-www.org (saved) it says:
When using a provider such as Heroku or Akamai to host your web site, the provider wants to be able to update DNS records in case it needs to redirect traffic from a failing server to a healthy server. This is set up using DNS CNAME records, and the naked domain cannot have a CNAME record. This is only an issue if your site gets large enough to require highly redundant hosting with such a service.
As jdangel points out, the www is good practice in some cookie situations, but I believe there is another reason to use www.
Isn't it our responsibility to care for and protect our users? As most people expect www, you will give them a less-than-perfect experience by not programming for it.
To me it seems a little arrogant not to set up a DNS entry just because in theory it's not required. There is no overhead in carrying the DNS entry, and through redirects etc. visitors can be sent to the non-www address.
Seriously, don't lose valuable traffic by leaving your potential visitor with an unnecessary "site not found" error.
Additionally, in a Windows-only network you might be able to set up a Windows DNS server to avoid the following problem, but I don't think you can in a mixed environment of Mac and Windows. If a Mac does a DNS query against a Windows DNS server, mydomain.com will return all the available name servers, not the web server. So if you type mydomain.com into your browser, it will query a name server, not a web server; in this case you need a subdomain (e.g. www.mydomain.com) pointing to the specific web server.
Some sites require it because the service is configured on that particular setup to deliver web content via the www subdomain only.
This is correct, as www is the conventional subdomain for "World Wide Web" traffic.
Just as port 80 is the standard port. Obviously there are other standard services and ports as well (HTTP on TCP port 80 is nothing special!)
Imagine mycompany...
mx1.mycompany.com 25 smtp, etc
ftp.mycompany.com 21 ftp
www.mycompany.com 80 http
Sites that don't require it basically have forwarding in DNS or redirection of some kind.
e.g.
*.mycompany.com 80 http
The only reason to do it, as far as I can see, is if you prefer it and you want to.