Per the original proposal, regarding "Prefer Secure Origins For Powerful New Features"
“Particularly powerful” would mean things like: features that handle personally-identifiable information, features that handle high-value information like credentials or payment instruments, features that provide the origin with control over the UA's trustworthy/native UI, access to sensors on the user's device, or generally any feature that we would provide a user-settable permission or privilege to. Please discuss!
“Particularly powerful” would not mean things like: new rendering and layout features, CSS selectors, innocuous JavaScript APIs like showModalDialog, or the like. I expect that the majority of new work in HTML5 fits in this category. Please discuss!
Yet for some reason service workers have been thrown into the first category. Is there any canonical reason why this happened?
Jake Archibald from Google, in the official Service Workers draft spec sandbox (later cited by Matt Gaunt on HTML5 Rocks), states that:
Using service worker you can hijack connections, fabricate, and filter responses. Powerful stuff. While you would use these powers for good, a man-in-the-middle might not. To avoid this, you can only register for service workers on pages served over HTTPS, so we know the service worker the browser receives hasn't been tampered with during its journey through the network.
To me this applies to ServiceWorker:
features that handle personally-identifiable information, features that handle high-value information like credentials or payment instruments
Being basically a proxy between the page and the server, a ServiceWorker can easily intercept, read, and potentially store any information contained in every request and response travelling to and from the origin, including personally identifiable information and passwords.
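To make the "proxy" point concrete, here is a minimal sketch, illustrative only and not taken from the spec, of a service worker fetch handler sitting between the page and the network (written as TypeScript compiled against the webworker lib):

```typescript
// sw.ts - minimal sketch of a service worker observing traffic.
// Assumes "lib": ["webworker"] so ServiceWorkerGlobalScope types are available.
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    (async () => {
      // The worker sees the full request, including credentials or form
      // fields the page submits.
      const copy = event.request.clone();
      if (copy.method === 'POST') {
        const body = await copy.text();
        // A benign worker might log this for debugging; a malicious one could
        // exfiltrate it, which is why registration is restricted to HTTPS.
        console.log('Intercepted POST body:', body);
      }
      // The worker could just as easily rewrite or fabricate the response.
      return fetch(event.request);
    })()
  );
});
```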
Related
We want to use ejabberd in the context of a web application that has fairly unique business rules. We'd therefore need to have every chat message (not protocol message, but a message one user sends to another) go through our web application first, and then have the web application deliver the message to ejabberd on behalf of the user (if our business rules allow the message to be sent).
The web application is also the one providing the contact lists (called rosters in ejabberd, if I understand correctly). We need to be and remain the single source of truth to ease maintenance.
To us, ejabberd's added value would be to deliver chat messages in near real time to clients and to enable cool things such as presence indicators. Web clients will maintain a direct connection to ejabberd through WebSocket, but this connection will have to be read-only as far as chat messages are concerned, and read-write as far as presence messages are concerned.
The situation is similar with regard to audio and video calls. While this time the call per se will be managed directly by ejabberd, to take advantage of built-in STUN, TURN, etc., and will not need to go through our web app, we have custom business logic to manage who is able to call whom, when, how often, etc. (in other words, we have custom business logic to authorize the call or not, and we'd like to keep all the business logic centralized in the web app).
My question is: which hooks would we need to look into to achieve what we are after? I spent an hour or so in the documentation, but I couldn't find what I am after, so hopefully someone can provide pointers. In an ideal world, we'd like to expose API endpoints from our web app that ejabberd hooks can hit. However, the first question is: which relevant hooks does ejabberd offer, and where are they documented?
Any help would be greatly appreciated, thank you!
When a client sends a packet to ejabberd, it triggers the user_send_packet hook, providing the packet and the state of the client's session process. Several modules use that hook, for example mod_service_log.
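ejabberd hooks themselves are implemented in Erlang modules, so in the setup described above you would attach a small custom module to user_send_packet and have it forward each chat message to the web application for a verdict. As a rough sketch of the web-app side of that architecture only (the route, payload shape, and allow/deny response format are assumptions of mine, not anything ejabberd defines):

```typescript
// app.ts - hypothetical web-app endpoint a custom ejabberd module could call.
import express from 'express';

const app = express();
app.use(express.json());

// The custom module hooked on user_send_packet would POST each chat message
// here and drop the packet unless the response is { "allow": true }.
app.post('/xmpp/outgoing-message', (req, res) => {
  const { from, to, body } = req.body as { from: string; to: string; body: string };
  res.json({ allow: isMessageAllowed(from, to, body) });
});

// Placeholder for the application's real, centralized business rules.
function isMessageAllowed(from: string, to: string, body: string): boolean {
  return body.length > 0 && from !== to;
}

app.listen(8080);
```

The same pattern would work for call authorization: the custom module consults an endpoint like this before letting the call signalling through.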
Beginning with Chrome 80, third-party cookies will be blocked unless they carry the SameSite=None and Secure attributes, None being a new value for the SameSite attribute. https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
The above blog post states that Firefox and Edge plan on implementing these changes at an undetermined date, and there is a list of incompatible clients here: https://www.chromium.org/updates/same-site/incompatible-clients.
What would be the best practice for handling this situation for cross-browser compatibility?
An initial thought is to use local storage instead of a cookie but there is a concern that a similar change could happen with local storage in the future.
You hit on the good point that, as browsers move towards stronger methods of preserving user privacy, this changes how sites need to consider handling data. There's definitely a tension between the composable / embeddable nature of the web and the privacy / security concerns of that mixed content. I think this is currently coming to the foreground in the conflict between locking down fingerprinting vectors to prevent user tracking and the fact that those same signals are often used by sites to detect fraud. It's the age-old problem that if you have perfect privacy for "good" reasons, then all the people doing "bad" things (like cycling through a batch of stolen credit cards) also have perfect privacy.
Anyway, outside the ethical dilemmas of all of this I would suggest finding ways to encourage users to have an intentional, first-party relationship with your site / service when you need to track some kind of state related to them. It feels like a generally safe assumption to code as if all storage will be partitioned in the long run and that any form of tracking should be via informed consent. If that's not the direction things go, then I still think you will have created a better experience.
In the short term, there are some options at https://web.dev/samesite-cookie-recipes:
Use two sets of cookies, one with the new-style attributes and one without, to catch all browsers (see the sketch after this list).
Sniff the user agent to return the appropriate headers to the browser.
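For the two-cookie recipe in particular, a minimal sketch, assuming an Express backend and using example cookie names and values, could look like this:

```typescript
// Sketch of the "two sets of cookies" recipe from the samesite-cookie-recipes page.
import express from 'express';

const app = express();

app.get('/issue-session', (_req, res) => {
  // New-style cookie for browsers that understand SameSite=None.
  res.cookie('session', 'abc123', { sameSite: 'none', secure: true, httpOnly: true });
  // Legacy fallback for clients that reject or mishandle SameSite=None.
  res.cookie('session-legacy', 'abc123', { secure: true, httpOnly: true });
  res.send('ok');
});

// On the read side, prefer 'session' and fall back to 'session-legacy'.
app.listen(8080);
```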
You can also maintain a first-party cookie, e.g. SameSite=Lax or SameSite=Strict that you use to refresh the cross-site cookies when the user visits your site in a top-level context. For example, if you provide an embeddable widget that gives personalised content, in the event there are no cookies you can display a message in the widget that links the user to the original site to sign in. That way you're explicitly communicating the value to your user of allowing them to be identified across this site boundary.
For a longer-term view, you can look at proposals like HTTP State Tokens which outlines a single, client-controlled token with an explicit cross-site opt-in. There's also the isLoggedIn proposal which is concerned with providing a way of indicating to the browser that a specific token is used to track the user's session.
We have a web application that occasionally receives web requests, coming from Google virtual servers (Compute Engine), that we detect as attempts to inject SQL code.
I was asked to find a way to identify who is responsible for those machines, so that we can take the corresponding legal action on our part, or at least confirm that Google has shut down those servers.
What I need is a way to communicate with Google, by email or chat, but I haven't found any information about how to do that.
EDIT 1:
I have tried to communicate with Google to explain what information I am looking for, but the only contact available in my case is the billing department, which could not confirm that they would give me that information if I bought a technical assistance package. On the other hand, I understand that this package is for reviewing requirements of the applications that you own, whereas in my case I am looking for legal information.
What was recommended to me was to submit the corresponding report at
https://support.google.com/code/contact/cloud_platform_report?hl=en
but I have not received a response for weeks.
I am disappointed in Google, especially because of the importance of computer security.
I will keep searching for information.
You can find all the information concerning tech support, phone support, and chat support in your Google Cloud console. Also, this doc shows the different support options based on your support role or package.
I've been playing with HTML5 location lookups recently, and it's relatively straightforward to pull someone's location from a device like an iPhone.
I want to write an app that uses location data, but it's important that the location be factual. In other words, I need to prevent people from authoring a fake post to the backing website / web service with mocked-up GPS coordinates.
Is there any way to collect GPS coordinates from a mobile device using the HTML5 geolocation APIs and securely transmit them back to a web service in a way that someone wouldn't be able to author a post with the same data and "game the system", so to speak?
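For reference, the client-side flow being asked about is roughly the sketch below (the endpoint URL is a placeholder); the answers that follow explain why nothing in it can be trusted by the server:

```typescript
// Read coordinates with the Geolocation API and POST them to the backing service.
navigator.geolocation.getCurrentPosition(
  async (position: GeolocationPosition) => {
    const payload = {
      latitude: position.coords.latitude,
      longitude: position.coords.longitude,
      accuracy: position.coords.accuracy,
      timestamp: position.timestamp,
    };
    // Anyone can craft this same POST with fabricated numbers.
    await fetch('https://example.com/api/checkins', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
  },
  (error: GeolocationPositionError) => console.error('Geolocation failed:', error.message),
  { enableHighAccuracy: true }
);
```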
Not without some serious encryption of the payload on the client, and if there is money involved, someone will reverse engineer it and figure out how to create valid payloads themselves. Remember, if there is money or fame involved, then somebody will think the effort to do something like this is "worth it". If your web service is public and not using some kind of encryption, nothing on the client will ensure that someone with a network connection can't sniff your protocol and fake whatever data they want. And SSL won't cut it: anyone can proxy the SSL connection on their local network, decrypt the payload, and inspect it to their heart's content.
No. Completely agree with the answer from fuzzy lollipop. If you’re talking to a remote machine, the data can always be faked. Always always. What makes you certain you’re even talking to a mobile device at all? The User-Agent string? Pfft, it can be faked. Talking to a GPS? Pfft, could be coming from a predefined path. Talking to a web browser? Pfft, could be a bot, or some other malware.
And don’t think encryption (i.e. HTTPS) is going to help you. The client could edit any of your HTML, CSS, or JavaScript on-the-fly — take Firebug or Greasemonkey for example.
The reasons why you can’t trust the client are the same as the reasons why exploits such as SQL or HTML injection are so common. Ever heard the phrase “the customer is always right”? Well, the customer may be right, but the client is always untrustworthy.
The system is there to be gamed. As flaws are discovered, you patch them one by one. It’s more like leapfrog, rather than achieving the holy grail. Bruce Schneier’s quip “security is a process, not a product” comes to mind. Asking for a system that “can’t be gamed” is missing the point. What you need to be doing is creating a system where the server sanitises the data, and/or rejects bad data — fuzz testing is not a bad idea, either.
That’s about the best you can do without shipping custom untamperable mobiles to your customers with the OS in ROM, and the inside sealed with epoxy.
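As a rough illustration of the "sanitise and reject bad data" suggestion above, here is a sketch of server-side plausibility checks; the thresholds and the report shape are assumptions, and this only filters out clumsy fakes rather than preventing gaming altogether:

```typescript
interface Report {
  latitude: number;
  longitude: number;
  timestamp: number; // milliseconds since epoch
}

// Reject reports outside valid coordinate ranges or implying impossible travel speed.
function isPlausible(current: Report, previous?: Report): boolean {
  if (Math.abs(current.latitude) > 90 || Math.abs(current.longitude) > 180) return false;
  if (previous) {
    const seconds = (current.timestamp - previous.timestamp) / 1000;
    if (seconds <= 0) return false;
    // 300 m/s is roughly airliner speed; anything faster is suspicious.
    if (haversineMeters(previous, current) / seconds > 300) return false;
  }
  return true;
}

// Great-circle distance between two reports in meters.
function haversineMeters(a: Report, b: Report): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.latitude - a.latitude);
  const dLon = toRad(b.longitude - a.longitude);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}
```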
I want to implement a discrete remote authentication server that handles login for many sites. Somewhat similar to OpenID.
Basically, I have site-1 and site-2 and they're both reliant on the same user database, which is on a separate auth-site. So, auth-site handles user authentication for them, and during this process, makes information on the authenticating user available to the requesting system.
Each site can be on a completely separate domain name, on completely separate machines.
This is all via HTTP(S), there can be no direct database access.
There's one last quirk: once a user has logged in to site-1, when accessing any other site reliant on auth-site, that site must treat the user as already authenticated.
This whole business must be entirely fuss-free to the end-user. It should work like a simple everyday login form.
As a concrete example, say we're talking about stackoverflow.com and serverfault.com, and they both authenticate via authentic-overflow-server-stack.com. Again, once logged in to either site, I can go to the other and do my business without logging in again.
What I'd like to know is the general interaction mechanism between the sites behind this scenario.
In my particular setup, I'm using Rails, but I'm not looking for code[1], just general best practice and guidance, so feel free to answer in pseudo-code or any generally readable language. OTOH, bear in mind that I'll have decent MVC, REST, and meta-programming in my toolkit.
[1]: unless you happen to know an existing tiny neat free MIT/BSD-licensed app/plugin/generator that handles this.
It sounds like (especially with the emphasis on fuss-free) you want something like what the Wikimedia Foundation is doing. Basically, you log in to en.wikipedia.org, then that server communicates with other servers (e.g. en.wikinews.org) and gets authentication tokens. Finally, those tokens are embedded into images, e.g. http://en.wikinews.org/wiki/Special:AutoLogin?token=xxxxxxxxxxxxxxx , and when your browser visits that URL (img src) it gets an authentication cookie for Wikinews. Of course, the source code is available for your review at http://www.mediawiki.org/wiki/Extension:CentralAuth .
OpenID is also a good choice, but it does require that the user "consciously" visit two domains. An example of one entity with two domains doing this is Canonical. E.g., if you go to https://help.ubuntu.com/community/UserPreferences they will redirect you to Launchpad (https://login.launchpad.net/+openid) for authentication.
Note that Wikipedia is doing this over HTTP, but you can do it all over HTTPS to ensure the img src tokens aren't intercepted.
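As a rough sketch of the receiving side of that img-src token flow (the endpoint names, in-memory token store, and session format here are assumptions for illustration, not how CentralAuth is actually implemented):

```typescript
import crypto from 'crypto';
import express from 'express';

const app = express();

// One-time tokens issued when the user logs in on the first site; in practice
// they'd be shared via a server-to-server call or a common database.
const pendingTokens = new Map<string, string>(); // token -> userId

export function registerToken(userId: string): string {
  const token = crypto.randomBytes(32).toString('hex');
  pendingTokens.set(token, userId);
  return token;
}

// The first site embeds <img src="https://site-2.example/auto-login?token=...">.
// When the browser fetches it, this domain gets to set its own session cookie.
app.get('/auto-login', (req, res) => {
  const token = String(req.query.token ?? '');
  const userId = pendingTokens.get(token);
  if (!userId) {
    res.status(403).end();
    return;
  }
  pendingTokens.delete(token); // single use
  res.cookie('session', `${userId}.${crypto.randomBytes(16).toString('hex')}`, {
    httpOnly: true,
    secure: true,
    sameSite: 'none', // modern browsers require this for a cross-site image request
  });
  // A real implementation would return a 1x1 transparent image so the <img>
  // renders harmlessly; an empty 204 keeps the sketch short.
  res.status(204).end();
});

app.listen(8443);
```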
Looks like CAS is good enough for me, and has ruby implementations, along with dozens of other lesser languages, e.g. one that rhymes with femoral bone rage.
http://code.google.com/p/rubycas-server/
http://code.google.com/p/rubycas-client/
It sounds like you want to actually use the OpenID protocol itself. There's no reason you can't restrict the authentication provider to only your own server and take some shortcuts that make the authentication process transparent. Also, the OpenID protocol supports what you describe: logging in to one service implies being logged in to all of them.