Some websites we run have been audited for security issues, and one finding is that we don't protect authentication forms against brute force.
It has been decided that we will show a CAPTCHA after several failed authentication attempts.
The problem I'm digging into now is what criteria we could use to identify unique visitors.
Since our users are often grouped behind the same IP address, that criterion alone won't be enough.
Cookies can be disabled. I read advice here suggesting the user agent and/or certain HTTP request headers, but I suspect a skilled attacker would simply generate new ones on each brute-force attempt.
Given that conversion rate isn't a concern for our site, what is the best practice for identifying unique visitors?
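For reference, the usual way to sidestep the visitor-identification problem entirely is to count failed attempts per targeted account rather than per visitor. A minimal in-memory sketch (hypothetical names; a real deployment would keep the counters in a shared store, not a process-local dict):

```python
import time
from collections import defaultdict

FAIL_THRESHOLD = 3          # show a CAPTCHA after this many failures
WINDOW_SECONDS = 15 * 60    # forget failures older than 15 minutes

# username -> list of failure timestamps (stand-in for a real store)
_failures = defaultdict(list)

def record_failure(username, now=None):
    """Remember one failed login attempt against this account."""
    now = time.time() if now is None else now
    _failures[username].append(now)

def captcha_required(username, now=None):
    """True once the account has crossed the failure threshold recently."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) >= FAIL_THRESHOLD
```

Keying on the attacked account rather than the attacker means it doesn't matter how the attacker rotates IPs, cookies, or headers: the target account still accumulates failures and triggers the CAPTCHA.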
Thanks for your help!
I have a Laravel site and a MySQL table with, among other columns, a column holding a unique MD5 code.
I then build a GET request with the parameter id = Md5, which reloads into a web form the other data specific to that MD5 code.
Can this site be considered secure with regard to that information, even though it has no login and password?
Or is there another method to implement a secure page for data without a login and password?
Thanks a lot
@Marco, on the overall "Is my site secure?" question: there is no method or technology that ensures 100% security, so the question should probably be "Is this site secure enough, based on its usage scenario? Which risks are we willing to accept?"
MD5 hashes can easily be cracked by brute force, so a possible attack on your site could involve using an MD5 dictionary attack to exfiltrate all accessible data.
I'd think about other non-static authentication methods (ones without fixed user/password pairs), such as:
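To make that concrete, here is a sketch (assuming, as is common, that the MD5 codes are derived from predictable input such as sequential row ids) of why such identifiers are enumerable, plus a safer random-token alternative:

```python
import hashlib
import secrets

# If the "unique code" is just md5 of a predictable value (e.g. the row id),
# an attacker can regenerate every valid URL offline:
guessable = [hashlib.md5(str(i).encode()).hexdigest() for i in range(1, 1001)]

# A random 128-bit token has no structure to enumerate and is not
# derived from any data the attacker can guess:
def new_record_token():
    return secrets.token_urlsafe(16)  # ~128 bits of randomness
```

Even without cracking anything, the attacker simply walks the id space; a random token forces them to guess blindly in a 2^128 space instead.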
Challenge response
One-time passwords based on previous passwords
One-time passwords based on time
Of course, all of the above are subject to the resources available to your organization and, again, to your site's usage context.
I'm building a website, where users are going to store sensitive data. People who have access to the DB should not be able to view this data, but I can't use hashing functions, as the users will need to view the data they have stored. How should I go about this?
TL;DR: Encrypting columns of databases won't help much.
Best practice: figure out your threat model before you spend time and money securing your system. If you build complex security measures without a clear idea of your threat model, you'll trick yourself into a false sense of safety.
And, encrypting columns in a DBMS is a complex security measure.
What is your threat model? What attacks will you get? By whom? How will attacks damage you?
Your most likely outsider attack comes from cybercriminals breaking into your web servers to steal your users' information to commit identity theft (Equifax), blackmail (Ashley Madison), or espionage (US Government human resources database).
If you encrypt some columns in your DBMS and your web users need to be able to view and update those columns, your web servers will have to know the encryption and decryption keys. If a criminal pwns your web server, he will have your keys, and therefore access to the encrypted columns of your DBMS. And he'll have a big signpost saying LOOK! Here's the secret stuff!
There are plenty of other imaginable outsider attacks, of course. Somebody could break through your firewall and hit your database directly. Somebody could get into a cache and grab cached sensitive data. Somebody could guess your web app's administrator password. Or, steal a bulk upload file.
Your proposed design imagines an insider attack. People who already have DBMS access credentials must be prevented from seeing certain columns in certain tables. What will they do with that information? You didn't say. What's the threat?
Stopping short of encryption, you can do these things to keep your insiders from violating your users' confidentiality.
Get the sensitive data out of your system entirely. For example, if you're handling credit cards, work with stripe.com or braintree.com. They'll hold your secrets for you, and they have excellent cybersecurity teams.
Sort out whether you can trust your insiders. Investigate prospective employees, etc.
Establish clear security policies. For example, "We never look at the credit_card table unless we have a specific need to do so." If you're handling health care data in the US, you already have HIPAA guidelines. Get your insiders to understand and agree to your guidelines.
Sack insiders who violate these policies intentionally.
Build mechanisms to help enforce policies. Issue each insider his or her own username/password pair to access the DBMS. Use selective GRANT operations at the table and column level to allow and disallow viewing of data. For example,
GRANT SELECT (name, address) ON person TO 'username'@'%';
lets username see the name and address columns, but not the taxpayer_id column, in the person table. Read this: https://dev.mysql.com/doc/refman/5.7/en/grant.html#grant-column-privileges
Spend your time and money on good firewalls protecting your DBMS machines. Study up on OWASP and follow those practices. Spend time and money running penetration tests of your web app and fixing the problems. Spend them on vetting and training your insiders. These things slow down attackers more effectively than the imagined magic bullet of encrypted columns.
There's the old joke about the two guys and the bear.
Bear: Roar.
Adam: Uh oh, I don't know if we can run faster than this bear.
Bill: I just have to run faster than you.
That's a good way to handle security for your small web site. Make it hard enough to crack that the bad guys will attack somebody else.
If you're running a big web site with a large number of sensitive records (I'm looking at you, Equifax) this isn't good enough.
I have been developing and running a small website using apache2 for several years, and roughly once per day my error log is spammed with requests for nonexistent files related to phpMyAdmin. My site does not use PHP, though there is an active MySQL server (using non-conventional settings). All requests are made over a span of 2-5 seconds.
Am I safe in assuming these are all requests sniffing for vulnerabilities, or is there any instance in which a legitimate site/company/server might need this information, e.g. advertisers and such? As it is, I've got a script set up to automatically ban any IP that attempts to access one of these nonexistent files.
Also, if all of these requests are people searching for vulnerabilities, is there any way to have some fun with the perpetrators? e.g. a well-placed redirect to the NSA? Thanks.
There is nothing to worry about. Most likely those are automated bots that search for publicly released vulnerabilities (or their identifiers, such as a specific URL), default setups, default username/password combinations, etc. Those bots are looking for quick and easy exploitation, so normally they will only probe a couple of URLs and then move on. You will have to get used to this, though: as the site grows, these probes may occur more often (at which point you might want to start thinking about restricting access by IP range, etc.).
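For reference, the kind of banning script the questioner mentions can be sketched like this (the log format and probe paths here are assumptions; adapt them to your setup):

```python
import re

# Paths that only vulnerability scanners request on a site that runs no PHP.
PROBE_PATTERNS = re.compile(r"/(phpmyadmin|pma|mysql-admin)", re.IGNORECASE)

def ips_to_ban(access_log_lines):
    """Collect client IPs that requested scanner-bait paths.

    Assumes Apache combined log format: the client IP is the first
    whitespace-separated field and the request path is the seventh.
    """
    banned = set()
    for line in access_log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue
        ip, path = parts[0], parts[6]
        if PROBE_PATTERNS.search(path):
            banned.add(ip)
    return banned
```

In practice a tool like fail2ban does exactly this (pattern-match the log, then add a firewall rule), so rolling your own is only worthwhile if you want custom behavior.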
To improve security against brute-force login attempts, phpMyAdmin version 4.1.0-rc1 added an optional reCAPTCHA module.
I have a simple REST JSON API that lets other websites/apps access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange, and the server returns JSON information about the orange.
Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially) client-side code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another via either a shared secret key (very similar to Amazon's Web Services model; you would use the HMAC-SHA1 signing method) or via a public/private key system (signing method: RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work than you otherwise might right now.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields which include the current date, a random number called a "nonce," and the parameters of your request. This makes it very hard to impersonate requests to your web service.
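A minimal sketch of that signing idea (simplified relative to the real OAuth rules, which also require strict percent-encoding and sorting of parameters into a "signature base string"):

```python
import hashlib
import hmac
import secrets
import time

def sign_request(method, url, params, consumer_secret):
    """Sign a request HMAC-SHA1, OAuth-style (simplified sketch).

    Adds a timestamp and a random nonce, builds a canonical string from
    the sorted fields, and attaches an HMAC over it keyed by the secret.
    """
    fields = dict(params)
    fields["oauth_timestamp"] = str(int(time.time()))
    fields["oauth_nonce"] = secrets.token_hex(8)  # random, never reused
    base = "&".join(f"{k}={fields[k]}" for k in sorted(fields))
    base = f"{method.upper()}&{url}&{base}"
    sig = hmac.new(consumer_secret.encode(), base.encode(), hashlib.sha1)
    fields["oauth_signature"] = sig.hexdigest()
    return fields
```

The server recomputes the signature from the received fields and its own copy of the secret; a mismatched signature, a stale timestamp, or a reused nonce means the request is rejected, which is what defeats replay and impersonation.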
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in the OAuth security model, which the community is currently working on closing. If you'd invented your own system, you'd be figuring out all of this stuff on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want 3rd-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go for a simple API key, and take additional measures if there is a need for a more secure solution.
Here is some more info: http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can however, be more strict by requiring IP addresses of authorized servers to be registered in your system for the given key. This adds an extra step and may be overkill.
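If you do go that route, the check itself is simple (hypothetical names; a real implementation would look the registered IPs up in your key store rather than a dict):

```python
# api_key -> set of server IPs registered for that key
# (a stand-in for a table in your key store)
REGISTERED_IPS = {
    "key-abc123": {"203.0.113.10", "203.0.113.11"},
}

def request_allowed(api_key, client_ip):
    """Accept the request only if the key exists and the caller's IP
    is one of the servers registered for that key."""
    return client_ip in REGISTERED_IPS.get(api_key, set())
```

As the answer notes, this only makes sense when the authorized callers are servers with stable addresses; it breaks down for client-side callers, whose requests arrive from end-user IPs.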
I'm not sure what you mean by a "simple API key," but you should be using some kind of authentication with private keys (known only to client and server), then computing a signature over the data to ensure that the client is indeed who you think it is and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs requests come from for a particular account, and if a new IP comes along, flag the account, send an email to the client, and ask them to authorize that IP. I don't know, maybe something like that could work.
Basically you have two options: either restrict access by IP, or use an API key. Both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict access to your service. You can define exactly which 3rd-party services will be allowed to access your service without forcing them to implement any special authentication features. The problem with this method, however, is that if the 3rd-party service is written, for example, entirely in JavaScript, then the IP of the incoming request won't be the 3rd-party service's server IP but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction will hence make it impossible to write client-driven applications and forces all requests to go through a server with proper access rights. Remember that IP addresses can also be spoofed.
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically, this works by having two keys, for example a user id and a secret password. Each method request to your service should provide an authentication hash consisting of the request parameters, the user id, and a hash of these values (where the secret password is used as the hash salt). This way you can both authenticate and restrict access. The problem is that, once again, if the 3rd-party service is written as client-driven (for example in JavaScript or ActionScript), then anyone can parse the user id and secret salt values out of the code.
Basically, if you want to be sure that only the few services you've specifically defined will be allowed to access your service, then your only option is to use IP restriction and hence force them to route all requests via their servers. If you use an API key, you have no way to enforce this.
All this piling-up of IP-based security just creates a giant obstacle for users before they can even connect. Symbian S60 phones are fully capable of leaving an untraced, reliable, and secure signal amid multiple users (using Opera Handler UI 6.5, Opera Mini v8 and 10) along with a completely configured network setup, so why restrict other features once a faster connection method has been found? Instead of always asking users for a name, password, location, and permission to view their data, why not keep one verified "true account" per user: an all-in-one registration/application account, perhaps an e-mail account bundled with the OS, monitored monthly in collaboration with the network provider (password issues would go to another department)? Users with unpaid bills would have their account and its linked features turned off outright; that would push them to re-subscribe and become more responsible users, and an unmaintained account could simply expire. The IP already marks a user's identity and location, so it seems unnecessary to ask for it again in the browser's pre-searches; "obtaining data" or "processing data" would do.
We have website with articles users can vote for. What is the recommended method of limiting votes?
There are so many sites that have voting implemented that I know some possible solutions, but I guess there is some basic, bulletproof recommended method based on sessions, IPs, time limits, etc.
What is the best way to send votes from browser? Basic GET/POST or AJAX request? Is it necessary to use some pregenerated request-id?
Update: We cannot use user registration.
[...] bulletproof [...]
Impossible.
Limiting by account will help; IP addresses are far too dynamic and easily changeable to be remotely "secure". You then of course have to limit account creation, which is, again, difficult.
Stack Overflow does it quite nicely (there was a blog entry about this recently, "New Question / Answer Rate Limits"): basically, you have accounts where you have to actively participate for a while before you can vote. Then you are rate-limited (by account) until you've participated for a bit longer. Then the limits are removed, so you don't annoy more active (more trusted) users.
If you just want to prevent casual, "accidental" voting, limit by cookie, and possibly also by IP (bearing in mind that more than one user can be behind a single IP). If you want to try to prevent abuse, require accounts that you can't just click "signup" for (or rather, ones for which you cannot write a "click signup 2000 times" script), although this isn't always possible (or practical).
The best way of preventing duplicate votes is to have only signed-in users vote. That way you can store their votes in some data store (DB).
If you want to allow for users to vote anonymously, use the browser session.
The downside of this is that they can just close/reopen the browser and revote.
I would not recommend using the IP to restrict votes, since many users can be behind a proxy and will therefore appear to have the same IP. If one of those users voted, the others would not be able to vote anymore.
This may help with your "bulletproof" recommendation request: Content Voting Database and Application Design
There's no bulletproof solution unless you require some serious (banking level) authentication. That said, the basic solution is to use sessions (cookies). IP limiting is a very bad idea (for example I'm sharing an IP with about 20 other people).
Use authenticated users
Don't block IP
Don't verify votes by cookies
Use a captcha if the same IP is voting multiple times with different accounts
If you want to allow non-authenticated users, you will certainly have to use a captcha to avoid bots. But I still think it is best to allow only authenticated users to vote.
You can do something like: an account younger than 1-2 hours can't vote, to avoid bots creating accounts and feeding in votes.
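That age gate is a one-line check wherever the vote is recorded; a sketch, assuming you store each account's creation timestamp:

```python
import time

MIN_ACCOUNT_AGE = 2 * 60 * 60  # two hours, per the suggestion above

def may_vote(account_created_at, now=None):
    """Reject votes from accounts younger than the minimum age."""
    now = time.time() if now is None else now
    return now - account_created_at >= MIN_ACCOUNT_AGE
```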
The best approach is to key by user_id: one user can vote only once on each challenge, and each challenge should have a unique ID.
Allow new users to register.
Check if user is authenticated.
Check whether the user has already voted on the challenge ID before creating a new vote.
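The steps above boil down to a uniqueness check on the (user, challenge) pair; a sketch with hypothetical names (in SQL this would be a composite UNIQUE constraint on the votes table):

```python
# Set of (user_id, challenge_id) pairs already voted -- a stand-in for a
# composite UNIQUE(user_id, challenge_id) constraint in the votes table.
_votes = set()

def cast_vote(user_id, challenge_id, authenticated):
    """Record a vote once per user per challenge; returns True on success."""
    if not authenticated:
        return False                      # must be signed in
    key = (user_id, challenge_id)
    if key in _votes:
        return False                      # already voted on this challenge
    _votes.add(key)
    return True
```

Pushing the uniqueness check into the database constraint (rather than application code) also makes it race-free when two votes from the same user arrive concurrently.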