Counting Unique Visitors

I want to count unique visitors and show the count to visitors.
I don't want to use any 3rd-party tool (like analytics or something else).
What is a unique visitor exactly? Does the REAL unique visitor change with IP, cookie or MAC?
I've thought of doing it this way:
Get the visitor's IP address
Search for it in the database
If it exists, don't do anything
If not, insert the IP address and the server time into the database and add it to the count
Is this way right? Should I use cookies or get MAC addresses too? BTW, are all these things -getting the information, storing it, comparing it- legal?
And one last question: can I do all these things WITHOUT a database? Only with JS, PHP and text files or something else?

IP and MAC are not good ideas, because:
Many users can share the same IP address, e.g. when behind a NAT.
You have no way of accessing the client's MAC address unless you are running special software (not an ordinary HTTP server) and you operate on a LAN. Or you exploit some security bug in the browser, but that doesn't count ;)
Setting a cookie with a uniquely generated value is a good idea, but be aware that cookies can be turned off and erased by the client. As for legality, as long as you declare the usage of cookies and you don't do evil things (counting unique visitors is OK), you are safe.
If you assume that a client with no cookie is a new visitor, then you need neither a database nor a unique value in the cookie: simply check whether the cookie is present and set it if it isn't. If you want more information, then yes, you will have to keep track of unique values in cookies.
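A minimal sketch of that cookie-only approach, which also covers the "no database" part of the question by keeping the count in a plain text file. The cookie name and file name are just illustrative choices:

```php
<?php
// Sketch only: count a visitor as "unique" if they don't already carry our cookie.
$counterFile = __DIR__ . '/visitor_count.txt';   // plain text file, no database needed

if (!isset($_COOKIE['visited'])) {
    // No cookie yet: treat this as a new visitor and remember them for a year.
    setcookie('visited', '1', time() + 365 * 24 * 60 * 60, '/');

    // Increment the counter under a lock so concurrent requests don't lose updates.
    $fp = fopen($counterFile, 'c+');
    if ($fp && flock($fp, LOCK_EX)) {
        $count = (int) stream_get_contents($fp);
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, (string) ($count + 1));
        flock($fp, LOCK_UN);
        fclose($fp);
    }
}

// Show the current count to the visitor.
echo 'Unique visitors so far: ' . (int) @file_get_contents($counterFile);
```

Remember that setcookie() has to run before any output is sent, and that anyone who blocks or clears cookies will simply be counted again.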

Related

DNS Issue - My old name servers still show up in the eurid WHOIS search

I have recently transferred a .eu domain to a new registrar. I issued the NS synchronisation request over 72 hours ago. When I go to my domain, I can see that it is still sometimes pointing to the old site (hosted with the old registrar), and at other times to the new one. I also have trouble with my emails: I can no longer connect to the old mail server, nor to the new one. When I do a WHOIS search at eurid, it shows me the list of name servers associated with my domain. It shows four name servers: the two new ones and the two old ones. Is this normal, and usually an indication/symptom of slower-than-usual propagation, or does it indicate that the new registrar did something wrong with the DNS configuration?
It really does sound like the registrar made a mistake. A WHOIS lookup shouldn't have cached information. I recently updated my WHOIS information on a .com, and it was instantaneous.
Now, that doesn't mean that DNS lookups are going to propagate instantaneously however. Since you used the phrase "WHOIS search", I'm not entirely certain you mean the standard "WHOIS" lookup.

Response to phpMyAdmin sniffing

I have been developing and running a small website using apache2 for several years, and ~once per day my error log is spammed with requests for nonexistent files related to phpMyAdmin. My site does not use PHP, though there is an active MySQL server (using non-conventional settings). All the requests are made over a span of 2-5 seconds. Am I safe in assuming these are all requests sniffing for vulnerabilities, or is there any instance in which a legitimate site/company/server might need this information? e.g. advertisers and such? As it is, I've got a script set up to automatically ban any IP that attempts to access one of these nonexistent files. Also, if all of these requests are people searching for vulnerabilities, is there any way to have some fun with the perpetrators? e.g. a well-placed redirect to the NSA? Thanks.
There is nothing to worry about. Most likely those are automated bots that search for publicly released vulnerabilities (or their identifiers, such as a specific URL), default box setups, default username/password combinations, etc. Those bots are looking for quick and easy exploitation, so normally they will only probe a couple of URLs and then move on. You will have to get used to this, though, because as the site grows these probes may occur more often (at that point you might want to start thinking about restricting access by IP range, etc.).
To improve security against brute-force login attempts, phpMyAdmin version 4.1.0-rc1 has an optional reCAPTCHA module.

Database problems when allowing multiple browser persistent logins

I am trying to implement a 'remember me' system with cookies that will remember a user across browsers meaning that if a user logs into a website using browser A and checks 'remember me', and then logs into browser B using 'remember me', he will continue to be automatically logged in regardless of which browser he uses. (checking 'remember me' in browser B will not break his persistent login in browser A).
To do this, I set up my database so that multiple keys can be stored alongside a user id. When a user logs onto my website, the cookie's value is checked. If that value is found in the database, the user is assigned a new cookie and that cookie key entry in the database is updated to match. Other keys are left alone so that other browsers' login persistence will not be affected. When a user logs out manually, the cookie is checked, the corresponding entry in the database is deleted, and then the cookie is deleted.
The problem comes up when a user manually deletes his cookie. If the user does this, I have no way of deleting the corresponding entry in the database. It will simply become a permanent entry in my database. This was not a problem when I was not trying to support cross-browser 'remember me', but has become a problem by allowing multiple cookie keys to be stored.
Is there any way that I can fix / avoid this?
There is a ton of information out there on persistent logins, but persistent logins across browsers never seem to be covered, so any help would be great. (Also feel free to critique my approach and point out any security issues. It seemed way more secure when I was only allowing one 'remember me' per user, but persistent logins across browsers seem like functionality that users would want.)
I am using MySQL and PHP.
I agree with #llion's suggestion of setting an expiry on the cookies, in which case you can schedule a process to clear out expired cookies from the DB. However, you can make this appear to the user almost as though the cookies are indefinitely persistent by extending their life whenever you see them.
For the benefit of any other readers interested in this question, I really hope that you are only storing hashes of the cookie values in your DB.
I would suggest going with a "remember me (long enough)" solution. Set an expiry on the sessions but make it a lengthy one. Depending on how often you would expect users to login this could be anything from 8 hours to a week to a year plus. Each time they visit with a valid cookie you update the expiry behind the scenes and it appears persistent. If they delete cookies then eventually their session will be removed.
(If you're not actually using sessions, which it doesn't sound like you are, you'd need to add some maintenance coding around this. Probably best to learn about sessions instead of reinventing the wheel.)
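A rough sketch of that sliding-expiry idea, assuming a hypothetical MySQL table remember_tokens(user_id, token_hash, expires_at) and an existing PDO connection; note that only a hash of the token is stored, as recommended above:

```php
<?php
// Illustrative sketch of a sliding-expiry "remember me" cookie (not a complete implementation).

const REMEMBER_LIFETIME = 60 * 60 * 24 * 30;   // 30 days

function issueRememberToken(PDO $pdo, int $userId): void {
    $token = bin2hex(random_bytes(32));        // the raw value goes only to the client
    $pdo->prepare('INSERT INTO remember_tokens (user_id, token_hash, expires_at)
                   VALUES (?, ?, FROM_UNIXTIME(?))')
        ->execute([$userId, hash('sha256', $token), time() + REMEMBER_LIFETIME]);
    setcookie('remember_me', $token, time() + REMEMBER_LIFETIME, '/', '', true, true);
}

function checkRememberToken(PDO $pdo): ?int {
    if (empty($_COOKIE['remember_me'])) {
        return null;
    }
    $hash = hash('sha256', $_COOKIE['remember_me']);
    $stmt = $pdo->prepare('SELECT user_id FROM remember_tokens
                           WHERE token_hash = ? AND expires_at > NOW()');
    $stmt->execute([$hash]);
    $userId = $stmt->fetchColumn();
    if ($userId === false) {
        return null;
    }
    // Slide the expiry forward so the login feels persistent to the user.
    $pdo->prepare('UPDATE remember_tokens SET expires_at = FROM_UNIXTIME(?) WHERE token_hash = ?')
        ->execute([time() + REMEMBER_LIFETIME, $hash]);
    return (int) $userId;
}
```

Each browser gets its own row, so clearing the cookie in one browser never touches the others; abandoned rows simply age out.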
To answer your question clearly:
There is no way for you to know about rogue remember_me tokens out in the wild. The only real solution is to make your remember_me tokens last only a couple of weeks and have a cron job or daemon kill them.
This fixes the DB overcrowding, which seems to be the core of your request.
Please note that you are facing a fundamental limitation: there is no way to tell when a user has deleted the cookie, since the browser fires no background process or other notification, so the only approach is to kill tokens regularly when unused and refresh the expiration date each time one is used.
The system as you describe it is more secure (if done right) than long-lived PHP sessions, so I suggest you keep your current approach, secure it with series + tokens, and kill long-lived tokens that go unused for a couple of weeks.
Hope that helps.
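And a sketch of the cron-driven cleanup just described, reusing the same hypothetical remember_tokens table (the connection details are placeholders):

```php
<?php
// cleanup_tokens.php - run from cron, e.g.:  0 3 * * * php /path/to/cleanup_tokens.php
// Deletes remember-me tokens whose expiry date has passed.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');   // placeholder credentials
$deleted = $pdo->exec('DELETE FROM remember_tokens WHERE expires_at < NOW()');
echo "Removed $deleted expired tokens\n";
```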
Ummm, what happens if he is on another machine and uses a browser with the same login? It's sure to happen. In our house I do this all the time: I have 3 boxes downstairs and my mother has 2 machines upstairs.
Maybe you can guarantee a session is unique using microtime and the UA string from navigator.userAgent,
but you can't get the computer name. You could possibly get their IP address through the JS API (http://www.w3.org/TR/2010/WD-system-info-api-20100202/#network), but using this might trigger some sort of warning dialog in the browser. (Nope, that doesn't work.)
Java can get the IP.

Simple, secure API authentication system

I have a simple REST JSON API for other websites/apps to access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange and the server returns JSON information about the orange. Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially) client-side code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another via either a shared secret key (very similar to Amazon's Web Service model; you would use the HMAC-SHA1 signing method) or via a public/private key system (use signing method RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work than you otherwise might right now.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields which include the current date, a random number called "nonce," and the parameters of your request. This makes it very hard to impersonate requests to your web service.
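As a rough illustration only (this shows the core idea, not the full OAuth 1.0 base-string algorithm), signing a request with a shared secret could look like this; the key names and URL are made up:

```php
<?php
// Sketch: sign the request parameters plus a timestamp and a nonce with a
// shared secret, so the server can verify who sent the request and reject replays.
function signRequest(array $params, string $consumerKey, string $secret): array {
    $params['consumer_key'] = $consumerKey;
    $params['timestamp']    = time();
    $params['nonce']        = bin2hex(random_bytes(16));

    ksort($params);                                   // canonical parameter order
    $baseString = http_build_query($params);
    $params['signature'] = hash_hmac('sha1', $baseString, $secret);   // HMAC-SHA1 as above
    return $params;
}

// Client side: build a signed request for GET /fruit/orange.
$signed = signRequest(['resource' => 'orange'], 'my-consumer-key', 'my-shared-secret');
$url = 'https://example.com/fruit/orange?' . http_build_query($signed);
```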
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in the OAuth security which the community is currently working on closing. If you'd invented your own system, you'd have to figure out all of this stuff on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want 3rd-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go for a simple API key.
And take additional measures if there is a need for a more secure solution.
Here is some more info: http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can however, be more strict by requiring IP addresses of authorized servers to be registered in your system for the given key. This adds an extra step and may be overkill.
I'm not sure what you mean by a "simple API key", but you should be using some kind of authentication that has private keys (known only to client and server), and then perform some kind of checksum algorithm on the data to ensure that the client is indeed who you think it is, and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs requests are coming from for a particular account, and if a new IP comes along, flag the account, send an email to the client, and ask them to authorize that IP. I don't know, maybe something like that could work.
Basically you have two options: either restrict access by IP, or use an API key. Both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict access to your service. You can define exactly which 3rd-party services will be allowed to access your service without forcing them to implement any special authentication features. The problem with this method, however, is that if the 3rd-party service is written, for example, entirely in JavaScript, then the IP of the incoming request won't be the 3rd-party service's server IP but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction therefore makes it impossible to write client-driven applications and forces all requests to go through a server with proper access rights. Remember that IP addresses can also be spoofed.
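For completeness, a minimal sketch of such an IP check at the top of the PHP gateway; the whitelisted addresses are placeholders:

```php
<?php
// Only requests coming from explicitly registered server IPs are allowed through.
$allowed = ['203.0.113.10', '203.0.113.11'];          // placeholder addresses

if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
    http_response_code(403);
    exit(json_encode(['error' => 'forbidden']));
}
```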
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically, how this works is that you have two values, for example a user id and a secret password. Each request to your service should provide an authentication hash consisting of the request parameters, the user id and a hash of these values (where the secret password is used as the hash salt). This way you can both authenticate and restrict access. The problem with this is, once again, that if the 3rd-party service is written as client-driven (for example JavaScript or ActionScript), then anyone can parse the user id and secret salt values out of the code.
Basically, if you want to be sure that only the few services you've specifically defined will be allowed to access your service, then your only option is to use IP restriction and hence force them to route all requests via their servers. If you use an API key, you have no way to enforce this.
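A sketch of the server-side check for that user-id-plus-hash scheme, here using an HMAC rather than a plain salted hash; getSecretForUser() is a stand-in for whatever key store you use:

```php
<?php
// The caller sends its user id, the request parameters and a hash computed with
// its secret; the server recomputes the hash and compares in constant time.
function getSecretForUser(string $userId): ?string {
    // Stand-in for a database lookup; replace with your own storage.
    $keys = ['client-1' => 'shared-secret-for-client-1'];
    return $keys[$userId] ?? null;
}

function verifyRequest(array $params): bool {
    $sentHash = $params['hash'] ?? '';
    $userId   = $params['user_id'] ?? '';
    unset($params['hash']);                       // everything except the hash itself is signed

    $secret = getSecretForUser($userId);
    if ($secret === null || $sentHash === '') {
        return false;
    }

    ksort($params);
    $expected = hash_hmac('sha256', http_build_query($params), $secret);
    return hash_equals($expected, $sentHash);     // constant-time comparison
}
```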

Safely store credentials between website visits

I'm building a website which allows users to create accounts and access the site's content. I don't want users to log in each time they visit the site, so I'm planning on storing the username and password in a cookie -- however, I've heard this is bad practice, even if the password is hashed in the cookie.
What "best practices" should I follow to safely remember of a users credentials between visits to my website?
Don't ever do that; you'd be throwing passwords around in the open.
Safest method:
Store the username in a database; in the same row, store a randomly generated salt value and a hash checksum of the password including the salt. Use another table for sessions that references the table with the user credentials. When the user logs in you can insert into the sessions table a date on which you want the session to expire (e.g. after 15 days). Store the session id in a cookie.
Next time the user logs in, you take the password, add the user's salt to it, generate the hash and compare it to the one you have stored. If they match, open a session by inserting a row in the sessions table and sending the session id in a cookie. You can then tell whether the user is logged in, and which user it is, from this cookie.
Edit:
This method is the most popular one in use on most sites. It strikes a good balance between being secure and practical.
Don't simply use an autoincrement value for the session id. Build it from a checksum that is hard to predict, for example by concatenating the username, a timestamp, the salt and another random salt, and taking an md5 or sha checksum of the result.
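A rough sketch of that flow, assuming hypothetical users and sessions tables and an existing PDO connection in $pdo (table and column names are illustrative; modern PHP would typically use password_hash()/password_verify() for the password part):

```php
<?php
// Sketch: verify the salted password hash, then open a session row and hand the
// session id to the browser in a cookie.
function login(PDO $pdo, string $username, string $password): bool {
    $stmt = $pdo->prepare('SELECT id, salt, password_hash FROM users WHERE username = ?');
    $stmt->execute([$username]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$user || !hash_equals($user['password_hash'], hash('sha256', $user['salt'] . $password))) {
        return false;                              // unknown user or wrong password
    }

    // Session id built roughly as suggested above: username + timestamp + salt +
    // a fresh random value, run through a hash (a value straight from random_bytes()
    // would serve just as well).
    $sessionId = hash('sha256', $username . microtime(true) . $user['salt'] . bin2hex(random_bytes(16)));

    $pdo->prepare('INSERT INTO sessions (session_id, user_id, expires_at)
                   VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 15 DAY))')
        ->execute([$sessionId, $user['id']]);

    setcookie('session_id', $sessionId, time() + 15 * 24 * 3600, '/', '', true, true);
    return true;
}
```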
In order to implement a feature that involves user credentials in a website/service, there must be some exchange of credential-related data between the client and the server. This exposes the data to man-in-the-middle attacks, etc. Additionally, cookies are stored on the user's hard drive. No method can be 100% safe.
If you want additional security you can make your site go over HTTPS. This will prevent people from stealing cookies and passwords using man-in-the-middle attacks.
Note:
Involving IP addresses in the mix is not a really good idea. Most often, multiple clients will come from the same IP address, e.g. from behind a NAT.
You shouldn't need to store the password, just an identifier that your application can interpret as identifying that user.
Things you need to be aware of:
If the cookie is copied, will another user be able to pretend to be that user?
A user shouldn't be able to construct a cookie that would authenticate them as another user.
A possible solution to deal with these would be to create a one-time key for each user that is changed when they next use the application.
You will probably never be able to remember a user fully securely, so this should only be used if there is no sensitive data involved.
Passwords in any form shouldn't be stored in cookies. Cookies can easily be stolen.
Some browsers already support saving passwords. Why not let the user use that instead?
Storing a hash of the username in a cookie could provide this "remember me" functionality.
However, for sensitive areas of the system you would need to know that a user entered the system on cached credentials, so that you could offer a username/password prompt before letting them cause any real damage. This could be held as a session-based flag.