Database problems when allowing multiple persistent browser logins - MySQL

I am trying to implement a 'remember me' system with cookies that will remember a user across browsers, meaning that if a user logs into the website in browser A and checks 'remember me', then logs in from browser B with 'remember me' checked, he will stay automatically logged in regardless of which browser he uses (checking 'remember me' in browser B will not break the persistent login in browser A).
To do this, I set up my database so that multiple keys can be stored alongside a user id. When a user visits my website, the cookie's value is checked against the database. If that value is found, the user is assigned a new cookie and that cookie key entry in the database is updated to match. Other keys are left alone so that login persistence in other browsers is not affected. When a user logs out manually, the cookie is checked, the corresponding entry in the database is deleted, and then the cookie is deleted.
The problem comes up when a user manually deletes his cookie. If he does, I have no way of deleting the corresponding entry in the database; it simply becomes a permanent entry. This was not a problem when I was not trying to support cross-browser 'remember me', but it became one once I allowed multiple cookie keys to be stored.
Is there any way to fix or avoid this?
There is a ton of information out there on persistent logins, but persistent logins across browsers never seem to be covered, so any help would be great. (Also, feel free to critique my approach and point out any security issues. It seemed far more secure when I was only allowing one 'remember me' per user, but persistent logins across browsers seem like functionality that users would want.)
I am using MySQL and PHP.
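For concreteness, here is a minimal sketch of the scheme described above, assuming PHP with PDO. The table and column names are illustrative, and it stores only hashes of the cookie values rather than the raw tokens:

    // Illustrative schema: one row per browser in which the user checked "remember me".
    $pdo->exec('
        CREATE TABLE IF NOT EXISTS persistent_logins (
            token_hash CHAR(64) PRIMARY KEY,   -- SHA-256 of the cookie value
            user_id    INT NOT NULL,
            expires_at DATETIME NOT NULL
        )
    ');

    // On a successful cookie login: rotate only this browser's token; rows
    // belonging to the user's other browsers are left untouched.
    function rotateToken(PDO $pdo, string $oldCookieValue): ?string
    {
        $newToken = bin2hex(random_bytes(32));
        $stmt = $pdo->prepare(
            'UPDATE persistent_logins
             SET token_hash = ?, expires_at = DATE_ADD(NOW(), INTERVAL 14 DAY)
             WHERE token_hash = ? AND expires_at > NOW()'
        );
        $stmt->execute([hash('sha256', $newToken), hash('sha256', $oldCookieValue)]);
        return $stmt->rowCount() === 1 ? $newToken : null; // null: cookie not recognized
    }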

I agree with #llion's suggestion of setting an expiry on the cookies, in which case you can schedule a process to clear expired cookies out of the DB. However, you can make the cookies appear to the user almost indefinitely persistent by extending their life whenever you see them.
For the benefit of any other readers interested in this question: I really hope that you are only storing hashes of the cookie values in your DB.

I would suggest going with a "remember me (long enough)" solution. Set an expiry on the sessions, but make it a lengthy one. Depending on how often you expect users to log in, this could be anything from 8 hours to a week to a year or more. Each time they visit with a valid cookie you update the expiry behind the scenes, and it appears persistent. If they delete their cookies, their session will eventually be removed (a sketch follows below).
(If you're not actually using sessions, which it doesn't sound like you are, you'd need to add some maintenance code around this. It's probably better to learn about sessions than to reinvent the wheel.)
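A rough sketch of that extend-on-use plus scheduled-purge idea, reusing the illustrative persistent_logins table from the question:

    // On each visit with a valid cookie, silently push the expiry forward.
    function touchToken(PDO $pdo, string $cookieValue): void
    {
        $stmt = $pdo->prepare(
            'UPDATE persistent_logins
             SET expires_at = DATE_ADD(NOW(), INTERVAL 14 DAY)
             WHERE token_hash = ?'
        );
        $stmt->execute([hash('sha256', $cookieValue)]);
    }

    // Run from cron (daily, say): rows orphaned by manually deleted cookies age out here.
    function purgeExpiredTokens(PDO $pdo): void
    {
        $pdo->exec('DELETE FROM persistent_logins WHERE expires_at < NOW()');
    }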

To answer your question clearly:
There is no way for you to know about rogue remember_me tokens in the wild. The only real solution is to make your remember_me tokens last only a couple of weeks, then kill them with a cron job or daemon.
This fixes the DB overcrowding, which seems to be the heart of your question.
Note that you are up against a hard reality: there is no way to tell when a user has deleted a cookie, since the browser fires no background process or other notification, so the only approach is to kill tokens regularly when unused and to refresh the expiration date each time one is used.
The system you describe is (if done right) more secure than long-lived PHP sessions, so I suggest you keep your current approach, secure it with series+tokens (sketched below), and kill any long-lived token that has gone unused for a couple of weeks.
Hope that helps you.
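To make the series+tokens suggestion concrete, here is one common layout (after Jaspan's "improved persistent login" scheme), sketched against the same illustrative schema with added series and validator_hash columns. The cookie holds "series:validator":

    // Returns the user id on success, null otherwise. The series is stored in the
    // clear; the validator only as a hash. A known series with a wrong validator
    // suggests a stolen cookie, so all of that user's tokens are revoked.
    function checkRememberMe(PDO $pdo, string $cookie): ?int
    {
        $parts = explode(':', $cookie, 2);
        if (count($parts) !== 2) {
            return null;
        }
        [$series, $validator] = $parts;

        $stmt = $pdo->prepare(
            'SELECT user_id, validator_hash FROM persistent_logins WHERE series = ?'
        );
        $stmt->execute([$series]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if (!$row) {
            return null; // unknown series: simply not logged in
        }
        if (!hash_equals($row['validator_hash'], hash('sha256', $validator))) {
            // Likely theft: kill every remember-me row for this user.
            $pdo->prepare('DELETE FROM persistent_logins WHERE user_id = ?')
                ->execute([$row['user_id']]);
            return null;
        }
        return (int) $row['user_id']; // valid: now rotate the validator as usual
    }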

Hmm, what happens if he is on another machine and uses a browser there, same login? It's sure to happen. In our house I do this all the time: I have 3 boxes downstairs and my mother has 2 machines upstairs.
Maybe you can make a session unique using microtime and the UA string from navigator.userAgent,
but you can't get the computer name. You could possibly get their IP address through the JS API (http://www.w3.org/TR/2010/WD-system-info-api-20100202/#network), though using it might trigger some sort of warning dialog in the browser. Nope, that doesn't work.
Java can get the IP.

Related

Example data for localStorage and sessionStorage

I understand the textbook definition/concept of localStorage and sessionStorage. (I really should write, "I believe I do.") My two questions are as follows:
1. Can you provide a clear example of when one (localStorage/sessionStorage) should be used over the other? Basically, what data should be stored in localStorage and what data should be stored in sessionStorage? I have read that a list of country codes could go into localStorage, but I wonder if this is really right. What would happen if the country list changed? Wouldn't the old list always be displayed, and how would one refresh the list upon a change?
2. What happens when localStorage and/or sessionStorage hits the max MB for the browser?
1) The data you store with either localStorage or sessionStorage depends on how you want your user to experience your application.
For example, if you have a login page, the username should be kept in localStorage, because this same user will probably log into your app multiple times and will not necessarily want to save the password in the browser. Having the username in localStorage makes it easier for the user to log in in the future, even after closing the browser or changing tabs.
But if you have a system that provides services like booking, searching, or comparing products, storing data with sessionStorage would be better: although the values set by the user won't change during this session, they might - and probably will - change in a future use of your application.
In your case specifically, and repeating what was said at the beginning, even with changes in your list of countries, you need to keep in mind how your user will interact with your system and what you need from the data they provide.
Don't forget you can always clear localStorage if you need to, and set new values as they appear (one way to do this is sketched just below).
2) There's a really good explanation of how the browser responds when storage is full here
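On the country-list point from 1): one way to refresh a localStorage-cached list when it changes is to version it on the server. A hypothetical PHP endpoint (the countries table and updated_at column are assumptions) could use an ETag so the client refetches only when the list actually changed:

    // Derive a version stamp from the data itself.
    $etag = '"' . md5($pdo->query('SELECT MAX(updated_at) FROM countries')->fetchColumn()) . '"';
    header('ETag: ' . $etag);

    // If the client's cached copy (stored in localStorage next to this ETag)
    // is still current, skip the payload entirely.
    if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
        http_response_code(304);
        exit;
    }

    header('Content-Type: application/json');
    echo json_encode($pdo->query('SELECT code, name FROM countries')->fetchAll(PDO::FETCH_ASSOC));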

How to detect SQL table updates with multiple open browsers or windows?

-I have an HTML page with a textbox element with an autocomplete feature.
-The autocomplete list is filled from a MySQL table called X.
-A user can open this page in multiple browsers or windows at the same time.
-The user is able to add new records or update existing records in table X from the same page.
Now, as he adds new records, I want the other window or browser to detect that a change happened in the table and refresh the autocomplete list so it is visible there too.
How can I achieve this?
I am thinking of checking whether the table changed on every keypress of the textbox, but I am afraid that will slow the page down.
The other solution I was thinking of: can I apply a trigger in this case?
I know this is used a lot; for example, you can open your Gmail account in multiple browsers or windows, and if you edit anything you will see it in the rest.
I appreciate your help, as I have searched a lot about this but couldn't find a solution.
This is a very broad question with many, many answers, and it also depends on your database back end. Among the noteworthy options: if you use a bus of some sort in the back end, you can push your change to the DB and then to the bus, and your web client can consume it from there so it knows to refresh. Another is to use a trigger (if you're using MSSQL) to push the change, via a CLR assembly you created, to an MSMQ queue and consume it from there; that cuts down on constant polling of the DB. Personally I always use the bus for this kind of thing, but it depends on your setup.
A SQL trigger wouldn't help here - that's just for running logic inside the DB. The issue is that you don't have a way to push changes down to the client (except perhaps WebSockets or something similar, but that would probably be a lot of work), so you would have to resort to polling the server for updates. Doing so on every key press might be excessive; perhaps poll on focus and/or periodically (every minute?). To lessen the load, you could have the client make the request with some indicator of the state it last successfully fetched, and have the server return only the changes (deletions and insertions; an update is a combination of the two), so rather than the full list every time it is only a delta (a sketch follows below).
Within a single browser you may be able to incorporate local storage as well, but that won't help across multiple browsers.
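A hypothetical delta endpoint along those lines in PHP (assuming table X has an indexed updated_at column; names are illustrative):

    // Client sends the newest updated_at it has already seen; server returns only newer rows.
    // (Detecting deletions would additionally need soft deletes / a tombstone flag.)
    $since = $_GET['since'] ?? '1970-01-01 00:00:00';
    $stmt = $pdo->prepare(
        'SELECT id, label, updated_at FROM X WHERE updated_at > ? ORDER BY updated_at'
    );
    $stmt->execute([$since]);

    header('Content-Type: application/json');
    echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));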
Another option is not to store the autocomplete options locally at all and to always fetch from the server (on key press). Typically you would not send the request when the input length is below some threshold (say, 3 characters), to keep the result size reasonable. You can also throttle the key-press event so that multiple presses in quick succession combine into a single request, and store and cancel any outstanding asynchronous request before sending a new one. This approach guarantees you always get the most current data from the database, and while it adds a degree of latency to the autocomplete, in my experience that is rarely an issue.
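A sketch of the server side of that approach (the X table and the 3-character threshold are assumptions; the throttling itself lives in the client):

    // Hypothetical autocomplete endpoint: always reads fresh data from MySQL.
    $term = trim($_GET['term'] ?? '');
    if (strlen($term) < 3) {          // below the threshold: return nothing
        header('Content-Type: application/json');
        echo json_encode([]);
        exit;
    }

    $stmt = $pdo->prepare('SELECT label FROM X WHERE label LIKE ? ORDER BY label LIMIT 10');
    $stmt->execute([$term . '%']);    // note: % and _ in $term would need escaping

    header('Content-Type: application/json');
    echo json_encode($stmt->fetchAll(PDO::FETCH_COLUMN));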

Secure AND Stateless JWT Implementation

Background
I am attempting to implement token authentication in my web application using JSON Web Tokens.
There are two things I am trying to maintain with whatever strategy I end up using: statelessness and security. However, from reading answers on this site and blog posts around the internet, there appear to be some folks who are convinced that these two properties are mutually exclusive.
There are some practical nuances that come into play when trying to maintain statelessness. I can think of the following list:
Invalidating compromised tokens on a per-user basis before their expiration date.
Allowing a user to log out of all of their "sessions" on all machines at once and having it take immediate effect.
Allowing a user to log out of the current "session" on their current machine and having it take immediate effect.
Making permission/role changes on a user record take immediate effect.
Current Strategy
If you utilize an "issued time" claim inside the JWT in conjunction with a "last modified" column in the database table representing user records, then I believe all of the points above can be handled gracefully.
When a web token comes in for authentication, you could query the database for the user record and:
if (token.issued_at < user.last_modified) then token_valid = false;
If you find out someone has compromised a user's account, then the user can change their password and the last_modified column can be updated, thus invalidating any previously issued tokens. This also takes care of the problem with permission/role changes not taking immediate effect.
Additionally, if the user requests an immediate log out of all devices, then - you guessed it - update the last_modified column.
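In PHP terms (assuming the JWT's signature has already been verified and its claims decoded into a $claims array), the check might look like this sketch:

    // Reject any token issued before the user record last changed.
    $stmt = $pdo->prepare('SELECT UNIX_TIMESTAMP(last_modified) FROM users WHERE id = ?');
    $stmt->execute([$claims['sub']]);
    $lastModified = (int) $stmt->fetchColumn();

    if ($claims['iat'] < $lastModified) {
        http_response_code(401); // password/permission change, forced logout, etc.
        exit;
    }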
The final problem that this leaves is per-device log out. However, I believe this doesn't even require a trip to the server, let alone a trip to the database. Couldn't the sign out action just trigger some client-side event listener to delete the secure cookie holding the JWT?
Problems
First of all, are there any security flaws that you see in the approach above? How about a usability issue that I am missing?
Once that question is resolved, I'm really not fond of having to query the database each time someone makes an API request to a secure end point, but this is the only strategy that I can think of. Does anyone have any better ideas?
You have made a very good analysis of how some common needs break the statelessness of JWT. I can only propose some improvements on your current strategy.
Current strategy
The drawback I see is that a query to the database is always required, and trivial modifications to user data could change last_modified and invalidate tokens.
An alternative is to maintain a token blacklist. Usually an ID is assigned to each token, but I think you can use last_modified instead. Since token revocations are probably rare, you could keep a light blacklist (even cached in memory) with just the userId and last_modified.
You only need to set an entry after updating critical data on a user (password, permissions, etc.) and while currentTime - maxExpiryTime < last_login_date. The entry can be discarded once currentTime - maxExpiryTime > last_modified (no more non-expired tokens outstanding).
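A sketch of that in-memory blacklist using APCu as the cache (an assumption; any shared cache such as Redis would work too, and APCu in particular is per-server):

    // Window during which an already-issued token could still be valid.
    const MAX_TOKEN_LIFETIME = 14 * 24 * 3600;

    // Call after changing critical user data; the entry expires on its own
    // once no outstanding token can still be unexpired.
    function revokeUserTokens(int $userId): void
    {
        apcu_store("jwt_blacklist_$userId", time(), MAX_TOKEN_LIFETIME);
    }

    // Cheap check on each request: hits the cache, not the database.
    function isTokenRevoked(int $userId, int $issuedAt): bool
    {
        $revokedAt = apcu_fetch("jwt_blacklist_$userId");
        return $revokedAt !== false && $issuedAt < $revokedAt;
    }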
Couldn't the sign-out action just trigger some client-side event listener to delete the secure cookie holding the JWT?
If you are in the same browser with several open tabs, you can use localStorage events to sync info between tabs and build a logout mechanism (or handle login / user-changed events). If you mean different browsers or devices, then you would need some way to push events from server to client, which means maintaining an active channel, for example a WebSocket, or sending a push message to a native mobile app.
Are there any security flaws that you see in the above approach?
If you are using a cookie, note that you need additional protection against CSRF attacks. Also, if you do not need to access the cookie from the client side, mark it as HttpOnly.
How about a usability issue that I am missing?
You also need to deal with rotating tokens when they are close to expiring.

Response to phpMyAdmin sniffing

I have been developing and running a small website using apache2 for several years, and roughly once per day my error log is spammed with requests for nonexistent files related to phpMyAdmin. My site does not use PHP, though there is an active MySQL server (using non-conventional settings). All requests are made over a span of 2-5 seconds. Am I safe in assuming these are all requests sniffing for vulnerabilities, or is there any instance in which a legitimate site/company/server might need this information, e.g. advertisers and such? As it is, I've got a script set up to automatically ban any IP that attempts to access one of these nonexistent files. Also, if all of these requests are people searching for vulnerabilities, is there any way to have some fun with the perpetrators, e.g. a well-placed redirect to the NSA? Thanks.
There is nothing to worry about. Most likely those are automated bots that search for publicly released vulnerabilities (or their identifiers, such as a specific URL), default box setups, default username/password combinations, etc. Those bots are looking for quick and easy exploitation, so normally they will only probe a couple of URLs and then move on. You will have to get used to this, though, because as the site grows these probes may occur more often (at which point you might want to start thinking about restricting access by IP range, etc.).
To improve security against brute-force login attempts, phpMyAdmin version 4.1.0-rc1 has an optional reCAPTCHA module.

Good approach to tracking data for unregistered users

This is how the system works:
I have a catalog of items. A guest user can choose to add an item from the catalog to what we call the inquiry bin. The system keeps track of the items added to the inquiry bin for that particular session. The user can delete items from the bin.
I was wondering what would be the best way of storing these items: database, sessions, or cookies?
Thanks in advance!
Are these inquiry items required to be available to everyone? Or just the particular user that created them?
If they have to be globally available, then you'd have to put them in the database, with appropriate flag fields to mark them as temporary and to record which session created them. If it's per user, then it's best to put them in the session.
Cookies shouldn't be used for major data storage, even if it's just a few items. The less data the client holds, the less chance there is to mess with the innards of your system by feeding bad data via the cookie. If there's just a session ID, then there's essentially no chance of doing anything other than guessing someone else's session ID.
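A minimal sketch of the per-session approach in PHP (the inquiry_bin key and integer item IDs are illustrative):

    session_start();

    // Store the bin as a set keyed by item id; duplicates collapse automatically.
    function addToBin(int $itemId): void
    {
        $_SESSION['inquiry_bin'][$itemId] = true;
    }

    function removeFromBin(int $itemId): void
    {
        unset($_SESSION['inquiry_bin'][$itemId]);
    }

    function binItems(): array
    {
        return array_keys($_SESSION['inquiry_bin'] ?? []);
    }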
Client-side cookies have the best performance; no round trip to the web server is a big win. But cookies have a size limitation (see http://support.microsoft.com/kb/306070 about the limits in IE; other browsers have similar limits), so they are used for small amounts of data, like a session key.
A session normally lives in one server process, so if you run a web farm, the session cannot be shared across multiple web servers. If you have a single web server, the session should be the best way to store this information on the server side.
A database is the most flexible solution, but it comes with a performance hit; for a high-performance website, proper caching is the key.