What's the best way to implement this use case in my database? - mysql

I have a central database containing millions of IDs, and a group of users (50-100 users), all able to request extraction of IDs from this big database.
At the moment, when a user sends a GET request, I SELECT 100 ids, then update them with the flag USED and return the 100. The problem is that if I get too many requests at the same time, multiple users will receive the same ids (because I don't lock the DB between the select and the update).
If I lock the database, my problem will be solved, but it will also be slower.
What other alternatives do I have?
Thanks!

Look ahead another step... What if a "user" gets 100 rows, then keels over dead. Do you have a way to release those 100 for someone else to work on?
You need an extra table to handle "check out" and "check in". Also, use that table to keep track of the "next" 100 to assign to a user.
When a user checks out the 100, a record of that is stored in the table, together with a timestamp and "who" checked them out. If they don't "check them back in" within, say, an hour, then you assign that 100 to another user.
Back on something more mundane... How to pick 100. If there is an auto_increment id with no gaps, then use simple math to chunk up the list. If there are a lot of gaps, then use SELECT id FROM tbl WHERE id > $leftoff ORDER BY id LIMIT 100, 1 to get the first id past the next 100 (the exclusive end of the chunk).
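A minimal sketch of such a check-out table (all table and column names here are my own, not from the question):

CREATE TABLE checkout (
    first_id INT NOT NULL,            -- first id of the 100-id chunk
    last_id INT NOT NULL,             -- last id of the chunk
    checked_out_by VARCHAR(50) NULL,  -- "who" currently holds the chunk
    checked_out_at DATETIME NULL,     -- when they took it
    PRIMARY KEY (first_id)
);

-- Release chunks whose holder has gone silent for over an hour:
UPDATE checkout
SET checked_out_by = NULL, checked_out_at = NULL
WHERE checked_out_at < NOW() - INTERVAL 1 HOUR;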

If each user has their own key, you could pull from the millions of IDs starting from their key*10000. For example, user #9 would first get IDs #90000 to #90099, then #90100 to #90199 next time.
You could set the IDs as "Used" before they get sent back, so one user requesting IDs multiple times will never get duplicates. This needn't lock the database for other users.
If they don't request keys more than 100 times before the database can update, this should avoid collisions. You might need to add logic to keep users who request often from running out, for example by having a pool of IDs that can repopulate their supply, but that depends on particulars that aren't clear from the original question.
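A related trick, sketched here with an assumed ids table and a batch_token column that aren't in the original schema: claim the rows and mark them USED in a single UPDATE, then read back only what was just claimed, so there is no window for two users to grab the same ids:

-- Claim up to 100 unused ids atomically (? is a fresh token per request):
UPDATE ids
SET used = 1, batch_token = ?
WHERE used = 0
ORDER BY id
LIMIT 100;

-- Return exactly the rows claimed by this request:
SELECT id FROM ids WHERE batch_token = ?;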

Storing count of records in SQL table

Let's say I have a table with posts, and each post has the index of the topic it belongs to. And I have a table with topics, with an integer field representing the number of posts in each topic. When I create a new post, I increase this value by 1, and when I delete a post, I decrease the value by 1.
I do this so I don't have to query the database each time I need to count the number of posts in certain topics.
But I have heard that this approach may not be safe to use, and the actual number of posts in the table may not match the stored value.
Is there any definite info on how safe this is?
Without transactions, the primary issue is timing. Consider a delete and two users:
Time   User 1              User 2
  1    count = count - 1
  2    update finishes     How many posts?
  3    delete post         Count returned
  4    delete finishes
Remember that actions such as updates and deletes take a finite amount of time -- even if they take effect all at once. Because of this, User 2 will get the wrong number of posts. This is a race condition, and it may or may not be an issue in your application.
Transactions fix this particular problem, by ensuring that resetting the count and deleting the post both take effect "at the same time".
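A sketch of the transactional version, with assumed table and column names:

START TRANSACTION;
UPDATE topics SET post_count = post_count - 1 WHERE id = ?;
DELETE FROM posts WHERE id = ?;
COMMIT;  -- both changes become visible together, or neither does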
A secondary issue is data quality. Your data consistency checks are outside the database. Someone can come directly into the database and say "Oh, these posts from user X should be removed". That user might then delete those posts en masse -- but "forget" or not know to change the associated values.
This can be a big issue. Triggers should solve this problem.
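For instance, a trigger along these lines (again with assumed names) keeps the counter correct no matter who deletes the row:

CREATE TRIGGER posts_after_delete
AFTER DELETE ON posts
FOR EACH ROW
    UPDATE topics SET post_count = post_count - 1 WHERE id = OLD.topic_id;

A matching AFTER INSERT trigger would handle the increment the same way.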

Should a session table be cleared off from the records after a user logs out?

I am using a MySQL table to store a session record for the currently logged-in user. Once the user logs off, I update a few fields in the same record and flag it (revoked) so that it cannot be used again. So for every login a new record is created. This serves my purpose, but it turns out that the table is going to grow huge.
What should be the standard approach for storing sessions? Should the revoked ones be stored in a separate table, should they be deleted, or should they be left in the same table?
I am considering leaving the data in the same session table. While querying for a particular record, I query with two fields (idPeople, which is not unique, and revoked, 0 or 1), for example SELECT * FROM session WHERE idPeople = "someValue" AND revoked = 0, and then update the record if needed while the user is logged in or logging out. Will the huge size of the table affect this, or will MySQL handle it? And what other ramifications are there that I am unable to see?
First, it may be a good idea to add a unique field to your table (e.g. SESSION_ID, which could be a running auto-increment number), define this field as a unique ID, and use it to quickly find the record to be updated (i.e. set revoked = 1).
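A sketch of that suggestion, assuming the table has no primary key yet; the index on (idPeople, revoked) also serves the SELECT from the question:

ALTER TABLE session
    ADD COLUMN session_id INT NOT NULL AUTO_INCREMENT,
    ADD PRIMARY KEY (session_id);

CREATE INDEX idx_people_revoked ON session (idPeople, revoked);

-- Revoking a session now touches exactly one row, found via the key:
UPDATE session SET revoked = 1 WHERE session_id = ?;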
Second, this type of table always triggers the question you are asking, and the best answer can only be given after you assess and answer some preliminary questions, for instance:
When you wish to check the activities of a user, how far into the past does it make sense to go? One month? One year?
What is the longest period for which you may wish to keep this information available (even if only for non-routine queries to retrieve)?
What types of questions (queries) do I expect to be asked of this table?
Once you answer those questions, you can consider the following options:
Have a routine process that runs once a day (at midnight or any other time your system can afford it) and deletes rows whose timestamp is older than, say, one month (or any other period suiting your needs), OR
Same as above, but which first copies those records to a "history" table (see the sketch after this list), OR
Change the structure of your table to a more efficient one, by adding some fields (as suggested above) and indices that would provide good answers for your "SELECT" needs.
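A sketch of the first two options combined, using MySQL's event scheduler (it must be turned on with SET GLOBAL event_scheduler = ON; the session_history table and last_seen column are assumed names, not from the question):

DELIMITER //
CREATE EVENT archive_old_sessions
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
    DECLARE cutoff DATETIME DEFAULT (NOW() - INTERVAL 1 MONTH);
    -- copy expiring rows to the history table first...
    INSERT INTO session_history
        SELECT * FROM session WHERE last_seen < cutoff;
    -- ...then remove them from the live table
    DELETE FROM session WHERE last_seen < cutoff;
END//
DELIMITER ;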

What's the correct way to protect against multiple sessions getting the same data?

Let's say I have a table called tickets which has 4 rows, each representing a ticket to a show (in this scenario these are the last 4 tickets available to this show).
3 users are attempting a purchase simultaneously and each want to buy 2 tickets and all press their "purchase" button at the same time.
Is it enough to handle the assignment of each set of 2 via a TRANSACTION or do I need to explicitly call LOCK TABLE on each assignment to protect against the possibility that 2 of the tickets will be assigned to two users.
The desire is for one of them to get nothing and be told that the system was mistaken in thinking there were available tickets.
I'm confused by the documentation which says that the LOCK will be implicitly released when I start a TRANSACTION, and was hoping to get some clarity on the correct way to handle this.
If you use a transaction with a locking read (SELECT ... FOR UPDATE), InnoDB takes care of the row locking for you, and you don't need LOCK TABLES. Note, though, that a plain transaction by itself does not prevent two sessions from reading the same "available" rows; the read that picks the tickets has to take locks.
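Concretely, a locking read for this scenario might look like the following sketch:

START TRANSACTION;
-- Lock two available rows so no other session can grab them meanwhile:
SELECT id FROM tickets WHERE sold_to IS NULL LIMIT 2 FOR UPDATE;
-- If two ids came back, mark them sold; if fewer, roll back and apologize.
UPDATE tickets SET sold_to = ? WHERE id IN (?, ?);
COMMIT;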
You could use "optimistic locking": When updating the ticket as sold, make sure you include the condition that the ticket is still available. Then check if the update failed (you get a count of rows updated, can be 1 or 0).
For example, instead of
UPDATE tickets SET sold_to = ? WHERE id = ?
do
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL
This way, the database will ensure that you don't get conflicting updates. No need for explicit locking (the normal transaction isolation will be sufficient).
If you have two tickets, you still need to wrap the two calls into a single transaction (and roll back if either of them failed).
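Putting the pieces together, the whole purchase might look like this sketch (the row-count checks live in application code, shown here as comments):

START TRANSACTION;
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL;
-- application checks ROW_COUNT() = 1, otherwise ROLLBACK
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL;
-- application checks ROW_COUNT() = 1, otherwise ROLLBACK
COMMIT;  -- only reached when both tickets were really still free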

Recommend to track all logins, update login table, or both?

Currently I am having a hard time deciding/weighing the pros/cons of tracking login information for a member website.
Currently
I have two tables, login_i and login_d.
login_i contains the member's id, password, last login datetime, and total count of logins. (member id is primary key and obviously unique so one row per member)
login_d contains a list of all login data in history, which tracks each and every time a login occurs. It contains the member's id, datetime of login, and ip_address of the login. This table's primary key is simply an auto-incremented INT field - really purposeless, but I need a primary key and it is the only unique single field (an index, on the other hand, is a different matter, but that's not the concern here).
In many ways I see these tables as being very similar, but the benefit of having the latter is to view exactly when a member logged in, how many times, and which IP it came from. All of the information in login_i (last login and count) already exists in login_d; login_i just holds it in a more concise form, without ever needing to calculate a COUNT(*) on the latter table.
Does anybody have advice on which method is preferred? Two tables will exist regardless but should I keep record of last_login and count in login_i at all if login_d exists?
Added thought/question:
A good comment was made below - what about also tracking login attempts by username/email/IP? Should this ALSO be stored in a table (a 3rd table, I assume)?
This is called denormalization.
Ideally, you would never denormalize.
It is sometimes done anyway, to avoid recomputing expensive results - possibly like your total login count value.
The downside is that you may at some point get into a situation where the value in one table does not match the values in the other table(s). Of course you will try your best to keep them properly up to date, but sometimes things happen. In that case, you may well introduce bugs in application logic that receives an incorrect value from one of the sources.
In this specific case, a count of logins is probably not that critical to the successful running of the app - so not a big risk - although you will still have the overhead of maintaining the value.
Do you often need the last login and count? If yes, then you should store them in login_i as well. If they're rarely used, then you can take your time and compute them from the giant table of all logins instead of storing duplicated data.
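For the rarely-used case, the sketch below derives both values straight from login_d (the member_id and login_datetime column spellings are assumed); with an index on (member_id, login_datetime) this stays cheap even on a large table:

SELECT MAX(login_datetime) AS last_login,
       COUNT(*) AS login_count
FROM login_d
WHERE member_id = ?;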

Where to store users visited pages?

I have a project where I have posts, for example.
The task is this: I must show the user his last post visits.
This is my solution: every time a user visits a topic that is new (for him), I create a new record in the visits table.
The visits table has the following structure: id, user_id, post_id, last_visit.
Now my visits table has ~14,000,000 records, and it's still growing every day.
Maybe my solution isn't optimal and there is another way to store user visits?
It's important to save every visit as a standalone record, because I also have a feature to select and use users' visits. And I can't purge this table, because the data could be needed a month or a year later. How can I optimize this situation?
Nope, you don't really have much choice other than to store your visit data in a table with columns for (at a bare minimum) user id, post id, and timestamp if you need to track the last time that each user visited each post.
I question whether you need an id field in that table, rather than using a composite key on (user_id, post_id), but I'd expect that to have a minor effect, provided that you already have a unique index on (user_id, post_id). (If you don't have an index on that pair of fields, adding one should improve query performance considerably and making it a unique index or composite key will protect against accidentally inserting duplicate records.)
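A sketch of that composite-key layout, with an upsert so a repeat visit just refreshes the timestamp instead of inserting a duplicate:

CREATE TABLE visits (
    user_id INT NOT NULL,
    post_id INT NOT NULL,
    last_visit DATETIME NOT NULL,
    PRIMARY KEY (user_id, post_id)  -- doubles as the duplicate guard
);

-- Record a visit; the pair is inserted once and updated thereafter:
INSERT INTO visits (user_id, post_id, last_visit)
VALUES (?, ?, NOW())
ON DUPLICATE KEY UPDATE last_visit = NOW();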
If performance is still an issue despite proper indexing, you should be able to improve it a bit by segmenting the table into a collection of smaller tables, but segment it by user_id or post_id (rather than by date as previous answers have suggested). If you break it up by user or post id, then you will still be able to determine whether a given user has previously viewed a given post and, if so, on what date with only a single query. If you segment it by date, then that information will be spread across all tables and, in the worst-case scenario of a user who has never previously viewed a post (which I expect to be fairly common), you'll need to separately query each and every table before having a definitive answer.
As for whether to segment it by user id or by post id, that depends on whether you will more often be looking for all posts viewed by a user (segment by user_id to get them all in one query) or all users who have viewed a post (segment by post_id).
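As a variation the answer doesn't mention: MySQL's native partitioning gets a similar effect without maintaining separate tables, provided the partition column appears in every unique key (which the composite (user_id, post_id) key sketched above satisfies):

ALTER TABLE visits
PARTITION BY HASH(user_id)
PARTITIONS 16;  -- queries filtered by user_id touch only one partition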
If it doesn't need to be long lasting, you could store it in session instead. If it does, you could either break the records apart by table, like say 1 per month, or you could only store the last 5-10 pages visited, and delete old ones as new ones come in. You could also change it to pages visited today, this week, etc.
If you do need all 14 million records, I would create another historical table to archive the visits that are not the most relevant for the day-to-day site operation.
At the end of the month (or week, or quarter, etc...) have some scheduled logic to archive records beyond a certain cutoff point to the historical table and reduce the number of records in the "live" table. This should help increase the query speed on the "live" table since you would have less records in it.
If you do need to query all of the data, you can use both tables and have all of the data available to you.
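A sketch of that scheduled archiving step (visits_history and the three-month cutoff are assumptions, not from the question); the cutoff is captured once so the INSERT and DELETE agree on it:

SET @cutoff = NOW() - INTERVAL 3 MONTH;
START TRANSACTION;
INSERT INTO visits_history
    SELECT * FROM visits WHERE last_visit < @cutoff;
DELETE FROM visits WHERE last_visit < @cutoff;
COMMIT;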
You could delete the ones you don't need - if you only want to show the last 10 visited posts, then:
DELETE FROM visits
WHERE user_id = ?
  AND id NOT IN (
    SELECT id FROM (
      SELECT id FROM visits WHERE user_id = ? ORDER BY last_visit DESC LIMIT 10
    ) AS keep_rows
  );
(Note that MySQL won't let the DELETE target the same table it selects from in a subquery - error 1093 - which is why the inner SELECT is wrapped in a derived table. You can ORDER BY in a DELETE, but its LIMIT only takes one parameter, so you can't do LIMIT 10, 100 there.)
Run this after inserting/updating each new row, or every few days if you like.
Having a structure like (id, user_id, post_id, last_visit) for your visits table makes it appear as though you are saving all posts, not just the last post per topic. Don't you need a topic ID in there somewhere so that you can determine what their last post PER TOPIC was, and so you know which row to replace when they post in the same topic more than once?
Store the post_ids in $_SESSION, and then, using MySQL's IN with one SELECT query, you will be able to show his visited posts. All those ids will be destroyed after the member closes his browser, but anyway, this is much faster than using the database.
Edit: sorry, I didn't notice that you must store those records in the database and use them months later. Then I have no idea how to optimize it, but with 14 million records you should definitely use indexes.