MySQL multi-user transactions - mysql

I'm developing a hospital management system, which as you know is a huge system. To make things clear: Reception opens a new ticket for each patient, and the ticket number has a customized format of month + year + sequence number (112011000101), meaning ticket no. 101 in November 2011.
When saving a ticket, my application reads the last saved ticket number and increments it by one (SELECT Tick_No FROM tickets WHERE Tick_No LIKE '112011%' ORDER BY Tick_No DESC LIMIT 1). This returns the last saved ticket number (112011000101), which is incremented by one to give the new ticket number (112011000102).
So if two employees save new tickets at the same time, is there a chance of getting a duplicate ticket number?
I'm using transactions with the MySQL database, which gives row-level locking but still leaves the table available for other queries.
So I need a clear answer, please, on whether duplicates are possible.
Note: in my code, if something goes wrong a rollback is issued and the save is automatically retried (on each save attempt the application re-checks the last number).

You need table-level locking in this case, so you could use LOCK TABLES (or the MyISAM table engine, which locks at table level anyway). This should help: http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
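A minimal sketch of that approach, assuming the table and column names from the question (LOCK TABLES works with both MyISAM and InnoDB):

```sql
LOCK TABLES tickets WRITE;

-- Nobody else can read or write tickets until we UNLOCK,
-- so the read-increment-insert sequence cannot be interleaved.
SELECT Tick_No FROM tickets
WHERE Tick_No LIKE '112011%'
ORDER BY Tick_No DESC LIMIT 1;

-- The application computes new_no = last + 1, then:
INSERT INTO tickets (Tick_No) VALUES (112011000102);

UNLOCK TABLES;
```

The write lock serializes the saves, so two employees saving at the same time can no longer read the same "last" number.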

Related

How do I trigger an event in a database when a column has a certain value?

I am working on building a social network application similar to Twitter, where users have newsfeeds, followers, posts, etc.
I am trying to implement a feature which would make posts (a post in my application is equivalent to a post on Facebook) EXPIRE after a certain amount of time.
What do I mean by expire?
1. The post disappears from the news feed.
2. The user whose post expires receives a notification alerting them that the post has expired. On a programmatic level this is just an INSERT statement executed when the post expires.
What have I done so far?
Making posts disappear from the newsfeed was simple: I just adjusted the query which returns the newsfeed to check the date_of_expiration column and compare it to NOW().
Creating notifications when the post expired was trickier.
My initial approach was to create a scheduled MySQL event which ran every 2 minutes, selecting all posts where NOW() > date_of_expiration and using the selected data to insert a notification entry into my notifications table.
This works; however, I do not want to use a scheduled job. The 2-minute interval means a user might have to wait a full 2 minutes after the post actually expired before receiving the notification telling them their post expired. I'm assuming that if the table had many entries this wait time could be even greater, depending on how long it takes to run the SELECT and INSERT statements.
What am I looking for?
Another solution to inserting a notification into the notification table when a users post expires.
I was thinking that if there were a way to create some kind of event that fires for each row (in the posts table) as soon as NOW() passes its date_of_expiration value, it would be a very good solution to my problem. Is something like this possible? What is commonly done in this scenario?
FYI, my stack is MySQL and Java with an Android + iOS front end, but I don't mind going outside my stack to accomplish this feature.
I am not sure how your application works, but here is a thought from an application I built that interacts with a telephone system, where every second counts.
I implemented server-sent events, where a script keeps checking for new updates every second and then pushes any new/expired notifications to the client.
I am not sure if this is exactly what you are looking for, but it is worth sharing.
EDIT:
Since you are leaning more toward having a notifications table, why not create the notification at run time, within a transaction?
START TRANSACTION;
INSERT INTO posts (comment, createdBy) VALUES ('My new comment', 123);
SELECT @lastID := LAST_INSERT_ID();
-- Create a temporary table with all the friends to notify.
-- The MEMORY engine hint will help with performance.
-- Make sure the final user_id list is unique, otherwise you will be
-- inserting duplicate notifications, which I am sure you want to avoid.
CREATE TEMPORARY TABLE myFriends (KEY (user_id)) ENGINE=MEMORY
SELECT DISTINCT f.friendId AS user_id
FROM friends AS f
WHERE f.userId = 123; -- friends of the posting user
-- Insert the notifications all at once;
-- a single INSERT ... SELECT also helps performance a little.
INSERT INTO notifications (userId, postId, isRead)
SELECT su.userId, @lastID AS postId, '0' AS isRead
FROM users AS su
INNER JOIN myFriends AS f ON f.user_id = su.userId;
-- Commit the transaction if everything passed:
COMMIT;
-- If something fails:
ROLLBACK;
More thoughts, depending on how busy your application will be:
Make sure your server is built with good hardware: lots of RAM (64 GB+) and good hard drives; SSDs would be great if possible.
Also, you may consider using GTID replication to have more sources to read from.
This is hard to answer, since I don't understand your database schema or the clients' access pattern well enough. However, I have an idea that might help you:
What about marking posts as expired with a separate "expired" column? If you do that, you can select the posts to be sent to the client by fetching all posts that are not marked as expired. This will of course also include messages that are newly expired (NOW() > date_of_expiration) but not yet marked; let your Java program filter those freshly expired posts out before sending the reply. At that point in your program you already have the posts that need to be marked, and these are exactly the same ones that need to be inserted into the notifications table, so you can do both right there in your Java code.
Advantage
No need for events or cron jobs at all. This should be fairly efficient if you set your indexes correctly. No need for a JOIN with the notifications table.
Disadvantage
You need to store the expired flag in an extra column, which may require a schema change.
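A rough sketch of that approach in SQL (table and column names assumed from the question):

```sql
-- Newsfeed query: unexpired posts only
SELECT * FROM posts WHERE expired = 0;

-- When the application notices freshly expired posts
-- (date_of_expiration has passed but expired = 0), in one transaction:
START TRANSACTION;
INSERT INTO notifications (userId, postId, isRead)
SELECT createdBy, id, '0'
FROM posts
WHERE expired = 0 AND date_of_expiration < NOW();
UPDATE posts
SET expired = 1
WHERE expired = 0 AND date_of_expiration < NOW();
COMMIT;
```

Running the INSERT and the UPDATE in one transaction keeps the "notified" and "marked expired" sets consistent with each other.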

What's the correct way to protect against multiple sessions getting the same data?

Let's say I have a table called tickets which has 4 rows, each representing a ticket to a show (in this scenario these are the last 4 tickets available for this show).
3 users each want to buy 2 tickets, and all press their "purchase" button at the same time.
Is it enough to handle the assignment of each set of 2 via a transaction, or do I need to explicitly call LOCK TABLES on each assignment to protect against the possibility that the same ticket gets assigned to two users?
The desired outcome is for one of them to get nothing and be told that the system was mistaken in thinking there were available tickets.
I'm confused by the documentation, which says that the lock is implicitly released when I start a transaction, and was hoping to get some clarity on the correct way to handle this.
If you use a transaction, MySQL takes care of locking automatically; that is what transactions are for. How much interference between overlapping requests is prevented, though, depends on the isolation level.
You could use "optimistic locking": when marking the ticket as sold, include the condition that the ticket is still available, then check whether the update succeeded (you get a count of affected rows, which will be 1 or 0).
For example, instead of
UPDATE tickets SET sold_to = ? WHERE id = ?
do
UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL
This way, the database ensures that you don't get conflicting updates. No need for explicit locking (the normal transaction isolation is sufficient).
If you have two tickets, you still need to wrap the two updates in a single transaction (and roll back if either of them fails).
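A minimal, runnable sketch of the affected-rows check (using Python's sqlite3 here purely for illustration; the same pattern applies with any MySQL connector, where cursor.rowcount reports the affected rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, sold_to INTEGER)")
conn.executemany("INSERT INTO tickets (id, sold_to) VALUES (?, NULL)", [(1,), (2,)])

def buy_ticket(conn, ticket_id, user_id):
    # The WHERE clause only matches while the ticket is still unsold,
    # so rowcount tells us whether we won or lost the race.
    cur = conn.execute(
        "UPDATE tickets SET sold_to = ? WHERE id = ? AND sold_to IS NULL",
        (user_id, ticket_id),
    )
    return cur.rowcount == 1

print(buy_ticket(conn, 1, 100))  # True: first buyer gets the ticket
print(buy_ticket(conn, 1, 200))  # False: ticket was already sold
```

Whichever session commits its UPDATE first wins; the loser sees 0 affected rows and can inform the user instead of silently double-selling.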

Applying MySQL transactions to my shopping cart checkout process

I have an online shop that I built a while ago, and I've been using LOCK TABLES on the product stock tables to make sure stock doesn't go below 0 and is updated properly.
Lately there's been a lot of traffic, and lots of operations (some of them also using LOCK TABLES) on the database, so I have been asking myself whether there is a faster and safer way to keep the data consistent.
I'm trying to understand transactions and how they work, and I'm wondering if the following logic would successfully apply to my checkout process:
1) START TRANSACTION (so that any following query can be rolled back if one of them fails or some other condition arises, e.g. stock is lower than needed)
2) insert customer in the 'customers' table
3) insert order info in the 'orders' table
4) similar queries to different tables
5) for each product in the cart:
5.1) UPDATE products SET stock = stock - x WHERE id = ? AND stock - x >= 0 (x is however many units the customer wants to buy)
5.2) check the affected rows; if affected rows == 0 then ROLLBACK and exit (not enough stock for the current product, so cancel the order / throw an error)
5.3) some other queries... etc.
6) COMMIT
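The steps above can be sketched in SQL (table and column names are assumptions, not from the shop's real schema):

```sql
START TRANSACTION;

INSERT INTO customers (name, email) VALUES ('Jane Doe', 'jane@example.com');
INSERT INTO orders (customer_id, created_at) VALUES (LAST_INSERT_ID(), NOW());

-- repeated for each product in the cart (here: 3 units of product 42)
UPDATE products SET stock = stock - 3
WHERE id = 42 AND stock - 3 >= 0;
-- if ROW_COUNT() = 0: not enough stock -> ROLLBACK and abort

COMMIT;
```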
Does that sound correct?
What I don't get (and don't know whether I should be concerned about in the first place) is what happens to the products table and the stock figures if some concurrent session (another customer) tries to place the same order, or an order containing some of the same products.
Would the second transaction wait for the first one to finish and then use the latest stock figures, or would they both run concurrently and possibly both fail in some cases, or what?
Your workflow is correct, even though I would take step 2 (saving the customer's details) out of the transaction: you probably want to remember your customer even if the order couldn't be placed.
When a row is updated within a transaction, the row (or, in some unfortunate cases, the whole table) becomes locked in exclusive mode until the end of the transaction. It means that a simultaneous attempt to place an order for the same product would be put on hold when trying to update the stock.
When the first transaction is committed (or rolled back), the lock is released, and the concurrent update then operates on the new value.
Recommended reading: this manual chapter in full, and this page in particular. Yes, it's a lot (but don't worry if you don't understand everything at the beginning; the rabbit hole is very, very deep).

MySQL INSERT SELECT WHERE race condition

I'm working on a ticketing system where users escrow a large number of tickets at once (basically all tickets that are not out of stock) before claiming them. These tickets are shown to the user, who can select whichever tickets they want to claim.
This escrow system could introduce race conditions if two users try to escrow the same tickets at the same time and there aren't enough tickets. For example:
Tickets left: 1
User A hits the page, checks number of tickets left. 1 ticket left
User B hits the page, checks number of tickets left. 1 ticket left
Since each of them sees one ticket left, they would both escrow it, making tickets left -1.
I'd like to avoid locking if at all possible, and am wondering if a statement with subqueries like
INSERT INTO ticket_escrows (ticket, count)
SELECT ticket, tickets_per_escrow
FROM tickets
WHERE tickets.total > (
    COALESCE((
        SELECT SUM(ticket_escrows.count)
        FROM ticket_escrows
        WHERE ticket_escrows.ticket = tickets.id
        AND ticket_escrows.valid = 1
    ), 0)
    +
    COALESCE((
        SELECT SUM(ticket_claims.count)
        FROM ticket_claims
        WHERE ticket_claims.ticket = tickets.id
    ), 0)
)
will be atomic and allow me to prevent race conditions without locking.
Specifically I'm wondering if the above query will prevent the following from happening:
Max tickets: 50; claimed/escrowed tickets: 49
T1: start tx -> sums ticket escrows --> 40
T2: start tx -> sums ticket escrows --> 40
T1: sums ticket claims --> 9
T2: sums ticket claims --> 9
T1: Inserts new escrow since there is 1 ticket left --> 0 tickets left
T2: Inserts new escrow since there is 1 ticket left --> -1 tickets left
I'm using InnoDB.
To answer your question "if a statement with subqueries ... will be atomic": in your case, yes.
It will be atomic when enclosed in a transaction. Since you state you're using InnoDB, the query, even with its subqueries, is a single SQL statement and as such is executed in a transaction. Quoting the documentation:
In InnoDB, all user activity occurs inside a transaction. If autocommit mode is enabled, each SQL statement forms a single transaction on its own.
...If a statement returns an error, the commit or rollback behavior depends on the error.
Also, isolation levels matter.
In terms of the SQL:1992 transaction isolation levels, the default InnoDB level is REPEATABLE READ
REPEATABLE READ may not be enough for you, depending on the logic of your program. It prevents transactions from writing data that was read by another transaction until the reading transaction completes; phantom reads are possible, however. See SET TRANSACTION for how to change the isolation level.
To answer your second question, "if the above query will prevent the following from happening": in a transaction with the SERIALIZABLE isolation level, it cannot happen. I believe the default level should be safe as well in your case (supposing tickets.total doesn't change), but I'd prefer to have that confirmed by someone.
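For reference, raising the isolation level for the next transaction would look something like this:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- the INSERT ... SELECT from the question goes here;
-- under SERIALIZABLE, plain SELECTs become locking reads,
-- so the two summing subqueries cannot be interleaved
COMMIT;
```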
You have left out a lot of information about how you want this to work, which is why you haven't gotten more/better answers.
Ticketing is a matter of trade-offs. If you show someone there are 10 tickets available, either you immediately make all 10 tickets unavailable to everyone else (which is bad for everyone else), or you don't, which means that person could potentially order a ticket that someone else snapped up while they were deciding which ticket to take. An "escrow" system doesn't really help matters, as it just moves the problem from which tickets to purchase to which tickets to escrow.
During the period where you are not locking everyone else out, the best practice is to craft your SQL in such a way that updates or inserts will fail if someone else has modified the data while you were working on it. This can be as simple as incrementing a counter in the row every time it changes and using that counter (plus the primary key) in the WHERE clause of the UPDATE statement. If the counter has changed, the update fails and you know you've lost the race.
I don't understand what you want to happen, or your data structures, well enough to give you much more advice.
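The counter approach can be sketched as follows (the version column and values are illustrative assumptions):

```sql
-- Read the row, remembering its version counter (say it returns version = 7):
SELECT id, total, version FROM tickets WHERE id = 42;

-- Later, update only if nobody else changed the row in the meantime:
UPDATE tickets
SET total = total - 1,
    version = version + 1
WHERE id = 42 AND version = 7;
-- if ROW_COUNT() = 0, you lost the race: re-read and retry
```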

How to lock a database after a particular time from accepting an entry?

Okay...
I am making a web-based application that will be connected to an SMS gateway.
It is basically an attendance app for colleges.
The attendance will be updated via an SMS from the teacher.
Now, the main part:
What I want is that the teacher should not be able to correct the attendance more than 10 minutes after sending the first message. That is, the database should accept a correction or new message for the same class and the same teacher only for 10 minutes after the first attendance is received in the database.
So receiving should be blocked only from that particular number, and only if the message is for the same class...
I hope the question is clear :o
Thank you
This is not the sort of thing that you should be enforcing at the DB level; it belongs in your application code. If you can't connect time, number, and class together in your DB, it's time to change your schema.
As Sean McSomething mentioned, this is not done at the database level; this is business logic that should be checked just before interacting with the database. The simplest approach is to add a time_created column and, before updating, check whether the interval between NOW() and time_created is less than 10 minutes. It's a pretty trivial job, but don't bother trying to do this inside the database with stored procedures or similar machinery, as it will make your application almost un-debuggable and very sloppy.
Check if there is a row for the given number and class: if there is none, insert; if there is one, check whether its time_created is within the last 10 minutes, and if so update, otherwise ignore the message.
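A minimal sketch of that check in application code (Python used for illustration; the function name and the 10-minute window are taken from the question, everything else is an assumption):

```python
from datetime import datetime, timedelta

EDIT_WINDOW = timedelta(minutes=10)

def should_accept(time_created, now=None):
    # Accept a correction only within 10 minutes of the first message.
    now = now or datetime.now()
    return now - time_created <= EDIT_WINDOW

# First attendance at 09:00: a correction at 09:05 is accepted,
# one at 09:15 is ignored.
print(should_accept(datetime(2023, 1, 1, 9, 0), now=datetime(2023, 1, 1, 9, 5)))   # True
print(should_accept(datetime(2023, 1, 1, 9, 0), now=datetime(2023, 1, 1, 9, 15)))  # False
```

The application calls this before the UPDATE; the database itself needs nothing beyond the time_created column.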