I have a table that is collecting data input by users.
I want to carry out the following SQL statements:
SELECT statement 1
SELECT statement 2
UPDATE table rows that I've read out in the 2 select statements
I want to guard against the possibility of another user inputting new data in between any of the statements.
I've read the MySQL manual and it seems that I could lock the tables first, but I'm more familiar with transactions, and I would like to know whether wrapping a transaction around the three statements would achieve what I want. I've found it quite hard to be certain of this from reading the manuals (or maybe it's just me...).
There are two possible problem scopes here; transactions (if you're using an engine that supports transactions, like InnoDB) will solve one of them.
Transactions keep all of your queries operating on a snapshot of the database state taken when the transaction started, and any modifications are applied all-or-nothing when the transaction is committed. This effectively eliminates interleaving and race conditions among the queries inside the transaction.
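To make that concrete for the original question, here is a minimal sketch of the three statements wrapped in one transaction, assuming InnoDB and made-up table and column names. SELECT ... FOR UPDATE locks the rows it reads until COMMIT, so concurrent writers to those rows have to wait; whether brand-new rows matching the WHERE clause are also blocked depends on the isolation level and indexes.
START TRANSACTION;
-- SELECT statement 1: FOR UPDATE locks the matching rows for the rest of the transaction
SELECT id, status FROM entries WHERE batch_id = 42 FOR UPDATE;
-- SELECT statement 2
SELECT id, owner_id FROM entry_owners WHERE batch_id = 42 FOR UPDATE;
-- UPDATE the rows read above; other sessions writing these rows block until COMMIT
UPDATE entries SET status = 'processed' WHERE batch_id = 42;
COMMIT;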
However, you stated that you want to prevent a user inputting new data in between any of the statements. If this is a situation where you want to ensure that a user submitting a request is starting from current data, you'll need to implement your own locking mechanism, or at least a way to trap cases where interleaving between requests is causing an issue.
Basically, transactions only help with conflicts between queries that run concurrently against the database; they can't protect a read-modify-write cycle that spans separate requests. If this scenario would be a problem:
User1 requests data
User2 requests data
User1 submits modifications
User2 submits modifications
Where User2 was able to submit their changes without knowing about the changes made by User1, you need your own locking system; transactions aren't going to help. This is coming from a web development background where each step is a separate web request in a separate transaction.
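A common do-it-yourself way to trap that interleaving is an optimistic check: store a version (or updated_at) column on the row, hand it to the user along with the data, and include it in the WHERE clause of the UPDATE. A sketch, with made-up names:
UPDATE documents
SET body = 'new text', version = version + 1
WHERE id = 7 AND version = 3;   -- 3 is the version User2 originally read
-- 0 affected rows means someone else saved in between; reload the data and ask the user to retry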
Related
I have read that transactions are usually used on movie-ticket booking websites to solve the concurrent purchase problem. However, I fail to understand why they are necessary.
If at the same time, 2 users book the same seat (ID = 1) on the same show (ID = 99), can't you simply issue the following SQL command?
UPDATE seat_db
SET takenByUserID=someUserId
WHERE showID=99 AND seatID=1 AND takenByUserID IS NULL
As far as I can see, this SQL is already executed atomically, so there's no concurrency issue. The database will give seat ID=1 to the first user whose request the server receives, and the second user's request will simply fail (the UPDATE will match no rows). So why is a transaction still needed for a ticket booking system?
When you batch all of your DML statements into a single transaction, you are typically telling the database a couple of things:
Make this operation (i.e. book movie ticket) an all-or-nothing operation
Ensure you don't leave any orphan rows and have consistent data
Lock all the associated tables up-front so that no other writes can be done while the operation runs
Prevents other transactions from modifying tables your current operation wants to access
Allows the database to resolve a deadlock and let processing continue by aborting one of the conflicting queries
Whether you need to wrap your UPDATE seat_db request in its own transaction depends on what other processing (DML) is being done before and after it.
You'll have to use transactions whenever your action involves multiple separate rows. For example, if the user has to pay for the ticket, there will be at least two updates: debit the user's credit and mark the seat as taken. If either of those updates were performed alone, you'd definitely get into trouble.
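For example, a sketch of that two-update booking (the users table and its columns are invented; seat_db is from the question): both changes either commit together or are rolled back together.
START TRANSACTION;
-- debit the ticket price from the buyer
UPDATE users SET credit = credit - 50 WHERE userID = 123 AND credit >= 50;
-- take the seat only if it is still free
UPDATE seat_db SET takenByUserID = 123 WHERE showID = 99 AND seatID = 1 AND takenByUserID IS NULL;
-- if either UPDATE reports 0 affected rows, issue ROLLBACK instead
COMMIT;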
I need your advice.
I have a MySQL database which stores the data from my Minecraft server. The server uses the Ebean API for the MySQL access.
When the user base grows I will have multiple servers running on the same synced data; which server a user is connected to won't matter, since it all looks the same to them. But how do I handle a case where two players in the same guild, on two different servers, edit something at the same time? One server will throw an optimistic lock exception. But what do I do if it is something important, like a donation to the guild bank? The amount donated might get duplicated or lost. Tell the user to retry? Or have the server automatically resend the query with the updated data from the database? A friend of mine suggested a socket server in the middle that handles ALL MySQL statements, but that would require a lot of work to make sure it reconnects to the Minecraft servers if the connection is lost, etc. It would also require me to get the raw update query or serialize the Ebean table, and I don't know how to accomplish either of those.
I have not found an answer to my question yet and I hope that it hasn't been answered before.
There are two different kinds of operations the Minecraft servers can perform on the DBMS. On one hand, you have state-update operations, like making a deposit to an account. The history of these operations matters. For the sake of integrity, you must use transactions for these. They're not idempotent, meaning that you can't repeat them multiple times and expect the same result as if you only did them once. You should investigate the use of SELECT ... FOR UPDATE transactions for these.
If something fails during such a transaction, you must issue a ROLLBACK of the transaction and try again. You'd be smart to log these retries in case you get a lot of rollbacks: that suggests you have some sort of concurrency trouble to track down.
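A sketch of such a state-update operation, with invented table and column names, for a guild-bank deposit:
START TRANSACTION;
-- lock the guild's bank row so another server can't read-modify-write it at the same time
SELECT balance FROM guild_bank WHERE guild_id = 5 FOR UPDATE;
UPDATE guild_bank SET balance = balance + 200 WHERE guild_id = 5;
INSERT INTO guild_bank_log (guild_id, player_id, amount) VALUES (5, 42, 200);
COMMIT;
-- on any error: ROLLBACK, log the retry, and run the whole transaction again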
By the way, you don't need to bother with an explicit transaction on a query like
UPDATE credit SET balance = balance + 200 WHERE account = 12367
Your DBMS will get this right, even when multiple connections hit the same account number.
The other kind of operation is idempotent. That is, if you carry out the operation more than once, the result is the same as if you did it once. For example, setting the name of a player is idempotent. For those operations, if you get some kind of exception, you can either repeat the operation, or simply ignore the failure in the assumption that the operation will be repeated later in the normal sequence of gameplay.
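For example (invented schema), repeating this statement any number of times leaves the database in exactly the same state as running it once:
UPDATE players SET display_name = 'Steve' WHERE player_id = 42;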
I am experiencing what appears to be the effects of a race condition in an application I am involved with. The situation is as follows: generally, a page responsible for some heavy application logic follows this format:
Select from test and determine if there are rows already matching a clause.
If a matching row already exists, we terminate here; otherwise we proceed with the application logic.
Insert into the test table with values that will match our initial select.
Normally, this works fine and limits the action to a single execution. However, under high load and user-abuse where many requests are intentionally sent simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction from the select clause.
It seems to actually run something like:
select from test
select from test
select from test
(all of which pass the check)
insert into test
insert into test
insert into test
I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use GET_LOCK() and RELEASE_LOCK(), but this does not appear to suffice under high load, as the race condition still appears to be present. Transactions are also not a possibility, as the application logic is very heavy and none of the tables involved are transaction-capable.
To anyone familiar with this behavior, is it possible to turn this type of handling off so that MySQL always processes queries in the order in which they are received? Is there another way to make such queries atomic? Any help with this matter would be appreciated, I can't find much documented about this behavior.
The problem here is that you have, as you surmised, a race condition.
The SELECT and the INSERT need to be one atomic unit.
The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
If well-designed transactions (the correct solution) are, as you say, not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check whether its assumptions are still true (for example via an INSERT that only succeeds when no matching row exists, a stored procedure, or catching the INSERT's error in the application). If they aren't, it should abort back to your PHP code, which must start the logic over.
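One way to do that atomic check-and-insert in MySQL, sketched with made-up column names (the guard condition stands in for whatever your SELECT currently checks). Because it is a single statement, the check and the insert happen under one statement-level lock even on non-transactional tables:
INSERT INTO test (job_key, created_at)
SELECT 'heavy-task-123', NOW() FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM test WHERE job_key = 'heavy-task-123');
-- if ROW_COUNT() is 0, another request already inserted the row: abort and restart your logic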
By the way, MySQL likely is executing requests in the order they were received. With multiple simultaneous connections it's entirely possible to receive SELECT A, SELECT B, INSERT A, INSERT B. Thus, the only "solution" would be to allow only one connection at a time - and that will kill your scalability dead.
Personally, I would go about the check another way.
Attempt to insert the row. If it fails, then there was already a row there.
In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of races.
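A sketch, assuming you add a UNIQUE key on the column(s) your SELECT was checking:
INSERT INTO test (job_key) VALUES ('heavy-task-123');
-- with a UNIQUE key on job_key, a concurrent duplicate fails with error 1062 (ER_DUP_ENTRY);
-- catch that error in the application and skip the heavy logic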
As our Rails application deals with increasing user activity and load, we're starting to see some issues with simultaneous transactions. We've used JavaScript to disable / remove the buttons after clicks, and this works for the most part, but isn't an ideal solution. In short, users are performing an action multiple times in rapid succession. Because the action results in a row insert into the DB, we can't just lock one row in the table. Given the high level of activity on the affected models, I can't use the usual locking mechanisms ( http://guides.rubyonrails.org/active_record_querying.html#locking-records-for-update ) that you would use for an update.
This question ( Prevent simultaneous transactions in a web application ) addresses a similar issue, but it uses file locking (flock) to provide a solution, so this won't work with multiple application servers, as we have. We could do something similar I suppose with Redis or another data store that is available to all of our application servers, but I don't know if this really solves the problem fully either.
What is the best way to prevent duplicate database inserts from simultaneously executed transactions?
Try adding a unique index to the table where you are having the issue. It won't prevent the system from attempting to insert duplicate data, but it will prevent it from getting stored in the database. You will just need to handle the insert when it fails.
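For example (index and column names are just placeholders for whatever combination must be unique in your table):
ALTER TABLE registrations ADD UNIQUE INDEX idx_user_action (user_id, action_type);
-- duplicate inserts now fail with MySQL error 1062; in Rails this surfaces as
-- ActiveRecord::RecordNotUnique, which you can rescue and treat as "already processed"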
Hi, I am developing a site with JSP/Servlets running on Tomcat for the front-end, with a MySQL db for the backend, which is accessed through JDBC.
Many users of the site can access and write to the database at the same time; my question is:
Do I need to explicitly take locks before each read/write access to the db in my code?
Or does Tomcat handle this for me?
Also, do you have any suggestions on how best to implement this? I have written a significant amount of JDBC code already without taking any locks :/
I think you are thinking about transactions when you say "locks". At the lowest level, your database server already ensures that parallel reads and writes won't corrupt your tables.
But if you want to ensure consistency across tables, you need to employ transactions. Simply put, what transactions give you is an all-or-nothing guarantee. That is, if you want to insert an Order in one table and related OrderItems in another table, you need an assurance that if the insertion of the OrderItems fails (step 2), the changes made to the Order table (step 1) will also be rolled back. This way you'll never end up in a situation where a row in the Order table has no associated rows in OrderItems.
This, of course, is a very simplified picture of what a transaction is. You should read more about it if you are serious about database programming.
In Java, you usually do transactions with roughly the following steps:
Set autocommit to false on your JDBC connection
Do the inserts and/or updates that belong together using that same connection
Call conn.commit() when all of them have succeeded
If there is a problem somewhere during step 2, call conn.rollback()
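On the wire, those steps roughly correspond to the following MySQL statements (a sketch using the Order/OrderItems example from above; table and column names are made up):
SET autocommit = 0;   -- what conn.setAutoCommit(false) arranges
INSERT INTO Orders (order_id, customer_id) VALUES (1001, 7);
INSERT INTO OrderItems (order_id, product_id, quantity) VALUES (1001, 55, 2);
COMMIT;               -- issued by conn.commit(): both rows become visible together
-- had the second INSERT failed, conn.rollback() would issue ROLLBACK and discard the Orders row too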