Real-world uses of MySQL savepoints in web services?

Does anyone have experience they can share using MySQL savepoints (directly or via an ORM), especially in a non-trivial web service? Where have you actually used them? Are they reliable enough (assuming you're willing to run a fairly recent version of MySQL) or too bleeding-edge or expensive?
Lastly, does anyone have experience with something like the following use case, and did you use savepoints for it? Say the main point of some specific unit of work is to add a row to an Orders table (or whatever; it doesn't have to be order-related, of course) and to update an OrdersAuditInfo table, in the same transaction. It is essential that Orders be updated if at all possible, but the OrdersAuditInfo update is not as essential (e.g., it's OK to just log an error to a file but keep going with the overall transaction). At a low level it might look like this (warning, pseudo-SQL follows):
BEGIN;
INSERT INTO Orders(...) VALUES (...);
/* Do stuff outside of SQL here; if there are problems, do a
ROLLBACK and report an error (i.e., Order is invalid in this
case anyway). */
SAVEPOINT InsertAudit;
INSERT INTO OrdersAudit(...) VALUES(...);
/* If the INSERT fails, log an error to a log file somewhere and do: */
ROLLBACK TO SAVEPOINT InsertAudit;
/* Always want to commit the INSERT INTO Orders: */
COMMIT;
But even here perhaps there'd be a better (or at least more common) idiom? One could do the OrdersAuditInfo insert in a completely different transaction but it would be nice to be guaranteed that the OrdersAuditInfo table were not written to unless the final COMMIT actually worked.
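For concreteness, here is roughly what that could look like as real SQL if the whole unit of work lived in a stored procedure. This is only a minimal sketch of my own: the column names are invented, the "do stuff outside of SQL" step is ignored (it would stay in application code), and a handler stands in for the "log an error and keep going" branch:
DELIMITER //
CREATE PROCEDURE add_order_with_audit(IN p_customer_id INT)
BEGIN
    DECLARE audit_failed BOOLEAN DEFAULT FALSE;

    START TRANSACTION;

    INSERT INTO Orders (customer_id, created_at)
    VALUES (p_customer_id, NOW());

    SAVEPOINT InsertAudit;

    BEGIN
        -- If anything in this inner block errors, undo only the audit
        -- insert and carry on; the Orders insert is untouched.
        DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
        BEGIN
            ROLLBACK TO SAVEPOINT InsertAudit;
            SET audit_failed = TRUE;
        END;

        INSERT INTO OrdersAudit (order_id, action)
        VALUES (LAST_INSERT_ID(), 'created');
    END;

    COMMIT;  -- the Orders row is committed whether or not the audit insert worked

    -- Expose the failure so the caller can log it to a file.
    SELECT audit_failed AS audit_insert_failed;
END //
DELIMITER ;
The caller can read audit_insert_failed and write the problem to its log file while still treating the order itself as committed.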

I generally tend to avoid SAVEPOINTs, as they can make code quite hard to understand and verify.
In the case you posted, whether to wrap everything in a single transaction depends on whether having OrdersAudit records that exactly correspond to Orders rows is part of your business rules.
EDIT: Just re-read your question, and you do not have a requirement for guaranteed correspondence between OrdersAudit and Orders. So I wouldn't use any transaction for the insertion of the OrdersAudit records.

I believe you are using a savepoint to prevent a failed INSERT from rolling back the whole transaction. In that case, the best way to do so is to use INSERT IGNORE: if the insert fails, you get a warning instead of an error, and nothing is rolled back.
Since you don't need a rollback, I suggest you don't use a transaction.
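As an illustration of that suggestion (a minimal sketch; the table and column names are invented):
INSERT IGNORE INTO OrdersAudit (order_id, action)
VALUES (42, 'created');

-- A duplicate key or similar ignorable failure becomes a warning, nothing
-- is rolled back, and the surrounding work simply continues.
SHOW WARNINGS;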
SAVEPOINTs are great if you may want to rollback some successful statements (but not all statements). For example:
START TRANSACTION;

-- relatively slow query that you don't want to run often:
INSERT INTO a (...) VALUES (...);

SAVEPOINT one;

SELECT id INTO @id FROM slot WHERE status = 'free' ORDER BY timestamp LIMIT 1;

INSERT INTO b (slot_id, ...) VALUES (@id, ...);
INSERT INTO c (slot_id, ...) VALUES (@id, ...);
INSERT INTO d (slot_id, ...) VALUES (@id, ...);

COMMIT;
If the SELECT returns nothing, you may run a global ROLLBACK. If any of the following INSERTs fails, you may ROLLBACK TO SAVEPOINT one and pick another id. Obviously, in a case like this, you want to implement a maximum number of attempts in your code.
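If you wanted the retry-with-a-maximum-number-of-attempts part to live in SQL rather than in application code, a rough sketch might look like the following. This is my own illustration, reusing the table names from the snippet above and simplifying the slot-picking details:
DELIMITER //
CREATE PROCEDURE reserve_slot(IN p_payload VARCHAR(64))
BEGIN
    DECLARE attempts INT DEFAULT 0;
    DECLARE done BOOLEAN DEFAULT FALSE;
    DECLARE failed BOOLEAN DEFAULT FALSE;
    DECLARE v_id INT;

    START TRANSACTION;

    -- the "relatively slow query that you don't want to run often" goes here

    WHILE NOT done AND attempts < 5 DO
        SET attempts = attempts + 1, failed = FALSE, v_id = NULL;
        SAVEPOINT one;

        SELECT id INTO v_id
        FROM slot
        WHERE status = 'free'
        ORDER BY timestamp
        LIMIT 1;

        IF v_id IS NULL THEN
            -- no free slot at all: undo everything and stop
            ROLLBACK;
            SET done = TRUE;
        ELSE
            BEGIN
                -- if any dependent insert errors, undo just this attempt
                DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
                BEGIN
                    ROLLBACK TO SAVEPOINT one;
                    SET failed = TRUE;
                END;

                INSERT INTO b (slot_id, payload) VALUES (v_id, p_payload);
                INSERT INTO c (slot_id, payload) VALUES (v_id, p_payload);
                INSERT INTO d (slot_id, payload) VALUES (v_id, p_payload);
            END;

            IF NOT failed THEN
                COMMIT;
                SET done = TRUE;
            END IF;
        END IF;
    END WHILE;

    -- If we fall out after 5 failed attempts, the transaction is still
    -- open; the caller decides whether to COMMIT or ROLLBACK.
END //
DELIMITER ;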

Related

MySQL: which is the effective way to roll back a transaction?

I have seen in multiple answers here and on Google that rolling back a transaction rolls back only the last command, and I have also read that it rolls back ALL commands (neither claim was documented or referenced).
What I need to do is create a stored procedure that inserts/updates on table A, gets the last ID of A, inserts that ID into B, gets the last ID of B, inserts it into C, etc., etc., etc.
I want to know the way to commit or roll back all the commands in the transaction: start the transaction and, if something fails, get everything back to how it was originally.
SQL code with error handling (IF) and last_id would be appreciated, because I also saw a LOT of different ways to get the last id and I don't know which is best.
By the way, all tables are InnoDB
Kind regards,
If you BEGIN a transaction then nothing will get applied until you COMMIT it. Dropping your connection or issuing a ROLLBACK is the same as never committing it.
This is, of course, presuming you have autocommit set on, which is usually the case.
You can roll-back individual commands if you wrap them as transactions as well.
More information is available in the documentation.
Keep in mind that MyISAM and other engines do not support transactions, whereas InnoDB does. Further, only INSERT, UPDATE, DELETE and REPLACE statements can be rolled back. Other things, like alterations to the schema, cannot.
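Since the asker requested example SQL: a minimal sketch of such a procedure (my own table and column names, not from the question), using an exit handler so that any error undoes all of the inserts, and LAST_INSERT_ID() to chain the generated ids:
DELIMITER //
CREATE PROCEDURE insert_chain(IN p_name VARCHAR(64))
BEGIN
    DECLARE v_a_id INT;
    DECLARE v_b_id INT;

    -- Any SQL error rolls everything back and re-raises it to the caller.
    -- (RESIGNAL needs MySQL 5.5+; on older versions set an error flag instead.)
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;

    START TRANSACTION;

    INSERT INTO A (name) VALUES (p_name);
    SET v_a_id = LAST_INSERT_ID();        -- id generated by the insert into A

    INSERT INTO B (a_id) VALUES (v_a_id);
    SET v_b_id = LAST_INSERT_ID();        -- id generated by the insert into B

    INSERT INTO C (b_id) VALUES (v_b_id);

    COMMIT;                               -- all three inserts succeed together
END //
DELIMITER ;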
As documented under START TRANSACTION, COMMIT, and ROLLBACK Syntax:
These statements provide control over use of transactions:
[ deletia ]
ROLLBACK rolls back the current transaction, canceling its changes.

mysql concurrent and identical transactions trouble

OK here's the basic idea of what is happening:
begin transaction
  some_data = select something from some_table where some_condition
  if some_data does not exist or some_data is outdated
    new_data = insert a_new_entry into some_table
    commit transaction
    return new_data
  else
    return some_data
  end
When multiple processes execute the code above simultaneously (e.g., the client issues a lot of identical requests at the same time), a lot of 'new_data' rows get inserted when actually only one is needed.
I think it's quite a typical concurrency scenario, but I still can't figure out a decent way to avoid it. The things I can think of are having a single worker process do the select-or-insert job, or setting the isolation level to SERIALIZABLE (unacceptable). Neither is quite satisfactory to me.
PS: The database is mysql, table engine is innodb, and isolation level is repeatable read
In your initial SELECT, use SELECT ... FOR UPDATE.
This ensures that the record is locked against other clients reading it until the transaction has been committed (or rolled-back), so they wait on the SELECT command and do not continue through to the rest of the logic until the first client has completed its UPDATE.
Note that you will need to ROLLBACK the transaction in the else condition, or else the lock will continue blocking until the connection is dropped.
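A sketch of that pattern (the table and column names are placeholders; the if/else decision still lives in application code):
SET @id = NULL;

START TRANSACTION;

-- The FOR UPDATE lock makes concurrent clients running this same
-- statement wait until we COMMIT or ROLLBACK.
SELECT id, updated_at INTO @id, @updated_at
FROM some_table
WHERE some_key = 'foo'
FOR UPDATE;

-- If @id IS NULL (no row) or @updated_at is outdated, insert and commit:
INSERT INTO some_table (some_key, payload, updated_at)
VALUES ('foo', 'fresh data', NOW());
COMMIT;

-- Otherwise nothing needs to change; just release the lock:
-- ROLLBACK;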

Partially failed insertion to related data

What should I do when I want to insert related data into different tables and one of the insertions fails, so that only a portion of the important related data has been inserted into one of the tables? I obviously don't want that fragment of data to stay in the table, because it is not useful by itself. What are the best ways and techniques to implement this kind of behaviour?
One of the best things that you can do is set autocommit to 0 and wrap the statements in a transaction. That way, if the tables don't fully update, you can roll back and nothing is saved to your disk.
START TRANSACTION;
SELECT @A := SUM(salary) FROM table1 WHERE type = 1;
UPDATE table2 SET summary = @A WHERE type = 1;
COMMIT;
I got this from the MYSQL website: http://dev.mysql.com/doc/refman/5.0/en/commit.html
This is what transactions are for.
A transaction wraps a bunch of statements. At the end, the transaction is either committed (all the changes made hit the disk) or rolled back (none of them do).
You start a transaction with the BEGIN statement, commit it with COMMIT, and roll it back with ROLLBACK.
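A bare-bones illustration with two invented, related tables:
START TRANSACTION;

INSERT INTO parent (name) VALUES ('example');
INSERT INTO child (parent_id, detail) VALUES (LAST_INSERT_ID(), 'related row');

-- If both inserts succeeded:
COMMIT;
-- If either failed, issue ROLLBACK instead and neither row is kept.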

Seeking an example of a procedure that uses row_count

I want to write a procedure that will handle the insert of data into 2 tables. If the insert should fail in either one then the whole procedure should fail. I've tried this many different ways and cannot get it to work. I've purposefully made my second insert fail but the data is inserted into the first table anyway.
I've tried to nest IF statements based on the rowcount but even though the data fails on the second insert, the data is still being inserted into the first table. I'm looking for a total number of 2 affected rows.
Can someone please show me how to handle multiple inserts and rollback if one of them fails? A short example would be nice.
If you are using InnoDB tables (or other compatible engine) you can use the Transaction feature of MySQL that allows you to do exactly what you want.
Basically, you start the transaction, run the queries while checking each result, and if every result is OK you call COMMIT; otherwise you call ROLLBACK to void all the queries within the transaction.
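For instance, a sketch of such a procedure (the table names are made up), using ROW_COUNT() after each insert as the question asks, and keeping neither row unless both went in:
DELIMITER //
CREATE PROCEDURE insert_pair(IN p_val1 INT, IN p_val2 INT)
BEGIN
    DECLARE affected INT DEFAULT 0;

    -- A hard error in either insert undoes both.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;

    START TRANSACTION;

    INSERT INTO table_one (val) VALUES (p_val1);
    SET affected = affected + ROW_COUNT();

    INSERT INTO table_two (val) VALUES (p_val2);
    SET affected = affected + ROW_COUNT();

    IF affected = 2 THEN
        COMMIT;      -- both rows were inserted
    ELSE
        ROLLBACK;    -- something inserted zero rows: keep neither
    END IF;
END //
DELIMITER ;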
You can read an article about it, with examples, here.
HTH!
You could try turning autocommit off. It might be automatically committing your first insert even though you haven't explicitly committed the transaction that's been started:
SET autocommit = 0;
START TRANSACTION
......

mysql insert race condition

How do you stop race conditions in MySQL? The problem at hand is caused by a simple algorithm:
(1) select a row from the table
(2) if it doesn't exist, insert it
and then either you get a duplicate row, or if you prevent it via unique/primary keys, an error.
Now normally I'd think transactions help here, but because the row doesn't exist, the transaction doesn't actually help (or am I missing something?).
LOCK TABLE sounds like overkill, especially if the table is updated multiple times per second.
The only other solution I can think of is GET_LOCK() for every different id, but isn't there a better way? Aren't there scalability issues with that as well? And doing it for every table sounds a bit unnatural, since this seems like a very common problem in high-concurrency databases to me.
What you want is LOCK TABLES,
or, if that seems excessive, how about INSERT IGNORE with a check that the row was actually inserted?
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead.
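For example (a hypothetical table with a unique key on id; ROW_COUNT() tells you whether the row actually went in):
INSERT IGNORE INTO items (id, name) VALUES (42, 'widget');

-- 1 if the row was inserted, 0 if a duplicate key downgraded the
-- error to a warning and nothing was written.
SELECT ROW_COUNT() AS was_inserted;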
It seems to me you should have a unique index on your id column, so a repeated insert would trigger an error instead of being blindly accepted again.
That can be done by defining the id as a primary key or using a unique index by itself.
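For example, assuming a table called items whose id column should be unique:
-- either make id the primary key ...
ALTER TABLE items ADD PRIMARY KEY (id);

-- ... or add a standalone unique index on it:
ALTER TABLE items ADD UNIQUE INDEX uniq_id (id);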
I think the first question you need to ask is why do you have many threads doing the exact SAME work? Why would they have to insert the exact same row?
Once that is answered, I think that just ignoring the errors will be the most performant solution, but measure both approaches (GET_LOCK vs. ignoring errors) and see for yourself.
There is no other way that I know of. Why do you want to avoid errors? You still have to code for the case when another type of error occurs.
As staticsan says, transactions do help, but since they are usually implicit, if two inserts are run by different threads, they will both be inside implicit transactions and see consistent views of the database.
Locking the entire table is indeed overkill. To get the effect that you want, you need something that the literature calls "predicate locks". No one has ever seen those except printed on the paper that academic studies are published on. The next best thing is locks on the "access paths" to the data (in some DBMSs: "page locks").
Some non-SQL systems allow you to do both (1) and (2) in one single statement, which more or less means that the potential race conditions arising from your OS suspending your execution thread right between (1) and (2) are entirely eliminated.
Nevertheless, in the absence of predicate locks such systems will still need to resort to some kind of locking scheme, and the finer the "granularity" (/"scope") of the locks it takes, the better for concurrency.
(And to conclude : some DBMS's - especially the ones you don't have to pay for - do indeed offer no finer lock granularity than "the entire table".)
On a technical level, a transaction will help here because other threads won't see the new row until you commit the transaction.
But in practice that doesn't solve the problem; it only moves it. Your application now needs to check whether the commit fails and decide what to do. I would normally have it roll back what you did and restart the transaction, because now the row will be visible. This is how transaction-based programming is supposed to work.
I ran into the same problem and searched the Net for a moment :)
Finally I came up with a solution similar to the method of creating filesystem objects in shared (temporary) directories in order to securely open temporary files:
$exists = $success = false;
do {
    $exists = check();               // SELECT the row from the table
    if ($exists) {
        break;                       // row is already there, nothing to do
    }
    $success = create_record();      // try to INSERT the row
    if ($success === true) {
        $exists = true;
    } elseif ($success !== ERROR_DUP_ROW) {
        log_error("failed to create row, and not because of DUP_ROW!");
        break;
    } else {
        // another process has probably created the record already,
        // so loop around and check() again
    }
} while (!$exists);
Don't be afraid of the busy loop: normally it will execute once or twice.
You prevent duplicate rows very simply by putting unique indexes on your tables. That has nothing to do with LOCKS or TRANSACTIONS.
Do you care if an insert fails because it's a duplicate? Do you need to be notified if it fails? Or is all that matters that the row was inserted, and it doesn't matter by whom or how many duplicates inserts failed?
If you don't care, then all you need is INSERT IGNORE. There is no need to think about transactions or table locks at all.
InnoDB has row level locking automatically, but that applies only to updates and deletes. You are right that it does not apply to inserts. You can't lock what doesn't yet exist!
You can explicitly LOCK the entire table. But if your purpose is to prevent duplicates, then you are doing it wrong. Again, use a unique index.
If there is a set of changes to be made and you want an all-or-nothing result (or even a set of all-or-nothing results within a larger all-or-nothing result), then use transactions and savepoints. Then use ROLLBACK or ROLLBACK TO SAVEPOINT savepoint_name to undo changes, including deletes, updates and inserts.
LOCK TABLES is not a replacement for transactions, but it is your only option with MyISAM tables, which do not support transactions. You can also use it with InnoDB tables if row-level locking isn't enough. See this page for more information on using transactions with lock table statements.
I have a similar issue. I have a table that under most circumstances should have a unique ticket_id value, but there are some cases where I will have duplicates; not the best design, but it is what it is.
User A checks to see if the ticket is reserved, it isn't
User B checks to see if the ticket is reserved, it isn't
User B inserts a 'reserved' record into the table for that ticket
User A inserts a 'reserved' record into the table for that ticket
User B check for duplicate? Yes, is my record newer? Yes, leave it
User A check for duplicate? Yes, is my record newer? No, delete it
User B has reserved the ticket, User A reports back that the ticket has been taken by someone else.
The key in my instance is that you need a tie-breaker, in my case it's the auto-increment id on the row.
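One concrete reading of that tie-breaker, where the reservation with the smallest auto-increment id is treated as the winner (the table and column names here are invented for illustration):
-- Reserve the ticket optimistically.
INSERT INTO reservations (ticket_id, user_id) VALUES (123, 7);
SET @my_id = LAST_INSERT_ID();

-- If an earlier reservation exists for the same ticket, withdraw ours.
DELETE FROM reservations
WHERE id = @my_id
  AND @my_id <> (SELECT winner FROM
                   (SELECT MIN(id) AS winner
                    FROM reservations
                    WHERE ticket_id = 123) AS t);

-- 1 means somebody else got there first and our reservation was removed.
SELECT ROW_COUNT() AS lost_the_race;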
In case INSERT IGNORE, as suggested in the accepted answer, doesn't fit for you, then according to the requirements in your question:
(1) select a row from the table
(2) if it doesn't exist, insert it
another possible approach is to add a condition to the INSERT SQL statement, e.g.:
INSERT INTO table_listnames (name, address, tele)
SELECT * FROM (SELECT 'Rupert', 'Somewhere', '022') AS tmp
WHERE NOT EXISTS (
SELECT name FROM table_listnames WHERE name = 'Rupert'
) LIMIT 1;
Reference:
https://stackoverflow.com/a/3164741/179744