I want to test what a script does when a MySQL transaction fails, so I want to simulate a transaction failure on COMMIT.
Imagine the script opens a transaction, runs some queries, and then commits the transaction. I want to inject some extra query into the transaction to ensure the transaction will fail on COMMIT, so I can test how the script recovers from the transaction failure.
I don't want to use an explicit ROLLBACK. I want to simulate a real-life case where a commit fails. The exact DB structure is unimportant; it can be adapted to the solution.
Edit: I need the transaction to fail on COMMIT, not on some prior query. Therefore answers that roll back prior to COMMIT are not what I want, such as this one: How to simulate a transaction failure?
My unsuccessful attempts:
Inserting a row with an invalid PK or FK fails immediately on the INSERT. Temporarily disabling FK checks with FOREIGN_KEY_CHECKS=0 won't help, as they are not rechecked on COMMIT. If it were PostgreSQL, deferrable constraints would help, but not in MySQL.
Opening two parallel transactions and inserting a row with the same PK (or any column with a unique constraint) in both transactions blocks the later transaction on the INSERT while it waits for the former transaction. So the transaction fails on the INSERT, not on COMMIT.
So I believe you can try the following:
Two-phase commit: use two databases (first and second) and make use of two-phase commit. When the COMMIT statement is about to be executed, you can shut down the second DB. This way your commit operation will fail and the transaction will roll back.
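One way to see a failure at the COMMIT step in MySQL itself is its XA (two-phase commit) interface. A minimal sketch, assuming a throwaway table some_table and a made-up transaction id 'test-xid' (both placeholders):

XA START 'test-xid';
INSERT INTO some_table (id) VALUES (1);
XA END 'test-xid';
XA PREPARE 'test-xid';
-- shut down or kill the server (or the coordinating resource) at this point
XA COMMIT 'test-xid';  -- the final commit step now fails and the transaction stays prepared

After a restart, XA RECOVER lists the prepared transaction, which can then be committed or rolled back.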
Alternatively, you execute several inserts and, before your COMMIT, your database server dies. A transaction fails if it never receives the COMMIT.
Hopefully it helps!
Related
For example, I am trying to create a new record in my MySQL database. In the case of sql.ErrTxDone, what does it actually mean, and what should I do in case the transaction was committed?
You get this error if a transaction is in a state where it cannot be used anymore.
sql.Tx:
After a call to Commit or Rollback, all operations on the transaction fail with ErrTxDone.
And also sql.ErrTxDone:
ErrTxDone is returned by any operation that is performed on a transaction that has already been committed or rolled back.
var ErrTxDone = errors.New("sql: transaction has already been committed or rolled back")
What should you do? Don't use the transaction anymore. If you have further tasks, do them outside of it or in another transaction.
If you have tasks that should be in the same transaction, don't commit it until you do everything you have to. If the transaction was rolled back (e.g. due to a previous error), you have no choice but to retry (using another transaction) or report failure.
If you're already using transactions, try to put everything in the transaction that needs to happen all-or-nothing. That's the point of transactions. Either everything in it gets applied, or none of them. Using them properly you don't have to think about cleaning up after them. They either succeed and you're happy, or they don't and you either retry or report error, but you don't have to do any cleanup.
On a website, when a user posts a comment I do several queries, Inserts and Updates. (On MariaDB 10.1.29)
I use START TRANSACTION so if any query fails at any given point I can easily do a rollback and delete all changes.
Now I noticed that an INSERT inside the transaction blocks INSERTs from other connections, and I'm not talking about while the query is running (that's obvious) but until the transaction is closed.
DELETE is only blocked if the rows share a common index key (comments for the same page), but luckily UPDATE is not blocked.
Is there any kind of transaction that does not block new inserts on the table while the transaction is ongoing (not just during the actual query), or any other method that lets me conveniently "undo" any query done after some point?
PS:
I start the transaction with PHP's mysqli_begin_transaction() without any of the flags, and then mysqli_commit().
I don't think that a simple INSERT would block other inserts for longer than the insert time. AUTO_INC locks are not held for the full transaction time.
But if two transactions try to UPDATE the same row like in the following statement (two replies to the same comment)
UPDATE comment SET replies=replies+1 WHERE com_id = ?
the second one will have to wait until the first one is committed. You need that lock to keep the count (replies) consistent.
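A minimal sketch of that lock wait, using the comment table above and a made-up com_id of 42:

-- session 1
START TRANSACTION;
UPDATE comment SET replies = replies + 1 WHERE com_id = 42;  -- the row is now locked

-- session 2 (another connection)
START TRANSACTION;
UPDATE comment SET replies = replies + 1 WHERE com_id = 42;  -- blocks here...

-- session 1
COMMIT;  -- ...and only now does session 2's UPDATE proceed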
I think all you can do is keep the transaction time as short as possible. For example, you can prepare all statements before you start the transaction, but that is a matter of milliseconds. If you transfer files and that can take 40 seconds, then you shouldn't do it while the database transaction is open. Transfer the files before you start the transaction and save them with a name that indicates the operation is not complete. You can also save them in a different folder, but on the same partition. Then, when you run the transaction, you just need to rename the files, which should not take much time. From time to time you can clean up and remove unrenamed files.
All write operations work in similar ways: they lock the rows that they touch (or might touch) from the time the statement is executed until the transaction is closed via either COMMIT or ROLLBACK. SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE also acquire such locks.
When a write operation occurs, deadlock checking is done.
In some situations, there is "gap" locking. Did com_id happen to be the last id in the table?
Did you leave out any SELECTs that needed FOR UPDATE?
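For example, if the code reads the current count before writing it back, the read should take the row lock too. A sketch, reusing the hypothetical comment table and com_id = 42:

START TRANSACTION;
SELECT replies FROM comment WHERE com_id = 42 FOR UPDATE;  -- locks the row for this transaction
-- application logic that decides the new value goes here
UPDATE comment SET replies = replies + 1 WHERE com_id = 42;
COMMIT;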
Using Liquibase v3.6.3 on MySQL. If I understood correctly, every changeset runs inside a SQL transaction by default. However, it seems to me that transactions are being committed on a per-change basis. When running this script
databaseChangeLog:
  - changeSet:
      id: changeset-
      changes:
        - renameTable:
            oldTableName: old_table
            newTableName: new_table
        - addColumn:
            columns:
              - column:
                  name: test_column_name
                  type: varchar(255)
            tableName: other_table
If the addColumn change fails because of some SQL exception (e.g. a constraint check or something else), then the DATABASECHANGELOG table won't be updated, which is what I expect, as the changeset failed. However, the first statement DID pass and my table is now called new_table.
Of course, if I correct the problem causing the second one to fail and retry the update, it will fail because old_table doesn't exist anymore.
I'm aware of this paragraph in the liquibase documentation
Liquibase attempts to execute each changeSet in a transaction that is
committed at the end, or rolled back if there is an error. Some
databases will auto-commit statements which interferes with this
transaction setup and could lead to an unexpected database state.
Therefore, it is usually best to have just one change per changeSet
unless there is a group of non-auto-committing changes that you want
applied as a transaction such as inserting data.
https://www.liquibase.org/documentation/changeset.html
but I don't really understand it. Auto-commit means automatically committing A TRANSACTION. If the whole changeset is wrapped in a transaction, why do only some of the changes get applied? Shouldn't Liquibase roll back the whole transaction?
Any best practices for this? Can't we manage transactions manually in Liquibase?
It is not Liquibase that is committing a changeset partially.
I have worked with many databases, and a basic concept in all the databases I have used is that a transaction combines data modifications (DML) only.
DDL is never part of a transaction. It is always executed immediately and an open transaction is automatically committed before it is executed.
The reason for that is that the rollback command of a database can handle data modifications only. It can't handle DDL. And if a rollback is not possible anymore then keeping the transaction open becomes useless.
So, Liquibase does create a transaction and commits all changes at the end, as the documentation states. But that works only if the changeset contains DML only, no DDL.
And because of that, DDL and DML should never be mixed in one changeset, and every DDL statement should be in a separate changeset. Otherwise there is no way Liquibase can prevent a changeset from partially succeeding and causing trouble when trying to roll back.
MySQL (and many other relational databases) has the concept of implicit commit. Most databases trigger a commit implicitly (just as if you called COMMIT yourself) to end the current active transaction before (or after) executing DDL statements.
Liquibase tries to apply the changes of one changeset under a single transaction. In your case, there are two changes, and both are DDL statements (RENAME TABLE and ALTER TABLE) under one changeset. Both statements trigger an implicit commit, which leaves the database in an inconsistent state if the later statement fails.
There is more information on MySQL's implicit commit on their website, including the comprehensive list of SQL statements that trigger implicit commits.
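A minimal sketch of that behavior at the SQL level, reusing the table names from the question (some_table is an extra placeholder):

START TRANSACTION;
INSERT INTO some_table VALUES (1);            -- not yet committed
RENAME TABLE old_table TO new_table;          -- DDL: implicitly commits the open transaction first
ALTER TABLE other_table ADD COLUMN test_column_name VARCHAR(255);  -- also commits implicitly; if it fails...
ROLLBACK;                                     -- ...there is nothing left to roll back; the RENAME persists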
Hope it helps.
I am using aiomysql (https://github.com/aio-libs/aiomysql) and have some problems with unclosed transactions and locked rows. I use an aiomysql connection pool in my application.
I am NOT using SA context managers for transactions.
My questions:
If I do only SELECTs, as I understand it there are no locks on rows. So do I need to call await conn.commit(), or can I skip it? If I can skip it, how does MySQL know when the transaction ends?
In the code below, when does aiomysql start a new transaction? When acquire() is called on the pool, when the cursor is created, or should I explicitly call "START TRANSACTION"?
The commit needs to be inside the try block, as you want to be sure to roll back if there is no commit. However, SELECTs do not require commits.
If autocommit is True on your connection then each insert or update is considered a single transaction and implicitly committed. If autocommit is False then you automatically get transactions and must commit after your inserts. You do not need to call START TRANSACTION if autocommit is false.
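At the SQL level the difference looks roughly like this (t is a placeholder table):

-- autocommit = 1 (default): every statement is its own transaction
SET autocommit = 1;
INSERT INTO t VALUES (1);   -- committed immediately

-- autocommit = 0: statements accumulate until an explicit COMMIT
SET autocommit = 0;
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (3);
COMMIT;                     -- both inserts become visible to other sessions here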
If you need to call START TRANSACTION you use conn.begin() documented here:
https://aiomysql.readthedocs.io/en/latest/connection.html#connection
A MySQL transaction is used if you have multiple contingent updates that must all succeed together or be rolled back together. For example, a bank transfer that fails on the second update needs to be rolled back, as sketched below:
Withdraw money from account A
Deposit money in account B
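A minimal SQL sketch of that transfer, with a hypothetical accounts table:

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 'A';  -- withdraw from account A
UPDATE accounts SET balance = balance + 100 WHERE id = 'B';  -- deposit into account B
COMMIT;  -- or ROLLBACK if either update fails, so neither change is applied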
You can find a transaction example in the aiomysql github.
https://github.com/aio-libs/aiomysql/tree/master/examples
Consider a transaction T1,
START TRANSACTION;
UPDATE emp SET emp_id = 1 WHERE emp_id = 3;
COMMIT;
The engine I am using is InnoDB.
Before the COMMIT of the transaction shown above, I accessed the table again and it showed the previously committed values. If row-level locking were placed on the table, shouldn't it have shown an error (you cannot access the rows while some transaction is in progress)? Is there anything wrong in my understanding? Can anyone help me with this?
Anything that is done as a part of a transaction is available to the same transaction even before the transaction is committed. The changes are not available in other transactions.
To test this, you need to update in one transaction and then from another terminal start a new transaction and try to access. The second transaction will be able to read the data but if you try to update the update will block and wait for the first transaction to be committed.
If you want the second SELECT to wait and return the updated data, you should use SELECT ... FOR UPDATE.
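A sketch of that test with the emp table from the question, run from two terminals:

-- session 1
START TRANSACTION;
UPDATE emp SET emp_id = 1 WHERE emp_id = 3;     -- row locked, change not yet committed

-- session 2
SELECT * FROM emp WHERE emp_id = 3;             -- returns the old committed row immediately
SELECT * FROM emp WHERE emp_id = 3 FOR UPDATE;  -- blocks until session 1 commits or rolls back

-- session 1
COMMIT;  -- session 2's locking read now returns (no row, since emp_id 3 was changed to 1)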