How transactions work in the InnoDB engine - MySQL

Consider a transaction T1:
START TRANSACTION;
UPDATE emp SET emp_id=1 WHERE emp_id=3;
COMMIT;
I am using the InnoDB engine.
Before the COMMIT of the transaction shown above, I accessed the table again and it still showed the previously committed values. If row-level locking were placed on the table, shouldn't it have shown an error (you cannot access the row while a transaction is in progress)? Is there anything wrong in my understanding? Can anyone help me with this?

Anything that is done as a part of a transaction is available to the same transaction even before the transaction is committed. The changes are not available in other transactions.
To test this, update in one transaction, then from another terminal start a new transaction and try to access the same row. The second transaction will be able to read the data, but if it tries to update the same row, the UPDATE will block and wait for the first transaction to be committed.
If you want the second SELECT to wait and return the updated data, you should use SELECT ... FOR UPDATE, as sketched below.
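A minimal two-terminal sketch of that behaviour, reusing the emp table from the question:
-- Terminal 1
START TRANSACTION;
UPDATE emp SET emp_id=1 WHERE emp_id=3;        -- this session sees the new value right away

-- Terminal 2 (separate connection), while terminal 1 is still open
SELECT * FROM emp WHERE emp_id=3;              -- returns immediately, showing the previously committed value
SELECT * FROM emp WHERE emp_id=3 FOR UPDATE;   -- blocks until terminal 1 commits or rolls back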

Related

MariaDB. Use Transaction Rollback without locking tables

On a website, when a user posts a comment I do several queries, Inserts and Updates. (On MariaDB 10.1.29)
I use START TRANSACTION so if any query fails at any given point I can easily do a rollback and delete all changes.
Now I noticed that this locks the tables against other INSERTs when I do an INSERT, and I'm not talking about while the query is running (that's obvious), but until the transaction is closed.
A DELETE is only blocked if it shares a common index key (comments for the same page), but luckily an UPDATE is not blocked.
Can I run a transaction that does not lock the table against new inserts (while the transaction is ongoing, not just during the actual query), or is there any other method that lets me conveniently "undo" any query done after some point?
PS:
I start the transaction with PHP's mysqli_begin_transaction() without any of the flags, and then mysqli_commit().
I don't think that a simple INSERT would block other inserts for longer than the insert time. AUTO_INC locks are not held for the full transaction time.
But if two transactions try to UPDATE the same row like in the following statement (two replies to the same comment)
UPDATE comment SET replies=replies+1 WHERE com_id = ?
the second one will have to wait until the first one is committed. You need that lock to keep the count (replies) consistent.
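A minimal sketch of that blocking (com_id = 42 is just a placeholder value):
-- Session 1
START TRANSACTION;
UPDATE comment SET replies=replies+1 WHERE com_id = 42;   -- takes an exclusive lock on the row

-- Session 2, at the same time
START TRANSACTION;
UPDATE comment SET replies=replies+1 WHERE com_id = 42;   -- waits here until session 1 commits or rolls back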
I think all you can do is keep the transaction time as short as possible. For example, you can prepare all statements before you start the transaction, but that is a matter of milliseconds. If you transfer files and that can take 40 seconds, you shouldn't do it while the database transaction is open. Transfer the files before you start the transaction and save them with a name that indicates the operation is not complete. You can also save them in a different folder, but on the same partition. Then, when you run the transaction, you just need to rename the files, which should not take much time. From time to time you can clean up and remove the files that were never renamed.
All write operations work in similar ways: they lock the rows that they touch (or might touch) from the time the statement is executed until the transaction is closed via either COMMIT or ROLLBACK. SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE take locks in the same way.
When a write operation occurs, deadlock checking is done.
In some situations, there is "gap" locking. Did com_id happen to be the last id in the table?
Did you leave out any SELECTs that needed FOR UPDATE?
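On the gap-locking point, here is a minimal sketch (the table and values are made up) of how a locking read at the end of an index can block new inserts under the default REPEATABLE READ isolation level:
-- Hypothetical table for illustration
CREATE TABLE comment (com_id INT PRIMARY KEY, replies INT NOT NULL DEFAULT 0);
INSERT INTO comment (com_id) VALUES (1), (2), (3);

-- Session 1: a locking read on the highest id also locks the gap above it
START TRANSACTION;
SELECT * FROM comment WHERE com_id >= 3 FOR UPDATE;

-- Session 2: the new row falls into that locked gap, so the INSERT waits
-- until session 1 commits or rolls back
INSERT INTO comment (com_id) VALUES (4);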

Ensure MySQL transaction fails on COMMIT

I want to test what a script does when a MySQL transaction fails, so I want to simulate a transaction failure on COMMIT.
Imagine the script opens a transaction, makes some queries, and then commits the transaction. I want to inject some extra query into the transaction to ensure the transaction will fail on COMMIT, so I can test how the script recovers from the transaction failure.
I don't want to use an explicit ROLLBACK. I want to simulate some real-life case where a commit fails. The exact DB structure is unimportant; it can be adapted to fit the solution.
Edit: I need the transaction to fail on COMMIT, not on some prior query. Therefore answers that roll back prior to COMMIT are not what I want, such as this one: How to simulate a transaction failure?
My unsuccessful attempts:
Inserting a row with an invalid PK or FK fails immediately on the INSERT. Temporarily disabling FK checks with FOREIGN_KEY_CHECKS=0 won't help, as they are not rechecked on COMMIT. In PostgreSQL, deferrable constraints would help, but not in MySQL.
Opening two parallel transactions and inserting a row with the same PK (or any column with a unique constraint) in both transactions blocks the later transaction on the INSERT, waiting for the former transaction. So the transaction fails on the INSERT, not on the COMMIT.
So I believe you can try the following:
Two-phase commit: use two databases (first and second) and make use of two-phase commit. When the commit statement is supposed to be executed, you can shut down the second database. This way your commit operation will fail and the transaction will roll back.
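One way to exercise the same idea on a single server is MySQL's XA syntax, which splits prepare and commit into separate statements, so a failure injected between the two phases shows up at the commit step. This is only a sketch: the table is a placeholder, and the exact recovery behaviour of a prepared XA transaction (XA RECOVER, whether it survives a restart) depends on the MySQL version.
XA START 'trx1';
INSERT INTO emp (emp_id) VALUES (99);   -- any statement; table and value are placeholders
XA END 'trx1';
XA PREPARE 'trx1';
-- ...kill or shut down the server here...
XA COMMIT 'trx1';                       -- the failure now surfaces at the commit step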
Alternatively: you execute several inserts and, before your COMMIT, the database server dies. A transaction fails if the server never receives the COMMIT.
Hopefully it helps!

FOR UPDATE doesn't seem to lock the row in MySql InnoDB

MySQL = v5.6
Table engine = InnoDB
I have one mysql cli open. I run:
START TRANSACTION;
SELECT id FROM my_table WHERE id=1 FOR UPDATE;
I then have a second cli open and run:
SELECT id FROM my_table WHERE id=1;
I expected it to wait until I either committed or rolled back the first transaction but it doesn't, it just brings back the row straight away as if no row-locking had occurred.
I did another test where I updated a status field in the first cli and I couldn't see that change in the 2nd cli until I committed the transaction, proving the transactions are actually working.
Am I misunderstanding FOR UPDATE or doing something wrong?
Update:
I needed FOR UPDATE on the second SELECT query as well.
The behaviour you saw is valid. With "MVCC", different connections can see different versions of the row(s).
The first connection grabbed a type of lock that prevents writes, but not reads. If the second connection had done FOR UPDATE or INSERT or other "write" type of operation, it would have been either delayed waiting for the lock to be released, or deadlocked. (A deadlock would require other locks going on also.)
Common Pattern
BEGIN;
SELECT ... FOR UPDATE; -- the row(s) you will update in this transaction
-- ... miscellaneous work ...
UPDATE...; -- those row(s).
COMMIT;
If two threads run that code at the "same" time on the same row(s), the second one will stall at the SELECT ... FOR UPDATE. After the first thread finishes, the SELECT will run, getting the new values. All is well.
Meanwhile, other threads can SELECT (without for update) and get some value. Think of these threads as getting the value before or after the transaction, depending on the exact timing of all the threads. The important thing is that these 'other' threads will see a consistent view of the data -- either none of the updates in that transaction have been applied, or all have been applied. This is what "Atomic" means.
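Putting the two sessions from the question side by side, with the fix from the update applied to the second one:
-- Session 1
START TRANSACTION;
SELECT id FROM my_table WHERE id=1 FOR UPDATE;   -- exclusive lock on the row

-- Session 2, while session 1 is still open
SELECT id FROM my_table WHERE id=1;              -- consistent (MVCC) read: returns immediately
SELECT id FROM my_table WHERE id=1 FOR UPDATE;   -- locking read: blocks until session 1 commits or rolls back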

Do "SELECT ... LOCK IN SHARE MODE" and "SELECT ... FOR UPDATE" have to be inside of a transaction?

I'm reading the documentation for these commands and am confused. The descriptions for the commands mention transactions:
SELECT ... LOCK IN SHARE MODE sets a shared mode lock on any rows that
are read. Other sessions can read the rows, but cannot modify them
until your transaction commits. If any of these rows were changed by
another transaction that has not yet committed, your query waits until
that transaction ends and then uses the latest values.
For index records the search encounters, SELECT ... FOR UPDATE blocks
other sessions from doing SELECT ... LOCK IN SHARE MODE or from
reading in certain transaction isolation levels. Consistent reads will
ignore any locks set on the records that exist in the read view. (Old
versions of a record cannot be locked; they will be reconstructed by
applying undo logs on an in-memory copy of the record.)
But then the examples don't show transactions being used. Running a test command such as select * from users for update; without a transaction doesn't result in any errors (it works). Does this mean transactions don't have to be used with these commands? If so, is there any advantage to putting these commands inside of a transaction?
In InnoDB, each query effectively runs inside a transaction. If you don't start a transaction explicitly (with START TRANSACTION or by setting autocommit off), each statement is committed as soon as it finishes. This means that if you are not in a transaction, the lock acquired with SELECT ... LOCK IN SHARE MODE will be released as soon as the query completes. Nothing prevents you from doing this; it just doesn't make much sense to use locks outside of a transaction, because these locks are there to guarantee that the value you select won't change before a later query you are going to execute (for example, if you want to insert or update data in one table based on the values in another).
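For example, a sketch of the "insert into one table based on the values in another" case (the table and column names are made up), where the shared lock only makes sense because both statements sit inside one transaction:
START TRANSACTION;
-- Other sessions can still read this row, but cannot change it until we commit
SELECT credits FROM accounts WHERE user_id = 1 LOCK IN SHARE MODE;
-- Record the value we just read, knowing it cannot have changed in between
INSERT INTO credit_audit (user_id, credits_at_audit)
  SELECT user_id, credits FROM accounts WHERE user_id = 1;
COMMIT;   -- the shared lock is released here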
A transaction ensures that all the commands it contains will either run successfully or be rolled back.
These types of select statements affect other transactions in other sessions. So basically wrapping these in transactions is only a matter of whether you are selecting the data as part of a larger set of commands.
If you only want to select the data, use the shared lock or no lock at all; there is no need to begin a transaction.

While in a transaction, how can reads to an affected row be prevented until the transaction is done?

I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Provided an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and given the following operation:
BEGIN WORK;
SELECT * FROM users WHERE userID=1;
UPDATE users SET credits=100 WHERE userID=1;
COMMIT;
I would like to make sure that as soon as the select inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to be finished if it is in process, but SELECTs simply will read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished to return the values?
The reason I'm looking for that is that at some point, and with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish to use the new value, because otherwise it could lead to irreversible desync issues.
Note that I don't want to lock the entire table for reads, just the specific row.
Also, I could add a boolean "locked" field to the tables and set it to 1 every time I'm starting a transaction but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through mysql directly.
I found a workaround, specifically:
SELECT ... LOCK IN SHARE MODE sets a shared mode lock on the rows
read. A shared mode lock enables other sessions to read the rows but
not to modify them. The rows read are the latest available, so if they
belong to another transaction that has not yet committed, the read
blocks until that transaction ends.
(Source)
It seems that one can just include LOCK IN SHARE MODE in the critical SELECT statements that rely on transactional data and they will indeed wait for current transactions to finish before retrieving the row/s. For this to work the transaction has to use FOR UPDATE explicitly (as opposed to the original example I gave). E.g., given the following:
BEGIN WORK;
SELECT * FROM users WHERE userID=1 FOR UPDATE;
UPDATE users SET credits=100 WHERE userID=1;
COMMIT;
Anywhere else in the code I could use:
SELECT * FROM users WHERE userID=1 LOCK IN SHARE MODE;
Since this statement is not wrapped in a transaction, the lock is released immediately, thus having no impact on subsequent queries; but if the row involving userID=1 has been selected FOR UPDATE within a transaction, this statement will wait until that transaction is done, which is exactly what I was looking for.
You could try the SELECT ... FOR UPDATE locking read.
A SELECT ... FOR UPDATE reads the latest available data, setting exclusive locks on each row it reads. Thus, it sets the same locks a searched SQL UPDATE would set on the rows.
Please go through the following site: http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html