I start a transaction.
Then I need to roll it back.
Can I somehow get a list of the queries that get "discarded" this way?
(ps: of course I can log them beforehand; I was wondering if this could be done in a more "natural" way)
If you're on a recent MySQL 5.1, this should work:
SHOW ENGINE INNODB STATUS includes a list of active transactions for the InnoDB engine. Each is prefixed with a transaction id and a process id, and looks somewhat like this:
---TRANSACTION 0 290328284, ACTIVE 0 sec, process no 3195, OS thread id
34831 rollback of SQL statement
MySQL thread id 18272
<query may be here>
The MySQL thread id corresponds to the CONNECTION_ID() of your session, which you can also get from SHOW FULL PROCESSLIST or information_schema.processlist, so you can determine which transaction is yours. You'll have to parse the text and extract the query from it, if one is present.
If that's not enough, you can try something like SET @PROGRESS = @PROGRESS + 1 before each statement, and then SELECT @PROGRESS FROM DUAL after the rollback to find out how far the transaction got before it hit the rollback (user variables are not transactional, so the counter survives).
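For illustration, here is a minimal Python sketch of that counter idea (mysql-connector-python assumed; the accounts table and connection details are made up), relying on the fact that user variables survive a rollback:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="test")
cur = conn.cursor()

conn.start_transaction()
cur.execute("SET @progress = 0")
statements = [
    "UPDATE accounts SET balance = balance - 10 WHERE id = 1",  # hypothetical statements
    "UPDATE accounts SET balance = balance + 10 WHERE id = 2",
]
try:
    for stmt in statements:
        cur.execute("SET @progress = @progress + 1")  # user variables are not rolled back
        cur.execute(stmt)
    conn.commit()
except mysql.connector.Error:
    conn.rollback()
    cur.execute("SELECT @progress")  # still readable after the rollback
    print("statements attempted before the rollback:", cur.fetchone()[0])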
If you're using InnoDB, take a look at the InnoDB monitor and stderr. I think the best practice, though, is to log the queries in the application (server) itself, since that isn't dependent on the database platform.
Related
I did some queries without a commit. Then the application was stopped.
How can I display these open transactions and commit or cancel them?
There is no open transaction, MySQL will rollback the transaction upon disconnect.
You cannot commit the transaction (AFAIK).
You display threads using
SHOW FULL PROCESSLIST
See: http://dev.mysql.com/doc/refman/5.1/en/thread-information.html
It will not help you, because you cannot commit a transaction from a broken connection.
What happens when a connection breaks
From the MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/mysql-tips.html
4.5.1.6.3. Disabling mysql Auto-Reconnect
If the mysql client loses its connection to the server while sending a statement, it immediately and automatically tries to reconnect once to the server and send the statement again. However, even if mysql succeeds in reconnecting, your first connection has ended and all your previous session objects and settings are lost: temporary tables, the autocommit mode, and user-defined and session variables. Also, any current transaction rolls back.
This behavior may be dangerous for you, as in the manual's example where the server was shut down and restarted between the first and second statements without you knowing it.
Also see: http://dev.mysql.com/doc/refman/5.0/en/auto-reconnect.html
How to diagnose and fix this
To check for auto-reconnection:
If an automatic reconnection does occur (for example, as a result of calling mysql_ping()), there is no explicit indication of it. To check for reconnection, call mysql_thread_id() to get the original connection identifier before calling mysql_ping(), then call mysql_thread_id() again to see whether the identifier has changed.
Make sure you keep your last query (transaction) in the client so that you can resubmit it if need be.
Also disable auto-reconnect mode, because it is dangerous; implement your own reconnect instead, so that you know when a drop occurs and you can resubmit the query.
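As a rough sketch of that "detect the reconnect yourself" advice, in Python with mysql-connector-python (the manual talks about the C API's mysql_ping()/mysql_thread_id(); this is just the same idea, connection details made up):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app", password="secret",
                               database="test", autocommit=False)

def connection_id(conn):
    cur = conn.cursor()
    cur.execute("SELECT CONNECTION_ID()")
    (cid,) = cur.fetchone()
    cur.close()
    return cid

original_id = connection_id(conn)
conn.ping(reconnect=True, attempts=1, delay=0)  # may silently re-establish the connection
if connection_id(conn) != original_id:
    # the old session is gone, and with it any open transaction: replay the last transaction
    print("connection was re-established; resubmit the transaction")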
Although there won't be any remaining transaction in this case, as @Johan said, you can still see the current transaction list in InnoDB with the query below if you want.
SELECT * FROM information_schema.innodb_trx\G
From the document:
The INNODB_TRX table contains information about every transaction (excluding read-only transactions) currently executing inside InnoDB, including whether the transaction is waiting for a lock, when the transaction started, and the SQL statement the transaction is executing, if any.
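For example, a small sketch (mysql-connector-python assumed, connection details made up) that picks out the row in innodb_trx belonging to the current session:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app", password="secret")
cur = conn.cursor(dictionary=True)
cur.execute("""
    SELECT trx_id, trx_state, trx_started, trx_query
    FROM information_schema.innodb_trx
    WHERE trx_mysql_thread_id = CONNECTION_ID()
""")
for row in cur.fetchall():
    print(row["trx_id"], row["trx_state"], row["trx_started"], row["trx_query"])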
You can use show innodb status (or show engine innodb status for newer versions of mysql) to get a list of all the actions currently pending inside the InnoDB engine. Buried in the wall of output will be the transactions, and what internal process ID they're running under.
You won't be able to force a commit or rollback of those transactions, but you CAN kill the MySQL process running them, which essentially boils down to a rollback. It kills the process's connection and causes MySQL to clean up the mess it left.
Here's what you'd want to look for:
------------
TRANSACTIONS
------------
Trx id counter 0 140151
Purge done for trx's n:o < 0 134992 undo n:o < 0 0
History list length 10
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 17004, OS thread id 140621902116624
MySQL thread id 10594, query id 10269885 localhost marc
show innodb status
In this case, there's just one connection to the InnoDB engine right now (my login, running the show query). If that line were an actual connection/stuck transaction you'd want to terminate, you'd then do a kill 10594.
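If you prefer not to parse the status text, and the server is new enough to have information_schema.innodb_trx, here is a hedged Python sketch (mysql-connector-python assumed; the 10-minute threshold is arbitrary, and KILL needs the appropriate privilege) that finds long-open InnoDB transactions and kills their threads, with the same effect as the kill 10594 above:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute("""
    SELECT trx_mysql_thread_id, trx_started
    FROM information_schema.innodb_trx
    WHERE trx_started < NOW() - INTERVAL 10 MINUTE
""")
for thread_id, started in cur.fetchall():
    print("killing thread", thread_id, "whose transaction started at", started)
    cur.execute("KILL %d" % int(thread_id))  # rolls the victim's transaction back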
By using this query you can see all open transactions.
List All:
SHOW FULL PROCESSLIST
If you want to kill a hung transaction, copy its Id from the process list and kill it by using this command:
KILL <id> -- e.g. KILL 16543
With this query below, you can check how many transactions are running currently:
mysql> SELECT count(*) FROM information_schema.innodb_trx;
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)
We are running into a very strange problem with disjoint concurrent PHP processes accessing the same table (using table locks).
There is no replication involved, we're working on a monolith with the mysqli-interface of PHP 5.6.40 (I know, upgrading is due, we're working on it).
Let's say the initial value of a field named "value" in table xyz is 0.
PHP-Process 1: Modifies the table
LOCK TABLE xyz WRITE;
UPDATE xyz SET value = 1;
UNLOCK TABLE xyz;
PHP-Process 2: Depends on a value in that table (e.g. a check for access rights)
SELECT value from xyz;
Now, if we manage to make Process 2 halt and wait for the lock to be released: on a local dev environment (XAMPP, MariaDB 10.1.x) everything is fine; it will get the value 1.
BUT on our production server (Debian Linux, MySQL 5.6.x) there seems to be a necessary wait period before the value materializes in query results:
An immediate SELECT statement delivers 0
sleep(1) then SELECT delivers 1
We always assumed that a) LOCK / UNLOCK would flush the tables, or b) a manual FLUSH TABLES xyz WITH READ LOCK would also flush caches, forcing a write to disk and generally ensuring that every following query from every other process yields the expected result.
What we tried so far:
FLUSH TABLES as mentioned - no result
Explicitly acquire a LOCK before executing the SELECT statement - no result
Just wait some time - yielded the result we are looking for, but this is a dirty, unreliable solution.
What do you think might be the cause? I was thinking of: the query cache not updating in time, or the underlying OS's paging not writing data back to disk in time / not invalidating the in-memory page of the table data.
Is there any way you know of to reliably ensure that consecutive reads of the data are consistent?
There are different default transaction isolation modes in the different MariaDB versions.
You have to set up the same mode if you expect the same result. It also seems odd to test on different MySQL versions.
https://mariadb.com/kb/en/mariadb-transactions-and-isolation-levels-for-sql-server-users/
Your second process may start its transaction long before the commit is actually issued.
If you do not want to dig into transaction isolation, just try doing a rollback before the select (but the correct solution is to determine exactly what isolation level your app requires).
ROLLBACK; -- may give an error, but that is okay
SELECT value from xyz;
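To make the suggested order concrete, a sketch in Python for illustration only (the original code is PHP/mysqli, so only the sequence of statements matters here, not the driver; connection details made up):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app", password="secret",
                               database="test", autocommit=False)
cur = conn.cursor()

try:
    conn.rollback()  # end whatever snapshot this session may still be holding; may be a no-op
except mysql.connector.Error:
    pass
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")  # or whatever level you settle on
cur.execute("SELECT value FROM xyz")
print(cur.fetchone())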
Software: Django 2.1.0, Python 3.7.1, MariaDB 10.3.8, Linux Ubuntu 18LTS
We recently added some load to a new application and started observing lots of deadlocks. After a lot of digging, I found out that the Django select_for_update query results in SQL with several subqueries (3 or 4). In all deadlocks I've seen so far, at least one of the transactions involves this SQL with multiple subqueries.
My question is... does select_for_update lock records from every table involved? In my case, would records from the main SELECT and from the other tables used by the subqueries get locked, or only records from the main SELECT?
From Django docs:
By default, select_for_update() locks all rows that are selected by the query. For example, rows of related objects specified in select_related() are locked in addition to rows of the queryset’s model.
However, I'm not using select_related(), at least not explicitly.
Summary of my app:
with transaction.atomic():
    ModelName.objects.select_for_update().filter(...)
    ...
    # update the record that is locked
    ...
50+ clients sending queries to the database concurrently
Some of those queries ask for the same record, meaning different transactions will run the same SQL at the same time.
After a lot of reading, I did the following to try to get the deadlock under control:
1- Try/catch exception error '1213' (deadlock). When this happens, wait 30 seconds and retry the query (a sketch of this retry is shown below). Here, I rely on the ROLLBACK done by the database engine.
Also, print output of SHOW ENGINE INNODB STATUS and SHOW PROCESSLIST. But SHOW PROCESSLIST doesn't give useful information.
2- Modify the Django select_for_update so that it doesn't build SQL with subqueries. Now the generated SQL contains a single WHERE with values and no subqueries.
Anything else that could be done to reduce the deadlocks?
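A minimal sketch of the retry from point 1 (assuming the ModelName placeholder from the summary above lives in a hypothetical myapp.models, and that the deadlock surfaces as an OperationalError carrying MySQL error code 1213):

import time
from django.db import transaction, OperationalError
from myapp.models import ModelName  # hypothetical app path

def update_with_retry(pk, attempts=3, wait_seconds=30):
    for attempt in range(attempts):
        try:
            with transaction.atomic():
                obj = ModelName.objects.select_for_update().get(pk=pk)
                # ... modify and save the locked record ...
                obj.save()
            return
        except OperationalError as exc:
            if exc.args and exc.args[0] == 1213 and attempt < attempts - 1:
                time.sleep(wait_seconds)  # InnoDB has already rolled the victim back
                continue
            raise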
If you have select_for_update inside a transaction, the lock will only be released when the whole transaction commits or rolls back. With nowait set to True, the other concurrent requests will immediately fail with:
(3572, 'Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.')
So if we can't use optimistic locks and cannot make transactions shorter, we can set nowait=True in our select_for_update, and we will see a lot of failures if our assumptions are correct. Here we can just catch the lock failures and retry them with a backoff strategy. This is based on the assumption that everybody is trying to write to the same thing, like an auction item or a ticket booking within a short window of time. If that is not the case, consider changing the DB design a bit to make deadlocks less common.
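A sketch of that nowait-plus-backoff idea (same hypothetical ModelName as above; NOWAIT needs a backend that supports it, e.g. MySQL 8.0+ or MariaDB 10.3+, and the delays are arbitrary):

import random
import time
from django.db import transaction, DatabaseError
from myapp.models import ModelName  # hypothetical app path

def update_with_backoff(pk, attempts=5):
    delay = 0.1
    for attempt in range(attempts):
        try:
            with transaction.atomic():
                obj = ModelName.objects.select_for_update(nowait=True).get(pk=pk)
                # ... short critical section: update and save ...
                obj.save()
            return True
        except DatabaseError:
            # covers both the NOWAIT failure (e.g. error 3572) and a deadlock (1213)
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
    return False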
I have a simple use case to solve. Imagine that somebody tells you "hey, this particular set of queries is not transactional!" and your job is to disprove that statement. How could one do that?
Assumptions:
the user is able to reproduce this by "clicking on one magic button", which triggers, let's say, 3 consecutive INSERTs
we have access to mysql client and all privileges
we do not have access to the code base of the application, so no integration tests are possible to verify this from the code perspective
we are using MySQL server with InnoDb
we can tweak MySQL configuration as we want (slow queries, etc.)
There's a transaction section in the output of:
SHOW ENGINE INNODB STATUS\G
which looks like this (that's from my local MySQL, currently not running any queries):
TRANSACTIONS
------------
Trx id counter 900
Purge done for trx's n:o < 0 undo n:o < 0
History list length 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 47, OS thread handle 0x7fc8b85d3700, query id 120 localhost root
SHOW ENGINE INNODB STATUS
I don't know if you can actively monitor this information so that you see it exactly at the moment of your 3 insert operations. You can probably use that last bullet of yours (the slow query log) here...
In addition, MySQL has command counters. These counters can be accessed via:
SHOW GLOBAL STATUS LIKE "COM\_%"
Each execution of a command increments the counter associated with it. Transaction related counters are Com_begin, Com_commit and Com_rollback, so you can execute your code and monitor those counters.
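For example, a small Python sketch (mysql-connector-python assumed, connection details made up) that snapshots the counters, waits while you click the magic button, and prints the difference; note these are global counters, so other sessions' commands are included too:

import mysql.connector

def com_counters(cur):
    cur.execute("SHOW GLOBAL STATUS LIKE 'Com\\_%'")
    return {name: int(value) for name, value in cur.fetchall()}

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

before = com_counters(cur)
input("Click the magic button now, then press Enter...")
after = com_counters(cur)

for name in ("Com_begin", "Com_commit", "Com_rollback", "Com_insert"):
    print(name, after.get(name, 0) - before.get(name, 0))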
I occasionally run into a problem with my application which results in what I guess is an unfinished transaction that is neither committed nor rolled back. I first notice the problem the next time my application tries to start a transaction against the database.
My question is how to find out what queries have been executed within the transaction but not yet committed, what tables are affected, etc...? Basically helping me to track down what causes the problem.
I have the binary log enabled but according to documentation, a transaction is only written to the binary log when committed.
The InnoDB undo log is supposed to be written to an ibdata file contained in the same directory as the binlogs, and it is, but I can't say I've found any way of parsing it for this purpose.
SHOW PROCESSLIST shows my session with status SLEEP
SHOW INNODB STATUS:
...
...
---TRANSACTION 0 10661864, ACTIVE 4401 sec, process no 4831, OS thread id 3023358896
3 lock struct(s), heap size 320, undo log entries 40
MySQL thread id 2, query id 2419 localhost masteruser
Trx read view will not see trx with id >= 0 10661865, sees < 0 10661865
...
...
PS: I have the same question on ServerFault, but I guess this question is somewhere in between when it comes to classification, and I find that site has much lower activity than StackOverflow, so the chances of getting an answer feel higher here. Hope this is OK.
/Kristofer
You can find all the information you are looking for in the information schema. There are three tables (only if you're using the InnoDB plugin; reference: http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-plugin-installation.html):
INNODB_TRX
INNODB_LOCKS
INNODB_LOCK_WAITS
These tables will give you a picture of which transactions are running within your database and the queries within them, including which transaction is blocking which other transaction and which resources each one holds locks on.
Reference : http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-information-schema-transactions.html
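For example, a sketch (mysql-connector-python assumed, connection details made up) joining those tables to see which transaction is blocking which, using the column names from the InnoDB plugin documentation:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor(dictionary=True)
cur.execute("""
    SELECT r.trx_mysql_thread_id AS waiting_thread,
           r.trx_query           AS waiting_query,
           b.trx_mysql_thread_id AS blocking_thread,
           b.trx_query           AS blocking_query
    FROM information_schema.innodb_lock_waits w
    JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
""")
for row in cur.fetchall():
    print(row)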
You should start by enabling the general and slow query logs. You may want to apply the microslow patch to see slow queries that complete within 1 second.
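If the server is new enough to have the dynamic log switches (and you have the SUPER privilege), a short sketch of turning them on at runtime (mysql-connector-python assumed, connection details made up):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute("SET GLOBAL general_log = 'ON'")    # logs every statement, committed or not
cur.execute("SET GLOBAL slow_query_log = 'ON'")
cur.execute("SET GLOBAL long_query_time = 1")   # seconds; lower it to catch more queries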