I am using a SELECT ... FOR UPDATE statement to achieve row-level locking in my Spring Boot app. Database: MySQL 5.7.28, connector: MariaDB Java client 2.5.2, connection pool: HikariCP 2.7.9, Spring Boot version: 2.0.3.RELEASE.
Persistence is accomplished by the Spring JDBC Template, not by JPA. I am using Spring transaction management, annotation-based, by slapping the @Transactional annotation on my DAO methods. Transactional proxies are generated via AspectJ compile-time weaving (@EnableTransactionManagement(mode = AdviceMode.ASPECTJ)). I am sure that the transaction manager is configured correctly.
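For context, here is a simplified sketch of the kind of DAO method involved; the table, column, and method names are placeholders, not my real code:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

public class MyDao {

    private final JdbcTemplate jdbcTemplate;

    public MyDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public void updateWithLock(String id, String newStatus) {
        // The FOR UPDATE row lock is held until the surrounding transaction
        // commits or rolls back, so both statements must run on the same
        // transactional connection.
        jdbcTemplate.queryForList("SELECT id FROM my_table WHERE id IN (?) FOR UPDATE", id);
        jdbcTemplate.update("UPDATE my_table SET status = ? WHERE id = ?", newStatus, id);
    }
}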
I have written a couple of integration tests that check for a race condition when multiple threads compete to update the same row, which is supposed to be locked with SELECT ... FOR UPDATE.
The tests pass about 95% of the time; however, one test fails when the integration tests are executed in a particular sequence.
I am certain that when a test fails, the row lock is not acquired.
I have enabled the MySQL general query log on the server to help with troubleshooting.
Here is what I see when the first thread is executing the SELECT ... FOR UPDATE statement:
2020-01-26T12:54:06.681319Z 1219 Query set autocommit=0
2020-01-26T12:54:36.616097Z 1209 Query SELECT _listed_fields_ FROM _my_table_ WHERE id IN ('19qix6lvsfx') FOR UPDATE
It seems that autocommit is being set on the wrong connection object. Am I reading this right? What are those numbers 1219 and 1209?
When everything works right, the log looks like this:
2020-01-26T13:24:22.940787Z 1243 Query set autocommit=0
2020-01-26T13:24:36.515016Z 1243 Query SELECT _listed_fields_ FROM _my_table_ WHERE id IN ('19xbs7vv53r') FOR UPDATE
Any help will be greatly appreciated.
I dislike autocommit=0 because I am likely to forget to COMMIT eventually.
SELECT ... FOR UPDATE only applies if you are in a transaction (autocommit=0 means you are always in a transaction).
Those numbers are connection (thread) ids, so your snippet of the general log shows autocommit being set on connection 1219 while the SELECT ... FOR UPDATE runs on connection 1209; presumably all the connections are setting that value, perhaps repeatedly. Each connection's autocommit setting is independent and is unaffected by the setting on other connections.
Do you see any COMMITs in the log? Do you ever issue ROLLBACK?
I am using a Spring Boot application connected to Azure MySQL.
Around one of the method calls in the application there is a Spring around aspect that makes calls to the Azure MySQL database.
The following is the sequence of queries executed from the aspect and the method:
@Autowired
EntityManager entityManager;
From the aspect: insert into <table name> values ();
The following piece of code is used to execute this query:
EntityManager newEntityManager = entityManager.getEntityManagerFactory().createEntityManager();
String nativeSql = "insert into table1 values('1','abc')";
newEntityManager.getTransaction().begin();
try {
    // run the insert in its own, separate transaction
    newEntityManager.createNativeQuery(nativeSql).executeUpdate();
    newEntityManager.getTransaction().commit();
} catch (RuntimeException e) {
    newEntityManager.getTransaction().rollback();
} finally {
    newEntityManager.close();
}
Read calls to the database are done using JPA with the annotation
@Transactional(readOnly=true)
Next, the following piece of code is executed:
EntityManager newEntityManager = entityManager.getEntityManagerFactory().createEntityManager();
String nativeSql = "update table2 set status='dead' where name='abc'";
newEntityManager.getTransaction().begin();
try {
    newEntityManager.createNativeQuery(nativeSql).executeUpdate(); // error occurs at this line
    newEntityManager.getTransaction().commit();
} catch (RuntimeException e) {
    newEntityManager.getTransaction().rollback();
} finally {
    newEntityManager.close();
}
The following is the complete error:
2019-02-10 23:18:00.959 ERROR [bootstrap,c577f32a3b673745,c577f32a3b673745,false] 10628 --- [nio-8106-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : Cannot execute statement in a READ ONLY transaction.
But the same code works fine when the application is connected to a local MySQL (MariaDB in my case).
The code also works fine when connected to Azure MSSQL.
The error only occurs when connected to Azure MySQL.
Not sure this is the right solution, but I had the same problem with MariaDB and it was very confusing. I tried to track the state of the transaction using SHOW VARIABLES WHERE Variable_name LIKE 'tx_%', but it always showed tx_read_only as OFF, even when I actually ran a read-only transaction. So I don't know how to find out whether the transaction is read-only.
But I pinpointed my problem to a rather obscure scenario. Before the problematic read-write transaction that is reported to be read-only, I ran a different read-only transaction. Both used the same physical connection, and the "read-only-ness" somehow leaked. It only failed when I used Connection.getMetaData() and then metaData.getColumns(...) in the first read-only transaction; we needed to check the columns for a single table.
The problem did not appear when I switched the transaction reading the metadata to read-write, which makes sense if we suspect the read-only flag leaking into another (logical) connection. BTW: we use a combination of Hibernate and plain JDBC on top of a single Hikari connection pool, so this may be a factor.
Later we changed the way we read the metadata. We prepared a statement with a SELECT returning no rows and then asked the ResultSet for its metadata. Not only was this much faster for a single table (especially on Oracle, where Connection.getMetaData() took something like 5 minutes), it also did not cause the problem, not even when I read the metadata in a read-only transaction.
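To illustrate, a minimal JDBC sketch of that approach; my_table stands in for the real table name:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class ColumnMetadataSketch {

    // Read column names/types from the metadata of an intentionally empty
    // result set instead of calling Connection.getMetaData().
    public static void printColumns(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM my_table WHERE 1 = 0");
             ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
            }
        }
    }
}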
So this was strange; the problem occurred when:
A specific database was used (MariaDB; MySQL and other database types were fine).
The metadata was read using Connection.getMetaData() in a read-only transaction (these conditions had to be met together to trigger the failure).
The same wrapped (physical) connection was used for the next read-write transaction.
I checked that the very same JDBC connection was also used in the successful scenario, when the metadata was read from an empty SELECT result set in a read-only transaction; the following read-write transaction then had no problem. I thought the problem might be in the handling of the connection's metadata, so I added resultSet.close() for the result set returned by metaData.getColumns(...), but that did not help. In the end I avoided Connection.getMetaData() altogether.
So while I still don't know exactly where the problem is, I could work around it and make sure the connection works for the next read-write transaction, indicating no missing cleanup or read-only leak.
Another peculiarity: we use the SET TRANSACTION READ ONLY statement to start read-only transactions on both MySQL and MariaDB, and it should work the same way on both databases. When I switched to START TRANSACTION READ ONLY instead, it worked fine, even when I used the "wrong" way to access the table metadata.
But then, sorry, this does not explain why the OP has the problem on MySQL.
I am using 2 separate processes via multiprocessing in my application. Both have access to a MySQL database via SQLAlchemy Core (not the ORM). One process reads data from various sources and writes it to the database. The other process just reads the data from the database.
I have a query that gets the latest record from a table and displays its id. However, it always displays the first id from when I started the program rather than the latest inserted id (new rows are created every few seconds).
If I use a separate MySQL tool and run the query manually I get correct results, but SQLAlchemy always gives me stale results.
Since you can see the changes your writer process is making with another MySQL tool, that means your writer process is indeed committing the data (at least it is if you are using InnoDB).
InnoDB shows you the state of the database as of when you started your transaction. Whatever other tools you are using probably have an autocommit feature turned on where a new transaction is implicitly started following each query.
To see the changes in SQLAlchemy do as zzzeek suggests and change your monitoring/reader process to begin a new transaction.
One technique I've used to do this myself is to add autocommit=True to the execution_options of my queries, e.g.:
result = conn.execute(select([table]).where(table.c.id == 123).execution_options(autocommit=True))
Assuming you're using InnoDB, the data on your connection will appear "stale" for as long as you keep the current transaction running, or until you commit the other transaction. In order for one process to see the data from the other process, two things need to happen: 1. the transaction that created the new data needs to be committed, and 2. the current transaction, assuming it has already read some of that data, needs to be rolled back or committed and started again. See The InnoDB Transaction Model and Locking.
I have 6 scripts/tasks. Each of them starts a MySQL transaction, then does its job, which means SELECT/UPDATE/INSERT/DELETE on a MySQL database, then rolls back.
So if the database is at a given state S and I launch one task, when the task terminates the database is back at state S.
When I launch the scripts sequentially, everything works fine:
DB at state S
task 1
DB at state S
task 2
DB at state S
...
...
task 6
DB at state S
But I'd like to speed up the process with multi-threading, launching the scripts in parallel.
DB at state S
6 tasks at the same time
DB at state S
Some tasks randomly fail, and I sometimes get this error:
SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
I don't understand; I thought transactions were meant to handle exactly this. Is there something I'm missing? Any experience, advice, or clue is welcome.
The MySQL configuration is:
innodb_lock_wait_timeout = 500
transaction-isolation = SERIALIZABLE
and I add AUTOCOMMIT = 0 at the beginning of each session.
PS: The database was built and used under the REPEATABLE READ isolation level which I changed afterwards.
You can prevent deadlocks by ensuring that every transaction/process does a SELECT ... FOR UPDATE on all the required data/tables, with the same ORDER BY in all cases and with the tables themselves accessed in the same order (with at least the REPEATABLE READ isolation level in MySQL).
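To illustrate the idea, a minimal JDBC sketch, assuming a hypothetical my_table with a numeric primary key id; the point is only that every task locks the rows it needs in the same deterministic order before touching them:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderedLockingSketch {

    static void runTask(Connection conn, long idA, long idB) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(
                "SELECT id FROM my_table WHERE id IN (?, ?) ORDER BY id FOR UPDATE")) {
            // Lock both rows up front, always by ascending id, so concurrent
            // tasks cannot end up waiting for each other's locks in a cycle.
            lock.setLong(1, Math.min(idA, idB));
            lock.setLong(2, Math.max(idA, idB));
            lock.executeQuery();

            // ... SELECT/UPDATE/INSERT/DELETE work on the locked rows ...

            conn.rollback(); // the tasks deliberately roll back to restore state S
        }
    }
}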
Apart from that, isolation levels and transactions are not meant to handle deadlocks; it is rather the other way around: they are the reason deadlocks exist. If you encounter a deadlock, chances are good that without transactions you would have ended up with an inconsistent dataset (which might be much more serious; if not, you might not need transactions at all).
From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all rows in the table, and this was done thousands of times. The column was indexed, so the corresponding index was probably updated many times as well.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try to drop the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads, use the following command in the MySQL command-line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to the MySQL command-line interface.
This will display a list of threads with their corresponding ids and execution times, so you can KILL the threads that are taking too long to execute.
In phpMyAdmin there is a button for stopping threads via KILL; if you are using the command-line interface, just use the KILL command followed by the thread id, as in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, run the first query repeatedly and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
The root cause of the issue might also be in your code; check the related functions, especially their annotations, if you use a JPA implementation such as Hibernate.
For example, as described here, the misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
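For illustration only, a hedged sketch of that kind of misuse; the service and repository names here are hypothetical, not taken from any real code:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Placeholder repository; both methods touch the same row.
interface AccountRepository {
    void lockAndUpdate(long id); // e.g. SELECT ... FOR UPDATE plus an UPDATE
    void touchSameRow(long id);  // another UPDATE hitting the same row
}

@Service
public class AccountService {

    @Autowired
    private AccountRepository repository;

    @Autowired
    private AuditService auditService;

    @Transactional
    public void update(long id) {
        repository.lockAndUpdate(id); // outer transaction now holds the row lock
        auditService.audit(id);       // REQUIRES_NEW runs on a second connection ...
    }
}

@Service
class AuditService {

    @Autowired
    private AccountRepository repository;

    // ... which waits for the same row lock; the outer transaction will not
    // release it until update() returns, so this ends in
    // "Lock wait timeout exceeded".
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void audit(long id) {
        repository.touchSameRow(id);
    }
}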
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two queries for a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If the lock cannot be acquired, it is retried for some time; if it is still not obtainable, the lock wait timeout error is thrown. The reason you cannot acquire the lock is that you are not closing the connection: when you try to get the lock a second time, you cannot acquire it because your previous connection is still open and still holding the lock.
Solution: close the connection or call setAutoCommit(true) (depending on your design) to release the lock.
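A minimal plain-JDBC sketch of that advice; my_table and the column names are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReleaseLocksSketch {

    static void updateRow(DataSource dataSource, long id) throws SQLException {
        // try-with-resources guarantees the connection is closed (and its row
        // locks released) even if the update throws.
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                         conn.prepareStatement("UPDATE my_table SET status = ? WHERE id = ?")) {
                ps.setString(1, "done");
                ps.setLong(2, id);
                ps.executeUpdate();
                conn.commit();   // releases the row lock
            } catch (SQLException e) {
                conn.rollback(); // also releases the row lock
                throw e;
            }
        }
    }
}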
Restart MySQL, it works fine.
BUT beware that if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is not a long-term solution for production.
Beware: restarting can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see whether anything can be done there to speed up the query (indexes, complex tests, ...).
Go to the process list in MySQL.
There you can see whether a task is still running.
Kill that particular process or wait until it completes.
I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the MySQL process from Task Manager. If that doesn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (cascade-deleting several related records; this streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need RELOAD permission" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records via the cPanel MySQL access on the web and got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC Access-to-MySQL connection.
Issue in my case: some updates were made to some rows within a transaction, and before that transaction was committed the same rows were being updated elsewhere, outside the transaction. Ensuring that all updates to those rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing the DELETE to a TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and restart them.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was trying to insert "user browser agent data" into a VARCHAR(255) column and kept hitting this lock; when I changed the column to TEXT(255), it was fixed.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.