We are seeing strange behaviour in our DB transactions - they are not behaving atomically. We use MySQL 5.6.25 with InnoDB, EclipseLink 2.5.2 as the JPA provider and HikariCP 2.6.2 as the connection pool.
This problem surfaces when EclipseLink fails to acquire a connection from the pool during an entityManager.flush() call. For some time we were swallowing this exception, because the entry to a particular table was being made on a best-effort basis - a sort of audit mode, you could say. However, this led to cases where only part of the transaction was committed - out of 5 entries, only 1, 2 or 3 were persisted.
To be clear, here is the flow of events:
tx.begin();
em.persist(entity1);
try {
    em.persist(entity2);
    em.flush(); // ---> this is where connection acquisition fails
} catch (Throwable t) {
    // do nothing, except log
}
em.persist(entity3);
em.flush();
em.persist(entity4);
em.flush();
em.persist(entity5);
em.flush();
em.persist(entity6);
tx.commit();
We are seeing transactions committed up to entity3, entity4 or entity5, when connection acquisition fails again at some point in the later flushes.
Can anyone explain how exactly this is happening?
The main problem you face is that the connection is not available. Exceptions of that kind must lead to a rollback of the transaction.
Catching those exceptions without handling them changes the behaviour of the transaction. The exception during the first em.flush() also throws away the earlier em.persist(entity1), which you did not want to lose.
So the solution is to add em.flush() before the try, to make sure that persisting entity1 is either guaranteed or throws an exception that rolls back the complete transaction.
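A minimal sketch of that workaround, reusing the flow from the question (entity names as above):

tx.begin();
em.persist(entity1);
em.flush(); // either entity1 is written as part of this transaction, or the exception propagates and the whole transaction is rolled back
try {
    em.persist(entity2); // the best-effort "audit" entry
    em.flush();
} catch (Throwable t) {
    // log only - entity1 is no longer silently lost if this flush fails
}
em.persist(entity3);
em.flush();
// ... continue as in the original flow ...
tx.commit();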
I would not recommend this kind of solution though.
If persisting entity2 is optional, then the usual approach is to do it in a separate transaction, which means the system will briefly need an additional DB connection for it.
How do you create a separate transaction? In EJB you would use a method annotated with REQUIRES_NEW. I am not sure what kind of transaction management you are using here, but there should be a way to create this kind of separate transaction (not to be confused with nested transactions!).
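For illustration only, a hypothetical EJB sketch (AuditService and AuditEntry are made-up names; the annotations are the standard javax.ejb / javax.persistence ones):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class AuditService {

    @PersistenceContext
    private EntityManager em;

    // The container suspends the caller's transaction and opens a new one just for this method.
    // If this insert fails, only the new transaction rolls back; the caller's transaction is untouched.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void writeAuditEntry(AuditEntry entry) {
        em.persist(entry);
    }
}

The caller then wraps the call in its own try/catch and logs a failure, instead of swallowing an exception inside the main transaction.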
Related
I am using a Spring Boot application connecting to Azure MySQL.
Around a method call inside the application there is a Spring @Around aspect which makes a call to the Azure MySQL database.
Following is the sequence of queries executed from the aspect and the method.
@Autowired
EntityManager entityManager;
From the aspect: insert into <table name> values ();
For executing this query the following piece of code is used:
EntityManager newEntityManager = entityManager.getEntityManagerFactory().createEntityManager();
String nativeSql = "insert into table1 values('1','abc')";
newEntityManager.getTransaction().begin();
try {
    newEntityManager.createNativeQuery(nativeSql).executeUpdate();
    entityManager.getTransaction().commit();
} catch (RuntimeException e) {
    newEntityManager.getTransaction().rollback();
} finally {
    newEntityManager.close();
}
Read calls are done on the database using JPA with the annotation
@Transactional(readOnly=true)
Next, the following piece of code is executed:
EntityManager newEntityManager = entityManager.getEntityManagerFactory().createEntityManager();
String nativeSql = "update table2 set status='dead' where name='abc'";
newEntityManager.getTransaction().begin();
try {
    newEntityManager.createNativeQuery(nativeSql).executeUpdate(); // error occurs at this line
    entityManager.getTransaction().commit();
} catch (RuntimeException e) {
    newEntityManager.getTransaction().rollback();
} finally {
    newEntityManager.close();
}
Following is the complete error
2019-02-10 23:18:00.959 ERROR [bootstrap,c577f32a3b673745,c577f32a3b673745,false] 10628 --- [nio-8106-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : Cannot execute statement in a READ ONLY transaction.
But the same code works fine when the application is connected to a local MySQL (MariaDB in my case).
The code also works fine when connected to Azure MSSQL.
The error occurs only when connected to Azure MySQL.
Not sure it's the right solution, but I had the same problem with MariaDB and it was very confusing. I tried to track the state of the transaction using show variables WHERE Variable_name like 'tx_%', but it always showed tx_read_only as OFF - even when I actually ran a read-only transaction. So I don't know how to find out whether a transaction is read-only.
But I pinpointed my problem to a rather obscure scenario. Before the problematic read-write transaction that was reported as read-only, I ran a different read-only transaction. Both used the same physical connection and the "read-only-ness" somehow leaked. But it failed when I used Connection.getMetaData() and then metaData.getColumns(...) in the first RO transaction. We needed to check the columns of a single table.
The problem did not appear when I switched the transaction reading the metadata to read-write, which makes sense if we suspect the read-only flag leaking to another (logical) connection. BTW: we use a combination of Hibernate and plain JDBC on top of a single HikariCP connection pool, so this may be a factor.
Later we changed the way we read the metadata. We prepared a statement with a SELECT returning nothing and then asked for resultSet.getMetaData(). This was not only much faster for a single table (especially on Oracle, where it took something like 5 minutes using Connection.getMetaData()) - it also did not cause the problem, not even when I read the metadata in a read-only transaction.
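A sketch of that trick with plain JDBC (my_table is just a placeholder):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class ColumnMetadataExample {
    static void printColumns(Connection connection) throws SQLException {
        // A SELECT that can never return rows still carries the full column metadata.
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT * FROM my_table WHERE 1 = 0");
             ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
            }
        }
    }
}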
So this was strange - the problem occurred when:
A specific database was used (MariaDB; MySQL was fine, and so were other database types).
I read the metadata using Connection.getMetaData() in a read-only transaction (both conditions must be met to ensure the failure).
The same wrapped (physical) connection was then used for the next read-write transaction.
I checked that the very same JDBC connection was indeed used in the successful scenario as well, where the metadata was read from an empty SELECT result set in a read-only transaction - no problem with the following read-write transaction. I thought the problem could be in how the connection's metadata was handled, so I added resultSet.close() on the result set returned by MetaData.getColumns(), but that did not help. In the end I avoided Connection.getMetaData() altogether.
So while I still don't know exactly where the problem is, I could work around it and make sure the connection works for the next read-write transaction, indicating no missing cleanup or read-only leak.
Another peculiarity: we use the SET TRANSACTION READ ONLY statement to start read-only transactions for both MySQL and MariaDB. It should work the same way on both databases. When I switched to START TRANSACTION READ ONLY instead, it worked fine - even when I used the "wrong" way to access the table metadata.
But then, sorry, it does not explain why the OP has the problem on MySQL.
I have a general understanding question about how Slick and the database manage asynchronous operations. Say I compose a query, or an action, like this:
(for {
  users <- UserDAO.findUsersAction(usersInput.map(_.email))
  addToInventoriesResult <- insertOrUpdate(inventoryInput, user)
  deleteInventoryToUsersResult <- inventoresToUsers.filter(_.inventoryUuid === inventoryInput.uuid).delete if addToInventoriesResult == 1
  addToInventoryToUsersResult <- inventoresToUsers ++= users.map(u => DBInventoryToUser(inventoryInput.uuid, u.uuid)) if addToInventoriesResult == 1
} yield (addToInventoriesResult)).transactionally
Is there a possibility that another user can, for example, remove the users just after the first action UserDAO.findUsersAction(usersInput.map(_.email)) is executed, but before the rest, such that the insert will fail (because of a foreign key error)? Or a scenario that leads to a lost update, like: transaction A reads data, then transaction B updates that data, then transaction A does an update based on what it read - it will not see B's update and will overwrite it.
I think this probably depends on the database implementation or maybe JDBC, as this is sent to the database as a block of SQL, but maybe Slick plays a role in this. I'm using MySQL.
In case there are synchronisation issues here, what is the best way to solve them? I have read about approaches like a background queue that processes the operations sequentially (as semantic units), but wouldn't this partly remove the benefit of being able to access the database asynchronously, and hurt performance?
First of all, if the underlying database driver is blocking (the case with JDBC-based drivers) then Slick cannot deliver async performance in the truly non-blocking sense of the word (i.e. a thread will be consumed and blocked for however long it takes for a given query to complete).
There's been talk of implementing non-blocking drivers for Oracle and SQL Server (under a paid Typesafe subscription) but that's not happening any time soon AFAICT. There are a couple of projects that do provide non-blocking drivers for Postgres and MySQL, but YMMV, still early days.
With that out of the way, when you call transactionally Slick takes the batch of queries to execute and wraps them in a try-catch block with the underlying connection's autocommit flag set to false. Once the queries have executed successfully the transaction is committed and autocommit is set back to its default, true. In the event an Exception is thrown, the connection's rollback method is called. Just standard JDBC session boilerplate that Slick conveniently abstracts away.
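Roughly the kind of JDBC boilerplate being abstracted (a sketch, not Slick's actual implementation; dataSource stands for whatever provides the connection):

Connection conn = dataSource.getConnection();
try {
    conn.setAutoCommit(false);
    // ... execute the composed statements on this connection ...
    conn.commit();
} catch (Exception e) {
    conn.rollback();
    throw e;
} finally {
    conn.setAutoCommit(true); // back to the default
    conn.close();
}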
As for your scenario of a user being deleted mid-transaction and handling that correctly, that's the job of the underlying database/driver.
I'm benchmarking MySQL under different isolation levels.
For the case of SERIALIZABLE I frequently get this error: "Deadlock found when trying to get lock; try restarting transaction at the client side".
Reading http://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html didn't help me much.
I have the following questions:
What is the state of the database when I get this message? Is everything frozen and the system expects me to do something, or has my transaction already been aborted and I am just being informed about it?
I'm using the JDBC driver to connect to MySQL. Supposing my policy is to re-issue such failing transactions, do I need to call connection.rollback(), or has MySQL already done that for me?
In the case of a deadlock, MySQL will detect it and automatically roll back transactions as necessary to break the deadlock. It favors rolling back the smaller transaction (measured by affected rows).
If your transaction is rolled back, it is assumed that you will re-issue the transaction. MySQL does not do it for you.
When you receive such an alert, MySQL is not waiting on anything. It has already performed the rollback.
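Under those assumptions, a minimal JDBC retry sketch (MySQL reports a deadlock as vendor error code 1213 / SQLSTATE 40001; doWork stands for whatever your transaction body does):

static void runWithRetry(Connection connection, int maxAttempts) throws SQLException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            connection.setAutoCommit(false);
            doWork(connection);      // your inserts/updates
            connection.commit();
            return;                  // success
        } catch (SQLException e) {
            connection.rollback();   // harmless even though the victim was already rolled back server-side
            boolean deadlock = e.getErrorCode() == 1213 || "40001".equals(e.getSQLState());
            if (!deadlock || attempt == maxAttempts) {
                throw e;             // not a deadlock, or out of retries
            }
            // otherwise loop and re-issue the whole transaction
        }
    }
}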
I'm using JDBC with MySQL. I have a pretty complex series of inserts and updates that I'm doing in a single transaction. This seems to work for the most part, but about 1% of the time I find that the data in one of my tables is in an inconsistent state.
I'm rolling back the transaction if an error occurs, but am not sure how to start debugging. My setup generally looks like:
try {
    conn.setAutoCommit(false);
    PreparedStatement stmt1 = conn.prepareStatement("insert into table1");
    stmt1.executeUpdate();
    stmt1.close();
    PreparedStatement stmt2 = conn.prepareStatement("update table2");
    stmt2.executeUpdate();
    stmt2.close();
    // ... more statements ...
    conn.commit();
}
catch (Exception ex) {
    conn.rollback();
}
I'm using a 2010 version of MySQL. I might try updating that, but I have a feeling it's more likely something in my application code that's causing the inconsistency.
Are there any debugging tools at the database level that might help? Any other pointers? I'm using JDBC with all default settings; I wonder if there is a stricter transaction isolation level I need to use for this kind of scenario.
Thanks
----- Note -----
All my tables are InnoDB.
Hm... interesting. Yes, it should work. We have used really huge transactions across multiple tables many times and never experienced anything strange...
Are you sure it is not you who is producing the inconsistency (whatever that means here - you didn't specify), by simply inserting/updating the wrong things? :-)
Just an idea - we ran into this several times: deadlock resolution. DB servers have to handle that. The chance of a deadlock is higher if you have several parallel threads and the transaction blocks manipulate more tables. In that case some of your transactions can be aborted (and rolled back) by the DB server itself, and those transactions will end with an error.
The code you wrote above only rolls back in the exception case (an aborted transaction has already been rolled back, so that doesn't do much...), but have you tried to print/log the exceptions? If not, you should.
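For example, even something this small in your catch block would make a server-side rollback visible (log stands for whatever logging facility you use):

catch (Exception ex) {
    // If the server aborted the transaction (e.g. as a deadlock victim),
    // this log line is the only place you will ever see the reason.
    log.error("Transaction failed, rolling back", ex);
    conn.rollback();
}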
Of course transactions run separated from each other, but this could explain why you experience this strange behaviour in only 1-2% of the cases...
You should check the logs of your MySQL server too; it is also possible that the server itself fails for some reason. One more tip: you could try running "mysqltop" (or "mtop" - I hope I remember the name of the tool correctly). It can monitor and show you what happens inside the DB server. Although it is mostly used to track the performance of your SQL, it also shows failures, so running it could help you out too.
Perhaps you use DDL (create table, alter table, and so on) in your statements?
I am not sure about MySQL but it may not be able to roll back DDL statements.
For example:
PostgreSQL can roll back DDL,
Oracle performs a commit before executing DDL.
See here: http://dev.mysql.com/doc/refman/5.0/en/cannot-roll-back.html
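A hypothetical JDBC sketch of how DDL can silently break atomicity on MySQL (table names are made up):

conn.setAutoCommit(false);
Statement st = conn.createStatement();
st.executeUpdate("insert into table1 (id) values (1)");
// On MySQL this DDL causes an implicit commit of the transaction so far:
st.executeUpdate("create table scratch_tmp (id int)");
conn.rollback(); // too late - the insert above has already been committed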
In our applications we don't use either ADO.NET transactions or SQL Server transactions in stored procedures, and now we are getting the error below on our website when multiple people are using it.
Transaction (Process ID 73) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Is this error due to the lack of explicit transactions? I thought consistency would be handled by the DB itself.
And one thing I noticed: the SQLCommand.Timeout property is set to 10000. Could this be an issue for the error?
I am trying to solve this issue ASAP. Please help.
EDIT
I saw the IsolationLevel property of the ADO.NET transaction. So what if I use an ADO.NET transaction with a proper IsolationLevel, like "ReadUncommitted" while reading and "Serializable" while writing?
Every SQL DML (INSERT, UPDATE, DELETE) or DQL (SELECT) statement runs inside a transaction. The default behaviour of SQL Server is to open a new transaction (if one doesn't already exist) and, if the statement completes without errors, to automatically commit the transaction.
The IMPLICIT_TRANSACTIONS behaviour that Sidharth mentions basically gets SQL Server to change its behaviour somewhat - it leaves the transaction open when the statement completes.
To get better information in the SQL Server error log, you can turn on a trace flag. This will then tell you which connections were involved in the deadlock (not just the one that got killed), and which resources were involved. You may then be able to determine what pattern of behaviour is leading to the deadlocks.
If you're unable to determine the underlying cause, you may have to add some additional code to your application that catches SQL errors due to deadlocks and retries the command multiple times. This is usually the last resort - it's better to determine which tables/indexes are involved and work out a strategy that avoids the deadlocks in the first place.
IsolationLevel is your best bet. The default isolation level for transactions is "Serializable", which is the most stringent; at this level, if there is a circular reference, the chances of deadlock are very high. Set it to ReadCommitted while reading and leave it at Serializable while writing.
SQL Server can use implicit transactions, which is what might be happening in your case. Try turning them off:
SET IMPLICIT_TRANSACTIONS OFF;
Read about it here: http://msdn.microsoft.com/en-us/library/ms190230.aspx