Doctrine2 deadlock - how to handle - MySQL

On a Symfony2 project I'm working on, deadlocks sometimes occur when calling flush on my entity manager. This results in an exception. Most of the time the error occurs just once, and a second attempt to insert the same data works correctly.
Is there a good approach to execute (flush) the same transaction again? A simple
$em->flush();
won't do, since the entity manager gets closed when an error occurs.
I've found https://github.com/doctrine/doctrine2/pull/806 but that doesn't provide a solution.

Doctrine throws a RetryableException for this kind of error, where you simply have to try one more time to make it work.
The problem is that with Doctrine 2 those exceptions make the EntityManager unusable, and you have to re-instantiate a new one.
Hopefully this will be corrected in Doctrine 3: issue tracking
Until Doctrine 3 is released, I went with a solution that has passed the test of time in production projects at my company. Everything is explained in this blog post
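Doctrine is PHP, but the retry pattern this answer describes is language-agnostic. Here is a minimal sketch in JavaScript, with hypothetical `doWork` and `resetManager` callbacks standing in for the `flush()` call and for re-creating the closed EntityManager:

```javascript
// Sketch of the retry-on-deadlock pattern, assuming two hypothetical
// callbacks: doWork() stands in for the transaction/flush, and
// resetManager() stands in for re-instantiating the closed EntityManager.
async function retryOnDeadlock(doWork, resetManager, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await doWork();
    } catch (err) {
      // In Doctrine the check would be `instanceof RetryableException`;
      // here a boolean flag marks errors worth retrying.
      if (!err.retryable || attempt === maxAttempts) throw err;
      await resetManager(); // the old manager is unusable after the failure
    }
  }
}
```

The same shape works in PHP: catch Doctrine\DBAL\Exception\RetryableException, rebuild the EntityManager, and retry a bounded number of times.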

I would use explicit transaction demarcation, to hopefully prevent the deadlock in the first place. By default only flush() is wrapped in a transaction.
Alternatively, you might be able to change your procedure to use the DQL UPDATE query, which should be atomic.
Or re-submit the request back to the action (with some recursion limit).
I'm not sure there is going to be a good way of restarting the entity manager, but keeping the unit of work.

Related

TypeORM with MySQL Error: Pool is closed. Test is getting stuck when calling the Database

We're having a weird issue with TypeORM, specifically with Jest (it might be related, it might not). A certain test is getting completely stuck/hung and we're having a hard time figuring out what the issue is.
In terms of stack: Typescript, NodeJS, Apollo Graphql, Jest, MySQL.
The test in question is actually an integration test using Apollo’s integration test framework.
What happened first is that a specific test just completely got stuck, and after several long minutes an error is thrown in the console: QueryFailedError: ER_LOCK_WAIT_TIMEOUT: Lock wait timeout exceeded; try restarting transaction
Trying to pinpoint the problem led me to a function we run in afterEach which "destroys" the database. It initially ran:
await queryRunner.query('DELETE FROM Table1');
await queryRunner.query('DELETE FROM Table2');
...
The error and "deadlock" were initially fixed after I changed it from queryRunner to queryBuilder:
await queryBuilder.delete().from('Table1').execute();
...
This was done after fidgeting around with SHOW PROCESSLIST; and with SHOW ENGINE InnoDB STATUS; to try figuring out what was happening. I also changed the transaction isolation to READ-COMMITTED but to no avail. Nothing really worked except changing it from queryRunner to queryBuilder.
This worked for a bit, but now it seems the test is getting stuck again (the test hasn't changed, but the code it's testing has). Now after the test hangs we get this error: Error: Pool is closed. Afterwards the test is "released" and all the tests just start failing one by one.
We found out that this is the sequence of events that causes the test to get stuck:
1. open a transaction with queryRunner
2. perform a read query
3. then perform a write
4. commit the transaction and release the queryRunner
5. delete the DB
6. perform a write - deadlock
Furthermore we noticed the following:
1. If we make sure that we only use the queryRunner for updates, and not for queries, then the deadlock doesn't happen.
2. If we change the code so that we first make all of the read queries with the regular connection object (not queryRunner), and only then connect with queryRunner and make all of the writes, then the deadlock does not happen.
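For what it's worth, one classic way a client-side hang like this can arise (not necessarily what happened here) is a task waiting for a pooled connection while it already holds the pool's only free connection inside an open transaction: the transaction can't commit until the read finishes, and the read can't start until the transaction releases its connection. A minimal self-contained simulation, with a hypothetical one-connection pool and no real database:

```javascript
// Hypothetical pool with a single connection, to illustrate the hang.
class TinyPool {
  constructor() { this.free = true; this.waiters = []; }
  acquire() {
    if (this.free) { this.free = false; return Promise.resolve(); }
    return new Promise(resolve => this.waiters.push(resolve));
  }
  release() {
    const next = this.waiters.shift();
    if (next) next(); else this.free = true;
  }
}

// Returns 'deadlock' if the combined work does not finish in time.
async function runScenario(readThroughPool) {
  const pool = new TinyPool();
  await pool.acquire();            // the "queryRunner" grabs the only connection
  const read = readThroughPool
    ? pool.acquire().then(() => pool.release())  // read waits for a 2nd connection
    : Promise.resolve();                         // read reuses the held connection
  // the transaction "commits" only after the read completes
  const tx = read.then(() => pool.release());
  const timeout = new Promise(r => setTimeout(() => r('deadlock'), 50));
  return Promise.race([tx.then(() => 'ok'), timeout]);
}
```

This is only a sketch of the general mechanism, not a diagnosis of the TypeORM internals, but it matches the observed pattern: keeping reads off the held connection makes the hang disappear.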
Does anyone have any insight as to what might be going on? Is there some instability with queryRunner or some specific things we need to take into account when using it?
Thanks!
I faced the same issue, and my problem was an unhandled async/await.
Check whether some promise object is left unhandled.
If you use the async keyword, you must handle every async function call with the await keyword.
Also don't forget to call Jest's done callback.
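To illustrate the failure mode: calling an async function without await means a surrounding try/catch never sees its rejection; it surfaces later as an unhandledRejection, which is exactly the kind of thing that leaves a Jest test hanging or failing at a confusing point. A minimal sketch (plain Node, no Jest):

```javascript
// An async function that rejects.
async function failsLater() {
  throw new Error('boom');
}

async function demo() {
  const seen = { caught: false, unhandled: false };
  process.on('unhandledRejection', () => { seen.unhandled = true; });
  try {
    failsLater(); // missing await: the rejection escapes this try/catch
  } catch {
    seen.caught = true; // never reached
  }
  // give the event loop a tick so the unhandledRejection event can fire
  await new Promise(resolve => setTimeout(resolve, 0));
  return seen;
}
```

With `await failsLater()` instead, the catch block would run and the rejection would never go unhandled.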

Is it bad practice to run many queries in parallel within the same transaction?

I was executing many queries against a MySQL database within the same transaction using Promise.all(), so all the queries execute in parallel; if anything bad happens, I roll back the transaction. But a friend said that running queries in parallel is bad practice, because if one query fails and the transaction is rolled back, other queries using the same transaction may still be running in MySQL, and when they can't find the transaction they will raise errors in MySQL itself.
He suggested executing the queries in series, so that if something bad happens the transaction rolls back and the next query is never executed.
I tried to find some proof about this issue, but I couldn't find any, or I missed it if it exists.
Hopefully someone can provide a clear answer or reference. Thanks in advance.
The Promise.all() method, as described here, waits for all promises to resolve, and if even one of them is rejected it rejects too. So the questions are: 1. Do all of the functions you pass to Promise.all() return promises, or do they use callbacks? 2. Does it matter which of them runs first? Promise.all() doesn't care in what order they resolve. 3. Does it matter how many of them reject? Promise.all() rejects at the first rejection.
Moreover, if you are using this method with MySQL and the like, your ORM may sometimes handle that somehow by rejecting.
So I personally agree with your friend, as this method is hard to control, but maybe you can find a use for it :)
PS: Hopefully other contributors will help me with the points I missed.
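To make the serial alternative concrete, here is a minimal sketch with `tx` as a hypothetical transaction object and stubbed queries: each query starts only after the previous one has finished, so after the first failure nothing else is in flight when the rollback runs.

```javascript
// Run queries one at a time inside a transaction; stop at the first
// failure and roll back. `tx` is a hypothetical transaction object with
// query/commit/rollback methods.
async function runInSeries(tx, queries) {
  const results = [];
  try {
    for (const q of queries) {
      results.push(await tx.query(q)); // later queries never start after a failure
    }
    await tx.commit();
    return results;
  } catch (err) {
    await tx.rollback(); // no other query is still running at this point
    throw err;
  }
}
```

The trade-off is throughput: the serial version gives up the parallelism of Promise.all() in exchange for a clean, race-free rollback.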

Unit testing mysql rollback on node.js

I'm using mysql in a node project.
I would like to unit test a javascript function that makes an sql transaction. If the transaction becomes the victim of a lock monitor, the function has code that handles the failure.
Or does it?
Because I'm unit testing, I'm only making one transaction at a time on a local database, so there's never going to be a deadlock, right? How can I test the deadlock handling if it's never going to happen? Is there a way I can force it to happen?
Example:
thisMustBeDoneBeforeTheQuery();
connection.queryAsync(/* This is an update */).catch(function (err) {
    undoThatStuffIDidBeforeTheQuery();
    // I hope that function worked, because my unit tests can't
    // make a deadlock happen, so I can't know for sure.
});
What is the essential behavior that your tests need to guard or verify? Do you need to test your mysql driver? Or MySQL itself? I think #k0pernikus identified the highest-value test:
Assuming that the database client results in an exception because of a deadlock, how does your application code handle it?
You should be able to pretty easily create a test harness using a mocking library or Dependency Injection and test stubs to simulate the client driver returning a deadlock exception. This shouldn't require any interaction with mysql, beyond the initial investigation to see what the return code/error exception propagation looks like for your mysql client driver.
This isn't a 100% perfect test, and it still leaves you vulnerable in case the mysql client library changes.
Reproducing concurrency issues deterministically is often extremely difficult because of timing. But using SELECT ... FOR UPDATE and multiple transactions, you should be able to deterministically reproduce a deadlock on MySQL to verify your client code.
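As a concrete sketch of the stubbing approach: instead of forcing a real deadlock, hand the code under test a fake connection whose query rejects the way the Node mysql driver reports a deadlock (code ER_LOCK_DEADLOCK, errno 1213). Everything below is hypothetical wiring, not the asker's actual code:

```javascript
// A stub connection whose query always fails with a deadlock error,
// shaped like the error object the mysql driver produces.
function makeDeadlockingConnection() {
  return {
    queryAsync() {
      const err = new Error(
        'ER_LOCK_DEADLOCK: Deadlock found when trying to get lock; ' +
        'try restarting transaction');
      err.code = 'ER_LOCK_DEADLOCK';
      err.errno = 1213;
      return Promise.reject(err);
    },
  };
}

// The function under test, rewritten to accept any connection so the
// stub can be injected; the compensation callback is also injected.
async function doUpdate(connection, undoThatStuffIDidBeforeTheQuery) {
  try {
    return await connection.queryAsync(/* the update */);
  } catch (err) {
    if (err.code === 'ER_LOCK_DEADLOCK') undoThatStuffIDidBeforeTheQuery();
    throw err;
  }
}
```

The test then asserts only the behavior that matters: the compensation ran and the error propagated, with no real database involved.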

Logging fewer entries into Yii console.log

I have a Yii application with many concurrent console jobs writing to one database. Due to the high concurrency, I sometimes get MySQL deadlock errors, and sometimes there are too many of them. The console.log file becomes too big, and that translates into higher costs.
I want to prevent logging of specific CDbException instances, or at least suppress them altogether (I am handling the exceptions and can generate more compact log entries from there).
YII__DEBUG is already commented out.
Can anyone please help me figure out how to do this?
Thanks a lot!!
Regards.
I decided to modify the log statement in yii/framework/db/CDbCommand.php that was logging the failed SQL. I converted it into a trace statement:
Yii::trace(Yii::t('yii','CDbCommand::{method}() failed: {error}. The SQL statement executed was: {sql}.', array('{method}'=>$method, '{error}'=>$message, '{sql}'=>$this->getText().$par)),CLogger::LEVEL_ERROR,'system.db.CDbCommand');
I am anyway catching the exception and logging a more compact version of the sentence, so it is OK for me to do it.
This was the easiest way I could find. We don't upgrade Yii very often, so if and when we go to the next version I'll probably repeat the change.

Unit Of Work side effects could be fixed with TransactionScope, but it is not available with MySql.Data

I'm using NHibernate with MySql.Data ADO connector.
I have Dao classes with CRUD methods: Create, Update, Delete.
These methods open their own NHibernate transactions and commit them.
Now I am changing my design to use Unit Of Work pattern. The session and transaction would be opened and commited in an upper level, not in the Dao methods. So I have to remove the commits from the Dao classes.
I see several problems with this approach:
I was catching the database-specific exceptions at the commit() point. Now the commit is done in a layer above, so I will have to move all that database-layer-specific code to the outer layer? Things like catching FK violations or NHibernate-specific exceptions.
I will get all the possible exceptions at the same time, and will have to discern which concrete Dao class they come from and implement code to handle each of them.
I will not be able to abort the operation if one of the steps fails, because I will not know about the failure until the final commit. I know that the transaction rollback will prevent data inconsistencies, but it seems a waste of performance to run all the following code just because I don't know that the previous lines caused an error.
I don't like these consequences. I wish I could use nested transactions within a transaction scope, so I could commit locally, but it seems that the MySql.Data connector does not support them. I tried, and I got exceptions:
System.InvalidOperationException : Nested transactions are not
supported
Is there any workaround that lets me get the possible exceptions from insert, update, or delete operations at the moment they are performed? Would session.Flush() do it? Or is there any way to use TransactionScope with MySql.Data?
Sorry if the question seems naive, but I have been googling for a while and did not find any clear solution. About TransactionScope not working with MySql.Data, all the information I found seems a little old, and I'm not sure whether it really still can't be done.
Finally I decided to use Flush() wherever I would have used a nested transaction Commit(). It seems to work OK: I get the exceptions at that moment and am able to handle them locally.
I know that NHibernate best practices advise against using Flush() so liberally, but I didn't find a better solution as long as nested transactions are not available with MySQL and NHibernate, so this seems to be the lesser evil.