I'm using MySQL in a Node project.
I would like to unit test a JavaScript function that makes an SQL transaction. If the transaction is chosen as a deadlock victim, the function has code that handles the failure.
Or does it?
Because I'm unit testing, I'm only making one transaction at a time on a local database, so there's never going to be a deadlock, right? How can I test the deadlock handling if it's never going to happen? Is there a way I can force it to happen?
Example:
thisMustBeDoneBeforeTheQuery();

connection.queryAsync(/* This is an update */).catch(function (err) {
    undoThatStuffIDidBeforeTheQuery();
    // I hope that function worked, because my unit tests can't
    // make a deadlock happen, so I can't know for sure.
});
What is the essential behavior that your tests need to guard or verify? Do you need to test your MySQL driver? Or MySQL itself? I think @k0pernikus identified the highest-value test:
Assuming that the database client results in an exception because of a deadlock, how does your application code handle it?
You should be able to pretty easily create a test harness using a mocking library, or dependency injection and test stubs, to simulate the client driver returning a deadlock exception. This shouldn't require any interaction with MySQL beyond the initial investigation to see what the return code / error propagation looks like for your MySQL client driver.
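For instance, here is a minimal Jest-flavored sketch. runUpdate and the hooks are hypothetical stand-ins for the question's functions (it assumes the function under test takes its connection as a dependency), and the error is shaped like the one the mysql driver raises for a deadlock (errno 1213, code ER_LOCK_DEADLOCK):

// Hypothetical stand-in for the question's function, taking the
// connection as a dependency so a stub can be injected.
async function runUpdate(connection, hooks) {
    hooks.prepare();                      // thisMustBeDoneBeforeTheQuery()
    try {
        await connection.queryAsync(/* the update */);
    } catch (err) {
        hooks.undo();                     // undoThatStuffIDidBeforeTheQuery()
    }
}

// An error shaped like the mysql driver's deadlock error.
const deadlockError = Object.assign(
    new Error('Deadlock found when trying to get lock; try restarting transaction'),
    { code: 'ER_LOCK_DEADLOCK', errno: 1213 }
);

// Stub driver: every query "deadlocks".
const stubConnection = {
    queryAsync: () => Promise.reject(deadlockError),
};

test('undoes its preparatory work when the query deadlocks', async () => {
    const hooks = { prepare: jest.fn(), undo: jest.fn() };
    await runUpdate(stubConnection, hooks);
    expect(hooks.undo).toHaveBeenCalled();
});

The point is that the test exercises your handling code, not the driver: the stub guarantees the deadlock path runs on every test run.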
This isn't a 100% perfect test, and it still leaves you vulnerable in case the MySQL client library changes.
Reproducing concurrency issues deterministically is oftentimes extremely difficult because of timing. But using SELECT ... FOR UPDATE and multiple transactions, you should be able to deterministically reproduce a deadlock on MySQL to verify your client library's code.
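For example, a sketch assuming the mysql2/promise driver and a table t containing rows with id 1 and 2: two transactions lock the two rows in opposite orders, and InnoDB resolves the resulting deadlock by killing one of them with errno 1213.

import { createConnection } from 'mysql2/promise';

async function forceDeadlock(config: any) {
    const a = await createConnection(config);
    const b = await createConnection(config);

    await a.beginTransaction();
    await b.beginTransaction();

    // Each transaction locks one row...
    await a.query('SELECT * FROM t WHERE id = 1 FOR UPDATE');
    await b.query('SELECT * FROM t WHERE id = 2 FOR UPDATE');

    // ...then reaches for the row the other holds. InnoDB detects the
    // cycle and rejects one of these with ER_LOCK_DEADLOCK (errno 1213).
    const results = await Promise.allSettled([
        a.query('SELECT * FROM t WHERE id = 2 FOR UPDATE'),
        b.query('SELECT * FROM t WHERE id = 1 FOR UPDATE'),
    ]);

    await a.end();
    await b.end();
    return results;   // one fulfilled, one rejected with the deadlock error
}

This gives you a real driver-level deadlock error to feed through your handling code in an integration test, rather than relying on chance.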
Related
We're having a weird issue with TypeORM, specifically with Jest (might be related, might not be). A certain test is getting completely stuck/hung, and we're having a hard time figuring out what the issue is.
In terms of stack: TypeScript, Node.js, Apollo GraphQL, Jest, MySQL.
The test in question is actually an integration test using Apollo’s integration test framework.
What happened first is that a specific test just completely got stuck, and after several long minutes an error is thrown in the console: QueryFailedError: ER_LOCK_WAIT_TIMEOUT: Lock wait timeout exceeded; try restarting transaction
Trying to pinpoint the problem led me to a function we run on afterEach which “destroys” the database. It initially ran:
await queryRunner.query('DELETE FROM Table1');
await queryRunner.query('DELETE FROM Table2');
...
The error and "deadlock" was initially fixed after I changed it from queryRunner to queryBuilder:
await queryBuilder.delete().from('Table1').execute();
...
This was done after fidgeting around with SHOW PROCESSLIST; and SHOW ENGINE INNODB STATUS; to try to figure out what was happening. I also changed the transaction isolation level to READ-COMMITTED, but to no avail. Nothing really worked except changing it from queryRunner to queryBuilder.
This worked for a bit, but now it seems like the test is getting stuck again (the test hasn't changed, but the code it's testing has). Now, after the test hangs, we get this error: Error: Pool is closed. Afterwards the test is "released" and all the tests just start failing one by one.
We found out that this is the sequence of events that causes the test to get stuck (a rough code sketch follows the list):
1. open a transaction with queryRunner
2. perform a read query
3. then perform a write
4. commit the transaction and release the queryRunner
5. delete the DB
6. perform a write - deadlock
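In TypeORM terms the sequence looks roughly like this (table and query names are made up; queryBuilder and connection are the objects from the snippets above):

// 1-4: transactional work via the queryRunner
const runner = connection.createQueryRunner();
await runner.connect();
await runner.startTransaction();
const rows = await runner.query('SELECT * FROM Table1');    // 2. read
await runner.query("UPDATE Table1 SET status = 'done'");    // 3. write
await runner.commitTransaction();
await runner.release();                                     // 4. commit + release

// 5: the afterEach cleanup deletes the DB
await queryBuilder.delete().from('Table1').execute();

// 6: the next write hangs, then fails with ER_LOCK_WAIT_TIMEOUT
await connection.query("UPDATE Table1 SET status = 'new'");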
Furthermore, we noticed the following:
- If we make sure that we only use the queryRunner for updates, and not for queries, then the deadlock doesn't happen.
- If we change the code so that we first make all of the read queries with the regular connection object (not queryRunner), and only then connect with queryRunner and make all of the writes, then the deadlock does not happen.
Does anyone have any insight as to what might be going on? Is there some instability with queryRunner, or are there specific things we need to take into account when using it?
Thanks!
I faced the same issue, and my problem was an unhandled async/await.
Check whether some promise is left unhandled.
If you use the async keyword, you must handle all async functions with the await keyword.
Also, don't forget to call Jest's done callback in callback-style tests.
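For instance (a Jest-flavored sketch; saveThing is a made-up async function that writes to the database):

// Bad: the promise escapes the test, so the write may still be in
// flight while afterEach runs its DELETEs -> lock wait timeouts.
test('saves the thing (leaky)', () => {
    saveThing();               // fired and forgotten
});

// Good: every async call is awaited, so the test only finishes once
// the query has actually completed.
test('saves the thing', async () => {
    await saveThing();
});

// Callback style: signal completion (or failure) through done.
test('saves the thing (callback style)', (done) => {
    saveThing().then(() => done(), done);
});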
I'm working on an akka-http/slick web service, and I need to do the following in a transaction:
1. Insert a row in a table
2. Call some external web service
3. Commit the transaction
The web service I need to call is sometimes really slow to respond (let's say ~2 seconds).
I'm worried that this might keep the SQL connection open for too long, and that it'll exhaust Slick's connection pool and affect other, independent requests.
Is this a possibility? Or does Slick do something to make sure this "idle" mid-transaction connection does not starve the pool?
If it is something I should be worried about - is there anything I can do to remedy this?
If it matters, I'm using MySQL with TokuDB.
The Slick documentation seems to say that this will be a problem.
The use of a transaction always implies a pinned session.
And
You can use withPinnedSession to force the use of a single session, keeping the existing session open even when waiting for non-database computations.
From: http://slick.lightbend.com/doc/3.2.0/dbio.html#transactions-and-pinned-sessions
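The same hazard is easy to sketch outside Slick. Here is a TypeScript analogue with a mysql2 pool (the pool size, table, and URL are made up, and fetch assumes Node 18+): the pooled connection stays checked out for the entire slow external call, exactly like a pinned session.

import { createPool } from 'mysql2/promise';

const pool = createPool({ host: 'localhost', user: 'app', database: 'app', connectionLimit: 10 });

async function insertThenNotify(): Promise<void> {
    const conn = await pool.getConnection();  // checked out of the pool
    try {
        await conn.beginTransaction();
        await conn.query("INSERT INTO orders (status) VALUES ('pending')");
        // The connection stays pinned for the full ~2s the external
        // call takes; ten concurrent requests exhaust the whole pool.
        await fetch('https://slow.example/notify', { method: 'POST' });
        await conn.commit();
    } catch (err) {
        await conn.rollback();
        throw err;
    } finally {
        conn.release();                       // only now returned to the pool
    }
}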
We have a web service built using ASP.NET Web API. We use NHibernate as our ORM, connecting to a MySQL database.
We have a couple of controller methods that do a large number (1,000-3,000) of relatively cheap queries.
We're looking at improving the performance of these controller methods and almost all of the time is spent doing the NHibernate queries so that's where we're focusing our attention.
In the medium term the solutions are things like reducing the number of queries (perhaps by doing fewer, larger queries) and/or parallelizing the queries (which would take some work, since NHibernate does not have an async API and sessions are single-threaded) and things like that.
In the short term we're looking at improving the performance without taking on either of those larger projects.
We've done some performance profiling and were surprised to find that it looks like a lot of the time in each query (over half) is spent opening the connection to MySQL.
It appears that NHibernate is opening a new connection to MySQL for each query and that MySqlConnection.Open() makes two round trips to the database each time a connection is opened (even when the connection is coming from the pool).
Here's a screenshot of one of our performance profiles where you can see these two things:
We're wondering if this is expected or if we're missing something like a misconfiguration/misuse of NHibernate or a way to eliminate the two round trips to the database in MySqlConnection.Open().
I've done some additional digging and found something interesting:
If we add .SetProperty(Environment.ReleaseConnections, "on_close") to the NHibernate configuration then Open() is no longer called and the time it takes to do the query drops by over 40%.
It seems this is not a recommended setting though: http://nhibernate.info/doc/nhibernate-reference/transactions.html#transactions-connection-release
Based on the documentation, I expected to get the same behavior (no extra calls to Open()) if I wrapped the reads inside a single NHibernate transaction, but I couldn't get it to work. This is what I did in the controller method:
using (var session = _sessionFactory.OpenSession()) {
    using (var transaction = session.BeginTransaction()) {
        // controller code
        transaction.Commit();
    }
}
Any ideas on how to get the same behavior using a recommended configuration for NHibernate?
After digging into this a bit more, it turns out there was a mistake in my test implementation; after fixing it, using transactions eliminates the extra calls to Open() as expected.
Not using transactions is considered bad practice, so starting to add them is welcome anyway.
Moreover, as you seem to have found out by yourself, the default connection release mode auto currently always translates to AfterTransaction, which with NHibernate (v2 to v4 at least) releases connections after each statement when no transaction is ongoing for the session.
From Connection Release Modes:
Note that with ConnectionReleaseMode.AfterTransaction, if a session is considered to be in auto-commit mode (i.e. no transaction was started) connections will be released after every operation.
So simply transacting your session usages should do it. As this is not the case for your application, I suspect other issues.
Is your controller code using other sessions? NHibernate explicit transactions apply only to the session from which they were started (or to sessions opened from that session with ISession.GetSession(EntityMode.Poco)).
So you need to handle a transaction for each opened session.
You may use a TransactionScope instead for wrapping many sessions in a single transaction. But each session will still open a dedicated connection. This will in most circumstances promote the transaction to distributed, which has a performance penalty and may fail if your server is not configured to enable it.
You may configure and use a contextual session instead, replacing many sessions per controller action with only one. Of course, you can use dependency injection to achieve this too.
Notes:
About reducing the number of queries issued by an application, there are some easy-to-leverage features in NHibernate:
Batching of lazy-loads (batch fetching): configure your lazily loaded entities and collections of entities to load not only themselves, but also a batch of other pending entities (of the same class) or collections (the same collection on other parent entities). Add the batch-size attribute on collections and classes. I have written a detailed explanation of it in this other answer.
Second-level cache, which allows caching data across HTTP requests. Transactions are mandatory for it to work.
Future queries, as proposed by Low.
Going parallel for a web API looks to me like a doomed road. Threads are a valuable resource for web applications. The more threads a request uses, the fewer requests the web application will be able to serve in parallel. So going that way will very likely be a major pain for your application's scalability.
The OnClose mode is not recommended because it delays connection release until session closing, which may occur quite late after the last transaction, especially when using contextual sessions. Since it looks like your session usage is very localized, likely with the session closing very near the last query, it should not be an issue for your application.
parallelizing the queries (which would take some work, since NHibernate does not have an async API and sessions are single-threaded) and things like that.

You can defer the execution of the queries using NHibernate Futures. The following code (extracted from the reference article) will make a single round trip to the database even though two results are retrieved:
using (var s = sf.OpenSession())
using (var tx = s.BeginTransaction())
{
    var blogs = s.CreateCriteria<Blog>()
        .SetMaxResults(30)
        .Future<Blog>();
    var countOfBlogs = s.CreateCriteria<Blog>()
        .SetProjection(Projections.Count(Projections.Id()))
        .FutureValue<int>();

    Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value);
    foreach (var blog in blogs)
    {
        Console.WriteLine(blog.Title);
    }

    tx.Commit();
}
You can also use NHibernate batching to reduce the number of queries.
I'm using NHibernate with the MySql.Data ADO.NET connector.
I have DAO classes with CRUD methods: Create, Update, Delete.
These methods open their own NHibernate transactions and commit them.
Now I am changing my design to use the Unit of Work pattern. The session and transaction would be opened and committed at an upper level, not in the DAO methods. So I have to remove the commits from the DAO classes.
I see several problems with this approach:
I was catching the database-specific exceptions at the commit() point. Now the commit is done in a layer above. So I will have to add all that database-layer-specific code to the outer layer? Things like catching FK violations or NHibernate-specific exceptions.
I will get all the possible exceptions at the same time, having to discern from which concrete DAO class they come, and implement code to handle them.
I will not be able to abort the operation if one of the steps fails, because I will not know about it until the final commit is done. I know that the transaction rollback will prevent data inconsistencies, but it seems like a waste of performance to run all the following code just because I don't know that the previous lines caused an error.
I don't like these consequences. I wish I could use nested transactions in a transaction scope, so I could do the commit locally, but it seems that the MySql.Data connector does not support them. I tried, and I got exceptions:
System.InvalidOperationException : Nested transactions are not supported
Is there any workaround that lets me get the possible exceptions from insert, update, or delete operations at the moment they are performed? Would session.Flush() do it? Or is there any way to use TransactionScope with MySql.Data?
Sorry if the question seems naive, but I have been googling for a while and did not find any clear solution. As for TransactionScope not working with MySql.Data, all the information I found seems a little old, and I'm not sure whether it still can't be done by now.
Finally I decided to use Flush() any time I would have used a nested transaction's Commit(). It seems to work OK: I get the exceptions at that moment and am able to handle them locally.
I know that NHibernate best practices include not using Flush() so liberally, but I didn't find a better solution as long as nested transactions are not available with MySQL and NHibernate, so this seems to be the lesser evil.
I have a complex web application which interacts intensively with the database. Within some subset of requests I lock the DB (MySQL InnoDB) to prevent data integrity violations (using a 'begin' ... 'commit' command sequence). While the number of requests is less than N, the app works fine. But when the number of requests grows greater than N, locking errors appear ('Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction').
I have a lot of functional tests. All functional tests use a 'single-client schema' emulation to test various scenarios of app usage, and they all pass. But how can I test my app with multiple client connections (I want to be able to verify the DB state at any time while a test is running)? AFAIK this is not simple load testing.
You can use JMeter for that, using:
1. an HTTP sampler at the start;
2. once you identify the queries involved, a JDBC sampler, if you want to reproduce the issue more simply or rapidly to test the resolution.
Regards