In our applications we don't use ADO.NET transactions or explicit SQL Server transactions in stored procedures, and we are now getting the error below on our website when multiple people are using it.
Transaction (Process ID 73) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Is this error due to the lack of explicit transactions? I thought consistency would be handled by the DB itself.
And one thing I noticed: the SqlCommand.CommandTimeout property is set to 10000. Will this be an issue for the error?
I am trying to solve this issue ASAP. Please help.
EDIT
I saw the IsolationLevel property of the ADO.NET transaction. What if I use ADO.NET transactions with an appropriate IsolationLevel, such as "ReadUncommitted" while reading and "Serializable" while writing?
Every SQL DML (INSERT, UPDATE, DELETE) or DQL (SELECT) statement runs inside a transaction. The default behaviour for SQL Server is to open a new transaction (if one doesn't already exist) and, if the statement completes without errors, to commit that transaction automatically.
The IMPLICIT_TRANSACTIONS behaviour that Sidharth mentions basically gets SQL Server to change its behaviour somewhat - it leaves the transaction open when the statement completes.
To get better information in the SQL Server error log, you can turn on a trace flag (1204 or 1222 both write deadlock details to the error log). This will then tell you which connections were involved in the deadlock (not just the one that was killed) and which resources were involved. You may then be able to determine what pattern of behaviour is leading to the deadlocks.
If you're unable to determine the underlying cause, you may have to add some additional code to your application that catches SQL errors caused by deadlocks and retries the command a limited number of times. This is usually a last resort - it's better to determine which tables/indexes are involved and work out a strategy that avoids the deadlocks in the first place.
IsolationLevel is your best bet. Serializable is the most stringent isolation level (it is, for example, the default for a System.Transactions TransactionScope), and if two sessions acquire locks in a conflicting order at this level the chances of a deadlock are very high. Set it to ReadCommitted while reading and leave it at Serializable while writing.
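The question uses ADO.NET, where you pass the level to SqlConnection.BeginTransaction (e.g. BeginTransaction(IsolationLevel.ReadCommitted)). As a rough sketch of the same idea in JDBC terms (the connection string, table and id value are placeholders, not taken from your application):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IsolationSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string - substitute your own server and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=Shop;user=app;password=secret")) {
            conn.setAutoCommit(false);

            // Reading: a less restrictive level holds fewer/shorter locks.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement read = conn.prepareStatement(
                    "SELECT Total FROM dbo.Orders WHERE Id = ?")) {
                read.setInt(1, 42);
                try (ResultSet rs = read.executeQuery()) {
                    while (rs.next()) { /* read the values here */ }
                }
            }
            conn.commit();

            // Writing: a stricter level only if the logic really requires it.
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (PreparedStatement write = conn.prepareStatement(
                    "UPDATE dbo.Orders SET Total = Total + 1 WHERE Id = ?")) {
                write.setInt(1, 42);
                write.executeUpdate();
            }
            conn.commit();
        }
    }
}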
SQL Server can use implicit transactions, which might be what is happening in your case. Try turning the setting off:
SET IMPLICIT_TRANSACTIONS OFF;
Read about it here: http://msdn.microsoft.com/en-us/library/ms190230.aspx
I am having trouble finding an answer to this using Google or Stack Overflow, so perhaps people familiar with Percona XtraDB Cluster can help answer this. I fully understand how unexpected deadlocks can occur, as outlined in this article, and that the solution is to make sure you wrap your transactions with retry logic so you can restart them if they fail. We already do that.
https://www.percona.com/blog/2012/08/17/percona-xtradb-cluster-multi-node-writing-and-unexpected-deadlocks/
My question is about normal updates that occur outside of a transaction, in auto-commit mode. Normally, if you are writing only to a single SQL DB and perform an update, you get a last-in-wins scenario: whoever executes the statement last is golden. If two updates occur at the same time, one of them takes hold and the other's data is essentially lost.
Now what happens with the same thing in a multi-master environment? The difference in cluster mode with multi-master is that the deadlock can occur at the point where the commit happens, as opposed to when the lock is first taken on the table. So in auto-commit mode the data will get written to the DB, but it could then fail when it tries to commit that to the other nodes in the cluster, if something else modified the exact same record at the same time. Clearly the simple solution is to re-execute the update, and it would seem to me that the database itself should be able to handle this, since it is a single statement in auto-commit mode?
So is that what happens in this scenario, or do I need to start wrapping all my update code in retry handling as well and retry it myself when this fails?
Autocommit is still a transaction; a single-statement transaction. Your single statement is just wrapped in BEGIN/COMMIT for you. I believe your logic is inverted. In PXC, the rule is "first to commit wins". If you start a manual transaction on node1 (i.e. autocommit=0; BEGIN;), UPDATE the row with id=1 and don't commit, and then on node2 you autocommit an update to the same row, that update will succeed on node2 and will also be applied on node1. When you then commit the manual UPDATE on node1, you will get a deadlock error. This is correct behavior.
It doesn't matter whether autocommit is on or not; whichever transaction commits first wins and the other must retry. This is the reason why we don't recommend writing to multiple nodes in PXC.
Yes, if you want to write to multiple nodes, you need to adjust your code to handle this error case with try/catch/retry, as in the sketch below.
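A minimal retry sketch in Java/JDBC terms (the table name and parameters are invented for illustration; the key point is checking MySQL's deadlock error code 1213 and re-running the statement):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AutoCommitRetry {
    private static final int MAX_ATTEMPTS = 3;
    private static final int ER_LOCK_DEADLOCK = 1213; // MySQL deadlock error code

    // Runs a single auto-commit UPDATE, retrying if it is rejected with a
    // deadlock error (e.g. a certification failure against another node).
    public static int updateWithRetry(Connection conn, int id, String value) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE widgets SET name = ? WHERE id = ?")) { // hypothetical table
                ps.setString(1, value);
                ps.setInt(2, id);
                return ps.executeUpdate();
            } catch (SQLException ex) {
                if (ex.getErrorCode() != ER_LOCK_DEADLOCK) {
                    throw ex; // not a deadlock - don't retry
                }
                last = ex;
                try {
                    Thread.sleep(50L * attempt); // small back-off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw ex;
                }
            }
        }
        throw last; // still deadlocking after MAX_ATTEMPTS
    }
}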
Can someone explain "transactions" and "transaction isolation levels" to me, with a good example? I am very confused about using them within my application. I am doing many insert/update/select operations within stored procedures, so please explain in this context (consider auto-commit too). I am also using connection pooling on my application server.
Thanks.
These are different concepts that all play well together. Transactions are one very basic and important concept in databases that I use every day. You can read quite a bit about the most important properties of transactions, ACID, here: http://en.wikipedia.org/wiki/ACID
But I'll try to give you an overview in my own words:
Transactions can be seen as grouping a set of commands together. If you change/add/delete anything in the database within a transaction then, depending on the isolation level, no one outside that transaction can see those changes. And if you roll back the transaction (for example because an error occurred), no changes are applied to the database at all. If you instead decide to commit your transaction, everything that happened within it becomes permanent at once. So, as a good habit, grouping every logical action together in one transaction is a brilliant idea (see the JDBC sketch at the end of this answer).
Auto-commit is the opposite: every update/insert/delete is implicitly committed right away as its own transaction. So it can still be seen as a transaction, but you omit the explicit commit at the end of it.
Connection pooling works with transactions as long as you make sure to use a single connection for the whole transaction. Since you usually first get one connection from the pool and then execute all of your statements on it, this is not normally an issue.
Prepared statements are largely independent of transactions. You can of course use prepared statements within transactions, but keep in mind that nested transactions are not possible in MySQL.
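To make this concrete, here is a minimal JDBC sketch of grouping several statements into one transaction on a single connection (the table and column names are invented for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    // Moves an amount between two accounts; either both updates are committed
    // or, if anything fails, both are rolled back.
    public static void transfer(Connection conn, int fromId, int toId, long cents)
            throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false); // start an explicit transaction
        try {
            try (PreparedStatement debit = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
                debit.setLong(1, cents);
                debit.setInt(2, fromId);
                debit.executeUpdate();
            }
            try (PreparedStatement credit = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                credit.setLong(1, cents);
                credit.setInt(2, toId);
                credit.executeUpdate();
            }
            conn.commit(); // both updates become visible together
        } catch (SQLException ex) {
            conn.rollback(); // neither update is applied
            throw ex;
        } finally {
            conn.setAutoCommit(oldAutoCommit); // restore the setting for the pool
        }
    }
}

With connection pooling, you would fetch the connection at the start of transfer and return it afterwards; the important part is that all statements of the transaction run on that one connection.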
I'm starting out with MySQL transactions and I have a question:
In the documentation it says:
Beginning a transaction causes any pending transaction to be committed. See Section 13.3.3, "Statements That Cause an Implicit Commit", for more information.
I have more or less 5 users on the same web application (it is a local application for testing), and all of them share the same MySQL user to interact with the database.
My question is: if I use transactions in the code and two of those users start a transaction (because of an insert, an update or something), could the transactions interfere with each other?
I see that the statements that cause an implicit commit include starting a transaction. Because it is a local application everything is fast, and it is hard to tell if something wrong is going on; every query turns out as expected, but I still have this doubt.
The implicit commit occurs within a session.
So, for instance, if you start a transaction, do some updates and then forget to close the transaction before starting a new one, the first transaction will be implicitly committed.
However, other connections to the database will not be affected by that; they have their own transactions.
You say that the 5 users use the same DB user. That is okay. But in order for them to perform separate operations they should not share the same connection/session (see the small sketch below).
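A small sketch of that point in JDBC terms (the URL, credentials and table are placeholders): two connections logging in as the same MySQL user still get separate sessions with independent transactions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SeparateSessions {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/testdb"; // placeholder
        // Same MySQL user, two distinct connections = two distinct sessions.
        try (Connection sessionA = DriverManager.getConnection(url, "app", "secret");
             Connection sessionB = DriverManager.getConnection(url, "app", "secret")) {

            sessionA.setAutoCommit(false);
            sessionB.setAutoCommit(false);

            try (Statement a = sessionA.createStatement();
                 Statement b = sessionB.createStatement()) {
                a.executeUpdate("UPDATE items SET qty = qty - 1 WHERE id = 1");
                b.executeUpdate("UPDATE items SET qty = qty + 5 WHERE id = 2");
            }

            sessionA.commit();   // only session A's change is committed here
            sessionB.rollback(); // session B's change is discarded; A is unaffected
        }
    }
}

(If both sessions updated the same row, the second one would simply wait on the first one's row lock until it committed or rolled back; it would not interfere with the other transaction.)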
With MySQL, by default, each connection has autocommit turned on. That is, each connection commits each query immediately. For an InnoDB table each such transaction is therefore atomic - it completes entirely and without interference.
For updates that require several operations you can use a transaction by issuing a START TRANSACTION query. Any outstanding transaction will be committed, but this won't be a problem because mostly it will have been committed anyway.
All the updates performed until a COMMIT query is received are guaranteed to be completed entirely and without interference or, in the case of a ROLLBACK, none are applied.
Other transactions from other connections see a consistent view of the database while this is going on.
This property is ACID compliance (Atomicity, Consistency, Isolation, Durability). You should be fine with an InnoDB table.
Other table types may implement different levels of ACID compliance. If you have a need to use one you should check it carefully.
This is a much simplified view of transaction handling. There is more detail on the MySQL web site here, and you can read about ACID compliance here.
I'm using JDBC with mysql. I have a pretty complex series of inserts and updates that I'm doing in a single transaction. This seems to work for the most part, but about 1% of the time I find that the data in one of my tables is in an inconsistent state.
I'm rolling back the transaction if an error occurs, but am not sure how to start debugging. My setup generally looks like:
try {
    conn.setAutoCommit(false);

    PreparedStatement stmt1 = conn.prepareStatement("insert into table1");
    stmt1.executeUpdate();
    stmt1.close();

    PreparedStatement stmt2 = conn.prepareStatement("update table2");
    stmt2.executeUpdate();
    stmt2.close();

    ... more statements ...

    conn.commit();
}
catch (Exception ex) {
    conn.rollback();
}
I'm using a 2010 version of MySQL. I might try updating that, but I have a feeling it's more likely something in my application code that's causing the inconsistency.
Are there any debugging tools at the database level that I might find helpful? Any other pointers? I'm using JDBC with all default settings; I wonder if there is a stricter transaction isolation level I should use for this kind of scenario?
Thanks
----- Note -----
All my tables are InnoDB.
Hm, interesting. Yes, it should work. We have used really huge transactions across multiple tables many times and never experienced anything strange...
Are you sure it is not you who is producing the inconsistency (whatever that means here - you didn't specify), simply by inserting/updating the wrong things? :-)
Just an idea - we have run into this several times: deadlock resolution. DB servers have to handle deadlocks. The chance that a deadlock occurs is higher if you have several parallel threads and the transaction blocks touch more tables. In that case some of your transactions can be aborted (and rolled back) by the DB server itself, and those transactions end with an error.
The code you wrote above only rolls back in the exception case (an aborted transaction has already been rolled back, so that doesn't do much), but have you tried to print/log the exceptions? If not, you should.
Of course transactions from different connections run separately from each other. But this could explain why you experience this strange behaviour in only 1-2% of the cases...
You should check the logs of your MySQL server too. It is also possible the server itself fails for some reason. One more tip: you could try running "mytop" (or "mtop" - I hope I remember the names of these tools correctly). It can monitor and show you what happens inside the DB server. Although it is mostly used to track the performance of your SQL, it also shows failures. Maybe running it will help you out...
Perhaps you use DDL (CREATE TABLE, ALTER TABLE, and so on) in your statements?
I am not sure about MySQL, but it may not be able to roll back DDL statements.
For example: PostgreSQL can roll back DDL, while Oracle performs a commit before executing DDL. (A quick experiment, sketched below, shows what MySQL does.)
See here: http://dev.mysql.com/doc/refman/5.0/en/cannot-roll-back.html
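If you suspect DDL, a quick JDBC experiment along these lines (with invented table names) shows the effect on MySQL: the INSERT issued before the CREATE TABLE survives the later rollback, because the DDL causes an implicit commit:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DdlImplicitCommit {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "app", "secret");
             Statement st = conn.createStatement()) {

            conn.setAutoCommit(false);
            st.executeUpdate("INSERT INTO audit_log (msg) VALUES ('before ddl')");

            // DDL: on MySQL this implicitly commits the INSERT above.
            st.executeUpdate("CREATE TABLE scratch (id INT PRIMARY KEY)");

            conn.rollback(); // too late - the INSERT has already been committed
            // SELECT COUNT(*) FROM audit_log will still show the new row.
        }
    }
}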
Once in a while I get the following error in the production environment; it goes away when I run the same stored procedure again.
Transaction (Process ID 86) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Someone told me that if I use the NOLOCK hint in my stored procedures, it will ensure they are never deadlocked. Is this correct? Are there any better ways of handling this error?
Occasional deadlocks are expected on an RDBMS that uses locking the way SQL Server/Sybase do.
You can add code on the client to retry, as recommended by MSDN's "Handling Deadlocks".
Basically, examine the SQLException and, maybe half a second later, try again.
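As a sketch of that retry loop (shown here in Java/JDBC terms with an invented procedure name; the same pattern applies to ADO.NET, where the deadlock victim error is also number 1205):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetry {
    private static final int SQLSERVER_DEADLOCK_VICTIM = 1205;
    private static final int MAX_ATTEMPTS = 3;

    // Calls a stored procedure, retrying after a short pause if this session
    // was chosen as the deadlock victim (error 1205).
    public static void callWithRetry(Connection conn) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (CallableStatement cs = conn.prepareCall("{call dbo.usp_DoWork(?)}")) {
                cs.setInt(1, 42); // hypothetical parameter
                cs.execute();
                return; // success
            } catch (SQLException ex) {
                boolean deadlock = ex.getErrorCode() == SQLSERVER_DEADLOCK_VICTIM;
                if (!deadlock || attempt >= MAX_ATTEMPTS) {
                    throw ex; // not a deadlock, or retried enough - give up
                }
                try {
                    Thread.sleep(500); // "maybe half a second later, try again"
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw ex;
                }
            }
        }
    }
}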
Otherwise, you should review your code so that tables are always accessed in the same order. Or you can use SET DEADLOCK_PRIORITY to control which session becomes the victim.
On MSDN for SQL Server there is "Minimizing Deadlocks", which starts:
Although deadlocks cannot be completely avoided
It also mentions "Use a Lower Isolation Level", which I don't like (nor do many SQL folk here on SO) and which is what your question is really about. Don't do it, is the answer... :-)
What can happen as a result of using (nolock) on every SELECT in SQL Server?
https://dba.stackexchange.com/q/2684/630
Note: MVCC-type RDBMSs (Oracle, Postgres) don't have this problem (see http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning), but MVCC has other issues.
While adding NOLOCK can prevent readers and writers from blocking each other (never mind all of the negative side effects it has), it is not a magical fix for deadlocks. Many deadlocks have nothing at all to do with reading data, so applying NOLOCK to your read queries might not cause anything to change at all. Have you run a trace and examined the deadlock graph to see exactly what the deadlock is? This should at least let you know which part of the code to look at. For example, is the stored procedure deadlocking because it is being called by multiple users concurrently, or is it deadlocking with a different piece of code?
Here is a good link on learning to troubleshoot deadlocks. I always try to avoid using NOLOCK, for the reasons above. You might also want to better understand Lock Compatibility.