Deadlock troubleshooting in SQL Server 2008

My website doesn't seem to handle a high number of visitors; I believe it's because the server is underpowered.
Two hours ago my website was getting a lot of hits, and I noticed that 3 deadlock errors occurred. The error is:
System.Data.SqlClient.SqlException: Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I'm not sure why this happened... Looking at the stack trace, I could see that this happened with a select query.
Does anyone know what may be the cause of this error?
The server is running Windows Server 2008 and SQL Server 2008.

SQL Server 2008 has multiple ways to identify the processes and queries involved in a deadlock.
If deadlocks are easy to reproduce or happen frequently, and you can profile the SQL Server (you have access, and can accept the performance cost on the server while the Profiler is attached), SQL Profiler will give you a nice graphical view of the deadlock.
This page has all the information you need to use deadlock graphs:
http://sqlmag.com/database-performance-tuning/gathering-deadlock-information-deadlock-graph
Most of the time, though, reproducing deadlocks is hard, or they happen in a production environment where we don't want to attach the Profiler and affect performance.
I would use this query to retrieve the deadlocks that have already happened:
SELECT
    xed.value('@timestamp', 'datetime') AS Creation_Date,
    xed.query('.') AS Extend_Event
FROM
(
    -- the always-on system_health Extended Events session keeps
    -- recent deadlock reports in its ring_buffer target
    SELECT CAST([target_data] AS XML) AS Target_Data
    FROM sys.dm_xe_session_targets AS xt
    INNER JOIN sys.dm_xe_sessions AS xs
        ON xs.address = xt.event_session_address
    WHERE xs.name = N'system_health'
    AND xt.target_name = N'ring_buffer'
) AS XML_Data
CROSS APPLY Target_Data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(xed)
ORDER BY Creation_Date DESC
I would NOT go in the direction of using (NOLOCK) to fix deadlocks. That is a slippery slope and hides the original problem.

Writes will block reads in SQL Server unless you have row versioning enabled. You can use the sp_who2 stored procedure and a SQL Profiler trace: sp_who2 will tell you which processes are blocking which, and the Profiler will tell you what the last statement was for the blocking process.
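For example, a quick sketch (sp_who2 is the classic approach; the sys.dm_exec_requests DMV gives similar information in queryable form):

-- Classic: the BlkBy column shows which SPID is blocking which
EXEC sp_who2;

-- DMV alternative: list only the requests that are currently blocked
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;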

If you don't mind dirty reads you can try putting WITH (NOLOCK) after the table names in your SELECT queries. The trade-off is that you are not guaranteed the most up-to-date data, as locks held by currently executing UPDATE and INSERT statements are ignored.
Usually this is not too much of a train smash, as most systems read far more than they update/insert, but obviously it depends on the nature of your application.
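For example, a minimal sketch (the table and column names are made up):

-- Dirty read: takes no shared locks and ignores exclusive ones, so it
-- can see rows from UPDATE/INSERT transactions that are still in flight
SELECT OrderId, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerId = 42;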
Alternatively have a look at http://www.sql-server-performance.com/tips/deadlocks_p1.aspx

Related

SSRS Report Timing out in Production Server (except after refreshing 3 times)

The report works fine in the DEV and QA server but when placed in Production the following error comes up:
An error occurred during client rendering.
An error has occurred during report processing.
Query execution failed for dataset 'Registration_of_Entity'.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The strange part is that the admins have assured me that this report has now been set up so there is no timeout at all.
If I refresh the report 3 times every morning, the error message goes away.
What can I do to fix this issue so that the report never receives this error?
There are several steps to resolve this issue correctly.
I advise following them in this order:
1. Reduce the query execution time
Execute the query of the DataSet Registration_of_Entity in SSMS and see how long it takes to complete.
If your query requires more time to execute than the timeout specified for the DataSet, you should first try to reduce this time, for example:
Change the query structure (rethink joins, use CTEs, ...)
Add indexes
Looking at the execution plan can help.
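A quick way to measure this in SSMS is to switch on the timing statistics around the DataSet query (a sketch; the query itself is whatever Registration_of_Entity executes):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- run the DataSet query here, then check the Messages tab
-- for CPU/elapsed time and logical reads per table

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;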
2. Reduce the query complexity
Do you need all those rows/columns?
Do you need to have all these calculations on the database side?
Could it be done in the report instead?
You could try to:
Reduce the query complexity
Split the query in smaller queries
Again, looking at the execution plan can help.
3. Explore additional optimizations not related to the query itself
You really need this query, but do you need the data real-time?
Are there a lot of other queries being executed on this server?
You could look into:
Caching
Replication / Load Balancing
Note that from SSRS 2008 R2 onward, the new Shared DataSets can be cached. I know it doesn't apply in your case, but who knows, it could help others.
4. Last resort
If all the above steps failed to solve the issue, then you can increase the timeouts.
Here is a link to a blog post explaining the different timeouts and how to increase them.
Do you know if your query is being deadlocked? It could be that the report gets blocked on the server during peak times.
Consider optimizing your query or, if the data can be read uncommitted, add WITH (NOLOCK) after each FROM and JOIN clause. Be sure to google WITH (NOLOCK) if you are unfamiliar with it, so you know what reading uncommitted data can do.
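For example, a hypothetical shape of such a query (table and column names are made up; note the hint is repeated after every table reference):

SELECT r.EntityId, e.EntityName
FROM dbo.Registration AS r WITH (NOLOCK)
INNER JOIN dbo.Entity AS e WITH (NOLOCK)
    ON e.EntityId = r.EntityId;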

Does a transaction work for statements across multiple tables?

I'm using JDBC with mysql. I have a pretty complex series of inserts and updates that I'm doing in a single transaction. This seems to work for the most part, but about 1% of the time I find that the data in one of my tables is in an inconsistent state.
I'm rolling back the transaction if an error occurs, but am not sure how to start debugging. My setup generally looks like:
try {
    conn.setAutoCommit(false);

    PreparedStatement stmt1 = conn.prepareStatement("insert into table1");
    stmt1.executeUpdate();
    stmt1.close();

    PreparedStatement stmt2 = conn.prepareStatement("update table2");
    stmt2.executeUpdate();
    stmt2.close();

    ... more statements ...

    conn.commit();
}
catch (Exception ex) {
    conn.rollback();
}
I'm using a 2010 version of MySQL. I might try updating it, but I have a feeling the inconsistency is more likely caused by something in my application code.
Are there any debugging tools I might find helpful at the database level? Any other pointers? I'm using JDBC with all default settings; I wonder if there is a stricter transaction isolation level I should use for this kind of scenario?
Thanks
----- Note -----
All my tables are InnoDB.
Hm, interesting. Yes, it should work. We have used really huge transactions across multiple tables many times and never experienced anything strange...
Are you sure it is not you who is producing the inconsistency (whatever that means here, you didn't specify), by simply inserting/updating the wrong things? :-)
Just an idea - we ran into this several times: deadlock resolution. DB servers have to handle that. The chance that a deadlock occurs is higher if you have several parallel threads and the transaction blocks manipulate several tables. In that case some of your transactions can be aborted (and rolled back) by the DB server itself, and those transactions will end in an error.
The code you wrote above only rolls back in the exception case (an aborted transaction has already been rolled back, so that doesn't do much...), but have you tried to print/log the exceptions? If not, you should.
Of course transactions run separately from each other, but this could explain why you experience this strange behaviour in only 1-2% of the cases...
You should check the logs of your MySQL server too. It is also possible that the server itself fails for some reason. One more tip: you could try running "mysqltop" (or "mtop", I hope I remember the name of the tool correctly). It monitors and shows you what happens inside the DB server. Although it is mostly used to track the performance of your SQL, it also shows failures. Running it might help you out...
Perhaps you use DDL (CREATE TABLE, ALTER TABLE, and so on) in your statements?
I am not sure about MySQL, but it may not be able to roll back DDL statements.
For example:
PostgreSQL can roll back DDL;
Oracle performs a commit before executing DDL.
See here: http://dev.mysql.com/doc/refman/5.0/en/cannot-roll-back.html
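If that is what is happening here, a quick experiment makes the effect visible (table names are made up):

START TRANSACTION;
INSERT INTO t1 VALUES (1);        -- not yet committed at this point
ALTER TABLE t2 ADD COLUMN c INT;  -- DDL: MySQL commits implicitly here
ROLLBACK;                         -- too late, the INSERT is already committed
SELECT * FROM t1;                 -- the row is still there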

C3p0 - APPARENT DEADLOCK on MSSQL, but not PostgreSQL or MySQL

We are getting exceptions like this
com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@5b7a7896 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask@55bc5e2a (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask@41ca435f (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask@460d33b7 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
Pending Tasks:
when load-testing our application on MSSQL 2008 R2 (jTDS or the official MS JDBC driver, it doesn't matter). We never get this exception when running the same tests against PostgreSQL or MySQL.
We don't just want to increase the number of helper threads for c3p0 (which solves the problem, but for how long?). We want to know what the problem is, as it works with other DBMSs.
The application behaves like this:
Send X requests
Wait for a while -> DEADLOCK
Send X requests
Wait for a while -> DEADLOCK
Does anyone know or has an idea why we have this behavior with MSSQL?
Thanks, Adrian
(Btw. BoneCP works without any problem too.)
SQL Server has a much more restrictive locking strategy than PostgreSQL or InnoDB.
In particular, it will block SELECTs on rows (tables?) that are being updated from a different connection/transaction (in the default installation).
You should make sure that you are not selecting the same rows in one session that are being updated from another.
If you can't change the sequence of your code, you might get away with using "dirty reads" in SQL Server.
If I remember correctly, this is accomplished by adding WITH (NOLOCK) to the SELECT statements (but I'm not entirely sure).
Edit
A different possibility (if you are on SQL Server 2005 or later) would be to use the new "snapshot isolation" to avoid blocking SELECTs.
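Snapshot isolation is enabled per database; a minimal sketch (YourDb is a placeholder, and switching READ_COMMITTED_SNAPSHOT on requires that there are no other active connections in the database):

-- Allow transactions to request SNAPSHOT isolation explicitly
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Or make plain READ COMMITTED use row versioning, so readers
-- stop blocking on writers without any code changes
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;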

Avoiding deadlock by using NOLOCK hint

Once in a while I get the following error in the production environment; it goes away on running the same stored procedure again.
Transaction (Process ID 86) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Someone told me that if I use NOLOCK hint in my stored procedures, it will ensure it will never be deadlocked. Is this correct? Are there any better ways of handling this error?
Occasional deadlocks on an RDBMS that locks the way SQL Server/Sybase do are to be expected.
You can code the client to retry, as recommended by MSDN's "Handling Deadlocks".
Basically: examine the SqlException and, maybe half a second later, try again.
Otherwise, you should review your code so that all tables are accessed in the same order. Or you can use SET DEADLOCK_PRIORITY to control who becomes the victim; see the sketch below.
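A minimal server-side retry sketch in T-SQL (1205 is the deadlock-victim error number; the statements inside the transaction are placeholders):

SET DEADLOCK_PRIORITY LOW;  -- optional: volunteer this session as the victim

DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... your statements here ...
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:00.500';  -- back off for half a second
        END
        ELSE
        BEGIN
            DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);  -- not a deadlock, or out of retries
            RETURN;
        END
    END CATCH
END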
On MSDN for SQL Server there is "Minimizing Deadlocks", which starts:
Although deadlocks cannot be completely avoided
It also mentions "Use a Lower Isolation Level", which I don't like (as do many SQL folks here on SO) and which is what your question is about. Don't do it, is the answer... :-)
What can happen as a result of using (nolock) on every SELECT in SQL Server?
https://dba.stackexchange.com/q/2684/630
Note: MVCC-type RDBMSs (Oracle, PostgreSQL) don't have this problem - see http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning - but MVCC has other issues.
While adding NOLOCK can prevent readers and writers from blocking each other (never mind all of the negative side effects it has), it is not a magical fix for deadlocks. Many deadlocks have nothing at all to do with reading data, so applying NOLOCK to your read queries might not cause anything to change at all. Have you run a trace and examined the deadlock graph to see exactly what the deadlock is? This should at least let you know which part of the code to look at. For example, is the stored procedure deadlocking because it is being called by multiple users concurrently, or is it deadlocking with a different piece of code?
Here is a good link on learning to troubleshoot deadlocks. I always try to avoid using NOLOCK, for the reasons above. You might also want to better understand Lock Compatibility.

Deadlock issue with SQL Server 2008 and ADO.NET

In our applications we use neither ADO.NET transactions nor SQL Server transactions in procedures, and now we are getting the error below on our website when multiple people are using it.
Transaction (Process ID 73) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Is this error due to the lack of transactions? I thought consistency would be handled by the DB itself.
One more thing I noticed: the SqlCommand.Timeout property has been set to 10000. Could this be contributing to the error?
I am trying to solve this issue ASAP. Please help.
EDIT
I saw the IsolationLevel property of the ADO.NET transaction. What if I use an ADO.NET transaction with an appropriate IsolationLevel, e.g. "ReadUncommitted" while reading and "Serializable" while writing?
Every SQL DML (INSERT, UPDATE, DELETE) or DQL (SELECT) statement runs inside a transaction. The default behaviour of SQL Server is to open a new transaction (if one doesn't exist) and, if the statement completes without errors, to automatically commit the transaction.
The IMPLICIT_TRANSACTIONS behaviour that Sidharth mentions basically gets SQL Server to change its behaviour somewhat - it leaves the transaction open when the statement completes.
To get better information in the SQL Server error log, you can turn on a trace flag. It will then tell you which connections were involved in the deadlock (not just the one that got killed) and which resources were involved. You may then be able to determine what pattern of behaviour is leading to the deadlocks.
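On SQL Server 2008 the usual deadlock trace flags are 1204 and 1222 (1222 produces the more detailed output); for example:

-- Log detailed deadlock information for all sessions (-1 = globally)
DBCC TRACEON (1222, -1);

-- To make this survive restarts, add -T1222 as a SQL Server
-- startup parameter instead of running DBCC TRACEON.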
If you're unable to determine the underlying cause, you may have to add some additional code to your application - that catches sql errors due to deadlocks, and retries the command multiple times. This is usually the last resort - it's better to determine which tables/indexes are involved, and work out a strategy that avoids the deadlocks in the first place.
IsolationLevel is your best bet. The default isolation level of a transaction (for example, a System.Transactions TransactionScope) is Serializable, which is the most stringent, and at that level the chances of a deadlock are very high. Set it to ReadCommitted while reading and leave it Serializable while writing.
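For reference, the T-SQL equivalent of what the ADO.NET IsolationLevel controls is a session-level setting (a sketch; issue it on the relevant connection before starting the transaction):

-- before read-heavy work on this connection
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- before the writing transaction on this connection
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;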
SQL Server can use implicit transactions, which might be what is happening in your case. Try turning them off:
SET IMPLICIT_TRANSACTIONS OFF;
Read about it here: http://msdn.microsoft.com/en-us/library/ms190230.aspx