C3p0 - APPARENT DEADLOCK on MSSQL, but not PostgreSQL or MySQL

We are getting exceptions like this
com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#5b7a7896 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#55bc5e2a (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#41ca435f (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#460d33b7 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
Pending Tasks:
when load testing our application on MSSQL 2008 R2 (jTDS or official MS JDBC doesn't matter). We never get this exception when running the same tests against PostgreSQL or MySQL.
We don't just want to increase the number of helper threads for c3p0 (which solves the problem, but for how long?). We want to know what the problem is, since it works fine with the other DBMSs.
The application behaves like this:
Send X requests
Wait for a while -> DEADLOCK
Send X requests
Wait for a while -> DEADLOCK
Does anyone know or has an idea why we have this behavior with MSSQL?
Thanks, Adrian
(Btw, BoneCP also works without any problems.)

SQL Server has a much more restrictive locking strategy compared to PostgreSQL or InnoDB.
In particular, it will block SELECTs on rows (possibly whole tables?) that are being updated from a different connection/transaction (in the default configuration).
You should make sure that you are not selecting the same rows in one session that are being updated from another.
If you can't change the sequence of your code, you might get away with using "dirty reads" in SQL Server.
If I remember correctly, this is accomplished by adding WITH (NOLOCK) to the SELECT statements (but I'm not entirely sure).
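For illustration, a minimal JDBC sketch of that dirty-read approach; the connection URL, table and column names here are made up, and the same effect can also be had per connection with the standard READ UNCOMMITTED isolation level:

import java.sql.*;

public class DirtyReadExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details and table/column names - adjust to your schema.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=mydb", "user", "password")) {
            // Per-statement table hint (T-SQL specific): read without taking shared locks.
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT id, status FROM orders WITH (NOLOCK) WHERE status = 'OPEN'")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("status"));
                }
            }
            // Alternative per-connection setting via standard JDBC: dirty reads for everything.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        }
    }
}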
Edit
A different possibility (if you are on SQL Server 2005 or later) would be to use the new "snapshot isolation" to avoid blocking selects.
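A rough sketch of what enabling that looks like (one-time DDL; the database name and credentials are placeholders, and you can run the same statements from any T-SQL client instead of JDBC):

import java.sql.*;

public class EnableSnapshotIsolation {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL/credentials; needs a login with ALTER DATABASE rights.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=mydb", "admin", "password");
             Statement st = con.createStatement()) {
            // Allow explicit SNAPSHOT transactions in this database.
            st.execute("ALTER DATABASE mydb SET ALLOW_SNAPSHOT_ISOLATION ON");
            // Or make plain READ COMMITTED use row versioning, so readers stop
            // blocking on writers (switching this needs no other active sessions).
            st.execute("ALTER DATABASE mydb SET READ_COMMITTED_SNAPSHOT ON");
        }
    }
}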

Related

Does a transaction work for statements across multiple tables?

I'm using JDBC with mysql. I have a pretty complex series of inserts and updates that I'm doing in a single transaction. This seems to work for the most part, but about 1% of the time I find that the data in one of my tables is in an inconsistent state.
I'm rolling back the transaction if an error occurs, but am not sure how to start debugging. My setup generally looks like:
try {
    conn.setAutoCommit(false);
    PreparedStatement stmt1 = conn.prepareStatement("insert into table1");
    stmt1.executeUpdate();
    stmt1.close();
    PreparedStatement stmt2 = conn.prepareStatement("update table2");
    stmt2.executeUpdate();
    stmt2.close();
    // ... more statements ...
    conn.commit();
}
catch (Exception ex) {
    conn.rollback();
}
I'm using a 2010 version of MySQL. I might try updating that, but I have a feeling it's more likely something in my application code that's causing the inconsistency.
Are there any debugging tools at the database level that I might find helpful? Any other pointers? I'm using JDBC with all default settings; I wonder if there is a stricter transaction isolation level I need to use for this kind of scenario?
Thanks
----- Note -----
All my tables are InnoDB.
Hm... interesting. Yes, it should work. We have used really huge transactions across multiple tables many times and never experienced anything strange...
Are you sure it is not you who is producing the inconsistency (whatever that means here, you didn't specify)? By simply inserting/updating the wrong things? :-)
Just an idea - we have run into this several times: deadlock resolution. DB servers are supposed to handle that. The chance that a deadlock occurs is higher if you have several parallel threads and the transaction blocks manipulate more tables. In that case some of your transactions can be aborted (and rolled back) by the DB server itself, and those transactions will result in an error.
The code you wrote above only rolls back in the exception case (an aborted transaction has already been rolled back, so it doesn't do too much), but have you tried to print/log the exceptions? If not, you should.
Of course transactions run separately from each other, but this could explain why you experience this strange behaviour in only 1-2% of the cases...
You should check the logs of your MySQL server too. It is also possible the server itself fails for some reason. One more tip: you may try running "mysqltop" (or "mtop", I hope I remember the name of this tool correctly). It is able to monitor and show you what happens inside the DB server. Although it is mostly used to track the performance of your SQL, it also shows failures. Maybe running it could help you out...
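To make the logging suggestion concrete, here is a hedged JDBC sketch (the runStatements helper is just a stand-in for the question's inserts/updates; 1213 and SQLState 40001 are how MySQL reports a deadlock rollback):

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockAwareTransaction {

    // Placeholder for the inserts/updates shown in the question.
    static void runStatements(Connection conn) throws SQLException {
        // ... PreparedStatements for table1, table2, ...
    }

    static void execute(Connection conn) throws SQLException {
        try {
            conn.setAutoCommit(false);
            runStatements(conn);
            conn.commit();
        } catch (SQLException ex) {
            conn.rollback();
            // Log everything: message, SQLState and vendor error code.
            System.err.println("Transaction failed: " + ex.getMessage()
                    + " (SQLState=" + ex.getSQLState()
                    + ", errorCode=" + ex.getErrorCode() + ")");
            // MySQL reports "Deadlock found when trying to get lock" as error
            // code 1213 with SQLState 40001; the server has already rolled the
            // transaction back, so the usual reaction is to retry the whole thing.
            if (ex.getErrorCode() == 1213 || "40001".equals(ex.getSQLState())) {
                System.err.println("Deadlock victim - consider retrying the transaction.");
            }
            throw ex;
        }
    }
}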
Perhaps you use DDL (create table, alter table, and so on) in your statements?
I am not sure about MySQL but it may not be able to roll back DDL statements.
For example:
PostgreSQL can roll back DDL,
Oracle performs a commit before executing DDL.
See here: http://dev.mysql.com/doc/refman/5.0/en/cannot-roll-back.html
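A small sketch of that failure mode, assuming made-up table names: in MySQL any DDL statement implicitly commits the open transaction, so the earlier DML can no longer be rolled back:

import java.sql.*;

public class DdlImplicitCommit {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection settings and table names.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "password")) {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO table1 (col) VALUES ('x')");
                // DDL in MySQL causes an implicit commit: the INSERT above
                // is now permanent, whatever happens afterwards.
                st.executeUpdate("ALTER TABLE table2 ADD COLUMN extra INT");
                throw new SQLException("simulated failure");
            } catch (SQLException ex) {
                conn.rollback();   // no longer undoes the INSERT
                throw ex;
            }
        }
    }
}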

What happens when multiple simultaneous update requests received for a SQL table?

I have a table in a SQL Server database in which I record the latest activity time of users. Can somebody please confirm that SQL Server will automatically handle the scenario where multiple update requests are received simultaneously for different users? I am expecting 25-50 concurrent update requests on this table, but each request is responsible for updating different rows. Do I need something extra like connection pooling, etc.?
Yes, SQL Server will handle this scenario.
It is a DBMS and it is built for scenarios like this one.
When you insert/update/delete a row, SQL Server will lock the table/row/page to guarantee that you can do what you want. This lock is released when you are done inserting/updating/deleting the row.
Check this Link
And introduction-to-locking-in-sql-server
But there are a few things you should do (see the sketch after this list):
1 - Make sure whatever you do is done quickly. Because of locking, if you hold a transaction open for too long, other requests against the same table may be blocked until you are done, and that can lead to timeouts.
2 - Always use a transaction.
3 - Make sure to adjust the fill factor of your indexes. Check Fill Factor on MSDN.
4 - Adjust the isolation level according to what you need.
5 - Get rid of unused indexes to speed up your inserts/updates.
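As a sketch of points 1, 2 and 4 in JDBC terms (the table, column and isolation choice here are illustrative assumptions, not something from your schema):

import java.sql.*;

public class TouchLastActivity {
    // Updates a single user's row inside a short, explicit transaction,
    // keeping the lock window as small as possible.
    static void touch(Connection con, long userId) throws SQLException {
        con.setAutoCommit(false);
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE user_activity SET last_seen = SYSDATETIME() WHERE user_id = ?")) {
            ps.setLong(1, userId);
            ps.executeUpdate();
            con.commit();          // release the row lock as soon as possible
        } catch (SQLException ex) {
            con.rollback();
            throw ex;
        }
    }
}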
Connection pooling is not really related to your question. Connection pooling is a technique that avoids the extra overhead of creating a new connection to the database every time you send a request. In C# and other languages that use ADO.NET this is done automatically. Check this out: SQL Server Connection Pooling.
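In Java, by contrast, the pool is configured explicitly; as an illustration only, here is a minimal c3p0 setup (the same pool as in the first question above) with placeholder URL and credentials:

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;

public class PoolSetup {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setDriverClass("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        pool.setJdbcUrl("jdbc:sqlserver://localhost;databaseName=mydb"); // placeholder URL
        pool.setUser("user");
        pool.setPassword("password");
        pool.setMinPoolSize(5);
        pool.setMaxPoolSize(50);   // comfortably covers 25-50 concurrent requests

        try (Connection con = pool.getConnection()) {
            // ... run the UPDATE; closing returns the connection to the pool ...
        }
        pool.close();
    }
}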
Other links that may be useful:
best-practices-for-inserting-updating-large-amount-of-data-in-sql-2008
Speed Up Insert Performance

Deadlock issue with SQL Server 2008 and ADO.NET

In our applications we use neither ADO.NET transactions nor SQL Server transactions in procedures, and now we are getting the error below on our website when multiple people are using it.
Transaction (Process ID 73) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Is this error due to the lack of transactions? I thought consistency would be handled by the DB itself.
And one thing I noticed is that the SQLCommand.Timeout property has been set to 10000. Could this be a factor in the error?
I am trying to solve this issue ASAP. Please help.
EDIT
I saw the IsolationLevel property of ADO.NET transactions; what if I use ADO.NET transactions with an appropriate IsolationLevel, like "ReadUncommitted" while reading and "Serializable" while writing?
Every SQL DML (INSERT, UPDATE, DELETE) or DQL (SELECT) statement runs inside a transaction. The default behaviour for SQL Server is to open a new transaction (if one doesn't exist), and if the statement completes without errors, to automatically commit the transaction.
The IMPLICIT_TRANSACTIONS behaviour that Sidharth mentions basically gets SQL Server to change its behaviour somewhat - it leaves the transaction open when the statement completes.
To get better information in the SQL Server error log, you can turn on a trace flag. This will then tell you which connections were involved in the deadlock (not just the one that got killed), and which resources were involved. You may then be able to determine what pattern of behaviour is leading to the deadlocks.
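The answer doesn't name the flag; for reference, 1204 and 1222 are SQL Server's deadlock trace flags. A hedged sketch of turning 1222 on (the connection details are placeholders, and any T-SQL client can run the DBCC command directly):

import java.sql.*;

public class EnableDeadlockTraceFlag {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection; enabling a global trace flag needs sysadmin rights.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost", "admin", "password");
             Statement st = con.createStatement()) {
            // Trace flag 1222 writes a detailed deadlock report to the error log;
            // -1 applies it globally. (1204 is the older, terser variant.)
            st.execute("DBCC TRACEON (1222, -1)");
        }
    }
}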
If you're unable to determine the underlying cause, you may have to add some additional code to your application that catches SQL errors due to deadlocks and retries the command a number of times. This is usually a last resort - it's better to determine which tables/indexes are involved and work out a strategy that avoids the deadlocks in the first place.
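The question is ADO.NET, but the retry pattern is the same in any client; here is a rough JDBC-flavoured sketch (the Work callback and the attempt limit are illustrative assumptions; error 1205 / SQLState 40001 is how SQL Server reports the deadlock victim):

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetry {

    interface Work { void run(Connection con) throws SQLException; }

    // Retries the unit of work a limited number of times when it is chosen as
    // a deadlock victim, as the error message itself suggests ("Rerun the transaction").
    static void runWithRetry(Connection con, Work doWork, int maxAttempts)
            throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                con.setAutoCommit(false);
                doWork.run(con);
                con.commit();
                return;
            } catch (SQLException ex) {
                con.rollback();
                boolean deadlock = ex.getErrorCode() == 1205
                        || "40001".equals(ex.getSQLState());
                if (!deadlock || attempt >= maxAttempts) {
                    throw ex;   // not a deadlock, or out of retries
                }
                // otherwise loop and rerun the whole transaction
            }
        }
    }
}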
IsolationLevel is your best bet. The default isolation level of transactions is Serializable, which is the most stringent, and at this level the chances of a deadlock are very high if there is a circular reference. Set it to ReadCommitted while reading and leave it at Serializable while writing.
SQL Server can use implicit transactions, which is what might be happening in your case. Try turning it off:
SET IMPLICIT_TRANSACTIONS OFF;
Read about it here: http://msdn.microsoft.com/en-us/library/ms190230.aspx

Extremely slow insert from Delphi to Remote MySQL Database

Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server.
So far, I have tried:
ADO using MySQL ODBC Driver
Zeoslib v7 Alpha
MyDAC
I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then Table direct mode, and also cached updates Table mode using ApplyUpdates and Commit. With MyDAC I used table access mode, then direct SQL insert, and then batched SQL insert.
With every technology I have tried, I set compression on and off with no discernible difference.
So far I have seen pretty much the same rate across the board: 7.5 records per second!!!
Now, from this point I would assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly - which kind of means that it was quick).
Edit 1
It is quicker for me to write the SQL to a file, upload the file to the server via FTP and then import it directly on the remote server - I wonder if they are perhaps throttling incoming MySQL traffic, but that doesn't explain why MySQL Workbench was so quick!
Edit 2
At the most basic level, the code has been:
while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  qMSSQL.Next;
end;
I then tried
qMySQL.CachedUpdates := True;
i := 0;
while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  Inc(i);
  if i > 100 then
  begin
    qMySQL.ApplyUpdates;
    i := 0;
  end;
  qMSSQL.Next;
end;
qMySQL.ApplyUpdates;
Now, in this code with CachedUpdates:=False (which obviously never actually wrote back to the database) the speed was blisteringly fast!!
To be perfectly honest, I think it's the connection - I feel it's the connection... Just waiting for them to get back to me!
Thanks for all your help!
You can try AnyDAC and its Array DML feature. It may speed up a standard SQL INSERT by several times.
Sorry that this reply comes long after you asked the question.
I had a similar problem. BDS 2006 to MySQL via ODBC across the network - it took 25 minutes to run, around 25 inserts per second. I was using a TDatabase connection, attached the TTable/TQuery to it, and prepared the SQL statements.
The major improvement came when I started opening transactions within the loop. A simple example: Memberships have Members and Periods. Start a transaction before the insert of the Membership and Members, and commit after. The number of memberships was 01585, and before transactions it took 279.90 seconds to process all the Membership records, but afterwards it took 6.71 seconds.
Almost too good to believe, and I am still working through fixing the code for the other slow bits.
Maybe you have solved your problem by now, Mark, but it may help someone else.
Are you using query parameters? The fastest way to insert should be to use plain queries and parameters (i.e. INSERT INTO table (field) VALUES (:field)), preparing the query and then assigning the parameters and executing it as many times as required within a single transaction, committing at the end (don't use any flavour of autocommit).
In most databases that avoids hard parses each time the query is executed, which take time. Parameters allow the query to be parsed only once and then re-executed many times as needed.
Use the server facilities to check what's going on - many offer a way to inspect what running statements are doing.
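The thread is Delphi, but for comparison here is the same idea expressed in JDBC (the table and field names reuse the question's placeholders): prepare once, bind and batch per row, and commit a single transaction at the end.

import java.sql.*;
import java.util.List;

public class BulkInsert {
    // One prepared statement, many bound executions, one transaction.
    static void insertAll(Connection con, List<String> values) throws SQLException {
        con.setAutoCommit(false);                        // no autocommit per row
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO tablename (fieldname1) VALUES (?)")) {
            for (String v : values) {
                ps.setString(1, v);
                ps.addBatch();                           // parse once, execute many
            }
            ps.executeBatch();
            con.commit();                                // single commit at the end
        } catch (SQLException ex) {
            con.rollback();
            throw ex;
        }
    }
}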
I'm not sure about ZeosLib, but using ADO with the ODBC driver you will not get the fastest way to insert records. Here are a few steps that may make your insertion faster:
Use MyDAC for direct access; it works without the slow ODBC > ADO > OLEDB > MySqlLib chain to connect to MySQL.
Open the connection once, before the insertion.
If you have a large insertion, such as 1000 records or more, try using a transaction and committing after every 100 records or so, depending on the number of records.
Point 3 may make your insertion faster even with ZeosLib or ADO.
You've got two separate things going on here. First, your Delphi program is creating INSERT statements and sending them to the DB server, and then the server is handling them. You need to examine both ends to find the bottleneck. I'm not too familiar with MySQL tools, but I bet you could find a SQL profiler for it easily enough. Use it to profile your inserts from the Delphi app, compare it to running inserts from the Workbench tool, and see if there's a significant difference.
If not, then the slowdown is in your app. Try hooking it up to Sampling Profiler or some other profiling tool that understands Delphi, and it'll show you where you're spending lots of time. Once you know that, you can work on attacking the problem, or maybe come back here to ask a more specific question. But until you know where the problem is coming from, any answers you get here are just gonna be educated guesses at best.

Deadlock troubleshooting in Sql Server 2008

My website doesn't seem to handle a high number of visitors, I believe it's because the server is too simple.
Two hours ago my website was getting a lot of hits and I noticed that 3 deadlock errors occurred. The error is:
System.Data.SqlClient.SqlException: Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I'm not sure why this happened... Looking at the stack trace, I could see that this happened with a select query.
Anyone knows what may be the cause of this error?
The server is running Windows 2008 and Sql Server 2008.
SQL Server 2008 has multiple ways to identify processes and queries involved in deadlock.
If deadlocks are easy to reproduce and happen frequently, and you can profile the SQL Server (you have access, and you accept the performance cost on the server while the profiler is enabled), SQL Profiler will give you a nice graphical view of the deadlock.
This page has all the information you need to use deadlock graphs
http://sqlmag.com/database-performance-tuning/gathering-deadlock-information-deadlock-graph
Most of the time, though, reproducing deadlocks is hard, or they happen in a production environment where we don't want to attach the Profiler and affect performance.
In that case I would use this query to get the deadlocks that have happened:
SELECT
    xed.value('@timestamp', 'datetime') AS Creation_Date,
    xed.query('.') AS Extend_Event
FROM
(
    SELECT CAST([target_data] AS XML) AS Target_Data
    FROM sys.dm_xe_session_targets AS xt
    INNER JOIN sys.dm_xe_sessions AS xs
        ON xs.address = xt.event_session_address
    WHERE xs.name = N'system_health'
      AND xt.target_name = N'ring_buffer'
) AS XML_Data
CROSS APPLY Target_Data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(xed)
ORDER BY Creation_Date DESC
I would NOT go in the direction of using (NOLOCK) to fix deadlocks. That is a slippery slope and hides the original problem.
Writes will block reads on SQL Server, unless you have row versioning enabled. You should use the sp_who2 stored procedure and a SQL Profiler trace. sp_who2 will tell you which processes are blocking which, and the profiler will tell you what the last statement was for the blocking process.
If you don't mind dirty reads, you can try putting (NOLOCK) after your table names in your SELECT queries. The trade-off here is that you are not guaranteed the most up-to-date data, as UPDATE and INSERT statements that are currently executing are ignored.
Usually this is not too much of a train smash, as most systems read far more than they update/insert, but obviously it depends on the nature of your application.
Alternatively have a look at http://www.sql-server-performance.com/tips/deadlocks_p1.aspx