Kill SQL Server transactions after some minutes - sql-server-2008

Is there any way to configure SQL Server 2008 to kill a transaction if it has been neither cancelled nor committed for some amount of time? (Say power, the network connection, or whatever gets cut to the computer holding it open.)
Either so that it happens automatically according to some defined rule set, or by writing and calling a command-line application that queries SQL Server for active transactions and how long they have been running... and then instructs SQL Server to close down the ones that are "frozen".

To quote Gail Shaw from here:
SQL Server does not time queries out, the connecting application (in this case query analyser) does.
Whichever tech you're using to connect (ADO, etc.) will probably have a connection timeout and an execution timeout property that you can change in your calling code. Defaults are usually 30 secs.
You could potentially wrap something like this in a loop that kills each offending spid:
select datediff(second, last_batch, getdate()) as secs_running, *
from sys.sysprocesses
where hostname != ''
and open_tran = 1
There would probably be many opinions on how best to find which processes are "safe" to kill, and I would certainly be a little worried about automatically doing such a thing based upon an arbitrary timespan. I'm also not sure that any data changes done in the process are guaranteed to be rolled back.
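For illustration only, here is a rough sketch of what such a kill loop could look like in T-SQL. The 10-minute threshold and the choice of which sessions to target are arbitrary assumptions, and something like this should be tested very carefully before it is ever automated:

DECLARE @spid int;
DECLARE @cmd varchar(20);
DECLARE offenders CURSOR FOR
    SELECT spid
    FROM sys.sysprocesses
    WHERE hostname != ''
      AND open_tran = 1
      AND DATEDIFF(minute, last_batch, GETDATE()) > 10;  -- arbitrary example threshold
OPEN offenders;
FETCH NEXT FROM offenders INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'KILL ' + CAST(@spid AS varchar(10));
    EXEC (@cmd);  -- KILL does not accept a variable, hence the dynamic SQL
    FETCH NEXT FROM offenders INTO @spid;
END;
CLOSE offenders;
DEALLOCATE offenders;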

Related

What could cause MySQL to intermittently fail to return a row?

I have a MySQL database that I am running very simple queries against as part of a webapp. Starting today I have received reports from users that they got an error saying their account doesn't exist, and when they log in again, it does (this happened to only a few people, and only once to each, so clearly it is rare). Based on my backend code, this error can only occur if the same query returns 0 rows the first time and 1 row the second. My query is basically SELECT * FROM users WHERE username="...". How is this possible? My suspicion is that the hard disk is having I/O failures, but I am unsure because I would not expect MySQL to fail silently in this case. That said, I don't know what else it could be.
This could be a bug in your mysql client (though without seeing how your code is structured, it could also just be a bad query). However, let's assume your query has been working fine up until now with no prior issues, so we'll rule out bad code.
With that in mind, I'm assuming it's either a bug in your mysql client or that your max connection count is being reached (I had this issue with my previous host - Hostinger).
If the issue is a bug in your mysql client, you can set the optimizer switch on a per-session basis by running this:
SET SESSION optimizer_switch="index_merge_intersection=off";
or in your my.cnf you can set it globally
[mysqld]
optimizer_switch=index_merge_intersection=off
As for max connections, you can either increase your max_connections value (if your host allows it), or you'll have to add logic to close the mysql connection after each query execution:
$mysqli->close();
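If you suspect the connection limit, a quick way to check it, and (where the host allows it) raise it at runtime, is sketched below; the value 200 is only an example, and the change does not survive a server restart:

SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
SET GLOBAL max_connections = 200;  -- example value; requires the SUPER privilege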

Performance Issues with MySQL Proxy

One of our customers has a MySQL back end as part of their solution.
They have configured it to have a common master database and a specific slave database per client (they have 10+ slaves). They are using MySQL proxy for this.
They are facing some performance issues including database inserts/updates being queued and taking quite some time to write to the slave databases.
Can you suggest how this can be improved? Are there tools that can be used to help identify where the problems are? Does this seem like a standard approach to you (common master with client specific slaves controlled via MySQL proxy)?
Any advice would be appreciated.
Thanks,
Andy
I had the same behaviour, but in my case the trouble was this:
One of my updates was finishing with an error; mysql-proxy (and rw-splitter.lua specifically) treated this situation as if the connection could be reused by another client, and returned the connection to the pool. That means that when the client received the error and tried to roll back the transaction, the rollback went out over another (or a brand new) connection from the pool and had no effect. Meanwhile the failed UPDATE whose transaction had hit the error kept its locks until the transaction was rolled back by timeout (28800 seconds by default in my case with mysql-proxy), so for quite a long time.
The problem was resolved.
Patch:
Find the following block in rw-splitter.lua:
is_in_transaction = flags.in_trans
local have_last_insert_id = (res.insert_id and (res.insert_id > 0))

if not is_in_transaction and
   not is_in_select_calc_found_rows and
   not have_last_insert_id then
and change it to
if res.query_status == proxy.MYSQLD_PACKET_ERR and is_in_transaction then
    if is_debug then
        print ("(read_query_result) ERROR happened while transaction staying on the same backend")
    end
    return
end

is_in_transaction = flags.in_trans
local have_last_insert_id = (res.insert_id and (res.insert_id > 0))

if not is_in_transaction and
   not is_in_select_calc_found_rows and
   not have_last_insert_id then

Lost connection to MySQL server during query mysqlclient-dev multithreading

Folks, I have another issue, this time regarding the libmysqlclient-dev API.
Here's the story:
I created about 10 threads, each of which runs an SQL query every 2 seconds in a loop -
in other words, 10 queries every 2 seconds at the same time - and it ended with the MySQL error message "Lost connection to MySQL server during query".
Questions:
In this case, will my algorithm always cause the MySQL server to lose the connection?
If so, will this issue appear if I use another database (say Oracle, Postgres, etc.), or will the result be the same?
What are you doing in the thread that loses the connection, and what are the other threads doing at the same time?
It depends. Consider your question the equivalent of "my car makes a funny noise when I drive down Street X. Should I get a new car?". If the noise is due to you falling into a pothole, changing cars won't make any difference.
Well, everything now runs better once:
the MYSQL* handle is protected by a POSIX mutex https://computing.llnl.gov/tutorials/pthreads/#MutexLocking
the code is compiled with the mysql_config --libs_r parameter
That's it for now - any additions?

Query Execution time in Management Studio & profiler. What does it measure?

I have my production SQL Server in a remote data center (and the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 secs) on our local development SQL Server, and we were OK with it. It was promoted to production, and when I run the same query on the production DB (which is in the data center) from my local Management Studio, I see that the query takes about 7 minutes 17 secs to run (shown at the bottom right corner of Management Studio). When I ran Profiler, the Duration for that query showed as 437101 - which, at about 437101 milliseconds, matches the 7:17 shown in Management Studio. My DBA says that in prod the view takes just about 60 to 80 seconds, even though I see different numbers from Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client Processing time: 90393
Total Execution time: 92221
Wait time on server replies: 1828
My best guess on what "duration" in the Profiler means is "the time taken by SQL Server (the optimization engine parsing the query, generating the query plan or using the existing query plan, plus fetching records from different pages) to generate the result set, which excludes the time taken by the data to travel over the wire to the client".
Edit: I find that both these times are about the same (Management Studio vs Profiler). How do they relate to the times I see in the client statistics?
Can someone throw more light on these?
If I'm understanding your question correctly, you are first questioning the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in lower right-hand corner for general time and/or by SET STATISTICS TIME ON). In addition to that, you seem to be unconvinced of the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS would report back via SET STATISTICS TIME ON:
"Displays the number of milliseconds
required to parse, compile, and
execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the
event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler will report in microseconds if you're going against SQL 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the select, which includes delivery to the client; this is consistent in both cases.
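If you want to see the SSMS-side numbers explicitly, SET STATISTICS TIME ON prints the parse/compile and execution times per statement to the Messages tab; a minimal sketch, where the view name is a placeholder:

SET STATISTICS TIME ON;
SELECT * FROM dbo.YourView;  -- placeholder view name
SET STATISTICS TIME OFF;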
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
select
a.session_id
,a.start_time
,a.status
,a.command
,db_name(a.database_id) as database_name
,a.blocking_session_id
,a.wait_type
,a.wait_time
,a.cpu_time
,a.total_elapsed_time
,b.text
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(a.sql_handle) b
where a.session_id != @@spid;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem - otherwise, check out what does come of this. If you're Profiling the query of your remote execution, the Duration will be reflective of the time statistics you see in SSMS. HOWEVER, if you're using Profiler and finding that the duration of this query when executed from one of the web servers that sits in the same data center as the SQL Server is still taking 7 minutes, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, try to filter for your view and take the average to see if you're on target for performance.
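As an alternative to tracing, a rough sketch against sys.dm_exec_query_stats (SQL 2005 and later) can also give you the average server-side elapsed time; the LIKE filter on the view name is just an assumption about how you would find the statement:

SELECT
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,  -- total_elapsed_time is in microseconds
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%YourViewName%';  -- placeholder view name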
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this...
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Property tab for your query you may find some magical "Elapsed Time" that may give you some execution time...
Hope it helps...
Try with this:
DECLARE @time AS DATETIME = CURRENT_TIMESTAMP;
-- Your Query
SELECT CAST(DATEDIFF(SECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     + ','
     + CAST(DATEDIFF(MICROSECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     AS 'Execution Time';  -- note: DATETIME is only accurate to about 3 ms

Extremely slow insert from Delphi to Remote MySQL Database

Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server.
So far, I have tried:
ADO using MySQL ODBC Driver
Zeoslib v7 Alpha
MyDAC
I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then used Table direct mode and also cached updates Table mode using ApplyUpdates and Commit. With MyDAC I used table access mode, then direct SQL insert, and then batched SQL insert.
With all the technologies I have tried, I set compression on and off with no discernible difference.
So far I have seen pretty much the same rate across the board: 7.5 records per second!!!
Now, from this I would assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly - which kind of means it was quick).
Edit 1
It is quicker for me to write the SQL to a file, upload the file to the server via FTP and then import it directly on the remote server - I wonder if they are perhaps throttling incoming MySQL traffic, but that doesn't explain why MySQL Workbench was so quick!
Edit 2
At the most basic level, the code has been:
while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  qMSSQL.Next;
end;
I then tried
qMySQL.CachedUpdates := True;
i := 0;
while not qMSSQL.EOF do
begin
  qMySQL.SQL.Clear;
  qMySQL.SQL.Add('INSERT INTO tablename (fieldname1) VALUES (:fieldname1)');
  qMySQL.ParamByName('fieldname1').AsString := qMSSQL.FieldByName('fieldname1').AsString;
  qMySQL.ExecSQL;
  Inc(i);
  if i > 100 then
  begin
    qMySQL.ApplyUpdates;
    i := 0;
  end;
  qMSSQL.Next;
end;
qMySQL.ApplyUpdates;
Now, in this code with CachedUpdates:=False (which obviously never actually wrote back to the database) the speed was blisteringly fast!!
To be perfectly honest, I think it's the connection - I feel it's the connection... Just waiting for them to get back to me!
Thanks for all your help!
You can try AnyDAC and its Array DML feature. It may speed up a standard SQL INSERT several times over.
Sorry that this reply comes long after you asked the question.
I had a similar problem. BDS2006 to MySQL via ODBC across the network - it took 25 minutes to run, at around 25 inserts per second. I was using a TDatabase connection with the TTable/TQuery components attached to it, and prepared the SQL statements.
The major improvement came when I started using transactions within the loop. A simple example: Memberships have Member Periods. Start a transaction before the insert of the Membership and Members, and commit after. The number of memberships was 01585; before transactions it took 279.90 seconds to process all the Membership records, but after it took 6.71 seconds.
Almost too good to believe, and I am still working through fixing the code for the other slow bits.
Maybe you have solved your problem by now, Mark, but it may help someone else.
Are you using query parameters? The fastest way to insert should be using plain queries and parameters (i.e. INSERT INTO table (field) VALUES (:field)), preparing the query, and then assigning parameters and executing it as many times as required within a single transaction, committing at the end (don't use any flavour of autocommit).
That in most databases avoids hard parses each time the query is executed, which requires time. Parameters allow the query to be parsed only once, and then re-executed many times as needed.
Use the server facilities to check what's going on - many offer a way to inspect what running statements are doing.
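To illustrate the shape of this on the MySQL side (the table and column names are the hypothetical ones from the question), the pattern is: prepare once, execute many times, commit once:

START TRANSACTION;
PREPARE ins FROM 'INSERT INTO tablename (fieldname1) VALUES (?)';
SET @v = 'first value';
EXECUTE ins USING @v;   -- one SET/EXECUTE pair per row
SET @v = 'second value';
EXECUTE ins USING @v;
DEALLOCATE PREPARE ins;
COMMIT;

The data access components do the same thing client-side; the point is that the statement is parsed once and the rows are committed in a single transaction rather than one at a time.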
I'm not sure about ZeosLib, but using ADO with the ODBC driver you will not get the fastest way to insert records. Here are a few steps that may make your insertion faster:
Use MyDAC for direct access; it works without the slow ODBC > ADO > OLEDB > MySqlLib chain to connect to MySQL.
Open the connection once, before the insertion starts.
If you have a large insertion, such as 1000 records or more, try using a transaction and committing after every 100 records or so, depending on the number of records.
Point 3 may make your insertion faster even with ZeosLib or ADO.
You've got two separate things going on here. First, your Delphi program is creating Insert statements and sending them to the DB server, and then the server is handling them. You need to examine both ends to find the bottleneck. I'm not too familiar with MySQL tools, but I bet you could find a SQL profiler for it easily enough. Use it to profile your inserts from the Delphi app, compare it to running inserts from the Workbench tool, and see if there's a significant difference.
If not, then the slowdown is in your app. Try hooking it up to Sampling Profiler or some other profiling tool that understands Delphi, and it'll show you where you're spending lots of time. Once you know that, you can work on attacking the problem, or maybe come back here to ask a more specific question. But until you know where the problem is coming from, any answers you get here are just gonna be educated guesses at best.
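On the MySQL side, a couple of ways to see what the server is actually receiving; note that toggling the logs at runtime needs MySQL 5.1 or later, so it may not apply to a 5.0.x server:

SHOW FULL PROCESSLIST;              -- statements currently running and their elapsed time
SET GLOBAL general_log = 'ON';      -- 5.1+: log every statement the server receives
SET GLOBAL slow_query_log = 'ON';   -- 5.1+: log only the slow ones
SET GLOBAL long_query_time = 1;     -- threshold in seconds for the slow query log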