SQL Server query optimisation - sql-server-2008

I am optimising a large SP in SQL Server 2008 which uses a lot of dynamic SQL. It is a query which searches the database with a number of optional parameters and, short of coding for every possible combination of parameters, dynamic SQL has proven to be the most efficient way of executing this. The SQL string is built, including parameters, and then passed to sp_executesql with the parameter list. When running this in SSMS with any combination of parameters it runs very quickly (<1s) and returns results. When running from a Windows Forms application, however, it sometimes takes considerably longer.
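Roughly, the pattern looks like this (heavily simplified - the table and parameter names are made up for the sake of the example):

    -- @Name and @Region are optional parameters of the stored procedure
    DECLARE @sql nvarchar(max) = N'SELECT c.CustomerId, c.Name
                                   FROM dbo.Customers c
                                   WHERE 1 = 1';

    -- Only append the predicates for parameters that were actually supplied
    IF @Name IS NOT NULL
        SET @sql += N' AND c.Name LIKE @Name';
    IF @Region IS NOT NULL
        SET @sql += N' AND c.Region = @Region';

    EXEC sp_executesql @sql,
         N'@Name nvarchar(100), @Region nvarchar(10)',
         @Name = @Name, @Region = @Region;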
I have read that a difference in the ARITHABORT option can cause this (ON by default in SSMS and OFF in ADO), however I am unsure whether turning it on fixes the issue or merely masks it. Does the difference in settings change the behaviour of the query itself, or does it just mean that SQL Server will use different cached execution plans? If so, should clearing the cache and statistics reset the playing field?
I have also read differing points of view on OPTION (RECOMPILE). My understanding is that when sp_executesql is used with a parameter list, each combination of parameters will produce its own execution plan, but as the possible combinations of parameters are finite this should still result in optimised queries. Other sources say recompilation should be forced at the start of any SP that uses dynamic SQL.
I realise that different situations require different settings, however I am looking to understand these further before trying them arbitrarily on my very busy 24x7 production server. Apologies for the ramblings; I guess my question boils down to:
What causes SQL to run differently in SSMS and Windows Forms?
If it is ARITHABORT, is this an issue related to execution plans, or should I turn it on as a server default?
What is the optimal way to run queries that use dynamic SQL?

Run a trace in SQL Profiler to see what's actually being submitted to the server. Of course, you need to be aware of the impact of traces on production servers. In my experience, very short traces limited to a small set of events are not a big problem for servers that don't have a very high transactions-per-second load. Also, you can run a trace server-side, which reduces its impact, so that's an option for you.
Once you see what's actually being submitted to the database, this may help you understand the problem. For example, sometimes DB libraries prepare statements (getting a handle to a sort of temporary stored procedure), which can be costly if it is done for each issuance of the query, and it isn't needed with sp_executesql. Anyway, there's no way of knowing for sure whether any of this will be helpful until you try it.
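As a lighter-weight complement to a trace (not a replacement), you can also peek at what each session is currently running and what its ARITHABORT setting is via the DMVs - a rough sketch:

    -- What each user session is currently executing, plus its ARITHABORT setting
    SELECT s.session_id,
           s.program_name,
           s.arithabort,          -- 1 = ON, 0 = OFF
           t.text AS current_sql
    FROM sys.dm_exec_sessions s
    JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE s.is_user_process = 1;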

Related

Different results based on different client communicating with SQL Server

I had a very weird scenario occur at work today in our production system. Wondering if anyone has seen anything like this and has a good explanation for me.
We have a stored procedure in SQL Server 2014 and it was not returning any data when our .NET system called it.
We captured the call using SQL Profiler and replayed it in SQL Server Management Studio using the same SQL Authentication credentials, and it returned results as expected.
No matter how many times we tried each interchangeably, they were consistent: when the client was the .NET client it gave no results, and when it was SSMS it worked fine. Keep in mind, it's the exact same SP, parameters, etc.
We were able to resolve the issue by doing an SP recompile, but that feels like a temporary solution, and not knowing the original cause means that it can recur without warning. Furthermore, I was under the impression that an SP recompile only affects performance issues, not differing results.
Has anyone seen this before? Can you explain why an SP recompile fixed it?
Many Thanks!
Usually what you see is that the query will execute just fine in SSMS, but with the .NET client it will time out. That might be what you're describing here.
There's some debate about the exact cause of this issue.
On one side, the argument is that SSMS and the .NET client have different defaults. The most common offender is ARITHABORT, which SSMS sets to ON but most SQL Server providers leave at the server default (OFF):
WARNING
The default ARITHABORT setting for SQL Server Management Studio is ON.
Client applications setting ARITHABORT to OFF can receive different
query plans making it difficult to troubleshoot poorly performing
queries. That is, the same query can execute fast in management studio
but slow in the application. When troubleshooting queries with
Management Studio always match the client ARITHABORT setting.
This results in a cached query plan (a fairly complex topic) that works well in SSMS, but not so well with the .NET client.
On the other side, the argument is that the problem is just parameter sniffing, meaning your stored procedure has a bad plan cached. This side argues that the ARITHABORT setting merely causes the server to select a different plan, skipping the bad one; the core problem is the parameter sniffing, and the ARITHABORT setting is really a workaround.
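Either way, a quick check is to re-run the call in SSMS with the client's setting matched, so you share the application's cached plan (the procedure name and parameters below are placeholders):

    -- Reproduce the application's behaviour in SSMS by matching its SET options
    SET ARITHABORT OFF;
    EXEC dbo.YourSearchProc @Param1 = 'value', @Param2 = 42;
    SET ARITHABORT ON;   -- restore the SSMS default afterwards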
This SO question covers a lot of the possible solutions (setting ARITHABORT ON, using OPTION (RECOMPILE), using OPTIMIZE FOR UNKNOWN, etc.). That question also links to the seminal work of Erland Sommarskog, Slow in the Application, Fast in SSMS? Understanding Performance Mysteries, which is probably more than you'll ever want to know.
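For reference, the two hints mentioned there look roughly like this when applied to a statement inside the procedure (dbo.Orders and @CustomerId are placeholder names, with @CustomerId assumed to be a procedure parameter):

    -- Recompile this statement on every execution, so no stale plan is reused
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Or ask for a generic plan instead of one sniffed from the first parameter value
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);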

History of queries in MySQL

Is there any way to see the queries that are run against my MySQL database?
For example:
I have an application (OTRS) that lets me generate reports according to the criteria I choose. I would like to know which queries the application runs against the database,
because I will use them to integrate with other reporting software.
Is this possible?
Yes, you can enable logging in your MySQL server. There are several types of logs you can use, depending on what you want to log, ranging from errors only or slow queries to logs that record everything done on your server.
See the full doc here
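For example, assuming MySQL 5.1 or later, the general query log can be switched on at runtime (the file path is just an example):

    -- Send log output to a file and switch the general query log on
    SET GLOBAL log_output = 'FILE';
    SET GLOBAL general_log_file = '/var/log/mysql/general.log';
    SET GLOBAL general_log = 'ON';

    -- ... reproduce the report in the application, then switch it back off
    SET GLOBAL general_log = 'OFF';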
Although, as Nir says, MySQL can log all queries (you should be looking at the general log, or the slow log configured with a threshold of 0 seconds), this will show every query being run; on a production system it may prove difficult to match what you are doing in your browser with specific entries in the log.
The reason I suggest the slow query log is that there are tools available which will strip the parameters from the queries, allowing you to see which SQL statements are run most frequently; see the sketch below for the configuration.
If you have some proficiency in Perl it should be straightforward to output the queries from the application itself - all queries are processed via an abstraction layer.
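A rough sketch of that configuration, set at runtime (note that only new connections pick up the changed long_query_time; the file path is illustrative):

    -- Log every statement by treating anything over 0 seconds as "slow"
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
    SET GLOBAL long_query_time = 0;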
(Presumably you are aware that the schema is published)

SQL query optimization and debugging

The question is about best practice.
How do you perform a reliable SQL query test?
That is, the question is about optimization of the DB structure and the SQL query itself, not about system and DB performance, buffers, and caches.
When you have a complicated query with a lot of joins etc., one day you need to understand how to optimize it, and you come to the EXPLAIN command (mysql::explain, postgresql::explain) to study the execution plan.
After tuning the DB structure you execute the query to see any performance changes, but here you're at the mercy of multiple levels of optimization/buffering/caching. How do I avoid this? I need the pure query execution time and to be sure it is not affected.
If you know of different practices for different servers, please specify explicitly: mysql, postgresql, mssql, etc.
Thank you.
For Microsoft SQL Server you can use DBCC FREEPROCCACHE (to drop compiled query plans) and DBCC DROPCLEANBUFFERS (to purge the data cache) to ensure that you are starting from a completely uncached state. Then you can profile both uncached and cached performance, and determine your performance accurately in both cases.
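A typical test sequence might look like this - with the obvious caveat that you should never run it on a production server:

    -- Clear plan and data caches so the next run starts cold (test servers only!)
    CHECKPOINT;               -- flush dirty pages so DROPCLEANBUFFERS can remove everything
    DBCC DROPCLEANBUFFERS;    -- empty the data cache
    DBCC FREEPROCCACHE;       -- drop all compiled plans

    -- Report CPU/elapsed time and I/O for the query under test
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    -- ... run the query being tuned here ...
    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;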
Even so, a lot of the time you'll get different results at different times depending on how complex your query is and what else is happening on the server. It's usually wise to test performance multiple times in different operating scenarios to be sure you understand what the full performance profile of the query is.
I'm sure many of these general principles apply to other database platforms as well.
In the PostgreSQL world you need to flush the database cache as well as the OS cache as PostgreSQL leverages the OS caching system.
See this link for some discussions.
http://archives.postgresql.org/pgsql-performance/2010-08/msg00295.php
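On PostgreSQL 9.0 and later you can also see how much of a query was served from cache without flushing anything, using EXPLAIN with the BUFFERS option (the table here is a placeholder):

    -- Shows actual run time plus shared buffer hits vs. reads from disk
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM orders
    WHERE customer_id = 42;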
Why do you need pure execution time? It depends on so many factors and is almost meaningless on a live server. I would recommend collecting some statistics from the live server and analysing query execution times using the pgfouine tool (it's for PostgreSQL), then making decisions based on that. You will see exactly what you need to tune, and how effective your changes were, from its reports.

PDO statement execution time

How can I determine the time a statement took to execute on the database server with PDO?
I am using MySQL. Most MySQL client utilities seem to be able to show how long a query ran on the server, irrespective of the total time which includes the transfer of the result over the network. This leads me to believe that the native MySQL API offers this information somewhere. Is it exposed in PDO? If so, how can I get to it?
Note: I have found a MySQL query method, but would prefer not to execute extra statements just for this if the execution time is already tracked somewhere else. If it isn't, then I will fall back on this method.
It seems that there is no way to do this at the PDO layer. In retrospect, this makes quite a bit of sense, since PDO is abstracting all of the DB-specific features away.
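For completeness, the MySQL-side fall-back the question alludes to is presumably something along these lines, which you could issue as ordinary statements through PDO (the timed query is a placeholder):

    -- Per-connection query profiling (deprecated in later MySQL versions, but available)
    SET profiling = 1;

    SELECT COUNT(*) FROM orders;   -- the statement you want to time

    SHOW PROFILES;                 -- lists recent statements with their server-side duration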

Logging mysql queries

I am about to begin developing a logging system, for future implementation in a current PHP application, to get load and usage statistics from a MySQL database.
The statistics will later be used to get info about database calls per second, query times, etc.
Of course, this will only be used when the app is in the testing stage, since it will most certainly cause a bit of additional load itself.
However, my biggest question mark right now is whether I should use MySQL to log the queries, or go for a file-based system. I guess it would be a bit of a headache to create something that allows writes from multiple locations when using a file-based system to handle the logs?
How would you do it?
Use the general log, which will show client activity, including all the queries:
http://dev.mysql.com/doc/refman/5.1/en/query-log.html
If you need very detailed statistics on how long each query is taking, use the slow log with a long_query_time of 0 (or some other sufficiently short time):
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
Then use http://www.maatkit.org/ to analyze the logs as needed.
MySQL already has logging built in - Chapter 5.2 of the manual describes the available logs. You'll probably be interested in the General Query Log (all queries), the Binary Log (queries that change data) and the Slow Query Log (queries that take too long, or don't use indexes).
If you insist on using your own solution, you will want to write a database middle layer that all your DB calls go through, which can handle the timing aspects. As to where you write the logs, if you're in development it doesn't matter too much, but the idea of using a second DB isn't bad. You don't need an entirely separate DB product, just a different instance of MySQL (on a different machine, or simply another instance on a different port). I'd go for a second MySQL instance instead of the filesystem - you'll get all your good SQL functions like SUM and AVG to analyse your data, as sketched below.
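If you go the second-instance route, a minimal sketch of what the log table and a follow-up analysis query could look like (table and column names are just an assumption):

    -- Hypothetical log table on the second MySQL instance
    CREATE TABLE query_log (
        id           BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        logged_at    DATETIME      NOT NULL,
        query_text   TEXT          NOT NULL,
        duration_ms  INT UNSIGNED  NOT NULL
    );

    -- Example analysis: query volume and timings per minute
    SELECT DATE_FORMAT(logged_at, '%Y-%m-%d %H:%i') AS minute,
           COUNT(*)         AS queries,
           AVG(duration_ms) AS avg_ms,
           SUM(duration_ms) AS total_ms
    FROM query_log
    GROUP BY DATE_FORMAT(logged_at, '%Y-%m-%d %H:%i')
    ORDER BY minute;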
If all you are interested in is longer-term, non-real-time analysis, turn on MySQL's regular query logging. There are tons of tools for doing analysis on the query logs (both regular and slow-query), giving you information about run times, average rows returned, etc. Seems to be what you are looking for.
If you are doing tests on MySQL, you should store the results in a different database such as Postgres; this way you won't increase the load with your own operations.
I agree with macabail but would only add that you could couple this with a cron job and a simple script to extract and generate any statistics you might want.