I want to know how long a certain query took to execute in SQL Server 2008. I could have known if I had put a Profiler trace on the process ID before I executed the query, but I forgot.
Is there any way to pull this information out of SQL Server without running the query again?
You can use the DMV sys.dm_exec_query_stats. The query below returns just the query text and last elapsed time, but the DMV exposes much more, such as reads and writes; just use * to see all the information available.
SELECT
    t.text AS QueryText,
    s.last_elapsed_time
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
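If you also want I/O figures and an average, a slightly extended version of the same DMV query might look like this (column names are from sys.dm_exec_query_stats; its time columns are reported in microseconds):

```sql
-- Extended sketch: last/average elapsed time plus I/O per cached plan.
-- Times in sys.dm_exec_query_stats are in microseconds.
SELECT
    t.text AS QueryText,
    s.last_elapsed_time / 1000 AS LastElapsedMs,
    s.total_elapsed_time / s.execution_count / 1000 AS AvgElapsedMs,
    s.last_logical_reads,
    s.last_logical_writes
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
ORDER BY s.last_execution_time DESC;
```

Keep in mind the DMV only covers plans still in the plan cache; if the plan has been evicted, the timing information is gone.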
What is/are the cheapest SQL statement(s) in terms of processing overhead/CPU cycles?
Are there (this will most likely be DB-client specific) any statements that are evaluated directly by the client and do not even go to the database server?
The result doesn't matter; if an empty statement (which produces an SQL error) is the cheapest, then that is good too, but I am more interested in non-error responses.
Background:
I have an application that queries a lot of data from the DB. However, I do not need this data. Sadly, I have no way to skip this query, but I can change the SQL query text itself. So I am trying to find the cheapest SQL statement to use; ideally it should not even go to the SQL server, and the SQL client library should answer it. I will be using MySQL.
UPDATES (on comments):
Yes, it can be a no-operation. It must be something I can pass as a regular SQL string to the MySQL client library; whatever that string could be is the question. The goal is that this query somehow returns nothing, using as few resources on the SQL server as possible. In the ideal case, the client itself would realize that the query doesn't even have to go to the server, like a version check of the client library (OK, I know that is not standard SQL then, but maybe there is something I do not know about: a statement that will be "short-circuited"/answered on the client itself).
Thanks very much!
DO 0
DO executes the expressions but does not return any results. In most respects, DO is shorthand for SELECT expr, ..., but has the advantage that it is slightly faster when you do not care about the result.
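As a minimal illustration of the difference the docs describe:

```sql
-- Returns no result set; the expression is evaluated and discarded.
DO 0;

-- Returns a one-row result set that the client must receive and parse.
SELECT 0;
```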
I am currently working on a query in Access 2010 and I am trying to get the below query to work. I have the connection string between my local DB and the server that I am passing through to working just fine.
Select column1
, column2
from serverDB.dbo.table1
where column1 in (Select column1 from tbl_Name1)
In this situation, table1 is the table on the server that I am passing through to, while tbl_Name1 is a table that is actually in my Access DB, which I am trying to use to constrain the data I am pulling from the server.
When I try to run the query, I get an error saying that tbl_Name1 does not exist.
Any help is appreciated!
I just came across a solution that may help others in a similar situation.
This approach is easy because you can just run one query on your local Access database and get everything you need all at once. However, a lot of filtering/churning-through-results may be done on your own local computer behind the scenes, as opposed to on the remote server, so it may not necessarily be quick.
Steps
Create a query, make it a "Pass Through" query, and set up its "ODBC Connect Str" property to connect to the remote database.
Write the pass-through query, something like SELECT RemoteId FROM RemoteTable, and give your pass-through query a name, maybe PassThroughQuery.
Create a new query, make it a regular "Select" query.
Write your new query, using the pass-through query you just created as a table in this new query (it seems weird to use a query as a table, but it works). Join that PassThroughQuery "table" to your local table and filter it based on values in the local table, something like: SELECT R.RemoteId, L.LocalValue FROM PassThroughQuery R INNER JOIN LocalTable L ON L.LocalId = R.RemoteId WHERE L.LocalValue = 'SomeText'
This approach allows you to mix/join the results of a pass through query and the data in a local Access database table cleanly, albeit potentially slowly if there is a lot of data involved.
I think the issue is that a pass-through query is run entirely on the server. Since tbl_Name1 is located in the local Access file, the server can't find the table.
A possible workaround, if you must stay with the pass-through, is to build the SQL string with the results of the nested query rather than the query text itself (depending on the number of results, this may or may not be practical).
E.g., instead of Select column1 from tbl_Name1 you use 'c1result1', 'c1result2', ...
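Putting that together with the original query, the rewritten pass-through might look like this (the literal values are placeholders standing in for whatever tbl_Name1 actually contains):

```sql
-- Hypothetical: local subquery results inlined as literals
SELECT column1, column2
FROM serverDB.dbo.table1
WHERE column1 IN ('c1result1', 'c1result2')
```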
I was just wondering whether the FileMaker Pro command "ExecuteSQL()" supports subqueries within the SQL Query?
This is the query that I have got at the moment:
"SELECT Google_Calendar FROM SCHEDULE WHERE Group_ID = ( SELECT Group_ID FROM SCHEDULE WHERE Schedule_ID = "& EscapeSQL( GSPNo( 1 ) ) &" )"
I keep getting an error and I know all of the fields are correct and the actual query would work in something like PHPMyAdmin.
So, does anyone know whether this would work, or are there limitations on the queries?
Thanks!
If you're referring to the script step "Execute SQL", then it can only work with external data sources. It cannot run SQL queries against FileMaker tables. If you're referring to the internal SQL API which is available through some plug-ins (and via FileMaker ODBC/JDBC driver), then yes, this API does support subqueries.
More recent versions of FileMaker (certainly 13+) will indeed let you do this now. Statements are limited to SELECT, but you can use subqueries, according to the docs.
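Using the fields from the question, a parameterized form of the same subquery inside the ExecuteSQL calculation function might look like this (the ? placeholder is bound from ExecuteSQL's optional arguments, which avoids the string-escaping in the original):

```sql
SELECT Google_Calendar
FROM SCHEDULE
WHERE Group_ID = ( SELECT Group_ID
                   FROM SCHEDULE
                   WHERE Schedule_ID = ? )
```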
I've been trying to formulate a query to help myself identify resource-heavy queries/database/users using SQL Server and haven't gotten it down quite yet. I want to build a query that will do what 'mysqladmin processlist' would do for MySQL.
I've referred to this related question but haven't gotten what I really need from it. I'm using sp_who, sp_who2 and queries like this:
select loginame,
       count(loginame)
from master.dbo.sysprocesses
group by loginame
The problem always is that one of these tools doesn't give me everything I need. My goal would be to have a query that would be of this format:
LOGIN, DATABASE, QUERY, CPU, MEM, etc.
If anyone knows how to do this, I would appreciate the help. If anyone has any SQL Server DBA cheatsheets that would be great, too.
Does it have to be done with a sproc call? SQL Server Management Studio (the link is for the express edition, but a full install of SQL Server already has it) has an "Activity Monitor" feature which lists exactly what you want.
Other than that,
EXECUTE sp_who2
Gives you exactly what you asked for: Login, DBName, Command, CPUTime, DiskIO are all there...
If you want the exact command that a SPID is executing, you can use the
DBCC INPUTBUFFER(spid)
command (sp_who2 just tells you whether it's a DELETE, SELECT, etc)
Rather than busting out sp_who2, you can query the underlying view directly and extract just the fields you're interested in:
select
spid
,status
,hostname
,program_name
,cmd
,cpu
,physical_io
,blocked
,dbid
,convert(sysname, rtrim(loginame))
as loginname
from sys.sysprocesses with (nolock)
order by cpu desc
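On SQL Server 2005 and later, you can also get the LOGIN, DATABASE, QUERY, CPU, MEM shape the question asks for directly from the DMVs; a sketch (note granted_query_memory is in 8 KB pages):

```sql
-- Sketch: one row per currently executing request.
SELECT
    s.login_name,
    DB_NAME(r.database_id) AS database_name,
    t.text AS query_text,
    r.cpu_time,                -- milliseconds
    r.granted_query_memory     -- 8 KB pages
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
ORDER BY r.cpu_time DESC;
```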
I'm trying to troubleshoot this problem using SQL Profiler (SQL 2008)
After a few days of running the trace in production, the error finally happened again, and now I'm trying to diagnose the cause. The problem is that the trace has 400k rows, 99.9% of which are coming from "Report Server", which I don't even know why it's on, but it seems to be pinging SQL Server every second...
Is there any way to filter out some records from the trace, to be able to look at the rest?
Can I do this with the current .trc file, or will I have to run the trace again?
Are there other applications to look at the .trc file that can give me this functionality?
You can load a captured trace into SQL Server Profiler: Viewing and Analyzing Traces with SQL Server Profiler.
Or you can load into a tool like ClearTrace (free version) to perform workload analysis.
You can load into a SQL Server table, like so:
SELECT * INTO TraceTable
FROM ::fn_trace_gettable('C:\location of your trace output.trc', default)
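Once the trace is in a table, ordinary WHERE clauses let you cut out the noise; for example, to hide the "Report Server" traffic mentioned in the question:

```sql
SELECT TextData, ApplicationName, Duration, Reads, Writes
FROM TraceTable
WHERE ApplicationName <> 'Report Server';
```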
Then you can run a query to aggregate the data such as this one:
SELECT
    COUNT(*) AS TotalExecutions,
    EventClass,
    CAST(TextData AS nvarchar(2000)) AS QueryText,
    SUM(Duration) AS DurationTotal,
    SUM(CPU) AS CPUTotal,
    SUM(Reads) AS ReadsTotal,
    SUM(Writes) AS WritesTotal
FROM TraceTable
GROUP BY
    EventClass,
    CAST(TextData AS nvarchar(2000))
ORDER BY
    ReadsTotal DESC
Also see: MS SQL Server 2008 - How Can I Log and Find the Most Expensive Queries?
It is also common to set up filters for the captured trace before starting it. For example, a commonly used filter is to limit to only events which require more than a certain number of reads, say 5000.
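For a server-side trace created with the sp_trace_* procedures, that kind of pre-filter is set with sp_trace_setfilter; a sketch for the Reads >= 5000 example (16 is the trace column ID for Reads, 0 means AND, 4 means greater-than-or-equal; the trace ID value here is hypothetical):

```sql
-- Sketch: only capture events with at least 5000 reads.
DECLARE @TraceID int = 1;      -- hypothetical: ID returned by sp_trace_create
DECLARE @reads bigint = 5000;  -- value type must match the Reads column
EXEC sp_trace_setfilter @TraceID, 16, 0, 4, @reads;
```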
Load the .trc file locally, use "Save to database" to store it in a local DB, and then query it to your heart's content.
These suggestions are great for an existing trace - if you want to filter the trace as it occurs, you can set up event filters on the trace before you start it.
The most useful filter in my experience is application name. To use it, you have to ensure that every connection string used to connect to your database has an appropriate Application Name value in it, e.g.:
"...Server=MYDB1;Integrated Security=SSPI;Application Name=MyPortal;..."
Then in the trace properties for a new trace, select the Events Selection tab, then click Column Filters...
Select the ApplicationName filter and add values under LIKE to include only the connections you have indicated; e.g., entering MyPortal in the LIKE field will include only events for connections that have that application name.
This will stop you from collecting all the crud that Reporting Services generates, for example, and make subsequent analysis a lot faster.
There are a lot of other filters available as well, so if you know what you are looking for, such as long execution (Duration) or large IO (Reads, Writes) then you can filter on that as well.
Since SQL Server 2005, you can filter a .trc file content, directly from SQL Profiler; without importing it to a SQL table. Just follow the procedure suggested here:
http://msdn.microsoft.com/en-us/library/ms189247(v=sql.90).aspx
An additional hint: you can use '%' as a filter wildcard. For instance, to filter on HOSTNAME values starting with SRV, you can use SRV%.
Here you can find a complete script to query the default trace with the complete list of events you can filter:
http://zaboilab.com/sql-server-toolbox/anayze-sql-default-trace-to-investigate-instance-events
You have to query sys.fn_trace_gettable(@TraceFileName, default), joining sys.trace_events to decode the event numbers.
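A sketch of that join against the default trace (sys.traces exposes the current default trace file path, so you don't have to hard-code it):

```sql
-- Sketch: read the default trace and decode event class numbers to names.
DECLARE @TraceFileName nvarchar(260);
SELECT @TraceFileName = path FROM sys.traces WHERE is_default = 1;

SELECT te.name AS EventName,
       t.TextData,
       t.StartTime
FROM sys.fn_trace_gettable(@TraceFileName, default) t
JOIN sys.trace_events te
  ON t.EventClass = te.trace_event_id
ORDER BY t.StartTime DESC;
```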