Why is a Reporting Services report vastly slower than its query?

I have a query that takes roughly 2 minutes to run. It's not terribly complex in terms of parameters or anything, and the report itself doesn't do any truly extensive processing. Basically it just spits the data straight out in a nice format. (Actually, one of the reports doesn't format the data at all, just returns a flat table meant to be manipulated in Excel.)
It's not returning a massive set of data either.
Yet the report takes upwards of 30 minutes to run.
What could cause this?
This is SSRS 2005 against a SQL 2005 database btw.
EDIT: OK, I found that with the addition of WITH (NOLOCK) in the report, it takes the same time as the query does through SSMS. Why would the query be handled differently if it's coming from Reporting Services (or Visual Studio on my local machine) than if it's coming from SSMS on my local machine? I saw the query running in Activity Monitor a couple of times in SLEEP_WAIT mode, but not blocked by anything...
EDIT2: The connection string is:
Data Source=SERVERNAME;Initial Catalog=DBName

Is it definitely the query taking a long time to run, or is it the processing done by the server that is slow? Some reports call queries multiple times. For instance, if you have a subreport inside of a paging list control, each page of that report calls the query separately. So maybe there's something the report is doing with the data causing the delay?

How large is the data set that is returned by your query? If it is very large the majority of the time that is taken on the report server could be related to the time it takes the report to render. To be sure you could look at the ExecutionLog table on the report server to see if the TimeRendering is a large number in comparison to the overall execution time.
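For instance (a minimal sketch, assuming the standard ExecutionLog view in the ReportServer catalog; the durations are in milliseconds), you could single out rendering-heavy runs like this:

SELECT ReportID, TimeDataRetrieval, TimeProcessing, TimeRendering,
       TimeDataRetrieval + TimeProcessing + TimeRendering AS TotalMs
FROM ExecutionLog
ORDER BY TimeRendering DESC;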

I think this is not uncommon; we looked into similar issues.
From memory, one thing we did notice was that our subreport had parameters, and we had configured the "possible values" to be queried from the database.
I think that every time the subreport runs, SSRS re-queries the possible values of the parameters (and runs any other queries in your report, even if you don't use the results).
In this case, once we were happy the subreport was working OK, we removed the queries for validating the parameter values and allowed "any value", assuming the parent report would not feed us bad parameter values.

A tad late to the party, but for anybody from the future having a similar problem.
Parameter sniffing
If a stored procedure with parameters is being used, it might be due to a phenomenon called 'parameter sniffing'.
In short, the first time a stored procedure is executed from SSRS, an execution plan based on the specified parameter values is determined. This execution plan is then stored and reused every time the stored procedure is executed from SSRS, even though it might not be optimal for future parameter values.
For an excellent and more extensive explanation have a look at: https://www.brentozar.com/archive/2013/06/the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server/
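A common workaround, shown here as a rough sketch (the procedure, table, and column names are hypothetical), is to copy the parameter into a local variable so the optimizer cannot build the plan around the first sniffed value, or to force a fresh plan with OPTION (RECOMPILE):

CREATE PROCEDURE dbo.GetRegistrations
    @EntityType INT
AS
BEGIN
    -- The local variable hides the sniffed parameter value from the optimizer.
    DECLARE @LocalEntityType INT;
    SET @LocalEntityType = @EntityType;

    SELECT Id, Name, RegisteredOn
    FROM dbo.Registration
    WHERE EntityType = @LocalEntityType;
    -- Alternative: keep @EntityType in the WHERE clause and append
    -- OPTION (RECOMPILE) to compile a fresh plan on every execution.
END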
Other questions
Also have a look at this similar question:
Fast query runs slow in SSRS

Related

Dumb MS Access Q: Query based on other queries

I have been trying to run a report for my CEO that shows income. Our agency management software uses FoxPro databases (it originally came out in the early '80s, I think). I have linked the .dbf files to an Access database, and I have been setting up queries based on queries to get the information I need on a live basis without having to export the data. The problem that I have run into is that I cleaned up the selection criteria in the first query, but I did not run that query (it takes about ten minutes to run each of these). When I ran the last query (with data based on the first), I still had bad data in that result.
So here's the dumb question: (a) do I need to create a macro that runs the queries (there are four of them) in sequence so that they are all updated each time, (b) is there some better way to do this, and/or (c) does Access automatically run the prior queries when I run the downstream query?

SSRS Report Timing out in Production Server (except after refreshing 3 times)

The report works fine in the DEV and QA server but when placed in Production the following error comes up:
An error occurred during client rendering.
An error has occurred during report processing.
Query execution failed for dataset 'Registration_of_Entity'.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The strange part is that the admins have assured me that this report has now been set up so there is no timeout at all.
If I refresh the report 3 times every morning, the error message goes away.
What can I do to fix this issue so that the report never receives this error?
There are several steps to resolving this issue correctly.
I advise working through them in the following order:
1. Reduce the query execution time
Execute the query of the DataSet Registration_of_Entity in SSMS and see how long it takes to complete.
If your query requires more time to execute than the timeout specified for the DataSet, you should first try to reduce this time, for example:
Change the query structure (rethink joins, use CTEs, ...)
Add indexes (see the index sketch after this list)
Looking at the execution plan can help.
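For instance, a covering index along these lines (table and column names are hypothetical) can remove a costly scan:

CREATE NONCLUSTERED INDEX IX_Registration_EntityType
ON dbo.Registration (EntityType)
INCLUDE (Name, RegisteredOn);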
2. Reduce the query complexity
Do you need all those rows/columns?
Do you need to have all these calculations on the database side?
Could it be done in the report instead?
You could try to:
Reduce the query complexity
Split the query into smaller queries (see the sketch below)
Again, looking at the execution plan can help.
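As a sketch of the splitting idea (the tables here are hypothetical): stage the expensive join once in a temporary table, then aggregate the staged rows:

SELECT r.EntityType, r.Amount
INTO #Staged
FROM dbo.Registration r
JOIN dbo.Entity e ON e.Id = r.EntityId;

SELECT EntityType, SUM(Amount) AS Total
FROM #Staged
GROUP BY EntityType;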
3. Explore additional optimizations not related to the query itself
You really need this query, but do you need the data in real time?
Are there a lot of other queries being executed on this server?
You could look into:
Caching
Replication / Load Balancing
Note that from SSRS 2008 R2, the new Shared DataSets can be cached. I know it doesn't apply in your case but, who knows, it could help others.
4. Last resort
If all the above steps fail to solve the issue, then you can increase the timeouts.
Here is a link to a blog post explaining the different timeouts and how to increase them.
Do you know if your query is becoming deadlocked? It could be that the report gets blocked on the server during peak times.
Consider optimizing your query or, if the data can be read uncommitted, adding WITH (NOLOCK) after each FROM and JOIN clause. Be sure to google WITH (NOLOCK) if you are unfamiliar with it, so you know what reading uncommitted data can do.
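For illustration, with hypothetical tables, that looks like this:

SELECT r.Name, r.Status
FROM dbo.Registration r WITH (NOLOCK)
JOIN dbo.Entity e WITH (NOLOCK) ON e.Id = r.EntityId;

Keep in mind this reads uncommitted data, so the report can show rows that are later rolled back.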

How to cache infrequently changing mysql query?

I have a mysql query that is taking 8 seconds to execute/fetch (in workbench).
I won't go into the details of why it may be slow (I think the GROUP BY isn't helping, though).
What I really want to know is how I can basically cache it to work more quickly, because the tables only change like 5-10 times/hr, while users access the site 1000s of times/hour.
Is there a way to just have the results regenerated/cached when the db changes so results are not constantly regenerated?
I'm quite new to SQL, so any basic thought may go a long way.
I am not familiar with such a caching facility in MySQL. There are alternatives.
One mechanism would be to use application level caching. The application would store the previous result and use that if possible. Note this wouldn't really work well for multiple users.
What you might want to do is store the report in a separate table. Then you can run that every five minutes or so. This would be a simple mechanism using a job scheduler to run the job.
A variation on this would be to have a stored procedure that first checks if the data has changed. If the underlying data has changed, then the stored procedure would regenerate the report table. When the stored procedure is done, the report table would be up-to-date.
An alternative would be to use triggers that fire whenever the underlying data changes. The trigger could run the query, storing the results in a table (as above). Alternatively, the trigger could just update the rows in the report that would have changed (harder, because it involves understanding the business logic behind the report).
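As a minimal sketch of the second trigger variant (assuming a hypothetical orders table and a ReportSummary table keyed by customer_id):

CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
    INSERT INTO ReportSummary (customer_id, total)
    VALUES (NEW.customer_id, NEW.amount)
    ON DUPLICATE KEY UPDATE total = total + NEW.amount;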
All of these require some change to the application. If your application query is stored in a view (something like vw_FetchReport1) then the change is trivial and all on the server side. If the query is embedded in the application, then you need to replace it with something else. I strongly advocate using views (or in other databases user defined functions or stored procedures) for database access. This defines the API for the database application and greatly facilitates changes such as the ones described here.
EDIT: (in response to comment)
More information about scheduling jobs in MySQL is here. I would expect the SQL code to be something like:
truncate table ReportTable;
insert into ReportTable
select * from <ReportQuery>;
(In practice, you would include column lists in the select and insert statements.)
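For example, using the MySQL event scheduler (assuming it is enabled via SET GLOBAL event_scheduler = ON, and reusing the hypothetical view name from above):

DELIMITER $$
CREATE EVENT refresh_report_table
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
    TRUNCATE TABLE ReportTable;
    INSERT INTO ReportTable
    SELECT * FROM vw_FetchReport1;
END$$
DELIMITER ;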
A simple solution that can be used to speed up the response time of long-running queries is to periodically generate summarized tables, based on how often the underlying data refreshes or on business needs.
For example, if your business doesn't care about sub-minute "accuracy", you can run the process once a minute and have your user interface query this calculated table instead of summarizing the raw data online.

How can I find the bottleneck in my slow MySQL routine (stored procedure)?

I have a routine in MySQL that is very long and has multiple SELECT, INSERT, and UPDATE statements in it, with some IFs and REPEATs. It's been running fine until lately, when it started hanging and taking over 20 seconds to complete (which is unacceptable considering it used to take 1 second or so).
What is the quickest and easiest way for me to find out where in the routine the bottleneck is? Basically the routine is getting stopped up at some point... how can I find out where that is without breaking apart the routine and testing each section one by one?
If you use Percona Server (a free distribution of MySQL with many enhancements), you can make the slow-query log record times for individual queries, using the log_slow_sp_statements configuration variable. See http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html
If you're using stock MySQL, you can add statements in the stored procedure to set a series of session variables to the value returned by the SYSDATE() function. Use a different session variable at different points in the SP. Then after you run the SP in a test execution, you can inspect the values of these session variables to see what section of the SP took the longest.
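A minimal sketch of that instrumentation (the section contents are placeholders):

-- inside the stored procedure, at successive checkpoints:
SET @t0 = SYSDATE();
-- ... first section of the procedure ...
SET @t1 = SYSDATE();
-- ... second section of the procedure ...
SET @t2 = SYSDATE();

-- after a test execution, inspect the checkpoints:
SELECT TIMESTAMPDIFF(SECOND, @t0, @t1) AS section1_seconds,
       TIMESTAMPDIFF(SECOND, @t1, @t2) AS section2_seconds;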
To analyze the query, you can look at its execution plan. It is not always an easy task, but with a bit of reading you will find the solution. I'll leave some useful links:
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://www.lornajane.net/posts/2011/explaining-mysqls-explain
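For example (the tables here are hypothetical), prefixing the query with EXPLAIN shows which index, if any, each table access uses:

EXPLAIN
SELECT c.name, COUNT(*) AS order_count
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY c.name;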

SQL Server 2008 Reporting Services slow reports

I have a problem in SQL Server 2008 Reporting Services. The problem is that the report is sometimes too slow to render (it takes more than 30 min), although I took the query and executed it in SQL Server Management Studio and it didn't take more than 25 seconds.
The query returns a large table (about 5000 rows), and I use it to draw a pie chart in the report. I tried to optimize the query so that it returns only 4 rows, but the report was slow again.
What confuses me is that sometimes the report (with different input) is as fast as the query (about 30 sec). I thought it might be because of a low number of users, so some colleagues and I tried viewing it at the same time, but the reports were still fast. I tried changing the configuration, but I had no luck.
I've been searching for a solution for this problem for more than two months, so if anyone could help me on this I will be very thankful.
If you have access to the ReportServer sql database execute the following query or similar against the ExecutionLog view:
SELECT TimeStart, TimeEnd, TimeDataRetrieval, TimeProcessing, TimeRendering, Status, ReportID
FROM ExecutionLog
This will provide you with a good breakdown of your report rendering (with different parameters).
Pay close attention to TimeRendering, TimeProcessing and TimeDataRetrieval.
A large value in any of these columns will show where your bottleneck is.
One problem that I have experienced in the past is when you return a fairly large dataset to the report (5000 rows is large enough for this scenario) and then use the built-in SSRS filtering: the rendering is very slow, and this results in a very high TimeRendering value.
As much work as possible should be done at the database layer: grouping and filtering do not perform well with large amounts of data when performed in the SSRS report itself.
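As a sketch (the table, columns, and parameter are hypothetical), pushing an SSRS report filter down into the dataset query looks like this:

SELECT Category, SUM(Amount) AS Total
FROM dbo.Sales
WHERE Region = @Region   -- bound to the SSRS report parameter
GROUP BY Category;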