Data-driven SSRS subscription delayed execution on the first run

I have a weird issue with one of the data-driven subscriptions on SSRS.
The subscription is a timed subscription that generates invoices (pdf/excel) and gets triggered by a stored procedure.
The issue we are facing is that the first run always takes 30-60 minutes, regardless of how many invoices are being generated. Once the first run has completed, subsequent runs complete in under a minute throughout the day.
There is a second version of the same report that is run manually, and it runs fine (ruling out any delays in the data extraction itself).
I have looked at some other questions here, but they didn't help identify the problem:
SQL Reporting services: First call is very slow
SSRS report subscription not working sometime

Without knowing more about the query, data, database setup, other processes, etc., it is difficult to say for sure. But if I had to guess, based on your description, it sounds like the query plan cache is lost and is rebuilt on the first run of the day. Without the plan, the query can be less efficient. Each subsequent run will use the plan created on the first run, and will therefore run more quickly. There are a number of reasons the query plan could be wiped from cache: a recompile, other queries using too much memory, not enough system memory to begin with, etc.
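One way to confirm this theory is to check, just before the morning run, whether a compiled plan for the procedure is still in cache. A minimal sketch, assuming the subscription is driven by a procedure named dbo.GenerateInvoices (a placeholder; substitute your own object name):

    -- Look for a cached plan whose text references the procedure.
    -- dbo.GenerateInvoices is a placeholder name for illustration.
    SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%GenerateInvoices%';

If the row is missing before the first run of the day but present afterwards, the plan is indeed being evicted overnight and rebuilt on that first, slow execution.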
Hope that helps!

Related

MySQL Interface with LabVIEW - LabVIEW is Freezing

I have a LabVIEW data acquisition system that is writing data to a MySQL database every second. The LabVIEW system recently froze around the time when I was playing with the SQL queries.
I have a client computer that is supposed to query that MySQL database every hour; it has a cron job set up to send the query.
I recently added an index to my time_stamp, in order to optimize my query.
This may be a shot in the dark, but could there be any deleterious interaction between creating an index on our time_stamp (to optimize the query) and setting up a cron job to send the query every hour? Around that time, I think I may have also sent a query and aborted it quickly before it completed, so I was wondering if something like that might cause the LabVIEW system to freeze?
It doesn't appear to be an issue on the MySQL side, because the server was still running.
Check whether you currently have deadlocks on your MySQL server; the query from your LabVIEW application might be suspended and waiting for completion forever. Hint: look at the query execution time. Why it happened should be investigated separately, but there is a good chance that if you just kill that suspended query, the system will unfreeze and keep running normally.
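A minimal sketch of that check, using standard MySQL commands (the 12345 below is a placeholder id taken from the processlist output):

    -- List all current connections and queries; the Time column shows
    -- how long each one has been running.
    SHOW FULL PROCESSLIST;

    -- Inspect recent lock waits and deadlocks reported by InnoDB.
    SHOW ENGINE INNODB STATUS;

    -- If a query has been stuck far longer than expected, kill it by Id.
    KILL 12345;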

SQL Server continuous job

I have a requirement to run a job continuously which includes a stored procedure. This stored procedure does a critical task where it processes a huge load of data as it comes in. As I understand it, SQL Server itself does not allow two or more instances of a job to run at the same time. So, my questions are:
Is there a way to run a SQL Server job continuously?
Do continuously running jobs hurt performance of the server?
There are continuous replication jobs; however, those are continuous because of an inline switch used in the command line and not due to the job being scheduled as continuous.
The only way to emulate a continuous job is to simply have it run often. There is an option under scheduling to run the job as frequently as every 10 seconds, 24/7/365. With that said, you will need to be careful that the job isn't overrunning itself and that it is efficient enough not to cause issues with your server.
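For reference, a sketch of the T-SQL equivalent of that schedule (MyContinuousJob is a placeholder job name; note that SQL Server Agent's minimum interval for a seconds-based schedule is 10):

    -- Attach a schedule that fires the job every 10 seconds, every day.
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name             = N'MyContinuousJob',  -- placeholder name
        @name                 = N'Every10Seconds',
        @enabled              = 1,
        @freq_type            = 4,   -- daily
        @freq_interval        = 1,   -- every 1 day
        @freq_subday_type     = 2,   -- units of seconds
        @freq_subday_interval = 10,  -- every 10 seconds
        @active_start_time    = 0;   -- starting at midnight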
Whether it will affect performance depends on what the job does. If the job only selects the current date/time (not a very useful thing to do, but an example), I would not expect an issue; however, if it runs complicated algorithms then it is almost certainly going to cause issues.
I would recommend running this on a test server before putting it into production.

SSRS reports time out on dynamic queries after a few days

We're using SSRS 2012 with a number of reports driven by a Query.CommandText reference to a stored procedure executing dynamic SQL (sp_executesql). These are consumed from a web application where the user specifies the report, criteria, etc. After a few days, the report requests will time out, even though the underlying stored procedure executes within a few seconds (the same stored procedure feeds both a search result screen and the report). Other reports that do not use dynamic SQL continue to execute fine. The only remedy we've found is to restart the SSRS service. After the initial spin-up, the same report will execute within a few seconds.
The SSRS logs don't seem to point to any issue, though I'm certainly not an expert at reading them. A slow run and a quick run seem to differ only in the timestamps, which are evenly spread out between the start and the end. We do see "ReportProcessingException: There is no data for the field at position xx", but on both the slow and fast runs. Running the report from the Reports portal takes about 10 minutes when it's in slow mode.
My suspicion is that some caching is going on and SSRS is influencing the SQL execution plan.
Any suggestions or requests for more specifics would be very welcome.
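One hedged suggestion, if the stale-plan suspicion holds (classic parameter sniffing with sp_executesql): force a fresh or neutral plan inside the dynamic SQL and see whether the slowdowns stop. This is a sketch only; the table and parameter names are placeholders, not the actual schema:

    -- OPTION (RECOMPILE) compiles a fresh plan on every execution;
    -- OPTION (OPTIMIZE FOR UNKNOWN) is a lighter-weight alternative that
    -- builds one plan from average statistics rather than the first value seen.
    DECLARE @criteria nvarchar(50) = N'example';
    DECLARE @sql nvarchar(max) = N'
        SELECT * FROM dbo.SearchResults
        WHERE Criteria = @crit
        OPTION (RECOMPILE);';
    EXEC sp_executesql @sql, N'@crit nvarchar(50)', @crit = @criteria;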

SSRS processing time spikes

I'm developing a report in SSRS for display in a public location within the company. The report shows up-to-the-minute data on department activity, so it is set to refresh every 30 seconds or so.
The problem I'm running into is that once every few hours, the report throws errors (rsErrorReadingNextDataRow and rsProcessingAborted). All I have to do is hit refresh in the browser (the report is rendered in a web browser) and it reruns and charts along happily for a while.
After looking at the ReportServer execution log, it appears that these errors coincide with a phenomenal spike in processing time. On average, each time the report refreshes, processing time is 400-800ms, but when it spikes it goes to ~90,000ms (ninety seconds) and then throws this error.
Being new to SSRS I am not sure where to begin looking for the root cause.
Can anyone give me pointers as to where I can start to find out what is causing the processing time to rocket like that? Data retrieval is stable at ~5,000ms, and rendering time is stable at around 200ms. It's only the processing that goes haywire.
Some background on the report and data:
The data is pretty straightforward. It's based on a view that pulls transactions from the last 7 weeks. The number of records therefore ebbs and flows each week, at around 8,000 records. When SELECT * FROM the view is run in SSMS, the query takes about 5 seconds. I'll work on speeding that up after I resolve this processing issue.
No parameters are used, though the view does use getdate() to figure out which records to show from the base tables.
No stored procedures are used.
The report itself consists of 6 panes within a single tablix, none of which has to draw more than a few dozen marks (except one, which is a map of US states).
The report does have one feature that may be related, though I am not sure how. The report definition filters the dataset based on a mod 3 of the built-in execution time variable, rotating the report to show "This Month", "This Week", or "Today" activity. There is an additional mod 3 on the execution time which rotates visibility for the panes. The result is that every 30 seconds, a report with a different combination of charts shows up on a rotating time frame.
Don't know if this could be a cause, but it's the only element of the report that makes it remotely fancy. Everything else about the report is actually rather straightforward/plain.
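For illustration only (the real logic lives in the report definition as an SSRS filter expression, not in SQL), the rotation described above amounts to a mod-3 bucketing of the execution time, something like:

    -- Illustrative T-SQL approximation of the report's mod-3 rotation.
    SELECT CASE (DATEDIFF(SECOND, '20000101', GETDATE()) / 30) % 3
               WHEN 0 THEN 'This Month'
               WHEN 1 THEN 'This Week'
               ELSE 'Today'
           END AS ActiveTimeFrame;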
While I would love to identify and eliminate the spikes, even a mechanism to automatically refresh on error, or refresh on timeout (or something to that effect) would do the trick. I need to be able to launch the report in the morning and have it run unmanned all day without any human interaction, refreshing every 30 seconds for the duration of business hours.
Just a recap of the comments:
- It might be possible to push the web page into a frame, then add JavaScript to the main page to periodically refresh the frame. It wouldn't fix the underlying problem, but it might help.
- It might be worth looking at the performance monitoring tools in SQL Server - it might be a database issue (a temp table filling up, etc.).
- You noted that adding a WITH (NOLOCK) hint seemed to help locate the problem.
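On the monitoring side, a sketch of a query against the report server's built-in execution log (the ExecutionLog3 view in the ReportServer database) that isolates the processing-time spikes; the path filter and 10,000ms threshold are placeholders:

    -- Recent executions whose processing time spiked.
    SELECT TimeStart, TimeDataRetrieval, TimeProcessing, TimeRendering, Status
    FROM dbo.ExecutionLog3
    WHERE ItemPath LIKE '%MyDashboardReport%'   -- placeholder report path
      AND TimeProcessing > 10000                -- spike threshold in ms
    ORDER BY TimeStart DESC;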

Maximum time a report should take to generate

I am just curious to know how long, in minutes, the reporting service should take to generate a report when it returns 1MB of data. Assume the report uses views and the tables are properly indexed. This is SSRS reporting with server-side generation.
Report generation time has two components:
- Data Acquisition time
- Render Time
So for 1MB of data, how many records (rows) are we talking about? How many pages will the report have? How many controls per page? Does the report use charting? These are the factors that will determine generation time.
For most reports, data acquisition time is the most significant factor. Your report is never going to run faster than the raw data acquisition. So if you are using SQL, the report can't generate faster than the time required to run the query. I have seen queries that return far more than 1MB of data very quickly. I have also seen queries that return very little data that run for a long time.
On the render side, there are a couple of things that can cause a report to be slow. The first is report aggregation. If a report needs to receive all of the records before it can start rendering, its performance will suffer, to a degree that depends on the reporting tool. With large data sets (more than 10,000 records), you can see significant improvements in rendering by doing the aggregation at the source (the database). The other is charting, which typically involves heavy rendering overhead and aggregation.
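As a sketch of that aggregation point, with hypothetical table and column names (dbo.Sales, Region, Amount): instead of shipping every detail row to the report and letting it aggregate, push the aggregation into the query so only one row per group crosses the wire:

    -- Instead of SELECT Region, Amount FROM dbo.Sales; (report aggregates),
    -- aggregate at the source:
    SELECT Region, SUM(Amount) AS TotalAmount
    FROM dbo.Sales
    GROUP BY Region;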
Most reporting systems allow you to build in timers or logging that will help you to performance tune the report. It is best to build timers into the report that will tell you what percentage of time the report is spending getting the data, and what percentage is spent rendering. When you have this information, you will know where to focus your energies.
If you are really trying to evaluate the performance of the reporting tool, the best way is to build a report that either reads a flat file or generates the data through code. In other words, eliminate the impact of the database and see how fast your reporting tool can generate pages.
Hope this helps.
How long is acceptable? Depends on what it's doing, how much it's run, things like that. Anything below 30 seconds would be fine if it's run once every day or two. If it's run once a week or once a month that number could be a lot higher.
The report itself is generally very fast, if you're seeing a hangup you may want to check the execution time of the query which generates the data. A complex query can take a long time, even if it only returns a little data...
I've found, when using BIRT and other reporting systems, that the best improvements tend to come from offloading most of the work to the database at the back end.
In other words, don't send lots of data across the wire and sort or group it locally. The database is almost certainly going to outperform you with its SQL ORDER BY and GROUP BY clauses and its indexes (among other things).
That way, you get faster extraction of the data you want AND less network traffic.
As several said already, a general question like this really can't be answered. However, I wrote up Turbo-charge Your Report Speed – General Rules & Guidelines (disclaimer - I'm the CTO at Windward Reports, a competitor of SSRS). I think that will help you look for what you can do to speed up the process.
And with all the caveats about the specifics matter a lot, on a 3GHz workstation we generally see 7-30 pages/second. Keep in mind this is numbers for Windward, not SSRS.