We have a SharePoint 2010 BI site (SSRS 2012) that has links to several databases:
Database A will be available at 12:00 AM
Database B will be available at 1:00 AM
Database C will be available at 2:00 AM
So I have a shared schedule set up for each database at the above times. How many reports should I have running in each shared schedule? For now I only have 10 sample reports, and they all kicked off and ran within the same second (maybe some kicked off a couple of seconds later). From that I infer that they don't run in order, but asynchronously.
So, what is the limit, and will it kill my server's performance if I have hundreds of reports running? Or should I create multiple schedules and limit each one to about 30-40 reports?
I can't tell from your question whether you are creating snapshots or email subscriptions. In any case, there are tools you can use to fine-tune scheduling outside of SSRS. Within SSRS I would recommend that you stagger your report requests, particularly if it makes sense for huge reports. Either way, you should stagger your schedules to some degree.
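If it helps to gauge how much work each shared schedule will kick off at once, you can count the subscriptions attached to each schedule directly in the report server catalog. A minimal sketch, assuming the default dbo.Schedule and dbo.ReportSchedule tables in the ReportServer database (report-specific schedules will show up here as well):

    -- Number of subscriptions attached to each schedule in the report server catalog.
    -- Schedules with large counts are candidates for being split or staggered.
    SELECT  sch.Name                 AS ScheduleName,
            COUNT(rs.SubscriptionID) AS SubscriptionCount
    FROM    dbo.Schedule AS sch
            LEFT JOIN dbo.ReportSchedule AS rs
                   ON rs.ScheduleID = sch.ScheduleID
    GROUP BY sch.Name
    ORDER BY SubscriptionCount DESC;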
We want to show data in the browser from the reporting server, but sometimes it fails to load and takes more than 3 minutes. Is there any better approach to get the data faster?
There can be many factors behind a report being slow or failing to load. The first step would be analyzing the execution log (on the server where Reporting Services is installed, in the report server database, probably the ReportServer db, view ExecutionLog2). There you can see three crucial columns: TimeDataRetrieval, TimeProcessing, and TimeRendering.
Extract from this discussion:
TimeDataRetrieval - contains the sum of all DataSet durations
TimeProcessing - The number of milliseconds spent in the processing engine for the request
TimeRendering - The number of milliseconds spent after the Rendering Object Model is exposed to the rendering extension
That way you will know whether you need to tune your query or your report, and that's a good start.
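For example, something along these lines pulls the slowest recent executions with those three columns side by side (run it against the ReportServer database; the view keeps a rolling window of history, 60 days by default):

    -- Slowest report executions, broken down into data retrieval, processing
    -- and rendering time (all in milliseconds).
    SELECT TOP (50)
            ReportPath,
            TimeStart,
            TimeDataRetrieval,
            TimeProcessing,
            TimeRendering,
            Status
    FROM    dbo.ExecutionLog2
    ORDER BY TimeDataRetrieval + TimeProcessing + TimeRendering DESC;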
Currently we have all our users sign up for subscriptions (almost all at 8 AM daily) to each report they need, and we do not have caching enabled on the reports. Is it safe to say that when each subscription runs it's a full DB lookup and report generation? If we enabled caching for, say, 30 minutes, would that reduce the DB workload?
Yes, the individual report is going to run the query and generate the report every time. If you had one subscription sending to multiple people, it would only occur once. Sounds like caching would be a good idea.
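If you do enable caching, you can verify that it is actually being used from the ExecutionLog2 view in the ReportServer database; its Source column records whether an execution ran live or was served from cache. A quick sketch:

    -- Executions over the last week, grouped by how they were served.
    -- Source is 'Live' for a fresh query and 'Cache' for a cached copy.
    SELECT  ReportPath,
            Source,
            COUNT(*) AS Executions
    FROM    dbo.ExecutionLog2
    WHERE   TimeStart >= DATEADD(day, -7, GETDATE())
    GROUP BY ReportPath, Source
    ORDER BY ReportPath, Executions DESC;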
We're using SSRS 2012 with a number of reports driven by a Query.CommandText reference to a stored procedure executing dynamic SQL (sp_executesql). These are consumed from a web application where the user specifies the report, criteria, etc. After a few days, the report requests will time out, even though the underlying stored procedure executes within a few seconds (the same stored procedure feeds a search result screen and the report). Other reports that do not use dynamic SQL continue to execute fine. The only remedy we've found is to restart the SSRS service. After the initial spin-up, the same report will execute within a few seconds.
The SSRS logs don't seem to point to any issue, though I'm certainly not an expert at reading them. Comparing a slow run to a quick one, the only difference seems to be the time stamps, which are spread evenly between the start and the end. We do see "ReportProcessingException: There is no data for the field at position xx", but on both the slow and fast runs. Running the report from the Reports portal takes about 10 minutes when it's in slow mode.
My suspicion is that some caching is going on and SSRS is influencing the SQL execution plan.
Any suggestions or requests for more specifics would be very welcome.
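If it does turn out to be plan caching / parameter sniffing, one mitigation we could try is forcing a recompile of the dynamic statement. A rough sketch of the idea only; the procedure, table, and parameter names below are placeholders, not our actual code:

    -- Placeholder procedure shape: dynamic SQL via sp_executesql with
    -- OPTION (RECOMPILE), so each run gets a plan for its actual parameter
    -- values instead of reusing a plan compiled for very different ones.
    CREATE PROCEDURE dbo.usp_SearchOrdersForReport
        @FromDate datetime,
        @ToDate   datetime
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @sql nvarchar(max) = N'
            SELECT o.OrderId, o.CustomerId, o.OrderDate, o.TotalDue
            FROM   dbo.Orders AS o
            WHERE  o.OrderDate >= @from
              AND  o.OrderDate <  @to
            OPTION (RECOMPILE);';

        EXEC sys.sp_executesql
            @sql,
            N'@from datetime, @to datetime',
            @from = @FromDate,
            @to   = @ToDate;
    END;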
Background Information
Our application reads and writes data through 3 components:
ASP.NET MVC 3 customer front end website (write actions)
Winform verification tool at stores (write actions)
Silverlight Dashboard for tenant (95% aggregate reads 5% write actions)
Item (3) is the only piece that could use some performance improvements.
Our storage is a SQL Server Standard OLTP database with stored procedures that aggregate the data consumed by the Silverlight app.
When using the Database Engine Tuning Advisor or the execution plans, we don't see any critical indexes missing, and we rebuild indexes with a SQL Agent job.
Most of the widgets are sparklines:
x = time, selected by interval (day, week, month, year)
y = aggregate (sum, avg, etc.)
Currently we return about 14-20 points per widget. Our dashboard opens with 10 widgets initially.
Our dimensions would be: tenant, store, (day,week,month,year)
Our facts: completed, incomplete, redeemed, score ...
I know a denormalized table would save SQL Server from recalculating the data each time for the store managers, franchise owners, and corporate users viewing it (~50 simultaneous users).
I'll be honest: if we go with OLAP, it will be my first hands-on experience with it.
Questions
What is the long-term solution for a rich reporting dashboard?
I would assume OLAP. If so, how would you keep it up to date so that it stays as near real-time as the dashboard we have today?
Putting a maintenance page while OLAP rebuilds itself is not an option.
Ideally, we would want to do this incrementally, and we see NServiceBus (which we already use today) as a great bridge for updating these denormalized views. Do we put these denormalized views in OLTP as just another table, or is there a way to incrementally update the OLAP data source?
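For illustration, this is roughly what the "just another table in OLTP" option we are weighing would look like: a small aggregate table keyed by tenant, store, and day that an NServiceBus handler updates incrementally. A sketch only; all table, column, and procedure names are hypothetical:

    -- Hypothetical denormalized summary table in the OLTP database.
    CREATE TABLE dbo.WidgetDailySummary
    (
        TenantId    int  NOT NULL,
        StoreId     int  NOT NULL,
        SummaryDate date NOT NULL,
        Completed   int  NOT NULL DEFAULT 0,
        Incomplete  int  NOT NULL DEFAULT 0,
        Redeemed    int  NOT NULL DEFAULT 0,
        CONSTRAINT PK_WidgetDailySummary PRIMARY KEY (TenantId, StoreId, SummaryDate)
    );
    GO

    -- Incremental upsert that an NServiceBus message handler could call when a
    -- transaction completes, instead of recalculating aggregates on every read.
    CREATE PROCEDURE dbo.usp_IncrementWidgetSummary
        @TenantId   int,
        @StoreId    int,
        @EventDate  date,
        @Completed  int,
        @Incomplete int,
        @Redeemed   int
    AS
    BEGIN
        SET NOCOUNT ON;

        MERGE dbo.WidgetDailySummary AS target
        USING (SELECT @TenantId AS TenantId, @StoreId AS StoreId, @EventDate AS SummaryDate) AS src
            ON  target.TenantId    = src.TenantId
            AND target.StoreId     = src.StoreId
            AND target.SummaryDate = src.SummaryDate
        WHEN MATCHED THEN
            UPDATE SET Completed  = target.Completed  + @Completed,
                       Incomplete = target.Incomplete + @Incomplete,
                       Redeemed   = target.Redeemed   + @Redeemed
        WHEN NOT MATCHED THEN
            INSERT (TenantId, StoreId, SummaryDate, Completed, Incomplete, Redeemed)
            VALUES (@TenantId, @StoreId, @EventDate, @Completed, @Incomplete, @Redeemed);
    END;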
References
http://www.udidahan.com/2009/12/09/clarified-cqrs/
http://www.udidahan.com/2011/10/02/why-you-should-be-using-cqrs-almost-everywhere%E2%80%A6/
“Putting a maintenance page while OLAP rebuilds itself is not an option.”
Why would you say that? The OLAP cube is available while it’s rebuilding.
There are several ways you can configure how the refresh works: the ROLAP, HOLAP, and MOLAP storage modes. You can have automatic refreshes every X hours or even make the data available in near real time. Try reading about proactive caching in SSAS; it may give you some ideas.
Our shop relies heavily on SSIS to run our back end processes and database tasks. Overall, we have hundreds of jobs, and for the most part, everything runs efficiently and smoothly.
Most of the time, a job failure is due to an external dependency failing (data not available, files not delivered, etc.). Right now, our process is set up to email us every time a job fails: SSIS generates an email with the name of the job and the step it failed on.
I'm looking at creating a dashboard of sorts to monitor this process more efficiently. I know that the same information available in the Job History window from SSIS is also available by querying the msdb database. I want to set up a central location to report failures (probably using SQL Reporting Services), and also a more intelligent email alert system.
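For example, a query along these lines against msdb could feed that central report (a sketch; agent_datetime is an undocumented helper function in msdb, so swap it out if you prefer):

    -- Failed job steps from the last day, newest first.
    -- run_status: 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled.
    SELECT  j.name      AS job_name,
            h.step_id,
            h.step_name,
            msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime,
            h.message
    FROM    msdb.dbo.sysjobhistory AS h
            JOIN msdb.dbo.sysjobs  AS j ON j.job_id = h.job_id
    WHERE   h.run_status = 0
      AND   h.run_date >= CONVERT(int, CONVERT(char(8), DATEADD(day, -1, GETDATE()), 112))
    ORDER BY run_datetime DESC;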
Has anyone else dealt with this issue? If so, what kind of processes/reporting did you create around the SSIS procedures to streamline notification of job failures or alerts?
We have a similar setup at our company. We primarily rely on letting the jobs notify us when there is a problem and we have employees who check job statuses at specific times to ensure that everything is working properly and nothing was overlooked.
My team receives a SQL Server Agent Job Activity Report HTML email every day at 6 AM and 4 PM that lists all failed jobs at the top, running jobs below that, and all other jobs below that, grouped into daily, weekly, monthly, quarterly, and other categories. We essentially monitor SQL Server Agent jobs, not the SSIS packages themselves. We rely on job categories and job schedule naming conventions to automate grouping in the report.
We have a similar setup for monitoring our SSRS subscriptions. However, we only monitor this once a day since most of our subscriptions are triggered around 3-4 AM. The SSRS Subscription Activity Report goes one step further than the SQL Server Agent Job Activity Report in that it has links to the subscription screen for each report and has more exception handling built into it.
Aside from relying on reports, we also have a few jobs that are set to notify the operator via email upon job completion instead of upon job failure. This makes it easy to quickly check whether all the major ETL processes have run successfully. It's sort of an early indicator of the health of the system. If we haven't received this email by the time the first team member gets into the office, then we know something is wrong. We also have a series of jobs that will fail with a job error if certain data sources haven't been loaded by a specific time. Before I had someone working an early shift, I used to check my iPhone for the email any time I woke up in the middle of the night (which happened a lot back then since I had a newborn baby). On the rare occasion that I didn't receive an email indicating everything completed, or I received an error regarding a job step, I logged onto my machine via remote desktop to check the status of the jobs.
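A simplified sketch of what one of those "fail if the data isn't there yet" checks can look like as a T-SQL job step (the staging table and the freshness rule are placeholders for whatever "loaded" means in your ETL):

    -- Fail the job step (severity 16 error) if today's load hasn't landed,
    -- so SQL Server Agent sends the usual failure notification.
    IF NOT EXISTS (
        SELECT 1
        FROM   dbo.StagingOrders
        WHERE  LoadDate >= CAST(GETDATE() AS date)
    )
    BEGIN
        RAISERROR('StagingOrders has not been loaded for today.', 16, 1);
    END;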
I considered having our data center guys check on the status of the servers by running a report each morning around 4am, but in the end I determined this wouldn't be necessary since we have a person who starts work at 6am. The main concern I had about implementing this process is that our ETL changes over time and it would have been necessary for me to maintain documentation on how to check the jobs properly and how to escalate the notifications to my team when a problem was detected. I would be willing to do this if the processes had to run in the middle of the night. However, our ETL runs every hour of the day, so if we had to kick off all our major ETL processes in the early morning, we would still complete loading our data warehouse and publishing reports before anyone made it into the office. Also, our office starts REALLY late for some reason, so people don't normally run our reports interactively until 9am onward.
If you're not looking to do a totally custom build-out, you can use https://cronitor.io to monitor ETL jobs.
Current SSRS Job Monitoring Process:
There is currently no SSRS job monitoring process. If an SSRS job fails, a user creates an incident, and the TOPS Reporting and SSRS developer teams then start working on the basis of that incident. As a result, this process has a huge turnaround time for resolving the issue.
Proposed SSRS Job Monitoring Process:
An SSRS subscription monitoring job will help the TOPS Reporting and SSRS developer teams proactively monitor SSRS jobs. The job will produce a report listing the failed reports along with the generic execution log status and the subscription error log status. From this report a developer can first understand why a report failed and then start working proactively to resolve the issue.
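As a starting point, the failed-subscription list could come from a query like this against the ReportServer catalog (a sketch; the LastStatus text varies by delivery extension and version, so the success filters may need adjusting):

    -- Subscriptions whose most recent run did not report success.
    SELECT  c.Path        AS ReportPath,
            s.Description AS SubscriptionDescription,
            s.LastRunTime,
            s.LastStatus
    FROM    dbo.Subscriptions AS s
            JOIN dbo.[Catalog] AS c
              ON c.ItemID = s.Report_OID
    WHERE   s.LastStatus NOT LIKE 'Mail sent%'
      AND   s.LastStatus NOT LIKE 'Done:%'
      AND   s.LastStatus NOT LIKE 'New Subscription%'
    ORDER BY s.LastRunTime DESC;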