Why does SSRS need to recycle the application domain

I'm working with MS Reporting Services 2016. I noticed that the application domain is set by default to recycle every 12 hours. The impact on users after a recycle is either a slow response from Reporting Services or a failed report. Both disappear after a refresh of the report, but this is not ideal.
I have come across an SO answer where people suggest that you can turn off the scheduled recycle by setting the configuration setting RecycleTime to zero.
I have also read about writing a script to manually restart Reporting Services (which also recycles the app domain), plus a script that simply loads a report at a controlled time to remove the first-load slowness. However, this all seems like a workaround to me and I would rather not have to do it.
My concern is that there must be a logical reason for the scheduled recycle, but I cannot find any information explaining it. Does anyone know whether there is a negative impact from turning off the scheduled application domain recycle?

RecycleTime is a mechanism aimed at making sure SSRS isn't consuming RAM it doesn't need and potentially starving the rest of the machine. Disabling the recycle essentially removes SSRS's ability to claw back memory used during a brief period of intensive processing.
If you are confident your machine is suitably resourced, you can turn the recycle off; if not, schedule it for an out-of-hours time and define a Cache Refresh Plan to cache any business-critical reports immediately afterwards, to minimise user impact.
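If you do go that route, the value lives in RSReportServer.config. Here is a minimal PowerShell sketch; the path assumes a default SSRS 2016 instance and the usual Configuration/Service layout, so check against your own install:

# Path assumes a default SSRS 2016 (MSRS13.MSSQLSERVER) instance - adjust for your install.
$configPath = 'C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\RSReportServer.config'
[xml]$config = Get-Content -Path $configPath -Raw
# RecycleTime is in minutes: 720 (12 hours) is the default, 0 disables the scheduled recycle.
$config.Configuration.Service.RecycleTime = '0'
$config.Save($configPath)
# The service needs a restart to pick up the change.
Restart-Service -Name 'ReportServer'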
Further reading here: https://www.mssqltips.com/sqlservertip/2735/prevent-sql-server-reporting-services-slow-startup/

I guess I'm possibly oversimplifying this, but SSRS was designed to recycle every 12 hours (the default) for a reason - if it ain't broke, don't fix it. In my case, I wanted to control when the recycle occurred. I execute a one-line PowerShell script from a SQL Agent job at 6:50 am, then generate a subscription report at 7 am, which kick-starts SSRS, and the users do not see any performance degradation.
Restart-Service -Name 'ReportServer'
Leaving the SSRS config file setting at 720 minutes lets the recycle occur again at 6:50 pm. Subscription reports generate throughout the night, so if a human gets on SSRS after hours there should be no performance issue because the system is already running.
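If you don't have a convenient subscription to act as the warm-up, a script that simply renders one report after the restart does the same job (this is the "load a report at a controlled time" idea from the question). A rough sketch; the server name and report path are placeholders:

# Warm up SSRS after a restart by rendering one lightweight report,
# so the first (slow) request after the recycle isn't taken by a user.
# Server name and report path are placeholders - substitute your own.
$reportUrl = 'http://myserver/ReportServer?/Sales/DailySummary&rs:Format=PDF'
Restart-Service -Name 'ReportServer'
Start-Sleep -Seconds 30   # give the service time to come back up
Invoke-WebRequest -Uri $reportUrl -UseDefaultCredentials -OutFile "$env:TEMP\warmup.pdf"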
Are we possibly overthinking it?

Related

MS Access Network Interruption

I have an MS Access system on a network with 15 users. The front end is installed on each user's C: drive and the back end on a mapped drive (X:). The front end is about 8 MB, the back end around 25 MB.
Since day 1, one user constantly (every 30 minutes at best) and some other users intermittently get a "network interrupted" error. Apart from being quite annoying to the users, this causes a temporarily masked/hidden issue where update queries run without error on two tables but do not actually update or insert the data.
A compact and repair resolves the issue, but it is not feasible to run daily as users have the system open throughout the day. This is such a headache that I've had to write code that checks the data was actually written after each query runs, just to detect whether the issue is present.
Both myself and IT are third parties to the business, and we are in the difficult opposing positions of "it's your network" and "it's your database". Thankfully it's all calm and peaceful, but it's not getting a solution for the client.
I've installed MS Access FE/BE systems on over a hundred networks over the last 10 years and have only ever seen this issue on a peer-to-peer network. I'm aware that Access is very picky about network stability, but I am faced with users who don't believe there is a problem with the network, as their email works and the internet radio doesn't drop out.
What I'm hoping to get assistance with here is either a tool/method that can test a network for the stability/robustness MS Access needs and prove one of us right or wrong, or perhaps some advice on how I could move forward from this deadlock.
Thanks
I have seen a similar instance with damaged cables. A client of mine had mice that chewed through part of the cable, causing an intermittent interruption. In another case, a cubicle wall was sitting on top of the network cable (poor cable installation), causing a short.
To bypass Access's need for a constant network connection, I have my systems create local temporary tables for any view, and a local one-record table for any detail form that is being actively edited. Once the user hits Save it runs the update query, and once that's done, no active connection to the server is needed again. It allows me to run much faster Access systems, and it eliminated the need for stable wireless or Ethernet. It does require quite a bit of structural change at first - you have to insert code to create the local temporary tables in the FE file, and also code an update sequence in the forms' AfterUpdate events - but the time it has saved me and my users has been tremendous.
To put it in perspective, I have 1200+ users in the same Access database in a given week (sometimes 400+ in a day), and since they only 'pull' data from the server to make local table copies, there are only a handful of connections at any one time. My users can now dock and undock from their desks without needing to close the database.

Subscriptions delayed behind single report running thousands of times

I find it difficult to think I'm the first to run into this, so either my searching ability has truly become sad or the solution is so apparent that nobody has asked before. No matter, I must ask.
We're on SSRS 2012, which has a few hundred reports on it and over a hundred subscriptions; generally speaking, it works fine.
This is a Native mode box; we have a separate box for the SharePoint-integrated version.
The monthly 'statement' report is data-driven and is fed over 100K personIds to process and export to PDF on a file server. The SQL takes 0.3 seconds and the PDF rendering not much more; there are just so many of them. So when this one runs, all other subscriptions queue up behind it and wait, often until it completely finishes. Not good - month-end reports are important to a few departments.
My question: can I set the priority of this report somehow (or use some other setting) to allow other reports to process when they are scheduled?
It just boggles my mind that this is even an issue, but it is.
Thanks for any insights-
Craig
There's no way to set an individual report's priority, unfortunately.
I would look first at reducing the number of required executions, if possible. Failing that, you can try manually separating the personIds into batches and assigning a different subscription to each batch: sub 1 could handle rows 1-5000, sub 2 rows 5001-10000, and so on. Stagger the execution times and other subscriptions will be able to slip in between the batch runs.
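One way to stagger the batches without juggling a dozen schedules is to put the batch subscriptions on a dummy schedule and fire them yourself from a SQL Agent job. A hedged PowerShell sketch, using the ReportServer database's AddEvent procedure (the standard way to trigger a subscription manually); the server name and subscription GUIDs are placeholders you would look up in ReportServer.dbo.Subscriptions:

# Fire each batch subscription in turn, with a gap in between so other
# scheduled subscriptions can slip in. The GUIDs below are placeholders -
# look yours up in ReportServer.dbo.Subscriptions.
$batchSubscriptions = @(
    '00000000-0000-0000-0000-000000000001',
    '00000000-0000-0000-0000-000000000002',
    '00000000-0000-0000-0000-000000000003'
)
foreach ($subscriptionId in $batchSubscriptions) {
    Invoke-Sqlcmd -ServerInstance 'MYSSRSHOST' -Database 'ReportServer' `
        -Query "EXEC dbo.AddEvent @EventType = 'TimedSubscription', @EventData = '$subscriptionId';"
    Start-Sleep -Seconds 600   # ten-minute gap between batches
}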
But really, if you truly need to generate 100k-plus reports, I don't think SSRS is the best option. Have you considered SSIS instead?
Dean Kalanquin has an excellent examination of how subscriptions in SSRS work, if you're looking to see what's happening under the hood:
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx

ssrs processing time spikes

I'm developing a report in SSRS for display in a public location within the company. The report shows up-to-the-minute data on department activity, so it is set to refresh every 30 seconds or so.
The problem I'm running into is that once every few hours, the report throws an error (rsErrorReadingNextDataRow and rsProcessingAborted). All I have to do is hit refresh in the browser it is rendered in and it reruns and charts along happily for a while.
After looking at the ReportServer execution log, it appears that these errors coincide with a phenomenal spike in processing time. On average, each time the report refreshes, processing time is 400-800 ms, but when it spikes it goes to ~90,000 ms (yes, ninety seconds) and then throws the error.
Being new to SSRS I am not sure where to begin looking for the root cause.
Can anyone give me pointers as to where I can start to find out what is causing the processing time to rocket like that? Data retrieval is stable at ~5000 ms and rendering time is stable at around 200 ms. It's only the processing that goes haywire.
Some background on the report and data:
Data is pretty straightforward. It's based on a view that pulls transactions from the last 7 weeks, so the number of records ebbs and flows each week at around 8,000 records. When SELECT * FROM the view is run in SSMS, the query takes about 5 seconds. I'll work on speeding that up after I resolve this processing issue.
No parameters are used, though the view does use getdate() to figure out which records to show from base tables.
No stored procedures are used.
The report itself comprises 6 panes within a single tablix, none of which has to draw more than a few dozen marks ('cept one is a map of US states).
The report does have one feature that may be related, though I am not sure how. The report definition filters the dataset based on mod 3 of the built-in execution time variable, rotating the report between "This Month", "This Week", and "Today" activity. There is an additional mod 3 on execution time that rotates visibility of the panes. The result is that every 30 seconds a report with a different combination of charts shows up, on a rotating time frame.
Don't know if this could be a cause, but it's the only element of the report that makes it remotely fancy. Everything else about the report is actually rather straightforward/plain.
While I would love to identify and eliminate the spikes, even a mechanism to automatically refresh on error, or refresh on timeout (or something to that effect) would do the trick. I need to be able to launch the report in the morning and have it run unmanned all day without any human interaction, refreshing every 30 seconds for the duration of business hours.
Just a recap of the comments:
it might be possible to push the report page into a frame, then add JavaScript to the main page to periodically refresh the frame. It wouldn't fix the underlying problem, but it might mask it.
it might be worth looking at the performance monitoring tools in SQL Server - it might be a database issue (a temp table filling up, etc.)
You noted that adding a WITH (NOLOCK) hint seemed to help locate the problem.
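If you want hard numbers before the next spike, a quick-and-dirty monitor that runs the report's query on the same 30-second cadence and logs the duration can tell you whether the spikes line up with the database or with SSRS processing. A rough sketch, assuming the SqlServer module's Invoke-Sqlcmd; the server, database, and view names are placeholders:

# Run the report's underlying query every 30 seconds and log the elapsed
# time, so spikes can be matched against the SSRS ExecutionLog.
# Server, database, and view names are placeholders.
while ($true) {
    $elapsed = Measure-Command {
        Invoke-Sqlcmd -ServerInstance 'MYDBHOST' -Database 'MyDatabase' `
            -Query 'SELECT * FROM dbo.DeptActivityLast7Weeks;' | Out-Null
    }
    ('{0:s}  {1} ms' -f (Get-Date), [int]$elapsed.TotalMilliseconds) |
        Add-Content -Path "$env:TEMP\query-timings.log"
    Start-Sleep -Seconds 30
}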

Should proactive caching be used instead of processing dimensions?

I am confused about best practice for updating cube data throughout the day. We have a small order processing environment, where I would like to update a dashboard containing Order statuses. I am able to get this to work by creating an SSIS package and scheduling it to run every 4 minutes. This works.
But when I disable the SSIS job above and instead turn on real-time ROLAP on all the dimensions and the cube, nothing in the dashboard ever changes. Do I misunderstand the purpose of proactive caching?
I'm using SQL Server Standard for our production data, but our Analysis Services instance is Enterprise, in case that makes a difference. I'd also be willing to use automatic or scheduled MOLAP if that would work.
No, you did not. I think you have configuration issues.
I assume the job you disabled was copying data from your database to your data warehouse, right?
And your cube reads from your data warehouse, right?
So now your production database is being updated (by your application), but the changes are not being pushed to the data warehouse the cube reads from (because the job is off).
Proactive caching (especially with ROLAP) is a way to get your data live without having to schedule a cube refresh every x minutes. But the job that populates your DW must still be running.
My guess is that the package you disabled was, besides updating the DW, also refreshing the cube. Check its source.

get machine time in air/as3

Is there a way to get a reliable machine time that cannot be altered by changing the time/date on a mobile device or computer?
I'm looking for a way to measure elapsed time in an AIR app where you can...
run the app
change the system time (back an hour, or change the date forward by two weeks, etc.)
run the app again and have it still know the correct elapsed time
Any ideas how to implement this without relying on a server connection?
I feel fairly confident saying it can't be done. Without a remote server to ping, all the app knows is what the OS tells it - and if the app isn't running, it can't be collecting its own data independently of the OS.
The closest you could get would be storing the system time at app shutdown, then comparing that to the system time at app startup to see if the OS clock has been turned back - if the current time is before the previous time, you know something's up. Even then, the last known system time has to be stored somewhere locally, which means the user is free to edit it if they want.
You could also look at having the app run constantly in the background, or even as a service, though that moves beyond what is viable with AIR - and even then, time would only pass while the system was on, and only if the user let the background process keep running.
If your problem is just measuring the elapsed time between two moments while the app is running, getTimer() should do the job.