How is Notification.ProcessAfter set? (SSRS 2008R2) - reporting-services

We've got some data driven subscriptions running on SSRS.
Sometimes they take an unusually long time to complete; if I check the activity on the server, I find that things are relatively quiet.
What I did notice is that in the ReportServer database, the Notifications table has a column called ProcessAfter.
Sometimes this value is set about 15 minutes into the future, and the subscription only completes after the time stated in that column.
What is setting this value? This behaviour is relatively rare.

A few days after posting this question, I got an answer:
When a subscription runs, several things happen: the SQL Server Agent job fires and puts a row in the Event table in the RS catalog with the settings necessary to process the subscription. The RS server service has a limited number of threads (2 per CPU) that poll the Event table every few seconds looking for subscriptions to process. When it finds an event, it puts a row in the Notifications table and starts processing the subscription.
The only reason that rows would stay in the Notifications table is that the RS service event-processing threads are not processing the events.
As I understand it, the NotificationEntered column stores the time when the notification was entered. Delivery extensions provide settings that specify the number of times a report server will retry a delivery if the first attempt does not succeed (the MaxRetries property) and the interval of time, in seconds, between each retry attempt (the SecondsBeforeRetry property). The default value for SecondsBeforeRetry is 900 seconds, i.e. 15 minutes. When a delivery fails, the server retries it every 15 minutes.
Reference: Monitoring and Troubleshooting Subscriptions, Delivery Extension(s) General configuration
If there are any other questions, please feel free to let me know.
Thanks, Katherine Xiong
I found the Extension(s) General Configuration link especially helpful
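To confirm that retries are what you're seeing, you can watch the catalog directly. A small diagnostic sketch (the ReportServer schema is undocumented, so verify these Notifications column names against your own 2008 R2 catalog before relying on them):

-- Notifications pushed into the future by the retry logic
SELECT n.NotificationID,
       n.SubscriptionID,
       n.NotificationEntered,   -- when the notification was created
       n.ProcessAfter,          -- non-NULL when a retry has been scheduled
       n.Attempt                -- retry attempt counter
FROM   dbo.Notifications AS n
WHERE  n.ProcessAfter IS NOT NULL;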

Related

Linux: schedule command at a specific, different predetermined time every day

I have to run a command every day at a different time. The times are known in advance and saved in a MySQL database in the familiar YYYY-MM-DD HH:MM:SS format.
What I thought of:
1. cron: schedule the job for the exact time on the first day, then have the script itself modify the crontab entry with the correct time for the next day.
2. cron: schedule the job at approximately the right time, then have it read the exact time from the database and sleep until then.
3. cron: schedule the job to run every minute, and leave it to the script to determine whether the current date/time corresponds to the right execution time; proceed if it is, exit if not.
4. at: submit the job the first day with at, then have it read the next day's time from the database and resubmit itself for then with at.
Additional info:
The command is a PHP script that composes the message of the day and sends it to all users registered on the website. I can consider other technologies if they solve this problem better. I would like to retain the ability to reboot the server (outside of the intended execution hour) without worrying too much about jobs getting lost; solutions 1 and 3 look better in this respect. I'm starting with two commands to be run at two different times of the day, but I could soon end up with dozens of similar jobs to be scheduled at different times every day, so I would prefer to avoid clutter as much as possible. I'd probably go with option 3 at this point.
The question(s):
Is there a better / preferred / established way of accomplishing this task? Solutions other than those mentioned above are welcome. What are the main drawbacks (of your recommended solution) that I should be aware of?
I do believe you need to build a custom application to implement the logic you want.
Eventually you can use cron to start the process, or to make sure that the process is running (in case it died or was killed).
In your place, what I would do is write a custom PHP program (or Python, or you name it) that performs the following:
Opens a connection to the DB
Checks when the next execution is scheduled
Calculates whether it is time to run
If not, sleeps for X seconds (this depends on your preference)
If it is time to run, performs its duty
Sleeps again, and the loop begins anew
An alternative would be to re-read the execution schedule on every iteration, to catch changes to the schedule.
Another would be to read it once and sleep until the execution time, but then you would not catch changes in the schedule.
This all depends on you; all in all, the program is an extremely easy one.
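For the "check when the next execution is scheduled" step, the query itself is trivial. A sketch, assuming a hypothetical schedule table with run_at and command columns (adapt the names to your actual schema):

-- Next job due, soonest first
SELECT run_at, command
FROM   schedule
WHERE  run_at >= NOW()
ORDER  BY run_at
LIMIT  1;

The program then sleeps until run_at, or for a fixed interval if you want to pick up schedule changes along the way.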
I ended up using solution 3 above and am quite satisfied with it so far.
All the logic is in the .php file, which is responsible for the following:
save the current date/time in a variable (e.g. $now)
perform any considerations on it
scan the database in search of a matching date/time
This actually allows for a reasonable degree of flexibility:
I can choose not to run any commands if a certain semaphore file exists:
if (file_exists($filename)) {exit;}
I can set parameters in an option file enabling e.g. debug or test modes:
include 'parameters.php';
if ($debug === true) {error_reporting(E_ALL);}
I can avoid bothering users if it is, let's say, new year's day:
if (date('m-d') == '01-01') {exit;}
I can introduce delays based on custom logic:
if (date('w', strtotime($now)) === '0') {$now = date('Y-m-d H:i:s', strtotime($now . ' +15 minutes'));}
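Since the script runs every minute, the database scan itself boils down to matching the current minute. A sketch against a hypothetical jobs table, with $now (truncated to the minute) bound as the parameter:

-- Jobs scheduled for the minute currently being processed
SELECT id, command
FROM   jobs
WHERE  scheduled_at >= ?
AND    scheduled_at <  DATE_ADD(?, INTERVAL 1 MINUTE);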

MySql - Missed event schedule

I am trying to use the MySQL event scheduler in my application. I have not used it before, so I have some confusion.
I want to know: if my computer is off on the scheduled date, will the schedule continue the next day, after I start my computer?
Like:
my schedule is set for the beginning of every month (no predefined time set)
if on that date my computer/server is off,
will MySQL continue the scheduled event the next day, after turning on my computer/server?
If no, then please suggest a solution.
Hmmmm, have you looked at something like this?
MySQL: Using the Event Scheduler
... or:
How to create MySQL Events
... or even: MySQL :: MySQL 5.1 Reference Manual, 19.4.1. Event Scheduler Overview?
Also please keep in mind that SQL DBMS servers are written with the rather strong presumption that they will be kept up and operating 24 hours per day with only brief periods of downtime for maintenance or repairs. There is generally very little consideration for operation on machines which are shutdown at night and while not in use.
If you simply store a table of dates and events, then you can simply query that table for events which have passed or are upcoming within any range you like ... and you can run the program(s) containing those queries (and performing any appropriate activities based on the results) whenever you start your computer, and periodically while it's up and running.
These links refer to a feature of MySQL which is designed to have the server internally execute certain commands (MySQL internal commands, such as re-indexing, creating/updating views, cleaning tables of data which "expires", and so on). I don't know whether a MySQL server would attempt to execute all events which were missed during downtime, though it should only be a little bit of work to follow the tutorial: schedule some event for a set time (say 15 minutes after the time you expect to hit [Enter]), then shut down your computer (or even just the MySQL server) and go off to lunch. Then come back, start it up, and see what happens.
The scheduled event could be something absurdly simple, like inserting the "current" time into some table you set up.
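That experiment, as a concrete sketch (table and event names are made up; run in the mysql client):

-- Somewhere to record that the event actually fired
CREATE TABLE IF NOT EXISTS event_log (ran_at DATETIME);

-- The scheduler is off by default
SET GLOBAL event_scheduler = ON;

-- One-shot event 15 minutes from now; PRESERVE keeps the definition
-- around afterwards so you can inspect it with SHOW EVENTS
CREATE EVENT test_missed_event
    ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 15 MINUTE
    ON COMPLETION PRESERVE
    DO INSERT INTO event_log VALUES (NOW());

If the server was down at the scheduled time, check event_log (and SHOW EVENTS) after restarting to see whether the insert ever happened.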

Set eventual consistency (late commit) in MySQL

Consider the following situation: you want to update the number of page views of each profile in your system. This action is very frequent, as almost all visits to your website result in a page-view increment.
The basic way is update Users set page_views=page_views+1. But this is far from optimal, because we don't really need an instant update (an hour late is OK). Is there any other way in MySQL to postpone a sequence of updates and apply them cumulatively at a later time?
I tried another method myself: storing a counter (the number of increments) for each profile in a file. But this means handling a few thousand small files, and I think the disk I/O cost (even with a deep tree structure for the files) would probably exceed that of the database.
What is your suggestion for this problem (other than MySQL)?
To improve performance you could store your page-view data in a MEMORY table - this is super fast but temporary: the table only persists while the server is running, and on restart it will be empty...
You could then create an EVENT to update a persistent table on a timed basis. This would improve performance a little, with the risk that, should the server go down, only the visits recorded since the last run of the event would be lost.
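A minimal sketch of that arrangement (names are illustrative; in the mysql client you'll need DELIMITER statements around the compound-body event):

-- Fast, volatile staging table for raw hits
CREATE TABLE page_view_buffer (
    profile_id INT NOT NULL PRIMARY KEY,
    views      INT NOT NULL DEFAULT 1
) ENGINE = MEMORY;

-- Per hit: INSERT INTO page_view_buffer (profile_id) VALUES (?)
--          ON DUPLICATE KEY UPDATE views = views + 1;

-- Hourly event that folds the buffer into the persistent Users table
CREATE EVENT flush_page_views
    ON SCHEDULE EVERY 1 HOUR
    DO
    BEGIN
        UPDATE Users u
        JOIN   page_view_buffer b ON b.profile_id = u.id
        SET    u.page_views = u.page_views + b.views;
        DELETE FROM page_view_buffer;
        -- Note: hits arriving between the UPDATE and the DELETE are lost;
        -- MEMORY tables are non-transactional, so rotate tables with
        -- RENAME TABLE if that matters to you.
    END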
The link posted by James in a comment on your question, wherein lies an accepted answer with a further comment about memcached, was my first thought also. Just store the profile IDs in memcached, then set up a cron job to run every 15 minutes, grab all the entries, and issue the updates to MySQL in a batch. There are a few things to consider, though:
1. When you run the batch script to grab the IDs out of memcached, you will have to ensure you remove all entries which have been parsed; otherwise you run the risk of counting the same profile views multiple times.
2. Since memcached doesn't support wildcard searching of keys, and since you have to purge existing keys for the reason stated in #1, you will probably have to set up a separate memcached server pool dedicated solely to tracking profile IDs, so you don't end up purging cached values which have no relation to profile-view tracking. You could avoid this by storing the profile ID and a timestamp within the value payload; the batch script then steps through each entry and checks the timestamp, adding the entry to the update queue if it's within the time range you specified, and stopping once it hits the upper limit of that range.
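The batched write itself can then be a single statement per run rather than thousands of single-row updates. A sketch, with hypothetical profile IDs and tallies collected from memcached (Users and page_views are from the question):

-- One round trip applying all accumulated counts
UPDATE Users
SET    page_views = page_views + CASE id
           WHEN 101 THEN 5   -- profile 101 had 5 views this window
           WHEN 102 THEN 2
           ELSE 0
       END
WHERE  id IN (101, 102);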
Another option may be to parse your access logs. If user profiles live at a known location like /myapp/profile/1234, you could parse for this pattern and count profile views that way. I ended up having to go this route for advertiser tracking, as it was the only repeatable way to generate billing numbers. If there was a billing dispute, we would offer to send the advertisers the access logs so they could parse them themselves.

Microsoft Sql server schedule jobs - history being deleted

I've got several Scheduled jobs in Microsoft SQL server. I would like the history on these jobs to last indefinitely or at least a few months.
I see that by right clicking on "SQL Server agent" and going to properties, I can set the maximum number of lines to keep, but it’s currently set to 1000 and I’m well under that limit. I can also set the amount of time to keep records but it’s currently unchecked.
Any thoughts on what else could be deleting my records?
Do you have any maintenance plans running on the server? If so, one of them might contain an agent history cleanup task or be calling sp_purge_jobhistory.
http://msdn.microsoft.com/en-us/library/ms186524(v=SQL.105).aspx
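One quick way to tell whether something is purging on a schedule is to watch the oldest surviving history row (this queries the documented msdb tables):

-- If this date creeps forward day by day, a cleanup task is running
SELECT TOP (1) j.name, h.run_date, h.run_time
FROM   msdb.dbo.sysjobhistory AS h
JOIN   msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
ORDER  BY h.run_date, h.run_time;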

Query Execution time in Management Studio & profiler. What does it measure?

I have my production SQL Server in a remote data center (the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 secs) on our local development SQL Server, and we were OK with it. It was promoted to production, and when I run the same query on the production DB (which is in the data center) from my local Management Studio, the query takes about 7 minutes 17 secs (shown in the bottom right corner of Management Studio). When I ran Profiler, the time taken to execute that query showed as 437101 - which I first read as microseconds, but which is actually about 437101 milliseconds, i.e. the same 7:17. My DBA says that in prod the view takes just 60 to 80 seconds, though I see different numbers from Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: the duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client Processing time: 90393
Total Execution time: 92221
Wait time on server replies: 1828
(Note that Client Processing time + Wait time on server replies = 90393 + 1828 = 92221, which is exactly the Total Execution time.)
My best guess at what "duration" in the profiler means is: the time taken by SQL Server (the optimization engine parsing the query, generating the query plan or reusing an existing plan, plus fetching records from different pages) to generate the result set, excluding the time taken by the data to travel over the wire to the client.
Edit: I find that both these times are about the same (Management Studio vs Profiler). How do they relate to the times I see in the client statistics?
Can someone throw more light on these?
If I'm understanding your question correctly, you are first questioning the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in lower right-hand corner for general time and/or by SET STATISTICS TIME ON). In addition to that, you seem to be unconvinced of the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS would report back via SET STATISTICS TIME ON:
"Displays the number of milliseconds
required to parse, compile, and
execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the
event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler reports in microseconds if you're going against SQL 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the select, which includes delivery to the client; this is consistent in both cases.
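If you want to compare like for like, you can capture the SSMS-side timings explicitly with standard T-SQL (the view name below is a placeholder):

SET STATISTICS TIME ON;
SELECT * FROM dbo.YourView;   -- hypothetical view name
SET STATISTICS TIME OFF;
-- The Messages tab then reports, e.g.:
--   SQL Server Execution Times:
--      CPU time = ... ms,  elapsed time = ... ms.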
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
select
a.session_id
,a.start_time
,a.status
,a.command
,db_name(a.database_id) as database_name
,a.blocking_session_id
,a.wait_type
,a.wait_time
,a.cpu_time
,a.total_elapsed_time
,b.text
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(a.sql_handle) b
where a.session_id != @@SPID;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem - otherwise, check out what does come of this. If you're profiling your remote execution of the query, the Duration will be reflective of the time statistics you see in SSMS. HOWEVER, if you're using Profiler and finding that the duration of this query is still 7 minutes when it's executed from one of the web servers that sits in the same data center as the SQL Server, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, filter for your view, and take the average to see if you're on target for performance.
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this...
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Properties tab for your query you may find some magical "Elapsed Time" that may give you the execution time...
Hope it helps...
Try this:
DECLARE @time AS DATETIME = CURRENT_TIMESTAMP;
-- Your Query
SELECT CAST(DATEDIFF(SECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
+ ','
+ CAST(DATEDIFF(MICROSECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
AS 'Execution Time';