I've got several scheduled jobs in Microsoft SQL Server. I would like the history on these jobs to last indefinitely, or at least a few months.
I see that by right-clicking on "SQL Server Agent" and going to Properties, I can set the maximum number of history rows to keep, but it's currently set to 1000 and I'm well under that limit. I can also set the amount of time to keep records, but that option is currently unchecked.
Any thoughts on what else could be deleting my records?
Do you have any maintenance plans running on the server? If so, one of them might contain a History Cleanup task or be calling sp_purge_jobhistory.
http://msdn.microsoft.com/en-us/library/ms186524(v=SQL.105).aspx
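For reference, this is roughly what such a cleanup call looks like; a hedged sketch, run against msdb, with a placeholder cutoff date:

```sql
-- Removes job history older than the given date for all jobs.
-- Pass @job_name (or @job_id) as well to limit the purge to one job.
USE msdb;
GO
EXEC dbo.sp_purge_jobhistory
    @oldest_date = '2012-01-01T00:00:00';  -- placeholder cutoff
```

If a maintenance plan is running something like this on a schedule, it would explain history disappearing well before the 1000-row cap is reached.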
I just joined my new office as a database administrator. Here we are using SQL Server merge replication. It surprised me that 3 of the major replication system tables have grown huge:
MSmerge_contents
MSmerge_genhistory
MSmerge_tombstone
MSmerge_contents has grown to 64 GB, with the row count approaching 64 billion, and this happened because the expiration period for subscriptions is set to None.
Now I want to clean up this table. We are using the simple recovery model, and when I ran a delete query against this table, everything got stuck. I have no downtime window in which to stop or pause the replication process.
Can anyone help me figure out how to minimize its size or delete half of its data?
You should not delete directly from the merge system tables; that is not supported.
Instead, the proper way to clean up the metadata in the merge system tables is to set your subscription expiration to something other than None; the default is 14 days. Metadata cleanup runs when the Merge Agent runs: it executes sp_mergemetadataretentioncleanup. More information on subscription expiration and metadata cleanup can be found in How Merge Replication Manages Subscription Expiration and Metadata Cleanup.
However, since you most likely have a lot of metadata that needs to be cleaned up, I would gradually reduce the retention period. An explanation of this approach can be found here:
https://blogs.technet.microsoft.com/claudia_silva/2009/06/22/replication-infinite-retention-period-causing-performance-issues/
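In practice the retention change is made with sp_changemergepublication; a hedged sketch of the gradual approach, with a placeholder publication name:

```sql
-- Run at the publisher, in the publication database.
-- 'MyMergePub' is a placeholder name. Step the retention down gradually
-- (e.g. 90 -> 60 -> 30 -> 14 days) so each cleanup pass stays manageable,
-- rather than jumping from None straight to 14.
EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'retention',
    @value       = N'90';   -- days; lower again after cleanup completes
```

Each time the Merge Agent runs after a reduction, it cleans up a slice of the accumulated metadata instead of attempting one enormous delete.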
Hope this helps.
We've got some data driven subscriptions running on SSRS.
Sometimes they take an unusually long time to complete; when I check the activity on the server, I find that things are relatively quiet.
What I did notice is that in the ReportServer database on the Notification table there's a column called ProcessAfter.
Sometimes this value is set about 15 minutes into the future, and the subscription only completes after the time stated in that column.
What is setting this value, given that this behaviour is relatively rare?
A few days after I posted this question here, I got an answer:
When a subscription runs, several things happen: the SQL Server Agent job fires and puts a row in the Event table in the RS catalog with the settings necessary to process the subscription. The RS server service has a limited number of threads (2 per CPU) that poll the Event table every few seconds looking for subscriptions to process. When it finds an event, it puts a row in the Notifications table and starts processing the subscription.
The only reason rows would stay in the Notification table is that the RS service's event-processing threads are not processing the events.
As I understand it, the NotificationEntered column stores the time when the notification entered the table. The delivery extension provides settings that specify the number of times a report server will retry a delivery if the first attempt does not succeed (the MaxRetries property) and the interval of time, in seconds, between each retry attempt (the SecondsBeforeRetry property). The default value for SecondsBeforeRetry is 900 seconds, i.e. 15 minutes. When a delivery fails, it is retried every 15 minutes.
Reference: Monitoring and Troubleshooting Subscriptions; Delivery Extension(s) General configuration
If there are any other questions, please feel free to let me know.
Thanks, Katherine Xiong
I found the Extension(s) General Configuration link especially helpful.
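For reference, those retry settings live in RSReportServer.config on the report server, as child elements of the relevant delivery extension entry. A hedged sketch, assuming the standard e-mail delivery extension; the values shown are just examples:

```xml
<Extension Name="Report Server Email" Type="...">
    <MaxRetries>3</MaxRetries>
    <!-- Default is 900 seconds (15 minutes): the gap seen in ProcessAfter -->
    <SecondsBeforeRetry>900</SecondsBeforeRetry>
    ...
</Extension>
```

Lowering SecondsBeforeRetry would shorten the delay between retries, at the cost of hammering a failing delivery target more often.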
Operating system on all servers: Windows Server 2008 R2.
Publisher: Sql Server 2008 R2 Standard
Distributor: SQL Server 2008 R2 Standard
Web Synchronization Agent: sqlce35.dll under IIS 7.5
Subscriber: Windows XP SP 3 or Windows 7 SP1
SQL CE Client 3.1
I have an issue where merge replication stops updating subscriptions, and I don't know why.
Premises:
1 Publication
15 articles filtered by HOST_NAME(), all set to download-only, none bidirectional.
20 or 30 subscribers
We created a merge publication with several articles (15 tables) filtered by HOST_NAME(). This publication is pushed to 20 or 30 subscribers, and synchronization completes correctly. Data collection is initiated by the subscriber, which pulls changes two or three times a day, and all the changes are received by the subscriber without any problem.
All this works fine until, after a few days without problems, the replica stops delivering some changes to the subscriber. We check the publisher and the changes are there; we check the subscriber and they are not. If we modify the data again at the publisher, sometimes the changes reach the subscriber and sometimes they do not.
The problem is that this replica is no longer reliable: we do not know what has been updated and what has not.
Focusing on a single subscription, EVDBASD342232 '013243 ... ', and a single article, 'table1', that is not getting new data, we performed the following verification steps:
Running "sp_showpendingchanges NULL, NULL, 'table1', 1" returns a series of rows that correspond to the data that should be replicated. I verified that the ID matches subscription EVDBASD342232 '013243 ... ' and that the GUID corresponds to the row in 'table1' that should be replicated.
We request synchronization of the subscription and watch it in Replication Monitor: subscription EVDBASD342232 '013243 ... ' reports 0 changes and everything looks correct; it shows no errors, but it reports nothing pending synchronization.
After analyzing all this data, we still do not understand what is going wrong in the process.
Once one replica fails to synchronize, the others fail too.
If anyone can help, thanks in advance. I can provide more clarification or details if needed.
SQL CE has known issues with replication.
Most advice is not to use CE.
Try contacting Hilary Cotter; I think he has a blog and a Twitter account.
Twitter #SQLHELP is a great place to get an answer as soon as possible. Most of the SQL gurus are there to help you.
We have a performance issue with the current transactional replication setup on sql server 2008.
When a new snapshot is created and applied to the subscriber, we see network utilization on the publisher and the distributor jump to 99%, and disk queues go to 30.
This is causing application timeouts.
Is there any way, we can throttle the replicated data that is being sent over?
Can we restrict the number of rows being replicated?
Are there any switches which can be set on/off to accomplish this?
Thanks!
You have an alternative way to deal with this situation.
When setting up transactional replication on a table that has millions of records, the initial snapshot takes a long time to deliver to the subscriber.
Since SQL Server 2005 there has been an option to create the tables on both the publisher and the subscriber, populate the data on both sides, and set up replication on top of that: when you add the subscription with EXEC sp_addsubscription, set @sync_type = 'replication support only'.
Reference article: http://www.mssqltips.com/tip.asp?tip=1117
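A hedged sketch of that subscription call; publication, subscriber, and database names are placeholders:

```sql
-- Run at the publisher, in the publication database, AFTER the table
-- has been created and populated identically on the subscriber.
EXEC sp_addsubscription
    @publication    = N'MyTranPub',
    @subscriber     = N'SUBSCRIBER1',
    @destination_db = N'SubscriberDB',
    @sync_type      = N'replication support only';
```

With this sync type no snapshot is generated or applied, so the big one-off network and disk hit is avoided; the trade-off is that you are responsible for the two sides being identical at the moment replication starts.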
Our DBA has forced us to break DML code into batches of 50,000 rows at a time with a couple of minutes in between. He tunes that batch size from time to time, but this way our replicated databases are fine.
For batching, everything goes into temp tables, with a new column (call it Ordinal) computed with ROW_NUMBER() and a BatchID computed as Ordinal / 50000. Then a loop walks the BatchID values and updates the target table batch by batch. It is harder on the devs and easier on the DBAs, and there is no need to pay for more infrastructure.
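The pattern described above can be sketched as follows; table and column names (dbo.SourceChanges, dbo.Target, SomeKey, SomeValue) are placeholders, and the delay is illustrative:

```sql
-- Stage the changes with an Ordinal and a BatchID of ~50,000 rows each.
SELECT  s.SomeKey,
        s.SomeValue,
        ROW_NUMBER() OVER (ORDER BY s.SomeKey)              AS Ordinal,
        (ROW_NUMBER() OVER (ORDER BY s.SomeKey) - 1) / 50000 AS BatchID
INTO    #Work
FROM    dbo.SourceChanges AS s;

DECLARE @batch INT = 0, @maxBatch INT;
SELECT  @maxBatch = MAX(BatchID) FROM #Work;

WHILE @batch <= @maxBatch
BEGIN
    UPDATE t
    SET    t.SomeValue = w.SomeValue
    FROM   dbo.Target AS t
    JOIN   #Work      AS w ON w.SomeKey = t.SomeKey
    WHERE  w.BatchID = @batch;

    SET @batch += 1;
    WAITFOR DELAY '00:02:00';  -- breathing room for the distribution agent
END
```

Each batch is a small transaction, so the log reader and distributor can keep pace instead of choking on one giant delivery.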
I have my production SQL Server in a remote data center (and the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 seconds) on our local development SQL Server, and we were OK with that. It was promoted to production, and when I run the same query against the production DB (in the data center) from my local Management Studio, the query takes about 7 minutes 17 seconds (shown in the bottom right corner of Management Studio). When I ran Profiler, the duration reported for that query was 437101, which is about 437 seconds in milliseconds and so matches the 7:17 in Management Studio. My DBA says that in production the view takes only about 60 to 80 seconds, yet I see different numbers in Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: the duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client Processing time: 90393
Total Execution time: 92221
Wait time on server replies: 1828
My best guess at what "Duration" in Profiler means is "the time taken by SQL Server (for the optimization engine to parse the query and generate the query plan, or use the existing plan, plus fetching records from different pages) to generate the result set, excluding the time taken by the data to travel over the wire to the client".
Edit: I find that both these times are about the same (Management Studio vs Profiler). How do they relate to the times I see in the client statistics?
Can someone shed more light on these?
If I'm understanding your question correctly, you are first asking about the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in the lower right-hand corner for general time and/or by SET STATISTICS TIME ON). In addition to that, you seem to be unconvinced by the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS would report back via SET STATISTICS TIME ON:
"Displays the number of milliseconds
required to parse, compile, and
execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the
event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler will report in microseconds if you're going against SQL Server 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the select, which includes delivery to the client; this is consistent in both cases.
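To see both measures from the same session, you can turn the statistics on around the query itself; a minimal sketch, with a placeholder view name:

```sql
SET STATISTICS TIME ON;
SELECT * FROM dbo.MyView;   -- placeholder for the slow view
SET STATISTICS TIME OFF;
-- The Messages tab then reports, per statement:
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.
```

The elapsed time here is what you would compare against the Profiler Duration for the same execution.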
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
SELECT
    a.session_id,
    a.start_time,
    a.status,
    a.command,
    DB_NAME(a.database_id) AS database_name,
    a.blocking_session_id,
    a.wait_type,
    a.wait_time,
    a.cpu_time,
    a.total_elapsed_time,
    b.text
FROM sys.dm_exec_requests AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE a.session_id != @@SPID;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem; otherwise, check what comes of this. If you're profiling your remote execution of the query, the Duration will reflect the time statistics you see in SSMS. HOWEVER, if you're using Profiler and finding that the duration of this query, when executed from one of the web servers that sits in the same data center as the SQL Server, is still taking 7 minutes, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, try to filter for your view, and take the average to see if you're on target for performance.
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this...
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Properties tab for your query, you may find a magical "Elapsed Time" entry that gives you an execution time...
Hope it helps...
Try this:
DECLARE @time AS DATETIME = CURRENT_TIMESTAMP;

-- Your query here

SELECT CAST(DATEDIFF(SECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     + ','
     + CAST(DATEDIFF(MICROSECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     AS 'Execution Time';