A month ago we started using the Release Management part of TFS heavily, and I noticed that the collection's database size is increasing at an accelerating rate.
In order to stabilize the database growth, we're adjusting the retention policies of the release definitions.
Do you know when the purging process for deleted builds/releases runs?
Can it be triggered manually?
TFS v15.117.26714.0
Update 11/03
When you change the Permanently Destroy Releases setting on the Retention Policy Settings page and save the changes, it will prompt:
Changes to the settings will be effective only for the new Release
Definitions and Environments created after the save.
So it will not affect your previously deleted releases; I'm afraid an already deleted release will keep the 30-day retention (the default setting) that applied when it was created. You may have to wait.
Update
When a build is deleted through the retention policy, this is only a logical delete: the records remain in the database but are flagged as deleted.
The Destroy command physically deletes the build from the system; it will be completely gone and all records will be removed from the DB.
TFS will not reclaim the space straight away; that is done later by a job run by the TFS Background Job Agent. The job agent reclaims the space as part of its normal processing, so you may not see the space recovered for 24 hours or more.
If you don't want to wait that long, you can run tf destroy with the /startcleanup switch, which immediately kicks off the cleanup job.
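For reference, a minimal sketch of the tf.exe invocation (the server path and collection URL are placeholders; run it from a Developer Command Prompt or any shell where tf.exe is on the PATH):

# Placeholders only: replace the server path and collection URL with your own.
# /startcleanup asks TFS to kick off the cleanup job immediately instead of waiting for the scheduled run.
tf destroy '$/MyTeamProject/ObsoleteFolder' /collection:http://your-tfs:8080/tfs/DefaultCollection /startcleanup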
When are builds deleted
TFS: Your retention policies run every day at 3:00 A.M. UTC. There is no option to change this process.
Source Link
It's not possible to manually trigger the purging process for deleted builds/releases in TFS for now.
Note that builds deployed as part of releases are also governed by the release retention policy. The build linked to a release has its own retention policy, which may be shorter than that of the release.
If you want to retain the build for the same period as the release, set the Retain build checkbox for the appropriate environments. This overrides the retention policy for the build, and ensures that the artifacts are available if you need to redeploy that release.
When you delete a release definition, delete a release, or when the retention policy deletes a release automatically, the retention policy for the associated build will determine when that build is deleted.
I have an Azure SQL DB where I am executing a change with a C# call (using await db.SaveChangesAsync();).
This works fine and I can see the update in the table, and in the APIs that I call which pull the data. However, roughly 30-40 minutes later, I run the API again and the value is back to the initial value. I check the database and see that it is indeed back to the initial value.
I can't figure out why this is, and I'm not sure how to go about tracking it down. I tried to use the Track Changes SQL command but it doesn't give me any insight into WHY the change is happening, or in what process, just that it is happening.
BTW, This is a test Azure instance that nobody has access to but me, and there are no other processes. I'm assuming this is some kind of delayed transaction rollback, but it would be nice to know how to verify that.
I figured out the issue.
I'm using an Azure free tier service, which runs on a shared virtual machine. When the app went inactive it was shut down, then restarted on demand when I issued a new request.
In addition, I had a Seed method in my Entity Framework migration configuration that set the particular record I was changing back to 0, and when the app restarted it re-ran the migration, because it was configured to do so in my web.config.
Simply disabling the EF migrations and republishing does the trick (it will also go away when I upgrade to a better tier for real production). I verified that records other than those expressly mentioned in the migration's Seed method were not affected, so that was clearly the cause; after disabling the migrations I am not seeing the problem any more.
There is a bug in MySQL 5.7.14 regarding the password hash that has been fixed in version 5.7.19. But MySQL on GCP doesn't offer any option to do a minor upgrade, so can anyone suggest how to go about this issue?
Version 5.7.25, which includes the fix for this bug, will be in the next maintenance release later this month.
No, you cannot do minor upgrades yourself in Cloud SQL because it is a fully managed service by Google, and all updates and upgrades are done behind the scenes on their customers' instances. These updates can be applied at any time during the next maintenance cycle. However, you can control the day and time by specifying a maintenance window for the instance in question.
When you specify a maintenance window, Cloud SQL will not initiate updates outside of that window. This way you can choose a window when there is little or no traffic on your applications, which helps reduce the disruptive side effects of maintenance. Maintenance usually takes 1-3 minutes for the new update to be pushed and the instance to become available again.
To specify a maintenance window:
1- Go to the project page and select a project.
2- Click an Instance name.
3- On the Cloud SQL Instance details page, click Edit maintenance preferences.
4- Under Configuration options, open Maintenance.
5- Configure the following options:
Preferred window. Set the day and hour range when updates can occur on this instance.
Order of update. Set the order for updating this instance, in relation to updates to other instances. Set timing to Any, Earlier, or Later. Earlier instances receive updates up to a week earlier than later instances within the same location.
Read more on it here.
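If you prefer the command line, the same preference can also be set with gcloud; a minimal sketch, assuming a reasonably recent Cloud SDK (the instance name and the day/hour values are placeholders):

# Placeholders: replace my-instance with your Cloud SQL instance name and pick your own day/hour (UTC).
gcloud sql instances patch my-instance --maintenance-window-day=SUN --maintenance-window-hour=2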
I'm working with MS Reporting Services 2016. I noticed that the application domain is set by default to recycle every 12 hours. The impact on users after a recycle is either a slow response from Reporting Services or a failed report. Both disappear after refreshing the report, but this is not ideal.
I have come across an SO answer where people suggest that you can turn off the scheduled recycle by setting the configuration attribute RecycleTime to zero.
I have also read about writing a script to manually restart Reporting Services, which also recycles the app domain, and then a script that simply loads a report at a controlled time to remove the first-time load issues. However, this all seems like a workaround to me and I would rather not have to do it.
My concern is that there must be a logical reason for having the scheduled recycle time, but I cannot find any information explaining this. Does anyone know if there is a negative impact from turning off the scheduled application domain recycle?
RecycleTime is a feature aimed at making sure SSRS isn't consuming RAM it doesn't need and potentially starving the rest of the machine. Disabling the recycle essentially removes the ability to claw back any memory used during a brief period of intensive processing.
If you are confident your machine is suitably resourced, you can turn the recycle off. If not, you can instead schedule it for an out-of-hours time and define a Cache Refresh Plan to cache any critical reports immediately afterwards, to minimise any user impact.
Further reading here: https://www.mssqltips.com/sqlservertip/2735/prevent-sql-server-reporting-services-slow-startup/
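For reference, RecycleTime lives in rsreportserver.config; a minimal sketch of changing it with PowerShell, assuming a default SSRS 2016 instance path (adjust the path for your install, and back the file up first):

# Path assumes a default SSRS 2016 (MSRS13.MSSQLSERVER) install; adjust for your instance.
$config = 'C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config'
[xml]$xml = Get-Content $config
# RecycleTime is in minutes; 720 is the 12-hour default, 0 disables the scheduled recycle.
$xml.Configuration.Service.RecycleTime = '0'
$xml.Save($config)
# Restart the service for the change to take effect (service name may differ for named instances).
Restart-Service 'ReportServer'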
I guess I'm possibly oversimplifying this, but SSRS was designed to recycle every 12 hours (default) for a reason. If it ain't broke, don't fix it. In my case, I wanted to control when the recycle occurred. I execute a one-line PowerShell script from a SQL Agent job at 6:50 am, then generate a subscription report at 7 am, which kick-starts SSRS so the users do not see any performance degradation.
restart-service 'ReportServer'
Leaving the SSRS config file setting at 720 minutes lets the recycle occur again at 6:50 pm. Subscription reports generate throughout the night, so if a human gets on SSRS after hours there should be no performance issue because the system is already running.
Are we possibly overthinking it?
With no warning e-mail, it seems that europe-west1 Zone B has gone down for maintenance, for 16 days until the 1st April 2014. Being that GCE is a cloud based service and that I have the automatic 'migrate on maintenance' setting enabled, I assumed that I had nothing to worry about. However, after the VM was terminated last night and I reread 'Designing Robust Systems' it seems that I was badly mistaken/misled! It will take 3 days work to rebuild a new server and I have 20 students with data locked up for two weeks in the middle of the semester. Does anybody have any suggestions?
I made the exact same mistake. I have been in contact with Google, and there is NOTHING we can do but wait for the maintenance window to end.
Also, my instance has now been removed, so I will have to re-create the instance and attach my persistent disk to it.
We have a SQL Server 2008 R2 database whose transaction logs are backed up every now and then. Today a big error occurred in the database at around 12:00... I have transaction logs up to 8:00, and then 12:00-16:00, etc.
My question is: can I somehow reverse-merge those transaction logs into the database, so that I return to the database state as of 8:00?
Or is my best option to restore an older full backup and apply all transaction logs up to 8:00?
The first option is preferable since the full backup was performed quite a while ago and I am afraid to f*ck things up restoring from there and applying trn logs. Am I wrong to be alarmed about that? Is it actually possible for anything bad to happen in that scenario (restoring the full backup and applying trn logs)?
The fact that you don’t create regular transaction log backups doesn’t affect the success of the recovery process. As long as your database is in the Full recovery model, the transactions are stored in the online transaction log and kept in it until a transaction log backup is made. If you make a transaction log backup later than usual, it only means that the online transaction log may grow and that the backup might be bigger. It will not cause any transaction history to be lost.
With a complete chain of transaction log backups back to 8 AM, you can successfully roll back the whole database to a point in time.
As for restoring the full backup and applying trn logs – nothing should go wrong, but it's always recommended to test the scenario on a test server first, and not directly in production.
To restore to a point in time:
In SSMS expand Databases
Right-click the database and select Tasks | Restore | Database
In the General tab, the available backups will be listed under Backup sets. Click Timeline
Select Specific date and time, change the Time interval to show a wider time range, and move the slider to the time you want to roll back to
You can find more detailed instructions here: How to: Restore to a Point in Time (SQL Server Management Studio)
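If you'd rather script the restore than use the SSMS dialog, here is a minimal sketch (the database name, backup file paths, and STOPAT time are placeholders; run it against a test server first, as recommended above):

# Placeholders only: adjust the database name, backup paths and the point in time before use.
# Restore the full backup with NORECOVERY, apply each log backup in the chain, and stop at the desired time on the last one.
$sql = @"
RESTORE DATABASE [MyDb] FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG [MyDb] FROM DISK = N'D:\Backups\MyDb_log_1.trn' WITH NORECOVERY;
RESTORE LOG [MyDb] FROM DISK = N'D:\Backups\MyDb_log_2.trn'
    WITH STOPAT = N'2018-01-01 08:00:00', RECOVERY;
"@
# Invoke-Sqlcmd is available from the SqlServer (or older SQLPS) module.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql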
Keep in mind that this process will roll back all changes made to the database. If you want to roll back only specific changes (e.g. only recover some deleted data, or reverse wrong updates), I suggest a third-party tool, such as ApexSQL Log.
Reverting your SQL Server database back to a specific point in time
Restore a database to a point in time
Disclaimer: I work for ApexSQL as a Support Engineer