I use the LGPO tool to apply changes to Group Policy, but for the changes to take effect the server needs to be restarted. Is there any possibility of applying the changes without a restart?
We have recently come across an issue where an engineer made a disk change to a boot disk in the console, and it has caused our Cloud Build trigger to fail because it detects the size difference between what is in code and what is in the console (100 GB vs 128 GB).
The issue we have is that in Terraform we have deletion protection set to true, so it won't allow us to make any changes until this is set to false.
However, if I change this to false and push that change, it will delete and recreate the boot disk, as it will see the 100 GB in Terraform as less than the 128 GB in the console.
Is it possible to increase the disk size and disable the deletion protection in the same code push?
For reference: no, you cannot make any other amendment to an instance at the same time as the flag change if deletion protection is already enabled. Terraform looks at that status first and, if it is enabled, will ignore all other changes.
I'd do a terraform apply -refresh-only first, in order to sync the size from the manual change into the Terraform state, and then disable the deletion protection set on the resource.
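A minimal sketch of that sequence, assuming a google_compute_instance resource (the resource name and numbers are illustrative). First run terraform apply -refresh-only to pull the console change into state, then push a config that matches reality with the flag turned off:

    resource "google_compute_instance" "app" {
      # ... other arguments unchanged ...

      deletion_protection = false   # was true

      boot_disk {
        initialize_params {
          size = 128                # match the size already set in the console
        }
      }
    }

With the state refreshed and the code agreeing with the real 128 GB disk, the plan should only show the flag change rather than a destroy/recreate.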
I have to set some instance properties to work with consistent region projects. I set those properties, but they were only reflected after I restarted the instance, which of course resulted in all the running jobs being cancelled.
I have hundreds of jobs running which I cannot disturb, but I want to set checkpointRepositoryConfiguration and the other instance properties required to work with consistent regions.
Is there a way to set these instance properties without restarting the instance? And if I do have to restart the instance, is there a way to prevent the jobs running on it from being cancelled?
Sorry, we do not support dynamically updating these properties. We have a task open for this enhancement, but it is not supported so far.
I know that the IP changes over time, but is there a way to force OpenShift to change it every X hours without restarting an application (if not, I will consider a restart)? For example, some command, cartridge, or cron script? Does this option become available with an upgrade to the Bronze plan?
If there is absolutely no way to do that, can someone recommend a platform similar to OpenShift which allows changing the IP on the fly?
With OpenShift Online, the applications are sometimes moved to a different node. However, users are unable to initiate or "force" moving to another node, thus changing the application's IP address (even with Bronze or Silver plans).
The other part of your question does not seem suitable for Stack Overflow.
I'm running a complex server setup for a de facto high-availability service. So far it takes me about two days to set everything up, so I would like to automate the provisioning.
However, I make quite a lot of manual changes to the (running) servers. A typical example is changing a firewall configuration to cope with various hacking attempts, packet floods, etc. Being able to work on active nodes quickly is important. Also, the server maintains a lot of active TCP connections, and losing those for a simple config change is out of the question.
I don't understand whether either Chef or Puppet is designed to deal with this. Once I change some system config, I would like to store it somewhere and use it while the next instance is being provisioned. Should I stick with one of those tools or choose a different one?
Hand-made changes and provisioning don't go hand in hand. They don't even drink tea together.
At work we use Puppet to manage the whole architecture, and like you we need to make hand-made changes in a hurry due to performance bottlenecks, attacks, etc.
What we do first is make sure Puppet is able to set up every single part of the architecture, ready to be delivered without any specific tuning.
Then, when we need to make hand-made changes in a hurry: as long as you don't mess with files managed by Puppet, there's no risk; if it's a Puppet-managed file we need to change, we just stop the Puppet agent and do whatever we need.
After the hurry has ended, we proceed as follows:
Should these changes be applied to all servers with the same symptoms?
If so, you can develop what Puppet calls 'facts': code that runs on the agent on each run and saves its results into variables available in all your Puppet modules. For example, if you raised the conntrack max value because a firewall was not able to deal with all its connections, you could easily (ten lines of code) have a variable in Puppet with the current conntrack count on each run, and tell Puppet to set a max value relative to current usage. All the other servers will then benefit from this tuning, and you will likely never have to deal with conntrack issues again (as long as you keep running Puppet at a short interval, which is the default).
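A minimal sketch of that idea, assuming a hypothetical custom fact named conntrack_count (supplied by a few lines of Ruby or an external fact reading /proc/sys/net/netfilter/nf_conntrack_count):

    # $facts['conntrack_count'] is a hypothetical custom fact reporting
    # the number of connections currently tracked on this node.
    $wanted_max = $facts['conntrack_count'] * 4

    # Raise the kernel limit only when it is below the computed target.
    exec { 'raise-conntrack-max':
      command => "/sbin/sysctl -w net.netfilter.nf_conntrack_max=${wanted_max}",
      unless  => "/bin/sh -c 'test \$(/sbin/sysctl -n net.netfilter.nf_conntrack_max) -ge ${wanted_max}'",
    }

The multiplier and paths are illustrative; the point is that the decision logic ("size the limit from observed usage") lives in Puppet instead of in someone's memory.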
Should these changes always be applied by hand in given emergencies?
If the configuration is managed by Puppet, find a way to make that configuration include another file, and tell Puppet to ignore that file. This is the easiest way; however, it's not always possible (e.g. /etc/network/interfaces does not support includes). If it's not possible, then you will have to stop the Puppet agent during emergencies in order to change Puppet-managed files without the risk of your edits being reverted on the next Puppet run.
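A minimal sketch of the include trick, with hypothetical paths (Puppet owns the main config, which pulls in a local override file whose contents Puppet never rewrites):

    # Puppet fully manages the main config; its content ends with a line
    # such as:  include /etc/myapp/local.conf
    file { '/etc/myapp/myapp.conf':
      ensure  => file,
      content => template('myapp/myapp.conf.erb'),
    }

    # Make sure the override file exists, but never replace its contents,
    # so hand edits made during an emergency survive every Puppet run.
    file { '/etc/myapp/local.conf':
      ensure  => file,
      replace => false,
    }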
Are these changes only for this host, so that no other host will ever need them?
Add them to Puppet anyway! Place a sweet if $fqdn == 'my.very.specific.host' and put whatever you need inside. Even for a single case it's always beneficial (if time-consuming) to migrate every change you make to a server into Puppet, as it will allow you to do a full restore of the server setup if for some reason the server crashes into an unrecoverable state (e.g. hardware issues).
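For instance, a hypothetical host-specific override kept in the shared manifests might look like this:

    # Only this one box gets the extra tuning file.
    if $fqdn == 'my.very.specific.host' {
      file { '/etc/myapp/tuning.conf':
        ensure  => file,
        content => "workers = 32\n",   # illustrative setting
      }
    }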
In summary:
For me, the trick to dealing with hand-made changes is putting a lot of effort into the reasoning behind each change, and moving that logic into Puppet once the emergency is over. If you felt something was wrong because all the slots of a given piece of software were in use while free memory was still available on the server, so that allowing more slots to run was a reasonable way to deal with the traffic peak, then spend some time moving that logic into Puppet. Very carefully, of course, and as time-consuming as the number of different scenarios in your architecture you want to test it against, but in the end it's very, VERY rewarding.
I would like to complement Valor's excellent answer.
Puppet is a tool to enforce a configuration, so you must think of it this way:
on the machine I run Puppet on...
I ask the Puppet client...
to ensure that the config of the current machine...
is as specified in the Puppet config...
which is taken from a Puppet server, or directly from a bunch of Puppet files (which is easier).
So, to answer one of your questions: Puppet does not itself require a machine or service reboot. But if a change to a config file you manage with Puppet requires a restart of the corresponding service/daemon/app, then there is no way to avoid it. There are mechanisms in Puppet to declare that a service must be relaunched when its config changes; of course, Puppet will not relaunch the service if it sees that nothing changed.
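A minimal sketch of that mechanism (the paths and service name are just an example):

    # If Puppet changes the managed file, it relaunches the service;
    # if the file is already in the desired state, nothing is restarted.
    file { '/etc/ssh/sshd_config':
      ensure => file,
      source => 'puppet:///modules/ssh/sshd_config',
      notify => Service['sshd'],
    }

    service { 'sshd':
      ensure => running,
      enable => true,
    }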
Valor is assuming you use Puppet in the client/server way, with (for example) Puppet clients polling a Puppet server for config every hour. But it is also possible to move your Puppet files from machine to machine, for example with git, and launch Puppet manually (there is a sketch of this after the list below). This way is:
far simpler than the client/server technique (authentication is a headache)
only enforces config changes when you explicitly ask for it, thus avoiding any overwrite of your hand-made changes
This is obviously not the best way to use Puppet if you manage a lot of machines, but it may be a good start or a good transition.
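A minimal sketch of that masterless workflow (repository URL and layout are illustrative):

    # On the target machine: fetch the latest manifests...
    git clone https://example.com/ops/puppet.git && cd puppet   # or: git pull

    # ...preview what would change, without touching anything...
    sudo puppet apply --noop --modulepath=./modules manifests/site.pp

    # ...and apply for real once the preview looks right.
    sudo puppet apply --modulepath=./modules manifests/site.pp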
Also, Puppet is quite hard to learn to an interesting level. It took me two weeks to be able to automatically install an AWS server from scratch. I don't regret it, but you may want to know that if you have to convince a boss to allocate you the time.
We want to use Change Tracking to implement a two-way sync between a SQL Server 2008 Enterprise/Standard instance and an Express 2008 instance.
When we read the remote changes and then make the adjustments on the local server, how can we keep those statements from being change-tracked? I foresee endless loops: one server tracks a change, then the other makes the change and also tracks it, then the first server makes the change again, and so on.
Disabling change tracking on that table while performing the sync operations could potentially miss changes from other processes on that table, so I don't think that's the answer.
Is there a way to disable change tracking on a per-statement or per-transaction basis?
EDIT: I discovered the WITH CHANGE_TRACKING_CONTEXT clause, so I might be able to use that to mark changes performed by the sync code, so that the sync code itself won't pick those changes up and apply them again.
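A minimal sketch of that approach (table, column, and context names are illustrative): the sync code stamps its own writes with a context value, and the change reader skips rows whose SYS_CHANGE_CONTEXT matches it.

    -- Writer side: tag every change made by the sync process.
    DECLARE @sync_context varbinary(128) = CAST('sync-agent' AS varbinary(128));

    WITH CHANGE_TRACKING_CONTEXT (@sync_context)
    UPDATE dbo.Customers
    SET    Name = 'New Name'
    WHERE  CustomerId = 42;

    -- Reader side: ignore changes the sync process itself wrote.
    DECLARE @last_sync_version bigint = 0;  -- illustrative; normally persisted between syncs

    SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION
    FROM   CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct
    WHERE  ct.SYS_CHANGE_CONTEXT IS NULL
       OR  ct.SYS_CHANGE_CONTEXT <> @sync_context;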
Change Tracking isn't really meant to be used for bi-directional replication. You should figure out some way to determine the instance where a change was actually made; then your "replication" code can ensure that changed rows applied on the replicated server do not flow back to the original server again.