Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which in theory is dedicated to running dynamic security scans and I don't want it doing anything else (i.e. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent dependencies (which perhaps, strictly speaking, I should be doing anyway), or configuring the agent to only run selected configurations, is there any way to avoid this? Both approaches require additional configuration on a per-build basis, i.e. on every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?

I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to run only selected configurations.
You could probably make a batch change to your build configurations' properties, since all build configuration settings/properties are stored in XML files on disk.
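For reference, the agent-requirement route would look roughly like this; the property name and value below are purely illustrative placeholders, not anything TeamCity defines for you.

# conf/buildAgent.properties on the security scan machine (restart the agent after editing)
system.scanner.type=dynamic-security

Each scan build configuration would then add an agent requirement such as "system.scanner.type equals dynamic-security" (or simply "exists"), so that only the security agent is compatible with those builds.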

In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine and assign only that machine to it, remembering to remove the machine from other pools.
Then you can assign the security project to that pool. That should solve your problem.

Related

What techniques exist for ensuring production environment variables are persisted in some form within a project?

Apologies for title phrasing; I'm sure it could be clearer.
In the Twelve-Factor App methodology, we are encouraged to store web app configuration using environment variables. When using a managed platform such as Heroku, this configuration is safely persisted as a feature of the platform, automatically made available to each deployment, and readily inspectable by developers. This feature is assumed to be stable and, as far as I know, no separate copy of production config need be maintained elsewhere.
When using a simpler unmanaged deployment process, e.g. git push-ing non-containerised code to a VPS, environment variables can still be used (e.g. a non-source-controlled .env file) but they are now effectively ephemeral, and if the VPS is destroyed through some error or incident, the project can be redeployed elsewhere but the configuration variables will need to be reconstructed from something.
My question is, in such a scenario, what is considered best practice around what that "something" should be? When joining a new project I can often cp .env.example .env to set up a typical local configuration. The values in the example file are usually safe to save in source control. However, I don't know where (if anywhere) I should be saving production configuration in order that I could configure a new production deployment of the kind described above. In the Heroku example, the configuration can always be inspected. But in the VPS example, if that running VPS is the only location where the complete production configuration exists, its unexpected disappearance presents a problem.
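For context, the example-file convention I'm describing looks something like this (the variable names and values are just placeholders):

# .env.example - committed to source control, placeholder values only
DATABASE_URL=postgres://user:password@localhost:5432/myapp_dev
SECRET_KEY=changeme
MAIL_HOST=smtp.example.com

# each developer (or deployment) copies it and fills in real values; the real .env stays out of git
cp .env.example .env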
Obviously any credentials in the config could be regenerated, but that could quickly turn into a non-trivial exercise. I'm wondering how more experienced folks deal with this issue. Thanks!

How to set up and save QEMU running options

I'm using QEMU to replace Bochs (since Bochs is no longer updated).
In Bochs, I can save the running settings to a file and reload them later. There is also a table of the running options listed at boot.
I'm wondering if I can do the same with QEMU: save running settings such as the CPU model and other options to a file, and reload it the next time I run the emulation.
I'd also like to know whether there is a complete listing of the available options, so I can get a full view of what I can set.
Thanks a lot!
For this sort of UI and management of VMs you should look at a "management layer" program that sits on top of QEMU; libvirt's "virt-manager" is one common choice here. A management layer will generally allow you to define options for a VM and save them, so you can start and stop that VM without having to specify all the command line options every time. It will also configure QEMU in a more secure and performant way than you get by default, which often requires rather long QEMU command lines that the management layer constructs for you.
QEMU itself doesn't provide this kind of facility because its philosophy is to just be the low-level tool which runs a VM, and leave the UI and persistent-VM-management to other software which can do a better job of it.
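As a rough illustration of what that looks like in practice (assuming the libvirt tools are installed; the VM name, disk path and ISO below are placeholders):

# define a persistent VM once; libvirt saves the configuration as XML
virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,size=20 \
  --cdrom /path/to/installer.iso --os-variant generic

# afterwards you can start/stop it by name without repeating any options
virsh start testvm
virsh shutdown testvm

# and inspect or edit the saved settings
virsh dumpxml testvm
virsh edit testvm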

Using berks for local development only?

I don't want to use berks in production because I don't like the idea of nodes going out to the web to pull cookbooks (I only want them to pull them from the Chef server in the normal way). But I like using Berks for local development because it resolves the dependencies for kitchen for me.
I was thinking about just adding the Berksfile and Berksfile.lock to .gitignore, but I figured I'd ask whether it is possible to accomplish this with Berks without removing it from production.
"nodes" will never go to the internet looking for cookbooks, they'll always be sourced from the chef server, so.... The question back is: how do you propose to deliver cookbooks to the chef server used to manage your production nodes?
What most people appear to do is commit the Berkshelf lock file and just run "berks apply" against the target Chef server. That will most likely fit your needs.
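In rough terms that workflow looks like this (the environment name is only an example; "berks upload" pushes the cookbooks themselves, while "berks apply" pins the locked versions on the server):

berks install            # resolve dependencies and write Berksfile.lock
berks upload             # upload the resolved cookbooks to the configured Chef server
berks apply production   # lock those cookbook versions in the "production" environment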
Personally, I like better separation between development and my production/non-production systems. I create a release tarball containing all the cookbooks that I've tested in development, using the "vendor" command in Berkshelf, and store this binary in an artifact repository like Nexus. I suspect many would consider this overkill, but it gives me an offline (no internet connection required) and traceable delivery of my configuration.
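A minimal sketch of that vendor-and-archive approach, with purely illustrative paths and version number:

berks install                      # resolve against the Berksfile
berks vendor vendored-cookbooks    # copy every resolved cookbook into ./vendored-cookbooks
tar czf cookbooks-1.2.3.tar.gz vendored-cookbooks
# push cookbooks-1.2.3.tar.gz to the artifact repository (e.g. Nexus); production later
# unpacks it and uploads with something like: knife cookbook upload -a -o vendored-cookbooks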

Changing the configuration store location for the OSGi Configuration Admin service?

Is there a way to change the configuration store location for the OSGi Configuration Admin service? I'd like the properties files to live in another bundle so that they exist in source control and in the deployment, rather than in the OSGi store.
In the end I decided to use Apache Felix File Install to update the configuration properties of a Configuration Admin ManagedService. This seems to work passably well.
It's a little kludgy, because when the files are updated the new configuration properties get pushed to the managed service without regard to whether they are correct values. This means that on the next startup the values will still be bad and will need to be reset to defaults.
It should work for now.
The Config Admin implementations cannot do this, at least not in a portable way via the specification. Instead you need a "management agent" that pushes configuration data into Config Admin via the API; it can derive that configuration data from any source it wishes.
FileInstall is a very simple example of a management agent. If it does not do exactly what you want then it is not too difficult to write your own.
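To illustrate, with File Install the configuration for a given PID is just a properties file dropped into the watched directory (the PID, keys and values below are made up; "load" happens to be the Felix default directory):

# load/com.example.scanner.cfg
target.host=localhost
target.port=8443
scan.depth=3

Saving or editing that file makes File Install push the new dictionary into Config Admin, which in turn calls the ManagedService's updated() method.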
The ManagedServices will still need to perform validation of incoming configuration data and dynamically react to new configuration data. OSGi is a dynamic platform and Config Admin is designed to allow for on-the-fly reconfiguration of a running system.

Bring Hudson slave nodes online at certain times

I am setting up a number of slaves to my Hudson master, grouped by labels. I would like to be able to have a set of nodes that run during the day and an additional set of nodes that are turned on during the evening.
Is this possible, either directly in Hudson or via a plugin or script? If so, what is your recommended solution?
There is an experimental feature to schedule when each slave should be available. It is in core, but you have to set a system property to enable it. So if you start Hudson with
java -Dhudson.scheduledRetention=true -jar hudson.war
you will get an extra configuration option on each node, allowing you to specify a schedule of when that node should be used.
Let the OS (or any other scheduler) control the start and stop of a node; Hudson only uses what's available. I'm not sure how Hudson acts if a node dies while running a job, though.
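As a rough sketch of that approach (assuming a JNLP-launched slave; the node name, paths and times are placeholders), two cron entries on the slave machine would do it:

# start the Hudson slave agent at 18:00...
0 18 * * * java -jar /opt/hudson/slave.jar -jnlpUrl http://hudson/computer/evening-node/slave-agent.jnlp >> /var/log/hudson-slave.log 2>&1
# ...and stop it again at 06:00
0 6 * * * pkill -f slave.jar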
Update: the feature that Michael Donohue described is not experimental anymore and is available for all nodes (I use the SSH node type). It works great (at least the take-only-if-needed behaviour).
Expanding on what Peter Schuetze said...
Unless the nodes are VMs that you want Hudson to manage (see the VMware plugin), the start and stop operations are out of Hudson's control. Depending on how you have your slaves set up, Hudson may just automatically connect when it sees the node is online, or you may need to make sure the slave runs something at startup.
You can use the Hudson API (generally HTTP POSTs to URLs on the Hudson master) to tell Hudson that nodes are going offline ahead of time. This will help avoid builds that get killed when the node goes down. Check out the HTML source on the node's page (http://hudson/computer/node_name) to see what the web interface does for the "mark offline" and "disconnect" operations.
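As a hedged example (the exact form action can differ between versions, so do verify it against your node's page as suggested above), taking a node offline from a script typically boils down to a POST like:

# mark the node temporarily offline before shutting it down
curl -X POST -u user:apitoken "http://hudson/computer/node_name/toggleOffline?offlineMessage=scheduled%20shutdown"

# the same toggle endpoint brings it back online afterwards
curl -X POST -u user:apitoken "http://hudson/computer/node_name/toggleOffline"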