Bring Hudson slave nodes online at certain times

I am setting up a number of slaves to my Hudson master, grouped by labels. I would like to be able to have a set of nodes that run during the day and an additional set of nodes that are turned on during the evening.
Is this possible, either directly in Hudson or via a plugin or script? If so, what is your recommended solution?

There is an experimental feature to schedule when each slave should be available. It is in core, but you have to set a system property to enable it. So if you start Hudson with
java -Dhudson.scheduledRetention=true -jar hudson.war
you will get an extra configuration option on each node, allowing you to specify a schedule for when that node should be used.

Let the OS (or any other scheduler) control the start and stop of a node. Hudson only uses what's available. I'm not sure how Hudson behaves if a node dies while running a job.
Update: The feature that Michael Donohue mentioned is no longer experimental and is available for all nodes (I use the SSH slave). It works great (at least the take-online-only-when-needed availability setting).

Expanding on what Peter Schuetze said...
Unless the nodes are VMs that you want Hudson to manage (see the VMware plugin), the start and stop operations are out of Hudson's control. Depending on how you have your slaves set up, Hudson may just automatically connect when it sees the node is online, or you may need to make sure the slave runs something at startup.
You can use the Hudson API (generally HTTP POSTs to URLs on the Hudson master) to tell Hudson that nodes are going offline ahead of time. This will help avoid builds that get killed when the node goes down. Check out the HTML source on the node's page (http://hudson/computer/node_name) to see what the web interface does for the "mark offline" and "disconnect" operations.
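For example, a minimal sketch of taking a node offline ahead of a planned shutdown from the command line (the URL, node name, and offline message are placeholders; check the node page's HTML source, as suggested above, to confirm the exact form your Hudson version expects):

curl -X POST "http://hudson/computer/node_name/toggleOffline?offlineMessage=scheduled+shutdown"

If your Hudson requires authentication, add -u user:api_token to the call.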

Related

How to set up and save QEMU running options

I'm using QEMU to replace Bochs (since Bochs is no longer updated).
In Bochs, I can save the running settings to a file and reload them later. Furthermore, a table of the running options is listed at boot.
I'm wondering if I can do the same with QEMU: save running settings such as the CPU model and other options into a file and reload them the next time I run the emulation.
I'd also like to know whether there is a complete table of running options somewhere, so I can get a full view of which options I can set.
Thanks a lot!
For this sort of UI and management of VMs you should look at a "management layer" program that sits on top of QEMU. libvirt's "virt-manager" is one common choice here. A management layer will generally allow you to define options for a VM and save them, so you can start and stop that VM without having to specify all the command-line options every time. It will also configure QEMU in a more secure and performant way than you get by default, something that would otherwise require rather long QEMU command lines.
QEMU itself doesn't provide this kind of facility because its philosophy is to just be the low-level tool which runs a VM, and leave the UI and persistent-VM-management to other software which can do a better job of it.
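For illustration, a rough sketch of the libvirt route (the VM name, ISO path, and sizes are placeholders): define the VM once with virt-install, then start and stop it by name, with the configuration persisted by libvirt:

virt-install --name testvm --memory 2048 --vcpus 2 --disk size=20 --cdrom /path/to/install.iso
virsh start testvm
virsh shutdown testvm
virsh dumpxml testvm     # shows the saved configuration, including how it maps to QEMU options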

Gnome 3 automatic execution of a script that needs network

My elderly father is using Ubuntu GNOME. He has no static IP address. In order to perform remote administration, I need to know his IP. I was using a free DynDNS account (configured in the ADSL modem), but this will stop working in a couple of days.
I would like to run a script each time he logs in to publish his IP on my website. I have tried running a script at boot, but the network is not yet available at that point. It seems that it is GNOME 3 that starts the network, but I do not know much about GNOME 3.
What should I do to have my script run automatically as soon as the network is available?
One possible, if inelegant, solution is to put your script in his crontab to run every X minutes :)
Looking into my /etc/NetworkManager/, it looks like there is a dispatcher.d folder that I think will do what you want. Just experiment with a bash/perl/python (or whatever) script in there and set its permissions appropriately. You can find the UUID in the system-connections/ folder. More information is available in man NetworkManager.
EDIT: Look what I found: https://askubuntu.com/questions/13963/call-script-after-connecting-to-a-wireless-network. Seems like this is exactly what you want.
The easiest way is to use another dynamic DNS service. I used to run my own. You could also put a curl or wget command in cron, or create a systemd service that calls that command periodically. As the target you would use a machine of yours running a web server, where you can see the IP in your logs.
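For example, a minimal sketch of the cron variant (the URL and the 15-minute interval are placeholders for illustration):

*/15 * * * * curl -s http://example.com/ >/dev/null 2>&1

Every request that reaches your web server leaves the client's public IP in the access log, which is all you need here.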
It is not GNOME that connects the network; it is a system service called NetworkManager. It tries to connect at boot if possible. In some cases it waits for a wireless signal, in other cases it waits for a user password. I recently verified that in Fedora, NetworkManager properly implements systemd's network-online.target, but this may have yet to be fixed in other distributions; see the upstream bug report:
https://bugzilla.gnome.org/show_bug.cgi?id=728965
If you want to run a system service just after boot, you need to use:
[Unit]
...
Wants=network-online.target
After=network-online.target
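For illustration, a minimal sketch of a complete one-shot unit (the script path /usr/local/bin/publish-ip.sh is a hypothetical placeholder for your notification script):

[Unit]
Description=Publish the current public IP
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/publish-ip.sh

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable publish-ip.service so it runs at every boot.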
You could also just run a script that calls nm-online at the beginning to wait for network connectivity, provided you can expect connectivity to come up in a reasonable time; otherwise nm-online times out. Such a script can be run from any environment, including a user session.
And, as noted already, you can put a script into /etc/NetworkManager/dispatcher.d that will be called on any network configuration change; such a script can then filter connection 'up' events and start the notification script.
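A minimal sketch of such a dispatcher script (publish-ip.sh is the same hypothetical notification script as above; NetworkManager passes the interface name as $1 and the event as $2):

#!/bin/bash
# /etc/NetworkManager/dispatcher.d/50-publish-ip
# Run the notification script only when a connection comes up.
if [ "$2" = "up" ]; then
    /usr/local/bin/publish-ip.sh
fi

Remember to make the script executable and owned by root, otherwise NetworkManager will ignore it.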

Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which, in theory, is dedicated to running dynamic security scans and I don't want it doing anything else (e.g. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent dependencies (which perhaps, strictly speaking, I should be doing anyway) or configuring the agent to only run selected configurations, is there any way to avoid this? Both of these approaches require additional configuration on a per-build basis, for every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?
I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run any other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to run only selected configurations.
You could probably try to make a batch change to your build configuration properties, since all build configuration settings/properties are stored in XML files on disk.
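If you go the agent requirements route, a rough sketch of the usual pattern (the property name is just an illustration): declare a custom property in the agent's conf/buildAgent.properties,

system.agent.purpose=security-scans

then add an agent requirement on each build configuration that should run there (system.agent.purpose equals security-scans). As you note, this still has to be applied per build configuration.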
In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine, and only assign the one machine to that pool, remembering to remove it from other pools.
Then you can assign the security project to that pool. That should solve your problem.

Hudson slaves, how to access workspace

How do I configure a system with one master and multiple slaves for building normal C code with gmake? How can slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?
http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds is there, but I cannot understand how workspace sharing is handled.
Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job, and if it was done on a slave -> rsync the workspace back to the master?
Any proof-of-concept or real-life solutions?
When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.
It's surely a late answer, but may help others.
I'm currently using the "Copy Artifact plug-in" with great results.
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
(https://stackoverflow.com/a/4135171/2040743)
Just one way of doing things, others exist.
Workspaces are actually not shared when builds are distributed to multiple machines; they exist as separate directories on each of the machines. To coordinate items, anything that needs to be distributed from one workspace to another is copied into a central repository via SCP.
This means that sometimes I have a task which needs to wait on the items landing in the central repository. To fix this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items, and it errors out if the items aren't available after five minutes.
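A minimal sketch of that kind of polling script (the host, path, and artifact name are placeholders, and $1 is assumed to be the build number mentioned below):

#!/bin/sh
# Poll the central repository for the expected artifact; give up after 5 minutes.
REPO_HOST=repo.example.com
REMOTE_FILE=/srv/artifacts/build-$1/app.tar.gz
for i in $(seq 1 30); do
    if ssh "$REPO_HOST" test -f "$REMOTE_FILE"; then
        scp "$REPO_HOST:$REMOTE_FILE" .
        exit 0
    fi
    sleep 10
done
echo "Artifact $REMOTE_FILE not found after 5 minutes" >&2
exit 1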
The only downside to this is that you need to pass around a parameter (the build number) to keep the builds on the same page and prevent one build from picking up an artifact built by a previous run. That, and you have to set up a lot of SSH keys to avoid having to enter a password when running the SSH scripts.
Like I said, not the ideal solution, but I find it is more stable than the ssh artifact grabbing code for my particular release of Hudson (and my set of SSH servers).
One more downside: the SSH servers on most Linux machines seem to really lack performance. A solution like mine tends to swamp your SSH server with many connections coming in at about the same time. If you find the same happening to you, you can add timer delays (an easy but imperfect solution) or rebuild the SSH server with the high-performance patches. One day I hope those patches make their way into the SSH server codebase, provided they don't negatively impact SSH server security.

How to setup Hudson to do remote deployment of WAR to Tomcat?

I have a bit of experience with running a simple build upon every SVN commit (it is a piece of cake).
Regarding deployment of the WAR to a remote production server via Hudson, there seem to be a couple of alternatives:
use the 'deploy' target in the app's build.xml
use the deploy plugin of Hudson, which I fail to get working :(
What is the simplest way to do a remote deployment to Tomcat?
Are there any examples available?
And what about release management? How do you tag your releases in your SCM?
I use Maven for builds.
Since I am working with WAS, I use the WAS Builder Plugin for deployment. However, I could also just fire off a batch/shell script for deployment. My current approach is to use slaves that run on the target machines where I want to deploy and to assign my deployment jobs to them.
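For Tomcat specifically, such a shell script can be as small as one HTTP call to the manager application (a sketch assuming the manager text API of newer Tomcat versions is enabled and a manager-script user exists; host, credentials, and paths are placeholders; older Tomcat versions use /manager/deploy instead):

curl -u deployer:secret -T target/myapp.war "http://tomcat.example.com:8080/manager/text/deploy?path=/myapp&update=true"

The response body starts with OK or FAIL, so the build step can check it and fail the build accordingly.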
For tagging you can use whatever you prefer. There are 3 basic options:
let Maven do it
run a command-line command
use a Hudson plugin
We use Subversion, so the Subversion Tagging Plugin would be a natural match; however, we don't use Hudson for tagging right now. There are several plugins out there for different SCMs. I usually prefer a plugin over the command line; one reason is that company policy forbids storing passwords unencrypted, and a plugin is usually fairly easy to configure.
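If you let Maven do it, a rough sketch of the usual route (assuming the maven-release-plugin and an SCM connection configured in the POM):

mvn release:prepare release:perform

release:prepare creates the tag in your SCM and bumps the version numbers; release:perform then builds and deploys from that tag.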
Our strategy has been to combine the Promoted Builds plugin and the deploy plugin in Hudson. Then, when something is "promoted," we send an email and do the deployment. The deploy plugin is Cargo-based and works with a variety of web and app servers.
Maybe post a little info about why the deploy plugin isn't working for you?