Hudson/Jenkins - Run steps on master and slave under the same job

I have master and slave machines and one job.
This job should have two steps: one to run unit tests on the master machine,
and the other to run an executable residing on the slave machine.
Can this be done in one job? I know that I can restrict the job to run on the slave only,
but I couldn't find a way to restrict at the step level.

As far as I know, you can only bind a job to a particular node, not parts of the job.
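Since the usual workaround is to split the work into two jobs - one tied to the master, one tied to the slave - and chain them, a step in the first job can queue the second through Jenkins' remote build endpoint. Below is a minimal sketch using only the Python standard library; the host and job name are placeholders, not from the question, and a secured Jenkins instance would also require authentication.

```python
# Queue a build of a second, slave-restricted job via Jenkins' remote API.
# JENKINS_HOST and "run-on-slave" are placeholder names.
from urllib import request

def trigger_job(host, job_name):
    # POSTing to /job/<name>/build asks Jenkins to queue a build of that job.
    url = "http://%s/job/%s/build" % (host, job_name)
    return request.Request(url, data=b"", method="POST")

req = trigger_job("JENKINS_HOST", "run-on-slave")
# Send with urllib.request.urlopen(req) once the host is real.
```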

Related

How does MySQL replication work?

I have a question here about MySQL replication. I have very limited knowledge about databases, so please help me clarify this. My goal is to be able to do a deployment that avoids downtime.
Suppose I have a replicated DB (master and slave). Suppose I want to do a new release, and I need to run a migration script. My plan is to stop the replication and run the script on the slave. The migration script could be:
Based on some business logic, running multiple queries to set new values for a column in a table.
Adding a new column.
What would actually happen when I start the replication again? The slave will catch up on any changes on the master. But how would the master get the changes that were applied to the slave? If I run the same database script on the master, the migration script won't be running against the same data set.
Would it make sense, once the slave catches up with the master, to use a snapshot of the slave as the new slave, with the old slave becoming the master?
I hope this is clear. Thanks; any help is really appreciated.
You either have to set up cross-master replication, so that the slave catches up with the master and the master copies the modifications carried out on the slave, or take some downtime and run the script on the master.
1- You can change the slave-master replication to cross-master without any downtime.
2- Stop the ex-slave from replicating from the master.
3- Run your script.
4- Start the ex-slave again.
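The stop-migrate-restart part of those steps can be sketched as code. The sketch below is written against a generic `run_sql` callable so it can be backed by any MySQL client library; `STOP SLAVE` and `START SLAVE` are standard MySQL replication commands, but the example migration statement is hypothetical.

```python
# Sequence the migration on the ex-slave: pause replication, apply the
# migration, then resume so cross-master replication carries the changes back.
def migrate_on_slave(run_sql, migration_statements):
    run_sql("STOP SLAVE")              # step 2: stop replicating from the master
    for stmt in migration_statements:
        run_sql(stmt)                  # step 3: run your migration script
    run_sql("START SLAVE")             # step 4: start the ex-slave again

# Usage with a recording stub in place of a real MySQL client:
executed = []
migrate_on_slave(executed.append, ["ALTER TABLE t ADD COLUMN c INT"])
```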
I recommend that you set up a testing environment using a tool like VMware and try it out. That's what I have done.
Here is a link that explains how to set it up:
http://onlamp.com/onlamp/2006/04/20/advanced-mysql-replication.html
I can't stress enough how important it is to test before applying the changes to a real environment, so test again and again until you think that you're ready. When that happens, test one more time. And don't forget to make a backup, too.

How to force Jenkins to upload a file to another Jenkins in a project?

Hello, is it possible to work with two Jenkins instances?
The Jenkins slave does the dirty work and saves an image, then uploads it to the Jenkins master. Or the Jenkins master downloads it from the slave. What would be the easiest way to do something like this?
P.S. I don't know how to tag this topic.
Edit 1: I mean working with two computers on the same network. I tried Manage Jenkins > New Nodes, but with no success. I will report back if I succeed.
Edit 2:
Okay, I set up the slave to work via JWS and tied it to the project, then built the project to refresh the configuration. Now I have a problem with the port the slave listens on, but I think I will have to ask the admin to unlock it - I guess a port number above 60,000.
I still don't know how to use the slave to do something and upload it to the master workspace, or have the slave save data in its temp directory and force the master to grab that data from the slave. Any suggestion will be helpful.
Michal,
You don't need two instances of Jenkins to get information from the slave to the master. There are plugins for that.
===========================================================
There are two ways to add a Windows slave:
by service
by JNLP (this should work for sure)
Once you have a slave connected, install the following plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
===========================================================
Once this is done, set up two jobs. The first job runs on your slave (restrict where the job can run); make it run your Windows XP program and store the image, then archive that file as an artifact.
In the second job, use the plugin above to copy the artifact from the first job, and use the data in the file as you need it.
For more information, look at the links below:
Restricting jobs to a slave: https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
Using the copy plugin: https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Archiving an artifact: Archive the artifacts in hudson/jenkins (the first picture in the question should be useful)
===========================================================

Hudson - capturing logs from slaves

I have MASTER and SLAVE configured (ssh-slave-plugin).
I would like to display the output of the script executed on the slave under the job on the master,
but so far I only get:
Building remotely on SubAgent
Triggering SubAgent
Triggering a new build of XXXX #126
Finished: SUCCESS
and that is all, so the whole execution is hidden.
Is there any way to do that?
I am using the same master-slave configuration (SSH) on Hudson, and all the logs are visible in the Hudson interface.
There are a couple of things that you can check:
What tool are you using to build (e.g., Ant, Maven...)? Check whether the logs are being produced at all.
Check the Console Output [raw].
Under Manage Hudson > Manage Nodes > select the slave > Configure,
make sure that "Remote FS root" is set.
Check the Launch Method. I am connecting to my slaves via JNLP (I believe this could be the key).
Cheers!!
Go to Nodes, choose the node (or hover over its name) and select Build History. The logs of the jobs that ran on the slave node will be there.

Reconfigure and reboot a Hudson/Jenkins slave as part of a build

I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break the network connection to the server; otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like Jenkins to tolerate the disruption until the slave comes back online and Jenkins can reconnect to it - or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to get jobs to a slave, run, and the results pulled back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT that part of the setup of each test is to slightly reconfigure the virtual machine in some fashion: disable UAC, log on as a different user, have a different driver installed, etc. Each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method = "Launch slave via execution of command on the master") that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs - I cannot configure the slave that early, because the type of configuration change depends on the job being run, which is known only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that lets a job know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice?). The build step would handle the disconnect of the slave, ignore the usually job-failing disconnect, then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps with a timeout to fail the job if the slave can't be reconnected within a certain amount of time.
** Current Solution ** - less than optimal
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has a SSH server running on it and I use that to upload test files to the test VM, then remote execute them. Then download the results back to Jenkins for handling by the job. This solution is functional - but a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted towards a single VM; I can't easily use a pool of slaves.
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
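The same toggleOffline call can be issued from the Python standard library instead of curl. The sketch below only builds the request (JENKINS_HOST and the node name are placeholders, as in the curl commands), and the same shape works for the doDisconnect endpoint above.

```python
# Build the POST request that marks a node temporarily offline, mirroring
# the curl command above. JENKINS_HOST / NODE_TO_DISCONNECT are placeholders.
from urllib import parse, request

def toggle_offline_request(host, node, message):
    # Jenkins form submissions carry each field plus a JSON-encoded copy.
    body = parse.urlencode({
        "offlineMessage": message,
        "json": '{"offlineMessage": "%s"}' % message,
        "Submit": "Mark this node temporarily offline",
    })
    url = "http://%s/computer/%s/toggleOffline" % (host, node)
    return request.Request(url, data=body.encode("ascii"), method="POST")

req = toggle_offline_request("JENKINS_HOST", "NODE_TO_DISCONNECT", "back_in_a_moment")
# The encoded body matches the -d payload of the curl command above;
# send it with urllib.request.urlopen(req) against a real host.
```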
The above may not apply to you, since it sounds like you want to do everything from one Jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
Very easy: you create a master job that runs on the master, and from the master job you call the client job as a build step (it's a new kind of build step, and I love it). You need to make sure the master job waits for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up, and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you can chain the jobs or use the master strategy mentioned above.

Multiple slaves on a single machine with Hudson

Can I run multiple Hudson slaves on a single machine - I mean real slaves, each with only one build process?
My problem is that I have a slave with 3 build processes, using locks-and-latches (v0.4) to run three different kinds of build jobs. But sometimes more than one build job of the same kind runs at the same time, or a job blocks a build process on the slave and doesn't run.
Thank you in advance for your insights.
Yes, Hudson should be capable of running multiple slaves on a single machine. I do a limited form of this with my builds so that each job runs on a separate hard drive. In my case, this means I have a master, with a slave that runs on the same machine as the master. Having 3 slaves, each with 1 executor, could be done instead of one slave with 3 executors, but it shouldn't impact locking, so I only see a use for that if you have different physical drives and want more throughput.
I believe locks in both Hudson (i.e., "this job is running") and locks-and-latches ("this lock is in use") span all slaves and the master for a given Hudson setup. So if slave 1 is running a job that holds lock A, slave 2 won't be able to start a job that holds lock A either. It isn't entirely clear to me whether this is the behavior you're seeking.
There is one important note, though:
Supposedly there is currently a bug in the Hudson core that sometimes allows multiple jobs to start with the same lock when using the locks-and-latches plugin. I am not an expert on the internals of Hudson locking, nor on the locks-and-latches plugin, but if you want a more in-depth explanation, there is a conversation that sounds related on the Hudson users mailing list (users#hudson.dev.java.net).
Here is the archived conversation.
The author of the locks-and-latches plugin is usually pretty responsive to questions.