Hudson - capturing logs from slaves

I have a MASTER and a SLAVE configured (ssh-slave-plugin).
I would like to display the output of the script executed on the slave under the job on the master,
but so far all I get is
Building remotely on SubAgent
Triggering SubAgent
Triggering a new build of XXXX #126
Finished: SUCCESS
and that is all. So the whole execution is hidden.
Is there any way to do that?

I am using the same master-slave configuration (ssh) on Hudson and all the logs are visible in the Hudson interface.
There might be a couple of things that you can check:
What tool are you using to build (e.g., ANT, MAVEN...)? Check whether the execution is producing logs at all
Check the Console Output [raw] (see the command after this list)
Manage Hudson > Manage Nodes > Select the Slave > Configure
Make sure that "Remote FS root" is set.
Check the Launch Method. I am connecting to my slaves via JNLP (I believe this could be the key)
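To double-check from the command line what the master actually recorded, the raw console log of the triggered build is also reachable over HTTP; something like this should work, using the build number from your question (HUDSON_HOST is a placeholder for your own server):
curl http://HUDSON_HOST/job/XXXX/126/consoleText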
Cheers!!

Go to Nodes, choose the node (or hover on the name) and select Build History. The logs of the job that ran on the slave node will be there.

Google Compute Engine, instance dead? How to reach it?

I have a small instance running in GCE. I had some trouble with MongoDB, so after a few tries I decided to reset the instance. But... it didn't seem to come back online, so I stopped the instance and restarted it.
It is a Bitnami MEAN stack which starts Apache and other services at startup.
But... I can't reach the instance! No SCP, no SSH, no webservice running. When I try to connect via SSH (in GCE) it times out; I can't make a connection on port 22. The information panel says 'The instance is booting up and sshd is not running yet', which is possible of course... but I can't reach the instance in any manner, not even after an hour's wait :) Not sure what's happening if I can't connect to it somehow :(
There is some activity in the console... some CPU usage, mostly 0%, some incoming traffic but no outgoing...
I hope someone can give me a hint here!
Update 1
After the helpful tip from Serhii... I found this in the logs...
Booting from Hard Disk 0...
[ 0.872447] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inodes that were part of a corrupted orphan linked list found.
/dev/sda1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda1 requires a manual fsck
Update 2...
So, I need to fsck the drive...
Created a snapshot, made a new disk from that snapshot, and added the new disk as an extra disk to another instance. Now that instance won't boot, with the same problem... removing the extra disk fixed it again. So adding the disk makes it crash even though it isn't the boot disk?
First, have a look at Compute Engine -> VM instances -> NAME_OF_YOUR_VM -> Logs -> Serial port 1 (console) and try to find errors and warnings that could be connected to a lack of free space or to SSH. It would be helpful if you updated your post with this information. If your instance ran out of free space, follow these instructions.
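If you prefer the command line, the same serial console output can be fetched with the gcloud CLI (the instance name and zone below are placeholders for your own values):
gcloud compute instances get-serial-port-output NAME_OF_YOUR_VM --zone=YOUR_ZONE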
You can try to connect to your VM via Serial console by following this guide, but keep in mind that:
The interactive serial console does not support IP-based access
restrictions such as IP whitelists. If you enable the interactive
serial console on an instance, clients can attempt to connect to that
instance from any IP address.
More details can be found in the documentation.
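Roughly, enabling and opening the interactive serial console from the gcloud CLI looks like this (the instance name and zone are placeholders):
gcloud compute instances add-metadata NAME_OF_YOUR_VM --zone=YOUR_ZONE --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port NAME_OF_YOUR_VM --zone=YOUR_ZONE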
Have a look at the Troubleshooting SSH guide and Known issues for SSH in browser. In addition, Google provides a troubleshooting script for Compute Engine to identify issues with SSH login/accessibility of your Linux based instance.
If you still have a problem, try to use your disk on a new instance.
EDIT It looks like your test VM is trying to boot from the disk that you created from the snapshot. Try to follow this guide.
If you still have a problem, you can try to recreate the boot disk from a snapshot to resize it.
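For reference, a rough sketch of that rescue workflow with the gcloud CLI and fsck; the disk and instance names, the zone, and the device path /dev/sdb1 are assumptions for your setup:
gcloud compute disks create recovery-disk --source-snapshot=YOUR_SNAPSHOT --zone=YOUR_ZONE
gcloud compute instances attach-disk rescue-vm --disk=recovery-disk --zone=YOUR_ZONE
Then, from an SSH session inside rescue-vm, run the manual check the boot log asked for:
sudo fsck -y /dev/sdb1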

Execute command when replicating database on publisher and subscriber?

I have two MSSQL 2012 databases.
I have snapshot replication configured where the first server is a publisher and distributor, and the other is a subscriber.
I would like to be able to execute a command on the publisher just before the replication job occurs, and then another command on the subscriber just after the replication finishes.
I believe this should be a pull snapshot replication, so that the agent is located on the subscriber server.
Is this even possible?
EDIT: Due to the nature of snapshot replication, I switched to using transactional replication, thus removing my ability to execute scripts on replication start and stop.
I never did find a way to execute commands successfully while data is replicating, since I switched to transactional replication. The job handling this replication type starts and then just keeps running, unlike snapshot replication, where the job starts, replicates data, and stops.
Instead I set up the jobs I needed executed using the Task Scheduler. My services transfer files to and from a webserver through the database, and will only transfer files if they are not already present.
Using the Task Scheduler is working pretty well, and it is MUCH simpler and more stable than having something execute a SQL script, which would then execute a PowerShell remoting command to connect to the server and execute the service.
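For illustration, a scheduled task like that can be created from the command line; the task name, script path, and 15-minute schedule below are made up for the example:
schtasks /Create /TN "TransferFiles" /TR "powershell.exe -File C:\scripts\transfer.ps1" /SC MINUTE /MO 15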
I just thought I would add this in case anyone else stumbles on a similar problem :)

Hudson/Jenkins - Run steps in master and slave under the same job

I have a master and slave machines and one job.
This job should have two steps: one to run unit tests on the master machine,
and the other to run some executable lying on the slave machine.
Can this be done under one job? I know that I can restrict the job to run on the slave only,
but I couldn't find a way to restrict it at the step level.
As far as I know you can only bind a job to a particular node, but not parts of the job.

How to force Jenkins to upload a file from another Jenkins into a project?

Hello, is it possible to work with two Jenkins instances?
The Jenkins slave does the dirty job and saves an image, then uploads it to the Jenkins master. Or the Jenkins master downloads it from the slave. What would be the easiest way to do something like this?
P.S. I don't know how to tag this topic/help request.
Edit 1: I mean working with two computers on the same network. I tried something with Manage Jenkins > New Nodes but with no success. I will report back if I succeed.
Edit 2
Okay, I set up the slave to work via JWS and tied it to the project, then built the project to refresh the configuration. Now I have a problem with the port the slave listens on, but I think I will have to ask the admin to unblock it. I guess the port number to unblock is above 60000.
I still don't know how to use the slave to do something and upload it to the master workspace, or how to use the slave to save data in its temp directory and force the master to grab this data from the slave. Any suggestion will be helpful.
Michal,
You don't need two instances of Jenkins to get information from the slave to the master. There are plugins for that.
===========================================================
There are two ways to add a Windows slave:
by service
by JNLP (this should work for sure)
Once you have a slave connected, install the following plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
===========================================================
Once this is done, set up two jobs. The first job runs on your slave (restrict where this job can run); make it run your Windows XP program and store the image. Archive this file as an artifact.
On the second job, use the plugin above to copy the artifact from the first job, and use the data in the file as you need it.
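As a side note, once the first job has archived the file, it is also reachable over plain HTTP from outside Jenkins; the host, job name, and file name here are placeholders:
curl -O http://JENKINS_HOST/job/FIRST_JOB/lastSuccessfulBuild/artifact/image.png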
For more information, look at the links below:
Restricting jobs to a slave: https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
Using the copy plugin: https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Archiving an artifact: Archive the artifacts in hudson/jenkins (the first picture in the question should be useful)
===========================================================

Reconfigure and reboot a Hudson/Jenkins slave as part of a build

I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break the network connection to the server, otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like Jenkins to accept the disruption until the slave comes back online and Jenkins can reconnect to it - or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to send jobs to a slave, run them, and pull the results back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT: part of the setup of each test is to slightly reconfigure the virtual machine in some fashion. Disable UAC, log on as a different user, have a different driver installed, etc. - each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method=Launch slave via execution of command on the master) that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs - I cannot configure the slave that early, because the type of configuration change depends on the job being run, which occurs only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that allows a job to know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice). The build step would handle the disconnect of the slave, ignore the usually job-failing disconnect, then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps a timeout to fail the job if the slave isn't reconnectable within a certain amount of time.
Current Solution - less than optimal
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has a SSH server running on it and I use that to upload test files to the test VM, then remote execute them. Then download the results back to Jenkins for handling by the job. This solution is functional - but a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted towards a single VM; I can't easily use a pool of slaves.
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down, whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
The above may not apply to you since it sounds like you want to do everything from one jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
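For example, a script on the master could poll the node's status through that same API to find out when the master considers the agent online again; the JSON response includes an "offline" field (JENKINS_HOST and the node name are placeholders):
curl http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/api/json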
Very easy. You create a master job that runs on the master; from the master job you call the client job as a build step (it's a new kind of build step and I love it). You need to check the option that makes the master job wait for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up, and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you could chain the jobs or use the master strategy mentioned above.