Is there any way to change the "master" node name?
In my experience, the Master node behaves just like any other node. You can click on the node, then change the "name" field to whatever you like.
We connect via JNLP. So, we also need to make sure that the JNLP service and Hudson are looking for the same name.
I ended up doing the following:
Set the number of executors to 0 in the "System Configuration"
Create a slave node on the master server and assign a proper name to it
This solved my problem of having an unwanted "master" name in the list of slave nodes. The cost is having two Java processes running on the master server: one for Hudson itself and another for the slave node.
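For reference, that extra process is just an ordinary JNLP slave launch pointed at the new node. A minimal sketch, assuming a node named build-node-1 and a reachable HUDSON_HOST (newer Jenkins releases ship agent.jar instead of slave.jar):
# Start the "slave" process on the master machine itself, connecting over JNLP.
java -jar slave.jar -jnlpUrl http://HUDSON_HOST/computer/build-node-1/slave-agent.jnlp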
We are running a "single master and multiple nodes" cluster (as described in https://docs.okd.io/latest/install/index.html#multi-masters-using-native-ha-colocated). Let's call our servers oomaster1, oonode1 and oonode2.
I would like to add more masters one day, and I think the first step would be to add a VIP oomaster, pointing only to oomaster1 for now, and then rename the cluster (currently oomaster1) to oomaster.
What would be the best way to proceed? I could just stop all OKD-related services, replace oomaster1 (and its address) with oomaster (and its address) in every file in /etc/origin and /etc/etcd, and then restart the services. But I suppose it is more complex than that...
Thanks in advance for any advice.
I think you should replace the existing cluster (master and nodes) with a new cluster configured with the new hostname, because the master and nodes are provisioned with various certificates based on the master hostname, used for encrypted communication and authentication. I have no idea whether or not the existing master hostname can be changed in place.
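A quick way to see that dependency for yourself (a hedged check; the path below is typical for an OKD/Origin 3.x install and may differ on yours) is to look at which hostnames the master certificate was issued for:
# List the Subject Alternative Names baked into the master serving certificate.
openssl x509 -in /etc/origin/master/master.server.crt -noout -text | grep -A1 "Subject Alternative Name"
If oomaster is not among them, components will refuse to talk to the renamed master until the certificates are regenerated.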
From the docs: http://docs.ejabberd.im/admin/guide/clustering/#clustering-setup
Adding a node into the cluster is done by starting a new ejabberd node within the same network, and running a command from a cluster node. On second node for example, as ejabberd is already started, run the following command as the ejabberd daemon user, using the ejabberdctl script: ejabberdctl join_cluster 'ejabberd@first'
How does this translate to deployment in the cloud, where instances can (hopefully) be shut down/restarted from a consistent image and sit behind a load balancer?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is setup in DNS to point to the cloud load balancer)?
Or must the first instance not attempt to join a cluster, with all subsequent ones using the IP address of that initial instance instead of "first" (and if this is the case, does it get wacky if that initial instance goes down)?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is setup in DNS to point to the cloud load balancer)?
No, the node name parameter is the node name of an Erlang ejabberd node. It should preferably be on the internal Amazon network, not the public one, so it should not rely on a central DNS. It must be the name of an Erlang node, as the newly started node will connect to the existing node to share the same "cluster schema" and do an initial sync of the data.
So, the deployment is done as follows:
The first instance does not need to join a cluster, indeed, as there is no cluster schema to share yet.
A new instance can use the node name of any other node of the cluster. This means it will add itself to the ejabberd cluster schema, and ejabberd then knows that users can be on any node of this cluster. You can point to any running node in the cluster to add a new one, as they are all equivalent (there is no master).
You still need to configure the load balancer to balance traffic to the public XMPP port on all nodes.
You only need to perform the cluster join once for each extra cluster node. The configuration with all the nodes is kept locally, so when you stop and restart a node, it will automatically rejoin the cluster after it has been properly set up.
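As a hedged illustration of that flow on a freshly booted cloud instance (the node name ejabberd@ip-10-0-0-11 stands in for any existing cluster member):
# Start the local node and wait until it is up.
ejabberdctl start
ejabberdctl started
# Join the cluster by pointing at the Erlang node name of any running member.
ejabberdctl join_cluster 'ejabberd@ip-10-0-0-11'
# Confirm the new node now appears in the cluster.
ejabberdctl list_cluster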
Hello, is it possible to work with two Jenkins instances?
The Jenkins slave does the dirty work and saves an image, then uploads it to the Jenkins master. Or the Jenkins master downloads it from the slave. What would be the easiest way to do something like this?
PS: I don't know how to tag this topic, help.
edit1: I mean working with two computers on the same network. I tried something with Manage Jenkins > New Node but with no success. I will report back if I succeed.
edit2
Okay, I set up the slave to work via JWS and tied it to the project. Then I built the project to refresh the configuration. Now I have a problem with the port the slave listens on, but I think I will have to ask the admin to unlock it. I guess the port number to unlock is above 60,000.
I still don't know how to use the slave to do something and upload it to the master workspace, or use the slave to save data in its temp directory and force the master to grab this data from the slave. Any suggestion will be helpful.
Michal,
You don't need two instances of Jenkins to get information from the slave to the master. There are plugins for that.
===========================================================
There are two ways to add a Windows slave:
by service
by JNLP (this should work for sure)
Once you have a slave connected, install the following plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
===========================================================
Once this is done, set up two jobs. The first job runs on your slave (restrict where this job can run); make it run your Windows XP program and store the image. Archive this file as an artifact.
On the second job, use the plugin above to copy the artifact from the first job, and use the data in the file as you need it.
For more information, look at the links below:
Restricting jobs to a slave: https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
Using the copy plugin: https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Archiving an artifact: Archive the artifacts in hudson/jenkins (the first picture in the question should be useful)
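As a side note, once the image is archived by the first job it is also reachable over HTTP, which can be handy for debugging. A hedged example (host, job and file names are placeholders; assumes the standard Jenkins artifact URL layout):
# Download the archived image from the latest successful build of the first job.
curl -O http://JENKINS_HOST/job/windows-xp-job/lastSuccessfulBuild/artifact/output/image.png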
===========================================================
I have MASTER and SLAVE configured (ssh-slave-plugin).
I would like to display the output of the script executed on the slave under the job on the master,
as so far all I get is
Building remotely on SubAgent
Triggering SubAgent
Triggering a new build of XXXX #126
Finished: SUCCESS
and that is all. So the whole execution is hidden.
Is there any way to do that?
I am using the same master-slave configuration (ssh) on Hudson and all the logs are visible on the Hudson Interface.
There might be a couple of things that you can check:
What tool are you using to build (e.g., Ant, Maven...)? Check the execution to see whether the logs are being produced at all.
Check the Console Output [raw]
Manage Hudson > Manage Nodes > Select the Slave > Configure
Make sure that "Remote FS root" is set.
Check the Launch Method. I am connecting to my slaves via JNLP (I believe this could be the key).
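To check whether the slave's output is reaching the master at all, a quick hedged check (assuming default Jenkins/Hudson URLs; the host is a placeholder, the job name and build number are taken from the example above) is to pull the raw console text over HTTP:
# Fetch the raw console log of a specific build; this is the same text behind
# the "Console Output [raw]" link in the UI.
curl http://JENKINS_HOST/job/XXXX/126/consoleText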
Cheers!!
Go to Nodes, choose the node (or hover on the name) and select Build History. The logs of the job that ran on the slave node will be there.
I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break the network connection to the server, otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like Jenkins to accept the disruption until the slave comes back online and Jenkins can reconnect to it, or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to get jobs to a slave, run, and the results pulled back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT: part of the setup of each test is to slightly reconfigure the virtual machine in some fashion. Disable UAC, log on as a different user, have a different driver installed, etc. Each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method = Launch slave via execution of command on the master) that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs: I cannot configure the slave that early because the type of configuration changes is dependent on the job being run, which occurs only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that allows a job to know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice). The build step would handle the disconnect of the slave, ignore the usually job-failing disconnect, then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps a timeout to fail the job if the slave isn't reconnectable within a certain amount of time.
Current Solution (less than optimal)
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has a SSH server running on it and I use that to upload test files to the test VM, then remote execute them. Then download the results back to Jenkins for handling by the job. This solution is functional - but a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted towards a single VM; I can't easily use a pool of slaves.
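In rough, illustrative terms (host, user and paths below are placeholders, not the actual scripts), each job ends up doing something like:
# 1. Power on and reconfigure the test VM here (VMware PowerCLI or other vendor tooling).
# 2. Upload the test files to the VM over SSH (paths relative to the SSH user's home).
scp -r tests/ jenkins@testvm01.example.local:tests/
# 3. Run the tests remotely; a non-zero remote exit code fails the build step.
ssh jenkins@testvm01.example.local "tests/run_tests.cmd"
# 4. Pull the results back into the job workspace so the job can process and archive them.
scp -r jenkins@testvm01.example.local:tests/results ./results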
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down, whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
The above may not apply to you since it sounds like you want to do everything from one Jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
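If you do need a single script that rides through the reboot, here is a rough sketch along those lines (JENKINS_HOST and AGENT_NODE are placeholders; add -u user:apitoken and a CSRF crumb if your instance requires them): disconnect, power-cycle, then poll the node's api/json until the master reports it online again.
# Tell the master the agent is going away.
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" \
  http://JENKINS_HOST/computer/AGENT_NODE/doDisconnect
# ... revert / reconfigure / power the VM back on here ...
# Poll the Remote Access API until the node reports offline:false.
until curl -s http://JENKINS_HOST/computer/AGENT_NODE/api/json | grep -q '"offline":false'; do
  sleep 10
done
echo "Agent is back online"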
Very easy. You create a master job that runs on the master; from the master job you call the client job as a build step (it's a new kind of build step and I love it). You need to check the option so that the master job waits for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you can chain the jobs or use the master strategy mentioned above.
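If you would rather script the hand-off than rely on the build step, a hedged alternative (JENKINS_HOST and CLIENT_JOB are placeholders; authentication and a CSRF crumb may be required) is to trigger the client job through the remote API and poll that specific build until it finishes:
# Record the build number the client job will use next, then trigger it.
NEXT=$(curl -s http://JENKINS_HOST/job/CLIENT_JOB/api/json | grep -o '"nextBuildNumber":[0-9]*' | cut -d: -f2)
curl -X POST http://JENKINS_HOST/job/CLIENT_JOB/build
# Wait until that build exists and is no longer running.
until curl -s http://JENKINS_HOST/job/CLIENT_JOB/${NEXT}/api/json | grep -q '"building":false'; do
  sleep 10
done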