Can Hudson slaves run plugins?

We have a custom plugin for Hudson which uploads the output of a build onto a remote machine. We have just started looking into using a Hudson slave to improve throughput of builds, but the projects which use the custom plugin are failing to deploy with FileNotFoundExceptions.
From what we can see, the plugin is being run on the master even when the build is happening on the slave. The file that is not being found does exist on the slave but not on the master.
Questions:
Can plugins be run on slaves? If so, how? Is there a way to identify a plugin as being 'serializable'? If Hudson slaves can't run plugins, how does the SVN checkout happen?
Some of the developers here think that the solution to this problem is to make the Hudson master's workspace a network drive and let the slave use that same workspace - is this as bad an idea as it seems to me?

Firstly, go Jenkins! ;)
Secondly, you are correct — the code is being executed on the master. This is the default behaviour of a Hudson/Jenkins plugin.
When you want to run code on a remote node, you need to get a reference to that node's VirtualChannel, e.g. via the Launcher that's probably passed into your plugin's main method.
The code to be run on the remote node should be encapsulated in a Callable — this is the part that needs to be serialisable, as Jenkins will automagically serialise it, pass it to the node via its channel, execute it and return the result.
This also hides the distinction between master and slave — even if the build is actually running on the master, the "callable" code will transparently run on the correct machine.
For example:
import java.io.IOException;
import java.net.InetAddress;
import hudson.Launcher;
import hudson.model.AbstractBuild;
import hudson.model.BuildListener;
import hudson.remoting.Callable;

@Override
public boolean perform(AbstractBuild<?, ?> build, Launcher launcher,
        BuildListener listener) throws InterruptedException, IOException {
    // This method is being run on the master...

    // Define what should be run on the slave for this build
    Callable<String, IOException> task = new Callable<String, IOException>() {
        public String call() throws IOException {
            // This code will run on the build slave
            return InetAddress.getLocalHost().getHostName();
        }
    };

    // Get a "channel" to the build machine and run the task there
    String hostname = launcher.getChannel().call(task);

    // Much success...
    return true;
}
See also FileCallable, and check out the source code of other Jenkins plugins with similar functionality.
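For file access in particular, FilePath gives you the same transparency. Here is a minimal sketch, assuming it sits inside the perform() method above (the file name build-output/report.zip is made up; the extra imports needed are hudson.FilePath, hudson.remoting.VirtualChannel and java.io.File):
FilePath workspace = build.getWorkspace();
long size = workspace.child("build-output/report.zip") // hypothetical file that only exists on the build node
        .act(new FilePath.FileCallable<Long>() {
            public Long invoke(File f, VirtualChannel channel) throws IOException {
                // Runs on the machine that actually holds the file
                if (!f.exists()) {
                    throw new IOException(f + " does not exist on this node");
                }
                return f.length();
            }
        });
listener.getLogger().println("Artifact size: " + size);
Doing file handling through FilePath/FileCallable instead of plain java.io.File is usually what fixes the kind of FileNotFoundException described in the question.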
I would recommend making your plugin work properly rather than using the network share solution. :)

Related

Containers reconfiguration in real-time

I have faced the following case and haven't found a clear answer.
Preconditions:
I have a Kubernetes cluster
there are some options related to my application (for example debug_level=Error)
there are Pods running, and each of them uses that configuration (ENV, mount path or CLI args)
later I need to change the value of some option (the same 'debug_level', Error -> Debug)
The question is:
how should I notify my Pods that the configuration has changed?
Earlier we could just send a HUP signal directly to the process or call systemctl reload app.service.
What are the best practices for this use case?
Thanks.
I think this is something you could achieve using sidecar containers. This sidecar container could monitor for changes in the configuration and send the signal to the appropriate process. More info here: http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
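Whether that watcher lives in a sidecar container or inside the application process itself, the core of it is just watching the mounted config file and reacting to changes. Below is a minimal sketch of the in-process variant in Java; the mount path /etc/myapp and the reloadConfiguration() hook are made up, and it assumes the option is delivered through a ConfigMap volume mount (environment variables are not refreshed in running Pods, so changing those would still need a restart).
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class ConfigWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical mount point of the ConfigMap volume inside the Pod
        Path configDir = Paths.get("/etc/myapp");

        WatchService watcher = FileSystems.getDefault().newWatchService();
        configDir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something in /etc/myapp changes
            for (WatchEvent<?> event : key.pollEvents()) {
                // Kubernetes updates mounted ConfigMaps by swapping a symlink,
                // so any event here is a reasonable trigger to re-read the config
                System.out.println("Config changed: " + event.context());
                reloadConfiguration(); // hypothetical hook, e.g. re-read debug_level
            }
            key.reset();
        }
    }

    private static void reloadConfiguration() {
        // e.g. re-read /etc/myapp/debug_level and adjust the logger accordingly
    }
}
A sidecar variant would watch in the same way but then signal the application process instead of reloading in-process, which requires the containers in the Pod to share a process namespace.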
Tools like kubediff or kube-applier can compare your Kubernetes YAML files to what's running on the cluster.
https://github.com/weaveworks/kubediff
https://github.com/box/kube-applier

Clone an OpenShift application as scalable

I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application into a new, scalable one, which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have 3 free gears.
The way that blog post suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7) as the existing one, but all the files are missing. It actually creates a blank application and not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we've discussed on IRC, your problem was a missing SSH configuration on the Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all working properly.
Make sure you are using the newest version of the rhc gem with "gem update rhc" to make sure that you have access to that feature from the command line.
The --from-app option will essentially do an 'rhc snapshot save' and 'snapshot restore' (among other things), as you can see here in the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments, :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'

  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)

  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need, on FTP, CIFS, Confluence, Artifactory, etc., especially the ArtifactsDeployer, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace
directory." This is the directory where the code is checked out and
builds happen. Normally you should let Jenkins allocate and clean up
workspace directories, but in several situations this is problematic,
and in such case, this option lets you specify the workspace location
manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and that script can create a symlink from the build's default artifacts directory to where you really want the artifacts archived (archive_to), which SHOULD include the job name and build number:
import java.nio.file.Files
import java.nio.file.Paths
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly) when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle, and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.

Hudson svn credentials

How do I enter Subversion credentials in Hudson from the shell?
I've tried generating the file hudson.scm.SubversionSCM.xml in HUDSON_HOME and reloading the configuration, but the changes weren't applied.
The easiest way to enter a credential from the shell is to use the "svn" executable. Hudson recognizes the ~/.subversion/auth directory that it creates.
Under Windows the global credentials are stored under %APPDATA%\Subversion\auth. The following Groovy code helps generate these credentials:
import org.tmatesoft.svn.core.*
import org.tmatesoft.svn.core.auth.*
import org.tmatesoft.svn.core.io.*
import org.tmatesoft.svn.core.wc.*
SVNRepository repository = SVNRepositoryFactory.create(SVNURL.parseURIEncoded(url))
ISVNAuthenticationManager authManager = SVNWCUtil.createDefaultAuthenticationManager(SVNWCUtil.defaultConfigurationDirectory, "AD\\user", "password", true)
repository.setAuthenticationManager(authManager)
repository.getDir("", -1, null, (Collection) null) // or some random SVN operation
Libraries used in the code above (example in Gradle):
compile 'org.tmatesoft.svnkit:org.tmatesoft.svnkit:1.7.8'
compile 'net.java.dev.jna:jna:3.4.0' // so wincrypt is available
Make sure you run the code as the same user that Hudson runs as on the Windows machine.
Just start Hudson.
Install all required plug-ins.
Open the link, e.g. localhost:8080/hudson.
Click on "add job"/"create job".
While choosing the options, SVN will be listed there; give the SVN location.
A credentials link is shown there; click on that link.
A form will open; provide valid credentials for that SVN location.
Observe the success message on the screen, then go back to the job creation, complete it, and build the task.

On a Hudson master node, what are the .tmp files created in the workspace-files folder?

Question:
In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?
Background
Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that the files in this path stopped changing.
Some key configuration points of the job:
It is tied to a specific slave node
It builds in a 'custom workspace'
It runs the Task Scanner plugin on a portion of the workspace to find "todo" items
It triggers a downstream job that builds in the same custom workspace on the same slave node
In this particular instance, the .tmp files were being created by the Task Scanner plugin. When tasks are found, the files in which they are found are copied back to the master node. This allows the master node to serve those files in the browser interface for Tasks.
Per this answer, it is likely that this same thing occurs with other plug-ins, too.
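For context, that copy-back is the same master/slave remoting mechanism described in the first answer on this page. A rough sketch of what such a plugin might do (the file name and target path are made up, the real plugins add their own layout and temporary-file naming, and the imports are the same as in the FilePath sketch earlier):
// Inside a plugin's perform(...) method, with build and listener as usual
FilePath flagged = build.getWorkspace().child("src/Foo.java"); // hypothetical file containing a TODO
FilePath target = new FilePath(new File(build.getRootDir(), "workspace-files/Foo.java"));
target.getParent().mkdirs();
flagged.copyTo(target); // streams the file from the slave to the master over the channel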
Plug-ins known to exhibit this behavior (feel free to add to this list)
Task Scanner
Warnings
FindBugs
There's an explanation on the Hudson users mailing list:
...it looks like the warnings plugin copies any files that have compiler warnings from the workspace (possibly on a slave) into a "workspace-files" directory within HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/
The files then, I surmise, get processed, resulting in a "compiler-warnings.xml" file within the HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/ directory.
I am using the "warnings" plugin, and I suspect it's related to that.