Different Hudson folders for wars and jobs - hudson

Is there any way to have the war files of Hudson in a different directory or drive than the job files?
We want to have all executables in c:\programme\hudson and all jobs in f:\data\hudson.
I've already played around with the settings in hudson.xml. But this not only redirects the job directory, it also copies the whole war directory to the new destination folder.
Is there any way to configure Hudson (on a windows server) to have a separation of the executable and the data/job directories?

Setting HUDSON_HOME to f:\data\hudson should do the trick.
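On Windows that could look roughly like this (an untested sketch run from an elevated prompt; the service name hudson is an assumption and may differ on your install):
rem set HUDSON_HOME machine-wide, then restart the Hudson service so it takes effect
setx HUDSON_HOME "f:\data\hudson" /M
net stop hudson
net start hudson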

I don't think this problem has an easy solution. Besides deploying to an app server, I can come up with two options:
Configure the workspace explicitly in every job to point to F:\data\hudson.
Create a file system link from c:\programme\hudson\jobs to f:\data\hudson (a rough sketch follows below). I have never used this myself, so have fun reading through the following links: hard links and junctions, symbolic links.
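If you go the junction route, it could look roughly like this on an elevated command prompt (untested; it assumes the existing job data has already been moved to f:\data\hudson\jobs):
rem create a junction so Hudson still finds its jobs under the install directory
mklink /J c:\programme\hudson\jobs f:\data\hudson\jobs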

I'm not sure if this is what you want, but I run hudson simply via java -jar, and then I can specify freely where the hudson war is. It seems the war unpacks into HUDSON_HOME when starting up, but I still have a separate directory where I keep the wars and download upgrades, and I can just change the shortcut when I want to run a newer war.
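For illustration, a launcher along these lines keeps the war and the data apart (a sketch using the paths from the question; adjust as needed):
rem keep the war on C: but let Hudson unpack and store its data on F:
set HUDSON_HOME=f:\data\hudson
java -jar c:\programme\hudson\hudson.war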

We run Hudson on a Windows server and use Tomcat as our container.
In this setup, you can set HUDSON_HOME to whatever you want, which holds the job configuration, and then the HUDSON.WAR file lives in C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps.
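One way to pass that along is a Java system property in Tomcat's optional setenv.bat (a hedged sketch that applies when Tomcat is started via its scripts; Hudson also honors a plain HUDSON_HOME environment variable, so either mechanism should work):
rem %CATALINA_HOME%\bin\setenv.bat: point the Hudson webapp at the data drive
set "CATALINA_OPTS=%CATALINA_OPTS% -DHUDSON_HOME=f:\data\hudson"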

Related

How to set where to download the VM in minishift?

It downloads OpenShift into the C:\Users\[user]\.minishift\machines folder. How can I change this location to, say, D:\My VMs\? The config set help is not very clear about which setting controls which location.
Minishift version: v1.15.1
Platform: Windows
Driver: Hyper-V
Any help would be greatly appreciated.
It looks like the machines directory can't be set directly through config. It is set relative to a base directory in instance_dirs.go.
That base directory, by default, is the .minishift directory in the home directory of the user, e.g. C:\Users\[user]\.minishift on Windows, but this can be overridden by setting the environment variable MINISHIFT_HOME.
The base directory could also be a profile directory, if you are not using the default profile (the default being minishift).
$ minishift profile list
- minishift Stopped
$ minishift profile set myprofile
Profile 'myprofile' set as active profile.
The machines directory for myprofile would then be created under $MINISHIFT_HOME/profiles/myprofile/machines, e.g. on Windows C:\Users\[user]\.minishift\profiles\myprofile\machines.
So you can set MINISHIFT_HOME and move the whole contents of the .minishift directory, including machines, somewhere else, but it doesn't look like you can move just machines alone.
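For example (a sketch only; the target path is just an illustration), relocating the whole base directory could look like this:
rem move the existing contents and point minishift at the new base directory
move "%USERPROFILE%\.minishift" "D:\My VMs\.minishift"
rem setx only affects new command prompts, so start minishift from a fresh one
setx MINISHIFT_HOME "D:\My VMs\.minishift"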
Perhaps you could solve this at the OS level by creating a symlink between C:\Users\[user]\.minishift\machines and D:\My VMs\.
In case it helps others, and to expand on #codemonkey's great answer so they don't need to test the different ways of using symlinks themselves, this is what I did to use a symlink, as my C drive had no available space. I'm also using Hyper-V as the driver.
Note: I do have minishift.exe installed in the apps folder on my D drive
Note 2: I did have to run the command prompt in admin mode
From the C:\Users\[user]\.minishift folder I moved the "machines" folder to D:\Apps\minishift-1.32.0-windows-amd64\
I first tried a soft link, which didn't work. I then tried a hard link, but I was getting errors, so I used a "directory junction" link with the /J switch, like so: C:\WINDOWS\system32>mklink /J C:\Users\[user]\.minishift\machines D:\Apps\minishift-1.32.0-windows-amd64\machines
You should get the following result: Junction created for C:\Users\[user]\.minishift\machines <<===>> D:\Apps\minishift-1.32.0-windows-amd64\machines
Then, if necessary, run minishift delete --clear-cache. WARNING: this will delete any previous images and hosts you might have!
Then start minishift as normal with minishift start
Grab a cup of coffee, or go smoke a cigarette or vape, as it will take a while for the OpenShift server to start.
Hope this answer might help others who face a similar issue.

Openshift: where to put resource files that I want outside of the deployment folder

I'm starting a new web app with OpenShift (JBoss, MySQL). It's the first time I've used OpenShift, and after reading through some docs and experimenting a bit with it, I have one question regarding best practices for the architecture of my app.
There will be some files generated by, or uploaded to, the application (resources). I'd like those files to be outside the deployment folder so they are not erased/overwritten when the app deploys again. I have browsed through the directories and I was wondering:
Is it OK to use the /var/lib/openshift/[openshift-id]/app-root/data folder for these files?
Yes, you should use your ~/app-root/data folder for any files that you don't want erased when you do a git push. There is also an environment variable that points to that folder, called OPENSHIFT_DATA_DIR. Please note that if you are using a scaled application, that folder is not shared among your gears.
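For instance, a deploy action hook could keep an uploads directory in the data dir and re-link it into the fresh deployment on every push (a rough sketch; the uploads name and the hook itself are assumptions, while OPENSHIFT_DATA_DIR and OPENSHIFT_REPO_DIR are the gear's standard variables):
#!/bin/bash
# .openshift/action_hooks/deploy: persist uploads outside the deployed code
mkdir -p "$OPENSHIFT_DATA_DIR/uploads"
ln -sfn "$OPENSHIFT_DATA_DIR/uploads" "$OPENSHIFT_REPO_DIR/uploads"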

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need (FTP, CIFS, Confluence, Artifactory, ...), especially the ArtifactsDeployer, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and this script can create a symlink from the build's artifacts directory to wherever you really want archiving to go (archive_to), which SHOULD include the job name and build number:
import java.nio.file.*
try {   // createSymbolicLink throws an IOException on failure instead of returning false
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.

Jenkins build outside of workspace

I am new to Jenkins/Hudson and am trying to migrate a C make-based project from buildbot. For legacy reasons, the build system is hard-coded to build outside of the versioned source tree (git), one directory above, in a separate directory. E.g.:
workspace
    .git
    foo
    bar
build
    artifacts
Besides the fact that it ends up creating a directory outside the workspace, Jenkins won't recognize items in the build/ directory above as artifacts to archive.
How can I make this kind of build system work with Hudson? Building in-source-tree is not a short-term option. The only option I found was "use custom workspace," but all this does is hard-code the workspace directory to some other directory.
To answer my own question: there is indeed an option in Jenkins git plugin to check out to a local subdirectory instead of the root of the workspace. With the git plugin, click on the Advanced button and fill in the field "Local subdirectory for repo (optional)".
I didn't find the option that djs mentioned, but you can specify a different work directory:
Configure job
Extended Project settings
Use custom work space
This can be set to anywhere you want, including the workspace of a different job.

Get changes from mercurial to FTP site

I work with a partner on a PHP site for a client. We have a common Mercurial repository (on Bitbucket), both local copies and the live site. We have only FTP access to the live site (which can't be changed, since it is a hosting package with FTP only).
I want to be able to push changes from the repository to the live site.
Until now I have simply kept track of changed files in the repo and copied them manually with FileZilla, an error-prone and annoying task. My idea is to mount the remote location locally (i.e. using CurlFtpFS) and tell Mercurial to automagically copy changed files to the site. Ideally I want to be able to specify which changes, but this would be a bonus. It would be sufficient if the local state of the files within the repo is synced.
Is there any good way to do this using linux commandline tools?
My first recommendation is, if at all possible, get a package that allows more access. FTP only is just brutal.
But since you are looking for a real answer to your question, I have two ideas for you:
I would suggest looking into the Mercurial FTP Extension. I personally have never used it since I have never gotten myself stuck in an FTP-only situation (not for a long time, at least), but it looks promising. It looks like, if you make sure to tag your production releases, it will work really well for you (make sure to use the -uploaded param).
Also, if you only ever want the tip to be installed on your production env, then you could look at the suggestion Martin Geisler made on the Bitbucket user group a few days ago. Basically, his suggestion is to utilize Bitbucket's "ping url" functionality. You would have to write a server-side script/URL handler that accepts that ping, then fetches the tip from Bitbucket (as a zip) and unzips/unpacks it. This is a bit complicated, but if you are looking for complete automation and the tip will always be what you want, this could work for you.
One notion is to use the hg archive command:
hg archive /path/to/curlftpfs
which will put a snapshot of your repo in that location -- it will however overwrite any file already there.
Another option is to create a Mercurial clone in that same /path/to/curlftpfs and then just do a hg pull ; hg update in it on your local system with the remote one mounted. Setting that up initially will mean transferring the whole thing, but subsequently you'll only be sending deltas.
Some folks don't like this last option because it exposes your entire .hg repository too, but you can block access to that at the web server.
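A rough sketch of that workflow (host, credentials, and mount point are placeholders, and it assumes the clone already exists on the mounted site):
# mount the FTP site locally, then update the clone that lives on it
curlftpfs ftp://user:password@ftp.example.com /mnt/site
cd /mnt/site && hg pull && hg update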
I came across this problem a while ago after switching from AWS to a local web hosting that provides only ssh/ftp.
My previous approach of updating a production site on AWS using "hg pull; hg update -C" can no longer be used on the new web hosting. They don't have mercurial installed for shared hosts.
So what I did is mount the remote location using FTP to a local machine (i.e. your laptop), then run the hg pull and hg update commands locally on your machine at the path where the remote FTP site is mounted.
Windows solution:
BeyondCompare (http://www.scootersoftware.com/) is an awesome piece of software. Apart from being awesome, it can mirror your local folder to the FTP site. It compares files and only transfers what's new.