Is there a way to set the next build number in Hudson from a script?
I have the nextBuildNumber plug-in installed, and attempted to use wget with --post-data, but that page appears to require login.
I have two steps of a chained build and I want to keep the build numbers in sync.
There's a file named jobs/$JOBNAME/nextBuildNumber. It contains, in plain text, the next build number to be used.
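For example, a minimal sketch from the shell (the home directory and build number are assumptions; if Hudson is already running, reload its configuration afterwards so it picks the new number up):

echo 42 > "$HUDSON_HOME/jobs/$JOBNAME/nextBuildNumber"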
Use HTTP authentication to log into your Hudson server as a user with sufficient privileges to schedule a build.
The authenticating scripted clients page on the Hudson wiki describes this.
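For instance, something along these lines schedules a build from a script (the host, user, password/API token, and job name are placeholders):

wget --auth-no-challenge --http-user=myuser --http-password=my-api-token \
     --post-data='' http://hudson.example.com/job/myjob/build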
So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments about the best way to do things like this, and there are different answers and different options; the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json.
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing updates out of the packages when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
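For example, a locked-down require block might look like this (the custom package name and the commit hash are placeholders):

"require": {
    "monolog/monolog": "1.13.1",
    "acme/custom-extension": "dev-master#8e45af9"
}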
Actual deployments
Possible Method 1
My opinion is slightly controversial: for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, and it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers will be enough to handle any remote system failures, that it clogs up the VCS tree, etc. I won't go into these now; there are arguments for and against (a lot of them opinion based), but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
Make your document root a symlink on the server: build each deployment into its own new directory, and only switch the symlink over to that directory once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
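A minimal sketch of such a switch (the release path and docroot symlink are assumptions):

# build into a fresh release directory first (e.g. run composer there),
# then re-point the docroot symlink only once the build has succeeded
ln -sfn /var/www/releases/20160801 /var/www/current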
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of your vendor files as zip archives. It is an alternative to adding the entire vendor folder to your VCS, but it also protects you against GitHub or Packagist outages, files being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than being fetched from a server, and the folder itself can be stored remotely; think of it as a poor man's private Packagist (another option, by the way).
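As a sketch, the corresponding entry in composer.json looks something like this (the folder path is an assumption):

"repositories": [
    { "type": "artifact", "url": "path/to/artifact/files/" }
]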
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, feeding it the tested artifact will probably just become one of its deployment scenarios; a rough sketch of the build step follows.
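As a rough sketch, the build step of such a Jenkins job could boil down to this (the tool names are assumptions; BUILD_NUMBER is provided by Jenkins, and a failing command fails the build):

./vendor/bin/phpunit                         # run tests on your application
composer update --no-interaction             # resolve and fetch the locked versions
zip -r "artifact-${BUILD_NUMBER}.zip" . -x ".git/*"   # generate the artifact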
Hope this helps :)
We are developing a service for our QA staff.
The main goal is that a tester, from our web interface, can select a GitHub branch and a dump for a particular machine, click a "Deploy" button, and have the Rails app for testing deployed to DigitalOcean.
The feature I am now working on is collecting deployment logs and displaying them through our web interface.
On each DO droplet there is a "logs" folder which contains different log files that are populated during deployment:
migrations_result_#{machine_id}.log, bundle_result_#{machine_id}.log, etc.
Where #{machine_id} is the id of the deployed machine on our service (it is not the droplet id).
With the help of the remote_syslog gem we monitor the "logs" folder on each droplet and send the files over UDP to our main service server, where with the help of rsyslog we store them in a particular folder, let's say /var/log/deplogs/.
So in /var/log/deplogs/ we have:
migrations_result_1.log, bundle_result_1.log,
migrations_result_2.log, bundle_result_2.log,
...
migrations_result_n.log, bundle_result_n.log
How should I monitor this folder and save the contents of each log file to a MySQL database?
I need to achieve something like the following (Ruby code):
Machine.find(machine_id).logs.create!(text: File.read("migrations_result_#{machine_id}.log"))
Rsyslog does not seem to be able to achieve this. Or am I missing something?
Any advice?
Thanks in advance, and sorry for my English, I hope you can get the idea.
First of all, congratulations! You are facing a beautiful problem. My suggestion is to divide and conquer.
Here are my considerations:
Put the relevant folder(s) under version control (for example, Git).
Every X amount of time, check via Git commands which files have changed.
Also obtain the differences between the prior version of each file and the new one, so you can update your database by parsing the new info.
Just in case, here are ways to call system commands from Ruby.
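For example, here is a minimal polling sketch along those lines in Ruby. It simplifies the version-control idea to plain mtime tracking, assumes the Machine model and its logs association from the question, and should run inside the Rails environment (e.g. via rails runner):

# remember each file's mtime so only files that changed since the
# last pass are re-read
seen = {}
loop do
  Dir.glob("/var/log/deplogs/*_result_*.log").each do |path|
    mtime = File.mtime(path)
    next if seen[path] == mtime
    seen[path] = mtime
    # recover the machine id from names like migrations_result_42.log
    machine_id = path[/_(\d+)\.log\z/, 1]
    next unless machine_id
    Machine.find(machine_id).logs.create!(text: File.read(path))
  end
  sleep 30 # the poll interval is an arbitrary choice
end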
Hope that helps,
I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface (i.e. rather than merely an add-on that copies the artifacts elsewhere)?
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want or need them (FTP, CIFS, Confluence, Artifactory...), especially the ArtifactsDeployer, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam, for your post, which directed me into the right direction to solve my problem.
I had been searching for a way to create a symlink to the job archive of a build for multibranch projects. Up to now, we used to search manually for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and this script can create a symlink from the build's artifacts directory to where you really want archiving to go (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.*
// createSymbolicLink throws an IOException on failure rather than returning false
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
            Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, that symlink will dangle and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one so you can work from there.
We've upgraded Hudson to Jenkins and have a few dependencies on the "hudson" user we used to have.
Now that we have jenkins running (works fine) we'd like it to run as the user "hudson" in order to keep our other processes intact without having to rewrite them.
We found instructions on how to do this BEFORE installing Jenkins, but we're already past that point. Jenkins is installed and up and running. Is there a way to make Jenkins run as the user "hudson"?
We are running CentOS.
Jenkins usually runs as its own user, so there are two main issues to handle:
Make sure user 'hudson' has full access to the files of user 'jenkins' (or whatever user it was set to run as).
Start the Jenkins-daemon (or other initiator) with the 'hudson' user.
(another approach is to change the user-ID so it is actually the same user but with two names)
Good luck!
If you've installed Jenkins from RPM, there should be an /etc/sysconfig/jenkins file with a JENKINS_USER setting that defaults to 'jenkins' that you can change to 'hudson'.
I second Gonen's comment above about making sure you change the ownership of the 'jenkins' owned files to 'hudson'. Don't forget about the /var/log/jenkins logs.
Also don't forget to restart the Jenkins service after updating the files.
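On a CentOS RPM install, the whole switch might look like this (the paths are the RPM defaults; adjust them to your layout):

# in /etc/sysconfig/jenkins, set: JENKINS_USER="hudson"
chown -R hudson:hudson /var/lib/jenkins /var/log/jenkins /var/cache/jenkins
service jenkins restart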
How do I enter Subversion credentials in Hudson from the shell?
I've tried generating the file hudson.scm.SubversionSCM.xml in HUDSON_HOME and reloading the configuration, but the changes weren't applied.
The easiest way to enter a credential from the shell is to use the "svn" executable: Hudson recognizes the ~/.subversion/auth directory that it creates.
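For example (the repository URL is a placeholder; run this as the user Hudson runs as, so the cache lands in that user's home directory):

sudo -u hudson -H svn list https://svn.example.com/repo/trunk
# answer the certificate/password prompts once; the credentials are then
# cached under ~hudson/.subversion/auth, where Hudson will find them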
Under Windows the global credentials are stored under %APPDATA%\Subversion\auth. The following Groovy code helps generate these credentials:
import org.tmatesoft.svn.core.SVNURL
import org.tmatesoft.svn.core.io.SVNRepositoryFactory
import org.tmatesoft.svn.core.wc.SVNWCUtil
def repository = SVNRepositoryFactory.create(SVNURL.parseURIEncoded(url))
// note the escaped backslash in the domain user name
def authManager = SVNWCUtil.createDefaultAuthenticationManager(SVNWCUtil.defaultConfigurationDirectory, "AD\\user", "password", true)
repository.setAuthenticationManager(authManager)
repository.getDir("", -1, null, (Collection) null) // any SVN operation that triggers authentication
Libraries used in the code above (example in Gradle):
compile 'org.tmatesoft.svnkit:org.tmatesoft.svnkit:1.7.8'
compile 'net.java.dev.jna:jna:3.4.0' // so wincrypt is available
Make sure you run the code with the same user Hudson runs on the Windows machine.
Just start with Hudson.
Install all required plug-ins.
Hit the link, e.g. localhost:8080/hudson.
Click on "Add job" / "Create job".
While choosing the options, SVN will be present there; give the SVN location.
A credentials link is present there; click on that link.
A form will open; provide valid credentials for that SVN location.
Observe the success message on the screen, then get back to the job creation, complete it, and build the task.