Configure or create Hudson jobs automatically - configuration

Is there any way to have one Hudson job create new Hudson jobs based on a previous job? For example, if I need to create a new bunch of jobs one by one: automatically create 4 jobs with a similar configuration but a different parameter.
Basically, the steps are:
Create an SVN branch (I can call the svn cp command and parametrize it using a script)
Create a build based on the new SVN branch name
Later, tag it
In other words, I need to clone the previous job and substitute the new branch name wherever $Branch appears in the new job.
Thanks

You can try the Hudson Remote API for this kind of task (setting up a Hudson project).
See this tutorial for instance, and remember you can display the help quite easily:
java -jar hudson-cli.jar -s http://your_Hudson_server/ help
So, to copy a job:
java -jar hudson-cli.jar -s http://your_Hudson_server/ copy-job myjob copy-myjob
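Building on that, here is a sketch of how the whole clone-and-rebranch loop could look (hypothetical: template-job and the branch names are placeholders, and it assumes your CLI version also provides the get-job and update-job commands, as recent Jenkins CLIs do):
for BRANCH in branch1 branch2 branch3 branch4; do
  # clone the template job for this branch
  java -jar hudson-cli.jar -s http://your_Hudson_server/ copy-job template-job "build-$BRANCH"
  # rewrite every occurrence of $Branch in the copied config and push it back
  java -jar hudson-cli.jar -s http://your_Hudson_server/ get-job "build-$BRANCH" \
    | sed "s/\$Branch/$BRANCH/g" \
    | java -jar hudson-cli.jar -s http://your_Hudson_server/ update-job "build-$BRANCH"
done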

You could use a Groovy system script like this:
def jenkins = hudson.model.Hudson.instance
// use an existing job as the template
def template = jenkins.getItem("MyTemplate")
// copy it under a new name
def job = jenkins.copy(template, "MyNewJob")
// point the copy at the new branch
job.scm = new hudson.scm.SubversionSCM("http://base/branches/mybranche")
job.save()

Kind of already covered in the other answers, but for an easy way to copy the config.xml over:
curl --user USER:PASS -H "Content-Type: text/xml" -s \
  --data-binary @config.xml "http://hudsonserver:8080/createItem?name=newjobname"
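For completeness, a sketch of the full round trip (hedged: job/config.xml is the standard Hudson/Jenkins remote API path, but the sed substitution is only a placeholder for whatever branch edit you actually need):
# grab the template job's configuration
curl --user USER:PASS -s "http://hudsonserver:8080/job/myjob/config.xml" -o config.xml
# point it at the new branch (placeholder substitution)
sed -i 's|branches/oldbranch|branches/newbranch|' config.xml
# create the new job from the modified configuration
curl --user USER:PASS -H "Content-Type: text/xml" -s \
  --data-binary @config.xml "http://hudsonserver:8080/createItem?name=newjobname"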

There seems to be a plugin for Jenkins:
https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
I have not tested the plugin yet, but if it works, it should eliminate some of the human error that comes from straight copying a job and modifying variables/values.
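For illustration, a minimal Job DSL script for the branch scenario above might look like this (an untested sketch; the job name, SVN URL, and build step are placeholders):
job('build-mynewbranch') {
    scm {
        // check out the freshly created branch
        svn('http://base/branches/mynewbranch')
    }
    steps {
        // placeholder build step
        shell('./build.sh')
    }
}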

def jenkins = hudson.model.Hudson.instance
def template = jenkins.getItem("MyTemplate")
def job = jenkins.copy(template,"MyNewJob")
job.save()
I used this; now I have to change the parameter values of MyNewJob using Groovy. How can I do that?
For example, I have a parameter called "Build_BranchName" whose default is
//perforce/mybranch
and I have to change it to
//perforce/mynewbranch
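One possible approach (a sketch only, using the standard Hudson/Jenkins parameter classes; not verified against this exact setup) is to swap out the StringParameterDefinition inside the job's ParametersDefinitionProperty:
def jenkins = hudson.model.Hudson.instance
def job = jenkins.getItem("MyNewJob")
def prop = job.getProperty(hudson.model.ParametersDefinitionProperty.class)
def params = prop.parameterDefinitions
def idx = params.findIndexOf { it.name == "Build_BranchName" }
// the default value has no setter, so replace the whole definition
params[idx] = new hudson.model.StringParameterDefinition(
        "Build_BranchName", "//perforce/mynewbranch", params[idx].description)
job.save()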

You have the option that VonC just gave you (which is probably the safest way), but you can also go a different route by just creating a new directory in {Hudson_Home}\jobs (the directory name will be the job name) and copying a modified config.xml into it. The modification will basically just be the SVN URL. You should check out the XML from the job that you are copying. You need to find out how to change the XML file via script, but this is a secondary problem.
Unfortunately, you then have to either restart Hudson or force a reload of the configuration (visit http://<hudson_server>:<port>/reload to reload the config).
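A sketch of that approach from the shell (hedged: the sed substitution is a placeholder, and depending on your Hudson version the /reload endpoint may require a POST and authentication):
# clone an existing job directory under the new job name
cp -r "$HUDSON_HOME/jobs/myjob" "$HUDSON_HOME/jobs/newjob"
# swap in the new SVN branch (placeholder substitution)
sed -i 's|branches/oldbranch|branches/newbranch|' "$HUDSON_HOME/jobs/newjob/config.xml"
# tell Hudson to reload its configuration from disk
curl --user USER:PASS -X POST "http://hudsonserver:8080/reload"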

In case you're willing to use Git (like I do, mirroring the main SVN repo onto the Hudson/Jenkins server, and it works great)...
...you could try Stephen Haberman's post-receive-hudson:
This hook creates new jobs for each branch in the Hudson continuous integration tool. Besides creating the job if needed, the user who pushed is added to the job's email list if they were not already there.
In any case, that script can give you new hints on how to remote-control Jenkins (Hudson).

Related

Azkaban job configuration per environment

I plan to use Azkaban (https://azkaban.github.io/) for running batch jobs. Following CI ideas we have a few environments like dev, test, stage, and production, and of course a job should have a different configuration for each environment.
According to the Azkaban documentation (http://azkaban.github.io/azkaban/docs/latest/#job-configuration), Azkaban allows parameter substitution wherever a ${parameter} is found. The solution looks like this:
system.properties
myFlow/
    dev.properties
    ....
    prod.properties
    foo.job
#system.properties
env=dev
#dev.properties
dev.database=localhost:2181
#prod.properties
prod.database=aws:port
#foo.job
some command --db ${${env}.database}
Later, on each environment, we can override the env variable. From my point of view this solution looks strange. Can I just tell Azkaban which property file should be used on a given environment?
What is the best approach to do this?
Another approach can be found here: https://github.com/azkaban/azkaban/issues/1545#issuecomment-339750417

How to set up Mercurial hooks in Kallithea

I've been at it for a while now and I can't seem to get it working.
As per Kallithea documentation:
To add another custom hook simply fill in the first textbox with <name>.<hook_type> and the second with the hook path. Example hooks can be found in kallithea.lib.hooks.
So my first attempt was to add a new method to hooks.py. Basically, to test out the hook, I want to prevent ALL pushes to the repo. So I'll use pretxnchangegroup and return a non-zero, non-false value, as the Mercurial documentation states:
A hook that executes successfully must exit with a status of zero if external, or return boolean “false” if in-process. Failure is indicated with a non-zero exit status from an external hook, or an in-process hook returning boolean “true”. If an in-process hook raises an exception, the hook is considered to have failed.
So I did this:
def myhook(ui, repo, **kwargs):
    # in-process hooks signal failure by returning True, so this rejects every push
    return True
And I added the hook to the GUI in Kallithea hook options:
pretxnchangegroup <=> python:kallithea.lib.hooks.myhook
This, however, failed because for some reason the method can't be found:
abort: pretxnchangegroup hook is invalid ("kallithea.lib.hooks.myhook" is not defined)
So I tried putting it in another file (in the same 'lib' folder where hooks.py is). I created a file called canpush.py, added the same method there, and changed the hook path to target the new file name:
pretxnchangegroup <=> python:kallithea.lib.canpush.myhook
However, the hook does not trigger, and I can push to my repo without a problem. I plan to change the actual hook implementation in the future (pushing will be allowed), but first I need to get any hook functional with Kallithea.
What am I doing wrong here?
Also, if someone knows how to use hgrc settings from an individual repo within Kallithea, an example would be great. Original question here.
Answering my own question, but just to keep it as a reference.
As it turned out the setup was fine, but in an act of desperation I decided to restart the Kallithea daemon (which was nowhere in the documentation), basically thinking 'what could go wrong' - and that did the trick!
I guess during the startup process things get compiled/cached and the hook methods are found and functional. (If someone has a better explanation of what happens on Kallithea restart, please share it.)
So bear in mind: after every change to the hook files, the Kallithea daemon must be restarted for the hooks to take effect.
sudo service kallithea restart
Something else that wasn't clear to me from reading the Kallithea documentation is that the hooks are Mercurial hooks; they're not really some Kallithea/RhodeCode API, it's Mercurial all the way.
This means that the best source of documentation on how to write one is something like http://hgbook.red-bean.com/read/handling-repository-events-with-hooks.html
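For reference, a slightly friendlier version of the blocking hook might look like this (a sketch following the Mercurial conventions quoted above; the message text is arbitrary):
def myhook(ui, repo, **kwargs):
    # explain the rejection to the client before failing the transaction
    ui.warn('pushes to this repository are currently disabled\n')
    # returning True marks an in-process hook as failed
    return True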

Make: Redo some targets if configuration changes

I want to re-execute some targets when the configuration changes.
Consider this example:
I have a configuration variable (that is either read from environment variables or a config.local file):
CONF:=...
Based on this variable CONF, I assemble a header file conf.hpp like this:
conf.hpp:
	buildConfHeader $(CONF)
Now, of course, I want to rebuild this header if the configuration variable changes, because otherwise the header would not reflect the new configuration. But how can I track this with make? The configuration variable is not tied to a file, as it may be read from environment variables.
Is there any way to achieve this?
I have figured it out. Hopefully this will help anyone having the same problem:
I build a file name from the configuration itself, so if we have
CONF:=a b c d e
then I create a configuration identifier by replacing the spaces with underscores, i.e.,
null:=
space:= $(null) #
CONFID:= $(subst $(space),_,$(strip $(CONF))).conf
which will result in CONFID=a_b_c_d_e.conf
Now, I use this $(CONFID) as dependency for the conf.hpp target. In addition, I add a rule for $(CONFID) to delete old .conf files and create a new one:
$(CONFID):
	rm -f *.conf # remove old .conf files; -f so there is no error when no .conf files are found
	touch $(CONFID) # create a new file with the right name
conf.hpp: $(CONFID)
	buildConfHeader $(CONF)
Now everything works fine. The file named $(CONFID) tracks the configuration used to build the current conf.hpp. If the configuration changes, then $(CONFID) will point to a non-existent .conf file. Thus, the first rule will be executed, the old .conf file will be deleted, and a new one will be created. The header will be updated. Exactly what I want :)
There is no way for make to know what to rebuild if the configuration changed via a macro or environment variable.
You can, however, use a target that simply updates the timestamp of conf.hpp, which will force it to always be rebuilt:
conf.hpp: confupdate
	buildConfHeader $(CONF)
confupdate:
	@touch conf.hpp
However, as I said, conf.hpp will always be built, meaning any targets that depend upon it will need to be rebuilt as well. A much more friendly solution is to generate the makefile itself. CMake or the GNU Autotools are good for this, except you sacrifice a lot of control over the makefile. You could also use a build script that creates the makefile, but I'd advise against this, since there exist tools that will let you build one much more easily.
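One small, hedged addition to the confupdate snippet above: declaring the helper target phony makes the intent explicit and protects the trick from a stray file named confupdate:
.PHONY: confupdate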

Run build on change but don't checkout in Hudson

I have kind of an interesting problem...
So I'm trying to run a build every time I see a change on a directory in my SCM in Hudson. However, I don't need to pull the directory to run my script. Is there any way to check if there's a change in a directory, but not pull it?
In addition, there is another directory which I do need to pull in Hudson at the same time. So basically I want something like:
On change of directory A or B:
pull directory B only
run script
I was told there was functionality like this in Hudson, but I can't find it. Any suggestions? Thanks for the help!
In case anyone is interested, I was able to accomplish this with just Hudson and Perforce.
When using Perforce as the SCM (I don't know about the others), there is a 'Use View Mask' checkbox. Checking it gives you the ability to choose which directories/files in Perforce to poll without actually pulling those files. For example, I had in my view:
//depot/my_script
which pulls the latest version of my script. I didn't want the job to run automatically whenever the script had a new version, so I put it into the 'Poll Exclude File(s)' text box:
//depot/my_script
Then I checked the 'Use View Mask' checkbox and put:
//depot/my_code_to_compile/
into the View Mask box.
To make Perforce poll for changes, I just checked "Poll SCM" in the build triggers and made it check every minute (by inserting "* * * * *" into the Scheduler box).
So to sum up, with the variables set as above, my Hudson job had the following behavior:
check for changes every minute
On changes to //depot/my_code_to_compile/, the Hudson job will run
On changes to //depot/my_script, nothing will happen
The job will pull changes to my_script, but will download nothing from //depot/my_code_to_compile/.
I think you need to install the FSTrigger Plugin for this functionality. From what the wiki pages show, this is supported in Jenkins; I am not sure about Hudson compatibility.

How to get a response from a script back to Hudson and to fail/success emails?

I'm starting a Python script from a Hudson job. The script is started through 'Execute Windows batch command' in the build section, as 'python my_script.py'.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes data to stderr, which the batch file reads into a temp file and then into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it's not visible any more. The email sending is probably done in a different process, so the variables are not visible anymore. I'm accessing the env vars in the email as ${ENV, varname} (or actually in debug mode as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone provide a better solution for getting data from the Python script back to Hudson?
All the related parts (Hudson, batch and Python script) are under my control and can be modified as needed.
Thanks.
Every build step gets its own shell. This implies that your environment variables are only valid within the build step.
You can just write the data in a nice format to standard output (use a header that is easy to identify), and if the job fails, the data output gets attached in the email.
If you insist on including only the data, you can use the following token with the Editable Email Notification post-build action (email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
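For example (a sketch only; the DATA: prefix is an arbitrary marker you would print from the script):
# in my_script.py, tag the lines you want in the mail
print("DATA: tests_passed=42")
and in the email body:
${BUILD_LOG_REGEX, regex="^DATA:.*", maxMatches=10, showTruncatedLines=false}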