Archive builds in Hudson, save old builds - Hudson

Is it possible to configure Hudson to save a copy of all the files per build? That is, every time a build is triggered it grabs the files from the repository, stores them in a directory, and builds them. Then, when another build is triggered, it grabs the files from the repository and stores them in a different directory, to keep the build copies separate instead of updating the same copy over and over again?

You can always use the archive plugin and set the filter to include as much as you want, or you can use the Clone Workspace plugin. I don't see much value in keeping all the files, unless you want to run tests on the code that are so time-consuming that you give initial feedback right after the build and then run the tests in a separate job.
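If you do want a per-build copy on disk anyway, a shell build step along these lines would do it (just a sketch: the archive location is an assumption, while JOB_NAME, BUILD_NUMBER and WORKSPACE are standard Hudson environment variables):

# Snapshot the checked-out files into a separate per-build archive (hypothetical location)
ARCHIVE_DIR="/var/hudson/build-archives/$JOB_NAME"
mkdir -p "$ARCHIVE_DIR"
tar czf "$ARCHIVE_DIR/build-$BUILD_NUMBER.tar.gz" -C "$WORKSPACE" .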

Related

How to change the sequence of the CCNet file labeller

I have a CCNet project that uses a file labeller for reporting the build number. The problem is that the build process increments the value in this file, but the labeller event always occurs at the beginning of the build, no matter where I sequence the task in the config XML file. I suspect that internally CCNet processes the entire file and sequences certain events in a pre-defined order.
Is there a way to sequence this event to occur after the file has been updated? In my particular case I need to run a clean on all .NET projects before changing the value of the target file.
One thought I have, but have never played with, is to create a project that cleans the VB.NET projects and updates the build number. Once this has completed, the main project can kick off. This process would have to check for modifications for CI.
Perhaps someone has a better solution?
Thank you.
Dan
I developed a solution by creating a separate BuildLabeller file and adding a PowerShell routine that increments the build version value in it and then uses Set-Content to overwrite the file. The BuildLabeller file is always one value higher than the current build, so that when the next build starts, it holds the correct value for CCNet's file labeller task.
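For illustration, the core of that routine boils down to something like the sketch below, expressed here as a plain shell step rather than the PowerShell described above; the labeller file name and its one-integer format are assumptions:

# Sketch: bump the number stored in the labeller file so the next build gets the right label
LABEL_FILE=BuildLabel.txt          # assumed file name and format (a single integer)
CURRENT=$(cat "$LABEL_FILE")
echo $((CURRENT + 1)) > "$LABEL_FILE"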

A practical way to provide code updates via Mercurial without sharing main BitBucket account

I suspect this might be really obvious but I can't find a straightforward solution in the documentation or forums:
I have written some code that is held in a Mercurial repository on BitBucket.
I use this code to build Linux virtual servers. When I build a server, I clone the repo onto the server, run my build script, and then delete the clone. The result is a configured server with several files from my repo located in various folders on the server.
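Roughly, that flow is the following (the repository URL, paths and script name are placeholders):

# Current provisioning flow, roughly (all names are placeholders)
hg clone https://bitbucket.org/myaccount/myrepo /tmp/build-checkout
/tmp/build-checkout/build.sh       # the commercially sensitive build script
rm -rf /tmp/build-checkout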
Now, I'm looking for a mechanism where I can roll out bug fixes and improvements to my users' servers after I have handed them over. At that time, I won't have SSH access to the servers and I cannot expect my end users to do anything more complicated than kick off a cron job or launch a script.
To achieve this, I'm thinking of setting up a BitBucket account for my users with read-only access to my repo.
I have no problem writing a script to clone my repo, via this read-only account, and apply the updates, but I don't want to include all my files. In particular, I want to exclude my build script as it is commercially sensitive. I know I could remove it from my repo, but then my build wouldn't work.
Reading around, it seems I may need to create a branch or a fork of my repo (which?). Or maybe a sub-repo? Then, I could remove the sensitive files from that branch/fork/sub-repo and allow my users to clone it via a script.
That's OK, but I need a way to update the branched/forked/sub repo as I make changes to the main one. Can this be automatic? In other words, can it be set up to always reflect the updates made in the main repo? Excluding the sensitive files of course.
I'm not sure I'd want updates to be automatic though, so I'd also like to know how to transfer updates from the main to the branch/fork/sub manually. A merge? If I do a merge, how do I make sure my sensitive files don't get copied across?
To sum up, I have a main repo which contains some sensitive files and I need a way to roll out updates of all but those sensitive files to my read-only users.
Sorry if this is hugely obvious. I'm sure it's a case of not seeing the wood for the trees and being overwhelmed by the possibilities. Thanks.
I don't think that you need to solve this in Mercurial at all.
What you actually need is Continuous Integration / a build server.
The simplest solution goes like this:
Set up a build server with something like TeamCity or Jenkins that is always online and monitors your Bitbucket repository for changes.
You can set it up so that when there's a change in your repository, the build server runs your build script and copies the output to some FTP server, or download site, or whatever.
Now you have a single location that always contains the most recent code changes, but without the sensitive files like the build script.
Then, you can set up a script or cron job that your end users can run to get the newest version of the code from that central location.
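For example, the user-facing update script (which the cron job would call) could be as small as the sketch below; the download URL, archive name and install directory are all placeholders:

#!/bin/sh
# Fetch the latest published build output (not the repository) and unpack it in place
curl -fsSL -o /tmp/update.tar.gz https://downloads.example.com/myproduct/latest.tar.gz
tar xzf /tmp/update.tar.gz -C /opt/myproduct
rm -f /tmp/update.tar.gz
# e.g. scheduled in the user's crontab:
# 0 3 * * * /usr/local/bin/get-updates.sh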
You can get by with two branches: one that the users clone (main) and another for your main development (dev). The tricky part is merging the new changes from dev into main.
You can solve this by excluding files from the merge process, as described in "Excluding a file while merging in Mercurial".
By setting the [merge-patterns] section in your .hgrc you can specify which files are not affected by the merge.
[merge-patterns]
build.sh = internal:local
For more info read hg help merge-tools.
"internal:local"
Uses the local version of files as the merged version.
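With that pattern in place, pulling your latest work into the users' branch is a normal merge; a sketch, assuming the main/dev branch names suggested above and build.sh as the protected file:

hg update main
hg merge dev        # build.sh keeps main's version thanks to [merge-patterns]
hg commit -m "Merge dev into main"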
Entire Mercurial trees always get moved around together, so you can't clone or pull just part of a repository (along the file-tree axis). You could keep one branch that has only some of the files and another branch that has everything, making it easy to merge the partial (in terms of files) branch into the full branch (but merging the other way wouldn't be particularly easy).
I'm thinking maybe subrepositories work for your particular use case.
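If you do try subrepositories, the mechanics of declaring one are roughly as follows (a sketch; the subrepo path and repository URL are placeholders):

# In the parent (shareable) repository
hg clone https://bitbucket.org/myaccount/private-tools private     # subrepo working copy
echo "private = https://bitbucket.org/myaccount/private-tools" > .hgsub
hg add .hgsub
hg commit -m "Reference private tooling as a subrepository"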

Simultaneous instances of the same Hudson/Jenkins job

I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; the features needed to support your use case have certainly improved over the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
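For example, a shell build step that performs its own checkout might look like this (a sketch assuming a Mercurial repository and a hypothetical build entry point; adapt to your SCM):

# REPO_PATH is the build parameter supplied by the user
hg clone "$REPO_PATH" source
cd source && ./build.sh        # hypothetical build script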
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. You can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
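From a script or another system, that trigger is just an HTTP request; a sketch (the server name, credentials and parameter value are placeholders, and depending on your security settings you may also need an API token or CSRF crumb):

curl -X POST --user someuser:apitoken \
  "http://server/job/myjob/buildWithParameters?REPO_PATH=/path/to/repo"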
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build. You probably want to manage how many recent artifacts are kept, though; this can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".

Modifying Saved Artifacts On a Particular Jenkins Build for Deployment

We have a .NET Jenkins installation that builds a few .NET apps. These apps include a bunch of *.exe and *.exe.config files. Right now, I save the app as a zipfile containing all of the *.exe files, the required DLLs and *.xml files, and the default *.exe.config files. The default *.exe.config files get their values from what is in the Subversion repository and are tuned for the production environment.
The *.exe.config files contain the database name, the database server, the name of the server, etc. These are correct for the production environment, but not for UAT, QA, or developer testing.
What I'd like to do is have some sort of post-build task where a user can specify a particular build, plus the values for the parameters that vary from environment to environment. If I had that, I could run a NAnt or Ant task that unzips the zipfile, munges the *.exe.config files and either deploys the result (my ultimate goal), or at least zips it up and puts it somewhere the user can access it.
I know there's a parameterized build, and I know there are batch tasks, but I need a combination of the two. Is that possible?
It's not as elegant, but I think you can implement what you want as a separate build. You need:
A Parameterized Build (which you know about)
A way to access artifacts from another build
Given these pieces, you should be able to create a Parameterized Build that does exactly what you describe: grabs the build artifact, munges the configuration and provides it somewhere for the user. Of course the devil is in the details, e.g. it may be tricky to make it easy for a user to "select the right build".
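As a rough sketch of what such a parameterized job could do, expressed here as a shell step for brevity: it takes the artifact name and environment values as build parameters, unpacks the zip, rewrites the configs and repackages them. All parameter names and the sed patterns are assumptions about your layout:

# Hypothetical build parameters: ZIP_NAME, DB_SERVER, DB_NAME
unzip -o "$ZIP_NAME" -d staging
# Crude munge: swap the production values baked into the configs for the requested ones
sed -i "s/prod-db-server/$DB_SERVER/g; s/ProdDatabase/$DB_NAME/g" staging/*.exe.config
(cd staging && zip -r "../configured-for-$DB_SERVER.zip" .)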
Update (now that I've learned about Batch Tasks - thanks!): I don't see a way to parameterize the batch task like you asked. I'm guessing that the combination of variables makes it prohibitive to define a bunch of different batch tasks. You could define a couple of batch tasks for common release and testing configurations and also provide the "munger" program for more user-specific configuration.

Maintaining multiple workspaces for each build in Hudson

Is it possible to maintain multiple workspaces for each build in Hudson? Suppose I want to keep the last 5 builds; is it possible to have the five corresponding workspace folders as well? Currently, whenever a new build is scheduled, it overwrites the workspace.
Right now, the idea is to reuse the workspace.
The workspace is based on the SCM used (an SVN workspace, a Git workspace, a ClearCase snapshot or dynamic view, ...), and in none of those SCM plugins do I see an option to create a new workspace or to save (copy) an old one for each run of the job.
One (poor) solution would be to:
copy the job four times, resulting in 5 jobs, each modified to specify a different workspace (all based on the same SCM configuration, so the 5 workspaces select the same versions),
and have them scheduled to run one after the other.
As far as I know, there's no built in way to do it.
You do have a couple of options:
As one of your build steps, you could tar (or zip) up the workspace and record it as a build artifact (see the sketch after this list).
Generate a tag with each successful build (e.g. with the Subversion Tagging Plugin)
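For the first option, the build step can be a one-liner like this sketch (the tarball name is arbitrary, and you would then point the artifact-archiving filter at workspace-*.tar.gz):

# Final build step: snapshot the workspace so it can be recorded as a build artifact
tar czf "workspace-$BUILD_NUMBER.tar.gz" --exclude="workspace-*.tar.gz" .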
Although not ideal, you could use the Backup Plugin.
The backup plugin allows you to back up the workspace. So, you could run the plugin after every build and it would archive the workspace.
Again, not ideal, but if this is a must-have requirement, and if it works with the way you're using Hudson, then it could work.
Depending on what you want to do, you have a few options.
If you need the last five workspaces for another job, you can use the Clone Workspace SCM plugin. Since I have never used it, I don't know whether you can access the archived workspace manually (through the UI) later.
Another option worth trying is to use the archive option and archive the whole workspace (I think the filter setting for the archive option would be **/*). You can then download the workspace in zipped form from every job run. The beauty of this solution is that the artifacts are cleaned up when you delete the particular job run (manually or through the job setting to delete old builds).
Of course you can also do it manually and run a copy as the last step of your build. You will need five directories (you can name them 1 to 5). First delete the oldest one and rename the others (4->5, 3->4, ...). The last step is to copy the workspace to the directory holding the newest copy (in our example, 1). This requires you to maintain your own archive job, which is why I prefer one of the above-mentioned options.
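A minimal sketch of that rotation as a final build step (the archive root is an assumption; WORKSPACE and JOB_NAME are provided by Hudson):

# Keep the five most recent workspace copies: newest in 1, oldest in 5
ARCHIVE_ROOT="/var/hudson/workspace-copies/$JOB_NAME"
mkdir -p "$ARCHIVE_ROOT"
rm -rf "$ARCHIVE_ROOT/5"
for i in 4 3 2 1; do
  if [ -d "$ARCHIVE_ROOT/$i" ]; then
    mv "$ARCHIVE_ROOT/$i" "$ARCHIVE_ROOT/$((i + 1))"
  fi
done
cp -a "$WORKSPACE" "$ARCHIVE_ROOT/1"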