I have a single Hudson job for X extension projects. I say X because the number might vary over time, and I would like to avoid creating one job for each.
I have already set up a job that compiles, tests, generates Javadocs and runs PMD analysis for all of them. Hudson is able to merge everything else, but I would like to merge the Javadocs as well.
Basically the project structure is as follows:
ProjectAExtension
ProjectBExtension
.
.
.
ProjectNExtension
extension-build.xml
I use the Extension suffix in the folder names to iterate through the folders. Any idea how to merge the Javadocs into one (there are no links between the projects)?
The best way I have found so far is to set up all the projects in Maven with a parent-child relationship. The Maven plug-ins take care of the rest.
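For the record, here is a minimal sketch of that setup, assuming the maven-javadoc-plugin and hypothetical group/artifact names: the extension projects become modules of an aggregator POM.

    <!-- Hypothetical aggregator pom.xml; group id and version are examples. -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>extensions-parent</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <modules>
        <module>ProjectAExtension</module>
        <module>ProjectBExtension</module>
        <!-- ...one entry per extension project... -->
      </modules>
    </project>

Running mvn javadoc:aggregate from the parent directory then produces one merged Javadoc tree for all modules (by default under target/site/apidocs).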
(disclaimer: I am completely new to mercurial and version control)
So I have a folder structure:
Programs
    CPPLib1
    CPPProject11
    CPPProject12
    CPPLib2
    CPPProject21
    CPPProject22
Each group of three is completely independent of the other group, but within each group the code is related and I'd like to manage it under version control as a whole (commit/extract everything in one transaction). As I understood from googling, I must have a repository for each group in their common parent (Programs), but I cannot have two different repositories there, right? Does that mean I must have this structure instead:
Programs
    Group1
        CPPLib1
        CPPProject11
        CPPProject12
    Group2
        CPPLib2
        CPPProject21
        CPPProject22
A related question: this site http://help.fogcreek.com/8169/using-more-than-one-repository says
"Since Mercurial and Git are Distributed Version Control Systems (DVCSs), you should at least use one separate repository per project, including shared projects and libraries."
So what does this advice mean? That I can't have a separate repository for each of
    CPPLib1
    CPPProject11
    CPPProject12
and manage them as a whole? I am confused.
For each of your project groups you'll need to create one repository in a separate directory. How you structure things beneath that is open to debate and depends a bit on your preferences.
You say that you want everything in a project group managed within a single repository. That means you can simply create a directory structure as you described, with the sub-projects residing in different directories within this repository.
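As a minimal sketch, assuming the layout from your question:

    $ cd Programs/Group1
    $ hg init                                      # one repository for the whole group
    $ hg add CPPLib1 CPPProject11 CPPProject12     # track all three sub-projects together
    $ hg commit -m "Import group 1 as a single repository"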
Within each group you can take this further and make each of these directories (library, programme 1, programme 2, ...) a separate repository, which in turn becomes a sub-repository of the main repository, as described in the link given by Lasse Karlsen (Subrepository).
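A rough sketch of that sub-repository variant (paths are hypothetical; see the Mercurial subrepository documentation for details): each sub-project becomes its own repository inside the group repository, and a .hgsub file at the top maps each subrepo path to its source.

    $ cd Programs/Group1      # the main group repository from above
    $ hg init CPPLib1         # each sub-project is its own repository
    $ hg init CPPProject11
    $ hg init CPPProject12
    $ cat .hgsub              # created by hand: maps path = source
    CPPLib1 = CPPLib1
    CPPProject11 = CPPProject11
    CPPProject12 = CPPProject12
    $ hg add .hgsub
    $ hg commit -m "Track the sub-projects as subrepositories"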
You could also handle it differently if you allow a more flexible layout and let go of checking out one group in its entirety: for instance, you could declare the library a sub-repository of each of the programmes which use it. This has the advantage that each programme directly defines which library version it depends on.
Further, before jumping to sub-repositories, you might want to look at the alternative implementation of guest repositories as well. They handle the dependency less strictly, so a failure to find the sub-repository is less fatal: https://bitbucket.org/selinc/guestrepo
In our project, we currently have two different configurations. The first one builds the assemblies. The other packages everything for InstallShield (including moving files to the right directories, etc.).
Now, we can't agree on whether it's better to move all the build steps into a single configuration and run them as one chain, or to keep the build process separate from the creation of the installation package.
Googling turns up guides on how to do either, but not on which way is better (and our confusion is mainly about the architecture of the configurations' order). We'll be using a few PowerShell steps to move a number of files between different directories due to certain local considerations. The total number of steps will come to five or fewer.
The suggestion I have is the following three configurations. They run separately and independently, and their build steps overlap (each configuration being a superset of the previous one):
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that, for example, the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
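To illustrate the superset idea, here is a sketch assuming MSBuild (all target, file and tool names are hypothetical): each configuration invokes exactly one target, and the target chaining makes each configuration a superset of the previous one.

    <!-- Hypothetical build.proj: each target is a superset of the previous one. -->
    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Target Name="Build">
        <MSBuild Projects="MySolution.sln" Properties="Configuration=Release" />
      </Target>
      <Target Name="Test" DependsOnTargets="Build">
        <Exec Command="nunit-console.exe Tests\bin\Release\MyTests.dll" />
      </Target>
      <Target Name="Package" DependsOnTargets="Test">
        <Exec Command="IsCmdBld.exe -p Installer\MyProduct.ism" />
      </Target>
    </Project>

The "Build" configuration would run msbuild build.proj /t:Build, "Build and test" /t:Test, and "Build, test and package" /t:Package.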
I would argue like this:
If you ever need to perform just one of the two steps, then leave them as separate steps.
This gives you the flexibility to run one, or the other, or both. For example, could it be that you need to just build the solution but not create the final installation package, say for local testing?
However, if you never ever use one of the steps separately (you always run both together), then I'd probably just merge them into one; having two separate steps doesn't make much sense to me.
We work with a lot of legacy code, and we are thinking about introducing some metrics for new code. Is it possible to have Findbugs and Checkstyle run on changed files only, instead of the complete project?
It would be nice to ensure that only files meeting a minimum level of quality get checked in, while the existing code base itself is not (yet) touched or evaluated, so as not to confuse people with thousands of issues.
In theory, it would be possible. You would use a shell script to parse the SVN (or whatever SCM) change logs after a given start date, identify the .java files in those change sets, and build two parameter values from them:
- The Findbugs Maven Plugin expects a comma-separated list of class (or package) names for the parameter onlyAnalyze, so you'll have to translate file names to fully qualified class names (this will get tricky when you're dealing with inner classes).
- The Maven Checkstyle Plugin is even worse: it expects a configuration file for its packageNamesLocation parameter. Unfortunately, only packages are allowed, not individual files, so you'll have to translate file names to packages.
In the above examples I assume that you are using Maven. I am pretty sure similar things can be done with Ant, but I wouldn't know the details.
I myself would probably use a Groovy script instead of a shell script to achieve the above results.
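A rough sketch of that shell-script approach, assuming SVN, a conventional src/main/java layout and a placeholder start revision (adapt all of these to your setup):

    #!/bin/sh
    # Collect the .java files changed since a given revision and turn them
    # into the comma-separated class list that onlyAnalyze expects.
    # Naive translation: does not handle inner classes.
    START_REV=1234   # placeholder: the revision to diff from

    CLASSES=$(svn diff --summarize -r $START_REV:HEAD \
      | awk '{print $2}' \
      | grep '\.java$' \
      | sed -e 's#^src/main/java/##' -e 's#/#.#g' -e 's#\.java$##' \
      | paste -s -d, -)

    mvn findbugs:check -Dfindbugs.onlyAnalyze="$CLASSES"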
Findbugs also has Ant tasks that can diff different Findbugs result files to extract just the deltas, i.e. report only the new bugs; see
http://findbugs.sourceforge.net/manual/datamining.html
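From memory of that manual (double-check the exact tool and option names there), the workflow is roughly to merge consecutive analysis results into a history database and then filter for warnings that first appeared in the newest run:

    computeBugHistory -output history.xml old-results.xml new-results.xml
    filterBugs -first <newest-version> history.xml > new-bugs-only.xml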
I'm thinking about the best way to structure jobs in Hudson, and what to divide into jobs. I'll use a .NET application as an example, as that is what I am working on now, but I think a lot of the ideas are generic.
These are the steps I want to perform, without yet dividing things into jobs but still thinking about what the dependencies are. (I hope you understand my notation: <- means depends on, and [X] = aaaaa means that aaaaa is a description of task [X].)
[C] = Check out the project, using Mercurial in this case.
[C] <- [S] = Run StyleCop on the source files to make sure they comply with our coding standard.
[C] <- [D] = Create documentation from our project using Doxygen or Sandcastle.
[C] <- [O] = Run the code tasks plugin to get a nice presentation of our TODO etc. comments.
[C] <- [B] = Build the solution using MSBuild with the Release target. The result in this case will be libraries compiled to DLL assembly files. We would like to archive these artifacts.
[B] <- [T] = Run NUnit tests on the library files.
[B] <- [F] = Use FxCop to get some nice static code analysis from the library files.
[B] <- [W] = Use the compiler warnings plugin on the build log to extract all warnings given during the compilation.
[D], [B] <- [R] = Release: create a release archive and upload it to a server.
If I split all of these up into different jobs:
How do I get the source code checked out in step [C] into steps [S], [D], [O] and [B], which all need it?
How do I get the MSBuild log file generated in step [B] into step [W]?
How do I get the DLL artifacts generated in step [B] into steps [T] and [F], which both need them?
My main problem, if I split all the steps up into different projects, is how to pass these things, these files, between the different projects in a nice manner (I could of course resort to hard-coding file paths, but that seems inflexible; then again, I might be wrong).
On the other hand, if I do split them into different projects, each project is less complex than if I crammed all these steps into a single one. With that many things in one project it might be hard to maintain, and I would also not be able to run disjunct projects in parallel, which I guess would speed up the whole process.
I have a different understanding of the 'job'. In my case I use Hudson for building several projects, and for some projects I have more than one job, but not for the reasons you describe above.
I use a project build tool like Ant or Maven for some very specific steps of my build, like your [O] or [D] tasks, for example. For the more generic steps I use Hudson plugins that handle those processes, like running unit tests or deploying artifacts.
I think you will find many of these plugins to be cross-language.
However, while Hudson is an amazing and powerful tool for continuous integration, I can say that the hard work is done by Maven and its plugins. Code coverage reports, findbugs reports, project site generation, javadoc generation and byte code instrumentation are a few of the tasks I rely on Maven to do.
So, I use different jobs when I want a different final objective for each build, not to chain together the elements that I want to end up in the final artifact set.
For example, I have one job that builds my app hourly and creates email reports in case of any errors, and a second job for the same project that generates a release. The latter is triggered manually, and I use it to generate all the docs, reports and artifacts that I have to assemble in order to have a stable release of my project.
Hope my view of Hudson use helps.
You list quite a few tasks for your job. It usually does not make sense to have one job for each task; it makes more sense to group them. For instance, in my experience it doesn't buy you anything to have a separate job for the checkout.
Remember, more jobs make a build process more brittle and make maintaining it harder and more complex. So first set your goals/strategy, and then divide the build process into individual jobs.
The philosophy I pursue is frequent checkins to the repository, where no checkin should break the build. This means I need to make sure the developer gets fast feedback after a checkin. So I need one job running C, B, T and W, as well as S, in this order. If you prefer, you can also run O and F with this job. What does this order buy you? You get fast feedback on your most important item: did the code compile? The second most important item is whether the unit tests do what they are supposed to do. Then you test against the less important items (compiler warnings and coding standards). After that you can run your statistics. Personally, I would run O (TODOs) and F (code analysis) in the nightly build, which runs a whole release. But you can also run the whole release with every checkin.
I would only separate the build/release process into smaller steps if the artifacts are needed faster. For me it is usually acceptable for the job to run for up to 15 minutes. Why? Because I still get fast feedback if it breaks (that could be in less than 2 minutes), since the job stops there and does not run the other (now useless) tasks. Sometimes I run jobs in parallel. For parallel execution, and when splitting a job, I have so far mostly used standard dependencies ("jobs to build after ...") to trigger dependent projects, but these days I mostly use the parametrized trigger plugin. I increasingly also use the join plugin, to run some steps in parallel but only continue once both parts have completed.
To pass files between two jobs, I used to use an external repository (just a shared directory on Windows) and passed the path to the files as a parameter to the next job. I have since switched and now use Hudson's archive-artifact function, passing the job-run URL to the next job so it can download the files over HTTP. This removes the technical problems of mounting Windows shares on Unix (even though CIFS does a pretty good job). In addition, you can use the Clone Workspace SCM plugin, which helps if you need the whole workspace in other jobs.
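To illustrate the archive-artifact approach (server, job and artifact names here are made up): the downstream job fetches the archived file with a plain HTTP client, either via the job-run URL passed as a parameter or via one of Hudson's permalinks, which follow the job/<name>/<build>/artifact/<path> pattern.

    # Hypothetical downstream build step:
    wget http://hudson.example.org/job/MyLib-Build/lastSuccessfulBuild/artifact/bin/MyLib.dll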
We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforce... should be checked out from http://svn/rep1.
- org.python... should be checked out from http://svn/rep2.
- com.company.product should be checked out from http://svn/rep3.
Right now we use an Ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code, and then call a second target to build everything. But in that case, how do I monitor changes in the three repositories?
I'm thinking that I could simply not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel that I'd miss the point of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repositories to be checked out, and Hudson monitors them all automatically. We then have the build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least in its newer versions, supports a concept called 'externals.'
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
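A minimal sketch of what that looks like for the layout above, using the placeholder URLs from the question and the 'dir URL' externals format of the linked Subversion 1.1 book:

    # Run inside the com.company.product working copy:
    svn propset svn:externals 'org.sourceforce http://svn/rep1/trunk
    org.python http://svn/rep2/trunk' .
    svn commit -m "Declare the other repositories as externals"
    svn update    # now also checks out/updates the externals

A single checkout of the main project then pulls the other two repositories along with it.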