Hudson build order not honoring dependencies on simultaneous checkin

So this is a similar question:
Triggering upstream project builds before downstream project
But I don't want the all-or-nothing behavior that guy is asking for; I just want Hudson to build the projects in the right order so we don't get false-alarm failed builds.
We have two projects, one depending on the other. If we do a simultaneous checkin to both projects (where the dependent project will fail unless the dependency is built first), Hudson seems to pick one at random, so sometimes the dependent project fails, then the dependency builds successfully, and then the retry of the dependent project succeeds.
Hudson is smart enough to figure out from the Maven POMs what is upstream and downstream, and it even knows to build the downstream projects when the upstream changes, but it doesn't know to build the upstream projects before the downstream ones when both have changed.
Is there a configuration setting I'm missing? "Build after other projects are built" appears to just be a manual version of what it will already do for upstream projects.

Under Advanced Project Options you have the quiet period. Set the quiet period for your first build to 5 seconds and for the second to 2 minutes. This should do the trick. You could also try 5 and 10 seconds; I chose 5 and 120 because Hudson checks for changes no more often than once a minute. I don't know how the SVN check is implemented, so 2 minutes ensures that your first project is checked at least once before the second build starts (assumption: both jobs poll SVN every minute).
You also need to make sure that both jobs don't run at the same time, so I would use Block build when upstream project is building (also under the advanced options) to ensure they never build simultaneously. You could also try that option on its own first; it may already be good enough.

If both projects belong to the same Maven parent project, then you need only one Hudson job for that parent project, and you don't need any up- or downstream dependencies.
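For illustration, a minimal parent (aggregator) POM along these lines is enough; the Maven reactor, and therefore the single Hudson job, builds the modules in dependency order. All ids and module names here are placeholders:

```xml
<!-- Hypothetical aggregator POM: one Hudson job builds this, and the
     Maven reactor orders the modules by their inter-dependencies. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <module>upstream-lib</module>   <!-- built first -->
    <module>downstream-app</module> <!-- depends on upstream-lib -->
  </modules>
</project>
```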

I am facing the same issue. Unfortunately it seems to be a known bug that the Block build when upstream project is building option does not work when the Hudson server is configured with multiple executors/nodes.
http://issues.hudson-ci.org/browse/HUDSON-5125
A workaround could be using the Naginator Plugin which can reschedule a build after a build failure.

Related

In a GitHub action, what is the most convenient way to use compiled C# DLLs across several jobs?

I have a GitHub action which contains the following jobs (dependent on each other, running in sequence):
Job 1: Compile and deploy.
Job 2: Run tests.
Job 1 compiles a C# solution. One project is an ASP.NET Core application that gets deployed to Azure. Another is an xUnit test project that will call the deployed Azure application.
The reason I want them to be two separate jobs is to be able to re-run them independently. Deployment is slow, so if deployment succeeds but tests fail, I want to be able to re-run the tests without re-running the deployment.
Note that in my sequence of jobs above, job 1 creates an artifact on disk that job 2 needs, namely the compiled assembly of the test project. This works as long as they run in sequence, and sometimes it also works if I re-run job 2 much later. But I suspect that my setup is not sane and has just accidentally worked for me so far.
If, for example, I have two feature branches that both trigger the action, I might have a race condition on the test assembly, and one run of the action might accidentally be running a DLL that was compiled from code taken from the other feature branch.
I have read (see this SO thread) that I can use upload-artifact and download-artifact to transfer artifacts between jobs. But my test assemblies have dependencies, so I would need to either upload many artifacts or zip the whole folder, upload it, download it, and unzip it. That is inconvenient.
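For what it's worth, actions/upload-artifact accepts a directory path and bundles its contents into a single artifact by itself, so the manual zip/unzip round trip may not be needed. A rough sketch of the two steps, with made-up paths and names:

```yaml
# In job 1: publish the compiled test project (with its dependencies)
# as a single artifact.
- uses: actions/upload-artifact@v4
  with:
    name: test-assemblies
    path: tests/MyApp.Tests/bin/Release/net8.0/

# In job 2: fetch the artifact and run the tests against the deployed app.
- uses: actions/download-artifact@v4
  with:
    name: test-assemblies
    path: testbin
- run: dotnet vstest testbin/MyApp.Tests.dll
```

Since artifacts are scoped to a single workflow run, a run triggered from one feature branch cannot pick up assemblies from another, which would also remove the race condition described above.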
Another option would be to check out the source code again and recompile the test assembly at the beginning of job 2. But that might be slow.
What is the best option here? Are there other options?

Teamcity + NUnit - exclude assembly from dll

Currently, I am working with a new set of unit tests in my project.
Let's say that on TeamCity I have a build that runs at night and one that runs after every commit.
Both contain the same build step: an NUnit runner which runs tests from three DLL files.
In one of them I have my new tests, which are all located in the same directory (same namespace).
I would like my new tests not to run in the per-commit build, which runs all the time.
I know that the NUnit command line allows excluding categories. Unfortunately, my tests are generated with SpecFlow, so adding a category to every scenario is impractical.
Is it possible to exclude tests in a specified namespace?
Yes. Use the option `--where "test != some.name.space"`.
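For example, with the NUnit 3 console runner (the DLL and namespace names below are placeholders; newer runners also accept a dedicated `namespace` keyword in the filter expression):

```sh
# Run all three test assemblies, excluding the new tests' namespace
nunit3-console.exe TestsA.dll TestsB.dll TestsC.dll --where "test != My.New.Tests"
```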

TestNG classes to run all the time and parallel

I want my TestNG classes to run all the time and in parallel. What I mean is: until I finish the execution of the test from Jenkins, I want my tests to keep running, and the test classes should run in parallel. Is this possible?
In Jenkins you can't, because it is not designed that way, and it's not designed that way because, whatever your client wishes, there is no value in running tests in a continuous loop. What you can do is set Jenkins to build and run the test profiles when someone checks code in to trunk, and then again every few hours on a cycle. You can also create branch builds for certain feature branches; these are useful for giving devs immediate feedback before they merge back to trunk.
You can run your tests concurrently if they don't share state (which they shouldn't). Assuming you are using Maven, if you want your tests to execute in a highly parallel fashion you can configure your Maven Surefire plugin to fork and execute parallel tests. Or, if you are using Gradle, set options.fork accordingly.
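As a minimal sketch with a reasonably recent Surefire (2.16 or later), the parallel settings can even be passed on the command line; the values here are illustrative:

```sh
# Fork one JVM per CPU core, and inside each fork run test classes
# in parallel on 4 threads (Surefire user properties).
mvn test -DforkCount=1C -Dparallel=classes -DthreadCount=4
```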

Enforcing one build for one commit in Jenkins/Hudson

We use Jenkins for incremental builds of our project on each commit to the SCM. We would like a separate build for every single commit. However, the naive approach (set up the SCM and use post-commit hooks to trigger a build) exhibits a problem in the following scenario:
Build is triggered.
While the build takes place (it can take several minutes), two separate commits to the SCM are made by two developers.
One new build is triggered. It receives the changes from both of the commits made during the previous build.
This "race condition" complicates finding which one of the commits has broken the build/introduced warnings.
The currently employed solution is checking for changes in one job ("scheduler job") and triggering another job to do the actual checkout and build.
Are there any proper solutions to this problem?
Not yet, there's a Feature Request covering this kind of build, but it's still open: Issue 673
Maybe it misses the point, but we have a pretty nice build process running here.
We use git as our source control system
We use gerrit as our review tool
We use the gerrit trigger to run builds on our jenkins server
We check for changes on the develop branch to run jenkins when a changeset is merged
In short, the ideal developer day is like this:
Developer 1 starts a new branch for his changes, based on our main develop branch.
Developer 1 commits as often as he likes.
When developer 1 thinks he has finished his job, he combines his changes into one change and pushes it to gerrit.
A new gerrit change is created and Jenkins builds exactly this change.
When there are no errors during the build, a review is made on this change.
When the review is submitted, the changeset is merged into the develop branch of the main repository (no change is merged into the develop branch without review).
Jenkins builds the merged version to be sure that there are no merge errors.
Now developer 2 joins the party and tries to do some work.
The process is exactly the same: both start working in their branches. Developer 1 is faster and his changes are merged into the develop branch. Now, before developer 2 can publish his changes, he has to rebase them on top of the changes made by developer 1.
So we are sure that the build process is triggered for every change made to our codebase.
We use this for our C# development, on Windows, not on Linux.
I don't believe what you'd like to do is possible. The "quiet period" mentioned by Daniel Kutik is actually used to tell Hudson/Jenkins how long to wait in order to allow other commits to the same project to be picked up. Meaning: if you set this value to 60 seconds and you've made a commit, it will wait for a minute before starting a new build, allowing time for other commits to be picked up as well (during that one minute).
If you use the rule "NO COMMIT on a broken build" and take it to its logical conclusion, you actually end up with "no commit on a broken build or a build in progress", in which case the problem you describe goes away.
Let me explain. Suppose you have two developers working on the same project and both of them try to commit (or push if you're using a DVCS). One of them is going to succeed and the other will fail and need to update before committing.
The developer who had to do the update knows from the commit history that the other commit was recent, and thus a build is in progress (even if it hasn't been checked out yet). They don't know whether that build is broken yet or not, so the only safe option is to wait and see.
The only thing that would stop you from using the above approach is a build that takes so long that your developers never get a chance to commit (it's always building). That is then a driver to split your build into a pipeline of multiple steps, so that the post-commit job takes no more than 5 minutes, and ideally 1 minute.
I think what might help is to set the Quiet Period (Jenkins > Manage Jenkins > Configure System) to 0 and the SCM polling interval to a very short time. But even during that short interval there could be two commits. As of now, Jenkins does not have a feature to split a build into separate builds for multiple SVN commits.
Here is a tutorial about that topic: Quiet Period Feature.
As someone pointed out in Issue 673, you could try starting a parameterized build with the parameter being the actual git commit you want to build, in combination with a VCS commit hook.
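A minimal sketch of such a hook, assuming a parameterized Jenkins job with a GIT_COMMIT string parameter; the job name, URL, and credentials are hypothetical:

```sh
#!/bin/sh
# git post-receive hook: queue one parameterized build per pushed commit,
# oldest first, so every commit gets its own build.
while read oldrev newrev refname; do
  for commit in $(git rev-list --reverse "$oldrev..$newrev"); do
    curl -s -X POST \
      --user trigger-user:api-token \
      --data-urlencode "GIT_COMMIT=$commit" \
      "https://jenkins.example.com/job/my-job/buildWithParameters"
  done
done
```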

How do you prevent a Hudson slave from archiving artifacts?

It turns out our slaves spend a considerable amount of time moving the archived artifacts back to the master Hudson node; it at least triples the duration of the build. It would be nice if there were a way to prevent this. However, setting the maximum number of builds to keep has no influence at all. Is there another way to prevent sending the results back to the central Hudson master?
Note that I actually don't have the archive artifacts option checked. However, the slave is still 'archiving' whatever it finds to the master:
[HUDSON] Archiving .../pom.xml to .../pom.xml
[HUDSON] Archiving .../...-0.1.3-SNAPSHOT.jar to .../...-0.1.3-SNAPSHOT.jar
... with the second path in every line always being a location on the master. Is this a bug? Is there a workaround?
Maven jobs have an option for not archiving artifacts in the advanced options of the Maven section, that is, separate from the "Archive Artifacts" publisher. By default, Maven jobs will archive the Maven artifacts of a module automatically, regardless of the "Archive Artifacts" publisher settings. The advanced option for Maven projects was added a couple of months ago, if I remember correctly.
It sounds like you don't need the archived artifacts at all, so check the archive artifacts option for your jobs: if it is unchecked and Hudson still copies the artifacts to the master only to scrap them right away, open a bug report with Hudson.
If you need some of them, play around with the advanced options for archiving artifacts; they offer an include as well as an exclude option.
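The include/exclude fields take Ant-style glob patterns, for example (the patterns here are illustrative):

```
Include: **/target/*.jar
Exclude: **/*-sources.jar, **/*-javadoc.jar
```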