TeamCity + NUnit - exclude tests from a DLL by namespace

Currently, I am working on a new set of unit tests in my project.
Let's say that on TeamCity I have one build that runs nightly and one that runs after every commit.
Both contain the same build step: an NUnit runner that runs tests from three DLL files.
One of these DLLs contains my new tests, which all live in the same directory (and the same namespace).
I would like my new tests not to run in the per-commit build, which is triggered all the time.
I know that the NUnit command line allows excluding categories. Unfortunately, my tests are generated using SpecFlow, and adding a category to every scenario is impractical.
Is it possible to exclude tests in a specified namespace?

Yes. Use the option `--where "test != some.name.space"`.
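For completeness, a sketch of the full console invocation, assuming the NUnit 3 console runner and a hypothetical namespace `My.New.Tests` for the new SpecFlow tests (the DLL names are placeholders too). The `!~` operator in the test selection language means "does not match", so every test whose full name contains that namespace is excluded; depending on your console version, `namespace != 'My.New.Tests'` may also work:

```
nunit3-console.exe TestsA.dll TestsB.dll TestsC.dll --where "test !~ 'My.New.Tests'"
```

In TeamCity, this would typically go into the NUnit runner step's additional command line parameters.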

Related

In a GitHub action, what is the most convenient way to use compiled C# DLLs across several jobs?

I have a GitHub action which contains the following jobs (dependent on each other, running in sequence):
1. Compile and deploy.
2. Run tests.
Job 1 compiles a C# solution. One project is an ASP.NET Core application that gets deployed to Azure. Another is an xUnit test project that will call the deployed Azure application.
The reason I want them to be two separate jobs is to be able to re-run them independently. Deployment is slow, so if deployment succeeds but tests fail, I want to be able to re-run the tests without re-running the deployment.
Note that in my sequence of jobs above, job 1 creates an artifact-on-disk that job 2 needs, namely the compiled assembly of the test project. This works as long as they are run in sequence, and sometimes it also works if I re-run job 2 much later. But I suspect that my setup is not sane. I suspect that it has just accidentally worked for me so far.
If, for example, I have two feature branches that both trigger the action, I might have a race condition on the test assembly, and one run of the action might accidentally be running a DLL that was compiled from code taken from the other feature branch.
I have read (see this SO thread) that I can do upload-artifact and download-artifact in order to transfer artifacts between jobs. But my test assemblies have dependencies, so I would need to either upload many artifacts or zip the whole folder, upload it, download it and unzip it. That is inconvenient.
Another option would be to check out the source code again and recompile the test assembly at the beginning of job 2. But that might be slow.
What is the best option here? Are there other options?
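For what it's worth, `upload-artifact` accepts a whole directory and zips it for you, so the "zip the folder, upload, download, unzip" dance is one step on each side. A minimal sketch, assuming hypothetical project paths and current versions of the artifact actions:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # publish gathers the test DLL plus all of its dependencies into one folder
      - run: dotnet publish MyApp.Tests/MyApp.Tests.csproj -c Release -o test-bin
      # ... deploy the ASP.NET Core app to Azure here ...
      - uses: actions/upload-artifact@v4
        with:
          name: test-assemblies
          path: test-bin
  test:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: test-assemblies
          path: test-bin
      # dotnet test can run an already-compiled test assembly directly
      # (assuming the project references the xUnit Visual Studio test adapter)
      - run: dotnet test test-bin/MyApp.Tests.dll
```

Because artifacts are scoped to a single workflow run, two feature branches triggering the action at the same time cannot overwrite each other's assemblies, which removes the race condition described above.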

TestNG classes to run all the time and parallel

I want my TestNG classes to run all the time and in parallel. What I mean is: until I finish the execution of the tests from Jenkins, I want my tests to keep running, and the test classes should run in parallel. Is this possible?
In Jenkins you can't, because it is not designed that way. It's not designed that way because, your client's wishes notwithstanding, it is not logical to do so. What you can do is set Jenkins to build and run the test profiles when someone checks in code to trunk, and then again every few hours on a cycle. There is no value in running tests in a continuous loop. You can also create branch builds for certain feature branches; these are useful for giving immediate feedback to devs before they merge back to trunk.
You can run your tests concurrently if they don't share state (which they shouldn't). If you are using Maven and want your tests to execute in a highly parallel fashion, you can configure the Maven Surefire plugin to fork and execute parallel tests, as sketched below. Or, if you are using Gradle, set options.fork accordingly.
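As a sketch of the Surefire route (the plugin version and thread count are placeholders): with the TestNG provider, Surefire forwards `parallel` and `threadCount` to TestNG, so whole test classes run concurrently.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.2.5</version>
  <configuration>
    <!-- run whole test classes concurrently on 4 threads -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```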

Commit-based view of Jenkins builds

I would like to be able to present a view of Jenkins builds similar to the buildbot console view. With Jenkins out of the box, there appears to be no really good way to associate a commit with a build. You have to access the specific build to determine what commit it was building.
I would like to be able to show status on what commits have been tested in a particular branch, so we know if a commit was skipped or if the latest commit has not yet been tested.
I tried using the Jenkins API for this, but I found that I could only see the SHA1 hash for a git commit via the build itself, i.e. via http://server/job/job-name/388/api/json. So, the only way I can see to take a commit and find builds for it is to iterate through every build in a job and retrieve its associated build info. This is certainly not going to be efficient or fast. Is there another way to do it?
Imperfect answer: put the "revision number" you care about in the package name of all related artifacts, and use the "fingerprint" feature.
For example: my "product package" artifacts have a revision number, and if I carried that through to the "test package" artifact (which includes the unpacked product artifact), you would be able to track that revision number via the "artifact/fingerprint" feature and show which test jobs used it. Without that, you can't tell with a single click which test used which commit.
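On the question's efficiency concern: you don't have to fetch every build one by one. The Jenkins JSON API accepts a `tree` parameter, and the Git plugin exposes each build's revision in its actions, so one request can return the SHA1 for every build of a job (assuming the Git plugin is installed; server and job names are placeholders):

```
curl "http://server/job/job-name/api/json?tree=builds[number,actions[lastBuiltRevision[SHA1]]]"
```

From that single response you can build a SHA1-to-build index on your side and answer "which builds tested this commit" with a lookup instead of one request per build.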

Enforcing one build for one commit in Jenkins/Hudson

We use Jenkins for doing incremental builds of our project on each commit to the SCM. We would like to get a separate build for every single commit. However, the naive approach (set up the SCM and use post-commit hooks to trigger a build) exhibits a problem in the following scenario:
Build is triggered.
While the build takes place (it can take up to several minutes), two separate commits to the SCM are made by two developers.
One new build is triggered. It receives the changes from both of the commits made during the previous build.
This "race condition" complicates finding which one of the commits has broken the build/introduced warnings.
The currently employed solution is checking for changes in one job ("scheduler job") and triggering another job to do the actual checkout and build.
Are there any proper solutions to this problem?
Not yet; there's a feature request covering this kind of build, but it's still open: Issue 673
Maybe it misses the point, but we have a pretty nice build process running here.
We use Git as our source control system
We use Gerrit as our review tool
We use the Gerrit Trigger plugin to run builds on our Jenkins server
We check for changes on the develop branch, so that Jenkins runs when a changeset is merged
In short, the ideal developer day looks like this:
Developer 1 starts a new branch for his changes, based on our main develop branch
Developer 1 commits as often as he likes
When developer 1 thinks he has finished his job, he combines his changes into one change and pushes it to Gerrit
A new Gerrit change is created and Jenkins tries to build exactly this change
When there are no errors during the build, a review is made on this change
When the review is submitted, the changeset is merged into the develop branch of the main repository (no change is merged into the develop branch without review)
Jenkins builds the merged version to be sure that there are no merge errors
Now developer 2 joins the party and tries to do some work.
The process is exactly the same: both start working in their branches. Developer 1 is faster and his changes are merged into the develop branch. Now, before developer 2 can publish his changes, he has to rebase his changes on top of the changes made by developer 1.
So we are sure that the build process is triggered for every change made to our codebase.
We use this for our C# development - on Windows, not on Linux.
I don't believe what you'd like to do is possible. The "quiet period" mentioned by Daniel Kutik is actually used to tell Hudson/Jenkins how much time to wait, in order to allow other commits to the same project to be picked up. Meaning -- if you set this value to 60 seconds and you've made a commit, it will wait for a minute before starting a new build, allowing time for other commits to be picked up as well (during that one minute).
If you use the rule "NO COMMIT on a broken build" and take it to its logical conclusion, you actually end up with "no commit on a broken build or a build in progress", in which case the problem you describe goes away.
Let me explain. If you have two developers working on the same project and both of them try to commit (or push, if you're using a DVCS), one of them is going to succeed and the other will fail and need to update before the commit.
The developer who had to do the update knows from the commit history that the other commit was recent, and thus that a build is in progress (even if it hasn't been checked out yet). They don't know if that build is broken yet or not, so the only safe option is to wait and see.
The only thing that would stop you from using the above approach is the build taking too long, in which case you might find that your developers never get a chance to commit (it's always building). That is then a driver to split your build into a pipeline of multiple steps, so that the post-commit job takes no more than 5 minutes, and ideally 1 minute.
I think what might help is to set the Quiet Period (Jenkins > Manage Jenkins > Configure System) to 0 and the SCM polling to a very short interval. But even during that short interval there could be two commits. As of now, Jenkins does not have a feature to split a build into single builds for multiple SVN commits.
Here is a tutorial about that topic: Quiet Period Feature.
As pointed out by someone in Issue 673, you could try starting a parametrized build with the parameter being the actual git commit you want to build, in combination with a VCS commit hook.
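A sketch of that combination, assuming a server-side Git post-receive hook, a hypothetical Jenkins job named `per-commit-build` with a string parameter `COMMIT`, and with authentication omitted for brevity:

```sh
#!/bin/sh
# post-receive: queue one parametrized Jenkins build per pushed commit
while read oldrev newrev refname; do
  # --reverse queues builds in commit order, oldest first
  for commit in $(git rev-list --reverse "$oldrev..$newrev"); do
    curl -X POST "http://jenkins/job/per-commit-build/buildWithParameters?COMMIT=$commit"
  done
done
```

The job itself would then check out exactly `$COMMIT` instead of the branch head, so every commit gets its own build.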

Hudson build order not honoring dependencies on simultaneous checkin

So this is a similar question:
Triggering upstream project builds before downstream project
But I don't want the all-or-nothing behavior that guy is asking for; I just want Hudson to build the projects in the right order so we don't get false-alarm failed builds.
We have two projects, one depending on the other. If we do a simultaneous checkin to both projects (where the dependent project will fail without the dependency being built first), Hudson seems to pick one at random. So sometimes we get a failed build, then the other project builds successfully, and then the retry on the failed project succeeds.
Hudson is smart enough to figure out from the Maven POMs what is upstream and downstream, and it even knows to build the downstream projects when the upstream changes, but it doesn't know to build the upstream projects before the downstream ones when both have changed.
Is there a configuration setting I'm missing? "Build after other projects are built" appears to just be a manual version of what it will already do for upstream projects.
Under Advanced Project Options you have the quiet period. Set the quiet period for your first build to 5 seconds and for the second to 2 minutes. This should do the trick. You can also try 5 and 10 seconds; I just chose 5 and 120 because Hudson will not check for changes more often than every minute (I don't know how the SVN check is implemented). So 2 minutes ensures that your first project will be checked at least once before the second build starts (assumption: both jobs check for SVN changes every minute).
You also need to make sure that both jobs are not running at the same time. So I would use "Block build when upstream project is building" (also under advanced options) to ensure that they don't build simultaneously. You can also try only this option first; maybe it alone is already good enough.
If both projects belong to the same Maven parent project, then you need only one Hudson job for that parent project, and you don't need any up- or downstream dependencies, as sketched below.
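A minimal sketch of that layout (all names hypothetical): with both projects as modules of one parent POM, Maven's reactor orders them by their declared dependencies, so a single Hudson job always builds the upstream module before the downstream one.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <!-- the reactor builds library before app because app depends on library -->
    <module>library</module>
    <module>app</module>
  </modules>
</project>
```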
I am facing the same issue. Unfortunately, it seems to be a known bug that the "Block build when upstream project is building" option does not work when the Hudson server is configured with multiple executors/nodes.
http://issues.hudson-ci.org/browse/HUDSON-5125
A workaround could be to use the Naginator Plugin, which can reschedule a build after a build failure.