Hudson detecting failure when builds are succeeding

Our Hudson build is succeeding, but Hudson is somehow reporting a failure.
What criteria does Hudson use to determine failure and success?
BTW, our build updates a .xml file with the results of the test. I've checked, and it appears that Hudson is correctly updating this file (the modification time matches).
Thanks

Click on the link for the build that failed (#123 for instance), and then go to the Console Output link on the left. That log will tell you which step of the build failed.
Note that just because the build of the software succeeded doesn't mean the entire build process succeeded. You might have a final step that, for instance, deletes some intermediate, unnecessary files. If one of those files was in use and couldn't be deleted (causing the batch file to return an error), then that step failed, and as a result the entire build is marked as a failure.
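For illustration, here is a minimal sketch of such a cleanup step written as an "Execute shell" build step (the path is hypothetical). Hudson marks a build as failed whenever a build step exits with a nonzero status:

    # Hypothetical cleanup step. If a file is locked and rm fails,
    # it exits nonzero and Hudson marks the whole build as FAILED,
    # even though compilation succeeded earlier.
    rm build/intermediate/*.tmp

    # To treat cleanup as best-effort instead, swallow the exit code:
    rm build/intermediate/*.tmp || true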

In a GitHub action, what is the most convenient way to use compiled C# DLLs across several jobs?

I have a GitHub Actions workflow which contains the following jobs (dependent on each other, running in sequence):
1. Compile and deploy.
2. Run tests.
Job 1 compiles a C# solution. One project is an ASP.NET Core application that gets deployed to Azure. Another is an xUnit test project that will call the deployed Azure application.
The reason I want them to be two separate jobs is to be able to re-run them independently. Deployment is slow, so if deployment succeeds but tests fail, I want to be able to re-run the tests without re-running the deployment.
Note that in my sequence of jobs above, job 1 creates an artifact on disk that job 2 needs, namely the compiled assembly of the test project. This works as long as they run in sequence, and sometimes it also works if I re-run job 2 much later. But I suspect that my setup is not sane and has just accidentally worked for me so far.
If, for example, I have two feature branches that both trigger the action, I might have a race condition on the test assembly, and one run of the action might accidentally be running a DLL that was compiled from code taken from the other feature branch.
I have read (see this SO thread) that I can use upload-artifact and download-artifact to transfer artifacts between jobs. But my test assemblies have dependencies, so I would need to either upload many artifacts or zip the whole folder, upload it, download it, and unzip it. That is inconvenient. (A sketch of this approach follows the question.)
Another option would be to check out the source code again and recompile the test assembly at the beginning of job 2. But that might be slow.
What is the best option here? Are there other options?
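For what it's worth, a minimal sketch of the upload/download approach (project names, paths, and the dotnet commands are illustrative assumptions). Note that actions/upload-artifact bundles an entire directory into a single artifact, so the test assembly and its dependencies travel together without manual zipping:

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Publish the xUnit test project together with all its dependencies.
          - run: dotnet publish Tests/Tests.csproj -c Release -o out/tests
          - uses: actions/upload-artifact@v4
            with:
              name: test-assemblies    # one artifact for the whole folder
              path: out/tests
          # (Azure deployment steps omitted.)

      test:
        needs: deploy
        runs-on: ubuntu-latest
        steps:
          - uses: actions/download-artifact@v4
            with:
              name: test-assemblies
              path: out/tests
          # Run the already-compiled tests; no checkout or rebuild needed.
          - run: dotnet vstest out/tests/Tests.dll

Because artifacts are scoped to a single workflow run, re-running the test job pulls the assemblies that very run produced, which also sidesteps the cross-branch race described above.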

Checking for Out of Sync Folders

Every day when I start Visual Studio, RTC (Rational Team Concert) begins with a lengthy 'Checking for Out Of Sync Folders' process.
I am not sure what is causing this. Can someone help me remedy it?
As mentioned here, this error message should be improved. It should say something like:
"Files in your sandbox are out of sync with your repository workspace".
The official documentation includes:
A sandbox and repository workspace can become out-of-sync when a network failure or manual cancellation interrupts an update operation.
When this happens, the next attempt to perform an operation (such as check in, accept, suspend, or resume) that updates both a sandbox and repository workspace generates a warning that the sandbox and repository workspace are out-of-sync.
(Image from "Loading Content from a Jazz Source Control Repository in Rational Team Concert 2.0, 3.0 and 3.0.1")
If any of the out-of-sync projects contain unresolved local changes in the sandbox, by default the reload operation preserves these changes.
As I advised here, if you don't have any pending changes, you can first try a "refresh sandbox". Then close everything, re-open Visual Studio, and check that the "Checking for Out Of Sync Folders" step no longer runs.
If the first workaround is not enough, try "reload projects out of sync" (again, close and restart to see whether the issue persists).

Commit-based view of Jenkins builds

I would like to be able to present a view of Jenkins builds similar to the buildbot console view. With Jenkins out of the box, there appears to be no good way to associate a commit with a build. You have to access the specific build to determine which commit it was building.
I would like to be able to show status on what commits have been tested in a particular branch, so we know if a commit was skipped or if the latest commit has not yet been tested.
I tried using the Jenkins API for this, but I found that I could only see the SHA1 hash for a git commit via the build itself, i.e. via http://server/job/job-name/388/api/json. So the only way I can see to take a commit and find builds for it is to iterate through every build in a job and retrieve its associated build info. That is certainly not going to be efficient or fast. Is there another way to do it?
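For illustration, the per-build lookup described above can at least be collapsed into a single request using the JSON API's tree parameter (the server and job names are the placeholders from the question; the lastBuiltRevision field is exposed by the git plugin):

    # One request returning every build's number plus the git SHA1 it built,
    # instead of fetching /job/job-name/<n>/api/json once per build.
    # (-g stops curl from treating the square brackets as globs.)
    curl -sg "http://server/job/job-name/api/json?tree=builds[number,actions[lastBuiltRevision[SHA1]]]"

You still have to scan the result for the commit you care about, but it is one round trip instead of one per build.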
Imperfect Answer: put the "revision number" you care about in the package name of all related artifacts, and use the "fingerprint" feature.
For example: my "product package" artifacts have a revision number, and if I carried that through to the "test package" artifact (which includes the unpacked product artifact), you would be able to track that revision number via the "artifact/fingerprint" feature and show which test jobs used it. As it stands, you can't tell with a single click which test used which commit.
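A rough sketch of that naming idea, assuming a shell packaging step in a git-based job (GIT_COMMIT is the environment variable set by the git plugin; the paths are illustrative):

    # Bake the commit into the artifact name so the fingerprint feature
    # can trace it from the product job into downstream test jobs.
    zip -r "product-${GIT_COMMIT}.zip" dist/

With fingerprinting enabled on both the producing and consuming jobs, Jenkins then links every build that produced or used that exact file.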

Hudson build order not honoring dependencies on simultaneous checkin

So this is a similar question:
Triggering upstream project builds before downstream project
But I don't want the all-or-nothing behavior that asker is after; I just want Hudson to build the projects in the right order so we don't get false-alarm failed builds.
We have two projects, one depending on the other. If we check in to both projects simultaneously (where the dependent project will fail unless the dependency is built first), Hudson seems to pick one at random, so sometimes we get a failed build, then the other project builds successfully, and then the retry of the failed project succeeds.
Hudson is smart enough to figure out from the Maven POMs what is upstream and downstream, and it even knows to build the downstream project when the upstream changes, but it doesn't know to build the upstream before the downstream when both have changed.
Is there a configuration setting I'm missing? "Build after other projects are built" appears to just be a manual version of what it will already do for upstream projects.
Under Advanced Project Options you have the quiet period. Set the quiet period of your first build to 5 seconds and that of the second to 2 minutes. This should do the trick. You can also try 5 and 10 seconds; I just chose 5 and 120 since Hudson will check for changes no more often than every minute, and I don't know how the SVN check is implemented. Two minutes ensures that your first project is checked at least once before the second build starts (assumption: both jobs check for SVN changes every minute).
You also need to make sure that both jobs don't run at the same time, so I would enable Block build when upstream project is building (also under the advanced options) to ensure they never build simultaneously. You can also try this option on its own first; it may already be enough.
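For reference, a rough sketch of where those two settings end up in each job's config.xml (element names as stored by Hudson/Jenkins job configurations; the values follow the suggestion above):

    <!-- upstream job -->
    <project>
      <quietPeriod>5</quietPeriod>
    </project>

    <!-- downstream job -->
    <project>
      <quietPeriod>120</quietPeriod>
      <blockBuildWhenUpstreamBuilding>true</blockBuildWhenUpstreamBuilding>
    </project>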
If both projects belong to the same Maven parent project, then you need only one Hudson job for that parent project, and you don't need any upstream or downstream dependencies.
I am facing the same issue. Unfortunately, it seems to be a known bug that the Block build when upstream project is building option does not work when the Hudson server is configured with multiple executors/nodes.
http://issues.hudson-ci.org/browse/HUDSON-5125
A workaround could be to use the Naginator Plugin, which can reschedule a build after a build failure.

Remove artifacts from Hudson, but retain logs in order to link to builds

We have the following requirements for our Hudson setup:
We would like to directly link to all builds that have been executed
The effective number of artifacts should be limited
It is possible to limit the maximum number of builds kept per job in Hudson (see this question). This option effectively removes old artifacts. The problem is that it also removes all other information related to the build.
Is there a way to retain direct links to completed builds via http://${hudson}/job/${jobname}/${buildnumber}, even if the artifacts were removed? Sometimes it is useful to commit a fix and link to the corresponding build error.
There's a checkbox under the 'advanced' button when configuring 'Archive the artifacts' that allows you to delete all but the most recent artifacts. The build history is retained, but the older artifacts are deleted.
There is an open issue for keeping the artifacts from the last N builds - see issue 834
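For reference, a sketch of how such a setup looks in a job's config.xml in versions where the discard options expose separate artifact limits (element names as stored by Hudson/Jenkins; -1 means "no limit"):

    <!-- Keep every build record (so links and console logs survive),
         but retain artifacts only for the most recent build. -->
    <logRotator>
      <daysToKeep>-1</daysToKeep>
      <numToKeep>-1</numToKeep>
      <artifactDaysToKeep>-1</artifactDaysToKeep>
      <artifactNumToKeep>1</artifactNumToKeep>
    </logRotator>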