Jenkins multi-configuration project doesn't aggregate test results - junit

I have a multi-configuration project set up to run Firefox and IE Selenium tests. However, it's not aggregating the test results.
If I look at the Project Page I see this:
If I go into a specific build I see this:
But if I click on one of those specific configuration names I see this:
Is there a way to get these results to aggregate? (I have the "Aggregate downstream test results" checkbox checked in the project configuration.)

This is currently a Jenkins bug. Kohsuke Kawaguchi specifically replied to this bug on Aug 31, 2011 in the IRC channel (logs - start at [21:54:47]). Here are the workaround responses from those two links:
From the Bug page >>
You can work around this by explicitly specifying the jobs to aggregate, rather than relying on the downstream-builds logic, and by specifying the matrix axes (in quotes) explicitly - e.g., NonMatrixJob,"MatrixJob/label=l_centos5_x86" - the quotes guard against commas in your axis values.
From the IRC Log >>
I did verify that explicitly specifying the list of jobs to aggregate test results from and using the fully qualified job name, including axis, did the trick, but it's a shame that I can't get the auto-discovery from the downstream jobs working.
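For the setup in the question, the explicit list in the aggregation configuration would look something like this (the job name and browser axis here are assumptions, since the question doesn't show them):
"SeleniumTests/browser=firefox","SeleniumTests/browser=iexplore"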

Related

Get multiple coverage reports in coveralls for a single repository

Is it possible to get separate coverage reports for front-end and back-end tests for a single repository?
It seems one possible way is to concatenate the lcov reports into one and then ship to coveralls, as mentioned in this question.
However, I wanted to know if there is a way to see separate code coverage reports for front-end and back-end or provide two lcov files to coveralls. If so, how?
If you refer to Coveralls' API documentation, you'll see that their Jobs API supports an optional parameter called service_number. By default this option is intended to match the build number of the CI system, but there's no reason you couldn't use it to track multiple coverage reports per CI build.
One way you could do that would be to take the actual CI build number, double it, and use that as the "backend" build number, then add one to get the "frontend" build number. The doubling just ensures that you don't end up posting to the same build number more than once. Of course, you can use another method for generating these IDs - the API technically takes a string, so you might be able to submit e.g. 234-frontend and 234-backend.
In theory, you could also use the required service_name parameter to the same effect. The catch there is that some of the reserved service names ("travis-ci", "travis-pro", or "coveralls-ruby") have special features, which you may be reluctant to sacrifice.
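As a rough illustration of that doubling scheme, here is a minimal Python sketch against the Coveralls Jobs API; the repo token, the "custom-ci" service name, the helper name, and the empty source_files payloads are all placeholders, not anything the API prescribes beyond its documented fields:

import json
import requests

def post_coverage(repo_token, service_number, source_files):
    # Documented Jobs API fields; "custom-ci" is an arbitrary,
    # non-reserved service name picked for this example.
    payload = {
        "repo_token": repo_token,
        "service_name": "custom-ci",
        "service_number": service_number,
        "source_files": source_files,
    }
    # The API expects the JSON payload in a multipart field named json_file.
    resp = requests.post("https://coveralls.io/api/v1/jobs",
                         files={"json_file": json.dumps(payload)})
    resp.raise_for_status()

ci_build = 234       # the real CI build number
backend_files = []   # placeholder: entries parsed from the back-end lcov report
frontend_files = []  # placeholder: entries parsed from the front-end lcov report
post_coverage("REPO_TOKEN", str(2 * ci_build), backend_files)
post_coverage("REPO_TOKEN", str(2 * ci_build + 1), frontend_files)

If the string-ID variant pans out, str(2 * ci_build) could just as well be "%d-backend" % ci_build.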

How can I check for downstream components in an SSIS custom transform?

I am working on a custom SSIS component that has 4 asynchronous outputs. It works just fine, but now I have a user request for an enhancement and I am not sure how to handle it. They want to use the component in another context where only 2 of the 4 outputs will be well defined. I foolishly said that this would be trivial to support: I planned to just check whether the two "undefined" streams were even connected, and if not, skip that part of the processing.
My problem is that I cannot figure out at run time whether an output is connected. I had hoped that the output pipeline or output buffer would be missing, but it doesn't look like that is the case; even when they are not hooked up, the output and buffer are present.
Does anyone know where I should be looking to see if an output has a downstream consumer or not?
Thanks!
Edit: I was never able to figure out how to do this reliably, so I ended up making this behaviour configurable by the user. It is not automatic as I had hoped, but the differences I found between the BIDS environment and the DTExec environment led me to conclude that a component probably should not make assumptions about the component graph it is embedded in.

How to remove the useless unconfigured items in matrix build of Jenkins/Hudson

I use Jenkins to set up my multi-configuration build, which produces a snapshot build.
The Axes I use are:
Labels: Mac10.6, Mac10.7, and Windows
Platforms: Mac10.6, Mac10.7, WinXP, Win7, and WinServer2008
Tasks: _App_Installer_, ATS, and so on
It clearly makes no sense for WinXP to build on the Mac10.6 label. Although such a combination is shown as disabled/unconfigured, it still confuses people.
So is there any way to remove the useless configuration?
In the matrix/multi-configuration project configuration there is a Combination Filter field that lets you restrict which of the combinations from the combination checkboxes actually get built.
If you only want to execute the Windows label with the Windows platforms:
label=="Windows" && (platform=="WinServer2008" || platform=="WinXP" || platform=="Win7")
Of course in your case you'll have to handle a huge expression but it's doable.
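For the axes in the question, the full expression might look something like this (the axis and value spellings are assumptions based on the question):
(label=="Windows" && (platform=="WinXP" || platform=="Win7" || platform=="WinServer2008")) ||
(label=="Mac10.6" && platform=="Mac10.6") ||
(label=="Mac10.7" && platform=="Mac10.7")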
I hope this helps you!
I had a similar problem. The workaround (by no means complete) was the following:
Separate builds for unrelated platforms (Mac, iOS, and Windows, for example) into different jobs.
Conduct a code review with the team explaining to them how matrix builds work.
But the truth of the matter is that I, too, would like to see the matrix entries that do not pass the filter shown as blank rather than disabled.

How to make Hudson aggregate results provided in build artifacts over several builds

I have a hudson job that performs a stress test, torturing a virtual machine for several hours with some CPU- and IO-intensive tasks. The build scripts write a few interesting results into several files which are then stored as build artifacts. For example, one result is the time it took to perform certain operations.
I need to monitor the development of these results. For example, I need to know when the time for certain operations suddenly increases. So I need to aggregate these results over several (all?) builds. The ideal scenario would be if I could download the aggregated data from hudson.
I've been thinking about several possibilities to do this, but they all seem quite complicated. That's when I thought someone else might have had that problem already.
Maybe there already are some plugins doing this?
If you can write a script to extract the relevant numbers from the log files, you can use the Plot Plugin to visualize the data. We use this for simple stuff like tracking the executable size of build artifacts.
The Plot Plugin is more manual than the PerfPublisher Plugin mentioned by @Tao, but it might be easier to integrate, depending on how much data munging the PerfPublisher Plugin requires.
Update: Java-style properties files (which are used as input to the Plot Plugin) are just simple name-value pairs in a text file, e.g.:
YVALUE=1234
Here's a build script that shows a (very stupid) example:
echo YVALUE=$RANDOM > buildtime.properties
This example plots a random number with each build.
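If the interesting numbers live in a log file rather than a properties file, a small extraction script fits here too. A minimal Python sketch, where the log file name, the message pattern, and the output file name are all made-up examples:

import re

# Pull "operation took N seconds" out of the stress-test log (hypothetical
# log name and message format) and emit a Plot Plugin properties file.
with open("stress_test.log") as log:
    match = re.search(r"operation took (\d+) seconds", log.read())

if match:
    with open("optime.properties", "w") as out:
        out.write("YVALUE=%s\n" % match.group(1))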
I have not personally used this plugin yet, but it might fit your need if you can generate the XML file in this plugin's format, as described in its documentation.
PerfPublisher Plugin
What about creating the results as JUnit results (XML files), so they can be recorded by Hudson and aggregated across different builds?
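For instance, a script could wrap each measured operation in a test case and fail it when a threshold is exceeded. A minimal Python sketch, where the suite name, case name, and the 300-second threshold are invented for illustration:

import xml.etree.ElementTree as ET

elapsed = 287.4  # measured duration of the operation, in seconds (placeholder)
suite = ET.Element("testsuite", name="stress", tests="1")
case = ET.SubElement(suite, "testcase", name="copy_large_file", time=str(elapsed))
if elapsed > 300:
    # Report a failure so Hudson flags the build when the operation slows down.
    ET.SubElement(case, "failure", message="operation exceeded 300 s")
ET.ElementTree(suite).write("stress-results.xml",
                            xml_declaration=True, encoding="UTF-8")

Pointing Hudson's JUnit test result reporting at the generated file should then give you the per-build recording and trend view for free.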

Free text search integrated with code coverage

Is there any tool which will allow me to perform a free text search over a system's code, but only over the code which was actually executed during a particular invocation?
To give a bit of background, when learning my way around a new system, I frequently find myself wanting to discover where some particular value came from, but searching the entire code base turns up far more matches than I can reasonably assess individually.
For what it's worth, I've wanted this in Perl and Java at one time or another, but I'd love to know if any languages have a system supporting this feature.
You can generally twist a code coverage tool's arm and get a report that shows the paths that have been executed during a given run. This report should show the code itself, with the first few columns marked up according to the coverage tool's particular notation on whether a given path was executed.
You might be able to use this straight up, or you might have to preprocess it and either remove the code that was not executed, or add a new notation on each line that tells whether it was executed (most tools will only show path information at control points):
So from a coverage tool you might get a report like this:
T-  if(sometest)
    {
x       somecode;
    }
    else
    {
-       someother_code;
    }
The T- notation indicates that the if statement only ever evaluated to true, so only the first branch of the code executed. The x notation indicates that the line was executed.
You should be able to form a regex that matches only when the first column contains a T, F, or x so you can capture all the control statements executed and lines executed.
Sometimes you'll only get coverage information at each control point, which then requires you to parse the source file and mark the executed lines yourself. Not as easy, but not impossible either.
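As a sketch of that regex idea in Python (the marker-column convention, the report file name, and the search string all mirror the made-up report above):

import re
import sys

# Hypothetical filter: keep only lines whose first column carries an
# "executed" marker (T, F, or x, per the report above), then grep those
# lines for the value being hunted.
marker = re.compile(r"^[TFx]")

with open("coverage_listing.txt") as report:  # made-up report file name
    for line in report:
        if marker.match(line) and "interesting_value" in line:
            sys.stdout.write(line)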
Still, this sounds like an interesting question where the solution is probably more work than it's worth...
-Adam