Hudson CI: how to extract metrics from a report file

I'm using Hudson CI to automate the integration of my FPGA projects. In one of the build steps, I run a logic synthesis tool which produces a plain-text report file. The report contains a few metrics, such as the maximum frequency, which I would like to monitor over time. Here's how the maximum frequency appears in the report:
Minimum period: 5.720ns (Maximum Frequency: 174.821MHz)
How can I extract and monitor/chart such metrics in Hudson?

This question has been answered on the Hudson forum: http://www.eclipse.org/forums/index.php/mv/msg/452719/1007740/#msg_1007740
The solution is to use the Plot plugin.
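For reference, here is a minimal sketch of a build step that pulls the frequency out of the report and writes it where the Plot plugin can pick it up; the report path and output file name are assumptions, and it relies on the Plot plugin's "properties file" data series, which reads a single YVALUE entry per build:

    # extract_fmax.py - hypothetical helper run as a Hudson build step
    import re
    import sys

    # Path to the synthesis report produced by the build (assumption).
    report_path = sys.argv[1] if len(sys.argv) > 1 else "synthesis.rpt"

    with open(report_path) as f:
        report = f.read()

    # Matches e.g. "Minimum period: 5.720ns (Maximum Frequency: 174.821MHz)"
    match = re.search(r"Maximum Frequency:\s*([\d.]+)\s*MHz", report)
    if not match:
        sys.exit("Maximum Frequency not found in report")

    # The Plot plugin can chart a value read from a Java-style properties file.
    with open("fmax.properties", "w") as out:
        out.write("YVALUE=%s\n" % match.group(1))

Configure the Plot plugin with fmax.properties as a properties data series and it will chart one point per build.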

Related

Code coverage difference between LCOV reports

I have an assignment to display the difference in coverage data between two coverage reports. For example, suppose I already have a GCOV and corresponding LCOV file, and then I make some changes and generate new GCOV and LCOV files. Now I want to find out what the delta between these reports is, i.e. whether my latest code change covered more code or not. Is there any tool that can work that out, or what steps would you suggest I take?
I tried searching for tools on the internet that could generate this difference, but could not find any.
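In case it helps, here is a rough sketch that computes the overall line-coverage percentage of two LCOV tracefiles and prints the delta, based on the DA:<line>,<hits> records that lcov writes; the file names are assumptions:

    # lcov_delta.py - hypothetical comparison of two LCOV tracefiles
    def line_coverage(info_path):
        """Return (covered, total) line counts from an lcov .info tracefile."""
        covered = total = 0
        with open(info_path) as f:
            for line in f:
                if line.startswith("DA:"):  # DA:<line number>,<hit count>
                    hits = int(line.strip().split(",")[1])
                    total += 1
                    if hits > 0:
                        covered += 1
        return covered, total

    old_cov, old_total = line_coverage("old.info")  # assumed file names
    new_cov, new_total = line_coverage("new.info")

    old_pct = 100.0 * old_cov / old_total
    new_pct = 100.0 * new_cov / new_total
    print("old: %.2f%%  new: %.2f%%  delta: %+.2f%%" % (old_pct, new_pct, new_pct - old_pct))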

How do I see the history of my Jenkins build test results?

I've got a collection of Jenkins jobs which are all essentially test packs - running lots of JUnit tests.
I keep the results for 7 days and, with the aid of the global build stats plugin and build metrics plugin, I can get a percentage of the number of builds (test packs) that had at least one failure in the last week.
What I'm now interested in is getting the percentage of all test failures over one week, to get a better idea of how badly the set of builds failed - was it just one test that caused each build to fail, or all of the tests? Is this possible with an existing plugin?
I know the data is there because the home page of any of my jobs has a graph on the right where the green area represents test passes and red fails, for all of the previous builds. This gives me some idea, but I'd like a figure to report with.
You may want to take a look at the Unit Test History Generator or Test Results Analyzer plugins.
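If no plugin gives you the exact figure, the underlying numbers are also reachable through the remote API. Here is a rough sketch (the job URL is an assumption, and only the test result action exposes the failCount/totalCount fields used here) that sums test failures over the last week's builds of one job:

    # test_failure_rate.py - hypothetical query against the Jenkins JSON API
    import json
    import time
    from urllib.request import urlopen

    JOB_URL = "http://jenkins.example.com/job/my-test-pack"  # assumption
    API = JOB_URL + "/api/json?tree=builds[number,timestamp,actions[failCount,totalCount]]"

    data = json.load(urlopen(API))

    week_ago_ms = (time.time() - 7 * 24 * 3600) * 1000
    failed = total = 0
    for build in data["builds"]:
        if build["timestamp"] < week_ago_ms:
            continue
        for action in build["actions"]:
            # Only the test result action carries these counters.
            if action and "totalCount" in action:
                failed += action.get("failCount", 0)
                total += action.get("totalCount", 0)

    if total:
        print("%d of %d tests failed (%.1f%%)" % (failed, total, 100.0 * failed / total))

Run it for each job in the collection and aggregate the counts to get the overall weekly percentage.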

How to increase the number of build queues in Hudson?

I want to know: is it possible to increase the number of build queues in Hudson?
I'm using Hudson version 1.395.1.
Currently only 2 queues are provided, but in my case I have 6 environments managed with Hudson, so I need more than 2 build queues.
We only have a Jenkins, but it should be similar. In the configuration menu there is an option almost at the top; it should be called something like "number of build processes" (in the English UI it is "# of executors"). I can't say exactly because our Jenkins is in German ;-)
Sorry for the unspecific answer, but maybe it helps.
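If you would rather change it outside the UI, the same value lives in the master's config.xml under HUDSON_HOME. A rough sketch (assuming the HUDSON_HOME environment variable is set and the numExecutors element is present), after which Hudson needs a restart or a "Reload Configuration from Disk":

    # set_executors.py - hypothetical tweak of the master's config.xml
    import os
    import xml.etree.ElementTree as ET

    config_path = os.path.join(os.environ["HUDSON_HOME"], "config.xml")
    tree = ET.parse(config_path)

    # The same value the "# of executors" field in the UI edits.
    node = tree.getroot().find("numExecutors")
    node.text = "6"  # one executor per environment
    tree.write(config_path)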

Can Hudson branch promotion be based on project stability?

Hudson CI server displays stability "weather" which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project dependent additionally on the stability of multiple builds of the first project?
Specifically, project "stable_deploy" needs to only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times--in a row. Until then, it's still in integration mode.
IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be an earlier version.
We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version-release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.
Here are some background details:
The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and automated tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.
Our Hudson projects look like this:
test - Only for testing a build, zero user visibility.
integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".
And so on... it continues with "release candidate" and then "release".
I can see the point of demonstrating stability by having multiple successive builds succeed without error, but I'd suggest a slightly different approach to make things more simple. Rather than trying to aggregate the results of multiple builds to determine whether you promote the latest build to the stable branch, run your tests 8 times against the same build; you can either do this by adding a repeat count parameter to the tests, or just repeat the test steps multiple times in the Hudson job setup.
If the build passes cleanly every time, you could use that as a gateway to send the build to your users for "real world" testing before you promote it to the stable branch.
This has a couple of advantages; it makes the Hudson setup more simple as per your request, and it gives you added confidence in the stability of the build because you're running the tests multiple times against the same code base, rather than against a different code base each time.
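As a concrete illustration of the "repeat the test steps" idea, a single build step along these lines (the actual test command is an assumption) fails the job unless the suite passes all 8 runs:

    # repeat_tests.py - hypothetical wrapper used as one Hudson build step
    import subprocess
    import sys

    RUNS = 8
    TEST_CMD = ["nunit-console", "TickZoom.Tests.dll"]  # assumption: whatever runs the suite

    for attempt in range(1, RUNS + 1):
        print("=== test run %d of %d ===" % (attempt, RUNS))
        result = subprocess.call(TEST_CMD)
        if result != 0:
            # Any failing run fails the whole build, so only a build that is
            # clean 8 times in a row is eligible for promotion.
            sys.exit(result)

    print("All %d runs passed" % RUNS)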
The answer is to create a separate pipeline of jobs for each new minor version of the software.
So they'll look like this:
integrate_0.8.3
stable_0.8.3
candidate_0.8.3
release_0.8.3
We will use the Hudson API to generate the jobs for each new version with a script (a rough sketch follows below).
The promotion can't be totally automated, because factors other than stable builds, such as user-reported errors, can delay a version from moving through the pipeline.
sincerely,
Wayne
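For what it's worth, a rough sketch of that generation script, assuming one template job per stage (named after the stage) and leaving authentication out; it copies each template's config.xml into a new "<stage>_<version>" job via the remote API's createItem call:

    # make_version_jobs.py - hypothetical generator for a new version's pipeline
    import sys
    from urllib.request import Request, urlopen

    HUDSON = "http://hudson.example.com"  # assumption
    STAGES = ["integrate", "stable", "candidate", "release"]

    version = sys.argv[1]  # e.g. "0.8.3"

    for stage in STAGES:
        # Read the template job's config.xml ...
        config = urlopen("%s/job/%s/config.xml" % (HUDSON, stage)).read()
        # ... and POST it to createItem to make the versioned copy.
        req = Request(
            "%s/createItem?name=%s_%s" % (HUDSON, stage, version),
            data=config,
            headers={"Content-Type": "text/xml"},
        )
        urlopen(req)
        print("created %s_%s" % (stage, version))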
I guess you either have to implement a solution outside of Hudson that produces trigger files to be used in Hudson, or extend the promotion plugin with your company-specific rules.

How to aggregate code coverage report in Hudson?

I have a project built with Hudson CBS, and I am using Cobertura for test coverage. Reports are generated and I am happy about that.
But I cannot find the delta of the coverage percentage.
For example:
check-in #1 - code coverage is 90%
check-in #2 - code coverage is 75%, i.e. down by 15%.
Can I achieve this with the Hudson Cobertura plugin? Is there any alternative?
I solved this by parsing the Cobertura XML files and pushing the individual build data into a database. You can do the same with other metrics, like the number of tests and complexity.
Placing the results into a database gives you a wide range of display options. We use Excel and SharePoint to display our most important metrics. A simple web page with charts and graphs (is it still simple?) will also do the trick.
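A minimal sketch of that approach, assuming Cobertura's coverage.xml (whose root element carries an overall line-rate attribute) and a local SQLite file standing in for the real database:

    # record_coverage.py - hypothetical step that stores per-build coverage in SQLite
    import os
    import sqlite3
    import xml.etree.ElementTree as ET

    # Cobertura's report root carries an overall line-rate attribute (0.0 - 1.0).
    root = ET.parse("coverage.xml").getroot()
    line_pct = float(root.get("line-rate")) * 100.0

    # BUILD_NUMBER is set by Hudson for every build.
    build = int(os.environ.get("BUILD_NUMBER", 0))

    conn = sqlite3.connect("coverage_history.db")
    conn.execute("CREATE TABLE IF NOT EXISTS coverage (build INTEGER PRIMARY KEY, line_pct REAL)")
    conn.execute("INSERT OR REPLACE INTO coverage VALUES (?, ?)", (build, line_pct))

    # Report the delta against the previously recorded build, if any.
    prev = conn.execute(
        "SELECT line_pct FROM coverage WHERE build < ? ORDER BY build DESC LIMIT 1",
        (build,),
    ).fetchone()
    if prev:
        print("coverage %.1f%% (%+.1f%% vs previous build)" % (line_pct, line_pct - prev[0]))
    else:
        print("coverage %.1f%%" % line_pct)

    conn.commit()
    conn.close()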