add summary information to a job results page in Jenkins/Hudson

I have some jobs that deploy and run automated integration tests as part of our CI system.
These jobs are shell scripts that use ssh to deploy and then run commands on the systems to be tested. Then they gather the results in a tarball and archive it. One of the files in this tarball contains a nicely formatted summary that I would like to make visible without having to read through the console output or open a tarball.
Is there a plugin for adding text to the job results page?
Is there a plugin that will produce reports from archived job results?
Is there an entirely more elegant way of doing this?

I would look at the Summary Display Plugin.
If your build task can output an XML file in the plugin's format, it will be displayed on the build page using tables and other formatting.
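Since your jobs are already shell scripts, a few extra lines at the end could emit that XML and you'd then point the plugin at the file in the job configuration. A rough sketch, assuming the plugin's documented section/field/table elements (check the plugin's wiki page for the exact schema; the values here are made up and would in practice come from the same data that goes into your tarball summary):

cat > summary.xml <<'EOF'
<section name="Integration test summary">
  <field name="Passed" value="42"/>
  <field name="Failed" value="3"/>
  <table>
    <tr>
      <td value="Suite"/>
      <td value="Result"/>
    </tr>
    <tr>
      <td value="login-tests"/>
      <td value="PASS"/>
    </tr>
  </table>
</section>
EOF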

If you can get your results file into HTML format, the HTML Publisher plugin will do the job for you.
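If your summary is plain text, even a trivial wrapper is enough; a hypothetical sketch (file and directory names are placeholders), with the HTML Publisher plugin then pointed at the html-report directory:

mkdir -p html-report
{ echo '<html><body><pre>'; cat results/summary.txt; echo '</pre></body></html>'; } > html-report/summary.html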

Automate download of Drupal webform data

I'm working on integrating my data between two different database systems - our site itself runs on Drupal 6.2.8, but our customer database is run by a third party on a different system, which can import data from a CSV file.
I'm capturing data with a Drupal webform - that's working great. I know that I can manually download a csv from the webform - but I want to run imports frequently - at least daily. I found this thread: http://drupal.org/node/1276098 that talks about a drush command for exporting webforms, but it doesn't seem to be complete.
I know that I can use views to create a csv, but I don't seem to have access to the submissions themselves from views. Likewise I know that the data module can somehow be tied into the answer for this, but I am not at all sure how to get started with it.
If there were just a simple way to schedule downloads of the data, I could set up an rsync or something like that to handle the rest - any suggestions?
The following drush command may work for this case:
drush webform-export <Specify-WebFormID> --completion-type=finished --delimiter="|" --file=<Specify-Filename> --format=delimited
You can also use the --range-start and --range-type options to be more precise about which results you'd like to get.
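To run it on a schedule, a cron entry along these lines would export nightly (the webform node ID, Drupal root, and output path are placeholders):

0 2 * * * cd /var/www/drupal && drush webform-export 123 --completion-type=finished --delimiter="|" --file=/var/backups/webform-123.csv --format=delimited

From there, an rsync of the output directory to the other system, as you suggested, handles the rest.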

simultaneous instances of the same hudson/jenkins job

I would like a way for individual users to send a repo path to a hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required for your use case in the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. As described on the Jenkins Parameterized Build wiki page, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
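From an outside script, that call could look something like this (the user name and API token are placeholders; authentication details depend on your security setup):

curl -X POST --user myuser:apitoken "http://server/job/myjob/buildWithParameters?REPO_PATH=foo"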
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build. You probably want to manage how many recent artifacts are kept, though; this can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".

With an SSISDeploymentManifest file, is there a way to pre-select the Installation Folder?

Short Version:
I have 7 SSISDeploymentManifest files I need to run. Is there a way to alter the SSISDeploymentManifest file to pre-populate the Installation value?
Rant Version:
At first running 7 deployments did not seem like much of a problem. But the part of the process where you "Select Installation Folder" for package dependencies is horribly designed.
First, you have to enter a network path here if you are not running from the server you will install to. This is because the dialog box checks that the path you enter is valid... on the local machine you run the manifest from. But when the package is run, the path will also need to work for the server. (dumb, huh?)
The next problem with this screen is that the field is read only. So I cannot just specify the path directly.
Second, the dialog box to "browse" won't let me enter a path.
So... I have to browse my entire network (from home, over a VPN). That takes a long time.
Is there a way to alter the SSISDeploymentManifest file to pre-populate this value?
No, dtsinstall doesn't accept any command-line arguments, which is a pity. My first approach to this was to write a heavy command-line application that made repeated calls to dtutil to get things done. I never finished it, but if you want to peek, it's on CodePlex.
What I prefer now is a PowerShell script that handles my SSIS deployments. Even if PowerShell isn't your cup of tea, the concepts should apply to whatever .NET language you might want to use.
Attractive features are that it will create the folders in SQL Server for you and correctly deploy the packages into those folders. The other thing that might help you is that if all 7 deploys are in a common folder structure, the script walks the folder structure looking for manifests and deploys all the packages per manifest, so you could conceivably deploy everything with a single mouse click.
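If you script it yourself, the per-package step boils down to a dtutil copy into SQL Server, roughly like this (server, folder, and package names are placeholders):

dtutil /FILE "C:\deploy\MyFolder\MyPackage.dtsx" /DestServer MYSERVER /COPY SQL;MyFolder\MyPackage

Looping that over the packages listed in each manifest is essentially what the PowerShell script automates.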

Generating jmeter results into graphs off several trials through Hudson

I'm in the process of integrating test scripts into a Continuous Integration system like Hudson. My goal is to benchmark each load test over time and display it in readable charts.
While there are plugins to generate graphs for a single script run, I'd like to know how each session's data, such as those found in the summary report, could be recorded over time.
One way would be to store the summary reports into a jtl file, and graph data off of that.
I've checked out the Performance Plugin for Hudson, but I'm stuck on how to modify the plugin to display more charts with more information.
The reports from both JMeter and the Hudson plugin are snapshots (not charts over long periods of time), and that's part of the issue. I went through this same exercise a few months back and decided to go with a solution that was better suited to this problem.
I setup Logstash to pull the JMeter test results from the files it generates during every test. It outputs those results into an Elasticsearch index which I can chart with Kibana.
I know this adds several new pieces of software into your setup, but it only took a day to set things up and the results are much better than what the performance plugin was able to provide.
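For reference, this setup assumes the Hudson job runs JMeter non-GUI and writes CSV-format JTL files that Logstash can tail; a sketch with placeholder paths:

jmeter -n -t loadtest.jmx -l results/loadtest.jtl -Jjmeter.save.saveservice.output_format=csv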

Hudson as passive server

Is it possible to use Hudson only as a passive server, i.e., not using it for building, but instead sending it build results generated by some other tool (maybe in XML format) and using Hudson only to display the results?
It's very doable.
If the other tool runs on the same machine, e.g. as a cron job, check out http://wiki.hudson-ci.org/display/HUDSON/Monitoring+external+jobs.
If you need to pull data from somewhere else, use a shell script as a build target, and do whatever you need to stage the data locally (scp, etc.).
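For the external-job route, that wiki page essentially has you point the monitor at your server and wrap the command; roughly like this (the jar name and environment variable vary by Hudson/Jenkins version, so treat this as a sketch):

export HUDSON_HOME=http://hudson-server:8080/
java -jar hudson-core.jar "my-external-job" /path/to/nightly-task.sh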
It may very well be possible, using periodic builds and the URL SCM plug-in to pull in the XML files and the Plot plug-in for display, but more information is required before a more detailed answer can be provided.
What build tool are you currently using to generate build results?
A couple of my Hudson jobs are just summaries and display information. The 'jobs' need to run for data to be collected and saved. The runs can be triggered by dependent jobs or just scheduled nightly. Some examples:
One of our jobs just merges together the .SER files from Cobertura and generates the Cobertura reports for overall code coverage across all of our unit, integration and different types of system tests (hint for others doing the same: Cobertura has little logic for unsynchronized SER files, and using them will yield some odd results; there are some tweaks that can be made to the merge code that reduce the problem). A rough sketch of the merge step follows after these examples.
Some of our builds write data to a database. We have a once a week task that pulls the data from the database and creates an HTML file with trend charts. The results are kept as part of the job.
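As a rough sketch of that Cobertura merge step (paths are placeholders; cobertura-merge.sh and cobertura-report.sh ship with the Cobertura distribution):

cobertura-merge.sh --datafile merged.ser unit-tests/cobertura.ser integration-tests/cobertura.ser
cobertura-report.sh --datafile merged.ser --destination coverage-report --format html src/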
It sounds to me like what you're describing is a plugin for Hudson. For example, the CCCC plugin:
http://wiki.hudson-ci.org/display/HUDSON/CCCC+Plugin
It takes the output, in XML form, from the CCCC analyzer app and displays it in pretty ways in the Hudson interface.
Taking the same concept, you could write a plugin that works with the XML output from whatever build tool you have in mind and display it in Hudson.