Using HTML in job description for Jenkins job generated by DSL

I'm migrating some Jenkins jobs to DSL code from the current manual configurations. Some of these jobs have descriptions which contain HTML, but I can't find a way to enter this HTML in the seed job so that the generated job contains the same description. In one example, the current job has this description:
Multi-Platform Build <br/><br/>
Builds nightly but only if there has been SCM revisions against the application Core Trunk. <br/><br/>
This is being replaced by application-multi-platform-new
This results in a nicely formatted job description, with line breaks and a hyperlink as well.
I want to replicate this when I generate the same job from a DSL script but there doesn't seem to be a way to do this.

It should be possible by just specifying the HTML tags that you need. What is your output?
description("""
Multi-Platform Build <br/><br/>
Builds nightly but only if there has been SCM revisions against the application Core Trunk. <br/><br/>
This is being replaced by application-multi-platform-new
""")

I've managed to find a workaround but I'd prefer to do this directly.
It's possible to use the below snippet:
job('multi-platform-build') {
    description(readFileFromWorkspace('description.html'))
}
This allows you to keep a separate file in the workspace of the seed job, which is read to provide the description.
This works, but it's far from ideal, as it means the configuration is stored in two separate locations.


Programmatically create gitlab-ci.yml file?

Is there any tool to generate .gitlab-ci.yml file like Jenkins has job-dsl-plugin to create jobs?
Jenkins DSL plugin allows me to generate jobs using Groovy, which outputs an xml that describes a job for Jenkins.
I can use DSL and a json file to generate jobs in Jenkins. What I’m looking for is a tool to help me generate .gitlab-ci.yml based on a specification.
The main question I have to ask is: what is your goal?
Just reduce maintenance effort for repeating job snippets:
Sometimes .gitlab-ci.yml files are pretty similar across a lot of projects, and you want to manage them centrally. Then I recommend taking a look at Having Gitlab Projects calling the same gitlab-ci.yml stored in a central location, which shows multiple ways of centralizing your build.
Generate pipeline configuration because the build is highly flexible:
Actually this is more of a templating task, and it can be achieved in nearly every scripting language you like: simple bash, groovy, python, go, ... you name it. In the end the question is what kind of flexibility you strive for, and what kind of logic you need for the generation. I will not go into detail on how to generate the .gitlab-ci.yml file, but rather on how to use it in your next step, because this is in my opinion the most crucial part. You can simply generate the file and commit it, but you can also use GitLab CI to generate a file for you, which will be used in the next job of your pipeline.
setup:
  script:
    - echo ".." # generate your yaml file here, maybe use a custom image
  artifacts:
    paths:
      - generated.gitlab-ci.yml
trigger:
  needs:
    - setup
  trigger:
    include:
      - artifact: generated.gitlab-ci.yml
        job: setup
    strategy: depend
This allows you to generate a child pipeline and execute it - we use this for highly generic builds in monorepos.
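As a minimal sketch of what the generation step inside such a setup job could look like (the directory layout and file names here are illustrative, not from the original answer), a plain shell script can emit one child job per component:

```shell
# Stand-in for a monorepo layout with one directory per component
mkdir -p components/app-a components/app-b

# Write one build job per component into the child pipeline file
: > generated.gitlab-ci.yml
for dir in components/*/; do
  name=$(basename "$dir")
  cat >> generated.gitlab-ci.yml <<EOF
build-$name:
  script:
    - echo "building $name"
EOF
done
```

The generated file is then handed to the `trigger` job as an artifact, exactly as in the pipeline above.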
For further reading, see:
GitLab JSONNET Example - documentation example for generated yml files within a pipeline
Dynamic Child Pipelines - documentation for dynamically created pipelines

How to make hudson aggregate results provided in build artifacts over several builds

I have a hudson job that performs a stress test, torturing a virtual machine for several hours with some CPU- and IO-intensive tasks. The build scripts write a few interesting results into several files which are then stored as build artifacts. For example, one result is the time it took to perform certain operations.
I need to monitor the development of these results. For example, I need to know when the time for certain operations suddenly increases. So I need to aggregate these results over several (all?) builds. The ideal scenario would be if I could download the aggregated data from hudson.
I've been thinking about several possibilities to do this, but they all seem quite complicated. That's when I thought someone else might have had that problem already.
Maybe there already are some plugins doing this?
If you can write a script to extract the relevant numbers from the log files, you can use the Plot Plugin to visualize the data. We use this for simple stuff like tracking the executable size of build artifacts.
The Plot Plugin is more manual than the Perf Plugin mentioned by @Tao, but it might be easier to integrate depending on how much data munging the Perf Plugin requires.
Update: Java-style properties files (which are used as input to the Plot Plugin) are just simple name-value pairs in a text file, e.g.:
YVALUE=1234
Here's a build script that shows a (very stupid) example:
echo YVALUE=$RANDOM > buildtime.properties
This example plots a random number with each build.
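For a less contrived case, the extraction script mentioned above could pull a real timing out of the stress-test log. A sketch, assuming a hypothetical log line format (the format and file names are made up for illustration):

```shell
# Hypothetical log produced by the stress test
printf 'starting run\noperation copy took 1234 ms\ndone\n' > stress-test.log

# Extract the number and write the name-value pair the Plot Plugin expects
value=$(sed -n 's/.*operation copy took \([0-9]*\) ms.*/\1/p' stress-test.log)
echo "YVALUE=$value" > optime.properties
```

Pointing the Plot Plugin at optime.properties then plots the operation time per build.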
I have not personally used this plugin yet, but it might fit your needs if you can generate the XML file according to the format described in the plugin's documentation:
PerfPublisher Plugin
What about creating the results as JUnit results (XML files), so that they can be recorded and aggregated by Hudson across different builds?
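One way to follow that suggestion is to have the build script emit a minimal JUnit-style report where each measured operation is a "test" and its duration is the test time. A sketch, with the element names following the common JUnit XML shape and the values invented for illustration:

```shell
# Stand-in for a measured operation time, in seconds
elapsed=12.34

# Write one JUnit-style testcase per measured operation so Hudson trends it
cat > stress-results.xml <<EOF
<testsuite name="stress-test" tests="1" failures="0">
  <testcase classname="stress" name="copy-operation" time="$elapsed"/>
</testsuite>
EOF
```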

How to display credits

I want to give credit to all the open source libraries we use in our (commercial) application. I thought of showing an HTML page in our about dialog. Our build process uses ant, and the third-party libs are committed in svn.
What do you think is the best way of generating the HTML-Page?
Hard code the HTML-Page?
Switch dependency-management to apache-ivy and write some ant task to generate the html
Use maven-ant-tasks and write some ant task to generate the HTML
Use maven only to handle the dependencies and the HTML once, download them and commit them. The rest is done by the unchanged ant-scripts
Switch to maven2 (Hey boss, I want to switch to maven; in 1 month the build may work again...)
...
What elements should the about-dialog show?
Library name
Version
License
Author
Homepage
Changes made with link to source archive
...
Is there some best-practise-advice? Some good examples (applications having a nice about-dialog showing the dependencies)?
There are two different things you need to consider.
First, you may need to identify the licenses of the third-party code. This is often done with a THIRDPARTYLICENSE file. Sun Microsystems does this a lot. Look in the install directory for OpenOffice.org, for example. There are examples of .txt and .html versions of such files around.
Secondly, you may want to identify your dependencies in the About box in a brief way (and also refer to the file of license information). I would make sure the versions appear in the About box. One thing people want to quickly check for is an indication of whether the copy of your code they have needs to be replaced or updated because one of your library dependencies has a recently-disclosed bug or security vulnerability.
So I guess the other thing you want to include in the about box is a way for people to find your support site and any notices of importance to users of the particular version (whether or not you have a provision in your app for checking on-line for updates).
An Ant task seems to be the best way. We do a similar thing in one of our projects. All the open source libraries are present in a specified folder. An Ant task reads the manifests of these libraries, their versions and so on, generates an HTML page, and copies it into another specified folder, from where it is picked up by the web container.
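The same generation step can be sketched in plain shell. This assumes the library metadata has already been collected into a simple tab-separated list (in the real setup it would be read from the jar manifests; all file names here are illustrative):

```shell
# Assumed input: one line per library - name, version, license
printf 'commons-lang\t2.6\tApache-2.0\nlog4j\t1.2.17\tApache-2.0\n' > libs.tsv

# Turn the list into a simple credits page
{
  echo '<html><body><h1>Third-party libraries</h1><table>'
  echo '<tr><th>Library</th><th>Version</th><th>License</th></tr>'
  awk -F'\t' '{ printf "<tr><td>%s</td><td>%s</td><td>%s</td></tr>\n", $1, $2, $3 }' libs.tsv
  echo '</table></body></html>'
} > credits.html
```

Wrapping such a script in an Ant exec or rewriting it as an Ant task keeps the page in sync with the committed libraries.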
Generating the page with each build would be wasteful if the libraries are not going to change often. Library versions may change, but the actual libraries don't. Hand-writing an HTML page would be the easiest way out, but that's one more maintenance headache. Generate it once and include it with the package; the script can always be run again in case changes are made to the libraries (updating versions, adding new libraries).

How to automate the tasks for releasing open-source-software?

Everyone managing open-source software runs into the problem that, over time, the process of releasing a new version becomes more and more work. You have to tag the release in your version control, create the distributions (that should be easy with automated builds), and upload them to your website and/or open-source hoster. You have to announce the new release with nearly the same message on chosen web forums, the news system on SourceForge, mailing lists and your blog or website. And you have to update the entry for your software on freshmeat. Possibly more tasks have to be done for the release.
Have you developed techniques to automate some of these tasks? Does software exist that supports you with this?
Pragmatic Project Automation shows how to do all of that. They use Ant for practically everything in the book, so if you know Ant you can make different targets to do any step in the build-release cycle.
For my Perl stuff, I wrote Module::Release. In the top-level directory I type a single command:
% release
It checks several things and dies if anything is wrong. If everything checks out, it uploads the distribution.
It automates my entire process:
Test against multiple versions of Perl
Test distribution files
Check the status of source control
Check for code and distribution quality metrics
Update changes file
Determine new version number
Release code to multiple places
Tag source control with new version number
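A couple of those steps, determining the new version number and updating the changes file, can be sketched in shell (assuming a plain VERSION file and a Changes file in the project root; this is not how Module::Release itself works, just an illustration):

```shell
# Seed a current version for the sketch
echo '1.2.3' > VERSION

# Bump the patch component
old=$(cat VERSION)
new=$(echo "$old" | awk -F. '{ printf "%d.%d.%d", $1, $2, $3 + 1 }')
echo "$new" > VERSION

# Prepend a dated entry to the changes file
printf '%s - released %s\n\n%s' "$new" "$(date +%F)" "$(cat Changes 2>/dev/null)" > Changes
```

Each remaining step (tests, source-control status, tagging, uploads) gets its own function in the same script, and the release dies at the first one that fails.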
Everyone seems to write their own release automator though. Most people like their process how they like their process, so general solutions don't work out that well socially.
Brad Fitzpatrick has ShipIt which is a Perl program to automate releases. There's slightly more info in his original announcement.

Best practices for version information?

I am currently working on automating/improving the release process for packaging my shop's entire product. Currently the product is a combination of:
Java server-side codebase
XML configuration and application files
Shell and batch scripts for administrators
Statically served HTML pages
and some other stuff, but that's most of it
All or most of which have various versioning information contained in them, used for varying purposes. Part of the release packaging process involves doing a lot of finding, grep'ing and sed'ing (in scripts) to update the information. This glue that packages the product seems to have been cobbled together in an organic, just-in-time manner, and is pretty horrible to maintain. For example, some Java methods create Date objects for the time of release, the arguments for which are updated by a textual replacement, without compiler validation... just, urgh.
I'm trying to avoid giving examples of actual software used (i.e. CVS, SVN, ant, etc.) because I'd like to avoid the "use xyz's feature to do this" answers and concentrate more on general practices. I'd like to blame shoddy design for the problem, but if I had to start again, still using varying technologies, I'd be unsure how best to go about handling this, beyond laying down conventions.
My questions is, are there any best practices or hints and tips for maintaining and updating versioning information across different technologies, filetypes, platforms and version control systems?
Create a properties file that contains the version number and have all of the different components reference the properties file:
Java files can read the properties file through java.util.Properties
XML can use includes?
HTML can use JavaScript to write the version number from the properties into the HTML
Shell scripts can read in the file
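The last two items can be sketched together (the file names, keys and values here are hypothetical, chosen to keep the example self-contained):

```shell
# One shared file that every component reads
cat > version.properties <<'EOF'
VERSION=2.1.0
BUILD_DATE=2024-01-15
EOF

# A shell script can simply source it (values must be shell-safe for this to work)
. ./version.properties
echo "Packaging release $VERSION ($BUILD_DATE)"

# And the version can be stamped into static HTML with sed at package time
echo '<p>Version: @VERSION@</p>' > about.html.in
sed "s/@VERSION@/$VERSION/" about.html.in > about.html
```

The name=value format keeps the same file readable by java.util.Properties, shell, and simple text substitution alike.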
Indeed, to complete Craig Angus's answer, the rule of thumb here should be to not include any meta-information in your normal delivery files, but to record that meta-data (version number, release date, and so on) in one special file included in the release.
That helps when you use one VCS (Version Control System) tool from development to homologation to pre-production.
That means whenever you load a workspace (whether for developing, testing, or preparing a release into production), it is the versioning tool which gives you all the details.
When you prepare a delivery (a set of packaged files), you should ask the VCS tool for every piece of meta-information you want to keep, and write it into a special file that is itself included in the said set of files.
That delivery should be packaged in an external directory (outside any workspace) and:
copied to a shared directory (or a maven repository) if it is a non-official release (just a quick packaging to help the team next door who is waiting for your delivery). That way you can make 10 or 20 deliveries a day; it does not matter, as they are easily disposable.
imported into the VCS in order to serve as an official delivery, and in order to be deployed easily, since all you need is to ask the versioning tool for the right version of the right delivery, and you can begin to deploy it.
Note: I just described a release management process mostly used for many inter-dependent projects. For one small single project, you can skip the import into the VCS tool and store your deliveries elsewhere.
In addition to Craig Angus's suggestions, include the versions of the tools used.