Get multiple coverage reports in coveralls for a single repository - lcov

Is it possible to get separate coverage reports for front-end and back-end tests for a single repository?
It seems one possible way is to concatenate the lcov reports into one and then ship to coveralls, as mentioned in this question.
However, I wanted to know if there is a way to see separate code coverage reports for front-end and back-end or provide two lcov files to coveralls. If so, how?
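For reference, the concatenation approach seems to be a one-liner with lcov (the file names here are just examples):

# merge the two lcov tracefiles into one (file names are hypothetical)
lcov --add-tracefile coverage/frontend.info \
     --add-tracefile coverage/backend.info \
     --output-file coverage/merged.info
# then ship the combined report, e.g. with the coveralls-lcov gem
coveralls-lcov coverage/merged.info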

If you refer to Coveralls' API documentation, you'll see that their Jobs API supports an optional parameter called service_number. By default this option is intended to match the build number from the CI system, but there's no reason you couldn't use it to track multiple coverage reports for each CI build.
One way you could do that would be to track the actual CI build number, multiply it by two, and have that number be the "backend" build number, and increment it by one to have it be the "frontend" build number. The doubling just ensures that you don't end up posting to the same build number more than once. Of course, you can use another method for generating these IDs - the API technically takes a string so you might be able to submit e.g. 234-frontend and 234-backend.
In theory, you could also use the required service_name parameter to the same effect. The catch there is that some of the reserved service names ("travis-ci", "travis-pro", or "coveralls-ruby") have special features, which you may be reluctant to sacrifice.
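A rough sketch of what a direct submission to the Jobs API could look like with curl; the token, file list and coverage data below are placeholders, so check the payload fields against the current Coveralls docs:

# build the job payload for the back-end run (all values are placeholders)
cat > backend-job.json <<'EOF'
{
  "repo_token": "YOUR_COVERALLS_REPO_TOKEN",
  "service_name": "my-ci",
  "service_number": "234-backend",
  "source_files": [
    { "name": "server/app.js",
      "source_digest": "md5-of-the-file-contents",
      "coverage": [null, 1, 1, 0] }
  ]
}
EOF
# post it; the front-end run would do the same with "service_number": "234-frontend"
curl -F 'json_file=@backend-job.json' https://coveralls.io/api/v1/jobs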


Should I use a Marketplace action instead of a plain bash `cp` command to copy files?

I am noticing there are many actions in the GitHub Marketplace that do the same thing. Here is an example:
https://github.com/marketplace/actions/copy-file
Is there any benefit to using a GitHub Marketplace action instead of plain bash commands? Is there a recommended-practices guideline that helps decide whether to use Marketplace actions versus plain bash or command-line tools?
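For context, the plain-bash version I have in mind is just something like this in a run: step (the paths are made up):

# copy build output into the publish directory; fail the step on any error
set -euo pipefail
mkdir -p publish/site
cp -rv dist/ publish/site/

So I'm wondering what the action adds beyond this.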
These actions don't seem to have any real value in my eyes...
That said, these actions run in Docker and don't need cp, wget or curl to be available on the host, and they ensure a consistent version of their tools is used. If you're lucky, these actions also run consistently on Windows, Linux and Mac, whereas your bash scripts may not run on Windows. But the action author has to ensure this; it's not something that comes by default.
One thing that could be a reason to use these actions from the marketplace is that they can run as a post-step, which the run: script/bash/pwsh steps can't.
They aren't more stable or safer either: unless you pin the action to a commit hash or fork it, the owner of the action can change its behavior at any time. So you are putting trust in the original author.
Many actions provide conveniences, like better logging, output variables, or the ability to safely pass in a credential, but these particular actions seem to be more of an exercise in building an action by their authors and don't really serve a great purpose.
The documentation that comes with each of these actions doesn't provide a clear reason to use them, and the actions don't follow the preferred versioning scheme... I wouldn't use these.
So, when would you use an action from the Marketplace? In general, actions, like certain CLIs, serve a specific purpose, and an action should contain all the things it needs to run.
An action could contain a complex set of steps, ensure proper handling of arguments, issue special logging commands to make the output more human-readable or update the environment for tasks running further down in the workflow.
An action that adds this extra functionality on top of existing CLIs makes it easier to pass data from one action to another, or even from one job to another.
An action is also easier to re-use across repositories, so if you're using the same scripts in multiple repos, you could wrap them in an action and easily reference them from that one place instead of duplicating the script in each action workflow or adding the script to each repository.
GitHub provides little guidance on when to use an action, or on whether an author should publish an action to the Marketplace at all. Basically, anyone can publish anything that fulfills the minimum metadata requirements.
GitHub does provide guidance on versioning for authors: good actions should create tags that a user can pin to, and authors should practice semantic versioning to avoid accidentally breaking their users. Actions whose docs tell you to reference a branch like main or master are suspect in my eyes and I wouldn't use them; their implementation could change out from under you at any time.
As a consumer, you should be aware of the security implications of using any action. Beyond requiring that the author has 2FA enabled on their account, GitHub does little to no verification of actions it doesn't own itself. Any author could in theory replace their implementation with ransomware or a bitcoin miner. So, for actions whose author you haven't built a trust relationship with, it's recommended to fork the action into your own account or organization and inspect its contents before running it on your runner, especially if that's a private runner with access to protected environments. My colleague Rob Bos has researched this topic deeply and speaks about it frequently at conferences, on podcasts and in live streams.

Number of configurations in a project for build and install

In our project, we currently have two different configurations. The first one builds the assemblies. The other packages everything for InstallShield (including moving files to the right directories, etc.).
Now, we can't agree on whether it's better to move all the build steps into a single configuration and run it as a whole chain, or to keep the build process separate from creating the installation package.
Googling turns up guides on how to do it, but not which way to do it (our confusion is mainly about how the configurations should be ordered). We'll be using a few steps from PowerShield to move a number of files between directories due to certain local considerations. The total number of steps will end up at 5 or fewer.
The suggestion that I have is the following three configurations. They run separately and independently, and their build steps overlap (each configuration being a superset of the previous one).
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that e.g. the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
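To make the superset idea concrete, each configuration could just call a different entry point in one script that re-runs the earlier phases; a rough sketch (the tool calls and paths are placeholders, not our real setup):

# one script, three entry points; each later phase re-runs the earlier ones
do_build()   { msbuild MySolution.sln /p:Configuration=Release; }
do_test()    { do_build && vstest.console.exe bin/Release/MyTests.dll; }
do_package() { do_test && ISCmdBld.exe -p Installer/MyProduct.ism; }
# configuration "Build" calls do_build, "Build and test" calls do_test,
# "Build, test and package" calls do_package
"do_${1:-package}"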
I would argue like this:
if you ever need to perform just one of the two steps - then leave them as separate steps.
This gives you the flexibility to run one, or the other, or both steps. E.g. could it be that you need to just build the solution, but not create the final installation package? E.g. for local testing?
However, if you never use one of the steps separately (you always run both together), then I'd probably just merge them into one; having two separate steps doesn't make much sense to me.

How does an assembly differ from a build?

I know that "build" means either the act of compiling from source code or the resulting artifact itself. But what is an assembly? I tried to search but could not find the difference.
E.g. in .NET, assemblies are EXE files, but isn't that what I get when I build the app? Isn't that the build?
EDIT: I mean build as a noun (the result of the build process).
If there were any standard I would accept for an authoritative definition of what a build is, it would be my own. My reason for saying this is that I have a more comprehensive view than most people living, though there are a few in retirement in Florida to whom I might bow.
As usage commands the language, the usage and definition of a 'software build' have evolved over time. Today it would be referred to as 'the product', the result of a production process, or, in the abstract sense, that process itself. So originally it referred either to the construction process or to the product of the final phase of production. It was usually associated with a batch process (a process in the sense of an instance of a module loaded into memory, which has finite boundaries and is tracked by the operating system) or with a job identification number. For that reason the number of the "build" was often requested, many times to associate it with a date in order to correlate it with corrective actions. Sometimes the build was recorded as a sequential log entry in an authoritative journal, along with a date and a brief description of the changes made during that period.
I find it curious that this entry is tagged with both the keywords 'language-agnostic' and 'build.' It is practically self-defining: While an assembly or a compilation could only be those processes or the products therefrom, a build may require some initialisation data or context surrounding and/or supporting the included compilations, which becomes part of the end product.
When one builds a house, the output is a house: A software build may have some of the same characteristics; for some, an edifice without doors or windows is not a house but such a resulting structure can be called a build -- likewise the set of compilations producing the principal modules of the product can be called a build.
I would not expect "a build" to be a precise term: Rather, it is the set of procedures, and their results, followed on one occasion to produce a particular product. But it merits noting that a build may include modules from several compilations, and indeed compilations from several different languages -- and even some assembly. Since the output or product can differ as a result, part of that build may even comprise some procedural scripting and/or job control language as well as compiled or assembled components.
In short, a software build is the set of procedural elements involved in producing a certain product on a particular occasion and/or the resulting product itself, referred to for the purpose of identifying the contextual environment, issues addressed and costs involved in a job, order, task, package, directive or logged schedule in terms of all forms of resource required and expended.
Take a look at What are .NET Assemblies?. The output of the build is your assembly, so if you had a class library in your project called MyClassLib, you would get MyClassLib.DLL when you build your application. So, the build process is what creates the assembly.
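A quick way to see that relationship, assuming the .NET CLI is available (the project name and target framework below are just examples):

# create a class library and build it
dotnet new classlib -n MyClassLib
dotnet build MyClassLib -c Release
# the *build* is the whole process (restore, compile, copy outputs);
# the *assembly* is the artifact it produced:
ls MyClassLib/bin/Release/net8.0/MyClassLib.dll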

Jenkins multi-configuration project doesn't aggregate test results

I have a multi-configuration project set up to run FF and IE selenium tests. However, it's not aggregating the test results.
If I look at the Project Page I see this: [screenshot]
If I go into a specific build I see this: [screenshot]
But if I click on one of those specific configuration names I see this: [screenshot]
Is there a way to get these results to aggregate? (I have the aggregate downstream results project configuration checkbox checked)
This is currently a Jenkins bug; Kohsuke Kawaguchi specifically replied to this bug on Aug 31, 2011 in the IRC channel (logs - start at [21:54:47]). Here are the workaround responses from those two links:
From the Bug page >>
You can workaround this by explicitly specifying the jobs to aggregate, rather than relying on the downstream builds logic, and specifying the matrix axes (in quotes) explicitly - i.e., NonMatrixJob,"MatrixJob/label=l_centos5_x86" - the quotes in case of commas in your axis.
From the IRC Log >>
I did verify that explicitly specifying the list of jobs to aggregate test results from and using the fully qualified job name, including axis, did the trick, but it's a shame that I can't get the auto-discovery from the downstream jobs working.

How to make hudson aggregate results provided in build artifacts over several builds

I have a hudson job that performs a stress test, torturing a virtual machine for several hours with some CPU- and IO-intensive tasks. The build scripts write a few interesting results into several files which are then stored as build artifacts. For example, one result is the time it took to perform certain operations.
I need to monitor the development of these results. For example, I need to know when the time for certain operations suddenly increases. So I need to aggregate these results over several (all?) builds. The ideal scenario would be if I could download the aggregated data from hudson.
I've been thinking about several possibilities to do this, but they all seem quite complicated. That's when I thought someone else might have had that problem already.
Maybe there already are some plugins doing this?
If you can write a script to extract the relevant numbers from the log files, you can use the Plot Plugin to visualize the data. We use this for simple stuff like tracking the executable size of build artifacts.
The Plot Plugin is more manual than the Perf Plugin mentioned by @Tao, but it might be easier to integrate depending on how much data munging the Perf Plugin requires.
Update: Java-style properties files (which are used as input to the Plot Plugin) are just simple name-value pairs in a text file, e.g.:
YVALUE=1234
Here's a build script that shows a (very stupid) example:
echo YVALUE=$RANDOM > buildtime.properties
This example plots a random number with each build.
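A slightly less stupid variant, assuming the stress test leaves a log line like "operation X took 42 seconds" in results.log (the file name and format are made up), would pull the number out and feed it to the Plot Plugin the same way:

# extract the duration from the test log (GNU grep) and hand it to the Plot Plugin
seconds=$(grep -oP 'operation X took \K[0-9]+' results.log)
echo "YVALUE=${seconds}" > optime.properties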
I have not personally used this plugin yet, but it might fit your needs if you can generate the XML file according to the format given in its description.
PerfPublisher Plugin
What about creating the results as JUnit results (XML files), so that they can be recorded by Hudson and aggregated across different builds?
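A minimal sketch of that idea, turning one measured duration into a JUnit-style XML file that the JUnit publisher can pick up (the file names, threshold and test names are all made up):

# read the measured duration and fail the "test" if it exceeds a threshold
seconds=$(grep -oP 'operation X took \K[0-9]+' results.log)
threshold=60
failures=0; failure=""
if [ "$seconds" -gt "$threshold" ]; then
  failures=1
  failure="<failure message=\"operation X took ${seconds}s (limit ${threshold}s)\"/>"
fi
# write a minimal JUnit result file for Hudson to record and trend
cat > stress-results.xml <<EOF
<testsuite name="stress-test" tests="1" failures="${failures}">
  <testcase classname="stress" name="operation_X_duration" time="${seconds}">${failure}</testcase>
</testsuite>
EOF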