Jacoco: Aggregating branch coverage reports of multiple test case methods - junit

I am using the Ant JUnit task, driven by ant-contrib loops:
<for list="${test.classes.list}" param="class" delimiter=",">
    <sequential>
        <for list="${@{class}}" param="method" delimiter=",">
            <sequential>
                <jacoco:coverage destfile="${basedir}/jacoco.exec">
                    <junit fork="true">
                        ......
                        <test name="@{class}" methods="@{method}"/>
                    </junit>
                </jacoco:coverage>
                <jacoco:report>
                    ......
                    <csv destfile="coverage/@{class}.@{method}/report.csv"/>
                </jacoco:report>
            </sequential>
        </for>
    </sequential>
</for>
In the property file, I have:
test.classes.list=a.b.C,d.e.F
a.b.C=test1,test2
d.e.F=test1,test2,test3
JaCoCo produces a report for each test case method.
The problem is that the branch coverage for each class is not accurate, because the branches covered by different test methods may overlap.
How do I aggregate the reports to get correct branch coverage for the whole project?

JaCoCo comes with Ant tasks to launch Java programs with execution recording and for creating coverage reports from the recorded data. Execution data can be collected and managed with the tasks coverage, agent, dump and merge.
This is an example from their Web page of how to merge a set of *.exec files:
<jacoco:merge destfile="merged.exec">
    <fileset dir="executionData" includes="*.exec"/>
</jacoco:merge>
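After merging, a single jacoco:report pointed at the merged file gives branch coverage for the union of all runs, so overlapping branches are only counted once. A minimal sketch (the directory names are placeholders for your build layout):
<jacoco:report>
    <executiondata>
        <file file="merged.exec"/>
    </executiondata>
    <structure name="Whole Project">
        <classfiles>
            <fileset dir="build/classes"/>
        </classfiles>
        <sourcefiles encoding="UTF-8">
            <fileset dir="src"/>
        </sourcefiles>
    </structure>
    <csv destfile="coverage/merged/report.csv"/>
</jacoco:report>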

Related

ant task to run JUnit tests from a jar (but only files that have tests!)

We're using JUnit inside a custom framework to test an application's behaviour. We're not actually doing unit testing, just leveraging JUnit.
I've created an Ant task to run all the tests in the jar file, but unfortunately it's trying to run everything as a JUnit test. Since the jar file contains things besides just the tests (it contains the supporting framework), this is a problem.
Is there a way to make the junit task only run things marked as tests (we use @Test)?
Currently my ant task looks like this:
<target name="test">
    <junit printsummary="yes" haltonfailure="no">
        <classpath refid="library.third-party.classpath" />
        <classpath>
            <pathelement location="${basedir}/build/jar/fidTester.jar" />
        </classpath>
        <formatter type="plain" />
        <formatter type="xml" />
        <batchtest fork="no" todir="${basedir}/reports">
            <zipfileset src="${basedir}/build/jar/fidTester.jar" includes="**/tests/**/*.class" />
        </batchtest>
    </junit>
</target>
From the Ant JUnit Task documentation:
skipNonTests
Do not pass any classes that do not contain JUnit tests to the test runner. This prevents non-tests from appearing as test errors in test results. Tests are identified by looking for the @Test annotation on any methods in concrete classes that don't extend junit.framework.TestCase, or for public/protected methods with names starting with test in concrete classes that extend junit.framework.TestCase. Classes marked with the JUnit 4 org.junit.runner.RunWith or org.junit.runner.Suite.SuiteClasses annotations are also passed to JUnit for execution, as is any class with a public/protected no-argument suite() method.
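Applied to the target above, that comes down to one attribute on <batchtest>; a sketch (skipNonTests needs a reasonably recent Ant version):
<batchtest fork="no" todir="${basedir}/reports" skipNonTests="true">
    <zipfileset src="${basedir}/build/jar/fidTester.jar" includes="**/tests/**/*.class" />
</batchtest>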

TestNG listener to comply with Apache Ant JUnit XML Schema

As part of a TestNG automation test suite, I would like to automatically push results from Jenkins to TestRail. I currently have this plugin installed on my Jenkins server: https://github.com/jenkinsci/testrail-plugin
The readme states the output must comply with the JUnit schema: https://github.com/windyroad/JUnit-Schema/blob/master/JUnit.xsd
I referenced "How do I get one junit report from TestNG for all my test cases?" and added
<listeners>
    <listener class-name="org.testng.reporters.JUnitXMLReporter"/>
</listeners>
to my listeners; however, this does not seem to create a file in the correct format, as it causes Jenkins to fail with the message:
Uploading results to TestRail.
Error pushing results to TestRail
Posting to index.php?/api/v2/add_results_for_cases/236 returned an error! Response from TestRail is:
{"error":"Field :results cannot be empty (one result is required)"}
Build step 'TestRail Plugin' marked build as failure
Finished: FAILURE
I am wondering if there is a different listener I should be using instead.
Thank you for the help.
I used the XSD file that was shared in the question to create a TestNG reporter that complies with the XSD.
To consume this reporter, add a dependency as below:
<dependency>
    <groupId>com.rationaleemotions</groupId>
    <artifactId>junitreport</artifactId>
    <version>1.0.0</version>
</dependency>
This reporter makes use of the service loader approach to wire itself in, so it doesn't need to be added explicitly via the <listeners> tag or the @Listeners annotation.
Details can be found here
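For context on the service-loader wiring: TestNG discovers listeners through Java's ServiceLoader, so the reporter's jar ships a provider-configuration file along these lines (the implementation class name below is illustrative, not the library's actual class):
META-INF/services/org.testng.ITestNGListener:
    com.rationaleemotions.junitreport.JUnitReportReporter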

Sonar jacoco coverage without running junit

I have the following setup in my build.xml for running JaCoCo (the opening <jacoco:coverage> and <junit> tags are elided):
        <formatter type="xml" />
        <batchtest todir="${reports.junit.xml.dir}">
            <fileset dir="${test.dir}">
                <include name="**/*.java" />
            </fileset>
        </batchtest>
    </junit>
</jacoco:coverage>
But when I run this, it gives me:
[junit] Test FAILED
The developers are working on fixing the JUnit tests, but I need to know: can I still show unit test coverage in Sonar "without" running the JUnit tests?
To answer your question: no, you can't get coverage data without running the unit tests. However, you can get coverage data even if the unit tests fail. You just need to keep a unit test failure from failing the build and thereby pre-empting the output of the coverage report.
It looks like the default value of the <junit> task's haltonfailure attribute is already off, so either remove your haltonfailure="on" attribute or turn it off explicitly.
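Concretely, something like this keeps the coverage data flowing even when tests fail; a minimal sketch around the snippet above (note the junit task must fork for the JaCoCo agent to attach):
<jacoco:coverage destfile="${basedir}/jacoco.exec">
    <junit fork="true" haltonfailure="no" printsummary="yes">
        ......
    </junit>
</jacoco:coverage>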

Mule Functional Tests - totally confused

We have a Mule application with six or seven flows, with around five components per flow.
Here is the setup: we send JMS requests to an ActiveMQ queue. Mule listens to that queue and, based on the content of the message, forwards it to the corresponding flow.
<flow name="MyAPPAutomationFlow" doc:name="MyAPPAutomationFlow">
    <composite-source>
        <jms:inbound-endpoint queue="MyAPPOrderQ" connector-ref="Active_MQ_1" doc:name="AMQ1 Inbound Endpoint"/>
        <jms:inbound-endpoint queue="MyAPPOrderQ" connector-ref="Active_MQ_2" doc:name="AMQ2 Inbound Endpoint"/>
    </composite-source>
    <choice doc:name="Choice">
        <when expression="payload.getProcessOrder().getOrderType().toString().equals('ANC')" evaluator="groovy">
            <processor-chain>
                <flow-ref name="ProcessOneFLow" doc:name="Go to ProcessOneFLow"/>
            </processor-chain>
        </when>
        <when....
        ...........
    </choice>
</flow>
<flow name="ProcessOneFLow" doc:name="ProcessOneFLow">
    <vm:inbound-endpoint exchange-pattern="one-way" path="ProcessOneFLow" responseTimeout="10000" mimeType="text/xml" doc:name="New Process Order"/>
    <component doc:name="Create A">
        <spring-object bean="createA"/>
    </component>
    <component doc:name="Create B">
        <spring-object bean="createB"/>
    </component>
    <component doc:name="Create C">
        <spring-object bean="createC"/>
    </component>
    <component doc:name="Create D">
        <spring-object bean="createD"/>
    </component>
</flow>
<spring:beans>
    <spring:import resource="classpath:spring/service.xml"/>
    <spring:bean id="createA" name="createA" class="my.app.components.CreateAService"/>
    <spring:bean id="createB" name="createB" class="my.app.components.CreateBService"/>
    <spring:bean id="createC" name="createC" class="my.app.components.CreateCService"/>
    <spring:bean id="createD" name="createD" class="my.app.components.CreateDService"/>
    ......
    ......
</spring:beans>
Now I am not sure how to write functional tests for these.
I went through the Functional Testing documentation on the Mule website, but the tests there are very simple.
Is functional testing not supposed to make actual backend updates through the DAO or service layers, or is it just an extension of unit testing where you mock the service layer?
I was under the impression that it could take in a request and use the in-memory Mule server to pass the request/response from one component to another in a flow.
Also note there is no outbound endpoint for any of our flows, as they are mostly fire-and-forget flows, and status updates are tracked through the DB updates the components make.
Also, why do I need to create separate Mule config XML files for tests? If I am not testing the flow XML that will actually be deployed to live, what's the point of the testing? Creating separate XML configs just for tests somewhat defeats the purpose to me...
Can some expert kindly elucidate a bit more and point to example tests similar to the ones we are using?
PS: the components inside Mule depend on external systems like web services, databases etc. For functional tests, do we need those running, or are we supposed to mock out those services/DB access?
Functional testing your Mule application is no different from testing any application that relies on external resources, like databases or JMS brokers, so you need to use the same techniques you would use with a standard application.
Usually this means stubbing the resources out with in-memory implementations, like HSQLDB for databases or a transient ActiveMQ in-memory broker for JMS. For a Mule application, this implies modularizing your configuration so "live" transports are defined in a separate file, which you replace with one that contains the in-memory variants at testing time.
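For example, a test-only connectors file could define the same connector names against an embedded in-memory broker; a minimal sketch, assuming the Mule 3 ActiveMQ connector (the file name is illustrative):
<!-- test-connectors.xml, substituted for the live connectors file during tests -->
<jms:activemq-connector name="Active_MQ_1" brokerURL="vm://localhost?broker.persistent=false"/>
<jms:activemq-connector name="Active_MQ_2" brokerURL="vm://localhost?broker.persistent=false"/>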
To validate that Mule interacted correctly with the resource, you can either read the resource directly using its Java client (for example JDBC or JMS), which is good if you want to ensure that purely non-Mule clients have no issue reading what Mule has dispatched, or use the MuleClient to read from these resources, or create flows that consume these resources and pass messages to the <test:component>.
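As a sketch of the MuleClient option, assuming Mule 3's FunctionalTestCase (the config files and queue name are illustrative, since the question's flows have no outbound endpoint):
import org.junit.Test;
import org.mule.api.MuleMessage;
import org.mule.tck.junit4.FunctionalTestCase;
import static org.junit.Assert.assertNotNull;

public class DispatchTest extends FunctionalTestCase {

    // Load the real app config plus the test-only connectors file.
    @Override
    protected String getConfigResources() {
        return "my-app-config.xml,test-connectors.xml";
    }

    @Test
    public void readsWhatMuleDispatched() throws Exception {
        // Read back what a flow dispatched, via the in-memory broker.
        MuleMessage msg = muleContext.getClient().request("jms://SomeStatusQ", 5000);
        assertNotNull(msg);
    }
}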
FYI These different techniques are explained and demonstrated in chapter 12 of Mule in Action, second edition.
Please refer to these links:
https://blog.codecentric.de/en/2015/01/mule-esb-testing-part-13-unit-functional-testing/
https://developer.mulesoft.com/docs/display/current/Functional+Testing
As you can see, it's an ordinary JUnit test extending the FunctionalMunitSuite class. There are two things we need to do in our test:
1. Prepare a MuleEvent object as the input to our flow, using the provided testEvent(Object payload) method.
2. Execute the runFlow(String flowName, MuleEvent event) method, specifying the name of the flow to test against and the event we just created in the first step.
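A minimal sketch of such a test, assuming MUnit 1.x on Mule 3 (the class name, config file and payload are illustrative):
import org.junit.Test;
import org.mule.api.MuleEvent;
import org.mule.munit.runner.functional.FunctionalMunitSuite;

public class ProcessOneFlowTest extends FunctionalMunitSuite {

    // Point MUnit at the same config that will be deployed to live.
    @Override
    protected String getConfigResources() {
        return "my-app-config.xml";
    }

    @Test
    public void createsOrderEntities() throws Exception {
        // Step 1: build the input event from a payload.
        MuleEvent inputEvent = testEvent("<processOrder>...</processOrder>");
        // Step 2: run the flow under test by name.
        MuleEvent result = runFlow("ProcessOneFLow", inputEvent);
        // Assert on result.getMessage() and/or side effects as needed.
    }
}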

How can I add Snapshot and test variations to my ivy.xml

I'm using ant+ivy+nexus to build and publish my Java OSGi projects (just good old jars, if you're unfamiliar with OSGi). After the usual mind-melting period one has when engaging with new tech, I've got a mostly functional system. But I now have two dimensions of artifact variation: snapshot/release and main/test.
The main/test dimension speaks for itself, really. The snapshot/release one is essential for publishing into Nexus in a Maven-friendly way (extremely useful for integration with open-source OSGi runtimes). So I have the following artifacts on a per-project basis, and I have many, many projects:
project-revision.xml (bp)
project-test-revision.xml (b)
project-revision-SNAPSHOT.xml (bp)
project-test-revision-SNAPSHOT.xml (b)
b = successfully building
p = successfully publishing
(I haven't yet got the test stuff publishing correctly)
It's taken me a while to get that far without duplicating code everywhere in my build scripts, but I've managed it... with one big caveat. For the SNAPSHOT branch I append "-SNAPSHOT" to all revisions. In Ant I manage to achieve this programmatically, but for Ivy I'm using a duplicated ivy.xml: ivy-SNAPSHOT.xml. This has
<info ... revision="x.x-SNAPSHOT">
Notice the -SNAPSHOT. Without this I could never get my
<ivy:deliver>
<ivy:resolve>
<ivy:publish>
chain of commands to correctly publish the artifact and Maven POM. (Remember I have a requirement to make this Maven friendly... I'll be damned if I actually end up using Maven to build it, mind!)
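For reference, the programmatic Ant side can be as small as a <condition> with an else value; a sketch, where the snapshot.build property is an illustrative switch:
<condition property="project.qualifier" value="-SNAPSHOT" else="">
    <isset property="snapshot.build"/>
</condition>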
Now I'm stuck introducing the test/main dimension. I'm ideally looking to publish
project-test-revision(-SNAPSHOT).jar
(Note the optional snapshot.) I really don't want to do this by specifying
<info ... module="project-test" ... >
as opposed to <info ... module="project" ... > in yet another Ivy file. If I went this route (like I've already started), I'd simply end up with loads of ivy-Option1-Option2...-OptionN.xml files, with each new two-value variation doubling the number of build and Ivy files. That's crap. There's got to be a better way; I just can't find it.
If you have managed to successfully get Ivy publishing artifacts with embellished names from one ivy.xml, would you mind posting the configuration snippets that achieve this? That would be extremely useful. (Don't forget Maven won't know about configurations, so I need to get the variations into the filename and POM.)
Many thanks
Alastair
OK, update: I've now got the artifact publishing. I struggled a little while I had the test conf extending the default conf: I'd get a complaint during publishing that the default configuration artifacts weren't present, something I don't care about while only publishing the test case. By making the confs independent but overlapping, I get back fine-grained control over what to publish.
BUT!!!!! There's still no viable test POM; that's NOT publishing yet. Well, actually it does publish, but it contains data for the non-test case, so this is still not Maven friendly. If anyone has suggestions on this, that'd be great.
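One avenue that might help with the test POM (a suggestion, not verified here): in recent Ivy versions, ivy:makepom accepts a conf attribute listing the configurations to include in the generated POM, so a test-only POM could be generated separately:
<ivy:makepom
    ivyfile="${project.generated.ivy.file}"
    pomfile="${project.artifact.dir}/MY_PROJECT-test.pom"
    conf="test"
/>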
Either way, here is the code I'm now using, in case it helps you too:
ivy.xml:
<info
    organisation="MY_ORGANISATION"
    module="MY_PROJECT"
    status="integration"
    revision="1.0-SNAPSHOT">
</info>
<configurations>
    <conf name="default" description="Default compilation configuration; main classes only" visibility="public"/>
    <conf name="test" description="Test-inclusive compilation configuration. Don't forget to also add the default compilation configuration where needed. This does not extend the default conf." visibility="public"/>
</configurations>
<publications>
    <artifact name="MY_PROJECT" type="jar" ext="jar" conf="default"/>
    <artifact name="MY_PROJECT" type="pom" ext="pom" conf="default"/>
    <artifact name="MY_PROJECT-test" type="jar" ext="jar" conf="test"/>
    <artifact name="MY_PROJECT-test" type="pom" ext="pom" conf="test"/>
</publications>
<dependencies>
    <dependency org="MY_ORGANISATION" name="ANOTHER_PROJECT" rev="1.0-SNAPSHOT" transitive="true" conf="*"/>
    <dependency org="junit" name="junit" rev="[4,)" transitive="true" conf="test->*"/>
</dependencies>
build.xml:
<property name="project.generated.ivy.file" value="SNAPSHOT_OR_RELEASE_IVY_XML_FILE" />
<property name="ivy.publish.status" value="RELEASE_OR_INTEGRATION" />
<property name="project.qualifier" value="-SNAPSHOT_OR_EMPTY" />
<property name="ivy.configuration" value="DEFAULT_OR_TEST" />

<target name="publish" depends="init-publish">
    <ivy:deliver
        deliverpattern="${project.generated.ivy.file}"
        organisation="${project.organisation}"
        module="${project.artifact}"
        status="${ivy.publish.status}"
        revision="${project.revision}${project.qualifier}"
        pubrevision="${project.revision}${project.qualifier}"
        conf="${ivy.configuration}"
    />
    <ivy:resolve conf="${ivy.configuration}" />
    <ivy:makepom
        ivyfile="${project.generated.ivy.file}"
        pomfile="${project.pom.file}"
    />
    <ivy:publish
        resolver="${ivy.omnicache.publisher}"
        module="${project.artifact}"
        organisation="${project.organisation}"
        revision="${project.revision}${project.qualifier}"
        pubrevision="${project.revision}${project.qualifier}"
        pubdate="now"
        overwrite="true"
        publishivy="true"
        status="${ivy.publish.status}"
        artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]"
        conf="${ivy.configuration}"
    />
</target>