Add Test Steps in JUNIT to get better reporting - junit

I am running UI testing with Jest and I am using a custom reporter to generate a JUnit XML file at the end of the run https://github.com/jest-community/jest-junit , so that my Azure pipeline can read it and generate nice analytics. My test framework is organized around test suites that each represent a big piece of functionality; each aspect of that functionality is then checked by a test contained in the suite (that check might require multiple steps), and I would like to show each of those steps. This way, I think the report would be more readable for anyone looking at it, and it would be very easy to get context on why a test failed.
I tried putting an assertion at each step, but the JUnit report only records the assertion that failed.
I also tried changing the way my tests are organized and making each step a test in itself. But in Jest, and it seems in a lot of other runners as well (at least in Node), it is not easy to guarantee that tests run in a specific sequence. Also, it is really verbose to write suites like this.
Does anybody have an idea how I could achieve this granularity?
Thank you.

Related

How to take screenshot on test failure with junit 5

Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots; see here for an example.
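For instance, here is a minimal sketch of a JUnit 5 extension that captures the whole desktop when a test fails; the class name and target folder are made up for this example, it needs a non-headless environment, and in a Selenium context you would normally use the driver's TakesScreenshot capability instead of java.awt.Robot:

    import org.junit.jupiter.api.extension.ExtensionContext;
    import org.junit.jupiter.api.extension.TestWatcher;

    import javax.imageio.ImageIO;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.io.File;

    // Saves a PNG of the screen whenever a watched test fails.
    public class ScreenshotOnFailureExtension implements TestWatcher {

        @Override
        public void testFailed(ExtensionContext context, Throwable cause) {
            try {
                Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                File target = new File("target/screenshots",
                        context.getDisplayName().replaceAll("\\W+", "_") + ".png");
                target.getParentFile().mkdirs();
                ImageIO.write(new Robot().createScreenCapture(screen), "png", target);
            } catch (Exception e) {
                // Never let reporting break the test run itself.
                System.err.println("Could not capture screenshot: " + e.getMessage());
            }
        }
    }

Registering it on the base test class with @ExtendWith(ScreenshotOnFailureExtension.class) means every subclass inherits it.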
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you have figured that out yet, but the main intention of JUnit is to provide you with a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking for textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and reconsider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (tests passed/failed, code coverage, code complexity, etc.) automatically using the tools/plugins below.
You can use Cobertura maven plugin or Sonarqube code quality tool so that these will automatically generate the reports for you.
You can look here for Cobertura-maven-plugin and here for Sonarqube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet certain quality criteria (in terms of test coverage, code complexity, etc.) then the project build (war/ear) fails automatically.

testlink integration with robotframework

I am looking for testlink integration with robotframework but no success so far.
The problem I have is how to write tests in Robot Framework and how to link them with TestLink. I mean, what is the format for writing tests in TestLink so that Robot Framework will understand and execute them, and how can this be achieved?
I have looked at https://code.google.com/p/robotframework-tmlibrary/ but had no luck figuring out how to use it properly.
Any help is appreciated.
Use this,
https://github.com/dmizverev/robot-framework-library/blob/master/listener/TestLinkListener.py
or
https://github.com/hayzer/robotframework2testlink
These two implementations use a Robot Framework listener together with the TestLink APIs. The end_test() method does the job of updating the result.
If you don't have the test suite, test case, or test plan created before running the Robot test execution, that can be automated too: implement the start_suite() method and use the TestLink API to create the suite, and implement the start_test() method to create the test case and associate it with the test plan.
While running robot from the CLI, append --listener followed by the listener class name.
Hope this helps.
There is another approach to this.
This is useful if you have tons of manual tests in TestLink that you automate independently with Robot Framework, or if you have two legacy sets of tests (manual and automated) that you are trying to bring together.
You can use Jenkins to integrate Robot and TestLink. You'll have to have the same test case names in the TestLink and Robot suites. You'll also have to define an additional custom keyword in TestLink for automated tests that duplicates the test case name. There are both a TestLink Jenkins plugin and a Robot Framework Jenkins plugin available for this. There are also some useful forks that support test platforms from TestLink or custom test plan fields.
Basically, you'll have to obtain the TC names from a test plan in the Jenkins job and feed them to Robot in any way you like (e.g. -t "$TCNAME"). After that, the execution results can be passed back to TestLink.
This approach lets you have independent test structures in Robot and TestLink and integrate better with other processes during development (pipelines in CI/CD, test planning for huge test suites), but you'll have to do the additional work of tracking two sets of tests.

Is It Possible to log Selenium Webdriver Test Results to Quality Center?

I am looking for a way to integrate Selenium and QC to log results. Please help me with how this can be done. Is there any way to do this?
Thanks in advance
Rashmi
For simple storage of results, you can do what Roland recommended and just use the API to upload the results as an attachment to some entity of your choosing. To get a true "Run" record created in QC (like you get for Manual or QTP tests), you will need to have a Test in a Test Set that can be associated with the run results.
Perhaps the easiest option is to create a QTP/UFT wrapper test. This test will do nothing more than invoke your Selenium test, process the results, and then write those results back to QC using the standard 'Reporter' object.
Another, more complicated, approach is to look at creating a custom test type. This is an advanced topic, and you can refer to the QC documentation on the process.
I recommend the QTP wrapper for ease of implementation and flexibility.
You can use QC's REST API or its OTA API.
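If you go the REST route, a rough sketch in Java of posting a run result might look like the following; the host, domain, project, payload fields, and resource paths are placeholders, and the exact entities and authentication flow depend on your ALM version, so check the REST API reference before relying on any of it:

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class QcResultLogger {
        public static void main(String[] args) throws Exception {
            // The cookie manager keeps the session cookie returned by the authentication call.
            HttpClient client = HttpClient.newBuilder().cookieHandler(new CookieManager()).build();

            // 1. Authenticate with basic credentials (placeholder URL and credentials).
            HttpRequest login = HttpRequest.newBuilder()
                    .uri(URI.create("https://alm.example.com/qcbin/authentication-point/authenticate"))
                    .header("Authorization", "Basic "
                            + Base64.getEncoder().encodeToString("user:password".getBytes()))
                    .GET()
                    .build();
            client.send(login, HttpResponse.BodyHandlers.discarding());

            // 2. Create a run entity (illustrative payload; real runs need more fields,
            //    such as the test instance and test set they belong to).
            String runXml = "<Entity Type=\"run\"><Fields>"
                    + "<Field Name=\"name\"><Value>Selenium nightly</Value></Field>"
                    + "<Field Name=\"status\"><Value>Passed</Value></Field>"
                    + "</Fields></Entity>";
            HttpRequest createRun = HttpRequest.newBuilder()
                    .uri(URI.create("https://alm.example.com/qcbin/rest/domains/DEFAULT/projects/MyProject/runs"))
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(runXml))
                    .build();
            HttpResponse<String> response = client.send(createRun, HttpResponse.BodyHandlers.ofString());
            System.out.println("QC responded with HTTP " + response.statusCode());
        }
    }

The OTA API, by contrast, is a COM interface, so it is usually driven from VBScript (or through a COM bridge) rather than plain Java.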

Cucumber examples reuse in different features/scenarios

I've been using Cucumber for a while and I've stumbled upon a problem:
Actual question:
Is there a solution to import the examples from a single file/db using cucumber specifically as examples?
Or alternatively, is there a way to define a variable as an example while already inside a step?
Or alternatively again, is there an option to send the examples as variables when I launch the feature file/scenario?
The Problem:
I have a couple of scenarios where I would like to use exactly the same examples, over and over again.
It sounds rather easy, but the examples table is very large (more specifically, it contains all the countries in the world and their corresponding continents). Thus repeating it would be very troublesome, especially if the table needs changing (I would need to change all the instances of the table separately).
Complication:
I have a rerun function that knows when a specific example failed and reruns it after the test is done.
Restrictions:
I do not want to edit my rerun file
Related:
I've noticed that there is already an open discussion about importing it from csv here:
Importing CSV as test data in Cucumber?
However, that discussion does not help me, because my rerun function only knows how to work with examples, and the solution suggested there breaks that.
Thank you!
You can use CSV and other external data sources with QAF using a different BDD syntax.
If you want to use Cucumber steps or the Cucumber runner, you can use QAF-cucumber with BDD2 (preferred) or Gherkin syntax. QAF-cucumber enables external test data and other QAF features with Cucumber.
Below is an example feature file that uses BDD2 syntax and can be run using the TestNG or Cucumber runner.
Feature: feature uses external data file
  #datafile:resources/${env}/testdata.csv
  #regression
  Scenario: Another scenario exploring different combination using data-provider
    Given a "${precondition}"
    When an event occurs
    Then the outcome should "${be-captured}"
The testdata.csv file may look like this:
TestcaseId,precondition,be-captured
123461,abc,be captured
123462,xyz,not be captured
You can run it using the TestNG or Cucumber runner, and you can use any of the built-in data providers or a custom one as well.
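For reference, here is a minimal sketch (in plain cucumber-java, so the glue class and method names below are illustrative and not part of QAF) of step definitions that the steps above could bind to, assuming the ${...} placeholders have been filled in from the CSV row by the time the steps are matched:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    public class OutcomeSteps {

        private String precondition;
        private boolean eventOccurred;

        @Given("a {string}")
        public void aPrecondition(String precondition) {
            this.precondition = precondition;
        }

        @When("an event occurs")
        public void anEventOccurs() {
            eventOccurred = true;
        }

        @Then("the outcome should {string}")
        public void theOutcomeShould(String expected) {
            // Placeholder check; replace with the real assertion for your application.
            if (!eventOccurred) {
                throw new AssertionError("expected outcome to " + expected
                        + " for precondition " + precondition);
            }
        }
    }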

"smart" JUnit test ordering

I want to add some hints to my build, to run certain tests "first" without re-running them later.
Simply add class names to a "priority" string in an input parameter to my test task, or
Have JUnit's testers be smart enough to remember/persist failing test class names, so that the next time around the builder runs those first (see the sketch below).
What is the most idiomatic way of doing this in Ant?
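To make the second option concrete, here is a rough, hypothetical sketch of a JUnit 4 RunListener that persists failing test class names to a plain text file; how to plug something like this into Ant's <junit> task (which expects result formatters rather than run listeners) is exactly the part I am unsure about:

    import org.junit.runner.Description;
    import org.junit.runner.notification.Failure;
    import org.junit.runner.notification.RunListener;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Appends the class name of every failing test to build/failed-tests.txt,
    // so a later build could read that file and schedule those classes first.
    public class FailureRecordingListener extends RunListener {

        private static final Path FAILED = Paths.get("build", "failed-tests.txt");

        @Override
        public void testFailure(Failure failure) throws Exception {
            Description description = failure.getDescription();
            Files.createDirectories(FAILED.getParent());
            Files.write(FAILED,
                    (description.getClassName() + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

With plain JUnitCore this could be registered via JUnitCore.addListener(new FailureRecordingListener()); driving it from the Ant <junit> task would presumably need a custom JUnitResultFormatter or a small wrapper instead.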
The following tools might help you to achieve the desired JUnit test execution order, but they depend on Eclipse usage:
Continuous Testing for Eclipse (CT-Eclipse)
JUnit Max
infinitest
I have not used any of those tools, and I have no Ant-only solution.
You might consider these related posts:
Run JUnit automatically when building Eclipse project
Starting unit tests automatically after saving a file