Cypress result reports do not show a "passed" status - how to get them to show "pass"? - junit

Most test reporting frameworks show the number of tests run, the number passed, the number failed and the number skipped or ignored. But the Cypress reporter options, whether for the junit reporter or the mochawesome reporter, do not show the number passed. Is there a configurable option that would allow the cypress-junit-reporter to show the number of tests that passed?
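
One workaround, independent of any reporter option: the JUnit XML that these reporters emit usually records enough to derive the figure yourself, since each testsuite element carries tests, failures, errors and skipped counts, so passed = tests - failures - errors - skipped. A minimal Python sketch, assuming a standard JUnit XML report at a made-up path (some reporters omit the errors/skipped attributes, hence the defaults):

import xml.etree.ElementTree as ET

def passed_count(report_path: str) -> int:
    # passed = tests - failures - errors - skipped, summed over every <testsuite>
    root = ET.parse(report_path).getroot()
    passed = 0
    for suite in root.iter("testsuite"):           # root may be <testsuites> or a single <testsuite>
        tests = int(suite.get("tests", 0))
        failures = int(suite.get("failures", 0))
        errors = int(suite.get("errors", 0))       # attribute may be absent, hence the default
        skipped = int(suite.get("skipped", 0))     # same here
        passed += tests - failures - errors - skipped
    return passed

print(passed_count("results/junit-report.xml"))    # hypothetical report path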

Related

Getting the names of labels in GitHub Actions

I'm writing a GitHub workflow that runs the unit test cases, but if the PR has a WIP label I need to skip it. Which expression can I use to get the array of label names of the PR?

TestRail: how can I create a separate test plan from tests that failed during some test plan execution?

I have created a test plan consisting of some number of test cases. The QA has executed this plan. The results are in TestRail and I see that some tests were skipped and some tests failed. What I want is to create a special test plan to account for the tests that were not successful during this particular execution. I don't want successful tests to be included in that plan, and I don't want to do it manually.
Is there some routine in TestRail for this?
You could use the TestRail API (see the Gurock documentation) and do the following; a rough sketch is given below:
read the test run that is active and read all test cases from it,
from that response catch all failing tests and store them in some collection (List, Map),
filter on the "status_id" field; in TestRail's default status set 5 means Failed,
create a new test run and fill it with the failed test cases,
you add your failed test cases in the field "case_ids": [1, 2, 3, 4, 7, 8].
Hope this helps.
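
A rough sketch of that flow in Python, using the requests library against the TestRail API v2 endpoints get_tests/:run_id and add_run/:project_id. The instance URL, credentials, run id, project id and suite id are placeholders, and the failed status id assumes TestRail's default status set:

import requests

BASE = "https://example.testrail.io/index.php?/api/v2"   # placeholder instance URL
AUTH = ("user@example.com", "api-key")                    # placeholder credentials
HEADERS = {"Content-Type": "application/json"}

# 1. Read the executed test run (42 = placeholder run id).
resp = requests.get(f"{BASE}/get_tests/42", auth=AUTH, headers=HEADERS).json()
# Newer TestRail versions paginate and wrap the results in a "tests" key;
# older versions return a bare list.
tests = resp["tests"] if isinstance(resp, dict) else resp

# 2. Collect the case ids of the failed tests (5 = Failed in the default status set).
failed_case_ids = [t["case_id"] for t in tests if t["status_id"] == 5]

# 3. Create a new run that contains only those cases (1 = placeholder project id).
payload = {
    "suite_id": 1,                        # placeholder suite id
    "name": "Rerun of failed cases",
    "include_all": False,
    "case_ids": failed_case_ids,
}
requests.post(f"{BASE}/add_run/1", json=payload, auth=AUTH, headers=HEADERS)

Since the question is about a test plan, the same case_ids list can also be passed when adding a plan entry through add_plan_entry instead of creating a standalone run.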

When running SBT tests, what is the meaning of the different stats?

When running ScalaTest and JUnit tests in SBT I receive a short summary at the end:
Passed: Total 1359, Failed 0, Errors 0, Passed 1358, Skipped 1, Ignored 13, Pending 1
I understand the meaning of the Total count and of the Passed and Ignored tests.
But what is the meaning of:
Skipped? It looks like Ignored, but there must be some difference.
Pending? Aren't all the tests processed by the time the summary is given?
The difference between Failed and Errors?
Here is an explanation:
Passed means the test ran successfully.
Ignored means the test has been tagged as ignored, probably because it needs to be fixed.
Skipped means an assume condition was not satisfied, so the test was not run.
Pending means the test still needs to be written.
Difference between Failed and Errors: I'm not sure, but a failed test is one with a failed assertion (a TestFailedException when using ScalaTest), whereas an error is an unexpected exception.
The Total count is the sum of Passed, Pending, Failed and Errors, which matches the summary above: 1358 + 1 + 0 + 0 = 1359.

JMeter: How to map specific variable values from a CSV file to specific thread groups in a test plan

I have a test plan with 12 thread groups, each one being one test scenario. I want to use unique login credentials for each thread group, so I've created a CSV file, added a CSV Data Set Config element to each thread group and selected "All Threads" in "Sharing mode". Whenever I execute the test plan (all thread groups concurrently), the thread groups do not take the variable rows sequentially. I expected that the 1st thread group in the test plan would use the 1st row of variables in the CSV file, based on the post: JMeter test plan with different parameter for each thread
But that is not happening and I am unable to understand the pattern of variable allocation. Please help me resolve my issue.
My CSV file looks like below:
userName,password,message
userone,sample123,message1
usertwo,sample123,message2
.
.
so on...
The CSV Data Set Config element in each thread group is configured as in the attached screenshot (not reproduced here).
Thanks!
Threads and thread groups are different things. When you choose "All Threads" in "Sharing mode", it just means that all threads in the same thread group will share the CSV file; thread groups are always independent.
You have two simple options:
Use one thread group and control what users are doing with controllers. For example, a Throughput Controller allows you to control how many threads perform one script scenario or another within the same thread group.
Split your CSV so that each thread group has its own CSV file (a small helper for this is sketched after this list).
And there are more complicated options, for example:
Use the __CSVRead or __StringFromFile function, which allows you to read one line at a time. That way you can assign each thread group a range of lines to read, rather than reading the entire file.
If your usernames and passwords are predictable (e.g. user1, user2, etc.), you could use a counter and a range for each thread group.
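
For the "split your CSV" option, a small helper script can deal the master file out into one CSV per thread group. This is only a sketch: the file names are made up and the count of 12 groups is taken from the question, not from any JMeter feature.

import csv

GROUPS = 12                      # one thread group per scenario, as in the question

with open("credentials.csv", newline="") as src:           # the master file from the question
    reader = csv.reader(src)
    header = next(reader)                                   # userName,password,message
    rows = list(reader)

# Deal the rows out round-robin so every thread group still gets unique logins.
for group in range(GROUPS):
    with open(f"credentials_group_{group + 1}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(rows[group::GROUPS])

Each thread group then points its own CSV Data Set Config at its own credentials_group_N.csv file.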

How to measure number of asserts per line of code in SonarQube

We are attempting to get another view of our code coverage beyond the standard line and branch coverage. We would like to get the number of asserts per line/method/class in order to see whether we are just running through the code or actually checking for expected results.
So, how can we measure the number of asserts in a codebase in SonarQube?
There is a product called pitest that answers the goal of my question, and there is a Sonar plugin for pitest. So the answer to somehow verifying whether our tests actually check anything is: pitest.