Is it possible to log Selenium WebDriver test results to Quality Center?

I am looking for a way to integrate Selenium and QC to log results. Please help me with how this can be done. Is there any way to do this?
Thanks in advance
Rashmi

For simple storage of results, you can do what Roland recommended and just use the API to upload the results as an attachment to some entity of your choosing. To get a true "Run" record created in QC (like you get for Manual or QTP tests), you will need a Test in a Test Set that the run results can be associated with.
Perhaps the easiest option is to create a QTP/UFT wrapper test. This test does nothing more than invoke your Selenium test, process the results, and write them back to QC using the standard 'Reporter' object.
Another, more complicated, approach is to create a custom test type. This is an advanced topic; refer to the QC documentation for the process.
I recommend the QTP wrapper for ease of implementation and flexibility.

You can use QC's REST API or its OTA API.
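For the REST route, the basic flow is: authenticate against the ALM authentication point, then POST a "run" entity under your domain and project. Here is a minimal Java sketch of that flow, assuming ALM 11+; the host, DOMAIN/PROJECT, and run fields are placeholders, and the set of required fields (test instance, test set, owner, etc.) depends on your QC version and project customization, so treat it as a starting point rather than a drop-in solution.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    // Sketch only: logs one Selenium result to QC/ALM over REST.
    public class QcRunLogger {
        public static void main(String[] args) throws Exception {
            // The cookie manager keeps the LWSSO session cookie between calls.
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager())
                    .build();

            // 1. Authenticate with Basic auth against the authentication point.
            String credentials = Base64.getEncoder()
                    .encodeToString("user:password".getBytes());
            HttpRequest login = HttpRequest.newBuilder()
                    .uri(URI.create("https://almhost:8443/qcbin/authentication-point/authenticate"))
                    .header("Authorization", "Basic " + credentials)
                    .GET()
                    .build();
            client.send(login, HttpResponse.BodyHandlers.discarding());

            // 2. Create a run entity. The <Entity> XML format is the generic
            //    ALM REST shape, but which fields are mandatory is
            //    project-specific -- check your project's customization.
            String runXml =
                "<Entity Type=\"run\"><Fields>"
              + "<Field Name=\"name\"><Value>Selenium nightly</Value></Field>"
              + "<Field Name=\"status\"><Value>Passed</Value></Field>"
              + "</Fields></Entity>";
            HttpRequest createRun = HttpRequest.newBuilder()
                    .uri(URI.create("https://almhost:8443/qcbin/rest/domains/DOMAIN/projects/PROJECT/runs"))
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(runXml))
                    .build();
            HttpResponse<String> response =
                    client.send(createRun, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }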

Related

Add Test Steps in JUnit to get better reporting

I am running UI tests with Jest and using a custom reporter to generate a JUnit.xml file at the end of the run (https://github.com/jest-community/jest-junit), so that my Azure pipeline can read it and generate nice analytics. My test framework is organized around test suites that each represent a big functionality; each aspect of that functionality is checked by a test within the suite (a check might require multiple steps), and I would like to show each of those steps. I think this would make the report more readable for anyone looking at it, and it would make it very easy to get context on why a test failed.
I tried putting an assertion at each step, but JUnit only records the assertion that failed.
I also tried changing the way my tests are organized so that each step is a test itself. But in Jest, and it seems in a lot of other runners as well (at least in Node), it does not seem possible to easily guarantee that tests run in a specific sequence. It is also really verbose to write suites this way.
Does anybody have an idea how I could achieve this granularity?
Thank you.

Which kind of test should I use for a library?

I'm developing a PHP library that I'd like to use in different projects. The library uses a REST-like service in the background. I don't want to write tests for the service API, but for the library.
Would I need to write unit tests, or functional tests? Since it is a library, I won't write acceptance tests - I hope this is correct.
I don't know if this is important for the issue, but the library needs to log into the service API and uses an API key for subsequent operations. Also, when the library gets tested, the preceding operations matter. It is a designer tool and I have operations like 'move rectangle', 'rotate rectangle', and so on, and I would like to test several operations in a sequence that should produce a certain result.
I think that this is a kind of functional test. Or do I need both? Can unit tests work with a service in the background?

How to take a screenshot on test failure with JUnit 5

Can someone tell me please: how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with BeforeEach and AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots; see here for an example.
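For illustration, here is a minimal sketch of a JUnit 5 extension that captures the desktop when a test fails, using java.awt.Robot; the class name and output directory are arbitrary choices, and if you drive a browser with Selenium you would swap the Robot call for TakesScreenshot. Register it on the base test class with @ExtendWith(ScreenshotOnFailure.class).

    import org.junit.jupiter.api.extension.ExtensionContext;
    import org.junit.jupiter.api.extension.TestWatcher;

    import javax.imageio.ImageIO;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;

    // Sketch: fires only for failed tests; passing tests are untouched.
    public class ScreenshotOnFailure implements TestWatcher {

        @Override
        public void testFailed(ExtensionContext context, Throwable cause) {
            try {
                // Capture the whole primary screen. Note: this throws in
                // headless environments, which is exactly the caveat below.
                Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                BufferedImage capture = new Robot().createScreenCapture(screen);
                // Sanitize the display name so it is a valid file name.
                String name = context.getDisplayName().replaceAll("\\W+", "_");
                File target = new File("screenshots", name + ".png");
                target.getParentFile().mkdirs();
                ImageIO.write(capture, "png", target);
            } catch (Exception e) {
                // Screenshots are best-effort; never let them mask the real failure.
                System.err.println("Could not capture screenshot: " + e.getMessage());
            }
        }
    }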
But I am very much wondering about the real problem you are trying to solve this way. In case you have not considered it yet: the main intention of JUnit is to provide a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot there. But "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking at textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So the real answer here is: step back from what you are doing right now, and reconsider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; the recommended way is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool; these will generate the reports for you automatically.
You can look here for cobertura-maven-plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet a certain quality bar (in terms of test coverage, code complexity, etc.), then the project build (war/ear) fails automatically.
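As an illustration of that last point, a cobertura-maven-plugin configuration along these lines can fail the build when coverage drops below a threshold; the 80% rates below are placeholder values to tune to your own quality gate:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.7</version>
      <configuration>
        <check>
          <lineRate>80</lineRate>
          <branchRate>80</branchRate>
          <!-- break the build when the rates are not met -->
          <haltOnFailure>true</haltOnFailure>
        </check>
      </configuration>
      <executions>
        <execution>
          <goals>
            <!-- binds cobertura:check into the build lifecycle -->
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>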

Using iotagent-node-lib

I am receiving data from our sensors using GET. The request format is: http://IP:PORT/PATH?Operation=InsertObservation&value=0.0012&unit_id=123456&sensor_id=75648. How can I write this data to the Orion Context Broker using iotagent-node-lib?
The fast answer is: "using the iotagentLib.update() method". The slow and complete one implies some other steps you will need to complete to have a fully working agent. I suggest you take a look at the code of https://github.com/telefonicaid/sigfox-iotagent. That's one of the latest IoT Agents we started to develop, and it makes use of the IoT Agent Node Lib. Sigfox callbacks use HTTP calls much like your approach, so it should be really easy to modify the Sigfox Agent's code to fit your needs. Most of the interesting code is in this file:
https://github.com/telefonicaid/sigfox-iotagent/blob/develop/lib/sigfoxHandlers.js
I think you can reuse most of the code, excluding the sigfoxParser. If you have further doubts, the iotagent-node-lib documentation should help you resolve them.

Cucumber examples reuse in different features/scenarios

I've been using Cucumber for a while and I've stumbled upon a problem:
Actual question:
Is there a solution for importing the examples from a single file/db, to be used by Cucumber specifically as examples?
Or, alternatively, is there a way to define a variable while already in-step to be an example?
Or, alternatively again, is there an option to pass the examples in as variables when I launch the feature file/scenario?
The Problem:
I have a couple of scenarios where I would like to use exactly the same examples, over and over again.
It sounds rather easy, but the examples table is very large (more specifically, it contains all the countries in the world and their respective continents). Thus repeating it would be very troublesome, especially if the table needs changing (I would need to change all instances of the table separately).
Complication:
I have a rerun function that knows when a specific example failed and reruns it after the test is done.
Restrictions:
I do not want to edit my rerun file
Related:
I've noticed that there is already an open discussion about importing it from csv here:
Importing CSV as test data in Cucumber?
However, that discussion doesn't help me, because my rerun function only knows how to work with examples, and the solution suggested there breaks that.
Thank you!
You can use CSV and other external data files with QAF using a different BDD syntax.
If you want to use Cucumber steps or the Cucumber runner, you can use QAF-cucumber with BDD2 (preferred) or Gherkin syntax. QAF-cucumber enables external test data and other QAF features with Cucumber.
Below is an example feature file using BDD2 syntax; it can be run with the TestNG or Cucumber runner.
    Feature: feature uses external data file

    #datafile:resources/${env}/testdata.csv
    #regression
    Scenario: Another scenario exploring different combination using data-provider
        Given a "${precondition}"
        When an event occurs
        Then the outcome should "${be-captured}"
The testdata.csv file may look like this:

    TestcaseId,precondition,be-captured
    123461,abc,be captured
    123462,xyz,not be captured
You can run it using the TestNG or Cucumber runner, and use any of the built-in data providers or a custom one as well.