Test framework using spreadsheet for Selenium RC / JUnit / Java

We have a smoke test that we run every morning to check a number of applications; it involves logging in, executing a simple operation and logging out.
At the moment the test is a collection of Selenium IDE scripts that were exported to Selenium RC Java code and run under JUnit inside NetBeans.
What we would like to do is drive the test from a spreadsheet, i.e. each line of the spreadsheet holds the application URL, the login parameters, some titles and text to check, and the logout sequence.
At the moment, our simple POC has one JUnit test class of the form:
@Test
public void testTestSmokeCheck() throws Exception { ... }
This calls a class which loops through the spreadsheet and does:
Selenium sel = new DefaultSelenium("localhost", 4444, "*firefox", appUrl); // arguments illustrative
sel.start();
sel.open("/");
...
sel.close();
sel.stop(); // ends the RC session; close() only closes the browser window
for each line of the spreadsheet.
This works, but the problem is that many lines of the spreadsheet are compressed into one JUnit test, which passes or fails as a whole.
What we would like is for each line of the spreadsheet to be a separate JUnit test.
This way each line of the spreadsheet would show up as either red or green, which would be far more meaningful.
Any ideas how to achieve that?

Not exactly a spreadsheet, but you might want to look into FitNesse. It drives tests from tables on Wiki pages, and prints out red/green pass/fail.
You can do multiple pages and test suites, which should solve your problem.

You can use the Parameterized runner (the JavaDoc has an example). If a test fails, the failure message will include the index of the data, which could correspond to the row in the spreadsheet.
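A minimal sketch of that approach, assuming a hypothetical SpreadsheetReader helper that loads each spreadsheet line into an Object[] (the host, port, browser string and column layout are illustrative):

import java.util.List;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

@RunWith(Parameterized.class)
public class SmokeCheckTest {

    @Parameters
    public static List<Object[]> rows() {
        // Hypothetical helper: one Object[] per spreadsheet line.
        return SpreadsheetReader.readRows("smoke-tests.xls");
    }

    private final String appUrl;
    private final String expectedTitle;

    public SmokeCheckTest(String appUrl, String expectedTitle) {
        this.appUrl = appUrl;
        this.expectedTitle = expectedTitle;
    }

    @Test
    public void smokeCheck() {
        // Each spreadsheet row now runs, and is reported, as its own test.
        Selenium sel = new DefaultSelenium("localhost", 4444, "*firefox", appUrl);
        sel.start();
        try {
            sel.open("/");
            Assert.assertEquals(expectedTitle, sel.getTitle());
            // ... log in, check text, log out ...
        } finally {
            sel.stop();
        }
    }
}

If you're on JUnit 4.11 or later, @Parameters(name = "row {index}") gives each row a readable label in the runner, so a red result points straight at the offending spreadsheet line.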
If the number of rows isn't huge, I recommend writing a test case per row and getting rid of the spreadsheet. Sooner or later you will want to do specific logic for some of the pages, and a spreadsheet will no longer model that well.

Related

How to import Jira X-Ray data without creating new tests

My team is just getting started with Xray, and we are setting up our pipelines. However, while doing this I noticed that if I submit a JUnit XML file to Xray via the REST API, it creates new tests for any test data that isn't already in the system.
Is there a way to have Xray ignore results for tests that don't exist in the test execution? I don't want it constantly creating extra tests.
For example:
(Jira/Xray server) Test Execution MyExecution has test testA
From the client, I submit a JUnit XML file containing results for testA and testB against the MyExecution Test Execution
testB now exists on the server under MyExecution
I would like to be able to submit the JUnit XML file without it creating the extra tests.
Whenever you import automation results using the REST API, or any of the available CI plugins, Xray will auto-provision "generic" Test entities.
The flow is detailed here.
Xray tries to find a unique identifier for the automated test; in the case of JUnit, it's based on the full class name plus the name of the test method (e.g. com.acme.LoginTest.testValidLogin), and this becomes part of the Generic Definition field. The process for JUnit is described in more detail here.
The behaviour for other test automation frameworks and report formats is similar and is detailed on the respective documentation pages.
If a matching "generic" Test is found, it is reused and a Test Run is created against it. Otherwise, the Test is auto-provisioned.
This process isn't configurable. However, in theory, if the user you use to submit automation results isn't allowed to create Test issues, you may get the behaviour you need.
Things like this are usually not configurable because they are a consequence of applying good practices, normally discussed internally with the team(s).

CODED UI TESTS: is there a way to add a warning in the HTML report?

In my Coded UI Test project, I need to check whether a few labels or messages are consistent with the context. But those checks are not critical, and if they fail I need to report them only as warnings.
Note that I'm using nested ordered tests so I can run a single global ordered test with vstest.console.exe and get the overall test coverage report in one shot.
Until now I was creating assertions to check those consistencies, but an assertion failure leads to a test failure, then to an ordered test failure, and then to a playback stop.
I tried changing the Playback.PlaybackSettings.ContinueOnError value before and after the assertion: this works as I expect, in that the assertion is reported as a warning in the HTML report file. But it still causes the ordered test to stop, and then my global ordered test chain fails...
I also tried using TestContext.WriteLine instead of creating an assert, but it seems that this is not output to the HTML report.
So my question is:
is there any way to create an assertion that acts only as a warning, is output in the HTML report file, and doesn't lead to a test failure?
Thanks a lot for any answer and help on this ;)
So I got my solution by developing my own warning engine to integrate warnings into the test report, because I found no existing way to do it with the current Coded UI Test assertion engine.
I'll try to take some time to post the generic parts of the code structure with the comments translated into English (we're French, so the comments are in French for now...), but here are the main steps:
1. Create a template based on the original UITestActionLog.html report structure of the Coded UI Test engine, keeping only the start block plus the JavaScript functions and CSS declarations.
2. Create an assertion class with a main function that manages the insertion of a warning HTML block into the report first created from the template.
3. Create custom assert functions that call the main function wherever needed at runtime, plus a custom Stopwatch to inject the elapsed time into the report (because I couldn't find a way to get the elapsed time directly from the Coded UI Test engine).
That's it.
Just a proposition as one way to do it, maybe not the best one, but it worked for me. I'll try to take time to post code blocks to make it clearer.

JUnit Eclipse View

I am running JUnit tests using Eclipse Luna. I have implemented a @Test method. Within the @Test method I loop over multiple records and use Assert.assertEquals for non-XML messages and XMLAssert.assertXMLEqual for XML messages.
The problem is, when I run the JUnit test with a single test case or with multiple test cases, I do not get the proper result in the JUnit view. It always shows "Runs: 1/1" and does not show the correct count of runs. Even the failures and successes are not shown correctly. Am I missing something here?
If you only have a single @Test method, there is only one thing running, so Runs: 1/1 is correct. If you want more runs to show up, put each assertion in its own test.
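For example, a minimal sketch splitting the loop into separate tests (the fixture values are placeholders standing in for your records):

import org.custommonkey.xmlunit.XMLAssert;
import org.junit.Assert;
import org.junit.Test;

public class MessageTests {

    // Placeholder fixtures standing in for the records read in the loop.
    private final String expectedText = "OK";
    private final String actualText = "OK";
    private final String expectedXml = "<msg>OK</msg>";
    private final String actualXml = "<msg>OK</msg>";

    @Test
    public void nonXmlMessageMatches() {
        Assert.assertEquals(expectedText, actualText);
    }

    @Test
    public void xmlMessageMatches() throws Exception {
        // XMLUnit comparison; now reported separately from the non-XML check.
        XMLAssert.assertXMLEqual(expectedXml, actualXml);
    }
}

The JUnit view then reports Runs: 2/2, with each comparison going red or green on its own. If you have many records, the Parameterized runner discussed in the first thread above scales this to one reported test per record.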

Modify surefire custom reporting console logging

At our organization we follow a DSL (domain-specific language) model, where users write tests in a spreadsheet and the underlying Java code interprets and executes those instructions.
Now here is the problem.
We have a single test method in our class which uses a data provider to read all the test cases from the file and execute their instructions.
Naturally, when Surefire executes and prints the results, it says:
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Is there a way to manipulate this in TestNG so that each custom test method from the Excel file is picked up by the system as a legitimate test method when the overall suite executes?
I actually made the group migrate from JUnit to TestNG, and they are questioning whether the DataProvider feature can handle that, and I have no response for it :(
So essentially we want to break the binding between tests and Java methods by using external data providers, but at the same time preserve the number of test methods executed as provided in the Excel spreadsheet.
If you can give me any direction it would be most helpful to me.
Attaching my spreadsheet here.
My Java file has only one test method:
@Test
public void runSuite() {
    // Read each test case from the file; I want the build server to
    // recognize each one somehow as an individual test method.
}
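The DataProvider feature can in fact give you this: TestNG invokes the @Test method once per row returned by the provider, and Surefire counts every invocation, so "Tests run" reflects the number of spreadsheet rows. A sketch, where ExcelReader and the two-column row layout are hypothetical:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SpreadsheetSuite {

    @DataProvider(name = "spreadsheetRows")
    public Object[][] rows() {
        // Hypothetical helper: one Object[] per row of the Excel file.
        return ExcelReader.readRows("suite.xlsx");
    }

    // Runs once per row; each invocation is counted as a separate test.
    @Test(dataProvider = "spreadsheetRows")
    public void runInstruction(String testName, String instruction) {
        // Interpret and execute one DSL instruction from the spreadsheet.
    }
}

This keeps a single Java method while reporting, e.g., Tests run: 25 for a 25-row spreadsheet, and a failing row is reported with its parameter values without hiding the results of the other rows.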

spock: need a hook to perform some setup steps before any test class executes

I have several Spock test classes grouped together in a package. I am using JUnit 4.10. Each test class contains several feature test methods.
I want to perform some setup steps (such as loading data into a DB, starting up a web server) before I run any test case, but only once when the testing starts.
I want this "OneTimeSetup" method to be called only once whether:
I run all the test classes in the package (for example if they are grouped in a Test Suite)
I run a few test classes
I run only one test class
I run only a certain feature method within a test class
From reading other posts on SO, it seems that this is what TestNG's @BeforeSuite does.
I am aware of Spock's setupSpec() and cleanupSpec() methods, but they only work within a given test class. I am looking to do something like "setupTestSuite()." How can this be achieved in Spock?
You can write a global extension, use a JUnit test suite, call a static method in a helper class (say from setupSpec) that does its work just once, or let the build tool do the job.
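A minimal sketch of the static-helper option (the setup steps are placeholders): each spec calls OneTimeSetup.ensureInitialized() from its setupSpec(), and the guard makes the expensive work run at most once per JVM, however many specs or feature methods are selected.

public final class OneTimeSetup {

    private static boolean initialized = false;

    private OneTimeSetup() {}

    public static synchronized void ensureInitialized() {
        if (initialized) {
            return;
        }
        loadTestDataIntoDb();  // placeholder for the DB load
        startWebServer();      // placeholder for the web server startup
        initialized = true;
    }

    private static void loadTestDataIntoDb() { /* ... */ }

    private static void startWebServer() { /* ... */ }
}

The trade-off versus a global extension is that there is no matching one-time cleanup hook; a JVM shutdown hook (Runtime.getRuntime().addShutdownHook) or the build tool can handle teardown.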