JUnit Eclipse View

I am running JUnit tests using Eclipse Luna. I have implemented a single @Test method that loops over multiple records, using Assert.assertEquals for non-XML messages and XMLAssert.assertXMLEqual for XML messages.
The problem is that whether I run the JUnit test against a single record or many, the JUnit view always shows "Runs: 1/1" and never the actual count of checks, and the failure and success counts are not shown correctly either. Am I missing something here?

If you only have a single @Test method, there is only one test running, so "Runs: 1/1" is correct. If you want more runs to show up, put each assertion in its own test.
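A minimal sketch of that split, assuming XMLUnit's XMLAssert from the question; buildPlainMessage and buildXmlMessage are hypothetical stand-ins for the asker's record-producing code:

import static org.junit.Assert.assertEquals;

import org.custommonkey.xmlunit.XMLAssert;
import org.junit.Test;

public class MessageTest {

    // Each check is its own @Test method, so the JUnit view
    // counts and reports each one separately.
    @Test
    public void plainMessageMatches() {
        assertEquals("expected body", buildPlainMessage());
    }

    @Test
    public void xmlMessageMatches() throws Exception {
        XMLAssert.assertXMLEqual("<expected/>", buildXmlMessage());
    }

    // Hypothetical helpers standing in for the real record sources.
    private String buildPlainMessage() { return "expected body"; }

    private String buildXmlMessage() { return "<expected/>"; }
}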

Related

CODED UI TESTS: is there a way to add a warning in html report?

In my Coded UI Test project, I need to check whether a few labels or messages are consistent with the context. Those checks are not critical, so when they fail I only want to output them as warnings.
Note that I'm using nested ordered tests so that a single global ordered test can be run with vstest.console.exe, giving the overall test coverage report in one shot.
Until now I was creating assertions to check those consistencies, but an assertion failure leads to a test failure, then to an ordered test failure, and then to a playback stop.
I tried changing the Playback.PlaybackSettings.ContinueOnError value before and after the assertion: this works as I expect, in that the assertion is reported as a warning in the HTML report file. But it still causes the ordered test to stop, and then my global ordered test chain fails...
I also tried using TestContext.WriteLine instead of creating an assert, but it seems that this is not output in the HTML report.
So my question is:
Is there any way to create an assertion that acts only as a warning, is output in the HTML report file, and does not lead to a test failure?
Thanks a lot for any answer and help on this ;)
So I got my solution by developing my own warning engine to integrate warnings into the test report, because I found no existing solution for this in the current Coded UI Test assertion engine.
I'll try to take some time to post the generic parts of the code structure with the comments translated into English (we're French, so the comments are in French for now...), but here are the main steps:
1. Create a template based on the original UITestActionLog.html report structure of the Coded UI Test engine, keeping only the start block, the JavaScript functions, and the CSS declarations.
2. Create an assertion class with a main function that manages inserting a warning HTML block into the report first created from the template.
3. Create custom assert functions that call the main function wherever needed at runtime, plus a custom stopwatch to inject the elapsed time into the report (because I couldn't find a way to get the elapsed time back directly from the Coded UI Test engine).
That's it.
This is just one proposed way to do it, maybe not the best one, but it worked for me. I'll try to take time to post code blocks to make this clearer.
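The Coded UI engine itself is C#, but the report-insertion step at the heart of this is language-agnostic. Here is a minimal Java sketch of the idea, assuming a report template containing a <!--WARNINGS--> placeholder; the marker name and class are made up for illustration:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: appends a warning entry to an HTML report
// built from a template that contains a <!--WARNINGS--> placeholder.
public final class WarningReport {

    private static final String MARKER = "<!--WARNINGS-->";

    public static void addWarning(Path report, String message, long elapsedMillis)
            throws IOException {
        String html = new String(Files.readAllBytes(report), StandardCharsets.UTF_8);
        String entry = "<div class=\"warning\">[" + elapsedMillis + " ms] "
                + message + "</div>\n" + MARKER;
        // Re-insert the marker after the new entry so later warnings land below it.
        Files.write(report, html.replace(MARKER, entry).getBytes(StandardCharsets.UTF_8));
    }

    private WarningReport() {}
}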

Don't let test stop on failure

I'm looking for the best practice for the following (simplified) scenario:

@Test
public void someTest() {
    for (String someText : someTexts) {
        Assert.assertTrue(checkForValidity(someText));
    }
}

This test iterates through several thousand texts, and in this case I don't want it to stop at each failure. I want the errors to be buffered and, if there were any, the test to fail at the end. Does JUnit have something built in for this?
First of all, this is not really the right way to implement it. JUnit lets you parameterize tests by defining a collection of inputs/outputs for the Parameterized test runner. Doing it this way makes each sample a unique test case, so the test report clearly states which samples passed and which failed.
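A minimal sketch of the parameterized version (the name attribute needs JUnit 4.11 or later); the sample texts and the checkForValidity body are hypothetical stand-ins for the asker's data and logic:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ValidityTest {

    // Each entry becomes its own test case in the report;
    // the name pattern puts the failing sample in the test name.
    @Parameters(name = "{index}: {0}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "first text" }, { "second text" }, { "third text" }
        });
    }

    private final String someText;

    public ValidityTest(String someText) {
        this.someText = someText;
    }

    @Test
    public void isValid() {
        Assert.assertTrue(checkForValidity(someText));
    }

    // Hypothetical stand-in for the asker's validation logic.
    private boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }
}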
If you still insist on doing it your way, have a look at AssertJ's Soft Assertions, which allow "swallowing" individual assertion failures, accumulating them, and reporting them only after the test has finished. The linked documentation section uses a nice example and is definitely worth reading.
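A short sketch of the soft-assertion variant, assuming AssertJ is on the classpath and reusing the same hypothetical checkForValidity:

import java.util.Arrays;
import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class SoftValidityTest {

    @Test
    public void allTextsAreValid() {
        List<String> someTexts = Arrays.asList("first text", "", "third text");
        SoftAssertions softly = new SoftAssertions();
        for (String someText : someTexts) {
            // Failures are collected instead of being thrown immediately.
            softly.assertThat(checkForValidity(someText))
                    .as("validity of '%s'", someText)
                    .isTrue();
        }
        // Throws a single error listing every collected failure.
        softly.assertAll();
    }

    // Hypothetical stand-in for the asker's validation logic.
    private boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }
}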

JUnit report: JUnit tests called within another JUnit test are not part of the report

I have not found anything related to my problem here, hence I'm writing this one.
I have a JUnit class, MyHandler, with tests testx, testy, and testz, which calls the other JUnit classes MyTestA and MyTestB based on some logic.
MyTestA has tests testa1, testa2, testa3.
MyTestB has tests testb1, testb2, testb3.
Everything works fine, except that the report shows only 3 tests executed (testx, testy, testz). Though all the tests in MyTestA and MyTestB are also executed, they are not part of the report.
I see this when using JUnit reporting with Ant and also in the Eclipse IDE.
Within the MyHandler class, I am calling the JUnit classes as:

org.junit.runner.JUnitCore.runClasses(MyTestA.class);
org.junit.runner.JUnitCore.runClasses(MyTestB.class);
Is there any way we can include the individually called JUnit tests in the JUnit report?
We are using Ant to build, along with JUnit reporting.
Appreciate any help on this,
Sudhakar
JUnit will only report on the structure it knows about, so use a JUnit test suite (JUnit 4 test suites). You can list all the test classes in the suite, and JUnit will run them and also include them in the report.
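A minimal suite for the classes from the question; run this class instead of calling JUnitCore from MyHandler:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Running this suite makes every test in MyTestA and MyTestB
// show up individually in the JUnit report.
@RunWith(Suite.class)
@Suite.SuiteClasses({ MyTestA.class, MyTestB.class })
public class MyTestSuite {
    // No body needed; the annotations carry all the information.
}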

spock: need a hook to perform some setup steps before any test class executes

I have several Spock test classes grouped together in a package. I am using JUnit 4.10. Each test class contains several feature test methods.
I want to perform some setup steps (such as loading data into a DB, starting up a web server) before I run any test case, but only once when the testing starts.
I want this "OneTimeSetup" method to be called only once whether:
I run all the test classes in the package (for example if they are grouped in a Test Suite)
I run a few test classes
I run only one test class
I run only a certain feature method within a test class
From reading other posts on SO, it seems that this is what TestNG's @BeforeSuite does.
I am aware of Spock's setupSpec() and cleanupSpec() methods, but they only work within a given test class. I am looking to do something like "setupTestSuite()." How can this be achieved in Spock?
You can write a global extension, use a JUnit test suite, call a static method in a helper class (say from setupSpec) that does its work just once, or let the build tool do the job.
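Spock specs are Groovy, but the static helper can be plain Java; a minimal sketch of that option, where OneTimeSetup and its body are hypothetical:

// Hypothetical helper: call OneTimeSetup.ensureInitialized() from every
// spec's setupSpec(); only the first call does the expensive work,
// no matter which subset of specs is run.
public final class OneTimeSetup {

    private static boolean initialized = false;

    public static synchronized void ensureInitialized() {
        if (initialized) {
            return;
        }
        // Load data into the DB, start the web server, etc.
        initialized = true;
    }

    private OneTimeSetup() {
    }
}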

Test framework using spreadsheet for Selenium RC / JUnit / Java

We have a smoke test that we run every morning to check a number of applications; it involves logging in, executing a simple operation, and logging out.
The test at the moment is a collection of Selenium IDE scripts which were imported into Selenium RC as Java and run under JUnit inside NetBeans.
What we would like to do is drive the test from a spreadsheet, i.e. each line of the spreadsheet has the application URL, the login parameters, some titles and text to check, and the logout sequence.
At the moment, our simple POC simply has one JUnit Test class of the form:
@Test
public void testTestSmokeCheck() throws Exception { ... }
This calls a class which loops through the spreadsheet and does:
Selenium sel = new DefaultSelenium(...);
sel.start();
sel.open(...);
...
sel.close();
for each line of the spreadsheet.
This works, but the problem is that many lines of the spreadsheet are compressed into one JUnit test, which either passes or fails as a whole.
What we would like is for each line of the spreadsheet to be a separate JUnit test.
That way each line of the spreadsheet would show up as either red or green, which would be far more meaningful.
Any ideas how to achieve that?
Not exactly a spreadsheet, but you might want to look into FitNesse. It drives tests from tables on Wiki pages, and prints out red/green pass/fail.
You can do multiple pages and test suites, which should solve your problem.
You can use the Parameterized runner (the JavaDoc has an example). If a test fails, the failure message will include the index of the data, which could correspond to the row in the spreadsheet.
If the number of rows isn't huge, I recommend writing a test case per row and getting rid of the spreadsheet. Sooner or later you will want to do specific logic for some of the pages, and a spreadsheet will no longer model that well.