I want to have two test suites: one that runs all my test classes with assertions enabled, and one that runs them with assertions disabled.
I already tried creating a static {} block inside the test suite that calls setClassAssertionStatus for each test class, but this does not work, perhaps because the test classes are already initialized by the time the static block is executed.
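The attempt looks roughly like this (a minimal sketch; the suite and test class names are placeholders, and using the context class loader is my assumption, not code from the question):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ FooTest.class, BarTest.class })
public class DisabledAssertsSuite {
    static {
        // Assumption: disable assertions via the context class loader.
        // This has no effect on classes the loader has already initialized.
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        loader.setClassAssertionStatus(FooTest.class.getName(), false);
        loader.setClassAssertionStatus(BarTest.class.getName(), false);
    }
}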
How else could I do this?
I have several Spock test classes grouped together in a package. I am using JUnit 4.10. Each test class contains several feature test methods.
I want to perform some setup steps (such as loading data into a DB, starting up a web server) before I run any test case, but only once when the testing starts.
I want this "OneTimeSetup" method to be called only once whether:
I run all the test classes in the package (for example if they are grouped in a Test Suite)
I run a few test classes
I run only one test class
I run only a certain feature method within a test class
From reading other posts on SO, it seems that this is what TestNG's @BeforeSuite does.
I am aware of Spock's setupSpec() and cleanupSpec() methods, but they only work within a given test class. I am looking to do something like "setupTestSuite()." How can this be achieved in Spock?
You can write a global extension, use a JUnit test suite, call a static method in a helper class (say from setupSpec) that does its work just once, or let the build tool do the job.
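A minimal sketch of the helper-class option in Java (the class and method names are my own placeholders): a static method that is safe to call from every spec's setupSpec(), but only does the expensive work the first time.

// Hypothetical helper: performs the one-time setup at most once per JVM.
public final class OneTimeSetup {
    private static boolean done = false;

    public static synchronized void ensureDone() {
        if (done) {
            return;
        }
        loadDataIntoDb();    // the setup steps from the question
        startWebServer();
        done = true;
    }

    private static void loadDataIntoDb() { /* ... */ }
    private static void startWebServer() { /* ... */ }
}

Each Spock class then calls OneTimeSetup.ensureDone() from setupSpec(), so the setup runs exactly once whether you execute the whole suite, a few classes, or a single feature method.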
I have a test suite and a number of tests, each in their own class file. These are Selenium WebDriver tests. Each test needs to start the WebDriver before it runs. How should this be done?
I can have the suite start the WebDriver fine from its @BeforeClass. But when I try to run a single test from Eclipse, the WebDriver doesn't start. The tests don't know that they are part of the suite and should run the suite's @BeforeClass.
The individual tests would only run the @BeforeClass of the suite if their class extended the suite.
Since that would be a senseless relationship, I think the solution to your problem is either to define a @BeforeClass in something like a TestFunctions.java superclass that all test classes extend, or to create a @BeforeClass for every single test class.
Keep in mind that the @BeforeClass and @Before methods of the superclass are executed before those of the subclass, but they can be overridden.
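A sketch of that superclass approach with JUnit 4 and Selenium WebDriver (the class name follows the TestFunctions.java suggestion above; the choice of FirefoxDriver is just an assumption):

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Every test class extends this, so the driver is started whether the
// class runs on its own from Eclipse or as part of the suite.
public abstract class TestFunctions {
    protected static WebDriver driver;

    @BeforeClass
    public static void startDriver() {
        driver = new FirefoxDriver();
    }

    @AfterClass
    public static void stopDriver() {
        driver.quit();
    }
}

An individual test class then just declares public class LoginTest extends TestFunctions and uses the shared driver field.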
Puzzled: I added a new test case method to a JUnit test class. I run the entire class from either Eclipse or Maven, and the old case (there was only one before) runs while the new one does not. It doesn't fail. A breakpoint in it is not hit. The new method has an @Test annotation, just like the old one.
The JUnit version is 4.5.
Is there a way to get JUnit to log or trace its thought process when selecting which methods to run?
I guess you are still running the old class file, because the new Java file was not compiled successfully.
You could modify an old test method to check whether the class being run is really the one you modified: for example, make the currently passing method fail.
As per the documentation, an “assert” will fail the test and abort the currently running test case, whereas a “verify” will fail the test but continue to run the test case.
But verifyTrue(false) is not failing the case (it rather continues with the next step and marks the case as passed).
Assuming that's a Selenium call, then according to this, "[verify methods] don't stop the test when they fail. Instead, verification errors are all thrown at once during tearDown."
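For illustration, with Selenium RC's Java client the verify methods come from SeleneseTestBase, which records failures and only reports them when checkForVerificationErrors() is called (SeleneseTestCase does that in its tearDown). A rough sketch, with a hypothetical test class:

import com.thoughtworks.selenium.SeleneseTestBase;
import org.junit.After;
import org.junit.Test;

public class VerifyDemoTest extends SeleneseTestBase {

    @Test
    public void verifyDoesNotStopTheTest() {
        verifyTrue(false);  // recorded, but execution continues
        // ... further steps still run here ...
    }

    @After
    public void reportVerificationErrors() {
        checkForVerificationErrors();  // now the recorded failure makes the test fail
    }
}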
We have a smoke test that we run every morning to check a number of applications which involves logging in, executing a simple operation and logging out.
The test at the moment is a collection of Selenium IDE scripts which were imported into Selenium RC as Java and run with JUnit inside NetBeans.
What we would like to do is run the test from a spreadsheet, i.e. each line of the spreadsheet has the application URL, the login parameters, some titles and text to check, and the log-out sequence.
At the moment, our simple POC has one JUnit test class of the form:
@Test
public void testTestSmokeCheck() throws Exception { ... }
This calls a class which loops through the spreadsheet and does:
Selenium sel = new DefaultSelenium(...);
sel.start();
sel.open(...);
...
sel.close();
for each line of the spreadsheet.
This works, but the problem is that many lines of the spreadsheet are compressed into one JUnit test, which either passes or fails as a whole.
What we would like is for each line of the spreadsheet to be a separate JUnit test.
This way each line of the spreadsheet would result in either red or green which would be far more meaningful.
Any ideas how to achieve that?
Not exactly a spreadsheet, but you might want to look into FitNesse. It drives tests from tables on Wiki pages, and prints out red/green pass/fail.
You can do multiple pages and test suites, which should solve your problem.
You can use the Parameterized runner (the JavaDoc has an example). If a test fails, the failure message will include the index of the data, which could correspond to the row in the spreadsheet.
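A minimal sketch of that approach with JUnit 4's Parameterized runner, where each row becomes its own test (the row values below are placeholders; in the real test they would be read from the spreadsheet):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SmokeCheckRowTest {

    @Parameters
    public static Collection<Object[]> rows() {
        // One Object[] per spreadsheet row; these values are placeholders.
        return Arrays.asList(new Object[][] {
            { "http://app-one/login", "user1", "secret1" },
            { "http://app-two/login", "user2", "secret2" },
        });
    }

    private final String url;
    private final String user;
    private final String password;

    public SmokeCheckRowTest(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    @Test
    public void rowPassesSmokeCheck() {
        // drive Selenium through login, title/text checks and logout
        // for this single row; a failure here marks only this row red
    }
}

Each row then shows up as a separate green or red entry in the JUnit results.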
If the number of rows isn't huge, I recommend writing a test case per row and getting rid of the spreadsheet. Sooner or later you will want to do specific logic for some of the pages, and a spreadsheet will no longer model that well.