I have the classic structure for tests: a test suite made up of different suites like DatabaseTests, UnitTests, etc. Sometimes those suites contain further suites, like SlowDatabaseTests and FastDatabaseTests.
What I want is to randomize the running order of the tests to make sure they are not dependent on each other. Randomization should happen at every level: a suite should shuffle the order of its test classes, and a test class should shuffle the order of its test methods.
If it is possible to do this in Eclipse, that would be best.
JUnit does have a Sortable, but I can't see how you would use it for this.
You could extend BlockJUnit4ClassRunner and have computeTestMethods() return a randomized copy of super.computeTestMethods(), then use @RunWith to set that class as the runner to use.
e.g.
package com.stackoverflow.mlk;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class RandomBlockJUnit4ClassRunner extends BlockJUnit4ClassRunner {

    public RandomBlockJUnit4ClassRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // Shuffle a copy: the list returned by super may be unmodifiable.
        List<FrameworkMethod> methods = new ArrayList<FrameworkMethod>(super.computeTestMethods());
        Collections.shuffle(methods);
        return methods;
    }
}
Then
@RunWith(com.stackoverflow.mlk.RandomBlockJUnit4ClassRunner.class)
public class RandomOrder {

    @Test
    public void one() {
    }

    @Test
    public void two() {
    }

    @Test
    public void three() {
    }
}
https://github.com/KentBeck/junit/pull/386 introduces some orderings, but not RANDOM. You probably do not really want this anyway; tests should run deterministically. If you need to verify that different permutations of tests still pass, either test all permutations or, if that would be impractically slow, introduce a "random" seed for shuffling that is determined by an environment variable or the like, so that you can reproduce any failures. http://hg.netbeans.org/main/file/66d9fb12e98f/nbjunit/src/org/netbeans/junit/MethodOrder.java gives an example of doing this for JUnit 3.
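For illustration, here is a sketch of the shuffling runner from above, extended to honor a seed passed in from outside; the test.seed property name is just an invention for this example:

package com.stackoverflow.mlk;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class SeededRandomRunner extends BlockJUnit4ClassRunner {

    public SeededRandomRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // Take the seed from -Dtest.seed=... if present; otherwise pick one
        // and print it so a failing order can be replayed later.
        String configured = System.getProperty("test.seed");
        long seed = configured != null ? Long.parseLong(configured) : System.nanoTime();
        System.out.println("Shuffling " + getTestClass().getName() + " with seed " + seed);

        List<FrameworkMethod> methods = new ArrayList<FrameworkMethod>(super.computeTestMethods());
        Collections.shuffle(methods, new Random(seed));
        return methods;
    }
}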
In general, what you need to do is write your own test runner; in the runner class, aggregate the methods and run each test in a random order (making sure you don't run a test twice).
Read more about the test framework and how to write your own test runner here:
http://www.ddj.com/architect/184415674
In JUnit 4.13, to run the tests within a test class in random order, write a small helper class:
import java.util.Random;

import org.junit.runner.manipulation.Ordering;

public class RandomOrder implements Ordering.Factory {
    @Override
    public Ordering create(Ordering.Context context) {
        long seed = new Random().nextLong();
        System.out.println("RandomOrder: seed = " + seed);
        return Ordering.shuffledBy(new Random(seed));
    }
}
Then, annotate your test class with:
@OrderWith(RandomOrder.class)
This way, the test methods of this one class are run in random order. Plus, if they unexpectedly fail, you know the random seed to repeat exactly this order.
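To replay a logged seed, the factory can read it from outside; a sketch, assuming a hypothetical test.seed system property:

import java.util.Random;

import org.junit.runner.manipulation.Ordering;

public class ReplayableRandomOrder implements Ordering.Factory {
    @Override
    public Ordering create(Ordering.Context context) {
        // Reuse the seed from -Dtest.seed=... if given; otherwise pick a fresh one.
        String configured = System.getProperty("test.seed");
        long seed = configured != null ? Long.parseLong(configured) : new Random().nextLong();
        System.out.println("ReplayableRandomOrder: seed = " + seed);
        return Ordering.shuffledBy(new Random(seed));
    }
}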
I don't know though how to configure this for a whole project or a test suite.
"I will make sure they are not dependent on each other"
You should make sure that this is the case without relying on random execution order. What makes you fear that dependencies may exist?
This issue has been open on the JUnit GitHub for two years and points out two independent problems:
- tests depending on the execution order;
- non-repeatable tests.
Consider addressing the issue at the root rather than getting the framework to do the job afterwards: use setUp and tearDown methods to guarantee isolation, and test at the smallest level.
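For instance, a minimal JUnit 4 sketch of that isolation pattern, with an in-memory list standing in for whatever resource the tests share:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class IsolationTest {

    // Stand-in for a shared resource (a real suite might hold a DB connection).
    private List<String> rows;

    @Before
    public void setUp() {
        // A fresh fixture for every test: no state leaks between methods.
        rows = new ArrayList<String>();
        rows.add("fixture-row");
    }

    @After
    public void tearDown() {
        // Always clean up, even if the test failed.
        rows.clear();
    }

    @Test
    public void seesOnlyItsOwnFixture() {
        rows.add("extra");
        assertEquals(2, rows.size());
    }

    @Test
    public void startsFreshRegardlessOfOrder() {
        // Passes whether it runs before or after the test above.
        assertEquals(1, rows.size());
    }
}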
Here is a solution with Gradle and JUnit 5.8.0.
Step 1: Ensure that you have the latest JUnit version as a dependency.
Step 2: Define the required properties in the test section of build.gradle:
test {
    useJUnitPlatform()
    systemProperties([
            // Random at the method level
            'junit.jupiter.testmethod.order.default': 'org.junit.jupiter.api.MethodOrderer$Random',
            // Random at the class level
            'junit.jupiter.testclass.order.default' : 'org.junit.jupiter.api.ClassOrderer$Random',
            // Log configuration, needed to see the seed
            'java.util.logging.config.file'         : file('src/test/resources/logging.properties')
    ])
    // To print the JUnit logs in the console
    testLogging {
        events "passed", "skipped", "failed", "standardOut", "standardError"
    }
}
Step 3: Define logging.properties under src/test/resources
.level=CONFIG
java.util.logging.ConsoleHandler.level=CONFIG
org.junit.jupiter.api.ClassOrderer$Random.handlers=java.util.logging.ConsoleHandler
org.junit.jupiter.api.MethodOrderer$Random.handlers=java.util.logging.ConsoleHandler
Step 4: Run the tests: gradlew clean test
You can see the seed used for the random ordering in the console:
CONFIG: ClassOrderer.Random default seed: 65423695211256721
CONFIG: MethodOrderer.Random default seed: 6542369521653287
In case of a flaky test, you can reproduce it by configuring the same seeds that were in effect when the tests failed:
systemProperties([
        'junit.jupiter.execution.class.order.random.seed': '65423695211256721',
        'junit.jupiter.execution.order.random.seed'      : '6542369521653287'
])
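Alternatively, the same ordering keys can be placed in a junit-platform.properties file on the test classpath (src/test/resources), which the JUnit Platform picks up automatically; for example:

junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$Random
junit.jupiter.testmethod.order.default=org.junit.jupiter.api.MethodOrderer$Random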
References: how-to-randomize-tests-in-junit, Random
Student here. In JUnit 5, what is the best way to implement conditional test execution based on whether another test succeeded or failed? I presume it would involve ExecutionCondition, but I am unsure how to proceed. Is there a way to do this without having to add my own state to the test class?
To note, I am aware of dependent assertions, but I have multiple nested tests that represent distinct substates, and so I would like a way to do this at the test level itself.
Example:
@Test
void testFooBarPrecondition() { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class FooCase { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class BarCase { ... }
"@Nested tests give the test writer more capabilities to express the relationship among several groups of tests. Such nested tests make use of Java's nested classes and facilitate hierarchical thinking about the test structure. Here's an elaborate example, both as source code and as a screenshot of the execution within an IDE."
As stated in the JUnit 5 documentation, @Nested is used for convenient display in your IDE. I would rather use Assumptions for your preconditions inside Dependent Assertions:
assertAll(
        () -> assertAll(
                () -> assumeTrue(Boolean.FALSE)
        ),
        () -> assertAll(
                () -> assertEquals(10, 4 + 6)
        )
);
You can fix the problem by extracting the common precondition logic into a @BeforeEach/@BeforeAll setup method and then using assumptions, which were developed exactly for the purpose of conditional test execution. Some sample code:
class SomeTest {

    @Nested
    class NestedOne {

        @BeforeEach
        void setUp() {
            boolean preconditionsMet = false;
            // precondition code goes here
            assumeTrue(preconditionsMet);
        }

        @Test // not executed when the precondition is not met
        void aTestMethod() {}
    }

    @Nested
    class NestedTwo {

        @Test // executed
        void anotherTestMethod() {}
    }
}
What is the difference between a test suite, a test case, and a test category?
I found a partial answer here, but what about categories?
A test case is a set of test inputs, execution conditions, and expected results developed to exercise a particular execution path. Usually, a case is a single method.
A test suite is a list of related test cases. A suite may contain common initialization and cleanup routines specific to the cases it includes.
A test category/group is a way to tag individual test cases and assign them to categories. With categories, you don't need to maintain a list of test cases.
The testing framework usually provides a way to specify which categories to include in or exclude from a given test run. This lets you mark related test cases across different test suites, which is useful when you need to disable/enable cases that share common dependencies (API, library, system, etc.) or attributes (slow vs. fast cases).
As far as I can understand, test group and test category are different names for the same concept used in different frameworks:
@Category in JUnit
@group annotation in PHPUnit
TestCategory in MSTest
Test categories are like sub test suites. Take this documentation as an example. In one class file, you will have multiple test cases. A test suite is a grouping of the test classes that you want to run; a test category is a sub-grouping of test cases. You can annotate some test cases in your class file and then create two test suites pointing at the same test class, filtering one of the suites to run only certain categories. Example from the documentation:
public interface FastTests { /* category marker */ }
public interface SlowTests { /* category marker */ }

public class A {
    @Test
    public void a() {
        fail();
    }

    @Category(SlowTests.class)
    @Test
    public void b() {
    }
}

@Category({SlowTests.class, FastTests.class})
public class B {
    @Test
    public void c() {
    }
}

@RunWith(Categories.class)
@IncludeCategory(SlowTests.class)
@SuiteClasses({ A.class, B.class }) // Note that Categories is a kind of Suite
public class SlowTestSuite {
    // Will run A.b and B.c, but not A.a
}

@RunWith(Categories.class)
@IncludeCategory(SlowTests.class)
@ExcludeCategory(FastTests.class)
@SuiteClasses({ A.class, B.class }) // Note that Categories is a kind of Suite
public class SlowTestSuite {
    // Will run A.b, but not A.a or B.c
}
Notice that both test suites point to the same test classes, but they will run different test cases.
I am new to JUnit, but this is what I am trying to do:
generate data with my DataGenerator class;
instantiate a test class, MyTestClass;
pass the test data generated in the first step to MyTestClass;
run the test;
collect the TestResult.
With the above, everything works, but I cannot see any timing information (the time it took to complete the test) on the TestResult object. Am I doing anything wrong here?
The approach above is necessary because I need to run other test classes using the same data.
DataGenerator testData = new DataGenerator();
MyTestClass myTestClass = new MyTestClass("mytestmethod");
myTestClass.setBaseLine(testData);
try {
    TestResult testResult = myTestClass.run();
    System.out.println(testResult.wasSuccessful());
} catch (Throwable ex) {
    Logger.getLogger(TestSupervisor.class.getName()).log(Level.SEVERE, null, ex);
}
JUnit 4 has a timeout annotation that will fail the test if it takes too long:

// this test will fail if it takes longer than 1 second
@Test(timeout = 1000)
public void myTest() {
    ...
}
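As for the missing timing information: JUnit 3's TestResult does not record elapsed time itself, so one simple option is to measure wall-clock time around the programmatic run, something like:

// Measure the elapsed time around the run; TestResult carries no timing data.
long start = System.nanoTime();
TestResult testResult = myTestClass.run();
long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
System.out.println(testResult.wasSuccessful() + " in " + elapsedMillis + " ms");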
I think you can get around the DataGenerator issue by using a better test fixture.
If all of your tests that use the DataGenerator are in the same test class, you can use the @BeforeClass and @AfterClass annotations to set up and tear down data that is shared across tests (methods with those annotations are called only once, before/after all the tests run).
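A sketch of that fixture style, assuming the DataGenerator from the question:

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class SharedDataTest {

    private static DataGenerator testData;

    @BeforeClass
    public static void createSharedData() {
        // Runs once, before any test in this class.
        testData = new DataGenerator();
    }

    @AfterClass
    public static void releaseSharedData() {
        // Runs once, after all tests in this class.
        testData = null;
    }

    @Test
    public void usesSharedData() {
        // Every test method in the class sees the same generated data.
    }
}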
So I'm trying to run tests that will evaluate certain properties of different websites. The actual evaluation is being handled by a pay-per-call resource, so I want to minimize the number of times I generate the resource. Also, I need this to run in JUnit to fit into a larger automated test suite.
I've been doing this with parameterized tests so far, but I just learned that they instantiate a new instance for each test method.
Now I'm trying to figure out a way to have the resource created just once for each parameter that is being fed into the constructor of my testing class. @BeforeClass does it just once, and @Before does it once before each test.
All the help topics I've been able to find have dealt with creating expensive resources once for all tests, but in this case I need the resource to be recreated for each new set of parameters.
I've written some example code / output below to better show what I'm looking for:
@RunWith(Parameterized.class)
public class MyTestClass {

    private static Resource expensiveToCreateResource;

    public MyTestClass(String url) {
        System.out.println("Constructing resource for " + url);
        expensiveToCreateResource = new Resource(url); // This is getting created 4x, which is wrong
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {{"url1"}, {"url2"}});
    }

    @Test
    public void test1() {
        expensiveToCreateResource.method1();
        System.out.println("test1");
    }

    @Test
    public void test2() {
        expensiveToCreateResource.method2();
        System.out.println("test2");
    }
}
would produce output:
Constructing resource for url1
test1
test2
Constructing resource for url2
test1
test2
Any ideas / solutions? Thanks.
If you want the class instantiated once per parameter, you'll have to write your own JUnit test runner. Instead, I'd cache the resources as needed, e.g. in a static map from URL to resource.
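A sketch of that caching idea, keeping the Resource type from the question, so each URL is only constructed once no matter how many instances Parameterized creates:

import java.util.Arrays;
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MyTestClass {

    // One Resource per URL, shared across the instances that
    // Parameterized creates for each test method.
    private static final Map<String, Resource> CACHE = new ConcurrentHashMap<String, Resource>();

    private final Resource resource;

    public MyTestClass(String url) {
        resource = CACHE.computeIfAbsent(url, u -> {
            System.out.println("Constructing resource for " + u);
            return new Resource(u);
        });
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {{"url1"}, {"url2"}});
    }

    @Test
    public void test1() {
        resource.method1();
    }

    @Test
    public void test2() {
        resource.method2();
    }
}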
If writing a Java unit test with mocking using JMock, should we use
Mockery context = new Mockery()
or
Mockery context = new JUnit4Mockery()
What is the difference between the two, and when should we use which?
@Rhys It's not the JUnit4Mockery that replaces the need to call assertIsSatisfied; it's the JMock.class runner (combined with @RunWith). You won't need to call assertIsSatisfied even when you create a regular Mockery.
The JUnit4Mockery translates errors.
By default, expectation exceptions are reported in JUnit as ExpectationError. So, for example, using
Mockery context = new Mockery();
you'll get
unexpected invocation: bar.bar()
no expectations specified: did you...
- forget to start an expectation with a cardinality clause?
- call a mocked method to specify the parameter of an expectation?
and using,
Mockery context = new JUnit4Mockery();
you'll get
java.lang.AssertionError: unexpected invocation: bar.bar()
no expectations specified: did you...
- forget to start an expectation with a cardinality clause?
- call a mocked method to specify the parameter of an expectation?
what happened before this: nothing!
JUnit4Mockery converts the ExpectationError into a java.lang.AssertionError, which JUnit understands. The net result is that it shows up in your JUnit report as a failure (using JUnit4Mockery) rather than an error.
When using JMock with JUnit 4, you can avoid some boilerplate code by taking advantage of the JMock test runner. When you do this, you must use the JUnit4Mockery instead of the regular Mockery.
Here is how you'd structure a JUnit 4 test:
@RunWith(JMock.class)
public class SomeTest {
    Mockery context = new JUnit4Mockery();
}
The main advantage is that there is no need to call assertIsSatisfied in each test; it is called automatically after each test.
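For contrast, with a plain Mockery and no JMock runner, verifying the expectations is your own responsibility, typically from an @After method; a sketch:

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.After;
import org.junit.Test;

public class PlainMockeryTest {

    private final Mockery context = new Mockery();
    private final Runnable runnable = context.mock(Runnable.class);

    @Test
    public void runsExactlyOnce() {
        context.checking(new Expectations() {{
            oneOf(runnable).run();
        }});
        runnable.run();
    }

    @After
    public void verifyExpectations() {
        // With a plain Mockery this call is up to you; the JMock runner
        // and JUnitRuleMockery make it automatic.
        context.assertIsSatisfied();
    }
}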
Better yet, per http://incubator.apache.org/isis/core/testsupport/apidocs/org/jmock/integration/junit4/JUnitRuleMockery.html, use @Rule and avoid @RunWith, whose slot you might need for some other runner:
public class ATestWithSatisfiedExpectations {

    @Rule
    public final JUnitRuleMockery context = new JUnitRuleMockery();

    private final Runnable runnable = context.mock(Runnable.class);

    @Test
    public void doesSatisfyExpectations() {
        context.checking(new Expectations() {
            {
                oneOf(runnable).run();
            }
        });

        runnable.run();
    }
}