I created a @Rule similar to @RunWith(Parameterized.class) so that all test cases can be repeated with different parameters (I cannot use Parameterized.class directly because my test class already has @RunWith for another purpose).
However, the @Rule does not work when testing the following method:
@Test
public void fooTest() {
    /* exception is an ExpectedException initialized somewhere else as:
     * @Rule public ExpectedException exception = ExpectedException.none();
     */
    exception.expect(A.class);
    exception.expectMessage(B);
    someTestWhichThrowExceptionOfB();
}
In fact, if I hardcode my parameter as a value, the test passes, since the code does throw the exception with message B.
But if I set my parameter to MyParameterRule.value(), the code still throws the exception with message B, yet the test fails, reporting that very exception as the failure cause.
I guess that in the second case, when I use MyParameterRule, the ExpectedException rule somehow stops working? Why is that, and how can I make it work?
If you can depend on JUnit 4.12, you may be able to use Parameterized with @UseParametersRunnerFactory (see the Parameterized Javadoc for details).
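Here is a minimal sketch of that combination, assuming JUnit 4.12; MyRunnerFactory and the parameter data are hypothetical, and the factory simply delegates to the default parameterized runner, which is where you could layer in your other runner's behavior:

import java.util.Arrays;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runner.Runner;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.UseParametersRunnerFactory;
import org.junit.runners.model.InitializationError;
import org.junit.runners.parameterized.BlockJUnit4ClassRunnerWithParameters;
import org.junit.runners.parameterized.ParametersRunnerFactory;
import org.junit.runners.parameterized.TestWithParameters;

@RunWith(Parameterized.class)
@UseParametersRunnerFactory(FooTest.MyRunnerFactory.class)
public class FooTest {

    // Hypothetical factory: start from the default runner and adapt it.
    public static class MyRunnerFactory implements ParametersRunnerFactory {
        @Override
        public Runner createRunnerForTestWithParameters(TestWithParameters test)
                throws InitializationError {
            return new BlockJUnit4ClassRunnerWithParameters(test);
        }
    }

    @Parameterized.Parameters
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] { { "first" }, { "second" } });
    }

    private final String parameter;

    public FooTest(String parameter) {
        this.parameter = parameter; // a fresh FooTest instance per parameter set
    }

    @Test
    public void fooTest() {
        // runs once per parameter set, with this.parameter bound
    }
}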
As for why your parameterized rule wasn't working, here is a (somewhat long) explanation.
JUnit has an internal assumption that a new instance of your test class is created for each test method. JUnit does this so the state stored in an instance of your test from one test method run doesn't affect the next test method run.
The ExpectedException rule relies on the same assumption. When you call expect, it mutates the state of the ExpectedException instance stored in the field, marking it as "expecting" an exception to be thrown.
When your Rule runs the same method twice (with different parameters), it violates this assumption: the first run that calls expect works, but the second run re-uses the previously-modified ExpectedException instance, so it may not.
When JUnit runs a test method of a JUnit4-style test, it does the following:
Create an instance of the test class.
Create a Statement that will run the test method.
If the method's @Test annotation uses the expected attribute, wrap the Statement with another Statement that handles expected exceptions.
If the method's @Test annotation uses the timeout attribute, wrap the Statement with another Statement that handles timeouts.
Wrap that Statement with other Statements that call the methods annotated with @Before and @After.
Wrap that Statement with other Statements that call the rules.
For more details, look at the JUnit source; a rough sketch of the wrapping order follows.
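In code, the wrapping described above looks roughly like the following paraphrase of BlockJUnit4ClassRunner.methodBlock() (not the literal source; signatures simplified):

// Rough paraphrase of BlockJUnit4ClassRunner.methodBlock(); innermost first.
Object testInstance = createTest();                                        // step 1: fresh test instance
Statement statement = methodInvoker(method, testInstance);                 // step 2: invoke the @Test method
statement = possiblyExpectingExceptions(method, testInstance, statement);  // step 3: 'expected' attribute
statement = withPotentialTimeout(method, testInstance, statement);         // step 4: 'timeout' attribute
statement = withBefores(method, testInstance, statement);                  // step 5: @Before methods
statement = withAfters(method, testInstance, statement);                   //         @After methods
statement = withRules(method, testInstance, statement);                    // step 6: rules, outermost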
So the Statement that is passed to your rule wraps an already-constructed test instance (it has to be, since otherwise the apply() method of your rule could not be called).
Unfortunately, this means that a Rule should not be used to run a test method multiple times, because the test (or its rules) could have state that was set the previous time the method was run.
If you can't depend on JUnit 4.12, it's possible you can hack this to work by using the RuleChain Rule to ensure that the custom Rule you use to run the test multiple times runs "around" the other rules.
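A minimal sketch of that hack; RepeatWithParametersRule is a hypothetical stand-in for your custom repeating rule:

import org.junit.Rule;
import org.junit.rules.ExpectedException;
import org.junit.rules.RuleChain;

public class FooRuleOrderingTest {

    // plain field, so test methods can still call expect()/expectMessage()
    private final ExpectedException exception = ExpectedException.none();

    // RuleChain makes the repeating rule outermost, so every repetition
    // re-enters the inner rules instead of running outside them
    @Rule
    public final RuleChain chain = RuleChain
            .outerRule(new RepeatWithParametersRule()) // hypothetical custom rule
            .around(exception);
}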
If you are using Java 8 then you can replace the ExpectedException rule with the Fishbowl library.
@Test
public void fooTest() {
    Throwable exception = exceptionThrownBy(
        () -> someTestWhichThrowExceptionOfB());
    assertEquals(A.class, exception.getClass());
    assertEquals("B", exception.getMessage());
}
This can be achieved with anonymous classes in Java 6 and 7, too.
@Test
public void fooTest() {
    Throwable exception = exceptionThrownBy(new Statement() {
        public void evaluate() throws Throwable {
            someTestWhichThrowExceptionOfB();
        }
    });
    assertEquals(A.class, exception.getClass());
    assertEquals("B", exception.getMessage());
}
As a beginner with Mockito and JUnit, this may sound like a very dumb question, but I'd still like to confirm:
Class1 input1;

@Mock
Class2 input2;

private Class3 obj;

@Before
public void setup() {
    this.obj = new Class3(input1, input2);
}

@Test
public void doTest() {
    val result1 = obj.method1(input1); // returns sth.
    val result2 = obj.method2(input2); // returns null.
}
So input1 and input2 are passed into the Class3 obj, but only input2 is a mocked dependency. Then I found that when I call method2, which relies on input2, it simply returns null.
So is a mocked class essentially null? Is that why we need to use when...thenReturn for mocked classes? After all, our purpose is to test the major functionality, not the dependencies.
Is my understanding correct?
A mocked class is not null. It is a skeleton with the same signature as the original class, but without the implementation. It is instrumented to "see" the calls to all of its methods, so they can be verified afterwards. A mock is therefore an object that does not actually work: it can't store data and it can't execute methods. You can only control the calls to it and the return values of the mock. If you need a more advanced mock, you should use a @Spy. A spy is a "mock" with the original implementation: it is an instrumented class that detects all calls to it and lets you control output, BUT it also has the original storage facilities and makes real calls.
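For illustration, a minimal sketch of a @Spy; the ArrayList is just an arbitrary real class, and the runner import assumes Mockito 2+:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class SpyExampleTest {

    @Spy
    private List<String> spyList = new ArrayList<>();

    @Test
    public void spyKeepsRealBehaviour() {
        spyList.add("one");                 // real call: the element is really stored
        Mockito.verify(spyList).add("one"); // and the call is still verifiable
        assertEquals(1, spyList.size());    // the original implementation ran
    }
}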
Another way to do real calls is by this construction:
Mockito.when(myMockedObject.someMethod()).thenCallRealMethod();
In unit testing, it is best practice to ONLY test the one class you are testing, without the underlying classes. It sounds obvious, but it really is not. All the classes that are used by the class under test should be mocked. With a mock you have full control over return values, and you can test all the corner cases for that class. All classes that are mocked should be tested by their own unit tests. This brings up the next issue: all classes that are used should be injectable or changeable by the test. Instead of a real DB driver, you want to be able to inject a mock so you can see whether all the right calls are made.
Yes, your understanding is correct.
If you have used the appropriate runner (JUnit 4) or extension (JUnit 5), your mocked object is not null (even though its toString method may return something that looks like "null").
However, what may be a problem is that your Class3#method2 uses a method of the Class2 mock that returns null.
In fact, that behavior is intended. Here you have the choice between:
make your mock return deep stubs using the annotation @Mock(answer = Answers.RETURNS_DEEP_STUBS); this way, any method of Class2 (that is not final and does not return a primitive or wrapper type) will return a mock, and any method of that mock will return a mock, and so on.
declare explicitly how the mock should behave, with something like Mockito.when(input2.myMethod()).thenReturn("test"); (see the sketch after this list). The stubbing API supplied by Mockito is well documented: https://static.javadoc.io/org.mockito/mockito-core/2.23.0/org/mockito/Mockito.html#stubbing
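A minimal sketch of the second option against the asker's classes; myMethod is a hypothetical method on Class2 that Class3#method2 is assumed to delegate to, and the Class1 constructor is likewise assumed:

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class Class3Test {

    @Mock
    private Class2 input2;

    private Class3 obj;

    @Before
    public void setup() {
        obj = new Class3(new Class1(), input2);
    }

    @Test
    public void method2UsesTheStubbedDependency() {
        Mockito.when(input2.myMethod()).thenReturn("test"); // stub instead of null
        assertEquals("test", obj.method2(input2));          // assumes method2 passes the value through
    }
}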
Hope this helps.
I have a test suite that tests two different things in the same class. I have a before method that initialises some fields for the test methods to use. However, one group of test methods uses the first set of fields, and another group uses the second set, but not the first. I know it's possible to split the before action over different before methods, but is it also possible to specify which one runs before each test?
Concrete example:
@Before
public void before1() {...}

@Before
public void before2() {...}

@Test
public void test1() {
    // Only before1 runs
}

@Test
public void test2() {
    // Only before2 runs
}
This is a simple representation, but I have many more tests that use either of these befores.
Everything you've stated in your question points to splitting up your tests into 2 separate classes. I am guessing that the two groups are testing distinct features of your code and may even have some commonality in the test names. Move all of the tests that require before1 into one test class and all the tests that require before2 into another. You can name these new test classes according to the grouping of behaviour you're testing.
For example, if half of your tests are for success scenarios and half test failure scenarios, put them into classes named something like FooSucceedsTest and FooFailsTest, as sketched below.
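A minimal sketch of that split; Foo and its fixture details are hypothetical, and each class lives in its own file:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

// FooSucceedsTest.java
public class FooSucceedsTest {
    private Foo foo; // hypothetical class under test

    @Before
    public void setUp() {          // what before1() used to do
        foo = new Foo("valid");
    }

    @Test
    public void test1() {
        assertTrue(foo.isValid());
    }
}

// FooFailsTest.java
public class FooFailsTest {
    private Foo foo;

    @Before
    public void setUp() {          // what before2() used to do
        foo = new Foo("broken");
    }

    @Test
    public void test2() {
        assertFalse(foo.isValid());
    }
}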
There is no guarantee on the order in which @Before methods execute, just as there is no guarantee on the order of @Test execution.
The solution is to do any setup that a single test depends on in the @Test itself, and use @Before for the common setup that runs before every test.
I am using JUnit to manage integration tests for an application that accesses a database. Because setting up the test data is a time-consuming operation, I have been doing that in the @BeforeClass method, which is executed only once per test class (as opposed to the @Before method, which is run once per test method).
Now I want to try a few different permutations for the configuration of the data layer, running all of my tests on each configuration. This seems like a natural use of the Parameterized test runner. The problem is, Parameterized supplies parameters to the class constructor, and the @BeforeClass method is static and is called before the class constructor.
A few questions:
Does Parameterized call the @BeforeClass method for each permutation of parameters, or does it call it only once?
If the @BeforeClass method is called repeatedly, is there some way to access the parameter values from inside it?
If neither, what do people suggest as the best alternative approach to this problem?
I think you are going to need a custom test runner. I'm having the same issue you are having (needing to run the same tests using multiple, expensive configurations). You'd need a way to parameterize the setup, perhaps using @Parameter annotations similar to those used by the Parameterized runner, but on static member fields instead of instance fields. The custom runner would have to find all static member fields with the @Parameter annotation and then run the test class (probably using the basic BlockJUnit4ClassRunner) once per static @Parameter field. The @Parameter field should probably be a @ClassRule.
Andy on Software has done a good job of developing custom test runners, and he explains it pretty clearly in these blog posts here and here.
@BeforeClass is only called once in your example, which makes sense given the name: before class!
If your tests require different data, there are two choices I can think of:
Set up that data in @Before so it is test-specific.
Group the tests that you want to run with the same data into separate test classes and use @BeforeClass for each one, as sketched below.
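A minimal sketch of the second choice; TestData and loadDataForConfigA() are hypothetical stand-ins for the expensive fixture:

import static org.junit.Assert.assertNotNull;

import org.junit.BeforeClass;
import org.junit.Test;

public class ConfigATest {

    private static TestData data; // hypothetical fixture type

    @BeforeClass
    public static void loadOnce() {
        data = loadDataForConfigA(); // expensive setup, runs once for this class
    }

    @Test
    public void queryWorksOnConfigA() {
        assertNotNull(data);
    }

    private static TestData loadDataForConfigA() {
        // hypothetical: build the data-layer configuration under test
        return new TestData();
    }
}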
You can call this initialization logic in the constructor of your test class. Keep track of the last parameter used in a static variable: when it changes, set up the class for the new parameter.
I can't think of an equivalent for AfterClass.
This is an old question, but I just had to solve a probably similar problem. I went with the solution below, which is essentially an implementation of TREE's (updated) answer, using a generic abstract base class to avoid duplication whenever you need this mechanism.
Concrete tests provide a @Parameters method that returns an iterable of single-element arrays, each containing a Supplier<T>. Those suppliers are then executed exactly once per actual input needed by the concrete test methods.
@RunWith(Parameterized.class)
public abstract class AbstractBufferedInputTest<T> {

    private static Object INPUT_BUFFER;
    private static Object PROVIDER_OF_BUFFERED_INPUT;

    private T currentInput;

    @SuppressWarnings("unchecked")
    public AbstractBufferedInputTest(Supplier<T> inputSupplier) {
        if (PROVIDER_OF_BUFFERED_INPUT != inputSupplier) {
            INPUT_BUFFER = inputSupplier.get();
            PROVIDER_OF_BUFFERED_INPUT = inputSupplier;
        }
        currentInput = (T) INPUT_BUFFER;
    }

    /**
     * @return the input to be used by test methods
     */
    public T getCurrentInput() {
        return currentInput;
    }
}
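A hedged sketch of a concrete subclass; ExpensiveData and buildExpensiveData() are hypothetical stand-ins for the costly input. Each parameter set supplies one Supplier, and each Supplier is executed only once, however many test methods the class has:

import static org.junit.Assert.assertNotNull;

import java.util.Arrays;
import java.util.function.Supplier;

import org.junit.Test;
import org.junit.runners.Parameterized;

public class ExpensiveDataTest extends AbstractBufferedInputTest<ExpensiveData> {

    public ExpensiveDataTest(Supplier<ExpensiveData> supplier) {
        super(supplier);
    }

    @Parameterized.Parameters
    public static Iterable<Object[]> parameters() {
        // One single-element array per configuration.
        Supplier<ExpensiveData> configA = () -> buildExpensiveData("A");
        Supplier<ExpensiveData> configB = () -> buildExpensiveData("B");
        return Arrays.asList(new Object[][] { { configA }, { configB } });
    }

    @Test
    public void inputIsAvailable() {
        assertNotNull(getCurrentInput());
    }

    private static ExpensiveData buildExpensiveData(String config) {
        return new ExpensiveData(config); // hypothetical costly construction
    }
}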
You could do your initialization in a @Before method, writing to an instance variable but testing for null first.
@RunWith(value = Parameterized.class)
public class BigThingTests {

    private BigThing bigThing;

    @Before
    public void createBigThing() {
        if (bigThing == null) {
            bigThing = new BigThing();
        }
    }

    ...
}
A new instance of BigThingTests is created for each set of parameters, and bigThing is set to null with each new instance. The Parameterized runner is single-threaded, so you don't have to worry about multiple initializations.
Let's say I have a test class called MyTest.
In it I have three tests.
public class MyTest {

    AnObject object;

    @Before
    public void setup() {
        object = new AnObject();
        object.setSomeValue(aValue);
    }

    @Test
    public void testMyFirstMethod() {
        object.setAnotherValue(anotherValue);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }

    @Test
    public void testMySecondMethod() {
        AValue val = object.getAnotherValue();
        object.doSomethingElse(val);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }
}
Is there any way I can use the value of anotherValue (set via its setter in the first test) in the second test? I am using this to test database functionality: when I create an object in the DB, I want to get its GUID so I can use it for updates and deletes in later test methods, without having to hardcode the GUID and thereby making the test irrelevant for future use.
You are introducing a dependency between two tests. JUnit deliberately does not support dependencies between tests, and you can't guarantee the order of execution (except for test classes in a test suite; see my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?). So, since you really want a dependency between two test methods, your options are:
1. use an intermediate static value
2. as Cedric suggests, use TestNG, which specifically supports dependencies between tests
3. create a method that creates the line, and call it from both test methods
I would personally prefer option 3, because:
I get independent tests, and I can run just the second test (in Eclipse or the like)
in my teardown for the class, I can remove the line from the database (the cleanup). This means that whichever tests I run, I always start from the same (known) database state.
However, if your setup is really expensive, you can consider this an integration test and just accept the dependency, to save time. A sketch of option 3 follows.
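A minimal sketch of option 3, using the asker's AnObject; the dao helper and its insert/update/delete methods are hypothetical:

// inside MyTest; MyDao is a hypothetical data-access helper
private final MyDao dao = new MyDao();

private String createLineAndReturnGuid() {
    AnObject object = new AnObject();
    object.setSomeValue(aValue);
    return dao.insert(object); // hypothetical: returns the new row's GUID
}

@Test
public void updateWorks() {
    String guid = createLineAndReturnGuid(); // each test creates its own row
    assertTrue(dao.update(guid, anotherValue));
}

@Test
public void deleteWorks() {
    String guid = createLineAndReturnGuid(); // independent of the other test
    assertTrue(dao.delete(guid));
}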
You should use TestNG if you need this (and I agree it's fairly common in integration testing). TestNG uses the same instance to run your tests, so values stored in fields are preserved between tests, which is very useful when your objects are expensive to create (JUnit forces you to use statics to achieve the same effect, which should be avoided).
First off, make sure your @Test methods run in some kind of defined order, e.g. with @FixMethodOrder(MethodSorters.NAME_ASCENDING).
In the example below, I'm assuming that test2 will run after test1.
To share a variable between them, use a ThreadLocal (from java.lang).
Note that the scope of a ThreadLocal variable is the thread, so if you are running multiple threads, each will have its own copy of 'email' (the static in this case only implies that it is global to the thread).
private static ThreadLocal<String> email = new ThreadLocal<String>();

@Test
public void test1() {
    email.set("hchan@apache.org");
}

@Test
public void test2() {
    System.out.println(email.get());
}
You should not do that. Tests are supposed to be able to run in random order. If you want to test things that depend on one value in the database, you can do that in the @Before code, so it's not all repeated for each test case.
I have found a nice solution: just add the @Before annotation to the previous test!
private static String email = null;

@Before
@Test
public void test1() {
    email = "test@google.com";
}

@Test
public void test2() {
    System.out.println(email);
}
If you, like me, googled your way here and the answers above didn't serve you, I'll just leave this: use @BeforeEach.
I have a base class for many tests that has some helper methods they all need.
It does not itself have any tests, but JUnit (in Eclipse) invokes the test runner on it and complains that there are no methods to test.
How can I make it ignore this class?
I know I could add a dummyTest method that would solve the problem, but it would also appear in all the child classes.
Suggestions?
Use the @Ignore annotation. It also works on classes.
See this one:
@Ignore
public class IgnoreMe {
    @Test public void test1() { ... }
    @Test public void test2() { ... }
}
Also, you can annotate a class containing test methods with @Ignore and none of the containing tests will be executed.
Source: JUnit JavaDoc
Just as a note, I'd always recommend giving a reason for the ignore:
#Ignore("This test will prove bug #123 is fixed, once someone fixes it")
I'm hoping the JUnit XML report formatter, used when running tests from Ant, will one day include the ignored count (and the reasons) along with pass, fail, and error.
Making the class abstract should be enough for JUnit 4. If it doesn't work, double check which version of JUnit you're using.
This also communicates your intent that this is just a fragment of a test.
It also prevents JUnit from counting the tests in the class as "ignored" (so the final count of ignored tests will be what you expect).
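A minimal sketch of the abstract-base approach; the names and the helper are illustrative, and each class lives in its own file:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// AbstractHelperTest.java (the runner skips abstract classes)
public abstract class AbstractHelperTest {
    protected String normalize(String s) { // shared helper for subclasses
        return s.trim().toLowerCase();
    }
}

// UserInputTest.java (picked up and run normally)
public class UserInputTest extends AbstractHelperTest {
    @Test
    public void usesHelper() {
        assertEquals("x", normalize("  X "));
    }
}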
Alternatively, rename the class. Most of the runners use the class name to determine which classes are tests and which are helpers. In Maven with the default settings, you just have to make sure the base class doesn't begin or end with Test and doesn't end with TestCase (http://maven.apache.org/surefire/maven-surefire-plugin/examples/inclusion-exclusion.html)
JUnit 5
@Ignore does not exist anymore in the newer version; if you are using JUnit 5, you can use @Disabled from JUnit Jupiter.
import org.junit.jupiter.api.Disabled;
You can even use @Disabled with a comment: @Disabled("some comment here")
Class
Annotating the class will disable all the tests in the class:
@Disabled
public class DemoTest { }

@Disabled("some comment here")
public class DemoTest { }
Method
@Disabled
public void whenCaseThenResult() { }

@Disabled("some comment here")
public void whenCaseThenResult() { }
There are three options:
1. In JUnit 4 or 5 it's enough to make the base class abstract. (If you use the @Ignore annotation instead, the class will show up as an ignored test and be added to the total count of tests.)
2. You can use the @Ignore annotation. This annotation also works on classes.
The @Ignore annotation is used to skip particular tests or groups of tests, so they don't fail the build.
#Ignore("Base class not yet ready")
class MyBaseClassTestCases {...}
3. You can rename the test class so that it does not begin or end with the word Test (case-sensitive), or end with the word TestCase.
Adding an empty test also works:
@Test(expected = Test.None.class)
public void ATest() {}
Beware: adding it without (expected = Test.None.class) will raise an "Add at least one assertion" Sonar issue.