I have a long public method that clearly separates into distinct logical parts. What I could do to test it would be something like
@Test
public void shouldWork() {
//Execute target method
//This is a comment hinting that now the first step is verified
//Verify first step
//This is a comment hinting that now the second step is verified
//Verify second step
}
I don't like this at all, since an error in any individual part will only show up as a failure of this one gigantic test. It seems reasonable to write a separate test method for every step. Then I'm facing the following situation:
@Test
public void testFirstStep() {
//Execute target method
//Verify first step
}
@Test
public void testSecondStep() {
//Execute target method
//Verify second step
}
Here I have the problem that my verifications for both steps have the form verify(myMockitoMock).myMethod(myArg) (multiple times on the same mock). Since these verifications respect the order of calls, I'm fine for the first step, but the test for the second step assumes that all the verifications of the first step have already happened. Logically it would be fine to do
@Test
public void testFirstStep() {
//Execute target method
executeVerificationsForFirstStep(myMock);
}
@Test
public void testSecondStep() {
//Execute target method
executeVerificationsForFirstStep(myMock);
//Verify second step
}
private void executeVerificationsForFirstStep(Mock myMock) {
//Verifications for first step on myMock
}
However, then the test for the second step will also fail if some verification for the first step fails.
Any suggestions?
The best way is to split your method into several submethods and write a test method for each of the submethods.
In the end you can also test the main method that calls your submethods, to make sure that all the submethods work together correctly; this is essentially a system/integration test of that method.
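A rough sketch of that structure; process(), stepOne(), stepTwo(), target and the verified calls are all invented placeholders for your real code and mocks:
public void process() {       // the long public method, now just delegating
    stepOne();
    stepTwo();
}

void stepOne() { /* first logical part, package-private so the tests can call it */ }
void stepTwo() { /* second logical part */ }

@Test
public void firstStepWorks() {
    target.stepOne();
    verify(myMock).myMethod(myArg);            // only first-step interactions
}

@Test
public void secondStepWorks() {
    target.stepTwo();
    verify(myMock).myOtherMethod(myOtherArg);  // only second-step interactions
}

@Test
public void processRunsBothSteps() {
    target.process();
    // verify the combined behaviour of both steps here
}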
First write the test that makes you unhappy...
@Test
public void shouldWork() {
//Execute target method
//This is a comment hinting that now the first step is verified
//Verify first step
//This is a comment hinting that now the second step is verified
//Verify second step
}
Then ensure that it passes and verifies something useful. Bear in mind the rules of thumb that a class should have one responsibility and a method should do one thing. Extract the obvious methods, which you can already see. Re-run the test and ensure that it passes.
Now evaluate what the class and methods are doing relative to those two rules of thumb. Does the class have one responsibility? If not, think about what the responsibilities are and write some tests to bring new classes into existence to meet those responsibilities. If you work in this way (TDD), I suspect your methods will more often do just one thing, without you even having to think much about that rule.
@Rule
public ErrorCollector errorCollector = new ErrorCollector();
public void verifyDeviceType(String device_Type){
System.out.println(deviceType.getText()+","+device_Type); // prints: camera,camera1
errorCollector.checkThat("Expected Device Type Not Present.",deviceType.getText(),equalTo(device_Type));
}
public void verifyDeviceStatus(String device_Status){
System.out.println(deviceStatus.getText()+","+device_Status); // prints: Might be offline,Online2
errorCollector.checkThat("Expected Device Status Not Present.",deviceStatus.getText(),equalTo(device_Status));
}
As shown above, the first method should fail because of the camera vs. camera1 difference.
The second method should fail because of the 'Might be offline' vs. Online2 difference, since I am expecting them to be equal.
But ErrorCollector runs smoothly without any complaints, showing all the tests as passed.
And lastly, even if it does report them as errors, how do we access the messages or errors stored in the ErrorCollector, say in a third method that runs after these two methods have collected their errors?
Then, after learning about JUnitSoftAssertions, I tried
@Rule
public JUnitSoftAssertions softAssertions = new JUnitSoftAssertions();
public void verifyDeviceType(String device_Type){
System.out.println(deviceType.getText()+","+device_Type); // prints: camera,camera1
softAssertions.assertThat(deviceType.getText()).as("Expected Device Type").isEqualTo(device_Type);
}
public void verifyDeviceStatus(String device_Status){
System.out.println(deviceStatus.getText()+","+device_Status); // prints: Might be offline,Online2
softAssertions.assertThat(deviceStatus.getText()).as("Expected Device Status").isEqualTo(device_Status);
}
A reproducible test case would be great if you want people to help you.
I'm not sure I understand exactly what you are trying to achieve; are you looking for a report of all failed assertions? Your code samples don't show any test methods (that is, methods annotated with @Test). Anyway, for the AssertJ question, you can access the collected errors with assertionErrorsCollected.
Hope it helps!
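For what it's worth, ErrorCollector only collects and reports anything when checkThat is reached from a method JUnit actually runs as a test. A minimal, self-contained sketch like the one below does fail and lists both mismatches at the end of the test (the values are hardcoded here just to illustrate):
import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class DeviceStatusTest {

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Test
    public void reportsBothMismatches() {
        // both checks run; the test fails once at the end and lists both problems
        errorCollector.checkThat("Expected Device Type Not Present.", "camera", equalTo("camera1"));
        errorCollector.checkThat("Expected Device Status Not Present.", "Might be offline", equalTo("Online2"));
    }
}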
I have a test class which tests two different things. I have a before method that initialises some fields for the test methods to use. However, one group of test methods uses the first set of fields, and another group uses the second set but not the first. I know it's possible to split the setup over different before methods, but is it also possible to specify which one runs before each test?
Concrete example:
@Before
public void before1() {...}
@Before
public void before2() {...}
@Test
public void test1() {
//Only before1 runs
}
@Test
public void test2() {
//Only before2 runs
}
This is a simple representation, but I have many more tests that use either of these befores.
Everything you've stated in your question is pointing to splitting up your tests into 2 separate classes. I am guessing that the two groups you have are testing distinct features of your code and may even have some commonality in the test names. Take all of the tests that require before1 into a test class and all the tests that require before2 into another test class. You can name these new test classes according to the grouping of behaviour you're testing.
For example if half of your tests are for success scenarios and half are testing failure scenarios, put these into classes named something like FooSucceedsTest and the failures into FooFailsTest.
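A rough outline of that split; the class names, setup contents and comments below are only placeholders:
public class FooSucceedsTest {
    @Before
    public void setUp() {
        // what before1() used to do: initialise the first set of fields
    }

    @Test
    public void test1() {
        // only uses the fields initialised above
    }
}

public class FooFailsTest {
    @Before
    public void setUp() {
        // what before2() used to do: initialise the second set of fields
    }

    @Test
    public void test2() {
        // only uses the fields initialised above
    }
}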
There is no guarantee on the order in which @Before methods execute, just as there is no guarantee on the order of @Test execution.
The solution is to do any setup that only some tests depend on in the @Test itself, and use @Before for the common setup every test needs.
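For instance (a minimal sketch; setupForTest1 is an invented helper name):
@Before
public void commonSetup() {
    // setup that every test needs
}

@Test
public void test1() {
    setupForTest1();   // test-specific setup done explicitly inside the test
    // assertions for test1
}

private void setupForTest1() {
    // setup that only test1 and its group need
}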
I created a @Rule similar to @RunWith(Parameterized.class) so that all test cases can be repeated with different parameters (I cannot use Parameterized.class directly because my test class already has @RunWith for another purpose).
However, the @Rule does not work when testing the following method:
@Test
public void fooTest() {
/*exception is an ExpectedException initialized somewhere else as:
*@Rule public ExpectedException exception = ExpectedException.none();
*/
exception.expect(A);
exception.expectMessage(B);
someTestWhichThrowExceptionOfB();
}
In fact, if I hardcode my parameter as a value, the test passes since it does throw the exception of B.
But if I set my parameter to MyParameterRule.value(), the test still throws the exception of B, yet it then fails, reporting that it failed because of the exception of B.
I guess that in the second case, when I use MyParameterRule, the ExpectedException rule does not work? Why is that, and how can I make it work?
If you can depend on JUnit 4.12, you may be able to use Parameterized with @UseParametersRunnerFactory (see the Parameterized Javadoc for details).
As for why your parameterized rule wasn't working, here is a (somewhat long) explanation.
JUnit has an internal assumption that a new instance of your test class is created for each test method. JUnit does this so the state stored in an instance of your test from one test method run doesn't affect the next test method run.
The ExpectedException rule relies on the same assumption. When you call expect, it modifies the state of the ExpectedException instance that was created when the field was initialized, so that it now "expects" an exception to be thrown.
When your Rule runs the same method twice (with different parameters), that violates this assumption. The first run of a method that calls expect will work, but the second run might not, because you are re-using the previously modified ExpectedException rule.
When JUnit runs a test method for a JUnit4-style test, it does the following:
Create an instance of the test class.
Create a Statement that will run the test method
If the method's @Test annotation uses the expected attribute, wrap the Statement with another statement that handles expected exceptions
If the method's @Test annotation uses the timeout attribute, wrap the Statement with another statement that handles timeouts
Wrap that Statement with other statements that will call the methods annotated with @Before and @After
Wrap that Statement with other statements that will call the rules
For more details, look at the JUnit source.
So the Statement that is passed to your rule wraps an already-constructed test class instance (it has to, since the instance must already exist by the time your rule's apply() method is called).
Unfortunately, this means that a Rule should not be used to run a test method multiple times, because the test (or its rules) could have state that was set the previous time the method was run.
If you can't depend on JUnit 4.12, it's possible you can hack this to work by using the RuleChain Rule to ensure that the custom Rule you use to run the test multiple times runs "around" the other rules.
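If you try that, the wiring could look roughly like the sketch below; MyParameterRule stands in for your custom repeating rule, and whether this actually fixes things depends on how both rules manage their state between repetitions:
// inside the test class: ExpectedException is not annotated with @Rule itself,
// it is applied through the chain instead
public ExpectedException exception = ExpectedException.none();

@Rule
public RuleChain chain = RuleChain
        .outerRule(new MyParameterRule())   // the repeating rule is applied outermost...
        .around(exception);                 // ...so it wraps the ExpectedException statement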
If you are using Java 8 then you can replace the ExpectedException rule with the Fishbowl library.
@Test
public void fooTest() {
Throwable exception = exceptionThrownBy(
() -> someTestWhichThrowExceptionOfB());
assertEquals(A.class, exception.getClass());
assertEquals("B", exception.getMessage());
}
This can be achieved with anonymous classes in Java 6 and 7, too.
@Test
public void fooTest() {
Throwable exception = exceptionThrownBy(new Statement() {
public void evaluate() throws Throwable {
someTestWhichThrowExceptionOfB();
}
});
assertEquals(A.class, exception.getClass());
assertEquals("B", exception.getMessage());
}
Let's say I have a test class called MyTest.
In it I have three tests.
public class MyTest {
AnObject object;
@Before
public void setup(){
object = new AnObject();
object.setSomeValue(aValue);
}
@Test
public void testMyFirstMethod(){
object.setAnotherValue(anotherValue);
// do some assertion to test that the functionality works
assertSomething(sometest);
}
@Test
public void testMySecondMethod(){
AValue val = object.getAnotherValue();
object.doSomethingElse(val);
// do some assertion to test that the functionality works
assertSomething(sometest);
}
}
Is there any way I can use the value of anotherValue, which is set with its setter in the first test, in the second test? I am using this to test database functionality. When I create an object in the DB I want to get its GUID so I can use it to do updates and deletes in later test methods, without having to hardcode the GUID, which would make the test irrelevant for future use.
You are introducing a dependency between two tests. JUnit deliberately does not support dependencies between tests, and you can't guarantee the order of execution (except for test classes in a test suite; see my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?). So if you really want a dependency between two test methods, you can:
1. use an intermediate static value
2. as Cedric suggests, use TestNG, which specifically supports dependencies
3. in this case, create a method to create the line, and call it from both test methods
I would personally prefer 3, because:
I get independent tests, and I can run just the second test (in Eclipse or the like)
In the class's teardown I can remove the line from the database as cleanup. This means that whichever test I run, I always start off with the same (known) database state.
However, if your setup is really expensive, you can consider this to be an integration test and just accept the dependency, to save time.
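A sketch of option 3 using the question's example; the dao field and its insert/delete methods are invented for illustration:
public class MyTest {
    private Dao dao = new Dao();     // hypothetical data-access object used by the tests
    private String createdGuid;

    // shared helper: each test creates its own row and gets the generated GUID
    private String createRow() {
        AnObject object = new AnObject();
        object.setSomeValue(aValue);
        createdGuid = dao.insert(object);   // hypothetical insert returning the GUID
        return createdGuid;
    }

    @Test
    public void updateWorks() {
        String guid = createRow();
        // update the row identified by guid and assert on the result
    }

    @Test
    public void deleteWorks() {
        String guid = createRow();
        // delete the row identified by guid and assert it is gone
    }

    @After
    public void cleanUp() {
        dao.delete(createdGuid);     // hypothetical delete, so every test leaves the same DB state
    }
}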
You should use TestNG if you need this (and I agree it's fairly common in integration testing). TestNG uses the same instance to run your tests, so values stored in fields are preserved between tests, which is very useful when your objects are expensive to create (JUnit forces you to use statics to achieve the same effect, which should be avoided).
First off, make sure your @Test methods run in some kind of defined order,
e.g. @FixMethodOrder(MethodSorters.NAME_ASCENDING)
In the example below, I'm assuming that test2 will run after test1.
To share a variable between them, use a ThreadLocal (from java.lang).
Note that the scope of a ThreadLocal variable is the thread, so if you are running tests in multiple threads, each thread will have its own copy of 'email' (the static here only makes it global within a thread).
private static ThreadLocal<String> email = new ThreadLocal<String>();
@Test
public void test1() {
email.set("hchan@apache.org");
}
@Test
public void test2() {
System.out.println(email.get());
}
You should not do that. Tests are supposed to be able to run in random order. If you want to test things that depend on one value in the database, you can do that in the @Before code, so it's not all repeated for each test case.
I have found a nice solution: just add the @Before annotation to the previous test!
private static String email = null;
@Before
@Test
public void test1() {
email = "test@google.com";
}
@Test
public void test2() {
System.out.println(email);
}
If you, like me, googled your way here and the answers didn't help you, I'll just leave this: use @BeforeEach.
What is the purpose of the org.apache.hadoop.mapreduce.Mapper.run() function in Hadoop? setup() is called before map() and cleanup() is called after map(). The documentation for run() says
Expert users can override this method for more complete control over the execution of the Mapper.
I am looking for the practical purpose of this function.
The default run() method simply takes each key / value pair supplied by the context and calls the map() method:
public void run(Context context) throws IOException, InterruptedException {
setup(context);
while (context.nextKeyValue()) {
map(context.getCurrentKey(), context.getCurrentValue(), context);
}
cleanup(context);
}
If you wanted to do more than that ... you'd need to override it. For example, the MultithreadedMapper class overrides run() to process the input with multiple threads.
I just came up with a fairly odd case for using this.
Occasionally I've found that I want a mapper that consumes all its input before producing any output. I've done it in the past by performing the record writes in my cleanup function. My map function doesn't actually output any records, it just reads the input and stores whatever will be needed in private structures.
It turns out that this approach works fine unless you're producing a LOT of output. The best I can make out is that the mapper's spill facility doesn't operate during cleanup. So the records that are produced just keep accumulating in memory, and if there are too many of them you risk heap exhaustion. This is my speculation of what's going on - could be wrong. But definitely the problem goes away with my new approach.
That new approach is to override run() instead of cleanup(). My only change to the default run() is that after the last record has been delivered to map(), I call map() once more with null key and value. That's a signal to my map() function to go ahead and produce its output. In this case, with the spill facility still operable, memory usage stays in check.
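A rough sketch of that override, assuming a Mapper whose map() treats a null key/value pair as the "input is finished, emit now" signal:
@Override
public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        // normal pass: map() only buffers what it needs and emits nothing
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    // extra call with null key and value: tells map() the input is exhausted,
    // so it can write its records while the spill machinery is still active
    map(null, null, context);
    cleanup(context);
}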
Maybe it could be used for debugging purposes as well. You could then skip part of the input key-value pairs (i.e. take a sample) to test your code.