JUnit 5 major features outline

Could you please outline new major features of JUnit 5 in comparison to JUnit 4?
What are the new annotations, if any, and what are they used for (in a few words)?

The JUnit 5 programming model remains largely unchanged: we still use annotations to declare test and life-cycle methods.
At first sight there are no big changes. However, they do exist:
Neither test classes nor test methods need to be public.
The @Test annotation no longer takes parameters (JUnit 4's expected and timeout attributes are gone; use assertThrows and assertTimeout instead)
Life-cycle annotations were renamed:
@BeforeAll / @AfterAll (formerly @BeforeClass / @AfterClass)
@BeforeEach / @AfterEach (formerly @Before / @After)
@Disabled is analogous to JUnit 4's @Ignore
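A minimal sketch of the renamed life cycle in use (class and method names are made up), with the JUnit 4 equivalents noted in comments:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class LifecycleDemoTest {

    @BeforeAll    // JUnit 4: @BeforeClass
    static void initAll() { /* runs once before all tests */ }

    @BeforeEach   // JUnit 4: @Before
    void init() { /* runs before each test */ }

    @Test
    void something() { /* the test itself; note: not public */ }

    @AfterEach    // JUnit 4: @After
    void tearDown() { /* runs after each test */ }

    @AfterAll     // JUnit 4: @AfterClass
    static void tearDownAll() { /* runs once after all tests */ }
}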
Some changes were also made to assertions and assumptions:
The optional message is now the last parameter
Assertion messages can be lazily evaluated using Supplier<String>
It is now possible to assert a boolean condition using a BooleanSupplier
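A minimal sketch of these assertion overloads (class and helper names are illustrative):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AssertionChangesTest {

    @Test
    void newAssertionOverloads() {
        // The optional message is now the last parameter
        assertTrue(2 + 2 == 4, "plain message comes last");

        // Lazily evaluated message: the Supplier is only invoked on failure
        assertTrue(2 + 2 == 4, () -> "expensive diagnostics: " + buildReport());

        // The condition itself can be supplied lazily as a BooleanSupplier
        assertTrue(() -> 2 + 2 == 4, "condition evaluated on demand");
    }

    private String buildReport() {
        return "details"; // stands in for a costly computation
    }
}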
JUnit 5 also introduced some new concepts into the programming model:
Tagging and filtering
@Tag and @Tags are used to declare tags for filtering tests, either at the class or method level; analogous to Categories in JUnit 4
@Nested test classes
For better grouping and organization of tests, with shared initialization and state.
@DisplayName
Allows you to declare custom display names — with spaces, special characters, and even emojis — that will be shown by test runners and in test reports.
Dynamic tests
Useful when you need to run the same set of tests on many different input values or configurations.
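A hedged sketch of a dynamic test factory (the input values and names are made up):

import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;
import static org.junit.jupiter.api.Assertions.assertTrue;

class DynamicTestsDemo {

    @TestFactory
    Stream<DynamicTest> squaresAreNeverNegative() {
        // One DynamicTest is generated per input value at run time
        return Stream.of(-3, 0, 7)
            .map(n -> DynamicTest.dynamicTest(
                "square of " + n,
                () -> assertTrue(n * n >= 0)));
    }
}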
JUnit 5 no longer supports Runners and Rules. These partially competing concepts have been replaced by a single, consistent extension model.
Test execution follows a certain life cycle, and each phase of that life cycle that can be extended is represented by an interface. Extensions express interest in certain phases by implementing the corresponding interface(s).
Using extensions you can implement:
Conditional test execution
ExecutionCondition (which replaced the separate TestExecutionCondition and ContainerExecutionCondition of the pre-release milestones)
Constructor and method parameter resolution (dependency injection)
ParameterResolver
Exception handling
TestExecutionExceptionHandler
Test life-cycle handling
TestInstancePostProcessor
BeforeAllCallback
BeforeEachCallback
BeforeTestExecutionCallback
AfterTestExecutionCallback
AfterEachCallback
AfterAllCallback
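For example, a minimal timing extension built on two of these callbacks (the class is a sketch modeled on the pattern from the JUnit 5 documentation, not an official artifact):

import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(TimingExtension.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        // Stash the start time in the context's store, keyed per test
        context.getStore(NAMESPACE).put("start", System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long start = context.getStore(NAMESPACE).remove("start", long.class);
        System.out.printf("%s took %d ms%n",
                context.getDisplayName(), System.currentTimeMillis() - start);
    }
}

A test class opts in with @ExtendWith(TimingExtension.class).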

Related

NUnit equivalent for JUnit test state management with @Before/@After

I come from the Java world, where I mostly used JUnit, and now I have some problems expressing certain aspects of tests with NUnit 3. In JUnit, each test creates its own instance of the test class, so it's perfectly valid to create instance variables in a test class, set them up in an @Before method, let the test method and helpers access these variables freely without worrying that they will be overwritten by other tests running in parallel, and let @After tear down the test data nicely. With NUnit this does not work: the SetUp and TearDown methods seem useless in this case, because the test fixture instance is reused between invocations of the test method(s), so fields of the test fixture class can be (and are) overwritten by every invocation of a test method (my class has a few test methods, and each of them generates several test cases, so there are some tens of invocations in one test run).
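A minimal JUnit 4 sketch of the per-test-instance model being described (class and field names are illustrative):

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TempFolderTest {

    // Safe in JUnit: every test method runs on a fresh instance of this class
    private Path workFolder;

    @Before
    public void setUp() throws Exception {
        workFolder = Files.createTempDirectory("test-");
    }

    @After
    public void tearDown() throws Exception {
        Files.deleteIfExists(workFolder); // delete the (empty, in this sketch) temp folder
    }

    @Test
    public void producesExpectedFiles() throws Exception {
        // test logic and helpers can read workFolder freely
    }
}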
I do not know how to work around this problem. In my scenario, set up would create a temporary folder to be used as the work folder for the following test case, and tear down would delete the temporary folder afterwards, cleaning up all intermediate files created by the tested method. But now, when SetUp creates a temporary folder and stores its path in an instance field (so it can be read by the test logic and by somewhat complicated asserts and verifiers), the value of that field is overwritten by test cases running in parallel. I considered several approaches:
implement an IDisposable which would represent the context of each test, and enclose it with using in each test method. I do not like this idea, because IDisposable should be nothing other than a resource-management tool, and combining IDisposable with using to simulate set up/tear down smells to me like an abuse of this particular language feature,
create a method which accepts a delegate for the actual test logic, and which invokes custom SetUpTestCase/TearDownTestCase methods around it. The method would invoke set up, then the test delegate, then tear down. What I do not like about this approach is that it does not play well with test methods which accept parameters: each set of test methods parametrized in a particular way would need a corresponding delegate type. It also seems to go against the spirit of NUnit and its way of describing test methods with attributes; after all, why should the main logic of my test be delegated to anything? Shouldn't the [Test] or [TestCase] method be the actual test?
maybe there's some way to use more advanced aspects of NUnit, like actions or some callbacks/triggers/whatever; I am just too inexperienced to see them. What I particularly miss is a way to transfer data from the set up method (for example, the path to a temporary folder it created) to the test method that follows. I cannot use instance fields for this, and I do not know whether there exists any "tag" structure which could pass test-specific data between methods invoked at different stages of a test's lifecycle?
Generally, SetUp and TearDown attributes seem pretty useless to me, if they cannot set up the test case without their result being overwritten immediately by another test case run in parallel. What am I missing here?
How can I implement such per-test-case, scoped set up/tear down behavior with NUnit? What am I doing wrong, or what am I missing?
As you have established, the TestFixture class is instantiated once before the OneTimeSetUp is called; then for each test it runs a set of SetUp, Test and TearDown; and finally, the OneTimeTearDown.
If you want the tests to be run in parallel (which is not the default), then you must specify the Parallelizable attribute. Whether you do that or not, it is a good idea for your tests to be written independently, so that they do not conflict with each other - they need to be structured.
The AAA (Arrange, Act, Assert) pattern is a common way of structuring unit tests for a method under test. If your tests are to be run in parallel, then TestFixture fields are not suitable for holding information which may conflict across parallel tests, in the same way that it wouldn't be suitable in a multithreaded class.
I'd suggest using a private method in the TestFixture to set up the temporary folder - it will need to have some way of providing a unique folder name, so that the parallel tests do not interact - perhaps use a Guid or CallerMemberName as part of the folder name, and return the folder name.
This method should be called from the Arrange part of the test. And you'll need a try...finally wrapping the rest of the Test to ensure the folder gets torn down. Or you could go with your IDisposable idea - I don't think there's anything wrong with that: the whole point of that is to guarantee tidying up resources (both managed and unmanaged) when something goes out of scope.
Your second suggestion of a delegate would also be fine if you used lambda expressions rather than strictly-defined delegates - the lambda expression can capture variables from the containing scope.

What good are JUnit's @Ignore and @Disabled annotations?

What is the advantage of adding the @Disabled or @Ignore annotation to JUnit tests, e.g.:
@Test
@Disabled
void testSomething() { /* ... */ }
instead of just removing the @Test annotation?
void testSomething() { /* ... */ }
Either way, the test should not be executed.
The utility of these annotations is largely in documentation/reporting. When you run a JUnit suite, you get a report of the results. @Ignored/@Disabled tests will be marked as such (with optional comments) in that report.
This lets you track how many tests are being ignored/disabled. You can set policies around this (e.g. if a test has been @Ignored for a month, just delete it) or make CI systems fail if too many tests are being @Ignored. You can make graphs showing trends of Passed/Failed/Skipped over time.
Really, it all comes down to how you want to track the evolution of your test suite, and whether you'd want to see a section of "skipped" tests, or the total number of tests going down, when a test is temporarily broken or no longer useful.
The @Disabled and @Ignore annotations can be used to disable or ignore test methods in a test suite.
@Disabled was introduced in JUnit 5. It accepts a single optional parameter, which indicates the reason the test is disabled. Example:
@Disabled("Do not run in a lower environment")
Advantages of adding @Disabled or @Ignore:
Searchability: you can easily identify all @Ignore or @Disabled annotations in the source code, while unannotated or commented-out tests are not so simple to find.
Maintainability: it is easy to re-enable or modify the test later. It is always good practice to use annotations.
In my opinion, @Ignore is cleaner than commenting out an entire test method.
Also, when you run your test suite, you get a warning that some tests are ignored. You won't get that if you comment them out, and maybe someday you will want to enable them again.
Other advantages:
Manual-only execution: ensures a test is never run automatically while still allowing you to run it manually with no code change. This is my preferred approach, since it defaults to safe behaviour without requiring any configuration of the test engine or test run.
Avoids warnings: removing the @Test annotation can trigger "unused code" warnings for tests that are still in use, just manually.
Modularity: you can apply it to an entire test class, whereas removing the @Test annotations would have to be done on each test method individually.
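A short sketch of the class-level usage (the class names and reason are made up):

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

// Disabling the whole class skips every test in it,
// and the reason is carried through to the test report.
@Disabled("Blocked on hypothetical issue FOO-123")
class PaymentGatewayTest {

    @Test
    void chargesCard() { /* ... */ }

    @Test
    void refundsCard() { /* ... */ }
}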

Where to create KnowledgeBase in a Drools unit test?

Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute, initialized from one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by creating some annotations to abstract the initialization code but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test but once it's scaled up I can see it being a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples have the KnowledgeBase as static so it's created only once, however, with multiple unit test classes it will potentially be created many times.
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as a @Bean. I now find that the KnowledgeBase takes 2 seconds to create (but runs only once), session creation takes about 0.2 seconds, and the test itself takes no time at all.
It seems as if the Spring approach may scale better, but I'm not sure whether testing a single rule against a KnowledgeBase initialized from all the files will cause other problems. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way.
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential for testing the outcome because rules vie with each other, you should create the KieBase containing all of them and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules, or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts and firing all rules.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially if you adopt the strategy of reusing the session (!) by clearing Working Memory between individual tests.
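A sketch of the build-once, derive-sessions-cheaply idea using the current KIE API (it assumes a kmodule.xml on the test classpath; the class names are illustrative):

import org.junit.After;
import org.junit.Before;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public abstract class AbstractRuleTest {

    // Expensive: built once per JVM and shared by all subclasses
    protected static final KieBase KIE_BASE =
            KieServices.Factory.get().getKieClasspathContainer().getKieBase();

    protected KieSession session;

    @Before
    public void newSession() {
        session = KIE_BASE.newKieSession(); // cheap: one per test
    }

    @After
    public void disposeSession() {
        session.dispose();
    }
}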
Some consideration should also be given to rule design. Excessively iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing those is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".

Drools testing with JUnit

What is the best practice for testing Drools rules with JUnit?
Until now we used JUnit with DbUnit to test rules. We had sample data that was put into an HSQLDB. We had a couple of rule packages, and by the end of the project it was very hard to craft good test input that exercises certain rules without firing others.
So the exact question is: how can I limit tests in JUnit to one or more specific rules?
Personally I use unit tests to test isolated rules. I don't think there is anything too wrong with it, as long as you don't fall into a false sense of security that your knowledge base is working because isolated rules are working. Testing the entire knowledge base is more important.
You can write the isolating tests with an AgendaFilter and a StatelessSession:
StatelessSession session = ruleBase.newStatelessSession();
session.setAgendaFilter( new RuleNameMatches("<regexp to your rule name here>") );
List data = new ArrayList();
... // create your test data here (probably built from some external file)
StatelessSessionResult result = session.executeWithResults( data );
// check your results here.
Code source: http://blog.athico.com/2007/07/my-rules-dont-work-as-expected-what-can.html
Do not attempt to limit rule execution to a single rule for a test. Unlike OO classes, single rules are not independent of other rules, so it does not make sense to test a rule in isolation in the same way that you would test a single class using a unit test. In other words, to test a single rule, test that it has the right effect in combination with the other rules.
Instead, run tests with a small amount of data on all of your rules, i.e. with a minimal number of facts in the rule session, and test the results and perhaps that a particular rule was fired. The result is not actually that much different from what you have in mind, because a minimal set of test data might only activate one or two rules.
As for the sample data, I prefer to use static data and define minimal test data for each test. There are various ways of doing this, but programmatically creating fact objects in Java might be good enough.
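As a sketch of such a minimal-data test (the Customer fact, the rule it triggers, and the classpath KIE setup are all assumptions):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class SeniorDiscountRuleTest {

    @Test
    public void seniorCustomerGetsDiscount() {
        // assumes a kmodule.xml on the classpath defining a default session
        KieSession session = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession();
        try {
            Customer customer = new Customer(70); // one fact: minimal test data
            session.insert(customer);
            session.fireAllRules();
            assertTrue(customer.hasDiscount()); // assert the effect, not the activation count
        } finally {
            session.dispose();
        }
    }

    // Hypothetical fact class, normally part of the domain model
    public static class Customer {
        private final int age;
        private boolean discount;
        public Customer(int age) { this.age = age; }
        public int getAge() { return age; }
        public void setDiscount(boolean discount) { this.discount = discount; }
        public boolean hasDiscount() { return discount; }
    }
}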
I created a simple library that helps write unit tests for Drools. One of its features is exactly what you need: declare the particular .drl files you want to use for your unit test:
@RunWith(DroolsJUnitRunner.class)
@DroolsFiles(value = "helloworld.drl", location = "/drl/")
public class AppTest {

    @DroolsSession
    StatefulSession session;

    @Test
    public void should_set_discount() {
        Purchase purchase = new Purchase(new Customer(17));
        session.insert(purchase);
        session.fireAllRules();
        assertTrue(purchase.getTicket().hasDiscount());
    }
}
For more details, have a look at the blog post: https://web.archive.org/web/20140612080518/http://maciejwalkowiak.pl/blog/2013/11/24/jboss-drools-unit-testing-with-junit-drools/
A unit test with DBUnit doesn't really work. An integration test with DBUnit does. Here's why:
- A unit test should be fast.
-- A DBUnit database restore is slow. Takes 30 seconds easily.
-- A real-world application has many NOT NULL columns. So data isolated for a single feature still easily uses half the tables of the database.
- A unit test should be isolated.
-- Restoring the DBUnit database for every test to keep them isolated has drawbacks:
--- Running all tests takes hours (especially as the application grows), so no one runs them, so they constantly break, so they are disabled, so there is no testing, so your application is full of bugs.
--- Creating half a database for every unit test is a lot of creation work and a lot of maintenance work, it can easily become invalid (with regard to validation rules that database schemas don't support; see Hibernate Validator), and it usually does a bad job of representing reality.
Instead, write integration tests with DBUnit:
- One DBUnit data set, the same for all tests. Load it only once (even if you run 500 tests).
-- Wrap each test in a transaction and roll the database back after every test (see the sketch after this list). Most methods use propagation REQUIRED anyway. Mark the test data dirty (so it is reset for the next test, if there is one) only when propagation is REQUIRES_NEW.
- Fill that database with corner cases. Don't add more common cases than are strictly needed to test your business rules, so usually only 2 common cases (to be able to test "one to many").
- Write future-proof tests:
-- Don't test the number of activated rules or the number of inserted facts.
-- Instead, test whether a certain inserted fact is present in the result: filter the result on a certain property set to X (different from the common value of that property) and test the number of inserted facts with that property set to X.
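A sketch of the transaction-per-test idea, assuming the Spring TestContext framework (the context file and class names are made up); Spring rolls the transaction back after each test by default, so the once-loaded data set stays intact:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical context file
@Transactional // rolled back automatically after every test method
public class InvoiceRulesIntegrationTest {

    @Test
    public void cornerCaseInvoiceIsFlagged() {
        // query facts, fire rules, assert on the results;
        // all database changes are rolled back afterwards
    }
}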
Unit testing is about taking a minimal piece of code and testing all of its possible use cases, thereby defining a specification. With integration tests your goal is not all possible use cases but the integration of several units that work together. Do the same with rules. Segregate rules by business meaning and purpose. The simplest "unit under test" could be a file with a single rule or a highly cohesive set of rules, plus whatever it requires to work (if anything), such as a common DSL definition file or a decision table. For integration tests you could take a meaningful subset, or all rules, of the system.
With this approach you'll have many isolated unit tests and a few integration tests with a limited amount of common input data to reproduce and test common scenarios. Adding new rules will not impact most of the unit tests; it will affect a few integration tests and show how the new rules impact the common data flow.
Consider a JUnit testing library that could be suitable for this approach.

Extending JUnit 4 or TestCase?

We've got a simple webservice for handling queries about our data. I'd like to make a set of asserts/test case extensions that would provide high-level methods for testing various aspects of the response. For instance, I could write assertResultCountMinimum(int); the method would take care of building the query, executing it, and unpacking the response to validate the data.
I want to make sure I have the right idea in my head about how to go about this.
First, create a test case class of my own, with the right setup and teardown methods; for our purposes, MyTestCase. Then provide a series of classes that extend Assert with the new assert methods. The end user of these classes would extend MyTestCase and would use the asserts that I've created. This is the pattern I think I see in jWebUnit.
I feel like I'm mixing and matching JUnit 3 and 4 concepts. I'd love to have just JUnit 4 concepts, but I can't seem to line up in my head the proper way to build this. Also, the assert methods that belong to JUnit's Assert class are all static. Some of my asserts would require re-querying the webservice, which makes me think I should really just provide the asserts as a series of helper functions inside of MyTestCase. The latter gets the job done, but doesn't feel right.
Any insight, musings, requests for clarification, much appreciated.
Follow-up edit:
As Jeanne suggests below, I'm creating a superclass with all of my asserts and setup/teardown methods. In reality my asserts are helper functions which wrap the basic JUnit 4 asserts, which I import into my superclass. Any test of mine will just extend this superclass. One caveat I'm considering is making the superclass abstract, since there shouldn't be any instances of it.
Marc,
I use two patterns in JUnit 4. For "utility type" assertions, I made a static class, for example ReflectionAssertions. Then I use a static import to use those assertions in my JUnit 4 test.
For local assertions that are only used in one class, I make them regular methods in the JUnit 4 test class itself, for example assertCallingMyBusinessMethodWithNullBlowsUp(). These don't have much reuse value.
I don't consider this mixing concepts, because the latter group aren't reusable outside my test. If I had reusable assertions that made webservice calls (and therefore needed state), I would create a superclass that did not extend TestCase and use that. My superclass would have the state and @Before methods for setup. As such, it is part of the test.
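A minimal sketch of the static-assertion-class pattern described above (the class and method are illustrative, not the actual ReflectionAssertions code):

import static org.junit.Assert.assertNotNull;

// Reusable "utility type" assertions, consumed via static import:
// import static ReflectionAssertions.assertHasMethod;
public final class ReflectionAssertions {

    private ReflectionAssertions() { }

    public static void assertHasMethod(Class<?> type, String methodName) {
        try {
            assertNotNull(type.getMethod(methodName));
        } catch (NoSuchMethodException e) {
            throw new AssertionError(type.getName() + " lacks method " + methodName, e);
        }
    }
}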