I'd like to define two different suites in the same JUnit TestCase Class; one for behaviour tests and another for efficiency tests. Is it possible?
If yes, how? If not, why not?
Additional details: I'm using JUnit 3.8.1.
If I understand you correctly, you're trying to partition your tests. Suites, on their own, are not really the mechanism you need; rather, it's JUnit's categories you need to investigate:
http://java.dzone.com/articles/closer-look-junit-categories
I've not used these as I've usually found the overhead of test partitioning too much effort, but this may work for you. I think TestNG has supported this concept for quite a while.
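For reference, here is a minimal sketch of what categories look like if you can move to JUnit 4 (the EfficiencyTests marker interface and the class names are just illustrative, one class per file):

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // EfficiencyTests.java - empty marker interface used only as a category label
    public interface EfficiencyTests {}

    // WidgetTest.java - one test class holding both kinds of tests
    public class WidgetTest {

        @Test
        public void behavesCorrectly() {
            // ordinary behaviour test, runs in every suite
        }

        @Test
        @Category(EfficiencyTests.class)
        public void runsFastEnough() {
            // only runs when the EfficiencyTests category is included
        }
    }

    // EfficiencySuite.java - runs only the tests tagged with EfficiencyTests
    @RunWith(Categories.class)
    @IncludeCategory(EfficiencyTests.class)
    @SuiteClasses(WidgetTest.class)
    public class EfficiencySuite {}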
Also, if you're using Maven you get a partitioning of tests into unit and integration tests for free - check out the Failsafe plugin - which is good for separating tests you want to run quickly as part of every build from longer running tests.
Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute, initialized on one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by creating some annotations to abstract the initialization code but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test but once it's scaled up I can see it being a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples have the KnowledgeBase as static so it's created only once, however, with multiple unit test classes it will potentially be created many times.
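For concreteness, this is roughly what that first attempt looks like, assuming the Drools 5 API (the rule file discount.drl and the class name are placeholders):

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.runtime.StatefulKnowledgeSession;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class DiscountRuleTest {

        // expensive to build, so created once per test class
        private static KnowledgeBase kbase;

        // cheap to create, so a fresh one per test
        private StatefulKnowledgeSession ksession;

        @BeforeClass
        public static void buildKnowledgeBase() {
            KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            builder.add(ResourceFactory.newClassPathResource("discount.drl"), ResourceType.DRL);
            if (builder.hasErrors()) {
                throw new IllegalStateException(builder.getErrors().toString());
            }
            kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(builder.getKnowledgePackages());
        }

        @Before
        public void createSession() {
            ksession = kbase.newStatefulKnowledgeSession();
        }

        @After
        public void disposeSession() {
            ksession.dispose();
        }

        @Test
        public void appliesDiscount() {
            // insert facts, fire rules, assert on the results
        }
    }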
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as a @Bean. I now find that the KnowledgeBase takes 2 seconds to build (but runs only once), the session creation takes about 0.2 seconds and the test itself takes no time at all.
It seems as if the Spring approach may scale better, but I'm not sure whether I'll run into other problems testing a single rule against a KnowledgeBase initialized on all files. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way.
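In case it helps to evaluate the approach, here is a sketch of the Spring wiring (the rule file names, the rule name in the AgendaFilter and the class names are all placeholders):

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class RulesTestConfig {

        // built once per Spring test context and shared by every test class
        @Bean
        public KnowledgeBase knowledgeBase() {
            KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            // placeholder rule files; in practice add every .drl in the project
            builder.add(ResourceFactory.newClassPathResource("rules/discount.drl"), ResourceType.DRL);
            builder.add(ResourceFactory.newClassPathResource("rules/shipping.drl"), ResourceType.DRL);
            if (builder.hasErrors()) {
                throw new IllegalStateException(builder.getErrors().toString());
            }
            KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(builder.getKnowledgePackages());
            return kbase;
        }
    }

A test class then autowires the KnowledgeBase and targets one rule with an AgendaFilter:

    import org.drools.KnowledgeBase;
    import org.drools.runtime.StatefulKnowledgeSession;
    import org.drools.runtime.rule.Activation;
    import org.drools.runtime.rule.AgendaFilter;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = RulesTestConfig.class)
    public class SingleRuleTest {

        @Autowired
        private KnowledgeBase kbase;

        @Test
        public void appliesDiscount() {
            StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            try {
                // insert facts here, then fire only the rule under test
                ksession.fireAllRules(new AgendaFilter() {
                    public boolean accept(Activation activation) {
                        return "apply discount".equals(activation.getRule().getName());
                    }
                });
                // assertions on the expected results go here
            } finally {
                ksession.dispose();
            }
        }
    }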
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential to test the outcome because rules vie with each other, you should create the KieBase containing all of them and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules, or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts and firing all rules.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially when you adopt the strategy to reuse the session (!) by clearing Working Memory between individual tests.
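As a sketch of the "reuse the session and clear it" idea, assuming the KIE API with a kmodule.xml on the classpath (the base class name is made up):

    import java.util.ArrayList;

    import org.junit.After;
    import org.junit.BeforeClass;
    import org.kie.api.KieBase;
    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieSession;
    import org.kie.api.runtime.rule.FactHandle;

    public abstract class SharedRulesTest {

        // built once for the whole run, containing every rule
        protected static KieBase kieBase;

        // reused across tests; Working Memory is cleared between them
        protected static KieSession kieSession;

        @BeforeClass
        public static void setUpSharedSession() {
            if (kieBase == null) {
                kieBase = KieServices.Factory.get().getKieClasspathContainer().getKieBase();
                kieSession = kieBase.newKieSession();
            }
        }

        @After
        public void clearWorkingMemory() {
            // remove every fact so the next test starts from an empty session
            for (FactHandle handle : new ArrayList<FactHandle>(kieSession.getFactHandles())) {
                kieSession.delete(handle);
            }
        }
    }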
Some consideration should also be given to rule design. Excessive iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing those is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".
Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write java code that takes screenshots, see here for example.
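For instance, in JUnit 5 you could hook a TestWatcher extension to test failures and capture the screen with java.awt.Robot; a minimal sketch (the class name and output directory are made up, and it obviously only works where a display is available):

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;

    import javax.imageio.ImageIO;

    import org.junit.jupiter.api.extension.ExtensionContext;
    import org.junit.jupiter.api.extension.TestWatcher;

    // register on the base test class with @ExtendWith(ScreenshotOnFailure.class)
    public class ScreenshotOnFailure implements TestWatcher {

        @Override
        public void testFailed(ExtensionContext context, Throwable cause) {
            try {
                BufferedImage capture = new Robot().createScreenCapture(
                        new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
                File target = new File("target/screenshots", context.getDisplayName() + ".png");
                target.getParentFile().mkdirs();
                ImageIO.write(capture, "png", target);
            } catch (Exception e) {
                // never let a screenshot problem mask the original test failure
                System.err.println("Could not capture screenshot: " + e.getMessage());
            }
        }
    }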
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you figured that yet, but the main intention of JUnit is to provide you a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking for textual log files, html/xml reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and re-consider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (Tests Passed/Failed report, code coverage report, code complexity report, etc.) automatically using the below tools/plugins.
You can use Cobertura maven plugin or Sonarqube code quality tool so that these will automatically generate the reports for you.
You can look here for Cobertura-maven-plugin and here for Sonarqube for more details.
You need to integrate these tools with your CI (Continuous Integration) environments and ensure that if the code is NOT passing certain quality (in terms of tests coverage, code complexity, etc..) then the project build (war/ear) should fail automatically.
I'm currently working on a class dealing with network issues. Using JUnit 3.8.1 and having a hardware device that's not always around to test against, I'd like to conditionally suppress individual tests. Is there a way to achieve this with a simple annotation like @if(!gatewayAvailable) -> the test is suppressed?
Thanx for any pointers, marcus
There is no such feature in JUnit 3.8.1. You have to use JUnit4 and its Assume class.
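A minimal sketch with JUnit 4 (gatewayAvailable() is a placeholder for however you detect the device; when the assumption fails, the test is skipped rather than failed):

    import static org.junit.Assume.assumeTrue;

    import org.junit.Before;
    import org.junit.Test;

    public class GatewayTest {

        @Before
        public void requireGateway() {
            // skips every test in this class when the device is not around
            assumeTrue(gatewayAvailable());
        }

        @Test
        public void sendsHeartbeat() {
            // only runs when the assumption above holds
        }

        private boolean gatewayAvailable() {
            // placeholder: e.g. try to open a socket to the gateway and
            // return whether the connection succeeded
            return false;
        }
    }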
I'm in the middle of setting up PMD as a tool in our team to support us writing better code. Basically I'm building Ant scripts and try to set up some rules for everyone to use.
But right now I hit this problem:
When I write JUnit tests I don't want to use the same rules I apply to our main source code. I don't care that much about String rules (like string duplicates or weird instantiations) in the JUnit tests.
My questions are:
Is that a fault on my side and should I start writing better JUnit tests?
Should I provide a 2nd set of rules that disables some of the string/design/finalizers rules?
The second option - I don't run PMD against my tests at all. I could, and PMD provides some JUnit-specific rules. I would definitely use a separate ruleset against the test code though. I expect more String literals and things spelled out explicitly instead of using conditionals/loops. After all, I don't want to duplicate the code I am trying to test.
Two things. First, why are you setting up your own rules instead of using the existing ones? (Special requirements?) And second, yes, of course unit tests should have good quality as well. Your unit tests test your production code, so shouldn't they have at least the same quality as your production code?
There is so much written about unit testing, but I have hardly found any books/blogs about integration testing. Could you please suggest something to read on this topic?
What tests to write when doing integration testing?
What makes a good integration test?
etc., etc.
Thanks
Anything written by Kent Beck, father of both JUnit and SUnit, is a great place to start (for unit tests / test writing in general). I'm assuming that you don't mean "continuous integration," which is a process-based build approach (very cool, when you get it working).
In my own experience, integration tests look very similar to regular unit tests, simply at a higher level. More mock objects. More state initialization.
I believe that integration tests are like onions. They have layers.
Some people prefer to "integrate" all of their components and test the "whole" product as the "integration" test. You can certainly do this, but I prefer a more incremental approach. If you start low-level and then keep testing at higher composition layers, then you will achieve integration testing.
Maybe it is generally harder to find information on integration testing because it is much more specific to the actual application and its business use. Nevertheless, here's my take on it.
What applies to unit-tests also applies to integration tests: modules should have an easy way to mock their externals inputs (files, DB, time...), so that they can be tested together with the other unit-tests.
But what I've found extremely useful, at least for data-oriented applications, is to be able to create a "console" version of the application that takes input files that fully determine its state (no dependencies on databases, network resources...), and outputs the result as another file. One can then maintain pairs of inputs / expected results files, and test for regressions as part of nightly builds, for example. Having this console version allows for easier scripting, and makes debugging incredibly easier as one can rely on a very stable environment, where it is easy to reproduce bugs and to run the debugger.
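To make that concrete, here is a sketch of such a regression test in JUnit (the file paths are made up and runConsoleApp() stands in for your application's console entry point):

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    import org.junit.Test;

    public class ConsoleRegressionTest {

        @Test
        public void case01ReproducesExpectedOutput() throws IOException {
            Path input = Paths.get("src/test/resources/cases/case01-input.txt");
            Path expected = Paths.get("src/test/resources/cases/case01-expected.txt");
            Path actual = Paths.get("target/case01-actual.txt");

            runConsoleApp(input, actual);

            assertEquals(read(expected), read(actual));
        }

        private static String read(Path file) throws IOException {
            return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        }

        private static void runConsoleApp(Path input, Path output) {
            // placeholder: invoke the application's "console" entry point here,
            // e.g. MyApp.main(new String[] { input.toString(), output.toString() })
            throw new UnsupportedOperationException("wire in the real entry point");
        }
    }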
J.B. Rainsberger has written about them. Here's a link to an InfoQ article with more info.
http://www.infoq.com/news/2009/04/jbrains-integration-test-scam