Very curious to know whether PMD rules (other than the PMD JUnit ruleset) need to be applied to the JUnit Java test files as well. Since these test files are never going to be executed in production, how necessary is it for the JUnit files to be PMD-rule compliant?
Please share your thoughts.
This question is more a definition of what the software architect / technical lead wants.
We have a common rule that the quality of the tests has to be the same as the quality of the production code. But why?
Since the test code ensures the quality of the production code, it has to be as maintainable as the production code.
This means that if you do not enforce some common rules on the tests, the tests will get worse and worse over time.
We separate between:
User acceptance tests (automated)
integration tests (automated)
and unit tests (automated)
From top to bottom, we loosen the rule chains for the developers. UA tests are crucial, so they are held to the same standard as any production code. Unit tests are nice and needed, but we will survive if we lose some of them.
Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute, initialized from one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by creating some annotations to abstract the initialization code but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test, but once it's scaled up I can see it being a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples make the KnowledgeBase static, so it's created only once; however, with multiple unit test classes it will potentially be created many times.
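The first attempt roughly looks like this. This is a sketch only: it assumes the legacy Drools knowledge-api and JUnit 4 on the classpath, and the rule file name and Customer fact are made up for illustration.

```java
// Sketch of the "static KnowledgeBase, fresh session per test" pattern.
public class DiscountRuleTest {

    private static KnowledgeBase kbase;
    private StatefulKnowledgeSession session;

    @BeforeClass
    public static void buildKnowledgeBase() {
        // Expensive: compile the DRL once per test class.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("discount.drl"),
                     ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
    }

    @Before
    public void createSession() {
        session = kbase.newStatefulKnowledgeSession(); // cheap, per test
    }

    @After
    public void disposeSession() {
        session.dispose();
    }

    @Test
    public void goldCustomerGetsDiscount() {
        Customer c = new Customer("gold"); // hypothetical fact class
        session.insert(c);
        session.fireAllRules();
        assertEquals(10, c.getDiscount());
    }
}
```

The static field is what limits the KnowledgeBase build to once per class rather than once per test, which is exactly why it still multiplies across many test classes.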
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as a @Bean. I now find that the KnowledgeBase takes 2 seconds to build (but builds only once), session creation takes about 0.2 seconds, and the test itself takes no time at all.
It seems as if the Spring approach may scale better, but I'm not sure whether I will run into other problems when testing a single rule against a KnowledgeBase initialized from all files. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way.
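A sketch of that singleton setup, assuming Spring and the legacy Drools knowledge-api on the classpath; the .drl file names and the rule name in the filter are placeholders:

```java
// One KnowledgeBase for the whole suite, exposed as a Spring bean.
@Configuration
public class RulesTestConfig {

    @Bean
    public KnowledgeBase knowledgeBase() {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        // Load every rule file once; tests narrow down via an AgendaFilter.
        for (String drl : Arrays.asList("discount.drl", "shipping.drl")) {
            kbuilder.add(ResourceFactory.newClassPathResource(drl), ResourceType.DRL);
        }
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase;
    }
}
```

Inside a test, targeting one rule then looks like `session.fireAllRules(new RuleNameEqualsAgendaFilter("apply gold discount"))`, so only the named rule's activations actually fire even though every rule was compiled into the base.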
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential to test the outcome because the rules vie with each other, you should create a KieBase containing all of them and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules; or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts and firing all rules.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially when you adopt the strategy to reuse the session (!) by clearing Working Memory between individual tests.
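The session-reuse strategy above depends on actually emptying Working Memory between tests. A sketch of what that can look like with the legacy StatefulKnowledgeSession API (Drools jars assumed on the classpath):

```java
// Reuse one session across tests; retract every fact between tests
// instead of paying for a new KnowledgeBase or even a new session.
@After
public void clearWorkingMemory() {
    for (FactHandle handle : session.getFactHandles()) {
        session.retract(handle);
    }
}
```

The caveat is that this only clears facts; anything a rule changed globally (globals, external side effects) must be reset separately, which is one reason sharing sessions between tests needs care.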
Some consideration should also be given to rule design. Excessively iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing functions like these is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".
Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. Any other class with @Test methods extends the base class.
Well, it is possible to write Java code that takes screenshots; see here for example.
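For reference, the JDK itself can capture the screen via java.awt.Robot. A minimal sketch; the file-name helper and the headless guard are my additions, and on a display-less build server capture() simply returns false rather than throwing:

```java
import java.awt.AWTException;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Screenshot {

    /** Derive a PNG file name from a test name; falls back to a generic name. */
    public static String screenshotName(String testName) {
        if (testName == null || testName.isEmpty()) {
            return "screenshot.png";
        }
        return testName + ".png";
    }

    /** Capture the whole screen to a PNG; returns false where no display exists. */
    public static boolean capture(File target) {
        if (GraphicsEnvironment.isHeadless()) {
            return false; // nightly builds etc. usually have no screen to shoot
        }
        try {
            Rectangle screen =
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage image = new Robot().createScreenCapture(screen);
            return ImageIO.write(image, "png", target);
        } catch (AWTException | IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = capture(new File(screenshotName("myFailingTest")));
        System.out.println("captured=" + ok);
    }
}
```

The headless guard is what makes the next answer's point concrete: in many CI environments there is simply nothing to capture.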
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you have figured that out yet, but the main intention of JUnit is to provide a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When a test fails, you should be looking at textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So the real answer here is: step back from what you are doing right now, and reconsider non-screenshot solutions to the problem you actually want to solve!
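That said, if a screenshot on failure is still wanted, the usual JUnit 5 hook is a TestWatcher extension registered on the base class. A sketch only: it needs junit-jupiter on the classpath, and takeScreenshot(...) is a placeholder for whatever capture code is used (AWT Robot, Selenium's TakesScreenshot, ...):

```java
// Sketch: run screenshot capture whenever a test fails (JUnit 5).
public class ScreenshotOnFailure implements TestWatcher {
    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        takeScreenshot(context.getDisplayName() + ".png"); // placeholder helper
    }
}

// Registered once on the base test class, so every subclass inherits it:
@ExtendWith(ScreenshotOnFailure.class)
public abstract class BaseTest { /* @BeforeEach / @AfterEach as before */ }
```

Because extensions registered on a superclass apply to subclasses, the existing base-class setup in the question is a natural place to hang this.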
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the below tools/plugins.
You can use the Cobertura Maven plugin or the SonarQube code quality tool; these will automatically generate the reports for you.
You can look here for the Cobertura Maven plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (continuous integration) environment and ensure that if the code does NOT pass certain quality gates (in terms of test coverage, code complexity, etc.), then the project build (war/ear) fails automatically.
I'd like to define two different suites in the same JUnit TestCase class; one for behaviour tests and another for efficiency tests. Is it possible?
If yes, how? If not, why not?
Additional details: I'm using JUnit 3.8.1.
If I understand you correctly, you're trying to partition your tests. Suites, on their own, are not really the mechanism you need, rather it's JUnit's categories you need to investigate:
http://java.dzone.com/articles/closer-look-junit-categories
I've not used these, as I've usually found the overhead of test partitioning too much effort, but this may work for you. I think TestNG has supported this concept for quite a while.
Also, if you're using Maven you get a partitioning of tests into unit and integration tests for free - check out the Failsafe plugin - which is good for separating tests you want to run quickly as part of every build from longer running tests.
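A sketch of what the categories approach from that article looks like (JUnit 4.8+, so it would mean upgrading from 3.8.1; the marker interfaces and test names are made up):

```java
// Marker interfaces used as category labels.
public interface BehaviourTests {}
public interface EfficiencyTests {}

public class WidgetTest {

    @Test
    @Category(BehaviourTests.class)
    public void behavesCorrectly() { /* ... */ }

    @Test
    @Category(EfficiencyTests.class)
    public void runsFastEnough() { /* ... */ }
}

// A suite that runs only the efficiency tests from the same class.
@RunWith(Categories.class)
@Categories.IncludeCategory(EfficiencyTests.class)
@Suite.SuiteClasses(WidgetTest.class)
public class EfficiencySuite {}
```

The point is that both partitions live on the one test class, and the suites merely filter by category, which is closer to what the question asks for than hand-maintained suites.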
We are creating unit tests for our existing code base. As we progress, the test files are getting bigger and are taking a very long time to execute.
I know the limitations of unit testing, and I did some research to increase efficiency. While researching, I found one useful idea: tighten up the provided data set.
Still, I am looking for more ideas on how I can increase the efficiency of running/creating the unit tests. Increasing server resources is outside the scope of this question.
As your question was general I'll cover a few of the common choices. But most of the speed-up techniques have downsides.
If you have dependencies on external components (web services, file systems, etc.) you can get a speed-up by mocking them. This is usually desirable for unit testing anyway. You still need to have integration/functional tests that test with the real component.
If testing databases, you can get quite a speed-up by using an in-memory database (SQLite works well with PHP's PDO; with Java, maybe H2?). This can have downsides, unless database portability is already a design goal. (I'm about to move to running one set of unit tests against both MySQL and SQLite.) Mocking the database away completely (see above) may be better.
PHPUnit allows you to specify @group on each test. You could go through and mark your slower tests with @group slow, and then use the --exclude-group command-line flag to exclude them on most of your test runs, only including them in the overnight build. (You can also specify groups to include/exclude in your phpunit.xml.dist file.)
(JUnit 4.8+ offers something similar with @Category; TestNG has groups; for C#, NUnit offers categories.)
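The phpunit.xml.dist equivalent of that command-line flag is a small config fragment (the group name follows the example above):

```xml
<!-- phpunit.xml.dist: keep slow tests out of the default run -->
<phpunit>
    <groups>
        <exclude>
            <group>slow</group>
        </exclude>
    </groups>
</phpunit>
```

The nightly build can then override this with `--group slow` (or its own config file) to pick the slow tests back up.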
Creating fixtures once and then sharing them between tests is quicker than creating the fixture before each test. The book xUnit Test Patterns devotes whole chapters to the pros and cons (mostly cons) of this approach.
I know throwing hardware at it was explicitly forbidden in your question, but look again at @group, and consider how it can allow you to split your tests across multiple machines. Or split tests by directory, and process one directory on each of several machines on your LAN. (PHPUnit is single-threaded, so you could run multiple instances on the same machine, each doing its own directory; be aware that fixtures need to be independent (including unique names for databases you create, mocking the filesystem, etc.) if you go down this route.)
There is so much written about unit testing, but I have hardly found any books/blogs about integration testing. Could you please suggest something to read on this topic?
What tests should one write when doing integration testing?
What makes a good integration test?
etc.
Thanks
Anything written by Kent Beck, father of both JUnit and SUnit, is a great place to start (for unit tests / test writing in general). I'm assuming that you don't mean "continuous integration," which is a process-based build approach (very cool, when you get it working).
In my own experience, integration tests look very similar to regular unit tests, simply at a higher level. More mock objects. More state initialization.
I believe that integration tests are like onions. They have layers.
Some people prefer to "integrate" all of their components and test the "whole" product as the "integration" test. You can certainly do this, but I prefer a more incremental approach: if you start low-level and keep testing at higher composition layers, you will achieve integration testing.
Maybe it is generally harder to find information on integration testing because it is much more specific to the actual application and its business use. Nevertheless, here's my take on it.
What applies to unit-tests also applies to integration tests: modules should have an easy way to mock their externals inputs (files, DB, time...), so that they can be tested together with the other unit-tests.
But what I've found extremely useful, at least for data-oriented applications, is to be able to create a "console" version of the application that takes input files that fully determine its state (no dependencies on databases, network resources...), and outputs the result as another file. One can then maintain pairs of inputs / expected results files, and test for regressions as part of nightly builds, for example. Having this console version allows for easier scripting, and makes debugging incredibly easier as one can rely on a very stable environment, where it is easy to reproduce bugs and to run the debugger.
J.B. Rainsberger has written about them. Here's a link to an InfoQ article with more info.
http://www.infoq.com/news/2009/04/jbrains-integration-test-scam