How can I determine what percentage of my methods (and code) are covered by JUnit tests? I am assuming there is a more sophisticated way than simply counting them by hand (one, two, ...).
I specifically wonder how such counting is handled when a single method is covered by 'n' tests.
I've used EclEmma very successfully to measure coverage of JUnit test runs. And it's free.
This presentation points to several tools you can use for the purpose.
Cenqua Clover (commercial, $250-$2500): http://www.cenqua.com/clover/
Cobertura (GPL): http://cobertura.sourceforge.net/
Coverlipse (Eclipse plug-in): http://coverlipse.sourceforge.net/index.php
Jester: http://jester.sourceforge.net/
I would suggest going with Cobertura for code coverage. It gives detailed information, including line-by-line coverage as well as branch coverage.
Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods, and every other class with @Test methods extends that base class.
Well, it is possible to write Java code that takes screenshots; see here for an example.
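As a minimal sketch (the class name, output directory, and file naming are assumptions for illustration), a JUnit 5 TestWatcher extension could capture the screen with java.awt.Robot whenever a test fails:

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Captures a full-screen PNG into target/screenshots whenever a test fails.
public class ScreenshotOnFailureExtension implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            File target = new File("target/screenshots", context.getDisplayName() + ".png");
            target.getParentFile().mkdirs();
            ImageIO.write(capture, "png", target);
        } catch (Exception e) {
            // Screenshots are best-effort; never let them hide the original failure.
            System.err.println("Could not capture screenshot: " + e.getMessage());
        }
    }
}
```

Registering it on the base test class with @ExtendWith(ScreenshotOnFailureExtension.class) would make every subclass inherit the behaviour.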
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you have considered this yet, but the main intention of JUnit is to provide a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot there. But "normally" unit tests also run during nightly builds and the like, in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When a test fails, you should be looking at textual log files, HTML/XML reports, and so on. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and reconsider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; the recommended approach is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, and these will generate the reports for you automatically.
You can look here for the Cobertura Maven plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet a certain quality bar (in terms of test coverage, code complexity, etc.), then the project build (war/ear) fails automatically.
I'd like to define two different suites in the same JUnit TestCase Class; one for behaviour tests and another for efficiency tests. Is it possible?
If yes, how? If not, why not?
Additional details: I'm using JUnit 3.8.1.
If I understand you correctly, you're trying to partition your tests. Suites, on their own, are not really the mechanism you need; rather, it's JUnit's Categories you should investigate:
http://java.dzone.com/articles/closer-look-junit-categories
I've not used these myself, as I've usually found the overhead of test partitioning to be too much effort, but this may work for you. I think TestNG has supported this concept for quite a while.
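As a rough sketch of what Categories look like (note this assumes JUnit 4.8+ rather than 3.8.1; the marker interfaces and class names are invented for illustration):

```java
// All in one file (CalculatorTest.java) for brevity.
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interfaces act as the category labels.
interface BehaviourTests {}
interface EfficiencyTests {}

public class CalculatorTest {

    @Test
    @Category(BehaviourTests.class)
    public void addsTwoNumbers() {
        // ... behavioural assertion ...
    }

    @Test
    @Category(EfficiencyTests.class)
    public void addsAMillionNumbersQuickly() {
        // ... timing-sensitive assertion ...
    }
}

// Runs only the behaviour tests; an efficiency suite would be declared the
// same way with @IncludeCategory(EfficiencyTests.class).
@RunWith(Categories.class)
@IncludeCategory(BehaviourTests.class)
@SuiteClasses(CalculatorTest.class)
class BehaviourSuite {}
```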
Also, if you're using Maven, you get partitioning of tests into unit and integration tests for free. Check out the Failsafe plugin, which is good for separating the tests you want to run quickly as part of every build from longer-running tests.
I'm currently working on a class dealing with network issues. Using JUnit 3.8.1, and having a hardware device that's not always around to test against, I'd like to conditionally suppress individual tests. Is there a way to achieve this with a simple annotation along the lines of #if(!gatewayAvailable) -> test is suppressed?
Thanks for any pointers, Marcus
There is no such feature in JUnit 3.8.1. You would have to use JUnit 4 and its Assume class.
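A minimal sketch of how that looks in JUnit 4 (gatewayAvailable() is a placeholder for whatever check detects your device):

```java
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class GatewayTest {

    @Test
    public void sendsPacketThroughGateway() {
        // If the assumption is false, JUnit 4 skips the rest of the test
        // instead of reporting it as a failure.
        assumeTrue(gatewayAvailable());

        // ... exercise the real device here ...
    }

    private boolean gatewayAvailable() {
        // Placeholder: e.g. try to open a socket to the gateway and
        // return false if the connection cannot be established.
        return false;
    }
}
```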
I'm in the middle of setting up PMD as a tool for our team to support us in writing better code. Basically I'm building Ant scripts and trying to set up some rules for everyone to use.
But right now I hit this problem:
When I write JUnit tests I don't want to use the same rules I apply to our main source code. I don't care that much about string rules (like string duplicates or weird instantiations) in the JUnit tests.
My questions are:
Is that a fault on my side and should I start writing better JUnit tests?
Should I provide a 2nd set of rules that disables some of the string/design/finalizers rules?
The second option. I don't run PMD against my tests at all, although I could, and PMD provides some JUnit-specific rules. If I did, I would definitely use a separate ruleset for the test code, though. I expect more string literals and more things spelled out explicitly instead of conditionals/loops; after all, I don't want to duplicate the code I am trying to test.
Two things. First, why are you trying to set up your own rules rather than using the existing ones? (Do you have special requirements?) And second, yes, of course unit tests should be of good quality as well. Your unit tests test your production code, so shouldn't they have at least the same quality as your production code?
There is so much written about unit testing, but I have hardly found any books/blogs about integration testing. Could you please suggest something to read on this topic?
What tests should I write when doing integration testing?
What makes a good integration test?
etc.
Thanks
Anything written by Kent Beck, the father of both JUnit and SUnit, is a great place to start (for unit tests / test writing in general). I'm assuming that you don't mean "continuous integration," which is a process-based build approach (very cool, when you get it working).
In my own experience, integration tests look very similar to regular unit tests, simply at a higher level. More mock objects. More state initialization.
I believe that integration tests are like onions. They have layers.
Some people prefer to "integrate" all of their components and test the "whole" product as the "integration" test. You can certainly do this, but I prefer a more incremental approach: if you start low-level and then keep testing at higher composition layers, you will achieve integration testing.
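To make the incremental idea concrete, here is a small sketch (all class names are invented for illustration) that wires two real collaborators together and fakes only the external boundary:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OrderServiceIntegrationTest {

    interface PaymentGateway {                 // the external boundary we fake
        boolean charge(String account, int cents);
    }

    static class PriceCalculator {             // real collaborator
        int totalFor(int items) { return items * 250; }
    }

    static class OrderService {                // real collaborator
        private final PriceCalculator calculator;
        private final PaymentGateway gateway;

        OrderService(PriceCalculator calculator, PaymentGateway gateway) {
            this.calculator = calculator;
            this.gateway = gateway;
        }

        boolean placeOrder(String account, int items) {
            return gateway.charge(account, calculator.totalFor(items));
        }
    }

    @Test
    public void chargesTheCalculatedTotalThroughTheGateway() {
        // Hand-rolled stub: records the amount instead of calling a real gateway.
        final int[] charged = new int[1];
        PaymentGateway stub = (account, cents) -> {
            charged[0] = cents;
            return true;
        };

        // Two real components exercised together; only the boundary is faked.
        OrderService service = new OrderService(new PriceCalculator(), stub);

        assertTrue(service.placeOrder("acct-42", 3));
        assertEquals(750, charged[0]);
    }
}
```

The next composition layer up would replace the stub with a real (test) gateway and repeat the same pattern.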
Maybe it is generally harder to find information on integration testing because it is much more specific to the actual application and its business use. Nevertheless, here's my take on it.
What applies to unit tests also applies to integration tests: modules should have an easy way to mock their external inputs (files, DB, time...), so that they can be tested together with the other unit tests.
But what I've found extremely useful, at least for data-oriented applications, is to be able to create a "console" version of the application that takes input files that fully determine its state (no dependencies on databases, network resources...) and outputs the result as another file. One can then maintain pairs of input / expected-result files and test for regressions as part of nightly builds, for example. Having this console version allows for easier scripting and makes debugging far easier, as one can rely on a very stable environment where it is easy to reproduce bugs and run the debugger.
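As an illustration of that approach, here is a rough golden-file regression test; the directory layout and the runConsoleApp placeholder are assumptions, not part of the original setup:

```java
import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

import org.junit.Test;

// Every *.in file under the cases directory is fed through the console entry
// point and the result is compared to the matching *.expected file.
public class GoldenFileRegressionTest {

    @Test
    public void reproducesExpectedOutputForEveryRecordedCase() throws IOException {
        File casesDir = new File("src/test/resources/cases");   // assumed layout
        File[] inputs = casesDir.listFiles((dir, name) -> name.endsWith(".in"));

        for (File input : inputs) {
            File actual = File.createTempFile("actual-", ".out");
            runConsoleApp(input, actual);

            File expected = new File(casesDir, input.getName().replace(".in", ".expected"));
            assertEquals("Regression in " + input.getName(),
                    new String(Files.readAllBytes(expected.toPath())),
                    new String(Files.readAllBytes(actual.toPath())));
        }
    }

    // Placeholder: a real project would invoke its file-in/file-out "console"
    // entry point here; copying the input keeps this sketch self-contained.
    private void runConsoleApp(File input, File output) throws IOException {
        Files.copy(input.toPath(), output.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
}
```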
J.B. Rainsberger has written about them. Here's a link to an InfoQ article with more info.
http://www.infoq.com/news/2009/04/jbrains-integration-test-scam