Javadoc in JUnit test classes? - junit

Is it a best practice to put Javadoc comments in JUnit test classes and methods? Or is the idea that they should be so easy to read and simple that it is unnecessary to provide a narrative of the test intent?

I use Javadoc in my tests a lot, but it only becomes really useful when you add your own tag to your Javadoc.
The main objective here is to make the test understandable for other developers contributing to your project, and for that we don't even need to generate the actual Javadoc.
/**
 * Create a valid account.
 *
 * @result Account will be persisted without any errors,
 *         and Account.getId() will no longer be <code>null</code>.
 */
@Test
public void createValidAccount() {
    accountService.create(account);
    assertNotNull(account.getId());
}
Next, we need to tell the Maven Javadoc plugin that we added a new tag.
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <version>2.8</version>
      <configuration>
        <tags>
          <tag>
            <name>result</name>
            <placement>a</placement>
            <head>Test assertion :</head>
          </tag>
        </tags>
      </configuration>
    </plugin>
  </plugins>
</build>
And now all that is left to do is call the Maven goal:
mvn javadoc:test-javadoc (or javadoc:test-aggregate for multi-module projects)
This is a fairly simple example, but for more complex tests it becomes impossible to describe the test adequately with a self-descriptive method name alone.

I personally use Javadoc comments sparingly, as I find they increase on-screen clutter. If I can name a class, function, or variable in a more self-descriptive way, I will do that in preference to writing a comment. An excellent book to read on this topic is Clean Code by Robert C. Martin (a.k.a. Uncle Bob).
My personal preference is to make both the class and the methods self-descriptive, i.e.:
class ANewEventManager {

    @Test
    public void shouldAllowClassesToSubscribeToEvents() {
        /* Test logic here */
    }
}
One advantage of this approach is that it is easy to see in the JUnit output what is failing before browsing the code.

This question leads to the eternal holy war of "whether code needs comments or must be self-descriptive".
As mentioned in the accepted answer, many cite Rob Martin as a source of "code should be descriptive and not need a comment" and "do not write Javadocs on any methods other than public APIs". But "Clean Code" isn't a bible of absolute truth. The reasonable, pragmatic answer is "it always depends".
My personal preference is:
When a test is trivial, its name can be self-descriptive and it does not need a doc.
When a test covers some non-trivial scenario, document that scenario in the Javadoc so that other developers can see it in the quick help (Ctrl+Q in IntelliJ IDEA) and read a simple human-language description instead of having to decipher the complex test code.
As mentioned in @Foumpie's answer, Javadocs can be generated as HTML files and shared, e.g. with the QA team, so that they know what is covered by the automated tests and do not duplicate the same work manually.
I often write a Javadoc with the test method's scenario before implementing the test, to have a complete plan of what the test has to do before spending significant time implementing it. For example, see the sketch below.
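For illustration, here is roughly what such an up-front scenario Javadoc can look like; this reuses the @result tag convention from the first answer, and all names are made up for the example:
import org.junit.Test;

public class AccountServiceTest {

    /**
     * Scenario: an account is created with an e-mail address
     * that is already registered.
     *
     * @result Creation of the second account is rejected and nothing is
     *         persisted, so the total number of accounts stays at one.
     */
    @Test
    public void rejectAccountWithDuplicateEmail() {
        // TODO: implement once the scenario above is agreed on
    }
}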

Related

How can I filter a few methods to be analysed for code coverage

I am using Jacoco as the code-coverage plugin, configured inside my pom.xml. I want to test and analyse coverage of only a few methods from my class file and show the coverage percentage for them only. But since Jacoco analyses the whole file, it shows lower coverage, even though the methods concerned are covered 100%.
Is there any way in Jacoco to exclude some methods from being analysed without changing the source code?
That's not possible. Jacoco allows inclusions and exclusions at class level but not at method level.
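For reference, class-level exclusions look roughly like this in the jacoco-maven-plugin configuration; the class patterns below are only placeholders:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- patterns match compiled class files, not individual methods -->
      <exclude>com/example/generated/**/*.class</exclude>
      <exclude>**/SomeLegacyClass.class</exclude>
    </excludes>
  </configuration>
</plugin>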
There is some support for filtering at method level, discussed here. This allows Jacoco to ignore extraneous byte code generated by the Java compiler. On a similar note, Jacoco can also ignore some generated code on the basis of annotations (such as code generated by Lombok).
Although there is currently no way to tell Jacoco (via the Maven plugin, for example) to ignore specific methods, there are some open Jacoco issues related to this:
Filtering options for coverage analysis
Investigate filtering with annotations
You could perhaps vote for those and/or raise another issue for your specific requirements.
It is not clear why you "want to test and analyse coverage of only a few methods from my class file and want to show coverage percentage accordingly for them only."
Maybe you have some code which is not related to the main class? In that case, think about the design. One possible solution is to split your class into a parent and a child, or into a main class and some utilities.
Maybe two developers are working on the same class and each wants to show only their own results?
Maybe some code is hard to test? Try mocking it.

How to take a screenshot on test failure with JUnit 5

Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots; see here for an example.
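As a rough illustration, a JUnit 5 TestWatcher extension combined with java.awt.Robot could look like the sketch below; the class and file names are invented, and it obviously assumes the tests run on a machine with a display:
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Register with @ExtendWith(ScreenshotOnFailure.class) on the base test class.
public class ScreenshotOnFailure implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            ImageIO.write(capture, "png",
                    new File(context.getDisplayName() + ".png"));
        } catch (Exception e) {
            // Screenshots are best-effort; don't mask the original test failure.
            e.printStackTrace();
        }
    }
}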
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you figured that yet, but the main intention of JUnit is to provide you a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking at textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and re-consider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (tests passed/failed, code coverage, code complexity, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, which will generate these reports for you automatically.
You can look here for cobertura-maven-plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet a certain quality bar (in terms of test coverage, code complexity, etc.), the project build (war/ear) fails automatically.

JUnit equivalents for TestNG's @BeforeSuite, @BeforeTest

I'm refactoring some test classes from TestNG to JUnit 4. During the process, I've stumbled upon the following annotations:
@BeforeTest
@AfterTest
According to the manual:
The annotated method will be run before/after any test method belonging to the classes inside the tag is run.
What would be the equivalent annotations in JUnit?
This is the original answer, but I think it is wrong. See below for a better one.
The equivalent would be the annotations @Before and @After.
See also http://junit.sourceforge.net/javadoc/org/junit/Before.html
This is a better answer, after I learned about the difference between @BeforeMethod/@AfterMethod and @BeforeTest/@AfterTest in TestNG.
If I got it right, with @BeforeTest/@AfterTest you can run a method before or after a list of tests that you specify inside the annotation or in a separate document.
There is no out-of-the-box feature like this in JUnit.
Probably the best you can do is put whatever you want to do in a JUnit Rule. See also http://schauderhaft.de/2011/07/24/rules-in-junit-4-9-beta-3/
Then you can use that Rule in any test that needs it.
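As a small illustration, such a Rule could be built on JUnit 4's ExternalResource; the setup and teardown bodies below are placeholders:
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class SomeTest {

    // Use @ClassRule on a static field to run once per class (or once per
    // suite when placed on a Suite class), which is closer to @BeforeTest.
    @Rule
    public ExternalResource resource = new ExternalResource() {
        @Override
        protected void before() {
            // set up shared state before each test using this rule
        }

        @Override
        protected void after() {
            // tear it down afterwards
        }
    };

    @Test
    public void somethingUsingTheSharedState() {
        // test logic here
    }
}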

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that justifies using a DI framework without external configuration, like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to its role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there are always going to be exceptions, but then you just handle the exceptions instead.
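To make the "component role" idea a bit more concrete, here is a hedged Spring-style sketch; the interface, implementations, and profile names are all invented for the example:
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

public interface PaymentGateway {
    void charge(String accountId, long amountInCents);
}

// The implementation used in the regular runtime.
@Service
@Profile("production")
class RealPaymentGateway implements PaymentGateway {
    public void charge(String accountId, long amountInCents) {
        // call the real payment provider here
    }
}

// The implementation used in the stubbed version of the application.
@Service
@Profile("stub")
class StubPaymentGateway implements PaymentGateway {
    public void charge(String accountId, long amountInCents) {
        // record the call, charge nothing
    }
}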
I'm on a path with krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it makes sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mock, so there is no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
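A minimal sketch of that lookup idea, assuming a hand-rolled container where the key class doubles as the default implementation (everything here is illustrative, not the poster's actual framework):
import java.util.HashMap;
import java.util.Map;

class TinyContainer {
    // Only explicit overrides are stored; everything else falls back to the key.
    private final Map<Class<?>, Class<?>> overrides = new HashMap<>();

    <T> void override(Class<T> key, Class<? extends T> impl) {
        overrides.put(key, impl);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> key) throws ReflectiveOperationException {
        // No override configured: instantiate the key class itself as the default.
        Class<?> impl = overrides.getOrDefault(key, key);
        return (T) impl.getDeclaredConstructor().newInstance();
    }
}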
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
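For example, a hedged sketch using Mockito and plain constructor injection; the OrderService and PriceCalculator types are invented for the example:
import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator that is injected through the constructor.
    interface PriceCalculator {
        long priceFor(String sku);
    }

    // Hypothetical class under test; no container involved.
    static class OrderService {
        private final PriceCalculator calculator;
        OrderService(PriceCalculator calculator) { this.calculator = calculator; }
        long totalFor(String sku, int quantity) {
            return calculator.priceFor(sku) * quantity;
        }
    }

    @Test
    public void totalMultipliesUnitPriceByQuantity() {
        PriceCalculator calculator = mock(PriceCalculator.class);
        when(calculator.priceFor("book")).thenReturn(10L);

        OrderService service = new OrderService(calculator);

        assertEquals(30L, service.totalFor("book", 3));
        verify(calculator).priceFor("book");
    }
}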
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need configurability beyond recompiling the code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without recompiling, you'll want something you can configure via text or XML files.
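A small sketch of that approach, with hand-written static factory methods instead of a container (all the types are illustrative):
// Manual wiring via static factory methods: swapping an implementation
// means changing one line here and recompiling.
interface Mailer { void send(String to, String body); }

class SmtpMailer implements Mailer {
    public void send(String to, String body) { /* real SMTP call */ }
}

class ReportService {
    private final Mailer mailer;
    ReportService(Mailer mailer) { this.mailer = mailer; }
    void email(String to) { mailer.send(to, "report body"); }
}

final class Factories {
    private Factories() {}

    static ReportService reportService() {
        return new ReportService(new SmtpMailer());
    }
}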
I'll second the use of DI for testing. I only really consider using DI at the moment for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of components in your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (a set of best practices for building composite applications in .NET). In these best practices, each module "exposes" its implementation types inside a shared container. This way each module has clear responsibility over "who provides the implementation for this interface". I think it will become clear enough once you understand how PRISM works.
When you use inversion of control you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to zip the file and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService ZipService;
    private readonly IEmailService EmailService;

    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        ZipService = zipService;
        EmailService = emailService;
    }

    public void Process(string fileName)
    {
        ZipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        EmailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously it shouldn't, but that's only a lead-up to my point :-)
In addition to not implementing the zip and email logic itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a DI container you can configure which classes implement certain interfaces and then just ask it to create an instance for you; it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's own functionality from the shared functionality, you have also prevented the consumer and the provider from having any explicit knowledge of each other. This makes the code easier to understand because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor, your mocked services will merely assert that the class attempted to send an email rather than actually sending one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));

//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(.................));

//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");

//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So in short, you would want to do it to:
1. Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
2. Make unit testing easier.
Pete

Should @Entity POJOs be tested?

I don't know if I should test my @Entity-annotated POJOs. After all, they mainly contain just generated getters/setters. Should I test them?
When it comes to testing the DAOs I'm using all those entities, so they are already properly tested, I guess?
Thanks for your thoughts.
Matt
Can your code contain any bugs? If not, what's the point in testing it? In fact, trying to test it would just introduce new bugs (because your tests could be wrong).
So the conclusion is: you should not test getters and setters that contain no logic (i.e. those which just assign or read a field without any additional code).
The exception is when you write those getters/setters manually, because you could have made a typo. But even then, some code will use them, and there should be a test for that code which in turn tests whether the getters/setters behave correctly.
The only reason I could think of to write tests would be to test the @Entity mapping itself. But testing the storage and retrieval of values seems like doubting a fundamental ability of our programming environment :)