JUnit Reports -- Test Method Descriptions

I am trying to see if there is a way to include "descriptive text" in my JUnit reports by way of javadocs. JUnit 4 doesn't seem to support the 'description' attribute for the @Test annotation like TestNG does.
So far, from what I have researched, there is only one tool out there called javadoc-junit (http://javadoc-junit.sourceforge.net/). However, I could not get this to work since it seems to be incompatible with JUnit 4.
What I want is some way to provide a sentence or two of text with each of my test methods in the JUnit report. JavaDoc is no good since the target audience would have to switch between the JavaDoc and the JUnit report to see documentation and/or test stats.
Anyone know of anything else I could use with minimal effort?
Best,
Ray J

In JUnit 5 there is a way to annotate every test with a @DisplayName. The declared name can contain text, spaces, special characters, and even emoji.
The declared text on each test is visible to test runners and in test reports.
The Javadoc says:
public @interface DisplayName
@DisplayName is used to declare a custom display name for the annotated test class or test method.
Display names are typically used for test reporting in IDEs and build tools and may contain spaces, special characters, and even emoji.
And the User Guide:
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

@DisplayName("A special test case")
class DisplayNameDemo {

    @Test
    @DisplayName("Custom test name containing spaces")
    void testWithDisplayNameContainingSpaces() {
    }

    @Test
    @DisplayName("╯°□°)╯")
    void testWithDisplayNameContainingSpecialCharacters() {
    }

    @Test
    @DisplayName("😱")
    void testWithDisplayNameContainingEmoji() {
    }
}

There's also a rather recent solution called Allure. It's a Java-based test execution report mainly based on adding supplementary annotations to the code. Existing annotations include:
custom description: @Description("A cool test")
grouping by features or stories: @Features({"feature1", "feature2"}), @Stories({"story1", "story2"})
marking methods executed inside a test case as steps: @Step (works even for private methods)
attachments: @Attachment(name = "Page screenshot", type = "image/png")
See their wiki and example project for more details.
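Roughly, usage looks like the sketch below. This is a minimal, hedged example based on Allure 1.x (the ru.yandex.qatools.allure.annotations package); the test class and step methods are my own illustration, and annotation attribute names may differ slightly in other Allure versions.
import org.junit.Test;
import ru.yandex.qatools.allure.annotations.Attachment;
import ru.yandex.qatools.allure.annotations.Description;
import ru.yandex.qatools.allure.annotations.Features;
import ru.yandex.qatools.allure.annotations.Step;
import ru.yandex.qatools.allure.annotations.Stories;

public class LoginPageTest {

    @Test
    @Description("Checks that a registered user can log in with valid credentials")
    @Features("authentication")
    @Stories("login")
    public void registeredUserCanLogIn() {
        openLoginPage();                      // reported as a step
        submitCredentials("ray", "secret");   // reported as a step
        savePageScreenshot();                 // reported as an attachment
    }

    @Step("Open the login page")
    private void openLoginPage() { /* UI driver calls omitted */ }

    @Step("Submit the login form")
    private void submitCredentials(String user, String password) { /* UI driver calls omitted */ }

    @Attachment(value = "Page screenshot", type = "image/png")
    private byte[] savePageScreenshot() { return new byte[0]; /* placeholder bytes */ }
}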

I don't put javadocs in JUnit tests. I usually make the name of the method descriptive enough so it's as good as or better than any comment I could come up with.
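For instance, here is a minimal illustration of that naming style (plain JUnit 4, exercising only JDK behaviour, so it is runnable as-is); the method name already says everything a javadoc comment would:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class StringTrimTest {

    // The name documents the scenario and the expected outcome,
    // so the report line reads like a sentence.
    @Test
    public void trimmingAStringContainingOnlyWhitespaceReturnsAnEmptyString() {
        assertEquals("", "   \t ".trim());
    }
}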

I could imagine that the Framework for Integrated Tests (FIT) would be a nice and clean solution.
What does FIT do?
FIT is a framework that allows you to write tests via a table in a Word document, a wiki table, or an HTML table.
Every character outside of a table is ignored by FIT, which lets you add documentation, descriptions, requirements, and so on.
What does one of these tables look like?
Imagine a function MyMath.square(int) that squares its input parameter. You have to build a so-called Fixture, which acts as an adapter between MyMath and the following table:
com.example.fixtures.Square
x square()
2 4
5 25
The first column holds the input values, the second the expected results. If the actual result is not equal to the expected one, that cell is marked red.
What does a Fixture look like?
For the given example, this would be the correct fixture:
package com.example.fixtures; // Must be the same as in the first row of the table

public class Square extends fit.ColumnFixture {
    public int x; // Must match the column header in the second row
    public int square() { // Must match the column header in the second row
        return MyMath.square(x);
    }
}
You can probably use FIT for your requirements.
Feel free to comment on my answer or edit your question with more information!

Related

JUnit test cases for a class that generates text content

I have a class named EmailNotificationContentBuilder. As the name suggests, the class is responsible for generating the content of an email notification to be sent after a process ends. The notification basically tells whether the process was successful or not, the start time, the end time, and the statuses of the child processes (in tabular format). I have the following doubts regarding writing JUnit test cases for this class:
Is it required to have a JUnit test for this class, since it only generates textual content?
If yes, then how can I assert the content generated by the class? Some of the content is represented in tabular format.
Do you want to make sure it does what it's supposed to do? If yes, then write a test. If you don't care if the code works fine or not, then don't write one.
This is the most typical thing a unit test does: test that the value returned by a method is correct. Get the String it returns, and check that it's what you expect it to be:
@Test
public void shouldReturnTabularData() {
    EmailNotificationContentBuilder builder = new EmailNotificationContentBuilder();
    String result = builder.build("some input");
    assertEquals("title1\ttitle2\nvalue1\tvalue2", result);
}
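If the expected string gets unwieldy because of the tabular part, you can also split the generated content into lines and assert row by row. A minimal sketch reusing the hypothetical builder.build(...) call from the example above:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class EmailNotificationContentBuilderTest {

    @Test
    public void shouldRenderHeaderAndStatusRows() {
        EmailNotificationContentBuilder builder = new EmailNotificationContentBuilder();

        String content = builder.build("some input");
        String[] rows = content.split("\n");

        // One assertion per row keeps a failure easy to pinpoint.
        assertEquals("title1\ttitle2", rows[0]);
        assertEquals("value1\tvalue2", rows[1]);
    }
}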

Guidelines for writing JUnit test cases for if, loop, and exception

I'm new to JUnit. I want to write test cases for if conditions and loops.
Are there any guidelines or procedures for writing test cases for if and loop conditions?
Can anyone explain with an example?
IF Age < 18 THEN
    WHILE Age <> 18 DO
        Result = Result + 1
        Age = Age + 1
    END
    Print "You can start driving in {Result} years"
ELSE
    Print "You can start driving now!"
ENDIF
You want one test case for each major scenario that your code is supposed to be able to handle. With an "if" statement, there are generally two cases, although you might include a third case which is the "boundary" of the two. With a loop, you might want to include a case where the loop is run multiple times, and also a case where the loop is not run at all.
In your particular example, I would write three test cases - one where the age is less than 18, one where the age is exactly 18, and one where the age is over 18. In JUnit, each test case is a separate method inside a test class. Each test method should run the code that you're testing, in the particular scenario, then assert that the result was correct.
Lastly, you need to consider what to call each test method. I strongly recommend using a sentence that indicates which scenario you're testing, and what you expect to happen. Some people like to begin their test method names with the word "test"; but my experience is that this tends to draw attention away from what CONDITION you're trying to test, and draws attention toward which particular method or function it is that you're testing, and you tend to get lower quality tests as a result. For your example, I would call the test methods something like this.
public void canStartDrivingIfAgeOver18()
public void canStartDrivingIfAgeEquals18()
public void numberOfYearsRemainingIsShownIfAgeUnder18()
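Put together, a minimal sketch of how those three cases might look in JUnit 4. The class and method under test (DrivingAgeChecker.messageFor(int)) are hypothetical names standing in for whatever produces the printed message:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DrivingAgeCheckerTest {

    @Test
    public void canStartDrivingIfAgeOver18() {
        assertEquals("You can start driving now!", DrivingAgeChecker.messageFor(21));
    }

    @Test
    public void canStartDrivingIfAgeEquals18() {
        assertEquals("You can start driving now!", DrivingAgeChecker.messageFor(18));
    }

    @Test
    public void numberOfYearsRemainingIsShownIfAgeUnder18() {
        // a 15-year-old has to wait 3 more years, per the pseudocode above
        assertEquals("You can start driving in 3 years", DrivingAgeChecker.messageFor(15));
    }
}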
From my understanding of writing JUnit tests in Java, you structure the source code into separate methods and pass values as arguments to the method under test from the test cases, so that those values exercise the different branches and the test cases pass.
For example, if you have an age variable in a method functionName(int age), then for testing you pass an integer from the test case, e.g. functionName(18); it steps through the statements and shows you the status of the test case. Create a test class for the class under test and write a test case for each function:
UseClass classObj = new UseClass(); // it should be your class

@Test
public void testValidateAge() {
    String result = classObj.validateAge("20"); // assuming validateAge returns something to check
    assertEquals("expected result here", result);
}

Correct me if I'm wrong :)

Is it OK to have multiple assertions in a unit test when testing complex behavior?

Here is my specific scenario.
I have a class QueryQueue that wraps the QueryTask class within the ArcGIS API for Flex. This enables me to easily queue up multiple query tasks for execution. Calling QueryQueue.execute() iterates through all the tasks in my queue and calls their execute methods.
When all the results have been received and processed, QueryQueue dispatches the completed event. The interface to my class is very simple.
public interface IQueryQueue
{
function get inProgress():Boolean;
function get count():int;
function get completed():ISignal;
function get canceled():ISignal;
function add(query:Query, url:String, token:Object = null):void;
function cancel():void;
function execute():void;
}
For the QueryQueue.execute method to be considered successful several things must occur.
task.execute must be called on each query task once and only once
inProgress = true while the results are pending
inProgress = false when the results have been processed
completed is dispatched when the results have been processed
canceled is never called
The processing done within the queue correctly processes and packages the query results
What I am struggling with is breaking these tests into readable, logical, and maintainable tests.
Logically I am testing one state, that is the successful execution state. This would suggest one unit test that would assert #1 through #6 above are true.
[Test] public function mustReturnQueryQueueEventArgsWithResultsAndNoErrorsWhenAllQueriesAreSuccessful():void
However, the name of the test is not informative as it does not describe all the things that must be true in order to be considered a passing test.
Reading up online (including here and at programmers.stackexchange.com), there is a sizable camp asserting that unit tests should have only one assertion (as a guideline). As a result, when a test fails you know exactly what failed (i.e. inProgress not set to true, completed dispatched multiple times, etc.). You wind up with potentially a lot more (but in theory simpler and clearer) tests, like so:
[Test] public function mustInvokeExecuteForEachQueryTaskWhenQueueIsNotEmpty():void
[Test] public function mustBeInProgressWhenResultsArePending():void
[Test] public function mustNotBeInProgressWhenResultsAreProcessedAndSent():void
[Test] public function mustDispatchTheCompletedEventWhenAllResultsProcessed():void
[Test] public function mustNeverDispatchTheCanceledEventWhenNotCanceled():void
[Test] public function mustReturnQueryQueueEventArgsWithResultsAndNoErrorsWhenAllQueriesAreSuccessful():void
// ... and so on
// ... and so on
This could wind up with a lot of repeated code in the tests, but that could be minimized with appropriate setup and teardown methods.
While this question is similar to other questions I am looking for an answer for this specific scenario as I think it is a good representation of a complex unit testing scenario exhibiting multiple states and behaviors that need to be verified. Many of the other questions have, unfortunately, no examples or the examples do not demonstrate complex state and behavior.
In my opinion, and there will probably be many, there are a couple of things here:
If you must test so many things for one method, then it could mean your code might be doing too much in one single method (Single Responsibility Principle)
If you disagree with the above, then the next thing I would say is that what you are describing is more of an integration/acceptance test. Which allows for multiple asserts, and you have no problems there. But, keep in mind that this might need to be relegated to a separate section of tests if you are doing automated tests (safe versus unsafe tests)
And/or, yes, the preferred method is to test each piece separately, as that is what a unit test is. The closest thing I can suggest, and this comes down to your tolerance for writing code just to have perfect tests, is to check an object against an object (so you would do one assert that essentially tests it all in one). However, the argument against this is that, while it satisfies the one-assert-per-test rule, you still lose expressiveness.
Ultimately, your goal should be to strive towards the ideal (one assert per unit test) by focusing on the SOLID principles, but ultimately you do need to get things done or else there is no real point in writing software (my opinion at least :)).
Let's focus on the tests you have identified first. All except the last one (mustReturnQueryQueueEventArgs...) are good ones, and I could immediately tell what's being tested there (and that's a very good sign, indicating they're descriptive and most likely simple).
The only problem is your last test. Note that extensive use of words like "and", "with", or "or" in a test name usually rings the problem bell. It's not very clear what the test is supposed to do. Returns correct results comes to mind first, and one might argue that's a vague term; it is. However, you'll often find that this is in fact a pretty common requirement, described in detail by the method/operation contract.
In your particular case, I'd simplify the last test to verify whether correct results are returned, and that would be all. You have already tested the states, events, and the steps that lead to building the results, so there is no need to do that again.
Now, the advice in the links you provided is actually quite good, and generally I suggest sticking to it (a single assertion for one test). The question is, what does a single assertion really stand for? One line of code at the end of the test? Let's consider this simple example then:
// a method which updates two fields of our custom entity, MyEntity
public void Update(MyEntity entity)
{
    entity.Name = "some name";
    entity.Value = "some value";
}
This method's contract is to perform those two operations. By success, we understand the entity to be correctly updated. If either of them fails for some reason, the method as a unit is considered to have failed. You can see where this is going; you'll either have two assertions or write a custom comparer purely for testing purposes.
Don't be tricked by "single assertion"; it's not about lines of code or the number of asserts (even though in the majority of tests you'll write this will indeed map 1:1), but about asserting a single unit (in the example above, the update is considered the unit). And a unit might in reality be multiple things that make no sense at all without each other.
And this is exactly what one of the questions you linked quotes (by Roy Osherove):
My guideline is usually that you test one logical CONCEPT per test. you can have multiple asserts on the same object. they will usually be the same concept being tested.
It's all about concept/responsibility; not the number of asserts.
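To make that concrete, here is a minimal sketch of such a test, rendered in Java/JUnit 4 for consistency with the rest of the thread. EntityUpdater is a hypothetical class hosting the Update method above, and MyEntity is assumed to expose public Name/Value fields as in the example:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class EntityUpdaterTest {

    // Two asserts, but a single concept under test:
    // "Update leaves the entity correctly updated".
    @Test
    public void updateSetsBothNameAndValue() {
        MyEntity entity = new MyEntity();

        new EntityUpdater().Update(entity);

        assertEquals("some name", entity.Name);
        assertEquals("some value", entity.Value);
    }
}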
I am not familiar with Flex, but I think I have good experience in unit testing, so you should know that unit testing is a philosophy. So for the first part of the answer: yes, you can have multiple asserts if you are testing the same behavior. The main point in unit testing is always to keep the code very maintainable and simple; otherwise the unit tests will need unit tests to test them! So my advice to you is: if you are new to unit testing, don't use multiple asserts, but if you have good experience with unit testing, you will know when you need to use them.

How can I make JUnit let me set variables in one test case and access them in another if they are in the same class

Let's say I have a test class called MyTest.
In it I have three tests.
public class MyTest {

    AnObject object;

    @Before
    public void setup() {
        object = new AnObject();
        object.setSomeValue(aValue);
    }

    @Test
    public void testMyFirstMethod() {
        object.setAnotherValue(anotherValue);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }

    @Test
    public void testMySecondMethod() {
        AValue val = object.getAnotherValue();
        object.doSomethingElse(val);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }
}
Is there any way I can use the value of anotherValue, which is set with its setter in the first test, in the second test? I am using this to test database functionality. When I create an object in the DB I want to get its GUID so I can use it for updates and deletes in later test methods, without having to hardcode the GUID and thereby making the test irrelevant for future runs.
You are introducing a dependency between two tests. JUnit deliberately does not support dependencies between tests, and you can't guarantee the order of execution (except for test classes in a test suite; see my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?). So if you really want a dependency between two test methods, you can:
use an intermediate static value
as Cedric suggests, use TestNG, which specifically supports dependencies
in this case, create a method that creates the database row and call it from both test methods (see the sketch at the end of this answer)
I would personally prefer 3, because:
I get independent tests, and I can run just the second test (in Eclipse or such like)
In my teardown in the class, I can remove the row from the database as cleanup. This means that whichever test I run, I always start off with the same (known) database state.
However, if your setup is really expensive, you can consider this to be an integration test and just accept the dependency, to save time.
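Here is a minimal sketch of option 3, using a hypothetical CustomerDao purely for illustration; each test creates the row it needs, so neither depends on the other:
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class CustomerDaoTest {

    private final CustomerDao dao = new CustomerDao();

    // Shared creation logic lives in one place instead of leaking state between tests.
    private String createCustomerRow() {
        return dao.insert("Ray J"); // returns the generated GUID
    }

    @Test
    public void insertReturnsAGuid() {
        assertNotNull(createCustomerRow());
    }

    @Test
    public void updateChangesTheStoredName() {
        String guid = createCustomerRow();
        dao.updateName(guid, "Ray");
        assertEquals("Ray", dao.findName(guid));
    }
}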
You should use TestNG if you need this (and I agree it's fairly common in integration testing). TestNG uses the same instance to run your tests, so values stored in fields are preserved between tests, which is very useful when your objects are expensive to create (JUnit forces you to use statics to achieve the same effect, which should be avoided).
First off, make sure your @Test methods run in some kind of defined order,
e.g. @FixMethodOrder(MethodSorters.NAME_ASCENDING).
In the example below, I'm assuming that test2 will run after test1.
To share a variable between them, use a ThreadLocal (from java.lang).
Note that the scope of the ThreadLocal variable is the thread, so if you are running multiple threads, each will have its own copy of 'email' (the static in this case only makes it global to the thread).
private static ThreadLocal<String> email = new ThreadLocal<String>();

@Test
public void test1() {
    email.set("hchan@apache.org");
}

@Test
public void test2() {
    System.out.println(email.get());
}
You should not do that. Tests are supposed to be able to run in any order. If you want to test things that depend on one value in the database, you can set that up in the @Before code, so it's not repeated for each test case.
I have found a nice solution: just add the @Before annotation to the previous test!
private static String email = null;

@Before
@Test
public void test1() {
    email = "test@google.com";
}

@Test
public void test2() {
    System.out.println(email);
}
If you, like me, googled your way here and the answers above didn't help, I'll just leave this: use @BeforeEach.

Interface explosion problem

I am implementing a screen using the MVP pattern. As more features are added to the screen, I keep adding methods to the IScreen/IPresenter interface, so it is becoming bigger and bigger. What should I do to cope with this situation?
There is no specific limit on the number of artifacts (methods, constants, enumerations, etc) - say N - in an interface such that we can say if interface X has more than N artifacts, it is bloated.
At least not without a context. In this case, the context is: what is the interface supposed to provide? Or better yet, what are implementations of this interface supposed to do? What is the intended behavior or role of classes implementing the interface?
I would strongly suggest you get familiar with certain metrics like cohesion and coupling (both in general and in specifics to OO.) In particular, I'd suggest you take a look at LCOM. Once you understand it, it will help you eyeball situations like the one you are encountering now.
http://javaboutique.internet.com/tutorials/coupcoh/
One of the last things you want to do with an interface or class (or even a package or module if you were doing procedural programming) is to turn it into a bag of methods and functions where you throw in everything but the kitchen sink. That leads to either poor cohesion or tight coupling (or both).
One of the problems with interfaces is that we cannot easily compute or estimate their LCOM as one would with actual classes, which could guide you in deciding when to refactor. So for that you have to use a bit of intuition.
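For reference, one common formulation (the Chidamber-Kemerer LCOM, quoted here from memory, so treat it as a rough guide rather than the authoritative definition) works pairwise: for every pair of methods in a class, check whether the two methods share at least one instance field; then LCOM = max(P - Q, 0), where P is the number of pairs sharing no field and Q is the number of pairs sharing at least one. A high value hints that the class bundles unrelated responsibilities, which is exactly the kind of eyeballing you would apply to a would-be implementation of your interface.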
Let's assume your interface is named A for the sake of argument. Then,
Step 1:
Consider grouping the interface methods by arguments: is there a subset of methods that operate on the same type of arguments? If so, are they significantly different from other method groups?
interface A
{
    void method1();
    void method2(someArgType x);
    someOtherType method3();
    ...
    void doSomethingOn(someType t);
    boolean isUnderSomeCondition(someType t);
    someType replaceAndGetPrev(someType t, someFields ...);
}
In such a case, consider splitting that group into its own interface, B.
Step 2:
Once you extract interface B, does it look like this?
interface B
{
    void doSomethingOn(someType t);
    ...
    boolean isUnderSomeCondition(someType t);
    ...
    someType replaceAndGetPrev(someType t, someFields ...);
}
That is, it represents methods that do things on some type?
If so, your interface is mimicking a procedural module operating on an ADT (in this case, someType) - nothing wrong with that if you are using a procedural or multi-paradigm language.
Within reason and while being pragmatic, in OO, you minimize procedures that do things on other objects. You call methods in those objects to do things to themselves on your behalf. Or more precisely, you signal them to do something internally.
In such a case, consider turning B into a class encapsulating the type (and, have it extend an interface with the same signature, but only if it makes sense, if you expect different implementations of artifacts encapsulating/managing elements of that type.)
class Bclass
{
    someType t;

    Bclass() { t = new someType(); }
    ...
    void doSomethingOn();
    ...
    boolean isUnderSomeCondition();
    ...
    someType replaceAndGetPrev(someFields ...);
}
Step 3:
Determine the relationships between the interfaces and classes re-factored out from A.
If B represent things that can only exist when A does (A is a context for B, for example a servlet request exists in a servlet context in Java EE lingo), then have B define a method that returns A (for example A B.getContext() or something like that.)
If B represent things that are managed by A (A being a composite of things, including B), then have A define a method that returns B (B A.getBThingie())
If there is no such relationship between A and B, and they have nothing in common other than they were grouped together, then chances are that the original interface was poorly cohesive.
If you cannot disentangle one from the other without breaking a significant amount of your design, then that's a sign that pieces of your system had poor boundaries and are tightly coupled.
Hope it helps.
P.S. I would also avoid trying to fit your interfaces and classes into traditional patterns UNLESS doing so serves an application/business-specific purpose. I gotta throw that in there just in case. Too many people run amok with the GoF book trying to fit their classes into patterns rather than asking 'what problem am I solving with this?'
In my opinion, a "perfect program world" contains public interfaces and internal implementations.
Each interface is strictly "in charge" of one thing only.
I try to view these entities as "little" human beings which interact with one another in order to complete a certain task.
(sorry if this is a bit of philosophizing)
What flavor of Model-View-Presenter are you using? I've found that Passive View rarely involves overlap between the view and presenter interfaces - normally they change at different times.
Typically the view's interface is essentially a view model, perhaps something like this (C#-style):
public interface IEditCustomerView {
string FirstName { get; set; }
string LastName { get; set; }
string Country { get; set; }
List<Country> AvailableCountries { get; set; }
// etc.
}
The view implementation usually has handlers for user gestures that are usually thin wrappers that call into the presenter:
public class EditCustomerView : IEditCustomerView {
    private IEditCustomerPresenter presenter;

    // The save button's 'click' observer
    protected void SaveCustomer() {
        this.presenter.SaveCustomer();
    }
}
The presenter generally has a method for each user gesture, but none of the data, since it gets that directly from the view (which is generally passed to the presenter in the constructor, though you can pass it on each method call if it's more suitable):
public interface IEditCustomerPresenter {
void Load();
void SaveCustomer();
}
Can you break your interface into sub-interfaces representing sections of the screen? For example, if your screen is divided into groups such as a navigation section, or a form section, or a toolbar section, then your IPresenter/IScreen could have getters for interfaces for those sections, and those sections could contain relevant methods for each section. Your main IPresenter/IScreen would still have methods that are relevant to the whole interface.
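For instance, a minimal sketch of what that slicing might look like; all of the section names and methods here are hypothetical, just to show the shape (in Java for consistency with the rest of the thread):
// Hypothetical section interfaces; each one stays small and cohesive.
interface INavigationSection {
    void showBreadcrumbs(java.util.List<String> path);
}

interface IFormSection {
    void showValidationError(String fieldName, String message);
}

interface IToolbarSection {
    void enableSave(boolean enabled);
}

// The screen interface exposes only whole-screen concerns plus getters for its sections.
interface IScreen {
    void setTitle(String title);
    INavigationSection navigation();
    IFormSection form();
    IToolbarSection toolbar();
}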
If sections of the screen don't work as a logical category for your application, think of other things that might provide a logical breakdown. Workflow would be one.
EDIT For example:
For example, for a large UI which I did, I actually broke up not just my presenter but also my model and view code. The entire screen neatly broke up into a tree (in this case), with the main presenter delegating work to the children presenters and down the chain. When I had to later go back and add to this UI, I found fitting into the hierarchy fairly simple and maintainable.
In an example that works like this, the MainPresenter implementation of IMainPresenter knows about both it's model, it's view, and it's sub-presenters. Each SubPresenter controls its own view and model. Any operations on what logically belongs in that sub-section should be in the SubPresenter. If your screen is laid out in such a way that there are logical units like this, such a set-up should work well. Each SubPresenter should be able to return its SubView for the MainPresenter to plug into the MainView as appropriate.