Should @Entity POJOs be tested? - junit

I don't know if I should test my @Entity-annotated POJOs. After all, they mainly just contain generated getters/setters. Should I test them?
When it comes to testing DAOs I'm using all those entities - so they are already properly tested, I guess?
Thanks for your thoughts.
Matt

Can your code contain any bugs? If not, what's the point in testing it? In fact, trying to test it would just introduce new bugs (because your tests could be wrong).
So the conclusion is: you should not test getters and setters that contain no logic, i.e. those which just assign or read a field without any additional code.
The exception is when you write those getters/setters by hand, because then you could have made a typo. But even then, some code will use them, and there should be a test for that code which in turn verifies that the getters/setters behave correctly.
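To make the distinction concrete, a getter/setter "without code" in an @Entity POJO looks roughly like this (a minimal sketch; the Customer entity and its fields are made up):

import javax.persistence.Entity;
import javax.persistence.Id;

// A made-up trivial entity: every accessor below just reads or assigns a field,
// which is exactly the kind of "getter/setter without code" meant above.
@Entity
public class Customer {

    @Id
    private Long id;

    private String name;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}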

The only reason I could think of to write tests would be to test the @Entity annotation itself. Testing the storage and retrieval of values seems like doubting a fundamental capability of our programming environment :)

Related

Create reusable mocks for model in junit test and spring boot

I am writing JUnit test cases for a Spring Boot application. I have a lot of doubts, which I have listed below.
Is it enough to write unit test cases only for the service layer?
How do I re-use the stubs/mocks created for the models? Each model is created with a lot of dependencies. If we don't reuse them, we will be creating the same objects again and again. If we do reuse them, how do we accommodate the test values for all test cases?
Are there any best practices for creating the stubs?
Do we need to write unit test cases for utility methods?
Do REST controllers need unit test cases?
I'll try to give the best general answers, though keep in mind that a specific case might require a different approach.
No, generally speaking you should test everything that contains logic which might be subject to maintenance, and therefore unwanted changes.
I'm not quite sure what the problem is here. The way I understood it, you want to write the stubs/mocks once and use them for multiple tests; you might use one test class and create the stubs in @Before or @BeforeClass annotated methods (see the sketch below).
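For example, a shared fixture might look roughly like this (a sketch using JUnit 4 and Mockito; a plain java.util.List stands in for one of your model dependencies):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ReusableStubExampleTest {

    // The mocked dependency is recreated before every test,
    // so the stubbing done in setUp() is shared by all test methods.
    private List<String> names;

    @Before
    public void setUp() {
        names = mock(List.class);
        when(names.get(0)).thenReturn("Alice");
    }

    @Test
    public void firstTestReusesTheSharedStub() {
        assertEquals("Alice", names.get(0));
    }

    @Test
    public void secondTestCanAddItsOwnStubbingOnTop() {
        when(names.get(1)).thenReturn("Bob");
        assertEquals("Bob", names.get(1));
    }
}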
Well, a stub is intended to be used in place of a specific method when it is given a specific input. So, first, you should identify what inputs your stubbed method is going to receive and make sure you are passing them along (note: if you provide the wrong inputs, the stub won't match). Second, you need to stub the return object or the answer. You may also need sequential stubbing for cases where the method is called multiple times and different returns are required.
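Sequential stubbing in Mockito looks roughly like this (a sketch; an Iterator is used purely as an example of a method that gets called multiple times):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Iterator;

import org.junit.Test;

public class SequentialStubbingExampleTest {

    @Test
    public void returnsDifferentValuesOnConsecutiveCalls() {
        Iterator<String> iterator = mock(Iterator.class);

        // First call returns "first", second call returns "second",
        // and every call after that keeps returning "second".
        when(iterator.next()).thenReturn("first", "second");

        assertEquals("first", iterator.next());
        assertEquals("second", iterator.next());
        assertEquals("second", iterator.next());
    }
}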
Yes. A maintenance change might alter the behavior of such methods and heavily affect the product, so you should always use JUnit to constrain the logic. Anyway, utility classes should be trivial, and I don't expect them to be difficult for you to test.
Like I already said, if it contains logic then yes, it should. I also seem to remember there are different frameworks for mocking REST calls.
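For instance, in the Spring ecosystem MockMvc can exercise a controller without starting a server (a sketch; HelloController and its mapping are made up and defined inline to keep it self-contained):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

public class HelloControllerTest {

    // A made-up controller, standing in for one of your REST controllers.
    @RestController
    static class HelloController {
        @GetMapping("/hello")
        public String hello() {
            return "hello";
        }
    }

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Standalone setup: no Spring application context is started.
        mockMvc = MockMvcBuilders.standaloneSetup(new HelloController()).build();
    }

    @Test
    public void helloEndpointReturnsOk() throws Exception {
        mockMvc.perform(get("/hello")).andExpect(status().isOk());
    }
}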
Daniele

Bind JUnit test to a class

I was wondering if there is a way to bind a JUnit test class (e.g. AbcTest) to the class Abc such that whenever a method is added to Abc, either the same method stub is added to AbcTest or the test file shows an error. At times the additions are too many.
Enjoy your Holidays!
Cheers!
You'd need a code generator to parse your class Abc and do the generation of AbcTest for you.
You can certainly do this to create empty method skeletons, but I'd question the value of doing so. You still have to fill in the meat of the method; no generator will read your mind as to what an effective test would be.
And part of the value of writing the test - maybe the biggest benefit - is the thought you put into it. A generator would destroy that aspect of unit testing.
Besides, if you're doing test-driven development, shouldn't you be writing the test before you write the method? That'd be a Zen challenge for your generator....
One thing you can do is install moreunit, which has a missing test methods view where you can add any new methods to the test.

Subclassing a test subject for JUnit testing

I want to test validation logic in a legacy class. The class uses a method to load effective dates from a config file.
I have written a subclass of the class in question and overridden the config method so I can run my unit test against the subclass with any combination of effective dates.
Is this an appropriate strategy? It strikes me as a clean technique for testing code that you don't want to mess with.
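A rough sketch of this approach, for illustration (all class and method names here are invented, standing in for the real legacy class):

import static org.junit.Assert.assertTrue;

import java.time.LocalDate;
import java.util.List;

import org.junit.Test;

public class LegacyValidatorTest {

    // Stand-in for the legacy class: validation logic we want to test,
    // plus a method that normally reads effective dates from a config file.
    static class LegacyValidator {
        public boolean isEffective(LocalDate date) {
            return loadEffectiveDates().contains(date);
        }

        protected List<LocalDate> loadEffectiveDates() {
            throw new UnsupportedOperationException("reads a config file in production");
        }
    }

    // Test-only subclass: overrides the config lookup so the test
    // can supply any combination of effective dates.
    static class TestableValidator extends LegacyValidator {
        private final List<LocalDate> dates;

        TestableValidator(List<LocalDate> dates) {
            this.dates = dates;
        }

        @Override
        protected List<LocalDate> loadEffectiveDates() {
            return dates;
        }
    }

    @Test
    public void acceptsAConfiguredEffectiveDate() {
        LocalDate date = LocalDate.of(2020, 1, 1);
        LegacyValidator validator = new TestableValidator(List.of(date));

        assertTrue(validator.isEffective(date));
    }
}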
I like it; it's the simplest and most straightforward way to get this done. And since it is a legacy class, it will not change any more, so you don't run the risk of bumping into the fragile base class problem either.
It seems to be an appropriate strategy to me. Of course, with this override you won't be able to test the code (in the original class) that loads the config data, but if you have other tests to cover that scenario then I think the approach you outlined is fine.

Should I change code to make it more testable?

I often find myself changing my code to make it more testable, and I always wonder whether this is a good idea or not. Some of the things I find myself doing are:
Adding setters just so I can set an internal object to a mock.
Adding getters for internal maps/lists so I can check the internal state of the object has changed after performing some external action.
Wrapping concrete system classes and creating a new interface so I can mock them. For example, File classes can be hard to mock, so I'll create a new interface FileInterface and a WrappedFile which implements it, and then use FileInterface instead of File (sketched below).
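Point 3 might look roughly like this (a sketch using the question's FileInterface/WrappedFile names; the method set is just illustrative):

import java.io.File;

// A minimal version of the interface described in point 3. Only the operations
// the production code actually needs are exposed, which keeps it easy to mock.
interface FileInterface {
    boolean exists();
    String getName();
    long length();
}

// Thin wrapper that delegates to java.io.File. Production code depends on
// FileInterface, so tests can substitute a mock or a fake instead of a real file.
class WrappedFile implements FileInterface {

    private final File file;

    WrappedFile(File file) {
        this.file = file;
    }

    @Override
    public boolean exists() {
        return file.exists();
    }

    @Override
    public String getName() {
        return file.getName();
    }

    @Override
    public long length() {
        return file.length();
    }
}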
Changing your code to make it more testable can be a good thing, but only if it makes your code itself better. Refactoring for testability can make your code better independent of the test suite's needs. Those are good changes.
Of your three examples, only #3 is a really good one; often those new interfaces will make your code more flexible for regular use later. #1 is usually addressed for testing via dependency injection, which in my mind makes code needlessly more complicated, but it does at least make it easier to test. #2 sounds like a bad idea in general.
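For reference, the dependency-injection route mentioned here usually means passing the collaborator in through the constructor instead of adding a setter. A rough sketch (ReportService is made up, and java.time.Clock plays the role of the dependency):

import static org.junit.Assert.assertEquals;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

import org.junit.Test;

public class ConstructorInjectionExampleTest {

    // Hypothetical class under test: the Clock is supplied from outside,
    // so no setter is needed just to swap in a test double.
    static class ReportService {
        private final Clock clock;

        ReportService(Clock clock) {
            this.clock = clock;
        }

        String header() {
            return "Report generated at " + clock.instant();
        }
    }

    @Test
    public void headerUsesTheInjectedClock() {
        ReportService service = new ReportService(Clock.fixed(Instant.EPOCH, ZoneOffset.UTC));

        assertEquals("Report generated at 1970-01-01T00:00:00Z", service.header());
    }
}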
It is perfectly fine and even recommended to change your code to make it more testable. Here is a list of 10 things that make code hard to test.
I think your third point is really OK, but I'm not too fond of the first and the second. If you just open up your class internals with getters and setters, you're giving up encapsulation completely. Depending on your language, there are ways to open the visibility of some members to tests. But what I actually do (which opens up encapsulation a little less) is to make the fields I want to check protected (when dependency injection doesn't solve the problem).
Then, in the test project, I inherit the class and create a "more powerful" one where I can check the internals, but I change nothing in the implementation, and I use this class in the tests.
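Roughly what that looks like (a sketch; ShoppingCart and its field are invented):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class ProtectedFieldAccessExampleTest {

    // Hypothetical production class: the internal list is protected
    // instead of private, but there is still no public getter.
    static class ShoppingCart {
        protected final List<String> items = new ArrayList<>();

        public void add(String item) {
            items.add(item);
        }
    }

    // "More powerful" subclass used only by tests: it exposes the internals
    // without changing any behavior of the original class.
    static class InspectableShoppingCart extends ShoppingCart {
        List<String> internalItems() {
            return items;
        }
    }

    @Test
    public void addPutsTheItemInTheInternalList() {
        InspectableShoppingCart cart = new InspectableShoppingCart();

        cart.add("book");

        assertEquals(List.of("book"), cart.internalItems());
    }
}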
Finally, changing your code to have dependency injection and inversion of control is also highly recommended, as it makes your code easier to test AND more readable and maintainable.
Though changing is OK, the best thing to do is TDD. It produces testable code naturally, since the tests are written first.
It's a trade-off: you want either a slim, streamlined API, or a more bloated and complicated but more easily tested one.
Not the answer you wanted to hear, I know :)
Seems reasonable. Some things don't need checking, though; I'd suggest that checking whether adding to a list worked is a little useless. But do whatever you feel comfortable with.
Ideally, every class you design should test itself, so you don't need to change the public interface. But you are talking about legacy code, so I think changing the code is reasonable only when the impact on the public interface isn't very noticeable. I would prefer to add a static inner test class rather than bloat the interface of the tested class.

Test cases, "when", "what", and "why"?

I'm new to test-based development, and this question has been bugging me. How much is too much? What should be tested, how should it be tested, and why should it be tested? The examples given are in C# with NUnit, but I assume the question itself is language agnostic.
Here are two current examples of my own, tests on a generic list object (being tested with strings, the initialisation function adds three items {"Foo", "Bar", "Baz"}):
[Test]
public void CountChanging()
{
    Assert.That(_list.Count, Is.EqualTo(3));
    _list.Add("Qux");
    Assert.That(_list.Count, Is.EqualTo(4));
    _list[7] = "Quuuux";
    Assert.That(_list.Count, Is.EqualTo(8));
    _list.Remove("Quuuux");
    Assert.That(_list.Count, Is.EqualTo(7));
}

[Test]
public void ContainsItem()
{
    Assert.That(_list.Contains("Qux"), Is.EqualTo(false));
    _list.Add("Qux");
    Assert.That(_list.Contains("Qux"), Is.EqualTo(true));
    _list.Remove("Qux");
    Assert.That(_list.Contains("Qux"), Is.EqualTo(false));
}
The code is fairly self-commenting, so I won't go into what's happening, but is this sort of thing taking it too far? Add() and Remove() are tested separately, of course, so what level should I go to with these sorts of tests? Should I even have these sorts of tests?
I would say that what you're actually testing are equivalence classes. In my view, there is no difference between adding to a list that has 3 items or 7 items. However, there is a difference between 0 items, 1 item and more than 1 item. I would probably have three tests each for the Add/Remove methods to cover these cases initially.
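Since the question calls itself language agnostic, here is roughly what those equivalence classes could look like as JUnit tests (a sketch; a plain ArrayList stands in for the list under test, and Remove would get a mirror set of tests):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class AddEquivalenceClassTest {

    // One test per equivalence class for Add: empty list, one item, more than one item.

    @Test
    public void addToEmptyList() {
        List<String> list = new ArrayList<>();
        list.add("Foo");
        assertEquals(1, list.size());
    }

    @Test
    public void addToListWithOneItem() {
        List<String> list = new ArrayList<>(List.of("Foo"));
        list.add("Bar");
        assertEquals(2, list.size());
    }

    @Test
    public void addToListWithSeveralItems() {
        List<String> list = new ArrayList<>(List.of("Foo", "Bar", "Baz"));
        list.add("Qux");
        assertEquals(4, list.size());
    }
}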
Once bugs start coming in from QA/users, I would add each such bug report as a test case: see the bug reproduced by getting a red bar, then fix the bug to get a green bar. Each such 'bug-detecting' test is there to stay; it is my safety net (read: regression test) so that even if I make the same mistake again, I will get instant feedback.
Think of your tests as a specification. If your system can break (or have material bugs) without your tests failing, then you don't have enough test coverage. If one single point of failure causes many tests to break, you probably have too much (or are too tightly coupled).
This is really hard to define in an objective way. I suppose I'd say err on the side of testing too much. Then when tests start to annoy you, those are the particular tests to refactor/repurpose (because they are too brittle, or test the wrong thing, and their failures aren't useful).
A few tips:
Each test case should only test one thing. That means the structure of the test case should be "setup", "execute", "assert". In your examples, you mix these phases; try splitting your test methods up. That makes it easier to see exactly what you are testing.
Try giving your test methods names that describe what they are testing. For example, the three test cases contained in your ContainsItem() become containsReportsFalseIfTheItemHasNotBeenAdded(), containsReportsTrueIfTheItemHasBeenAdded() and containsReportsFalseIfTheItemHasBeenAddedThenRemoved() (see the sketch after these tips). I find that forcing myself to come up with a descriptive name like that helps me conceptualize what I have to test before I write the actual test.
If you do TDD, you should write your tests first and only add code to your implementation when you have a failing test. Even if you don't actually do this, it will give you an idea of how many tests are enough. Alternatively, use a coverage tool. For a simple class like a container, you should aim for 100% coverage.
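Applying the first two tips to the ContainsItem() example might look like this (sketched in JUnit here, since the question says it is language agnostic; a plain ArrayList stands in for the custom list):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ContainsItemTest {

    private List<String> list;

    @Before
    public void setUp() {
        // Shared setup phase: the three initial items from the question.
        list = new ArrayList<>(List.of("Foo", "Bar", "Baz"));
    }

    @Test
    public void containsReportsFalseIfTheItemHasNotBeenAdded() {
        assertFalse(list.contains("Qux"));
    }

    @Test
    public void containsReportsTrueIfTheItemHasBeenAdded() {
        list.add("Qux");                  // setup specific to this case
        assertTrue(list.contains("Qux")); // execute and assert
    }

    @Test
    public void containsReportsFalseIfTheItemHasBeenAddedThenRemoved() {
        list.add("Qux");
        list.remove("Qux");
        assertFalse(list.contains("Qux"));
    }
}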
Is _list an instance of a class you wrote? If so, I'd say testing it is reasonable. Though in that case, why are you building a custom List class?
If it's not code you wrote, don't test it unless you suspect it's in some way buggy.
I try to test code that's independent and modular. If there's some sort of God function in code I have to maintain, I strip out as much of it as possible into sub-functions and test them independently. Then the God function can be written to be "obviously correct": no branches, no logic, just passing results from one well-tested sub-function to another.