I am writing JUnit test cases for a Spring Boot application. I have a lot of doubts, which I have listed below.
Is it enough to write unit test cases only for the service layer?
How do I reuse the stubs/mocks created for the models? Each model is created with a lot of dependencies; if we don't reuse them, we will be creating the same objects again and again. If we do reuse them, how do we accommodate the test values for all test cases?
Are there any best practices for creating stubs?
Do we need to write unit test cases for utility methods?
Do REST controllers need unit test cases?
I'll try to give the best general answers, though keep in mind that a specific case might require a different approach.
No; generally speaking, you should test everything that contains logic which might be subject to maintenance, and therefore to unwanted changes.
I'm not quite sure what the problem is here. The way I understand it, you want to write the stubs/mocks once and use them across multiple tests; you might use one test class and create the stubs in methods annotated with @Before or @BeforeClass.
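As a minimal sketch of that idea, assuming Mockito and purely hypothetical model/service names (Customer, OrderRepository, OrderService, Order):

import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;

public class OrderServiceTest {

    private Customer customer;      // shared stubbed model
    private OrderRepository repo;   // shared mocked dependency
    private OrderService service;   // class under test

    @Before
    public void setUp() {
        // Build the object graph once per test method; every @Test starts
        // from this same baseline instead of re-creating it by hand.
        customer = mock(Customer.class);
        when(customer.getId()).thenReturn(42L);

        repo = mock(OrderRepository.class);
        service = new OrderService(repo);
    }

    @Test
    public void createsOrderForExistingCustomer() {
        // Test-specific values are stubbed inside the test itself,
        // so the shared fixture stays generic.
        when(repo.countByCustomer(42L)).thenReturn(0);

        service.createOrder(customer);

        verify(repo).save(any(Order.class));
    }
}

Another common variant is a small test-data factory ("object mother") class whose static methods return pre-wired model objects, with parameters for the few values an individual test needs to change.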
Well, a stub stands in for a specific method when it is given a specific input. So, first, identify the inputs your stubbed method is going to receive and make sure you are passing them along (note: if you provide the wrong inputs, the stub won't match). Second, stub the return object or the answer. You might also need sequential stubbing for cases where the method is called multiple times and different returns are required.
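As an illustration (again assuming Mockito; RateProvider is a made-up collaborator):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class SequentialStubbingExample {

    // Hypothetical collaborator, used only for illustration.
    interface RateProvider { double getRate(String currency); }

    @Test
    public void stubsMatchTheirInputsAndCanBeSequential() {
        RateProvider provider = mock(RateProvider.class);

        // The stub only fires when the argument matches what was stubbed.
        when(provider.getRate("EUR")).thenReturn(1.08);

        // Sequential stubbing: consecutive calls get consecutive answers.
        when(provider.getRate("USD"))
                .thenReturn(1.00)                                   // first call
                .thenReturn(1.02)                                   // second call
                .thenThrow(new IllegalStateException("feed down")); // third call
    }
}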
Yes. A maintenance change might alter the behavior of such methods and heavily affect the product, so you should always use JUnit to constrain the logic. That said, utility classes should be trivial, and I don't expect them to be difficult for you to test.
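For a typical static utility method the test is usually just a handful of assertions; a tiny sketch, with SlugUtils.toSlug() standing in for whatever your utility actually does:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class SlugUtilsTest {

    @Test
    public void lowercasesAndReplacesSpaces() {
        assertEquals("hello-world", SlugUtils.toSlug("Hello World"));
    }

    @Test
    public void handlesEmptyInput() {
        assertEquals("", SlugUtils.toSlug(""));
    }
}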
As I already said: if it contains logic, then yes, it should be tested. And as far as I remember, there are several frameworks for mocking REST calls.
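One widely used option in a Spring Boot project is Spring's MockMvc. A minimal sketch in standalone mode, where GreetingController and GreetingService are hypothetical names:

import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

public class GreetingControllerTest {

    private GreetingService greetingService;   // hypothetical service dependency
    private MockMvc mockMvc;

    @Before
    public void setUp() {
        greetingService = Mockito.mock(GreetingService.class);
        // Standalone setup: only the controller under test, no full Spring context.
        mockMvc = MockMvcBuilders
                .standaloneSetup(new GreetingController(greetingService))
                .build();
    }

    @Test
    public void returnsGreeting() throws Exception {
        when(greetingService.greet("Bob")).thenReturn("Hello, Bob");

        mockMvc.perform(get("/greeting").param("name", "Bob"))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello, Bob"));
    }
}

With Spring Boot you can also lean on @WebMvcTest to get a sliced application context instead of the standalone builder.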
Daniele
I was wondering if there is a way to bind a JUnit test class (e.g. AbcTest) to the class Abc, such that whenever a method is added to Abc, either a matching method stub is added to AbcTest or the test file shows an error. At times the additions are too many.
Enjoy your Holidays!
Cheers!
You'd need a code generator that parses your class Abc and generates AbcTest for you.
You can certainly do this to create empty method skeletons, but I'd question the value of doing so. You still have to fill in the meat of the method; no generator will read your mind as to what an effective test would be.
And part of the value of writing the test - maybe the biggest benefit - is the thought you put into it. A generator would destroy that aspect of unit testing.
Besides, if you're doing test-driven development, shouldn't you be writing the test before you write the method? That'd be a Zen challenge for your generator....
One thing you can do is install moreunit, which has a "missing test methods" view from which you can add any new methods to the test.
I want to test validation logic in a legacy class. The class uses a method to load effective dates from a config file.
I have written a subclass of the class in question and overridden the config method so I can run my unit test against the subclass with any combination of effective dates.
Is this an appropriate strategy? It strikes me as a clean technique for testing code that you don't want to mess with.
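A minimal sketch of the subclass-and-override idea described above, with hypothetical stand-in names (LegacyValidator, loadEffectiveDates(), isEffectiveOn()) in place of the real legacy class:

import static org.junit.Assert.assertTrue;

import java.time.LocalDate;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class LegacyValidatorTest {

    // Test-specific subclass: overrides only the config-loading hook so the
    // validation logic can be exercised with any combination of dates.
    private static class TestableValidator extends LegacyValidator {
        private final List<LocalDate> dates;

        TestableValidator(LocalDate... dates) {
            this.dates = Arrays.asList(dates);
        }

        @Override
        protected List<LocalDate> loadEffectiveDates() {
            return dates;   // no config file involved
        }
    }

    @Test
    public void acceptsDateInsideEffectiveRange() {
        LegacyValidator validator = new TestableValidator(LocalDate.of(2024, 1, 1));
        assertTrue(validator.isEffectiveOn(LocalDate.of(2024, 1, 1)));
    }
}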
I like it; it's the simplest and most straightforward way to get this done. And since it is a legacy class, it will not change any more, so you don't run the danger of bumping into the fragile base class problem either.
It seems an appropriate strategy to me. Of course, with this override you won't be able to test the code (in the original class) that loads the config data, but if you have other tests to cover this scenario then I think the approach you outlined is fine.
I'm trying to migrate to JUnit 4 and I'm not clear about the correct way to set up test suites.
I know how to set up a test suite with fixed tests using the @SuiteClasses annotation.
However, I want to have a top-level suite class where I can programmatically decide which test classes or suites to load. I know that there are addTest and addTestSuite operations in the TestSuite class.
However, if I define a TestSuite subclass with a constructor that attempts to add these tests and try to run it, I get an error "Must have SuiteClasses annotation".
Any idea how to do this?
I would recommend creating a subclass of BlockJUnit4ClassRunner and pulling in the classes you want to test manually. The protected methods of that class do all the hard work for you, although you might want to tweak the Descriptions a bit to make sure the results are all unique in the output files.
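A related option (not the BlockJUnit4ClassRunner route suggested above, but in the same spirit) is to subclass Suite, whose protected constructor accepts an explicit array of test classes. A rough sketch, where FastTests and SlowTests are placeholder test classes:

// DynamicSuite.java
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;

public class DynamicSuite extends Suite {

    public DynamicSuite(Class<?> klass) throws InitializationError {
        super(klass, pickTestClasses());
    }

    // Decide programmatically which test classes the suite should run.
    private static Class<?>[] pickTestClasses() {
        if (Boolean.getBoolean("runSlowTests")) {
            return new Class<?>[] { FastTests.class, SlowTests.class };
        }
        return new Class<?>[] { FastTests.class };
    }
}

// TopLevelSuite.java - the top-level suite just delegates to the custom runner.
import org.junit.runner.RunWith;

@RunWith(DynamicSuite.class)
public class TopLevelSuite { }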
I'm a big fan of TDD and use it for the vast majority of my development these days. One situation I run into somewhat frequently, though, and have never found what I thought was a "good" answer for, is something like the following (contrived) example.
Suppose I have an interface, like this (writing in Java, but really, this applies to any OO language):
public interface PathFinder {
GraphNode[] getShortestPath(GraphNode start, GraphNode goal);
int getShortestPathLength(GraphNode start, GraphNode goal);
}
Now, suppose I want to create three implementations of this interface. Let's call them DijkstraPathFinder, DepthFirstPathFinder, and AStarPathFinder.
The question is, how do I develop these three implementations using TDD? Their public interface is going to be the same, and, presumably, I would write the same tests for each, since the results of getShortestPath() and getShortestPathLength() should be consistent among all three implementations.
My choices seem to be:
Write one set of tests against PathFinder as I code the first implementation. Then write the other two implementations "blind" and make sure they pass the PathFinder tests. This doesn't seem right because I'm not using TDD to develop the second two implementation classes.
Develop each implementation class in a test-first manner. This doesn't seem right because I would be writing the same tests for each class.
Combine the two techniques above; now I have a set of tests against the interface and a set of tests against each implementation class, which is nice, but the tests are all the same, which isn't nice.
This seems like a fairly common situation, especially when implementing a Strategy pattern, and of course the differences between implementations might be more than just time complexity. How do others handle this situation? Is there a pattern for test-first development against an interface that I'm not aware of?
You write interface tests to exercise the interface, and you write more detailed tests for the actual implementations. Interface-based design talks a bit about the fact that your unit tests should form a kind of "contract" specification for that interface. Maybe when Spec# comes out, there'll be a language-supported way to do this.
In this particular case, which is a strict strategy implementation, the interface tests are enough. In other cases, where an interface is a subset of the implementation's functionality, you would have tests for both the interface and the implementation. Think of a class which implements 3 interfaces, for example.
EDIT: This is useful so that when you add another implementation of the interface down the road, you already have tests for verifying that the class implements the contract of the interface correctly. This can work for something as specific as ISortingStrategy to something as wide-ranging as IDisposable.
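In JUnit, one common way to express such a reusable interface contract (an assumption about how you might structure it, not something from the question) is an abstract test class with factory methods, plus one small subclass per implementation:

// PathFinderContractTest.java - shared expectations for every PathFinder
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public abstract class PathFinderContractTest {

    // Each implementation's test class supplies the instance under test and
    // the fixture nodes; both helpers are assumptions about your fixtures.
    protected abstract PathFinder createPathFinder();
    protected abstract GraphNode createIsolatedNode();

    @Test
    public void pathFromANodeToItselfIsTrivial() {
        PathFinder finder = createPathFinder();
        GraphNode node = createIsolatedNode();

        // Pins down one contract decision: the trivial path is the single
        // node itself, with length 0.
        assertEquals(0, finder.getShortestPathLength(node, node));
        assertArrayEquals(new GraphNode[] { node },
                          finder.getShortestPath(node, node));
    }

    // ... further contract tests over a small shared fixture graph ...
}

// DijkstraPathFinderTest.java - only the factory methods differ per implementation
public class DijkstraPathFinderTest extends PathFinderContractTest {
    @Override
    protected PathFinder createPathFinder() {
        return new DijkstraPathFinder();
    }

    @Override
    protected GraphNode createIsolatedNode() {
        return new GraphNode();   // assumes GraphNode has a no-arg constructor
    }
}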
There is nothing wrong with writing tests against the interface and reusing them for each implementation. For example:
public class TestPathFinder : TestClass
{
    // The implementation under test and its fixture nodes are injected, so the
    // same test methods run against every IPathFinder implementation.
    public IPathFinder _pathFinder;
    public IGraphNode _startNode;
    public IGraphNode _goalNode;

    public TestPathFinder() : this(null, null, null) { }

    public TestPathFinder(IPathFinder ipf,
                          IGraphNode start, IGraphNode goal) : base()
    {
        _pathFinder = ipf;
        _startNode = start;
        _goalNode = goal;
    }
}
TestPathFinder tpfDijkstra = new TestPathFinder(
    new DijkstraPathFinder(), n1, nN);
tpfDijkstra.RunTests();
// etc. - factory optional
I would argue that this is the least effort solution, which is very much in line with Agile/TDD principles.
I would have no problem going with option 1. Keep in mind that refactoring is part of TDD, and it's usually during a refactoring phase that you move to a design pattern such as Strategy, so I wouldn't feel bad about doing that without writing new tests.
If you wanted to test the implementation-specific details of each PathFinder impl, you might consider passing mock GraphNodes which are somehow capable of helping to assert the Dijkstra-ness or DepthFirst-ness, etc, of the implementation. (Perhaps these mock GraphNodes could record how they are traversed, or somehow measure performance.) Maybe this is testing overkill, but then again if you know your system needs these three distinct strategies for some reason, it'd probably be good to have tests to demonstrate why - otherwise why not just pick one implementation and throw the others away?
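If you did want to go that route, a recording test double is one way to do it; a sketch under the assumption that GraphNode is (or can be hidden behind) an interface exposing a getNeighbors() method, which the question doesn't actually show:

import java.util.ArrayList;
import java.util.List;

// Hypothetical: assumes a GraphNode interface with getNeighbors().
public class RecordingGraphNode implements GraphNode {

    private final List<GraphNode> neighbors = new ArrayList<>();
    private int timesExpanded = 0;   // how often a path finder asked for neighbors

    public void addNeighbor(GraphNode neighbor) {
        neighbors.add(neighbor);
    }

    @Override
    public List<GraphNode> getNeighbors() {
        timesExpanded++;             // record the traversal
        return neighbors;
    }

    public int getTimesExpanded() {
        return timesExpanded;
    }
}

A test could then assert, for example, that a depth-first search expands nodes in a different order, or a different number of times, than Dijkstra does on the same fixture graph.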
I don't mind reusing test code as a template for new tests that have similar functionality. Depending on the particular class under test, you may have to rework them with different mock objects and expectations. At the least you'll have to refactor them to use the new implementation. I would follow the TDD method, though, of taking one test, reworking it for the new class, then writing just the code to pass that test. This may take even more discipline, though, since you already have one implementation under your belt and will undoubtedly be influenced by code you have already written.
"This doesn't seem right because I'm not using TDD to develop the second two implementation classes."
Sure you are.
Start by commenting out all the tests but one. As you make each test pass, either refactor or uncomment another test.
Jtf