Query regarding Junit testing [duplicate] - junit

I am very new to JUnit. I was reading the Spring MVC tutorial at http://docs.spring.io/spring/articles/2007/Spring-MVC-step-by-step/part1.html and found the following test class:
package springapp.web;

import org.springframework.web.servlet.ModelAndView;
import springapp.web.HelloController;
import junit.framework.TestCase;

public class HelloControllerTests extends TestCase {

    public void testHandleRequestView() throws Exception {
        HelloController controller = new HelloController();
        ModelAndView modelAndView = controller.handleRequest(null, null);
        assertEquals("hello.jsp", modelAndView.getViewName());
    }
}
I cannot understand why I need to use JUnit's TestCase as an extra burden when I can check the same thing by creating a simple test class:
public class TestStub {

    public static void main(String[] args) {
        HelloController controller = new HelloController();
        ModelAndView modelAndView = controller.handleRequest(null, null);
        if (modelAndView.getViewName().equals("hello.jsp")) {
            ...
        }
    }
}
Again, please keep in mind that I am a beginner.

You could of course create such a class with a main method and apply your own checks and decide if the test is successful or not.
Then you'd be adding more classes, and more tests and things will get a bit messier. Maybe you'd like to run all tests and see their status as a whole, not stopping at the first one that failed. Maybe you'd like to be able to rerun only the tests that failed. Maybe you'd like to run some config section before all of the test methods in a class, and the list can go on.
At this point you'd start tinkering around, trying to extract the common reusable parts into some sort of framework for your needs, which is what JUnit, TestNG, etc. already are. I guess this sounds a bit like reinventing the wheel.
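To make that concrete, here is roughly what the question's check looks like as a JUnit 4 test (a minimal sketch; HelloController and the expected view name come from the question above, the class name here is just illustrative):

import org.junit.Before;
import org.junit.Test;
import org.springframework.web.servlet.ModelAndView;
import springapp.web.HelloController;
import static org.junit.Assert.assertEquals;

public class HelloControllerJUnit4Tests {

    private HelloController controller;

    // Runs before every test method: the "config section" you would
    // otherwise hand-roll in main()
    @Before
    public void setUp() {
        controller = new HelloController();
    }

    @Test
    public void handleRequestReturnsHelloView() throws Exception {
        ModelAndView modelAndView = controller.handleRequest(null, null);
        // On failure this reports expected vs. actual instead of
        // silently branching in an if statement
        assertEquals("hello.jsp", modelAndView.getViewName());
    }
}

A runner (your IDE, Maven, or a CI server) discovers every @Test method, runs them all even when some fail, and reports the results together; that is exactly the bookkeeping the main-method approach forces you to write by hand.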
These frameworks have been around for some time, so they have been thoroughly tested, and they integrate nicely with IDEs and continuous integration tools. Also, many of the communities developing widely used frameworks, such as Spring, have put a lot of work into facilitating integration with test frameworks (custom context-aware runners, mock builders, etc.), basically making your life easier.
Test classes are "code" as well, and they also have to be clean and well maintained. Using a known test framework makes it easier for team members to understand what you were trying to express, because it "enforces" a standardised way of doing it. This still requires everyone to learn about it, but you have higher chances of reusing that knowledge when switching workplaces, than if you'd be using your own.
I'm sure there are more reasons, but these came to mind as I was reading your question. Stefan has also made a good point: a lot has changed since 2007, and this evolution has made things simpler with both Spring and JUnit 4.

Related

Should EasyMock be used to mock only external services?

I have a question concerning the use of EasyMock in JUnit tests. We have configured a framework for our JUnit tests that uses an in-memory Derby database and EasyMock to test our service project. We use in-memory Derby for the DAO layer completely. The question is whether to use EasyMock exclusively in the service layer, or EasyMock and Derby together. Below is the scenario:
// class under test is in the user-service project
class ServiceClassUnderTest {

    IUserService userService;
    IAddressService addressService;

    public Address getUsersAddress(String id) {
        User user = userService.getUserById(id);
        // some logic goes here
        Address address = addressService.getAddressByUser(user);
        // some validations go here
        return address;
    }
}
The class under test is in the user-service project, and so is the IUserService interface, while the IAddressService interface lives in the address-service project, which the user-service project uses as a dependency.
Now the problem is in the change of approach suggested by some colleagues.
Approach we used to follow:
Prepare test data for userService, as it is in the same project, and mock addressService, as it is part of a dependency project and we might not know much about its behaviour and table structure.
Advantage: cleaner approach, as we have minimal mocking code and the test data lives in separate SQL files.
Suggested approach:
Mock all services, regardless of whether they are in the same project or in a dependency project.
Disadvantage: more mocking code than actual test code, which makes the tests harder to maintain and compromises readability.
The code example is only there to explain the scenario; in the real project we have a much more complex structure, with several service beans in a single class.
Could you please give me your suggestions on which approach is better, and why, considering the arguments I have provided for both approaches?
A definitive answer is hard without having the complete big picture. Assuming you really want unit tests, I usually do this:
Test only the query done to the DB with an actual DB
Mock everything that is used by my tested class.
This "everything" should be no more than 3 or 4 dependencies. Otherwise, I will refactor until I get something that is readable.
Having more test code than production code is normal.
If I end up having trivial code in my tested method, I just don't test it. However, a test can also be used to document. So this is a blurry line.
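For illustration, here is a rough EasyMock sketch of the "mock everything" option from the question. It assumes ServiceClassUnderTest is given a constructor that accepts both services (the snippet above does not show one) and that User and Address have no-arg constructors; adjust to your real wiring:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertSame;
import org.junit.Test;

public class ServiceClassUnderTestTest {

    @Test
    public void getUsersAddressDelegatesToBothServices() {
        IUserService userService = createMock(IUserService.class);
        IAddressService addressService = createMock(IAddressService.class);

        // Assumes simple no-arg constructors for the domain objects
        User user = new User();
        Address address = new Address();

        // Record the expected collaborations
        expect(userService.getUserById("42")).andReturn(user);
        expect(addressService.getAddressByUser(user)).andReturn(address);
        replay(userService, addressService);

        // Assumes a constructor taking both services exists
        ServiceClassUnderTest service =
                new ServiceClassUnderTest(userService, addressService);

        assertSame(address, service.getUsersAddress("42"));
        verify(userService, addressService);
    }
}

Note how much of the test is mock setup; that is the readability cost you mentioned, and it is the price of keeping the test independent of Derby and of the address-service project's tables.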

Unit testing with a Singleton

I am developing an AS3 application which uses a Singleton class to store metrics in arrays. It's a Singleton because I only ever want one instance of this class to be created, and it needs to be accessible from any part of the app.
The difficulty comes when I want to unit test this class. I thought adding public getters and setters would enable me to unit test it properly and would be useful for my app. I have read that changing to a Factory pattern, or using Inversion of Control, would enable unit testing, and would of course make it more flexible too. I would like to know people's thoughts on this matter, as there are SO many conflicting opinions!
Thanks
Chris
If you're using an IoC framework, then make your consumers require an instance of the service in their constructor, and configure the IoC framework to only build one instance and keep handing it out to all requests in the entire application. This is the default behavior of Castle Windsor in my experience.
For unit testing you can use a Mock object in place of the real object.
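Sketched in Java for brevity (the shape is the same in AS3 or C#), the idea looks like this; MetricsStore, ReportGenerator, and RecordingStore are hypothetical names:

// Hypothetical service that the app currently reaches through a Singleton.
interface MetricsStore {
    void record(String name, double value);
}

// The consumer asks for its dependency in the constructor instead of
// calling a static getInstance() somewhere inside its methods.
class ReportGenerator {
    private final MetricsStore store;

    ReportGenerator(MetricsStore store) {
        this.store = store;
    }

    void trackPageView(String page) {
        store.record("pageView:" + page, 1.0);
    }
}

// A hand-rolled stub for tests; a mocking library works just as well.
class RecordingStore implements MetricsStore {
    String lastName;

    public void record(String name, double value) {
        lastName = name;
    }
}

In production, the IoC container is configured to build a single MetricsStore and hand that same instance to every consumer, so you keep the single-instance behaviour without the hard-wired global access that makes the class difficult to test.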

Linq-to-sql datacontext class design question?

I'm using LINQ to SQL in a web scenario only. I know it is recommended to always wrap the DataContext in a using block, but in a web-only scenario, the kind I'm designing for, this is really not necessary, because each page will spawn and die in a very short time.
Given that background, I've looked at extending some of my LINQ to SQL classes, as such:
public partial class aspnet_User
{
    MainDataContext _dc = new MainDataContext();

    public MainDataContext DataContext
    {
        get { return _dc; }
        set { _dc = value; }
    }

    public static aspnet_User GetUser(Guid guid)
    {
        // Here I load the user and associated child tables.
        // I set the DataContext for the aspnet_User with the local DC here.
    }

    public void SaveUser()
    {
        _dc.SubmitChanges();
    }
}
So, this is the design construct I've employed, and it appears to work well in my case. Part of my problem is that I'm using the built-in membership structure and trying to add to it, which is easier than recreating my own but has certain limitations. Granted, the exact composition of the interface to the object isn't ideal, because some of the functionality is stratified across the database tables, for better or worse.
My question is: for some of the child objects, such as aspnet_Membership, I've also extended them with their own DC. But there is no mechanism yet for me to update all the children's DCs without setting each one manually.
I'd like to see various design solutions that require a minimum of coding that could address this in a somewhat elegant way.
Also, any detailed and specific suggestions about the stratification of the objects would be appreciated. That could kill two birds with one stone.
You really should manipulate the Membership through the methods provided natively by the System.Web.Security.MembershipProvider classes. It's not advisable to manipulate the database tables in the ASP.NET Membership database directly.
If you want to wrap the System.Web.Security.MembershipProvider in your own custom class, that's fine; in fact, ASP.NET MVC does exactly that to make it easier to perform unit testing. But that's not really a DataContext, as such.

What's the recommended data access layer design pattern if I will apply ADO.NET Entity Framework later?

I am creating a website and using LINQ to SQL as the data access layer, and I want the website to work with both LINQ to SQL and ADO.NET Entity Framework, without changing many things in the other layers (business logic layer or UI layer).
What's the recommended pattern to achieve this goal? Can you explain briefly how to do it?
UPDATE
As answered below, the Repository pattern will help me a lot. I checked the Nerd Dinner website and understood it, but I found this code inside:
public class DinnersController : Controller {

    IDinnerRepository dinnerRepository;

    //
    // Dependency Injection enabled constructors

    public DinnersController()
        : this(new DinnerRepository()) {
    }

    public DinnersController(IDinnerRepository repository) {
        dinnerRepository = repository;
    }
As I understand it, this declares a dinnerRepository using the IDinnerRepository interface, and the default constructor hands it a DinnerRepository, which in my case would be the LINQ to SQL implementation.
My question is: if I need to switch to ADO.NET Entity Framework, will I need to edit this constructor line, or is there a better solution?
Update 2
Where should I put this repository interface and the classes that implement it in my solution: in the data access layer or in the business layer?
The Repository pattern is a good choice. If you implement it as an interface; then you can change out the concrete classes and not have to change anything else.
The Nerd Dinner walkthrough has an excellent example of the Repository pattern (with interface).
The code you listed would go in your controller (if you were building an MVC application), and you can create any class you want, so long as it implements the IDinnerRepository interface. (You could also have something like an IRepository interface that every repository must implement for the basic CRUD actions, and then implement more specific interfaces as needed, but let's not go interface crazy.)
If you're "tiering" your application, then that part would go in the "Business Logic" layer, and the repository would be in the "Data Access" layer. That constructor contract would be the loosely coupled part.
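To make the swap concrete, here is a minimal sketch in Java (the C# wiring is nearly identical); EntityFrameworkDinnerRepository, the method shown, and CompositionRoot are hypothetical stand-ins:

import java.util.Collections;

interface IDinnerRepository {
    Iterable<String> findUpcomingDinners();
}

// Today's LINQ to SQL-backed implementation (body elided).
class LinqToSqlDinnerRepository implements IDinnerRepository {
    public Iterable<String> findUpcomingDinners() {
        return Collections.emptyList();
    }
}

// Tomorrow's Entity Framework-backed implementation (hypothetical).
class EntityFrameworkDinnerRepository implements IDinnerRepository {
    public Iterable<String> findUpcomingDinners() {
        return Collections.emptyList();
    }
}

// The controller only ever sees the interface.
class DinnersController {
    private final IDinnerRepository dinnerRepository;

    DinnersController(IDinnerRepository repository) {
        this.dinnerRepository = repository;
    }
}

class CompositionRoot {
    public static void main(String[] args) {
        // Switching to Entity Framework means changing only this one line,
        // or one binding in an IoC container if you use one.
        DinnersController controller =
                new DinnersController(new LinqToSqlDinnerRepository());
    }
}

So yes, something has to name the concrete class somewhere, but it is one line in one place (the composition root or container configuration), not a change rippling through your business or UI layers.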
I wound up using a minor variation on the "Repository" pattern. I picked it up from the excellent Nerd Dinner tutorial. You can find the whole tutorial here and the code is on Codeplex.
Don't let all the MVC put you off if you're not in an MVC situation; the underlying encapsulation of Linq2SQL is a good one. In a recent update of a codebase, I went from Linq2SQL to Linq2EF, and all the changes were nicely contained in the repository; no outside code had to be touched.
It is also worth noting that the RIA Services stuff comes with a similar pattern. You point it at Linq2SQL or Linq2EF and it builds you a basic layer over it, complete with CRUD. That layer is in source code, so you could just rip it out and use it in a non-RIA project, but I just leave it as is and link to it in other projects, so I use the layer even if I ignore the over-the-wire abilities.

Developing to an interface with TDD

I'm a big fan of TDD and use it for the vast majority of my development these days. One situation I run into somewhat frequently, though, and have never found what I thought was a "good" answer for, is something like the following (contrived) example.
Suppose I have an interface, like this (writing in Java, but really, this applies to any OO language):
public interface PathFinder {
    GraphNode[] getShortestPath(GraphNode start, GraphNode goal);
    int getShortestPathLength(GraphNode start, GraphNode goal);
}
Now, suppose I want to create three implementations of this interface. Let's call them DijkstraPathFinder, DepthFirstPathFinder, and AStarPathFinder.
The question is, how do I develop these three implementations using TDD? Their public interface is going to be the same, and, presumably, I would write the same tests for each, since the results of getShortestPath() and getShortestPathLength() should be consistent among all three implementations.
My choices seem to be:
Write one set of tests against PathFinder as I code the first implementation. Then write the other two implementations "blind" and make sure they pass the PathFinder tests. This doesn't seem right because I'm not using TDD to develop the second two implementation classes.
Develop each implementation class in a test-first manner. This doesn't seem right because I would be writing the same tests for each class.
Combine the two techniques above; now I have a set of tests against the interface and a set of tests against each implementation class, which is nice, but the tests are all the same, which isn't nice.
This seems like a fairly common situation, especially when implementing a Strategy pattern, and of course the differences between implementations might be more than just time complexity. How do others handle this situation? Is there a pattern for test-first development against an interface that I'm not aware of?
You write interface tests to exercise the interface, and you write more detailed tests for the actual implementations. Interface-based design talks a bit about the fact that your unit tests should form a kind of "contract" specification for that interface. Maybe when Spec# comes out, there'll be a language-supported way to do this.
In this particular case, which is a strict strategy implementation, the interface tests are enough. In other cases, where an interface is a subset of the implementation's functionality, you would have tests for both the interface and the implementation. Think of a class which implements 3 interfaces, for example.
EDIT: This is useful so that when you add another implementation of the interface down the road, you already have tests for verifying that the class implements the contract of the interface correctly. This can work for something as specific as ISortingStrategy to something as wide-ranging as IDisposable.
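One common JUnit way to encode that contract is an abstract test class: the shared tests live in the base class, and each implementation gets a thin subclass that supplies the instance under test. A minimal sketch, reusing the PathFinder interface from the question (it assumes GraphNode can be constructed directly, which your real graph setup may not allow, and the asserted property is just one example of a contract rule):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public abstract class PathFinderContractTest {

    // Each subclass supplies the implementation under test.
    protected abstract PathFinder createPathFinder();

    @Test
    public void pathRunsFromStartToGoal() throws Exception {
        GraphNode start = new GraphNode();  // assumed constructor; build a
        GraphNode goal = new GraphNode();   // real connected test graph here
        PathFinder finder = createPathFinder();

        GraphNode[] path = finder.getShortestPath(start, goal);

        // Contract rule: any returned path begins at start and ends at goal.
        assertEquals(start, path[0]);
        assertEquals(goal, path[path.length - 1]);
    }
}

// In its own file: the inherited contract tests run against this implementation.
public class DijkstraPathFinderTest extends PathFinderContractTest {
    @Override
    protected PathFinder createPathFinder() {
        return new DijkstraPathFinder();
    }
}

Each implementation-specific subclass can then add its own tests for behaviour the contract does not cover, which addresses option 3 from the question without duplicating the shared assertions.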
There is nothing wrong with writing tests against the interface and reusing them for each implementation. For example:
public class TestPathFinder : TestClass
{
    public IPathFinder _pathFinder;
    public IGraphNode _startNode;
    public IGraphNode _goalNode;

    public TestPathFinder() : this(null, null, null) { }

    public TestPathFinder(IPathFinder ipf,
                          IGraphNode start, IGraphNode goal) : base()
    {
        _pathFinder = ipf;
        _startNode = start;
        _goalNode = goal;
    }
}

TestPathFinder tpfDijkstra = new TestPathFinder(
    new DijkstraPathFinder(), n1, nN);
tpfDijkstra.RunTests();
// etc. - factory optional
I would argue that this is the least effort solution, which is very much in line with Agile/TDD principles.
I would have no problem going with option 1, and keep in mind that refactoring is part of TDD; it's usually during a refactoring phase that you move to a design pattern such as Strategy, so I wouldn't feel bad about doing that without writing new tests.
If you wanted to test the implementation-specific details of each PathFinder impl, you might consider passing mock GraphNodes which are somehow capable of helping to assert the Dijkstra-ness or DepthFirst-ness, etc, of the implementation. (Perhaps these mock GraphNodes could record how they are traversed, or somehow measure performance.) Maybe this is testing overkill, but then again if you know your system needs these three distinct strategies for some reason, it'd probably be good to have tests to demonstrate why - otherwise why not just pick one implementation and throw the others away?
I don't mind reusing test code as a template for new tests that have similar functionality. Depending on the particular class under test, you may have to rework them with different mock objects and expectations. At the least you'll have to refactor them to use the new implementation. I would follow the TDD method, though, of taking one test, reworking it for the new class, then writing just the code to pass that test. This may take even more discipline, though, since you already have one implementation under your belt and will undoubtedly be influenced by code you have already written.
"This doesn't seem right because I'm not using TDD to develop the second two implementation classes."
Sure you are.
Start by commenting out all the tests but one. As you make a test pass, either refactor or uncomment another test.
Jtf