I need to test a method which returns an ordered List of some complex objects. Simplified example:
class MyObject {
    private String someString;

    public String foo() { return someString; }
}
I want to test both the ordering of the returned collection (so far I have been using org.hamcrest.collection.IsIterableContainingInOrder.contains) and whether the elements fulfil a predicate.
To sum up, I'm looking for syntax like this:
@Test
public void shouldMatchPredicate() {
    List<MyObject> collection = testObject.generate();
    // collection = [myObject#x, myObject#y, myObject#z]
    assertThat(collection, somePredicate("x", "y", "z"));
}
The default contains matcher does not work, since its first argument is a Collection<MyObject> while the arguments in the predicate are Strings. I need some kind of bridge between them.
Since Predicate is a Guava class and Hamcrest does not depend on Guava, Hamcrest will not have a Matcher that takes a Predicate. Also, since Guava does not depend on Hamcrest, Guava will not provide such a Matcher either.
I suggest writing your own Matcher that takes a Predicate. This is relatively easy to do. Get the source code for IsIterableContainingInOrder and modify it to take a Predicate.
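As a rough sketch of that bridge (not copied from IsIterableContainingInOrder itself), you can also build the matcher out of Hamcrest's FeatureMatcher, extracting foo() from each element and delegating to contains. The helper names hasFoosInOrder and foo below are made up for this example, and Matchers.contains assumes hamcrest-library is on the classpath:

import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.equalTo;

import java.util.ArrayList;
import java.util.List;

import org.hamcrest.FeatureMatcher;
import org.hamcrest.Matcher;

final class MyObjectMatchers {

    // Matches an iterable whose elements' foo() values are exactly these strings, in order.
    static Matcher<Iterable<? extends MyObject>> hasFoosInOrder(String... expectedFoos) {
        List<Matcher<? super MyObject>> itemMatchers = new ArrayList<>();
        for (String expected : expectedFoos) {
            itemMatchers.add(foo(equalTo(expected)));
        }
        return contains(itemMatchers);
    }

    // FeatureMatcher extracts foo() from each MyObject and applies the inner matcher to it.
    private static Matcher<MyObject> foo(Matcher<? super String> fooMatcher) {
        return new FeatureMatcher<MyObject, String>(fooMatcher, "MyObject with foo()", "foo()") {
            @Override
            protected String featureValueOf(MyObject actual) {
                return actual.foo();
            }
        };
    }
}

The assertion in the question then reads assertThat(collection, hasFoosInOrder("x", "y", "z"));.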
Another option would be to do the following:
assertThat(Iterables.all(myList, myPredicate), CoreMatchers.is(true));
This won't give you much documentation on a failure but it will pass/fail properly.
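For reference, myPredicate here is a Guava Predicate over your element type. A made-up example (the condition is purely illustrative), to be placed inside your test:

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

import com.google.common.base.Predicate;
import com.google.common.collect.Iterables;

// Illustrative predicate: every element's foo() value is non-empty.
Predicate<MyObject> myPredicate = new Predicate<MyObject>() {
    @Override
    public boolean apply(MyObject input) {
        return input.foo() != null && !input.foo().isEmpty();
    }
};

assertThat(Iterables.all(myList, myPredicate), is(true));

Note that this only checks that every element satisfies the predicate; it says nothing about their order.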
I would use a MyObjectFactory in testObject.generate(), avoiding the direct new statement.
MyObjectFactory would be a dependency of testObject.
Doing so, I would obtain 2 benefits:
A weaker coupling between testObject and MyObject (testObject would know MyObject only in terms of its interface)
The possibility to mock MyObjectFactory and, finally, the possibility to assert the 3 ordered calls: MyObjectFactory.BuildNewWithValue("x"), MyObjectFactory.BuildNewWithValue("y") and MyObjectFactory.BuildNewWithValue("z") (sketched below)
Your unit test would be an interaction test.
To assert the returned collection itself, I would write 3 asserts.
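A hedged sketch of what such an interaction test could look like with Mockito. The MyObjectFactory interface, its buildNewWithValue(String) method and the TestObject constructor injection are assumptions for this example, not code from the question:

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;

import org.junit.Test;
import org.mockito.InOrder;

public class TestObjectInteractionTest {

    @Test
    public void shouldBuildObjectsInOrder() {
        // Hypothetical factory dependency, injected into the object under test.
        MyObjectFactory factory = mock(MyObjectFactory.class);
        TestObject testObject = new TestObject(factory);

        testObject.generate();

        // Assert the three ordered factory calls mentioned above.
        InOrder inOrder = inOrder(factory);
        inOrder.verify(factory).buildNewWithValue("x");
        inOrder.verify(factory).buildNewWithValue("y");
        inOrder.verify(factory).buildNewWithValue("z");
    }
}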
I have a method which needs to be called instead of the real method. Instead, I get an exception. Can somebody please help me with the right way to call the alternate method through Mockito?
org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
Invalid use of argument matchers!
2 matchers expected, 4 recorded.
This exception may occur if matchers are combined with raw values:
//incorrect:
someMethod(anyObject(), "raw String");
When using matchers, all arguments have to be provided by matchers.
For example:
//correct:
someMethod(anyObject(), eq("String by matcher"));
//Code starts here
class A {
    public String realMethod(String s, Foo f) {
        ...
    }
}

class B {
    public String mockMethod(String s, Foo f) {
        ...
    }
}

class UnitTestClass {
    A mock = new A();

    // this is the line that throws the exception:
    when(mock.realMethod(any(String.class), any(Foo.class)))
        .thenReturn(mockMethod(any(String.class), any(Foo.class)));
}
You are getting mocking wrong.
Here:
thenReturn(mockMethod(any(String.class), any(Foo.class)));
That simply doesn't make sense.
Mocking works like this:
you create a mock object of some class, like A mock = mock(A.class)
you specify interactions on that mock object
Your code implies that you think that these specifications are working like "normal" code - but they do not!
What you want to do: when some object is called with certain parameters, then return the result of another method call.
Like in:
when(a.foo(x, y)).thenReturn(b.bar(x, y))
That is what you intend to do. But the thing is: it isn't that easy. You can't use the any() matcher in the thenReturn() part in order to "provide" the arguments that were passed in the when() call before! It is that simple.
Mocking should be within a specific unit test to get a specific result.
Meaning: you are not writing an ordinary program where it would make any sense to "forward" parameters to another call. In other words, your code should look more like:
when(mock.realMethod("a", someSpecificFoo)).thenReturn(mockMethod("a", someSpecificFoo))
That is the only thing possible here.
Beyond that, you might want to look into a Mockito Answer instead.
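A hedged sketch of that Answer-based approach, assuming realMethod and mockMethod return a String as sketched in the question, and a Mockito 2+ API for getArgument:

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UnitTestClass {

    @Test
    public void delegatesToAlternateMethod() {
        A a = mock(A.class);
        B b = new B();

        // Forward whatever arguments the tested code actually passes on to the alternate method.
        when(a.realMethod(any(String.class), any(Foo.class)))
                .thenAnswer(invocation -> {
                    String s = invocation.getArgument(0);
                    Foo f = invocation.getArgument(1);
                    return b.mockMethod(s, f);
                });

        // ... exercise the code that uses "a" here ...
    }
}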
Long story short: it simply looks like you don't understand how to use mocking frameworks. I suggest that you step back and read/work various tutorials. This is not something you learn by trial and error.
I am testing a RESTful endpoint in my JUnit test and getting the exception below for the list that is passed as an argument to the save method:
**"Argument(s) are different! Wanted:"**
save(
"121",
[com.domain.PP#6809cf9d,
com.domain.PP#5925d603]
);
Actual invocation has different arguments:
save(
"121",
[com.domain.PP#5b6e23fd,
com.domain.PP#1791fe40]
);
When I debugged the code, it broke at the verify line below and threw the above exception. It looks like the arguments inside the testpPList passed to the save method are different. I don't know how they become different, since I construct them properly in my JUnit test and then invoke the RESTful URL.
Requesting your valuable inputs. Thanks.
Code:
@Test
public void testSelected() throws Exception {
    mockMvc.perform(put("/endpointURL")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(TestUtil.convertObjectToJsonBytes(testObject)))
            .andExpect(status().isOk());

    verify(programServiceMock, times(1)).save(id, testpPList);
    verifyNoMoreInteractions(programServiceMock);
}
Controller method:
@RequestMapping(value = "/endpointURL", method = RequestMethod.PUT)
public @ResponseBody void uPP(@PathVariable String id, @RequestBody List<PPView> pPViews) {
    // Code to construct the list which is passed into the save method below
    save(id, pPList);
}
Implementing Object#equals(Object) can solve this through equality comparison. Nonetheless, sometimes the object you are validating cannot be changed, or its equals method cannot be implemented. For such cases, it's recommended to use org.mockito.Matchers#refEq(T value, String... excludeFields). So you may use something like:
verify(programServiceMock, times(1)).save(id, refEq(testpPList));
Just wrapping the argument with refEq solves the problem.
Make sure you implement the equals method in com.domain.PP.
[Edit]
The reasoning for this conclusion is that your failed test message states that it expects this list of PP
[com.domain.PP@6809cf9d, com.domain.PP@5925d603]
but it's getting this list of PP
[com.domain.PP@5b6e23fd, com.domain.PP@1791fe40]
The hex values after the @ symbol for each PP object are their hash codes. Because they are different, this shows that they belong to different objects. So the default implementation of equals will say they're not equal, which is what verify() uses.
It's good practice to also implement hashCode() whenever you implement equals(): According to the definition of hashCode, two objects that are equal MUST have equal hashCodes. This ensures that objects like HashMap can use hashCode inequality as a shortcut for object inequality (here, placing objects with different hashCodes in different buckets).
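A minimal sketch of what that could look like for PP, assuming (purely for illustration) that equality is defined by an id and a name field:

import java.util.Objects;

public class PP {
    private String id;   // hypothetical fields; use whatever actually
    private String name; // defines equality for your PP

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PP)) return false;
        PP other = (PP) o;
        return Objects.equals(id, other.id) && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // equal objects must produce equal hash codes
        return Objects.hash(id, name);
    }
}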
For example, suppose I have this:
class Gundam00 extends Gundam implements MobileSuit {
    ...
    public void fight(final List<MobileSuit> mobiruSuitso, final List<Gundam> theOtherDudes, final List<Person> casualities) {
        ...
    }
}
Suppose theOtherDudes and casualities parameters are optional. How can I make this method as clean as possible? I thought about having booleans indicating if they're null, and then checking them as needed.
I could also have different versions of the method for each combination of parameters but there would be a lot of code duplication I think.
Any suggestions?
I find that past 2-3 arguments, the ability to remember what all the arguments to a function are suffers. And comprehensibility along with it.
Passing named arguments can help. Languages with a convenient hash-like literal syntax make this really easy. Take JavaScript:
g = new Gundam00();
g.fight({opponent: enemy, casualties: 'numerous'});
You can also take advantage of variable length argument features to work this in (treat odd arguments as names, even arguments as the actual parameters).
g.fight('opponent',enemy,'casualties', 'numerous');
And some languages actually support named arguments straight-out (see: http://en.wikipedia.org/wiki/Named_parameter#Use_in_programming_languages ).
Finally, you might want to consider adding other methods for this using what some call a Fluent Interface (http://en.wikipedia.org/wiki/Fluent_interface ). Basically, you've got method calls which return the object itself, so you can chain calls together:
g.opponent(enemy).casualties('numerous').fight();
This might be the easiest option if you're working in a manifestly/statically-typed class-focused language.
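A hedged sketch of what that fluent style could look like in Java for the fight() example. MobileSuit, Gundam and Person are the question's types; everything else here is illustrative:

import java.util.Collections;
import java.util.List;

// Illustrative only; in the question Gundam00 extends Gundam and implements MobileSuit.
class Gundam00 {
    private MobileSuit opponent;                               // optional
    private List<Person> casualties = Collections.emptyList(); // optional

    Gundam00 opponent(MobileSuit opponent) {
        this.opponent = opponent;
        return this; // returning this is what makes the calls chainable
    }

    Gundam00 casualties(List<Person> casualties) {
        this.casualties = casualties;
        return this;
    }

    void fight() {
        // the real work happens here, using whichever optional values were set
    }
}

Usage then reads g.opponent(enemy).casualties(people).fight();.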
Update
Responding to Setsuna's comment... in that last example, if you've got the luxury, you can make methods like opponent and casualties simple setters that don't affect any internal state or computation in any other way than setting a parameter for which they're named. They simply set internal properties up, and then all of the real work happens inside action methods like fight.
If you can't do that (or if you don't like writing methods whose operations are sub-atomic), you could stake out a half-way spot between this idea and the hash-like literal idea, and create your own collection class specifically for invoking named arguments:
n = new NArgs();
g.fight(n.arg('opponent',enemy).arg('casualties','numerous').arg('motion','slow'));
A little more unwieldy, but it separates out the named arguments problem and lets you keep your methods a bit more atomic, and NArgs is probably something you could implement pretty easily just wrapping some methods around one type of Collection (HashTable?) or another that's available in your language.
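A rough Java sketch of that NArgs idea, wrapping a plain HashMap behind a chainable arg() method (all names here are made up):

import java.util.HashMap;
import java.util.Map;

class NArgs {
    private final Map<String, Object> args = new HashMap<>();

    // Chainable setter, so calls can be strung together as in the example above.
    NArgs arg(String name, Object value) {
        args.put(name, value);
        return this;
    }

    Object get(String name) {
        return args.get(name);
    }

    boolean has(String name) {
        return args.containsKey(name);
    }
}

fight() would then take a single NArgs parameter and pull out the pieces it cares about.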
Add the methods. Overloading methods is generally an antipattern and a refactoring opportunity for someone else.
http://www.codinghorror.com/blog/2007/03/curlys-law-do-one-thing.html
I thought about having booleans indicating if they're null, and then checking them inside and reacting accordingly.
Or ... you could just check if they're null.
if(theOtherDudes == null)
...
If there is only one "main method" in your class, then you can implement the optional arguments as getter/setter functions. Example:
public void setOtherDudes(final List<Gundam> theOtherDudes) {} // for input arguments
public List<Person> getCasualities() {} // for output arguments
And then, in your documentation, mention that any optional input arguments have to be passed in before calling fight(), and that the optional output values will be available after fight() has been called.
This is worthwhile if there are dozens of optional arguments. Otherwise, I suggest overloading the method as the simplest way.
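For completeness, a hedged sketch of that overloading alternative for the question's method, with the optional lists defaulting to empty (java.util.Collections is assumed to be imported):

// Callers who only care about the opponents use the short form...
public void fight(final List<MobileSuit> mobiruSuitso) {
    fight(mobiruSuitso, Collections.<Gundam>emptyList(), Collections.<Person>emptyList());
}

// ...which simply delegates to the full version.
public void fight(final List<MobileSuit> mobiruSuitso,
                  final List<Gundam> theOtherDudes,
                  final List<Person> casualities) {
    // full implementation here
}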
I am very interested in Linq to SQL with the lazy load feature. In my project I use AutoMapper to map the DB model to the domain model (from DB_RoleInfo to DO_RoleInfo). My repository code is below:
public DO_RoleInfo SelectByKey(Guid Key)
{
    return SelectAll().Where(x => x.Id == Key).SingleOrDefault();
}

public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
The SelectAll method runs well, but when I call SelectByKey, I get the error:
Method "RealMVC.Data.DO_RoleInfo Map<DB_RoleInfo,DO_RoleInfo>" could not translate to SQL.
Is it that AutoMapper doesn't fully support Linq?
Instead of AutoMapper, I tried the manual mapping code below:
public IQueryable<DO_RoleInfo> SelectAll()
{
    return from role in _ctx.DB_RoleInfo
           select new DO_RoleInfo
           {
               Id = role.id,
               name = role.name,
               code = role.code
           };
}
This method works the way I want it to.
While @Aaronaught's answer was correct at the time of writing, as so often the world has changed and AutoMapper with it. In the meantime, QueryableExtensions were added to the code base, adding support for projections that get translated into expressions and, finally, SQL.
The core extension method is ProjectTo¹. This is what your code could look like:
using AutoMapper.QueryableExtensions;
public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return _ctx.DB_RoleInfo.ProjectTo<DO_RoleInfo>();
}
and it would behave like the manual mapping. (The CreateMap statement is here for demonstration purposes. Normally, you'd define mappings once at application startup).
Thus, only the columns that are required for the mapping are queried and the result is an IQueryable that still has the original query provider (linq-to-sql, linq-to-entities, whatever). So it is still composable and this will translate into a WHERE clause in SQL:
SelectAll().Where(x => x.Id == Key).SingleOrDefault();
¹ Project().To<T>() prior to v. 4.1.0
Change your second function to this:
public IEnumerable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo.ToList()
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
AutoMapper works just fine with Linq to SQL, but it can't be executed as part of the deferred query. Adding ToList() at the end of your Linq query causes it to immediately evaluate the results, instead of trying to translate the AutoMapper segment as part of the query.
Clarification
The notion of deferred execution (not "lazy load") does not make any sense once you've changed the resulting type to something that's not a data entity. Consider these two classes:
public class DB_RoleInfo
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public class DO_RoleInfo
{
    public Role Role { get; set; } // Enumeration type
}
Now consider the following mapping:
Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>()
    .ForMember(dest => dest.Role, opt => opt.MapFrom(src =>
        (Role)Enum.Parse(typeof(Role), src.Name)));
This mapping is completely fine (unless I made a typo), but let's say you write the SelectAll method in your original post instead of my revised one:
public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
This actually kind of works, but by calling itself a "queryable", it lies. What happens if I try to write this against it:
public IEnumerable<DO_RoleInfo> SelectSome()
{
    return from ri in SelectAll()
           where (ri.Role == Role.Administrator) ||
                 (ri.Role == Role.Executive)
           select ri;
}
Think really hard about this. How could Linq to SQL possibly be able to successfully turn your where into an actual database query?
Linq knows nothing about the DO_RoleInfo class. It doesn't know how to do the mapping backward - in some cases, that may not even be possible. Sure, you may look at this code and go "Oh, that's easy, just search for 'Administrator' or 'Executive' in the Name column", but you're the only one who knows that. As far as Linq to SQL is concerned, the query is pure nonsense.
Imagine that somebody gave you these instructions:
Go to the supermarket and bring back the ingredients for making Morton Thompson Turkey.
Unless you've made it before, and most people haven't, your response to that instruction is most likely going to be:
What the hell is that?
You can go to the market, and you can get specific ingredients by name, but you can't evaluate the condition I've given you while you're over there. I have to "un-map" the criteria first. I have to tell you, here are the ingredients we need for this recipe - now go and get them.
To summarize, this is not some simple incompatibility between Linq to SQL and AutoMapper. It is not unique to either of those two libraries. It doesn't matter how you actually do the mapping to a non-entity type - you could just as easily do the mapping manually, and you'd still get the same error, because you are now giving Linq to SQL a set of instructions that are no longer comprehensible, dealing with mysterious classes that don't have an intrinsic mapping to any particular entity type.
This issue is fundamental to the concept of O/R Mapping and deferred query execution. A projection is a one-way operation. Once you project, you can no longer go back to the query engine and say oh by the way, here are some more conditions for you. It's too late. The best you can do is take what it already gave you and evaluate the extra conditions yourself.
Last but not least, I'll leave you with a workaround. If the only thing you want to be able to do from your mapping is filter the rows, you can write this:
public IEnumerable<DO_RoleInfo> SelectRoles(Func<DB_RoleInfo, bool> selector)
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return _ctx.DB_RoleInfo
        .Where(selector)
        .Select(dbr => Mapper.Map<DB_RoleInfo, DO_RoleInfo>(dbr));
}
This is a utility method that handles the mapping for you and accepts a filter on the original entity, and not the mapped entity. It might be useful if you have many different kinds of filters but always need to do the same mapping.
Personally, I think you will be better off just writing out the queries properly: first determine what you need to retrieve from the database, then do any projections/mappings, and finally, if you need to do further filtering (which you shouldn't), materialize the results with ToList() or ToArray() and write more conditions against the local list.
Don't try to use AutoMapper or any other tool to hide the real entities exposed by Linq to SQL. The domain model is your public interface. The queries you write are an aspect of your private implementation. It's important to understand the difference and maintain a good separation of concerns.
I asked a related question about findbugs, but let's ask a more general question.
Suppose that I am working with an object-oriented language in which polymorphism is possible.
Suppose that the language supports static type checking (e.g., Java, C++)
Suppose that the language does not allow variance in parameters (e.g., Java, again...)
If I am overriding the equality operation, which takes Object as a parameter, what should I do in a situation where the parameter is not the same type as, or a subtype of, the LHS that equals was called upon?
Option 1 - Return false, because the objects are clearly not equal.
Option 2 - Throw a casting exception, because if the language actually supported variance (which would have been preferable), this would have been caught at compile time as an error; thus, detecting this error at runtime makes sense, since a situation where another type is sent should have been illegal.
I vote for option 1. It is possible for two objects of different types to be equal -- for example, int and double, if first class objects, can validly be cast as each other and are comparable mathematically. Also, you may want to consider differing subclasses equal in some respects, yet neither may be able to be cast to the other (though they may be derived from the same parent).
Return false, because the objects are not equal.
I don't see how throwing a ClassCastException would be any better here.
There are contracts in interfaces such as Collection or List that actually depend on any object being able to check for equality with any other object.
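A minimal Java sketch of Option 1 for a hypothetical Point class:

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        // Option 1: a different (or null) argument type simply means "not equal" - no exception.
        if (!(obj instanceof Point)) return false;
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}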
It depends.
SomeClass obj1 = new SomeClass();
object other = (object)obj1;
return obj1.Equals(other); // should return "true", since they are really the same reference.

SomeClass obj1 = new SomeClass();
object other = new object();
return obj1.Equals(other); // should return "false", because they're different reference objects.

class SomeClass { }
class OtherClass { }

SomeClass obj1 = new SomeClass();
OtherClass obj2 = new OtherClass();
return obj1.Equals(obj2); // should return "false", because they're different reference objects.
If the types are two completely different types that don't inherit from one another, then there is no possible way they can be the same reference.
You shouldn't be throwing any type of casting exception because you're accepting the base object class as a parameter.
Hmmm.. I like Option 1 as well, for the following 4 reasons:
1) The objects were apparently not equal by whatever condition you used to check
2) Throwing ClassCastException requires a check for that exception every time you do a comparison, and I think that contributes to the code being less understandable or at least longer...
3) The class cast exception is merely a symptom of the problem, which is that the two objects were not equal, even at the type level.
4) As user "cbo" mentioned above, a double and an int can be equal despite their types being different (4.0 == 4), and this applies to other types as well.
Disclaimer: I may be letting my Python ways colour a Java debate :)
Dr. Dobb's Java Q&A says the best practice is that they both are the same type. So I vote option 1.