Mock methods not directly called in unit test with JMock - junit

I have a method under test. Within its call stack, it calls a DAO which in turn uses JDBC to chat with the DB. I am not really interested in knowing what will happen at the JDBC layer; I already have tests for that, and they work wonderfully.
I am trying to mock, using JMock, the DAO layer, so I can focus on the details of this method under test. Here is a basic representation of what I have.
@Test
public void myTest()
{
    context.checking(new Expectations() {
        {
            allowing(myDAO).getSet(with(any(Integer.class)));
            will(returnValue(new HashSet<String>()));
        }
    });

    // Used only to show the mock is working but not really part of this test.
    // These asserts pass.
    Set<String> temp = myDAO.getSet(Integer.valueOf(12));
    Assert.assertNotNull(temp);
    Assert.assertTrue(temp.isEmpty());

    MyTestObject underTest = new MyTestObject();

    // Deep in this call MyDAO is initialized and getSet() is called.
    // The mock is failing to return the Set as desired. getSet() runs as
    // normal and throws an NPE since JDBC is (intentionally) not set up. I want
    // getSet() to just return an empty set at this layer.
    underTest.thisTestMethod();
    ...
    // Other assertions that would be helpful for this test if mocking
    // was working.
}
It seems, from what I have learned creating this test, that I cannot mock indirect objects using JMock, or I am not seeing a key point. I'm hoping for the second half to be true.
Thoughts and thank you.

From the snippet, I'm guessing that MyTestObject uses reflection, or a static method or field to get hold of the DAO, since it has no constructor parameters. JMock does not do replacement of objects by type (and any moment now, there'll be a bunch of people recommending other frameworks that do).
This is on purpose. A goal of JMock is to highlight object design weaknesses, by requiring clean dependencies and focussed behaviour. I find that burying DAO/JDBC access in the domain objects eventually gets me into trouble. It means that the domain objects have secret dependencies that make them harder to understand and change. I prefer to make those relationships explicit in the code.
So you have to get the mocked object somehow into the target code. If you can't or don't want to do that, then you'll have to use another framework.
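For instance, a minimal sketch of what "making the dependency explicit" could look like, assuming MyTestObject can be given its DAO through the constructor (which the original snippet does not show):

public class MyTestObject {

    private final MyDAO myDAO;

    // The DAO is now an explicit, visible dependency.
    public MyTestObject(MyDAO myDAO) {
        this.myDAO = myDAO;
    }

    public void thisTestMethod() {
        // Whatever was injected is used here, so the JMock mock takes effect.
        Set<String> values = myDAO.getSet(12);
        // ... rest of the logic under test
    }
}

The test then simply builds the object with the mock: MyTestObject underTest = new MyTestObject(myDAO); and calls underTest.thisTestMethod() as before.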
P.S. One point of style, you can simplify this test a little:
context.checking(new Expectations() {{
    allowing(myDAO).getSet(12); will(returnValue(new HashSet<String>()));
}});
Within a test, you should really know what values to expect and feed them into the expectation. That makes it easier to see the flow of values between the objects.

Related

JavaFX FXML Parameter passing from Controller A to B and back

I want to create a controller based JavaFX GUI consisting of multiple controllers.
The task I can't accomplish is to pass parameters from one Scene to another AND back.
Or in other words:
The MainController loads SubController's fxml, passes an object to SubController, switches the scene. There shall not be two open windows.
After its work is done, the SubController shall then switch the scene back to the MainController and pass some object back.
This is where I fail.
This question is very similar to this one but still unanswered. Passing Parameters JavaFX FXML
It was also mentioned in the comments:
"This work when you pass parameter from first controller to second but how to pass parameter from second to first controller,i mean after first.fxml was loaded.
– Xlint Xms Sep 18 '17 at 23:15"
I used the first approach in the top answer of that thread.
Does anyone have a clue how to achieve this without external libs?
There are numerous ways to do this.
Here is one solution, which passes a Consumer to another controller. The other controller can invoke the consumer to accept the result once it has completed its work. The sample is based on the example code from an answer to the question that you linked.
public Stage showCustomerDialog(Customer customer) throws IOException {
    FXMLLoader loader = new FXMLLoader(
        getClass().getResource(
            "customerDialog.fxml"
        )
    );

    Stage stage = new Stage(StageStyle.DECORATED);
    stage.setScene(
        new Scene(
            (Pane) loader.load()
        )
    );

    Consumer<CustomerInteractionResult> onComplete = result -> {
        // update main screen based upon result.
    };

    CustomerDialogController controller =
        loader.<CustomerDialogController>getController();
    controller.initData(customer, onComplete);

    stage.show();

    return stage;
}
...
class CustomerDialogController {
    @FXML private Label customerName;

    private Consumer<CustomerInteractionResult> onComplete;

    void initialize() {}

    void initData(Customer customer, Consumer<CustomerInteractionResult> onComplete) {
        customerName.setText(customer.getName());
        this.onComplete = onComplete;
    }

    @FXML
    void onSomeInteractionLikeCloseDialog(ActionEvent event) {
        onComplete.accept(new CustomerInteractionResult(someDataGatheredByDialog));
    }
}
Another way to do this is to add a result property to the controller of the dialog screen and interested invokers could listen to or retrieve the result property. A result property is how the in-built JavaFX dialogs work, so you would be essentially imitating some of that functionality.
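As a rough sketch of that idea (the property and its result type are illustrative, not an existing API), the dialog controller could expose something like:

class CustomerDialogController {
    private final ObjectProperty<CustomerInteractionResult> result =
            new SimpleObjectProperty<>();

    public ReadOnlyObjectProperty<CustomerInteractionResult> resultProperty() {
        return result;
    }

    @FXML
    void onSomeInteractionLikeCloseDialog(ActionEvent event) {
        result.set(new CustomerInteractionResult(someDataGatheredByDialog));
    }
}

The caller would then listen for the value: controller.resultProperty().addListener((obs, oldVal, newVal) -> { /* update main screen */ });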
If you have a lot of this passing back and forth stuff going on, a shared dependency injection model based on something like Gluon Ignite, might assist you.
I've used AfterBurner.fx for dependency injection, which is very slick and powerful as long as you follow the conventions. It's not necessarily an external lib if you just copy the three classes into your structure. Although you do need the javax Inject jar, so I guess it is an external reference.
Alternately, if you have a central "screen" from which most of your application branches out you could use property binding probably within a singleton pattern. There are some good articles on using singleton in JavaFX, like this one. I did that for a small application that works really great, but defining all of those bindings can get out of hand if there are a lot of properties.
To pass data back, the best approach is probably to fire custom Events, which the parent controller subscribes to with Node::addEventHandler. See How to emit and handle custom events? for context.
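A small sketch of that idea (the event class and its payload are hypothetical):

public class ResultEvent extends Event {
    public static final EventType<ResultEvent> RESULT =
            new EventType<>(Event.ANY, "RESULT");

    private final CustomerInteractionResult result;

    public ResultEvent(CustomerInteractionResult result) {
        super(RESULT);
        this.result = result;
    }

    public CustomerInteractionResult getResult() { return result; }
}

The sub controller fires it on any node of its scene graph with node.fireEvent(new ResultEvent(...)), and the parent controller registers a handler beforehand, for example root.addEventHandler(ResultEvent.RESULT, e -> handleResult(e.getResult())).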
In complex cases where the two controllers have no reference to each other, an Event Bus, as @jewelsea mentioned, is the superior option.
For overall architecture, this Reddit comment provides some good detail: https://www.reddit.com/r/java/comments/7c4vhv/are_there_any_canonical_javafx_design_patterns/dpnsedh/

Would it be valid to assert on a Mockito spied object?

I am spying an object like:
@Spy
@InjectMocks
private final A a = new A();
and in the test case I am asserting on properties of the A object.
As spying an object means calling real methods, would it be right to assert on properties of spied objects?
Technically it will work, but I recommend against it; it's usually a bad idea (even an anti-pattern) to mock a value object.
If the test is to verify properties have changed to specific values then this object is the tested object and as such there would be no reason for it to be a spy.
If creating a simple value object is too difficult with the current API, then there's a usability problem in the tested code, refactoring or utility is needed to create valid instances easily.
Only in some very specific edge cases would one need the combination of @Spy and @InjectMocks. Most of the time it is not needed, and such tests show a design issue in the tested code.
Instead, just create a real object, inject mocks if needed, and assert properties directly on the tested object. A good use of a spy would be verifying only that there were interactions between two collaborating objects.
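For example, a minimal sketch of that approach (the collaborator and method names are made up for illustration):

@RunWith(MockitoJUnitRunner.class)
public class ATest {

    @Mock
    private SomeCollaborator collaborator; // a true external dependency, so a mock is fine

    @InjectMocks
    private A a; // a real instance, not a spy

    @Test
    public void updatesItsPropertiesDirectly() {
        a.doSomething();

        // assert directly on the real object's state
        assertEquals("expected", a.getSomeProperty());
    }
}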

Looking for way to replace many "when...thenReturn" statements in test method

I'm writing a JUnit test on a method that is interacting primarily with classes in org.apache.poi.hssf.usermodel (like HSSFWorkbook, HSSFFont and HSSFCellStyle).
The method ultimately builds up and returns an HSSFWorkbook object.
In order to build up the HSSFWorkbook object, methods like workbook.createFont() and workbook.createCellStyle() are invoked.
I currently mock out the objects like this in the setup class of my unit test
workbook = mock(HSSFWorkbook.class);
font = mock(HSSFFont.class);
cellStyle = mock(HSSFCellStyle.class);
Then, in my test method, I invoke the following to avoid NPEs
when(workbook.createFont()).thenReturn(font);
when(workbook.createCellStyle()).thenReturn(cellStyle);
I'm discovering I'll have to do many more of those in order to avoid the NPEs and I'm wondering if there's a way that I can avoid writing all of those "when...thenReturn" statements.
One of the rules of mocking is: never mock types you don't own. Another rule is that a stubbed call on a mock shouldn't return another mock.
The reason is in front of you :).
If your class only deals with creating the HSSFWorkbook, then treat the tests as integration tests and use the real library. If your class does something else too, then move all the other stuff to other classes (this is to follow the single responsibility principle).
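For example, a sketch of what such an integration-style test could look like (the builder object and its buildWorkbook() method are placeholders for whatever your class under test actually provides):

@Test
public void buildsWorkbookWithStyledHeader() {
    // exercise the real code against the real Apache POI library, no mocks
    HSSFWorkbook workbook = reportBuilder.buildWorkbook();

    HSSFSheet sheet = workbook.getSheetAt(0);
    HSSFCell headerCell = sheet.getRow(0).getCell(0);

    // assert on the actual workbook content instead of stubbed interactions
    assertEquals("Expected header", headerCell.getStringCellValue());
}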

Is it possible to determine a method's callback if its type is void and its "return;" had skipped its execution?

I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;

    //start deployment process
}
The userInput goes through individual checks inside the deploy method. Now, I'd like to JUnit-test whether the user input check algorithms behave correctly (i.e. whether the deployment process would start or not depending on right or wrong user input). So I need to test this with right and wrong user inputs. I could do this by checking whether anything has been deployed at all, but in this case that is very cumbersome.
So I wonder if it's somehow possible to know, in the corresponding JUnit test, whether the deploy method has been aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or if it throws an Exception. The easiest way to do this is using a mocking framework like JMockit or Mockito. You have to mock the first method after the checking of user input has finished:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;

    //start deployment process
    startDeploy(); // mock this method
}
You can also extend the class under test, and override startDeploy() if it's possible. This would avoid having to use a mocking framework.
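For instance, a rough sketch of the override approach (assuming the class is called Deployer and startDeploy() is overridable, neither of which is stated in the question):

@Test
public void deployStopsOnWrongUserInput() {
    final AtomicBoolean deployStarted = new AtomicBoolean(false);

    Deployer deployer = new Deployer() {
        @Override
        void startDeploy() {
            deployStarted.set(true); // record the call instead of really deploying
        }
    };

    deployer.deploy(wrongUserInput); // wrongUserInput is whatever invalid input you build for the test

    // deploy() should have returned before ever reaching startDeploy()
    assertFalse(deployStarted.get());
}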
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (ftp), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems, and test these external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but often mocking these methods can be as hard as just testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience, it's easier to refactor if you're testing actual behaviour rather than mocking.
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and the deployment process into separate classes - then your code can be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
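A short sketch of such a test (Mockito syntax is used here for brevity; jMock expectations would express the same thing, and the Validator, Deployer and DeploymentService names are assumptions for illustration):

@Test
public void doesNotDeployWhenValidationFails() {
    Validator validator = mock(Validator.class);
    Deployer deployer = mock(Deployer.class);
    when(validator.isValid(userInput)).thenReturn(false);

    // DeploymentService is a hypothetical class wiring the validator and deployer together
    new DeploymentService(validator, deployer).handle(userInput);

    // the deployer must never be reached for invalid input
    verify(deployer, never()).deploy(userInput);
}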

Major difference between: Mockito and JMockIt

This is what I found from my initial attempts to use JMockIt. I must admit that I found the JMockIt documentation very terse for what it provides and hence I might have missed something. Nonetheless, this is what I understood:
Mockito: List a = mock(ArrayList.class) does not stub out all methods of List.class by default. a.add("foo") is going to do the usual thing of adding the element to the list.
JMockIt: @Mocked ArrayList<String> a; stubs out all the methods of a by default. So now a.add("foo") is not going to work.
This seems like a very big limitation to me in JMockIt.
How do I express the fact that I only want it to give me statistics for the add() method and not replace the method implementation itself? What if I just want JMockIt to count the number of times add() was called, but leave the implementation of add() as is?
I am unable to express this in JMockIt. However, it seems I can do this in Mockito using spy().
I really want to be proven wrong here. JMockit claims that it can do everything that other mocking frameworks do, plus a lot more. That does not seem to be the case here.
@Test
public void shouldPersistRecalculatedArticle()
{
    Article articleOne = new Article();
    Article articleTwo = new Article();

    when(mockCalculator.countNumberOfRelatedArticles(articleOne)).thenReturn(1);
    when(mockCalculator.countNumberOfRelatedArticles(articleTwo)).thenReturn(12);
    when(mockDatabase.getArticlesFor("Guardian")).thenReturn(asList(articleOne, articleTwo));

    articleManager.updateRelatedArticlesCounters("Guardian");

    InOrder inOrder = inOrder(mockDatabase, mockCalculator);
    inOrder.verify(mockCalculator).countNumberOfRelatedArticles(isA(Article.class));
    inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
}
@Test
public void shouldPersistRecalculatedArticle()
{
    final Article articleOne = new Article();
    final Article articleTwo = new Article();

    new Expectations() {{
        mockCalculator.countNumberOfRelatedArticles(articleOne); result = 1;
        mockCalculator.countNumberOfRelatedArticles(articleTwo); result = 12;
        mockDatabase.getArticlesFor("Guardian"); result = asList(articleOne, articleTwo);
    }};

    articleManager.updateRelatedArticlesCounters("Guardian");

    new VerificationsInOrder(2) {{
        mockCalculator.countNumberOfRelatedArticles(withInstanceOf(Article.class));
        mockDatabase.save((Article) withNotNull());
    }};
}
A statement like this
inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
in Mockito does not have an equivalent in JMockIt, as you can see from the example above.
new NonStrictExpectations(Foo.class, Bar.class, zooObj)
{
    {
        // don't call zooObj.method1() here
        // otherwise it will get stubbed out
    }
};

new Verifications()
{
    {
        zooObj.method1(); times = N;
    }
};
In fact, all mocking APIs mock or stub out every method in the mocked type, by default. I think you confused mock(type) ("full" mocking) with spy(obj) (partial mocking).
JMockit does all that, with a simple API in every case. It's all described, with examples, in the JMockit Tutorial.
For proof, you can see the sample test suites (there are many more that have been removed from newer releases of the toolkit, but can still be found in the old zip files), or the many JMockit integration tests (over one thousand currently).
The equivalent to Mockito's spy is "dynamic partial mocking" in JMockit. Simply pass the instances you want to partially mock as arguments to the Expectations constructor. If no expectations are recorded, the real code will be executed when the code under test is exercised. BTW, Mockito has a serious problem here (which JMockit doesn't), because it always executes the real code, even when it's called inside when(...) or verify(...); because of this, people have to use doReturn(...).when(...) to avoid surprises on spied objects.
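To illustrate that Mockito point with a tiny example (the list is just a stand-in for any spied object):

List<String> spyList = spy(new ArrayList<>());

// when(spyList.get(0)).thenReturn("stubbed");   // would call the real get(0) and throw IndexOutOfBoundsException

// the spy-safe form stubs without invoking the real method:
doReturn("stubbed").when(spyList).get(0);

spyList.add("real");             // unstubbed calls still run the real code
verify(spyList).add("real");     // and the interactions can still be verified/counted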
Regarding verification of invocations, the JMockit Verifications API is considerably more capable than any other. For example:
new VerificationsInOrder() {{
    // preceding invocations, if any
    mockDatabase.save((Article) withNotNull()); times = 2;
    // later invocations, if any
}};
Mockito's a much older library than JMockIT, so you could expect that it would have many more features. Have a read through the release list if you want to see some of the less well documented functionality. The JMockIT authors have produced a matrix of features in which they left out every single thing that other frameworks do and JMockIT doesn't, and got several entries wrong (for instance, Mockito can do strict mocks and ordering).
Mockito was also written to enable unit-level BDD. That generally means that if your tests provide a good example of how to use the code, and if your code is lovely and decoupled and well-designed, then you won't need all the shenanigans that JMockIT provides. One of the hardest things to do in Open Source is say "no" to the many requests that don't help in the long run.
Compare the examples on the front pages of Mockito and JMockIT to see the real difference. It's not about what you test, it's about how well your tests document and describe the behavior of the class.
Declaration of Interest: Szczepan and I were on the same project when he wrote the first draft of Mockito, after seeing some of us roll out our own stub classes rather than use the existing mocking frameworks of the time. So I feel like he wrote it all for me, and am thoroughly biased. Thank you Szczepan.