Can anyone explain the difference between Cucumber and JUnit to me?
From my understanding they are both used to test Java code, but I am not sure of the difference.
Are they simply different implementations of the same kind of test suite, or are they aimed at testing different things?
Cucumber and JUnit are different tools that solve different problems.
Cucumber is a Behavior Driven Development (BDD) framework that takes "stories" or scenarios written in a human-readable language such as English and turns that text into a software test.
Here's an example Cucumber story:
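For example, a simple calculator scenario written in Gherkin (the feature, wording, and values here are purely illustrative) might read:

Feature: Addition
  In order to avoid silly mistakes
  As a user
  I want to be told the sum of two numbers

  Scenario: Add two numbers
    Given I have entered 50 into the calculator
    And I have entered 70 into the calculator
    When I press add
    Then the result should be 120 on the screen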
Cucumber then knows how to turn this text into a software test that checks the software works as described. The output tells you whether the story matches what the software actually does and, if not, how it differs.
Here's where the code is written (or fixed) so that the Cucumber test passes.
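As a rough sketch (assuming a trivial Calculator class, inlined below purely so the example is self-contained), step definitions that make such a scenario pass might look like this:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import cucumber.annotation.en.Given;
import cucumber.annotation.en.Then;
import cucumber.annotation.en.When;

public class CalculatorStepDefinitions {

    // Trivial "production" class inlined for illustration only
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    private final Calculator calculator = new Calculator();
    private final List<Integer> entries = new ArrayList<Integer>();
    private int result;

    @Given("^I have entered (\\d+) into the calculator$")
    public void i_have_entered_into_the_calculator(int number) {
        entries.add(number);
    }

    @When("^I press add$")
    public void i_press_add() {
        result = calculator.add(entries.get(0), entries.get(1));
    }

    @Then("^the result should be (\\d+) on the screen$")
    public void the_result_should_be_on_the_screen(int expected) {
        assertEquals(expected, result);
    }
}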
This gives you what is called an "Executable Specification", which is a nice way of documenting all of the features your software supports. It is different from normal documentation because, without the corresponding test, a reader has no way of knowing whether the documentation is still up to date.
Other Benefits of Executable Specifications:
Non-programmers can read and understand the tests
Non-programmers can write the tests since they are in plain English.
BDD results and Executable Specifications are very high level. They cover the overall features and perhaps a few edge cases as examples, but they don't exercise every possible condition or every code path. BDD tests are also "integration tests" in that they check how all your code modules work together, but they don't test each module thoroughly.
This is where JUnit comes in.
JUnit is a lower-level "unit test" tool that allows developers to test every possible code path in their code. Each module of your code (each class, or even each method) is tested in isolation, at a much lower level than a BDD framework. Using the same calculator story as the Cucumber example, JUnit tests would cover lots of different calculation examples and invalid inputs to make sure the program responds correctly and computes the values correctly.
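For instance, a plain JUnit test against a trivial Calculator (inlined here only to keep the sketch self-contained) might look like this:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    // Trivial implementation inlined so the example compiles on its own
    static class Calculator {
        int add(int a, int b) {
            return Math.addExact(a, b); // throws ArithmeticException on overflow
        }
    }

    @Test
    public void addsTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(120, calculator.add(50, 70));
    }

    @Test
    public void addsNegativeNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(-5, calculator.add(-2, -3));
    }

    @Test(expected = ArithmeticException.class)
    public void rejectsInputThatOverflows() {
        new Calculator().add(Integer.MAX_VALUE, 1);
    }
}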
Hope that helps
I think Cucumber is better suited to behaviour-driven acceptance and integration tests, while JUnit is aimed at unit tests. Besides, Cucumber scenarios are more descriptive, but also much more verbose. Here you can see a Cucumber step-definition example:
package com.c0deattack.cucumberjvmtutorial;
import cucumber.annotation.en.Given;
import cucumber.annotation.en.Then;
import cucumber.annotation.en.When;
import cucumber.runtime.PendingException;
public class DepositStepDefinitions {
#Given("^a User has no money in their account$")
public void a_User_has_no_money_in_their_current_account() {
User user = new User();
Account account = new Account();
user.setAccount(account);
}
#When("^£(\\d+) is deposited in to the account$")
public void £_is_deposited_in_to_the_account(int arg1) {
// Express the Regexp above with the code you wish you had
throw new PendingException();
}
#Then("^the balance should be £(\\d+)$")
public void the_balance_should_be_£(int arg1) {
// Express the Regexp above with the code you wish you had
throw new PendingException();
}
private class User {
private Account account;
public void setAccount(Account account) {
this.account = account;
}
}
private class Account {
}
}
You can see that JUnit is simpler, but not necessarily less powerful:
import static org.junit.Assert.assertEquals;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
public class MyClassTest {
@Test(expected = IllegalArgumentException.class)
public void testExceptionIsThrown() {
MyClass tester = new MyClass();
tester.multiply(1000, 5);
}
@Test
public void testMultiply() {
MyClass tester = new MyClass();
assertEquals("10 x 5 must be 50", 50, tester.multiply(10, 5));
}
}
Hope it helps,
Clemencio Morales Lucas.
Cucumber is a tool with which you can do BDD (Behavior Driven Development). For example, you can convert a functional use case into a Cucumber story; in that sense you can also think of Cucumber as a DSL for the functional use case document.
JUnit, on the other hand, is for unit testing, where the unit is typically a single method in Java. A use case can be exercised anywhere from a unit test (rarely, though) up to an integration test or a full system test - that is where Cucumber fits. A unit test will be a unit test only.
I am completely new to Apache Camel.
I have got some basic understanding of it.
Now I am going through some videos and documents to get ideas for implementing JUnit test cases for an Apache Camel Spring DSL based Spring Boot application, but it's not clear to me, since there are many ways to implement them and most material stays at a very high level.
I am confused about which one to follow and what is actually happening in those JUnit tests.
Does anyone have an example, link, or video that explains JUnit coverage for an Apache Camel Spring DSL based Spring Boot application?
I am particularly looking for the JUnit tests.
Also, if you know of any good tutorials about Apache Camel, let me know.
JUnit and Camel don't work together the same way JUnit and "normal" code do, and as far as I am aware there are only fairly rudimentary ways to get coverage of a Camel route from JUnit. A Camel route is essentially an in-memory model of the various steps that need to run, so you can't use code coverage tools to track which parts of it get executed.
Consider this route (in a subclass of RouteBuilder):
public void configure() throws Exception {
from("jms:queue:zzz_in_document_q")
.routeId("from_jms_to_processor_to_jms")
.transacted()
.log(LoggingLevel.INFO, "step 1/3: ${body}")
.bean(DocBean.class)
.log(LoggingLevel.INFO, "step 2/a3 - now I've got this: ${body}")
.process(new DocProcessor())
.log(LoggingLevel.INFO, "step 3/3 - and finally I've got this: ${body}")
.to("jms:queue:zzz_out_document_q");
}
and an associated test case, in a class that extends CamelTestSupport:
@Test
public void testJmsAndDbNoInsert() throws Exception {
long docCountBefore = count("select * from document");
template.sendBody("jms:queue:zzz_in_document_q", new Long(100));
Exchange exchange = consumer.receive("jms:queue:zzz_out_document_q", 5000);
assertNotNull(exchange);
Document d = exchange.getIn().getBody(Document.class);
assertNotNull(d);
long docCountAfter = count("select * from document");
assertEquals(docCountAfter, docCountBefore);
}
When the unit test runs, the application context will run the configure method, so I've got 100% coverage of my route before I even put a message on the queue! Except I don't, because all it has done is create the execution model in the Camel routing engine, so that the various components and processors will now all run in the right order.
Beans and Processors will get included in the coverage reports, but if you have complex logic in the routes themselves this is not going to give you coverage of it.
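Those pieces can still be covered in isolation with plain JUnit, outside of any route. A minimal sketch for the DocProcessor above (assuming Camel 2.x package names, and assuming DocProcessor is the step that produces the Document body, which isn't shown here) might be:

import static org.junit.Assert.assertNotNull;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.DefaultExchange;
import org.junit.Test;

public class DocProcessorTest {

    @Test
    public void setsDocumentBody() throws Exception {
        // Build an Exchange by hand instead of running the route
        CamelContext context = new DefaultCamelContext();
        Exchange exchange = new DefaultExchange(context);
        exchange.getIn().setBody(100L); // a Long input is only a guess, based on the route test above

        new DocProcessor().process(exchange);

        // Mirrors the route test, which expects a Document on the way out
        assertNotNull(exchange.getIn().getBody(Document.class));
    }
}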
There is this capability, delivered around 2017 - https://issues.apache.org/jira/browse/CAMEL-8657 - but I haven't used it and am not sure how well it will work with whatever coverage tooling you use.
I am testing a Cordova plugin in Java/Android and I need to initialize my Plugin class and set some state before I run my Tests.
@Before
public void beforeEach() throws Exception {
System.out.println("Creating new Instance ");
PowerMockito.mockStatic(Helpers.class);
PowerMockito.when(Helpers.canUseStorage(any(), any())).thenReturn(true);
MyLogger myLoggerMock = PowerMockito.mock(MyLogger.class);
PowerMockito.doNothing().when(myLoggerMock, "log", anyString());
PowerMockito.whenNew(MyLogger.class).withAnyArguments().thenReturn(myLoggerMock);
this.sut = spy(new FilePicker());
PowerMockito.doNothing().when(this.sut).pick(any(), any());
}
I want to create a test suite / Java class per public function, but I do not want to repeat that code every time.
Is there a way to share that "before each" between test suites? I have found ClassRule, but I think it does not do what I need (or I am understanding it wrong... I am really new to Java).
In TypeScript we can share beforeEach functions across several suites, and each suite can also have its own beforeEach.
One possible way is to use inheritance:
make all test classes extend one "parent test" class and define an @Before method in that parent class.
It will then be called automatically for all the subclasses:
public class ParentTest {
@Before
public void doInitialization() {
// ... common initialization for all subclasses ...
}
}
public class Test1Class extends ParentTest {
@Test
public void fooTest() {
// doInitialization will be executed before this method
}
@Test
public void barTest() {
// doInitialization will be executed before this method as well
}
}
Two notes:
Note 1
In your code you use sut (subject under test) - this obviously should not be in the parent's doInitialization method, so it's possible that Test1Class will also have its own methods annotated with @Before (read here for information about ordering and so forth).
Then the `sut` gets initialized with a spy, which is frankly weird IMHO; the subject under test should be a real instance of a class you wrote. That's beyond the scope of the question, but I mention it because it may point to a mistake.
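As a minimal sketch of that combination (JUnit 4 runs @Before methods declared in the superclass before those declared in the subclass; the class name here is hypothetical):

import org.junit.Before;
import org.junit.Test;

public class FilePickerTest extends ParentTest {

    private FilePicker sut;

    @Before
    public void setUpSut() {
        // Runs after ParentTest.doInitialization()
        sut = new FilePicker();
    }

    @Test
    public void picksAFile() {
        // exercise sut here
    }
}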
Note 2
I'm writing this in an attempt to help because you've said that you're new to Java; it is not strictly related to your question...
While this approach works, in general you should be really cautious with PowerMockito. I'm not a PowerMockito expert and try to avoid this type of mocking in my code, but in a nutshell the way it manipulates the byte code can clash with other tools. From your code: you can refactor the static Helpers class to be non-static, and thus avoid power mocking in favour of regular mocking, which is faster and much safer.
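A rough sketch of what that refactoring enables (the class and method names here are assumptions based on the test above):

import static org.mockito.Mockito.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Before;

public class FilePickerStorageTest {

    // Instance-based replacement for the static Helpers class
    static class StorageHelper {
        boolean canUseStorage(Object context, Object args) {
            return false; // real check lives here
        }
    }

    private StorageHelper storageHelper;

    @Before
    public void beforeEach() {
        // Plain Mockito is enough once the helper is an instance -
        // no PowerMockito or byte-code manipulation required
        storageHelper = mock(StorageHelper.class);
        when(storageHelper.canUseStorage(any(), any())).thenReturn(true);
        // the mocked helper would then be handed to FilePicker, e.g. via its constructor
    }
}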
As for the logging - usually you can compromise on it in unit tests; if you're using the slf4j library you can configure it to use a "no-op" logger for tests, sending all the logging messages into "nothing" and not seeing them in the console.
I am fairly new to Cucumber. I was experimenting with it by creating a few test features when I noticed a difference between running a single feature and running the whole suite (from IntelliJ).
I noticed that when I run a single feature it runs using the cucumber-jvm option, and in this case the CucumberConfig (the blank class that defines the runner and Cucumber options) and the runner are not used. However, when I run the whole suite it runs as a JUnit test, and obviously in this case the config class and the runner come into the picture.
I confirmed this with the following sample code:
@RunWith(CustomRunner.class)
@CucumberOptions()
public class CucumberConfig {
@BeforeClass
public static void beforeClass()
{
System.out.println("This is run before Once: ");
}
@AfterClass
public static void afterClass()
{
System.out.println("This is run after Once: ");
}
}
CustomRunner
public class CustomRunner extends Cucumber {
public CustomRunner(Class clazz) throws InitializationError, IOException {
super(clazz);
System.out.println("I am in the custom runner.");
}
}
Also, I understand that while running as cucumber-junit we can't pass a specific feature to run, as we can with cucumber-jvm. Correct me if I am wrong.
My doubt is: is this the default behavior, or am I doing something wrong? And if this is the default, how can I make Cucumber always use the config file?
I'd appreciate it if someone could provide some insight on this.
When you're using IntelliJ IDEA to run the tests, IDEA uses cucumber.api.Main to run them. As such it will ignore CucumberConfig, and it will run neither @BeforeClass nor @AfterClass; these are only used by the JUnit runner.
This is what I found from my initial attempts to use JMockit. I must admit that I found the JMockit documentation very terse for what it provides, and hence I might have missed something. Nonetheless, this is what I understood:
Mockito: List a = mock(ArrayList.class) does not stub out all methods
of List.class by default. a.add("foo") is going to do the usual thing
of adding the element to the list.
JMockit: @Mocked ArrayList<String> a;
It stubs out all the methods of a by default. So, now a.add("foo")
is not going to work.
This seems like a very big limitation to me in JMockIt.
How do I express the fact that I only want JMockit to give me statistics
for the add() method, and not replace the method implementation itself?
What if I just want JMockit to count the number of times the method add()
was called, but leave the implementation of add() as is?
I am unable to express this in JMockit. However, it seems I can do this
in Mockito using spy().
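For reference, the Mockito version of that idea (counting calls to add() while keeping the real list behaviour) looks roughly like this:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class SpyCountingTest {

    @Test
    public void countsCallsButKeepsRealBehaviour() {
        List<String> list = spy(new ArrayList<String>());

        list.add("foo"); // the real add() runs and the element really is stored

        verify(list).add("foo");      // add("foo") was called exactly once
        assertEquals(1, list.size()); // real behaviour preserved
    }
}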
I really want to be proven wrong here. JMockit claims that it can do everything that
other mocking frameworks do, plus a lot more. That does not seem to be the case here.
@Test
public void shouldPersistRecalculatedArticle()
{
Article articleOne = new Article();
Article articleTwo = new Article();
when(mockCalculator.countNumberOfRelatedArticles(articleOne)).thenReturn(1);
when(mockCalculator.countNumberOfRelatedArticles(articleTwo)).thenReturn(12);
when(mockDatabase.getArticlesFor("Guardian")).thenReturn(asList(articleOne, articleTwo));
articleManager.updateRelatedArticlesCounters("Guardian");
InOrder inOrder = inOrder(mockDatabase, mockCalculator);
inOrder.verify(mockCalculator).countNumberOfRelatedArticles(isA(Article.class));
inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
}
@Test
public void shouldPersistRecalculatedArticle()
{
final Article articleOne = new Article();
final Article articleTwo = new Article();
new Expectations() {{
mockCalculator.countNumberOfRelatedArticles(articleOne); result = 1;
mockCalculator.countNumberOfRelatedArticles(articleTwo); result = 12;
mockDatabase.getArticlesFor("Guardian"); result = asList(articleOne, articleTwo);
}};
articleManager.updateRelatedArticlesCounters("Guardian");
new VerificationsInOrder(2) {{
mockCalculator.countNumberOfRelatedArticles(withInstanceOf(Article.class));
mockDatabase.save((Article) withNotNull());
}};
}
A statement like this
inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
in Mockito does not have an equivalent in JMockit, as you can see from the example above
new NonStrictExpectations(Foo.class, Bar.class, zooObj)
{
{
// don't call zooObj.method1() here
// Otherwise it will get stubbed out
}
};
new Verifications()
{
{
zooObj.method1(); times = N;
}
};
In fact, all mocking APIs mock or stub out every method in the mocked type, by default. I think you confused mock(type) ("full" mocking) with spy(obj) (partial mocking).
JMockit does all that, with a simple API in every case. It's all described, with examples, in the JMockit Tutorial.
For proof, you can see the sample test suites (there are many more that have been removed from newer releases of the toolkit, but can still be found in the old zip files), or the many JMockit integration tests (over one thousand currently).
The equivalent to Mockito's spy is "dynamic partial mocking" in JMockit. Simply pass the instances you want to partially mock as arguments to the Expectations constructor. If no expectations are recorded, the real code will be executed when the code under test is exercised. BTW, Mockito has a serious problem here (which JMockit doesn't), because it always executes the real code, even when it's called inside when(...) or verify(...); because of this, people have to use doReturn(...).when(...) to avoid surprises on spied objects.
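To illustrate that Mockito caveat, the usual workaround on a spied object looks something like this:

import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class SpyStubbingTest {

    @Test
    public void stubbingASpy() {
        List<String> list = spy(new ArrayList<String>());

        // Risky: when(list.get(0)).thenReturn("foo") would first invoke the real
        // get(0) on the empty list and throw IndexOutOfBoundsException

        // Safe: doReturn(...) stubs the call without invoking the real method
        doReturn("foo").when(list).get(0);
    }
}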
Regarding verification of invocations, the JMockit Verifications API is considerably more capable than any other. For example:
new VerificationsInOrder() {{
// preceding invocations, if any
mockDatabase.save((Article) withNotNull()); times = 2;
// later invocations, if any
}};
Mockito's a much older library than JMockit, so you could expect that it would have many more features. Have a read through the release list if you want to see some of the less well documented functionality. The JMockit authors have produced a matrix of features in which they missed out every single thing that other frameworks do that they don't, and got several wrong (for instance, Mockito can do strict mocks and ordering).
Mockito was also written to enable unit-level BDD. That generally means that if your tests provide a good example of how to use the code, and if your code is lovely and decoupled and well-designed, then you won't need all the shenanigans that JMockit provides. One of the hardest things to do in Open Source is say "no" to the many requests that don't help in the long run.
Compare the examples on the front pages of Mockito and JMockit to see the real difference. It's not about what you test, it's about how well your tests document and describe the behavior of the class.
Declaration of Interest: Szczepan and I were on the same project when he wrote the first draft of Mockito, after seeing some of us roll out our own stub classes rather than use the existing mocking frameworks of the time. So I feel like he wrote it all for me, and am thoroughly biased. Thank you Szczepan.
Without looking into the JUnit source itself (my next step): is there an easy way to set the default Runner to be used with every test, without having to put @RunWith on every test? We've got a huge pile of unit tests, and I want to be able to add some support across the board without having to change every file.
Ideally I'm hoping for something like: -Djunit.runner="com.example.foo".
I don't think this is possible to define globally, but if writing your own main function is an option, you can do something similar through code. You can create a custom RunnerBuilder and pass it to a Suite together with your test classes.
Class<?>[] testClasses = { TestFoo.class, TestBar.class, ... };
RunnerBuilder runnerBuilder = new RunnerBuilder() {
@Override
public Runner runnerForClass(Class<?> testClass) throws Throwable {
return new MyCustomRunner(testClass);
}
};
new JUnitCore().run(new Suite(runnerBuilder, testClasses));
This won't integrate with UI test runners like the one in Eclipse, but for some automated testing scenarios it could be an option.
JUnit doesn't support setting the runner globally. You can hide away the @RunWith in a base class, but this probably won't help in your situation.
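For completeness, the base-class variant (reusing the MyCustomRunner from the example above) would look something like this, with the two classes in separate files:

// BaseCustomRunnerTest.java
import org.junit.runner.RunWith;

@RunWith(MyCustomRunner.class)
public abstract class BaseCustomRunnerTest {
    // nothing else needed; @RunWith is @Inherited, so subclasses pick it up
}

// TestFoo.java
import org.junit.Test;

public class TestFoo extends BaseCustomRunnerTest {

    @Test
    public void somethingWorks() {
        // runs with MyCustomRunner without declaring @RunWith here
    }
}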
Depending on what you want to achieve, you might be able to influence the test behavior globally by using a custom RunListener. Here is how to configure it with the Maven Surefire plugin: http://maven.apache.org/plugins/maven-surefire-plugin/examples/junit.html#Using_custom_listeners_and_reporters
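A custom listener is just a subclass of org.junit.runner.notification.RunListener; a minimal (hypothetical) logging listener registered via that Surefire configuration could look like this:

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class LoggingRunListener extends RunListener {

    @Override
    public void testStarted(Description description) {
        System.out.println("Starting: " + description.getDisplayName());
    }

    @Override
    public void testFailure(Failure failure) {
        System.out.println("Failed: " + failure.getMessage());
    }
}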