How to get a handle on Karate Results when running features and scenarios in sequential mode (not parallel) with Karate and JUnit

I'm basically attempting to run my scenarios in sequence, and I'm trying to find out whether there is a way to get a handle on the Results in an @AfterClass method.
I know this is possible when running in parallel, but we need a breakdown of all the tests executed in the test report, which is not possible when implemented like below.
public class AllTestRunner {

    @Test
    public void testParallel() {
        Results results = Runner.parallel(1, "classpath:karate/feature");
    }
}
The issue with this approach is that the test report shows only 1 test executed, which is not what we expect.
Is there a way to get the Results when running like below?
@RunWith(Karate.class)
@KarateOptions(features = "classpath:karate/feature")
public class AllTestRunner {

    @AfterClass
    public static void afterAll() {
        Results results = ...;
    }
}
Basically, the purpose of getting the Results is to perform an API call if a scenario has failed, by checking result.isFailed().
Again, running the tests in parallel produces a test report like the one below.
1 tests, 0 failures, 0 ignored, 7.373s duration, 100% successful

Test          Duration  Result
testParallel  7.373s    passed

No, there isn't. Maybe you can explore the hooks, but be warned that the API will change slightly in 1.0 - https://stackoverflow.com/a/59080128/143475
If you have unusual needs like this, you should consider contributing code to Karate. The so-called "sequential" mode is a convenience to connect with JUnit. There should not be a problem with the parallel runner; I don't understand what you mean by "only 1 test executed".
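For what it's worth, the failure-driven API call described in the question can be made with the parallel runner itself, even at a thread count of 1. A minimal sketch against the 0.9.x Runner/Results API shown in the question (notifyFailure is a hypothetical stand-in for whatever API call you actually need):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AllTestRunner {

    @Test
    public void testParallel() {
        // a thread count of 1 keeps execution effectively sequential
        Results results = Runner.parallel(1, "classpath:karate/feature");
        if (results.getFailCount() > 0) {
            notifyFailure(results.getErrorMessages()); // hypothetical: perform your API call here
        }
        assertEquals(results.getErrorMessages(), 0, results.getFailCount());
    }

    private void notifyFailure(String errorMessages) {
        // ... make the API call with the collected error messages ...
    }
}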

Related

In a Cucumber BDD project, org.junit.rules.ErrorCollector and org.assertj.core.api.JUnitSoftAssertions are not catching errors despite a mismatch

@Rule
public ErrorCollector errorCollector = new ErrorCollector();

public void verifyDeviceType(String device_Type) {
    System.out.println(deviceType.getText() + "," + device_Type); // prints: camera,camera1
    errorCollector.checkThat("Expected Device Type Not Present.", deviceType.getText(), equalTo(device_Type));
}

public void verifyDeviceStatus(String device_Status) {
    System.out.println(deviceStatus.getText() + "," + device_Status); // prints: Might be offline,Online2
    errorCollector.checkThat("Expected Device Status Not Present.", deviceStatus.getText(), equalTo(device_Status));
}
As shown above, the first method should fail because of the camera vs. camera1 difference.
The second method should fail because of the 'Might be offline' vs. 'Online2' difference, since I am asserting them to be equal.
But ErrorCollector runs smoothly without any complaints, showing all the tests as passed.
Lastly, even if it does report them as errors, how do we access the messages or errors stored in the ErrorCollector, say in a third method that runs after these two have collected their errors?
Then, after learning about JUnitSoftAssertions, I tried:

@Rule
public JUnitSoftAssertions softAssertions = new JUnitSoftAssertions();

public void verifyDeviceType(String device_Type) {
    System.out.println(deviceType.getText() + "," + device_Type); // prints: camera,camera1
    softAssertions.assertThat(deviceType.getText()).as("Expected Device Type").isEqualTo(device_Type);
}

public void verifyDeviceStatus(String device_Status) {
    System.out.println(deviceStatus.getText() + "," + device_Status); // prints: Might be offline,Online2
    softAssertions.assertThat(deviceStatus.getText()).as("Expected Device Status").isEqualTo(device_Status);
}
A reproducible test case would be great if you want people to help you.
I'm not sure I understand exactly what you are trying to achieve; are you looking for a report of all failed assertions? Your code samples don't show any test methods (methods annotated with @Test). In any case, for the AssertJ question, you can access the collected errors with assertionErrorsCollected.
Hope it helps!
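One likely cause, for reference: both ErrorCollector and JUnitSoftAssertions are JUnit rules, so they only verify collected errors around methods that JUnit itself runs as @Test methods; if verifyDeviceType and verifyDeviceStatus are Cucumber step methods, the rule never fires. A minimal self-contained sketch showing the rule working when the checks happen inside an actual @Test (the literal strings stand in for the page-object getters from the question):

import static org.hamcrest.CoreMatchers.equalTo;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ErrorCollectorDemoTest {

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Test
    public void collectsBothMismatches() {
        // both checks fail, but execution continues;
        // JUnit reports both errors once the test method finishes
        errorCollector.checkThat("Expected Device Type Not Present.", "camera", equalTo("camera1"));
        errorCollector.checkThat("Expected Device Status Not Present.", "Might be offline", equalTo("Online2"));
    }
}

Run under a plain JUnit 4 runner, this test fails with both messages attached, which is the behaviour the question expects.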

Collect a JSON object in a file when a JUnit test fails

I have ~50 JSON files, each containing an array of models, that are plugged into unit tests to compare resultant configs. The files are named 0.json, 1.json, and so on, and each looks like this:
[{model1},{model2},{model3}]
I am trying to run the unit tests in a manner where the test itself keeps running, collects the models for which an assertion fails, and outputs them to a JSON file somewhere.
Say model2 fails; I want to collect model2 into a file output.json as an array.
So far the code looks like this. Even if the test works file by file, that's fine; it will still save me days of effort:
@Test
public void compareAWithB() throws Exception {
    File lbJsonFile1 = new File("src/test/resources/iad_ad3/6.json");
    compareAWithBHelper(lbJsonFile1);
}

public void compareAWithBHelper(File lbJsonFile) throws Exception {
    Model[] dtos = new ObjectMapper().readValue(lbJsonFile, Model[].class);
    for (Model dto : dtos) {
        Model model = ModelConverter.apiToDao(dto);
        String A = doSomeThing();
        String B = doSomething2();
        Assert.assertEquals(A, B);
        // Required: if the assert fails, collect the JSON object and continue
    }
}
I tried using SoftAssertions in AssertJ, but weirdly it was not printing out all the JSON objects, or maybe I don't really understand the checkThat() method properly.
I also tried collector.checkThat, but couldn't get it to work reliably. This is a production area, so I don't have much room for error and want to reduce the manual effort.
I made another attempt with collectors, following one of the posts on Stack Overflow, but couldn't get that to work reliably either:

/*
try {
    collector.checkThat(A, CoreMatchers.equalTo(B));
} catch (AssertionError error) {
    System.out.println(dto.toString());
    throw new AssertionError(error.getMessage());
}
*/

Can someone please help?
If you want to gather all assertion errors and not stop at the first one, soft assertions are a good candidate. To get started with soft assertions you can follow the guide available here: https://assertj.github.io/doc/#assertj-core-soft-assertions.
collector.checkThat does not come from AssertJ (nor does anything else in your code samples), which is a bit confusing; I would suggest writing a reproducible test so that people can help more easily.
Alternatively, since you are dealing with JSON, you can give JsonUnit a try: https://github.com/lukas-krecan/JsonUnit provides first-class JSON assertions.
Hope it helps.
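For the specific "collect failures into output.json" requirement, a plain try/catch around the assertion plus Jackson is enough; no extra library is needed. A minimal sketch that reuses the question's Model, ModelConverter, doSomeThing() and doSomething2(), all assumed to exist as in the original code:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Assert;
import org.junit.Test;

public class CompareConfigsTest {

    @Test
    public void compareAWithB() throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Model[] dtos = mapper.readValue(new File("src/test/resources/iad_ad3/6.json"), Model[].class);
        List<Model> failed = new ArrayList<>(); // models whose configs did not match

        for (Model dto : dtos) {
            Model model = ModelConverter.apiToDao(dto);
            String a = doSomeThing();
            String b = doSomething2();
            try {
                Assert.assertEquals(a, b);
            } catch (AssertionError e) {
                failed.add(dto); // collect the failing model and keep going
            }
        }

        if (!failed.isEmpty()) {
            // write the failing models out as a JSON array, then fail the test once
            mapper.writerWithDefaultPrettyPrinter().writeValue(new File("output.json"), failed);
            Assert.fail(failed.size() + " model(s) failed; see output.json");
        }
    }
}

The same pattern drops straight into the original compareAWithBHelper if you prefer to keep the per-file structure.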

Is it possible to specify which Before method runs in JUnit, and if so, how?

I have a test suite which tests two different things in the same class, and a before method that initialises some fields for the test methods to use. However, one group of test methods uses the first set of fields, and another group uses the second set but not the first. I know it's possible to split the setup across different before methods, but is it also possible to specify which one runs before each test?
Concrete example:
@Before
public void before1() {...}

@Before
public void before2() {...}

@Test
public void test1() {
    // Only before1 runs
}

@Test
public void test2() {
    // Only before2 runs
}
This is a simple representation, but I have many more tests that use either of these befores.
Everything you've stated in your question points to splitting your tests into 2 separate classes. I am guessing that the two groups test distinct features of your code and may even share some commonality in the test names. Move all of the tests that require before1 into one test class and all of the tests that require before2 into another, and name the new classes according to the grouping of behaviour you're testing.
For example, if half of your tests cover success scenarios and half cover failure scenarios, put them into classes named something like FooSucceedsTest and FooFailsTest.
There is no guarantee on the order of a @Before executing, just as there's no guarantee on @Test execution order.
The solution is to do any setup a test depends on in the @Test itself, and use @Before only for common setup shared by all tests.
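To make the suggested split concrete, here is a minimal sketch of one of the two resulting classes (Fixture is a hypothetical stand-in for whatever the before1 group actually initialises):

import org.junit.Before;
import org.junit.Test;

// tests that need the first setup live in their own class...
public class FooSucceedsTest {

    private Fixture fixture; // hypothetical: the fields only this group uses

    @Before
    public void before1() {
        fixture = new Fixture(); // only the setup these tests need
    }

    @Test
    public void test1() {
        // uses fixture
    }
}

// ...and the tests that need before2 move to FooFailsTest,
// which declares its own @Before with the second set of fields.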

How can I make JUnit let me set variables in one test case and access them in another if they are in the same class

Let's say I have a test class called MyTest.
In it I have three tests.
public class MyTest {

    AnObject object;

    @Before
    public void setup() {
        object = new AnObject();
        object.setSomeValue(aValue);
    }

    @Test
    public void testMyFirstMethod() {
        object.setAnotherValue(anotherValue);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }

    @Test
    public void testMySecondMethod() {
        AValue val = object.getAnotherValue();
        object.doSomethingElse(val);
        // do some assertion to test that the functionality works
        assertSomething(sometest);
    }
}
Is there any way I can use the value of anotherValue, which is set with its setter in the first test, in the second test? I am using this to test database functionality: when I create an object in the DB, I want to get its GUID so I can use it for updates and deletes in later test methods, without having to hardcode the GUID and thereby making the test irrelevant for future runs.
You are introducing a dependency between two tests. JUnit deliberately does not support dependencies between tests, and you can't guarantee the order of execution (except for test classes in a test suite; see my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?). So if you really want dependencies between two test methods, you can:
1. use an intermediate static value
2. as Cedric suggests, use TestNG, which specifically supports dependencies
3. create a method that creates the row, and call it from both test methods (sketched below).
I would personally prefer 3, because:
- I get independent tests, and I can run just the second test (in Eclipse or the like)
- in my teardown in the class, I can remove the row from the database as cleanup, which means that whichever test I run, I always start off with the same (known) database state.
However, if your setup is really expensive, you can consider this an integration test and just accept the dependency, to save time.
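A minimal sketch of option 3, reusing the question's AnObject and aValue, with dao as a hypothetical persistence helper standing in for the real database code:

import org.junit.After;
import org.junit.Test;

public class MyTest {

    private final Dao dao = new Dao(); // hypothetical DAO for the database under test
    private String createdGuid;        // GUID of the row created by the current test

    // option 3: each test creates the row it needs through a shared helper,
    // so the tests stay independent and can run in any order
    private String createRow() {
        AnObject object = new AnObject();
        object.setSomeValue(aValue);
        return dao.insert(object); // hypothetical DAO call returning the new GUID
    }

    @Test
    public void updateWorks() {
        createdGuid = createRow();
        // ... update the row identified by createdGuid and assert ...
    }

    @Test
    public void deleteWorks() {
        createdGuid = createRow();
        // ... delete the row identified by createdGuid and assert ...
    }

    @After
    public void cleanUp() {
        if (createdGuid != null) {
            dao.delete(createdGuid); // restore the known database state after each test
        }
    }
}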
You should use TestNG if you need this (and I agree it's fairly common in integration testing). TestNG uses the same instance to run your tests, so values stored in fields are preserved between tests, which is very useful when your objects are expensive to create (JUnit forces you to use statics to achieve the same effect, which should be avoided).
First off, make sure your @Test methods run in some kind of defined order, e.g. with @FixMethodOrder(MethodSorters.NAME_ASCENDING).
In the example below, I'm assuming that test2 runs after test1.
To share a variable between them, use a ThreadLocal (from java.lang).
Note that the scope of a ThreadLocal variable is the thread, so if you are running multiple threads, each will have its own copy of 'email' (the static here only makes it global to the thread).
private static ThreadLocal<String> email = new ThreadLocal<String>();

@Test
public void test1() {
    email.set("hchan@apache.org");
}

@Test
public void test2() {
    System.out.println(email.get());
}
You should not do that. Tests are supposed to be able to run in any order. If you want to test things that depend on one value in the database, you can set that up in the @Before code so it's not repeated for each test case.
I found a nice solution: just add the @Before annotation to the previous test!

private static String email = null;

@Before
@Test
public void test1() {
    email = "test@google.com";
}

@Test
public void test2() {
    System.out.println(email);
}
If you, like me, googled your way here and the answers didn't help you, I'll just leave this: use @BeforeEach.

Is a stress test with JUnit possible?

I'm new to Java and JUnit. I need to stress test a set of web services; currently, for each web service I have a test like this:
@Test
public void webServiceTest() {
    Integer firstParameter = 0;
    Integer secondParameter = 9;
    List<GeoArea> sampleList = kitDAO.myWebServiceToTest(firstParameter, secondParameter);
    Assert.assertNotNull(sampleList);
    Assert.assertTrue(sampleList.size() > 0);
}
Is there a way to call this test 100 times simultaneously with different parameters? I would create 100 threads, pass them 100 different sets of parameters, and start the threads simultaneously. Do you think this is possible? How would you do it?
Thank you
JUnitPerf provides a LoadTest wrapper to run the same test multiple times. I don't think you can pass it different parameters, but you could add that part yourself: keep a static list of your 100 parameter sets and have each instance of the test remove one value from that list.
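Without any extra library, an ExecutorService inside a single @Test gets close to the same effect. A minimal sketch reusing the question's kitDAO and GeoArea (kitDAO is assumed to be thread-safe, and the per-call parameter values are made up for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.junit.Assert;
import org.junit.Test;

public class WebServiceStressTest {

    @Test
    public void stressWebService() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        List<Future<List<GeoArea>>> futures = new ArrayList<>();

        // submit 100 concurrent calls, each with its own parameters
        for (int i = 0; i < 100; i++) {
            final int first = i;      // hypothetical per-call parameters
            final int second = i + 9;
            futures.add(pool.submit(() -> kitDAO.myWebServiceToTest(first, second)));
        }

        // wait for every call and assert on each result
        for (Future<List<GeoArea>> future : futures) {
            List<GeoArea> sampleList = future.get(30, TimeUnit.SECONDS);
            Assert.assertNotNull(sampleList);
            Assert.assertTrue(sampleList.size() > 0);
        }
        pool.shutdown();
    }
}

If you need all 100 calls to start at exactly the same instant rather than as the pool picks them up, a CountDownLatch that every task awaits before calling the service is the usual addition.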