Running JUnit 4 tests as part of a JUnit 5 dynamic test

In JUnit 5 you can create dynamic tests, as in this code:
@TestFactory
Stream<DynamicNode> dynamicTestsWithContainers() {
    return Stream.of("A", "B", "C")
        .map(input -> dynamicContainer("Container " + input, Stream.of(
            dynamicTest("not null", () -> assertNotNull(input)),
            dynamicContainer("properties", Stream.of(
                dynamicTest("length > 0", () -> assertTrue(input.length() > 0)),
                dynamicTest("not empty", () -> assertFalse(input.isEmpty()))
            ))
        )));
}
I have loads of JUnit 4 tests I don't want to migrate, but I could use them as subsets of tests that are dynamically chained as part of a wider JUnit 5 dynamic test.
Can this be done without recoding them in a "streams way"? Can I wrap a JUnit 4 test within a JUnit 5 dynamic test (see the rough sketch below)?
What I already tried:
@RunWith(JUnitPlatform.class)
Executable (but not sure if there is a way to do it with this)
What I know:
It would not be highly dynamic
It may not be a best practice
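To make it concrete, this is roughly the kind of wrapping I have in mind (an untested sketch; it assumes JUnit 4's JUnitCore is on the test classpath, and MyOldJUnit4Test / AnotherOldJUnit4Test are placeholders for existing JUnit 4 classes):

import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.DynamicTest.dynamicTest;

class LegacyWrapperTest {

    @TestFactory
    Stream<DynamicTest> legacyJUnit4Suites() {
        // Each legacy JUnit 4 test class becomes one JUnit 5 dynamic test node
        return Stream.of(MyOldJUnit4Test.class, AnotherOldJUnit4Test.class)
            .map(testClass -> dynamicTest(testClass.getSimpleName(), () -> {
                Result result = JUnitCore.runClasses(testClass);
                if (!result.wasSuccessful()) {
                    // Surface the first JUnit 4 failure as a JUnit 5 failure
                    fail(result.getFailures().get(0).toString());
                }
            }));
    }
}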

Related

Build gradle kotlin dsl task extension method

I want to modify my build.gradle.kts by implementing some tasks. Specifically, I want to obtain the output of the first task in my second task, where the first task runs a shell command. There are some basic examples here and here, which are implemented in the Groovy DSL. Now, I need this functionality in the Kotlin DSL.
A working example is:
import java.io.ByteArrayOutputStream

task<Exec>("avdIsRunning") {
    commandLine("adb", "devices")
    standardOutput = ByteArrayOutputStream()
}

task("task2") {
    dependsOn("avdIsRunning")
    doLast {
        val standardOutput = (tasks.getByName("avdIsRunning") as Exec).standardOutput.toString()
        println("Foo's output: $standardOutput")
    }
}
What I want is to call an extension method, avdIsRunning.output(), that provides the standardOutput of the avdIsRunning task, comparable to the examples I linked above.

Scala Mock: MockFunction0-1() once (never called - UNSATISFIED)

I'm working on a Scala object in order to perform some testing.
My starting object is as follows:
object obj1 {
  def readvalue: IO[Float] = IO {
    scala.io.StdIn.readFloat()
  }
}
The testing should check that the value:
1- is of type Float
2- is less than 3
As we cannot mock singleton objects, I've used mock functions. Here is what I've done:
class FileUtilitiesSpec
    extends FlatSpec
    with Matchers
    with MockFactory {

  "value" should "be of Type Float" in {
    val alpha = mockFunction[() => IO[Float]]
    alpha.expects shouldBe a[IO[Float]]
  }

  "it" should "be less than 3" in {
    val alpha = mockFunction[() => IO[Float]]
    alpha.expects shouldBe <(3)
  }
}
I'm getting an error saying:
MockFunction0-1() once (never called - UNSATISFIED) was not an instance of cats.effect.IO, but an instance of org.scalamock.handlers.CallHandler0
ScalaTestFailureLocation: util.FileUtilitiesSpec at (FileUtilitiesSpec.scala:16)
Expected :cats.effect.IO
Actual :org.scalamock.handlers.CallHandler0
I would recommend reading the examples here as a starting point: https://scalamock.org/quick-start/
Using mock objects only makes sense if you are planning to use them in some other code, e.g. dependencies you do not want to make part of your module under test, or are beyond your control.
An example might be a database connection where you would depend on an actual system, making the code hard to test without simulating it.
The example you provided only has mocks, but no code using them, hence the error you are getting is absolutely correct. The mocks expect to be used, but were never called.
The desired behaviour for a mocking library in this case is to make the test fail: the developer intended for this interaction with the mock to happen, but it was never recorded, so something is wrong.

JUnit 5 -- how to make test execution dependent on another test passing?

Student here. In JUnit 5, what is the best way to implement conditional test execution based on whether another test succeeded or failed? I presume it would involve ExecutionCondition, but I am unsure how to proceed. Is there a way to do this without having to add my own state to the test class?
To note, I am aware of dependent assertions, but I have multiple nested tests that represent distinct substates, and so I would like a way to do this at the test level itself.
Example:
@Test
void testFooBarPrecondition() { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class FooCase { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class BarCase { ... }
@Nested tests give the test writer more capabilities to express the relationship among several groups of tests. Such nested tests make use of Java’s nested classes and facilitate hierarchical thinking about the test structure. Here’s an elaborate example, both as source code and as a screenshot of the execution within an IDE.
As stated in the JUnit 5 documentation, @Nested is used for convenient display in your IDE. I would rather use Assumptions for your preconditions inside Dependent Assertions:
assertAll(
    () -> assertAll(
        () -> assumeTrue(Boolean.FALSE)
    ),
    () -> assertAll(
        () -> assertEquals(10, 4 + 6)
    )
);
You can fix the problem by extracting common precondition logic into a @BeforeEach/@BeforeAll setup method and then using assumptions, which were developed exactly for the purpose of conditional test execution. Some sample code:
class SomeTest {

    @Nested
    class NestedOne {

        @BeforeEach
        void setUp() {
            boolean preconditionsMet = false;
            // precondition code goes here
            assumeTrue(preconditionsMet);
        }

        @Test // not executed when precondition is not met
        void aTestMethod() {}
    }

    @Nested
    class NestedTwo {

        @Test // executed
        void anotherTestMethod() { }
    }
}
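If you would rather act at the ExecutionCondition level the question mentions, a bare-bones skeleton is sketched below. Note that it still needs some shared state recording whether the precondition test passed, which is exactly what the question hoped to avoid; the static flag here is purely illustrative:

import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical condition: enables annotated tests only once the shared flag is set
public class PreconditionPassedCondition implements ExecutionCondition {

    public static volatile boolean preconditionPassed = false;

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        return preconditionPassed
                ? ConditionEvaluationResult.enabled("precondition test passed")
                : ConditionEvaluationResult.disabled("precondition test did not pass");
    }
}

You would then set the flag at the end of testFooBarPrecondition and register the condition with @ExtendWith(PreconditionPassedCondition.class) on the nested classes.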

ceylon.test.TestRunner fails when tests fail

Whenever a test function (a function annotated with test) contains an assertion that fails, the assertion has the same effect as throwing an exception: no further code lines in that function will be executed. Thus, assert statements in functions that are annotated with test work just like ordinary assert statements in ordinary Ceylon functions. This runs contrary to the documentation, which states that ordinary assert statements can be used for writing unit tests.
Thus, running the code below, I get to see the output from myTests1 but not from myTests2:
import ceylon.test {
    test, TestRunner, createTestRunner
}

test
Anything myTests1() {
    // assert something true!
    assert(40 + 2 == 42);
    print("myTests1");
    return null;
}

test
void myTests2() {
    // assert something false!
    assert(2 + 2 == 54);
    print("myTests2");
}

"Run the module `tests`."
shared void run() {
    print("reached run function");
    TestRunner myTestRunner = createTestRunner(
        [`function myTests1`, `function myTests2`]);
    myTestRunner.run();
}
This is the actual output:
"C:\Program Files\Java\jdk1.8.0_121\bin\java" -Dceylon.system.repo=C:\Users\Jon\.IdeaIC2017.2\config\plugins\CeylonIDEA\classes\embeddedDist\repo "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2017.2.1\lib\idea_rt.jar=56393:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2017.2.1\bin" -Dfile.encoding=windows-1252 -classpath C:\Users\Jon\.IdeaIC2017.2\config\plugins\CeylonIDEA\classes\embeddedDist\lib\ceylon-bootstrap.jar com.redhat.ceylon.launcher.Bootstrap run --run run tests/1.0.0
reached run function
myTests1
Process finished with exit code 0
This is working as intended – replacing those asserts with assertEquals calls has the same effect and prints the same output, because both do exactly the same thing: throw an exception if the assertion fails.
All test frameworks that I’m aware of behave the same way in this situation: an assertion failure results in an exception and thus immediately terminates execution of the test method. This is by design, since you don’t know what your program will do once an expectation has been violated – the rest of the method might depend on that assertion holding true, and might break in unpredictable and confusing ways.
If you’re writing tests like
test
shared void testTwoThings() {
    assertEquals { expected = 42; actual = 40 + 2; };
    assertEquals { expected = 42; actual = 6 * 9; };
}
you’re supposed to write two tests instead.

How can I make my JUnit tests run in random order?

I have the classical structure for tests: a test suite of different suites like DatabaseTests, UnitTests etc. Sometimes those suites contain other suites like SlowDatabaseTests, FastDatabaseTests etc.
What I want is to randomize the running order of tests so that I can make sure they are not dependent on each other. Randomization should happen at every level: a suite should shuffle its test class order, and a test class should shuffle its test method order.
If it is possible to do this in Eclipse, that would be best.
You do have a Sortable but I can't see how you would use it.
You could extend BlockJUnit4ClassRunner and have computeTestMethods() return a randomized copy of super.computeTestMethods(). Then use the @RunWith annotation to set that as the runner to use.
e.g.
package com.stackoverflow.mlk;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class RandomBlockJUnit4ClassRunner extends BlockJUnit4ClassRunner {

    public RandomBlockJUnit4ClassRunner(Class<?> klass)
            throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // copy first: the list returned by super.computeTestMethods() may be unmodifiable
        List<FrameworkMethod> methods = new ArrayList<>(super.computeTestMethods());
        Collections.shuffle(methods);
        return methods;
    }
}
Then
@RunWith(com.stackoverflow.mlk.RandomBlockJUnit4ClassRunner.class)
public class RandomOrder {

    @Test
    public void one() {
    }

    @Test
    public void two() {
    }

    @Test
    public void three() {
    }
}
https://github.com/KentBeck/junit/pull/386 introduces some orders but not RANDOM. Probably you do not really want this; tests should run deterministically. If you need to verify that different permutations of tests still pass, either test all permutations; or, if this would be impractically slow, introduce a “random” seed for shuffling that is determined by an environment variable or the like, so that you can reproduce any failures. http://hg.netbeans.org/main/file/66d9fb12e98f/nbjunit/src/org/netbeans/junit/MethodOrder.java gives an example of doing this for JUnit 3.
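A rough sketch of that seeded approach for JUnit 4 (the runner class and the TEST_SEED variable name are made up for illustration; it builds on the shuffling runner shown above):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class SeededRandomRunner extends BlockJUnit4ClassRunner {

    public SeededRandomRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // Reuse the seed from the environment if set, otherwise pick one and print it
        String fromEnv = System.getenv("TEST_SEED");
        long seed = (fromEnv != null) ? Long.parseLong(fromEnv) : new Random().nextLong();
        System.out.println("Shuffling test methods with seed " + seed);
        List<FrameworkMethod> methods = new ArrayList<>(super.computeTestMethods());
        Collections.shuffle(methods, new Random(seed));
        return methods;
    }
}

Re-running with TEST_SEED set to the printed value reproduces the failing order.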
In general, what you need to do is write your own test runner; in the runner class, aggregate the methods and run each test in random order (making sure you don't run a test twice).
Read more about the test framework and how to write your own test runner here:
http://www.ddj.com/architect/184415674
In JUnit 4.13, to run the tests within a test class in random order, write a small helper class:
import java.util.Random;

import org.junit.runner.manipulation.Ordering;

public class RandomOrder implements Ordering.Factory {

    @Override
    public Ordering create(Ordering.Context context) {
        long seed = new Random().nextLong();
        System.out.println("RandomOrder: seed = " + seed);
        return Ordering.shuffledBy(new Random(seed));
    }
}
Then, annotate your test class with:
@OrderWith(RandomOrder.class)
This way, the test methods of this one class are run in random order. Plus, if they unexpectedly fail, you know the random seed to repeat exactly this order.
I don't know though how to configure this for a whole project or a test suite.
"I will make sure they are not dependent on each other"
You should make sure that this is the case without relying on random execution order. What makes you fear that dependencies may exist?
This issue has been open on the JUnit GitHub for 2 years, and it points out 2 independent issues:
- Tests depending on the execution order;
- Non-repeatable tests.
Consider addressing the issue at the root, rather than trying to use the framework to do the job afterwards. Use setUp and tearDown methods to guarantee isolation, and test at the smallest level.
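For instance (a sketch, with a made-up Account class standing in for whatever you test), a fresh fixture per test keeps tests order-independent:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class AccountTest {

    private Account account;   // hypothetical class under test

    @Before
    public void setUp() {
        account = new Account();   // every test starts from the same clean state
    }

    @After
    public void tearDown() {
        account = null;            // nothing leaks into the next test
    }

    @Test
    public void startsWithZeroBalance() {
        assertEquals(0, account.balance());
    }
}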
Here is a solution with Gradle and JUnit 5.8.0
Step 1: Ensure that you have the latest JUnit version as a dependency.
Step 2: Define the required properties in the test section of build.gradle:
test {
    useJUnitPlatform()
    systemProperties([
        // Random at method level
        'junit.jupiter.testmethod.order.default': 'org.junit.jupiter.api.MethodOrderer$Random',
        // Random at class level
        'junit.jupiter.testclass.order.default' : 'org.junit.jupiter.api.ClassOrderer$Random',
        // Log configuration to see the seed
        'java.util.logging.config.file'         : file('src/test/resources/logging.properties')
    ])
    // To print the JUnit logs in the console
    testLogging {
        events "passed", "skipped", "failed", "standardOut", "standardError"
    }
}
Step 3: Define logging.properties under src/test/resources
.level=CONFIG
java.util.logging.ConsoleHandler.level=CONFIG
org.junit.jupiter.api.ClassOrderer$Random.handlers=java.util.logging.ConsoleHandler
org.junit.jupiter.api.MethodOrderer$Random.handlers=java.util.logging.ConsoleHandler
Step 4: Run the tests: gradlew clean test
You can see the seed used for the random ordering in the console:
CONFIG: ClassOrderer.Random default seed: 65423695211256721
CONFIG: MethodOrderer.Random default seed: 6542369521653287
In case of a flaky test, you can reproduce it by configuring the same seed that was in effect when the JUnit tests failed:
systemProperties([
    'junit.jupiter.execution.class.order.random.seed': '65423695211256721',
    'junit.jupiter.execution.order.random.seed'      : '6542369521653287'
])
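As an aside, if you only want random ordering for particular classes rather than project-wide, the same orderers can be applied per class with annotations (JUnit 5.8+); a minimal sketch:

import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestClassOrder;
import org.junit.jupiter.api.TestMethodOrder;

// Test methods of this class run in random order
@TestMethodOrder(MethodOrderer.Random.class)
// @Nested classes of this class run in random order
@TestClassOrder(ClassOrderer.Random.class)
class RandomizedOrderTest {

    @Test
    void first() { }

    @Test
    void second() { }

    @Nested
    class GroupA {
        @Test
        void inGroupA() { }
    }

    @Nested
    class GroupB {
        @Test
        void inGroupB() { }
    }
}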
References: how-to-randomize-tests-in-junit, Random