ceylon.test.TestRunner fails when tests fail - ceylon

Whenever a test function (a function annotated with test) contains an assertion that fails, the failure has the same effect as throwing an exception: no further lines of code in that function are executed. Thus, assert statements in functions annotated with test work just like ordinary assert statements in ordinary Ceylon functions. This seems to run contrary to the documentation, which states that ordinary assert statements can be used for writing unit tests.
Thus, running the code below, I see the output myTests1 but not myTests2:
import ceylon.test {
    test, TestRunner, createTestRunner
}

test
Anything myTests1() {
    // assert something true!
    assert(40 + 2 == 42);
    print("myTests1");
    return null;
}

test
void myTests2() {
    // assert something false!
    assert(2 + 2 == 54);
    print("myTests2");
}

"Run the module `tests`."
shared void run() {
    print("reached run function");
    TestRunner myTestRunner = createTestRunner(
        [`function myTests1`, `function myTests2`]);
    myTestRunner.run();
}
This is the actual output:
"C:\Program Files\Java\jdk1.8.0_121\bin\java" -Dceylon.system.repo=C:\Users\Jon\.IdeaIC2017.2\config\plugins\CeylonIDEA\classes\embeddedDist\repo "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2017.2.1\lib\idea_rt.jar=56393:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2017.2.1\bin" -Dfile.encoding=windows-1252 -classpath C:\Users\Jon\.IdeaIC2017.2\config\plugins\CeylonIDEA\classes\embeddedDist\lib\ceylon-bootstrap.jar com.redhat.ceylon.launcher.Bootstrap run --run run tests/1.0.0
reached run function
myTests1
Process finished with exit code 0

This is working as intended: replacing those asserts with assertEquals calls has the same effect and prints the same output, because both do exactly the same thing, namely throw an exception if the assertion fails.
All test frameworks that I'm aware of behave the same way in this situation: an assertion failure results in an exception and thus immediately terminates execution of the test method. This is by design: you don't know what your program will do once an expectation has been violated; the rest of the method might depend on that assertion holding true and might break in unpredictable and confusing ways.
If you’re writing tests like
test
shared void testTwoThings() {
    assertEquals { expected = 42; actual = 40 + 2; };
    assertEquals { expected = 42; actual = 6 * 9; };
}
you’re supposed to write two tests instead.
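The "assertion failure is an exception" semantics described above can be demonstrated in plain Java (a hedged sketch, not Ceylon; the names and the manual AssertionError throws are illustrative): once the second check fails and throws, the statements after it never run, which is exactly why a test with two assertions should be split in two.

```java
public class AssertStopsExecution {

    // Collects which statements actually executed inside the "test".
    static String runTest() {
        StringBuilder log = new StringBuilder();
        try {
            log.append("before;");
            if (40 + 2 != 42) throw new AssertionError(); // passes
            log.append("first;");
            if (6 * 9 != 42) throw new AssertionError();  // fails: throws here
            log.append("second;");                        // never reached
        } catch (AssertionError e) {
            // a test runner would record the failure here and move on
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runTest()); // prints "before;first;"
    }
}
```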

Related

C++/WinRT: how to throw and handle an exception without terminating the program

I have the following code:
IAsyncOperation<bool> trythiswork()
{
    bool contentFound{ false };
    try
    {
        auto result = co_await someAsyncFunc();
        winrt::check_bool(result);
        if (result)
        {
            contentFound = true;
        }
    }
    catch (...)
    {
        LOG_CAUGHT_EXCEPTION();
    }
    co_return contentFound;
}
When the result is false, check_bool throws as expected, but the catch leads to a fail-fast and the program terminates. How does the logging function terminate the program? Isn't it supposed to only log the exception? I assumed I was handling this exception, so the program wouldn't crash, but it is crashing.
So how can I throw and catch so that the program does not terminate? I do want to throw, and also to catch and preferably log the exception as well.
Thanks
The issue can be reproduced using the following code:
IAsyncOperation<bool> someAsyncFunc() { co_return false; }

IAsyncOperation<bool> trythiswork()
{
    auto contentFound{ false };
    try
    {
        auto result = co_await someAsyncFunc();
        winrt::check_bool(result);
        // throw std::bad_alloc {};
        contentFound = true;
    }
    catch (...)
    {
        LOG_CAUGHT_EXCEPTION();
    }
    co_return contentFound;
}

int main()
{
    init_apartment();
    auto result = trythiswork().get();
}
As it turns out, everything works as advertised, even if not as intended. When running the code with a debugger attached you will see the following debug output:
The exception %s (0x [trythiswork]
Not very helpful, but it shows that logging itself works. This is followed up by something like
FailFast(1) tid(b230) 8007023E {Application Error}
causing the process to terminate. The WIL only recognizes exceptions of type std::exception, wil::ResultException, and Platform::Exception^. When it handles an unrecognized exception type it will terminate the process by default. This can be verified by commenting out the call to check_bool and instead throwing a standard exception (such as std::bad_alloc). This produces a program that will log exception details, but continue to execute.
The behavior can be customized by registering a callback for custom exception types, giving clients control over translating between custom exception types and HRESULT values. This is useful in cases where WIL needs to interoperate with external library code that uses its own exception types.
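The design described above can be illustrated with a small, hedged sketch in Java (WIL's actual API and names differ; Translator, handle, and the error code used here are all hypothetical): registered callbacks get a chance to map an exception to an error code, and an unrecognized exception type triggers the fail-fast default.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class ExceptionTranslatorSketch {

    // A callback that either translates an exception to an error code
    // or declines by returning an empty Optional.
    interface Translator extends Function<Throwable, Optional<Integer>> {}

    static final List<Translator> translators = new ArrayList<>();

    static int handle(Throwable t) {
        for (Translator tr : translators) {
            Optional<Integer> code = tr.apply(t);
            if (code.isPresent()) {
                return code.get(); // recognized: log and continue with this code
            }
        }
        // No registered callback recognized the type: fail fast by default.
        throw new IllegalStateException("fail fast: unrecognized exception type");
    }

    public static void main(String[] args) {
        // Register a translator for one known type (the code 87 is illustrative).
        translators.add(t -> t instanceof IllegalArgumentException
                ? Optional.of(87)
                : Optional.empty());
        System.out.println(handle(new IllegalArgumentException())); // prints 87
    }
}
```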
For C++/WinRT exception types (based on hresult_error) the WIL already provides error handling helpers that can be enabled (see Integrating with C++/WinRT). To opt into this all you need to do is to #include <wil/cppwinrt.h> before any C++/WinRT headers. When using precompiled headers that's where the #include directive should go.
With that change, the program now works as desired: It logs exception information for exceptions that originate from C++/WinRT, and continues to execute after the exception has been handled.

Can I somehow declare that a test expects to have unhandled rejections?

I have a library that imposes "metering" on some code. The normal case is easy to test where the code runs to completion and the meter is not exceeded.
I also want to test the case where the meter is exceeded, causing the code under test to throw forever until it unwinds to the caller. It is easy to test that the code only executes the parts I expect. However, in the case of promises, the exceeded meter causes some unhandled rejections in the code under test, as expected.
AVA currently fails my test run with:
1 test passed
1 unhandled rejection
Effectively, the test reduces to:
const expectedLog = ['a'];

test('unhandled rejection', async t => {
    const log = [];
    // I don't have control over the code in f():
    const f = async () => {
        log.push('a');
        // This is a side-effect, but fails the test.
        Promise.reject(RangeError('metering'));
        throw RangeError('metering');
    };
    await t.throwsAsync(() => f(), { instanceOf: RangeError });
    t.deepEqual(log, expectedLog);
});
Is there any way for me to tell AVA that I expected the 1 unhandled rejection?
I cannot modify the code under test (the f function above), as the whole point is to test that metering stops the code dead in its tracks no matter what the untrusted code does.
Is there any way for me to tell AVA that I expected the 1 unhandled rejection?
No. Unhandled rejections are indicative of bugs and therefore AVA will fail the test run. As AVA's maintainer I don't think there's enough reason to change that behavior.
I cannot modify the code under test (the f function above), as the whole point is to test that metering stops the code dead in its tracks no matter what the untrusted code does.
Rejecting a promise that is then garbage collected won't impact the untrusted code. If you're in an async function, you can use throw directly.
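As a hedged analogy in Java (the question itself is about JavaScript/AVA; CompletableFuture merely plays the role of a promise here): a failed future that nobody joins is the analogue of an unhandled rejection, while an exception thrown inside the computation is observed by the caller that awaits it.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class UnobservedFailureSketch {
    public static void main(String[] args) {
        // Orphaned failure: completed exceptionally, never joined.
        // Nothing observes it -- analogous to an unhandled rejection.
        CompletableFuture<Void> orphan = new CompletableFuture<>();
        orphan.completeExceptionally(new RuntimeException("metering"));

        // Thrown failure: the caller that joins the future sees the exception.
        CompletableFuture<Object> thrown = CompletableFuture.supplyAsync(() -> {
            throw new RuntimeException("metering");
        });
        try {
            thrown.join();
        } catch (CompletionException e) {
            System.out.println("caller observed: " + e.getCause().getMessage());
        }
    }
}
```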

Some calls cause stack unwinding, though no C++ exception is thrown

I use the Visual Studio Native Unit Test Framework for C++. When an assertion fails, the next statements aren't executed and local objects' destructors are called, so it seems as if an exception were thrown, but I can't catch any C++ exception with a catch (...) clause. After some experiments, I noticed that a __int2c() call (which triggers the 2c interrupt, according to the documentation), for example, has the same effect. Until now I was aware only of exceptions behaving this way. Could you give me a hint about what the reason might be in this case?
UPDATE:
Here is a code sample
void func()
{
    struct Foo
    {
        ~Foo()
        {
            // this code is executed
        }
    };
    Foo foo;

    try
    {
        Assert::IsTrue(false);
    }
    catch (...)
    {
        // this code is not executed
    }
    // this code is not executed
}

JUnit 5 -- how to make test execution dependent on another test passing?

Student here. In JUnit 5, what is the best way to implement conditional test execution based on whether another test succeeded or failed? I presume it would involve ExecutionCondition, but I am unsure how to proceed. Is there a way to do this without having to add my own state to the test class?
Note that I am aware of dependent assertions, but I have multiple nested tests that represent distinct substates, so I would like a way to do this at the test level itself.
Example:
@Test
void testFooBarPrecondition() { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class FooCase { ... }

// only execute if testFooBarPrecondition succeeds
@Nested
class BarCase { ... }
@Nested tests give the test writer more capabilities to express the relationship among several groups of tests. Such nested tests make use of Java's nested classes and facilitate hierarchical thinking about the test structure. Here's an elaborate example, both as source code and as a screenshot of the execution within an IDE.
As stated in the JUnit 5 documentation, @Nested is used for convenient display in your IDE. I would rather use Assumptions for your preconditions inside Dependent Assertions:
assertAll(
    () -> assertAll(
        () -> assumeTrue(Boolean.FALSE)
    ),
    () -> assertAll(
        () -> assertEquals(10, 4 + 6)
    )
);
You can fix the problem by extracting the common precondition logic into a @BeforeEach/@BeforeAll setup method and then using assumptions, which were developed exactly for the purpose of conditional test execution. Some sample code:
class SomeTest {

    @Nested
    class NestedOne {
        @BeforeEach
        void setUp() {
            boolean preconditionsMet = false;
            // precondition code goes here
            assumeTrue(preconditionsMet);
        }

        @Test // not executed when precondition is not met
        void aTestMethod() {}
    }

    @Nested
    class NestedTwo {
        @Test // executed
        void anotherTestMethod() { }
    }
}
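A minimal sketch of the assumption semantics in plain Java (simplified; in real JUnit 5 the exception is org.opentest4j.TestAbortedException and the engine does the reporting): a failed assumption throws a special exception that the engine treats as "skipped" rather than "failed", which is why the test methods after a failed @BeforeEach assumption never run.

```java
public class AssumptionSketch {

    // Stand-in for JUnit 5's TestAbortedException.
    static class TestAbortedException extends RuntimeException {
        TestAbortedException(String message) { super(message); }
    }

    static void assumeTrue(boolean condition) {
        if (!condition) {
            throw new TestAbortedException("assumption failed");
        }
    }

    // Simulates what a test engine does around a test body.
    static String runTest(boolean preconditionsMet) {
        try {
            assumeTrue(preconditionsMet);
            return "executed";           // the test body would run here
        } catch (TestAbortedException e) {
            return "skipped";            // reported as aborted, not failed
        }
    }

    public static void main(String[] args) {
        System.out.println(runTest(false)); // prints "skipped"
        System.out.println(runTest(true));  // prints "executed"
    }
}
```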

JUnit - How to make test case pass if it times out?

I ran into a dilemma with making a test pass if it times out.
@Test(timeout = 1, expected = Exception.class)
public void testMovesToSolveMaximum() {
    PuzzleSolver pS = createSimplePuzzleSolver(maximumPuzzleStateA, maximumPuzzleStateB);
    PuzzleState goal = new SimplePuzzleState();
    goal.configureState(maximumPuzzleStateB);
    checkThatComputedSolutionIsCorrect(pS, goal);
}
However, the test case fails due to the timeout, even though I specified that this is the expected result.
If I understand the question correctly, you are observing this behavior because of the way the default JUnit runner evaluates the whole test:
After noticing that a timeout is set on your test method, the runner executes it in a different thread and waits for the result. As the timeout in your example is set to 1 ms, I believe it is reached before the test actually finishes, which makes the runner throw the timeout exception (which is indeed a java.lang.Exception) that you expected to be caught by the expected attribute of the Test annotation. But the expected attribute only evaluates exceptions thrown from the test method itself, not from the timeout-checking mechanism. In other words, the expected-exception mechanism does not work for a timeout exception thrown by the framework rather than by the test.
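A hedged sketch of such a timeout mechanism in plain Java (simplified; runWithTimeout is hypothetical, not JUnit's implementation): the test body runs on a separate thread, and on timeout the exception is raised by the runner itself, so it never passes through the test method where an expected-exception check could apply.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutRunnerSketch {

    static String runWithTimeout(Runnable testBody, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> result = pool.submit(testBody);
        try {
            result.get(millis, TimeUnit.MILLISECONDS);
            return "passed";
        } catch (TimeoutException e) {
            // Thrown by the runner's waiting code, not by the test method.
            return "failed: test timed out";
        } catch (Exception e) {
            // An exception from the test body itself surfaces here, wrapped.
            return "failed: " + e.getCause();
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A "test" that sleeps far longer than its 1 ms budget.
        System.out.println(runWithTimeout(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
        }, 1)); // prints "failed: test timed out"
    }
}
```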
You can explore this yourself, starting in the BlockJUnit4ClassRunner class in JUnit (a relevant part to start from; note: it is not so easy to go over the code and understand the flow):
protected Statement methodBlock(FrameworkMethod method) {
    Object test;
    try {
        test = new ReflectiveCallable() {
            @Override
            protected Object runReflectiveCall() throws Throwable {
                return createTest();
            }
        }.run();
    } catch (Throwable e) {
        return new Fail(e);
    }
    Statement statement = methodInvoker(method, test);
    statement = possiblyExpectingExceptions(method, test, statement);
    statement = withPotentialTimeout(method, test, statement);
    statement = withBefores(method, test, statement);
    statement = withAfters(method, test, statement);
    statement = withRules(method, test, statement);
    return statement;
}