ava: Logs generated outside tests are not shown in the console

My problem
ava logging (t.log) only works inside a test, not during setup (before, beforeEach) or teardown (after*) hooks.
This means that meaningful setup/teardown data, which is very useful for debugging and reproducing failures, is lost. This happens for both successful and failed tests, with and without the --verbose flag.
Code
import test from 'ava';

test.before(t => {
  // This runs before all tests
  t.log('before - 1');
});

test.before(t => {
  // This runs after the above, but before tests
  t.log('before - 2');
});

test.after('cleanup', t => {
  // This runs after all tests
  t.log('after');
});

test.after.always('guaranteed cleanup', t => {
  // This will always run, regardless of earlier failures
  t.log('after always');
});

test.beforeEach(t => {
  // This runs before each test
  t.log('beforeEach');
});

test.afterEach(t => {
  // This runs after each test
  t.log('afterEach');
});

test.afterEach.always(t => {
  // This runs after each test and other test hooks, even if they failed
  t.log('afterEachAlways');
});

test(t => {
  t.log('A test');
  t.pass();
});

test(t => {
  t.log('A test');
  t.fail();
});
Output
$ ava run.js --verbose
✔ [anonymous]
ℹ A test
✖ [anonymous] Test failed via `t.fail()`
ℹ A test
1 test failed [00:22:08]
[anonymous]
ℹ A test
/Users/adam/Personal/tmp/ava-bug-log-in-before-each/run.js:46
45: t.log('A test');
46: t.fail();
47: });
Test failed via `t.fail()`
Note that only the printouts from the test (A test) are shown. All other logs are lost.
My question
How can I see the logs from the setup and teardown steps of the test suite?

Could you open an issue for this? https://github.com/avajs/ava/issues
I agree this should work.
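In the meantime, a possible workaround, assuming plain terminal output is acceptable: console.log from inside hooks is not tied to a test result the way t.log is, so its output should still reach the terminal. A minimal sketch:

import test from 'ava';

test.before(t => {
  // Workaround sketch: use plain console output in hooks,
  // since t.log output from hooks is currently dropped.
  console.log('before - 1');
});

test('a test', t => {
  t.log('A test'); // t.log works as usual inside tests
  t.pass();
});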

Related

Same Cypress tests reporting different results on different machines using Chrome & Electron

My colleague & I are running the same Cypress test suite on our machines, but getting different results.
The version of Cypress we are using is 3.8.3.
When they run .\node_modules\.bin\cypress run, all tests are passing.
But when I try to run the same command on my machine, one of the tests is failing.
I get the below error message:
<failure message="cy.type() can only be called on a single element.
Your subject contained 8 elements." type="CypressError">
<![CDATA[CypressError: cy.type() can only be called on a single element. Your subject contained 8 elements.
I can understand what the test is saying, but I don't know why we are getting different results on different machines when running the same tests.
One difference I can spot is that they have the option to run tests on Chrome, while I only have the option to run on Electron.
Can someone please help explain what is causing this issue, & how it can be resolved?
Cypress automatically detects browsers installed on your local machine, so please check whether you have Chrome installed. Electron comes bundled with Cypress itself.
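Also, to make sure both machines exercise the same browser, you can select it explicitly on the command line; cypress run accepts a --browser flag:

.\node_modules\.bin\cypress run --browser chrome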
The application behavior can vary from machine to machine and from browser to browser, depending on configuration and network speed. So it's always good to use Test Retries, which retry a test the defined number of times before finally marking it as failed. Test retries were introduced in Cypress 5.0.
You can apply them globally through your cypress.json
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 2,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 0
  }
}
Or, you can also add it to a specific test -
describe('User sign-up and login', () => {
  // `it` test block with no custom configuration
  it('should redirect unauthenticated user to sign-in page', () => {
    // ...
  })

  // `it` test block with custom configuration
  it(
    'allows user to login',
    {
      retries: {
        runMode: 2,
        openMode: 1,
      },
    },
    () => {
      // ...
    }
  )
})
Or to a specific test suite as well -
// Customizing retry attempts for a suite of tests
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})

Issue listening for custom event via puppeteer

I am currently working on a GitLab CI test environment, and I have a test harness which we use to test our SDK. I have set up a custom event that is fired on the page to designate the end of the test run. In my puppeteer implementation I want to listen for this custom event "TEST_COMPLETE".
I have not been successful in getting this to work, so I figured I would at least make sure the custom-event.js example in the puppeteer repo worked; there too I am not seeing what I believe I should be getting. I cloned the main repo below and performed an npm install. When I execute the js test below, setting headless: false and not closing the browser, I do not see any log in the console that shows any custom event being fired.
It is my understanding that I should see some console event message with 'fired' and then 'app-ready' event and info, but this is not the case. Even if I interact with the page I don't see anything outside of some 'features_loaded' and 'features_unveil' logs.
https://github.com/puppeteer/puppeteer/blob/main/examples/custom-event.js
Is anyone able to get the expected behavior from this code today? Not sure if this worked previously and has broken since, or if I am just doing something wrong. Any info would be of great help, thanks!
Not sure if this is what you need, but I can get the message 'TEST_COMPLETE fired.' in the Node.js console with this simplified code (puppeteer 8.0.0):
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
try {
  const [page] = await browser.pages();
  await page.goto('https://example.org/');
  await page.exposeFunction('onCustomEvent', async (type) => {
    console.log(`${type} fired.`);
    await browser.close();
  });
  await page.evaluate(() => {
    document.addEventListener('TEST_COMPLETE', (e) => {
      window.onCustomEvent('TEST_COMPLETE');
    });
    document.dispatchEvent(new Event('TEST_COMPLETE'));
  });
} catch (err) { console.error(err); }
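If you instead need Node.js to wait for an event the page fires later (rather than dispatching it yourself, as above), one pattern is to resolve a Node-side promise from the exposed function. A sketch under that assumption; notifyTestComplete is a name made up for this example:

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const [page] = await browser.pages();

// A promise that the exposed function resolves when the event arrives.
let resolveComplete;
const testComplete = new Promise((resolve) => { resolveComplete = resolve; });
await page.exposeFunction('notifyTestComplete', () => resolveComplete());

await page.goto('https://example.org/');

// Register the listener in the page before the harness fires the event.
await page.evaluate(() => {
  document.addEventListener('TEST_COMPLETE', () => window.notifyTestComplete());
});

// ... the test harness runs and eventually dispatches TEST_COMPLETE ...
await testComplete;
await browser.close();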

How to get full stack trace in Test Cafe errors

I have been using Test Cafe to write an internal test framework where actions (t.click) and assertions (t.expect) are not directly written inside the spec, but are defined and aggregated in other files.
Everything is fine until a test fails: in that case the TestCafe reporter writes to the console the failed assertion/action and the relevant snippet of code, but I did not find a way to see the full stack of function calls from my test down to the failed assertion.
How can I make the reporter provide a full stack trace, logging all the function calls that led to the failure?
I understand the reason is likely linked to how async/await is transpiled into generators: the stack trace of the error shows only the last await executed, not all the previous calls.
<section> ... </section>
<section class="section--modifier">
  <h1> ... </h1>
  <div>
    ...
    <button class="section__button">
      <div class="button__label">
        <span class="label__text">Hello!</span> <-- Target of my test
      </div>
    </button>
    ...
  </div>
</section>
<section> ... </section>
//
// My spec file
//
import { Selector } from 'testcafe';
import {
  verifyButtonColor
} from './button';

fixture`My Fixture`
  .page`....`;

test('Test my section', async (t) => {
  const MySection = Selector('.section--modifier');
  const MyButton1 = MySection.find('.section__button');
  const MySection2 = Selector('.section--modifier2');
  const MyButton2 = MySection2.find('.section__button');
  ....
  await verifyButtonColor(t, MyButton1, 'green'); // it will fail!
  ....
  ....
  ....
  await verifyButtonColor(t, MyButton2, 'green');
});
//
// Definition of assertion verifyButtonColor (button.js)
//
import { Selector } from 'testcafe';
import {
  verifyLabelColor
} from './label';

export async function verifyButtonColor(t, node, expectedColor) {
  const MyLabel = node.find('.button__label');
  await verifyLabelColor(t, MyLabel, expectedColor);
}
//
// Definition of assertion verifyLabelColor (label.js)
//
export async function verifyLabelColor(t, node, expectedColor) {
  const MyText = node.find('.label__text');
  const color = await MyText.getStyleProperty('color');
  await t.expect(color).eql(expectedColor, `Color should be ${expectedColor}, found ${color}`); // <-- it will FAIL!
}
What I do get in the reporter is that my test failed because the assertion defined in "verifyLabelColor" failed (the color is not green :(),
...
await t.expect(color).eql(expectedColor, `Color should be ${expectedColor}, found ${color}`);
...
but the reporter gives no evidence that it failed via the following stack of calls:
- await verifyButtonColor(t, MyButton1, 'green');
- await verifyLabelColor(t, MyLabel, expectedColor);
- await t.expect(color).eql(expectedColor, `Color should be ${expectedColor}, found ${color}`);
Has anybody faced a similar problem?
An alternative could be to log the "path" of the selector that caused the failure, but looking at the TestCafe documentation I did not find a way to do it: knowing that the assertion failed on the element with the path below could at least help to understand what went wrong
.section--modifier .section__button .button__label .label__text
This subject is related to the TestCafe proposal: Have a multiple stacktrace reporter for fast analysis when a test fails
In the meantime you could give a try to the testcafe-reporter-cucumber-json reporter, or maybe you could develop your own reporter.
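As a stopgap within your own helpers, you could also thread a breadcrumb trail through the helper calls and attach it to the assertion message, so the reporter at least shows the path that failed. A sketch under that assumption (the trail parameter is an invention of this example, not a TestCafe API):

// button.js
import { verifyLabelColor } from './label';

export async function verifyButtonColor(t, node, expectedColor, trail = []) {
  trail.push('verifyButtonColor');
  const MyLabel = node.find('.button__label');
  await verifyLabelColor(t, MyLabel, expectedColor, trail);
}

// label.js
export async function verifyLabelColor(t, node, expectedColor, trail = []) {
  trail.push('verifyLabelColor');
  const MyText = node.find('.label__text');
  const color = await MyText.getStyleProperty('color');
  await t.expect(color).eql(
    expectedColor,
    `Color should be ${expectedColor}, found ${color} (via ${trail.join(' -> ')})`
  );
}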

TeamCity no longer running Protractor/Jasmine test via Gulp

We had a successful test suite running under TeamCity (Protractor/Jasmine/JS). However, we can now get no further than the first build step,
npm install
Very quickly after the test suite tries to start, build step two fails. This is gulpfile.js:
var gulp = require("gulp");
var gulpProtractorAngular = require("gulp-angular-protractor");

gulp.task("runtest", callback => {
  gulp
    .src(["SmokeTest.js"])
    .pipe(gulpProtractorAngular({
      configFile: "SmokeTest.js",
      debug: false,
      autoStartStopServer: true
    }))
    .on("error", e => {
      console.log(e);
    })
    .on("end", callback);
});
The only change between a working state and now is that we've added a few more specs. The whole suite runs just fine locally.
I've downloaded the Build Log from a successful run and from a failing run, and the ONLY difference, apart from the error notification, is this message:
[Step 2/3] [Step 2/3] [17:31:06] The following tasks did not complete: runtest
[Step 2/3] [17:31:06] Did you forget to signal async completion?
So gulpfile.js might be the culprit, but I don't understand why, or what change would fix it!
Help please!
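For what it's worth, gulp's "Did you forget to signal async completion?" message means the runtest task neither returned its stream nor invoked callback on every path. In the gulpfile above, the "error" handler only logs, so when a spec errors the "end" event never fires and the task never completes. A hedged sketch of one way to signal completion on both paths (unverified against this exact setup):

var gulp = require("gulp");
var gulpProtractorAngular = require("gulp-angular-protractor");

gulp.task("runtest", callback => {
  gulp
    .src(["SmokeTest.js"])
    .pipe(gulpProtractorAngular({
      configFile: "SmokeTest.js",
      debug: false,
      autoStartStopServer: true
    }))
    .on("error", e => {
      console.log(e);
      callback(e); // signal completion (as a failure) instead of swallowing the error
    })
    .on("end", callback);
});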

Cannot run Mocha.js in synchronous mode

I am testing stored procedures with mocha running in a nodejs instance. I have this test skeleton:
var chai = require('chai'),
    MyReporter = require("../MyReporter.js"),
    chokidar = require('chokidar'),
    expect = chai.expect,
    should = chai.should,
    assert = chai.assert;

var Mocha = require('mocha');

var amochai = new Mocha({
  bail: false,
  debug: true
});
amochai.addFile("mytest_v1.js");

function runMocha(callback) {
  amochai.run(function () {
    callback();
  });
}

// watcher is a chokidar watcher on the test file (path assumed here)
var watcher = chokidar.watch("mytest_v1.js");
watcher.on('change', function(path, stats) {
  runMocha(function () {});
});
Problem: My tests are always run in an asynchronous mode, although all my tests are written like this:
describe('Mysql stored procedures', function(){
  describe('Add this data', function(){
    it('-- Should return this information', function(){
      // asserts
    });
  });
});
There is no done() callback anywhere, so, given that mocha.js is said everywhere to be synchronous by default, what could be the reason my code runs in an asynchronous mode?
PATCH
To patch my problem, I had to use before() and check my tests' state, but this becomes a nightmare to maintain.
You are running operations that are asynchronous in your synchronous mocha tests.
Saying a Mocha test is "synchronous" is ambiguous. It could mean:
The operation tested by the test happens synchronously.
Mocha handles the test in a synchronous way.
The two are not equivalent; one does not entail the other. By default Mocha handles all tests synchronously. We can make it handle a test asynchronously by adding a parameter to the callback we pass to it (or its equivalent in other test interfaces). (Later versions of Mocha added another way to make Mocha handle a test asynchronously: return a promise. But I'm going to use callbacks for the following examples.) So we can have 4 combinations of synchronicity:
Operation synchronous, Mocha test synchronous.
it("reads foo", function () {
fs.readFileSync("foo");
// ssert what we want to assert.
});
No problem.
Operation synchronous, Mocha test asynchronous.
it("reads foo", function (done) {
fs.readFileSync("foo");
// Assert what we want to assert.
done();
});
It is pointless to have the Mocha test be asynchronous, but no problem.
Operation asynchronous, Mocha test asynchronous.
it("reads foo", function (done) {
fs.readFile("foo", function () {
// Assert what we want to assert.
done();
});
});
No problem.
Operation asynchronous, Mocha test synchronous.
it("reads foo", function () {
fs.readFile("foo", function () {
// Assert what we want to assert.
});
});
This is a problem. Mocha will return right away from the test callback and call the test successful (assuming fs.readFile does not raise an exception). The asynchronous operation is still scheduled, and the callback may still run later. One important point here: Mocha does not have the power to make the operations it tests synchronous. Making the Mocha test synchronous has no effect on the operations in the test: if they are asynchronous, they remain asynchronous, no matter what we tell Mocha.
This last case, in your system, would cause the execution of the stored procedures to be queued with the DB system. If this queuing happens without error, Mocha finishes right away. Then, if there is a file change, your watcher launches another Mocha run, more operations are queued, and so on.
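So the fix is to make the Mocha tests asynchronous to match the asynchronous DB operations. A sketch under that assumption; callStoredProcedure is a hypothetical stand-in for however you invoke the procedure:

describe('Mysql stored procedures', function () {
  describe('Add this data', function () {
    it('-- Should return this information', function (done) {
      // callStoredProcedure is hypothetical; substitute your actual DB call.
      callStoredProcedure('add_this_data', function (err, rows) {
        if (err) return done(err);
        // asserts on rows go here
        done();
      });
    });
  });
});

With a promise-based DB client you could instead return the promise from the it callback, which later Mocha versions also treat as asynchronous completion.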