AVA afterEach() callback not completing

It seems that if I do something asynchronous (or just slow?) from an afterEach.always() hook, the process won't wait for it to complete when the test fails. My specific case is that I'm trying to do some database cleanup after each test, and even though the afterEach hook gets called, it doesn't seem to be given a chance to finish before the process exits.
I've tried to distill this down to a simple example:
import test from 'ava'

test.afterEach.always('CLEANUP', async t => {
  console.log('START', t.title)
  await new Promise(resolve => setTimeout(resolve, 5000))
  console.log('END', t.title)
})

test('Test', async t => {
  console.log('START', t.title)
  t.plan(1)
  t.is(1, 2)
  console.log('END', t.title)
})
This is the output I get:
START Test
END Test
✖ Test 1 === 2
START CLEANUP for Test
1 test failed [10:42:18]
1. Test
AssertionError: 1 === 2
_callee2$ (cleanup-test.js:15:5)
Test.fn (cleanup-test.js:11:1)
I would expect to see END CLEANUP printed (that does happen when the test passes).
I'm new to server-side JS so I'm hoping that I'm just doing something silly.
UPDATE: It turns out that this only breaks when the fail-fast flag is set. That's probably a bug.

Related

Kafkajs - `The group is rebalancing, so a rejoin is needed` error causes message to be consumed more than once

I have an edge case with a KafkaJS consumer, where at times I get a rebalancing error:
The group is rebalancing, so a rejoin is needed
[Connection] Response Heartbeat(key: 12, version: 3)
The group is rebalancing, so a rejoin is needed
[Runner] The group is rebalancing, re-joining
Then once the consumer group is rebalanced, the last message that was processed is processed again, as a commit did not occur due to the error.
Kafka consumer initialization code:
import { Consumer, Kafka } from 'kafkajs';

const kafkaInstance = new Kafka({
  clientId: 'some_client_id',
  brokers: ['brokers list'],
  ssl: true
});

const kafkaConsumer = kafkaInstance.consumer({ groupId: 'some_consumer_group_id' });

await kafkaConsumer.connect();
await kafkaConsumer.subscribe({ topic: 'some_topic', fromBeginning: true });
await kafkaConsumer.run({
  autoCommit: false, // cancel auto commit in order to control committing
  eachMessage: ... // some processing function
});
I increased sessionTimeout and heartbeatInterval to higher values in different combinations, but under heavy message load I still get the error.
I added a call to the heartbeat function inside the eachMessage function, which seems to resolve the issue.
But I was wondering whether this is considered "good practice", or if there is something else I can do on the consumer side to prevent such errors?
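For reference, a minimal sketch of what calling heartbeat inside eachMessage can look like, assuming a recent KafkaJS version (where the eachMessage payload includes a heartbeat callback). The handleMessage function and the offset helper are illustrative, not from the original code:

```javascript
// KafkaJS commits "the next offset to read", hence offset + 1 (offsets are strings).
function nextOffset(offset) {
  return (Number(offset) + 1).toString();
}

// handleMessage is a stand-in for the actual processing function.
async function runConsumer(kafkaConsumer, handleMessage) {
  await kafkaConsumer.run({
    autoCommit: false, // manual commits, as in the original setup
    eachMessage: async ({ topic, partition, message, heartbeat }) => {
      await handleMessage(message);
      // Signal liveness before committing, so slow processing under heavy
      // load doesn't let the session time out and trigger a rebalance.
      await heartbeat();
      await kafkaConsumer.commitOffsets([
        { topic, partition, offset: nextOffset(message.offset) },
      ]);
    },
  });
}
```

Committing only after both processing and heartbeat succeed keeps the failure mode as "reprocess the last message" rather than "lose it", which matches the at-least-once behavior described in the question.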

Validate JSON in Gatling performance test

I just wonder if I can validate the value of a certain field in a JSON response using Gatling?
Currently the code only checks whether a field is present in the JSON response, as follows:
val searchTransaction: ChainBuilder = exec(http("search Transactions")
  .post(APP_VERSION + "/transactionDetals")
  .headers(BASIC_HEADERS)
  .headers(SESSION_HEADERS)
  .body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
  .check(status.is(200))
  .check(jsonPath("$.example.current.transaction.results[0].amount.value").exists))
If I want to verify that the transaction value equals 0.01, is it possible to achieve this?
I googled but didn't find any results; if a similar question has been asked before, please let me know and I will close this. Thanks.
I tried some assertions in the test, but I find the assertion won't fail the performance test at all.
val searchTransaction: ChainBuilder = exec(http("search Transactions")
  .post(APP_VERSION + "/transactionDetals")
  .headers(BASIC_HEADERS)
  .headers(SESSION_HEADERS)
  .body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
  .check(status.is(200))
  .check(jsonPath("$.example.current.transaction.results[0].amount.value").saveAs("actualAmount")))
  .exec(session => {
    val actualTransactionAmount = session("actualAmount").as[Float]
    println(s"actualTransactionAmount: ${actualTransactionAmount}")
    assert(actualTransactionAmount.equals(10)) // this should fail, as the amount is 0.01, but the performance test still passes
    session
  })
You're right, this is the normal solution:
.check(jsonPath("....").is("...")))
Such checks are the norm: the service may respond without returning, say, a 5xx status, yet the response body may still contain an error, so it's better to check it.
Example: I have an application which returns the status of creating a user, and I check it:
.check(jsonPath("$.status").is("Client created"))
I figured out a way to verify it. Not sure if it is the best way, but it works for me.
val searchTransaction: ChainBuilder = exec(http("search Transactions")
  .post(APP_VERSION + "/transactionDetals")
  .headers(BASIC_HEADERS)
  .headers(SESSION_HEADERS)
  .body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
  .check(status.is(200))
  .check(jsonPath("$.example.current.transaction.results[0].amount.value").is("0.01")))
If I change the value to 0.02, then the test will fail, and the session log will say something like the below:
---- Errors --------------------------------------------------------------------
> jsonPath($.example.current.transaction.results[0].amount.value).find.is(0.02), but actually found 0.01      19 (100.0%)
================================================================================
I am aware that verifying a value in the JSON makes this more like a functional test than a load test. In a load test, maybe we are not supposed to validate every possible piece of information, but the capability is there, so anyone who wants to verify something in the JSON can use this as a reference.
Just curious: I still don't know why using assert in the earlier code won't fail the test, though?

How to get the PeekPokeTester expect function to print signal values in hex?

By default, when I call the expect() function in the tester, the values come up as decimals, although in the provided example here:
https://github.com/freechipsproject/chisel-testers/wiki/Using-the-PeekPokeTester
the outputs come out as hex. How can you select this?
example:
[info] [0.026] EXPECT AT 5 io_key_column got 979262996 expected 4293125357 FAIL
Try using Driver.execute to run your test. It allows you to set a number of options by passing in an array of strings.
In this case, try:
val args = Array("--display-base", "16")
iotesters.Driver.execute(args, () => new RealGCD2) { c =>
  new GCDPeekPokeTester(c)
} should be (true)

Jenkins parameterized job that only queues one build

Imagine a Jenkins job A which takes 1 minute to run, and job B which takes 5 minutes.
If we configure job A to trigger job B, then while job B is running, job A may run 5 times before B completes. However, Jenkins doesn't add 5 builds to job B's queue, which is great, because otherwise speedy job A would create an ever-growing backlog of builds for poor slow job B.
However, now we want to have job A trigger B as a parameterized job, using the parameterized trigger plugin. Parameterized jobs do queue up a backlog, which means job A is happily creating a huge pile of builds for job B, which can't possibly keep up.
It does make sense to add a new parameterized build to the queue each time it's triggered, since the parameters may be different. Jenkins should not always assume that a new parameterized build renders previously queued ones unnecessary.
However, in our case we actually would like this. Job A builds and packages our application, then Job B deploys it to a production-like environment and runs a heavier set of integration tests. We also have a build C which deploys to another environment and does even more testing, so this is an escalating pattern for us.
We would like the queue for our parameterized job B to only keep the last build added to it; each new build would replace any job currently in the queue.
Is there any nice way to achieve this?
Add a "System Groovy Script" pre-build step to job B that checks for (newer) queued jobs of the same name, and bails out if found:
def name = build.properties.environment.JOB_NAME
def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
if (queue.any{ it.task.getName() == name }) {
  println "Newer " + name + " job(s) in queue, aborting"
  build.doStop()
} else {
  println "No newer " + name + " job(s) in queue, proceeding"
}
You could get rid of the Parameterized Trigger Plugin and instead use traditional triggering. As you said, this would prevent job B's queue from piling up.
How to pass the parameters from A to B, then? Make job A yield the parameters in its console output. In job B, to get these build parameters, examine the console output of the latest A build (with a Python script, perhaps?).
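A sketch of that hand-off, in Node rather than Python to match the other snippets here. The PARAM_FOR_B marker and the job name are made up for illustration; the only Jenkins-specific assumption is that a build's console log is exposed at a .../consoleText URL, which the stock Jenkins REST interface provides:

```javascript
// Job A echoes lines like "PARAM_FOR_B VERSION=1.2.3" into its console log;
// job B fetches A's console text and parses those lines back into a map.

function parseParams(consoleText) {
  const params = {};
  for (const line of consoleText.split('\n')) {
    const m = line.match(/^PARAM_FOR_B (\w+)=(.*)$/);
    if (m) params[m[1]] = m[2];
  }
  return params;
}

// Assumes Node 18+ (global fetch) and that job B's executor can reach Jenkins.
async function paramsFromLastBuild(jenkinsUrl, jobName) {
  const res = await fetch(
    `${jenkinsUrl}/job/${jobName}/lastSuccessfulBuild/consoleText`
  );
  return parseParams(await res.text());
}
```

Reading lastSuccessfulBuild (rather than lastBuild) avoids picking up parameters from an A build that failed.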
Ron's solution worked for me. If you don't like having a bunch of cancelled builds in the build history, you can add the following System Groovy script to job A before it triggers job B:
import hudson.model.*
def q = jenkins.model.Jenkins.getInstance().getQueue()
def items = q.getItems()
for (i = 0; i < items.length; i++) {
  if (items[i].task.getName() == "JobB") {
    items[i].doCancelQueue()
  }
}
Here's one workaround:
Create a job A2B between jobs A and B
Add a build step in job A2B that determines whether B is running. To achieve that, check:
Determine if given job is currently running using Hudson/Jenkins API
Python API's is_queued_or_running()
Finally, trigger job B from A2B only if there are no B builds queued or running (carrying the parameters through)
In case you're using Git, this is now supported by the "Combine Queued git hashes" setting under the Triggering/Parameters/Pass-through option.
The first Git plugin version that should actually work with this is 1.1.27 (see JENKINS-15160).
Here's a more flexible option if you only care about a few parameters matching. This is especially helpful when a job is triggered externally (i.e. from GitHub or Stash) and some parameters don't need a separate build.
If the checked parameters match in both value and existence in both the current build and a queued build, the current build will be aborted and the description will show that a future build contains the same checked parameters (along with what they were).
It could be modified to cancel all other queued jobs except the last one if you don't want to have build history showing the aborted jobs.
checkedParams = [
  "PARAM1",
  "PARAM2",
  "PARAM3",
  "PARAM4",
]

def buildParams = null
def name = build.project.name
def queuedItems = jenkins.model.Jenkins.getInstance().getQueue().getItems()
yieldToQueuedItem = false
for (hudson.model.Queue.Item item : queuedItems.findAll { it.task.getName() == name }) {
  if (buildParams == null) {
    buildParams = [:]
    paramAction = build.getAction(hudson.model.ParametersAction.class)
    if (paramAction) {
      buildParams = paramAction.getParameters().collectEntries {
        [(it.getName()) : it.getValue()]
      }
    }
  }
  itemParams = [:]
  paramAction = item.getAction(hudson.model.ParametersAction.class)
  if (paramAction) {
    itemParams = paramAction.getParameters().collectEntries {
      [(it.getName()) : it.getValue()]
    }
  }
  equalParams = true
  for (String compareParam : checkedParams) {
    itemHasKey = itemParams.containsKey(compareParam)
    buildHasKey = buildParams.containsKey(compareParam)
    if (itemHasKey != buildHasKey || (itemHasKey && itemParams[compareParam] != buildParams[compareParam])) {
      equalParams = false
      break
    }
  }
  if (equalParams) {
    yieldToQueuedItem = true
    break
  }
}
if (yieldToQueuedItem) {
  out.println "Newer " + name + " job(s) in queue with matching checked parameters, aborting"
  build.description = "Yielded to future build with:"
  checkedParams.each {
    build.description += "<br>" + it + " = " + build.buildVariables[it]
  }
  build.doStop()
  return
} else {
  out.println "No newer " + name + " job(s) in queue with matching checked parameters, proceeding"
}
The following is based on Ron's solution, but with some fixes to make it work on my Jenkins 2 installation, including removing a java.io.NotSerializableException and handling the fact that the format of getName() sometimes differs from that of JOB_NAME:
// Exception to distinguish abort due to newer jobs in queue
class NewerJobsException extends hudson.AbortException {
  public NewerJobsException(String message) { super(message); }
}

// Find the Jenkins job name from the URL (which is the most consistently named
// field in the task object)
// Known forms:
//   job/NAME/
//   job/NAME/98/
@NonCPS
def name_from_url(url)
{
  url = url.substring(url.indexOf("/") + 1);
  url = url.substring(0, url.indexOf("/"));
  return url
}

// Depending on installed plugins multiple jobs may be queued. If that is the
// case, skip this one.
// http://stackoverflow.com/questions/26845003/how-to-execute-only-the-most-recent-queued-job-in-jenkins
// http://stackoverflow.com/questions/8974170/jenkins-parameterized-job-that-only-queues-one-build
@NonCPS
def check_queue()
{
  def name = env.JOB_NAME
  def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
  if (queue.any{ name_from_url(it.task.getUrl()) == name }) {
    print "Newer ${name} job(s) in queue, aborting"
    throw new NewerJobsException("Newer ${name} job(s) in queue, aborting")
  } else {
    print "No newer ${name} job(s) in queue, proceeding"
  }
}

A query on Junit test methods

public void testNullsInName() {
  fail("sample failure");
  Person p = new Person(null, "lastName");
  assertEquals("lastName", p.getFullName());
  p = new Person("Tanner", null);
  assertEquals("Tanner ?", p.getFullName());
}
I have difficulty understanding fail in JUnit.
Could anybody please tell me what the use of fail is in the above method?
(I want to know what it is responsible for doing there.)
And if I also want to add the lines below to the above code, how could I do that?
Person p = new Person(null, "lastName"); // After this statement
if (p == null)
{
  // then don't proceed further with execution;
  // show the JUnit test case as PASS
}
Please help me.
The fail("sample failure"); statement in the first case will cause the test to be reported as failed, with the reason "sample failure", when the statement is run. There's no obvious reason why it's placed as the first statement in the test case, since it makes the test fail immediately and the remaining statements are never executed. As for the second case, simply returning from the method will cause the test to pass.