Long duration on Jenkins skipped tests - junit

I am using Jenkins ver. 1.408 to run tests.
I have noticed that the timing in the results is incorrect. If I run a test plan in which all tests are skipped, the Jenkins results show that the duration of the ignored tests was a few minutes.
For example, I ran these 3 tests, 2 of which are marked as ignored, and in the results the run seems to take 3:30 minutes, 1:10 for every test (including the skipped ones):
test1 1 min 10 sec Skipped
test2 1 min 10 sec Passed
test3 1 min 10 sec Skipped
Is it a bug or am I doing something wrong?

I think this is a bug.
I have seen similar incorrect durations in various places in the Jenkins test result views, but it is not really a problem for us.
It could be related to this issue:
https://issues.jenkins-ci.org/browse/JENKINS-42438
Wrong JUnit test duration shown in classes list
Resolved in May 2019 and reported as fixed in JUnit plugin 1.27.
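For reference, the durations shown in the result view come from the time attributes in the JUnit XML report that the plugin parses. A hand-written sample (not from the poster's build) of how a report with skipped tests typically looks:
<testsuite name="example.Suite" tests="3" skipped="2" time="70.0">
  <testcase classname="example.Suite" name="test1" time="0"><skipped/></testcase>
  <testcase classname="example.Suite" name="test2" time="70.0"/>
  <testcase classname="example.Suite" name="test3" time="0"><skipped/></testcase>
</testsuite>
If the report itself carries time="0" for the skipped cases, then a non-zero duration in the view points at the display side rather than the report.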

How can I get a kernel's execution time with NSight Compute 2019 CLI?

Suppose I have an executable myapp which needs no command-line argument, and launches a CUDA kernel mykernel. I can invoke:
nv-nsight-cu-cli -k mykernel myapp
and get output looking like this:
==PROF== Connected to process 30446 (/path/to/myapp)
==PROF== Profiling "mykernel": 0%....50%....100% - 13 passes
==PROF== Disconnected from process 30446
[30446] myapp@127.0.0.1
mykernel(), 2020-Oct-25 01:23:45, Context 1, Stream 7
Section: GPU Speed Of Light
--------------------------------------------------------------------
Memory Frequency cycle/nsecond 1.62
SOL FB % 1.58
Elapsed Cycles cycle 4,421,067
SM Frequency cycle/nsecond 1.43
Memory [%] % 61.76
Duration msecond 3.07
SOL L2 % 0.79
SM Active Cycles cycle 4,390,420.69
(etc. etc.)
--------------------------------------------------------------------
(etc. etc. - other sections here)
So far, so good. But now I just want the overall kernel duration of mykernel - and no other output. Looking at nv-nsight-cu-cli --query-metrics, I see, among others:
gpu__time_duration incremental duration in nanoseconds; isolated measurement is same as gpu__time_active
gpu__time_active total duration in nanoseconds
So, it must be one of these, right? But when I run
nv-nsight-cu-cli -k mykernel myapp --metrics gpu__time_duration,gpu__time_active
I get:
==PROF== Connected to process 30446 (/path/to/myapp)
==PROF== Profiling "mykernel": 0%....50%....100% - 13 passes
==PROF== Disconnected from process 30446
[30446] myapp@127.0.0.1
mykernel(), 2020-Oct-25 12:34:56, Context 1, Stream 7
Section: GPU Speed Of Light
Section: Command line profiler metrics
---------------------------------------------------------------
gpu__time_active (!) n/a
gpu__time_duration (!) n/a
---------------------------------------------------------------
My questions:
Why am I getting "n/a" values?
How can I get the actual values I'm after, and nothing else?
Notes:
I'm using CUDA 10.2 with NSight Compute version 2019.5.0 (Build 27346997).
I realize I can filter the standard output stream of the unqualified invocation, but that's not what I'm after.
I actually just want the raw number, but I'm willing to settle for using --csv and taking the last field.
Couldn't find anything relevant in the nvprof transition guide.
tl;dr: You need to specify the appropriate 'submetric':
nv-nsight-cu-cli -k mykernel myapp --metrics gpu__time_active.avg
(Based on @RobertCrovella's comments)
CUDA's profiling mechanism collects 'base metrics', which are indeed listed with --query-metrics. For each of these, multiple samples are taken. In version 2019.5 of NSight Compute you can't just get the raw samples; you can only get 'submetric' values.
'Submetrics' are essentially some aggregation of the sequence of samples into a scalar value. Different metrics have different kinds of submetrics (see this listing); for gpu__time_active, these are: .min, .max, .sum, .avg. Yes, if you're wondering - they're missing second-moment metrics like the variance or the sample standard deviation.
So you must either specify one or more submetrics (see the example above), or alternatively upgrade to a newer version of NSight Compute, which apparently lets you get at the raw samples directly.
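Since the notes mention settling for --csv and taking the last field, here is a minimal shell sketch of that approach (assuming the metric value lands in the last CSV field, as the question's note suggests; the tail/awk filtering is illustrative, not NSight functionality):
nv-nsight-cu-cli -k mykernel --csv --metrics gpu__time_active.avg myapp 2>/dev/null | tail -n 1 | awk -F',' '{ gsub(/"/, ""); print $NF }'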

How do I wait for a random amount of time before executing the next action in Puppeteer?

I would love to be able to wait for a random amount of time (let's say a number between 5-12 seconds, chosen at random each time) before executing my next action in Puppeteer, in order to make the behaviour seem more authentic/real world user-like.
I'm aware of how to do it in plain Javascript (as detailed in the Mozilla docs here), but can't seem to get it working in Puppeteer using the waitFor call (which I assume is what I'm supposed to use?).
Any help would be greatly appreciated! :)
You can use vanilla JS to wait a random 5-12 seconds between actions.
await page.waitFor((Math.floor(Math.random() * (12 - 5 + 1)) + 5) * 1000)
Where:
5 is the minimum number of seconds
(12 - 5 + 1) is the size of the range, so the result is a whole number from 5 through 12
1000 converts seconds to milliseconds
(PS: However, if your question is about waiting a random 5-12 seconds before every action, then you should wrap your actions in a helper, which is a different issue unless you update your question.)
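A minimal sketch of such a wrapper (the helper name humanDelay and the click target are illustrative, not from the question):
const humanDelay = () =>
  new Promise(resolve =>
    setTimeout(resolve, (Math.floor(Math.random() * (12 - 5 + 1)) + 5) * 1000));

await humanDelay();
await page.click('#submit'); // hypothetical next action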

TCL performance test - why so much fluctuation?

I modified the Tcl 8.4.20 source in the following files in order to measure the run time of a Tcl script:
A basic utility to record the time:
#include <time.h>

void save(void)
{
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC_RAW, &t);
    save_to_an_array(&t);   /* stash the timestamp for later dumping */
}
tclMain.c: in Tcl_Main(), record the time before calling Tcl_FSEvalFile().
tclBasic.c: in Tcl_EvalEx(), record the time at the start; there are multiple exits, so record the time at each of them.
tclMain.c: before exiting Tcl_Main(), dump out all the recordings (a sketch of this step follows).
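For reference, a sketch of what the dump step might look like (the array and counter names are illustrative, not from the actual patch; save_to_an_array() from above would append into probes[]):
#include <stdio.h>
#include <time.h>

#define MAX_PROBES 1024
static struct timespec probes[MAX_PROBES];
static int nprobes;

static void dump(void)
{
    int i;
    for (i = 1; i < nprobes; i++) {
        /* millisecond delta between consecutive probes */
        double ms = (probes[i].tv_sec  - probes[i-1].tv_sec)  * 1e3
                  + (probes[i].tv_nsec - probes[i-1].tv_nsec) / 1e6;
        printf("probe %d -> %d: %.2f ms\n", i - 1, i, ms);
    }
}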
I build the Tcl source as usual, and the executable tclsh8.4 now has my built-in instrumentation to record the script run times and dump them out at the end.
I use a one-liner script: puts hello
To my surprise, the run time varies greatly. Here are consecutive runs:
run1 - 232.00ms
run2 - 7886.00ms
run3 - 6973.00ms
run4 - 5749.00ms
run5 - 224.00ms
run6 - 6820.00ms
run7 - 6074.00ms
run8 - 221.00ms
Maybe the bytecode path has better consistency? So I added more probes, to Tcl_EvalObjEx() and TclExecuteByteCode(). Here is the new script:
proc p {} {
puts hello
}
p
But it is not consistent either:
run1 - 226.00ms
run2 - 7877.00ms
run3 - 6964.00ms
run4 - 5740.00ms
run5 - 218.00ms
run6 - 6809.00ms
run7 - 6064.00ms
run8 - 216.00ms
Do you see what might be the problem?
[UPDATE]
Maybe puts is a bad choice, since it is an I/O function that is affected by many system factors, so I changed the script to some random commands:
set a 100
set b 200
append a 300
array set arr1 {one 1 two 2}
It definitely is better:
run1 - 9.00ms
run2 - 9.00ms
run3 - 19.00ms
run4 - 9.00ms
run5 - 9.00ms
run6 - 9.00ms
run7 - 9.00ms
run8 - 9.00ms
run9 - 9.00ms
run10 - 9.00ms
But again, where does that 19 ms in run3 come from?
The problem with using wall-clock timing is that it is very sensitive to whatever else is going on on your system. The OS can simply decide to suspend your process at any moment on the grounds of having other “more important” work for the CPU to do. It doesn't know that you're trying to do performance measurements.
The usual way of fixing this is to both do many runs of the timing script and to take the minimum of the measured timings, bearing in mind that the cost of doing timing at all is non-zero and can have an effect on the output.
The time command in standard Tcl is intended for this sort of thing. Here's an example of use:
puts [time {
set a 100
set b 200
append a 300
array set arr1 {one 1 two 2}
} 100]
This runs the code fragment from before 100 times and prints the average execution time. (In my performance-intensive tests, I'll use a whole bunch of stabilisation code so that I get reasonable information out of even microbenchmarks, but all they're really doing is guessing a good value for the iterations and printing the minimum of a bunch of samples. Also, microbenchmarks might well end up with iteration counts in the hundreds of thousands or millions.)
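As a minimal sketch of the take-the-minimum idea (assuming Tcl 8.5+ for {*} expansion and tcl::mathfunc::min; the sample and iteration counts are illustrative):
set samples {}
for {set i 0} {$i < 10} {incr i} {
    # [time body count] returns "N microseconds per iteration"; keep just N
    lappend samples [lindex [time {
        set a 100
        set b 200
        append a 300
        array set arr1 {one 1 two 2}
    } 1000] 0]
}
puts "min: [tcl::mathfunc::min {*}$samples] microseconds per iteration"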
Be aware that you're using a version of Tcl that has been end-of-lifed. 8.5 is the current LTS version (i.e., it mostly only receives security fixes — if any; we don't have many vulns — and updates to support evolving OS APIs), and 8.6 is for new work. (8.7 and 9.0 are under development, but still pre-alpha.)

web-component-tester progress has incorrect total

The progress ring widget that appears at the top right-hand corner of the page when running web-component-tester seems to show the total number of tests as always 3x the number of test suites, rather than the actual total of tests.
Is this a known issue, or is there something that I can do to get the correct total to show?
As an example, I have only one test in my suite, yet when the test is finished, the progress ring shows only 33% completion.
I can see in the code for the MultiReporter constructor that ESTIMATED_TESTS_PER_SUITE is set to 3 and is multiplied by the number of test suites to compute the total used by the Mocha HTML reporter to render the progress widget. The onRunnerStart handler in MultiReporter appears to be intended to replace the estimate for the current suite with the actual total, but in my testing the runner argument passed to this handler is itself a MultiReporter object with an estimated total, so the updated total is still an estimate rather than the actual total.
Unfortunately, I haven't been able to figure out why the MultiReporter never computes the correct total, nor have I been able to find any hooks for explicitly specifying the total number of tests.

neo4j batchimporter is slow with big IDs

I want to import CSV files with about 40 million lines into Neo4j. For this I am trying to use the "batchimporter" from https://github.com/jexp/batch-import.
Maybe it's a problem that I provide my own IDs. This is the example:
nodes.csv:
i:id       l:label
315041100  Person
201215100  Person
315041200  Person
rels.csv:
start      end        type          relart
315041100  201215100  HAS_RELATION  30006
315041200  315041100  HAS_RELATION  30006
the content of batch.properties:
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=1000M
neostore.relationshipstore.db.mapped_memory=5000M
neostore.propertystore.db.mapped_memory=4G
neostore.propertystore.db.strings.mapped_memory=2000M
neostore.propertystore.db.arrays.mapped_memory=1000M
neostore.propertystore.db.index.keys.mapped_memory=1500M
neostore.propertystore.db.index.mapped_memory=1500M
batch_import.node_index.node_auto_index=exact
./import.sh graph.db nodes.csv rels.csv
runs without errors, but it takes about 60 seconds!
Importing 3 Nodes took 0 seconds
Importing 2 Relationships took 0 seconds
Total import time: 54 seconds
When I use smaller IDs - for example 3150411 instead of 315041100 - it takes just 1 second!
Importing 3 Nodes took 0 seconds
Importing 2 Relationships took 0 seconds
Total import time: 1 seconds
Actually, I want to use even bigger IDs, with 10 digits. I don't know what I'm doing wrong. Can anyone see an error?
JDK 1.7
batchimporter 2.1.3 (with neo4j 2.1.3)
OS: ubuntu 14.04
Hardware: 8-Core-Intel-CPU, 16GB RAM
I think the problem is that the batch importer is interpreting those IDs as actual physical record IDs on disk, and so the time is spent in the file system, inflating the store files up to the size where they can fit those high IDs.
The IDs that you're giving are intended to be "internal" to the batch import, right? Although I'm not sure how to tell the batch importer that this is the case.
@michael-hunger, any input there?
The problem is that those IDs are internal to Neo4j, where they represent disk record IDs. If you provide high values there, Neo4j will create a lot of empty records until it reaches your IDs.
So either you create your node IDs starting from 0 and store your real ID as a normal node property:
i:id id:long l:label
0 315041100 Person
1 201215100 Person
2 315041200 Person
start:id end:id type relart
0 1 HAS_RELATION 30006
2 0 HAS_RELATION 30006
Or you don't provide node IDs at all and only look up nodes via their "business ID value"; for that you have to configure and use an index:
id:long:people l:label
315041100 Person
201215100 Person
315041200 Person
id:long:people id:long:people type relart
315041100 201215100 HAS_RELATION 30006
315041200 315041100 HAS_RELATION 30006
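(For the people index referenced in the headers above, batch.properties would presumably need a matching entry, following the same pattern as the auto-index line already shown:)
batch_import.node_index.people=exact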
HTH Michael
Alternatively, you can also just write a small Java or Groovy program to import your data, if handling those IDs with the batch-importer is too tricky.
See: http://jexp.de/blog/2014/10/flexible-neo4j-batch-import-with-groovy/
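For illustration, a minimal sketch of such a program against the Neo4j 2.1 BatchInserter API (CSV parsing is omitted, and the in-memory map from business ID to internal node ID is an assumption for the example, not something prescribed by the blog post):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class Import {
    public static void main(String[] args) throws Exception {
        BatchInserter inserter = BatchInserters.inserter("graph.db");
        Map<Long, Long> idMap = new HashMap<>(); // business ID -> internal node ID
        try {
            // One node per CSV line; the big ID becomes a normal property.
            for (long businessId : new long[]{315041100L, 201215100L, 315041200L}) {
                Map<String, Object> props = new HashMap<>();
                props.put("id", businessId);
                long nodeId = inserter.createNode(props, DynamicLabel.label("Person"));
                idMap.put(businessId, nodeId);
            }
            // Relationships reference the small internal IDs via the map.
            inserter.createRelationship(idMap.get(315041100L), idMap.get(201215100L),
                    DynamicRelationshipType.withName("HAS_RELATION"),
                    Collections.<String, Object>singletonMap("relart", 30006));
            inserter.createRelationship(idMap.get(315041200L), idMap.get(315041100L),
                    DynamicRelationshipType.withName("HAS_RELATION"),
                    Collections.<String, Object>singletonMap("relart", 30006));
        } finally {
            inserter.shutdown();
        }
    }
}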