I need to test a program that first preprocesses some data and then computes several different results from this preprocessed data -- it makes sense to write a separate test for each computation.
Official JUnit policy seems to be that I should run preprocessing before each computation test.
How can I set up my tests so that the preparation (which is quite slow) runs only once, before all the remaining tests?
Use the @BeforeClass annotation on a public static method; JUnit will run it once before all the test methods in the class.
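A minimal sketch of that setup, assuming the preprocessed data can be held in a static field (the PreprocessedData and Preprocessor classes are hypothetical stand-ins for the real preparation step):

import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ComputationTest {

    // Hypothetical stand-ins for the real preprocessing step and its output.
    static class PreprocessedData { final int[] values = {1, 2, 3}; }
    static class Preprocessor {
        static PreprocessedData run() {
            // imagine the slow, expensive preparation here
            return new PreprocessedData();
        }
    }

    // Shared by all tests in this class; initialized exactly once.
    private static PreprocessedData data;

    @BeforeClass
    public static void preprocessOnce() {
        // Runs a single time, before any of the test methods below.
        data = Preprocessor.run();
    }

    @Test
    public void sumIsComputedFromPreparedData() {
        assertEquals(6, java.util.Arrays.stream(data.values).sum());
    }

    @Test
    public void lengthIsComputedFromPreparedData() {
        assertEquals(3, data.values.length);
    }
}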
Learning AVA.js test runner
It is not clear how I can mock global objects (e.g. Date, Math, etc.), since the tests run in parallel and such object patching therefore becomes concurrent.
How should one really go about this?
I want to run kotest tests in the same spec in parallel. I read the section below in the documentation, but it says you can only run specs in parallel; tests in a single spec will always run sequentially.
https://kotest.io/docs/framework/project-config.html#parallelism
Is there a way to achieve parallelism at the test level? I'm using kotest for my e2e API testing. All tests are independent, so there should be no problem running them in parallel, but with kotest I can't. Please advise.
You can enable concurrent tests on a per-spec basis or a global basis.
For example:
class MySpec : FunSpec({
    concurrency = 10
    test("1") { }
    test("2") { }
})
I have a really sophisticated net which takes up a lot of memory on my GPU. I have found that if I both train and test my data (which is the standard case), the memory usage is twice as high as if I do only training. Is it really necessary to test my data? Or is it just used for visualisation, i.e. to show me whether my net is overfitting or something like that?
I assume it is necessary, but I do not know the reason. My question is: how do I separate training and testing? I know you can set
test_initialization: false
But if I then want to test my net, how would I do that afterwards?
Thanks in advance!
If you have a TEST phase in your train.prototxt, you can use the command line to test your network. You can see this link, where they mention the following command:
# score the learned LeNet model on the validation set as defined in the
# model architecture lenet_train_test.prototxt
caffe test -model examples/mnist/lenet_train_test.prototxt \
    -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100
You can edit it to test your network.
There is also a Python tutorial you can follow to load the trained network with a script and use it in the field. It can be adapted to perform separate forward passes and compare the results with what you expect. I don't expect this to work completely out of the box, so you will have to experiment a bit.
Would equivalent unit tests in JMockit give a significant speed up compared to PowerMock?
Background:
I have to get unit test coverage up on a large legacy code base.
We currently have PowerMock unit tests (300+) that take over 15 minutes to run.
PowerMock has been used so far due to lots of static methods calling static methods, ad infinitum.
Expanding further, we estimate we will need 1000+ unit test classes and want a sub-10-minute build.
We are simultaneously breaking the dependencies and unit testing with Mockito, which tends to be 4x faster than the equivalent PowerMock test, but this practice is much harder and slower, and it is perceived as risky owing to the changes to production code (see the dependency-breaking sketch after this question).
Many thanks,
Alex.
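For illustration only: the dependency-breaking practice mentioned above usually means hiding a static call behind a small instance wrapper, so plain Mockito can stub it without PowerMock. All class names here are hypothetical stand-ins for real legacy code:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class InvoiceTest {

    // Hypothetical legacy class full of statics; calling it directly forces PowerMock.
    static class TaxTable {
        static double lookupRate(String code) { return 0.19; } // imagine something slow here
    }

    // Thin instance wrapper around the static call: the only production change needed.
    static class TaxRates {
        double lookupRate(String code) { return TaxTable.lookupRate(code); }
    }

    // Production code now depends on the wrapper, not the static.
    static class Invoice {
        private final TaxRates rates;
        Invoice(TaxRates rates) { this.rates = rates; }
        double tax(double net, String code) { return net * rates.lookupRate(code); }
    }

    @Test
    public void taxUsesStubbedRate() {
        TaxRates rates = mock(TaxRates.class);          // plain Mockito, no PowerMock
        when(rates.lookupRate("DE")).thenReturn(0.19);
        assertEquals(19.0, new Invoice(rates).tax(100.0, "DE"), 1e-9);
    }
}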
I currently have a couple of tests which run for a very long time. Inside each test I always do the same thing:
there is a loop which creates a new object (each iteration with different parameters), does some time-consuming calculations with the object, and at the end of each iteration compares the result to the expected result.
Every iteration of this loop is completely isolated from the others. I could easily run all 200 of these very time-consuming iterations in parallel, but what is the best way to do this?
Cheers,
AvH
JUnit 4 has built-in parallel processing; check this documentation.
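A minimal sketch of that built-in mechanism, using JUnit 4's experimental ParallelComputer to run the test methods of a class on separate threads (SlowCalculationTest is a hypothetical test class):

import org.junit.Test;
import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelLauncher {

    // Hypothetical test class whose methods are independent of each other.
    public static class SlowCalculationTest {
        @Test public void first()  { /* time-consuming calculation 1 */ }
        @Test public void second() { /* time-consuming calculation 2 */ }
    }

    public static void main(String[] args) {
        // ParallelComputer.methods() runs the @Test methods of each class concurrently.
        Result result = JUnitCore.runClasses(ParallelComputer.methods(), SlowCalculationTest.class);
        System.out.println("Failures: " + result.getFailureCount());
    }
}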
Apart from that, you may want to consider moving any setup work duplicated across tests into a static setup method and annotating it with @BeforeClass. That will make sure the code runs only once in the entire lifecycle.
@BeforeClass
public static void setup() {
    // Move anything that needs to run only once in here.
}
You have to create your own modification of the Parameterized runner. See http://jankesterblog.blogspot.de/2011/10/junit4-running-parallel-junit-classes.html
The library JUnit Toolbox provides a ParallelParameterized runner.
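A sketch of the JUnit Toolbox approach, assuming each of the 200 loop iterations becomes one parameterized case (the square calculation here is a hypothetical stand-in for the real time-consuming computation):

import com.googlecode.junittoolbox.ParallelParameterized;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;

import java.util.ArrayList;
import java.util.List;

import static org.junit.Assert.assertEquals;

// Each parameter set becomes one test instance; ParallelParameterized
// schedules those instances on a pool of worker threads.
@RunWith(ParallelParameterized.class)
public class SquareCalculationTest {

    @Parameters(name = "input={0}")
    public static List<Object[]> data() {
        List<Object[]> params = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            params.add(new Object[] { i, (long) i * i }); // parameters and expected result
        }
        return params;
    }

    @Parameter(0)
    public int input;

    @Parameter(1)
    public long expected;

    @Test
    public void resultMatchesExpectedValue() {
        long actual = (long) input * input; // stand-in for the slow calculation
        assertEquals(expected, actual);
    }
}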