I currently have a couple of tests which take very long to run. Inside each test I always do the same thing:
there is a loop which creates a new object (with different parameters on every iteration), does some time-consuming calculations with the object, and at the end of each iteration compares the result to the expected result.
Every iteration in this loop is completely isolated, so I could easily run all 200 very time-consuming iterations in parallel. But what is the best way to do this?
Cheers,
AvH
JUnit 4 has built-in parallel processing. Check this documentation.
Apart from that, you may need to consider moving the setup that is duplicated across iterations into a static setup method annotated with @BeforeClass. That will make sure the code runs only once in the entire lifecycle.
@BeforeClass
public static void setup() {
    // Move anything that needs to run only once.
}
You have to create your own modification of the Parameterized runner. See http://jankesterblog.blogspot.de/2011/10/junit4-running-parallel-junit-classes.html
The library JUnit Toolbox provides a ParallelParameterized runner.
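If you'd rather not depend on a special runner at all, the 200 independent iterations can also be fanned out manually with an ExecutorService. This is a framework-free sketch, not the questioner's actual code: the parameter set, the calculation, and the expected values are placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelIterations {

    // Placeholder for the time-consuming calculation on one parameter set.
    static int compute(int parameter) {
        return parameter * parameter;
    }

    // Runs all iterations in parallel and returns the results in iteration order.
    static List<Integer> runAll(int iterations) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (int i = 0; i < iterations; i++) {
                final int parameter = i;
                tasks.add(() -> compute(parameter)); // each iteration is isolated
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                results.add(f.get()); // rethrows any failure from a task
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> results = runAll(200);
        for (int i = 0; i < 200; i++) {
            if (results.get(i) != i * i) {
                throw new AssertionError("unexpected result at iteration " + i);
            }
        }
        System.out.println("all 200 iterations passed");
    }
}
```

A single JUnit test method could run this whole loop and fail on the first mismatched iteration; the trade-off versus a parallel runner is that you lose per-iteration reporting.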
Related
Learning AVA.js test runner
It is not clear how I can mock global objects (e.g. Date, Math, etc.) when the tests run in parallel, since such object patching becomes concurrent.
How should one really go about this?
I wanted to run kotest tests in the same spec in parallel. I read the section below in the documentation, but it says you can only run specs in parallel; tests in a single spec will always run sequentially.
https://kotest.io/docs/framework/project-config.html#parallelism
Is there a way to achieve parallelism at the test level? I'm using kotest for my e2e API testing. All the tests are independent, so there should be no problem running them in parallel, but with kotest I can't. Please advise.
You can enable concurrent tests on a per-spec basis or a global basis.
For example:
class MySpec : FunSpec({
    concurrency = 10

    test("1") { }
    test("2") { }
})
Summary:
I'm trying to find out whether a single method can be executed twice in overlap when executing on a single thread, or whether two different methods can be executed in overlap such that, when they share access to a particular variable, unwanted behaviour can occur.
Ex of a single method:
var ball:Date;

function method1():Date {
    ball = new Date();
    // <some code here>
    return ball;
}
Questions:
1) If method1 gets fired every 20ms using the event system, and the whole method takes more than 20ms to execute, will the method be executed again in overlap?
2) Are there any other scenarios in a single thread environment where a method(s) can be executed in overlap, or is the AVM2 limited to executing 1 method at a time?
Studies: I've read through https://www.adobe.com/content/dam/Adobe/en/devnet/actionscript/articles/avm2overview.pdf, which explains that the AVM2 has a stack for running code, and the description of methods makes it seem that, since there isn't a second stack, the stack system can only accommodate one method execution at a time. I'd just like to double-check with the Stack Overflow experts to be sure.
I'm dealing with some time sensitive data, and have to make sure a method isn't changing a variable that is being accessed by another method at the same time.
ActionScript is single-threaded, although it can support concurrency through ActionScript workers, which are separate SWF applications that run in parallel.
There are also asynchronous patterns, such as a nested or anonymous function executing within the scope chain of an enclosing function.
What I think you're referring to is how AVM2 executes event-driven code; for that, you should research the AVM2 marshalled slice. Player events are executed at the beginning of the slice.
Heavy code execution will slow the frame rate.
Execution is linear, blocking synchronously; a frame does not invoke code in parallel.
AVM2 executes 20-millisecond marshalled slices, which, depending on frame rate, execute user actions, invalidations, and rendering.
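The same no-overlap guarantee can be observed in any single-threaded event loop. Here is a JVM analogy (not ActionScript, purely illustrative): a handler scheduled every 20 ms that takes ~50 ms to run is never re-entered on a single-threaded scheduler; the next invocation simply waits until the current one finishes.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class NoOverlapDemo {

    public static int runDemo() throws InterruptedException {
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        AtomicBoolean running = new AtomicBoolean(false);
        AtomicInteger overlaps = new AtomicInteger(0);

        // Fire every 20 ms, but each handler deliberately takes ~50 ms.
        loop.scheduleAtFixedRate(() -> {
            if (!running.compareAndSet(false, true)) {
                overlaps.incrementAndGet(); // would indicate re-entrant execution
            }
            try {
                Thread.sleep(50); // simulate a handler overrunning its period
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            running.set(false);
        }, 0, 20, TimeUnit.MILLISECONDS);

        Thread.sleep(300);
        loop.shutdownNow();
        loop.awaitTermination(1, TimeUnit.SECONDS);
        return overlaps.get(); // stays 0: invocations queue up, they never overlap
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("overlapping executions observed: " + runDemo());
    }
}
```

This mirrors the questioner's scenario 1: the slow handler delays subsequent ticks rather than running concurrently with itself, so a variable written inside the handler is never touched by two executions of it at once.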
According to this article, a SpriteBatch instance needs to have dispose() called on it once it is no longer needed. However, when I examine some of libgdx's official examples, like Pax Britannica and Super Jumper, I find that they never call SpriteBatch.dispose(). Why is that?
SpriteBatch must always be disposed.
Internally, it creates and manages several Mesh objects, which allocate vertex/index arrays on the GPU. Those are only deallocated if you explicitly call Mesh#dispose(), which is triggered by calling dispose() on your SpriteBatch object.
By default it will also create its own ShaderProgram, which would similarly be leaked if you didn't call dispose().
If the demos aren't doing this, perhaps it's time to send a pull request!
I think the given demo games try to keep things simple. They are supposed to show how the basics of libgdx work in a minimalistic way, and thus abstract away some details. That's useful for beginners, so the examples aren't bloated with a lot of very specific code.
In a real-world example, SpriteBatch.dispose() would have to be called in the dispose() method of the GameScreen in SuperJumper, for example. GameScreen.dispose() would also have to be called when switching back to the MainMenuScreen, because that doesn't happen automatically either.
When you create a SpriteBatch with new SpriteBatch(), it creates one internal Mesh. If you never call SpriteBatch.dispose(), that Mesh is never disposed either, and thus SuperJumper has a memory leak there.
I've created games where multiple screens each have their own SpriteBatch. I removed all the dispose() calls for the batches and saw no visible effect, so keep in mind to check for this before releasing your product. Even if you can't observe any downside to not disposing batches, there's no reason not to dispose them. Just do it in each Screen implementation's dispose() method; it takes about a nanosecond :)
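The ownership rule the answers describe can be sketched without libgdx at all. The classes below (Batch, GameScreen) are stand-ins for the real libgdx types, but the pattern is the same: whoever creates a wrapper around native resources disposes it, and a screen's dispose() cascades to everything it owns.

```java
// Stand-in for libgdx's SpriteBatch: wraps native (GPU) resources
// that the garbage collector cannot reclaim on its own.
class Batch {
    boolean disposed = false;

    void dispose() {
        disposed = true; // in libgdx this would free the internal Mesh and ShaderProgram
    }
}

// Stand-in for a libgdx Screen that owns a batch.
class GameScreen {
    final Batch batch = new Batch(); // the screen created it, so the screen owns it

    void dispose() {
        batch.dispose(); // cascade: the owner releases what it created
    }
}

public class DisposeDemo {
    public static void main(String[] args) {
        GameScreen screen = new GameScreen();
        // ... render frames ...
        screen.dispose(); // must be called explicitly, e.g. when switching screens
        System.out.println("batch disposed: " + screen.batch.disposed);
    }
}
```

The key point from the thread: nothing calls GameScreen.dispose() for you, so skipping either link in the chain (screen disposal, or batch disposal inside it) leaks the underlying resources.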
I need to test a program that first preprocesses some data and then computes several different results from this preprocessed data -- it makes sense to write a separate test for each computation.
Official JUnit policy seems to be that I should run preprocessing before each computation test.
How can I set up my test so that I could run the preparation only once (it's quite slow) before running remaining tests?
Use the annotation @BeforeClass to annotate the method which will be run once before all test methods.
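In JUnit 4 that method must be public static void. Here is a framework-free sketch of the same idea (the names and the "preprocessing" are illustrative placeholders): the expensive setup runs exactly once, and every test reads from the shared result.

```java
public class PreprocessOnceDemo {

    static int setupRuns = 0;
    static int[] preprocessed;

    // In JUnit 4 this would be:
    //   @BeforeClass
    //   public static void setup() { ... }
    static void setup() {
        setupRuns++;
        preprocessed = new int[] {1, 2, 3}; // stand-in for the slow preprocessing
    }

    // Each of these would be a separate @Test method reading the shared data.
    static boolean testSum() {
        return preprocessed[0] + preprocessed[1] + preprocessed[2] == 6;
    }

    static boolean testMax() {
        return preprocessed[2] == 3;
    }

    public static void main(String[] args) {
        setup(); // the JUnit runner calls the @BeforeClass method once per class
        if (!testSum() || !testMax() || setupRuns != 1) {
            throw new AssertionError("unexpected test outcome");
        }
        System.out.println("setup ran once; both tests passed");
    }
}
```

Note the constraint this implies: because @BeforeClass runs once for the whole class, the tests must not mutate the preprocessed data, or they stop being independent.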