junit5 parallel execution doesn't work through ConsoleLauncher - junit

Based on the JUnit 5 ConsoleLauncher docs, it seems there isn't a way to configure parallel execution. Does the ConsoleLauncher not support parallel execution yet, or should setting the config in junit-platform.properties be enough?
I tried using the ConsoleLauncher (with parallel mode set in junit-platform.properties), but the tests still seem to run sequentially. I am able to run them in parallel via Gradle, though.

I was able to get it to work by passing the configuration parameters explicitly when invoking the ConsoleLauncher:
-Djunit.jupiter.execution.parallel.enabled=true
-Djunit.jupiter.execution.parallel.mode.default=concurrent
Ref: https://junit.org/junit5/docs/current/user-guide/#running-tests-config-params
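For reference, here is what a full invocation can look like with the standalone console launcher (the jar version and class-path entries below are placeholders; substitute your own):

java -Djunit.jupiter.execution.parallel.enabled=true \
     -Djunit.jupiter.execution.parallel.mode.default=concurrent \
     -jar junit-platform-console-standalone-1.10.0.jar \
     --class-path build/classes:build/test-classes \
     --scan-class-path

The same two keys can also live in a junit-platform.properties file on the test classpath; if that route didn't work with the ConsoleLauncher, the file may simply not have been on the class path given to the launcher, whereas -D system properties are always visible to it.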

Related

2 variants for JUnit execution: TestRunner & JUnitCore

There seem to be two approaches for invoking JUnit tests from the OS command shell:
java junit.textui.TestRunner <class-name>
and
java org.junit.runner.JUnitCore <class-name>
When do we use one versus the other?
Also, are there other ways to invoke Junit tests from the OS command shell?
JUnitCore is the entry point of JUnit 4, so if you want to run a test programmatically, or from some non-Java script, I think it's the way to go for JUnit 4.
TestRunner is a relic of the very old JUnit 3.x line.
Note that nowadays JUnit 5 is the latest major release, and it has yet another way to run tests.
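For the programmatic route, a minimal JUnit 4 sketch (MyTest stands in for your own test class):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunTests {
    public static void main(String[] args) {
        // Run one or more test classes and collect the overall result
        Result result = JUnitCore.runClasses(MyTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString()); // test name plus exception
        }
        System.out.println("Success: " + result.wasSuccessful());
    }
}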
The question about different ways of running tests from the command line has already been answered here, so I can't add much to that.
However, I can comment on:
Also, are there other ways to invoke Junit tests from the OS command shell?
Nowadays, in regular projects, people do not run tests like this; instead they use a build tool (Maven or Gradle, for example) that, among other things, takes care of running tests.
So, for example, if you use Maven, you can run mvn test and it will compile everything you need, including the test sources, resolve all test dependencies, and run all the tests via the built-in Surefire plugin.
If you don't want to compile anything (assuming all the code has already been compiled and everything is set up), you can use mvn surefire:test.
These build tools are also integrated with CI tools (like Jenkins), so this is considered a solved problem.
So unless you're doing something really different (like writing an IDE UI that runs tests selected by the user on demand), there is no real need to run tests the way you've mentioned.

Should tests run in Debug or Release configuration in dotnet core

I am using dotnet core 2+, but the question is likely much more generic.
My CI pipeline currently looks like this:
dotnet build -c Release
dotnet test
dotnet publish -c Release --no-build
The test step uses the default Debug configuration, so it also has to build the app with the Debug config.
I was therefore wondering whether there is any advantage in running tests in Debug rather than Release, or should I simply use dotnet test -c Release?
I believe you can decide by comparing the differences between Debug and Release.
In Release mode: the compiler applies optimizations. It makes low-level improvements, so the original code can change significantly in places (some variables and method calls may be optimized away in non-obvious ways).
In Debug mode: the code is not optimized, and alongside the final assemblies the compiler produces .pdb files, which are used for step-by-step debugging.
In conclusion, it is better to run tests in Release mode:
It is lighter than Debug (.pdb files are not needed).
It is faster than Debug (due to compiler optimizations; .pdb files are not generated).
Tests run against a prod-like build.
P.S. Along with that, keep an eye on preprocessor directives and config transformations, if present, that depend on the build configuration.
Do not use Debug mode unless you are going to debug your tests. Sometimes people need to debug the app through the tests, or even debug the test itself. If that is not the case, go with Release mode; it is lighter.
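So, concretely, the pipeline from the question could become (same flags as in the dotnet CLI docs; adjust to your project layout):

dotnet build -c Release
dotnet test -c Release --no-build
dotnet publish -c Release --no-build

With -c Release --no-build, the test step reuses the binaries produced by the build step instead of triggering a second, Debug, compilation.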

Limit to single core for unit test

I have an issue where my unit test currently passes on my dev machine (multi-core) but fails in pre-production (single-core).
Is it possible to somehow limit the number of cores available to a unit test, to reproduce that environment on my dev machine? Unfortunately I'm not able to run the unit tests on the pre-prod machine.
There are several ways to do that.
Use the taskset command
The taskset command (on Linux) binds all threads of a particular process to a subset of cores. Usage is simple: taskset -c 0 <your command>
This binds every thread to the first CPU.
To do this you need to be able to run your unit tests from the command line. If you use a build tool, you just prefix the command with taskset. For example:
taskset -c 0 "mvn clean compile test"
If you run your test via an IDE, you can grab the full command it prints when running the test. In that case it will look something like:
taskset -c 0 "C:\Program Files\Java\jdk1.8.0_73\bin\java -cp classpath com.intellij.rt.execution.junit.JUnitStarter name_of_test"
More about the taskset command
Use affinity locks
An affinity lock can be used programmatically to bind some code to a particular core. In that case, though, I'm not sure whether it also covers threads created during execution; I think taskset is easier to use and does all the work.
Check out OpenHFT/Java-Thread-Affinity, the most popular thread-affinity library for Java.
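A minimal sketch with that library (assuming the net.openhft:affinity artifact is on the classpath; note the lock pins only the calling thread, so worker threads would each need their own):

import net.openhft.affinity.AffinityLock;

public class PinnedRunner {
    public static void main(String[] args) {
        // Acquire a free CPU and pin the current thread to it
        AffinityLock lock = AffinityLock.acquireLock();
        try {
            // run the code under test here, pinned to a single core
        } finally {
            lock.release(); // give the CPU back
        }
    }
}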

Troubleshooting failed packer build

I am just getting started with Packer and have had several instances where my build fails, and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell; instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious whether there is a way to pause only after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with packer. Thankfully, packer build now has an option -on-error that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (dropped in the current directory) and figure out what is going on.
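In practice that looks something like this (the template name is a placeholder):

PACKER_LOG=1 packer build -debug template.json

-debug pauses between steps, and builders that generate an ephemeral SSH key write the private key to the current directory so you can connect while the machine is still up.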
Yeah, the way I handle this is to put a long sleep in a shell inline provisioner after the failing step (a sketch follows); then I can SSH onto the box and see what's up. Certainly the debug flag is useful, but if you're running the packer build remotely (I do it on Jenkins), you can't really sit there and hit the button.
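Something like the following as the last provisioner in the JSON template (the sleep length is arbitrary; make it long enough to poke around):

{
  "type": "shell",
  "inline": ["echo 'pausing for debug' && sleep 3600"]
}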
I do try to run tests on all the stuff I'm packing outside of the build - using the Chef provisioner, I've got kitchen tests all over everything before it gets packed. It's a royal pain to try to debug anything besides Packer itself during a packer run.
While looking up info on this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added features like this to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't been merged into main.
In another issue (https://github.com/mitchellh/packer/issues/1687), there was discussion of adding more features to --debug, but that seems to have stalled.
If a Packer build is failing, first check where the build process has got stuck, and do the checks in this sequence:
Are the boot commands the appropriate ones?
Is the preseed config OK?
If 1 and 2 are OK, the box has booted, and the next thing to check is the login: SSH keys, ports, ...
Finally, look for issues within the provisioning scripts.

Hudson build fails when run in browser but works from command line

I am setting up a new Hudson task (on WinXP) for a project which generates javascript files, and performs xslt transformations as part of the build process.
The Ant build fails on the XSL transformations when run from Hudson, but works fine when the same build on the same codebase (i.e. in Hudson's workspace) is run from the command line.
The failure message is:
line 208: Variable 'screen' is multiply defined in the same scope.
I have tried configuring Hudson both to invoke Ant directly and to use a batch script - both fail in Hudson.
I have tried in Firefox, IE6 and Chrome and have seen the same issue.
Can anyone suggest how we can workaround this problem with Hudson?
Problem solved.
Our build actually depends on JDK 1.4.2, and Hudson appears to run using 1.6. When I set Hudson to run as a service, it ran as my local user, which meant it picked up my 1.4.2 JAVA_HOME environment variable - and therefore worked.
I guess another possible solution is to configure Hudson to use 1.4.2 by default.
I would assume this is not an issue with Hudson directly, but with the build script and/or the environment itself.
Is your build script relying on certain environment variables being defined or, worse, on the job running from within a certain directory structure (i.e. it works if run from under /home/mash/blah but not from another directory like /tmp)? Is the build script referencing external files by relative paths?
These are the things I would look into. For environment variables, you can tell Hudson to pass them into Ant. For the other issues, you probably want to change your build script. Check the console output provided by Hudson, and maybe set Ant to print verbose/debug messages to get a better idea of the environment/file paths.
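One quick way to compare the two environments is to make the build print what it actually sees, e.g. with a throwaway Ant target like this (the target name is arbitrary):

<property environment="env"/>
<target name="print-env">
    <echo message="JAVA_HOME=${env.JAVA_HOME}"/>
    <echo message="java.version=${java.version}"/>
    <echo message="user.dir=${user.dir}"/>
</target>

Run it from both Hudson and the command line and diff the output; a mismatch like the 1.4.2-vs-1.6 JDK issue above shows up immediately.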