Based on the JUnit 5 ConsoleLauncher docs, there doesn't seem to be a way to configure parallel execution. Does the ConsoleLauncher not support parallel execution yet, or should setting the config in junit-platform.properties be enough?
I tried using the ConsoleLauncher (with parallel mode set in junit-platform.properties), but the tests still seem to run sequentially. I am able to run them in parallel via Gradle, though.
I was able to get it to work by passing the configuration explicitly when invoking the ConsoleLauncher:
-Djunit.jupiter.execution.parallel.enabled=true
-Djunit.jupiter.execution.parallel.mode.default=concurrent
Ref: https://junit.org/junit5/docs/current/user-guide/#running-tests-config-params
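For completeness, here is a rough sketch of a full invocation with those flags passed as JVM system properties (the standalone jar name and class path are placeholders for your setup; the launcher also accepts a --config option for the same parameters). The -D flags have to come before -jar so the JVM, and therefore JUnit, can see them:
java -Djunit.jupiter.execution.parallel.enabled=true \
     -Djunit.jupiter.execution.parallel.mode.default=concurrent \
     -jar junit-platform-console-standalone.jar \
     --class-path build/classes/java/test \
     --scan-class-path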
AFAIK, in Hyperledger Sawtooth I can add custom Transaction Processors, but I don't clearly understand whether I can add them dynamically, and how that would work.
For example, I have a working validator network with dynamic peering and want to add a new custom Transaction Processor to support a new transaction family. I could probably run a Docker container with the TP on some machines in the network, but I often won't be able to do that on all machines (which may be closed to me in production).
Thanks in advance.
You run the Identity TP just like any other Sawtooth Transaction Processor, on the command line. After installing the python3-sawtooth-identity package, type something like this on the command line:
/usr/bin/identity-tp -v -C tcp://localhost:4004
You can also automate it as a service.
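For example, a minimal systemd unit sketch (the unit name and the validator endpoint are assumptions for your environment):
# /etc/systemd/system/sawtooth-identity-tp.service
[Unit]
Description=Sawtooth Identity Transaction Processor
After=network.target

[Service]
ExecStart=/usr/bin/identity-tp -v -C tcp://localhost:4004
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then enable and start it with: systemctl enable --now sawtooth-identity-tp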
I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious whether there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with packer. Thankfully, packer build now has an option -on-error that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (in the current dir) and figure out what is going on.
Yeah, the way I handle this is to put a long sleep in an inline shell provisioner after the failing step; then I can SSH onto the box and see what's up. The debug flag is certainly useful, but if you're running the packer build remotely (I do it on Jenkins), you can't really sit there and hit the button.
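A rough sketch of that workaround in a JSON template (the script name and sleep length are placeholders; note that Packer stops at the first failing provisioner, so the suspect step has to exit zero, e.g. with || true, for the sleep to be reached):
"provisioners": [
  {
    "type": "shell",
    "inline": ["/tmp/suspect-step.sh || true"]
  },
  {
    "type": "shell",
    "inline": ["echo 'pausing for debugging'", "sleep 3600"]
  }
]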
I do try and run tests on all the stuff I'm packing outside of the build - using the Chef provisioner I've got kitchen tests all over everything before it gets packed. It's a royal pain to try and debug anything besides packer during a packer run.
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't gotten merged into main.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first check where the build process has gotten stuck, and check in this sequence:
1. Are the boot commands the appropriate ones?
2. Is the preseed config OK?
3. If 1 and 2 are OK, the box has booted and the next thing to check is the login: SSH keys, ports, ... (see the sketch after this list)
4. Finally, check for any issues within the provisioning scripts.
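As a rough illustration of where those settings live in a template, here is a fragment for a virtualbox-iso style builder (all values are placeholders, and the usual iso_* settings are omitted):
"builders": [{
  "type": "virtualbox-iso",
  "boot_command": [
    "<esc><wait>",
    "install preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg <enter>"
  ],
  "http_directory": "http",
  "ssh_username": "packer",
  "ssh_password": "packer",
  "ssh_port": 22,
  "ssh_wait_timeout": "30m"
}]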
Is there a flag to make SGE output the machine that it finally dispatched a job to run on?
I looked through the man pages but couldn't pinpoint anything.
There are several possibilities:
1: While the job is running, you can use qstat -g t to list the nodes where your job(s) are running.
2: After the job has finished, qacct -j [jobid] shows information for each node the job was running on (see the example below).
3: On Linux, you could execute the command hostname (or mpirun hostname) from within the job script to print the respective nodes.
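For example (the job id is hypothetical):
qstat -g t -u $USER              # while the job is running: one line per task with its queue instance/host
qacct -j 12345 | grep hostname   # after the job has finished: the execution host from the accounting record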
My NAnt builds run fine locally on a developer machine, and locally on the command line of the Hudson server, but they will not run in my configured Hudson project.
The console output when I run a Build via the Hudson web UI is similar to the following:
Started by user anonymous
[workspace] $ sh -xe C:\WINDOWS\TEMP\hudson8104357939096562606.sh
C:\WINDOWS\TEMP\hudson8104357939096562606.sh: fork failed: no error [1]
Archiving artifacts
Finished: SUCCESS
I have another project configured properly that runs fine so I know the NAnt plugin is setup properly in Hudson, and that NAnt is on the system path.
Can anyone suggest possible causes as to why this build won't run?
The problematic build may be configured to Execute a Shell script, rather than Execute a Windows Batch file.
Copy the command from the existing build step (the Execute Shell Script) and remove the step. Then add a new step to Execute a Windows Batch File and paste the command.
Trigger the build and observe the results.
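For illustration, the pasted command in the Windows batch step might look something like this (the build file and target names are placeholders for your project; since NAnt is already on the system path, no full path is needed):
nant -buildfile:MyProject.build clean build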
(I asked and answered this since it took me quite a while to figure out how I had misconfigured this particular build. Hopefully it'll save time or give ideas to other people troubleshooting automation.)