Transaction Processor gossip in Hyperledger Sawtooth distributed mode - hyperledger-sawtooth

AFAIK, in Hyperledger Sawtooth I can add custom Transaction Processors, but I don't clearly understand whether I can add them dynamically, and how that would work.
For example, I have a working validator network with dynamic peering and want to add a new custom Transaction Processor to support a new transaction family. I could probably run a Docker container with the TP on some machines in the network, but I will often not be able to do that on all machines (which may be closed to me in production).
Thanks in advance.

You run the Identity TP just like any other Sawtooth Transaction Processor. After installing the package python3-sawtooth-identity, type something like this on the command line:
/usr/bin/identity-tp -v -C tcp://localhost:4004
You can also automate it as a service.
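For example, a minimal sketch with systemd, assuming the package installed a unit named sawtooth-identity-tp (that name is an assumption; check /lib/systemd/system for the actual unit files your package shipped):
sudo systemctl enable sawtooth-identity-tp
sudo systemctl start sawtooth-identity-tp
# Confirm the service is up and registered with the validator
sudo systemctl status sawtooth-identity-tp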

Related

How can I upgrade to the latest Operator Lifecycle Manager on OpenShift 3.11?

I've found version 0.6.0 of the Operator Framework's Operator Lifecycle Manager (OLM) to be lacking and see that 0.12.0 is available with lots of new updates. How can I upgrade to that version?
Also, what do I need to consider regarding this upgrade? Will I lose any integrations, etc.?
One thing that needs to be considered is that in OpenShift 3, OLM runs from a namespace called operator-lifecycle-manager. In future versions that becomes simply olm. Some things to consider:
Do you have operators running right now and if you make this change will your catalog names change? This will need to be reflected in your subscriptions.
Do you want to change any of the default install configuration?
Look into values.yaml to configure your OLM
Look into the yaml files in step 2 and adjust them if needed.
1) First, turn off OLM 0.6.0, or whatever version you might have.
You can delete that namespace, or, as I did, stop the deployments within it and scale the replica sets down to 0 pods, which effectively turns OLM 0.6.0 off.
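A sketch of that scale-down, assuming the old OLM lives in the operator-lifecycle-manager namespace mentioned above:
# Scale every OLM deployment in the old namespace down to zero pods
oc scale deployment --all --replicas=0 -n operator-lifecycle-manager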
2) Install OLM 0.12.0
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/crds.yaml
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/olm.yaml
alt 2) If you'd rather just install the latest from the repo's master branch:
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
So now you have OLM 0.12.0 installed. You should be able to see in the logs that it picks up where 0.6.0 left off. You'll need to start learning about OperatorGroups, though, as that concept is new and will start impacting your operation of operators pretty quickly. The Cluster Console's ability to show your catalogs does seem to be lost, but you can still view that information from the command line with oc get packagemanifests.
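To sanity-check the new install before moving on, watch the operator pods come up in the new olm namespace and make sure the catalogs are being served:
oc get pods -n olm
oc get packagemanifests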

How to use sawtooth identity-tp processor

I am playing around with hyperledger-sawtooth. I have installed Sawtooth on an Ubuntu machine, but the identity transaction processor is not installed with it. So how can I use the identity-tp command?
Go into the sawtooth-core/bin folder, where all the default TPs are. You will find build_xxx_identity-tp.
Start your validator and the Settings TP, then run the above shell script from bin.
You will see a log message in your validator that identity-tp is registered.
Install package python3-sawtooth-identity
To start a TP, including the Identity TP, just type it on the command line. For example,
/usr/bin/identity-tp -v -C tcp://localhost:4004
For Docker, you normally run the Identity TP in its own container, just like other transaction processors.
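For example, a quick sketch with docker run; the image name, network name, and validator hostname here are assumptions, so adjust them to your own compose setup:
# Assumes a validator container reachable as 'validator' on the same Docker network
docker run --rm --network sawtooth-net hyperledger/sawtooth-identity-tp identity-tp -v -C tcp://validator:4004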
For more info, see https://sawtooth.hyperledger.org/docs/core/releases/latest/cli/identity-tp.html
Edit: as requested, here's the Identity Transaction Processor Specification:
https://sawtooth.hyperledger.org/docs/core/nightly/master/transaction_family_specifications/identity_transaction_family.html

Limit to single core for unit test

I have an issue where my unit test currently passes on my dev machine (a multi-core machine), but the same code fails in pre-production (a single-core machine).
Is it possible to somehow limit the number of cores available to a unit test, to get an equivalent environment on my dev machine? Unfortunately, I'm not able to run the unit tests on the pre-prod machine.
There are several ways to do that.
Use the taskset command
The taskset command binds all threads of a particular process to some subset of cores. Using it is easy: taskset -c 0 <your command>
This will bind every thread to the first CPU.
So in order to do this you need to be able to run your unit tests from the command line. If you use a build tool, you just put your command after taskset, unquoted. For example:
taskset -c 0 mvn clean compile test
If you run your tests via an IDE, you can check the full command, which is printed when you run the test. In that case it will look something like this (the example path is from a Windows machine, but note that taskset itself is Linux-only):
taskset -c 0 "C:\Program Files\Java\jdk1.8.0_73\bin\java" -cp classpath com.intellij.rt.execution.junit.JUnitStarter name_of_test
More about the taskset command
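If you want to confirm the pinning while the tests run, taskset can also query a live process; the PID below is a placeholder for your JVM's actual PID:
# Print the CPU affinity list of a running process
taskset -cp 12345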
Use affinity locks
An affinity lock can be used programmatically to bind some code to a particular core. But in that case I'm not sure whether it can also bind threads that are newly created while the code runs. I think taskset is easier to use and does all the work.
Check out OpenHFT/Java-Thread-Affinity, as it's the most popular affinity-lock tool for Java.

Troubleshooting failed packer build

I am just getting started with Packer, and have had several instances where my build fails and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious whether there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with Packer. Thankfully, packer build now has an -on-error flag that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
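To make that concrete, assuming your template is named template.json (a placeholder for your own file):
packer build -on-error=ask template.json
With ask, a failed step leaves the box up until you decide whether to clean up, abort, or retry the step.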
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (in the current dir) and figure out what is going on.
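A sketch of that flow; the key file name, user, and address are placeholders for what Packer's debug output actually shows:
# Packer prints the box's address and drops the temporary private key in the current directory
ssh -i ./packer_debug_key.pem ubuntu@203.0.113.10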
Yeah, the way I handle this is to put a long sleep in an inline script provisioner after the failing step; then I can SSH onto the box and see what's up. The debug flag is certainly useful, but if you're running the packer build remotely (I do it on Jenkins), you can't really sit there and hit the button.
I do try to run tests on all the stuff I'm packing outside of the build - using the Chef provisioner I've got kitchen tests all over everything before it gets packed. It's a royal pain to try to debug anything besides Packer during a packer run.
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't gotten merged into main.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first check where the build process has gotten stuck, and do the checks in this sequence:
1) Are the boot commands the appropriate ones?
2) Is the preseed config OK?
3) If 1) and 2) are OK, the box has booted, and the next thing to check is the login: SSH keys, ports, ... (a quick check for this is sketched below)
4) Finally, look for any issues within the provisioning scripts.
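For step 3), a quick connectivity test from the host can rule out port problems; the address and port below are placeholders for whatever your builder actually forwards:
# Is anything answering on the guest's forwarded SSH port?
nc -zv 127.0.0.1 2222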

Run a task before svn check-out

I would like to run a task (stopping a running VM) before Jenkins starts the check-out.
The reason: the VM blocks access to some files I have to update via Subversion.
Is this possible?
There are two plugins for controlling virtual machines, depending on whether you are using VirtualBox or VMWare.
I'm quite sure you can configure the pre-build step to be "Suspend", at least for VMware.
VMware Plugin
VirtualBox Plugin
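If you'd rather script it than rely on a plugin, a pre-build shell step can suspend the VM directly; a sketch assuming VirtualBox and a VM named dev-vm (a placeholder):
# Save the VM's state so it releases its file locks before the checkout
VBoxManage controlvm "dev-vm" savestate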
Edit your project and set:
Configure M2 Extra Build Steps --> Execute shell --> Type in whatever you'd like to do. For example:
# Wipe the local repository before each build.
rm -rf $WORKSPACE/.repository
Have a look at How do I trigger another job from hudson as a pre-build step? I think this has been asked there before.