How to use sawtooth identity-tp processor - hyperledger-sawtooth

I am playing around with Hyperledger Sawtooth. I have installed Sawtooth on an Ubuntu machine, but the Identity transaction processor was not installed with it. How can I use the identity-tp command?

Go into the sawtooth-core/bin folder, where all the default TPs are. You will find build_xxx_identity-tp.
Start your validator and the Settings TP, then run the above shell script from bin.
You will see a log entry in your validator showing that the identity-tp is registered.
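A rough sketch of those steps from a source checkout; build_xxx_identity-tp is the answer's placeholder, so substitute whatever script name ls actually shows:
cd sawtooth-core/bin
ls build_*identity*
./build_xxx_identity-tp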

Install package python3-sawtooth-identity
To start a TP, including the Identity TP, just type it on the command line. For example,
/usr/bin/identity-tp -v -C tcp://localhost:4004
For Docker, you normally run the Identity TP in its own container, just like other transaction processors.
For more info, see https://sawtooth.hyperledger.org/docs/core/releases/latest/cli/identity-tp.html
Edit: as requested, here's the Identity Transaction Processor Specification:
https://sawtooth.hyperledger.org/docs/core/nightly/master/transaction_family_specifications/identity_transaction_family.html
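A minimal sketch of that install-and-run flow on Ubuntu, assuming the Sawtooth apt repository is already configured and a validator is listening on the default endpoint:
sudo apt-get install -y python3-sawtooth-identity
/usr/bin/identity-tp -v -C tcp://localhost:4004
The -v flag enables verbose logging and -C points the TP at the validator's component endpoint.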

Related

can't run geth command on windows10

I installed geth on Windows 10, but when I execute the command
geth version
I get the error "'geth' is not recognized as an internal or external command,
operable program or batch file."
Sounds like something to do with your environment variables and dependencies.
First, ensure that you have installed all dependencies correctly. Follow the installation steps here:
https://github.com/ethereum/go-ethereum/wiki/Installation-instructions-for-Windows
This will ensure that all environment paths are set correctly and that geth is configured into your system variables.
Hope this helps! Feel free to message me if you have any more trouble.
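If the dependencies are in place but the shell still can't find geth, a hedged sketch for Windows CMD (the install directory below is an assumption; point it at wherever geth.exe actually lives):
where geth
REM if nothing is printed, geth is not on PATH; append its install folder to the user PATH:
setx PATH "%PATH%;C:\Program Files\Geth"
REM open a new terminal so the updated PATH is picked up, then:
geth version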

Minishift Error While Downloading OC Binary Version

I am trying to install Minishift on Windows. However, I am facing the issue below while installing it. I have tried multiple versions of OpenShift.
Command used: minishift.exe start --vm-driver virtualbox
Console output:
-- Downloading OpenShift v3.9.0 checksums ... OK
Error starting the cluster: Error attempting to download and cache 'oc': Failed to validate hash - expected: 7ed04f7bc411056425d98aa6a10536fab15bdb569549446223f6ed22421ea4e6, actual: 705eb110587fdbd244fbb0f93146a643b24295cfe2410ff9fe67a0e880912663
Is there any way to skip the hash validation check?
There is currently no option to disable the hash validation check. However, there is a workaround for your issue: you can download the binary manually and put it into the Minishift home; on its next start, Minishift will not attempt to download it. (A command-line sketch follows these steps.)
Download oc v3.9.0 release for Windows from its release page
Extract
Move oc.exe binary to .minishift/cache/oc/v3.9.0/windows/oc.exe
Start Minishift
Similar steps will also apply for other platforms and versions of oc. You can search for all releases by tag at Origin release page.
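A hedged sketch of those steps in Windows CMD; %USERPROFILE%\.minishift is assumed to be the Minishift home, and the download location is an assumption:
mkdir %USERPROFILE%\.minishift\cache\oc\v3.9.0\windows
move %USERPROFILE%\Downloads\oc.exe %USERPROFILE%\.minishift\cache\oc\v3.9.0\windows\oc.exe
minishift.exe start --vm-driver virtualbox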
Reason for the failure:
I have checked both the checksum in the release and the actual sha256sum of the binary, and your sum is correct. Is it possible that the CHECKSUM file is cached by your proxy? What version of Minishift are you using?
If you are willing to invest some of your time, you can create an issue on Minishift so the team can take a deeper look into the problem.
You might receive a 403 Forbidden status from GitHub if your request exceeds the rate limit for your IP address. Instead of waiting for GitHub to reset the limit for your IP address, you can create a personal API token from your GitHub account.
Personal API Token generation URL: https://github.com/blog/1509-personal-api-tokens
You need to set the token as an environment variable.
For windows10: set MINISHIFT_GITHUB_API_TOKEN=<token_ID>
For Linux: export MINISHIFT_GITHUB_API_TOKEN=<token_ID>
Then run the command below in Windows CMD:
minishift.exe start --vm-driver virtualbox

Transaction Processor gossip in Hyperledger Sawtooth distributed mode

AFAIK, in Hyperledger Sawtooth I can add custom Transaction Processors, but I don't clearly understand whether I can add them dynamically, and how that would work.
For example, I have a working validator network with dynamic peering and want to add a new custom Transaction Processor to support a new transaction family. I could probably run a Docker container with the TP on some machines in the network, but I will often not be able to do that on all machines (which may be closed to me in production).
Thanks in advance.
You run the Identity TP just like any other Sawtooth Transaction Processor, on the command line. After installing the package python3-sawtooth-identity, type something like this on the command line:
/usr/bin/identity-tp -v -C tcp://localhost:4004
You can also automate it as a service.
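For example, a hedged sketch assuming the Ubuntu package ships a systemd unit named sawtooth-identity-tp (check with systemctl list-unit-files | grep sawtooth):
sudo systemctl start sawtooth-identity-tp
sudo systemctl enable sawtooth-identity-tp
The second command makes the TP start automatically at boot.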

Error when running 'embark run'

When I run the command 'embark run', I get the error message:
Running "deploy_contracts:development" (deploy_contracts) task
Warning: ==== can't connect to localhost:8101 check if an ethereum node is running Use --force to continue.
Error: ==== can't connect to localhost:8101 check if an ethereum node is running
Could you please give me some help with this?
Before you can run embark, you have to run an Ethereum RPC simulator; simply run:
$ embark simulator
Alternatively, you can run a real Ethereum node for development purposes:
$ embark blockchain
By default, embark blockchain will mine a minimum amount of ether and will only mine when new transactions come in. This is quite useful for keeping CPU usage low. The option can be configured in config/blockchain.yml.
You will see a geth node starting in the terminal. Then, open another terminal and type:
$ embark run
This will automatically deploy the contracts, update their JS bindings and deploy your DApp to a local server at http://localhost:8000
Note that if you update your code it will automatically be re-deployed, contracts included. There is no need to restart embark, refreshing the page on the browser will do.
See also newest embark tagged questions on Ethereum Stack Exchange for future reference.
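A compact, hedged sketch of that two-terminal workflow (the nc check is optional, and 8101 is simply the port from the error message above):
embark simulator        # terminal 1: RPC simulator (or embark blockchain for a real dev node)
nc -z localhost 8101 && echo "node is listening"    # optional sanity check before deploying
embark run              # terminal 2: deploys contracts and serves the DApp at http://localhost:8000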
In your embark project directory:
run $ embark blockchain and leave this running on your terminal.
Open a new terminal, cd <yourProject> and run $ embark run
You will now be up and running on your local host at http://localhost:8000

Unable to get cuda to work in tensorflow

I'm trying to use cuda to accelerate tensorflow. I'm running tensorflow using the docker image.
Firstly, when I launch the GPU image, there is a mismatch in the LD_LIBRARY_PATH environment variable:
~# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64:
root@d578acbbc2cd:~# ls /usr/local/
bin cuda cuda-7.0 etc games include lib man sbin share src
There's no nvidia directory there. When I try to run the convolutional.py demo, it can't initialise the cuda support:
# python models/image/mnist/convolutional.py
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.2.0-23-generic/modules.dep.bin'
E tensorflow/stream_executor/cuda/cuda_driver.cc:466] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:98] retrieving CUDA diagnostic information for host: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:106] hostname: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:131] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:242] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.68 Tue Dec 1 17:24:11 PST 2015
GCC version: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
"""
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:135] kernel reported version is: 352.68
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA:
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
It then goes on to train using cpu only.
# find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
So in the docker image, there's only the gnu cpu cuda implementation. No NVIDIA stuff. In the host ubuntu 15.10 session, I have libcuda.so installed:
$ find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
/usr/lib/i386-linux-gnu/libcuda.so
/usr/local/cuda-7.5/targets/x86_64-linux/lib/stubs/libcuda.so
So these seem to be stubs ... not sure why.
Is there some trick to getting this to work?
Try rebuilding the Docker image directly from the Tensorflow repository (i.e. don't rely on the image on the container registry) and use https://github.com/NVIDIA/nvidia-docker to run the container (the Docker command described in the Tensorflow documentation is not portable).
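A hedged sketch of that approach; the Dockerfile location is an assumption based on older TensorFlow source trees, and nvidia-docker (v1) is assumed to be installed so the host driver libraries get mounted into the container:
git clone https://github.com/tensorflow/tensorflow && cd tensorflow
docker build -t tensorflow-gpu -f tensorflow/tools/docker/Dockerfile.gpu .
nvidia-docker run -it tensorflow-gpu /bin/bash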
I had a similar problem, though not in docker. The libcuda.so in /usr/local/cuda/lib64/stubs was a broken sym link. When I searched for libcuda.so it only turned up a file in a lib32 folder.
It seems that the problem was how I originally installed the NVIDIA device driver. At some point in the driver install process you're given the option to install the lib32 drivers. I had thought this meant in addition to lib64 drivers so I selected it. Turns out it only installs lib32 and not lib64 drivers.
I reinstalled the NVIDIA device driver, this time not selecting the lib32 'option'. Now TensorFlow finds libcuda.so.
I had the same problem running TensorFlow on an Ubuntu machine after I upgraded my driver to 352.63 and 352.93. (I remember it working with 346.*, but when I try to install 346.*, it installs 352.* automatically for some reason.)
I finally figured out that it was caused by a permission issue. (I can run it as root.) So I changed the permissions of the libcuda.so.352.63 file to be executable by anyone, and it works well now.
Hope this will be helpful to those still struggling with this issue.
I didn't try the Docker one, but I guess it's also caused by the permission settings.
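A hedged sketch of that permission fix; the library path and driver version below are assumptions, so adjust them to whatever ls shows on your system:
ls -l /usr/lib/x86_64-linux-gnu/libcuda.so*
sudo chmod a+rx /usr/lib/x86_64-linux-gnu/libcuda.so.352.63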
Try this command
sudo apt-get install nvidia-modprobe
As mentioned here:
https://github.com/tensorflow/tensorflow/issues/394
and
http://kkjkok.blogspot.in/2016_08_01_archive.html
After I updated the NVIDIA driver to 378.09 on Ubuntu 14.10, I had the same error,
although all the rights for the lib files were set correctly.
Thanks to @PhoenixQ, I tried to run with sudo and it worked.
After that, I tried to run without sudo one more time and the error disappeared. I'm not sure what exactly happened, but maybe something was configured during the call with sudo that was not possible without sudo.
So the solution:
Try to run the same thing with sudo.
After this, try running without sudo. It worked for me.