Starting with Sawtooth XO transactions - hyperledger-sawtooth

I have set up Hyperledger Sawtooth on Docker, and I am trying to test Sawtooth XO transactions with the following commands:
uname@uname:~/sawtooth$ docker exec -it sawtooth-shell-default bash
root@5279e5a413c1:/# xo create one
but I am getting the following error:
Error: Failed to connect to http://127.0.0.1:8008/batches: HTTPConnectionPool(host='127.0.0.1', port=8008): Max retries exceeded with url: /batches (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
FYI, these commands work for me.
From the shell container, this gives me blocks:
curl http://rest-api:8008/blocks
From my host, these work as expected:
curl http://localhost:8008/blocks
curl http://127.0.0.1:8008/blocks
What is wrong with this?
My YAML file is the default one; you can find it here

If you are using Docker, you have to pass the REST API URL to the xo commands explicitly, like this:
xo create one --url http://rest-api:8008
I found this after Ashish gave this hint at
https://chat.hyperledger.org/channel/sawtooth
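The root cause is Docker networking: inside the shell container, 127.0.0.1 is the shell container itself, not the rest-api container, so the client has to be pointed at the rest-api service by name. A minimal sketch of the workflow; rest-api is the service name from the default docker-compose file, so adjust it if yours differs:
# From the host, get a shell in the client container:
docker exec -it sawtooth-shell-default bash
# Every xo subcommand that talks to the REST API needs the --url flag, not just create:
xo create one --url http://rest-api:8008
xo list --url http://rest-api:8008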

Pods stuck at ContainerCreating

Previously my MySQL pod was stuck at Terminating status, and then I tried to force delete it using a command like this:
kubectl delete pods <pod> --grace-period=0 --force
Later I tried helm upgrade again, but my pod got stuck at ContainerCreating status, with these events from the pod:
17s Warning FailedMount pod/db-mysql-primary-0 MountVolume.SetUp failed for volume "pvc-f32a6f84-d897-4e35-9595-680302771c54" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
17s Warning FailedMount pod/db-mysql-secondary-0 MountVolume.SetUp failed for volume "pvc-61fc6eda-97fa-455f-ac2c-df8ebcb90f1c" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
Can anyone please help me resolve this issue? Thanks a lot.
When you run the command
kubectl delete pods <pod> --grace-period=0 --force
you ask Kubernetes to forget the Pod, not to delete it. You have to be careful while using this command: make sure that the Pod's containers are not still running on the host, especially when they mount a PVC. Probably the containers are still running and attached to the PVC; a quick check is sketched below.
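A minimal way to verify, assuming you can get a shell on the node that ran the Pod; crictl applies to containerd-based nodes, docker ps to Docker-based ones:
# List all containers, running or exited, that belong to the MySQL pod:
crictl ps -a | grep mysql
# On a Docker-based node the equivalent is:
docker ps -a | grep mysql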
pool-product-8jd40 0
spec:
drivers: null
and on some of my node pools the CSI driver is not ready (drivers: null); it is supposed to be 1 (ready).
(Sorry, I can't attach the image yet.)
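For anyone hitting the same drivers: null symptom, here is a rough diagnostic sketch; the namespace and label are assumptions based on a typical DigitalOcean CSI deployment, so adjust them to your cluster:
# Show which nodes have the CSI driver registered (DRIVERS column):
kubectl get csinode
# Inspect the CSI node plugin pod running on the affected node (label/namespace assumed):
kubectl -n kube-system get pods -l app=csi-do-node -o wide
# Deleting the plugin pod lets its DaemonSet recreate it, which usually
# re-registers the driver and recreates the missing csi.sock:
kubectl -n kube-system delete pod -l app=csi-do-node --field-selector spec.nodeName=pool-product-8jd40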

Powershell remote fails with "The SSH client session has ended with error message: subsystem request failed on channel 0."

I am running Windows 10 Pro on one of my workstations with SSH enabled. I can ssh from my Mac to the Windows machine successfully, but when I try the command
New-PSSession -HostName xxxx -UserName yyyy
I receive the following message after entering my password: The background process reported an error with the following message:
The SSH client session has ended with error message: subsystem request failed on channel 0.
Sorry, I also ran into the same issue and couldn't get it done.
The best way, I think, is to run it using ssh directly.
E.g.:
ssh user@ip-or-hostname "quser"
I also have this issue... using ssh directly works, but I was trying to do something like this:
$s = New-PSSession -ComputerName myComputer -UserName userName -Port sshPort
Invoke-Command -Session $s -ScriptBlock {
cd /pathToDockerCompose
docker-compose down
docker-compose up -d
}
So if someone has an alternative way to do this, I'm all ears :-).
Edit: by the way, I want it to be called from a Windows machine and executed on Linux.
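Until then, a plain ssh one-liner can do the same thing from Windows (the host and path are placeholders); note also that SSH-based remoting uses -HostName rather than -ComputerName, so the snippet above would need that change too:
# OpenSSH ships with recent Windows 10, so this runs from PowerShell or cmd;
# it restarts the compose stack on the Linux box in one shot:
ssh user@linux-host "cd /pathToDockerCompose && docker-compose down && docker-compose up -d"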
I had a similar situation occur on one of my machines: SSH worked, but SSHing through PowerShell with Enter-PSSession gave the same "subsystem request failed on channel 0" error. It turned out I didn't have the PowerShell subsystem registered in the sshd_config file for OpenSSH.
See https://lazyadmin.nl/powershell/powershell-ssh/ for additional info and the same change I made to get it working.
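For reference, a minimal sketch of that change on a Windows host, assuming PowerShell 7 at its default install path (adjust to your version):
# In C:\ProgramData\ssh\sshd_config, register the subsystem; the 8.3 short
# path avoids the space in "Program Files", which sshd_config cannot parse:
Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -NoLogo
# Restart the service from an elevated PowerShell prompt to apply it:
Restart-Service sshd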

Getting curl: (52) Empty reply from server when sending a curl command to the HTTP address of a Docker container running an AutoML model

I am trying to send a prediction request as JSON to an AutoML model running in a Docker container. I exported the model from the AutoML UI and stored it in Google Cloud Storage.
I am running the following to launch the Docker image:
CPU_DOCKER_GCS_PATH="gcr.io/automl-vision-ondevice/gcloud-container-1.12.0:latest"
YOUR_MODEL_PATH="gs://../../saved_model.pb"
PORT=8501
CONTAINER_NAME="my_random_name"
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}
When I run this command, I get the following error, but the program keeps running:
2019-05-09 11:29:06.810470: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /tmp/mounted_model/ for servable default
I am running the following command to send the prediction request.
curl -d @/home/arkanil/saved_model/cloud_output.json -X POST http://localhost:8501/v1/models/default:predict
This returns
curl: (52) Empty reply from server.
I have tried to follow the steps in the Google docs below:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial#install-docker
https://docs.docker.com/install/linux/docker-ce/debian/
but I am still getting
curl: (52) Empty reply from server.
The expected result is a JSON response with the prediction scores from the AutoML model running in the Docker container.
It seems you are launching the container with the model path pointing at Google Storage; the -v mount needs a local path. You should download saved_model.pb from GCS to your local computer and pass its local path in the YOUR_MODEL_PATH variable.
To download the model, use:
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
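Putting the answer together, the corrected launch sequence would look roughly like this; the local directory and bucket path are placeholders, and the container expects the directory holding saved_model.pb to be mounted at /tmp/mounted_model/0001:
# Hypothetical local directory for the exported model:
YOUR_LOCAL_MODEL_PATH=/home/arkanil/automl_model
mkdir -p ${YOUR_LOCAL_MODEL_PATH}
# Copy the model out of GCS (the bucket path is a placeholder):
gsutil cp gs://your-bucket/path/saved_model.pb ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
# Mount the local directory, not the gs:// URI; other variables as in the question:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 \
  -v ${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
  -t ${CPU_DOCKER_GCS_PATH}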

Orion Context Broker functional test failure

I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then I ran the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint on what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log I understood the mistake I made: I was running the functional tests with sudo, so secure_path was being used instead of my PATH variable.
So in the end, running the functional tests with the command below solved the issue for me.
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file with:
$ sudo visudo
and modifying the secure_path value.
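For completeness, the sudoers edit is a one-line change; the directory below is an assumption about where INSTALL_DIR=~ places accumulator-server.py, so substitute the real location on your machine:
# In the file opened by 'sudo visudo', extend secure_path with the
# directory that contains accumulator-server.py:
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/youruser/bin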

Connection Timeout at "./rebar get-deps" / compiling EJabberd

I am trying to compile ejabberd on CentOS 6. I am following the steps mentioned at https://www.process-one.net/docs/ejabberd/guide_en.html#htoc12
However, this aborts with connection-timeout error while executing "make".
Following is the error snippet from the command prompt:
[root@CentOS-6-64-EN ejabberd-15.04]# make
rm -rf deps/.got
rm -rf deps/.built
/usr/lib64/erlang/bin/escript rebar get-deps && :> deps/.got
==> rel (get-deps)
==> ejabberd-15.04 (get-deps)
Pulling p1_cache_tab from {git,"git://github.com/processone/cache_tab",
"cca096330ce39e8b56fe0e0c478df1ff452e7751"}
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: git clone -n git://github.com/processone/cache_tab p1_cache_tab failed with error: 128 and output:
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: 'get-deps' failed while processing /root/Desktop/eJabberd/ejabberd-15.04: rebar_abort
make: *** [deps/.got] Error 1
On trying the command "./rebar get-deps", I get the same connection timeout error.
My network connectivity is fine, and it seems the GitHub link is broken. Please help!
You should try replacing the dependency links to GitHub with https:// instead of git://.
It should fix your issue.
We will check the project to make sure all our dependencies use the https URL scheme instead of git://.
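If editing each dependency URL in rebar.config by hand is impractical, a single Git setting can rewrite the scheme for every GitHub dependency at once; this is a generic Git feature, not anything ejabberd-specific:
# Fetch over https:// whenever a git:// GitHub URL is requested, so
# './rebar get-deps' works without touching rebar.config:
git config --global url."https://github.com/".insteadOf git://github.com/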