How can I publish my node on the network using a Hyperledger Sawtooth application?

I am new to Sawtooth. Here is the code I am using:
https://github.com/hyperledger/education/tree/master/LFS171x/sawtooth-material/sawtooth-tuna
I have found some links regarding this:
https://lists.hyperledger.org/pipermail/hyperledger-stl/2018-January/000146.html
Here is the course guide for Sawtooth:
Hyperledger course guide link
I have already asked a question on GitHub:
https://github.com/hyperledger/education/issues/18

To connect two nodes, you only need a validator.toml file inside /etc/sawtooth.
If this file is not there, create it:
$ cd /etc/sawtooth
$ sudo touch validator.toml
Remember to execute the line below so the validator, which runs under the sawtooth group, can read the file:
$ sudo chown root:sawtooth validator.toml
Sample validator.toml file contents:
# Set the network and component endpoints
bind = [
  "network:tcp://127.0.0.1:8800",
  "component:tcp://127.0.0.1:4004"
]
# The type of peering approach the validator should take
peering = "static"
# Advertised network endpoint
endpoint = "tcp://127.0.0.1:8800"
# Uris to initially connect to the validator network
seeds = ["tcp://127.0.0.1:8801"]
# A list of peers to attempt to connect to
peers = ["tcp://127.0.0.1:8801"]
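
For the second node in a two-node setup on the same machine, the configuration would mirror the one above. A minimal sketch (the swapped port numbers are an assumption based on the sample values used here):

# Second validator: ports mirror the first node's sample above
bind = [
  "network:tcp://127.0.0.1:8801",
  "component:tcp://127.0.0.1:4005"
]
peering = "static"
endpoint = "tcp://127.0.0.1:8801"
seeds = ["tcp://127.0.0.1:8800"]
peers = ["tcp://127.0.0.1:8800"]

With both files in place, each validator should find the other through the static peer list.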

Related

Why is the checkout of a private repository on GitHub Actions returning "Error : fatal: could not read Username for 'https://github.com'"?

The project's local development environment makes it mandatory to have a .npmrc file with the following content:
registry=https://registry.npmjs.org/
@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
Hence, any client properly authenticated against the GitHub Packages registry can install our private npm packages (hosted for free on the GitHub registry) by running:
npm install @my-organization/our-package
Ok, it works on my local development environment.
Now, I am building a Continuous Integration process with GitHub Actions which is a different but similar challenge. I have this on my .yaml file:
- name: Create .npmrc for token authentication
  uses: healthplace/npmrc-registry-login-action@v1.0
  with:
    scope: '@my-organization'
    registry: 'https://npm.pkg.github.com'
    # Every user has a GitHub Personal Access Token (PAT) to
    # access NPM private repos. The build on GitHub Actions is
    # symmetrical to what every developer on the project has to
    # face to build the application on their local development
    # environment. Hence, GitHub Actions also needs a token! But
    # it is NOT SAFE to insert the text of a real token in this
    # yml file. Thus, the institutional workaround is to insert
    # the `{{secret}}` below, which is set in the project
    # settings on GitHub!
    auth-token: ${{secrets.my_repo_secret_key_which_is_not_being_shared}}
On GitHub, under settings -> secrets -> actions -> "add secret", I set the secret value to the same content I have in my .npmrc.
I was expecting it to work. Unfortunately, an error message is retrieved:
Error: fatal: could not read Username for 'https://github.com': terminal prompts disabled
Why is that so?
I made the mistake of adding the entire content of my .npmrc as the secret value.
That is wrong: GitHub already knows some of it, such as the scope @my-organization.
Hence, the solution is to add only the following snippet (using the example provided in the question):
your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
And it works as expected :)
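In other words, the action then reconstructs an .npmrc on the runner roughly equivalent to the local one. A sketch of the expected lines (the exact output of this particular action is an assumption):

@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=<token-from-the-secret>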

Using the AWS CLI without a home directory

I need to use the AWS CLI on an OpenShift cluster that is quite restricted: the home directory is set to /, and the user in the container does not have permission to write to /.
The only directory writeable by that user is /tmp. Now I need to use the AWS CLI from within a pod of this OpenShift cluster. I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I placed a credentials file and a config file in /tmp.
When running aws configure list-profiles with this setup, only the one profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by the AWS CLI.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the CLI as a parameter, or something similar?
Instead of configuring files for the AWS CLI, I would assume you could set the following 2 environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and issue your CLI commands immediately.
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno#pop-os ~> aws cloudformation list-stacks --region us-east-2
{
    "StackSummaries": []
}
To answer this part:
So it looks to me like AWS_CONFIG_FILE is not respected by aws cli.
The AWS CLI does respect this.
You can specify a non-default location for the config file by setting
the AWS_CONFIG_FILE environment variable to another local path.
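Regarding the original symptom, a minimal sketch of the /tmp-based setup (the paths and profile name are illustrative). Note that the config file needs the "profile " prefix in its section headers, while the credentials file must not have it; mixing those up produces exactly the behavior described, where a profile shows up from one file but not the other:

export AWS_CONFIG_FILE=/tmp/aws_config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws_credentials

# Config file: section header uses the "profile " prefix
cat > /tmp/aws_config <<'EOF'
[profile restricted]
region = us-east-2
output = json
EOF

# Credentials file: section header is the bare profile name
cat > /tmp/aws_credentials <<'EOF'
[restricted]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

aws configure list-profiles   # should now print: restricted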

EB: Trigger container commands / deploy scripts on configuration change

I am running my web server on Elastic Beanstalk and using Papertrail for logging. I am using the official .ebextensions script to set up Papertrail during deployment, but I have a problem. I use environment variables as part of the hostname that remote_syslog sends to Papertrail. This works fine during deployment, when the 01_set_logger_hostname container command is triggered, but I run into problems whenever I change environment variables by modifying the environment's configuration: an eb config call only restarts the application server and does not run any of the deployment scripts, including the .ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on Amazon Linux 2 based environments with configuration deployment platform hooks.
For example, you can add a shell script .platform/confighooks/predeploy/predeploy.sh that will run on every configuration change; a sketch follows below. Make sure the file is marked executable in git, or Elastic Beanstalk will give you a permission denied error.
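A minimal sketch of such a hook, reusing the hostname logic from the question (the get-config call and the systemd unit name are assumptions to verify against your environment):

#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# Runs on every configuration deployment (e.g. triggered by `eb config`)
logger_config="/etc/log_files.yml"
# On Amazon Linux 2, environment properties can be read with get-config
some_variable=$(/opt/elasticbeanstalk/bin/get-config environment -k SOME_VARIABLE)
instid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
myhostname="${some_variable}_${instid}"
if [ -f "$logger_config" ]; then
  # Sub the hostname and restart the log shipper so it picks up the change
  sed -i "s/hostname:.*/hostname: $myhostname/" "$logger_config"
  systemctl restart remote_syslog   # assumption: remote_syslog runs as a systemd unit
fi

And mark it executable so git preserves the permission bit:
git update-index --chmod=+x .platform/confighooks/predeploy/predeploy.sh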

Framework with ID x does not exist on slave with ID y

I keep getting this error on my Marathon dashboard:
Framework with ID 'a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001' does not exist on slave with ID '9959ba51-f6f7-448f-99d2-289767f12179-S2'.
The path to make this error occur is to click "Sandbox" next to a task on the main marathon dashboard.
The path looks something like this
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/frameworks/a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001/executors/rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3/browse
However, if I go to the slave through the slave panel, and click the framework from there, I am able to access the sandbox. The link in this case looks like the following
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/browse?path=%2Ftmp%2Fmesos%2Fslaves%2Fc223b6b1-cef8-4599-8cea-b402bf20afc5-S0%2Fframeworks%2F20160108-205802-16842879-5050-1210-0001%2Fexecutors%2Frabbitmq.91b8bbf6-ceba-11e5-8047-0242ffdabb3e%2Fruns%2Fc66eb4d5-ea6d-451d-982f-6a0d29b25441
Any ideas on what I have misconfigured?
The Mesos Web UI does not proxy logs through the mesos-master (although that would be nice). Basically, you need to be able to resolve the slave's hostname from your browser (computer), and port 5051 needs to be open for you:
$ nc -z -w5 mesos.dev.internal 5051; echo $?
0 # port is open
It's not a good idea to leave Mesos ports open to the public, so you can either:
connect via VPN
whitelist your public IP on all slaves
use CLI instead of Web UI
Using the CLI is quite easy once you set the master's URI. You can install it with:
pip install mesos.cli mesos.interface
Then you can list all tasks using mesos ps, or fetch stdout:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3
and stderr:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3 stderr
Note that mesos-cli is no longer developed; similar features, and much more, are available in Mesosphere's DC/OS CLI.

How to accept connections for ipython from other computers?

I run IPython 0.12.1 on Ubuntu 12.04. You can run it in the browser using the notebook interface by running:
ipython notebook --pylab
Configuration files can be found in ~/.config/ipython/profile_default/. It seems that the connection parameters for every kernel are placed in ~/.config/ipython/profile_default/security/kernel-4e424cf4-ba44-441a-824c-c6bce727e585.json. Here is the content of this file (new files are created as you start new kernels):
{
    "stdin_port": 54204,
    "ip": "127.0.0.1",
    "hb_port": 58090,
    "key": "2a105dd9-26c5-40c6-901f-a72254d59876",
    "shell_port": 52155,
    "iopub_port": 42228
}
It's rather self-explanatory, but how can I set up a server with a permanent configuration, so that I can use the notebook interface from other computers on the LAN?
If you are using an old version of the notebook, the following could still apply. For new versions see the other answers below.
Relevant section of the IPython docs
The Notebook server listens on localhost by default. If you want it to be visible to all machines on your LAN, simply instruct it to listen on all interfaces:
ipython notebook --ip='*'
Or a specific IP visible to other machines:
ipython notebook --ip=192.168.0.123
Depending on your environment, it is probably a good idea to enable HTTPS and a password when listening on external interfaces.
If you plan on serving publicly a lot, then it's also a good idea to create an IPython profile (e.g. ipython profile create nbserver) and edit the config accordingly, so that all you need to do is run:
ipython notebook --profile nbserver
to load all your IP/port/SSL/password settings.
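For reference, a minimal sketch of the relevant settings in the profile's config file (~/.config/ipython/profile_nbserver/ipython_notebook_config.py, following the directory layout from the question; the password hash and certificate path are placeholders):

c = get_config()

# Listen on all interfaces instead of only localhost
c.NotebookApp.ip = '*'
c.NotebookApp.port = 8888
c.NotebookApp.open_browser = False

# Generated with: from IPython.lib import passwd; passwd()
c.NotebookApp.password = u'sha1:your-salt:your-hash'

# Optional, but recommended on external interfaces: serve over HTTPS
c.NotebookApp.certfile = u'/path/to/mycert.pem'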
The accepted answer/information is for an old version. Want to enable remote access to your newer Jupyter Notebook? I've got you covered.
First, generate a config file if you don't have it already:
jupyter notebook --generate-config
Note the output of this command: it tells you where the jupyter_notebook_config.py file was generated (if you already have one, it asks whether you would like to overwrite it with the default config). Edit the following line:
## The IP address the notebook server will listen on.
c.NotebookApp.ip = '0.0.0.0' # Any ip
For added security, type in a python/IPython shell:
from notebook.auth import passwd; passwd()
You will be asked to input and confirm a password string. Copy the contents of the string, which should be of the form type:salt:hashed-password. Find and edit the lines as follows:
## Hashed password to use for web authentication.
#
# To generate, type in a python/IPython shell:
#
# from notebook.auth import passwd; passwd()
#
# The string should be of the form type:salt:hashed-password.
c.NotebookApp.password = 'type:salt:the-hashed-password-you-have-generated'
## Forces users to use a password for the Notebook server. This is useful in a
# multi user environment, for instance when everybody in the LAN can access each
# other's machine through ssh.
#
# In such a case, serving the notebook server on localhost is not secure since
# any user can connect to the notebook server via ssh.
c.NotebookApp.password_required = True
## Set the Access-Control-Allow-Origin header
#
# Use '*' to allow any origin to access your server.
#
# Takes precedence over allow_origin_pat.
c.NotebookApp.allow_origin = '*'
(Re)start your jupyter notebook, voila!