How to create Hyperledger Sawtooth network keys - hyperledger-sawtooth

I am setting up a Hyperledger Sawtooth network. In /etc/sawtooth/validator.toml.example, I saw the following:
# A Curve ZMQ key pair are used to create a secured network based on side-band
# sharing of a single network key pair to all participating nodes.
# Note if the config file does not exist or these are not set, the network
# will default to being insecure.
network_public_key = 'wFMwoOt>yFqI/ek.G[tfMMILHWw#vXB[Sv}>l>i)'
network_private_key = 'r&oJ5aQDj4+V]p2:Lz70Eu0x#m%IwzBdP(}&hWM*'
Can anybody tell me how to create another keypair?

These are the ZMQ message keys used to securely communicate with other nodes.
If you've already installed Sawtooth, python3 and python3-zmq should already be installed and available on your system. Here's an example that creates the keypair in Python:
import zmq
# generate a new CurveZMQ keypair; both values are Z85-encoded
(public, secret) = zmq.curve_keypair()
print("network_public_key =", public.decode("utf-8"),
      "\nnetwork_private_key =", secret.decode("utf-8"))
Alternatively, you can use a compiled binary tool:
$ sudo apt-get install g++ libzmq3-dev
$ wget https://raw.githubusercontent.com/zeromq/libzmq/master/tools/curve_keygen.cpp
$ g++ curve_keygen.cpp -o curve_keygen -lzmq
$ ./curve_keygen
Copy the public key output to the network_public_key field and the private key output to the network_private_key field in /etc/sawtooth/validator.toml.
The above was from my Sawtooth FAQ at
https://sawtooth.hyperledger.org/faq/validator/#how-do-i-generate-the-network-public-key-and-network-private-key-in-validator-toml

Related

Error installing artifactory-oss helm chart in openshift

I am trying to install Artifactory OSS in an OpenShift cluster. I am using this Helm chart: https://charts.jfrog.io/artifactory-oss-107.39.4.tgz (warning: I am very new to OpenShift and on a steep learning curve).
I am running the Helm chart as the OpenShift cluster-admin account.
However, I am getting this error:
pods "artifactory-artifactory-nginx-5c66b8c948-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{107}: 107 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 104: must be in the ranges: [1000970000, 1000979999], spec.containers[0].securityContext.runAsUser: Invalid
I think it is an OpenShift permissions error, in that it requires a more permissive security context constraint. However, given that I am running as cluster-admin, I find that a little surprising.
Can anyone suggest how to resolve this issue and get Artifactory OSS running in OpenShift?
Thanks in advance !
--
I tried passing some options to set the uid and gid.
I started with this:
helm upgrade --install artifactory --set artifactory.uid=1001010042,artifactory.gid=1001010042,nginx.uid=1001010042,nginx.gid=1001010042,artifactory.masterKey=${MASTER_KEY},artifactory.joinKey=${JOIN_KEY},artifactory.postgresql.postgresqlPassword=$POSTGRES_PASSWORD --namespace artifactory jfrog/artifactory-oss
The options should have set the uids and gids, but I still got the error below. It seems the Helm chart ignores attempts to override those values.
pods "artifactory-artifactory-nginx-5c66b8c948-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{107}: 107 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 104: must be in the ranges: [1000930000, 1000939999], spec.containers[0].securityContext.runAsUser: Invalid
Regarding the JFrog Artifactory OSS Helm chart, its documentation Installing Artifactory points out some prerequisites.
When installing Artifactory, you must run the installation as a root user or provide sudo access to a non-root user.
For Helm:
Create a unique master key (Artifactory requires one) and pass it to the template during installation.
Create a secret containing the key. The key in the secret must be named master-key:
kubectl create secret generic my-masterkey-secret -n artifactory --from-literal=master-key=${MASTER_KEY}
Make sure to pass the same master key on all future calls to helm install and helm upgrade.
This means always passing --set artifactory.masterKey=${MASTER_KEY} (for the custom master key) or --set artifactory.masterKeySecretName=my-masterkey-secret (for the manual secret) and verifying that the contents of the secret remain unchanged.
Create a unique join key: by default the chart has one set in values.yaml (artifactory.joinKey).
However, this key is for demonstration purposes only and should not be used in a production environment.
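For example, a common way to generate both values (this mirrors what the JFrog installation docs suggest; the variable names simply match the commands used here) is:
export MASTER_KEY=$(openssl rand -hex 32)
export JOIN_KEY=$(openssl rand -hex 32)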
The point is: it depends on the exact command used to install the Helm Chart.
helm upgrade --install artifactory --set artifactory.masterKey=${MASTER_KEY} \
--set artifactory.joinKey=${JOIN_KEY} \
--namespace artifactory jfrog/artifactory
As illustrated here, the values for "runAsUser" and "fsGroup" in values.yaml can influence the error message.
Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.
Follow these steps to apply the configuration changes.
Make the changes to values.yaml.
Run the command, adding the chart you are installing (jfrog/artifactory-oss in this case):
helm upgrade --install artifactory -n artifactory -f values.yaml jfrog/artifactory-oss
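If you are unsure which values are allowed, the restricted SCC takes its UID and group ranges from annotations on your namespace, so you can look them up and then set runAsUser / fsGroup in values.yaml to values inside those ranges (a sketch, assuming the release lives in the artifactory namespace):
oc get namespace artifactory -o yaml | grep -E 'sa.scc.uid-range|sa.scc.supplemental-groups'
# e.g. openshift.io/sa.scc.uid-range: 1000930000/10000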
See Managing security context constraints for more.

Using TPM key handle for device CA key and device Identity Keys

Has anyone tried using TPM keys for the device CA and identity certificates on an IoT Edge device?
Currently the device CA and identity keys are generated as PEM files, and their paths are set in config.yaml as URIs.
I have generated TPM keys and created the device CA and identity certificates with a root CA. How do I use the TPM key instead of the PEM key file by referencing its handle, for example 0x81000002?
The objective is to secure the certificates and keys used by the edge device with the TPM for upstream (device identity) and downstream (device CA) operations.
Currently both of the keys above are PEM key files, which is unsafe.
Operation example:
Step 1: Create TPM keys for the device CA and identity with persistent handles under an SRK primary key, using tpm2-tools.
Example: device identity at 0x81020000 and device CA at 0x81000002
echo ">>>>>>>> Create SRK primary"
tpm2_createprimary -C o -g sha256 -G ecc -c SRK_primary.ctx
tpm2_evictcontrol -C o -c SRK_primary.ctx 0x81000001
echo "create persistent IDevID Key"
tpm2_create -C 0x81000001 -g sha256 -G ecc -r ID_Priv.key -u ID_Pub.key
tpm2_load -C 0x81000001 -u ID_Pub.key -r ID_Priv.key -n ID_key_name_structure.data -c ID_keycontext.ctx
tpm2_evictcontrol -C o -c ID_keycontext.ctx 0x81020000
echo "create persistent devCA Key"
tpm2_create -C 0x81000001 -g sha256 -G rsa -r DevCA_Priv.key -u DevCA_Pub.key
tpm2_load -C 0x81000001 -u DevCA_Pub.key -r DevCA_Priv.key -n DevCA_key_name_structure.data -c DevCA_keycontext.ctx
tpm2_evictcontrol -C o -c DevCA_keycontext.ctx 0x81000002
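(Optionally, you can check at this point that the keys really are at the expected persistent handles, for example:)
tpm2_getcap handles-persistent
tpm2_readpublic -c 0x81020000
tpm2_readpublic -c 0x81000002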
Step 2: Create CSRs and certificates using the above key handles
openssl req -new -engine tpm2tss -key 0x81020000 -passin pass:"" -keyform engine -subj /CN=DeviceIdentity -out dev_iden.csr
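The CSR can then be signed with the root CA in the usual way to get the device identity certificate (a sketch; rootCA.pem and rootCA.key are placeholders for your own root CA files):
openssl x509 -req -in dev_iden.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out dev_iden.crt -days 365 -sha256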
Step 3: Modifications are needed in the security daemon to make this work:
Modify config.yaml to use the above handles for the device CA and identity keys, and specify the certificates as URI paths.
Great question!
The IoT Edge runtime needs to access the TPM to automatically provision your device. See how to do it here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-simulated-device-linux#give-iot-edge-access-to-the-tpm
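In short, that article has you give the iotedge service account access to the TPM device node. A minimal sketch of the idea (the rule file name, owner, and mode here are illustrative; follow the linked article for the authoritative steps):
sudo tee /etc/udev/rules.d/tpmaccess.rules >/dev/null <<'EOF'
# give the iotedge service account access to /dev/tpm0
KERNEL=="tpm0", SUBSYSTEM=="tpm", OWNER="iotedge", MODE="0600"
EOF
sudo udevadm trigger --action=add --subsystem-match=tpm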
The way the attestation process works is like this:
When a device with a TPM first connects to the Device Provisioning Service, the service first checks the provided EK_pub against the EK_pub stored in the enrollment list. If the EK_pubs do not match, the device is not allowed to provision. If the EK_pubs do match, the service then requires the device to prove ownership of the private portion of the EK via a nonce challenge, which is a secure challenge used to prove identity. The Device Provisioning Service generates a nonce and then encrypts it with the SRK and then the EK_pub, both of which are provided by the device during the initial registration call. The TPM always keeps the private portion of the EK secure. This prevents counterfeiting and ensures SAS tokens are securely provisioned to authorized devices.
Ref: https://learn.microsoft.com/en-us/azure/iot-dps/concepts-tpm-attestation
I believe your case is different though (you want to add your CA certificate to your TPM and then retrieve it from there?). I saw your feedback request, sharing here for others to vote:
Using TPM keys for Device CA and Identity - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/40920013-using-tpm-keys-for-device-ca-and-identity
If your idea is identical to the one already added in the IoT Edge feedback forum, please merge it and vote:
Store Private key for X.509 based DPS securely on HSM - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/39457678-store-private-key-for-x-509-based-dps-securely-on

openjdk8: download from Mercurial using hg

I am trying to download the openjdk8 source code from the Mercurial repository using
hg clone http://hg.openjdk.java.net/jdk8/jdk8 openJDK8
I am getting the error below:
abort: error: node name or service name not known
If I add the IP address and hostname to the /etc/hosts file, will it get resolved?
But I don't know how to find the IP address and hostname for hg.openjdk.java.net.
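(I assume something like the following, run from the S10 system where the clone works, would show the address, but I am not sure that is the right approach:)
nslookup hg.openjdk.java.net
getent hosts hg.openjdk.java.net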
From another S10 system I was able to download the source. I checked /etc/hosts and /etc/resolv.conf; both are the same. When I copied the downloaded source to my system and tried to build it there, I got an error around the hotspot timestamp target:
WARNING: You are using cc version 5.13 and should be using version 5.10.
Set ENFORCE_CC_COMPILER_REV=5.13 to avoid this warning.
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- c
/opt/csw/bin//gmake: invalid option -- c
/opt/csw/bin//gmake: invalid option -- 8
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- a
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- c
Usage: gmake [options] [target] ...
Options:
-b, -m Ignored for compatibility.
-B, --always-make Unconditionally make all targets.
-C DIRECTORY, --directory=DIRECTORY
Change to DIRECTORY before doing anything.
-d Print lots of debugging information.
--debug[=FLAGS] Print various types of debugging information.
-e, --environment-overrides
Environment variables override makefiles.
-E STRING, --eval=STRING Evaluate STRING as a makefile statement.
-f FILE, --file=FILE, --makefile=FILE
Read FILE as a makefile.
-h, --help Print this message and exit.
-i, --ignore-errors Ignore errors from recipes.
-I DIRECTORY, --include-dir=DIRECTORY
Search DIRECTORY for included makefiles.
-j [N], --jobs[=N] Allow N jobs at once; infinite jobs with no arg.
-k, --keep-going Keep going when some targets can't be made.
-l [N], --load-average[=N], --max-load[=N]
Don't start multiple jobs unless load is below N.
-L, --check-symlink-times Use the latest mtime between symlinks and target.
-n, --just-print, --dry-run, --recon
Don't actually run any recipe; just print them.
-o FILE, --old-file=FILE, --assume-old=FILE
Consider FILE to be very old and don't remake it.
-O[TYPE], --output-sync[=TYPE]
Synchronize output of parallel jobs by TYPE.
-p, --print-data-base Print make's internal database.
-q, --question Run no recipe; exit status says if up to date.
-r, --no-builtin-rules Disable the built-in implicit rules.
-R, --no-builtin-variables Disable the built-in variable settings.
-s, --silent, --quiet Don't echo recipes.
--no-silent Echo recipes (disable --silent mode).
-S, --no-keep-going, --stop
Turns off -k.
-t, --touch Touch targets instead of remaking them.
--trace Print tracing information.
-v, --version Print the version number of make and exit.
-w, --print-directory Print the current directory.
--no-print-directory Turn off -w, even if it was turned on implicitly.
-W FILE, --what-if=FILE, --new-file=FILE, --assume-new=FILE
Consider FILE to be infinitely new.
--warn-undefined-variables Warn when an undefined variable is referenced.
This program built for i386-pc-solaris2.10
Report bugs to <bug-make@gnu.org>
gmake[5]: *** [/export/home/preethi/buildopenjdk/check8/hotspot/make/solaris/makefiles/top.make:84: ad_stuff] Error 2
gmake[4]: *** [/export/home/preethi/buildopenjdk/check8/hotspot/make/solaris/Makefile:225: product] Error 2
gmake[3]: *** [Makefile:217: generic_build2] Error 2
gmake[2]: *** [Makefile:167: product] Error 2
gmake[1]: *** [HotspotWrapper.gmk:45: /export/home/preethi/buildopenjdk/check8/build/solaris-x86-normal-server-release/hotspot/_hotspot.timestamp] Error 2
gmake: *** [/export/home/preethi/buildopenjdk/check8//make/Main.gmk:109: hotspot-only] Error 2
Following steps from:
https://hg.openjdk.java.net/jdk8u/jdk8u/raw-file/tip/README-builds.html
System spec:
SunOS pkg.oracle.com 5.10 Generic_150401-16 i86pc i386 i86pc
1) If I add the IP address and host to /etc/hosts, will the problem be resolved?
2) Why does the source copied from the other S10 system not build on mine?
In /etc/hosts I added 137.254.56.60 openjdk.java.net, but I get the same error. From my system I am not able to ping openjdk.java.net; there is no answer from 137.254.56.60. I am new to Solaris and not very familiar with proxy settings. Can anyone please help?

setting up microcks in openshift

I am trying to set up Microcks in OpenShift.
I am just using the free OpenShift starter at https://console.starter-us-west-2.openshift.com/console/catalog
In http://microcks.github.io/installing/openshift/ the command is given as below:
oc new-app --template=microcks-persistent --param=APP_ROUTE_HOSTNAME=microcks-microcks.192.168.99.100.nip.io --param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-microcks.192.168.99.100.nip.io --param=OPENSHIFT_MASTER=https://192.168.99.100:8443 --param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client
In that command, how can I find the route for my project? My project is called testcoolers.
So what should I use instead of microcks-microcks.192.168.99.100.nip.io? I guess something will replace 192.168.99.100.nip.io.
The same goes for the Keycloak hostname. Also, what will be the public OpenShift master address? It is now https://192.168.99.100:8443.
Installing Microcks appears to assume some level of OpenShift familiarity. Also, there are several restrictions that make this not an ideal install for OpenShift Online Starter, but it can definitely still be made to work.
# Create the template within your namespace
oc create -f https://raw.githubusercontent.com/microcks/microcks/master/install/openshift/openshift-persistent-full-template-https.yml
# Deploy the application from the template, be sure to replace <NAMESPACE> with your proper namespace
oc new-app --template=microcks-persistent-https \
--param=APP_ROUTE_HOSTNAME=microcks-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=OPENSHIFT_MASTER=https://api.starter-us-west-2.openshift.com \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client \
--param=MONGODB_VOL_SIZE=1Gi \
--param=MEMORY_LIMIT=384Mi \
--param=MONGODB_MEMORY_LIMIT=384Mi
# The ROUTE params above are still necessary for the variables, but in Starter, you can't specify a hostname in a route, so you'll have to manually create the routes
oc create route edge microcks --service=microcks --insecure-policy=Redirect
oc create route edge keycloak --service=microcks-keycloak --insecure-policy=Redirect
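You can then list the routes to see the hostnames that Starter actually generated for your project, for example:
oc get routes -n <NAMESPACE>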
You should also see an error about not being able to create the OAuthClient. This is expected because you don't have permissions to create it for the whole cluster. You will instead need to manually create a user in Keycloak.
I was able to get this to deploy successfully and log in on OpenShift Online Starter, so use the comments if you struggle at all.

How to create re-encrypting route for hawkular-metrics in OpenShift Enterprise 3.2

As per the documentation to enable cluster metrics, I should create a re-encrypting route as per the statement below:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
What exactly should I use for these keys and certificates?
Do these already exist somewhere, or do I need to create them?
Openshift Metrics developer here.
Sorry if the docs were not clear enough.
The route is used to expose Hawkular Metrics, particularly to the browser running the OpenShift console.
If you don't specify any certificates, the system will use a self-signed certificate instead. The browser will complain that this self-signed certificate is not trusted, but you can usually just click through to accept it anyway. If you are ok with this, then you don't need to do any extra steps.
If you want the browser to trust this connection by default, then you will need to provide your own certificates signed by a trusted certificate authority. This is just like how you would have to generate your own certificate if you were running a normal site under https.
From the following command:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
'cert' corresponds to your certificate signed by the certificate authority
'key' corresponds to the key for your certificate
'ca-cert' corresponds to the certificate authority's certificate
'dest-ca-cert' corresponds to the certificate authority which signed the self signed certificate generated by the metrics deployer
The docs https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route should explain how to get the dest-ca-cert from the system
First of all, and as far as I know, using a re-encrypting route is optional. The documentation mentions deploying without importing any certificate:
oc secrets new metrics-deployer nothing=/dev/null
You should be able to start with that and get Hawkular working (for instance, you'll be able to curl with the '-k' option). But a re-encrypting route is sometimes necessary, as some clients refuse to communicate with untrusted certificates.
This page explains what are the certificates needed here: https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route
Note that you can also configure it from the web console if you find it more convenient: from https://(your_openshift_host)/console/project/openshift-infra/browse/routes, you can create a new route and upload the certificate files from that page. Under "TLS termination" select "Re-Encrypt", then provide the 4 certificate files.
If you don't know how to generate self-signed certificates, you can follow the steps described here: https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/ . You'll end up with a rootCA.pem file (use it as the "CA Certificate"), a device.key file (or name it hawkular.key and upload it as the private key), and a device.crt file (you can name it hawkular.pem; it's the PEM-format certificate). When asked for the Common Name, make sure to enter the hostname of your Hawkular server, such as "hawkular-metrics.example.com".
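Condensed, the steps from that article look roughly like this (a sketch only; the file names follow the suggestions above, and you would enter the Hawkular hostname when prompted for the Common Name):
# create your own root CA
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
# create the Hawkular key and CSR, then sign the CSR with the root CA
openssl genrsa -out hawkular.key 2048
openssl req -new -key hawkular.key -out hawkular.csr
openssl x509 -req -in hawkular.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out hawkular.pem -days 500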
The final one to provide is the current self-signed certificate used by Hawkular, under the so-called "Destination CA Certificate". The OpenShift documentation explains how to get it: run
base64 -d <<< \
`oc get -o yaml secrets hawkular-metrics-certificate \
| grep -i hawkular-metrics-ca.certificate | awk '{print $2}'`
and, if you're using the web console, save it to a file then upload it under Destination CA Certificate.
Now you should be done with re-encrypting.