Remove older Hyperledger-sawtooth or pull latest repo and rerun build_all? - hyperledger-sawtooth

I had previously pulled down 0.8 and want to use the new version.
Is it OK to just update the local repo and rerun 'build_all', or must I remove all the older Docker images first?

This may be brute force, but this is what I ended up doing (a consolidated command sketch follows the steps below).
Caution: the docker command below removes all images, so if you want to preserve some of them you may want a more selective approach.
Sawtooth platform
Remove all Docker images using this command: docker rmi -f $(docker images -a -q)
Bring down the latest Sawtooth compose file, sawtooth-default.yaml
Execute compose: docker-compose -f sawtooth-default.yaml up
Sawtooth repo development
Clone the latest repository
Go to the root directory of the repo: cd ~/sawtooth-core
At a minimum run bin/build_all -l python
I am using Java, so I run bin/build_all -l java as well
Access to the individual CLIs and dev languages tested out 100% as per the Hyperledger Sawtooth documentation
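Putting the steps above together, a minimal command sketch looks roughly like this (the compose-file URL and the clone URL are assumptions based on the Sawtooth docs of that time; use whatever the current documentation points at):

# Remove ALL local Docker images (destructive; skip if you want to keep other images)
docker rmi -f $(docker images -a -q)

# Fetch and start the default Sawtooth environment (URL assumed; take it from the docs)
curl -O https://raw.githubusercontent.com/hyperledger/sawtooth-core/master/docker/compose/sawtooth-default.yaml
docker-compose -f sawtooth-default.yaml up

# Rebuild the development images from a fresh clone
git clone https://github.com/hyperledger/sawtooth-core.git ~/sawtooth-core
cd ~/sawtooth-core
bin/build_all -l python
bin/build_all -l java    # only if you also need the Java SDK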

Related

msys2 ssh: invalid key format (under Github CI only)

I am cloning repoB within GitHub CI workflow of repoA, using deployment key of repoB (stored in repoA). I understand this might not be a good practice, no need to comment on that (thanks).
The workflow tests this procedure natively on ubuntu-20.04 and using MSYS2 on windows-latest. It works on Ubuntu, and I can run the commands manually in a regular MSYS2 installation, but it fails under GitHub CI. The CI log has all the details; the essential command is
git -c core.sshCommand='ssh -vvv -i repoB_deployment_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' clone -b main git@github.com:eudoxos/repoB.git repoB
The failure under Windows is Load key "repoB_deployment_key": invalid format and I am not able to find out what's wrong. I tried changing permissions on the private key (chmod 600), adding the -o UserKnownHostsFile=/dev/null and -o StrictHostKeyChecking=no options, running the key through unix2dos, and adding an extra trailing newline; nothing helped.
Again, the same command works under Ubuntu and under MSYS2 on desktop Windows.
The repos are publicly accessible for reading; you are welcome to open a PR against the repoA repository; a PR should trigger the CI run.
Running the key through dos2unix (not unix2dos) fixed the issue. I will file an issue with MSYS2, as this should be documented.
Try displaying the key first in your GitHub workflow, in order to visually check its content and format (as seen here).
Also check which SSH you used when creating the key: the one in c:\Program Files\OpenSSH-Win64, or the one bundled with Git for Windows (c:\Program Files\git\usr\bin\ssh.exe).
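For reference, a minimal sketch of the working MSYS2 CI step with the dos2unix fix applied (the key file name and clone URL are the ones from the question; the surrounding workflow layout is assumed):

# Normalize the key's line endings first -- this is the actual fix under MSYS2 on GitHub CI
dos2unix repoB_deployment_key
chmod 600 repoB_deployment_key
git -c core.sshCommand='ssh -i repoB_deployment_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' \
    clone -b main git@github.com:eudoxos/repoB.git repoB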

Install MySQL on Windows Docker Image

Has anyone had success adding MySQL to a Windows Docker image? I tried two different ways of deploying MySQL to my image.
I tried using the MSI from MySQL in non-interactive mode. It does not work at all in a container (see: "While installing Mysql.msi through powershell getting below error").
I also tried extracting the zip and setting things up manually with the mysqld commands, which does nothing at all. Literally nothing; the executables behave as if they just run and exit (no output, nothing):
https://github.com/Somesh-K/Automation-Mysql/blob/main/1.mysql_setup_v2.ps1
Something is very weird about all of this.
Yes, I know that there's a perfectly good MySQL Linux container published by Oracle on Docker Hub, and it works. The problem is that running a Windows container and a Linux container that need to interact creates really unnecessary frustration for the user in terms of networking between the two.
Using a different back end (like SQL Server) for our application is not feasible, and using .NET Core instead of .NET Framework is not feasible. To simplify, I'd like to just install MySQL on our Windows-based web server Docker image. This seems doable using the two methods described in the links above, but as noted, it does not work, and the MySQL binaries behave very oddly when they are run in the container.
Here's an example of the odd behavior:
Install Docker Desktop for Windows
Download the Win32 install zips from MySQL and place in C:\mydata
https://dev.mysql.com/downloads/mysql/
Pull down the ASPNET image from Docker Hub, run it, and open up PowerShell:
# docker pull mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker run --name testweb -v C:\mydata:C:\mydata:R -d mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker exec -it testweb powershell
C:\ > cd C:\mydata
C:\mydata\ > Expand-Archive -Path .\mysql-5.7.36-winx64.zip -DestinationPath .
C:\mydata\ > cd .\mysql-5.7.36-winx64\bin
C:\mydata\mysql-5.7.36-winx64\bin\ > .\mysql.exe --version
[zero output, acts like it's an empty executable]
Results
None of the executables/binaries in the extracted MySQL bin directory in the container do anything at all. They behave as if someone wrote an executable that just exits. I thought I had a bad install zip, so I extracted the same zip on my regular Windows 10 workstation; there, all of the binaries at least return errors or do something.
This is super odd. Any help would be appreciated.
Downloading this executable and putting it into my container seemed to do the trick:
https://download.microsoft.com/download/2/E/6/2E61CFA4-993B-4DD4-91DA-3737CD5CD6E3/vcredist_x64.exe
I placed this in my container and ran it:
C:\vcredist.exe /Q
After doing this, the executables started working:
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --initialize-insecure
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --install
C:\ > start-service mysql
C:\ > cmd.exe /C "C:\mysql\bin\mysql" -u root
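Putting the whole answer together, a rough end-to-end sketch inside the container's PowerShell looks like this (the C:\mysql path, the zip name, and the rename step are assumptions that just tie the earlier steps together):

# Install the Visual C++ runtime the MySQL binaries silently depend on
C:\mydata\vcredist_x64.exe /Q

# Extract MySQL and move it to a short path (names assumed)
Expand-Archive -Path C:\mydata\mysql-5.7.36-winx64.zip -DestinationPath C:\
Rename-Item -Path C:\mysql-5.7.36-winx64 -NewName mysql

# Initialize the data directory, register the service, and connect
cmd.exe /C "C:\mysql\bin\mysqld" --initialize-insecure
cmd.exe /C "C:\mysql\bin\mysqld" --install
Start-Service mysql
cmd.exe /C "C:\mysql\bin\mysql" -u root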

Using a connector with Helm-installed Kafka/Confluent

I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this
connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except that 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this would be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect image: download the connector manually, extract the contents into a folder, and copy them to a path in the container. Something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide it (plugin.path) during your Helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the Helm installation.
Here is the path to the values.yaml file; you can find the image and plugin.path values there.
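For example, assuming the Dockerfile above was built and pushed as myregistry.example.com/cp-kafka-connect-debezium:5.2.1, the install could look roughly like this (the registry name is made up, and cp-kafka-connect.image / cp-kafka-connect.imageTag are my assumption of the chart's value keys; double-check them in values.yaml):

# Build and publish the customized Connect image
docker build -t myregistry.example.com/cp-kafka-connect-debezium:5.2.1 .
docker push myregistry.example.com/cp-kafka-connect-debezium:5.2.1

# Install the chart, pointing the Connect component at the custom image
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=myregistry.example.com/cp-kafka-connect-debezium \
  --set cp-kafka-connect.imageTag=5.2.1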
Just an add-on to Jegan's answer above: https://stackoverflow.com/a/56049585/6002912
You can use the Dockerfile below (recommended):
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use a Docker multi-stage build instead:
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you to save time on getting the right jar files for your plugins like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed; it is that pod you should run the commands on.
The cp-kafka-connect pod has two containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See the chart's README.md.
The script can be passed as a secret and mounted as a volume.
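A rough sketch of what that could look like in the values file (the script path is an assumption, and the secret/volume wiring is omitted; the chart's README.md has the authoritative key names):

cp-kafka-connect:
  customEnv:
    # Path inside the Connect container where the mounted install script will live (assumed)
    CUSTOM_SCRIPT_PATH: /etc/scripts/install-connectors.sh
# The script itself would be created as a Kubernetes secret and mounted as a volume at that
# path, as described in the README.md.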

Kubernetes installation on CoreOS

I am setting up Kubernetes on CoreOS on GCE. However, it is not going through because of the SDK's dependency on Python. I downloaded Python and tried installing it, but it is looking for a C compiler, which I was unfortunately unable to get. Could someone help with this?
Below is the link I am following to set this up
https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md
You're probably better off using a cloud-init file that curls, installs and runs each Kubernetes binary as a systemd unit. Each unit would look like:
- name: kube-apiserver.service
  command: start
  content: |
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    Requires=etcd2.service setup-network-environment.service
    After=etcd2.service setup-network-environment.service
    [Service]
    EnvironmentFile=/etc/network-environment
    ExecStartPre=-/usr/bin/mkdir -p /opt/bin
    ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver
    ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
    ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubectl -z /opt/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl
    ExecStartPre=/usr/bin/chmod 755 /opt/bin/kubectl
    ExecStart=/opt/bin/kube-apiserver --portal_net=10.244.0.0/16 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --insecure_port=8080 --insecure_bind_address=0.0.0.0
    Restart=always
    RestartSec=10
And similar for each of the other binaries; just make sure you set them up to follow the chain of dependencies. This way the binaries are already compiled, since compiling is not something CoreOS is really designed for.
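For instance, a follow-on unit for the controller manager would chain off the API server unit in the same way (the download URL just follows the same v0.18.2 pattern and the flags are assumptions; adjust to your release):

- name: kube-controller-manager.service
  command: start
  content: |
    [Unit]
    Description=Kubernetes Controller Manager
    Requires=kube-apiserver.service
    After=kube-apiserver.service
    [Service]
    ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager
    ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
    ExecStart=/opt/bin/kube-controller-manager --master=127.0.0.1:8080 --logtostderr=true
    Restart=always
    RestartSec=10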

Alternative ways to deploy code to Openshift

I am trying to setup Travis CI to deploy my repository to Openshift on a successful build. Is there a way to deploy a repository besides using Git?
Git is the official mechanism for how your code is updated; however, depending on the type of application you are deploying, you may not need to deploy your entire code base.
For example, Java applications (war, ear, etc.) can be deployed to JBoss or Tomcat servers by simply taking the built application and checking it into the OpenShift Git repository's webapps or deploy directories.
An alternative to this (and it will be unsupported) is to scp your application to the gear using the SSH key. However, any time the application is moved or updated (with git), this content stands a good chance of getting deleted (cleaned) by the gear.
We're working on direct binary deploys ("push") and "pull" style deploys (OpenShift downloads a binary for you). The design/process is described here:
https://github.com/openshift/openshift-pep/blob/master/openshift-pep-006-deploy.md
You can scp to the app-root/dependencies/jbossews/webapps directory directly. I was able to do that successfully and have the app working. Here is the link.
Here is the code I had in the after_success block:
after_success:
  - sudo apt-get -y install sshpass
  - openssl aes-256-cbc -K $encrypted_8544f7cb7a3c_key -iv $encrypted_8544f7cb7a3c_iv
    -in id_rsa.enc -out ~/id_rsa_dpl -d
  - chmod 600 ~/id_rsa_dpl
  - sshpass scp -i ~/id_rsa_dpl webapps/ROOT.war $DEPLOY_HOST:$DEPLOY_PATH
Hope this helps