How to retrieve the IP of the OpenShift private Docker registry when started through minishift and accessed from the machine where minishift is present - openshift

I have installed OpenShift through minishift on a Mac. I am able to run the command docker login -u developer -p <pass> 172.30.1.1:5000 from the shell where OpenShift is running. However, I need to run the same login command from the host Mac machine and don't know which IP to use.
The OpenShift console can be accessed at https://192.168.64.3:8443.
The command minishift openshift registry returns an error.
I can run the oc commands from the Mac host machine.

I think you are better off logging in to the Docker daemon (https://docs.okd.io/latest/minishift/using/docker-daemon.html) instead of logging in to the internal registry directly. Once you have done that, you can use the Docker client as it is bound to your Minishift environment.
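As a minimal sketch of that flow, assuming minishift and oc are on your Mac's PATH and using the registry address 172.30.1.1:5000 from the question:
eval $(minishift docker-env)   # point the local Docker client at the Minishift Docker daemon
docker login -u developer -p "$(oc whoami -t)" 172.30.1.1:5000   # log in with the current oc session token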

Related

Unable to access my off site MYSQL DB via a VSCode extension when it is running under WSL2

I'm running Windows 11. I have my dev environment in Debian running via WSL2.
I have this VSCode extension installed (although I have tried multiple SQL VSCode extensions and they all act the same)
If I have a VSCode window open in a WSL2 instance, I am unable to connect to my DB, but if I have a normal VSCode window open, I am able to use any extension to access my DB.
In both instances the DB connection details are identical.
I need to use a program called ScaleFT to create a secure tunnel to the DB; I'm assuming this is at least part of the cause of the issue.
I am able to connect to my local dev MYSQL DB running in docker from both a WSL and normal VSCode window.
I've found that WSL's network sharing with the host system seems to run into trouble a lot with VPN and Ad-Hoc tunnel sharing with the Windows host.
What worked best for me was just to install an independent client for the WSL host. I use Ubuntu personally but I bet this will be a drop-in for your Debian setup, too.
Add the ScaleFT Repo to apt:
echo "deb http://pkg.scaleft.com/deb linux main" | sudo tee -a /etc/apt/sources.list
Add the ScaleFT signing keys to your local keyring:
curl -fsSL https://dist.scaleft.com/pki/scaleft_deb_key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/scaleft-deb-key.gpg
Pull package list and install the Linux tools:
sudo apt update && sudo apt install -y scaleft-client-tools scaleft-url-handler
That should leave you with a ready copy of the sft client tool. You can test with:
sft --version
From there, you can enroll your new WSL client and those connections should start working for you but, of course, your mileage may vary!
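As a rough sketch of that enrollment step (assuming the standard ScaleFT/Okta client flow; exact options depend on your team configuration):
sft enroll   # registers this WSL client with your team, typically via a browser approval prompt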

Can't launch an instance of the Docker image to run Mysql Server on Mac M1

I'm installing MySQL server on my MacBook (M1 Chip). I run the following command to download the MySQL server
sudo docker pull mcr.microsoft.com/mssql/server:2019-latest
Then I run the following command to launch an instance of the Docker image I just downloaded:
docker run -d --name sql_server_demo -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=mypassword' -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
But it produces a warning:
WARNING: The requested image's platform (linux/amd64) does not match
the detected host platform (linux/arm64/v8) and no specific platform
was requested
559ff6b849b6e62cbdbfa3d7cde403f314798bffb4c1622aab8e305d3b49df97
Anyone know how to fix this?
Try downloading the Docker Desktop build for Mac with Apple chip:
https://hub.docker.com/editions/community/docker-ce-desktop-mac?tab=description
I managed to solve this by installing MariaDB instead of MySQL.
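Two hedged sketches related to the answers above. You can silence the platform warning by requesting the amd64 platform explicitly (the image then runs under emulation on Apple silicon), and if MariaDB is acceptable its official image publishes native arm64 builds (the container name and password below are placeholders):
docker run -d --platform linux/amd64 --name sql_server_demo -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=mypassword' -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
docker run -d --name mariadb_demo -e MARIADB_ROOT_PASSWORD=mypassword -p 3306:3306 mariadb:latest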

How to connect docker container with host machine's localhost mysql database?

I have a war file that uses the MySQL database in the backend.
I have deployed my war file in a docker container and I am able to ping this from my browser.
I want to connect my app with the MySQL database. This database exists on my host machine's localhost:3306
Since I am unable to connect to it via the container's localhost, here is what I tried:
I ran the command docker inspect --format '{{ .NetworkSettings.IPAddress }}' 213be777a837
This command gave me the IP address 172.17.0.2. I went to the MySQL server options, put this IP address in the bind-address field, and restarted the server. After that, I updated my project's database connection string to 172.17.0.2:3306
But it is not working. Could anyone please tell what I am missing?
I have also tried adding a new DB user 'root'@'%' and granting all permissions to it, but nothing worked.
Follow these steps:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run -p 8082:8080 --network dockernet -d 6ab907c973d2
In your project, set the connection string to jdbc:mysql://host.docker.internal:3306/....
And then deploy.
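Note that host.docker.internal resolves out of the box on Docker Desktop (Mac/Windows); on plain Linux you can map it to the host gateway yourself (Docker 20.10+). A sketch reusing the run command above:
docker run -p 8082:8080 --network dockernet --add-host host.docker.internal:host-gateway -d 6ab907c973d2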
tl;dr: Use 172.17.0.1:3306 if you're on Linux.
Longer description:
As I understand it, what you need to do is connect from your Docker container to a host port. But what you have done is try to bind the host process (MySQL) to the container's networking interface. I am not sure what the implications are of one host process trying to bind into another process's network namespace, but IIUC your MySQL process should not be able to bind to that address.
When you start MySQL with default settings that bind it to 0.0.0.0, it is available to Docker containers through the Docker virtual bridge. Therefore, what you should do is route your requests from the WAR process to the host process through that virtual bridge (assuming this is the networking mode you're using; if you have not changed any Docker networking settings, it should be). This is done by specifying the bridge gateway address as the MySQL address, along with the port MySQL was started with.
You can get the bridge IP address by checking your network interfaces. When Docker is installed, it configures the virtual bridge by default, and that should show up as docker0 if you're on Linux. The IP address for this will most probably be 172.17.0.1. So your MySQL address from the container's point of view is jdbc:mysql://172.17.0.1:3306/....
1 - https://docs.docker.com/network/
2 - https://docs.docker.com/network/bridge/
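If you would rather confirm the bridge gateway address than assume 172.17.0.1, either of these standard commands prints it (shown as a sketch):
ip -4 addr show docker0   # the inet address is the bridge/gateway IP
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'   # same value, via the Docker CLI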
From your question, I am assuming that both your war file and MySQL are deployed locally, and you want to connect them. One way to allow two locally deployed containers to talk to each other is:
Create your own network docker network create <network-name>
Then, when you run your war file and MySQL containers, start both of them with the --network flag. E.g.
War File: docker run --name war-file --network <network-name> <war file image>
MySQL: docker run --name mysql --network <network-name> <MySQL image>
After that, you should be able to connect to MySQL using mysql:3306 from inside your war file Docker container, since they are both on the same custom network.
If you want to read up more about this, you can take a look at the Docker documentation on networking (https://docs.docker.com/network/bridge/).
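For completeness, a minimal runnable sketch of this approach; app-net, secret, mydb and the mysql:8 tag are placeholder names, not taken from the question:
docker network create app-net
docker run -d --name mysql --network app-net -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=mydb mysql:8
docker run -d --name war-file --network app-net -p 8080:8080 <war file image>
# Inside the war-file container, the database is then reachable at jdbc:mysql://mysql:3306/mydb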
Your setup is fine. You just need to do this one change.
While running the application container (the one in which you are deploying your war file), you need to add the following argument to its docker run command.
--net=host
Example:
docker run -itd -p 8082:8080 --net=host --name myapp myimage
With this change, you do not need to change the connection string either; localhost:3306 will work fine, and you will be able to set up a connection with MySQL.

docker login to openshift internal docker registry - Gateway Timeout

I am running the OpenShift cluster using minishift on Ubuntu. The minishift IP is "192.168.42.48". I am following the URL to access the internal Docker registry.
After minishift started successfully, I logged in as administrator using "oc login -u system:admin", then added the cluster role to user "chak".
~/github/cheatsheets$ oc adm policy add-cluster-role-to-user cluster-admin chak
cluster role "cluster-admin" added: "chak"
Then I copied the token for user "chak" and tried to log in to the Docker registry, but it failed with the error below. The minishift IP and the IP in the error output are different. In the terminal, I am already logged in as administrator and have added the cluster-admin role.
So, I expect the Docker daemon to log in to the OpenShift cluster IP started by minishift. Why is the Docker daemon trying to log in to the IP in the error rather than the minishift IP?
I also have http_proxy, https_proxy and no_proxy set, since I am connected to a corporate network.
~/github/cheatsheets$ docker login -u chak -p C5u5F1iwA6gl4va1K8OZ01DaRPdMYMnDQklErn2FzjY docker-registry-default.127.0.0.1.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post https://192.168.42.253:2376/v1.39/auth: Gateway Timeout
Edit 1:
~/github/hashitvault$ docker login -u chak -p Naqp6NScYF7zOcKN41SuYQ045qR9zBN6lfGVnvxhrU docker-registry-default.192.168.42.186.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get http://docker-registry-default.192.168.42.186.nip.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
The OpenShift internal Docker registry route is exposed.
When I hit it in a browser, I get a 502 server error.
what am i doing wrong here?
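One hedged thing worth checking, given the proxy variables mentioned above: the failing call goes to the Docker daemon at 192.168.42.253:2376, so that address (and the minishift/registry hosts) may need to be excluded from the proxy. A sketch only, using the addresses from this question; adjust to your environment:
export NO_PROXY="$NO_PROXY,192.168.42.48,192.168.42.253,.nip.io"
export no_proxy="$no_proxy,192.168.42.48,192.168.42.253,.nip.io"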

Openshift stable release install centos 7

Can someone please let me know which latest version of openshift-ansible (Origin) is stable enough to install on CentOS 7?
I am looking for successful multi-node install experience and any tips that were used.
Thanks
The latest stable release is 3.9:
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.9
and follow the Advanced Installation guide
https://docs.openshift.org/latest/install_config/install/advanced_install.html
It is now working.
After enabling openshift_repos_enable_testing=true, I had not run the prerequisites playbook before the deploy_cluster playbook, which was why it was still giving the error about not finding the packages.
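For reference, with a release-3.9 (or later 3.x) checkout of openshift-ansible, the two playbooks mentioned above are typically run like this; the inventory path is an assumption, so point -i at your own inventory file:
cd openshift-ansible
ansible-playbook -i /etc/ansible/hosts playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml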
I believe that the v3.11.0 version of OpenShift OKD/Origin (the latest 3.x release at the time) meets your needs. Below is a complete roadmap for installing OpenShift OKD/Origin as a single-node cluster service.
Some information transposed from the OKD website about OpenShift OKD/Origin...
The Community Distribution of Kubernetes that powers Red Hat
OpenShift. Built around a core of OCI container packaging and
Kubernetes container cluster management, OKD is also augmented by
application lifecycle management functionality and DevOps tooling. OKD
provides a complete open source container application platform.
OKD is a distribution of Kubernetes optimized for continuous
application development and multi-tenant deployment. OKD adds
developer and operations-centric tools on top of Kubernetes to enable
rapid application development, easy deployment and scaling, and
long-term lifecycle maintenance for small and large teams. OKD is a
sibling Kubernetes distribution to Red Hat OpenShift.
OKD embeds Kubernetes and extends it with security and other
integrated concepts. OKD is also referred to as Origin in github and
in the documentation.
If you are looking for enterprise-level support, or information on
partner certification, Red Hat also offers Red Hat OpenShift Container
Platform.
So I recommend starting with OpenShift OKD/Origin using the roadmap below to install on CentOS 7. Then you can explore other possibilities ("multi-node", for example).
However, if you want to test OpenShift (OKD) 4.X, the guide and the right way to do this is at this link: Install the OpenShift (OKD) 4.X cluster (UPI/"bare-metal"). It is a long road with a reasonable level of complexity.
PLUS:
Information about OpenShift Ansible on GitHub and Red Hat Ansible;
You can take a look at the OpenShift Installer (NOT OKD/Origin!).
OpenShift Origin (OKD) - Open source container application platform:
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform - an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and Openshift Dedicated is the platform offered as a managed service.
The OpenShift Console has developer and administrator oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, etc. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.
OpenShift Origin (OKD) is the community-driven version of OpenShift (non-enterprise-level). That means you can host your own PaaS (Platform as a Service) for free and with almost no hassle.
[Ref(s).: https://en.wikipedia.org/wiki/OpenShift ,
https://www.openshift.com/blog/openshift-ecosystem-get-started-openshift-origin-gitlab ]
Setup Local OpenShift Origin (OKD) Cluster on CentOS 7
All commands in this setup must be performed with the "root" user.
Update CentOS 7
Updating your CentOS 7 server...
yum -y update
Install and Configure Docker
OpenShift requires the Docker engine on the host machine for running containers. Install Docker and other dependencies on CentOS 7 using the commands below...
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install git-core
yum -y install wget
yum -y install yum-utils
yum -y install device-mapper-persistent-data
yum -y install lvm2
yum -y install docker-ce
yum -y install docker-ce-cli
yum -y install containerd.io
Add the logged-in user account to the docker group...
usermod -aG docker $USER
newgrp docker
Create necessary folders...
mkdir "/etc/docker"
mkdir "/etc/containers"
Create "registries.conf" file with an insecure registry parameter ("172.30.0.0/16") to the Docker daemon...
tee "/etc/containers/registries.conf" << EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF
Create "daemon.json" file with configurations...
tee "/etc/docker/daemon.json" << EOF
{
"insecure-registries": [
"172.30.0.0/16"
]
}
EOF
We need to reload systemd and restart the Docker daemon after editing the config...
systemctl daemon-reload
systemctl restart docker
Enable Docker to start at boot...
systemctl enable docker
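As a quick sanity check that the insecure registry setting was picked up after the restart (output formatting varies by Docker version):
docker info | grep -A 3 -i "insecure registries"   # should list 172.30.0.0/16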
Then enable "IP forwarding" on your system...
tee "/etc/sysctl.d/ip_forward.conf" << EOF
net.ipv4.ip_forward=1
EOF
sysctl -w net.ipv4.ip_forward=1
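A quick check that the setting is active (it should print net.ipv4.ip_forward = 1):
sysctl net.ipv4.ip_forward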
Configure Firewalld.
Add the necessary firewall permissions...
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=8053/udp --permanent
firewall-cmd --reload
NOTE: This allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints, and adds other permissions.
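To verify the rules took effect (a quick check; the output should list 80/tcp, 443/tcp, 8443/tcp, 53/udp and 8053/udp):
firewall-cmd --zone=public --list-ports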
Download OpenShift
Download the OpenShift binaries from GitHub and move them to the "/usr/local/bin/" folder...
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit
mv ./oc /usr/local/bin/
mv ./kubectl /usr/local/bin/
rm -rf ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit*
Verify installation of OpenShift client utility...
oc version
Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single server OpenShift Origin cluster by running the following command...
oc cluster up --public-hostname="<YOUR_SERVER_IP_OR_NAME>"
... or...
oc cluster up --public-hostname="$(ip route get 1 | awk '{print $NF;exit}')"
This one above will get the primary IP address of the local machine dynamically.
[Ref(s).: https://stackoverflow.com/a/25851186/3223785 ]
TIP: In case of error, try performing the command oc cluster down and then repeating the command above.
NOTE: Insufficient hardware configuration (mainly CPU and RAM) will cause a timeout on the command above.
IMPORTANT: If the parameter --public-hostname="<YOUR_SERVER_IP_OR_NAME>" is not provided, then calls to the web service ("web console") at the URL <YOUR_SERVER_IP_OR_NAME> will be redirected to the local IP "127.0.0.1".
[Ref(s).: https://github.com/openshift/origin/issues/19699 , https://github.com/openshift/origin/issues/19699#issuecomment-854069124 , https://github.com/openshift/origin/issues/20726 ,
https://github.com/openshift/origin/issues/20726#issuecomment-498078849 , https://hayardillasenlared.blogspot.com/2020/06/instalar-openshift-origin-ubuntu.html , https://www.a5idc.net/helpview_526.html , https://thecodeshell.wordpress.com/ , https://www.techrepublic.com/article/how-to-install-openshift-origin-on-ubuntu-18-04/ ]
The command above will...
Start the OKD cluster listening on the specified interface (<YOUR_SERVER_IP_OR_NAME>:8443);
Start a web console listening on all interfaces at "/console" (<YOUR_SERVER_IP_OR_NAME>:8443);
Launch Kubernetes system components;
Provision the registry, router, initial templates and a default project;
The OpenShift cluster will run as an all-in-one container on a Docker host.
On a successful installation, you should get output similar to below...
[...]
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://<YOUR_SERVER_IP_OR_NAME>:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
TIPS:
There are a number of options which can be applied when setting up Openshift Origin. View them with oc cluster up --help;
Command model using custom options...
MODEL
oc cluster up --public-hostname="<PUBLIC_HOSTNAME_OR_IP>" --routing-suffix="<PUBLIC_HOSTNAME_OR_IP>.<SUFFIX>"
EXAMPLE
oc cluster up --public-hostname="192.168.56.124" --routing-suffix="192.168.56.124.nip.io"
;
The OpenShift Origin cluster configuration files will be located inside the "~/openshift.local.clusterup" directory. The "~" is the logged-in user's home directory.
If your cluster setup was successful the command...
oc cluster status
... will give you a positive output like this...
Web console URL: https://<YOUR_SERVER_IP_OR_NAME>:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /root/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Run OpenShift as a single node cluster service on system startup
Create OpenShift service file...
read -r -d '' FILE_CONTENT << 'HEREDOC'
BEGIN
[Unit]
Description=OpenShift oc cluster up service
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/bash -c "/usr/local/bin/oc cluster up --public-hostname=\"$(ip route get 1 | awk '{print $NF;exit}')\""
ExecStop=/usr/bin/bash -c "/usr/local/bin/oc cluster down"
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=occlusterup
User=root
Type=oneshot
RemainAfterExit=yes
TimeoutSec=300
[Install]
WantedBy=multi-user.target
END
HEREDOC
echo -n "${FILE_CONTENT:6:-3}" > '/etc/systemd/system/openshift.service'
NOTE: For some reason, without the /usr/bin/bash -c "<SOME_COMMAND>" workaround we were unable to start the OpenShift cluster. Additional information about parameters for the oc cluster up command can be seen in the references immediately below.
[Ref(s).: https://avinetworks.com/docs/18.1/avi-vantage-openshift-installation-guide/ ,
https://github.com/openshift/origin/issues/7177#issuecomment-391478549 ,
https://github.com/minishift/minishift/issues/1910#issuecomment-375031172 ]
[Ref(s).: https://tobru.ch/openshift-oc-cluster-up-as-systemd-service/ , https://eenfach.de/gitblit/blob/RedHatTraining!agnosticd.git/af831991c7c752a1215cfc4cff6a028e31f410d7/ansible!configs!rhte-oc-cluster-vms!files!oc-cluster.service.j2 ]
Start and enable (start at boot) the OpenShift service and see the log output in sequence...
systemctl enable openshift.service
systemctl start openshift.service
journalctl -u openshift.service -f --no-pager | less
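A quick sanity check that the unit defined above is registered and running:
systemctl is-enabled openshift.service
systemctl is-active openshift.service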
Using OpenShift OKD/Origin Admin Console
OKD includes a web console which you can use for creation and other management actions. This web console is accessible on the server IP/hostname on port 8443 via HTTPS...
https://<IP_OR_HOSTNAME>:8443/console
NOTE: You should see an OpenShift Origin page with username and password form (USERNAME: developer / PASSWORD: developer ).
Deploy a test application in the Cluster
Login to Openshift cluster as "regular developer" user (USERNAME: developer / PASSWORD: developer )...
oc login
TIP: You begin logged in as "developer".
Create a test project using oc "new-project" command...
MODEL
oc new-project <PROJECT_NAME> --display-name="<PROJECT_DISPLAY_NAME>" --description="<PROJECT_DESCRIPTION>"
EXAMPLE
oc new-project test-project --display-name="Test Project" --description="My cool Test Project."
NOTE: All commands below involving the "deployment-example" parameter value will be linked to "test-project", because after this project is created it is selected as the project for subsequent settings. To confirm this, log in as administrator using the oc login -u system:admin command and see the output of the oc status command. For more information, see the oc project <PROJECT_NAME> command in the "Some OpenShift Origin Cluster Useful Commands" section.
Tag an application image from Docker Hub registry...
oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Deploy application to OpenShift...
MODEL
oc new-app <DEPLOYMENT_NAME>
EXAMPLE
oc new-app "deployment-example"
Allow external access to the deployed application...
MODEL
oc expose "svc/<DEPLOYMENT_NAME>"
EXAMPLE
oc expose "svc/deployment-example"
Show application deployment status...
oc status
Show pods status...
oc get pods
Get service detailed information...
oc get svc
Test Application local access...
NOTE: See <CLUSTER_IP> on command oc get svc output above.
curl http://<CLUSTER_IP>:8080
See external access route to the deployed application...
oc get routes
Test external access to the application...
Open the URL <HOST_PORT> in your browser.
MODEL
http://<HOST_PORT>
EXAMPLE
http://deployment-example-test-project.192.168.56.124.nip.io
NOTES:
See <HOST_PORT> on oc get routes output;
The wildcard DNS record *.<IP_OR_HOSTNAME>.nip.io points to OpenShift Origin server IP address.
Delete test project...
MODEL
oc delete project "<PROJECT_NAME>"
EXAMPLE
oc delete project "test-project"
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#deleting-a-project-using-the-CLIprojects ]
Delete test deployment...
MODEL
oc delete all -l app="<DEPLOYMENT_NAME>"
EXAMPLE
oc delete all -l app="deployment-example"
Check pods status after deleting the project and the deployment...
oc get pods
TIP: Completely recreate the cluster...
oc cluster down
rm -rf ~/openshift.local.clusterup
. It may be necessary to reboot the server to delete the above folder;
. The "~" is the logged-in user's home directory.
Some OpenShift Origin Cluster Useful Commands
To log in as an administrator use...
oc login -u system:admin
As the administrator ("system:admin") user you can see information such as node status...
oc get nodes
To get more detailed information about a specific node, including the reason for the current condition...
MODEL
oc describe node "<NODE_NAME>"
EXAMPLE
oc describe node "localhost"
To display a summary of the resources you created...
oc status
Select a project to perform CLI operations...
oc project "<PROJECT_NAME>"
NOTE: The selected project will be used in all subsequent operations that manipulate project-scoped content.
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#viewing-a-project-using-the-CLI_projects ]
To return to the "regular developer" user (USERNAME: developer / PASSWORD: developer )...
oc login
To check who is the logged in user...
oc whoami
Thanks! =D