OpenShift stable release install on CentOS 7 - openshift

Can someone please let me know which is the latest version of openshift-ansible (origin) stable enough to install on CentOS 7?
I am looking for successful multi-node install experience and any tips that were used.
Thanks

The latest stable release is 3.9:
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.9
and follow the Advanced Installation guide
https://docs.openshift.org/latest/install_config/install/advanced_install.html
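For a multi-node install, the Advanced Installation guide has you build an Ansible inventory first. A minimal sketch of such an inventory (the hostnames below are placeholders, and your environment will likely need more variables) could look like this:
# /etc/ansible/hosts -- minimal multi-node sketch
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_release=v3.9

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=true
node1.example.com openshift_node_labels="{'region': 'primary'}"
node2.example.com openshift_node_labels="{'region': 'primary'}"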

It is now working.
After enabling openshift_repos_enable_testing=true, I had not run the prerequisites playbook before the deploy_cluster playbook, which was why it was still failing to find the packages.
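For reference, a sketch of the expected order with release-3.9, assuming your inventory is at the default /etc/ansible/hosts path:
cd openshift-ansible
ansible-playbook -i /etc/ansible/hosts playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml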

I believe that version v3.11.0 of OpenShift OKD/Origin (the latest 3.x release at the time) meets your needs. This answer provides a complete roadmap for installing OpenShift OKD/Origin as a single-node cluster service.
Some information transposed from the OKD website about OpenShift OKD/Origin...
The Community Distribution of Kubernetes that powers Red Hat
OpenShift. Built around a core of OCI container packaging and
Kubernetes container cluster management, OKD is also augmented by
application lifecycle management functionality and DevOps tooling. OKD
provides a complete open source container application platform.
OKD is a distribution of Kubernetes optimized for continuous
application development and multi-tenant deployment. OKD adds
developer and operations-centric tools on top of Kubernetes to enable
rapid application development, easy deployment and scaling, and
long-term lifecycle maintenance for small and large teams. OKD is a
sibling Kubernetes distribution to Red Hat OpenShift.
OKD embeds Kubernetes and extends it with security and other
integrated concepts. OKD is also referred to as Origin in github and
in the documentation.
If you are looking for enterprise-level support, or information on
partner certification, Red Hat also offers Red Hat OpenShift Container
Platform.
So I recommend starting with OpenShift OKD/Origin using the roadmap below to install on CentOS 7. Then you can explore other possibilities ("multi-node", for example).
However, if you want to test the OpenShift (OKD) 4.X application, the guide with the right way to do this is at this link: Install the OpenShift (OKD) 4.X cluster (UPI/"bare-metal"). It is a long road with a reasonable level of complexity.
PLUS:
Information about OpenShift Ansible on GitHub and Red Hat Ansible;
You can take a look at the OpenShift Installer (NOT OKD/Origin!).
OpenShift Origin (OKD) - Open source container application platform:
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform - an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.
The OpenShift Console has developer and administrator oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, etc. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.
OpenShift Origin (OKD) is the community-driven version of OpenShift (non-enterprise-level). That means you can host your own PaaS (Platform as a Service) for free and with almost no hassle.
[Ref(s).: https://en.wikipedia.org/wiki/OpenShift ,
https://www.openshift.com/blog/openshift-ecosystem-get-started-openshift-origin-gitlab ]
Setup Local OpenShift Origin (OKD) Cluster on CentOS 7
All commands in this setup must be performed with the "root" user.
Update CentOS 7
Updating your CentOS 7 server...
yum -y update
Install and Configure Docker
OpenShift requires the Docker engine on the host machine for running containers. Install Docker and the other dependencies on CentOS 7 using the commands below...
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install git-core wget yum-utils device-mapper-persistent-data lvm2
yum -y install docker-ce docker-ce-cli containerd.io
Add the logged-in user account to the docker group...
usermod -aG docker $USER
newgrp docker
Create necessary folders...
mkdir "/etc/docker"
mkdir "/etc/containers"
Create "registries.conf" file with an insecure registry parameter ("172.30.0.0/16") to the Docker daemon...
tee "/etc/containers/registries.conf" << EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF
Create "daemon.json" file with configurations...
tee "/etc/docker/daemon.json" << EOF
{
"insecure-registries": [
"172.30.0.0/16"
]
}
EOF
We need to reload systemd and restart the Docker daemon after editing the config...
systemctl daemon-reload
systemctl restart docker
Enable Docker to start at boot...
systemctl enable docker
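To double-check that the insecure registry setting was picked up after the restart (the output layout may vary by Docker version)...
docker info | grep -A 2 "Insecure Registries"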
Then enable "IP forwarding" on your system...
tee "/etc/sysctl.d/ip_forward.conf" << EOF
net.ipv4.ip_forward=1
EOF
sysctl -w net.ipv4.ip_forward=1
Configure Firewalld.
Add the necessary firewall permissions...
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=8053/udp --permanent
firewall-cmd --reload
NOTE: This allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints, and adds the other required permissions.
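A quick way to confirm the rules were applied...
firewall-cmd --zone=public --list-ports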
Download OpenShift
Download the OpenShift binaries from GitHub and move them to the "/usr/local/bin/" folder...
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit
mv ./oc /usr/local/bin/
mv ./kubectl /usr/local/bin/
cd ..
rm -rf ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit*
Verify installation of OpenShift client utility...
oc version
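For v3.11.0 the output should look similar to this (abbreviated)...
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0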
Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single server OpenShift Origin cluster by running the following command...
oc cluster up --public-hostname="<YOUR_SERVER_IP_OR_NAME>"
... or...
oc cluster up --public-hostname="$(ip route get 1 | awk '{print $NF;exit}')"
The latter dynamically determines the primary IP address of the local machine.
[Ref(s).: https://stackoverflow.com/a/25851186/3223785 ]
TIP: In case of error, try performing the command oc cluster down and then repeat the command above.
NOTE: Insufficient hardware configuration (mainly CPU and RAM) will cause timeout on the command above.
IMPORTANT: If the parameter --public-hostname="<YOUR_SERVER_IP_OR_NAME>" is not provided, then calls to the web service ("web console") at the URL <YOUR_SERVER_IP_OR_NAME> will be redirected to the local IP "127.0.0.1".
[Ref(s).: https://github.com/openshift/origin/issues/19699 , https://github.com/openshift/origin/issues/19699#issuecomment-854069124 , https://github.com/openshift/origin/issues/20726 ,
https://github.com/openshift/origin/issues/20726#issuecomment-498078849 , https://hayardillasenlared.blogspot.com/2020/06/instalar-openshift-origin-ubuntu.html , https://www.a5idc.net/helpview_526.html , https://thecodeshell.wordpress.com/ , https://www.techrepublic.com/article/how-to-install-openshift-origin-on-ubuntu-18-04/ ]
The command above will...
Start the OKD cluster listening on the informed interface (<YOUR_SERVER_IP_OR_NAME>:8443);
Start a web console listening on all interfaces at "/console" (<YOUR_SERVER_IP_OR_NAME>:8443);
Launch Kubernetes system components;
Provision a registry, a router, initial templates and a default project.
The OpenShift cluster will run as an all-in-one container on a Docker host.
On a successful installation, you should get output similar to below...
[...]
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://<YOUR_SERVER_IP_OR_NAME>:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
TIPS:
There are a number of options which can be applied when setting up OpenShift Origin. View them with oc cluster up --help;
Command model using custom options...
MODEL
oc cluster up --public-hostname="<PUBLIC_HOSTNAME_OR_IP>" --routing-suffix="<PUBLIC_HOSTNAME_OR_IP>.<SUFFIX>"
EXAMPLE
oc cluster up --public-hostname="192.168.56.124" --routing-suffix="192.168.56.124.nip.io"
The OpenShift Origin cluster configuration files will be located inside the "~/openshift.local.clusterup" directory, where "~" is the logged-in user's home directory.
If your cluster setup was successful, the command...
oc cluster status
... will give you a positive output like this...
Web console URL: https://<YOUR_SERVER_IP_OR_NAME>:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /root/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Run OpenShift as a single node cluster service on system startup
Create OpenShift service file...
read -r -d '' FILE_CONTENT << 'HEREDOC'
BEGIN
[Unit]
Description=OpenShift oc cluster up service
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/bash -c "/usr/local/bin/oc cluster up --public-hostname=\"$(ip route get 1 | awk '{print $NF;exit}')\""
ExecStop=/usr/bin/bash -c "/usr/local/bin/oc cluster down"
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=occlusterup
User=root
Type=oneshot
RemainAfterExit=yes
TimeoutSec=300
[Install]
WantedBy=multi-user.target
END
HEREDOC
echo -n "${FILE_CONTENT:6:-3}" > '/etc/systemd/system/openshift.service'
NOTE: For some reason, without the workaround /usr/bin/bash -c "<SOME_COMMAND>" we were unable to start the OpenShift cluster. Additional information about the parameters of the oc cluster up command can be found in the references immediately below.
[Ref(s).: https://avinetworks.com/docs/18.1/avi-vantage-openshift-installation-guide/ ,
https://github.com/openshift/origin/issues/7177#issuecomment-391478549 ,
https://github.com/minishift/minishift/issues/1910#issuecomment-375031172 ]
[Ref(s).: https://tobru.ch/openshift-oc-cluster-up-as-systemd-service/ , https://eenfach.de/gitblit/blob/RedHatTraining!agnosticd.git/af831991c7c752a1215cfc4cff6a028e31f410d7/ansible!configs!rhte-oc-cluster-vms!files!oc-cluster.service.j2 ]
Start and enable (start at boot) the OpenShift service, then watch the log output...
systemctl enable openshift.service
systemctl start openshift.service
journalctl -u openshift.service -f --no-pager | less
Using OpenShift OKD/Origin Admin Console
OKD includes a web console which you can use for creation and other management actions. This web console is accessible on the server IP/hostname on port 8443 via HTTPS...
https://<IP_OR_HOSTNAME>:8443/console
NOTE: You should see an OpenShift Origin page with a username and password form (USERNAME: developer / PASSWORD: developer).
Deploy a test application in the Cluster
Log in to the OpenShift cluster as the "regular developer" user (USERNAME: developer / PASSWORD: developer)...
oc login
TIP: You begin logged in as "developer".
Create a test project using oc "new-project" command...
MODEL
oc new-project <PROJECT_NAME> --display-name="<PROJECT_DISPLAY_NAME>" --description="<PROJECT_DESCRIPTION>"
EXAMPLE
oc new-project test-project --display-name="Test Project" --description="My cool Test Project."
NOTE: All commands below involving the "deployment-example" parameter value will be linked to "test-project", because after creating this project it is selected as the project for the subsequent operations. To confirm this, log in as administrator using the oc login -u system:admin command and see the output of the oc status command. For more information, see the oc project <PROJECT_NAME> command in the "Some OpenShift Origin Cluster Useful Commands" section.
Tag an application image from the Docker Hub registry...
oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Deploy application to OpenShift...
MODEL
oc new-app <DEPLOYMENT_NAME>
EXAMPLE
oc new-app "deployment-example"
Allow external access to the deployed application...
MODEL
oc expose "svc/<DEPLOYMENT_NAME>"
EXAMPLE
oc expose "svc/deployment-example"
Show application deployment status...
oc status
Show pods status...
oc get pods
Get service detailed information...
oc get svc
Test Application local access...
NOTE: See <CLUSTER_IP> on command oc get svc output above.
curl http://<CLUSTER_IP>:8080
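If you prefer to script this, the cluster IP can be extracted with a jsonpath query (a sketch, assuming the service is named "deployment-example")...
CLUSTER_IP=$(oc get svc "deployment-example" -o jsonpath='{.spec.clusterIP}')
curl "http://${CLUSTER_IP}:8080"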
See external access route to the deployed application...
oc get routes
Test external access to the application...
Open the URL <HOST_PORT> in your browser.
MODEL
http://<HOST_PORT>
EXAMPLE
http://deployment-example-test-project.192.168.56.124.nip.io
NOTES:
See <HOST_PORT> in the oc get routes output;
The wildcard DNS record *.<IP_OR_HOSTNAME>.nip.io points to OpenShift Origin server IP address.
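The route host can likewise be fetched and tested from the shell (a sketch, assuming the route is named "deployment-example")...
ROUTE_HOST=$(oc get route "deployment-example" -o jsonpath='{.spec.host}')
curl "http://${ROUTE_HOST}"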
Delete test project...
MODEL
oc delete project "<PROJECT_NAME>"
EXAMPLE
oc delete project "test-project"
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#deleting-a-project-using-the-CLIprojects ]
Delete test deployment...
MODEL
oc delete all -l app="<DEPLOYMENT_NAME>"
EXAMPLE
oc delete all -l app="deployment-example"
Check pods status after deleting the project and the deployment...
oc get pods
TIP: Completely recreate the cluster...
oc cluster down
rm -rf ~/openshift.local.clusterup
NOTES:
It may be necessary to reboot the server in order to delete the folder above;
The "~" is the logged-in user home directory.
Some OpenShift Origin Cluster Useful Commands
To log in as an administrator, use...
oc login -u system:admin
As the administrator ("system:admin") user you can see information such as node status...
oc get nodes
To get more detailed information about a specific node, including the reason for the current condition...
MODEL
oc describe node "<NODE_NAME>"
EXAMPLE
oc describe node "localhost"
To display a summary of the resources you created...
oc status
Select a project to perform CLI operations...
oc project "<PROJECT_NAME>"
NOTE: The selected project will be used in all subsequent operations that manipulate project-scoped content.
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#viewing-a-project-using-the-CLI_projects ]
To return to the "regular developer" user (USERNAME: developer / PASSWORD: developer)...
oc login
To check who the logged-in user is...
oc whoami
Thanks! =D

Related

OpenShift: deploy an application from a private registry by using the "oc new-app" command

In OpenShift, I want to deploy an application using a Docker image located in a private Docker registry. To do this I ran the following command from the terminal using the OpenShift Container Platform command-line interface (oc CLI):
oc new-app --docker-image=myregistry.com/mycompany/myimage --name=private --insecure-registry=true
I received a 407 Proxy Authentication error when I ran the above command, because pulling the image from my private registry requires authentication. I have a secret for this authentication too, but I don't know how to add the secret to the above command.
Could you help me, please? Or suggest another way...
Finally, I solved it. The problem was missing steps while creating the secret for the private Docker registry. All the steps are:
1) If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:
$ oc create secret docker-registry <pull_secret_name> \
--docker-server=<registry_server> \
--docker-username=<user_name> \
--docker-password=<password> \
--docker-email=<email>
2) To use a secret for pulling images for Pods, you must add the secret to your service account:
$ oc secrets link default <pull_secret_name> --for=pull
3) To use a secret for pushing and pulling build images, the secret must be mountable inside of a Pod. You can do this by running:
$ oc secrets link builder <pull_secret_name>
https://docs.openshift.com/container-platform/4.1/openshift_images/managing-images/using-image-pull-secrets.html
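Putting it all together, a hypothetical end-to-end example (the registry server, credentials and secret name below are placeholders, not values from the question):
oc create secret docker-registry myregistry-secret \
    --docker-server=myregistry.com \
    --docker-username=myuser \
    --docker-password=mypassword \
    --docker-email=me@example.com
oc secrets link default myregistry-secret --for=pull
oc secrets link builder myregistry-secret
oc new-app --docker-image=myregistry.com/mycompany/myimage --name=private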

minishift - Monitoring pods

As per the documentation, monitoring is shipped with OKD.
OKD ships with a pre-configured and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and ships with a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards.
Further, as per the documentation, this command should show links for various monitoring tools: oc -n openshift-monitoring get routes
When I run the oc command as the system user, I get the message: No resources found.
The installation does not go through.
git clone https://github.com/openshift/cluster-monitoring-operator
cd cluster-monitoring-operator
oc apply -f manifests/
Error messages:
namespace "openshift-monitoring" created
serviceaccount "cluster-monitoring-operator" created
unable to decode "manifests/0000_50_cluster_monitoring_operator_02-role.yaml": no kind "ClusterRole" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_03-role-binding.yaml": no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_04-deployment.yaml": no kind "Deployment" is registered for version "apps/v1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_05-clusteroperator.yaml": no kind "ClusterOperator" is registered for version "config.openshift.io/v1"
unable to decode "manifests/0000_90_cluster_monitoring_operator_00-operatorgroup.yaml": no kind "OperatorGroup" is registered for version "operators.coreos.com/v1"
So, how do we enable monitoring with minishift?
You can follow this to install prometheus in minishift:
https://github.com/minishift/minishift-addons/tree/master/add-ons/prometheus
Be sure that you log in as admin. If you encounter problems logging in as admin, you can follow these steps:
minishift ssh
[docker@example ~]$ sudo su
[root@example ~]# export KUBECONFIG=/var/lib/minishift/base/openshift-apiserver/admin.kubeconfig PATH="$PATH:/var/lib/minishift/bin"
[root@example ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
[root@example ~]# exit
[docker@example ~]$ exit
oc login -u admin -p admin
oc whoami
You will see that you are logged in as admin.
When I entered the command to apply the Prometheus add-on, I encountered this problem:
minishift addons apply prometheus --addon-env namespace=kube-system
-- Applying addon 'prometheus':.Error applying the add-on: Error executing command 'oc new-app -f prometheus.yaml -p NAMESPACE=#{namespace} -n #{namespace}'.
Solution:
1) Log in to Minishift as admin using "oc login -u admin -p admin".
2) Go to the namespace "kube-system" with "oc project kube-system".
3) Click on "Add to project" -> "import YAML/JSON".
4) Clone the prometheus add-on to your local machine from https://github.com/minishift/minishift-addons.git
5) Import the ../minishift-addons/add-ons/prometheus/prometheus.yml into the "kube-system" namespace.
Afterwards, Prometheus will be deployed.
You can access the Prometheus graph UI at: https://prometheus-kube-system.$minishift-host-ip-address.nip.io.
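Alternatively, the add-on can be installed and applied from the CLI (a sketch based on the minishift-addons repository layout; the clone path below is a placeholder):
git clone https://github.com/minishift/minishift-addons.git
minishift addons install minishift-addons/add-ons/prometheus
minishift addons apply prometheus --addon-env namespace=kube-system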

how to retrieve the ip of openshift private docker registry when started through minishift and accessed from machine where minishift is present

I have installed OpenShift through Minishift on a Mac. I am able to run the command docker login -u developer -p <pass> 172.30.1.1:5000 from the shell where OpenShift is running. However, I need to run the same login command from the host Mac machine and don't know which IP to use.
The openshift console can be accessed from https://192.168.64.3:8443.
The command minishift openshift registry returns an error.
I can run the oc commands from mac host machine.
I think you are better off logging in to the Docker daemon, https://docs.okd.io/latest/minishift/using/docker-daemon.html, instead of logging in to the internal registry directly. Once you have done so, you can use the Docker client as it is bound to your Minishift environment.

registry URL and process for installing an external docker image on openshift online (v3)

I am using the OpenShift Online platform. I am trying to build a custom Docker image locally (on my Mac) and push it to the registry of my project on OpenShift Online.
I am unable to do that. Can someone please advise what the registry URL should be?
I have tried using the following:
registry.starter-us-east-1.openshift.com
registry.access.redhat.com
The full command I am trying to use to log in is below; however, I am not getting a response. The screen just waits.
docker login -u username -e any_email_address -p token_value registry_service_host:port
My intent, after completing above, is to then try and push the image that I have built locally.
Any advice on the above or else alternate approaches would be appreciated. Thank you.
To discover the OpenShift Online registry URL, use the following steps:
1) Click the "Copy Login Command" button to copy the oc login command;
2) Run the oc login command in the terminal;
3) After logging in, run oc registry info in the terminal.
The registry is at --> registry.<cluster-id>.openshift.com.
For starter tier US East region, the cluster id is --> starter-us-east-1.
So, the registry can be found at --> registry.starter-us-east-1.openshift.com.
Once you know the docker registry endpoint, you can follow the instructions at:
https://docs.openshift.com/online/dev_guide/managing_images.html#accessing-the-internal-registry
to login and pull/push images from/to the registry.
In short, use:
docker login -u `oc whoami` -e `oc whoami` -p `oc whoami -t` \
https://registry.starter-us-east-1.openshift.com
For future reference, the details for accessing the registry will appear on the About page in the help drop-down menu, albeit right now for Online that change hasn't managed to propagate into production, although it is already visible in newer versions of OpenShift.
The OpenShift internal registry is used internally by default to import images from external repositories. If you need to use it as a repository to pull and push images from your machine, you have to run the following command to enable the default route.
oc patch config.imageregistry cluster -n openshift-image-registry --type merge -p '{"spec": {"defaultRoute": true}}'
Then run
oc get route -n openshift-image-registry
to find the registry URL.
When pushing an image, use the following way to push it to the required project.
[URL]/[project]/[image]:[tag]
To log in using docker or podman:
TOKEN=$(oc whoami -t)
podman login -u anything -p ${TOKEN} [URL]
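For example, tagging and pushing a local image following the [URL]/[project]/[image]:[tag] pattern (a sketch; "myproject" and "myimage" are placeholders):
URL=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
podman tag myimage:latest ${URL}/myproject/myimage:latest
podman push ${URL}/myproject/myimage:latest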

OpenDaylight Application Developer’s tutorial ping fails

ubuntu#sdnhubvm:~$ sudo mn --topo single,3 --mac --switch ovsk,protocols=OpenFlow13 --controller remote
s1 ovs-ofctl add-flow tcp:127.0.0.1:6634 -OOpenFlow13 priority=1,action=output:controller
mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
What is the problem, please?
The L2Switch project provides Layer2 switch functionality.
Running the L2Switch project
Check out the project using git
git clone https://git.opendaylight.org/gerrit/p/l2switch.git
The above command creates a directory called "l2switch" with the project.
Run the distribution
To run the karaf distribution, you can use the following command:
./distribution/karaf/target/assembly/bin/karaf
NOTE: if Karaf doesn't boot up to the console, it is suggested to clear the contents of distribution/target/assembly/data/cache
To run the base distribution, you can use the following command
./distribution/base/target/distributions-l2switch-base-0.1.0-SNAPSHOT-osgipackage/opendaylight/run.sh
If you need additional resources, you can use these command-line arguments:
-Xms1024m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m
Creating a network using Mininet
sudo mn --controller=remote,ip=<Controller IP> --topo=linear,3 --switch ovsk,protocols=OpenFlow13
sudo mn --controller=remote,ip=127.0.0.1 --topo=linear,3 --switch ovsk,protocols=OpenFlow13
The above command will create a virtual network consisting of 3 switches. Each switch will connect to the controller located at the specified IP, in this case 127.0.0.1.
sudo mn --controller=remote,ip=127.0.0.1 --mac --topo=linear,3 --switch ovsk,protocols=OpenFlow13
The above command has the "mac" option, which makes it easier to distinguish between Host MAC addresses and Switch MAC addresses.
Generating network traffic using Mininet
h1 ping h2
The above command will cause host 1 (h1) to ping host 2 (h2).
pingall
'pingall' will cause every host to ping all other hosts.
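If the ping still fails, a quick check is to dump the flows on the switch from the Mininet CLI to see whether the controller actually installed any (the sh prefix runs the command on the host):
mininet> sh ovs-ofctl dump-flows s1 -O OpenFlow13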