Helm: could not find tiller - openshift

I'm getting this error message:
➜ ~ helm version
Error: could not find tiller
I've created a tiller project:
➜ ~ oc new-project tiller
Now using project "tiller" on server "https://192.168.99.100:8443".
Then, I've installed tiller into the tiller namespace:
➜ ~ helm init --tiller-namespace tiller
$HELM_HOME has been configured at /home/jcabre/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
So, after that, I waited for the tiller pod to be ready.
➜ ~ oc get pod -w
NAME READY STATUS RESTARTS AGE
tiller-deploy-66cccbf9cd-84swm 0/1 Running 0 18s
NAME READY STATUS RESTARTS AGE
tiller-deploy-66cccbf9cd-84swm 1/1 Running 0 24s
^C%
Any ideas?

Try deleting the tiller resources from your cluster:
kubectl get all --all-namespaces | grep tiller
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller
Initialise it again:
helm init
Now add the service account:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
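To verify the patch took effect, a quick check (the label selector assumes the default labels that helm init applies to the tiller deployment):
# The pod should be recreated and report the new service account
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version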
This solved my issue!

You don't have helm configured yet; use the following command:
helm init
This will create a .helm directory (with repository config, plugins, etc.) in your home directory.
Background:
helm comes with a client and a server. If your deployment environment is different, your helm server (known as tiller) may live somewhere other than the default namespace. In that case, there are two ways to point to tiller:
set the environment variable TILLER_NAMESPACE, or
pass --tiller-namespace string (namespace of Tiller; default "kube-system")
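For example (a minimal sketch, assuming tiller was installed into a namespace named tiller):
# Option 1: set the environment variable once per shell session
export TILLER_NAMESPACE=tiller
helm version
# Option 2: pass the flag on every invocation
helm version --tiller-namespace tiller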
For more details check the helm README.md file.

You installed tiller into a non-default namespace, so you have to tell helm where to look.
helm --tiller-namespace tiller version

First of all, you need to create a service account for tiller to use with helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
To verify that Tiller is running:
kubectl get pods --namespace kube-system
DigitalOcean Reference

Now you can upgrade to the latest version of Helm or any version > 3.0.0.
You don't need to do
helm init
anymore.
The client configuration directories are initialised automatically when you start using helm, as mentioned here.
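With Helm 3 you can go straight to installing charts; a minimal sketch (the repository URL is the official ingress-nginx chart repo, and the release name is a placeholder):
# No tiller and no init: the client talks to the API server via your kubeconfig
helm version --short
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-ingress ingress-nginx/ingress-nginx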

I was facing the same issue; try re-installing helm using the commands below:
For Linux (via Snap):
sudo snap install helm --classic
For Linux (from binary source):
Download your desired version
Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
Find the helm binary in the unpacked directory, and move it to its desired destination
(mv linux-amd64/helm /usr/local/bin/helm)
For macOS (via brew):
brew install kubernetes-helm
For Windows (via Chocolatey):
choco install kubernetes-helm
And finally, initialize helm:
helm init

With Helm 3 releases, we do not need tiller anymore. Try upgrading to Helm 3: it provides more security for your cluster, because tiller runs in your Kubernetes cluster with full administrative rights, which is a risk if somebody gets unauthorized access to the cluster.
If you migrate to Helm 3, you do not need to run helm init thereafter, because Helm 3 has a tiller-less architecture.
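If you have existing Helm 2 releases, the official helm-2to3 plugin can migrate your configuration and releases; a sketch (the release name is a placeholder):
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config            # migrates repositories, plugins and local config
helm 2to3 convert my-release     # converts a v2 release to v3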

Try:
cp /usr/local/bin/tiller ~/.helm/
and check whether helm is deployed on the server with:
helm version

Related

Basic install of k3s has no nodes

I followed the instructions here to install k3s. I also watched this tutorial.
In both cases they show running this command after the install:
k3s kubectl get node
However when I do that I get this:
# k3s kubectl get node
No resources found
What reasons could there be for this not working?
If I specify the kubeconfig file that Rancher creates, I get the same response.
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get node
No resources found
I believe that the cluster is running:
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Services and Namespaces
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16h
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get ns
NAME STATUS AGE
default Active 16h
kube-system Active 16h
kube-public Active 16h
kube-node-lease Active 16h
OS
# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
This is a VM with 2 CPUs and 8 GB RAM.
This was caused by an incompatible file system. The following was in the logs:
ERRO[2021-09-24T10:40:28.848795952-04:00] Failed to configure agent: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": /var/lib/rancher/k3s/agent/containerd does not support d_type. If the backing filesystem is xfs, please reformat with ftype=1 to enable d_type support
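To confirm the diagnosis, you can check whether the backing xfs filesystem was created with ftype=1 (i.e. d_type support); a sketch, assuming /var/lib/rancher sits on the root filesystem and k3s was installed as a systemd service:
# ftype=1 means d_type is supported and the overlayfs snapshotter will work
xfs_info / | grep ftype
# The error above appears in the k3s service logs
journalctl -u k3s | grep -i snapshotter
# As the message suggests, the native snapshotter is a workaround if reformatting is not an option
k3s server --snapshotter native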

Kubernetes ingress address is empty

I have set up a Kubernetes cluster using Minikube in an Ubuntu VM. I cloned this GitHub repo and created the namespace, deployment, service and ingress.
I have also enabled ingress addon by running minikube addons enable ingress.
When I run kubectl get svc -n ingress-nginx, the external IP is <none>.
When I run kubectl get ingress -n sample, the address is empty.
Please advise how to set up k8s ingress.
PS: I had minikube tunnel running.
PS 2: kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-* 0/1 Completed 0 11m
ingress-nginx-admission-patch--1-* 0/1 Completed 1 11m
ingress-nginx-controller-* 1/1 Running 0 11m
Thanks to this SO post: it worked after I downgraded Minikube to v1.11.0. I used --driver=none.
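If you hit the same symptom, watching the ingress until the controller populates its address can help narrow things down; a generic check, not from the original post (the service name assumes a recent version of the ingress addon):
# ADDRESS stays empty until the controller admits the ingress
kubectl get ingress -n sample -w
# Inspect how the controller is exposed
kubectl get svc -n ingress-nginx ingress-nginx-controller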

Openshift container with wrong openshift.io/scc

I'm seeing unexplained behavior in an OpenShift 4.4.17 cluster: the oauth-openshift Deployment (in the openshift-authentication namespace) has replicas=2. The first pod is Running with:
openshift.io/scc: anyuid
while the second pod goes into CrashLoopBackOff, and the SCC assigned to it is the one below:
openshift.io/scc: nginx-ingress-scc (a customized SCC for nginx purposes)
By documentation:
By default, the pods inside the openshift-authentication and openshift-authentication-operator namespaces run with the anyuid SCC.
I suppose something has been changed in the cluster, but I cannot figure out where the mistake is.
The oauth-openshift Deployment is in its default configuration:
serviceAccountName: oauth-openshift
namespace: openshift-authentication
$ oc get scc anyuid -o yaml
users:
system:serviceaccount:default:oauth-openshift
system:serviceaccount:openshift-authentication:oauth-openshift
system:serviceaccount:openshift-authentication:default
$ oc get pod -n openshift-authentication
NAME READY STATUS RESTARTS AGE
oauth-openshift-59f498986d-lmxdv 0/1 CrashLoopBackOff 158 13h
oauth-openshift-d4968bd74-ll7mn 1/1 Running 0 23d
$ oc logs oauth-openshift-59f498986d-lmxdv -n openshift-authentication
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep serviceAccount
serviceAccount: oauth-openshift
serviceAccountName: oauth-openshift
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep scc
openshift.io/scc: nginx-ingress-scc
Auth Operator:
$ oc get pod -n openshift-authentication-operator
NAME READY STATUS RESTARTS AGE
authentication-operator-5498b9ddcb-rs9v8 1/1 Running 0 33d
$ oc get pod authentication-operator-5498b9ddcb-rs9v8 -n openshift-authentication-operator -o=yaml|grep scc
openshift.io/scc: anyuid
The managementState is set to Managed
First of all, you should check whether your SCC priorities have been customized. For example, the anyuid SCC priority is 10, which is the highest by default.
If another SCC (in this case, nginx-ingress-scc) is configured with a priority higher than 10, then that SCC is selected by the oauth pod unexpectedly, which can cause this issue.
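A quick way to compare priorities across all SCCs, plus a sketch of resetting the custom SCC's priority (the priority field is part of the SCC spec; adjust to your own policy before applying):
# Higher priority wins when more than one SCC is available to a pod
oc get scc -o custom-columns=NAME:.metadata.name,PRIORITY:.priority
# Clearing the custom SCC's priority lets anyuid (priority 10) win again
oc patch scc nginx-ingress-scc --type=merge -p '{"priority": null}'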
The problem was that the customized SCC (nginx-ingress-scc) had a priority higher than 10, which is anyuid's priority.
Now solved.

Procedure to install an Ingress controller

Unable to install ingress-nginx for kubernetes on Docker desktop
I was using the following on the command line to install ingress-nginx so far:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
as shown in the web page: https://che.eclipse.org/running-eclipse-che-on-kubernetes-using-docker-desktop-for-mac-5d972ed511e1
It seems like the installation procedure has changed. Can anyone give me step-by-step instructions to install ingress-nginx? I couldn't install it by following the procedure described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Installation via helm works perfectly for me. Assuming you have the kubectl binary installed and configured for your k8s cluster, you can follow the steps below one by one to install the nginx-ingress controller.
1. Install the helm binary (if it doesn't exist):
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/get_helm.sh | bash
2. Install helm for your cluster (if not installed yet):
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/install.sh | bash
You should see output like
...
Waiting for tiller install...
Helm install complete
3. Then install nginx-ingress via helm:
helm install stable/nginx-ingress --name nginx-ingress
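Note that --name is Helm 2 syntax, and the stable chart repository has since been deprecated; with Helm 3 the rough equivalent is:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx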

Openshift stable release install centos 7

Can someone please let me know which version of openshift-ansible (origin) is stable enough to install on CentOS 7?
I am looking for successful multi-node install experience and any tips that were used.
Thanks
The latest stable release is 3.9:
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.9
and follow the Advanced Installation guide
https://docs.openshift.org/latest/install_config/install/advanced_install.html
It is now working.
After enabling openshift_repos_enable_testing=true, I had not run the prerequisites playbook before the deploy_cluster playbook, which was why it was still giving the error about not finding the packages.
I believe that the v3.11.0 version of OpenShift OKD/Origin (the latest 3.x release at the time) meets your needs. What follows is a complete roadmap for installing OpenShift OKD/Origin as a single-node cluster service.
Some information transposed from the OKD website about OpenShift OKD/Origin...
The Community Distribution of Kubernetes that powers Red Hat
OpenShift. Built around a core of OCI container packaging and
Kubernetes container cluster management, OKD is also augmented by
application lifecycle management functionality and DevOps tooling. OKD
provides a complete open source container application platform.
OKD is a distribution of Kubernetes optimized for continuous
application development and multi-tenant deployment. OKD adds
developer and operations-centric tools on top of Kubernetes to enable
rapid application development, easy deployment and scaling, and
long-term lifecycle maintenance for small and large teams. OKD is a
sibling Kubernetes distribution to Red Hat OpenShift.
OKD embeds Kubernetes and extends it with security and other
integrated concepts. OKD is also referred to as Origin in github and
in the documentation.
If you are looking for enterprise-level support, or information on
partner certification, Red Hat also offers Red Hat OpenShift Container
Platform.
So I recommend starting with OpenShift OKD/Origin using the roadmap below to install on CentOS 7. Then you can explore other possibilities ("multi-node", for example).
However, if you want to test OpenShift (OKD) 4.X, the guide and the right way to do this is at this link: Install the OpenShift (OKD) 4.X cluster (UPI/"bare-metal"). It is a long road with a reasonable level of complexity.
PLUS:
Information about OpenShift Ansible on GitHub and Red Hat Ansible;
You can take a look at the OpenShift Installer (NOT OKD/Origin!).
OpenShift Origin (OKD) - Open source container application platform:
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform - an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and Openshift Dedicated is the platform offered as a managed service.
The OpenShift Console has developer and administrator oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, etc. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.
OpenShift Origin (OKD) is the community-driven version of OpenShift (non-enterprise-level). That means you can host your own PaaS (Platform as a Service) for free and almost with no hassle.
[Ref(s).: https://en.wikipedia.org/wiki/OpenShift ,
https://www.openshift.com/blog/openshift-ecosystem-get-started-openshift-origin-gitlab ]
Setup Local OpenShift Origin (OKD) Cluster on CentOS 7
All commands in this setup must be performed with the "root" user.
Update CentOS 7
Updating your CentOS 7 server...
yum -y update
Install and Configure Docker
OpenShift requires the Docker engine on the host machine to run containers. Install Docker and other dependencies on CentOS 7 using the commands below...
yum -y install yum-utils device-mapper-persistent-data lvm2 git-core wget
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
Add the logged-in user account to the docker group...
usermod -aG docker $USER
newgrp docker
Create necessary folders...
mkdir "/etc/docker"
mkdir "/etc/containers"
Create "registries.conf" file with an insecure registry parameter ("172.30.0.0/16") to the Docker daemon...
tee "/etc/containers/registries.conf" << EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF
Create "daemon.json" file with configurations...
tee "/etc/docker/daemon.json" << EOF
{
"insecure-registries": [
"172.30.0.0/16"
]
}
EOF
We need to reload systemd and restart the Docker daemon after editing the config...
systemctl daemon-reload
systemctl restart docker
Enable Docker to start at boot...
systemctl enable docker
Then enable "IP forwarding" on your system...
tee "/etc/sysctl.d/ip_forward.conf" << EOF
net.ipv4.ip_forward=1
EOF
sysctl -w net.ipv4.ip_forward=1
Configure Firewalld.
Add the necessary firewall permissions...
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=8053/udp --permanent
firewall-cmd --reload
NOTE: This allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints, plus other needed permissions.
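To confirm the rules took effect:
# Should list 80/tcp 443/tcp 8443/tcp 53/udp 8053/udp
firewall-cmd --zone=public --list-ports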
Download OpenShift
Download the OpenShift binaries from GitHub and move them to the "/usr/local/bin/" folder...
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit
mv ./oc /usr/local/bin/
mv ./kubectl /usr/local/bin/
rm -rf ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit*
Verify installation of OpenShift client utility...
oc version
Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single server OpenShift Origin cluster by running the following command...
oc cluster up --public-hostname="<YOUR_SERVER_IP_OR_NAME>"
... or...
oc cluster up --public-hostname="$(ip route get 1 | awk '{print $NF;exit}')"
The latter dynamically determines the primary IP address of the local machine.
[Ref(s).: https://stackoverflow.com/a/25851186/3223785 ]
TIP: In case of error, try running oc cluster down and repeating the command above.
NOTE: An insufficient hardware configuration (mainly CPU and RAM) will cause the command above to time out.
IMPORTANT: If the parameter --public-hostname="<YOUR_SERVER_IP_OR_NAME>" is not given, then calls to the web service ("web console") at URL <YOUR_SERVER_IP_OR_NAME> will be redirected to the local IP "127.0.0.1".
[Ref(s).: https://github.com/openshift/origin/issues/19699 , https://github.com/openshift/origin/issues/19699#issuecomment-854069124 , https://github.com/openshift/origin/issues/20726 ,
https://github.com/openshift/origin/issues/20726#issuecomment-498078849 , https://hayardillasenlared.blogspot.com/2020/06/instalar-openshift-origin-ubuntu.html , https://www.a5idc.net/helpview_526.html , https://thecodeshell.wordpress.com/ , https://www.techrepublic.com/article/how-to-install-openshift-origin-on-ubuntu-18-04/ ]
The command above will...
Start the OKD cluster listening on the given interface (<YOUR_SERVER_IP_OR_NAME>:8443);
Start a web console listening on all interfaces at "/console" (<YOUR_SERVER_IP_OR_NAME>:8443);
Launch Kubernetes system components;
Provision a registry, router, initial templates and a default project;
Run the OpenShift cluster as an all-in-one container on a Docker host.
On a successful installation, you should get output similar to below...
[...]
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://<YOUR_SERVER_IP_OR_NAME>:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
TIPS:
There are a number of options which can be applied when setting up OpenShift Origin. View them with oc cluster up --help;
Command model using custom options...
MODEL
oc cluster up --public-hostname="<PUBLIC_HOSTNAME_OR_IP>" --routing-suffix="<PUBLIC_HOSTNAME_OR_IP>.<SUFFIX>"
EXAMPLE
oc cluster up --public-hostname="192.168.56.124" --routing-suffix="192.168.56.124.nip.io"
The OpenShift Origin cluster configuration files will be located inside the "~/openshift.local.clusterup" directory. The "~" is the logged-in user's home directory.
If your cluster setup was successful the command...
oc cluster status
... will give you a positive output like this...
Web console URL: https://<YOUR_SERVER_IP_OR_NAME>:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /root/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Run OpenShift as a single-node cluster service on system startup
Create OpenShift service file...
read -r -d '' FILE_CONTENT << 'HEREDOC'
BEGIN
[Unit]
Description=OpenShift oc cluster up service
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/bash -c "/usr/local/bin/oc cluster up --public-hostname=\"$(ip route get 1 | awk '{print $NF;exit}')\""
ExecStop=/usr/bin/bash -c "/usr/local/bin/oc cluster down"
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=occlusterup
User=root
Type=oneshot
RemainAfterExit=yes
TimeoutSec=300
[Install]
WantedBy=multi-user.target
END
HEREDOC
echo -n "${FILE_CONTENT:6:-3}" > '/etc/systemd/system/openshift.service'
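(The ${FILE_CONTENT:6:-3} expansion strips the BEGIN/END sentinels from the heredoc: the first 6 characters cover "BEGIN" plus its newline, and the last 3 cover "END", so only the unit file body is written.)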
NOTE: For some reason, without the /usr/bin/bash -c "<SOME_COMMAND>" workaround, we were unable to start the OpenShift cluster. Additional information about parameters for the oc cluster up command can be seen in the references immediately below.
[Ref(s).: https://avinetworks.com/docs/18.1/avi-vantage-openshift-installation-guide/ ,
https://github.com/openshift/origin/issues/7177#issuecomment-391478549 ,
https://github.com/minishift/minishift/issues/1910#issuecomment-375031172 ]
[Ref(s).: https://tobru.ch/openshift-oc-cluster-up-as-systemd-service/ , https://eenfach.de/gitblit/blob/RedHatTraining!agnosticd.git/af831991c7c752a1215cfc4cff6a028e31f410d7/ansible!configs!rhte-oc-cluster-vms!files!oc-cluster.service.j2 ]
Start and enable (start at boot) the OpenShift service and see the log output in sequence...
systemctl enable openshift.service
systemctl start openshift.service
journalctl -u openshift.service -f --no-pager | less
Using OpenShift OKD/Origin Admin Console
OKD includes a web console which you can use for creation and other management actions. This web console is accessible on the server IP/hostname on port 8443 via https...
https://<IP_OR_HOSTNAME>:8443/console
NOTE: You should see an OpenShift Origin page with a username and password form (USERNAME: developer / PASSWORD: developer).
Deploy a test application in the Cluster
Log in to the OpenShift cluster as the "regular developer" user (USERNAME: developer / PASSWORD: developer)...
oc login
TIP: You begin logged in as "developer".
Create a test project using the oc new-project command...
MODEL
oc new-project <PROJECT_NAME> --display-name="<PROJECT_DISPLAY_NAME>" --description="<PROJECT_DESCRIPTION>"
EXAMPLE
oc new-project test-project --display-name="Test Project" --description="My cool Test Project."
NOTE: All commands below involving the "deployment-example" parameter value are linked to "test-project", because after creating this project it is selected as the project for subsequent settings. To confirm this, log in as administrator using the oc login -u system:admin command and see the output of the oc status command. For more information, see the oc project <PROJECT_NAME> command in the "Some OpenShift Origin Cluster Useful Commands" section.
Tag an application image from Docker Hub registry...
oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Deploy application to OpenShift...
MODEL
oc new-app <DEPLOYMENT_NAME>
EXAMPLE
oc new-app "deployment-example"
Allow external access to the deployed application...
MODEL
oc expose "svc/<DEPLOYMENT_NAME>"
EXAMPLE
oc expose "svc/deployment-example"
Show application deployment status...
oc status
Show pods status...
oc get pods
Get service detailed information...
oc get svc
Test local access to the application...
NOTE: See <CLUSTER_IP> in the oc get svc output above.
curl http://<CLUSTER_IP>:8080
See external access route to the deployed application...
oc get routes
Test external access to the application...
Open the URL <HOST_PORT> in your browser.
MODEL
http://<HOST_PORT>
EXAMPLE
http://deployment-example-test-project.192.168.56.124.nip.io
NOTES:
See <HOST_PORT> in the oc get routes output;
The wildcard DNS record *.<IP_OR_HOSTNAME>.nip.io points to the OpenShift Origin server IP address (see the check below).
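Because nip.io simply resolves any name that embeds an IP back to that IP, you can verify resolution with dig (hostname taken from the example above):
# Should print 192.168.56.124
dig +short deployment-example-test-project.192.168.56.124.nip.io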
Delete test project...
MODEL
oc delete project "<PROJECT_NAME>"
EXAMPLE
oc delete project "test-project"
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#deleting-a-project-using-the-CLIprojects ]
Delete test deployment...
MODEL
oc delete all -l app="<DEPLOYMENT_NAME>"
EXAMPLE
oc delete all -l app="deployment-example"
Check pods status after deleting the project and the deployment...
oc get pods
TIP: Completely recreate the cluster...
oc cluster down
rm -rf ~/openshift.local.clusterup
. It may be necessary to reboot the server to delete the above folder;
. The "~" is the logged-in user's home directory.
Some OpenShift Origin Cluster Useful Commands
To login as an administrator use...
oc login -u system:admin
As the administrator ("system:admin") user, you can see information such as node status...
oc get nodes
To get more detailed information about a specific node, including the reason for the current condition...
MODEL
oc describe node "<NODE_NAME>"
EXAMPLE
oc describe node "localhost"
To display a summary of the resources you created...
oc status
Select a project to perform CLI operations...
oc project "<PROJECT_NAME>"
NOTE: The selected project will be used in all subsequent operations that manipulate project-scoped content.
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#viewing-a-project-using-the-CLI_projects ]
To return to the "regular developer" user (USERNAME: developer / PASSWORD: developer)...
oc login
To check who is the logged in user...
oc whoami
Thanks! =D