OpenShift Login failed (401 Unauthorized)

I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on CentOS 7.
Running prerequisites.yml and deploy_cluster.yml completed successfully, and I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
and I have also created the user and identity with the commands below:
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password, it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below (from oc get pods) is in an error state.
Could that be causing the problem?
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But it now fails with the error below:
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console, the logging pod has the event below, but all the servers have enough memory (more than 65% free).
And the Ansible version is 2.6.5
1 Master node config:
4CPU, 16GB RAM, 50GB HDD
2 Slave and 1 infra node config:
4CPU, 16GB RAM, 20GB HDD

To create a new user, try to follow these steps:
1 On each master node, create the password entry in the htpasswd file with:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2 On each master node, restart the master API and master controllers:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3 Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4 Log in as myUser:
$ oc login -u myUser -p myPassword
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
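If you want to confirm that the control plane came back up after the playbook run, a quick check (assuming the default OKD 3.11 static pod names, e.g. master-api-<hostname> and master-controllers-<hostname>) is:
$ oc get pods -n kube-system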
About the other problem (the registry-console and logging-es-data-master pods not running): this is because you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already own all the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is available here.
If, after all of this, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD installation and install it again.
If your cluster is already in use and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
RH detailed instructions are here.

When I try to build the project deep_pix_bis_pad.icb2019, I cannot pass the SSH authentication

The project is here: deep_pix_bis_pad.icb2019
The paper: Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection
I followed the installation instructions as follows:
$ cd bob.paper.deep_pix_bis_pad.icb2019
$ conda env create -f environment.yml
$ source activate bob.paper.deep_pix_bis_pad.icb2019 # activate the environment
$ buildout
$ bin/bob_dbmanage.py all download --missing
$ bin/train_pixbis.py --help # test the installation
When I try to run buildout, I get an error message like this:
mr.developer: Cloned 'bob.bio.base' with git from 'git@gitlab.idiap.ch:bob/bob.bio.base'.
mr.developer: git cloning of 'bob.bio.base' failed.
mr.developer: Connection closed by 192.33.221.117 port 22
mr.developer: fatal: Could not read from remote repository.
I can connect to git@gitlab.com, but I am rejected by git@gitlab.idiap.ch:
$ ssh -T git@gitlab.com
Welcome to GitLab, @buzuyun!
$ ssh -T git@gitlab.idiap.ch
Connection closed by 192.33.221.xxx port 22
I guess it's because this is a server belonging to the Idiap Research Institute.
But I cannot log in or register an account, so I can never pass the SSH authentication, as I cannot add my SSH key without an account!
Has anyone reproduced this paper?
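A possible workaround (just a sketch, assuming the repository's buildout uses mr.developer's standard [sources] section and that gitlab.idiap.ch allows anonymous HTTPS clones) would be to point the failing checkout at HTTPS instead of SSH in buildout.cfg, for example:
[sources]
bob.bio.base = git https://gitlab.idiap.ch/bob/bob.bio.base.git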

How to register an account with "image-registry.openshift-image-registry"

Environment
OpenShift 4.3
Question
When we push or pull images to the OpenShift image registry, the documentation recommends using the kubeadmin account.
But I don't want to use the kubeadmin account.
So my question is: how can I log in to the registry with podman using another account?
$ oc debug node/workernode
sh-4.2# chroot /host
I want to use another account instead of kubeadmin:
sh-4.4# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Thanks
You can use any account that has been granted the "registry-viewer" or "registry-editor" role. This is also mentioned in the docs you referenced.
To use credentials other than the default "kubeadmin" admin account, you need to configure an additional identity provider. Refer to Understanding identity provider configuration for more details.
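As a rough sketch of that setup (assuming an HTPasswd identity provider; htpass-secret, htpasswd_provider, and /path/to/htpasswd are placeholder names here):
$ oc create secret generic htpass-secret --from-file=htpasswd=/path/to/htpasswd -n openshift-config
$ oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF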
For example, if you want to log in to the internal image registry as "testuser":
Grant required permissions to "testuser".
For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role:
$ oc policy add-role-to-user registry-viewer testuser
For writing or pushing images, for example when using the podman push command, the user must have the registry-editor role. To add this role:
$ oc policy add-role-to-user registry-editor testuser
Get a token for "testuser" to use as the image registry credential:
$ oc login -u testuser -p your_password
$ oc whoami -t
XXXXXX
Verify whether "testuser" can log in:
$ oc debug node/workernode
sh-4.2# chroot /host
sh-4.4# podman login -u testuser -p XXXXXX image-registry.openshift-image-registry.svc:5000
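Once that login succeeds, "testuser" can pull or push against the internal registry with podman (the project and image names below are placeholders; pulling needs registry-viewer and pushing needs registry-editor, as above):
sh-4.4# podman pull image-registry.openshift-image-registry.svc:5000/<project>/<image>:<tag>
sh-4.4# podman push <local-image> image-registry.openshift-image-registry.svc:5000/<project>/<image>:<tag>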

couchbase-cli: command not found

I downloaded deb package from https://www.couchbase.com/downloads and installed it using:
sudo dpkg -i couchbaseXXX.deb
It is successfully installed but when I try to execute:
couchbase-cli bucket-create -c localhost:8091 -u Administrator ****
Returns:
couchbase-cli: command not found
What is the issue behind that, and how do I fix it?
First you have to set up the Couchbase cluster with the same command before creating the bucket. An example is below; --services can be index, data, or query.
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p Public123 --cluster-username=Administrator --cluster-password=Public123 --cluster-port=8091 --cluster-ramsize=49971 --cluster-index-ramsize=2000 --services=data
You have to go to the CLI directory location and run the command from there.
Below are the steps I followed:
cd /opt/couchbase/bin
./couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100
Once I ran the above command, I got a success message and the bucket was created.
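Alternatively, assuming the binaries were installed under /opt/couchbase/bin as in the commands above, you can add that directory to your PATH so that couchbase-cli is found without the full path:
export PATH=$PATH:/opt/couchbase/bin
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100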

minishift - Monitoring pods

As per the documentation, monitoring is shipped with OKD.
OKD ships with a pre-configured and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and ships with a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards.
Further, as per the documentation, this command should show links for the various monitoring tools:
oc -n openshift-monitoring get routes
When I run the oc command as the system user, I get the message: No resources found.
The installation does not go through.
git clone https://github.com/openshift/cluster-monitoring-operator
cd cluster-monitoring-operator
oc apply -f manifests/
Error messages:
namespace "openshift-monitoring" created
serviceaccount "cluster-monitoring-operator" created
unable to decode "manifests/0000_50_cluster_monitoring_operator_02-role.yaml": no kind "ClusterRole" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_03-role-binding.yaml": no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_04-deployment.yaml": no kind "Deployment" is registered for version "apps/v1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_05-clusteroperator.yaml": no kind "ClusterOperator" is registered for version "config.openshift.io/v1"
unable to decode "manifests/0000_90_cluster_monitoring_operator_00-operatorgroup.yaml": no kind "OperatorGroup" is registered for version "operators.coreos.com/v1"
So, how do we enable monitoring with minishift?
You can follow this guide to install Prometheus in Minishift:
https://github.com/minishift/minishift-addons/tree/master/add-ons/prometheus
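For reference, installing and applying the add-on from the command line looks roughly like this (a sketch, assuming you have cloned the minishift-addons repository locally and are logged in as admin, as described below):
git clone https://github.com/minishift/minishift-addons.git
minishift addons install minishift-addons/add-ons/prometheus
minishift addons apply prometheus --addon-env namespace=kube-system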
Be sure that you log in as admin. If you encounter problems logging in as admin, you can follow these steps:
minishift ssh
[docker@example ~]$ sudo su
[root@example ~]# export KUBECONFIG=/var/lib/minishift/base/openshift-apiserver/admin.kubeconfig PATH="$PATH:/var/lib/minishift/bin"
[root@example ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
[root@example ~]# exit
[docker@example ~]$ exit
oc login -u admin -p admin
oc whoami
You will see that you are logged in as admin.
When I entered the command to apply the Prometheus add-on, I encountered this problem:
minishift addons apply prometheus --addon-env namespace=kube-system
-- Applying addon 'prometheus':.Error applying the add-on: Error executing command 'oc new-app -f prometheus.yaml -p NAMESPACE=#{namespace} -n #{namespace}'.
Solution:
1 Log in to Minishift as admin using "oc login -u admin -p admin".
2 Go to the namespace "kube-system" with "oc project kube-system".
3 Click on "Add to project" -> "Import YAML/JSON".
4 Clone the prometheus add-on to your local machine from https://github.com/minishift/minishift-addons.git.
5 Import ../minishift-addons/add-ons/prometheus/prometheus.yml into the "kube-system" namespace.
Afterwards, Prometheus will be deployed.
You can access the Prometheus graph UI at: https://prometheus-kube-system.$minishift-host-ip-address.nip.io.
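The $minishift-host-ip-address part is the IP of the Minishift VM, which you can look up with:
minishift ip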

How to import mysql database (schema.sql) in OpenShift v3 with mysql datastore?

Previously I was using OpenShift v2, where it was quite easy to import a MySQL schema into the app: I would just add a phpMyAdmin cartridge to my OpenShift app and then import my SQL file. But OpenShift v3 doesn't have a phpMyAdmin cartridge.
If I understand correctly, you want to migrate a MySQL database application from OpenShift version 2 (v2) to OpenShift version 3 (v3). If so, here are the steps:
Export all databases to a dump file and copy it to a local machine (into the current directory):
$ rhc ssh <v2_application_name>
$ mysqldump --skip-lock-tables -h $OPENSHIFT_MYSQL_DB_HOST -P ${OPENSHIFT_MYSQL_DB_PORT:-3306} -u ${OPENSHIFT_MYSQL_DB_USERNAME:-'admin'} \
--password="$OPENSHIFT_MYSQL_DB_PASSWORD" --all-databases > ~/app-root/data/all.sql
$ exit
Download the dump file to your local machine:
$ mkdir mysqldumpdir
$ rhc scp -a <v2_application_name> download mysqldumpdir app-root/data/all.sql
Create a v3 mysql-persistent pod from template:
$ oc new-app mysql-persistent -p \
MYSQL_USER=<your_V2_mysql_username> -p \
MYSQL_PASSWORD=<your_v2_mysql_password> -p MYSQL_DATABASE=<your_v2_database_name>
Check to see if the pod is ready to use:
$ oc get pods
When the pod is up and running, copy database archive files to your v3 MySQL pod:
$ oc rsync /local/mysqldumpdir <mysql_pod_name>:/var/lib/mysql/data
Restore the database in the v3 running pod:
$ oc rsh <mysql_pod>
$ cd /var/lib/mysql/data/mysqldumpdir
In v3, to restore databases you need to access MySQL as root user.
In v2, the $OPENSHIFT_MYSQL_DB_USERNAME had full privileges on all databases. In v3, you must grant privileges to $MYSQL_USER for each database.
$ mysql -u root
$ source all.sql
Grant all privileges on <dbname> to <your_v2_username>@localhost, then flush privileges.
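For example, at the MySQL prompt (using the same <dbname> and <your_v2_username> placeholders as above):
mysql> GRANT ALL PRIVILEGES ON <dbname>.* TO '<your_v2_username>'@'localhost';
mysql> FLUSH PRIVILEGES;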
Remove the dump directory from the pod:
$ cd ../; rm -rf /var/lib/mysql/data/mysqldumpdir