I am running an OpenShift cluster using Minishift on Ubuntu. The Minishift IP is "192.168.42.48". I am following the URL to access the internal Docker registry.
After Minishift started successfully, I logged in as administrator using "oc login -u system:admin" and then added the cluster role to the user "chak".
~/github/cheatsheets$ oc adm policy add-cluster-role-to-user cluster-admin chak
cluster role "cluster-admin" added: "chak"
Then I copied the token for user "chak" and tried to log in to the Docker registry, but it failed with the error below. The Minishift IP and the IP in the error output are different. In the terminal I am already logged in as administrator and have added the cluster-admin role.
So I expect the Docker daemon to log in to the OpenShift cluster IP started by Minishift. Why is the Docker daemon trying to log in to the IP in the error rather than the Minishift IP?
I also have http_proxy, https_proxy and no_proxy set, since I am connected to a corporate network.
~/github/cheatsheets$ docker login -u chak -p C5u5F1iwA6gl4va1K8OZ01DaRPdMYMnDQklErn2FzjY docker-registry-default.127.0.0.1.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post https://192.168.42.253:2376/v1.39/auth: Gateway Timeout
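For reference, a quick way to check which daemon the docker CLI is pointed at. One common cause (an assumption here, not confirmed by the output above) is a stale DOCKER_HOST left by an earlier `eval $(minishift docker-env)` for a VM that no longer exists:
$ env | grep DOCKER              # shows where the docker CLI will connect (DOCKER_HOST, DOCKER_CERT_PATH, ...)
$ eval $(minishift docker-env)   # re-point the client at the current Minishift VM, if that is what you want
$ echo $no_proxy                 # the Minishift IP should be listed here so the corporate proxy is bypassed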
Edit 1:
~/github/hashitvault$ docker login -u chak -p Naqp6NScYF7zOcKN41SuYQ045qR9zBN6lfGVnvxhrU docker-registry-default.192.168.42.186.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get http://docker-registry-default.192.168.42.186.nip.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
The OpenShift internal Docker registry route is exposed.
When I hit it in the browser, I get a 502 server error.
What am I doing wrong here?
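As a side note (not part of the original question), the host that the registry route actually exposes can be double-checked like this; the route name and namespace are the usual OKD 3.x defaults and are assumptions here:
$ oc get route docker-registry -n default -o jsonpath='{.spec.host}'
$ docker login -u chak -p $(oc whoami -t) <host-printed-above>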
Related
I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on my CentOS 7 machine.
Both prerequisites.yml and deploy_cluster.yml ran with a successful status. I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
I have also created the user and identity with the commands below.
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state.
Could that be causing the problem? (cmd: oc get pods)
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But it now fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console, the logging pod shows the event below.
But all the servers have enough memory; more than 65% is free.
And the Ansible version is 2.6.5
1 Master node config:
4CPU, 16GB RAM, 50GB HDD
2 Slave and 1 infra node config:
4CPU, 16GB RAM, 20GB HDD
To create a new user, follow these steps:
1 On each master node, create the password entry in the htpasswd file with:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2 On each master node, restart the master API and master controllers:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3 Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4 Log in as myUser:
$ oc login -u myUser -p myPassword
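As a quick sanity check (not part of the original steps), the new user and the cluster-admin binding can be verified with:
$ oc whoami        # should print myUser
$ oc get nodes     # succeeds only with cluster-level permissions such as cluster-admin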
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem: the registry-console and logging-es-data-master pods are not running because you cannot run deploy_cluster.yml again when your cluster is already up and running. The SDN is already working and all your nodes already own all the needed certificates, so you have to uninstall OKD and then run the playbook again.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is here
If, after all this, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD installation and install it again.
If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
The detailed Red Hat instructions are here
As per the documentation, monitoring is shipped with OKD.
OKD ships with a pre-configured and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and ships with a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards.
Further, as per the documentation, this command should show links to the various monitoring tools: oc -n openshift-monitoring get routes
When I run that oc command as the system user, I get the message: No resources found.
The installation does not go through.
git clone https://github.com/openshift/cluster-monitoring-operator
cd cluster-monitoring-operator
oc apply -f manifests/
Error messages:
namespace "openshift-monitoring" created
serviceaccount "cluster-monitoring-operator" created
unable to decode "manifests/0000_50_cluster_monitoring_operator_02-role.yaml": no kind "ClusterRole" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_03-role-binding.yaml": no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_04-deployment.yaml": no kind "Deployment" is registered for version "apps/v1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_05-clusteroperator.yaml": no kind "ClusterOperator" is registered for version "config.openshift.io/v1"
unable to decode "manifests/0000_90_cluster_monitoring_operator_00-operatorgroup.yaml": no kind "OperatorGroup" is registered for version "operators.coreos.com/v1"
So, how do we enable monitoring with minishift?
You can follow this guide to install Prometheus in Minishift:
https://github.com/minishift/minishift-addons/tree/master/add-ons/prometheus
Make sure you log in as admin. If you have problems logging in as admin, you can follow these steps:
minishift ssh
[docker@example ~]$ sudo su
[root@example ~]# export KUBECONFIG=/var/lib/minishift/base/openshift-apiserver/admin.kubeconfig PATH="$PATH:/var/lib/minishift/bin"
[root@example ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
[root@example ~]# exit
[docker@example ~]$ exit
oc login -u admin -p admin
oc whoami
You will see that you are logged in as admin.
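For completeness, a rough sequence for installing and applying the addon from a local clone of that repository; the clone path is an assumption, so adjust it to your environment:
$ git clone https://github.com/minishift/minishift-addons.git
$ minishift addons install minishift-addons/add-ons/prometheus
$ minishift addons apply prometheus --addon-env namespace=kube-system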
When I entered the command to apply the Prometheus addon, I encountered this problem:
minishift addons apply prometheus --addon-env namespace=kube-system
-- Applying addon 'prometheus':.Error applying the add-on: Error executing command 'oc new-app -f prometheus.yaml -p NAMESPACE=#{namespace} -n #{namespace}'.
Solution:
Log in to Minishift as admin using "oc login -u admin -p admin".
Switch to the "kube-system" namespace with "oc project kube-system".
In the web console, click "Add to project" -> "Import YAML/JSON".
Clone the minishift-addons repository to your local machine from https://github.com/minishift/minishift-addons.git
Import ../minishift-addons/add-ons/prometheus/prometheus.yml into the "kube-system" namespace.
Afterwards, Prometheus will be deployed.
You can access the Prometheus graph UI at https://prometheus-kube-system.$minishift-host-ip-address.nip.io.
I am creating a development/testing container that contains a number of elements, including a MySQL server that must run internally for code to access it. To demonstrate the issue, I build the following Dockerfile and run the image with docker run -i -t demo_mysql_server:
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
Unfortunately, after building the image and starting the container, I receive a common connection error (see 1, 2):
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
which can be fixed by logging in as root with docker run -i -u 0 -t demo_mysql_server
and executing:
echo "NETWORKING=yes" >/etc/sysconfig/network
service network restart
/etc/init.d/mysqld start
chkconfig mysqld on
which seems to turn everything on. However, incorporating these commands into RUN instructions doesn't keep the service running. Logging in as root requires restarting the service as above, and after adding a user and working as a non-root user, trying to start the service results in errors of this flavor:
bash: /etc/sysconfig/network: Permission denied
[testUser@544a938c44c1 /]$ service network restart
[testUser@544a938c44c1 /]$ /etc/init.d/mysqld start
/etc/init.d/mysqld: line 16: /etc/sysconfig/network: No such file or directory
[testUser@544a938c44c1 /]$ chkconfig mysqld on
You do not have enough privileges to perform this operation.
Is this a normal error to see, and how do I get the MySQL server instance to stay running?
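For what it's worth, a minimal sketch of the usual fix: RUN instructions only execute while the image is being built, so a long-running daemon has to be started by the container's CMD (or an entrypoint script) instead. This assumes the same amazonlinux base as above, and that mysql_install_db and mysqld_safe live at their usual locations for this package:
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
# initialize the data directory at build time (path and ownership assumed for this package)
RUN mysql_install_db --user=mysql && chown -R mysql:mysql /var/lib/mysql
EXPOSE 3306
# run mysqld in the foreground as PID 1 so the container stays alive
CMD ["/usr/bin/mysqld_safe"]
Built with docker build -t demo_mysql_server . and started with docker run -d -p 3306:3306 demo_mysql_server, the container should keep running for as long as mysqld does.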
I have installed OpenShift through Minishift on a Mac. I am able to run the command docker login -u developer -p <pass> 172.30.1.1:5000 from the shell where OpenShift is running. However, I need to run the same login command from the host Mac machine and don't know which IP to use.
The OpenShift console can be accessed at https://192.168.64.3:8443.
The command minishift openshift registry returns an error.
I can run the oc commands from the Mac host machine.
I think you are better off logging in to the Docker daemon (https://docs.okd.io/latest/minishift/using/docker-daemon.html) rather than logging in to the internal registry directly. Once you've done that, you can use the Docker client as it is bound to your Minishift environment.
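In practice, on the Mac host that boils down to something like the following; the registry address is taken from the question and the flow assumes you are logged in to oc as the developer user:
$ eval $(minishift docker-env)                         # bind the local docker client to the daemon inside the Minishift VM
$ docker login -u developer -p $(oc whoami -t) 172.30.1.1:5000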
On my AWS EC2 server I have Docker 1.9.1 installed.
In an image test_image based on the official ubuntu:trusty Docker image, I have tried to set up the LEMP (Linux, Nginx, MySQL, PHP) stack.
The following is the Docker command I used to start my container:
docker run --name test_1 -d -p 80:80 -p 3306:3306 test_image /bin/sh -c "while true; do echo daemonized docker container; sleep 5000; done"
I have exposed ports 80 and 3306 on the host's network interface and have also configured AWS's security group to allow inbound connections to these ports. The connection type in the security group is MYSQL/Aurora and the protocol is TCP. (I know it's not very secure; it's only for the initial implementation. The production setup will be different.)
I followed this DigitalOcean tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-14-04
After installing Nginx and starting it, I am able to test it in the browser via the EC2 instance's public IP, i.e. http://xxx.xxx.xxx.xxx shows the default Nginx welcome page.
While installing MySQL, I ran the following commands in the Docker container:
apt-get install mysql-server
mysql_install_db
/etc/init.d/mysql start
mysql_secure_installation
I have given my root user a password, and during mysql_secure_installation I allowed remote access for the root user.
The mysql -u root -p command connects me to the MySQL DB from inside the container, but not from outside it.
Also from my local machine:
I tried with mysql-client:
mysql -h xxx.xxx.xxx.xxx -u root -p
I got the following error: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111)
and also through MySQL Workbench, but I still can't connect to the MySQL DB.
What am I doing wrong?
In the my.cnf of the MySQL server (inside the container), set the bind address to 0.0.0.0 so that MySQL listens on all network interfaces:
bind-address = 0.0.0.0
The default config is:
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
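After changing the bind address, something along these lines should apply it and verify the connection; the container name comes from the question and the my.cnf path is the usual one for ubuntu:trusty:
$ docker exec -it test_1 bash               # get a shell in the running container (docker exec is available in Docker 1.9)
# edit /etc/mysql/my.cnf so that: bind-address = 0.0.0.0
$ /etc/init.d/mysql restart
# then, from your local machine (the MySQL root user must also be allowed to connect from remote hosts):
$ mysql -h xxx.xxx.xxx.xxx -u root -p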