Vitess MySQL authentication is not working - mysql

While installing Vitess through Helm, we enabled authentication in site-values.yaml:
mysqlProtocol:
  enabled: false
  authType: secret
  # authType can be: none or secret. For secret, perform the following changes:
  username: mysqluser
  # this is the secret that will be mounted as the user password
  # kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
  passwordSecret: mysql-user-passowrd
but after this, if we try to connect to mysql like
mysql -h 10.108.8.197 -p 15991 -u mysqluser
and enter the password, it does not authenticate and shows the error Can't connect to MySQL server on '10.108.8.197' (111).
10.108.8.197 is our vtgate service cluster IP; we get the same result when trying 127.0.0.1.
Is there anything we are missing?

What worked for us:
We deleted the Vitess release installed through Helm with helm delete vitess --purge, then recreated it with the MySQL protocol enabled:
mysqlProtocol:
  enabled: true
  authType: secret
  # authType can be: none or secret. For secret, perform the following changes:
  username: mysqluser
  # this is the secret that will be mounted as the user password
  # kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
  passwordSecret: mysql-user-passowrd
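For reference, a minimal end-to-end sketch of that flow (the chart path and secret name are assumptions based on the snippet above; note the mysql client takes the port with an uppercase -P, while lowercase -p prompts for the password):
# create the secret referenced by passwordSecret
kubectl create secret generic mysql-user-passowrd --from-literal=password=abc_123
# install/upgrade the release with mysqlProtocol.enabled=true in site-values.yaml
helm upgrade --install vitess . -f site-values.yaml
# connect through the vtgate MySQL port
mysql -h 10.108.8.197 -P 15991 -u mysqluser -p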

Related

Connecting to MySQL 5.6 inside Docker For Desktop/Kubernetes: ERROR 1130 (HY000): Host 'xx.xx.xx.xx' is not allowed to connect to this MySQL server

I'm following these instructions (page 181) to create a persistent volume & claim, a mysql replica set & service. I specify mysql v5.6 in the yaml file for the replica set.
After viewing the log for the pod, it looks like it was successful. So then I run:
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- bash
mysql -h mysql -p 3306 -u root
It prompts me for the password and then I get this error:
ERROR 1130 (HY000): Host '10.1.0.17' is not allowed to connect to this MySQL server
Apparently MySQL has a feature where it does not allow remote connections by default, and I have to change the configuration files. I don't know how to do that inside a yaml file. Below is my YAML; how do I change it to allow remote connections?
Thanks
Siegfried
cat <<END-OF-FILE | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mysql
  # labels so that we can bind a Service to this Pod
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: tododata
        image: mysql:5.6
        resources:
          requests:
            cpu: 1
            memory: 2Gi
        env:
        # Environment variables are not a best practice for security,
        # but we're using them here for brevity in the example.
        # See Chapter 11 for better options.
        - name: MYSQL_ROOT_PASSWORD
          value: some-password-here
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: tododata
          # /var/lib/mysql is where MySQL stores its databases
          mountPath: "/var/lib/mysql"
      volumes:
      - name: tododata
        persistentVolumeClaim:
          claimName: tododata
END-OF-FILE
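For what it's worth, one common way to clear this kind of ERROR 1130 with a plain mysql:5.6 container is to grant the root account access from non-localhost hosts from inside the pod. A rough sketch, assuming the app=mysql label and root password from the manifest above:
# exec into the mysql pod and allow root to connect from any host (tighten the '%' pattern as needed)
kubectl exec -it $(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- \
  mysql -uroot -psome-password-here \
  -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'some-password-here'; FLUSH PRIVILEGES;"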
Sat Oct 24 2020 3PM EDT Update: Try Bitnami MySQL
I like Ben's idea of using bitnami mysql because then I don't have to create my own custom docker image. However, when using bitnami and trying to connect to the mysql server I get
ERROR 2003 (HY000): Can't connect to MySQL server on 'my-release-mysql.default.svc.cluster.local' (111)
This happens after I successfully get a bash shell with this command:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
Then, as per the instructions, I do this and get the HY000 error above.
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p
Wed Nov 04 2020 Update:
Thanks Ben. Yes, I had already tried that on Oct 24 (approx), and when I do a k describe pod I get: mysqladmin: connect to server at 'localhost' failed; error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
Of course, when I run the mysql client as described in the nicely generated instructions, the client cannot connect because mysqld has died.
This is after having deleted the pvcs and stss and doing helm delete my-release prior to reinstalling via helm.
Unfortunately, when I tried this the first time (a couple of weeks ago) I did not set the root password and used the default generated password and I think it is still trying to use that.
This did work on Azure Kubernetes after creating a fresh Azure Kubernetes cluster. How can I reset my Kubernetes cluster in Docker for Desktop on Windows? I tried Google searching with no luck so far.
Thanks
Siegfried
After a lot of help from the bitnami folks, I learned that the spinning disks on my 4-year-old notebook computer are kinda slow (why this is a problem with Bitnami MySQL and not Bitnami PostgreSQL is a mystery).
This works for me:
helm install my-mysql bitnami/mysql \
--set image.debug=true \
--set primary.persistence.enabled=false,secondary.persistence.enabled=false \
--set primary.readinessProbe.enabled=false,primary.livenessProbe.enabled=false \
--set secondary.readinessProbe.enabled=false,secondary.livenessProbe.enabled=false
This turns off the persistent volumes, so the data is lost when the pod dies.
Yes, this is useful for me for development purposes, and no one should be using Docker For Desktop/Kubernetes for production anyway. I just need to populate a tiny database and test my queries, and if I need to repopulate the database every time I reboot, that is not a big problem.
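A quick way to confirm the workaround is to watch the pod come up and then connect once it is ready (a sketch; the service name and the standard chart labels are assumed from the my-mysql release above):
kubectl get pods -w -l app.kubernetes.io/instance=my-mysql
kubectl run my-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --command -- \
  mysql -h my-mysql.default.svc.cluster.local -uroot -p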
So maybe I need to get a new notebook computer? The price of notebook computers with 4TB of spinning disk space has gone up in the last couple of years.... And I cannot find any SSD drives of that size so even if I purchased a new replacement with spinning disks I might have the same problem? Hmm....
Thanks everyone for your help!
Siegfried
This appears to work just fine for me on windows. Complete the following steps:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release --set root.password=awesomePassword bitnami/mysql
This is all you need to run the mysql instance. It does make a few services and a statefulset. Then, to connect to it, you either have to:
Be in another kubernetes container. Without this, you will not find the DNS record for my-release-mysql.default.svc.cluster.local.
kubectl run my-release-mysql-client --rm --tty -i --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
For the password, it should be 'awesomePassword'
Or port-forward the service to your local machine:
kubectl port-forward svc/my-release-mysql 3306:3306
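With the port-forward running, a connection from the local machine would look something like this (a sketch; 127.0.0.1 because the forward binds locally):
mysql -h 127.0.0.1 -P 3306 -uroot -p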
As a note, a bitnami container will have issues if you kill it and restart it with only your helm commands and the password is not set. The persistent volume claim will usually stick around, so you would need to set the password to the old password. If you do not specify the password, you can get it by running the commands bitnami tells you about:
NAME: my-release
LAST DEPLOYED: Thu Oct 29 20:39:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES: Please be patient while the chart is being deployed
Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace default
Services:
  echo Master: my-release-mysql.default.svc.cluster.local:3306
  echo Slave: my-release-mysql-slave.default.svc.cluster.local:3306
Administrator credentials:
  echo Username: root
  echo Password : $(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
To connect to your database:
Run a pod that you can use as a client:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
To connect to master service (read/write):
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
To connect to slave service (read-only):
mysql -h my-release-mysql-slave.default.svc.cluster.local -uroot -p my_database
To upgrade this helm chart:
Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
helm upgrade my-release bitnami/mysql --set root.password=$ROOT_PASSWORD

OpenShift Login failed (401 Unauthorized)

I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on CentOS 7.
Running prerequisites.yml and deploy_cluster.yml finished successfully, and I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
and I have created the user and identity with the commands below.
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state (cmd: oc get pods).
Could that be causing the problem?
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But it fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console, the logging pod shows the event below, but all the servers have enough memory (more than 65% is free).
And the Ansible version is 2.6.5
1 Master node config: 4 CPU, 16 GB RAM, 50 GB HDD
2 Slave and 1 infra node config: 4 CPU, 16 GB RAM, 20 GB HDD
To create a new user try to follow these steps:
1. On each master node, create the password entry in the htpasswd file with:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2. On each master node, restart the master API and master controllers:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3. Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4. Log in as myUser:
$ oc login -u myUser -p myPassword
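To double-check that the user and role binding took effect, a few standard oc commands can be used (a sketch):
$ oc whoami
$ oc get users
$ oc get clusterrolebindings | grep cluster-admin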
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem (registry-console and logging-es-data-master pods not running): you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already own all the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is here.
If, after all this, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD and install it again.
If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
RH detailed instructions are here.

Can't expose mysql tcp service running inside kubernetes cluster publicly using nginx-ingress

I ran into a problem exposing a MySQL database running inside a Kubernetes cluster publicly. The cluster runs with kops on AWS. I'm using the Helm chart for nginx-ingress: https://github.com/helm/charts/tree/master/stable/nginx-ingress
controller:
  config:
    use-proxy-protocol: "true"
  metrics:
    enabled: true
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  stats:
    enabled: true
rbac:
  create: true
tcp:
  5000: default/cbioportal-prod-db-mysql:3306
From within the cluster I can telnet to the db through nginx over port 5000:
# telnet eating-dingo-nginx-ingress-controller 5000
J
5.7.14
ke_|c&tc"ui%]}mysql_native_passwordConnection closed by foreign host
But I can't seem to connect from outside using the hostname of the AWS load balancer.
telnet xxx.us-east-1.elb.amazonaws.com 5000
Trying x.x.x.x...
When I look in the AWS EC2 dashboard, I see that the load balancer's security group allows connections from everywhere on port 5000.
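When debugging this kind of setup, it can also help to confirm that the controller Service actually exposes the TCP port and that the ELB got a matching listener (a sketch; the service name is taken from the telnet test above):
kubectl get svc eating-dingo-nginx-ingress-controller
kubectl describe svc eating-dingo-nginx-ingress-controller | grep -i -A1 port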
UPDATE
I can connect when I use port 3306 instead of 5000:
tcp:
  3306: default/cbioportal-prod-db-mysql:3306
However now that the port is open:
$ nmap --verbose -Pn x.x.x.x
PORT STATE SERVICE
21/tcp open ftp
80/tcp open http
443/tcp open https
3306/tcp open mysql
I am getting an authorization issue:
$ mysql -h x.x.x.x -uroot -pabcdef
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 2
I can connect directly to the nginx controller without issues from within the cluster:
kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -h eating-dingo-nginx-ingress-controller -uroot -pabcdef
I'm using this mysql helm chart:
https://github.com/helm/charts/tree/master/stable/mysql

Unable to connect to dockerized mysql db remotely

On my AWS ec2 server I have docker 1.9.1 installed.
In an image test_image based on the official ubuntu:trusty docker image, I have tried to set up the LEMP (Linux, Nginx, MySQL, PHP) stack.
Following is the docker command I used to start my container:
docker run --name test_1 -d -p 80:80 -p 3306:3306 test_image /bin/sh -c "while true; do echo daemonized docker container; sleep 5000; done"
I have exposed ports 80 and 3306 on the host's network interface and have also allowed inbound connections to these ports in AWS's security group. The connection type in the security group is MYSQL/Aurora and the protocol is TCP. (I know it's not very secure; it's only for the initial implementation. The production setup will be different.)
I followed this DigitalOcean tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-14-04
After installing Nginx and starting it, I am able to test it in the browser via the EC2 instance's public IP, i.e. http://xxx.xxx.xxx.xxx shows the default nginx welcome page.
While installing MySQL, I ran the following commands in the docker container:
apt-get install mysql-server
mysql_install_db
/etc/init.d/mysql start
mysql_secure_installation
I have given a password to my root user, and during mysql_secure_installation I allowed remote access for the root user.
The mysql -u root -p command connects me to the mysql db from inside the container, but not from outside the container.
Also from my local machine:
I tried with mysql-client:
mysql -h xxx.xxx.xxx.xxx -u root -p
I got the following error: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111)
and also through MySQL Workbench, but I still can't connect to the mysql db.
What am I doing wrong?
In your MySQL server's my.cnf, set the bind address to 0.0.0.0 so that MySQL listens on all network interfaces:
bind-address = 0.0.0.0
The default config is:
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
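A minimal sketch of applying that change inside the running container (assuming the default Ubuntu config path /etc/mysql/my.cnf; adjust if your image stores the config elsewhere):
docker exec test_1 bash -c \
  "sed -i 's/^#\?bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf && /etc/init.d/mysql restart"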

Mysql can't connect to local server through socket on Amazon EC2

I opened my application (a Rails app) on Amazon EC2 and got an error, so I checked the logs and found the following:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
Everything was still working 10 hours ago. What's the problem? Too much traffic?
The app is running on a Micro instance.
How can I fix this issue, and how can I avoid it in the future?
Thank you very much
EDIT:
sudo find / -type s
---
/tmp/.sock
/dev/log
/var/lib/apt-xapian-index/update-socket
/run/mysqld/mysqld.sock
/run/acpid.socket
/run/dbus/system_bus_socket
/run/udev/control
find: `/proc/4739/task/4739/fd/5': No such file or directory
find: `/proc/4739/task/4739/fdinfo/5': No such file or directory
find: `/proc/4739/fd/5': No such file or directory
find: `/proc/4739/fdinfo/5': No such file or directory
In my case, the server had apparently restarted at some point and MySQL was not started when it did. I simply ran the command to start it and it worked:
sudo service mysqld start
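To keep this from happening after future reboots, the service can also be enabled at boot time (a sketch; the service name may be mysql or mysqld depending on the distribution):
sudo chkconfig mysqld on            # SysV-style systems (Amazon Linux, CentOS)
# sudo update-rc.d mysql defaults   # Debian/Ubuntu equivalent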
Locate your my.cnf file and look for a line like socket = ../../mysql.sock.
You need to replace that path with your actual socket path:
socket = /run/mysqld/mysqld.sock
If the path already matches, let me know.
Cross posting my response from here.
I had this same problem on an ec2 instance. These instructions worked perfectly for me:
Redhat Enterprise Linux - RHEL 5 / 6 MySQL installation
Type the following command as root user:
yum install mysql-server mysql
Redhat Enterprise Linux - RHEL 4/3 MySQL installation
Type the following command as root user:
up2date mysql-server mysql
Start MySQL Service
To start the mysql server type the following command:
chkconfig mysqld on
/etc/init.d/mysqld start
Setup the mysql root password
Type the following command to setup a password for root user:
mysqladmin -u root password NEWPASSWORD
Test the mysql connectivity
Type the following command to connect to MySQL server:
$ mysql -u root -p
In my case I used the command below:
project_folder > mysql.server start
After the system returned a SUCCESS message, it worked OK.
mysql-server is not installed by default on the Amazon Linux AMI. Use these commands to install it:
$ sudo yum install mysql-server
$ sudo chkconfig mysqld on
$ sudo /etc/init.d/mysqld start
$ mysqladmin -u root password NEWPASSWORD
$ mysql -u root -p
To solve this error, first find your socket file by running the following command in a terminal:
mysqladmin variables | grep socket
For me, this gives:
| socket | /var/run/mysqld/mysqld.sock
Then, add the socket line to your Rails application's config/database.yml:
development:
  adapter: mysql2
  host: localhost
  username: root
  password: xxxx
  database: xxxx
  socket: /var/run/mysqld/mysqld.sock
This should solve the problem.
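As a quick check that Rails now reaches MySQL through that socket, something like the following can be run from the project directory (a sketch; older Rails versions use rake instead of the rails binary):
bin/rails db:version
# or: bundle exec rake db:version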