What's wrong with my SaltStack MySQL installation? - mysql

I am a SaltStack newbie and a MySQL newbie too... :-/ I was trying to install MySQL on Ubuntu 14 and I am getting the following error.
I use the following sls file:
mysql_package:
  pkg.installed:
    - name: mysql-server

mysql_conf:
  file.managed:
    - name: /etc/my.cnf
    - source: salt://mysql/files/my.cnf
    - user: root
    - group: root
    - mode: 0644
    - require:
      - pkg: mysql_package

mysql_service:
  service:
    - name: mysqld
    - running
    - enable: True
    - require:
      - pkg: mysql_conf
    - watch:
      - pkg: mysql_conf
# required packages start
server_pkgs:
  pkg:
    - installed
    - pkgs:
      - python-dev
    - refresh: True

mysql_python_pkgs:
  pkg.installed:
    - pkgs:
      - libmysqlclient-dev
      - mysql-client
      - python-mysqldb
    - require:
      - pkg: server_pkgs

python-pip:
  pkg:
    - installed
    - refresh: False

mysql:
  pip.installed:
    - require:
      - pkg: python-pip
      - pkg: mysql_python_pkg
# required package end
stg_databases:
  mysql_database.present:
    - name: stagingdb
    - require:
      - pkg: mysql
      - service: mysql_service

first_db_user:
  mysql_user.present:
    - name: stg-admin
    - password: "pass4admin"
    - host: '%'
    - connection_charset: utf8
    - saltenv:
      - LC_ALL: "en_US.utf8"
    - require:
      - mysql_database: stg_databases

create_first_table:
  mysql_query.run:
    - database: stagingdb
    - query: "create table first_table(id INT NOT NULL AUTO_INCREMENT, name VARCHAR(100) NOT NULL, PRIMARY KEY ( id ));"
    - output: "/tmp/create_first_table.txt"
    - require:
      - mysql_database: stg_databases

first_table_grants:
  mysql_grants.present:
    - grant: all privileges
    - database: stagingdb.*
    - user: stg-admin
    - host: '%'
    - require:
      - mysql_user: first_db_user
And I am using the following conf file:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
I came across this link: MySQL - ERROR 1045 - Access denied, but that's not what I want to do...
I'm guessing there is a relation between the user defined in my.cnf and the one in the sls file that I am not getting correct?

Were there any errors when you ran this state?
Also, if you run:
salt-call -l debug state.sls <state file>
that will give you a lot of useful information about what Salt is actually running. Is the root user still available? Can you log in and check whether that user was actually created?
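To answer the "was that user actually created" part concretely, you could check from the mysql client as root; this is a hypothetical check, with the user name and host taken from the question's SLS:

```sql
-- Run as root in the mysql client; stg-admin/'%' come from the question's SLS
SELECT User, Host FROM mysql.user WHERE User = 'stg-admin';
SHOW GRANTS FOR 'stg-admin'@'%';
```

If the first query returns no rows, the mysql_user.present state never ran successfully; if the user exists but the grants are missing, the problem is in the grants state instead.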

This is an obvious MySQL GRANT issue that you don't need to debug: it means you made a mistake granting the user its privileges. Just check the docs: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.mysql_user.html
mysql_user.present DOES NOT grant a user; you need mysql_grants.present together with mysql_user.present. The confusing part: mysql_grants.present doesn't set a password, and mysql_user.present sets a password but doesn't grant DB rights.
first_db_user:
  mysql_user.present:
    - name: stg-admin
    - password: "pass4admin"
    - host: '%'
    - connection_charset: utf8
    - saltenv:
      - LC_ALL: "en_US.utf8"
    - require:
      - mysql_database: stg_databases

grant_my_first_db_user:
  mysql_grants.present:
    - grant: select,insert,update
    - database: stagingdb
    - user: stg-admin
    - host: localhost
And the SaltStack mysql formula confirms this:
https://github.com/saltstack-formulas/mysql-formula/blob/master/mysql/salt-user.sls
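One caveat about the snippet above (my reading of the mysql_grants docs, so treat it as an assumption): it grants on database: stagingdb for host: localhost, while the user was created with host '%'. If the goal is for stg-admin connecting from anywhere to reach stagingdb, the grant usually needs the table-qualified target and a matching host, along the lines of:

```yaml
grant_my_first_db_user:
  mysql_grants.present:
    - grant: select,insert,update
    - database: stagingdb.*
    - user: stg-admin
    - host: '%'
    - require:
      - mysql_user: first_db_user
```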

Related

Ansible - Set MySQL 8 initial root password on RHEL 7

I am trying to set up a MySQL DB with Ansible; however, I have trouble changing the initial root password.
- name: Get temporary root password from install log
  shell: cat /var/log/mysqld.log | grep "temporary password" | grep -oE '[^ ]+$'
  register: tmp_root_password

- name: Set new password from temporary password
  shell: 'mysql -e \"ALTER USER root IDENTIFIED BY("{{ mysql_root_password }}");\" --connect-expired-password -uroot -p"{{ tmp_root_password.stdout }}"'
It fails with the following error:
fatal: [mysqlhost.mydomain]: FAILED! => {"changed": true, "cmd": "mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"", "delta": "0:00:00.003081", "end": "2021-11-28 08:40:52.000198", "msg": "non-zero return code", "rc": 1, "start": "2021-11-28 08:40:51.997117", "stderr": "/bin/sh: -c: line 0: syntax error near unexpected token `('\n/bin/sh: -c: line 0: `mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"'", "stderr_lines": ["/bin/sh: -c: line 0: syntax error near unexpected token `('", "/bin/sh: -c: line 0: `mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"'"], "stdout": "", "stdout_lines": []}
I've also tried to set the root password based on the guide below, without any luck.
https://docs.ansible.com/ansible/latest/collections/community/mysql/mysql_user_module.html#ansible-collections-community-mysql-mysql-user-module
Thanks!
The following is based on the Ansible role I created for MySQL/Percona and is idempotent.
This is the playbook you could use, taken from the repo described above.
It sets the 'debian-sys-maint' user as a root user of the database.
It also assumes you are setting up MySQL for the first time, not while it is already active/installed.
---
- name: root | stat to check whether /root/.my.cnf exists
  stat:
    path: /root/.my.cnf
  register: cnf_file

- block:
    - name: root | place temporary cnf file
      template:
        src: temp_cnf.j2
        dest: /etc/my.cnf
        mode: '0644'

    - name: root | start mysql to add the debian-sys-maint user
      systemd:
        name: mysql
        state: started
        enabled: true

    - name: root | get temp root password
      shell: >-
        grep 'temporary password' /var/log/mysqld.log |
        awk '{print $NF}' | tail -n 1
      register: temp_root_pw
      no_log: true

    - name: root | set root password
      shell: >-
        mysqladmin -u root
        --password="{{ temp_root_pw.stdout }}"
        password "{{ mysql_root_password }}"
      no_log: true

    - name: root | set debian-sys-maint user and password
      mysql_user:
        name: debian-sys-maint
        password: "{{ mysql_system_password }}"
        priv: '*.*:ALL,GRANT'
        update_password: always
        state: present
        login_unix_socket: /var/run/mysqld/mysqld.sock
        login_user: root
        login_password: "{{ mysql_root_password }}"
      no_log: true

    - name: root | copy root.cnf
      template:
        src: root.cnf.j2
        dest: /etc/mysql/root.cnf
        mode: '0600'
        owner: root
        group: root

    - name: root | make symlink of file for root db access
      file:
        state: link
        src: /etc/mysql/root.cnf
        path: /root/.my.cnf

    - name: root | delete anonymous connections
      mysql_user:
        name: ""
        host_all: true
        state: absent
      no_log: true

    - name: root | secure root user
      mysql_user:
        name: root
        host: "{{ item }}"
      no_log: true
      loop:
        - ::1
        - 127.0.0.1
        - localhost

    - name: root | ensure test database is removed
      mysql_db:
        name: test
        login_user: root
        state: absent

    - name: root | stop mysql again
      systemd:
        name: mysql
        state: stopped
        enabled: true

    - name: root | remove mysqld log file
      file:
        path: /var/log/mysqld.log
        state: absent
  when: not cnf_file.stat.exists
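The temporary-password extraction in the "get temp root password" task above can be exercised in isolation; this sketch fakes the log line MySQL 5.7+ writes on first start (the file path and password value here are made up):

```shell
# Simulate the first-start log line, then extract the last temporary
# password with the same grep/awk/tail pipeline the task uses
printf 'A temporary password is generated for root@localhost: XyzTmp123pw\n' > /tmp/mysqld.log.sample
grep 'temporary password' /tmp/mysqld.log.sample | awk '{print $NF}' | tail -n 1   # XyzTmp123pw
```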
The temp_cnf.j2:
[client]
socket=/var/run/mysqld/mysqld.sock
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
And the root.cnf.j2
{{ ansible_managed | comment }}
# This file is symlinked to /root/.my.cnf to use passwordless login for the root user
[client]
socket = {{ mysqld.socket }}
user = debian-sys-maint
password = {{ mysql_system_password }}

[mysql_upgrade]
socket = {{ mysqld.socket }}
user = debian-sys-maint
password = {{ mysql_system_password }}
Some vars:
mysql_root_password: my_password
mysql_system_password: my_password
mysqld:
  socket: /var/run/mysqld/mysqld.sock
Should work for CentOS 8, Rocky Linux and Oracle Linux as well.
Regarding the initial question for RHEL 7 and MySQL Server 8.0.21, I've found the following approach to work in the mentioned environment.
- name: Delete all anonymous SQL user accounts
  mysql_user:
    user: ""
    host_all: yes
    state: absent

- name: Remove the SQL test database
  mysql_db:
    db: test
    state: absent

- name: Change root user password on first run
  mysql_user:
    login_user: root
    login_password: ''
    name: root
    password: "{{ SQL_ROOT_PASSWORD }}"
    priv: "*.*:ALL,GRANT"
    host: "{{ item }}"
  with_items:
    - "{{ ansible_hostname }}"
    - "127.0.0.1"
    - "::1"
    - "localhost"
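For completeness, the syntax error in the original task comes from the shell quoting: the backslash-escaped quotes and the parentheses reach /bin/sh unprotected. A hedged rewrite of that one task (variable names taken from the question, not verified against a live server) that avoids the nested quoting might look like:

```yaml
- name: Set new password from temporary password
  shell: >
    mysql --connect-expired-password -uroot
    -p'{{ tmp_root_password.stdout }}'
    -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ mysql_root_password }}';"
```

Note that ALTER USER takes the password as a plain quoted string, not in parentheses as in the original command.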

Running database migrations during Google Cloud Build fails with ENOTFOUND error

I am trying to run migrations through Sequelize in Node.js on Google Cloud Run, connecting to a MySQL Google Cloud SQL database. I followed
https://stackoverflow.com/a/58441728/4487248 to get the Google Cloud SQL proxy set up. Given this log, setting up the proxy connection to the database seems to have worked:
Step #2 - "migrate": Already have image (with digest): gcr.io/cloud-builders/yarn
Step #2 - "migrate": 2021/10/02 14:19:58 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
Step #2 - "migrate": 2021/10/02 14:19:58 Listening on /workspace/<MY-INSTANCE-NAME> for <MY-INSTANCE-NAME>
Step #2 - "migrate": 2021/10/02 14:19:58 Ready for new connections
Step #2 - "migrate": 2021/10/02 14:19:58 Generated RSA key in 74.706896ms
However, when I try to run migrations with yarn knex migrate:latest or ./node_modules/.bin/sequelize db:migrate I run into:
getaddrinfo ENOTFOUND /workspace/<MY-INSTANCE-NAME>
This seems to imply that the host could not be found.
Output / Logs
My cloudbuild.yaml (composed of https://stackoverflow.com/a/52366671/4487248 & https://stackoverflow.com/a/58441728/4487248):
steps:
  # Install Node.js dependencies
  - id: yarn-install
    name: gcr.io/cloud-builders/yarn
    waitFor: ["-"]

  # Install Cloud SQL proxy
  - id: proxy-install
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloudsql-proxy/v1.25.0/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy && chmod +x /workspace/cloud_sql_proxy"
    waitFor: ["-"]

  - id: migrate
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "(/workspace/cloud_sql_proxy -dir=/workspace -instances=<MY-INSTANCE-NAME> & sleep 2) && ./node_modules/.bin/sequelize db:migrate"
    timeout: "1200s"
    waitFor: ["yarn-install", "proxy-install"]

timeout: "1200s"
My .sequelizerc (Documentation here):
const path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.js')
}
My config/config.js:
module.exports = {
  production: {
    username: process.env.PROD_DB_USERNAME,
    password: process.env.PROD_DB_PASSWORD,
    database: process.env.PROD_DB_NAME,
    host: `/workspace/${process.env.INSTANCE_CONNECTION_NAME}`, // Replacing this line with `/workspace/cloudsql/${..}` or `/cloudsql/${..}` leads to the same error
    dialect: 'mysql',
  }
}
I did enable Public IP on the MySQL instance.
Setting the host to localhost and passing the instance path as socketPath in config.js fixed the issue. The proxy listens on a Unix socket, not a hostname, which is why the driver reported ENOTFOUND when the socket path was used as host:

module.exports = {
  production: {
    username: process.env.PROD_DB_USERNAME,
    password: process.env.PROD_DB_PASSWORD,
    database: process.env.PROD_DB_NAME,
    host: 'localhost',
    dialect: 'mysql',
    dialectOptions: {
      socketPath: `/workspace/${process.env.INSTANCE_CONNECTION_NAME}`,
    },
  }
}

Connect JavaScript running in docker container to MySQL database running on another docker container

I'm currently running a local instance of RocketChat and the RocketBot using docker-compose and a corresponding docker-compose.yaml file.
I use the standard mysql module like this:
var con = mysql.createConnection({
  host: '<placeholder>',
  user: 'root',
  port: '3306',
  password: '<placeholder>',
});
The host, user, port and password were gathered by running the inspect command on the container running the MySQL server. MySQL itself does work: I can run it, make changes to it, and even connect to it using MySQL Workbench. I get this error:
rosbot_1 | [Tue Jun 18 2019 18:42:06 GMT+0000 (UTC)] ERROR Error: connect ETIMEDOUT
rosbot_1 | at Connection._handleConnectTimeout (/home/hubot/node_modules/mysql/lib/Connection.js:412:13)
I have no idea how to proceed now, how can I connect from the bot served by docker-compose to the MySQL container using JavaScript?
EDIT:
docker-compose.yaml:
version: '2.1'

services:
  mongo:
    image: mongo:3.2
    hostname: 'mongo'
    volumes:
      - ./db/data:/data/db
      - ./db/dump:/dump
    command: mongod --smallfiles --oplogSize 128 --replSet rs0

  mongo-init-replica:
    image: mongo:3.2
    command: 'mongo mongo/rocketchat --eval "rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})"'
    links:
      - mongo:mongo

  rocketchat:
    image: rocketchat/rocket.chat:latest
    hostname: 'rocketchat'
    volumes:
      - ./rocketchat/uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost:3000
      - MONGO_URL=<placeholder>
      - MONGO_OPLOG_URL=<placeholder>
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 5
    links:
      - mongo:mongo
    ports:
      - 3000:3000

  <placeholder>:
    image: <placeholder>
    hostname: "<placeholder>"
    environment:
      - ROCKETCHAT_URL=<placeholder>
      - ROCKETCHAT_ROOM=""
      - ROCKETCHAT_USER=<placeholder>
      - ROCKETCHAT_PASSWORD=<placeholder>
      - ROCKETCHAT_AUTH=<placeholder>
      - BOT_NAME=<placeholder>
      - LISTEN_ON_ALL_PUBLIC=true
      - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics,hubot-pugme,hubot-reload
      - PENTEXT_PATH=/home/hubot/pentext
      - ADDITIONAL_PACKAGES=mysql,lodash
      - RESPOND_TO_LIVECHAT=true
      - RESPOND_TO_DM=true
    depends_on:
      rocketchat:
        condition: service_healthy
    links:
      - rocketchat:rocketchat
    volumes:
      - <placeholder>
    ports:
      - 3001:3001
Normally, you can connect to another container using the service name as hostname.
If you have a container with MySQL, the service name (in this example 'db') is the host name used to reach the MySQL container (you can also set hostname: 'mysqlhostname' to specify a different name):

db:
  image: mysql
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: mypass
    MYSQL_DATABASE: mydb

In your rocketchat container you should add some environment variables for the mysql root password and database to make them available to your container:
rocketchat:
  image: rocketchat/rocket.chat:latest
  hostname: 'rocketchat'
  volumes:
    - ./rocketchat/uploads:/app/uploads
  environment:
    - PORT=3000
    - ROOT_URL=http://localhost:3000
    - MONGO_URL=<placeholder>
    - MONGO_OPLOG_URL=<placeholder>
    - MYSQL_ROOT_PASSWORD=mypass
    - MYSQL_DATABASE=mydb
    - MYSQL_HOSTNAME=db
  ...
  links:
    - rocketchat:rocketchat
    - db:db
And then, use the host name and the environment variables to create your connection:
var con = mysql.createConnection({
  host: 'db', // or process.env.MYSQL_HOSTNAME
  user: 'root',
  port: '3306',
  password: 'mypass', // or process.env.MYSQL_ROOT_PASSWORD
});

Can't log in to mysql server deployed in k8s cluster

I am using k8s in Docker Desktop for Mac. I deploy a mysql pod with the config below.
I run it with: kubectl apply -f mysql.yaml
# secret
apiVersion: v1
kind: Secret
metadata:
name: mysql
type: Opaque
data:
# root
mysql-root-password: cm9vdAo=
---
# configMap
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-conf
data:
database: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: mysql
spec:
volumes:
- name: mysql
persistentVolumeClaim:
claimName: mysql
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql
key: mysql-root-password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: mysql-conf
key: database
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql
mountPath: /var/lib/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql
labels:
app: mysql
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
# services
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
After that, everything shows as OK. I then wanted to connect to the MySQL server via the node IP, but that failed. So I exec into the pod and can't log in there either:
☁ gogs-k8s kubectl get pods
NAME READY STATUS RESTARTS AGE
blog-59fb8cbd44-frmtx 1/1 Running 0 37m
blog-59fb8cbd44-gdskp 1/1 Running 0 37m
blog-59fb8cbd44-qrs8f 1/1 Running 0 37m
mysql-6c794ccb7b-dz9f4 1/1 Running 0 31s
☁ gogs-k8s kubectl exec mysql-6c794ccb7b-dz9f4 -it bash
root@mysql-6c794ccb7b-dz9f4:/# ls
bin boot dev docker-entrypoint-initdb.d entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@mysql-6c794ccb7b-dz9f4:/# echo $MYSQL_ROOT_PASSWORD
root
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Is there any problem with my config file?

Probably you have an invalid base64-encoded password. Try this one:
data:
  pass: cm9vdA==
As @Vasily Angapov pointed out, your base64 encoding is wrong.
When you do the following, you are encoding root\n (root plus a trailing newline):
echo "root" | base64
Output:
cm9vdAo=
If you want to drop the newline character, use the option -n:
echo -n "root" | base64
Output:
cm9vdA==
Even better is to do the following:
echo -n "root" | base64 -w 0
That way base64 will not insert new lines in longer outputs.
Also, you can verify whether your encoding is right by decoding the encoded text:
echo "cm9vdA==" | base64 --decode
The output should not end with a newline.
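The full round-trip can be checked in one go; printf '%s' is another way to avoid the trailing newline that echo adds:

```shell
# Encode without a trailing newline, then decode to verify the round-trip
printf '%s' 'root' | base64            # cm9vdA==
printf '%s' 'cm9vdA==' | base64 --decode
```

If the encoded value ends in Cg== or, as here, changes from dA== to dAo=, that is the telltale sign a newline was included.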

Install mysql-server 5.7 on lv mounted on /var/lib/mysql

For two weeks I have been working on MySQL deployment and related tasks with Ansible. I have to install MySQL on an LV.
Before the MySQL deployment, the Ansible script creates /var/lib/mysql, creates the LV and mounts it on /var/lib/mysql. Then it creates the mysql user and mysql group to set 0700 rights on the MySQL directory. Once that is done, Ansible deploys MySQL 5.7.
Part of my Ansible code :
- name: "Group : mysql"
  group:
    name: "mysql"
    state: "present"
  tags:
    - User mysql

- name: "user : mysql"
  user:
    name: "mysql"
    shell: "mysql"
    group: "mysql"
    createhome: "no"
    append: "True"
    state: "present"
  tags:
    - User

- name: "Set rights on mysql dir"
  file:
    path: "/var/lib/mysql"
    owner: "mysql"
    group: "mysql"
    mode: 0700
  tags:
    - mysql dir rights

- name: "mysql root password"
  debconf:
    name: "mysql-server"
    question: "mysql-server/root_password"
    value: "{{ password_root_mysql }}"
    vtype: "password"
  when: password_root_mysql is defined
  tags:
    - Install

- name: "mysql root password confirmation"
  debconf:
    name: "mysql-server"
    question: "mysql-server/root_password_again"
    value: "{{ password_root_mysql }}"
    vtype: "password"
  when: password_root_mysql is defined
  tags:
    - Install mysql

- name: "Install : MySQL Server"
  apt:
    update_cache: "True"
    name: "mysql-server"
    install_recommends: "True"
  tags:
    - Install mysql
  notify:
    - stop mysql

- name: "Copy template root.cnf.j2 to root/.my.cnf"
  template:
    src: "{{ mysql_template_rootcnf }}"
    dest: "~/.my.cnf"
    owner: "root"
    mode: "0600"
  tags:
    - Install mysql
So when I try to install mysql-server without any LV and directory preparation, it works. But when I prepare the MySQL directory with the right permissions, the installation doesn't work, whether deployed manually or automatically.
Any ideas?
Ubuntu 16.04 with MySQL 5.7, Ansible v2.7.
OK, I've found the problem: the lost+found directory in /var/lib/mysql (the LV is mounted on it) is treated like a database, and mysql doesn't like that. In my code, I've just added:
- name: "Remove lost+found from {{ mysql_dir }}"
  file:
    path: "{{ mysql_dir }}/lost+found"
    state: absent
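In context, this task has to run after the LV is mounted but before mysql-server is installed, since mkfs creates lost+found at format time. A hedged ordering sketch (the LV device path is hypothetical, and mysql_dir is assumed to be /var/lib/mysql):

```yaml
- name: "Mount LV on {{ mysql_dir }}"
  mount:
    path: "{{ mysql_dir }}"
    src: /dev/mapper/vg0-mysql   # hypothetical LV device
    fstype: ext4
    state: mounted

- name: "Remove lost+found from {{ mysql_dir }}"
  file:
    path: "{{ mysql_dir }}/lost+found"
    state: absent

- name: "Install : MySQL Server"
  apt:
    name: "mysql-server"
    update_cache: "True"
```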