Docker IP changes make locking down MySQL access tricky

I'm trying to lock down access to a MySQL user account to one IP address, but it seems that every time you start a docker container, the IP address changes.
docker run -it company/my-app bash
Set up mysql-client on it:
apt-get update
apt-get upgrade
apt-get install mysql-client
Now I would connect using:
mysql -u blah -h database.host.com -p
Access denied for user 'blah'@'172.17.0.63' (using password: YES)
Then I would grant all privileges to 'blah'@'172.17.0.63' and I'd be able to access the database from the container. But if I start a new docker container and repeat the above steps, I once again get:
Access denied for user 'blah'@'172.17.0.64' (using password: YES)
The IP address seems to increment every time you start a docker container.
I can limit the hosts to %.%.%.%, but that effectively means any IP address can connect, which is not as secure as I want.
Is there some sort of way to limit access to a mysql account to only one docker container or group of containers?

You can configure a small dnsmasq instance to be used by MySQL, and run a script to automatically update the DNS records when a container's IP address changes.
I've written a small script to do this (pasted below). It automatically updates DNS records named after each container and points them at the container's IP address:
#!/bin/bash

# 10-second interval by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}

declare -A service_map

while true
do
    changed=false
    while read -r line
    do
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP address changed
        then
            service_map[$name]=$ip
            # write a dnsmasq host-record for this container
            echo "$name has a new IP address: $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # an IP address changed, so restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
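A minimal sketch of how you might run it (the script name and config directory below are just examples; dnsmasq must be installed and configured to read extra config from that directory, and MySQL must be able to do reverse DNS lookups through it, i.e. skip_name_resolve must be off):
DNSMASQ_CONFIG=/etc/dnsmasq.d INTERVAL=5 ./docker-dns-watch.sh &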
Then, create your MySQL user with the host set to the container's name; e.g. if your container's name is blah, create the MySQL user as 'you'@'blah'.
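For example (the container name, database name, and password here are placeholders):
mysql -u root -p -e "CREATE USER 'blah'@'my-container' IDENTIFIED BY 'secret'; GRANT ALL PRIVILEGES ON mydb.* TO 'blah'@'my-container';"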

I think your current approach is wrong. You can simply use the official MySQL container and link the containers that need access to it:
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
This will add an entry to the some-app container's /etc/hosts file with the name "mysql" pointing to the MySQL container, as described in the docker linking docs.
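For example, assuming the official mysql image (all names here are placeholders):
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
docker exec -it some-app cat /etc/hosts   # should now contain a "mysql" entry for the MySQL container's IP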

Related

MySQL Docker Container storing password after first run

All my experience with Docker so far has led me to believe that containers are stateless.
If so, why is my container storing the password that I change it to after the first run if I spun it up without specifying a volume or bind mount? I am especially puzzled since none of the other edits I make to the dbms persist (like creating tables).
Additional Details:
Versions:
1. Docker - 18.09.0 build 4d60db4
2. Image - mysql/mysql-server:latest
Commands:
1. $ docker run --name=sql -d mysql/mysql-server:latest
2. $ docker logs sql 2>&1 | grep GENERATED (to grab the generated password for first login)
3. $ docker exec -it sql mysql -uroot -p
4. mysql> Enter Password: <generated password>
5. mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'stkoverflw';
6. mysql> exit
7. $ docker stop sql
8. $ docker start sql
9. $ docker exec -it sql mysql -uroot -p
10. mysql> Enter Password: <stkoverflw>
How does the password configuration persist across restarts of the container?
Containers are not stateless. Containers are easy to create and destroy, so they can be used to run a service which is stateless, but each container is itself stateful.
When the container is running, there is a volume containing its root filesystem. You don't have to tell Docker to create it. Docker has to create it because otherwise where do the container's files go?
When you say docker stop, the container stops running but it is not destroyed. When you say docker start, the same container resumes with the same root volume. That's where the changed password persists. The process running in the container was stopped and a new process was started (so state held in memory would be lost), but the filesystem is still there.
To get rid of a container (including the changed password), say docker rm. Then you can say docker run to start from scratch.
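A quick sketch of the difference, using the container name from the question:
docker stop sql && docker start sql                  # same container, same writable filesystem: the changed password persists
docker rm -f sql                                     # destroys the container and its writable layer
docker run --name=sql -d mysql/mysql-server:latest   # a brand-new container, initialized from scratch with a newly generated password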

MySQL container: grant access and create a new image

I'm trying to use a MySQL docker container on my host system to make installation and configuration much easier and faster.
So, I've pulled an image from:
https://hub.docker.com/r/mysql/mysql-server/
Then I started a container based on this image.
The container started fine, but I was not able to connect to the DB from my host system (everything is OK if I connect from inside the container). It failed with this message:
ERROR 1130 (HY000): Host '<here goes my IP>' is not allowed to connect to this MySQL server
So, as I understand it, my root user doesn't have enough permissions.
I've entered my container:
docker exec -it mysql bash
Connected to DB:
mysql -uroot -ppassword
Updated permissions for my root user:
use mysql;
UPDATE user SET Host="%" WHERE User='root';
It's updated fine.
Then I decided to save my updated image somehow... I found this guide:
http://docs.oracle.com/cd/E52668_01/E75728/html/section_c5q_n2z_fp.html
After executing:
docker stop mysql
docker commit -m "Fixed permissions for root user" -a "Few words about author" `docker ps -l -q` myrepo/mysql:v1
docker rm mysql
docker run --name new-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -d myrepo/mysql:v1
I found that my root user was missing the permissions again.
What is wrong here?
How do I publish my updated image to my Docker Hub?
My original answer below is about persisting the change in the MySQL data after it has been initialized. But since you want this baked into the image for every initialization, there is a different approach. You can use one of the following options:
There is an environment variable called MYSQL_ROOT_HOST for this image where you can set the host (https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/docker-entrypoint.sh#L63-L69). You should be able to set this to % to allow all hosts to connect as root, such as -e MYSQL_ROOT_HOST="%" (see the sketch after this list).
The image supports adding SQL files to /docker-entrypoint-initdb.d/ to be initialized on startup (https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/docker-entrypoint.sh#L98-L105). You can create your SQL file that has UPDATE mysql.user SET Host="%" WHERE User='root'; in it and then ADD that file to /docker-entrypoint-initdb.d/ in your own image. Then, when starting a container based on that image it will initialize that SQL file.
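Minimal sketches of both options (the image tags, file names, and passwords here are placeholders):
# Option 1: allow root from any host via the environment variable
docker run --name new-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_ROOT_HOST="%" -d mysql/mysql-server:5.7
# Option 2: bake an init script into your own image
cat > root-host.sql <<'SQL'
UPDATE mysql.user SET Host='%' WHERE User='root';
FLUSH PRIVILEGES;
SQL
cat > Dockerfile <<'EOF'
FROM mysql/mysql-server:5.7
ADD root-host.sql /docker-entrypoint-initdb.d/
EOF
docker build -t myrepo/mysql:v1 .
docker run --name new-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -d myrepo/mysql:v1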
That image specifies a default volume to hold the MySQL data at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile#L11. When you start the container, a volume is created for that container. When you update the permissions for the root user, it is saved in this volume (it is actually part of MySQL data for the mysql database). But once you remove the container, that volume is also lost.
There are usually two things you can do in this case to preserve the data between container restarts or even new containers:
Create a named volume and mount the data there. To do this you can run docker volume create mysqldata. Then, when starting the container mount the data with -v mysqldata:/var/lib/mysql. This volume will persist even after you stop or delete your MySQL container.
Bind mount the data to a host folder. Instead of creating a volume, you can just mount a folder such as -v /mnt/mysqldata:/var/lib/mysql. This will persist all your MySQL data on the host at /mnt/mysqldata.
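For example (the volume name and host path are just examples):
docker volume create mysqldata
docker run --name new-mysql -p 3306:3306 -v mysqldata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=pass -d mysql/mysql-server
# or, with a bind mount to a host folder instead of a named volume:
docker run --name new-mysql -p 3306:3306 -v /mnt/mysqldata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=pass -d mysql/mysql-server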
Though these are not the only ways to persist data, they are the two built-in methods. There are also Docker volume plugins that allow you to use other storage mediums (examples might be https://github.com/rancher/convoy for NFS and https://github.com/NetApp/netappdvp for NetApp).
docker exec -it mysql bash
chown -R mysql:mysql /var/lib/mysql
If you have changed the permissions of the volume on the host, the commands above will fix the permission-denied errors for root.

Bash script for interactive ssh and mysql commands

I'm studying MySQL, and every time I have to
Enter the ssh XXX@XXX command, and enter my password for the school server.
Enter the mysql -u XXX -p command, and enter the MySQL password.
I want to create a Bash script for performing the steps above automatically.
I can accomplish the first step with this code:
#!/usr/bin/expect -f
set address xxx.com
set password xxx
set timeout 10
spawn ssh xxx@$address
expect { "*yes/no" { send "yes\r"; exp_continue} "*password:" { send "$password\r" } }
send clear\r
interact
But I don't know how to automatically input the next command (mysql -u xxx -p) and the password.
How can I do this?
You don't need such a complex script just to enter the MySQL console on a remote machine. Use the features of the ssh tool:
ssh -tt user@host -- mysql -uuser -ppassword
The -t option forces pseudo-terminal allocation; multiple -t options force tty allocation even if ssh has no local tty (see man ssh). Note the use of the -p option: there must be no space between -p and the password (see the mysql manual).
Or even connect via mysql directly, if the MySQL host is accessible from your local machine:
mysql -hhost -uuser -p
Don't forget to adjust the shebang:
#!/bin/bash -
Use my.cnf to store your password securely, like ssh keys.
https://easyengine.io/tutorials/mysql/mycnf-preference/
In the same way, passwordless ssh is possible by passing the path of your private key with the ssh -i parameter.
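A minimal sketch: a ~/.my.cnf on the school server lets mysql run there without a password prompt, and an SSH key lets ssh log in without one (user, host, and paths are placeholders):
# on the school server:
cat > ~/.my.cnf <<'EOF'
[client]
user=xxx
password=your-mysql-password
EOF
chmod 600 ~/.my.cnf
# on your local machine:
ssh -i ~/.ssh/id_rsa -tt xxx@xxx.com -- mysql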
Best of luck!

Bash: run shutdown for all elements in an array

I have to shut down my VMs, something like a garbage collector: my script finds the names of VMs in a database that are empty and ready to be turned off. The trouble is that when I pass the array of VM names, the echo output looks correct, but the shutdown only takes one element of the array and the name isn't passed to the variable.
How can I run shutdown for every element in the array and pass the VM names to it properly?
arr1=($(/usr/bin/mysql --skip-column-names -u $DB_USER -p$DB_PASS $DB -e "SELECT VM_NAME FROM VM_LIST Where BUSY="0" AND POOL='LINUX';"))
echo ${#arr1[@]}
if arr1 -gt 0
then
for availiable_node in "${arr1[@]}"
do
shutdown $availiable_node
echo $availiable_node
Though I couldn't understand everything, if someone wants to do this kind of thing, it should look like this:
#!/bin/bash
vmNode=(db query )
for node in "${vmNode[@]}"
do
    echo "$node"
    ssh user@"${node}" "sudo shutdown -h now" # user should be the same on all nodes.
                                              # sudo use is subject to requirement
done
Requirements for this to work:
1) ssh should be passwordless
2) sudo should be passwordless and should not need tty
The solution will still work if sudo or ssh are not passwordless, but you will have to enter a password twice for every node: once for ssh and once for sudo.
3) The node you are running this script from should be last in the vmNode array, or not in it at all.
4) If the query fetches node hostnames, they should be resolvable (e.g. configured in /etc/hosts) and present in ~/.ssh/known_hosts.
This will work for OS shutdown (*nix) only.
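If the VM names come from the MySQL query in the question, the array could be populated like this (a sketch; the variable names follow the question, and mapfile needs bash 4+):
mapfile -t vmNode < <(/usr/bin/mysql --skip-column-names -u "$DB_USER" -p"$DB_PASS" "$DB" \
    -e "SELECT VM_NAME FROM VM_LIST WHERE BUSY=0 AND POOL='LINUX';")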

execute mysql on remote server via bash script

I need to execute a mysql command on a remote server but seem to be hitting a problem when it comes to executing the actual mysql bit.
#!/usr/bin/expect -f
spawn /usr/bin/ssh -t root@10.0.0.2
expect "password: "
sleep 1
send "password\r"
sleep 2
/usr/bin/mysql databasename -e "update device_log set status = 'Y' where device_id in ('1','2');"
Basically I want to change the flag to Y on device IDs 1 and 2,
but the script outputs
invalid command name "/usr/bin/mysql"
Just append the mysql command to the ssh command to run it in one go, like this:
#!/usr/bin/expect -f
spawn /usr/bin/ssh -t root@10.0.0.2 /usr/bin/mysql databasename -e "the query"
expect "password: "
sleep 1
send "password\r"
I'm not very much into expect, but your mysql line isn't valid syntax for expect to run a command: expect scripts are Tcl, so a bare command line like that is interpreted as a Tcl command, which is exactly why you get "invalid command name".
Additionally:
You should use SSH keys for passwordless login instead of having a root password hardcoded in a script.
Consider connecting to MySQL remotely, e.g. mysql -h 10.0.0.2 -e "the query", or
Use port forwarding in SSH to connect to MySQL securely, e.g. run ssh -L 3307:localhost:3306 root@10.0.0.2 in the background and then connect to TCP port 3307 on localhost: mysql -h 127.0.0.1 -P 3307.
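For example, with an SSH key set up for the remote root user, the whole thing becomes a one-liner with no expect at all (host and query as in the question):
ssh root@10.0.0.2 "/usr/bin/mysql databasename -e \"update device_log set status = 'Y' where device_id in ('1','2');\""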
It sounds like /usr/bin/mysql is not the path to the mysql binary on that remote server. You could use just mysql instead, assuming the binary is somewhere in the remote server's PATH. Otherwise you will have to find out where the binary actually is and adjust the absolute path accordingly.
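To check where the binary actually lives on the remote server, something like this should work (same login as in the question):
ssh root@10.0.0.2 "command -v mysql"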