How do I restore a dump file from mysqldump using kubernetes? - mysql

I know how to restore a dump file from mysqldump. Now, I am attempting to do that using Kubernetes and a Docker container. The database files are in a persistent (NFS) mount. The container cannot be accessed from outside the cluster, as there is no need for anything external to touch it.
I tried:
kubectl run -i -t dbtest --image=mariadb --restart=Never --rm=true --command -- mysql -uroot -ps3kr37 < dump.sql
and
kubectl exec mariadb-deployment-3614069618-mn524 -i -t -- mysql -u root -p=s3kr37 < dump.sql
But neither command worked -- errors about TTYs, sockets, and other things hinting that I am missing something vital here.
What am I not understanding here?
I could just stop the deployment, scp the database files, and restart the container and hope for the best. However, what could go wrong?
The question Install an sql dump file to a docker container with mariaDB sure looks like a duplicate but is not: first, I am on Linux, not Windows, and more importantly the answers are all about initialising with a dump. I want to be able to trash the data and revert to the dump data. This is a test system that will eventually become the "live" one, so I need to restore from many potential dumps.

As described here, you can use the following command to restore a DB on a Kubernetes pod from a dump on your machine:
$ kubectl exec -it {{podName}} -n {{namespace}} -- mysql -u {{dbUser}} -p{{password}} {{DatabaseName}} < <scriptName>.sql
Example:
$ kubectl exec -it mysql-58 -n sql -- mysql -u root -proot USERS < dump_all.sql
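Note: the TTY errors in the question above typically come from combining -t with stdin redirection. When piping a dump file in, use -i without -t, since no TTY can be allocated when stdin is a file. A hedged variant of the example above:
kubectl exec -i mysql-58 -n sql -- mysql -u root -proot USERS < dump_all.sql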

What I did was this:
Create an NFS mount with two subdirectories: mysql and initd.
In initd, I added several .sql files, including the dump.
Mount initd as /docker-entrypoint-initdb.d in the deployment. This causes all the files to be read at initialisation time, provided that it is the first time we run.
The mysql directory is mounted as /var/lib/mysql and contains all the mariaDB files.
If I need to revert, I trash all the contents of the mysql directory and re-create the deployment.
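For reference, the revert can be scripted. A minimal sketch, assuming the NFS export is mounted at /mnt/nfs on the machine running the commands and the deployment is defined in mariadb-deployment.yaml (both names are hypothetical):
# delete the deployment, wipe the data files, then re-create it;
# /docker-entrypoint-initdb.d re-runs because /var/lib/mysql is now empty
kubectl delete deployment mariadb-deployment
rm -rf /mnt/nfs/mysql/*
kubectl apply -f mariadb-deployment.yaml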

This should work:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot drupal < your-dump.sql
The kubeconfig flag is optional; DigitalOcean, for example, provides one so you can run your commands from your local machine.
To see if everything looks good:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysql -h mysqlservice -proot
After which you'll have a terminal inside mysql.
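For a quick sanity check without an interactive session, mysql's -e flag can run a single statement (reusing the placeholders from the commands above):
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot -e 'SHOW TABLES IN drupal;'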

pass password from .env on command to mysql inside docker

I basically know nothing about Docker. And not much more about bash either. So:
There's a command in the README of a Laravel project I'm working on that shows how to load some data into the local MySQL Docker image by sending queries from a file located on the HOST.
docker exec -i {image} mysql -uroot -p{password} {database} < location/of/file.sql
What I want to do is "hide" the password from the README and make it read from the .env file.
So, I want to do something like this:
docker exec --env-file=.env -i {image} mysql -uroot -p$DB_PASSWORD {database} < location/of/file.sql
I've tested that docker ... printenv does show the variables from the file. But echoing one of them outputs a blank line: docker ... echo $DB_PASSWORD. And running the MySQL command using it gets me "Access denied for user 'root'@'localhost'".
I've tried running the MySQL command "directly": docker ... mysql ... < file.sql, and also "indirectly": docker ... bash -c "mysql ..." < file.sql.
You should prevent your shell from expanding the local variables (by single-quoting, or by escaping the $).
This way the variable reference is passed to the container's shell and expanded there:
docker exec --env-file=.env -i {image} bash -c 'mysql -uroot -p$DB_PASSWORD {database}' < location/of/file.sql
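To see why the quoting matters, compare where $DB_PASSWORD gets expanded ({image} is the same placeholder as above):
# single quotes: the reference reaches the container's shell and expands there
docker exec --env-file=.env -i {image} bash -c 'echo "$DB_PASSWORD"'
# double quotes: the host shell expands it first, usually to an empty string
docker exec --env-file=.env -i {image} bash -c "echo $DB_PASSWORD"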
It could possibly be one of two cases:
Check the key name in your env file and in the docker run command.
Check the path of the env file you are mapping to.
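For instance, a minimal .env would need the key spelled exactly as the command references it (value hypothetical):
DB_PASSWORD=s3kr37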

Restoring mysql data from a Docker Volume

This is the second time that my local system (macOS) has crashed and restarted and I've lost the running Docker container of MySQL. By "lose" I mean even docker ps -a doesn't show it. It's vanished.
I am using the official mysql-server Docker image (https://hub.docker.com/r/mysql/mysql-server), so luckily the data in /var/lib/mysql is in a volume. And I am lucky that after the loss of the container, the volume is still there.
The question is, how can I restore the data (e.g. a mysqldump) out of a Docker volume of /var/lib/mysql?
Step 1: Find and verify the volume
Via docker volume ls you can find the name of the volume. Let's say it's <abcdef>.
Then, via docker run -it --rm -v <abcdef>:/var/lib/mysql busybox ls -l /var/lib/mysql, make sure you see the files and that the dates of the files match your recent changes to the lost DB. (credits to this answer)
Optionally, you can create a backup out of this volume via this method.
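The usual pattern for such a backup is to archive the volume's contents from a throwaway container; a sketch, with <abcdef> again standing in for your volume name:
# write a tarball of the volume into the current host directory
docker run --rm -v <abcdef>:/var/lib/mysql -v "$PWD":/backup busybox tar czf /backup/mysql-volume.tar.gz -C /var/lib/mysql .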
Step 2: Create a new container, and mount this volume on
Whatever docker run command you are already using to start a MySQL container, add -v <abcdef>:/var/lib/mysql_old to it. It should give you a fresh MySQL container up and running without any issues. Your data is not loaded there yet; just the files are accessible.
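If you don't have an existing run command handy, a minimal sketch might look like this (container name and password are assumptions):
docker run -d --name mysql-recovery -e MYSQL_ROOT_PASSWORD=secret -v <abcdef>:/var/lib/mysql_old mysql/mysql-server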
Step 3: Copy and overwrite the MySQL data
Now, go into the shell of that container (e.g. docker exec -it <CONTAINER_NAME> bash) and do ls /var/lib/mysql_old to make sure the files from your volume are there.
Then, do cp -R /var/lib/mysql_old/. /var/lib/mysql (or sudo cp ... depending on the user you got in with) and then chown -R mysql:mysql /var/lib/mysql. (Credits to this tutorial)
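Put together, Step 3 inside the container looks roughly like this:
ls /var/lib/mysql_old                      # confirm the old files are there
cp -R /var/lib/mysql_old/. /var/lib/mysql  # overwrite the fresh data directory
chown -R mysql:mysql /var/lib/mysql        # restore the ownership MySQL expects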
Step 4: Restart the container
Exit the container and do docker stop <CONTAINER_NAME> to stop the container. Then start it again via docker start <CONTAINER_NAME>. Voila! It should now be a DB with all your data.
Optionally, if you want to start off with a non-hacked container, you can do docker exec <CONTAINER_NAME> sh -c 'exec mysqldump -uroot -p --databases <DATABASE_NAME>' > dump.sql to get a mysqldump out of it, and import that dump.sql into a fresh new container via docker exec -i <CONTAINER_NAME> sh -c 'exec mysql ' < dump.sql.

Is there a way to check the loading status of a mysql dump import?

I'm using the command below to import a backup.sql into a mysql Docker container:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
That works well, but sometimes the import takes a long time because of the size of the sql dump file (~10 minutes or even more).
Is there any way I can check the status (loading percentage or something helpful) of the restore?
TL;DR: Use the command template below, substituting your own settings.
pv -pert <sql file> | docker exec -i <container> /usr/bin/mysql -u <user> --password=<password> <DATABASE>
This is what I do:
pv -pert backup.sql | ...mysql command to restore...
The pv command shows a nice progress bar.
Example of restoring a 1.6GB sql file: [screenshot of the pv progress bar omitted]
pv is not necessarily installed by default on your system, but it's commonly available in package repos. On my Mac, I installed it easily using brew.
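The usual install commands (the package name is simply pv):
brew install pv          # macOS, via Homebrew
sudo apt-get install pv  # Debian/Ubuntu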

How to back up a MySQL database with Kubernetes

I successfully recreated the Single-Instance Stateful Application tutorial. Naturally, I'd like to create a periodic backup of all databases. I found this article that explains how to make a backup. Unfortunately, it does not work for me. The command that I am running looks like this:
$ kubectl exec -n <namespace> <pod> -- mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > /var/lib/mysql/backup/alldbs.sql
I found the errors. The backup was not working for two reasons.
First, incorrect syntax. Instead of using kubectl exec -n <namespace> <pod> mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > dump.sql as the article suggests, I had to use the syntax described in the mysql Docker Hub documentation, which looks like this: kubectl exec -n <namespace> <pod> -- sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > dump.sql
Second, an incorrect path assumption. I assumed that dump.sql was created on the pod/container filesystem, so I expected to see the backup file inside the container. Instead, the backup file was created on the host machine's filesystem, where the > redirection happens, not in the pod/container.
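With the corrected syntax, the periodic part can be handled by a crontab entry on a machine with kubectl access; a sketch, where the schedule, namespace, pod, and backup path are all assumptions:
# dump all databases every night at 02:00 (% must be escaped in crontab)
0 2 * * * kubectl exec -n <namespace> <pod> -- sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /backups/alldbs-$(date +\%F).sql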

Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c
'exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
-p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"
dbname'
| gzip > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
As the first line of the output file. Obviously this later causes problems for any other MySQL processes that consume the data.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now I put together a post about exporting and importing from mysql docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
Run the following command in a terminal:
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username, and yourpassword with the values specific to your configuration.
To get the container id:
docker container ls
To eliminate this exact warning you can pass the password in the MYSQL_PWD environment variable or use another connection method - see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c
'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
dbname'
| gzip > output.sql.gz
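The same trick works against an already running container with docker exec; a sketch with hypothetical names, assuming $MYSQL_ROOT_PASSWORD is set in the host shell:
# MYSQL_PWD keeps the password off mysqldump's command line
docker exec -e MYSQL_PWD="$MYSQL_ROOT_PASSWORD" containername mysqldump -uroot dbname | gzip > output.sql.gz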
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list the containers: docker container list
Make the script executable:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the CONSTANTS above with your own data.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials are not in a file in the repo, but just in your .env file.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec $CONTAINER_NAME /usr/bin/mysqldump -u$DB_USER --password=$DB_PASSWORD $DB_NAME > $BACKUP_DIR/dump__$TIMESTAMP.sql
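A matching restore tool following the same pattern might look like this (a sketch, not from the original post, taking the dump file as the first argument):
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
CONTAINER_NAME="cp-db"
# restore: feed the dump file on stdin; -i keeps stdin open, no TTY needed
docker exec -i $CONTAINER_NAME /usr/bin/mysql -u$DB_USER --password=$DB_PASSWORD $DB_NAME < "$1"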