Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c 'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" dbname' | gzip > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
as the first line of the output file, which later causes problems for any other MySQL process that consumes the data.
Is there any way to suppress this warning from the mysqldump client?

A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
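If you also want the compressed output from the original question, the same command pipes cleanly through gzip (a sketch; CONTAINER, the credentials, and DATABASE are placeholders, and the password is still passed on the command line, so the warning may still appear on stderr):
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE | gzip > backup.sql.gz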

I realise that this is an old question, but for those stumbling across it now, I put together a post about exporting and importing from MySQL docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.

Run the following command in a terminal:
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username, and yourpassword with values specific to your configuration.
To get the container ID:
docker container ls
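If many containers are running, you can narrow the list down (a sketch, assuming the container was started from the official mysql image):
docker container ls --filter ancestor=mysql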

To eliminate this exact warning you can pass the password in the MYSQL_PWD environment variable, or use another connection method - see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c 'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot dbname' | gzip > output.sql.gz
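The same MYSQL_PWD trick works without the legacy --link wiring. A sketch, assuming the container was started with -e MYSQL_ROOT_PASSWORD=... (which the official image keeps in the container's environment); CONTAINER and dbname are placeholders:
docker exec CONTAINER sh -c 'export MYSQL_PWD="$MYSQL_ROOT_PASSWORD"; exec mysqldump -uroot dbname' | gzip > output.sql.gz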

Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
#!/bin/bash
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list the running containers: docker container list
Make the script executable:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the placeholders above (CONTAINER_ID, DB_USER, DB_PASS, DB_NAME, FILENAME) with your own values.

I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials live only in your .env file instead of being committed to the repo.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u"$DB_USER" --password="$DB_PASSWORD" "$DB_NAME" > "$BACKUP_DIR/dump__$TIMESTAMP.sql"
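For reference, a minimal .env that would satisfy this script (the variable names are taken from the dump line above; the values are placeholders):
DB_USER=root
DB_PASSWORD=changeme
DB_NAME=mydb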

Related

pass password from .env on command to mysql inside docker

I basically know nothing about docker, and not much more about bash either. So:
There's a command in the README of a Laravel project I'm working on that shows how to load some data into the local MySQL docker image by sending queries from a file located on the HOST:
docker exec -i {image} mysql -uroot -p{password} {database} < location/of/file.sql
What I want to do is "hide" the password from the README and make it read from the .env file.
So, I want to do something like this:
docker exec --env-file=.env -i {image} mysql -uroot -p$DB_PASSWORD {database} < location/of/file.sql
I've tested that docker ... printenv does show the variables from the file. But echoing one of them outputs a blank line: docker ... echo $DB_PASSWORD, and running the MySQL command using it gets me "Access denied for user 'root'@'localhost'".
I've tried running the MySQL command "directly": docker ... mysql ... < file.sql, and also "indirectly": docker ... bash -c "mysql ..." < file.sql.
You should prevent your local shell from expanding the variables (by single-quoting, or by escaping the $).
That way the command is passed to the container's shell and expanded there:
docker exec --env-file=.env -i {image} bash -c 'mysql -uroot -p$DB_PASSWORD {database}' < location/of/file.sql
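This also explains the blank echo above: with double quotes (or no quotes) your host shell substitutes $DB_PASSWORD, which is unset on the host, before docker ever runs. A quick sketch of the difference:
docker exec --env-file=.env {image} bash -c "echo $DB_PASSWORD"   # expanded on the host: prints a blank line
docker exec --env-file=.env {image} bash -c 'echo $DB_PASSWORD'   # expanded in the container: prints the password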
It could be one of two cases:
Check the key name in your .env file against the one the docker command expects.
Check the path of the .env file you are passing.
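A quick way to verify both (a sketch; DB_PASSWORD is the key name assumed above):
docker exec --env-file=.env {image} printenv DB_PASSWORD   # is the key present with the expected value?
ls -l .env                                                 # does the file exist at the path you pass to --env-file?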

Is it possible to pass a determined $MYSQL_ROOT_PASS to MySQL docker from outside docker? If so how?

So I'm writing an install script, and because I haven't been able to find a solid MySQL replacement on armhf (the db must be MySQL compatible), I'm using a community image that works. However, it does not initialize the db as it should; it requires me to run the following command
mysql -h"db" -u"root" -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" < /docker-entrypoint-initdb.d/1_db.sql
from inside the container. Problem is, I want this to flow naturally as a smooth install script. I've tried using the following command to pass the file in, but I get a password prompt:
docker exec -it db bash -c "mysql -h"db" -u"root" -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" < /docker-entrypoint-initdb.d/1_db.sql"
I've also tried:
docker exec -it db bash -c "mysql -h'db' -u'root' -p'$MYSQL_ROOT_PASSWORD' '$MYSQL_DATABASE' < /docker-entrypoint-initdb.d/1_db.sql"
FWIW: I used MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1) to define the password. And if I manually run the first command (in quotations) inside the container, the db initializes.
So to summarize my question: is it possible to pass a command like the above to run the 1_db.sql file from outside docker?
Any help would be amazing! Thanks in advance!
Is it possible to pass a command like the above to activate the
1_db.sql file from outside docker?
You can try something like:
cat 1_db.sql | docker exec -i test bash -c 'mysql -uroot -p$MYSQL_ROOT_PASSWORD $MYSQL_DATABASE'
Also remember: when you run exec bash -c "mysql -uroot -p$MYSQL_ROOT_PASSWORD", the double quotes make the host shell expand MYSQL_ROOT_PASSWORD, not the container; use single quotes.
determined $MYSQL_ROOT_PASS to MySQL docker from outside docker? If so
how?
docker exec -i test bash -c 'echo mysql docker password is $MYSQL_ROOT_PASSWORD'
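Putting the pieces together, the install script could look roughly like this (a sketch; the container name db matches the question, but the image name is a placeholder, and waiting for the server to come up is omitted):
#!/bin/bash
# generate a random root password on the host, as in the question
MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)
# start the container with the password and database in its environment
docker run -d --name db -e MYSQL_ROOT_PASSWORD="$MYSQL_ROOT_PASSWORD" -e MYSQL_DATABASE=mydb some-community-image
# feed the init script in; single quotes defer expansion to the container's shell
cat 1_db.sql | docker exec -i db bash -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"'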

Bash: file not created with '>' command

I'm writing a script to create backups of a MySQL database running in a docker container. The database is correctly up and running.
My current code is
#!/bin/bash
PATH=/usr/bin:/usr/local/bin:/root/.local/bin:$PATH
docker-compose exec -T db mkdir -p /opt/booking-backup
docker_backup_path="/opt/booking-backup/dump_prod_$(date +%F_%R).sql"
copy_backup_path="/root/backup_scripts/booking_prod/dump_prod_$(date +%F_%R).sql"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "$docker_backup_path"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "/opt/booking-backup/dump_prod.sql"
[ -d ./backup ] || mkdir ./backup
docker cp $(docker-compose ps -q db):$docker_backup_path $copy_backup_path
However, when I execute it it throws this error:
Error: No such container:path: f0baa241becd20d2690bb901fb257a4bbec8cac17e6f1ce6d50adb9532bbae03:/opt/booking-backup/dump_prod_2019-05-28_14:23.sql
What makes this weirder is that I have the exact same code (but with booking switched out for abc, and with PSQL instead of MySQL) that works correctly.
It appears that this line
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > $docker_backup_path
does not create the output file, but when I use tee I can see the contents of the dump and they are correct.
What's going wrong here?
The shell redirections
docker-compose exec db mysqldump ... > "$docker_backup_path"
docker-compose exec db mysqldump ... > "/opt/booking-backup/dump_prod.sql"
# -----------------------------------^ here
... will be expanded by your local shell, not inside the container. That means the files are written to your local filesystem, not to the container's filesystem.
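Two fixes follow from that (sketches, untested against this exact compose setup): perform the redirection inside the container so docker cp finds the file, or keep the redirection local and write straight to the host, skipping docker cp:
# redirection runs inside the container; $docker_backup_path still expands locally, which is what we want
docker-compose exec db sh -c "mysqldump --add-drop-database --add-drop-table --user=root --password=pw booking > $docker_backup_path"
# or let the local redirection write directly to the host path (-T avoids TTY noise in the output)
docker-compose exec -T db mysqldump --add-drop-database --add-drop-table --user=root --password=pw booking > "$copy_backup_path"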

docker-compose mysql import failed - the input device is not a TTY

I have running containers and need to import a SQL database, so I tried:
docker-compose exec MYSQL_CONTAINERNAME mysql -uroot -p --database=MY_DB < /code/export_new.sql
But this fails with "the input device is not a TTY". If I use the "-i", "-T", or "-it" parameters after "exec", I get this usage message:
Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]

Options:
    -d, --detach        Detached mode: Run command in the background.
    --privileged        Give extended privileges to the process.
    -u, --user USER     Run the command as this user.
    -T                  Disable pseudo-tty allocation. By default `docker-compose exec`
                        allocates a TTY.
    --index=index       index of the container if there are multiple
                        instances of a service [default: 1]
    -e, --env KEY=VAL   Set environment variables (can be used multiple times,
                        not supported in API < 1.25)
    -w, --workdir DIR   Path to workdir directory for this command.
How can I import my "export_new.sql" into MySQL inside the container?
Worked for me with
docker-compose exec -T MYSQL_CONTAINERNAME mysql databasename < data.sql
Using docker-compose exec like this did not work (I don't know why), but you can use docker exec. You just need to know the container name or id. The name is listed during docker-compose up, or you can find it using docker-compose ps.
Here is an example command:
docker exec -i MYSQL_CONTAINERNAME mysql databasename < data.sql
Or you can combine docker-compose ps into it so that you only need to know the short name (defined in docker-compose.yml):
docker exec -i $(docker-compose ps -q MYSQL) mysql databasename < data.sql
Also, note that I had trouble using the above commands with the -p flag so that it would ask for the password interactively, but it worked when I passed the password in the initial command (e.g. mysql -uroot -pmypwd).
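That matches the TTY issue above: with -i, stdin is the piped dump, so there is no terminal left for a password prompt. Passing the password inline sidesteps it (a sketch; the credentials and names are placeholders):
docker exec -i $(docker-compose ps -q MYSQL) mysql -uroot -pmypwd databasename < data.sql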
Ok, so first you have to connect to your running mysql instance. You can do it with this command:
docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
In this command, replace the string some-mysql with the name of your actual container. The variables correspond to:
MYSQL_PORT_3306_TCP_ADDR - host (default: localhost)
MYSQL_PORT_3306_TCP_PORT - port (default: 3306)
MYSQL_ENV_MYSQL_ROOT_PASSWORD - password of the root user
Then you will be able to execute commands directly on your mysql instance, so (assuming the dump is readable from inside that container) it is enough to type:
use MY_DB; source /code/export_new.sql;
For any troubleshooting or good source of information please read about mysql docker image here:
https://hub.docker.com/_/mysql/
If your docker DBMS is password-protected you should use:
docker exec -i MYSQL_CONTAINERNAME mysql databasename_optional -ppassword < data.sql
The databasename_optional is optional, since some DB dumps set the DB name at the top, like:
CREATE DATABASE IF NOT EXISTS `databasename` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
USE `databasename`;

How do I restore a dump file from mysqldump using kubernetes?

I know how to restore a dump file from mysqldump. Now, I am attempting to do that using kubernetes and a docker container. The database files are in a persistent (NFS) mount. The container cannot be accessed from outside the cluster, as there is no need for anything external to touch it.
I tried:
kubectl run -i -t dbtest --image=mariadb --restart=Never --rm=true --command -- mysql -uroot -ps3kr37 < dump.sql
and
kubectl exec mariadb-deployment-3614069618-mn524 -i -t -- mysql -u root -p=s3kr37 < dump.sql
But neither command worked -- errors about TTY, sockets, and other things hinting that I am missing something vital here.
What am I not understanding here?
I could just stop the deployment, scp the database files, and restart the container and hope for the best. However, what can go right?
The question Install an sql dump file to a docker container with mariaDB sure looks like a duplicate but is not: first, I am on Linux, not Windows, and more importantly, the answers are all about initialising with a dump. I want to be able to trash the data and revert to the dump data. This is a test system that will eventually be the "live" one, so I need to restore from many potential dumps.
As described here, you can use the following command to restore a DB on a kubernetes pod from a dump on your machine:
$ kubectl exec -it {{podName}} -n {{namespace}} -- mysql -u {{dbUser}} -p{{password}} {{DatabaseName}} < <scriptName>.sql
Example :
$ kubectl exec -it mysql-58 -n sql -- mysql -u root -proot USERS < dump_all.sql
What I did was this:
Create an NFS mount with two subdirectories: mysql and initd.
In initd, I added several .sql files, including the dump.
Mount initd as /docker-entrypoint-initdb.d in the deployment. This causes all the files to be read at initialisation time, provided that it is the first time we run.
The mysql directory is mounted as /var/lib/mysql and contains all the mariaDB files.
If I need to revert, I trash all the contents of the mysql directory and re-create the deployment, as sketched below.
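A rough sketch of that revert flow (the NFS path, deployment name, and manifest file are assumptions, not from the original answer):
# tear down the pod so mariadb releases its data files
kubectl delete deployment mariadb-deployment
# wipe the data directory on the NFS export; initd/ stays intact
rm -rf /mnt/nfs/mysql/*
# re-create the deployment; with an empty datadir the entrypoint
# re-runs every script mounted at /docker-entrypoint-initdb.d
kubectl apply -f mariadb-deployment.yaml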
This should work:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot drupal < you-dump.sql
The kubeconfig flag is optional; DigitalOcean, for example, provides one so you can run your commands from your local machine.
To see if everything looks good:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysql -h mysqlservice -proot
After which you'll have a terminal inside mysql.