Bash: file not created with '>' command - mysql

I'm writing a script to create backups of a MySQL database running in a docker container. The database is correctly up and running.
My current code is
#!/bin/bash
PATH=/usr/bin:/usr/local/bin:/root/.local/bin:$PATH
docker-compose exec -T db mkdir -p /opt/booking-backup
docker_backup_path="/opt/booking-backup/dump_prod_$(date +%F_%R).sql"
copy_backup_path="/root/backup_scripts/booking_prod/dump_prod_$(date +%F_%R).sql"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "$docker_backup_path"
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "/opt/booking-backup/dump_prod.sql"
[ -d ./backup ] || mkdir ./backup
docker cp $(docker-compose ps -q db):$docker_backup_path $copy_backup_path
However, when I execute it, it throws this error:
Error: No such container:path: f0baa241becd20d2690bb901fb257a4bbec8cac17e6f1ce6d50adb9532bbae03:/opt/booking-backup/dump_prod_2019-05-28_14:23.sql
What makes this weirder is that I have the exact same code (but with booking switched out for abc, and with PSQL instead of MySQL) that works correctly.
It appears that this line
docker-compose exec db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > $docker_backup_path
does not create the output file, but when I use tee I can see the contents of the dump and they are correct.
What's going wrong here?

The shell redirections
docker-compose exec db mysqldump ... > "$docker_backup_path"
docker-compose exec db mysqldump ... > "/opt/booking-backup/dump_prod.sql"
# -----------------------------------^ here
... will be expanded by your local shell, not inside the container. That means the files are written to your local filesystem, not to the container's filesystem, which is why the later docker cp finds nothing at that container path.
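
If the dump should land inside the container (so the docker cp step keeps working), the redirection has to run inside the container too. A minimal sketch of two possible fixes, reusing the variables from the script in the question:

# Option 1: run the redirection inside the container via sh -c,
# so $docker_backup_path is created on the container's filesystem
docker-compose exec -T db sh -c "exec mysqldump --add-drop-database --add-drop-table --user=root --password='pw' booking > '$docker_backup_path'"

# Option 2: keep the local redirection, write straight to the host-side
# destination, and drop the docker cp step entirely
docker-compose exec -T db mysqldump --add-drop-database --add-drop-table --user=root --password="pw" booking > "$copy_backup_path"

Note the -T flag: without it docker-compose allocates a pseudo-TTY, which can inject carriage returns into the dump.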

Related

How to backup MYSQL database with kubernetes

I successfully recreated the Single-Instance Stateful Application tutorial. Naturally, I'd like to create a periodic backup of all databases. I found this article that explains how to make a backup. Unfortunately, it does not work for me. The command that I am running looks like this:
$ kubectl exec -n <namespace> <pod> -- mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > /var/lib/mysql/backup/alldbs.sql
I found the errors. The backup was not working for two reasons.
First, incorrect syntax. Instead of using kubectl exec -n <namespace> <pod> mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > dump.sql as the article suggests, I had to use the syntax described in the mysql Docker Hub documentation, which looks like this: kubectl exec -n <namespace> <pod> -- sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > dump.sql
Second, an incorrect path assumption. I assumed that dump.sql was created on the pod/container filesystem, so I expected to see the backup file inside the container. Instead, the backup file was created on the host machine's filesystem, not in the pod/container.
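
Putting both fixes together, a minimal timestamped backup sketch (the namespace and pod names here are hypothetical placeholders):

#!/bin/bash
# Hypothetical names; substitute your own namespace and pod
NAMESPACE=default
POD=mysql-0
# The redirection runs on the host, so the dump lands in the current directory
kubectl exec -n "$NAMESPACE" "$POD" -- \
  sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  > "alldbs_$(date +%F).sql"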

docker-compose mysql import failed - the input device is not a TTY

I have running containers, and I need to import a SQL database into one of them, so I tried:
docker-compose exec MYSQL_CONTAINERNAME mysql -uroot -p --database=MY_DB < /code/export_new.sql
But this fails with "the input device is not a TTY". If I add the "-i", "-T", or "-it" parameters after exec, as in
docker-compose exec -i MYSQL_CONTAINERNAME mysql -uroot -p --database=MY_DB < /code/export_new.sql
I get this usage message:
Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]
Options:
  -d, --detach       Detached mode: Run command in the background.
  --privileged       Give extended privileges to the process.
  -u, --user USER    Run the command as this user.
  -T                 Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY.
  --index=index      Index of the container if there are multiple instances of a service [default: 1]
  -e, --env KEY=VAL  Set environment variables (can be used multiple times, not supported in API < 1.25)
  -w, --workdir DIR  Path to workdir directory for this command.
How can I import my "export_new.sql" into the MySQL instance that runs in the container?
This worked for me (-T disables the pseudo-TTY allocation that causes the "input device is not a TTY" error):
docker-compose exec -T MYSQL_CONTAINERNAME mysql databasename < data.sql
Using docker-compose exec like this did not work (I don't know why), but you can use docker exec. You just need to know the container name or ID. The name is listed during docker-compose up, or you can find it using docker-compose ps.
Here is an example command:
docker exec -i MYSQL_CONTAINERNAME mysql databasename < data.sql
Or you can combine docker-compose ps into it so that you only need to know the short name (defined in docker-compose.yml):
docker exec -i $(docker-compose ps -q MYSQL) mysql databasename < data.sql
Also, note that I had trouble using the above commands with the -p flag so that it would ask for the password interactively, but it worked when I passed the password in the initial command (e.g. mysql -uroot -pmypwd).
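
Putting those pieces together, a minimal import sketch, assuming the service is named db in docker-compose.yml and mypwd is a placeholder password:

# -i keeps stdin open so the dump can be piped in; no TTY is allocated
docker exec -i $(docker-compose ps -q db) mysql -uroot -pmypwd databasename < data.sql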
Ok, so first you have to connect to your running mysql instance. You can do it with this command:
docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
In this command, replace the string some-mysql with the name of your actual container. The variables correspond to:
MYSQL_PORT_3306_TCP_ADDR - host (default: localhost)
MYSQL_PORT_3306_TCP_PORT - port (default: 3306)
MYSQL_ENV_MYSQL_ROOT_PASSWORD - password of the root user
Then you will be able to execute commands directly on your MySQL instance. To import the dump, run the same client non-interactively against your database, feeding the file on stdin (note -i instead of -it, since stdin is redirected):
docker run -i --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" MY_DB' < /code/export_new.sql
For troubleshooting or a good source of information, please read about the mysql Docker image here:
https://hub.docker.com/_/mysql/
If your Docker DBMS has a password, you should use:
docker exec -i MYSQL_CONTAINERNAME mysql databasename_optional -ppassword < data.sql
The databasename_optional argument is optional, since some DB dumps set the DB name at the top of the file, like:
CREATE DATABASE IF NOT EXISTS `databasename` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
USE `databasename`;

Set up MySQL using Dockerfile

I'm just getting started with Docker and was able to set up MySQL according to my needs, by running tutum/lamp and doing a bunch of exec. For example:
docker run -d -p 80:80 -p 3306:3306 --name test tutum/lamp
...
docker exec test mysqldump --host somehost --user someuser --password --databases somedatabase > dump.sql
docker exec test mysql -u root < dump.sql
However, I'm having issues converting this to a Dockerfile. Specifically, the following results in ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock':
FROM tutum/lamp
EXPOSE 80 3306
...
RUN mysqldump --host=$DB_IP --user=$DB_USER --password=$DB_PASSWORD --databases somedatabase > dump.sql
RUN mysql -u root < dump.sql
You will need to override run.sh in order to do that, because when you run the container it installs MySQL for the first time.
That is why you cannot connect to MySQL before then (in my previous answer I wasn't aware of that).
I've managed to execute mysql command by adding this to Dockerfile
FROM tutum/lamp
ADD . /custom
RUN chmod 755 /custom/run.sh
CMD ["/custom/run.sh"]
Then, in the same folder, create a file run.sh:
#!/bin/bash
VOLUME_HOME="/var/lib/mysql"
sed -ri -e "s/^upload_max_filesize.*/upload_max_filesize = ${PHP_UPLOAD_MAX_FILESIZE}/" \
    -e "s/^post_max_size.*/post_max_size = ${PHP_POST_MAX_SIZE}/" /etc/php5/apache2/php.ini
if [[ ! -d $VOLUME_HOME/mysql ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
    mysql_install_db > /dev/null 2>&1
    echo "=> Done!"
    /create_mysql_admin_user.sh
else
    echo "=> Using an existing volume of MySQL"
fi
( sleep 20 ; mysql -u root < /custom/dump.sql ; echo "*** IMPORT ***" ) &
exec supervisord -n
This file is the same as /run.sh with one line added to run the SQL import after 20 seconds, to make sure the MySQL service is up and running (there must be a more elegant way to run a command just after MySQL is started, of course).
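
One such more elegant option is to poll the server until it answers instead of sleeping a fixed 20 seconds. A sketch of the replacement line, assuming mysqladmin is available in the image and can connect without a password during initialization:

# retry until mysqld answers, then import
( until mysqladmin ping --silent; do sleep 1; done ; mysql -u root < /custom/dump.sql ; echo "*** IMPORT ***" ) &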

Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c \
  'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" dbname' \
  | gz > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
as the first line of the output file. Obviously this later causes problems for any other MySQL processes that consume the data.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer, but this command saved my day:
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now, I put together a post about exporting and importing from mysql docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
Run the following command in a terminal:
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username, and yourpassword with the values specific to your configuration.
To get the container ID:
docker container ls
To eliminate this exact warning, you can pass the password in the MYSQL_PWD environment variable or use another connection method; see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c \
  'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot dbname' \
  | gzip > output.sql.gz
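The same MYSQL_PWD trick also works with a plain docker exec against the running container. A minimal sketch, assuming the container is named containername and was started with MYSQL_ROOT_PASSWORD set in its environment:

# MYSQL_PWD is read by mysqldump, so no password appears on the command line
docker exec containername sh -c 'export MYSQL_PWD="$MYSQL_ROOT_PASSWORD"; exec mysqldump -uroot dbname' > dump.sql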
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list the containers: docker container list
Add execute permission to the script:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the CONSTANTS above with your own data.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials are not in a file in the repo, but just in your .env file.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec $CONTAINER_NAME /usr/bin/mysqldump -u$DB_USER --password=$DB_PASSWORD $DB_NAME > $BACKUP_DIR/dump__$TIMESTAMP.sql
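
For completeness, a hypothetical .env that the script above would load (the values are placeholders, not from the original post):

# .env -- loaded by `set -o allexport; . ./.env; set +o allexport`
DB_USER=root
DB_PASSWORD=secret
DB_NAME=booking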

Docker MariaDB/Mysql dump

How can I run mysqldump against a running container from https://hub.docker.com/_/mariadb/ ?
I can't find any useful documentation or data.
Is there any method to back up and restore the database?
This is my container run command:
docker run --name myaapp-mariadb -v /databases/maria:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -d mariadb:10
If we assume you created the mariadb server container this way:
docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:latest
Then you access it from another client container:
docker run -it --link some-mariadb:mysql \
--rm mariadb:latest \
sh -c 'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" database_name' > database_name_dump.sql
There are lots more helpful usage tips on the official mysql image page.
The accepted answer stands correct in every sense. Adding this for the case where the database is mapped to an external volume.
So, for example if the container was created using below command
docker run --name mysqldb -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -v /dir_path_on_your_machine/mysql-data:/var/lib/mysql -d mariadb:latest
then, we can execute the below command from cmd line or terminal
docker exec mysqldb mysqldump --user=root --password=password dbname > /dir_path_on_your_machine/mysql-data/dump/db.sql
However, the dump created using the above commands will not include stored procedures, functions, and events. We would need extra params in order to dump those:
--triggers Dump triggers for each dumped table.
--routines Dump stored routines (functions and procedures).
--events Dump events.
So, we can modify our command to include the above params for the desired result.
Sample updated command:
docker exec mysqldb mysqldump --routines --triggers --user=root --password=password dbname > /dir_path_on_your_machine/mysql-data/dump/db1.sql
In case you encounter any import-related errors, check if this helps.
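
Since the question also asks about restore: to load such a dump back, the same docker exec pattern works in reverse, with -i keeping stdin open. A minimal sketch, reusing the container name and credentials from the commands above:

# feed the dump back into the server on stdin
docker exec -i mysqldb sh -c 'exec mysql --user=root --password=password dbname' < /dir_path_on_your_machine/mysql-data/dump/db1.sql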