MySQL Docker container - unable to import file - mysql

I have a Docker container running MariaDB and within the Dockerfile for the image I've copied an SQL file to a location in the container:
FROM mariadb
RUN mkdir /adhoc_scripts
COPY bootup.sql /adhoc_scripts
Once the container has spun up, I'm able to enter a shell within the container and confirm that the sql file does exist in the specified location:
$ docker exec -it my_mariadb_container bash
root@6e3f4b9abe17:/# ls -lt /adhoc_scripts/
total 4
-rw-r--r-- 1 root root 1839 Apr 10 18:35 bootup.sql
However when I exit the container and try to invoke the following command:
docker exec -it my_mariadb_container bash \
mysql mydb -u root -prootpass \
< /adhoc_scripts/bootup.sql
I get this error:
-bash: /adhoc_scripts/bootup.sql: No such file or directory
What am I doing wrong?
EDIT1: I tried changing the permissions of /adhoc_scripts to 777 as well, but that didn't help.

This is because the < redirection is processed by your local shell before docker exec ever runs, so it applies to the whole docker exec ... command rather than to the mysql ... part inside the container.
The error message makes this clear: it is your bash on the host that is trying to open /adhoc_scripts/bootup.sql, and that path only exists inside the container.
To solve it, try this:
docker exec -it my_mariadb_container bash -c "mysql mydb -u root -prootpass < /adhoc_scripts/bootup.sql"
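If the .sql file were on the host instead of baked into the image, a minimal alternative (a sketch, assuming bootup.sql sits in your current host directory) is to let the host shell do the redirection and stream the file over stdin:
# -i keeps stdin open (no -t), so the host-side redirection reaches mysql inside the container
docker exec -i my_mariadb_container mysql mydb -u root -prootpass < bootup.sql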

Related

pass password from .env on command to mysql inside docker

I basically know nothing about Docker, and not that much more about bash either. So:
There's a command in the README of a Laravel project I'm working on that shows how to load some data into the local MySQL Docker container by sending queries from a file located on the HOST:
docker exec -i {image} mysql -uroot -p{password} {database} < location/of/file.sql
What I want to do is "hide" the password from the README and have it read from the .env file.
So, I want to do something like this:
docker exec --env-file=.env -i {image} mysql -uroot -p$DB_PASSWORD {database} < location/of/file.sql
I've tested that docker ... printenv does show the variables from the file. But echoing one of them outputs a blank line (docker ... echo $DB_PASSWORD), and running the MySQL command with it gets me "Access denied for user 'root'@'localhost'".
I've tried running the MySQL command "directly": docker ... mysql ... < file.sql, and also "indirectly": docker bash -c "mysql ..." < file.sql.
You should prevent your (host) shell from expanding the variable locally, either by single-quoting or by escaping the $.
That way the literal $DB_PASSWORD string is passed to the container's shell and expanded there:
docker exec --env-file=.env -i {image} bash -c 'mysql -uroot -p$DB_PASSWORD {database}' < location/of/file.sql
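If you prefer to keep double quotes, escaping the dollar sign achieves the same thing; a sketch with the same {image}/{database} placeholders:
# \$ stops the host shell from expanding DB_PASSWORD, so the container's bash expands it instead
docker exec --env-file=.env -i {image} bash -c "mysql -uroot -p\$DB_PASSWORD {database}" < location/of/file.sql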
It could be one of two issues; see the quick checks below.
Check that the key name in your env file matches the one used in the docker run command.
Check the path of the env file you are mapping to.
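A couple of quick checks for both points (the container name mydb and key DB_PASSWORD are placeholders here):
# 1. is the key present, with the expected value, inside the container?
docker exec mydb printenv DB_PASSWORD
# 2. which environment variables did the container actually receive at run time?
docker inspect mydb --format '{{.Config.Env}}'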

Is it possible to pass a determined $MYSQL_ROOT_PASS to MySQL docker from outside docker? If so how?

So I'm writing an install script, and because I haven't been able to find a solid MySQL replacement on armhf (the DB must be MySQL compatible), I'm using a community one that works. However, it does not initialize the DB as it should; it requires me to pass the following command:
mysql -h"db" -u"root" -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" < /docker-entrypoint-initdb.d/1_db.sql
from inside the container. The problem is that I want this to flow naturally as a smooth install script. I've tried using the following command to pass the file, but I just get a password prompt:
docker exec -it db bash -c "mysql -h"db" -u"root" -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" < /docker-entrypoint-initdb.d/1_db.sql"
I've also tried:
docker exec -it db bash -c "mysql -h'db' -u'root' -p'$MYSQL_ROOT_PASSWORD' '$MYSQL_DATABASE' < /docker-entrypoint-initdb.d/1_db.sql
FWIW: I used MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1) to define the password. And if I manually enter the container and run command 1 (the part in quotation marks), the DB initializes.
So to summarize my question: Is it possible to pass a command like the above to activate the 1_db.sql file from outside docker?
Any help would be amazing! Thanks in advance!
Is it possible to pass a command like the above to activate the 1_db.sql file from outside docker?
You can try something like:
cat 1_db.sql | docker exec -i test bash -c 'mysql -uroot -p$MYSQL_ROOT_PASSWORD $MYSQL_DATABASE'
Also, remember that when you use exec bash -c "mysql -uroot -p$MYSQL_ROOT_PASSWORD", the shell expands MYSQL_ROOT_PASSWORD on the host, not inside the container, so use single quotes.
Is it possible to pass a determined $MYSQL_ROOT_PASS to MySQL docker from outside docker? If so how?
docker exec -i test bash -c 'echo mysql docker password is $MYSQL_ROOT_PASSWORD'
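Putting it together, here is a rough sketch of how the import could sit inside an install script; the image name, database and file path below are placeholders rather than values from the question:
# generate a random root password and start the DB container with it
MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD="$MYSQL_ROOT_PASSWORD" \
  -e MYSQL_DATABASE=mydb \
  some/mysql-compatible-image
# simplistic poll: wait until the server answers before importing
until docker exec db mysqladmin --silent -uroot -p"$MYSQL_ROOT_PASSWORD" ping; do sleep 2; done
# feed the init file from the host; single quotes so the variables expand inside the container
docker exec -i db bash -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"' < 1_db.sql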

The command '/bin/sh -c mysql -u wordpress -pwordpress wordpress < /docker-entrypoint-initdb.d/wordpress.sql' returned a non-zero code: 1

The command '/bin/sh -c mysql -u wordpress -pwordpress wordpress < /docker-entrypoint-initdb.d/wordpress.sql' returned a non-zero code: 1
My Dockerfile:
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD="************"
ENV MYSQL_USER="*******"
ENV MYSQL_PASSWORD="*********"
ENV MYSQL_DATABASE="********"
EXPOSE 3306 3366
COPY wordpress.sql /docker-entrypoint-initdb.d/.
RUN mysql -u wordpress -pwordpress wordpress < /docker-entrypoint-initdb.d/wordpress.sql
You don't need the last line ("RUN ...") at all.
The MySQL container automatically runs whatever you copy into /docker-entrypoint-initdb.d/. This happens when you run the container, not during the build, which is why the RUN fails: no MySQL server is running at build time.
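A quick way to see this in action (the image tag and container name below are made up for the example): drop the RUN line, rebuild, start a fresh container, and check the entrypoint's log output:
docker build -t my-wordpress-db .
docker run -d --name wpdb my-wordpress-db
# once initialization finishes, the entrypoint logs each file it executed from /docker-entrypoint-initdb.d/
docker logs wpdb 2>&1 | grep docker-entrypoint-initdb.d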

docker-compose mysql import failed - the input device is not a TTY

I have some running containers, and I need to import SQL databases, so I try:
docker-compose exec MYSQL_CONTAINERNAME mysql -uroot -p --database=MY_DB < /code/export_new.sql
But I get the message "the input device is not a TTY". If I use the "-i", "-T" or "-it" parameters after "... exec" (e.g. docker-compose exec -i MYSQL_CONTAINERNAME mysql -uroot -p --database=MY_DB < /code/export_new.sql), I get this message:
Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]
Options:
-d, --detach Detached mode: Run command in the background.
--privileged Give extended privileges to the process.
-u, --user USER Run the command as this user.
-T Disable pseudo-tty allocation. By default `docker-compose exec`
allocates a TTY.
--index=index index of the container if there are multiple
instances of a service [default: 1]
-e, --env KEY=VAL Set environment variables (can be used multiple times,
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.
How can I import my "export_new.sql" into MySQL, which runs inside the container?
Worked for me with
docker-compose exec -T MYSQL_CONTAINERNAME mysql databasename < data.sql
Using docker-compose exec like this did not work for me (I don't know why), but you can use docker exec instead. You just need to know the container name or ID. The name is printed during docker-compose up, or you can find it using docker-compose ps.
Here is an example command:
docker exec -i MYSQL_CONTAINERNAME mysql databasename < data.sql
Or you can combine docker-compose ps into it so that you only need to know the short name (defined in docker-compose.yml):
docker exec -i $(docker-compose ps -q MYSQL) mysql databasename < data.sql
Also, note that I had trouble using the above commands with the -p flag so that it would ask for the password interactively; it worked when I passed the password in the initial command instead (e.g. mysql -uroot -pmypwd).
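Putting the two hints together, a non-interactive import (service name, password and file names are placeholders) looks like:
docker exec -i $(docker-compose ps -q MYSQL) mysql -uroot -pmypwd databasename < data.sql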
Ok, so first you have to connect to your running mysql instance. You can do it with this command:
docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
In this command, replace the string some-mysql with the name of your actual container. The variables correspond to:
MYSQL_PORT_3306_TCP_ADDR - host (default: localhost)
MYSQL_PORT_3306_TCP_PORT - port (default: 3306)
MYSQL_ENV_MYSQL_ROOT_PASSWORD - password of the root user
Then you will be able to execute commands directly on your mysql instance, so inside the client it is enough to type:
use MY_DB; source /code/export_new.sql;
(provided the dump file is reachable from inside that client container).
For troubleshooting or as a good source of information, read about the mysql Docker image here:
https://hub.docker.com/_/mysql/
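If you would rather not type anything inside the client, the same linked-client pattern can pipe the file straight from the host; a sketch using the names from the question, and assuming /code/export_new.sql is a path on the host (note -i instead of -it, because stdin is redirected):
docker run -i --link some-mysql:mysql --rm mysql sh -c \
  'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" MY_DB' \
  < /code/export_new.sql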
If your Dockerized DBMS has a password, you should use:
docker exec -i MYSQL_CONTAINERNAME mysql databasename_optional -ppassword < data.sql
The databasename_optional argument is optional, since some DB dumps set the DB name at the top, like:
CREATE DATABASE IF NOT EXISTS `databasename` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci;
USE `databasename`;
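A quick way to check whether a dump already selects its own database (in which case the name argument can be left out); data.sql is a placeholder:
# look for CREATE DATABASE / USE statements near the top of the dump
head -n 30 data.sql | grep -iE 'CREATE DATABASE|^USE '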

Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c \
  'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" dbname' \
  | gzip > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
as the first line of the output file. Obviously this later causes problems for any other MySQL process that consumes the data.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now I put together a post about exporting and importing from mysql docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
Run the following command in a terminal:
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username and yourpassword with values specific to your configuration.
To get the container ID:
docker container ls
To eliminate this exact warning you can pass the password in the MYSQL_PWD environment variable, or use another connection method - see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c \
  'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot dbname' \
  | gzip > output.sql.gz
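The same trick works against an already running container with docker exec; a sketch with placeholder names, assuming the container was started with MYSQL_ROOT_PASSWORD so that variable is still present in its environment:
# MYSQL_PWD is read by mysqldump, so no -p on the command line and no warning
docker exec containername sh -c 'MYSQL_PWD="$MYSQL_ROOT_PASSWORD" exec mysqldump -uroot dbname' | gzip > output.sql.gz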
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list your containers: docker container list
Make the script executable:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the CONSTANTS above with your own data.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials are not in a file in the repo, but just in your .env file.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec $CONTAINER_NAME /usr/bin/mysqldump -u$DB_USER --password=$DB_PASSWORD $DB_NAME > $BACKUP_DIR/dump__$TIMESTAMP.sql
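Under the same assumptions (.env provides DB_USER, DB_PASSWORD and DB_NAME, and the container is named cp-db), a companion restore script could look like this:
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# pick the most recent dump and restore it into the container
LATEST=$(ls -t "$BACKUP_DIR"/dump__*.sql | head -n 1)
docker exec -i $CONTAINER_NAME mysql -u$DB_USER --password=$DB_PASSWORD $DB_NAME < "$LATEST"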