Docker container: /bin/sh: cat: No such file or directory - mysql

I'm using the mysql/mysql-server image to create a MySQL server in Docker. Since I want to set up my database (add users, create tables) automatically, I've created an SQL file that does that for me. In order to run that script automatically, I extended the image with this Dockerfile:
FROM mysql/mysql-server:latest
RUN mkdir /scripts
WORKDIR /scripts
COPY ./db_setup.sql .
RUN mysql -u root -p password < cat db_setup.sql
but for some reason, this happens:
/bin/sh: cat: No such file or directory
ERROR: Service 'db' failed to build : The command '/bin/sh -c mysql -u root -p password < cat db_setup.sql' returned a non-zero code: 1
How do I fix this?

You can just remove the cat command from your RUN command:
RUN mysql -u root -p password < db_setup.sql
No such file or directory is returned because the shell treats cat as the name of the file to redirect from, and no file called cat exists in the directory set by WORKDIR. You can just redirect the stdin of mysql from the db_setup.sql file directly. Edited to clarify: the < shell redirection expects the name of the file to use for input, not a command.
EDIT 2: Keep in mind your example is a RUN command that attempts to run mysql and create a layer at docker image build time. You may want to have this run during the mysql entrypoint script at runtime instead (e.g. scripts are run from the docker-entrypoint-initdb.d/ directory by the docker-entrypoint.sh script of the official mysql image) or use other features that are documented for the official image.

RUN is a build-time command. MySQL isn't running at this point.
If you were/are using a standard image, there is a location for database initialization:
FROM mysql:8.0
COPY db_setup.sql /docker-entrypoint-initdb.d
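Note that scripts in /docker-entrypoint-initdb.d are only executed on the first start of a container whose data directory is empty, not at build time. A minimal sketch of building and running it (the image and container names here are just placeholders):
docker build -t my-mysql-img .
docker run -d --name my-db -e MYSQL_ROOT_PASSWORD=password my-mysql-img
The entrypoint output (docker logs my-db) should then show db_setup.sql being executed.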

The cat command is not present in the mysql/mysql-server:latest image.
Moreover, you only need to provide the filename after the redirection.
RUN mysql -u root -p password < db_setup.sql

Related

pass password from .env on command to mysql inside docker

I basically know nothing about Docker. And not much more about bash either. So:
There's a command in the README of a Laravel project I'm working on that shows how to load some data into the local MySQL Docker image by sending queries from a file located on the HOST.
docker exec -i {image} mysql -uroot -p{password} {database} < location/of/file.sql
What I want to do is "hide" the password from the README and make it be read from the .env file.
So, I want to do something like this:
docker exec --env-file=.env -i {image} mysql -uroot -p$DB_PASSWORD {database} < location/of/file.sql
I've tested that docker ... printenv does show the variables from the file. But echoing one of them outputs a blank line: docker ... echo $DB_PASSWORD, and running the MySQL command using it gets me "Access denied for user 'root'@'localhost'".
I've tried running the MySQL command "directly": docker ... mysql ... < file.sql and also "indirectly": docker bash -c "mysql ..." < file.sql.
You should prevent your local shell from expanding the variable (by single-quoting, or by escaping the $).
It should then be passed to the container's shell and expanded there:
docker exec --env-file=.env -i {image} bash -c 'mysql -uroot -p$DB_PASSWORD {database}' < location/of/file.sql
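For completeness, a minimal sketch assuming DB_PASSWORD is the key used in the project's .env file ({image} is the same placeholder as above). The .env file would contain a line like:
DB_PASSWORD=secret
and you can confirm the expansion happens inside the container with:
docker exec --env-file=.env -i {image} bash -c 'echo $DB_PASSWORD'
With the single quotes the variable is expanded inside the container; without them the host shell expands it first, which is why the unquoted echo printed a blank line.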
It could possibly be one of two cases:
Check the key name in your .env file and in the docker run command.
Check the path of the .env file you are mapping.
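A quick way to check both points, using the same placeholders as the question:
docker exec --env-file=.env {image} printenv DB_PASSWORD
If that prints nothing, either the key name in the .env file doesn't match what the command expects, or the path given to --env-file is wrong.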

Db migration in Helm mysql.initializationFiles causes the pod to crash

I'm building a Helm chart with a MySQL dependency. It's being set up okay when empty, but I want to run an sh file on the pod that would copy some data from somewhere else. For this I want to use the initializationFiles construct from the documentation https://github.com/helm/charts/tree/master/stable/mysql
The part of my values.yaml related to this dependency looks like this:
mysql:
  mysqlRootPassword: somePass
  mysqlPassword: somePass
  mysqlUser: user
  appPassword: somePass
  initializationFiles:
    db.sh: |-
      #!/bin/sh
      touch dump.sql
It looks like the code under the initializationFiles stanza gets executed and causes the pod to fail. Since this stanza is executed only once by design, the second attempt succeeds, but when the pod is running I don't see any new file when I do kubectl exec -it pod_name -- bash -c ls
I have tried this:
...
initializationFiles:
  - db.sh
and put the db.sh file in the same folder as values.yaml, but this still didn't work.
What is the correct way to execute an sh file when the dependency MySQL pod is being set up?
Kubernetes version 1.17, helm 3.5.0
I see that the file is actually copied to docker-entrypoint-initdb.d, but it's not executed; no new file is created.
root@mypod:/docker-entrypoint-initdb.d# ls -la
lrwxrwxrwx 1 root root 12 Mar 6 00:14 db.sh -> ..data/db.sh
root@mypod:/docker-entrypoint-initdb.d# cat db.sh
#!/bin/sh
touch dump.sql
I've tried to run the file manually, and got permission denied:
root@mypod:/# ./docker-entrypoint-initdb.d/db.sh
bash: ./docker-entrypoint-initdb.d/db.sh: Permission denied
If I change my command to echo test, then the sh file is executed; I can see this in the logs of the pod. It looks like changing the filesystem is prohibited, and doing touch /dump.sql or touch /home/dump.sql doesn't work either.
MUAHAHA I did it.
The scripts inside ./docker-entrypoint-initdb.d/ are executed by the mysql user.
After poking around inside the pod I've found a directory for which the mysql user has write permissions. So I just write the file there and delete afterwards to initialize the dbs.
Now the whole stanza looks like this:
mysql:
  mysqlRootPassword: myPass
  mysqlPassword: myPass
  mysqlUser: user
  appPassword: myPass
  initializationFiles:
    db.sh: |-
      #!/bin/sh
      mysqldump -h remoteDbUrl -u remoteUser -pRemotePass --databases db1 db2 > /var/lib/mysql/dump.sql
      mysql -uroot -pmyPass < /var/lib/mysql/dump.sql
      rm /var/lib/mysql/dump.sql
      echo "GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'myPass';" | mysql -uroot -pmyPass

Best way to initialize DB script while building docker image

I am building a Docker container using the image "mysql". I have a setup script to run the first time the container is built. This setup script will create some databases and database users with specified permissions.
Following are the minimized versions of my files:
pcdb1.entrypoint.sh
#!/bin/sh
mysql -uroot -p'pass123' -e 'show databases MYENTRYDB;'
Dockerfile
FROM mysql:5.7
COPY ./pcdb1.entrypoint.sh /
ENTRYPOINT ["/pcdb1.entrypoint.sh"];
I am getting the following error in the log
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
pcdb1 exited with code 1
What I understood is, my script is trying to run before mysql is started. But I am not sure how to do it properly. Can I get a suggestion?
EDIT: 20181007
I have found the way you mentioned in the question, initializing the DB while building the docker image. But with a little difference: the way I found seems to initialize the DB while running a container from the image, although the initializing script was specified while building the image.
According to the official information about mysql:5.7, there is a paragraph named "Initializing a fresh instance". We can just add an initialization script to the directory /docker-entrypoint-initdb.d; the default ENTRYPOINT and CMD of the mysql:5.7 image will execute it after database start-up.
For example:
FROM mysql:5.7
COPY init-database.sql /docker-entrypoint-initdb.d/
content of init-database.sql:
create database light;
create user 'light'@'%' identified by 'abc123';
grant all privileges on light.* to 'light'@'%' identified by 'abc123';
grant all privileges on light.* to 'light'@'localhost' identified by 'abc123';
Build new image:
docker build -t light/mysql:5.7 .
Run a container:
docker run -tid --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD='abc123' light/mysql:5.7
Examine initialization:
docker exec -ti mysql /bin/bash
root@25e73d40c4ff:/# mysql -uroot -p
Enter password: (abc123)
Welcome to the MySQL monitor.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
All works well.
Former answer below.
How to start the mysql daemon in the container?
First of all, you are right on the "trying to run before mysql is started" part. But there is still something missing on "how MySQL starts exactly". If you execute docker history mysql:5.7 --no-trunc, you can see three important records in the output, like below:
/bin/sh -c #(nop) CMD ["mysqld"]
/bin/sh -c #(nop) ENTRYPOINT ["docker-entrypoint.sh"]
/bin/sh -c ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh
So far we know that when we start a mysql container with the command below, the exact initial command in the container is docker-entrypoint.sh mysqld.
docker run -tid -e MYSQL_ROOT_PASSWORD='abc123' mysql:5.7
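If you prefer not to scroll through docker history, the same information can be read with docker inspect (just an alternative check):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mysql:5.7
which prints something like [docker-entrypoint.sh] [mysqld].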
How to initialize mysql in the container?
Secondly, let's now have a look at the docker-entrypoint.sh script.
There is a line like the one below, roughly in the middle of the script, which builds the mysql client command used to run the initialization statements against the temporarily started server:
mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" )
After starting mysql, we can see lots of initialization statements in the docker-entrypoint.sh script, such as creating the root user with or without a password, granting privileges to root, creating the database declared by the user with the MYSQL_DATABASE env, and so on.
Now here are the solutions offered for you.
Self-defining the docker-entrypoint.sh script.
In this way, you can do whatever you want that is legal in MySQL.
Get the whole entrypoint.sh script onto your host.
Add your self-defined mysql statements to the script, putting your self-defined content at the bottom of the script. I assume you don't want to mix it with the original content.
Build a new mysql image of your own with the command and Dockerfile below.
command: docker build -t mysql:self .
Dockerfile:
FROM mysql:5.7
COPY /path/to/your-entrypoint.sh /
ENTRYPOINT ["/your-entrypoint.sh"]
CMD ["mysqld"]
If you don't want a new image, there is another way to change the ENTRYPOINT when you run a container. But still, you should make your own script available in the container.
docker run -tid -v /path/to/your-entrypoint.sh:/entrypoint.sh -p 3306:3306 -e MYSQL_ROOT_PASSWORD='abc123' mysql:5.7
Using the default ENVs provided by mysql:5.7
In this way there is a limitation, especially on the "specified permissions" you mentioned.
The ENVs you need are: MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD.
The command should look like this:
docker run -tid -e MYSQL_ROOT_PASSWORD='abc123' -e MYSQL_DATABASE='apps' -e MYSQL_USER='light' -e MYSQL_PASSWORD='abc123' mysql:5.7
This means that the database apps and user light will be created automatically, and the user light will be granted superuser permissions for the database apps.
More reference here on hub.docker.com.
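As a quick sanity check of this approach (the container name apps-db is just a placeholder, and the server needs a few seconds to finish initializing before the second command):
docker run -tid --name apps-db -e MYSQL_ROOT_PASSWORD='abc123' -e MYSQL_DATABASE='apps' -e MYSQL_USER='light' -e MYSQL_PASSWORD='abc123' mysql:5.7
docker exec -it apps-db mysql -ulight -pabc123 -e 'show databases;'
The apps database should appear in the output for the light user.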

ADD > LOAD .SQL using Docker Automated build and Compose

What is the optimal way to load an SQL dump when using docker-compose + Docker automated builds?
I have been ignoring docker-compose for a moment and trying to understand Docker and its automated builds first, but I have come to realize that I will probably need docker-compose if I want to accomplish my project goal, which is to use 1 command and from that have a fully working 3-site Docker cluster:
1xHAProxy
3xUbuntu/wp
3xMysqld
In my Dockerfile I can just include the db.sql from my GitHub repo like:
ADD db.sql /tmp/db.sql
I'm failing to find a best practice for how I should load my DB without writing any commands outside of the build.
I want to know your solution to this using a Dockerfile or Compose.
By just executing one of the commands below, a MySQL image (FROM mysql with ADD db.sql db.sql) should be built/run while loading db.sql into the MySQL db wp.
Dockerfile
$ docker run -d user/repo:tag
docker-compose.yml
$ docker-compose up
If I am totally on the wrong path here, please give me some references. I could also mention that I am planning to use CoreOS once I feel OK with Docker. So if best practice on a CoreOS > Docker setup is something else, let me know!
There are two options for initializing a SQL file during build or run time:
The first would be to just base your MySQL image on the official image and place your SQL file in /docker-entrypoint-initdb.d (using something like ADD my.sql /docker-entrypoint-initdb.d/ in the Dockerfile). The official image has a fairly complex entrypoint script (https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh) which starts MySQL, initializes the username and password, and runs scripts from the /docker-entrypoint-initdb.d folder.
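As a rough sketch of how that first option fits together with docker-compose (the service name, password, and tag below are assumptions, not taken from the question):
Dockerfile:
FROM mysql:5.7
ADD db.sql /docker-entrypoint-initdb.d/
docker-compose.yml:
services:
  db:
    build: .
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wp
Running docker-compose up then builds the image and loads db.sql into the wp database on the first start, with no extra commands outside the build.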
The other option would be to do something like the answer at https://stackoverflow.com/a/25920875/684908 and just add a command such as:
COPY dump.sql /tmp/
RUN /bin/bash -c "/usr/bin/mysqld_safe &" && \
sleep 5 && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql

Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c
'exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
-p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"
dbname'
| gzip > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
as the first line of the output file. Obviously this later causes problems for any other MySQL processes that are used to consume the data.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now I put together a post about exporting and importing from mysql docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
Run Following command on terminal
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace the
CONTAINER_id, username, yourpassword
with values specific to your configuration.
To get Container Id :
docker container ls
To eliminate this exact warning you can pass the password in the MYSQL_PWD environment variable, or use another connection method - see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c
'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
dbname'
| gzip > output.sql.gz
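The same MYSQL_PWD trick also works with docker exec against a running container, assuming the container was started with MYSQL_ROOT_PASSWORD set (containername is a placeholder):
docker exec containername sh -c 'export MYSQL_PWD="$MYSQL_ROOT_PASSWORD"; exec mysqldump -uroot dbname' | gzip > output.sql.gz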
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list the containers: docker container list
Add run permissions to the script:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the CONSTANTS above with your own data.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials are not in a file in the repo, but just in your .env file.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec $CONTAINER_NAME /usr/bin/mysqldump -u$DB_USER --password=$DB_PASSWORD $DB_NAME > $BACKUP_DIR/dump__$TIMESTAMP.sql
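A matching restore is essentially the same command in reverse, feeding the dump back in over stdin (your_dump.sql stands for whichever dump file from $BACKUP_DIR you want to load):
docker exec -i $CONTAINER_NAME mysql -u$DB_USER --password=$DB_PASSWORD $DB_NAME < $BACKUP_DIR/your_dump.sql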