Db migration in Helm mysql.initializationFiles causes the pod to crash - mysql

I'm building a Helm chart with a MySQL dependency. It's set up okay when empty, but I want to run an sh file on the pod that would copy some data from somewhere else. For this I want to use the initializationFiles construct from the documentation: https://github.com/helm/charts/tree/master/stable/mysql
The part of my values.yaml related to this dependency looks like this:
mysql:
  mysqlRootPassword: somePass
  mysqlPassword: somePass
  mysqlUser: user
  appPassword: somePass
  initializationFiles:
    db.sh: |-
      #!/bin/sh
      touch dump.sql
It looks like the code under the initializationFiles stanza is executed and causes the pod to fail. Since this stanza is run only once by design, the second attempt succeeds, but when the pod is running I don't see any new file when I do kubectl exec -it pod_name -- bash -c ls.
I have tried this:
...
initializationFiles:
  - db.sh
and put the db.sh file in the same folder as values.yaml, but this still didn't work.
What is the correct way to execute an sh file when the dependency MySQL pod is being set up?
Kubernetes version 1.17, helm 3.5.0
I see that the file is actually copied to /docker-entrypoint-initdb.d, but it's not executed; no new file is created.
root@mypod:/docker-entrypoint-initdb.d# ls -la
lrwxrwxrwx 1 root root 12 Mar 6 00:14 db.sh -> ..data/db.sh
root@mypod:/docker-entrypoint-initdb.d# cat db.sh
#!/bin/sh
touch dump.sql
I've tried to run the file manually and got a permission denied error:
root@mypod:/# ./docker-entrypoint-initdb.d/db.sh
bash: ./docker-entrypoint-initdb.d/db.sh: Permission denied
If I change the command to echo test, then the sh file is executed; I can see this in the pod's logs. It looks like writing to the filesystem is prohibited: touch /dump.sql and touch /home/dump.sql don't work either.
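For anyone debugging the same thing, a quick way to check which user the init scripts run as and where that user can actually write (pod_name is a placeholder for the real pod name):
kubectl exec -it pod_name -- bash -c 'id mysql && ls -ld /var/lib/mysql /tmp /home'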

MUAHAHA I did it.
The scripts inside ./docker-entrypoint-initdb.d/ are executed by the mysql user.
After poking around inside the pod I've found a directory for which the mysql user has write permissions. So I just write the file there and delete it afterwards to initialize the databases.
Now the whole stanza looks like this:
mysql:
  mysqlRootPassword: myPass
  mysqlPassword: myPass
  mysqlUser: user
  appPassword: myPass
  initializationFiles:
    db.sh: |-
      #!/bin/sh
      mysqldump -h remoteDbUrl -u remoteUser -pRemotePass --databases db1 db2 > /var/lib/mysql/dump.sql
      mysql -uroot -pmyPass < /var/lib/mysql/dump.sql
      rm /var/lib/mysql/dump.sql
      echo "GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'myPass';" | mysql -uroot -pmyPass

Related

Docker compose: Enter Password access denied error or invalid file error [duplicate]

I am trying to create a container with a MySQL database and add a schema to this database.
My current Dockerfile is:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
COPY files/epcis_schema.sql /data/epcis_schema.sql
# Change the working directory
WORKDIR data
CMD mysql -u $MYSQL_USER -p $MYSQL_PASSWORD $MYSQL_DATABASE < epcis_schema.sql
In order to create the container I am following the documentation provided on Docker and executing this command:
docker run --name ${CONTAINER_NAME} -e MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD} -e MYSQL_USER=${DB_USER} -e MYSQL_PASSWORD=${DB_USER_PASSWORD} -e MYSQL_DATABASE=${DB_NAME} -d mvpgomes/epcisdb
But when I execute this command, the container is not created, and in the container status it is possible to see that the CMD was not executed successfully; in fact, only the mysql command is executed.
Anyway, is there a way to initialize the database with the schema or do I need to perform these operations manually?
I had this same issue where I wanted to initialize my MySQL Docker instance's schema, but I ran into difficulty getting this working after doing some Googling and following others' examples. Here's how I solved it.
1) Dump your MySQL schema to a file.
mysqldump -h <your_mysql_host> -u <user_name> -p --no-data <schema_name> > schema.sql
2) Use the ADD command to add your schema file to the /docker-entrypoint-initdb.d directory in the Docker container. The docker-entrypoint.sh file will run any files in this directory ending with ".sql" against the MySQL database.
Dockerfile:
FROM mysql:5.7.15
MAINTAINER me
ENV MYSQL_DATABASE=<schema_name> \
    MYSQL_ROOT_PASSWORD=<password>
ADD schema.sql /docker-entrypoint-initdb.d
EXPOSE 3306
3) Start up the Docker MySQL instance.
docker-compose build
docker-compose up
Thanks to Setting up MySQL and importing dump within Dockerfile for clueing me in on the docker-entrypoint.sh and the fact that it runs both SQL and shell scripts!
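Since the entrypoint also handles shell scripts, here is a rough sketch of a .sh init file that could be ADDed next to schema.sql (the file name 99-extra-user.sh and the user/grant are made-up examples; this assumes the temporary server started during initialization accepts local socket connections as root with MYSQL_ROOT_PASSWORD):
#!/bin/bash
# hypothetical 99-extra-user.sh, placed in /docker-entrypoint-initdb.d/
# the entrypoint runs it against the temporary server it starts during initialization
mysql -uroot -p"$MYSQL_ROOT_PASSWORD" <<'SQL'
CREATE USER IF NOT EXISTS 'report'@'%' IDENTIFIED BY 'report_pw';
GRANT SELECT ON *.* TO 'report'@'%';
SQL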
I am sorry for this super long answer, but you have a little way to go to get where you want. I will say that normally you wouldn't put the storage for the database in the same container as the database itself; you would either mount a host volume so that the data persists on the Docker host, or perhaps use a container to hold the data (/var/lib/mysql). Also, I am new to mysql, so this might not be super efficient. That said...
I think there may be a few issues here. The Dockerfile is used to create an image. You need to execute the build step. At a minimum, from the directory that contains the Dockerfile you would do something like:
docker build .
The Dockerfile describes the image to create. I don't know much about mysql (I am a postgres fanboy), but I did a search around the interwebs for 'how do i initialize a mysql docker container'. First I created a new directory to work in, I called it mdir, then I created a files directory in which I deposited an epcis_schema.sql file that creates a database and a single table:
create database test;
use test;
CREATE TABLE testtab
(
  id INTEGER AUTO_INCREMENT,
  name TEXT,
  PRIMARY KEY (id)
) COMMENT='this is my test table';
Then I created a script called init_db in the files directory:
#!/bin/bash
# Initialize MySQL database.
# ADD this file into the container via Dockerfile.
# Assuming you specify a VOLUME ["/var/lib/mysql"] or `-v /var/lib/mysql` on the `docker run` command…
# Once built, do e.g. `docker run your_image /path/to/docker-mysql-initialize.sh`
# Again, make sure MySQL is persisting data outside the container for this to have any effect.
set -e
set -x
mysql_install_db
# Start the MySQL daemon in the background.
/usr/sbin/mysqld &
mysql_pid=$!
until mysqladmin ping >/dev/null 2>&1; do
  echo -n "."; sleep 0.2
done
# Permit root login without password from outside container.
mysql -e "GRANT ALL ON *.* TO root#'%' IDENTIFIED BY '' WITH GRANT OPTION"
# create the default database from the ADDed file.
mysql < /tmp/epcis_schema.sql
# Tell the MySQL daemon to shutdown.
mysqladmin shutdown
# Wait for the MySQL daemon to exit.
wait $mysql_pid
# create a tar file with the database as it currently exists
tar czvf default_mysql.tar.gz /var/lib/mysql
# the tarfile contains the initialized state of the database.
# when the container is started, if the database is empty (/var/lib/mysql)
# then it is unpacked from default_mysql.tar.gz from
# the ENTRYPOINT /tmp/run_db script
(most of this script was lifted from here: https://gist.github.com/pda/9697520)
Here is the files/run_db script I created:
# start db
set -e
set -x
# first, if the /var/lib/mysql directory is empty, unpack it from our predefined db
[ "$(ls -A /var/lib/mysql)" ] && echo "Running with existing database in /var/lib/mysql" || ( echo 'Populate initial db'; tar xpzvf default_mysql.tar.gz )
/usr/sbin/mysqld
Finally, the Dockerfile to bind them all:
FROM mysql
MAINTAINER (me) <email>
# Add the init/run scripts and the database schema to /tmp
ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
# init_db will create the default
# database from epcis_schema.sql, then
# stop mysqld, and finally copy the /var/lib/mysql directory
# to default_mysql_db.tar.gz
RUN /tmp/init_db
# run_db starts mysqld, but first it checks
# to see if the /var/lib/mysql directory is empty, if
# it is it is seeded with default_mysql_db.tar.gz before
# the mysql is fired up
ENTRYPOINT "/tmp/run_db"
So, I cd'ed to my mdir directory (which has the Dockerfile along with the files directory). I then run the command:
docker build --no-cache .
You should see output like this:
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM mysql
---> 461d07d927e6
Step 1 : MAINTAINER (me) <email>
---> Running in 963e8de55299
---> 2fd67c825c34
Removing intermediate container 963e8de55299
Step 2 : ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
---> 81871189374b
Removing intermediate container 3221afd8695a
Step 3 : RUN /tmp/init_db
---> Running in 8dbdf74b2a79
+ mysql_install_db
2015-03-19 16:40:39 12 [Note] InnoDB: Using atomics to ref count buffer pool pages
...
/var/lib/mysql/ib_logfile0
---> 885ec2f1a7d5
Removing intermediate container 8dbdf74b2a79
Step 4 : ENTRYPOINT "/tmp/run_db"
---> Running in 717ed52ba665
---> 7f6d5215fe8d
Removing intermediate container 717ed52ba665
Successfully built 7f6d5215fe8d
You now have an image '7f6d5215fe8d'. I could run this image:
docker run -d 7f6d5215fe8d
and the container starts; I see an instance string (the container ID):
4b377ac7397ff5880bc9218abe6d7eadd49505d50efb5063d6fab796ee157bd3
I could then 'stop' it, and restart it.
docker stop 4b377
docker start 4b377
If you look at the logs, the first line will contain:
docker logs 4b377
Populate initial db
var/lib/mysql/
...
Then, at the end of the logs:
Running with existing database in /var/lib/mysql
These are the messages from the /tmp/run_db script: the first one indicates that the database was unpacked from the saved (initial) version; the second one indicates that the database was already there, so the existing copy was used.
Here is a ls -lR of the directory structure I describe above. Note that the init_db and run_db are scripts with the execute bit set:
gregs-air:~ gfausak$ ls -Rl mdir
total 8
-rw-r--r-- 1 gfausak wheel 534 Mar 19 11:13 Dockerfile
drwxr-xr-x 5 gfausak staff 170 Mar 19 11:24 files
mdir/files:
total 24
-rw-r--r-- 1 gfausak staff 126 Mar 19 11:14 epcis_schema.sql
-rwxr-xr-x 1 gfausak staff 1226 Mar 19 11:16 init_db
-rwxr-xr-x 1 gfausak staff 284 Mar 19 11:23 run_db
Another way, based on a merge of several responses here:
docker-compose file:
version: "3"
services:
  db:
    container_name: db
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=mysql
      - MYSQL_DATABASE=db
    volumes:
      - /home/user/db/mysql/data:/var/lib/mysql
      - /home/user/db/mysql/init:/docker-entrypoint-initdb.d/:ro
where /home/user.. is a shared folder on the host
And in the /home/user/db/mysql/init folder, just drop one sql file, with any name, for example init.sql, containing:
CREATE DATABASE mydb;
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' IDENTIFIED BY 'mysql';
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost' IDENTIFIED BY 'mysql';
USE mydb;
CREATE TABLE CONTACTS (
[ ... ]
);
INSERT INTO CONTACTS VALUES ...
[ ... ]
According to the official mysql documentation, you can put more than one sql file in docker-entrypoint-initdb.d; they are executed in alphabetical order.
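If the order matters to you, a simple trick is to prefix the file names with numbers, since the entrypoint just runs them in sorted order (the file names below are examples):
cp schema.sql /home/user/db/mysql/init/01-schema.sql
cp grants.sql /home/user/db/mysql/init/02-grants.sql
cp seed.sql   /home/user/db/mysql/init/03-seed.sql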
The other simple way: use docker-compose with the following lines:
mysql:
  image: mysql:5.7
  volumes:
    - ./database:/tmp/database
  command: mysqld --init-file="/tmp/database/install_db.sql"
Put your database schema into ./database/install_db.sql. Every time you start the container, install_db.sql will be executed.
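A minimal sketch of what ./database/install_db.sql could contain (database and table names are examples). Since --init-file runs on every server start rather than only the first one, IF NOT EXISTS guards keep restarts safe, and each statement is kept on a single line because older MySQL versions are picky about multi-line statements in --init-file:
mkdir -p ./database
cat > ./database/install_db.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS mydb;
CREATE TABLE IF NOT EXISTS mydb.contacts (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
SQL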
I've tried Greg's answer with zero success; I must have done something wrong, since my database had no data after all the steps (I was using MariaDB's latest image, just in case that matters).
Then I decided to read the entrypoint for the official MariaDB image, and used that to generate a simple docker-compose file:
database:
  image: mariadb
  ports:
    - 3306:3306
  expose:
    - 3306
  volumes:
    - ./docker/mariadb/data:/var/lib/mysql:rw
    - ./database/schema.sql:/docker-entrypoint-initdb.d/schema.sql:ro
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
Now I'm able to persist my data AND generate a database with my own schema!
Since Aug. 4, 2015, if you are using the official mysql Docker image, you can just ADD/COPY a file into the /docker-entrypoint-initdb.d/ directory and it will be run when the container is initialized. See the commit on GitHub: https://github.com/docker-library/mysql/commit/14f165596ea8808dfeb2131f092aabe61c967225 if you want to implement it in other container images.
The easiest solution is to use tutum/mysql
Step1
docker pull tutum/mysql:5.5
Step2
docker run -d -p 3306:3306 -v /tmp:/tmp -e STARTUP_SQL="/tmp/to_be_imported.mysql" tutum/mysql:5.5
Step3
Get the CONTAINER_ID from the step above and then run docker logs to see the generated password information.
docker logs <CONTAINER_ID>
Since I struggled with this problem recently, I'm adding a docker-compose file that really helped me:
version: '3.5'
services:
  db:
    image: mysql:5.7
    container_name: db-container
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: test
      MYSQL_USER: test-user
      MYSQL_PASSWORD: password
    ports:
      - '3306:3306'
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=password --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
You just need to create a scripts folder in the same location as the docker-compose.yml file above.
The scripts folder will have 2 files:
schema.sql: DDL scripts (create table...etc)
data.sql: Insert statements that you want to be executed right after schema creation.
After this, you can run the command below to erase any previous database info (for a fresh start):
docker-compose rm -v -f db && docker-compose up
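If something else (a migration job, a test runner) has to wait for the database, the healthcheck status can be polled with docker inspect; a rough sketch, using the container name from the compose file above:
until [ "$(docker inspect -f '{{.State.Health.Status}}' db-container)" = "healthy" ]; do
  echo "waiting for db-container..."; sleep 2
done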
For those who, like me, don't want to create an entrypoint script, you can actually start mysqld at build time and then execute the mysql commands in your Dockerfile like so:
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
    mysql -e "CREATE DATABASE somedb;" && \
    mysql -e "CREATE USER 'someuser'@'localhost' IDENTIFIED BY 'somepassword';" && \
    mysql -e "GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'@'localhost';"
or source a prepopulated sql dump:
COPY dump.sql /SQL
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
mysql -e "SOURCE /SQL;"
RUN mysqladmin shutdown
The key here is to send mysqld_safe to background with the single & sign.
After struggling a little bit with this, take a look at the compose file below, which uses a named volume (db-data).
It's important to declare the extra section at the end, where the volume is marked as external.
Everything worked great this way!
version: "3"
services:
database:
image: mysql:5.7
container_name: mysql
ports:
- "3306:3306"
volumes:
- db-data:/docker-entrypoint-initdb.d
environment:
- MYSQL_DATABASE=sample
- MYSQL_ROOT_PASSWORD=root
volumes:
db-data:
external: true
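Note that with external: true docker-compose expects the volume to exist already; something along these lines could create it and seed it with the init scripts (the paths and file names are examples):
docker volume create db-data
docker run --rm -v db-data:/seed -v "$PWD/scripts":/src alpine cp /src/schema.sql /seed/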
Below is the Dockerfile I used successfully to install XAMPP and create a MariaDB database with a schema, pre-populated with the data used on the local server (users, pics, orders, etc.).
FROM ubuntu:14.04
COPY Ecommerce.sql /root
RUN apt-get update \
    && apt-get install wget -yq \
    && apt-get install nano \
    && wget https://www.apachefriends.org/xampp-files/7.1.11/xampp-linux-x64-7.1.11-0-installer.run \
    && mv xampp-linux-x64-7.1.11-0-installer.run /opt/ \
    && cd /opt/ \
    && chmod +x xampp-linux-x64-7.1.11-0-installer.run \
    && printf 'y\n\y\n\r\n\y\n\r\n' | ./xampp-linux-x64-7.1.11-0-installer.run \
    && cd /opt/lampp/bin \
    && /opt/lampp/lampp start \
    && sleep 5s \
    && ./mysql -uroot -e "CREATE DATABASE Ecommerce" \
    && ./mysql -uroot -D Ecommerce < /root/Ecommerce.sql \
    && cd / \
    && /opt/lampp/lampp reload \
    && mkdir opt/lampp/htdocs/Ecommerce
COPY /Ecommerce /opt/lampp/htdocs/Ecommerce
EXPOSE 80

Docker container: /bin/sh: cat: No such file or directory

I'm using the mysql/mysql-server image to create a MySQL server in Docker. Since I want to set up my database (add users, create tables) automatically, I've created a SQL file that does that for me. In order to automatically run that script, I extended the image with this Dockerfile:
FROM mysql/mysql-server:latest
RUN mkdir /scripts
WORKDIR /scripts
COPY ./db_setup.sql .
RUN mysql -u root -p password < cat db_setup.sql
but for some reason, this happens:
/bin/sh: cat: No such file or directory
ERROR: Service 'db' failed to build : The command '/bin/sh -c mysql -u root -p password < cat db_setup.sql' returned a non-zero code: 1
How do I fix this?
You can just remove the cat command from your RUN command:
RUN mysql -u root -p password < db_setup.sql
No such file or directory is returned since cat cannot be found in the current directory set by WORKDIR. You can just redirect the stdin of mysql to come from the db_setup.sql file. Edited to clarify: the < shell redirection expects a file name to use for input.
EDIT 2: Keep in mind your example is a RUN command that is attempting to run mysql and create a layer at docker image build time. You may want to have this run during the mysql entrypoint script at runtime instead (e.g. scripts are run from the docker-entrypoint-initdb.d/ directory by the docker-entrypoint.sh script of the official mysql image) or use other features that are documented for the official image.
RUN is a build time command. MySQL isn't running at this point.
If you are using a standard image, there is a location for database initialization:
FROM mysql:8.0
COPY db_setup.sql /docker-entrypoint-initdb.d
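To try it out, a build and run along these lines should work (the image name, container name, and password are just examples):
docker build -t mydb-image .
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret mydb-image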
The cat command is not present in the mysql/mysql-server:latest image.
Moreover, you only need to provide the filename after the redirection.
RUN mysql -u root -p password < db_setup.sql

Best way to initialize DB script while building docker image

I am building a Docker container using the image "mysql". I have a setup script to run the first time the container is built. This setup script will create some databases and database users with specified permissions.
Following are minimized versions of my files:
pcdb1.entrypoint.sh
#!/bin/sh
mysql -uroot -p'pass123' -e 'show databases MYENTRYDB;'
Dockerfile
FROM mysql:5.7
COPY ./pcdb1.entrypoint.sh /
ENTRYPOINT ["/pcdb1.entrypoint.sh"];
I am getting the following error in the log
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
pcdb1 exited with code 1
What I understand is that my script is trying to run before mysql is started. But I am not sure how to do it properly. Can I get a suggestion?
EDIT: 20181007
I have found the way you mentioned in the question -- initializing the DB while building the Docker image. But with a little difference: the way I found initializes the DB when a container is run from the image, although the initialization script is specified while building the image.
According to the official information about mysql:5.7, there is a paragraph named "Initializing a fresh instance". We can just add an initialization script to the directory /docker-entrypoint-initdb.d; the default ENTRYPOINT and CMD of the mysql:5.7 image will execute it after database start-up.
For example:
FROM mysql:5.7
COPY init-database.sql /docker-entrypoint-initdb.d/
content of init-database.sql:
create database light;
create user 'light'@'%' identified by 'abc123';
grant all privileges on light.* to 'light'@'%' identified by 'abc123';
grant all privileges on light.* to 'light'@'localhost' identified by 'abc123';
Build new image:
docker build -t light/mysql:5.7 .
Run a container:
docker run -tid --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD='abc123' light/mysql:5.7
Examine initialization:
docker exec -ti mysql /bin/bash
root#25e73d40c4ff:/# mysql -uroot -p
Enter password: (abc123)
Welcome to the MySQL monitor.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
All work well.
Former answer below.
How to start the mysql daemon in the container?
First of all, you are right about the "trying to run before mysql is started" part. But there is still a missing piece: how exactly MySQL starts. If you execute docker history mysql:5.7 --no-trunc, you will see three important records in the output, like below:
/bin/sh -c #(nop) CMD ["mysqld"]
/bin/sh -c #(nop) ENTRYPOINT ["docker-entrypoint.sh"]
/bin/sh -c ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh
So far we know that when we start a mysql container with the command below, the exact initial command in the container is docker-entrypoint.sh mysqld.
docker run -tid -e MYSQL_ROOT_PASSWORD='abc123' mysql:5.7
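You can confirm the same thing without scrolling through docker history; docker inspect prints the configured ENTRYPOINT and CMD directly, and should show something like [docker-entrypoint.sh] [mysqld]:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mysql:5.7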
How to initialize mysql in the container?
Secondly, let's take a look at the docker-entrypoint.sh script.
There is a specific line like the one below, roughly in the middle of this script, which is meant to start the mysql daemon.
mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" )
After starting mysql, we can see lots of initialization statements in the docker-entrypoint.sh script, such as creating the root user with or without a password, granting privileges to root, creating the database declared by the user with the MYSQL_DATABASE env, and so on.
Now, here are the solutions offered for you.
Define your own docker-entrypoint.sh script.
This way, you can do whatever you want that is legal in MySQL.
Get the whole entrypoint.sh script onto your host.
Add your own MySQL statements to the script; put your custom content at the bottom of the script, since I assume you don't want to mix it with the original content.
Build a new MySQL image of your own with the command and Dockerfile below.
command: docker build -t mysql:self .
Dockerfile:
FROM mysql:5.7
COPY /path/to/your-entrypoint.sh /
ENTRYPOINT ["/your-entrypoint.sh"]
CMD ["mysqld"]
If you don't want a new image, there is another way: change the ENTRYPOINT when you run a container. But you still need to make your own script available inside the container.
docker run -tid -v /path/to/your-entrypoint.sh:/entrypoint.sh -p 3306:3306 -e MYSQL_ROOT_PASSWORD='abc123' mysql:5.7
Use the default ENVs provided by mysql:5.7.
This way has a limitation, especially regarding the "specified permissions" you mentioned.
The ENVs you need are: MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD.
The command should look like this:
docker run -tid -e MYSQL_ROOT_PASSWORD='abc123' -e MYSQL_DATABASE='apps' -e MYSQL_USER='light' -e MYSQL_PASSWORD='abc123' mysql:5.7
This means that the database apps and user light will be created automatically, and the user light will be granted superuser permissions for the database apps.
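A quick way to verify what the entrypoint created from those ENVs (the container id/name is a placeholder):
docker exec -it <container_id> mysql -ulight -pabc123 -e 'SHOW DATABASES; SHOW GRANTS;'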
More reference here on hub.docker.com.

How do I restore a dump file from mysqldump using kubernetes?

I know how to restore a dump file from mysqldump. Now, I am attempting to do that using kubernetes and a docker container. The database files are on a persistent (NFS) mount. The container cannot be accessed from outside the cluster, as there is no need for anything external to touch it.
I tried:
kubectl run -i -t dbtest --image=mariadb --restart=Never --rm=true --command -- mysql -uroot -ps3kr37 < dump.sql
and
kubectl exec mariadb-deployment-3614069618-mn524 -i -t -- mysql -u root -p=s3kr37 < dump.sql
But neither command worked -- errors about TTY, sockets, and other things hinting that I am missing something vital here.
What am I not understanding here?
I could just stop the deployment, scp the database files, and restart the container and hope for the best. However, what can go right?
The question Install an sql dump file to a docker container with mariaDB sure looks like a duplicate but is not: first, I am on Linux, not Windows, and more importantly the answers are all about initialising with a dump. I want to be able to trash the data and revert to the dump data. This is a test system that will eventually be the "live" one, so I need to restore from many potential dumps.
As described here, you can use the following command to restore a DB on a kubernetes pod from a dump on your machine:
$ kubectl exec -it {{podName}} -n {{namespace}} -- mysql -u {{dbUser}} -p{{password}} {{DatabaseName}} < <scriptName>.sql
Example :
$ kubectl exec -it mysql-58 -n sql -- mysql -u root -proot USERS < dump_all.sql
What I did was this:
Create an NFS mount with two subdirectories: mysql and initd.
In initd, I added several .sql files, including the dump.
Mount initd as /docker-entrypoint-initdb.d in the deployment. This causes all the files to be read at initialisation time, provided that it is the first time we run.
The mysql directory is mounted as /var/lib/mysql and contains all the mariaDB files.
If I need to revert, I trash all the contents of the mysql directory and re-create the deployment, roughly as sketched below.
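Roughly, the revert looks like this (the deployment name and NFS path are examples):
kubectl scale deployment mariadb-deployment --replicas=0
rm -rf /nfs/mysql/*    # wipe the data directory on the NFS share
kubectl scale deployment mariadb-deployment --replicas=1
# on the next start the entrypoint sees an empty /var/lib/mysql and
# re-runs everything mounted at /docker-entrypoint-initdb.d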
This should work:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot drupal < you-dump.sql
The kubeconfig flag is optional; DigitalOcean, for example, provides one so you can run your commands from your local machine.
To see if everything looks good:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysql -h mysqlservice -proot
After which you'll have a terminal inside mysql.

Exporting data from MySQL docker container

I use the official MySQL docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c
'exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
-p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"
dbname'
| gz > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
as the first line of the output file. Obviously this later causes problems for any other MySQL processes that consume the data.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now I put together a post about exporting and importing from mysql docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
Run Following command on terminal
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username, and yourpassword with the values specific to your configuration.
To get the container id:
docker container ls
To eliminate this exact warning, you can pass the password in the MYSQL_PWD environment variable or use another connection method - see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c
'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump
-h"$MYSQL_PORT_3306_TCP_ADDR"
-P"$MYSQL_PORT_3306_TCP_PORT" -uroot
dbname'
| gz > output.sql.gz
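The same MYSQL_PWD trick also works with docker exec against an already running container, without --link (containername and dbname are the ones from the question; the password value is a placeholder):
docker exec -e MYSQL_PWD="yourRootPassword" containername \
  sh -c 'exec mysqldump -uroot dbname' | gzip > output.sql.gz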
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID list them: docker container list
Add execute permissions to the script:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the CONSTANTS above with your own data.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash, you can also load your .env file, so your credentials are not in a file in the repo, but just in your .env file.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec $CONTAINER_NAME /usr/bin/mysqldump -u$DB_USER --password=$DB_PASSWORD $DB_NAME > $BACKUP_DIR/dump__$TIMESTAMP.sql
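For completeness, a matching restore script could reuse the same .env variables (the dump file name is an example):
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
CONTAINER_NAME="cp-db"
BACKUP_DIR="dockerfiles/db"
# restore: -i (no -t) so the dump is fed through stdin
docker exec -i $CONTAINER_NAME /usr/bin/mysql -u$DB_USER --password=$DB_PASSWORD $DB_NAME < $BACKUP_DIR/dump__2021-06-01__10.00.sql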