seeding mysql data in a docker build - mysql

I'm attempting to build a docker image that will include MySQL and some seed data, and I'm trying to figure out how to insert the data into the database during the docker build phase.
It seems I need to start the MySQL engine, invoke a command to run some SQL statements, and then shut down the MySQL engine. Any good ideas on how to best do that?

When a container is first started from the image, the folder /docker-entrypoint-initdb.d is checked for files to seed the DB with. Below is the corresponding paragraph from the section Initializing a fresh instance on the official MySQL Docker Hub page.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.

This blog post might help you.
Essentially, the steps to be followed are:
1. create a file (say seed_data.sh) and put it in a resources/ directory next to your Dockerfile
2. in the Dockerfile add the following lines
ADD resources/seed_data.sh /tmp/
RUN chmod +x /tmp/seed_data.sh
RUN /tmp/seed_data.sh
RUN rm /tmp/seed_data.sh
The file seed_data.sh contains the code for running the mysql server, logging into it and then inserting the data.
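For reference, a minimal sketch of what seed_data.sh might look like. The command names, flags and the /tmp/seed.sql path are assumptions for illustration, not taken from the blog post:

```shell
#!/bin/bash
# Hypothetical seed_data.sh sketch: start mysqld, wait, load data, shut down.
set -e

# Start the server in the background, without TCP, during the build.
mysqld_safe --skip-networking &

# Wait until the server answers on its socket.
until mysqladmin ping >/dev/null 2>&1; do
  sleep 1
done

# Run the seed statements (placeholder dump file).
mysql -u root < /tmp/seed.sql

# Shut down cleanly so the data files are committed into the image layer.
mysqladmin -u root shutdown
```

The shutdown at the end matters: each RUN step is snapshotted into a layer, so the server must flush everything to disk before the step finishes.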

Related

How to make a stateless mysql docker container

I want to make a mysql docker image that imports some initial data in the build process.
Afterwards, when used in a container, the container stays stateless, meaning that data added while the container is running does not survive destroying/starting the container again, but the initial data is still there.
Is this possible? How would I set up such an image and container?
I suggest creating the MySQL tables as needed in a SQL script, or directly in a local MySQL instance and exporting them to a file.
With this file in hand, create a Dockerfile which builds on the MySQL container. Add to this another entrypoint script which injects the SQL script into the database.
You don't write anything about mounting volumes. You may want a data volume for the database or configure MySQL for keeping everything in memory.
For added "statelessness" you may want to DROP all tables in your SQL script too.
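As a sketch of the in-memory option: docker run's --tmpfs flag mounts a RAM-backed filesystem over the data directory, so every fresh start re-runs the init scripts (restoring the seed data) and discards runtime writes. The image name and credentials below are placeholders:

```shell
# Mount a tmpfs over the MySQL data directory: writes go to RAM only,
# so destroying the container discards them, and the next start sees an
# empty datadir and re-runs /docker-entrypoint-initdb.d, restoring the seed.
docker run -d --tmpfs /var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=mydb my-seeded-image
```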
I think what you need is a multi-stage build:
FROM mysql:5.7 as builder
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=somepassword
ADD initialize.sql /docker-entrypoint-initdb.d/
# The entrypoint script normally initializes the DB and then execs the mysql
# daemon; patching out the final exec makes it initialize the datadir and exit
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]
FROM mysql:5.7
COPY --from=builder /initialized-db /var/lib/mysql
You can put your initialization scripts in initialize.sql (or choose a different way to initialize your database).
The resulting image is a database that is already initialised. You can use it and throw it away as you like.
You can also use this process to create different images (tag them differently) for different use cases.
Hope this answers your question.
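Assuming the Dockerfile above, building and using the pre-initialized image could look like this (the image and container names are placeholders):

```shell
# Build the two-stage image: stage one runs the init scripts into
# /initialized-db, stage two copies only that data directory into a
# clean mysql:5.7 image.
docker build -t my-seeded-mysql .

# Containers started from it already contain the seeded database; the
# entrypoint skips initialization because the datadir is non-empty.
docker run -d --name db my-seeded-mysql
```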

Creating, populating, and using Docker Volumes

I've been plugging around with Docker for the last few days and am hoping to move my Python-MySQL webapp over to Docker here soon.
The corollary is that I need to use Docker volumes and have been stumped lately. I can create a volume directly by
$ docker volume create my-vol
Or indirectly by referencing a nonexistent volume in a docker run call, but I cannot figure out how to populate these volumes with my .sql database file, without copying the file over via a COPY call in the Dockerfile.
I've tried creating the volume directly inside the directory containing the .sql file (the first method mentioned above), and I've tried mounting the directory containing the .sql file in my 'docker run' call. The latter does move the .sql file into the container (I've seen it by navigating a bash shell inside the container), but when I run a mariadb client container connecting to the database-containing mariadb container (as suggested in the mariadb docker readme file), it only has the standard databases (information_schema, mysql, performance_schema).
How can I create a volume containing my pre-existing .sql database?
When working with mariadb in a docker container, the image supports running .sql files as a part of the first startup of the container. This allows you to push data into the database before it is made accessible.
From the mariadb documentation:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mariadb services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
This means that if you want to inject data into the container when it starts up for the first time, you can, in your Dockerfile, COPY the .sql file into the container at the path /docker-entrypoint-initdb.d/myscript.sql - and it will be invoked on the database that you specified in the environment variable MYSQL_DATABASE.
Like this:
FROM mariadb
COPY ./myscript.sql /docker-entrypoint-initdb.d/myscript.sql
Then:
docker run -e MYSQL_DATABASE=mydb mariadb
There is then the question of how you want to manage the database storage. You basically have two options here:
Create a volume binding to the host, where mariadb stores the database. This will enable you to access the database storage files easily from the host machine.
An example with docker run:
docker run -v /my/own/datadir:/var/lib/mysql mariadb
Create a docker volume and bind it to the storage location in the container. This will be a volume that is managed by docker. This volume will persist the data between restarts of the container.
docker volume create my_mariadb_volume
docker run -v my_mariadb_volume:/var/lib/mysql mariadb
This is also covered in the docs for the mariadb docker image. I can recommend reading them from top to bottom if you are going to use this image.
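Putting the pieces together, a quick way to check that the seeding worked (image name, password and database are placeholders, and the init scripts need a few seconds on first start):

```shell
# Start a container from the image built with the Dockerfile above,
# persisting the data in a named volume.
docker run -d --name mydb -v my_mariadb_volume:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=mydb my-mariadb-image

# Once first-start initialization finishes, the tables created by
# myscript.sql should show up here.
docker exec mydb mysql -uroot -psecret -e 'SHOW TABLES' mydb
```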

Huge static (mysql) database in docker

I am developing an application and try to implement the microservice architecture. For information about locations (cities, zip codes, etc.) I downloaded a database dump for mysql from opengeodb.org.
Now I want to provide the database as a docker container.
I set up a mysql image with following Dockerfile as mentioned in the docs for the mysql image:
FROM mysql
ENV MYSQL_ROOT_PASSWORD=mypassword
ENV MYSQL_DATABASE geodb
WORKDIR /docker-entrypoint-initdb.d
ADD ./sql .
EXPOSE 3306
The "sql" folder contains SQL scripts with the raw data as insert statements, so it creates the whole database. The problem is that the database is really huge and it takes a really long time to set it up.
So I thought, maybe there is a possibility to save the created database inside an image, because it is an static database for read-only operations only.
I am fairly new to docker and not quite sure how to achieve this.
I'm using docker on a Windows 10 machine.
EDIT:
I achieved my goal by doing the following:
I added the sql dump file as described above.
I ran the container and built the whole database with a local directory (the 'data' folder) mounted to /var/lib/mysql.
Then stopped the container and edited the Dockerfile:
FROM mysql
ENV MYSQL_ROOT_PASSWORD=mypassword
ENV MYSQL_DATABASE geodb
WORKDIR /var/lib/mysql
COPY ./data .
EXPOSE 3306
So the generated database is now being copied from the local system into the container.
You could create a volume with your container to persist the database on your local machine. When you first create the container, the SQL in /docker-entrypoint-initdb.d will be executed, and the changes will be stored to the volume. Next time you start the container, MySQL will see that the schema already exists and it won't run the scripts again.
https://docs.docker.com/storage/volumes/
In principle you could achieve it like this:
start the container
load the database
perform a docker commit to build an image of the current state of the container.
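The commit-based route, sketched with placeholder names (container, password and dump file are examples):

```shell
# 1. Start a stock mysql container.
docker run -d --name seed -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=geodb mysql

# 2. Load the dump once the server is up (dump path is a placeholder).
docker exec -i seed mysql -uroot -psecret geodb < geodb-dump.sql

# 3. Freeze the container's current filesystem into a new image.
docker commit seed my-geodb-image
```

One caveat: the official mysql image declares /var/lib/mysql as a VOLUME, and docker commit does not capture volume contents, so the committed image would not actually contain the database unless you use a base image without that VOLUME or point mysqld at a different datadir (which is what the multi-stage --datadir approach does).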
The other option would be to load in the database during the image build time, but for this you would have to start mysql similarly to how it's done in the entrypoint script.
start mysql in background
wait for it to initialize
load in the data using mysql < sql file

Mnesia : ejabberd : Export all tables as SQL queries to a file

I need to migrate mnesia to mysql from ejabberd.
I have tried several ways from the UI: the UI shows a node, and on selecting the node I get many options, one of which is backup. On that page there is an option "Export all tables as SQL queries to a file" with host (0.0.0.0). I tried to take an SQL backup that way, but the file is empty.
I also tried these commands:
ejabberdctl export2odbc localhost /var/lib/ejabberd/new_file.sql - this also generates a blank file, with no error.
ejabberdctl export2sql localhost /tmp/sql /var/lib/ejabberd/new.sql - this command does not execute, as export2sql does not exist.
Is there any other way to take an SQL dump from Mnesia?
Version : ejabberd 16.01 mysql 5.6.xx
The SQL export command was added in 16.04 (named export_sql) and later renamed to export2sql in 16.06, so there's no way to take a dump directly from 16.01. You have two alternatives:
If you can upgrade ejabberd, then it's straightforward: upgrade the server and take a dump of the SQL.
Take a backup of the relevant folders, like the database/spool directory, config directory, etc.
Upgrade the server to the latest version, or at least version 17.07 (the reason being that since version 17.06 most of the tables can be exported to an SQL file, while versions 17.03-17.06 suffer from a bug).
Configure ejabberd to use mysql as a backend database.
Make sure the following modules have the db_type: sql option.
mod_announce, mod_caps, mod_irc, mod_last, mod_muc, mod_offline, mod_privacy, mod_private, mod_pubsub, mod_roster, mod_shared_roster, mod_vcard, mod_vcard_xupdate
Restore the spool directory and make sure you have the same permissions for all the files and sub-directories as before.
Run export2sql with the hosts and the SQL filename as parameters
Note: If you only need the SQL dump, you might want to revert the configuration after the dump.
If you cannot upgrade the server, you can install the latest version of ejabberd in another machine, copy the database directory, follow the same procedure above and you can get the dump of your sql.
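Once on a new-enough version, the dump itself is a one-liner; the host and output path below are just examples:

```shell
# Export the Mnesia-backed tables for the given virtual host to a SQL file.
ejabberdctl export2sql localhost /tmp/ejabberd-dump.sql
```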

Backing up and restoring mysql database with Laravel/Docker

I'm working on changing my environment from vagrant to docker, and I came across one hitch. With vagrant I have a bash file that will pull data from ftp and restore it locally so that I can work with the most up-to-date data.
This is my code
php artisan db:restore --database=mysql --source=ftp --sourcePath=$(date +'%Y')"/"$(date +'%m')"/"$(date +%m-%d-%Y -d "yesterday")".gz" --compression=gzip
php artisan migrate
Inside of my work container I run this, and it won't find the mysql command because mysql is in a different container. What can I do to fix my workflow?
This answer is about rails migrations, but you could take a similar approach to laravel. Create a container that has the required laravel tools and required MySQL tools (likely this) to connect to the database.
You could then combine it with a suggestion in this answer and create a migration script that you can add to your image. As larsks points out in the comments, it would allow you to encapsulate the logic for where to get the backup data and how to restore it in one place. Assuming you named this script restore, you could run your restore command like this
docker run myimage restore
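A rough sketch of what such a restore script could contain; the /app path and the assumption that the app code and its .env (with DB_HOST pointing at the mysql container) are baked in or mounted are illustrative, while the artisan invocation is the one from the question:

```shell
#!/bin/sh
# Hypothetical "restore" wrapper baked into the tooling image.
set -e

# Build yesterday's dump path, e.g. 2024/03/14-03-2024.gz (GNU date -d).
SOURCE_PATH="$(date +%Y)/$(date +%m)/$(date +%m-%d-%Y -d yesterday).gz"

cd /app
php artisan db:restore --database=mysql --source=ftp \
  --sourcePath="$SOURCE_PATH" --compression=gzip
php artisan migrate
```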