Setting up MySQL for dev environment with Docker

Trying to set up a docker mysql server with phpmyadmin and an existing company_dev.sql file to import, in an effort to dockerize my dev environment.
My first question is how do I go about setting this up? Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin? Or am I better off running an existing docker image from the docker repo and building on top of that?
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using dockerfile is best practice.
I appreciate any advice.

First of all, with Docker you should have a single service/daemon per container. In your case, mysql and phpmyadmin should go in different containers. This is not mandatory (there are workarounds), but it makes things a lot easier.
In order to avoid reinventing the wheel, you should IMHO always use existing images for the wanted service, especially if they're official ones. But again, you can choose for any reason to start from a base image (such as "Ubuntu" or "Debian", just to name two) and install the needed stuff yourself.
About the storage question: docker containers should always be immutable. If a container needs to save its state, it should use volumes. Volumes are a way to share a folder between the container and the host. For instance, the official mysql image uses a volume to store the database files.
So, to summarize, you should use ready-made images when possible, and no, using docker commit to store mysql data is not good practice.
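As a concrete sketch of that setup, a docker-compose.yml along these lines should work (image tags, credentials, and the port mapping here are assumptions, not taken from the question). The official mysql image automatically imports any .sql files placed in /docker-entrypoint-initdb.d on first start, which covers the company_dev.sql import:

version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret      # assumed password, change it
      MYSQL_DATABASE: company_dev      # created on first start
    volumes:
      - ./company_dev.sql:/docker-entrypoint-initdb.d/company_dev.sql  # imported once
      - mysql_data:/var/lib/mysql      # volume that persists the database files
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8080:80"                      # phpmyadmin reachable at http://localhost:8080
    environment:
      PMA_HOST: db                     # point phpmyadmin at the db service
volumes:
  mysql_data: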

Previously I have used this Dockerfile in order to restore MySQL data.
GitHub - stormcat24/docker-mysql-remote
My first question is how do I go about setting this up?
This Dockerfile uses mysqldump to load data from the real environment and save it into the Docker environment. You can do the same. It will dump and restore all the tables in the database you specify.
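A rough sketch of that dump/restore flow (host, user, and port here are placeholders):

# dump the database from the real environment
mysqldump -h prod.example.com -u dev_user -p company_dev > company_dev.sql
# load it into the dockerized MySQL, assuming port 3306 is published
mysql -h 127.0.0.1 -P 3306 -u root -p company_dev < company_dev.sql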
Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin?
You can see this Docker image is built from DockerHub - library/mysql, so we don't need to prepare the basic middleware ourselves, except phpmyadmin.
Or am I better off running an existing docker image from the docker repo and building on top of that?
It's better to build on an already existing Docker image!
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using dockerfile is best practice.
I have tried this as well. After some testing, I successfully saved a Docker image containing a MySQL DB. To do that, we just need to run docker commit xxx after your build has finished. Be careful, however: don't push your image file to DockerHub.
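For reference, the commit step looks roughly like this (container and image names are placeholders). One caveat worth knowing: if the image declares /var/lib/mysql as a volume (the official mysql image does), docker commit will not capture the database files, so this only works when the data lives in the container's own filesystem:

# freeze the running container's filesystem into a new image
docker commit my_mysql_container my_mysql_snapshot:dev
# later, start a fresh container from that snapshot
docker run -d --name mysql_restored my_mysql_snapshot:dev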

Related

mysql container broken, how to recover data?

Our IT broke the mysql container and now it cannot be started.
I understand that I can commit a new version and run it without an entrypoint, so I can "exec -it" into it and check what's wrong.
But how can I recover my data? Inspect the old container and copy all the files from the mounted volume? (That seems like overkill for this problem; can I 'start' my container without an entrypoint?)
what's the best practice for this problem?
If you have a mounted volume, your data is in a volume directory on your host, and it will stay there unless you delete it. So, fix your MySQL image and then create another MySQL container.
You should be able to fix your container by using docker attach or docker exec. You can even change the container's entrypoint using something like this: How to start a stopped Docker container with a different command?
But that's not a good approach. As stated in Best practices for writing Dockerfiles, Docker containers should be ephemeral, meaning that they can easily be replaced with new ones. So, the best option is to destroy your container and create a new one.
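For completeness, the entrypoint override from the linked answer looks roughly like this (image name assumed); it drops you into a shell without starting mysqld, so you can poke around:

docker run -it --rm --entrypoint /bin/bash mysql:5.7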
I think, as @kstromeiraos says, you should first fix your Dockerfile if it's broken at all, and then build and run the container again using:
docker build
docker run -v xxx
Since you have used volumes, your MySQL data seems to be backed up properly, so the new container that comes up should have the backed-up data.
You can do:
docker exec -it <container> bash
and get into the container and check the logs and data.
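If all you need is the data itself, here is a sketch of pulling it out of the old container's volume (container and volume names are placeholders):

# find where the mounted volume lives on the host
docker inspect -f '{{ json .Mounts }}' broken_mysql
# then copy the Source path it reports, e.g.:
sudo cp -a /var/lib/docker/volumes/<volume_name>/_data ./mysql-backup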

Compressing mysql data folder to save in a docker image

For an internal dev-productivity use case, we build Docker images for every build by installing our application, which includes a GlassFish application server and a MySQL database, and keeping the application server and database in a stopped state before saving the Docker image. On container startup, the database and application server are started in that order.
In order to reduce the Docker image size, I am planning to compress the MySQL data folder and keep only the .tar.gz file in the Docker image. Container startup will uncompress the data folder before starting the database. Are there any issues with this approach, in case anyone has gone down this path?
Yeah that is a fairly good approach to go with.
You can compress the data into a tar archive by going inside the container and running the tar command. Then edit rc.local to add a startup script that runs when the container boots and untars the archive into the appropriate directory. Later, you can make an image out of this.
OR
Create an image of your container, then use a Dockerfile to do what I explained above. Push this image to DockerHub and pull it on a different machine; run it and it should do what you want. You can also make use of CMD to start the database.
OR
Simply tar the data and ship the Docker image. Pull it on whichever server you want, keeping an untar command handy. Run the container, get into it, untar the file, and then start your database server.
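A minimal sketch of the build-time/startup split described above, using the official mysql image as a stand-in for your custom one (file names and paths are assumptions):

# ---- Dockerfile ----
FROM mysql:5.7
COPY mysql-data.tar.gz /opt/mysql-data.tar.gz
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
ENTRYPOINT ["start.sh"]

# ---- start.sh ----
#!/bin/sh
# uncompress the data folder on first boot, then hand off to the stock entrypoint
if [ ! -d /var/lib/mysql/mysql ]; then
    tar -xzf /opt/mysql-data.tar.gz -C /var/lib/mysql
fi
exec docker-entrypoint.sh mysqld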

How to make a docker image with a populated database for automated tests?

I want to create containers w/ a MySQL db and a dump loaded for integration tests. Each test should connect to a fresh container, with the DB in the same state. It should be able to read and write, but all changes should be lost when the test ends and the container is destroyed. I'm using the "mysql" image from the official docker repo.
1) The image's docs suggests taking advantage of the "entrypoint" script that will import any .sql files you provide on a specific folder. As I understand, this will import the dump again every time a new container is created, so not a good option. Is that correct?
2) This SO answer suggests extending that image with a RUN statement to start the mysql service and import all dumps. This seems to be the way to go, but I keep getting
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
followed by
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
when I run the build, even though I can connect to mysql fine in containers of the original image. I tried sleep 5 to wait for the mysqld service to start up, and adding -h with 'localhost' or the docker-machine IP.
How can I fix "2)"? Or, is there a better approach?
If re-seeding the data is an expensive operation, another option would be starting/stopping a Docker container (previously built with the DB and seed data). I blogged about this a few months ago: Integration Testing using Spring Boot, Postgres and Docker. Although the blog focuses on Postgres, the idea is the same and can be translated to MySQL.
The standard MySQL image is pretty slow to start up, so it might be useful to use something that has been prepared more for this situation, like this:
https://github.com/awin/docker-mysql
You can include data, or use it together with Flyway, and it should speed things up a bit.
How I've solved this before is using a Database Migration tool, specifically flyway: http://flywaydb.org/documentation/database/mysql.html
Flyway is more for migrating the database schema as opposed to putting data into it, but you could use it either way. Whenever you start your container, just run the migrations against it and your database will be set up however you want. It's easy to use, and you can also just use the default MySQL Docker container without messing around with any settings. Flyway is also nice for many other reasons, like having version control for the database schema and the ability to perform migrations on production databases easily.
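A sketch of running the migrations from the Flyway command line against the container (URL and credentials are placeholders):

# apply all pending migrations to the dockerized MySQL
flyway -url=jdbc:mysql://localhost:3306/testdb -user=root -password=secret migrate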
To run integration tests with a clean DB, I would just have an initial dataset that you insert before the test, then afterwards truncate all the tables. I'm not sure how large your dataset is, but I think this is generally faster than restarting a mysql container every time.
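The cleanup step can be as simple as this (table names here are hypothetical):

SET FOREIGN_KEY_CHECKS = 0;  -- so truncation order doesn't matter
TRUNCATE TABLE orders;
TRUNCATE TABLE customers;
SET FOREIGN_KEY_CHECKS = 1;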
Yes, the data will be imported every time you start a container. This could take a long time.
You can view an example image that I created
https://github.com/kliewkliew/mysql-adventureworks
https://hub.docker.com/r/kliew/mysql-adventureworks/
My Dockerfile builds an image by installing MySQL, importing a sample database (from a .sql file), and setting the entrypoint to auto-start the MySQL server. When you start a container from this image, it will have the data pre-loaded in the database.
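To give a flavor of how baking data in at build time can work (a hedged sketch, not the exact Dockerfile from that repo; the dump filename and database name are assumptions), the usual tricks are to start mysqld and import in the same RUN step, wait for the server instead of sleeping a fixed time, and avoid the /var/lib/mysql volume declared by the official image:

FROM mysql:5.7
COPY company_dev.sql /tmp/dump.sql
# /var/lib/mysql is a declared VOLUME in the base image, so anything written
# there during build is discarded at run time; bake the data into another dir
RUN mkdir -p /var/lib/mysql-baked && chown -R mysql:mysql /var/lib/mysql-baked \
    && mysqld --user=mysql --datadir=/var/lib/mysql-baked --initialize-insecure \
    && (mysqld --user=mysql --datadir=/var/lib/mysql-baked --skip-networking &) \
    && until mysqladmin --silent ping 2>/dev/null; do sleep 1; done \
    && mysql -e 'CREATE DATABASE company_dev' \
    && mysql company_dev < /tmp/dump.sql \
    && mysqladmin shutdown
CMD ["mysqld", "--user=mysql", "--datadir=/var/lib/mysql-baked"]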

How to run JIRA and MySQL on same Docker Container

I have found this Docker image with JIRA on it. JIRA can be used with MySQL, but I do not want to run MySQL in another container.
In my opinion it is more useful to run MySQL in the same container (faster access, higher security, fewer resources, etc.).
How can I accomplish that?
You need to use a base image which specializes in managing several services, in order to avoid the "PID 1 zombie reaping issue".
Create a dockerfile similar to the JIRA one, but:
with phusion/baseimage-docker as base image
with mysql installed (as in this Dockerfile)
with both Jira and mysql declared as additional daemons
with the baseimage-docker's init system:
CMD ["/sbin/my_init"]
That way, you can easily start multiple apps, and also stop the container while knowing all apps will be stopping properly.
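A skeleton of that approach (the base-image tag and the service script paths follow baseimage-docker's runit conventions; the install steps are placeholders, not a complete JIRA setup):

FROM phusion/baseimage:0.11
# install MySQL, plus JIRA as in the referenced JIRA Dockerfile
RUN apt-get update && apt-get install -y mysql-server
# declare each service as a runit daemon supervised by my_init
RUN mkdir -p /etc/service/mysql /etc/service/jira
COPY mysql.sh /etc/service/mysql/run   # script that execs mysqld in the foreground
COPY jira.sh  /etc/service/jira/run    # script that execs JIRA's start script in the foreground
RUN chmod +x /etc/service/mysql/run /etc/service/jira/run
CMD ["/sbin/my_init"]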
I set up JIRA and PostgreSQL, but in two containers; have a look at https://github.com/manufy/docker-jira-bitbucket-gitlab/blob/master/atlassian/jira/docker-compose.yml
For a single container, you can take the JIRA Dockerfile and add commands to install MySQL; after that, you only need to configure the DB in JIRA's web installer.
Perhaps this will help you.

Persistent mysql data from docker container

How can I persist data from my mysql container? I'd really like to mount /var/lib/mysql from the container to the host machine. This indeed creates the directory, but when I create data, stop my application, and start a new one with the mounted directory, nothing is there. I've messed around with giving the directory all permissions and changing the user and group to root but nothing seems to work. I keep seeing people saying to use a data container, but I don't see how that can work with Amazon ec2 container service (ECS), considering each time I stop and start a task it would create a new data container rather than use an existing one. Please help.
Thank you
Simply run your containers with something like this:
docker run -v /var/lib/mysql:/var/lib/mysql -t -i <image> <command>
You can keep your host's /var/lib/mysql and mount it into each one of the containers. Now, this is not going to work with EC2 Container Service (ECS) unless all of the servers you use for containers map to a common /var/lib/mysql (perhaps NFS-mounted from a master EC2 instance).
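If a host bind mount is awkward, a named volume managed by Docker is another option (names here are arbitrary); note it is still local to one host, so the same ECS caveat applies:

docker volume create mysql_data
docker run -d -v mysql_data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7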
AWS EFS is going to be great for this once it becomes widely available.