Backing up and restoring a MySQL database with Laravel/Docker

I'm migrating my environment from Vagrant to Docker and hit one snag. With Vagrant I have a bash script that pulls a dump from FTP and restores it locally so that I can work with the most up-to-date data.
This is my code:
php artisan db:restore --database=mysql --source=ftp --sourcePath=$(date +'%Y')"/"$(date +'%m')"/"$(date +%m-%d-%Y -d "yesterday")".gz" --compression=gzip
php artisan migrate
Inside my work container this fails because the mysql command isn't found: MySQL runs in a different container. What can I do to fix my workflow?

This answer is about Rails migrations, but you could take a similar approach with Laravel. Create a container that has the required Laravel tools and the MySQL client tools (likely this) needed to connect to the database.
You could then combine it with a suggestion in this answer and create a migration script that you can add to your image. As larsks points out in the comments, this lets you encapsulate the logic for where to get the backup data and how to restore it in one place. Assuming you named this script restore, you could run your restore command like this:
docker run myimage restore
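A minimal sketch of such a restore script, assuming the image has the mysql client installed and Laravel's .env points DB_HOST at the MySQL container (the container and script names are assumptions):

#!/bin/bash
# restore: fetch yesterday's dump from FTP and load it, then migrate.
# Both artisan and the mysql client talk to the database over the Docker
# network, so no MySQL server needs to run in this container.
set -e
php artisan db:restore --database=mysql --source=ftp \
    --sourcePath=$(date +'%Y')"/"$(date +'%m')"/"$(date +%m-%d-%Y -d "yesterday")".gz" \
    --compression=gzip
php artisan migrate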

Related

Setting Replication For MySQL Production Container in Kubernetes

Use case: a MySQL instance is running in production with the required databases. I need to configure this running container as a master and spin up one more MySQL instance as a slave.
Challenges: I'm having trouble configuring the running MySQL instance as a master. Specifically, I cannot create the replication user and cannot append the master/slave configuration to the my.cnf file. The reason is this:
To create a replication user, or to execute any custom SQL commands in the container, you have to place an initdb.sql file with the required SQL commands inside docker-entrypoint-initdb.d (see the sketch below for the kind of script meant here). When the container starts, it executes the files in docker-entrypoint-initdb.d only if the database has not yet been created; if the database already exists, it skips them. This is the root cause of my failure to configure the master: MySQL is already running with databases in production, so this solution is not available to me.
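For illustration, this is the kind of init script meant here, which the entrypoint runs only against an empty data directory (the user name and password are hypothetical):

#!/bin/bash
# /docker-entrypoint-initdb.d/create-repl-user.sh (hypothetical)
# Executed only on first initialization; skipped once a database exists.
mysql -u root -p"$MYSQL_ROOT_PASSWORD" <<'SQL'
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SQL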
After hitting this issue, we planned to put the configuration SQL commands in .sh files inside docker-entrypoint-initdb.d and execute them by patching the deployment. In this scenario we run into permission issues when executing the .sh files.
I need to configure master/slave replication for MySQL instance(s) in the Kubernetes world. I have gone through a lot of posts to understand how to implement this, but nothing worked out as I expected, for the reasons explained above. I also found a custom image (bitnami/mysql) that supports setting up replication, which I don't want to use because I would not be able to adopt it in the production environment.
I would be very grateful if anyone could suggest an approach to solve this problem.
Thank you very much in advance!

Setting up MySQL for dev environment with Docker

I'm trying to set up a Docker MySQL server with phpMyAdmin and an existing company_dev.sql file to import, in an effort to dockerize my dev environment.
My first question is how do I go about setting this up? Do I need to specify an OS, e.g. Ubuntu, in my Dockerfile, then add sudo apt-get install mysql-server and install phpMyAdmin? Or am I better off running an existing docker image from the docker repo and building on top of that?
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using a Dockerfile is best practice.
I appreciate any advice.
First of all, with Docker you should have a single service/daemon per container. In your case, MySQL and phpMyAdmin should go in separate containers. This is not mandatory (there are workarounds) but it makes things a lot easier.
In order to avoid reinventing the wheel, you should IMHO always use existing images for the service you want, especially if they're official ones. But again, you can choose for any reason to start from a base image (such as "Ubuntu" or "Debian", just to name two) and install the needed stuff yourself.
About the storage question: docker containers should always be immutable. If a container needs to save its state, it should use volumes. Volumes are a way to share a folder between the container and the host. For instance, the official mysql image uses a volume to store the database files.
So, to summarize, you should use ready-made images when possible, and no, using docker commit to store MySQL data is not good practice.
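A minimal sketch of that layout with the official images (the network and container names, tag, and passwords are assumptions):

# one container per service, talking over a user-defined network
docker network create dev-net
docker run -d --name mysql --network dev-net \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=company_dev \
    -v mysql-data:/var/lib/mysql \
    -v "$PWD/company_dev.sql:/docker-entrypoint-initdb.d/company_dev.sql:ro" \
    mysql:8.0
docker run -d --name phpmyadmin --network dev-net \
    -e PMA_HOST=mysql \
    -p 8080:80 \
    phpmyadmin/phpmyadmin

The named volume mysql-data is what keeps the database state across container restarts, and the mounted .sql file is imported automatically on first start.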
Previously I have used this Dockerfile in order to restore MySQL data.
GitHub - stormcat24/docker-mysql-remote
My first question is how do I go about setting this up?
This Dockerfile uses mysqldump to load data from the real environment and save it into the Docker environment. You can do the same. It will dump and load all the tables in the database you specify.
Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin?
As you can see, this Docker image is built from DockerHub - library/mysql, so we don't need to prepare any of the basic middleware ourselves, except phpMyAdmin.
Or am I better off running an existing docker image from the docker repo and building on top of that?
It's better to build on an already existing Docker image!
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using a Dockerfile is best practice.
I have tried this too. After some testing, I successfully saved a Docker image containing a MySQL DB. To do that, just run docker commit xxx after you finish your build. Be careful, however: don't push that image to DockerHub (it contains your data).
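For example (the container and tag names are hypothetical):

# after making your CRUD changes against a running container
docker commit my-mysql-container my-mysql-seeded:latest
# Caveat: the official mysql image declares /var/lib/mysql as a VOLUME, and
# docker commit does not capture volume contents, so this only works with
# images that keep the data directory inside the container filesystem.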

How to make a docker image with a populated database for automated tests?

I want to create containers with a MySQL DB and a dump loaded for integration tests. Each test should connect to a fresh container, with the DB in the same state. It should be able to read and write, but all changes should be lost when the test ends and the container is destroyed. I'm using the "mysql" image from the official docker repo.
1) The image's docs suggest taking advantage of the "entrypoint" script, which will import any .sql files you provide in a specific folder. As I understand it, this will re-import the dump every time a new container is created, so it's not a good option. Is that correct?
2) This SO answer suggests extending that image with a RUN statement to start the MySQL service and import all dumps. This seems to be the way to go, but I keep getting
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
followed by
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
when I run the build, even though I can connect to MySQL fine in containers of the original image. I tried sleep 5 to wait for the mysqld service to start up, and passing -h with 'localhost' or the docker-machine IP.
How can I fix "2)"? Or, is there a better approach?
If re-seeding the data is an expensive operation, another option would be starting/stopping a Docker container previously built with the DB and seed data. I blogged about this a few months ago in Integration Testing using Spring Boot, Postgres and Docker; although the blog focuses on Postgres, the idea is the same and translates to MySQL.
The standard MySQL image is pretty slow to start up, so it might be useful to use something prepared more for this situation, like this:
https://github.com/awin/docker-mysql
You can include data, or use it together with Flyway too; either way it should speed things up a bit.
How I've solved this before is using a Database Migration tool, specifically flyway: http://flywaydb.org/documentation/database/mysql.html
Flyway is more for migrating the database schema as opposed to putting data into it, but you could use it either way. Whenever you start your container, just run the migrations against it and your database will be set up however you want. It's easy to use, and you can also just use the default MySQL docker container without messing around with any settings. Flyway is also nice for many other reasons, like having version control for a database schema and the ability to perform migrations on production databases easily.
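A rough sketch of running migrations against such a container with the Flyway command line (the URL, credentials, and migration location are assumptions):

# apply all pending migrations in ./sql to the MySQL container
flyway -url=jdbc:mysql://localhost:3306/testdb \
    -user=root -password=secret \
    -locations=filesystem:./sql \
    migrate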
To run integration tests with a clean DB, I would just have an initial dataset that you insert before the test, then afterwards truncate all the tables, as sketched below. I'm not sure how large your dataset is, but I think this is generally faster than restarting a MySQL container every time.
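A minimal sketch of that reset step (the schema name testdb and the root credentials are assumptions):

# truncate every table in the test schema between tests
for t in $(mysql -u root -psecret -N -e \
    "SELECT table_name FROM information_schema.tables WHERE table_schema = 'testdb'"); do
  mysql -u root -psecret -e "SET FOREIGN_KEY_CHECKS=0; TRUNCATE TABLE \`$t\`;" testdb
done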
Yes, the data will be imported every time you start a container. This could take a long time.
You can view an example image that I created
https://github.com/kliewkliew/mysql-adventureworks
https://hub.docker.com/r/kliew/mysql-adventureworks/
My Dockerfile builds an image by installing MySQL, importing a sample database (from a .sql file), and setting the entrypoint to auto-start the MySQL server. When you start a container from this image, it has the data pre-loaded in the database.

seeding mysql data in a docker build

I'm attempting to build a Docker image that will include MySQL and some seed data, and I'm trying to figure out how to insert the data into the database during the docker build phase.
It seems I need to start the MySQL engine, invoke a command to run some SQL statements, and then shut down the MySQL engine. Any good ideas on how to best do that?
When a container is first started (not when the image is built), the folder /docker-entrypoint-initdb.d is checked for files to seed the DB with. Below is the corresponding paragraph from the section Initializing a fresh instance on the official MySQL Docker Hub page.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
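For example, a throwaway container seeded from a dump could be started like this (the names and password are assumptions):

docker run -d --rm --name mysql-test \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=app_test \
    -v "$PWD/dump.sql:/docker-entrypoint-initdb.d/dump.sql:ro" \
    -p 3306:3306 \
    mysql:8.0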
This blog post might help you.
Essentially, the steps to be followed are:
1. Create a file (say seed_data.sh) and put it where your Dockerfile can reference it (here, a resources/ directory next to the Dockerfile)
2. In the Dockerfile, add the following lines:
ADD resources/seed_data.sh /tmp/
RUN chmod +x /tmp/seed_data.sh
RUN /tmp/seed_data.sh
RUN rm /tmp/seed_data.sh
The file seed_data.sh contains the code for starting the MySQL server, logging into it, and inserting the data; a sketch of what it might look like follows.
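This is only a sketch (the dump path is an assumption); the key point is waiting for the server socket rather than sleeping a fixed time, then shutting down cleanly so the RUN step can finish:

#!/bin/bash
# start mysqld in the background, socket only
mysqld_safe --skip-networking &
# wait until the server actually accepts connections
until mysqladmin ping --silent; do sleep 1; done
# load the seed data, then stop the server so the image layer can be committed
mysql -u root < /tmp/seed.sql
mysqladmin -u root shutdown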

Rails rake task for loading live (MySQL) database to local development database

For years I've used an ssh pipe from mysqldump on the live server into mysql on my development machine to get a copy of the current data.
ssh -C <server> mysqldump --opt <live_database_name> |mysql <local_dev_database_name>
Here -C enables ssh compression, and --opt turns on mysqldump's default bundle of options for a fast, complete dump.
Does anyone have a Rails-ish equivalent rake task for this? Ideally it'd take the database names from config/database.yml.
https://gist.github.com/750129
This is not an elegant solution. It's basically a wrapper for your old method, so it's not even compatible with other database drivers.
But it is something you can put in your SCM under lib/tasks to share with other developers on your team. It also uses config data from your existing config/database.yml file. You define the live db simply by adding another branch to that file, using the same key names that Rails does.
Maybe it would even make sense to reuse the production database configuration.
Here's one I use for a Postgres database: https://gist.github.com/748222.
There are three tasks: db:download, db:replace and db:restore. db:restore is just a wrapper around the other two.
I'd say you could do something similar for MySQL pretty quickly as well. I just use the latest backup in this case instead of creating it at runtime.
If you are fine with coding in Ruby you might want to look into seed-fu and activerecord-import.
Oh, I forgot to mention standalone-migrations. It comes with a Rake task for schema migrations.
Good luck!
Instead of using rake to do this, I'd use a Capistrano task, as your Capistrano tasks already know where the production server is, etc.
Theoretically you should be able to create another "instance" in database.yml (for your live server) and put the correct host there (leaving the rest of the username/password settings as usual). In your rake task you would load the database YAML file, read the host, and run the same command-line pipe as before. It could easily be wrapped up in a rake task.
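A rough sketch of the command such a task would wrap, pulling the names out of database.yml with Ruby one-liners (the 'live' entry name is an assumption, and this ignores any ERB in the YAML):

LIVE_HOST=$(ruby -ryaml -e "puts YAML.load_file('config/database.yml')['live']['host']")
LIVE_DB=$(ruby -ryaml -e "puts YAML.load_file('config/database.yml')['live']['database']")
DEV_DB=$(ruby -ryaml -e "puts YAML.load_file('config/database.yml')['development']['database']")
ssh -C "$LIVE_HOST" mysqldump --opt "$LIVE_DB" | mysql "$DEV_DB"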