mysql container broken, how to recover data? - mysql

our IT broke the mysql container and now it cannot be started.
I understand that I can commit a new version and run it without the entrypoint, so I can "exec -it" into it and check what's wrong.
but how can I recover my data? inspect the old container and copy all files from the mounted volume? (it seems like overkill for this problem; can I 'start' my container without the entrypoint?)
what's the best practice for this problem?

If you have a mounted volume, your data is in a volume directory on your host, and it will stay there unless you delete it. So, fix your MySQL image and then create another MySQL container.
You should be able to fix your container using docker attach or docker exec. You can even change the container's entrypoint using something like this: How to start a stopped Docker container with a different command?
But that's not a good approach. As stated in Best practices for writing Dockerfiles, Docker containers should be ephemeral, meaning that they can easily be replaced with new ones. So the best option is to destroy your container and create a new one.

I think, as #kstromeiraos says, you should first fix your Dockerfile if it's broken, and then build and run the container again using:
docker build
docker run -v xxx
Since you have used volumes, your MySQL data seems to be backed up properly, so the new container which comes up should have the backed-up data.
You can do:
docker exec -it <container_name> bash
and get into the container and check the logs and data.
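If the broken container used a named volume, you don't have to start it at all to get the data out. A minimal sketch of copying the data via a throwaway container (here "broken-mysql" is a placeholder for your container's name):

```shell
# Find which volume the broken container mounted at /var/lib/mysql
docker inspect -f '{{range .Mounts}}{{if eq .Destination "/var/lib/mysql"}}{{.Name}}{{end}}{{end}}' broken-mysql

# Archive the volume's contents into the current directory using a
# temporary container that shares the broken container's volumes
docker run --rm --volumes-from broken-mysql -v "$PWD":/backup \
  alpine tar czf /backup/mysql-data.tar.gz /var/lib/mysql
```

The tarball can then be unpacked into a host directory or a fresh volume for the replacement container.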

Related

Install a MySQL database outside Docker container

Can I run a docker container with mysql, and save my database (data), outside the container?
Yes, you can. You can use bind mounts when creating the docker container to mount a path on the host to some path inside the container:
https://docs.docker.com/storage/bind-mounts/
You could, for example, mount the host OS' /home//mysqldata as /var/lib/mysql inside the container. When a process inside the docker container tries to read/write files in /var/lib/mysql inside the container, that will actually be reading/writing data in the host OS' /home//mysqldata directory/folder. For example:
docker run -it --mount type=bind,source=/home/bob/mysqldata,target=/var/lib/mysql <some_image_name>
Do note that docker volumes can also be used for this although those work differently than bind mounts, so make sure you're using a bind mount (type=bind).
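For comparison, the named-volume form of the same command looks like this (here "mysqldata" is a volume name that Docker manages for you, rather than a host path):

```shell
# Named volume: Docker stores the data under its own storage area
docker run -it --mount type=volume,source=mysqldata,target=/var/lib/mysql <some_image_name>

# Equivalent -v shorthand (a name instead of an absolute path means a volume)
docker run -it -v mysqldata:/var/lib/mysql <some_image_name>
```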
Also, I've seen at least one scenario where using a bind mount won't work for MySQL data. In my case it was using a bind mount for a docker container that was running inside a Vagrant box using a directory that was a VirtualBox shared folder. In that case I was getting some kernel/block level errors that prevented MySQL from setting certain file modes or making low-level calls to some of the files in the data dir which ultimately prevented MySQL from starting. I forget now exactly what error it was throwing (I can go back and check) but I had to switch to a volume instead of a bind mount. That was fine for my use case but just be aware if you use a bind mount and MySQL fails to start due to some lower-level disk call.
I should also add that it's not clear from your question /why/ you want to do this so I can't advocate that doing this will be good/do what you want. Only one MySQL process should be writing to the MySQL data directory at a time and the files are binary files so trying to read them with something other than MySQL seems odd. But, if you have a use case where you want something outside of Docker to read the MySQL data files, the bind mount might do what you want.

How to make a stateless mysql docker container

I want to make a mysql docker image that imports some initial data in the build process.
Afterwards, when used in a container, the container stays stateless, meaning the data added while the container is running does not survive destroying/starting the container again, but the initial data is still there.
Is this possible? How would I a setup such an image and container?
I suggest creating the MySQL tables as needed in a SQL script, or directly in a local MySQL instance and exporting them to a file.
With this file in hand, create a Dockerfile which builds on the MySQL container. Add to this another entrypoint script which injects the SQL script into the database.
You don't write anything about mounting volumes. You may want a data volume for the database or configure MySQL for keeping everything in memory.
For added "statelessness" you may want to DROP all tables in your SQL script too.
I think what you need is a multi-stage build:
FROM mysql:5.7 as builder
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=somepassword
ADD initialize.sql /docker-entrypoint-initdb.d/
# The entrypoint script does the DB initialization but also runs the mysql daemon; removing the last line makes it only initialize
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]
FROM mysql:5.7
COPY --from=builder /initialized-db /var/lib/mysql
You can put your initialization scripts in initialize.sql (or choose a different way to initialize your database).
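For illustration, a hypothetical initialize.sql could look like this (the database name, table, and seed rows are made up):

```sql
CREATE DATABASE IF NOT EXISTS appdb;
USE appdb;

CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

-- Seed data baked into the image at build time
INSERT INTO users (name) VALUES ('alice'), ('bob');
```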
The resulting image is a database that is already initialised. You can use it and throw it away as you like.
You can also use this process to create different images (tag them differently) for different use cases.
Hope this answers your question.

Setting up MySQL for dev environment with Docker

Trying to set up a docker mysql server with phpmyadmin and an existing company_dev.sql file to import, in an effort to dockerize my dev environment.
My first question is how do I go about setting this up? Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin? Or am I better off running an existing docker image from the docker repo and building on top of that?
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using a Dockerfile is best practice.
I appreciate any advice.
First of all, with docker you should have a single service/daemon per container. In your case, mysql and phpmyadmin should go in different containers. This is not mandatory (there are workarounds) but makes things a lot easier.
In order to avoid reinventing the wheel, you should IMHO always use existing images for the wanted service, especially if they're official ones. But again, you can choose for any reason to start from scratch (a base image such as "Ubuntu" or "Debian", just to name two) and install the needed stuff.
About the storage question: docker containers should always be immutable. If a container needs to save its state, it should use volumes. Volumes are a way to share a folder between the container and the host. For instance, the official mysql image uses a volume to store the database files.
So, to summarize, you should use ready images when possible, and no, using docker commit to store mysql data is not a good practice.
Previously I have used this Dockerfile in order to restore MySQL data.
GitHub - stormcat24/docker-mysql-remote
My first question is how do I go about setting this up?
This Dockerfile uses mysqldump to load from the real env and save to the docker env. You can also do that. Actually, it will load/save whole tables in your specified database.
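The mysqldump round trip can be sketched like this (the hostname, database name, and container name "mysql-dev" are placeholders):

```shell
# Dump the database from the real environment
mysqldump -h prod-db.example.com -u root -p mydb > mydb.sql

# Load the dump into the MySQL container via its stdin
docker exec -i mysql-dev mysql -u root -p mydb < mydb.sql
```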
Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin?
You can see this docker image is created from DockerHub - library/mysql, so we don't need to prepare any basic middleware except phpmyadmin.
Or am I better off running an existing docker image from the docker repo and building on top of that?
It's better to use an already existing Docker repository!
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using dockerfile is best practice.
I have tried this too. After some testing, I successfully saved a docker image containing a MySQL DB. To do that, we just need to run docker commit xxx after finishing the build. However, be careful: don't push your image file to DockerHub.

How to properly move a mariadb database from a container to the host

I have a smallish webapp running in a Docker container. It uses a mariadb database running in another container on the same box, based on the official "mariadb" image.
When I first set up these containers, I started the mariadb container using an "internal" database. I gave /var/lib/mysql a volume name, but I didn't map it to a directory on the host ("-v vol-name:/var/lib/mysql"). Actually, I'm not even sure why I gave it a volume name. I set this up several months ago, and I'm not sure why I would have done that specifically.
In any case, I've concluded that having a database internal to the container wasn't a good idea. I've decided I really need to have the actual database stored on the host and use a volume mapping to refer to it. I know how to do this if I was setting this up from scratch, but now that the app is running, I need to move the database to the host and restart the container to point to that. I'm not certain of all the proper steps to make this happen.
In addition, I'm also going to need to set up a second instance of this application, using containers based on the same images. The second database will also be stored on the host, in a directory next to the other one. I can initialize the second db with the backup file from the first one, but I'll likely manually empty most of the tables in the second instance.
I did use mysqldump inside the container to dump the database, then I copied that backup file to the host.
I know how to set a volume mapping in "docker run" to map /var/lib/mysql in the container to a location on the host.
At this point, I'm not certain exactly what to do with this backup file so I can restart the container with the modified volume mapping. I know I can run "mysql dbname < backup.sql", but I'm not sure of the consequences of that.
While the container is running, run docker cp -a CONTAINER:/var/lib/mysql /local/path/to/folder to copy the MariaDB databases from the container to your local machine. Replace "CONTAINER" with the name or ID of your MariaDB container.
Once you've done that, you can stop the container and restart it binding /local/path/to/folder to the container's /var/lib/mysql path.
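Put together, the whole move might look like this (the container name "mariadb" and host path are placeholders):

```shell
# 1. Copy the data out while the container is running
#    (the destination path must not exist yet, or the mysql dir
#    will be nested inside it)
docker cp -a mariadb:/var/lib/mysql /srv/mariadb-data

# 2. Stop and remove the old container
docker stop mariadb && docker rm mariadb

# 3. Start a new container with the host directory bind-mounted
docker run -d --name mariadb \
  -v /srv/mariadb-data:/var/lib/mysql \
  mariadb
```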
If you're using an older version of docker that does not support the -a or --archive flag, you can copy the files without that flag but you'll need to make sure that the folder on the host machine has the proper ownership: the UID and GID of the folder must match the UID and GID of the folder in the Docker container.
Note: if you're using SELinux, you might need to set the proper permissions as well, as the documentation for the MariaDB image states:
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir

Persistent mysql data from docker container

How can I persist data from my mysql container? I'd really like to mount /var/lib/mysql from the container to the host machine. This indeed creates the directory, but when I create data, stop my application, and start a new one with the mounted directory, nothing is there. I've messed around with giving the directory all permissions and changing the user and group to root, but nothing seems to work.
I keep seeing people saying to use a data container, but I don't see how that can work with Amazon EC2 Container Service (ECS), considering each time I stop and start a task it would create a new data container rather than use an existing one. Please help.
Thank you
Simply run your containers with something like this:
docker run -v /var/lib/mysql:/var/lib/mysql -t -i <image_name> <command>
You can keep your host /var/lib/mysql and mount it into each one of the containers. Now, this is not going to work with EC2 Container Service unless all of the servers that you use for containers map to a common /var/lib/mysql (perhaps NFS-mounted from a master EC2 instance).
AWS EFS is going to be great for this once it becomes widely available.