How to restart mysql docker image with different startup parameter - mysql

I created a mysql docker image and launched the container successfully. I have used this instance for a while and it has a lot of data in it. Now I want to change its startup parameters to add character-set-server. How can I restart this mysql instance with this parameter without losing any data?

It depends, as explained in the mysql docker page, on how you stored your database:
through a data volume: you would need to inspect your current container to find the path of that volume, and you can then move its contents to the volume of the new container launched from your new image. I did that with a script, but with docker 1.10/1.11, docker volume ls can help too.
through a data directory on the host system mounted into your container, in which case you don't have to do anything other than mounting that same host folder into your new container.
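For the data-volume case, a sketch of how to find where the data actually lives (the container name old-mysql and volume name some_volume are placeholders for your own):

```shell
# Show every mount of the existing container: volume name, host source, container destination
docker inspect -f '{{ range .Mounts }}{{ .Name }} => {{ .Source }} ({{ .Destination }}){{ "\n" }}{{ end }}' old-mysql

# List all named volumes known to the daemon
docker volume ls

# Show where a given named volume is stored on the host
docker volume inspect -f '{{ .Mountpoint }}' some_volume
```

Once you know the volume name, the simplest path is to reattach that same volume to the new container with -v some_volume:/var/lib/mysql rather than copying files around.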
In both instances, you need to add your custom config as explained in the mysql docker page:
The MySQL startup configuration is specified in the file /etc/mysql/my.cnf, and that file in turn includes any files found in the /etc/mysql/conf.d directory that end with .cnf.
Settings in files in this directory will augment and/or override settings in /etc/mysql/my.cnf. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as /etc/mysql/conf.d inside the mysql container.
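As a concrete sketch of that quote (the host paths /my/custom and /my/own/datadir are examples, not fixed by the image):

```shell
# Create a host-side config directory holding one .cnf fragment
mkdir -p /my/custom
cat > /my/custom/charset.cnf <<'EOF'
[mysqld]
character-set-server = utf8mb4
collation-server     = utf8mb4_unicode_ci
EOF

# Launch a new container that picks up the fragment via /etc/mysql/conf.d
# and reuses the existing data directory, so no data is lost
docker run -d --name mysql-new \
  -v /my/custom:/etc/mysql/conf.d \
  -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  mysql
```

If your data lives in a named volume instead of a host directory, replace /my/own/datadir with the volume name.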

Related

Install a MySQL database outside Docker container

Can I run a docker container with mysql, and save my database (data), outside the container?
Yes, you can. You can use bind mounts when creating the docker container to mount a path on the host to some path inside the container:
https://docs.docker.com/storage/bind-mounts/
You could, for example, mount the host OS's /home/bob/mysqldata as /var/lib/mysql inside the container. When a process inside the docker container reads or writes files in /var/lib/mysql inside the container, it is actually reading/writing data in the host OS's /home/bob/mysqldata directory/folder. For example:
docker run -it --mount type=bind,source=/home/bob/mysqldata,target=/var/lib/mysql <some_image_name>
Do note that docker volumes can also be used for this although those work differently than bind mounts, so make sure you're using a bind mount (type=bind).
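For contrast, here is the same mount expressed both ways (the volume name mysqldata is an example):

```shell
# Bind mount: the host path must already exist, and you control exactly where the files live
docker run -d --mount type=bind,source=/home/bob/mysqldata,target=/var/lib/mysql mysql

# Named volume: docker creates and manages the storage location itself
docker run -d --mount type=volume,source=mysqldata,target=/var/lib/mysql mysql
```

With the bind mount, the files in /home/bob/mysqldata are directly visible to the host; with the named volume, you'd have to ask docker where they are (docker volume inspect).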
Also, I've seen at least one scenario where using a bind mount won't work for MySQL data. In my case it was a bind mount for a docker container running inside a Vagrant box, pointing at a directory that was a VirtualBox shared folder. I was getting kernel/block-level errors that prevented MySQL from setting certain file modes or making low-level calls on some of the files in the data dir, which ultimately prevented MySQL from starting. I forget now exactly what error it was throwing (I can go back and check), but I had to switch to a volume instead of a bind mount. That was fine for my use case, but be aware that if you use a bind mount and MySQL fails to start, it may be due to one of these lower-level disk calls.
I should also add that it's not clear from your question /why/ you want to do this, so I can't say whether it will accomplish what you want. Only one MySQL process should be writing to the MySQL data directory at a time, and the files are binary, so trying to read them with something other than MySQL seems odd. But if you have a use case where you want something outside of Docker to read the MySQL data files, the bind mount might do what you want.

Creating, populating, and using Docker Volumes

I've been plugging around with Docker for the last few days and am hoping to move my Python-MySQL webapp over to Docker here soon.
This means I need to use Docker volumes, and they have me stumped. I can create a volume directly by
$ docker volume create my-vol
Or indirectly by referencing a nonexistent volume in a docker run call, but I cannot figure out how to populate these volumes with my .sql database file, without copying the file over via a COPY call in the Dockerfile.
I've tried creating the volume directly in the directory containing the .sql file (the first method above), and I've tried mounting the directory containing the .sql file in my 'docker run' call. That does move the .sql file into the container (I've seen it by navigating the bash shell inside the container), but when I run a mariadb client container connecting to the database-containing mariadb container (as suggested in the mariadb docker readme file), it only has the standard databases (information_schema, mysql, performance_schema).
How can I create a volume containing my pre-existing .sql database?
When working with mariadb in a docker container, the image supports running .sql files as part of the container's first startup. This lets you load data into the database before it is made accessible.
From the mariadb documentation:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mariadb services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
This means that if you want to inject data into the container when it starts up for the first time, you can COPY the .sql file into the container at the path /docker-entrypoint-initdb.d/myscript.sql in your Dockerfile; it will then be run against the database you specified in the environment variable MYSQL_DATABASE.
Like this:
FROM mariadb
COPY ./myscript.sql /docker-entrypoint-initdb.d/myscript.sql
Then:
docker run -e MYSQL_DATABASE=mydb mariadb
There is then the question of how you want to manage the database storage. You basically have two options here:
Bind-mount a host directory to the location where mariadb stores the database. This lets you access the database storage files easily from the host machine.
An example with docker run:
docker run -v /my/own/datadir:/var/lib/mysql mariadb
Create a docker volume and bind it to the storage location in the container. This will be a volume that is managed by docker. This volume will persist the data between restarts of the container.
docker volume create my_mariadb_volume
docker run -v my_mariadb_volume:/var/lib/mysql mariadb
This is also covered in the docs for the mariadb docker image. I can recommend reading them from top to bottom if you are going to use this image.

Huge static (mysql) database in docker

I am developing an application and try to implement the microservice architecture. For information about locations (cities, zip codes, etc.) I downloaded a database dump for mysql from opengeodb.org.
Now I want to provide the database as a docker container.
I set up a mysql image with following Dockerfile as mentioned in the docs for the mysql image:
FROM mysql
ENV MYSQL_ROOT_PASSWORD=mypassword
ENV MYSQL_DATABASE geodb
WORKDIR /docker-entrypoint-initdb.d
ADD ./sql .
EXPOSE 3306
The "sql" folder contains sql scripts with the raw data as insert statements, so it creates the whole database. The problem is that the database is really huge and it takes really long to set it up.
So I thought, maybe there is a possibility to save the created database inside an image, because it is an static database for read-only operations only.
I am fairly new to docker and not quite sure how to achieve this.
I'm using docker on a Windows 10 machine.
EDIT:
I achieved my goal by doing the following:
I added the sql dump file as described above.
I ran the container and built the whole database with a local directory (the 'data' folder) mounted to /var/lib/mysql.
Then stopped the container and edited the Dockerfile:
FROM mysql
ENV MYSQL_ROOT_PASSWORD=mypassword
ENV MYSQL_DATABASE geodb
WORKDIR /var/lib/mysql
COPY ./data .
EXPOSE 3306
So the generated database is now being copied from the local system into the container.
You could create a volume with your container to persist the database on your local machine. When you first create the container, the SQL in /docker-entrypoint-initdb.d will be executed, and the changes will be stored to the volume. Next time you start the container, MySQL will see that the schema already exists and it won't run the scripts again.
https://docs.docker.com/storage/volumes/
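Putting those two pieces together, a sketch (the volume and container names are examples):

```shell
# A docker-managed volume to hold the database files
docker volume create geodb_data

# First run: the scripts mounted at /docker-entrypoint-initdb.d execute and
# the resulting database lands in the volume. On later runs the datadir is
# no longer empty, so initialization is skipped and startup is fast.
docker run -d --name geodb \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  -e MYSQL_DATABASE=geodb \
  -v "$PWD/sql":/docker-entrypoint-initdb.d:ro \
  -v geodb_data:/var/lib/mysql \
  mysql
```

The long import then only costs you the very first start; subsequent docker run/start cycles reuse the volume.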
In principle you could achieve it like this:
start the container
load the database
perform a docker commit to build an image of the current state of the container.
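The steps above could look roughly like this (image and container names are placeholders). One caveat: docker commit does not capture volume contents, and the official mysql image declares /var/lib/mysql as a VOLUME, so this only works if your image stores the data in a plain container path instead:

```shell
# Run the container and let the init scripts build the database
docker run -d --name geodb-build -e MYSQL_ROOT_PASSWORD=mypassword my-geodb-image

# ...wait until initialization finishes (watch the logs), then snapshot
# the container's filesystem as a new image
docker stop geodb-build
docker commit geodb-build my-geodb-prebuilt:latest
```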
The other option would be to load in the database during the image build time, but for this you would have to start mysql similarly to how it's done in the entrypoint script.
start mysql in background
wait for it to initialize
load in the data using mysql < sql file
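Those three steps, as they might appear inside a RUN step of the Dockerfile, could be sketched as follows. The datadir path, database name, and dump filename are examples; note again that data written to the image's declared VOLUME path during the build is discarded, hence the custom datadir:

```shell
# Initialize a fresh datadir outside the VOLUME path, then start the server in the background
mysqld --initialize-insecure --user=mysql --datadir=/var/lib/mysql-data
mysqld --user=mysql --datadir=/var/lib/mysql-data &

# Wait until the server accepts connections
until mysqladmin ping --silent; do sleep 1; done

# Load the data, then shut down cleanly so the layer is committed in a consistent state
mysql -e 'CREATE DATABASE geodb'
mysql geodb < /tmp/dump.sql
mysqladmin shutdown
```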

How to properly move a mariadb database from a container to the host

I have a smallish webapp running in a Docker container. It uses a mariadb database running in another container on the same box, based on the official "mariadb" image.
When I first set up these containers, I started the mariadb container using an "internal" database. I gave "/var/lib/mysql" a volume name, but I didn't map it to a directory on the host ("-v vol-name:/var/lib/mysql"). Actually, I'm not even sure why I gave it a volume name; I set this up several months ago, and I'm not sure why I would have done that specifically.
In any case, I've concluded that having a database internal to the container wasn't a good idea. I've decided I really need to have the actual database stored on the host and use a volume mapping to refer to it. I know how to do this if I was setting this up from scratch, but now that the app is running, I need to move the database to the host and restart the container to point to that. I'm not certain of all the proper steps to make this happen.
In addition, I'm also going to need to set up a second instance of this application, using containers based on the same images. The second database will also be stored on the host, in a directory next to the other one. I can initialize the second db with the backup file from the first one, but I'll likely manually empty most of the tables in the second instance.
I did use mysqldump inside the container to dump the database, then I copied that backup file to the host.
I know how to set a volume mapping in "docker run" to map /var/lib/mysql in the container to a location on the host.
At this point, I'm not certain exactly what to do with this backup file so I can restart the container with the modified volume mapping. I know I can run "mysql dbname < backup.sql", but I'm not sure of the consequences of that.
While the container is running, run docker cp -a CONTAINER:/var/lib/mysql /local/path/to/folder to copy the MariaDB databases from the container to your local machine. Replace "CONTAINER" with the name or ID of your MariaDB container.
Once you've done that, you can stop the container and restart it binding /local/path/to/folder to the container's /var/lib/mysql path.
If you're using an older version of docker that does not support the -a or --archive flag, you can copy the files without that flag but you'll need to make sure that the folder on the host machine has the proper ownership: the UID and GID of the folder must match the UID and GID of the folder in the Docker container.
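The whole migration might look like this (container name and host path are examples; the UID/GID 999 is typical for the official image but should be verified as shown):

```shell
# 1. Copy the datadir out of the running container, preserving ownership
docker cp -a mariadb-container:/var/lib/mysql /srv/mariadb-data

# 2. Without -a, check the in-container mysql user's UID/GID and match it on the host
docker exec mariadb-container id mysql
sudo chown -R 999:999 /srv/mariadb-data   # use the IDs reported above

# 3. Replace the container with one that bind-mounts the copied datadir
docker stop mariadb-container
docker rm mariadb-container
docker run -d --name mariadb-container -v /srv/mariadb-data:/var/lib/mysql mariadb
```

Since you already have a mysqldump backup, you could also initialize the second instance by restoring that dump into a fresh container rather than copying raw files.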
Note: if you're using SELinux, you might need to set the proper permissions as well, as the documentation for the MariaDB image states:
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir

How to store named docker volume with mysql data on external disk (mac os)?

I have installed docker on my Mac on the SSD, and Docker.qcow2 is located on the Mac too. So a named volume holding 100 GB of mysql data lives inside this Docker.qcow2 on the SSD as well!
I want to store the named volume with the mysql data on an external HDD and connect it to the docker container on the Mac. It is fine for me to keep all containers on the SSD but put some huge volumes on a cheap external disk. Is this possible, and how?
Map a directory on your external disk to /var/lib/mysql inside the container. This will write the contents of the /var/lib/mysql directory (inside the container) to a directory on your external hard disk.
To do this, use option -v host_directory:container_directory in your docker run command.
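For example (the mount point /Volumes/BigDisk is a placeholder for wherever macOS mounts your external drive):

```shell
# Store the MySQL datadir on the external disk instead of inside Docker.qcow2
docker run -d --name mysql-ext \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v /Volumes/BigDisk/mysqldata:/var/lib/mysql \
  mysql
```

Note that on Docker for Mac the host path must be within Docker Desktop's file-sharing list (Preferences > File Sharing), or the mount will be refused.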
Check this reference for further information: https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume