Docker: Edit "my.cnf" file in stopped container - mysql

After making an edit to "my.cnf", I now get an error from Kitematic on the Mac when I attempt to start the container:
mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 19! mysqld:
[ERROR] Fatal error in defaults handling. Program aborted!
I've tried accessing the container via:
docker exec -it [container] bash
... but I get the error:
Error response from daemon: Container [container] is not running
I was able to access something via the image, but the file didn't appear to be the same, so I'm not sure what was happening (I'm not too conversant with Docker).
At this stage, either making the appropriate edit and fixing the container, or somehow cloning the MySQL data to another container would be ideal.

To fix the my.cnf, you can use docker container cp. It works with stopped containers.
To copy the file from your container to the current path:
docker container cp containerId:/etc/mysql/my.cnf container-my.cnf
Then edit container-my.cnf and copy it back from your current path into the container:
docker container cp container-my.cnf containerId:/etc/mysql/my.cnf
To use the existing MySQL data with a new container:
docker container inspect -f '{{.Mounts}}' [container]
gives you the name of the volume where the data lives. Then start a new mysql container and mount the volume under /var/lib/mysql:
docker container run -d -v [volume_name]:/var/lib/mysql [image]
Afterwards you can remove the old container (actually, you can remove it even before creating the new one).
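Putting those pieces together, here is a minimal sketch of the whole recovery flow; the names my-mysql and my-mysql-2 and the mysql:8.0 tag are placeholders, so substitute your own container names and the image your data was created with:
# Fix the config in place: copy it out, edit it, copy it back, restart.
docker container cp my-mysql:/etc/mysql/my.cnf ./container-my.cnf
vi ./container-my.cnf                     # fix the option without a [group] header at line 19
docker container cp ./container-my.cnf my-mysql:/etc/mysql/my.cnf
docker container start my-mysql
# Or: find the data volume and attach it to a fresh container.
docker container inspect -f '{{ range .Mounts }}{{ .Name }}{{ end }}' my-mysql
docker container run -d --name my-mysql-2 -v <volume_name>:/var/lib/mysql mysql:8.0
# (MYSQL_ROOT_PASSWORD and friends are only used to initialize an empty volume,
#  so they are not needed when the volume already contains data.)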

If you are using the official mysql Docker image from https://hub.docker.com/_/mysql, you can supply a custom MySQL configuration file that overrides my.cnf.
If /my/custom/config-file.cnf is the path and name of your custom configuration file, you can start your mysql container like this (note that only the directory path of the custom config file is used in this command):
$ docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
This will start a new container some-mysql where the MySQL instance uses the combined startup settings from /etc/mysql/my.cnf and /etc/mysql/conf.d/config-file.cnf, with settings from the latter taking precedence.
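For illustration, the custom file only needs the settings you want to change; a hypothetical /my/custom/config-file.cnf might contain nothing more than:
[mysqld]
max_connections = 250
sql_mode = NO_ENGINE_SUBSTITUTION
Because the file lives on the host, a bad setting can be fixed there directly and applied with a simple docker restart some-mysql.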

Related

Resuming docker mysql instance after restarting

I'm using docker to run a mysql 5.6 instance on my localhost (which is running ubuntu 20.04), using these instructions. When I create a new container for the database I use the following command
sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
That serves the intended purpose; I'm able to create the database using port 3310 and get on with what I want to do.
However, when I reboot my localhost, I am unable to get back into MySQL 5.6 using that port.
When I list containers, I see none listed:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
So I try to recreate it and am told that it already exists:
$ sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
docker: Error response from daemon: Conflict. The container name "/mysql-56-container" is already in use by container "a05582bff8fc02da37929d2fa2bba2e13c3b9eb488fa03fcffb09348dffd858f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
So I try starting it but with no luck:
$ sudo docker start my-56-container
Error response from daemon: No such container: my-56-container
Error: failed to start containers: my-56-container
I clearly am not understanding how this works so my question is, how do I resume work on databases I've created in a docker container after I reboot?
docker ps only lists running containers. If you reboot your machine, all of them will be stopped. You can use docker ps --all or docker container ls --all to list all containers (running or stopped). You can read more in the docker ps command-line reference.
Once a container is created, you cannot create another one with the same name. That is the reason your second docker run is failing.
You should use docker start instead, but you are trying to start a container with the wrong name: your docker start command refers to my-56-container, while the container is actually called mysql-56-container (compare with the first docker run command in your question).
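Putting that together, a session like the following (using the names from the question, and assuming the mysql client is installed on the host) should get you back into the database after a reboot:
sudo docker ps --all                     # the stopped container shows up here
sudo docker start mysql-56-container     # note the full name, not my-56-container
sudo docker logs -f mysql-56-container   # optional: watch it come up, Ctrl+C to stop following
mysql -h 127.0.0.1 -P 3310 -u root -p    # connect through the published port again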

Unable to start the MySQL docker container on WSL2

I am currently using a Docker container to run MySQL on WSL2 and I am facing an issue while running this container. The container started and then immediately exited with code 1, and checking the Docker logs gave the following errors:
[ERROR] 'Setup of socket: '/var/run/mysqld/mysqlx.sock' failed, another process with PID is using UNIX socket file'
[ERROR] Another process with pid is using unix socket file.
[ERROR] Unable to setup unix socket lock file.
[ERROR] Aborting
How can I resolve this error and start my container again?
Following are the steps that I used to resolve this issue (a command sketch follows these steps):
Firstly, stop your Docker service with: sudo service docker stop
Now, go into the Docker data directory on your Linux system, at the path /var/lib/docker.
Within the docker folder you need to go into the volumes folder. This folder contains the volumes of all your containers (the persistent data of each container): cd volumes
Inside volumes, run sudo ls and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash. You need to inspect your container to get the hash of its volume. For this, do the following:
Run the command docker inspect 'your container ID'.
You will get a JSON document; it is the configuration of your Docker container.
Search for the Mounts key within this JSON. Under Mounts, the "Name" key is the name (hash) of your volume and "Source" is the path where the volume is located on the host.
Once you have the name of your volume, go into that volume's folder and then into the "_data" folder inside it.
Finally, within "_data", run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your Docker service and then start your Docker container. It will start working.
Note: use sudo with each command while you are inside the Docker data directory.
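The same steps as a command sketch; <volume_hash> and <container_id> are placeholders you get from docker inspect, so adjust the paths to your system:
docker inspect <container_id>      # while Docker is still running: note Mounts -> Name and Source
sudo service docker stop
sudo ls /var/lib/docker/volumes/<volume_hash>/_data          # should show the MySQL data files
sudo rm -f /var/lib/docker/volumes/<volume_hash>/_data/mysql.sock.lock
sudo service docker start
docker start <container_id>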

Command to use with scratch docker container

I'm trying to start a docker container for mysql. The image for the container was built from scratch for a training I attended and I need to figure out how to configure it to run a command that will start the container.
The /bin/bash and /bin/sh commands don't work. When I docker inspect the container, the CMD section doesn't contain anything. I've tried appending /bin/bash or /bin/sh to my docker container run command; that populates the CMD field, but the container still won't run.
There are a number of other microservice containers I'm having the same problem with. This is the first one I need to solve however.
This is the command I'm running:
docker run -d -v infytel-mysql-volume:/var/lib/mysql --network=infytel-docker-networkMS --name=infytel-mysql-con2 -e MYSQL_PASSWORD_ROOT=root infytel-mysql-img:v1 /bin/bash
This is my error:
oci runtime error: container_linux.go:235: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory
[EDIT] Running docker logs gives the error shown above.
Running without the /bin/sh command gives: Error response from daemon: No command specified

MySQL container crash after /etc/mysql/my.cnf change, how to edit back?

I changed some MySQL config settings and set something wrong; now the Docker container keeps restarting and I cannot find the my.cnf file to edit in the host filesystem. I have looked through the aufs/diff folders but have so far been unable to find it. I also tried:
find / -name my.cnf -exec nano {} \;
but it does not bring up the file I changed. I then tried changing config.v2.json to start /bin/bash instead of mysqld and restarted Docker, but it still started mysqld (due to a supervisor or something?). I am using the official mysql container image.
I see two possible solutions to your problem:
Bypass the ENTRYPOINT for the MySQL image
Find your image name by running docker images, then run:
docker run -it --entrypoint="/bin/sh" OPTIONS image
That should drop you into a shell inside a new container based on that image, and from there you can run whatever commands you need to find your my.cnf file. I don't know whether editing the file there, saving it, and running it again will work; I haven't tried it.
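For example, a sketch only (the image and volume names are placeholders for whatever docker images and docker volume ls show on your machine):
docker run -it --rm --entrypoint "/bin/sh" -v <your_mysql_volume>:/var/lib/mysql <your_mysql_image>
# inside that shell:
cat /etc/mysql/my.cnf        # the default config shipped in the image
ls /var/lib/mysql            # your existing data, if you mounted the volume
Keep in mind that this shell runs in a new container created from the image, not in the broken container, so edits made here do not change the file inside the old container.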
Delete the old image and use the proper way to edit the my.cnf file
Find your image name by running docker images and then delete it by running docker rmi <image_name>.
The docs for the official MySQL image on Docker Hub are pretty straightforward on this, and I quote:
Using a custom MySQL configuration file
The MySQL startup configuration is specified in the file /etc/mysql/my.cnf, and that file in turn includes any files found in the /etc/mysql/conf.d directory that end with .cnf. Settings in files in this directory will augment and/or override settings in /etc/mysql/my.cnf. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as /etc/mysql/conf.d inside the mysql container.
If /my/custom/config-file.cnf is the path and name of your custom configuration file, you can start your mysql container like this (note that only the directory path of the custom config file is used in this command):
$ docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
This will start a new container some-mysql where the MySQL instance uses the combined startup settings from /etc/mysql/my.cnf and /etc/mysql/conf.d/config-file.cnf, with settings from the latter taking precedence.
From that point on, if you keep the config file on your host, you'll never run into this problem again, since you can edit the file as many times as you want.
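As a sketch of that workflow (using the paths and names from the quote above; max_connections is just an example variable):
vi /my/custom/config-file.cnf            # fix the bad setting on the host
docker restart some-mysql                # restart the container to pick it up
docker exec -it some-mysql mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_connections';"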

Docker db container running. Another process with pid <id> is using unix socket file

I'm trying to run a Docker MySQL container with an initialized db according to the instructions provided in this answer: https://stackoverflow.com/a/29150538/6086816. After the first run it works OK, but on the second run, after trying to execute /usr/sbin/mysqld from the script, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What could be the reason for this?
I was facing the same issue. Following are the steps that I used to resolve it:
Firstly, stop your Docker service with: sudo service docker stop
Now, go into the Docker data directory on your Linux system, at the path /var/lib/docker.
Within the docker folder you need to go into the volumes folder. This folder contains the volumes of all your containers (the persistent data of each container): cd volumes
Inside volumes, run sudo ls and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash. You need to inspect your container to get the hash of its volume. For this, do the following:
Run the command docker inspect 'your container ID'.
You will get a JSON document; it is the configuration of your Docker container.
Search for the Mounts key within this JSON. Under Mounts, the "Name" key is the name (hash) of your volume and "Source" is the path where the volume is located on the host.
Once you have the name of your volume, go into that volume's folder and then into the "_data" folder inside it.
Finally, within "_data", run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your Docker service and then start your Docker container. It will start working.
Note: use sudo with each command while you are inside the Docker data directory.
You should make sure the socket file has been deleted before you start MySQL. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file; you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
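A sketch of that clean-up without starting the broken container, assuming the data volume is called db_data (find the real name with docker inspect on the db container) and mysql:5.7 stands for whatever image tag you use:
# See which socket path the image's my.cnf configures:
docker run --rm --entrypoint cat mysql:5.7 /etc/mysql/my.cnf | grep -i socket
# Remove the stale socket and lock files from the data volume with a throwaway container:
docker run --rm -v db_data:/var/lib/mysql busybox sh -c 'rm -f /var/lib/mysql/mysql.sock /var/lib/mysql/mysql.sock.lock'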
This is a glitch with docker.
Execute the following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them (named volumes are kept, but anything stored only in a container's writable layer is lost).
After this it should work just fine.
Just faced the same problem.
After much research, here is a summary of my solution:
Find the host location of the Docker files:
$ docker inspect <container_name> --> see the Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the lock file mysql.sock.lock
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your Docker container.
NOTE: make sure you don't have any other mysql.sock... files besides those two. If you do, don't use the wildcard (*); delete each of the two files individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container showed exactly the same problem as described, repeated a few times.
I wanted to have a look around, so I ran the container in interactive mode with docker start -i mysql_container from one bash window while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps anybody.