stop mysql daemon container from docker

I have been working on getting the mysql Docker image running on my Windows 10 local machine. It works well, but the problem is that I deleted the .yml file (I also ran docker with a command pointing to that specific file). Now every time I run Docker, I get an adminer container that won't be deleted at all. Suggestions please? What I have tried:
deleting Docker and reinstalling
forcing the container to be deleted, but within a few seconds another one appears

During the Docker MySQL configuration I had copied a command that used docker swarm and pasted it without reading up on it.
I just stopped the swarm service.
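For anyone hitting the same symptom: when a stack has been deployed as a swarm service, the swarm scheduler keeps recreating the container until the service (or the whole stack) is removed, so deleting the container by hand never sticks. A minimal sketch, assuming the service is simply named adminer (your actual service and stack names may differ):
docker service ls                  # list swarm services; look for the adminer entry
docker service rm adminer          # remove the service so it is no longer rescheduled
docker stack rm <stack_name>       # or remove the whole stack
docker swarm leave --force         # or leave swarm mode entirely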

Mysql container exits with status: Exited (139)

I am facing an issue when I want to run a MySQL container. I tried with the example command I found on Docker Hub:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6.24
docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                       PORTS   NAMES
2569c1a8cbd2   mysql:5.6.24   "/entrypoint.sh mysq…"   5 seconds ago   Exited (139) 4 seconds ago           some-mysql
This shows that the container exited with code 139.
And I can't get a single line of logs: the output of the docker logs command is empty...
~ docker logs 2569c1a8cbd2
~
I am using Docker (v19.03.1, build 74b1e89) on Debian (v10.0).
Are you running other containers? (maybe a separate project?)
I have two separate projects with their separate docker-compose files and their own services.
When one is running, the one with a mysql/mariadb container exits with 139. If I docker-compose down the other project, then the mysql container starts correctly.
I'm still figuring out why (came here for an answer to my problem), but you might have something similar.
Today I had the same issue after an upgrade from Debian 9 to 11. The mysql:5.6.24 Docker image just doesn't want to start. My solution was to upgrade to image mysql:5-debian
https://hub.docker.com/layers/mysql/library/mysql/5-debian/images/sha256-5adbbb05d43e67a7ed5f4856d3831b22ece5178d23c565b31cef61f92e3467ea?context=explore
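For context, exit status 139 is 128 + 11, i.e. mysqld was killed by SIGSEGV (a segmentation fault), which also explains the empty docker logs output. A minimal sketch of switching to the newer image, assuming you keep the container name from the question:
docker rm some-mysql               # free the name used by the crashed container
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5-debian
docker ps                          # the container should now stay in the "Up" state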

docker best way to run mysql

I'm new to Docker. I have two microservices running in two containers and I would like to create a simple database for them.
I created it like this:
docker run --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
I enter the container using
docker exec -it containernumber /bin/bash
and then I created a database... But when I looked at /var/lib/mysql on the host there was nothing new - not the database I had created from inside the container. Did I do something wrong?
I would like to have a database with its data stored on the host, but running in a Docker container (is that a good solution?). How do I do this correctly?
You should not have to docker exec into the container to create a database instance: the container already runs one.
The doc mentions:
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
So the order matters: the host directory comes first, the container path second.
The docker cmd option -v /storage/mysql1/mysql-datadir:/var/lib/mysql indicates that you are mounting host directory /storage/mysql1/mysql-datadir to /var/lib/mysql as a data volume of the container.
So if you check /var/lib/mysql from the container you should see the same contents as /storage/mysql1/mysql-datadir on your host machine.
More details:
https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
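A minimal sketch of verifying the bind mount end to end, assuming the host directory exists and using a made-up container name mysql1:
docker run --name mysql1 --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
# give the server a few seconds to initialize, then create a database through it
docker exec -it mysql1 mysql -uroot -pmypassword -e "CREATE DATABASE testdb;"
# the new database directory should now be visible on the host side of the mount
ls /storage/mysql1/mysql-datadir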

Why is MariaDB data persistent in my Docker container? I don't have any volumes

I have a Docker container with MariaDB installed. I am not using any volumes.
[vagrant@devops ~]$ sudo docker volume ls
DRIVER              VOLUME NAME
[vagrant@devops ~]$
Now something strange is happening. When I do sudo docker stop and sudo docker start the MariaDB data is still there. I expected this data to be lost.
Btw, when I edit some file, for example /etc/hosts, I do see the expected behavior: changes to this file are lost after restart.
How is it possible that MariaDB data is persistent without volumes? This shouldn't happen right?
docker stop does not remove a container, and docker start does not create one.
docker run does create a new container from an image.
docker start starts a container which already exists but has been stopped before (call it pause/resume if you like).
Thus for start/stop no volumes are required to keep the state persistent.
If, however, you do docker stop <name> && docker rm <name> and then docker start <name>, you get an error that the container no longer exists - so now you need docker run <args> yourimage.
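A minimal sketch of that difference, using a throwaway container name mariadb-test (an assumption, not from the question):
docker run --name mariadb-test -e MYSQL_ROOT_PASSWORD=secret -d mariadb
# wait for the server to initialize, then create some data
docker exec mariadb-test mysql -psecret -e "CREATE DATABASE keepme;"
docker stop mariadb-test && docker start mariadb-test
docker exec mariadb-test mysql -psecret -e "SHOW DATABASES;"    # keepme is still there
docker rm -f mariadb-test                                       # removing the container discards its state
docker run --name mariadb-test -e MYSQL_ROOT_PASSWORD=secret -d mariadb   # fresh container, keepme is gone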

Docker db container running. Another process with pid <id> is using unix socket file

I'm trying to run a Docker MySQL container with an initialized db according to the instructions provided in this answer: https://stackoverflow.com/a/29150538/6086816. After the first run it works OK, but on the second run, after the script tries to execute /usr/sbin/mysqld, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What can be the reason for it?
I was facing the same issue. These are the steps I followed to resolve it -
Firstly, stop your Docker service with the following command - "sudo service docker stop"
Now, get into the Docker folder on your Linux system at the following path -
/var/lib/docker.
Then, within the docker folder, go into the volumes folder. This folder contains the volumes of all your containers (the data of each container) -
cd volumes
Inside volumes, do 'sudo ls' and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash.
(You need to inspect your Docker container and get the hash of your container's volume. For this, do the following steps -
Run the command "docker inspect 'your container ID'".
You will get a JSON document. It is the config of your Docker container.
Search for the Mounts key within this JSON. Under Mounts you will find the Name (hash) of your volume. (You will also get the path of your volume within Mounts: the "Name" key is your volume name and "Source" is the path where your volume is located.))
Once you have the name of your volume, go into that volume's folder and within it you will find a "_data" folder. Get into this folder.
Finally, within the "_data" folder, use the sudo ls command and you will find a file named mysql.sock.lock. Remove it with "rm -f mysql.sock.lock".
Now restart your Docker service and then start your Docker container. It will start working.
Note - use sudo with each command while you are inside the Docker directories.
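A condensed sketch of those steps (the container ID and volume hash are placeholders you fill in from your own docker inspect output):
docker inspect <container_id> --format '{{json .Mounts}}'    # note the volume "Source" path
sudo service docker stop
sudo rm -f /var/lib/docker/volumes/<volume_hash>/_data/mysql.sock.lock
sudo service docker start
docker start <container_id>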
You should make sure the socket file has been deleted before you start MySQL. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file:
you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
This is a glitch with docker.
Execute the following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
Just faced the same problem.
After much research, a summary of my solution:
Find the host location of the Docker files
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the file mysql.sock.lock
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: be sure you don't have any other mysql.sock... files than those two. If you do, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container showed exactly the same problem as described, repeated a few times.
I wanted to take a look around by running the container in interactive mode with docker start -i mysql_container from one bash window while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting in interactive mode together with running the subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps somebody.
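For reference, the two commands from that description, run from two separate shells (the container name mysql_container is the one used in the answer above):
docker start -i mysql_container                            # shell 1: start attached / interactive
docker exec -it mysql_container cat /etc/mysql/my.cnf      # shell 2: poke around while it starts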

Run mysql instance in Docker container

I want to run a mysql container in Docker. The Dockerfile that I use is the one defined in the official repo [here].
I only extended this Dockerfile with 2 more lines so I can import an init SQL file, like this:
ADD my-init-file.sql /my-init-file.sql
CMD ["mysqld", "--init-file=/my-init-file.sql"]
I want to run this instance as a daemon, but when I execute this command from the documentation:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
The container exits automatically. I want to run it as a daemon so I can link apps (like a WordPress site) in another container to the MySQL database.
Maybe I am missing something. Can anyone show me how ?
[EDIT] I forgot to say that I ran docker logs my-container after starting the container and there is no error:
Running mysql_install_db ...
Finished mysql_install_db
docker ps shows no running container.
My guess is the command executes successfully but the mysqld daemon does not start.
Your Dockerfile seems fine. Your init file may be buggy, though. If MySQL terminates, then the container will terminate.
The first debug step is to look at the logs:
docker logs some-mysql
You can use this whether the container is stopped or running. Hopefully, you'll see something obvious, like you missed some semicolons.
If the logs don't help, the next thing to try is to get inside the container and see what's happening first-hand:
docker run -e MYSQL_ROOT_PASSWORD=mysecretpassword -it mysql /bin/bash
This will get you a Bash shell inside your container. Then you can run
mysqld --init-file=/my-init-file.sql
And see what happens. Maybe something in your init file tells MySQL to exit cleanly, so you get no logs but the command terminates.
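A minimal debugging sketch along those lines, assuming you build the extended Dockerfile into an image named mysql-with-init (the name is made up for illustration):
docker build -t mysql-with-init .       # build the image containing my-init-file.sql
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql-with-init
docker logs -f some-mysql               # watch mysqld start, or see why it exits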
Dmitri, after you ran docker run with the -d argument, your container is detached and already working as a daemon, as long as the CMD command did not return an exit code.
You can check running containers with the docker ps command.
You can check all containers by running docker ps -a.
Also, I think you will need to expose the MySQL port outside the container. You can do that with the -P argument, or a better way to let containers communicate with each other is Docker links.
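A minimal sketch of both options (container names are the ones used earlier in this thread; the WordPress container is just an example of a linked app):
# publish the MySQL port to the host explicitly (-P would publish all exposed ports on random host ports)
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -p 3306:3306 -d mysql
# or link another container to it (legacy links; a user-defined network is the modern alternative)
docker run --name some-wordpress --link some-mysql:mysql -p 8080:80 -d wordpress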