I'm trying to run a mysql docker container with persistent data mapped to a folder that is mounted through CIFS.
(I originally posted a more general question but got a bit further in terms of user rights, and now the error seems to be specific to InnoDB on CIFS mounts. Hence I'm reposting this as a new topic. Thanks to life888888 for the initial help.)
Here is the command to start the container:
docker run \
--name localmysql \
-v /mnt/mysqlshare:/var/lib/mysql \
--rm \
--env MYSQL_ALLOW_EMPTY_PASSWORD=true \
-it \
mysql:8.0.31-debian
/mnt/mysqlshare is a mounted CIFS share. The command used to mount it is below:
sudo mount -t cifs -o username=linuxmount,cache=none,vers=3.0,uid=999,gid=999,rw [networkpath] /mnt/mysqlshare
On the machine where Docker is running I have a user called "mysql" which owns the mounted CIFS share. It is configured with uid and gid 999, which matches the mysql user that the container uses by default.
When I open an interactive session in the container and switch to the mysql user with "su mysql", I'm able to write to /var/lib/mysql and the changes are reflected on the mounted share.
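For reference, a rough sketch of that check from inside the container (assuming the container name localmysql from the command above; the test file name is arbitrary):

# open a shell in the running container
docker exec -it localmysql bash
# the mysql user inside the image should resolve to uid/gid 999
id mysql
# switch to the mysql user and try writing to the data directory
su -s /bin/bash mysql
touch /var/lib/mysql/write_test
ls -ln /var/lib/mysql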
However when starting up the container I'm getting the following error which just keeps repeating:
2022-12-08T21:12:56.435340Z 0 [ERROR] [MY-012894] [InnoDB] Unable to open './#innodb_redo/#ib_redo0' (error: 11).
2022-12-08T21:12:56.435784Z 0 [ERROR] [MY-012574] [InnoDB] Unable to lock ./#innodb_redo/#ib_redo0 error: 13
Files do get created in the folder, though.
I had the same issue with MariaDB and I'm having some success with the following volume definition in my docker-compose.yml file:
redacted-volume:
  driver: local
  driver_opts:
    type: "cifs"
    o: "mfsymlinks,vers=3.0,username=redacted,password=redacted,uid=999,gid=999"
    device: "//192.168.4.204/docker-data/redacted/data-db"
mfsymlinks enables the symlinks that MariaDB needs. uid and gid are the Linux user ID and group ID of the MariaDB user that accesses the share. This works for me with Windows Server connecting to SMB shares served by TrueNAS Scale. Docker version 20.10.22.
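In case it helps, here is a minimal sketch of the service side that consumes such a named volume in the same docker-compose.yml (the service name, image tag and root-password variable are placeholders, not from my setup; the volume definition above sits under a top-level volumes: key):

services:
  db:
    image: mariadb:10.6
    environment:
      - MARIADB_ROOT_PASSWORD=redacted
    volumes:
      - redacted-volume:/var/lib/mysql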
I still have this same issue with MySQL. I can't get it to work with this same configuration and I feel like I have looked everywhere on the internet.
I'm learning Docker and MongoDB at the same time. I'm trying to import a collection from a JSON file into a MongoDB Docker container. Since this is a container, I'm using a bind mount to make the JSON file from my local machine visible inside it, but when I run the container with the bind mount, MongoDB won't start.
I'm using the following command to get a bash shell inside the running container:
sudo docker exec -it mongo-image bash
Inside the container I just type mongo and the Mongo shell works as expected. But if I run the following command:
sudo docker run -it -v "$(pwd)":/MyData mongo:4.2 /bin/bash
I can see the files from my local machine, but if I type mongo, the shell won't start:
root@e753483bb65b:/# mongo
MongoDB shell version v4.2.15
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-07-30T21:05:14.037+0000 E QUERY [js] Error: couldn't connect to server
127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to
127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:353:17
@(connect):2:6
2021-07-30T21:05:14.040+0000 F - [main] exception: connect failed
2021-07-30T21:05:14.040+0000 E - [main] exiting with code 1
Do you have any suggestion for importing a JSON collection into the MongoDB Docker container, or for fixing this issue?
I think you are overriding the default command of the mongo image, and this has nothing to do with mounts and volumes. I suggest you split this:
sudo docker run -it -v "$(pwd)":/MyData mongo:4.2 /bin/bash
into two separate commands:
sudo docker run -d -v "$(pwd)":/MyData mongo:4.2
and then, once it has printed the ID of the running container, create an interactive shell:
sudo docker exec -it fb82f /bin/bash
where fb82f is the first few characters of the container ID.
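To actually import the JSON collection, one option is to run mongoimport from the shell you get with docker exec (mongod is running in that container, and the bind-mounted file is under /MyData). This is only a sketch; the database, collection and file names are placeholders, and --jsonArray should be dropped if the file contains newline-delimited documents instead of a single JSON array:

# inside the container started with -v "$(pwd)":/MyData
mongoimport --db mydb --collection mycollection \
    --file /MyData/collection.json --jsonArray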
I am currently using a Docker container to run MySQL on WSL2 and I am facing an issue while running this container.
The container started and then immediately exited with code 1, and the Docker logs showed the following errors:
[ERROR] 'Setup of socket: '/var/run/mysqld/mysqlx.sock' failed, another process with PID is using UNIX socket file'
[ERROR] Another process with pid is using unix socket file.
[ERROR] Unable to setup unix socket lock file.
[ERROR] Aborting
How can I resolve this error and start my container again?
Following are the steps that I used to resolve this issue:
First, stop your Docker service with the following command: "sudo service docker stop"
Now, go to the Docker data directory on your Linux system: /var/lib/docker
Then, within the docker folder, go into the volumes folder. This folder contains the volumes of all your containers (the persistent data of each container): cd volumes
After getting into volumes, run 'sudo ls' and you will see multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash. You need to inspect your container to get the hash of its volume, as follows (a one-liner that does the same lookup is sketched after these steps):
Run the command "docker inspect 'your container ID'".
You will get a JSON document; it is the config of your Docker container.
Search for the Mounts key within this JSON. Under Mounts you will find the name (hash) of your volume and also its path: the "Name" key is your volume name and "Source" is the path where the volume is located.
Once you have the name of your volume, go into that volume's folder, and inside it you will find a "_data" folder. Go into this folder.
Finally, within the "_data" folder, run sudo ls and you will find a file named mysql.sock.lock. Remove it with "rm -f mysql.sock.lock".
Now restart your docker service and then start your docker container. It will start working.
Note: use sudo for each command while you are inside the Docker data directory.
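The one-liner mentioned above: docker inspect can print the volume name and host path directly with a Go-template filter, which saves reading the whole JSON (a sketch; replace <container ID> with yours):

# prints Name -> Source for every mount of the container
sudo docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Source }}{{ "\n" }}{{ end }}' <container ID>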
I am unable to run MySQL containers built from MySQL images with the database volume mapped to a folder on my host machine.
It doesn't matter if the host folder is empty or already contains database files. I do know that Docker Toolbox can mount volumes on Windows only from C:\Users\, so my test folder is under that one.
I have tried different MySQL images (official and not) from 5.5 to latest with no result. Whenever /var/lib/mysql in the container points to a folder on my host machine (c:\Users\someuser\testfolder), the container fails on startup with an InnoDB error ("InnoDB: Operating system error number 22 in a file operation" or "InnoDB: File ./ib_logfile0: 'aio write' returned OS error 122").
I have tried modifying the container's /etc/my.cnf (under the [mysqld] section, using the "docker cp" command) to add "innodb_use_native_aio=OFF" or "innodb_use_native_aio=0" (sometimes both), and I have also tried running "docker run" with "--user 1000:50", with no result either.
As soon as I remove the mount from the container's /var/lib/mysql to my host folder, the container runs normally.
There are many similar questions, but none has a complete step-by-step solution for running a MySQL container with Docker Toolbox under Windows 10 (Home & Pro) so that the container works with an existing database on the host's volumes.
It took me a while to get an answer, but finally everything worked! For those who are new to Docker and have problems mapping the MySQL folder to the host, here is a short guide. Please note I chose the bitnami/mysql image for my experiments (for other images the folders can differ).
1. Create a folder c:\Users\[YourAccount]\MySQLData for MySQL data.
2. Create a folder c:\Users\[YourAccount]\MySQLConf for a custom MySQL config file.
3. Create a custom MySQL config file c:\Users\[YourAccount]\MySQLConf\my_custom.cnf and add two lines to it:
[mysqld]
innodb_use_native_aio=0
4. Now create and run the container mounting your custom config and data folder to it:
docker run -d --name mysql -e ALLOW_EMPTY_PASSWORD="yes" \
-v //c/Users/[YourAccount]/MySQLData:/bitnami/mysql/data \
-v //c/Users/[YourAccount]/MySQLConf/my_custom.cnf:/opt/bitnami/mysql/conf/my_custom.cnf:ro \
bitnami/mysql:latest
Hooray!
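To double-check that the custom config was actually picked up, something along these lines should work (assuming the container name mysql from the command above and that the mysql client is on the image's PATH, as it is in the bitnami images as far as I know):

docker exec -it mysql mysql -u root -e "SHOW VARIABLES LIKE 'innodb_use_native_aio';"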
I created a docker-compose.yml file, which starts a MySQL container and sets the password of the MySQL root user to "hello".
# docker-compose.yml
version: '3.1'
services:
  mysql:
    image: mysql:5.6.40
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=hello
      - MYSQL_ALLOW_EMPTY_PASSWORD=hello
      - MYSQL_RANDOM_ROOT_PASSWORD=hello
Then I run:
sudo docker-compose up # (1)
... from the directory with this file.
The first problem is that the newly created container runs in the foreground rather than giving me a shell, and I can't put it in the background without exiting it (which I can do with Ctrl+C), or otherwise get into bash while this process is running.
But when I open a new terminal window and run:
sudo docker exec -it bdebee1b8090 /bin/bash # (2)
..., where bdebee1b8090 is the ID of the running container, I get a bash shell, from which I can enter the MySQL shell as the root user by entering the password "hello":
mysql -u root -p # (3)
Then I exit MySQL shell and bash shell in the container without stopping the container.
And then I commit changes to the container:
sudo docker commit bdebee1b8090 hello_mysql # (4)
..., creating an image. And then, when I run the image:
sudo docker run -it --rm hello_mysql /bin/bash # (5)
... and try to start the MySQL shell again as the root user, entering the password "hello", I get an error like
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
And even after I restart the MySQL server:
/etc/init.d/mysql restart # (6)
..., I get the same error.
All of the above commands were run on ubuntu.
Why is this happening?
Edit:
When I try these steps on macOS High Sierra, I get stuck at step (3), because when I try to enter the password "hello", it isn't accepted. This error is shown on the screen:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
And when I try to restart MySQL server in the container
/etc/init.d/mysql restart
..., the container restarts in the background, but when I run it again and try to repeat steps (2) and (3) it gives the same error, and when I restart the MySQL server again, the container restarts in the background again...
Edit2:
After I removed the lines:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
... it started to work on the Mac, but still, after I commit the container and try to enter the MySQL shell, it gives an error like:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
.... again.
According to the official documentation for MySQL Docker images:
https://hub.docker.com/r/mysql/mysql-server/
The boolean variables including MYSQL_RANDOM_ROOT_PASSWORD,
MYSQL_ONETIME_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, and
MYSQL_LOG_CONSOLE are made true by setting them with any strings of
non-zero lengths. Therefore, setting them to, for example, “0”,
“false”, or “no” does not make them false, but actually makes them
true. This is a known issue of the MySQL Server containers.
Which means your config:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
is equivalent to:
- MYSQL_ALLOW_EMPTY_PASSWORD=true
- MYSQL_RANDOM_ROOT_PASSWORD=true
Which brings us to the point:
MYSQL_RANDOM_ROOT_PASSWORD: When this variable is true (which is its
default state, unless MYSQL_ROOT_PASSWORD is set or
MYSQL_ALLOW_EMPTY_PASSWORD is set to true), a random password for the
server's root user is generated when the Docker container is started.
The password is printed to stdout of the container and can be found by
looking at the container’s log.
So either check your container's log to find the generated random password, or just remove the MYSQL_RANDOM_ROOT_PASSWORD parameter from your file.
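Concretely, a minimal fix for the compose file above is to keep only the root password variable, e.g.:

    environment:
      - MYSQL_ROOT_PASSWORD=hello

Alternatively, the generated password can usually be fished out of the container log (the exact wording of the log line differs between the mysql and mysql/mysql-server images):

sudo docker logs bdebee1b8090 2>&1 | grep -i 'generated root password'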
Docker images are generally designed to run a single process or server, in the foreground, until they exit. The standard mysql image works this way: if you docker run mysql without -d you will see all of the server logs printed to stdout, and there’s not an immediate option to get a shell. You should think of interactive shells in containers as a debugging convenience, and not the usual way Docker containers run.
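For the foreground issue specifically, the usual pattern is to let compose detach and read the logs separately, for example:

sudo docker-compose up -d           # start the service in the background
sudo docker-compose logs -f mysql   # follow the log of the mysql service when needed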
When you docker run --rm -it hello_mysql /bin/bash, it runs the interactive shell instead of the database server, and that's why you get the "can't connect to server" error. As a general rule you should assume things like init.d scripts just don't work in Docker; depending on the specific setup they might, but again, they're not the usual way Docker containers run.
You can install a mysql client binary on your host or elsewhere and use that to interact with the server running in the container. You don’t need an interactive shell in a container to do any of what you’re describing here.
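For example, if the port is published when the container starts, a client on the host can reach the server directly. A sketch, assuming port 3306 is free on the host and the same MYSQL_ROOT_PASSWORD=hello as in the compose file (the container name is arbitrary):

sudo docker run -d --name hello-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=hello mysql:5.6.40
# give the server a few seconds to initialize, then from the host:
mysql -h 127.0.0.1 -P 3306 -u root -phello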
A bit late here:
I also faced this issue.
I took an Ubuntu image and ran apt-get install mysql-server. It worked fine in the running container, but after committing it into an image it didn't work and I was unable to bring the service up either.
So on Ubuntu I switched to apt-get install mariadb-server. After committing it into an image, in a new container I ran: service mysql start; mysql
It worked fine!
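For what it's worth, a lighter alternative to baking the server into a committed Ubuntu image is the official mariadb image, roughly like this (the container name and password are placeholders):

docker run -d --name mydb -e MARIADB_ROOT_PASSWORD=secret mariadb:10.6
# the image ships the client too, so you can connect with:
docker exec -it mydb mysql -u root -psecret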