Mounting a container volume from the host's drive? - mysql

I'm setting up a MySQL container like so:
docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
Now, this works when nothing is mounted on /srv on the host, but when I mount my drive, Docker seems to write to the underlying filesystem (/) instead, e.g.:
/]# ls -l /srv
total 0
/]# mount /dev/xvdc1 /srv
/]# mount
...
/dev/xvdc1 on /srv type ext4 (rw,relatime,seclabel,data=ordered)
/]# docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
/]# ls -l /srv
total 16
drwx------. 2 root root 16384 Apr 22 18:05 lost+found
/]# umount /dev/xvdc1
/]# ls -l /srv
total 4
drwxr-xr-x. 4 102 root 4096 Apr 22 18:24 information-db
Has anyone seen this behaviour or found a solution?
Cheers

I've seen something like that. Try running stat -c %i on the target directories, both on the host and inside the container, before and after the mount event, in order to compare their inode values. I suspect they become mismatched for some reason when you mount the external device.
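A minimal sketch of that inode check, assuming the paths and image from the question (the mount and docker commands need root and a real device, so treat this as an outline):

```shell
# On the host, before mounting: note the inode of the target directory.
stat -c %i /srv/information-db

# Mount the device and check again -- a different inode means /srv now
# shows the new filesystem, and the old directory is hidden beneath it.
mount /dev/xvdc1 /srv
stat -c %i /srv/information-db   # fails if the dir doesn't exist on the new fs

# Inside the container, check what the bind mount actually points at.
docker run --rm -v /srv/information-db:/var/lib/mysql tutum/mysql \
    /bin/bash -c "stat -c %i /var/lib/mysql"
```

If the inode reported inside the container matches the pre-mount directory rather than the one on /dev/xvdc1, the daemon resolved the path before (or independently of) your mount.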

Related

Cannot change ownership of a subdirectory under a mount directory

I am unable to change the ownership of a directory under a mounted drive.
I have already tried sudo chown mysql:mysql /personal/mysql and sudo chmod --reference=/var/lib/mysql /personal/mysql.
However, the mysql directory, /personal/mysql, is still owned by root:
ls -la /personal/mysql | grep mysql
drwxrwxrwx 1 root root 4096 Oct 29 06:32 mysql
You need to specify the uid and gid as options when you create the initial mount using the mount command. The uid and gid can be found by running the id command:
So for example, if:
id mysql
returns:
uid=10171(mysql) gid=10171(mysql)
Run mount with:
mount -t ... //...... /path/to/dir -o uid=10171,gid=10171,...
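For example (a sketch: the share and target path are placeholders, and uid=/gid= mount options only exist on filesystems without native Unix ownership, such as CIFS or vfat; on ext4 you would chown the directory after mounting instead):

```shell
# Look up the ids of the mysql user instead of hard-coding them.
uid=$(id -u mysql)
gid=$(id -g mysql)

# Example CIFS mount; //server/share and /path/to/dir are placeholders.
mount -t cifs //server/share /path/to/dir -o uid="$uid",gid="$gid"
```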

MySQL Docker file : sed: couldn't open temporary file

I am trying to create new MySQL image and deploying in Kubernetes.
FROM oraclelinux:7-slim
USER root
ARG MYSQL_SERVER_PACKAGE=mysql-community-server-minimal-8.0.19
ARG MYSQL_SHELL_PACKAGE=mysql-shell-8.0.19
# Install server
RUN yum install -y https://repo.mysql.com/mysql-community-minimal-release-el7.rpm \
https://repo.mysql.com/mysql-community-release-el7.rpm \
&& yum-config-manager --enable mysql80-server-minimal \
&& yum install -y \
$MYSQL_SERVER_PACKAGE \
$MYSQL_SHELL_PACKAGE \
libpwquality \
&& yum clean all \
&& mkdir /docker-entrypoint-initdb.d
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /entrypoint.sh
COPY healthcheck.sh /healthcheck.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD /healthcheck.sh
EXPOSE 3306 33060
RUN chmod +rwx /entrypoint.sh
RUN chmod +rwx /healthcheck.sh
RUN groupadd -r mysql && useradd -r -g mysql mysql
EXPOSE 3306
CMD ["mysqld"]
It works fine in the container, but throws the sed error from the title when I deploy it in Kubernetes.
How can I understand this issue?
ADDED
docker-entrypoint.sh:
if [ -n "$MYSQL_LOG_CONSOLE" ] || [ -n "console" ]; then
# Don't touch bind-mounted config files
if ! cat /proc/1/mounts | grep "/etc/my.cnf"; then
sed -i 's/^log-error=/#&/' /etc/my.cnf
fi
fi
P.S : I have added content of the file.
The problem is related to sed's in-place editing implementation. When you edit a file using the -i or --in-place option, the edit doesn't actually happen in place: sed saves the changes to a temporary file and then uses it to replace the original.
It happens that you don't have write permission to the /etc directory, where sed is trying to create its temporary file.
As suggested in the comments, the command is most probably run by the mysql user. It is certainly not run as root, since root has enough privileges to write to /etc:
bash-4.2# ls -ld /etc
drwxr-xr-x 1 root root 4096 Mar 27 15:04 /etc
As you can see, others don't have write permission. Changing the permissions or owner of the /etc directory itself is a really bad idea, and I wouldn't advise running this command as root either.
The simplest solution is to give up on the --in-place option and save the result in a directory such as /tmp, to which everyone has write access:
bash-4.2# ls -ld /tmp
drwxrwxrwt 1 root root 4096 Mar 27 16:39 /tmp
and after that replace the content of the original file with the content of the temporary one.
Your command may look like this:
sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf && cat /tmp/my.cnf > /etc/my.cnf
One important caveat:
You need to make sure you have write permission on the /etc/my.cnf file itself. As you can see below, by default you don't have that permission either, so the error will simply occur later, when the command tries to write to the original config file.
bash-4.2# ls -l /etc/my.cnf
-rw-r--r-- 1 root root 1239 Mar 27 15:04 /etc/my.cnf
You need to fix this in your Dockerfile, either by making the file editable by everyone:
RUN chmod 666 /etc/my.cnf
or, the better option:
RUN chown mysql /etc/my.cnf
which changes its owner to mysql, if that is the user that executes the entrypoint.sh script.
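Putting it together, the relevant fragment of docker-entrypoint.sh could look like this (a sketch; it drops the `[ -n "console" ]` test from the question, which is always true, and assumes /etc/my.cnf was made writable as above):

```shell
if [ -n "$MYSQL_LOG_CONSOLE" ]; then
    # Don't touch bind-mounted config files
    if ! grep -q "/etc/my.cnf" /proc/1/mounts; then
        # Write sed's output to /tmp, then copy it back over the original,
        # so no temporary file is ever created under /etc.
        sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf \
            && cat /tmp/my.cnf > /etc/my.cnf
    fi
fi
```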
Please let me know if it helps.

MySQL in docker won't persist changes to volume configured

I am trying to run the official Docker image of MySQL 5.7.28 on macOS, but I cannot manage to make it persistent.
I have tried both a mount point on the Docker host and a volume created with docker volume, but neither works.
I use the following to create the container:
docker run \
--detach \
--name=dockMysql \
--env="MYSQL_ROOT_PASSWORD=password" \
--publish 127.0.0.1:3307:3306 \
--volume=/Users/myuser/docker/dockMysql/data:/var/lib/mysql \
mysql:5.7.28
The data path /Users/myuser/docker/dockMysql/data has full permissions:
$ ls -l
drwxrwxrwx 6 root admin 192 May 24 2019 Users
$ pwd
/Users/myuser/docker/dockMysql/data
$ ls
auto.cnf ca.pem client-key.pem ib_logfile0 ibdata1 mysql private_key.pem server-cert.pem sys
ca-key.pem client-cert.pem ib_buffer_pool ib_logfile1 ibtmp1 performance_schema public_key.pem server-key.pem
It seems MySQL writes data to the host directory provided, but doesn't keep the data across a restart of the container.
Anyone has any idea?
Thanks,
Ionut

Docker run command with -v flag puts container in Exited status

I am trying to map a local directory /home/ubuntu/data to the /var/lib/mysql folder in the container using the -v flag, but the container's status becomes Exited (0). However, if I don't use the -v flag at all, the container is Up, but that is not what I want. What could be the reason? I see the volume mount line is missing in the event logs, as opposed to the working example.
$ docker -v
Docker version 17.09.0-ce, build afdb6d4
Dockerfile
FROM ubuntu:16.04
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server \
&& sed -i "s/127.0.0.1/0.0.0.0/g" /etc/mysql/mysql.conf.d/mysqld.cnf \
&& mkdir /var/run/mysqld \
&& chown -R mysql:mysql /var/run/mysqld
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
CMD ["mysqld_safe"]
This is the run which doesn't work.
$ docker run -i -t -d -v /home/ubuntu/data:/var/lib/mysql --name mysql_container mysql_image
Event logs.
2017-11-... container create 08b44c094... (image=mysql_image, name=mysql_container)
2017-11-... network connect 62bb211934... (container=08b44c094..., name=bridge, type=bridge)
2017-11-... container start 08b44c094... (image=mysql_image, name=mysql_container)
2017-11-... container die 08b44c094... (exitCode=0, image=mysql_image, name=mysql_container)
2017-11-... network disconnect 62bb211934... (container=08b44c094..., name=bridge, type=bridge)
Container logs.
$ docker logs -t mysql_container
2017-11-... mysqld_safe Logging to syslog.
2017-11-... mysqld_safe Logging to '/var/log/mysql/error.log'.
2017-11-... mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
This works without -v
$ docker run -i -t -d --name mysql_container mysql_image
Event logs.
2017-11-... container create 84993141... (image=mysql_image, name=mysql_container)
2017-11-... network connect 62bb2119... (container=84993141..., name=bridge, type=bridge)
2017-11-... volume mount 8c36b53d33... (container=84993141...7, destination=/var/lib/mysql, driver=local, propagation=, read/write=true)
2017-11-... container start 84993141... (image=mysql_image, name=mysql_container)
Container logs.
$ docker logs -t mysql_container
2017-11-... mysqld_safe Logging to syslog.
2017-11-... mysqld_safe Logging to '/var/log/mysql/error.log'.
2017-11-... mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2017-11-... mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
It's a little complicated, but an interesting case.
So how can you check what's happening? Use the following command:
docker run -i -t -v /tmp/data:/var/lib/mysql mysql_image bash
Now you are inside the container, so let's try the command:
mysqld_safe
It exits, but let's look into /var/log/mysql/error.log. There we see:
2017-11-25T17:22:24.006180Z 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2017-11-25T17:22:24.006211Z 0 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2017-11-25T17:22:24.006221Z 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2017-11-25T17:22:24.006229Z 0 [ERROR] InnoDB: The error means mysqld does not have the access rights to the directory.
2017-11-25T17:22:24.006237Z 0 [ERROR] InnoDB: Cannot open datafile './ibdata1'
Ok let's see how /var/lib/mysql looks without volume mapping:
root@4474b1cd4300:/var/lib/mysql# ls -lah
total 109M
drwx------ 5 mysql mysql 4.0K Nov 25 17:24 .
drwxr-xr-x 1 root root 4.0K Nov 25 17:13 ..
-rw-r----- 1 mysql mysql 56 Nov 25 17:13 auto.cnf
-rw-r--r-- 1 root root 0 Nov 25 17:13 debian-5.7.flag
-rw-r----- 1 mysql mysql 419 Nov 25 17:13 ib_buffer_pool
-rw-r----- 1 mysql mysql 48M Nov 25 17:13 ib_logfile0
-rw-r----- 1 mysql mysql 48M Nov 25 17:13 ib_logfile1
-rw-r----- 1 mysql mysql 12M Nov 25 17:13 ibdata1
drwxr-x--- 2 mysql mysql 4.0K Nov 25 17:13 mysql
drwxr-x--- 2 mysql mysql 4.0K Nov 25 17:13 performance_schema
drwxr-x--- 2 mysql mysql 12K Nov 25 17:13 sys
mysql:mysql is the owner of that directory, and it contains a lot of MySQL-specific files.
Let's see what we've got with volume mapping:
root@fca45ee1e8fb:/var/lib/mysql# ls -lah
total 8.0K
drwxr-xr-x 2 root root 4.0K Nov 25 17:22 .
drwxr-xr-x 1 root root 4.0K Nov 25 17:13 ..
Docker mounts this directory as the root user, and because the mapped directory on the host is empty, all the MySQL files from the image disappear behind the mount.
How do we get this to work?
Change your command to:
CMD chown -R mysql:mysql /var/lib/mysql && if [ ! -f /var/lib/mysql/ibdata1 ]; then mysqld --initialize-insecure; fi && mysqld_safe
What's happening there?
chown -R mysql:mysql /var/lib/mysql - gives ownership of the directory back to mysql:mysql
if [ ! -f /var/lib/mysql/ibdata1 ]; then mysqld --initialize-insecure; fi - initializes the MySQL data files with a passwordless root user, but only if the files don't already exist (required for subsequent runs)
mysqld_safe - runs MySQL

Can I change owner of directory that is mounted on volume in IBM containers?

I'm trying to launch PostgreSQL in IBM Containers. I have just created a volume with:
$ cf ic volume create pgdata
Then mount it:
$ cf ic run --volume pgdata:/var/pgsql -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
After logging into the container through SSH, I found the mounted directory is owned by root:
drwxr-xr-x 3 root root 4096 Jul 8 08:20 pgsql
Since PostgreSQL does not permit running as root, I want to change the owner of this directory. But I cannot:
# chown postgres:postgres pgsql
chown: changing ownership of 'pgsql': Permission denied
Is it possible to change owner of mounted directory?
In IBM Containers, the user namespace is enabled for the Docker engine. When the user namespace is enabled, the effective root inside the container is a non-root user outside the container process, and NFS does not allow that mapped non-root user to perform the chown operation on the volume inside the container. Please note that the volume pgdata is an NFS volume; this can be verified by executing mount -t nfs4 from the container.
You can try the workaround suggested for:
How can I fix the permissions using docker on a bluemix volume?
In this scenario it would be:
1. Mount the Volume to `/mnt/pgdata` inside the container
cf ic run --volume pgdata:/mnt/pgdata -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
2. Inside the container
2.1 Create "postgres" group and user
groupadd --gid 1010 postgres
useradd --uid 1010 --gid 1010 -m --shell /bin/bash postgres
2.2 Add the user to group "root"
adduser postgres root
chmod 775 /mnt/pgdata
2.3 Create pgsql directory under bind-mount volume
su -c "mkdir -p /mnt/pgdata/pgsql" postgres
ln -sf /mnt/pgdata/pgsql /var/pgsql
2.4 Remove the user from group "root"
deluser postgres root
chmod 755 /mnt/pgdata
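Run inside the container as root, the steps above can be collected into one script (a sketch; uid/gid 1010 and the /mnt/pgdata mount point come from the steps above):

```shell
# 2.1 Create "postgres" group and user.
groupadd --gid 1010 postgres
useradd --uid 1010 --gid 1010 -m --shell /bin/bash postgres

# 2.2 Temporarily let group "root" write to the mount point.
adduser postgres root
chmod 775 /mnt/pgdata

# 2.3 Create the pgsql directory on the volume as postgres, and link it
# to the path the server expects.
su -c "mkdir -p /mnt/pgdata/pgsql" postgres
ln -sf /mnt/pgdata/pgsql /var/pgsql

# 2.4 Revert the temporary group membership and permissions.
deluser postgres root
chmod 755 /mnt/pgdata
```

The trick is that mkdir is performed by the postgres user itself (via the temporary group-write window), so the resulting pgsql directory is owned by postgres without any chown on the NFS volume.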
In your Dockerfile you can modify the permissions of the directory:
RUN chown postgres:postgres pgsql
Additionally, when you SSH in, you can modify the permissions of the directory by using sudo:
sudo chown postgres:postgres pgsql
Here are three different possible solutions:
Use a Dockerfile and do the chown before mounting the volume.
Use the USER root command in the Dockerfile before you do the chown.
Use the --cap-add flag.