Cannot change ownership of a subdirectory under a mount directory - mysql

I am unable to change the ownership of a directory that sits under a mounted drive.
I have already tried sudo chown mysql:mysql /personal/mysql and sudo chmod --reference=/var/lib/mysql /personal/mysql.
However, the mysql directory, i.e. /personal/mysql, is still owned by root:
ls -la /personal/mysql | grep mysql
drwxrwxrwx 1 root root 4096 Oct 29 06:32 mysql

You need to specify the uid and gid as options when you create the initial mount using the mount command. The uid and gid can be found by running the id command:
So for example, if:
id mysql
returns:
uid=10171(mysql) gid=10171(mysql)
Run mount with:
mount -t ... //...... /path/to/dir -o uid=10171,gid=10171,...
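For a CIFS share, for example, the full command might look like this (a sketch; the server, share name and filesystem type are assumptions, since the question doesn't say what kind of mount /personal is):
sudo mount -t cifs //fileserver/share /personal -o uid=10171,gid=10171,rw
The same uid and gid options can also go into the options field of the corresponding /etc/fstab entry if the mount should persist across reboots.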

Related

Can't import MySql dump via kubectl

I'm struggling to import a dump via kubectl into a MySQL database running in Kubernetes. There is no error output, but also no data is imported.
Here is proof that the pod exists, that the dump file /database.sql sits in the node's disk root, and the command I used.
root@node-1:~# kubectl get pods -n esopa-test | grep mariadb
esopa-test-mariadb-0 1/1 Running 0 14d
root@node-1:~# ll /database.sql
-rw-r--r-- 1 root root 4418347 Oct 14 08:50 /database.sql
root@node-1:~# kubectl exec esopa-test-mariadb-0 -n esopa-test -- mysql -u root -proot database < /database.sql
root@node-1:~#
Thank you for any advice
You can copy files between a pod and the node by using the kubectl cp command.
The syntax for copying files from a pod to your node is very simple:
kubectl cp <some-namespace>/<some-pod>:<directory-inside-pod> <directory_on_your_node>
So in your use case you can use the following command:
kubectl cp esopa-test/esopa-test-mariadb-0:/database.sql <directory_on_your_node>
And to copy files from node to pod you can use:
kubectl cp <directory_on_your_node> esopa-test/esopa-test-mariadb-0:/database.sql
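If the end goal is to restore the dump, one approach (a sketch, reusing the credentials and database name from the question) is to copy the file into the pod and then run the import inside it:
kubectl cp /database.sql esopa-test/esopa-test-mariadb-0:/tmp/database.sql
kubectl exec -n esopa-test esopa-test-mariadb-0 -- sh -c "mysql -u root -proot database < /tmp/database.sql"
The original command imported nothing because the < /database.sql redirection was evaluated by the shell on the node, and without kubectl exec -i no stdin is forwarded to the pod, so mysql received empty input and exited silently.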

MySQL Docker file : sed: couldn't open temporary file

I am trying to create a new MySQL image and deploy it in Kubernetes.
FROM oraclelinux:7-slim
USER root
ARG MYSQL_SERVER_PACKAGE=mysql-community-server-minimal-8.0.19
ARG MYSQL_SHELL_PACKAGE=mysql-shell-8.0.19
# Install server
RUN yum install -y https://repo.mysql.com/mysql-community-minimal-release-el7.rpm \
        https://repo.mysql.com/mysql-community-release-el7.rpm \
    && yum-config-manager --enable mysql80-server-minimal \
    && yum install -y \
        $MYSQL_SERVER_PACKAGE \
        $MYSQL_SHELL_PACKAGE \
        libpwquality \
    && yum clean all \
    && mkdir /docker-entrypoint-initdb.d
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /entrypoint.sh
COPY healthcheck.sh /healthcheck.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD /healthcheck.sh
EXPOSE 3306 33060
RUN chmod +rwx /entrypoint.sh
RUN chmod +rwx /healthcheck.sh
RUN groupadd -r mysql && useradd -r -g mysql mysql
EXPOSE 3306
CMD ["mysqld"]
It works fine when I run the container directly, but when I deploy it in Kubernetes it fails with the sed: couldn't open temporary file error from the title.
How can I understand this issue?
ADDED
docker-entrypoint.sh:
if [ -n "$MYSQL_LOG_CONSOLE" ] || [ -n "console" ]; then
    # Don't touch bind-mounted config files
    if ! cat /proc/1/mounts | grep "/etc/my.cnf"; then
        sed -i 's/^log-error=/#&/' /etc/my.cnf
    fi
fi
P.S.: I have added the content of the file.
The problem is related to sed's in-place editing implementation. When you edit a file using the -i or --in-place option, the edit doesn't actually happen in place: sed saves the changes into a temporary file and then uses it to replace the original one.
It so happens that you don't have write permission to the /etc directory, where sed is trying to create its temporary file.
As suggested in the comments, the command is most probably run by the mysql user. It is certainly not run as root, since root has enough privileges to write to /etc:
bash-4.2# ls -ld /etc
drwxr-xr-x 1 root root 4096 Mar 27 15:04 /etc
As you can see, others don't have write permission. Changing the permissions or owner of the /etc directory itself is a really bad idea, and I won't advise you to run this command as the root user either.
The simplest solution is to give up on the --in-place option and save the result in a directory such as /tmp, to which everyone has write access:
bash-4.2# ls -ld /tmp
drwxrwxrwt 1 root root 4096 Mar 27 16:39 /tmp
and after that replace the content of the original file with the content of the temporary one.
Your command may look like this:
sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf && cat /tmp/my.cnf > /etc/my.cnf
One important caveat:
You need to make sure you have write permission on the /etc/my.cnf file itself. As you can see below, by default you don't have that permission either, so the error would simply occur later, when the command tries to write to the original config file.
bash-4.2# ls -l /etc/my.cnf
-rw-r--r-- 1 root root 1239 Mar 27 15:04 /etc/my.cnf
You need to modify that in your Dockerfile, either by making the file writable by everyone:
RUN chmod 666 /etc/my.cnf
or, the better option:
RUN chown mysql /etc/my.cnf
to change its owner to mysql, if this is the user that executes the entrypoint.sh script.
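Putting the pieces together, the relevant fragments might look like this (a sketch, assuming mysql is indeed the user that runs the entrypoint):
# Dockerfile: let the mysql user rewrite its own config
RUN chown mysql /etc/my.cnf
# docker-entrypoint.sh: avoid sed -i, go through /tmp instead
if ! grep -q "/etc/my.cnf" /proc/1/mounts; then
    sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf && cat /tmp/my.cnf > /etc/my.cnf
fi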
Please let me know if it helps.

Docker - can not start mysql permission error

I have a problem with Docker and MySQL. I have built an image based on phusion/baseimage. I want to create an image where the /var/lib/mysql directory is shared with my host (OS X), because I don't want to store my data in the container.
Everything works fine when the /var/lib/mysql directory is not shared. When I share this directory, the mysql service cannot start; the logs show permission problems during startup.
The result of ls -la in /var/lib is:
[...]
drwxr-xr-x 1 lc staff 170 Jan 3 16:55 mysql
[...]
The mysql user should be the owner. I tried to do:
sudo chown -R mysql:mysql mysql/
But this command didn't return any error and didn't change the owner.
I have also tried to add my user (from the container) to the mysql group:
lc#cbe25ac0681e:~$ groups lc
lc : lc sudo mysql
But it also didn't work. Does anybody have an idea how to solve this issue?
My docker-compose.yml file:
server:
  image: lukasz619/light-core_server
  ports:
    - "40080:80"
    - "40022:22"
    - "40443:443"
    - "43306:3306"
  volumes:
    - /Users/lukasz/Workspace/database/mysql:/var/lib/mysql
This is my Dockerfile:
FROM phusion/baseimage
RUN apt-get update
RUN apt-get install -y apache2
# Enable SSH
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# public key for root
ADD public_key.pub /tmp/public_key.pub
RUN cat /tmp/public_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/public_key.pub
EXPOSE 80
CMD ["/sbin/my_init"]
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
When we say that users are shared between the container and the host, it's not quite true; it's the UIDs that are actually shared. What this means is that the user named mysql in the container most likely has a different UID to the user mysql on the host.
We can try to fix this by using the image to set the permissions:
docker run --user root -v /Users/lukasz/Workspace/database/mysql:/var/lib/mysql lukasz619/light-core_server chown -R mysql:mysql /var/lib/mysql
This may work, depending on how you've set up the image. There is also a further complication from the VM running in-between.
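One way to confirm the mismatch (a sketch; the image name and host path are taken from your compose file) is to compare the numeric UID of mysql inside the image with the numeric owner of the directory on the host:
docker run --rm lukasz619/light-core_server id mysql   # UID of the mysql user inside the container
ls -ln /Users/lukasz/Workspace/database/mysql          # numeric owner of the directory on the host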
The problem is OS X. I ran this docker-compose.yml:
server:
  image: lukasz619/light-core_server
  ports:
    - "40080:80"
    - "40022:22"
    - "40443:443"
    - "43306:3306"
  volumes:
    - /var/mysql:/var/lib/mysql
It's possible to share directories (and do chown) ONLY between boot2docker and the container; sharing between OS X and the container does not work properly.

Can I change owner of directory that is mounted on volume in IBM containers?

I'm trying to launch postgres in IBM Containers. I have just created a volume with:
$ cf ic volume create pgdata
Then I mount it:
$ cf ic run --volume pgdata:/var/pgsql -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
After logging into the container through SSH, I found that the mounted directory is owned by root:
drwxr-xr-x 3 root root 4096 Jul 8 08:20 pgsql
Since postgres does not permit running as root, I want to change the owner of this directory, but I cannot:
# chown postgres:postgres pgsql
chown: changing ownership of 'pgsql': Permission denied
Is it possible to change the owner of a mounted directory?
In IBM Containers, the user namespace is enabled for the Docker engine. When the user namespace is enabled, the effective root inside the container is a non-root user outside the container process, and NFS does not allow that mapped non-root user to perform the chown operation on the volume inside the container. Note that the volume pgdata is an NFS mount; this can be verified by running mount -t nfs4 from inside the container.
You can try the workaround suggested for
How can I fix the permissions using docker on a bluemix volume?
In this scenario it would be:
1. Mount the Volume to `/mnt/pgdata` inside the container
cf ic run --volume pgdata:/mnt/pgdata -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
2. Inside the container
2.1 Create "postgres" group and user
groupadd --gid 1010 postgres
useradd --uid 1010 --gid 1010 -m --shell /bin/bash postgres
2.2 Add the user to group "root"
adduser postgres root
chmod 775 /mnt/pgdata
2.3 Create pgsql directory under bind-mount volume
su -c "mkdir -p /mnt/pgdata/pgsql" postgres
ln -sf /mnt/pgdata/pgsql /var/pgsql
2.4 Remove the user from group "root"
deluser postgres root
chmod 755 /mnt/pgdata
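After these steps you can quickly check (a sketch, using the paths from above) that the postgres user can actually write into the relocated data directory:
ls -ld /mnt/pgdata /mnt/pgdata/pgsql
su -c "touch /mnt/pgdata/pgsql/.write-test && rm /mnt/pgdata/pgsql/.write-test" postgres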
In your Dockerfile you can modify the permissions of a directory.
RUN chown postgres:postgres pgsql
Additionally, when you ssh in, you can change the owner of the directory by using sudo.
sudo chown postgres:postgres pgsql
Here are 3 different but possible solutions:
Using a Dockerfile and doing a chown before mounting the volume.
Adding USER root in the Dockerfile before you do the chown (see the sketch below).
Using the --cap-add flag.
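A sketch of the second option (the base image is taken from the question; the data path is an assumption):
FROM registry.ng.bluemix.net/ruimo/pgsql944-cli
USER root
RUN mkdir -p /var/pgsql && chown -R postgres:postgres /var/pgsql
USER postgres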

Mounting a container volume from the host's drive?

I'm setting up a MySQL container like so:
docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
Now, this works when nothing is mounted on /srv on the host, but when I mount my drive, Docker seems to write to the underlying filesystem (/), e.g.:
/]# ls -l /srv
total 0
/]# mount /dev/xvdc1 /srv
/]# mount
...
/dev/xvdc1 on /srv type ext4 (rw,relatime,seclabel,data=ordered)
/]# docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
/]# ls -l /srv
total 16
drwx------. 2 root root 16384 Apr 22 18:05 lost+found
/]# umount /dev/xvdc1
/]# ls -l /srv
total 4
drwxr-xr-x. 4 102 root 4096 Apr 22 18:24 information-db
Anyone seen this behaviour / have a solution?
Cheers
I've seen something like that. Try performing stat -c %i checks both on the host and inside the container, before and after the mount event, in order to get the inode values of the target directories. I guess they're mismatched for some reason when you mount the external device.
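A sketch of that check, using the paths and image from the question (run the host command before and after mounting /dev/xvdc1):
# on the host
stat -c %i /srv/information-db
# inside a throwaway container using the same bind mount
docker run --rm -v /srv/information-db:/var/lib/mysql tutum/mysql stat -c %i /var/lib/mysql
If the inode reported inside the container matches the pre-mount value rather than the post-mount one, the container is still writing to the directory on the underlying filesystem.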