centos7: Operation not permitted - mysql

I have installed MySQL on CentOS and now want to start the MySQL server. However, I get this error:
# systemctl start mysqld
Failed to get D-Bus connection: Operation not permitted
To fix it, I created a Dockerfile, as shown below:
FROM centos:7
MAINTAINER theodosiostziomakas <mymail@gmail.com>
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
And then ran it to create the image:
$ docker build --rm -t local/c7-systemd .
But I am still getting the same error.
I also looked at this proposed solution.
Any ideas?
Thanks,
Theo.

I believe the issue is with the Dockerfile or with the run command.
It seems the issue in your Dockerfile is in this line:
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
Here is a MySQL CentOS Dockerfile:
# Starting from base CentOS image
FROM centos:7
# Enabling SystemD
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
# Enabling EPEL & Remi repo
#RUN yum install -y epel-release && \
#yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
# MySQL repo & installation
RUN yum install -y https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm && \
yum install -y mysql mysql-server
RUN chkconfig --level 345 mysqld on
RUN systemctl enable mysqld
VOLUME [ "/var/lib/mysql" ]
# Port Expose
EXPOSE 3306
CMD ["/usr/sbin/init"]
Now, the next step is to run it.
--privileged alone is not enough; you also need to mount the cgroup filesystem.
Here is the command:
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -it adilm7177/centos-mysql
You can build your own, or you can pull the image I built and pushed to the Docker registry during testing:
docker pull adilm7177/centos-mysql:latest
Update:
RUN systemctl enable mysqld
After adding this line, I am able to start and stop MySQL using systemctl.
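For reference, once the container is running with systemd as PID 1, the service can be managed from the host roughly like this (a sketch; the container ID below is a placeholder):
docker ps                              # find the running container's ID or name
docker exec -it <container-id> bash    # open a shell inside the container
systemctl status mysqld                # inside the container, systemd is PID 1, so systemctl works
systemctl stop mysqld
systemctl start mysqld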

I am able to run MySQL just fine with the docker-systemctl-replacement script, which emulates "systemctl" commands without an active systemd daemon. You can see it in use in the docker-systemctl-images examples.
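A minimal sketch of that approach, assuming the project's systemctl.py replacement script has been downloaded next to the Dockerfile (the local file name, and the MySQL repo RPM reused from the answer above, are assumptions):
FROM centos:7
# install MySQL 5.7 community server (repo RPM as in the answer above)
RUN yum install -y https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm && \
    yum install -y mysql-server
# overwrite systemctl with the docker-systemctl-replacement script (assumed to be
# saved locally as systemctl.py from the project linked above)
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mysqld
EXPOSE 3306
# run the replacement script as PID 1 so it starts the enabled services
CMD ["/usr/bin/systemctl"]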

Related

Docker mysql via MariaDB with Supervisor

About a year ago, I came up with the idea of extending my Docker knowledge by creating a sort of multi-platform server image for development purposes. Since then, I have figured out how to get Nginx and PHP-FPM running in a stable environment, all based on a Debian image. About a week ago, I wanted to add MySQL functionality to the image. At first I tried the normal MySQL(-server) image and, after trying to fix errors about why it couldn't run in my image, I switched to using MariaDB. I had even changed the MySQL Docker image to fit my needs (replacing CMD ["mysqld"] with a supervisord.conf executable, since my project of course uses several services). Now I've been trying to figure it out for days, but it is still not running. At the moment, I've chosen to use https://hub.docker.com/_/mariadb (tags: 10.4.12-bionic, 10.4-bionic, 10-bionic, bionic, 10.4.12, 10.4, 10, latest) with my image.
At the time of writing, I have just created a copy of the mariadb image, but replaced the part that directly executes mysqld (which was working). When this topic was created it didn't work with supervisord; that now works as it is supposed to.
I have a docker-compose.yml where it will be started, here the code:
version: "3"
services:
db:
container_name: mariadb
image: mariadb
build: .
restart: on-failure
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=test123
networks:
- local-network
networks:
local-network:
driver: bridge
Then I execute docker-compose up -d, optionally with the --build parameter.
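That is, one of:
docker-compose up -d           # start the stack in the background
docker-compose up -d --build   # rebuild the image first, then start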
The Dockerfile behind that is:
FROM debian:buster-slim
ENV DEBIAN_FRONTEND noninteractive
ENV GOSU_VERSION 1.12
ENV MARIADB_VERSION 10.4
ENV GPG_KEYS \
199369E5404BD5FC7D2FE43BCBCB082A1BB943DB \
177F4010FE56CA3336300305F1656F24C74CD1D8
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -q -y \
wget \
ca-certificates \
gnupg \
gnupg1 \
gnupg2 \
dirmngr \
pwgen \
tzdata \
xz-utils
# Get Gosu for easy stepdown from root (to avoid sudo/su miscommunications)
# https://github.com/tianon/gosu/releases
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN set -ex; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done; \
gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
command -v gpgconf > /dev/null && gpgconf --kill all || :; \
rm -r "$GNUPGHOME"; \
apt-key list
# Add MariaDB repo
RUN set -e;\
echo "deb http://downloads.mariadb.com/MariaDB/mariadb-$MARIADB_VERSION/repo/debian buster main" > /etc/apt/sources.list.d/mariadb.list; \
{ \
echo 'Package: *'; \
echo 'Pin: release o=MariaDB'; \
echo 'Pin-Priority: 999'; \
} > /etc/apt/preferences.d/mariadb
# Install MariaDB and set custom requirements
RUN set -ex; \
{ \
echo "mariadb-server" mysql-server/root_password password 'unused'; \
echo "mariadb-server" mysql-server/root_password_again password 'unused'; \
} | debconf-set-selections; \
apt-get update && apt-get install --no-install-recommends --no-install-suggests -y -q \
mariadb-server \
mariadb-backup \
socat; \
# comment out any "user" entires in the MySQL config ("docker-entrypoint.sh" or "--user" will handle user switching)
sed -ri 's/^user\s/#&/' /etc/mysql/my.cnf /etc/mysql/conf.d/*; \
# making sure that the correct permissions are set
mkdir -p /var/lib/mysql /var/run/mysqld; \
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld; \
# comment out a few problematic configuration values
find /etc/mysql/ -name '*.cnf' -print0 \
| xargs -0 grep -lZE '^(bind-address|log)' \
| xargs -rt -0 sed -Ei 's/^(bind-address|log)/#&/'; \
# don't reverse lookup hostnames, they are usually another container
echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
# Setup the Supervisor
RUN apt-get update && apt-get install supervisor -y \
&& mkdir -p /var/log/supervisor
COPY /supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN chmod +x /etc/supervisor/conf.d/supervisord.conf
VOLUME /var/lib/mysql
COPY /docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh \
&& ln -s /usr/local/bin/docker-entrypoint.sh /
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
# call and execute the supervisor after build
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
After a couple of days of working on fixing the image, I thought that supervisord was the issue and that the image couldn't run because of it. Well, here is the supervisord.conf:
[supervisord]
logfile=/var/log/supervisord.log
nodaemon=true
user=root
[program:mysql]
command=/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql
process_name=mysqld
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stdout_events_enabled=true
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stderr_events_enabled=true
autorestart=true
user=mysql
What happens next, once the image has been built, is that MySQL is executed by the supervisor. But the problem is that I wanted to use the entrypoint from https://github.com/mariadb-corporation/mariadb-server-docker/tree/master/10.4 - I'm not very familiar with Bash, so it will take some time to practice things there. Anyway, the docker-entrypoint is not executed on first run, so the database is not initialized. What I can do is write my own shell script to initialize it. I tested that and it worked, but why can't I just use the default entrypoint as the first choice?
Is it going wrong at some point between the Supervisord commands and the docker-entrypoint's MySQL connection, or something like that?
I really hope that someone can help me out.
Edit [04/26/2020]: Described the latest situation as of now: the database is not initializing, and there are no messages, notes or warnings from the entrypoint script.
Regards,
Colin
The MySQL service runs as the root user, but it is the mysql user that later tries to access the socket. So the socket directory must be accessible to the mysql user, even though Supervisor runs the MySQL service as root.
I fixed this issue by creating the MySQL socket directory and giving it the right permissions in my Dockerfile:
ARG MARIADB_MYSQL_SOCKET_DIRECTORY='/var/run/mysqld'
RUN mkdir -p $MARIADB_MYSQL_SOCKET_DIRECTORY && \
chown root:mysql $MARIADB_MYSQL_SOCKET_DIRECTORY && \
chmod 774 $MARIADB_MYSQL_SOCKET_DIRECTORY
then configured the Supervisor like this:
[program:mariadb]
command=/usr/sbin/mysqld
autorestart=true
user=root
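If the goal is also to have the official entrypoint (and its first-run database initialization) execute under Supervisor, a sketch worth trying, assuming the script behaves like the upstream one and takes the server command as its first argument, is to point the program at the entrypoint instead of at mysqld directly:
[program:mariadb]
; the entrypoint initializes the datadir on first run, then hands off to mysqld
command=/usr/local/bin/docker-entrypoint.sh mysqld
autorestart=true
user=root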

How to run bash on docker container with entry point?

How can I run bash on a container with an ENTRYPOINT?
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
&& apt-get install -y curl gnupg
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash \
&& export NVM_DIR="$HOME/.nvm" \
&& [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" \
&& [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" \
&& nvm i 8.11 \
&& apt-get install -y mysql-server=5.7.23-0ubuntu0.18.04.1 python3 python3-pip \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& ln -s /usr/bin/pip3 /usr/bin/pip \
&& pip install awscli --upgrade --user \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENTRYPOINT [ "/etc/init.d/mysql", "start" ]
EXPOSE 3306
I tried:
jiewmeng@JM  ~/Dropbox/ci-docker-node-mysql  docker run -it ci-docker-node-mysql bash
* Starting MySQL database server mysqld No directory, logging in with HOME=/
[ OK ]
jiewmeng@JM  ~/Dropbox/ci-docker-node-mysql
But I got kicked out once MySQL started.
I tried running my docker container ...
jiewmeng@JM  ~/Dropbox/ci-docker-node-mysql  docker run -p 3307:3306 ci-docker-node-mysql
✘ jiewmeng@JM  ~/Dropbox/ci-docker-node-mysql  mysql -h 127.0.0.1:3307
ERROR 2005 (HY000): Unknown MySQL server host '127.0.0.1:3307' (2)
But it seems like I cannot connect. What did I do wrong?
If you want to launch the container using bash:
docker run --rm -it --entrypoint "/bin/bash" ci-docker-node-mysql
Your container exits when the mysql start command completes. Containers don't persist once their main process is done.
Try running mysqld directly as the main process; it stays in the foreground, so the container doesn't assume its work is complete:
ENTRYPOINT ["mysqld"]
EDIT: I took a look at the official mysql Docker image and that's how they do it there.
EDIT2: Once that's done, you can run exec to get a shell into the container:
docker exec -ti container-name /bin/bash
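As a side note on the failed connection attempt in the question: the mysql client does not accept host:port in -h; the port is passed separately with -P. Assuming the container is running and publishing port 3307 as above:
docker run -d -p 3307:3306 ci-docker-node-mysql
# pass the port with -P instead of appending it to the host
mysql -h 127.0.0.1 -P 3307 -u root -p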

docker official dind build (docker:latest) with chromium

I've been trying for the last two days to get Chromium installed and running on the docker:latest Docker image (Docker in Docker).
I have tried multiple docker files:
FROM docker:latest
RUN apk add --no-cache python py2-pip curl bash chromium ttf-freefont xvfb nodejs nodejs-npm udev
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:~/google-cloud-sdk/bin
RUN pip install docker-compose
RUN npm install -g @angular/cli swagger
ENV CHROME_BIN=/usr/bin/chromium-browser
This installed chrome 57, which doesn't support headless.
So I suspect I can run this with xvfb, but running this Chrome fails with:
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
[8:8:1124/085514.600081:FATAL:zygote_host_impl_linux.cc(182)] Check failed: ReceiveFixedMessage(fds[0], kZygoteBootMessage, sizeof(kZygoteBootMessage), &boot_pid).
Aborted (core dumped)
So I tried to install Chrome 61 (which supports headless).
But for that you need to update the Dockerfile to use edge.
I tried to upgrade or install 61 right away, but I always end up with missing fonts.
The closest I got was adjusting my Dockerfile to use the Lighthouse one:
FROM docker:latest
RUN apk add --no-cache python py2-pip curl bash xvfb nodejs nodejs-npm udev
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:~/google-cloud-sdk/bin
RUN pip install docker-compose
RUN npm install -g @angular/cli swagger
ENV CHROME_BIN=/usr/bin/chromium-browser
USER root
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/main" > /etc/apk/repositories
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
#-----------------
# Set ENV and change mode
#-----------------
ENV LIGHTHOUSE_CHROMIUM_PATH /usr/bin/chromium-browser
ENV TZ "Europe/Berlin"
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
ENV SCREEN_WIDTH 750
ENV SCREEN_HEIGHT 1334
ENV SCREEN_DEPTH 24
ENV DISPLAY :99.0
ENV PATH /lighthouse/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GEOMETRY "$SCREEN_WIDTH""x""$SCREEN_HEIGHT""x""$SCREEN_DEPTH"
RUN echo $TZ > /etc/timezone
#-----------------
# Add packages
#-----------------
RUN apk -U --no-cache update
RUN apk -U --no-cache add \
zlib-dev \
chromium \
freetype \
ttf-opensans \
xvfb \
wait4ports \
xorg-server \
dbus \
ttf-freefont \
mesa-dri-swrast
# Minimize size
RUN apk del --purge --force curl make gcc g++ python linux-headers binutils-gold gnupg git zlib-dev apk-tools libc-utils
RUN rm -rf /var/lib/apt/lists/* \
/var/cache/apk/* \
/usr/share/man \
/tmp/* \
/usr/lib/node_modules/npm/man \
/usr/lib/node_modules/npm/doc \
/usr/lib/node_modules/npm/html \
/usr/lib/node_modules/npm/scripts
VOLUME /lighthouse/output
ADD xvfb-chromium.sh /chromium-xvfb.sh
RUN chmod +x /chromium-xvfb.sh
xvfb-chromium.sh (although not needed, as you can docker run /bin/bash into the container):
#!/bin/sh
_kill_procs() {
kill -TERM $chromium
wait $chromium
kill -TERM $xvfb
}
parameters=$#
# We need to test if /var/run/dbus exists, since script will fail if it does not
[ ! -e /var/run/dbus ] && mkdir /var/run/dbus
/usr/bin/dbus-daemon --system
# Setup a trap to catch SIGTERM and relay it to child processes
trap _kill_procs SIGTERM
TMP_PROFILE_DIR=`mktemp -d -t chromium.XXXXXX`
export CHROME_DEBUGGING_PORT=9222
# Start Xvfb
Xvfb ${DISPLAY} -ac +iglx -screen 0 ${GEOMETRY} -nolisten tcp & xvfb=$!
printf "Starting xvfb window server..."
while [ 1 -gt $xvfb ]; do printf "..."; sleep 1; done
printf "xvfb started\n\n"
#printf "Starting chromium, with debugger on port $CHROME_DEBUGGING_POST...\n\n"
# --disable-webgl \
$CHROME_BIN \
--no-sandbox \
--user-data-dir=${TMP_PROFILE_DIR} \
--start-maximized \
--remote-debugging-port=${CHROME_DEBUGGING_PORT} \
--no-first-run "about:blank" &
#chromium=$!
#wait4ports tcp://127.0.0.1:$CHROME_DEBUGGING_PORT
printf "\n\n==============================\nlaunching lighthouse run\n==============================\n\n"
#wait $chromium
wait $xvfb
Then I got another error:
Error relocating /usr/lib/chromium/chrome: FT_Set_Default_Properties: symbol not found
Not sure how to solve this, any help would be appreciated.
You could try this link: https://github.com/c0b/chrome-in-docker
It downloads a Google Chrome Linux build from the Chrome channels (stable, beta, or developer) and turns google-chrome into a headless browser.
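Once a Chromium build with headless support (61+) is installed, a quick smoke test inside the container could look like this (a sketch using standard Chromium switches; the URL and output path are placeholders):
chromium-browser --headless --no-sandbox --disable-gpu \
    --dump-dom "https://example.com" > /tmp/page.html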

Docker : Start mysql and apache from entrypoint or CMD

I am building a Docker image for development, and I want MySQL and Apache to start automatically when I run the image.
If I log into the container and run "service apache2 start" and "service mysql start" it works. But if I put them in ENTRYPOINT or CMD, it fails.
I was able to start Apache by putting ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"], but I was not able to start MySQL programmatically.
I tried many, many things. Most of the time it fails silently, in that the container is not running; other times I got: docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/etc/init.d/mysql start\": stat /etc/init.d/mysql start: no such file or directory"
This is what I have so far :
FROM debian:wheezy
RUN apt-get update && \
apt-get install -y libmcrypt-dev \
subversion ssl-cert nano wget unzip && \
echo "deb http://packages.dotdeb.org wheezy-php56 all" >> /etc/apt/sources.list.d/dotdeb.list && \
echo "deb-src http://packages.dotdeb.org wheezy-php56 all" >> /etc/apt/sources.list.d/dotdeb.list && \
wget http://www.dotdeb.org/dotdeb.gpg -O- | apt-key add - && \
echo mysql-server-5.5 mysql-server/root_password password yourpass | debconf-set-selections && \
echo mysql-server-5.5 mysql-server/root_password_again password yourpass | debconf-set-selections && \
apt-get update && \
apt-get install -y \
apache2 apache2-doc apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common libapache2-mod-php5 \
openssl php-pear php5 php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mysql php5-memcache php5-readline \
subversion ssl-cert nano wget unzip \
mysql-server-5.5 mysql-client mysql-client-5.5 mysql-common && \
/etc/init.d/mysql start && \
mysql -u root -pyourpass -e "create database mydb;" && \
rm -rf /var/lib/apt/lists/* && \
rm /etc/apache2/sites-enabled/000-default && \
mkdir -p /var/www/html && \
chown www-data:www-data -R /var/www/html/
COPY conf/etc/ /etc/
COPY mydump.sql /var/www/html/mydump.sql
RUN /etc/init.d/mysql start && \
mysql -u root -pyourpass -h localhost mydb < /var/www/html/mydump.sql && \
rm /var/www/html/mydump.sql
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2", "/var/lib/mysql"]
EXPOSE 80 443 3306
Your way of starting Apache and MySQL looks wrong to me.
If I look at the most popular Apache image on hub.docker.com, its Dockerfile shows how to start Apache. The last line of the Dockerfile is:
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
For the reference MySQL image, the last line of the Dockerfile is:
CMD ["mysqld"]
So you can look at Supervisor or any other similar tool like S6 or daemontools in order to start both Apache and MySQL in the Docker way; a minimal sketch follows.
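For example, a minimal supervisord.conf along these lines (a sketch; the binary paths assume the Debian packages installed in the Dockerfile above) keeps both daemons as foreground children of a single supervisor process:
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/sbin/mysqld
user=mysql
autorestart=true

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND
autorestart=true
The image would then install supervisor and end with CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"].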
A model often seen is to include a script (bash, shell, etc) in your Docker image, and then use that script as the entrypoint for your application. See that described in https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#entrypoint
So, put the things you're starting in a docker-entrypoint.sh script, COPY the script in, and reference it from the ENTRYPOINT.
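A hedged sketch of that pattern for this image (the service name and paths match the Debian setup above):
#!/bin/bash
# docker-entrypoint.sh: start MySQL in the background, then keep Apache in the foreground
set -e
service mysql start
exec /usr/sbin/apache2ctl -D FOREGROUND
COPY the script into the image, RUN chmod +x on it, and point ENTRYPOINT at it; because apache2ctl stays in the foreground, the container keeps running.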

Custom Docker MySQL build won't run

I am trying to compose my custom Dockerfile for setting up MySQL 5.7.
As part of this, I would like to set up an S3 backup as well.
But when I try to run/create the Docker container, it fails.
Here is the Dockerfile:
# Start with a base mysql:5.7 image
FROM mysql:5.7
MAINTAINER Ikenna N. Okpala <me@ikennaokpala.com>
USER root
# RUN locale-gen
ENV DEBIAN_FRONTEND noninteractive
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.en
ENV LC_ALL en_US.UTF-8
ENV PS_NGX_EXTRA_FLAGS --with-cc=/usr/bin/gcc --with-ld-opt=-static-libstdc++
# Add all base dependencies
RUN apt-get update -y
RUN apt-get install -y build-essential checkinstall
RUN apt-get install -y vim curl wget unzip
RUN apt-get install -y libfuse-dev libcurl4-openssl-dev mime-support automake libtool python-docutils libreadline-dev
RUN apt-get install -y pkg-config libssl-dev
RUN apt-get install -y git-core
RUN apt-get install -y man cron
RUN apt-get install -y libgmp-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libxslt-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y libpcre3 libpcre3-dev
RUN apt-get install -y freetds-dev
# RUN apt-get install -y openjdk-7-jdk
RUN apt-get install -y software-properties-common
RUN mkdir -p /mnt/s3b
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
RUN cd ~/
RUN /bin/bash -l -c "wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip"
RUN unzip master.zip
RUN cd s3fs-fuse-master/ && ./autogen.sh && ./configure --prefix=/usr --with-openssl && make && make install
ADD templates/setup.sh /root/setup.sh
RUN chmod +x /root/setup.sh
ADD templates/backup-cron /etc/cron.d/backup-cron
RUN chmod 0644 /etc/cron.d/backup-cron
RUN cron
# RUN chmod +x /root/backup-cron
EXPOSE 3306
CMD ["/bin/bash", "-l", "-c", "/root/setup.sh"]
Here is the setup.sh file
#!/bin/bash
export MYSQL_HOST_IP=`awk 'NR==1 {print $1}' /etc/hosts`
set -e
set -x
# NOW=$(date +"%Y-%m-%d-%H%M")
# DUMP_FILE="/dumps/dump.sql"
echo $AWS_S3 >> ~/.passwd-s3fs && cp ~/.passwd-s3fs /etc/passwd-s3fs
chmod 600 ~/.passwd-s3fs
chmod 640 /etc/passwd-s3fs
mysql -h$MYSQL_HOST_IP -uroot -p$MYSQL_ROOT_PASSWORD -e "DROP DATABASE IF EXISTS $MYSQL_DATABASE; CREATE USER '$MYSQL_USER'@'localhost' IDENTIFIED BY '$MYSQL_PASSWORD'; CREATE DATABASE $MYSQL_DATABASE; GRANT ALL ON $MYSQL_DATABASE.* TO '$MYSQL_USER'@'localhost'; FLUSH PRIVILEGES;"
Here is the docker run command:
docker run --name=mysql-s3 --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_USER=${MYSQL_USER} --env MYSQL_PASSWORD=${MYSQL_PASSWORD} --env MYSQL_DATABASE=${MYSQL_DATABASE} --env AWS_S3=${AWS_S3} --detach --publish 3306:3306 --volume=/vagrant/scripts/dumps/:/dumps/ --cap-add mknod --cap-add sys_admin --device=/dev/fuse --privileged mysql-s3
This approach, using FUSE and modifying the base mysql container, seems too complicated. I would suggest that you just stick with the base MySQL image and write a script, run in a separate container, that does a mysqldump to a text file and then copies that text file to S3 with the AWS CLI.
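For example, a hedged sketch of such a backup script, run from a separate container that has the mysql client and the AWS CLI installed (the host name, database and bucket below are placeholders):
#!/bin/bash
set -euo pipefail
# dump the database over the network from the mysql container
mysqldump -h mysql-s3 -u root -p"${MYSQL_ROOT_PASSWORD}" "${MYSQL_DATABASE}" > /tmp/dump.sql
# copy the dump to S3 with the AWS CLI
aws s3 cp /tmp/dump.sql "s3://my-backup-bucket/dumps/dump-$(date +%Y-%m-%d-%H%M).sql"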