I am trying to access the MySQL databases inside my container from the Docker host.
I use my own Dockerfile, which installs a database and exposes port 3306.
I launch the container with docker-compose, and my compose file maps host port 3308 to container port 3306.
I can connect to MySQL from the host like this:
mysql -h localhost -P 3308 -u root -pMyPassword
That works well, but what I can't figure out is why I can't see any data from my container.
From inside the container, I have a test database which I can connect to without any problem. But when I connect from the host to the container's MySQL process, it seems to show me the MySQL data from the host machine, not from the container.
Any ideas?
Thanks :)
EDIT 1:
So here is the first way I can connect to MySQL inside the container:
docker exec -it MyContainer mysql -uroot -pMyPassword
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test_db |
+--------------------+
It shows me my database: test_db
But if I connect with:
mysql -h localhost -P 3308 -u root -pMyPassword
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
My test_db isn't there.
And here is the result of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0de6a691d72 MyContainer "docker-entrypoint.sh" 3 hours ago Up 3 hours 9000/tcp, 0.0.0.0:8085->80/tcp, 0.0.0.0:3308->3306/tcp, 0.0.0.0:8084->8000/tcp, 0.0.0.0:8086->8080/tcp MyContainer
EDIT 2:
I am developing a standard Docker container for a web hosting production environment. Each host is controlled by Ajenti. The host runs an nginx reverse proxy which routes websites to the correct container. Everything is working well. So here is my Dockerfile:
FROM php:5.6-fpm
RUN apt-get update && apt-get install -y --no-install-recommends \
git \
libxml2-dev \
python \
build-essential \
make \
gcc \
python-dev \
locales \
python-pip
RUN dpkg-reconfigure locales && \
locale-gen C.UTF-8 && \
/usr/sbin/update-locale LANG=C.UTF-8
ENV LC_ALL C.UTF-8
ARG MYSQL_ROOT_PASSWORD
RUN export DEBIAN_FRONTEND=noninteractive; \
echo mysql-server mysql-server/root_password password $MYSQL_ROOT_PASSWORD | debconf-set-selections; \
echo mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PASSWORD | debconf-set-selections;
RUN apt-get update && apt-get install -y -q mysql-server php5-mysql
RUN rm /etc/apt/apt.conf.d/docker-gzip-indexes
RUN apt-get update && apt-get install -y wget
RUN wget http://repo.ajenti.org/debian/key -O- | apt-key add -
RUN echo "deb http://repo.ajenti.org/debian main main debian" > /etc/apt/sources.list.d/ajenti.list
RUN apt-get update && apt-get install -y ajenti cron unzip ajenti-v ajenti-v-php-fpm ajenti-v-mysql ajenti-v-nginx
RUN apt-get install -y python-setuptools python-dev \
&& easy_install -U gevent==1.1b3 \
&& sed -i -e s/ssl_version=PROTOCOL_SSLv3/ssl_version=PROTOCOL_SSLv23/ /usr/local/lib/python2.7/dist-packages/gevent-1.1b3-py2.7-linux-x86_64.egg/gevent/ssl.py
EXPOSE 80 8000 8080 3306
RUN mkdir /tmp/tempfiles \
&& mv /srv /tmp/tempfiles \
&& mv /var/lib/mysql /tmp/tempfiles \
&& mv /var/log /tmp/tempfiles \
&& mv /etc/ajenti /tmp/tempfiles
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
As I said, I wanted to be able to deploy a new container easily, so I created a docker-entrypoint.sh which copies the wanted files to my volumes when I start the container:
#!/bin/bash
DIR="/var/lib/mysql"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
    cp -avr /tmp/tempfiles/mysql /var/lib/
fi
# rest of the logic
DIR="/srv"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
    cp -avr /tmp/tempfiles/srv /
fi
# rest of the logic
DIR="/var/log"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
    cp -avr /tmp/tempfiles/log /var/
fi
# rest of the logic
DIR="/etc/ajenti"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
    cp -avr /tmp/tempfiles/ajenti /etc/
fi
# rest of the logic
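The same seeding logic could also be written as a single loop; this is only a sketch of the same idea, assuming the same /tmp/tempfiles layout as above:
#!/bin/bash
# Seed each volume from /tmp/tempfiles only when it is still empty,
# same behaviour as the script above, written as a loop over "name:target" pairs.
for pair in "mysql:/var/lib/" "srv:/" "log:/var/" "ajenti:/etc/"; do
    name="${pair%%:*}"      # e.g. mysql
    target="${pair#*:}"     # e.g. /var/lib/
    if [ ! "$(ls -A "${target}${name}")" ]; then
        cp -avr "/tmp/tempfiles/${name}" "${target}"
    fi
done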
Finally, here is my docker-compose.yml to launch everything and map the ports:
version: '2'
services:
  ajenti:
    build:
      context: ./dockerfiles/
      args:
        MYSQL_ROOT_PASSWORD: MyPassword
    volumes:
      - ./logs:/var/log
      - ./html:/srv
      - ./ajenti:/etc/ajenti
      - ./mysql-data:/var/lib/mysql
      - ./apache2:/etc/apache2
    ports:
      - "8084:8000"
      #NGINX
      - "8085:80"
      #APACHE
      - "8086:8080"
      - "3308:3306"
Hope this will help to find a solution!
I finally found a solution and it was pretty simple...
First of all, I needed to let MySQL bind to external addresses, so I changed the bind-address line to 0.0.0.0 in the MySQL configuration inside the container.
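For reference, the change amounts to something like this in the MySQL config inside the container (the exact file varies by setup, e.g. /etc/mysql/my.cnf or a file under /etc/mysql/conf.d/):
[mysqld]
# listen on all interfaces instead of only the loopback address
bind-address = 0.0.0.0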
Next, I just changed the command line to mysql -h 127.0.0.1 -P 3308 -u root -pMyPassword. Using 127.0.0.1 instead of localhost forces the client to use TCP rather than the local Unix socket, so it actually goes through the mapped port instead of talking to the host's own MySQL.
Now it's fine: I can access the container's MySQL data from the host.
Thanks all for your help :)
In my case I was confused because Docker used a different host and port. So you need to find them and then do this:
mysql -h <host IP> -P <port number> -u <username> -p
Most people put the DB-related variables into the environment of the Docker container, so do this:
sudo docker exec -it container_name env
See if there's a variable called DB_HOST or DB_PORT or something like that. If not, then look through the source code. If it's a PHP project, find the config directory, look in main.php, and see what host and port the application is configured to use.
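To narrow that output down you can drop the -it flags and pipe it through grep (container_name is just a placeholder here):
sudo docker exec container_name env | grep -iE 'db|mysql'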
If you execute a MySQL operation as the ENTRYPOINT in the Dockerfile, that operation is all you will see when you connect to the container. Try changing the entrypoint.
https://docs.docker.com/engine/reference/builder/#entrypoint
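As a rough sketch of what changing it could look like, if you want the server itself (rather than a one-off client command) to be the main process, you can follow the pattern the official images use: a wrapper script as the entrypoint and mysqld as the default command.
# sketch only: wrapper script as ENTRYPOINT, server as CMD
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]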
I am totally new to the Docker community and I am trying to create a custom container image with MySQL and a .war file inside, and to run it on an AWS EC2 instance. I've tried a lot but I cannot figure this out.
To build the container image I run this:
docker build -t <name-of-image> -f Dockerfile .
I suppose the Dockerfile content should contain something like:
FROM mysql:latest
ENV TARGETD /opt/apache-tomcat-9.0.35
ENV WAR /target/NewWebApp.war
RUN apt-get -y update
RUN apt-get -y upgrade
# Create database
RUN mkdir /usr/sql
#RUN CHMOD 644 /usr/sql
ADD db.sql /usr/sql/db.sql
RUN mysql -h localhost -P 3306 --protocol=tcp -u root start && \
mysql -u root -e < /usr/sql/db.sql
EXPOSE 3306
ADD ${WAR} ${TARGETD}/webapps
And to run (deploy) the image I use:
docker run -d -p 8080:3306 <name-of-image>:latest
I have already installed Tomcat on port 8080.
What can I do in order to run this image and be able to access it through AWS EC2?
About a year ago I came up with the idea of extending my Docker knowledge by building a sort of multi-platform server image for development purposes. Since then, I have figured out how to get Nginx and PHP-FPM running in a stable environment, all based on a Debian image. Since about a week ago, I have wanted to add MySQL functionality to the image. At first I tried the normal MySQL(-server) image and, after trying to fix errors about why it couldn't run in my image, I switched to MariaDB - I even changed the MySQL Docker image to fit my needs (replaced CMD ["mysqld"] with a supervisord.conf executable, since my project uses several services). Now I have been trying to figure it out for days, but it is still not running. At the moment, I've chosen to use https://hub.docker.com/_/mariadb (tags: 10.4.12-bionic, 10.4-bionic, 10-bionic, bionic, 10.4.12, 10.4, 10, latest) with my image.
At the time of writing I've just created a copy of the MariaDB image, but replaced directly executing mysqld (which works). When this topic was created it didn't work with supervisord, but that now works as it is supposed to.
I have a docker-compose.yml where it is started; here is the code:
version: "3"
services:
  db:
    container_name: mariadb
    image: mariadb
    build: .
    restart: on-failure
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=test123
    networks:
      - local-network
networks:
  local-network:
    driver: bridge
Then I execute docker-compose up -d, optionally with the --build parameter.
The Dockerfile behind that is:
FROM debian:buster-slim
ENV DEBIAN_FRONTEND noninteractive
ENV GOSU_VERSION 1.12
ENV MARIADB_VERSION 10.4
ENV GPG_KEYS \
199369E5404BD5FC7D2FE43BCBCB082A1BB943DB \
177F4010FE56CA3336300305F1656F24C74CD1D8
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -q -y \
wget \
ca-certificates \
gnupg \
gnupg1 \
gnupg2 \
dirmngr \
pwgen \
tzdata \
xz-utils
# Get Gosu for easy stepdown from root (to avoid sudo/su miscommunications)
# https://github.com/tianon/gosu/releases
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN set -ex; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done; \
gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
command -v gpgconf > /dev/null && gpgconf --kill all || :; \
rm -r "$GNUPGHOME"; \
apt-key list
# Add MariaDB repo
RUN set -e;\
echo "deb http://downloads.mariadb.com/MariaDB/mariadb-$MARIADB_VERSION/repo/debian buster main" > /etc/apt/sources.list.d/mariadb.list; \
{ \
echo 'Package: *'; \
echo 'Pin: release o=MariaDB'; \
echo 'Pin-Priority: 999'; \
} > /etc/apt/preferences.d/mariadb
# Install MariaDB and set custom requirements
RUN set -ex; \
{ \
echo "mariadb-server" mysql-server/root_password password 'unused'; \
echo "mariadb-server" mysql-server/root_password_again password 'unused'; \
} | debconf-set-selections; \
apt-get update && apt-get install --no-install-recommends --no-install-suggests -y -q \
mariadb-server \
mariadb-backup \
socat; \
# comment out any "user" entries in the MySQL config ("docker-entrypoint.sh" or "--user" will handle user switching)
sed -ri 's/^user\s/#&/' /etc/mysql/my.cnf /etc/mysql/conf.d/*; \
# making sure that the correct permissions are set
mkdir -p /var/lib/mysql /var/run/mysqld; \
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld; \
# comment out a few problematic configuration values
find /etc/mysql/ -name '*.cnf' -print0 \
| xargs -0 grep -lZE '^(bind-address|log)' \
| xargs -rt -0 sed -Ei 's/^(bind-address|log)/#&/'; \
# don't reverse lookup hostnames, they are usually another container
echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
# Setup the Supervisor
RUN apt-get update && apt-get install supervisor -y \
&& mkdir -p /var/log/supervisor
COPY /supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN chmod +x /etc/supervisor/conf.d/supervisord.conf
VOLUME /var/lib/mysql
COPY /docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh \
&& ln -s /usr/local/bin/docker-entrypoint.sh /
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
# call and execute the supervisor after build
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
After a couple of days of working on fixing the image, I thought that supervisord was the issue - that it couldn't run because of that or something. Well, here is the supervisord config:
[supervisord]
logfile=/var/log/supervisord.log
nodaemon=true
user=root
[program:mysql]
command=/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql
process_name=mysqld
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stdout_events_enabled=true
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stderr_events_enabled=true
autorestart=true
user=mysql
What happens next, once the image has been built, is that mysql is executed by the supervisor. But the problem is that I wanted to use the entrypoint from https://github.com/mariadb-corporation/mariadb-server-docker/tree/master/10.4 - I'm not very familiar with Bash, so it will take some time to practice things there. Anyway, the docker-entrypoint is not executed the first time, so the database is not initialized. What I can do is create my own shell script to initialize it. I tested that and it worked, but why can't I just use the default entrypoint as the first choice?
Is it going wrong at some point between the supervisord commands and the docker-entrypoint with its MySQL connection points, or something?
I really hope that someone can help me out.
Edit [04/26/2020]: Described the latest situation as of now; the database is not initializing, and there are no messages, notes or warnings from the entrypoint script.
Regards,
Colin
The MySQL service is started as the root user, but later it is the mysql user which tries to access the socket. So the socket directory needs to be accessible to the mysql user, while Supervisor runs the MySQL service as the root user.
I fixed this issue by creating the MySQL socket directory and giving it the right permissions in my Dockerfile:
ARG MARIADB_MYSQL_SOCKET_DIRECTORY='/var/run/mysqld'
RUN mkdir -p $MARIADB_MYSQL_SOCKET_DIRECTORY && \
chown root:mysql $MARIADB_MYSQL_SOCKET_DIRECTORY && \
chmod 774 $MARIADB_MYSQL_SOCKET_DIRECTORY
then configured the Supervisor like this:
[program:mariadb]
command=/usr/sbin/mysqld
autorestart=true
user=root
I am trying to write my own mariadb-alpine Docker image. Everything works fine, but when I try to collect the MariaDB logs I get nothing. I tried to follow a lot of related issues like this one and tried their suggestions, but in vain.
FROM alpine:edge
COPY my.cnf /etc/mysql/my.cnf
RUN set -ex \
&& apk add mariadb mariadb-client shadow \
&& ln -snf /usr/lib/mariadb /usr/lib/mysql \
&& mysql_install_db --user=mysql --skip-name-resolve --auth-root-authentication-method=socket --auth-root-socket-user=root --force --rpm --skip-test-db \
&& usermod -a -G tty mysql \
&& ln -sf /dev/stdout /var/log/mysqld.err \
&& chown -h mysql:mysql /var/log/mysqld.err
CMD ["mysqld_safe"]
EXPOSE 3306
Is mysqld required to run as PID 1 for stdout to work? In my case it looks like this:
# ps aux
PID USER TIME COMMAND
1 root 0:00 {mysqld_safe} /bin/sh /usr/bin/mysqld_safe
134 mysql 0:00 /usr/bin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mariadb/plugin --user=mysql --log-error=/var/log/mysqld.err --pid-file=49ea99ae9348.p
166 root 0:00 sh
171 root 0:00 ps aux
You're likely running into these issues:
https://github.com/moby/moby/issues/31243
https://github.com/moby/moby/issues/31106
Something with alpine breaks the ability to access /dev/stdout when you change user accounts. The workaround I've used involves:
Running the container with a tty
Adding the user inside the container to the tty group
Starting the command with a gosu/exec to replace the pid 1 shell script with your app
I'm not sure if the last part is required, and you may not be able to do it with the mysql command. You're already doing the second item. That just leaves the first item, which you can implement with:
docker run -t your_image
or in a compose file:
services:
  mysql:
    image: your_image
    tty: true
    ....
The only other option is to run your application directly as the mysql user, instead of starting it as root, by setting user: mysql in the compose file, but that may not be supported by mysql itself.
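That would look something like this (again with a placeholder image name):
services:
  mysql:
    image: your_image
    user: mysql
    tty: true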
If none of those work, the option used by the official image is to pick a debian base image instead of the alpine image. You can see their Dockerfile here:
https://github.com/docker-library/mysql/blob/696fc899126ae00771b5d87bdadae836e704ae7d/8.0/Dockerfile
I'm setting up a Dockerfile where I can run my automated tests, and I'm having trouble connecting to the MySQL database.
The Dockerfile depends on a previously built image and looks like this:
# Stage 0, assign argument as multistage image alias
ARG PHP_IMAGE
FROM ${PHP_IMAGE} as image
# Stage 1, start tests
FROM php:7.2-fpm
RUN curl -sS https://getcomposer.org/installer | php \
&& chmod +x composer.phar && mv composer.phar /usr/local/bin/composer
RUN apt-get update && apt-get install -y gnupg
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
apt-get install -yq nodejs build-essential \
git unzip \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng-dev \
subversion \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash - \
&& pecl install mcrypt-1.0.1 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install -j$(nproc) mysqli
RUN apt-get install -y mysql-server
RUN /etc/init.d/mysql start
RUN mysqladmin -u root -p status
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_enable=on" >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_autostart=off" >> /usr/local/etc/php/conf.d/xdebug.ini
RUN npm install -g npm
COPY --from=image /var/www/html/ /var/www/html/
WORKDIR /var/www/html/
COPY scripts/develop.sh develop.sh
COPY scripts/docker-test.sh docker-test.sh
RUN ["/bin/bash", "-c", "bash develop.sh && bash docker-test.sh"]
I've added RUN mysqladmin -u root -p status to try to debug why connecting to MySQL failed, and I got:
Enter password: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
To run this I am running
docker build -t $TEST_DOCKER_NAME --build-arg PHP_IMAGE=$DOCKER_IMAGE_NAME_PHP -f Dockerfile.test .
TEST_DOCKER_NAME and DOCKER_IMAGE_NAME_PHP are stored in an env file and read from there. The PHP image was built successfully, and I'm using it to copy the files from there into this image so that I can run PHPUnit.
When I remove that RUN line, my build fails when I try to run a script that creates the database:
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to MySQL server on 'localhost' (99 "Cannot assign requested address")'
Check that mysqld is running on localhost and that the port is 3306.
You can check this by doing 'telnet localhost 3306'
What do I need to do in my Dockerfile to make it work?
Answer to your specific problem
This is a common mistake people make when using Docker. When you use the RUN directive, you are running a command through to completion in an intermediate container, capturing the filesystem changes, and then exiting that container.
So when you have the lines
RUN /etc/init.d/mysql start
RUN mysqladmin -u root -p status
The first one starts mysql. But then the changes are captured, the container exits, and a new one is started to run the mysqladmin command. Therefore the mysql process is no longer running.
To avoid this you could combine them into a single line like
RUN /etc/init.d/mysql start && mysqladmin -u root -p status
However, you will need to do this every time you want to use MySQL, such as in your develop.sh.
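For instance, a script like develop.sh (its contents aren't shown in the question, so this is only a sketch with a placeholder database name) would have to start the server itself before touching the database:
#!/bin/bash
# start the MySQL server inside this same RUN step before using it
/etc/init.d/mysql start
# ... then run whatever needs the database, e.g. create a schema
mysql -u root -e "CREATE DATABASE IF NOT EXISTS my_test_db;"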
Wider answer
It is not recommended to run multiple processes within your container and it is also not recommended to use init.d or other system startup frameworks within your container.
You seem to be treating your container like a virtual machine and are having issues because containers are not VMs.
I recommend you explore running mysql in a separate container and then using a tool like docker-compose to start and stop your containers.
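A minimal sketch of that layout, with placeholder image names and credentials, might look like this:
version: "3"
services:
  app:
    build: .
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: test_db
The test code then connects to the hostname db on port 3306 instead of localhost.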
Does anybody know how to install mysql-server via a Dockerfile? I have written a Dockerfile, but the build ends with an error: /bin/sh: 1: /usr/bin/mysqld: not found
USER root
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server-5.7
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/"/etc/mysql/my.cnf
ENV DB_USER example
ENV DB_PASSWORD example
ENV DB_NAME example
ENV VOLUME_HOME "/var/lib/mysql"
EXPOSE 3306
RUN cp /etc/mysql/my.cnf /usr/share/mysql/my-default.cnf
RUN /usr/bin/mysqld && sleep 5 && \
mysql -uroot -e "CREATE USER '${DB_USER}'@'%' IDENTIFIED BY '${DB_PASSWORD}'" && \
mysql -uroot -e "GRANT ALL PRIVILEGES ON *.* TO '${DB_USER}'@'%' WITH GRANT OPTION" &&\
mysql -uroot -e "CREATE DATABASE ${DB_NAME}" && \
mysqladmin -uroot shutdown
For an ubuntu:16.04 base image, mysqld is found in /usr/sbin, not /usr/bin
If you add a step RUN which mysqld before your final RUN command, that will show you where the mysqld executable is found. It may vary depending on which base image/distro you're using.
You can also use RUN mysqld ... without a full path, if the file is in your $PATH
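For example, a throwaway debugging step like this can be dropped into the Dockerfile and removed once you know the path:
# temporary step: print where mysqld actually lives in this base image
RUN which mysqld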
You may also need to update your RUN sed line as below, adding spaces around the quoted string:
RUN sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
Otherwise, you may see the following error:
The command '/bin/sh -c sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/"/etc/mysql/my.cnf' returned a non-zero code: 1