I have written a Dockerfile that runs MySQL on an Ubuntu image. The Dockerfile is:
FROM ubuntu
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
RUN sed -i '43s/.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
EXPOSE 3306
ENTRYPOINT service mysql start && bash
If I run:
docker run -dit mysql-server
after building the image, everything works fine and my Apache/PHP container can communicate with it. However, if I run it with a volume attached (docker run -dit -v ~/vol/:/var/lib/mysql/ mysql-server), the container stops running after about 30 seconds (I'm pretty sure it's the same amount of time every time).
Does anyone know a way I can keep the container up and mount a volume? I've never had this problem before and can't find anything about it online (I've been looking for a while). Thanks.
This is because you are masking the contents of /var/lib/mysql with the contents of ~/vol, which I'm assuming is empty. As such, the MySQL server can't start because it's missing its database files. I would personally use the official image over your custom implementation, as it will handle what you're looking for; here is the link to Docker Hub. It has options for mounting your custom my.cnf file if you need those changes. By default, though, the image already binds to 0.0.0.0. See the Docker Hub link for config options.
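For example, a minimal sketch of running the official image with your volume mount (the root password value is just a placeholder):
docker run -d --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v ~/vol/:/var/lib/mysql/ \
  -p 3306:3306 \
  mysql:8
On first start with an empty volume, the image's entrypoint initializes the data directory itself, so the masking problem above doesn't occur.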
Hope this helps
Dylan
Related
I'm trying to add MySQL to a Dockerfile. I don't want to use a MySQL source image; I'm using something else, as I need ffmpeg/nvidia/asp.net as well. So I can't simply use a different base image to start from.
So how can I
Add mysql to my docker build file?
Configure it so the data for MySQL is in a specific directory (so I can map it outside the container)
Have mysql start up but not be the entry point service
Everything I found so far basically says "use this base image", which doesn't help me. I don't want to have MySQL separate, just a self-contained Docker image with everything it needs.
TIA
Install mysql
Use apt-get to install packages on Debian distros. Add the following line to your Dockerfile:
RUN apt-get update && apt-get install -y mysql-server
Start MySQL
Add a prefix to the Dockerfile CMD that starts MySQL before your main command. Like:
CMD service mysql start && [paste here your default command]
This will start MySQL and then start your app.
Mount directories
Mounting directories is done with the -v flag:
docker run -ti -v <host_dir>:<container_dir> my-image /bin/bash
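Putting the pieces together, a minimal Dockerfile sketch (the base image tag and the app command /usr/local/bin/my-app are placeholders for whatever your image already uses):
FROM ubuntu:20.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
# Start MySQL in the background, then hand over to the main application
CMD service mysql start && /usr/local/bin/my-app
You can then run the image with -v ~/mysql-data:/var/lib/mysql to keep the database files in a directory outside the container.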
I am trying to connect a Django application to a MySQL Docker container. I am using the latest version of MySQL, i.e. MySQL 8.0, to build the container. I was able to build the MySQL container successfully, but I am not able to connect to it using Django's default MySQL connector. When I run the docker-compose up command I get the error mentioned below.
django.db.utils.OperationalError: (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
I started looking for a solution and got to know that MySQL has made a major change to its default authentication plugin, which is not yet supported by most MySQL connectors.
To fix this issue I will have to set default-authentication-plugin to mysql_native_password in the my.cnf file of the MySQL container.
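That is, something like the following under the [mysqld] section (a sketch of the intended change):
[mysqld]
default-authentication-plugin=mysql_native_password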
I logged into the container using the command docker exec -it <container id> /bin/bash and was also able to locate the my.cnf file inside the container.
To edit the my.cnf file I will have to use the nano command, as stated below.
nano my.cnf
But unfortunately the nano command is not installed in the MySQL container. To install nano I will need sudo installed in the container.
I tried installing sudo using the command mentioned below, but it did not work.
apt-get install sudo
error -
Reading package lists... Done
Building dependency tree
Reading state information... Done
What are the possible solutions to fix this issue?
In general you shouldn't try to directly edit files in containers. Those changes will get lost as soon as the container is stopped and deleted; this happens extremely routinely since many Docker options can only be set at startup time, and the standard way to update the software in a container is to recreate it with a newer image. In your case, you also can't live-edit a configuration file the main container process needs at startup time, because it will have already read the configuration file by the time you're able to edit it.
The right way to do this is to inject the modified configuration file at container startup time. If you haven't already, get the default configuration file out of the image:
docker run --rm mysql:8 cat /etc/mysql/my.cnf > my.cnf
and edit it, directly, on your host, using your choice of text editor. When you launch the container, inject the modified file:
docker run -v $PWD/my.cnf:/etc/mysql/my.cnf ... mysql:8
or, in Docker Compose,
volumes:
- ./my.cnf:/etc/mysql/my.cnf
The Docker Hub mysql image documentation has some more suggestions on ways to set this; see "Using a custom MySQL configuration file" there.
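Per that documentation, the setting can also be passed as a server flag instead of editing the file at all. A sketch of the Compose form, placed under the MySQL service in docker-compose.yml:
command: --default-authentication-plugin=mysql_native_password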
While docker exec is an extremely useful debugging tool, it shouldn't be part of your core workflow, and I'd recommend trying to avoid it in cases like this. (With the bind-mount approach, you can add the modified config file to your source control system and blindly docker-compose up like normal without knowing this detail; a docker exec approach you'd have to remember and repeat by hand every time you started the container stack.)
Also note that you don't need sudo in Docker at all. Every context where you can run something (Dockerfiles, docker run, docker exec) has some way to explicitly specify the user ID, so you can docker exec -u root .... sudo generally depends on things like users having passwords and interactive prompting, which works well for administering a real Linux host but doesn't match a typical Docker environment.
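For example, to get a root shell in a running container without sudo (the container name is a placeholder):
docker exec -u root -it my-container bash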
The issue is not with sudo, because you already have permission to install packages.
You should instead update the package index before installing new packages, so that the package repositories are refreshed:
RUN apt-get update
RUN apt-get install -y nano
The MySQL image is built on Oracle Linux; run these commands to install nano:
microdnf update
microdnf install nano sudo -y
Then edit my.cnf with nano.
I discovered Docker last week and have been playing around with it for a while now.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and - and here is the problem - a MySQL DB.
So I created a Dockerfile, using alpine:latest as the base image, and after that installed the applications named above one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish my work. How am I able to start the MySQL DB and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p 31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible over port 31000. I do have a working MySQL DB on my host and manage it with phpMyAdmin (meaning there are two containers; the phpMyAdmin container is linked with the database)...
Is it even possible to do it like I want it, or do I have to deploy a second container with a database which is linked with my alpine container (and a third one with phpmyadmin...)?
I am thankful for every answer, thank you in advance
Sincerely
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker: to go from something "monolithic" to something that is separated into components. This approach gives you an advantage when you want to scale your app up/down, update specific components without rebuilding the whole app, etc.
Try to avoid the installation & configuration of every technology on your own
I remember trying to do so with MySQL. I spent a lot of time and got no result. I ended up using the official image. Installing software inside Docker can have tricky parts and is not the same as an installation done in a VM.
So, I would propose to start searching for the official images of the technologies that you are trying to put into use. Docker hub has plenty and most of them also provide guidelines on how to use/configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
Your running containers will have names. Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a container for your MySQL database named my_app_db listening on port 5000, configure the phpmyadmin container to connect there. An important notice here: don't try this on the default bridge network, because name resolution will not work there. Define your own test network.
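A sketch of what that looks like (the names and password are placeholders):
docker network create test-network
docker run -d --name my_app_db --network test-network -e MYSQL_ROOT_PASSWORD=test123 mysql:latest
docker run -d --name phpmyadmin --network test-network -p 8090:80 phpmyadmin/phpmyadmin
The phpmyadmin container can then reach the database at the hostname my_app_db.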
Dealing with 3,4,5... or maybe more containers will make you type commands to build them, run them, start/stop them. Here is where docker-compose comes in and proves to be very handy. Within a docker-compose.yml file, you can define a "composition" of inter-connecting containers and handle them with single commands like docker-compose up, docker-compose down etc...
Working example:
comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123
I have a Docker container with MariaDB installed. I am not using any volumes.
[vagrant#devops ~]$ sudo docker volume ls
DRIVER VOLUME NAME
[vagrant#devops ~]$
Now something strange is happening. When I do sudo docker stop and sudo docker start, the MariaDB data is still there. I expected this data to be lost.
Btw, when I edit some file, for example /etc/hosts, I do see the expected behavior: changes to this file are lost after a restart.
How is it possible that MariaDB data is persistent without volumes? This shouldn't happen right?
docker stop does not remove a container, nor does docker start create one.
docker run does create a new container from an image.
docker start starts a container which already exists but has been stopped before (call it pause/resume if you like).
Thus, for start/stop, no volumes are required to keep the state persistent.
If, though, you do docker stop <name> && docker rm <name> and then docker start <name>, you get an error that the container no longer exists - so now you need docker run <args> yourimage.
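A quick illustration of the lifecycle (the container and image names are placeholders):
docker run -d --name db my-mariadb-image   # creates a new container
docker stop db                             # stops it; its writable layer is kept
docker start db                            # resumes the same container, data intact
docker rm db                               # deletes the container and its layer
docker start db                            # Error: No such container: db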
I am learning Docker these days, and I want to install MySQL inside a Docker container.
Here is my Dockerfile
FROM ubuntu:14.04
ADD ./setup_mysql.sh /setup_mysql.sh
RUN chmod 755 /setup_mysql.sh
RUN /setup_mysql.sh
EXPOSE 3306
CMD ["/usr/sbin/mysqld"]
and the shell script setup_mysql.sh:
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
service mysql start &
sleep 5
echo "UPDATE mysql.user SET password=PASSWORD('rootpass') WHERE user='root'" | mysql
echo "CREATE DATABASE devdb" | mysql
echo "GRANT ALL ON devdb.* TO devuser #'%' IDENTIFIED BY 'devpass'" | mysql
sleep 5
service mysql stop
Something went wrong when running sudo docker build -t test/devenv .
Setting up mysql-server-5.5 (5.5.38-0ubuntu0.14.04.1) ...
invoke-rc.d: policy-rc.d denied execution of stop.
invoke-rc.d: policy-rc.d denied execution of start.
And if I remove the second sleep 5, the command service mysql stop will throw
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Why does this happen?
Thank you!
I highly recommend leveraging the work of others. For example, check out the MySQL image from the Docker registry:
https://registry.hub.docker.com/_/mysql/
Here's the associated git repository files:
https://github.com/docker-library/mysql/blob/master/5.7
If you look into the Dockerfile you'll notice the software is being installed as expected:
.. apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}"* ..
The trick is to realize that a database instance is not the same thing as the database software; only the latter is shipped with the image. Creating DBs and loading them with data is something that is done at run time. So that work is done by an extra script, pulled into the image and set up to be executed when you run the container:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
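As a rough sketch of that pattern (illustrative only, not the real entrypoint script from the repository):
#!/bin/sh
# docker-entrypoint.sh - initialize the instance on the first run only
if [ ! -d /var/lib/mysql/mysql ]; then
    mysqld --initialize-insecure   # first run: create the system tables
    # ...then start a temporary server and run CREATE DATABASE / GRANT statements...
fi
exec mysqld                        # hand PID 1 over to the server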
Hope this helps.
Add this to your Dockerfile (build steps already run as root, so no su is needed):
RUN echo exit 0 > /usr/sbin/policy-rc.d
I was facing the same issue, and this code fixed it.
Here is a good post which tries to root cause the issue you are facing.
Shorter way:
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d should resolve your issue
OR
If that doesn't resolve the issue, try running your docker container with the privileged option, like this: docker run --privileged -d -ti DOCKER_IMAGE:TAG
Ideally, I would not recommend running a container with the privileged option unless it's a test-bed container. Running a docker container with privileged gives all capabilities to the container and lifts all the limitations normally enforced. In other words, the container can then do almost everything that the host can do. This is not good practice, as it defeats Docker's purpose of isolating the container from the host machine.
The ideal way to do this is to set the capabilities of your docker container based on what you want to achieve. Googling should help you find the appropriate capabilities to grant your docker container.
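For instance, capabilities are granted per container with --cap-add (the capability name here is only an example; which ones you need depends on what the container does):
docker run --cap-add=SYS_NICE -d -ti DOCKER_IMAGE:TAG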