Docker image extending the mysql image isn't running the initdb scripts

Documentation for the mysql docker image says:
When a container is started for the first time [...] it will execute files with extensions .sh and .sql that are found in /docker-entrypoint-initdb.d. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data.
So at first I did this in my docker-compose.yml:
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - .:/docker-entrypoint-initdb.d:ro
When I ran docker-compose build and docker-compose up the container was created and the sql files in the current directory were executed. So far all good.
But if I want to deploy these containers to another machine (using docker-machine), mounting /docker-entrypoint-initdb.d as a volume won't work, since that machine won't have access to my machine's . directory.
So then I tried to extend the mysql:5.7 image:
FROM mysql:5.7
COPY ./*.sql /docker-entrypoint-initdb.d/
And did this in my docker-compose.yml:
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile
However, when I then run docker-compose build and docker-compose up on the second machine and try to run my application, the *.sql files in the current directory aren't executed. None of my tables are created.
Why doesn't my second approach work?
EDIT:
Ah, wait. I have asked the wrong question. The problem is not that the second approach doesn't work, it is that the second approach doesn't work when running it on the local docker-machine running in Virtualbox. The second approach actually works when I use it on my host machine (i.e. not using docker-machine).

I found the issue. The problem was that I thought docker-compose rm -f destroyed any volumes attached to the containers, but I was wrong. So what I thought was the first run of the containers was in fact reusing the database created by an earlier up. The SQL files weren't run because it wasn't actually the first time the containers started. Duh. Thanks Ken for pointing me in the right direction.
Turns out that not even using docker-compose rm -v removes the volumes. I had to list them with docker volume ls and then remove them manually with docker volume rm <volume>.
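For reference, the cleanup amounted to something like this (the actual volume name will differ on your machine):

docker-compose rm -f          # removes the containers, but not their named volumes
docker volume ls              # find the stale volume(s)
docker volume rm <volume>     # remove them manually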

Another Docker-specific way to clean up volumes:
docker system prune
This removes stopped containers, unused networks, dangling images, and build cache. Volumes are only removed if you also pass --volumes, and adding -a removes all unused images, not just dangling ones.
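For example, with a recent Docker version (older releases pruned volumes by default, so check docker system prune --help on your machine):

docker system prune                 # stopped containers, unused networks, dangling images, build cache
docker system prune -a --volumes    # additionally remove all unused images and unused volumes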

Related

MariaDB tables are deleted when using a volume in docker-compose

I copied a previous MariaDB container's data (/var/lib/mysql) and added it to a new container image.
This is the Dockerfile:
FROM mariadb:latest
ENV MYSQL_ROOT_PASSWORD tt
ENV MYSQL_DATABASE tt
ENV MYSQL_USER tt
ENV MYSQL_PASSWORD tt
# copy other database data
ADD mysql /var/lib/mysql
RUN chown -R mysql:mysql /var/lib/mysql
VOLUME /var/lib/mysql
EXPOSE 3306
CMD ["mysqld"]
When I build the Docker image, all the tables remain,
but when I run the image using a volume, all the tables disappear and only db.opt remains.
How can I keep the database's data while using a volume?
Your problem is related to how VOLUMES work: what you are doing is adding data to /var/lib/mysql at build time, and then when you run the container, a new EMPTY VOLUME is created and mounted at /var/lib/mysql, which pretty much overwrites anything you put there before.
The correct way to go about it for an existing tablespace would be to create a volume with the docker volume create syntax (ref: https://docs.docker.com/storage/volumes/), then use this trick (https://github.com/moby/moby/issues/25245#issuecomment-365980572) to add the table data to your volume, and finally run your MariaDB container mounting said volume: docker run --mount source=myvolume,target=/var/lib/mysql mariadb:latest
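A rough sketch of that workflow, assuming the copied data directory sits in ./mysql on the host (the volume name myvolume and the tag 10.6 are just examples):

docker volume create myvolume
# populate the volume from the host directory using a throwaway container,
# then hand ownership to the uid/gid the mariadb image runs mysqld as (commonly 999:999)
docker run --rm -v myvolume:/var/lib/mysql -v "$PWD/mysql":/source alpine \
    sh -c "cp -a /source/. /var/lib/mysql/ && chown -R 999:999 /var/lib/mysql"
# run MariaDB against the pre-populated volume
docker run -d --mount source=myvolume,target=/var/lib/mysql mariadb:10.6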
I will add that you shouldn't build an image with the tables added at the Docker build layer: it makes the image huge and it's the wrong use of it all around, except for some niche cases like QA databases that you destroy afterwards or read-only databases. The usual way to go about it is either using VOLUMES, as you stated, or bind mounts from the host OS.
Also, you shouldn't use mariadb:latest either; choose a particular version and stick with it, as mariadb:latest can upgrade/downgrade your version and cause all kinds of funny bugs.

Unexpected behaviour when extending the mysql Dockerfile

I created the following Dockerfile:
FROM mysql:5.7
ADD assets/geograph.cnf /etc/mysql/conf.d
ENV MYSQL_DATABASE=geograph \
    MYSQL_RANDOM_ROOT_PASSWORD=yes
ADD http://data.geograph.org.uk/dumps/gridimage_base_sample.mysql.gz /docker-entrypoint-initdb.d/gridimage_base_sample.sql.gz
I then created an image from this Dockerfile:
docker build --tag geograph:latest .
I then created a container from this image:
docker run --name geograph -e MYSQL_USER=geograph -e MYSQL_PASSWORD=geograph --detach geograph:latest
However, I've noticed some unexpected behaviour:
The container starts and then stops. (I've compared docker ps and docker ps --all.)
The geograph database doesn't contain the data from gridimage_base_sample.sql.gz, which was created by mysqldump. (I've verified the database dump.)
I expected the container to behave like the base MySQL image (mysql:5.7), with some additional configuration and some data. What am I doing wrong?
Some context: I'd like to use the database for analysis, ensuring that the results of my analysis are repeatable. I'm not going to be creating/deleting records and host/guest will probably be on the same machine. I've experimented with using another image/container to curl the database dump into a volume and then mounting this volume when creating the geograph container. This solves #2 but not #1. (It also seems a bit unnecessary to have two images/containers but that's not terribly important: if two are required, then two are required!)
Thanks in advance for any help.
Whilst the above image could ADD the database dump, the MySQL image couldn't unzip it because of a permissions error. (The user didn't have permission to write to /docker-entrypoint-initdb.d.) Seemingly this was enough to cause the container to stop (#1) and explains why the geograph database didn't contain the data from gridimage_base_sample.sql.gz (#2).
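One way to work around that, assuming the root cause is that ADD from a URL leaves the dump owned by root with restrictive permissions, is to add a step after the ADD line in the Dockerfile (a sketch only):

# make the init directory and the dump accessible to the mysql user that runs the init scripts
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d \
    && chmod 644 /docker-entrypoint-initdb.d/gridimage_base_sample.sql.gz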

docker run phpmyadmin inside alpine container

I discovered Docker last week and have been playing around with it for a while.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and - here is the problem - a MySQL DB.
So I created a Dockerfile, using alpine:latest as the base image and installing the above-named applications one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish my work. How am I able to start the MySQL DB and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p 31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible on port 31000. I do have a working MySQL DB on my host and manage it with phpMyAdmin (meaning there are two containers; the phpMyAdmin container is linked with the database)...
Is it even possible to do it like I want it, or do I have to deploy a second container with a database which is linked with my alpine container (and a third one with phpmyadmin...)?
I am thankful for every answer, thank you in advance
Sincerely
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker: to go from something "monolithic" to something separated into components. This approach gives you an advantage when you want to scale your app up/down, update specific components without rebuilding the whole app, etc.
Try to avoid the installation & configuration of every technology on your own
I remember trying to do so with MySQL myself. I spent a lot of time and got nowhere, and ended up using the official image. Installing software inside Docker can have tricky parts and is not the same as the installation one does in a VM.
So, I would propose to start searching for the official images of the technologies that you are trying to put into use. Docker hub has plenty and most of them also provide guidelines on how to use/configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
Your running containers will have names. Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a MySQL container named my_app_db listening on port 5000, you can configure the phpmyadmin container to connect there. An important notice here: don't try this on the default bridge network, because it will not work; define your own network.
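For example (the container and network names here are made up):

docker network create test-network
docker run -d --name my_app_db --network test-network -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name my_phpmyadmin --network test-network -e PMA_HOST=my_app_db -p 8090:80 phpmyadmin/phpmyadmin
# phpMyAdmin can now reach the database simply by the hostname "my_app_db"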
Dealing with 3, 4, 5... or maybe more containers means typing commands to build them, run them, and start/stop them. Here is where docker-compose comes in and proves to be very handy. Within a docker-compose.yml file, you can define a "composition" of inter-connected containers and handle them with single commands like docker-compose up, docker-compose down, etc.
Working example:
comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123

Can't connect nodejs and mysql in same docker

I'm new to Docker and I'm trying to make my Node.js Express app run inside it.
I'm trying to install the dependencies using a shell script, and it works, but in the end I can't connect to MySQL.
My Dockerfile installs MySQL, creates a user and a database, and installs Node.js too.
Then it runs npm install and tries to start my app, but Knex says it can't connect to MySQL with the message:
Knex:Error Pool2 - Error: connect ECONNREFUSED /var/run/mysqld/mysqld.sock
Here's a gist with the code I'm using (the Node.js part is incomplete, just the important part):
https://gist.github.com/jradesenv/527f6e59ab2e7985c38fbed3a2084c83
I hope someone has a good idea on how to resolve or debug this.
The best practice is to keep the components of a micro-service separate in their own container.
See for instance "Learn Docker by building a Microservice" from Dave Kerr.
He does declare two services:
version: '2'
services:
  users-service:
    build: ./users-service
    ports:
      - "8123:8123"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    build: ./test-database
With a dedicated Dockerfile for the database:
FROM mysql:5
ENV MYSQL_ROOT_PASSWORD 123
ENV MYSQL_DATABASE users
ENV MYSQL_USER users_service
ENV MYSQL_PASSWORD 123
ADD setup.sql /docker-entrypoint-initdb.d
Docker containers are designed to run a single command. The mysql installer expects the service it registered to automatically be started on the OS bootup, but that's not the case inside of a container.
The proper solution is to split these into two separate containers, one db container, and another nodejs/app container. Link and run the two together with a docker-compose configuration that automatically sets up the host names.
The less ideal option is supervisord which you can use to run and manage multiple processes inside of the container. You install it just like any other app, configure your db and node app as two services for supervisord to manage, and then launch supervisord as your container's run command.
Use docker-compose and create a Dockerfile for your Node.js app and one for MySQL. Each container is responsible for its own thing. In your compose file, link them. Then point your Node.js DB connection at the MySQL container.

MySQL image ignores volume configuration of docker-compose.yml

Using the official MySQL Docker image, I don't understand how to mount the data directory to a specific point on the host. The Dockerfile of the image sets
VOLUME /var/lib/mysql
so database data should be stored "somewhere" on the host. I want to be more specific in my docker-compose file, so I tried the following:
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=mydb
  volumes:
    - ./database/mysql:/var/lib/mysql
Starting with docker-compose up everything works fine, but the ./database/mysql directory on the host stays empty, whereas /var/lib/mysql in the container contains data. Is there a problem in my configuration? Or do I misunderstand how to use volumes?
docker-compose will always try to preserve data volumes, so that you don't lose any data within them. If you started with a data volume, then changed to a host volume, you may still get the data volume.
To correct this, run docker-compose stop && docker-compose rm -f -v, which will remove your containers and their data volumes (this will erase any data in those volumes). On the next docker-compose up, you should see it using the host volume.
Edit: As of Compose 1.6 you can run docker-compose down -v instead of the two commands above.
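So the full sequence might look like this (note that removing the volumes erases whatever data was in them):

# older Compose versions
docker-compose stop
docker-compose rm -f -v

# Compose 1.6 and later
docker-compose down -v

# on the next up, ./database/mysql on the host should receive the data
docker-compose up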