I discovered Docker last week and have been playing around with it for quite some time now.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and - and here is the problem - a MySQL database.
So I created a Dockerfile that uses alpine:latest as the base image and then installs the applications named above one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish my work. How can I start the MySQL database and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p 31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible on port 31000. I do have a working MySQL database on my host and manage it with phpMyAdmin (meaning there are two containers, and the phpMyAdmin container is linked to the database container)...
Is it even possible to do it the way I want, or do I have to deploy a second container with a database that is linked to my Alpine container (and a third one with phpMyAdmin...)?
I am thankful for every answer, thank you in advance
Sincerely
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker: to go from something "monolithic" to something separated into components. This approach gives you an advantage when you want to scale your app up or down, update specific components without rebuilding the whole app, etc.
Try to avoid installing & configuring every technology on your own
I remember trying to do this myself with MySQL. I spent a lot of time and got nowhere; I ended up using the official image. Installing software inside Docker can have tricky parts and is not necessarily the same as installing it in a VM.
So, I would propose searching for the official images of the technologies you are trying to use. Docker Hub has plenty, and most of them also provide guidelines on how to use and configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
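For instance, the official mysql image can be started with a single command, following its Docker Hub instructions (a minimal sketch; the container name and password are placeholders):
# start a throwaway MySQL server with a root password
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest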
Your running containers will have names. Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a MySQL database container named my_app_db listening on port 5000, you can configure the phpmyadmin container to connect there. One important note: don't try this on the default network, because name resolution will not work there. Define your own test network.
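A minimal sketch of that setup with plain docker commands (the network and container names are arbitrary; PMA_HOST is the variable the phpmyadmin image uses to find its database):
# create a user-defined network so containers can resolve each other by name
docker network create test-network
# start the database on that network
docker run -d --name my_app_db --network test-network -e MYSQL_ROOT_PASSWORD=secret mysql:latest
# start phpMyAdmin on the same network, pointing at the database by container name
docker run -d --name my_phpmyadmin --network test-network -p 31000:80 -e PMA_HOST=my_app_db phpmyadmin/phpmyadmin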
Dealing with 3, 4, 5... or maybe more containers means typing commands to build them, run them, and start/stop them. This is where docker-compose comes in and proves very handy. Within a docker-compose.yml file, you can define a "composition" of inter-connected containers and handle them with single commands like docker-compose up, docker-compose down, etc.
Working example:
It comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123
Related
Currently I have a Docker container that hosts a web page (mostly PHP). Right now the database is stored on a server on AWS. For development purposes I want to create a local database in the Docker container. I did some googling around, and it seems most people recommend creating an entirely separate container for hosting MySQL. Since this is only a database for development, I am wondering if I can avoid the effort of setting up another container and put MySQL directly into the container that hosts the web page. To do this I tried installing mysql-server:
sudo apt-get install mysql-server
MySQL installed fine this way. Then I tried to run the MySQL interactive shell:
mysql -u root -p
When I did this I got the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Can I run MySQL in the same Docker container, or am I going to need to create a separate one?
There is really no effort in setting up a separate MySQL container. The real effort is installing it inside an existing container.
I would recommend that you create a docker-compose file and define the application and database containers (make sure you have docker-compose installed in your dev environment; in most cases it is already installed).
Create a file docker-compose.yml (you can create it in the same folder where the Dockerfile for your project is, usually the project root folder) with the following content:
version: '2'
services:
  app:
    image: your_app_docker_image_name
    # ...more config options depending on your project (volumes, ports, etc.)
  db:
    image: mariadb
    volumes:
      - './user/db:/var/lib/mysql'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
To start your project run
docker-compose up
This will bring up your app container and a separate MySQL container (intentionally without a root password, since this is for dev purposes).
Now you can access the MySQL server from your app container like this:
mysql -h db -u root
Using docker-compose you can set up complex environments easily. And when deploying to production or another test environment, you don't need to change your Dockerfile.
There are many pros to having separate containers for each service.
To have PHP + Apache + MySQL in the same container, you either have to find an image like https://github.com/tutumcloud/lamp or build one yourself from a Dockerfile.
But try to imagine that one day you decide to switch your db storage engine from MySQL to Percona or MariaDB, or you would like to start using Memcached/Redis for your application. Neither of those will be a problem if you have your services in separate containers.
I'm new to Docker and I'm trying to make my Node.js Express app run inside it.
I'm trying to install the dependencies using a shell script, and it's working, but in the end I can't connect to MySQL.
My Dockerfile installs MySQL, creates a user and a database, and installs Node.js too.
Then it runs npm install and tries to start my app, but Knex says it can't connect to MySQL with the message:
Knex:Error Pool2 - Error: connect ECONNREFUSED /var/run/mysqld/mysqld.sock
Here's a gist with the code I'm using (the Node.js part is incomplete, just the important part):
https://gist.github.com/jradesenv/527f6e59ab2e7985c38fbed3a2084c83
I hope someone will have a good idea of how to resolve or debug this.
The best practice is to keep the components of a micro-service separate, each in its own container.
See for instance "Learn Docker by building a Microservice" from Dave Kerr.
He does declare two services:
version: '2'
services:
  users-service:
    build: ./users-service
    ports:
      - "8123:8123"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    build: ./test-database
With a dedicated Dockerfile for the database:
FROM mysql:5
ENV MYSQL_ROOT_PASSWORD 123
ENV MYSQL_DATABASE users
ENV MYSQL_USER users_service
ENV MYSQL_PASSWORD 123
ADD setup.sql /docker-entrypoint-initdb.d
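The setup.sql added to /docker-entrypoint-initdb.d is executed automatically on the container's first start. A hypothetical sketch of what such a file might contain (the table is an assumption for illustration):
-- seeded into the users database when the container initialises
CREATE TABLE directory (user_id INT AUTO_INCREMENT PRIMARY KEY, email VARCHAR(128) NOT NULL);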
Docker containers are designed to run a single command. The MySQL installer expects the service it registered to be started automatically on OS boot, but that's not the case inside a container.
The proper solution is to split these into two separate containers: one db container and another nodejs/app container. Link and run the two together with a docker-compose configuration that automatically sets up the host names.
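A minimal docker-compose.yml sketch of that split (service names, credentials, and the DB_HOST variable are assumptions for illustration):
version: '2'
services:
  app:
    build: .                 # your Node.js image, built from your existing Dockerfile
    depends_on:
      - db
    environment:
      - DB_HOST=db           # reach MySQL via TCP by service name, not a local socket
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=mydb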
The less ideal option is supervisord, which you can use to run and manage multiple processes inside the container. You install it just like any other app, configure your db and node app as two services for supervisord to manage, and then launch supervisord as your container's run command.
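A rough sketch of such a supervisord configuration (the program paths and commands are assumptions and will differ per image):
[supervisord]
nodaemon=true

[program:mysql]
command=/usr/bin/mysqld_safe

[program:node]
command=node /usr/src/app/index.js
The container's run command then becomes something like supervisord -c /etc/supervisord.conf.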
Use docker-compose and create a Dockerfile for your Node.js app and one for MySQL. Each container is responsible for doing its own thing. In your compose file, link them. Then point your Node.js db connection to the MySQL container.
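For example, the Knex connection from the gist would then point at the MySQL container by host name instead of the local socket (a sketch; the service name db and the credentials are assumptions):
// connect over TCP to the "db" service, resolved by Docker's DNS
const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: 'db',
    user: 'root',
    password: 'secret',
    database: 'mydb'
  }
});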
Documentation for the mysql docker image says:
When a container is started for the first time [...] it will execute files with extensions .sh and .sql that are found in /docker-entrypoint-initdb.d. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data.
So at first I did this in my docker-compose.yml:
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - .:/docker-entrypoint-initdb.d:ro
When I ran docker-compose build and docker-compose up, the container was created and the SQL files in the current directory were executed. So far, so good.
But if I want to deploy these containers to another machine (using docker-machine), mounting /docker-entrypoint-initdb.d as a volume won't work, since that machine won't have access to my machine's . directory.
So then I tried to extend the mysql:5.7 image:
FROM mysql:5.7
COPY ./*.sql /docker-entrypoint-initdb.d/
And do this in my docker-compose.yml:
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile
However, when I then run docker-compose build and docker-compose up on the second machine and try to run my application, the *.sql files in the current directory aren't executed. None of my tables are created.
Why doesn't my second approach work?
EDIT:
Ah, wait. I have asked the wrong question. The problem is not that the second approach doesn't work, it is that the second approach doesn't work when running it on the local docker-machine running in Virtualbox. The second approach actually works when I use it on my host machine (i.e. not using docker-machine).
I found the issue. The problem was that I thought docker-compose rm -f destroyed any volumes attached to the containers, but I was wrong. So what I thought were freshly up'ed containers were in fact using the database created by an earlier up. The SQL files weren't run because it wasn't actually the first time the containers started. Duh. Thanks Ken for pointing me in the right direction.
It turns out that not even using docker-compose rm -v removes the volumes. I had to list them with docker volume ls and then remove them manually with docker volume rm <volume>.
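A short sketch of that cleanup (the dangling filter restricts the listing to volumes no container references):
# list volumes that no container currently uses
docker volume ls -qf dangling=true
# remove a specific volume by name
docker volume rm <volume>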
Another Docker-specific way to clean up volumes:
docker system prune
This will remove stopped containers, dangling images, and unused networks; add --volumes to also remove unused volumes. Adding -a will remove all unused images, not just dangling ones.
Using the official MySQL Docker image, I don't understand how to mount the data directory to a specific point on the host. The Dockerfile of the image sets
VOLUME /var/lib/mysql
so database data should be stored "somewhere" on the host. I want to be more specific in my docker-compose file, so I tried the following:
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=mydb
  volumes:
    - ./database/mysql:/var/lib/mysql
Starting with docker-compose up everything works fine, but the ./database/mysql directory on the host stays empty, whereas /var/lib/mysql in the container contains data. Is there a problem in my configuration? Or do I misunderstand how to use volumes?
docker-compose will always try to preserve data volumes, so that you don't lose any data within them. If you started with a data volume, then changed to a host volume, you may still get the data volume.
To correct this, run docker-compose stop && docker-compose rm -f, which will remove your containers and your data volumes (this will erase any data in your data volumes). On the next docker-compose up, you should see it using the host volume.
Edit: As of Compose 1.6 you can run docker-compose down -v instead of the two commands above.
I am trying to automate the installation and running of set of linked docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"

dbdata:
  image: mysql_preloaded
  environment:
    - MYSQL_USER=ergast
    - MYSQL_ROOT_PASSWORD=mrd
    - MYSQL_DATABASE=f1db

rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is: can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it, and initialise the database, presumably as part of an initial fig up? If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL docker image looks like it will support arbitrary datadir definitions ("Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/docker"). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL docker script has been pushed to Docker Hub just yet, and I can't quite see how fig could make use of this alternative pathway.
Some options:
Edit the fig.yml to run a custom command that is different from the default image command/entrypoint.
From http://www.fig.sh/yml.html (example):
command: bundle exec thin -p 3000
Start the container locally, modify it and then commit it as a new image.
Modify the MySQL image docker-entrypoint.sh file to do your custom initialization.
https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11) and then you can
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.