How should WordPress be run on Docker - MySQL

I am very new to all of this Docker stuff. I've read on some sites that there should be one image per running application. This means that to run WordPress I would need at least two images: one for MySQL and another for WordPress (and Apache). In fact, the official WordPress Docker image does not include MySQL; it requires an external connection.
But I've found some images in which MySQL is embedded alongside WordPress and Apache. This gives you a more portable image, because it is all you need to deploy on any server. But if a MySQL container is already running on the system, you are wasting resources.
So my question is whether WordPress should run in the same image as MySQL or not. And if not, how should I move all of the MySQL data to a different location?

The standard way is to have one container per service, so you will have a container for MySQL and another one for Apache/PHP with the application.
If you are going to use the official MySQL container and you want to persist the data, you can simply mount a volume from your host onto the data directory of the MySQL container:
$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
This will create a folder at the /my/own/datadir path of your host with all the content of MySQL.
You can find more information about that at this link:
https://github.com/docker-library/docs/tree/master/mysql#where-to-store-data

Related

Can I build an already locally built MySQL database with the MySQL Dockerfile?

I am completely new to Docker and everything that has to do with it. Last semester I built a MySQL database locally with MySQL Workbench and connected a Java project to it. This year I need to make this run in a Docker container. I have pulled the Dockerfile from GitHub and I am using Portainer to manage Docker.
My teacher wants the following:
He wants me to put my code in a repository which he created for me
Then he wants to pull my project, which should include a Dockerfile, so that he doesn't need to manually rebuild my MySQL database.
So how can I do this? Do I need to change the MySQL Dockerfile? Or should I use the default one and then initialize my database in my Java code?
This is my first post here on Stack Overflow and I am not that advanced in programming (only two years of experience with Java), so if I can give you any more information, please let me know. I hope this is the right way to post questions here.
I am thankful to everyone helping me out!
Greets, Luciore
You need to place your SQL file on GitHub as well, and it should be accessible from your Docker environment.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
(From the MySQL image documentation on Docker Hub: https://hub.docker.com/_/mysql)
And yes, you also need to modify your Dockerfile. Here is an example.
This will build a MySQL 8 based Docker image that downloads an SQL file from GitHub and initializes a database named classicmodels; any valid SQL file placed there will be executed on first boot.
FROM mysql:8
RUN apt-get update && apt-get install -y curl
# fetch the init script into the directory the entrypoint executes on first boot
RUN curl -fsSL https://raw.githubusercontent.com/Adiii717/doctor-demo-app/master/sample_database.sql -o /docker-entrypoint-initdb.d/sampledb.sql
Build the docker image
docker build -t mysql8 .
Run the docker container
docker run --rm --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -it mysql8

Connecting to a Percona Docker container from a Java Docker container

I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My Java project (which accesses MySQL) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard; it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a Percona Docker container with the command given in the link:
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona Docker image from Docker Hub instead of building your own image. The official image has a 5.5 version: https://hub.docker.com/_/percona/
You can extend this image if you need specific changes (such as a custom configuration), for example:
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through Docker's container-to-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used in most situations. The --link option has a number of limitations:
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking credentials (for example) to other containers.
Docker 1.9 introduced custom docker networks, which allow containers to discover and communicate with each other by name, without these limitations.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name as hostname (db); to illustrate this, open a shell in the app container, install curl, and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (also how to dynamically attach and detach a container from a network) in the "docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/

How do I backup data from MySQL container into a shared volume?

If I'm working with a containerized MySQL database that wasn't originally run with shared volume options, what's the easiest way to sort of externalize the data? Is there still a way to modify the container so that it shares its data with the Docker host in a specified directory?
Note: if you're still having problems with this question, please comment so I can improve it further.
The official Docker documentation provides a great overview of how to back up, restore, or migrate data volumes. For my problem in particular, I did the following:
Run a throw-away Docker container that runs Ubuntu, shares volumes with the currently running MySQL container, and backs up the database data onto the local machine (as described in the overview):
docker run --rm --volumes-from some-mysql -v /path/to/local/directory:/backup ubuntu:15.10 tar cvf /backup/mysql.tar /var/lib/mysql
(The official MySQL Docker image uses /var/lib/mysql for storing data.)
The previous step will result in the creation of /path/to/local/directory/mysql.tar on the Docker host. This can now be extracted like:
tar -xvf mysql.tar
(Assuming you first cd /path/to/local/directory.) The resulting directory (var/lib/mysql) can now be used as a shared volume with the same instance, or any other instance, of containerized MySQL.

Where is my Docker WordPress website storing data?

I currently have a WordPress container set up in Docker, linked to a MySQL database on the same machine (that is not in a Docker container). I played around with editing the website in my browser, deleting the WordPress container, and creating a new one linked to the same database.
When I did this, the sample posts I made on my website persisted, so I assumed my data was being stored locally by my database. However, I then tried setting up multiple websites with WordPress Multisite, using one WordPress container. To do this, I had to edit the WordPress config file inside the WordPress container.
I deleted this container, and created a new one like before. I tried replicating the config changes in this container; however, when I navigate to my website, it just gives me a white screen. This leads me to think that the MySQL database is suddenly pointing to empty tables.
Where are my WordPress templates/info actually being stored?
EDIT: Below is the command I run
sudo docker run -p 80:80 --name wordpress_local -e WORDPRESS_DB_HOST=(machine's IP address) -e WORDPRESS_DB_USER=user -e WORDPRESS_DB_PASSWORD=password -d wordpress
Note: This is assuming I have a local MySQL database set up that accepts connections from 0.0.0.0 and has a user called user with password password
I know that my container is properly linking to the database from looking at the logs (and from the fact that I can access the website, even though I just get a blank page).
EDIT 2: Looking at my WordPress container's filesystem, I can navigate different folders and do see content such as themes/plugins that I have installed. Why is this not being stored on my local machine? (Sorry if this is a dumb question; I am new to both MySQL and Docker.)
When you run the WordPress container for the first time, the initialization script downloads the WordPress codebase to /var/www/html and then starts the web server. Since everything inside a container is ephemeral, the codebase, with any changes you make, will be lost when you re-create the container (unless you just stop/start the container, which is not the best option for this scenario).
What you need is to make this folder persistent. To achieve this, you have to mount a folder from the host machine inside the container:
sudo docker run -p 80:80 \
--name wordpress_local \
-e WORDPRESS_DB_HOST=(machine's IP address) \
-e WORDPRESS_DB_USER=user \
-e WORDPRESS_DB_PASSWORD=password \
-d \
-v `pwd`/html:/var/www/html \
wordpress
Don't forget, the folder should already be created: mkdir -p html

Installation of system tables failed! boot2docker tutum/mysql mount file volume on Mac OS

I have trouble mounting a volume on the tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
Also note that as the Dockerfile declares volumes, mounting takes place whether or not you use the -v argument to docker run; it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
I've also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there was no recent update on my machine or to Docker. One thing I was doing differently was using images that could take up 5 GB or more during testing.
After cleaning up dangling images and volumes, I was able to start the MySQL image as usual.
This blog seems to have good instructions and explains all the variations of cleanup with Docker.