How to run JIRA and MySQL in the same Docker container

I have found this docker image with JIRA on it. JIRA can be used with MySQL, but I do not want to run MySQL in another container.
In my opinion it is more useful to run MySQL in the same container (faster access, higher security, fewer resources, etc.).
How can I accomplish that?

You need to use a base image which specializes in managing several services, in order to avoid the "PID 1 zombie reaping issue".
Create a Dockerfile similar to the JIRA one, but:
with phusion/baseimage-docker as the base image
with MySQL installed (as in this Dockerfile)
with both JIRA and MySQL declared as additional daemons
with baseimage-docker's init system:
CMD ["/sbin/my_init"]
That way, you can easily start multiple apps, and also stop the container knowing that all apps will shut down properly.
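As a rough, untested sketch of that layout (the JIRA install path, version tag, and service scripts below are placeholders, not taken from the actual JIRA Dockerfile):

    FROM phusion/baseimage:0.9.17

    # Install MySQL inside the same image (as the linked Dockerfile does)
    RUN apt-get update && apt-get install -y mysql-server \
        && rm -rf /var/lib/apt/lists/*

    # Install JIRA here (placeholder; adapt the steps from the JIRA Dockerfile)
    # ADD atlassian-jira /opt/jira

    # Declare both as runit daemons supervised by baseimage-docker
    RUN mkdir -p /etc/service/mysql /etc/service/jira \
        && printf '#!/bin/sh\nexec mysqld_safe\n' > /etc/service/mysql/run \
        && printf '#!/bin/sh\nexec /opt/jira/bin/start-jira.sh -fg\n' > /etc/service/jira/run \
        && chmod +x /etc/service/mysql/run /etc/service/jira/run

    # baseimage-docker's init system: reaps zombies and runs all daemons
    CMD ["/sbin/my_init"]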

I set up JIRA and PostgreSQL, but in two containers; have a look at https://github.com/manufy/docker-jira-bitbucket-gitlab/blob/master/atlassian/jira/docker-compose.yml
For a single container, you can take the JIRA Dockerfile, add the commands to install MySQL, and after that just configure the database during JIRA's web install.
Perhaps this will help you.

Related

How can I shard a MySQL database with Vitess using both Docker images?

I found out about Vitess, which lets you shard a MySQL database.
I want to use the Docker images of both MariaDB and Vitess, but I'm not quite sure what to do next. I'm using CentOS 7.
I pulled the images:
docker pull mariadb
docker pull vitess/root
docker pull vitess/orchestrator
Then I logged into the Vitess image:
sudo docker run -ti vitess/root bash
As the website said, I ran make build:
make build
I set the environment variables:
export VTROOT=/vt
export VTDATAROOT=/vt/vtdataroot
The manual said it would be in the home directory, but in the image it's at the root.
But after that I'm stuck. I launch ZooKeeper: ./zk-up.sh
Starting zk servers...
Waiting for zk servers to be ready...
Started zk servers.
ERROR: logging before flag.Parse: E0412 00:31:26.378586 132 syslogger.go:122] can't connect to syslog
W0412 00:31:26.382527 132 vtctl.go:80] cannot connect to syslog: Unix syslog delivery error
Configured zk servers.
Oops, okay, let's continue...
./vtctld-up.sh for the web interface
Starting vtctld...
Access vtctld web UI at http://88bdaff4e181:15000
Obviously I cannot access that link, since it's inside Docker on a headless server.
./vttablet-up.sh is supposed to bring up 3 vttablets, but MariaDB is in another container, not yet started, and if I open the file it is not apparent how to set it up.
Is there any MySQL or PostgreSQL sharding solution that is easier to install?
Or how can I set this up?
(Docker noob here sorry)
Thanks!
If you need multiple containers orchestrated, your best bet is to use docker-compose. You can define all the application dependencies as separate containers and network them to be accessible from each other.
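As a generic sketch of that idea (not a working Vitess topology; the image choice and password are just illustrative), a docker-compose.yml could look like:

    version: "2"
    services:
      mariadb:
        image: mariadb
        environment:
          MYSQL_ROOT_PASSWORD: secret   # placeholder
      vitess:
        image: vitess/root
        depends_on:
          - mariadb
        # within this container, MariaDB is reachable at the hostname "mariadb"

Running docker-compose up brings both containers up on a shared network, so each service can reach the others by service name.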

Setting up MySQL for a dev environment with Docker

I'm trying to set up a Docker MySQL server with phpMyAdmin and an existing company_dev.sql file to import, in an effort to dockerize my dev environment.
My first question is: how do I go about setting this up? Do I need to specify an OS, i.e. Ubuntu, in my Dockerfile, then add sudo apt-get install mysql-server and install phpMyAdmin? Or am I better off running an existing Docker image from the Docker repo and building on top of that?
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using dockerfile is best practice.
I appreciate any advice.
First of all, with Docker you should have a single service/daemon per container. In your case, MySQL and phpMyAdmin should go in different containers. This is not mandatory (there are workarounds) but it makes things a lot easier.
In order to avoid reinventing the wheel, you should IMHO always use existing images for the wanted service, especially if they're official ones. But again, you can choose, for any reason, to start from scratch (a base image such as "Ubuntu" or "Debian", just to name two) and install the needed stuff.
About the storage question: Docker containers should always be immutable. If a container needs to save its state, it should use volumes. Volumes are a way to share a folder between the container and the host. For instance, the official mysql image uses a volume to store the database files.
So, to summarize: you should use ready-made images when possible, and no, using docker commit to store MySQL data is not good practice.
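As a concrete sketch of that advice (container names, the password, and the phpmyadmin/phpmyadmin image are illustrative assumptions):

    # MySQL in its own container: a named volume holds the database files,
    # and the dump placed in /docker-entrypoint-initdb.d is imported on
    # the first start of the container
    docker run -d --name dev-mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=company_dev \
        -v dev-mysql-data:/var/lib/mysql \
        -v "$PWD/company_dev.sql":/docker-entrypoint-initdb.d/company_dev.sql \
        mysql

    # phpMyAdmin in a second container, pointed at the first one
    docker run -d --name dev-phpmyadmin \
        --link dev-mysql:db -e PMA_HOST=db \
        -p 8080:80 phpmyadmin/phpmyadmin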
Previously I have used this Dockerfile in order to restore MySQL data.
GitHub - stormcat24/docker-mysql-remote
My first question is how do I go about setting this up?
This Dockerfile uses mysqldump to load data from a real environment and save it into the Docker environment. You can do the same; it will dump and restore all the tables of the database you specify.
Do I need to specify an OS, i.e. Ubuntu in my Dockerfile, then add sudo apt-get install mysql-server and install phpmyadmin?
As you can see, this Docker image is built from Docker Hub's library/mysql, so we don't need to prepare any basic middleware ourselves, except phpMyAdmin.
Or am I better off running an existing docker image from the docker repo and building on top of that?
It's better to build on an already existing Docker image!
Upon making CRUD operations to this database, I would like to save its state for later use. Would using docker commit be appropriate for this use case? I know using dockerfile is best practice.
I have tried this as well. After some testing, I successfully saved a Docker image containing a MySQL DB. To do that, we just need to run docker commit xxx after finishing the build. Be careful, however: don't push your image file to Docker Hub, since it contains your data.
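For illustration, the flow looks roughly like this (image and container names are placeholders):

    # Run a container, load your data into it, then snapshot it.
    # Caveat: the official mysql image declares /var/lib/mysql as a VOLUME,
    # and volume contents are NOT captured by docker commit, so this only
    # works with an image that keeps its datadir inside the container
    # filesystem (e.g. a custom build like the one linked above).
    docker run -d --name seeded-db my-custom-mysql
    docker exec -i seeded-db mysql -uroot < dump.sql
    docker commit seeded-db myteam/seeded-db:snapshot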

MySQL container broken, how to recover data?

Our IT team broke the MySQL container and now it cannot be started.
I understand that I can commit a new version and run it without the entrypoint, so I can exec -it into it and check what's wrong.
But how can I recover my data? Should I inspect the old container and copy all files from the mounted volume? (It seems like overkill for this problem; can I 'start' my container without the entrypoint?)
What's the best practice for this problem?
If you have a mounted volume, your data is in a volume directory on your host, and it will stay there unless you delete it. So, fix your MySQL image and then create another MySQL container.
You should be able to fix your container by using docker attach or docker exec. You can even change container entrypoint using something like this: How to start a stopped Docker container with a different command?
But that's not a good approach. As stated in Best practices for writing Dockerfiles, Docker containers should be ephemeral, meaning that they can easily be replaced with new ones. So, the best option is to destroy your container and create a new one.
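For the debugging route mentioned above, the usual trick (container and image names are placeholders) is to commit the broken container and start the result with a shell instead of the normal entrypoint:

    # snapshot the broken container's filesystem as a temporary image
    docker commit broken-mysql debug/broken-mysql
    # start it with a shell instead of the MySQL entrypoint
    docker run -it --rm --entrypoint /bin/bash debug/broken-mysql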
I think, as @kstromeiraos says, you should first fix your Dockerfile, if it's broken at all, and then build and run the container again using:
docker build
docker run -v xxx
Since you have used volumes, your MySQL data seems to be properly backed up, so the new container which comes up should have the backed-up data.
You can do:
docker exec -it <container> bash
and get into the container and check the logs and data.
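To actually locate the data and reattach it, something along these lines should work (the container name and volume path are placeholders):

    # find out where the old container's volume lives on the host
    docker inspect -f '{{ json .Mounts }}' broken-mysql
    # start a fresh MySQL container on top of the same data directory
    docker run -d --name new-mysql \
        -v <volume-or-host-path>:/var/lib/mysql \
        mysql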

How to make a docker image with a populated database for automated tests?

I want to create containers with a MySQL DB and a dump loaded, for integration tests. Each test should connect to a fresh container, with the DB in the same state. It should be able to read and write, but all changes should be lost when the test ends and the container is destroyed. I'm using the "mysql" image from the official Docker repo.
1) The image's docs suggest taking advantage of the entrypoint script, which will import any .sql files you provide in a specific folder. As I understand it, this will import the dump again every time a new container is created, so it's not a good option. Is that correct?
2) This SO answer suggests extending that image with a RUN statement to start the mysql service and import all dumps. This seems to be the way to go, but I keep getting
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
followed by
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
when I run the build, even though I can connect to MySQL fine in containers of the original image. I tried sleep 5 to wait for the mysqld service to start up, and adding -h with 'localhost' or the docker-machine IP.
How can I fix "2)"? Or, is there a better approach?
If re-seeding the data is an expensive operation, another option would be starting/stopping a Docker container (previously built with the DB and seed data). I blogged about this a few months ago, in Integration Testing using Spring Boot, Postgres and Docker; although the blog focuses on Postgres, the idea is the same and could be translated to MySQL.
The standard MySQL image is pretty slow to start up, so it might be useful to use something that has been prepared more for this situation, like this:
https://github.com/awin/docker-mysql
You can include data, or use it with a Flyway setup too, and it should speed things up a bit.
How I've solved this before is by using a database migration tool, specifically Flyway: http://flywaydb.org/documentation/database/mysql.html
Flyway is more for migrating the database schema, as opposed to putting data into it, but you could use it either way. Whenever you start your container, just run the migrations against it and your database will be set up however you want. It's easy to use, and you can also just use the default MySQL Docker container without messing around with any settings. Flyway is also nice for many other reasons, like giving you version control for the database schema, and the ability to perform migrations on production databases easily.
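A rough sketch of that flow with the Flyway command-line client (container name, credentials, and database name are placeholders):

    # start a vanilla MySQL container for the test run
    docker run -d --name test-db -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=testdb mysql
    # apply the migrations (schema and/or data) against it
    flyway -url=jdbc:mysql://localhost:3306/testdb \
        -user=root -password=secret migrate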
To run integration tests with a clean DB, I would just have an initial dataset that you insert before the test, then afterwards just truncate all the tables. I'm not sure how large your dataset is, but I think this is generally faster than restarting a MySQL container every time.
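For instance, one way to reset between tests (credentials and the database name are assumptions) is to generate a TRUNCATE statement for every table from the information schema:

    # generate, then execute, a TRUNCATE for every table in testdb
    # (disable foreign key checks first if tables reference each other)
    mysql -h127.0.0.1 -uroot -psecret -N -e \
        "SELECT CONCAT('TRUNCATE TABLE \`', table_name, '\`;') \
         FROM information_schema.tables WHERE table_schema='testdb'" \
        | mysql -h127.0.0.1 -uroot -psecret testdb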
Yes, the data will be imported every time you start a container. This could take a long time.
You can view an example image that I created:
https://github.com/kliewkliew/mysql-adventureworks
https://hub.docker.com/r/kliew/mysql-adventureworks/
My Dockerfile builds an image by installing MySQL, importing a sample database (from a .sql file), and setting the entrypoint to auto-start the MySQL server. When you start a container from this image, it will have the data pre-loaded in the database.
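A rough, untested sketch of that pattern (base image, database name, and file names are placeholders; it deliberately avoids the official mysql image, whose VOLUME on /var/lib/mysql would discard data written at build time):

    FROM ubuntu:16.04

    # install MySQL into the image itself
    RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
        apt-get install -y mysql-server && rm -rf /var/lib/apt/lists/*

    COPY dump.sql /tmp/dump.sql

    # start mysqld just long enough to import the dump at build time;
    # the populated datadir is then baked into the image layer
    RUN service mysql start \
        && mysql -uroot -e 'CREATE DATABASE testdb' \
        && mysql -uroot testdb < /tmp/dump.sql \
        && service mysql stop

    # auto-start the server when a container is created
    # (for access from other containers you may also need to adjust
    # bind-address and grants)
    CMD ["mysqld_safe"]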

Architecture of a Docker multi-app server with regard to the database

I have a server running 5 or 6 small Rails apps. All their attached files are on S3, and they all use MySQL as their database. Each app has its own user and runs a few Thin instances. There is an nginx server doing the load balancing and domain routing.
I plan to replace this server with a Docker installation: one server with one container per app, and nginx in front.
My question is : where would you put the database part ?
I mainly see 4 possibilities :
1) One MySQL server inside each app container. This does not seem to be Docker's philosophy, I think, and it would require each container's data to be backed up individually.
2) A unique MySQL container for all apps.
3) A standard MySQL installation on the host Docker server.
4) A separate MySQL server for all apps.
What would you do ?
PS: I know Docker is not production-ready yet; I plan to use it for staging at the moment and switch if I'm happy with it.
It depends on several factors. Here are some questions to help you to decide.
Are the 5-6 apps very similar (i.e., in Docker terms, you could base them on a common image), and are you thinking about deploying more of them, and/or migrating some of them to other servers?
YES: then it makes sense to embed the MySQL server in each app, because it will "stick around" with the app, with minimal configuration effort.
NO: then there is no compelling reason to embed the MySQL server.
Do you want to be able to scale those apps (i.e. load balance requests for a single app on multiple containers), or to scale the MySQL server (to e.g. a master/slave replicated setup) ?
YES: then you cannot embed the MySQL server; otherwise, scaling one tier would scale the other tier, which will lead to tough headaches.
NO: then nothing prevents you from embedding the MySQL server.
Do you think that there will be a significant database load on at least one of those apps?
YES: then you might want to use separate MySQL servers, because a single app could impede the others.
NO: then you can use a single MySQL server.
Embedding the MySQL server is fine if you want a super-easy-to-deploy setup, where you don't need scalability, but you want to be able to spin up new instances super easily, and you want to be able to move instances around without difficulty.
The most flexible setup is the one where you deploy one app container plus one MySQL container for each app. If you want to do that, I would suggest waiting for Docker 0.7, which will implement links, giving you a basic service discovery mechanism so that each app container can easily discover the host/port of its database container.
I wouldn't deploy MySQL on the host; if you want a single MySQL install, you can achieve the same result by running a single MySQL container and running it with -p 3306:3306 (it will route the host's 3306/tcp port to the MySQL container's 3306/tcp port).
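A sketch of that single-container setup, with an app container discovering it through a link (image names and the password are placeholders):

    # one shared MySQL container, published on the host's port 3306
    docker run -d --name mysql -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=secret mysql
    # an app container linked to it; inside, the database is reachable
    # under the "db" alias (links are the 0.7-era discovery mechanism)
    docker run -d --name app1 --link mysql:db my-rails-app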
Since the 5 or 6 apps are small, as you described, I would definitely exclude the option of installing a separate MySQL per container, for two reasons:
It is a waste of server resources; it is almost equivalent to installing MySQL 5 or 6 times on the same server.
It is less flexible (you cannot scale the DB independently of the apps) and harder to back up.
Having a dedicated MySQL container or installing MySQL directly on the host (i.e. not dockerized) should give almost the same performance; in the end you will have a native mysqld process on the host, regardless of whether it runs in a container or not.
The only difference is that you have to mount a volume to persist the data outside the MySQL container, so having a dedicated MySQL container is the better option.
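A sketch of that dedicated container with its data persisted on the host (the host path and password are placeholders):

    # keep the database files outside the container, so it can be
    # destroyed and recreated without losing data
    docker run -d --name shared-mysql -p 3306:3306 \
        -v /srv/mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret mysql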