How to start multiple instances in one MySQL Docker container - mysql

I want to deploy data slots to distributed MySQL databases via middleware, so I need one MySQL Docker container running two instances on different ports, e.g. 3306 and 3316.
I have tried many ways, such as:
Add mysql_3316.sh:
#!/bin/bash
/entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf
in the rc.local:
#!/bin/sh -e
/usr/local/bin/mysql_3316.sh || exit 1
exit 0
and modified the Dockerfile like below,
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
COPY rc.local /etc/rc.local
RUN chmod +x /etc/rc.local
RUN chown root:root /etc/rc.local
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
It doesn't work when the MySQL container comes up, but the 3316 instance does start if I run the /entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf line manually.
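For reference, a second mysqld instance needs at least its own port, datadir, socket and pid file. A minimal sketch of what my_3316.cnf could contain (the paths here are illustrative, not my actual values):
[mysqld]
port     = 3316
datadir  = /var/lib/mysql_3316
socket   = /var/run/mysqld/mysqld_3316.sock
pid-file = /var/run/mysqld/mysqld_3316.pid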
I also tried init.d,
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /etc/init.d/mysql_3316
RUN chmod +x /etc/init.d/mysql_3316
RUN update-rc.d mysql_3316 defaults 99
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
I tried crontab,
@reboot /usr/local/bin/mysql_3316.sh
#Don't remove the empty line at the end of this file. It is required to run the cron job
with this Dockerfile:
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
ADD crontab /etc/cron.d/docker-cron
RUN chmod +x /etc/cron.d/docker-cron
RUN crontab /etc/cron.d/docker-cron
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
I have spent a lot of time on this and am close to giving up...
Any suggestions are welcome.
Here is the docker-compose.yml for mysql:
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    hostname: mysql
    restart: unless-stopped
    networks:
      dockernet:
        ipv4_address: 172.18.0.5
    ports:
      - 3306:3306
      - 3316:3316
    volumes:
      - /Docker/mysql/:/var/lib/mysql/
      - ./docker/mysql/mysql/my.cnf:/etc/mysql/my.cnf
      - ./docker/mysql/mysql/my_3316.cnf:/etc/mysql/my_3316.cnf
      - ./docker/mysql/mysql/logs/:/var/log/mysql/
      - ./docker/mysql/mysql/init/:/docker-entrypoint-initdb.d/
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']

Normally you do NOT want to run more than one process in the same container. Despite your title, I really think what you are looking for is to start two containers, both from a MySQL image.
You should not need to change any startup scripts, the Dockerfile, or anything else to start similar containers bound to different ports.
Remember that the EXPOSE command only exposes the ports to different containers, not to the outside world.
To access the port you need to use the -p flag with your docker run: https://docs.docker.com/engine/reference/run/#expose-incoming-ports
You can use the same Docker image from the same Dockerfile; just pass a different -p parameter on each run.
Edit:
You added your docker-compose.yml after my initial response. Using docker-compose makes my advice about -p obsolete; vary the port numbers in the ports: section of the docker-compose.yml instead.
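For example, a minimal sketch of such a compose file with two services built from the same stock image (the service names and volume paths here are illustrative assumptions):
services:
  mysql_a:
    image: mysql:latest
    ports:
      - 3306:3306
    volumes:
      - /Docker/mysql_a/:/var/lib/mysql/
  mysql_b:
    image: mysql:latest
    ports:
      - 3316:3306
    volumes:
      - /Docker/mysql_b/:/var/lib/mysql/
Note that each mysqld still listens on 3306 inside its own container; only the host-side port differs.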
This answer, however, might not be what you are looking for; based on your comment I think I do not fully understand your use case here.

Use the stock mysql container and just run:
docker run -p3306:3306 --name mysql1 mysql
docker run -p3316:3306 --name mysql2 mysql
# plus appropriate -d -e ... -v ... as needed on both commands
Don't try to build your own image and definitely don't try to run two servers with expected different lifetimes in a single container.
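Fleshed out a little, a sketch of those two commands (the password and volume paths are placeholders, not recommendations):
docker run -d --name mysql1 -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=secret -v /Docker/mysql1:/var/lib/mysql mysql
docker run -d --name mysql2 -p 3316:3306 \
  -e MYSQL_ROOT_PASSWORD=secret -v /Docker/mysql2:/var/lib/mysql mysql
Both servers listen on 3306 inside their containers; -p 3316:3306 simply maps the second one to host port 3316.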

Related

Dockerfile to install Nginx and MySQL in same image

I want to install nginx and MySQL in the same image. I start out with a MySQL image, with the plan to install nginx via the Dockerfile.
Here is my dockerfile:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD=HelloWorld \
    MYSQL_DATABASE=content
RUN apt update
RUN apt install nginx -y
COPY nginx.conf /etc/nginx/nginx.conf
This starts the MySQL DB perfectly, and nginx also gets installed. Unfortunately, nginx doesn't start. To start nginx I added another command to the Dockerfile:
CMD service nginx start
After adding this line to the Dockerfile, the container exits right after creation. What am I doing wrong here?
I am using the command below to start a container from the above image:
docker run -it -p 3306:3306 -p 8080:80 -p 8081:443 --name mycontainer myimage
It's best to run each process in a separate container. But if you want to do this, you should create a bash script that starts both MySQL and nginx, and use that script as the ENTRYPOINT of your image/container.
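A rough sketch of such a wrapper, assuming the official mysql base image from the question (the script name is made up; service nginx start matches its Debian base):
#!/bin/bash
# start-services.sh (hypothetical): start nginx in the background,
# then hand PID 1 over to the official MySQL entrypoint.
service nginx start
exec docker-entrypoint.sh mysqld
and in the Dockerfile:
COPY start-services.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/start-services.sh
ENTRYPOINT ["/usr/local/bin/start-services.sh"]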

Service mysql not running in Dockerfile

I have an issue running MySQL using this Dockerfile:
FROM mysql:latest
# Add a database
ENV MYSQL_DATABASE some_table_name
ENV MYSQL_ROOT_PASSWORD some_password
ADD some_table.sql /docker-entrypoint-initdb.d
ENTRYPOINT /bin/bash
Building an image from this code works, but the mysql service is not found in the container when I try to run service mysql start, and the mysqld command would not run due to some root security issues. Is anyone able to help? Thank you.
Remove the ENTRYPOINT, then build and run the container.
Then run another process inside the container to check, like:
docker exec -ti {containername} mysql -u root -p
This will check that the password is set right. Then run SHOW CREATE TABLE {db.tablename}.
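A quick walk-through of those steps, assuming the image is tagged myimage as in the question and the container name mydb is arbitrary:
docker build -t myimage .
docker run -d --name mydb myimage
docker exec -ti mydb mysql -u root -p
# then, inside the mysql client:
# SHOW CREATE TABLE {db.tablename};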

docker run phpmyadmin inside alpine container

I discovered Docker last week and have been playing around with it for a decent amount of time.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and - here is the problem - a MySQL DB.
So I created a Dockerfile, using alpine:latest as the base image, and installed the applications named above one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish the job. How am I able to start the MySQL DB and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p 31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible on port 31000. I do have a working MySQL DB on my host, which I manage with phpMyAdmin (meaning there are two containers, and the phpMyAdmin container is linked with the database)...
Is it even possible to do it the way I want, or do I have to deploy a second container with a database linked to my alpine container (and a third one with phpMyAdmin...)?
I am thankful for every answer; thank you in advance.
Sincerely,
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker: going from something monolithic to something separated into components. This approach gives you an advantage when you want to scale your app up or down, or update specific components without rebuilding the whole app, etc.
Try to avoid installing & configuring every technology on your own.
I remember trying to do so with MySQL. I spent a lot of time, had no result, and ended up using the official image. Installing software inside Docker can have tricky parts and is not the same as the installation one does in a VM.
So I would propose searching for the official images of the technologies you are trying to put to use. Docker Hub has plenty, and most of them also provide guidelines on how to use/configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
Your running containers will have names, and Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a container for your MySQL database named my_app_db listening on port 5000, you can configure the phpmyadmin container to connect there. An important notice: don't try this on the default network, because it will not work; define your own test-network.
Dealing with 3, 4, 5... or maybe more containers means typing commands to build them, run them, and start/stop them. This is where docker-compose comes in and proves very handy: within a docker-compose.yml file, you can define a composition of inter-connected containers and handle them with single commands like docker-compose up, docker-compose down, etc.
Working example:
It comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run it, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123
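To also follow the earlier advice about not using the default network, a hypothetical extension of this file adds a user-defined bridge network (the name test-network is an assumption):
networks:
  test-network:
    driver: bridge
(with a networks: section listing test-network added under each of the two services).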

Docker integration test with static data at launch

I have been struggling with this issue for a while now. I want to do a performance test using a specific set of data.
To achieve this I am using Docker-compose and I have exported a couple of .sql files.
When I first build and run the container using docker-compose build and docker-compose up, the dataset is fine. Then I run my test, which also inserts data. After running my test I want to perform it again, so I relaunch the container using docker-compose up. But this time, for some reason I don't understand, the data that was inserted the last time (by my test) is still there, so I get different behaviour.
At the moment I have the following Dockerfile:
FROM mysql:5.7
ENV MYSQL_DATABASE=dev_munisense1 \
    MYSQL_ROOT_PASSWORD=pass
EXPOSE 3306
ADD docker/data/ /docker-entrypoint-initdb.d/
Because I read that the mysql Docker image runs everything in /docker-entrypoint-initdb.d/. The first time it works properly.
I have also tried what these posts suggested:
How do i migrate mysql data directory in docker container?
http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/
How to create populated MySQL Docker Image on build time
How to make a docker image with a populated database for automated tests?
And a couple of identical other posts.
None of them seem to work currently.
How can I make sure the dataset is exactly the same each time I launch the container, without having to rebuild the image each time? (Rebuilding takes rather long because of the large dataset.)
Thanks in advance
EDIT:
I have also tried running the container with different arguments, like docker-compose up --force-recreate --build mysql, but without success. The container is rebuilt and restarted, but the DB is still affected by my test. Currently the only solution to my problem is to remove the entire container and image.
I managed to fix the issue (with the mysql image) by doing the following:
Change the mount point of the SQL storage; this is what actually caused the problem. I used the solution suggested in How to create populated MySQL Docker Image on build time, but did it with a sed command: RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
Add my scripts to a folder inside the container.
Run an import.sh script that inserts the data using the daemon (via the wait-for-it.sh script).
Remove the SQL scripts.
Expose the port as usual.
The Dockerfile looks like this (the build arguments are used to select different SQL files; I wanted multiple versions of the image):
FROM mysql:5.5.54
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /utils/wait-for-it.sh
COPY docker/import.sh /usr/local/bin/
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_ALLOW_EMPTY_PASSWORD
ARG DEVICE_INFORMATION
ARG LAST_NODE_STATUS
ARG V_NODE
ARG NETWORKSTATUS_EVENTS
ENV MYSQL_DATABASE=$MYSQL_DATABASE \
    MYSQL_USER=$MYSQL_USER \
    MYSQL_PASSWORD=$MYSQL_PASSWORD \
    MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
    DEVICE_INFORMATION=$DEVICE_INFORMATION \
    LAST_NODE_STATUS=$LAST_NODE_STATUS \
    V_NODE=$V_NODE \
    MYSQL_ALLOW_EMPTY_PASSWORD=$MYSQL_ALLOW_EMPTY_PASSWORD
#Set up tables
COPY docker/data/$DEVICE_INFORMATION.sql /usr/local/bin/device_information.sql
COPY docker/data/$NETWORKSTATUS_EVENTS.sql /usr/local/bin/networkstatus_events.sql
COPY docker/data/$LAST_NODE_STATUS.sql /usr/local/bin/last_node_status.sql
COPY docker/data/$V_NODE.sql /usr/local/bin/v_node.sql
RUN chmod 777 /usr/local/bin/import.sh && chmod 777 /utils/wait-for-it.sh && \
/bin/bash /entrypoint.sh mysqld --user='root' & /bin/bash /utils/wait-for-it.sh -t 0 localhost:3306 -- /usr/local/bin/import.sh; exit
RUN rm -f /usr/local/bin/*.sql
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
The import script looks like this:
#!/bin/bash
echo "Going to insert the device information"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/device_information.sql
echo "Going to insert the last_node_status"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/last_node_status.sql
echo "Going to insert the v_node"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/v_node.sql
echo "Going to insert the networkstatus_events"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/networkstatus_events.sql
echo "Database now has the following tables"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --execute="SHOW TABLES;"
So now all I have to do to start my performance tests is:
#!/usr/bin/env bash
echo "Shutting down previous containers"
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose-test-10k-half.yml down
docker-compose -f docker-compose-test-100k-half.yml down
docker-compose -f docker-compose-test-500k-half.yml down
docker-compose -f docker-compose-test-1m-half.yml down
echo "Launching rabbitmq container"
docker-compose up -d rabbitmq & sh wait-for-it.sh -t 0 -h localhost -p 5672 -- sleep 5;
echo "Going to execute 10k test"
docker-compose -f docker-compose-test-10k-half.yml up -d mysql_10k & sh wait-for-it.sh -t 0 -h localhost -p 3306 -- sleep 5 && ./networkstatus-event-service --env=performance-test --run-once=true;
docker-compose -f docker-compose-test-10k-half.yml stop mysql_10k
(a couple more of these lines follow, slightly different because of the different container names)
Running docker-compose down after your tests will destroy everything associated with your docker-compose.yml.
Docker Compose is a container lifecycle manager, and by default it tries to keep everything across multiple runs. As Stas Makarov mentions, there is a VOLUME defined in the mysql image that persists the data outside of the container.
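A minimal reset between test runs (a sketch, assuming the stock mysql image; its data volume is anonymous unless something is mounted over /var/lib/mysql):
docker-compose down -v   # -v also removes the named and anonymous volumes
docker-compose up -d     # fresh datadir, so /docker-entrypoint-initdb.d/ runs again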

Docker Cannot link to a non running container

I need to create Rails and MySQL containers with docker-compose. When I try to create links between the containers with docker-compose up, I get:
Cannot start container 9b271c58cf6aecaf017dadaf5b: Cannot link to a non running container: /puma_db_1 AS /puma_web_1/db
Files
Dockerfile
FROM ubuntu:14.04
RUN apt-get -y update
RUN apt-get -y install git curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
RUN apt-get -y install libmysqlclient-dev
RUN git clone https://github.com/sstephenson/rbenv.git /root/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git /root/.rbenv/plugins/ruby-build
RUN echo 'eval "$(rbenv init -)"' >> $HOME/.profile
RUN echo 'eval "$(rbenv init -)"' >> $HOME/.bashrc
RUN rbenv install 2.1.5
RUN rbenv global 2.1.5
RUN gem install rails -v 4.0.11
ADD app.tar.gz /home/
WORKDIR /home/app
RUN bundle install
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
db:
  image: mysql:latest
  environment:
    MYSQL_DATABASE: app_development
    MYSQL_USER: mysql
    DATABASE_PASSWORD: onetwo
    ROOT_PASSWORD: onetwo
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "4000:3000"
  links:
    - db
Most likely the db container fails to start.
Make sure it works fine by starting only the db service. You can do that with the following command:
docker-compose up db
If it appears the MySQL service is not running after this command, then you have found the origin of your problem.
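For example, after that you can dig further with (the service name db is taken from the compose file above):
docker-compose logs db   # inspect the db service's output
docker ps -a             # check whether the db container exited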
Not specifically related to MySQL, but more to the message ERROR: for <service> Cannot link to a non running container: /b2f21b869ccc_<dependency>_1 AS /<service>_1/<dependency>_1:
I found that the dependency container had a different ID than the one given (b2f21b869ccc in my example above).
I solved it simply by running
docker-compose up -d --force-recreate <service>
which caused it to recreate the dependency and fix the link to the correct Docker ID.
For me, running docker-compose up db did not help.
This did the trick for me:
sudo service docker restart
and then continuing with docker-compose up (-d)
You might try out the new Docker networking features. To do this, you must remove the links parameter from your docker-compose.yml and initialize the containers with the --x-networking option:
docker-compose --x-networking up -d
To prevent Docker from generating random names for the containers (these names are added to the /etc/hosts file of the respective network for every container), you can use the container_name: key in the docker-compose.yml:
db:
  container_name: db
  image: mysql:latest
  environment:
    MYSQL_DATABASE: app_development
    MYSQL_USER: mysql
    DATABASE_PASSWORD: onetwo
    ROOT_PASSWORD: onetwo
web:
  container_name: web
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "4000:3000"
Issue:
I have gotten this error whenever docker-compose successfully builds a set of images, but one of those images fails to run (e.g. fails to launch into its own container).
In this case, I suspect the image underlying your puma_db_1 container is failing to run. You can find the name of this image by running docker ps -a. That said, its name is most likely puma_db.
Solution:
To get at the cause, you can try docker-compose up <service_name> or docker-compose up db.
Alternatively, I find the error message given by running docker run <image_name> more useful. In this case, that would be docker run puma_db.
I had the same problem with mssql.link. As I am not using a local database (rather the one we have on staging), all I had to do was comment that line out in the run script:
# DOCKER_ARGS="${DOCKER_ARGS} --link mssql-server-linux:mssql.link"
This may help someone or no one, but it sorted it for me :)
If you started a container, say X, with a link (--link keen_visvesvaraya), and after X came up the linked container was stopped while X kept running, then trying to docker exec into X gives this error.
The solution is to restart X.
I had the same problem with Elasticsearch, Symfony and Docker:
Can not link to a non-running container: /43c1d3b410db_myindex_elasticsearch_1 AS /myindex_apache_1/elasticsearch
The solution is to delete the content of the data volume:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
  volumes:
    - ./docker/volume/elasticsearch:/usr/share/elasticsearch/data
and run docker-compose up -d again.
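A sketch of that reset, assuming the bind-mounted host path shown above:
docker-compose down
rm -rf ./docker/volume/elasticsearch/*
docker-compose up -d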
You can use the command below, which worked for me:
docker run --name standlone-mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=test -e MYSQL_USER=root -d mysql:5.6
You need to modify the db: service in the yml file to include POSTGRES_HOST_AUTH_METHOD in the environment section:
db:
  environment:
    POSTGRES_HOST_AUTH_METHOD: trust
I got the same error when restarting the service:
Cannot link to a non running container: /c7e8ba2cc034_<service1>_1 AS /<service2>/<service1>
In my case service1 had exited, so I removed the container (docker rm) and started service1 again.