Docker Cannot link to a non running container - mysql

I need to create Rails and MySQL containers with docker-compose. When I try to create links between the containers with docker-compose up, I get:
Cannot start container
9b271c58cf6aecaf017dadaf5b Cannot link to a non running container:
/puma_db_1 AS /puma_web_1/db
Files
Dockerfile
FROM ubuntu:14.04
RUN apt-get -y update
RUN apt-get -y install git curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
RUN apt-get -y install libmysqlclient-dev
RUN git clone https://github.com/sstephenson/rbenv.git /root/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git /root/.rbenv/plugins/ruby-build
RUN echo 'eval "$(rbenv init -)"' >> $HOME/.profile
RUN echo 'eval "$(rbenv init -)"' >> $HOME/.bashrc
RUN rbenv install 2.1.5
RUN rbenv global 2.1.5
RUN gem install rails -v 4.0.11
ADD app.tar.gz /home/
WORKDIR /home/app
RUN bundle install
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
db:
  image: mysql:latest
  environment:
    MYSQL_DATABASE: app_development
    MYSQL_USER: mysql
    DATABASE_PASSWORD: onetwo
    ROOT_PASSWORD: onetwo
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "4000:3000"
  links:
    - db

Most likely the db container fails to start.
Make sure it works fine by starting only the db service. You can do that with the following command:
docker-compose up db
If it appears the MySQL service is not running after this command, then you have found the origin of your problem.
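One likely culprit in the compose file shown in the question is the environment block: the official mysql image reads MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD (not DATABASE_PASSWORD and ROOT_PASSWORD), and it exits immediately on first start if no root-password option is set at all, which would leave puma_db_1 non-running. A corrected db section would look something like this:

```
db:
  image: mysql:latest
  environment:
    MYSQL_DATABASE: app_development
    MYSQL_USER: mysql
    MYSQL_PASSWORD: onetwo
    MYSQL_ROOT_PASSWORD: onetwo
```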

Not specifically related to MySQL, but more to the message ERROR: for <service> Cannot link to a non running container: /b2f21b869ccc_<dependency>_1 AS /<service>_1/<dependency>_1
I found that the dependency container had a different id than the one given (b2f21b869ccc in my example above).
This was solved simply by running
docker-compose up -d --force-recreate <service>
which caused it to recreate the dependency and fix the link to the correct Docker id.

For me, running docker-compose up db did not help.
This did the trick for me:
sudo service docker restart
and then continuing with docker-compose up (-d)

You might try out the new Docker networking features. To do this, remove the links parameter from your docker-compose.yml and initialize the containers with the --x-networking option:
docker-compose --x-networking up -d
To prevent Docker from generating random names for the containers (which are added to the /etc/hosts file of every container on the network), use the container_name: key in docker-compose.yml:
db:
  container_name: db
  image: mysql:latest
  environment:
    MYSQL_DATABASE: app_development
    MYSQL_USER: mysql
    DATABASE_PASSWORD: onetwo
    ROOT_PASSWORD: onetwo
web:
  container_name: web
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "4000:3000"

Issue:
I have gotten this error whenever docker-compose successfully builds a set of images, but one of those images fails to run (i.e. fails to launch into its own container).
In this case, I suspect the image underlying your puma_db_1 container is failing to run. You can find the name of this image by running docker ps -a. That said, its name is most likely puma_db.
Solution:
To get at the cause, you can try docker-compose up <service_name>, i.e. docker-compose up db.
Alternatively, I find the error message given by running docker run <image_name> more useful. In this case, that would be docker run puma_db.

I had the same problem for mssql.link. As I am not using a local database (rather the one we have on staging), all I had to do was comment that line out in the Dockerfile script:
# DOCKER_ARGS="${DOCKER_ARGS} --link mssql-server-linux:mssql.link"
This may help someone or no one, but it sorted it for me :)

Suppose you started a container X with --link keen_visvesvaraya, and once X was up the linked container was stopped while X kept running. If you now try to docker exec into X, you get this error.
The solution is to restart X.

I had the same problem with Elasticsearch, Symfony and Docker:
Cannot link to a non running container: /43c1d3b410db_myindex_elasticsearch_1 AS /myindex_apache_1/elasticsearch
The solution was to delete the content of the data volume:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
  volumes:
    - ./docker/volume/elasticsearch:/usr/share/elasticsearch/data
and run docker-compose up -d again.
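As a concrete command sequence (the host path comes from the volume mapping above; adjust it to your project):

```
docker-compose down                    # stop and remove the containers
rm -rf ./docker/volume/elasticsearch   # wipe the stale data volume
docker-compose up -d                   # recreate everything
```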

You can use the command below; it worked for me:
docker run --name standalone-mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=test -d mysql:5.6

If you are using Postgres, you need to modify the db: service in the yml file to include POSTGRES_HOST_AUTH_METHOD in the environment section:
db:
  environment:
    POSTGRES_HOST_AUTH_METHOD: trust

I got the same error when restarting a service:
Cannot link to a non running container: /c7e8ba2cc034_<service1>_1 AS /<service2>/<service1>
In my case there was an exited service2, so I removed the container (docker rm) and started service2 again.
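A sketch of that cleanup (the container id is a placeholder to fill in from the first command's output):

```
docker ps -a --filter "status=exited"   # find the exited container
docker rm <container_id>                # remove it
docker-compose up -d <service>          # start the service again
```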

Related

Dockerfile to install Nginx and MySQL in same image

I want to install nginx and MySQL in the same image. I start out with a MySQL image, with the plan to install nginx using a Dockerfile.
Here is my dockerfile:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD=HelloWorld \
MYSQL_DATABASE=content
RUN apt update
RUN apt install nginx -y
COPY nginx.conf /etc/nginx/nginx.conf
This starts the MySQL db perfectly, and nginx also gets installed. Unfortunately, nginx doesn't start. To start nginx I also added another command in the Dockerfile:
CMD service nginx start
After adding this line to the Dockerfile, the container closes after creation. What am I doing wrong here?
I am using below command to start container with above image:
docker run -it -p 3306:3306 -p 8080:80 -p 8081:443 --name mycontainer myimage
It's best to run each process in a separate container. But if you want to do this, you should create a bash file that starts both MySQL and nginx, and use that bash file as the ENTRYPOINT of your image/container.
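A minimal sketch of such a start script; the entrypoint path below is the one used by the official mysql image, but treat it as an assumption to verify in your own image:

```
#!/bin/bash
# start.sh: launch nginx in the background, then hand control to the
# MySQL image's own entrypoint so mysqld runs in the foreground and
# keeps the container alive.
service nginx start
exec /usr/local/bin/docker-entrypoint.sh mysqld
```

In the Dockerfile you would then COPY start.sh /start.sh, RUN chmod +x /start.sh, and set ENTRYPOINT ["/start.sh"].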

Service mysql not running in Dockerfile

I have an issue running MySQL using this Dockerfile:
FROM mysql:latest
# Add a database
ENV MYSQL_DATABASE some_table_name
ENV MYSQL_ROOT_PASSWORD some_password
ADD some_table.sql /docker-entrypoint-initdb.d
ENTRYPOINT /bin/bash
Building an image works for this code, but the mysql service is not found in the container when I try to run service mysql start. The mysqld command would not run due to some root security issues. Is anyone able to help? Thank you.
Remove the ENTRYPOINT, then build and run the container.
Run another process to check inside the container, like:
docker exec -ti {containername} mysql -u root -p
This will check that the password is set right. Then run SHOW CREATE TABLE {db.tablename}.

how to start multi instances in one mysql docker container

I want to deploy data slots to distributed MySQL databases via middleware, and need one MySQL Docker container running two instances on different ports, e.g. 3306 and 3316.
I have tried many ways, such as adding mysql_3316.sh:
#!/bin/bash
/entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf
in the rc.local:
#!/bin/sh -e
/usr/local/bin/mysql_3316.sh || exit 1
exit 0
and modified the Dockerfile like below,
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
COPY rc.local /etc/rc.local
RUN chmod +x /etc/rc.local
RUN chown root:root /etc/rc.local
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
It doesn't work when the MySQL container comes up, but the 3316 instance does work when I run the /entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf line manually.
Then I tried init.d:
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /etc/init.d/mysql_3316
RUN chmod +x /etc/init.d/mysql_3316
RUN update-rc.d mysql_3316 defaults 99
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
Then I tried crontab:
#reboot /usr/local/bin/mysql_3316.sh
#Don't remove the empty line at the end of this file. It is required to run the cron job
and the Dockerfile as that,
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
ADD crontab /etc/cron.d/docker-cron
RUN chmod +x /etc/cron.d/docker-cron
RUN crontab /etc/cron.d/docker-cron
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
I have spent a lot of time on this and have almost given up...
Any suggestions are welcome.
Here is the docker-compose.yml for mysql:
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    hostname: mysql
    restart: unless-stopped
    networks:
      dockernet:
        ipv4_address: 172.18.0.5
    ports:
      - 3306:3306
      - 3316:3316
    volumes:
      - /Docker/mysql/:/var/lib/mysql/
      - ./docker/mysql/mysql/my.cnf:/etc/mysql/my.cnf
      - ./docker/mysql/mysql/my_3316.cnf:/etc/mysql/my_3316.cnf
      - ./docker/mysql/mysql/logs/:/var/log/mysql/
      - ./docker/mysql/mysql/init/:/docker-entrypoint-initdb.d/
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']
Normally you do NOT want to run more than one process in the same container. Despite your title, I really think what you are looking for is to start two containers, both from a MySQL image.
You should not need to change any startup scripts, the Dockerfile, or anything else to start up similar containers bound to different ports.
Remember that the EXPOSE command only exposes the ports to other containers, not to the outside world.
To access a port you need to use the -p flag with docker run: https://docs.docker.com/engine/reference/run/#expose-incoming-ports
You can use the same Docker image from the same Dockerfile; just give a different -p parameter when you run each container.
Edit:
You added your docker-compose.yml after my initial response. Using docker-compose will make my advice about -p obsolete, and you should use the ports: section of the docker-compose.yml to vary the port numbers instead.
This answer, however, might not be what you are looking for because based on your comment I think I do not fully understand your use case here.
Use the stock mysql container and just run:
docker run -p3306:3306 --name mysql1 mysql
docker run -p3316:3306 --name mysql2 mysql
# plus appropriate -d -e ... -v ... as needed on both commands
Don't try to build your own image and definitely don't try to run two servers with expected different lifetimes in a single container.
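In docker-compose form, this two-container approach would look something like the sketch below (service names, password, and volume names are made up for illustration):

```
services:
  mysql1:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
    ports:
      - "3306:3306"        # host 3306 -> container 3306
    volumes:
      - mysql1-data:/var/lib/mysql
  mysql2:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
    ports:
      - "3316:3306"        # only the host port differs
    volumes:
      - mysql2-data:/var/lib/mysql
volumes:
  mysql1-data:
  mysql2-data:
```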

Import into dockerized mariadb on initial build with script

I'm using MariaDB, but I think this could probably apply to MySQL as well.
I have a project that works off of MariaDB, and there is some initial setup for the database that needs to be done to create tables, insert initial data, etc. Based on other answers, I could normally do ADD dump.sql /docker-entrypoint-initdb.d, but I don't have a dump.sql -- instead what I have is a python script that connects to MariaDB directly and creates the tables and data.
I have a docker-compose.yml
version: '3'
services:
  db:
    build: ./db
    ports:
      - "3306:3306"
    container_name: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
  web:
    build: ./web
    command: node /app/src/index.js
    ports:
      - "3000:3000"
    links:
      - db
"Web" is not so important right now since I just want to get db working.
The Dockerfile I've attempted for DB is:
# db/Dockerfile
FROM mariadb:10.3.2
RUN apt-get update && apt-get install -y python python-pip python-dev libmariadbclient-dev
RUN pip install requests mysql-python
ADD pricing_import.py /scripts/
RUN ["/bin/sh", "-c", "python /scripts/pricing_import.py"]
However this doesn't work for various reasons. I've gotten up to the point where pip install mysql-python doesn't compile:
_mysql.c:2005:41: error: 'MYSQL' has no member named 'reconnect'
if ( reconnect != -1 ) self->connection.reconnect = reconnect;
I think this has to do with the installation of mysql-python.
However, before I go down this hole too far, I want to make sure my approach even makes sense. I don't think the database will even be started by the time the ./pricing_import.py script runs, and since the script connects to the database and runs queries, it probably won't work.
Since I can't get the python installation to work on the mariadb container anyway, I was also thinking about creating another docker-compose entry that depends on db and runs the python script on build to do the initial import during docker-compose build.
Are either of these approaches correct, or is there a better way to handle running an initialization script against MariaDB?
We use a docker-compose healthcheck in combination with a makefile and bash to handle running an initialization script. Your docker-compose.yml would look something like this:
version: '3'
services:
  db:
    build: ./db
    ports:
      - "3306:3306"
    container_name: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
    healthcheck:
      test: mysqlshow --defaults-extra-file=./database/my.cnf
      interval: 5s
      timeout: 60s
Where ./database/my.cnf is the config with credentials:
[client]
host = db
user = root
password = password
Then you can use this health-check.bash script to check the health:
#!/usr/bin/env bash
DATABASE_DOCKER_CONTAINER=$1

# Check on health of database server before continuing
function get_service_health {
  # https://gist.github.com/mixja/1ed1314525ba4a04807303dad229f2e1
  docker inspect -f '{{if .State.Running}}{{ .State.Health.Status }}{{end}}' $DATABASE_DOCKER_CONTAINER
}

until [[ $(get_service_health) != starting ]]; do
  echo "database: ... Waiting on database Docker Instance to Start"
  sleep 5
done

# Instance has finished starting; it will be unhealthy until the database finishes startup
MYSQL_HEALTH_CHECK_ATTEMPTS=12
until [[ $(get_service_health) == healthy ]]; do
  echo "database: ... Waiting on database service"
  sleep 5
  if [[ $MYSQL_HEALTH_CHECK_ATTEMPTS == 0 ]]; then
    echo $DATABASE_DOCKER_CONTAINER ' failed health check (not running or unhealthy) - ' $(get_service_health)
    exit 1
  fi
  MYSQL_HEALTH_CHECK_ATTEMPTS=$((MYSQL_HEALTH_CHECK_ATTEMPTS-1))
done

echo "Database is healthy"
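The polling logic of health-check.bash can be sanity-checked without Docker by stubbing get_service_health. The stub below is a toy stand-in (not part of the original script) that reports "starting" twice before "healthy"; it keeps its counter in a file because each $(...) call runs the function in a subshell:

```shell
#!/usr/bin/env bash
# File-backed counter standing in for `docker inspect` health status.
STATE_FILE=$(mktemp)
echo 0 > "$STATE_FILE"

get_service_health() {
  n=$(cat "$STATE_FILE")
  n=$((n + 1))
  echo "$n" > "$STATE_FILE"
  # Report "starting" for the first two polls, then "healthy".
  if [ "$n" -lt 3 ]; then echo starting; else echo healthy; fi
}

# Same shape as the wait loop above, with a shorter sleep.
until [ "$(get_service_health)" = healthy ]; do
  echo "database: ... Waiting on database service"
  sleep 1
done
echo "Database is healthy"
```

Running it prints the waiting message twice and then "Database is healthy".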
Finally, you can use a makefile to connect everything together. Something like this:

docker-up:
	docker-compose up -d

db-health-check:
	db/health-check.bash db

load-database:
	docker run --rm --interactive --tty --network your_docker_network_name -v `pwd`:/application -w /application your_docker_db_image_name python /application/pricing_import.py

start: docker-up db-health-check load-database
Then start your app with make start.

Docker, how to run .sql file in an image?

It's my first time working with Docker and I am not sure if I am doing things well.
I have a rails applications that depends on a Mysql database, so I've configured the docker-compose.yml file like this:
db:
  image: library/mysql:5.6
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  expose:
    - "3306"
  ports:
    - "3306:3306"
rails-app:
  build: .
  dockerfile: Dockerfile
  environment:
    RAILS_ENV: development
  links:
    - db
  volumes:
    - ".:/home/app"
  volumes_from:
    - bundle
... omitted lines ...
Then, if I run the following:
$ docker-compose run db mysql --host=$DOCKER_LOCALHOST --port=3306 --protocol=tcp -u root < shared/create_common_tables.sql
I get this error:
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.99.100' (111)
This seems expected, because I suspect I first have to build some container that links to db.
I say this because if I run the following, in this order:
$ docker-compose build rails-app
$ docker-compose run -e RAILS_ENV=development rails-app bundle
$ docker-compose run -e RAILS_ENV=development rails-app bundle exec rake db:create
$ docker-compose run db mysql --host=$DOCKER_LOCALHOST --port=3306 --protocol=tcp -u root < shared/create_common_tables.sql
It works fine.
But how can I execute this SQL before creating any container?
You can load the sql file during the build phase of the image. To do this you create a Dockerfile for the db service that will look something like this:
FROM mysql:5.6
COPY setup.sh /mysql/setup.sh
COPY setup.sql /mysql/setup.sql
RUN /mysql/setup.sh
where setup.sh looks something like this:
#!/bin/bash
set -e
service mysql start
mysql < /mysql/setup.sql
service mysql stop
And in your docker-compose.yml you'd change image to build: ./db or the path where you put your files.
Now this works if you have all your sql in a raw .sql file, but this won't be the case if you're using Rails or a similar framework where the sql is actually stored in code. This leaves you with two options.
Instead of using FROM mysql:5.6 you can use FROM your_app_image_that_has_the_code_in_it and apt-get install mysql .... This leaves you with a larger image that contains both mysql and your app, allowing you to run the ruby commands above. You'd replace the mysql < /mysql/setup.sql line with the rails-app bundle exec rake db:create lines. You'd also have to provide an app config that hits a database on localhost:3306 instead of db:3306.
My preferred option is to create a script which exports the sql into a .sql file, which you can then use to build your database container. This is a bit more work, but is a lot nicer. It means that instead of running rails-app bundle exec rake db:create you'd just run the script to load a db.
Such a script would look something like this:
#!/bin/bash
set -e
docker-compose build rails-app
docker run -d --name mysql_empty mysql:5.6
docker run --link mysql_empty:db -v $PWD:/output project_rails-app export.sh
where export.sh looks something like this:
#!/bin/bash
set -e
RAILS_ENV=development
rails-app bundle exec rake db:create
mysqldump > /output/setup.sql
You could also replace the docker run script with a second compose file if you wanted to.
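As a footnote: once export.sh has produced setup.sql, the stock mysql image can also load it automatically on first start via its /docker-entrypoint-initdb.d mechanism, with no custom image build at all. A minimal compose sketch (the host path is assumed):

```
db:
  image: mysql:5.6
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  volumes:
    - ./setup.sql:/docker-entrypoint-initdb.d/setup.sql
```

Files in that directory are executed only when the data directory is empty, i.e. on the first start of a fresh container.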