How do I set up network containers in Bluemix? - containers

I've run two Alpine containers in Bluemix, linked them, and tried a traceroute, but it's timing out. Is there something else I need to do to allow them to talk?
$ docker run -d --name net-a alpine sleep 99999
$ docker run -d --name net-b --link net-a:net-a alpine sleep 99999
$ docker exec -i net-b sh
traceroute net-a
traceroute to net-a (172.31.0.27), 30 hops max, 46 byte packets
1 instance-0055703a (172.31.0.28) 2998.949 ms !H 2999.897 ms !H 2999.970 ms !H
Same commands work fine with my local Docker engine.

This should just work. One noticeable difference versus local Docker is that the containers take a bit longer to come up and get networking. Check that the first is in state Running before starting the second, and that the second is in state Running before the exec, perhaps?
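If you want to script that wait instead of eyeballing it, something like the following should do; this is only a rough sketch that polls the cf ic ps -a output (like the listing below) for the container name from your question:
# poll until net-a reports a Running status before starting net-b (no timeout handling)
until cf ic ps -a | grep net-a | grep -q Running; do sleep 5; done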
Just did a quick test using one of my images (note some delays between running the below commands), and I'm seeing connectivity:
# cf ic run --name net-a myimage
ba11348e-2945-4aed-9ddf-8b85ec418423
# cf ic ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba11348e-294 myimage "" About a minute ago Running 4 seconds ago net-a
# cf ic run --name net-b --link net-a:net-a myimage
a45e9783-47f6-499b-ad46-2f49b275adbc
# cf ic ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a45e9783-47f myimage "" 48 seconds ago Running 10 seconds ago net-b
ba11348e-294 myimage "" 2 minutes ago Running a minute ago net-a
# cf ic exec -ti net-b bash
root@instance-00c1f798:/# traceroute net-a
traceroute to net-a (172.31.0.22), 30 hops max, 60 byte packets
1 net-a (172.31.0.22) 2.398 ms 2.199 ms 2.224 ms

Related

shell script : container vs host

A for loop that should start from the 2nd argument behaves differently on the container and on the host:
#!/bin/bash
for i in "${#:2}"
do
echo $i
done
Call:
script.sh 129 5 6 7
Output:
Container: Alpine:Latest
# skips the first 2 characters
9 5 6 7
Host: Debian GNU/Linux
# skips the 1st argument completely
5 6 7
I am not sure how you were able to run the above shell script in the Alpine container at all:
#!/bin/bash
for i in "${#:2}"
do
echo $i
done
In the script you are using #!/bin/bash, which is not available in Alpine by default.
The issue is most likely the different interpreters, sh and bash: Debian comes with bash, while Alpine uses sh by default.
You can quickly verify it: docker pull alpine:latest, then
docker run -it alpine bash will fail, while docker run -it alpine sh will work.
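If the goal is simply to loop from the second argument onward in a way that behaves the same under bash and under Alpine's sh, a shift-based loop avoids the ${...:2} expansion entirely. A minimal sketch with a POSIX shebang:
#!/bin/sh
# drop the first positional argument, then loop over the remaining ones;
# this behaves the same under bash and under Alpine's default sh
shift
for i in "$@"
do
echo "$i"
done
Called as script.sh 129 5 6 7, this prints 5, 6 and 7 under either shell.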

Mysql container exits with status: Exited (139)

I am facing an issue when I want to run a mysql container. I tried the example command I found on Docker Hub:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6.24
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2569c1a8cbd2 mysql:5.6.24 "/entrypoint.sh mysq…" 5 seconds ago Exited (139) 4 seconds ago some-mysql
This shows that the container exited with code 139, and I can't get a single line of logs: the output of the docker logs command is empty...
~ docker logs 2569c1a8cbd2
~
I am using Docker (v19.03.1, build 74b1e89) on Debian (v10.0).
Are you running other containers? (maybe a separate project?)
I have two separate projects with their separate docker-compose files and their own services.
When the other project is running, the project with a mysql/mariadb container exits with 139. If I docker-compose down the other project, then the mysql container starts correctly.
I'm still figuring out why (came here for an answer to my problem), but you might have something similar.
Today I had the same issue after an upgrade from Debian 9 to 11: the mysql:5.6.24 Docker image just doesn't want to start. My solution was to upgrade to the image mysql:5-debian:
https://hub.docker.com/layers/mysql/library/mysql/5-debian/images/sha256-5adbbb05d43e67a7ed5f4856d3831b22ece5178d23c565b31cef61f92e3467ea?context=explore
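For reference, that is the same run command from the question with only the tag swapped (a sketch, untested here):
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5-debian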

Why docker start is much faster than docker run

I use the mysql image, which I start with this command:
docker run --name test-mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p 3306:3306 mysql
When docker run runs it in the background, it takes about a minute before another application can connect to port 3306.
After that I stop this container with docker stop test-mysql and then start it with docker start test-mysql. In this second case, with the start command, the application can connect to port 3306 after just 5 seconds.
Now I take a snapshot of the stopped container with docker commit test-mysql mysql2 and run it with docker run -d mysql2, but in this case the application can only connect to mysql2 after a minute!
So,
What happens with a stopped container that lets it start and respond in just 5 seconds, while the mysql image cannot?
Is there any way to take a snapshot after running the container, so that the snapshot can respond within about 10 seconds?
NOTE: the mysql image has an entrypoint that takes over a minute to run.
Take a look here for the first question: https://stackoverflow.com/a/34783353/7719775
And for the second, take a look here: https://docs.docker.com/engine/reference/commandline/commit/. But even in that case docker start will still be faster than docker run.
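As a rough sketch of that second point, using the container names from the question: docker commit only captures the container's filesystem layers, and (as far as I can tell) the official mysql image keeps its data in a volume at /var/lib/mysql, so the committed image still starts with an empty data directory and the entrypoint has to initialize it again on docker run:
docker stop test-mysql
# snapshot the stopped container; data stored in volumes is not included in the commit
docker commit test-mysql mysql2
# the committed image keeps the original entrypoint, so initialization runs again here
docker run --name test-mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p 3306:3306 mysql2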

mysql with Exited(1) from docker

I am starting to learn Docker and trying to set up a mysql container, but it dies immediately with Exited (1).
Following is the command used:
docker run mysql -e MYSQL_ROOT_PASSWORD=password1
Looking at docker ps, it does not show any running container, while docker ps -a returns the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e681f56c52e2 mysql "/entrypoint.sh -e MY" 3 seconds ago Exited(1) 3 seconds ago lonely_rosalind
Nothing shows up for docker logs lonely_rosalind either.
Any idea how to determine why it failed?
I am running
ubuntu 15.04
docker version 1.9.1 build a34a1d5
Try this:
docker run -e MYSQL_ROOT_PASSWORD=password1 mysql
When you write something after the Docker image name, docker treats it as the command to execute in the created container. The pattern for docker run is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
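If you also want it detached and named, as in the Docker Hub example quoted earlier on this page, it would look like this (the container name is just an example):
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=password1 -d mysql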

Running two Mysql docker containers

I am trying to run two different mysql containers for master->slave replication. I start by building and running the master:
docker build --no-cache -t mysql-master .
docker run -it --name mysql-master -h mysql-master -p 3306:3309 mysql-master /bin/bash
This works fine and runs the container correctly. I get as far as gathering the information needed to set up the second container, mysql-slave. But when I run the following command:
docker build --no-cache -t mysql-slave .
docker run -it -p 3308:3309 mysql-slave --name mysql-slave --link mysql-master:mysql-slave /bin/bash
The mysql-master container disconnects. I am not sure why, but I suspect there is some kind of conflict between the containers that I may not be aware of. Can anyone suggest what docker commands I should be running so that both containers can run simultaneously?
I have a feeling this is because both containers are attempting to access the same port:
root@test2net:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d1942b5e1f69 mysql-slave:latest "/tmp/makeSlaveSQL.s 40 seconds ago Up 40 seconds 3306/tcp, 0.0.0.0:32773->3307/tcp mysql-slave
c9a7632d9cae mysql-master:latest "/tmp/makeMasterSQL. 2 minutes ago Up 2 minutes 0.0.0.0:32769->3306/tcp mysql-master
Is there a way to explicitly bind each container to a specific port? I have tried using EXPOSE in the Dockerfile and -p to designate different ports, but as you can see above, mysql-slave is still binding to port 3306.
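For reference, -p maps host_port:container_port, and mysqld listens on 3306 inside each container, so a sketch that keeps the container-side port at 3306 and separates the two containers on the host side could look like this (host port 3307 for the slave and the link alias are only examples, not taken from the question):
docker run -d --name mysql-master -h mysql-master -p 3306:3306 mysql-master
docker run -d --name mysql-slave -h mysql-slave -p 3307:3306 --link mysql-master:mysql-master mysql-slave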