What IP address should I use for communicating in Docker? - mysql

I have a MySQL container and another Docker container with a Jython app.
Inside the Jython app, this is the connection string to connect to MySQL (it works on the host):
mysql_url_string jdbc:mysql://localhost/...
This does not work with 2 Docker containers (1: MySQL, 2: Jython app).
What IP address should I use in the connection string (instead of localhost)?
Thanks.

Instead of using an IP address (addresses may change unless you specifically define the network configuration), you can simply link the two containers together and refer to them by container name.
version: "3"
services:
  mysql:
    container_name: mysql
    image: somethingsomething/mysql:latest
  jython:
    container_name: jython
    image: somethingsomething/jython:latest
    links:
      - mysql
    environment:
      jdbc_url: jdbc:mysql://mysql:3306
This linking can also be done via CLI (see: https://linuxconfig.org/basic-example-on-how-to-link-docker-containers)
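A minimal CLI sketch of the same setup (reusing the illustrative image names from the compose example above):
docker run -d --name mysql somethingsomething/mysql:latest
docker run -d --name jython --link mysql -e jdbc_url='jdbc:mysql://mysql:3306' somethingsomething/jython:latest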
If you simply must use IP addresses, you can obtain a container's IP address after linking by checking the /etc/hosts file inside the containers.
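For example, using the container name from the sketch above:
docker exec jython cat /etc/hosts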
Edit note:
There are alternative ways to approach this without linking, but without more detailed information about how your containers are already set up, it's difficult to provide them.
i.e. whether they are standalone containers on the host network or a bridged network, created as a Docker service with an overlay network, or something else!
The different scenarios change the way addressing is created and used for inter-container communication, so the means of looking up the IP address won't be the same.

Related

Docker container can't connect to MySQL database

I have a MySQL instance running in a docker container, available on my host system at port 3333. I already tested the connection via MySQL Workbench to verify that the user I created is able to log in to the SQL server.
I also have a wikijs (installation guide found here) instance running in a container.
I have provided all the required environment variables, including the credentials of the user I already tested, but the container always says that the connection was refused.
Does anybody have an idea on what the problem is?
The lack of information doesn't help in solving your problem, but here's a wild guess:
By default, Docker containers join a virtual network named bridge, separate from the host.
You can't reach the host via localhost or 127.0.0.1, because these point to your docker container itself. To reach the host directly, either let the container use the host's IP with --network=host (which has some disadvantages) or use host.docker.internal as a DNS name instead of an IP.
BUT you should not take the detour over the host; connect directly to the MySQL container using the alias or IP of the container. You'll get those from docker inspect <containername>. No need to map ports then.
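For illustration, both lookups in command form (container and image names are placeholders):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <containername>
docker run -d --add-host=host.docker.internal:host-gateway <wikijs-image>
The --add-host mapping with the special host-gateway value is needed on plain Linux (Docker 20.10+); Docker Desktop resolves host.docker.internal automatically.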
Kindly try adjusting the port to 3306 and see if it works.

In a ROOTLESS podman setup, how to communicate between containers in different pods

I read all I could find, but documentation on this scenario is scant or unclear for podman. I have the following (contrived) ROOTLESS podman setup:
pod-1 name: pod1
Container names in pod1:
p1c1 -- This is also its assigned hostname within pod1
p1c2 -- This is also its assigned hostname within pod1
p1c3 -- This is also its assigned hostname within pod1
pod-2 name: pod2
Container names in pod2:
p2c1 -- This is also its assigned hostname within pod2
p2c2 -- This is also its assigned hostname within pod2
p2c3 -- This is also its assigned hostname within pod2
I keep certain containers in different pods specifically to avoid port conflict, and to manage containers as groups.
QUESTION:
Given the above topology, how do I communicate between, say, p1c1 and p2c1? In other words, step by step, what podman(1) commands do I issue to collect the necessary addressing information for pod1:p1c1 and pod2:p2c1, and then use that information to configure the applications in them so they can communicate with one another?
Thank you in advance!
EDIT: For searchers, additional information can be found here.
Podman doesn't have anything like the "services" concept in Swarm or Kubernetes to provide for service discovery between pods. Your options boil down to:
Run both pods in the same network namespace, or
Expose the services by publishing them on host ports, and then access them via the host
For the first solution, we'd start by creating a network:
podman network create shared
And then creating both pods attached to the shared network:
podman pod create --name pod1 --network shared
podman pod create --name pod2 --network shared
With both pods running on the same network, containers can refer to the other pod by name. E.g., if you were running a web service in p1c1 on port 80, in p2c1 you could curl http://pod1.
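A quick way to test that flow (images chosen purely for illustration):
podman run -d --pod pod1 --name p1c1 docker.io/library/nginx
podman run --rm --pod pod2 docker.io/curlimages/curl -s http://pod1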
For the second option, you would do something like:
podman pod create --name pod1 -p 1234:1234 ...
podman pod create --name pod2 ...
Now if p1c1 has a service listening on port 1234, you can access that from p2c1 at <some_host_address>:1234.
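For example, assuming the host is reachable at 192.168.1.22 (address purely illustrative):
podman run --rm docker.io/curlimages/curl -s http://192.168.1.22:1234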
If I'm interpreting option 1 correctly: if the applications in p1c1 and p2c1 both use, say, port 8080, then there won't be any conflict anywhere (either within the pods or on the outer host) IF I publish using something like 8080:8080 for the app in p1c1 and 8081:8080 for the app in p2c1? Is this interpretation correct?
That's correct. Each pod runs with its own network namespace (effectively, its own IP address), so services in different pods can listen on the same port.
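Concretely, the scenario described in that comment would be (a sketch):
podman pod create --name pod1 -p 8080:8080
podman pod create --name pod2 -p 8081:8080
Both apps listen on 8080 inside their own pod; only the published host ports need to differ.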
Can the network (not ports) of a pod be reassigned once running? REASON: I'm using podman-compose(1), which creates things for you in a pod, but I may need to change things (like the network assignment) after the fact. Can this be done?
In general you cannot change the configuration of a pod or a container; you can only delete it and create a new one. Assuming that podman-compose has relatively complete support for the docker-compose.yaml format, you should be able to set up the network correctly in your docker-compose.yaml file (you would create the network manually, and then reference it as an external network in your compose file).
Here is a link to the relevant Docker documentation. I haven't tried this myself with podman.
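A minimal docker-compose.yaml sketch of that pattern (service and image names are illustrative, and like the author I haven't verified it against podman-compose):
version: "3"
services:
  app:
    image: somethingsomething/app:latest
    networks:
      - shared
networks:
  shared:
    external: true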
The accepted answer from @larsks will only work for rootful containers. In other words, you'd run every podman command with a sudo prefix. (For instance, when you connect to a postgres container from a Spring Boot application container, you will get a SocketTimeout exception.)
If the two containers run on the same host, get the IP address of the host and use <ipOfHost>:<port>. Example: 192.168.1.22:5432
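For the Spring Boot + postgres example above, that gives a JDBC URL along the lines of (database name illustrative):
jdbc:postgresql://192.168.1.22:5432/mydb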
For more information you can read this blog => https://www.redhat.com/sysadmin/container-networking-podman
Note: The above solution of creating networks only works in rootful mode. You cannot do podman network create as a rootless user.

openshift_set_node_ip deprecated in openshift 3.11, what should be used instead?

There is an openshift-origin cluster, version 3.11 (upgraded from 3.9).
I want to add two new nodes to the cluster.
The node hosts were created in an OpenStack project with NAT and use an internal class C network (192.168.xxx.xxx); there are also floating IPs attached to the hosts.
There are DNS records which resolve the FQDNs of the hosts to the floating IPs and back.
The scaleup playbook works fine, but the new nodes appear in the cluster with their internal IPs, and thus nothing works.
In OpenShift v3.9 and earlier I used this variable in my inventory:
openshift_set_node_ip = true
and pointed openshift_ip at the node being added.
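For reference, such an inventory line looked roughly like this (hostname and address are illustrative):
[new_nodes]
node03.example.com openshift_set_node_ip=true openshift_ip=192.168.10.13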
Now it doesn't work.
What should I use instead of openshift_set_node_ip?
I had a similar problem, which I solved after reading https://stackoverflow.com/a/29496135, where Kashyap explains how to change the ansible_default_ipv4 fact used to guess the IP address to use.
This fact is derived by testing a call to 8.8.8.8 (https://github.com/ansible/ansible/blob/e41f1a4d7d8d7331bd338a62dcd880ffe27fc8ea/lib/ansible/module_utils/facts/network/linux.py#L64). You can therefore add a specific route to 8.8.8.8 to change the ansible_default_ipv4 result:
sudo ip r add 8.8.8.8 via YOUR_RIGHT_GATEWAY
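You can then check which address Ansible picks up with a quick fact lookup on the node:
ansible localhost -m setup -a 'filter=ansible_default_ipv4'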
Maybe it helps in your case.

ECS EC2 Launch Type: Service database connection string

I am trying out a small POC (learning experiment) on Docker. I have 3 Docker images, one each for a storefront, a search engine, and a database engine, called storefront, solr, and docmysql respectively. I have tried running them in a docker swarm (on a single node) on EC2 and it works fine.
In the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI. I have installed and started an ecs-agent on it. I have created 3 services, with one task for each of the 3 images configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created, I see 2 entries in Route 53: one SRV record and an A record. The A record has as its name <task-id>.docmysql.local. If I use this in the database connection string, I see that it works, but it's obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear on how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the db.
So what is the best way to generate a connection string that works? And why did I not have issues when I ran it as a docker swarm?
I know I can use RDS instead of standing up my own database; I will try that, but for now I need this working, as this is what I started with. Thanks for any help.
Well, let me raise some points before getting to my own solution:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it on the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I've faced that issue on Fargate, and discovered that depending on your container/task definitions, the database can be reached from inside the same task for testing purposes, so 127.0.0.1 should be the answer.
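In that same-task case, the connection string from the question would simply become:
jdbc:mysql://127.0.0.1:3306/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false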
Across different tasks you need to work with the awsvpc network mode, so you will have this:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. (FROM AWS)
My suggestion is to create a Lambda Function to discover your network interface dynamically.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/

How best to connect containers within Docker

Do I always need to use the --link option to link two containers to one another, or can I just ping the IP of the 2nd container from the 1st container?
Example:
Container 1 running mysql (tcp 3306): IP 10.0.0.7
Container 2 running lamp: IP 10.0.0.8
Can 10.0.0.8 not just connect directly to 10.0.0.7? They are on the same bridge.
Thanks once again for the help
Regards
Hareem Haque
It depends on your network topology.
If you choose the "secure" setup with --icc=false, you will have to use --link for the containers to communicate.
The documentation at [1] explains it.
Link:
[1] - https://docs.docker.com/articles/networking/#communication-between-containers
Regards
Paolo
Basically, I added --icc=true to my docker opts and restarted docker. I just ran a test connecting a php container to a mysql container without using --link. Everything works great and I see no errors. I can now easily connect containers together via the bridge IP address.
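On Debian/Ubuntu systems of that era, this typically meant editing /etc/default/docker and restarting the daemon (the file path and service name vary by distribution):
DOCKER_OPTS="--icc=true"
sudo service docker restart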
If you want to connect containers on different hosts, the best option available right now is using Weave:
https://github.com/zettio/weave
Another is Open vSwitch, but it's too messy for my taste. Docker's acquisition of SocketPlane could result in something usable, but we are not there yet. I would go with Weave.