How does inter-container networking work with podman?

I am running a RHEL 9.0-compatible OS in my homelab, along with podman version 4.0.2 and podman-compose version 1.0.3. If you need any other information, please let me know!
I'm trying to transition from using docker containers to rootless podman containers. To that end, I've brought over a pretty simple set of services that will run on a freshly installed docker setup on nearly any *nix OS I've tried. Simple, right? Nope.
First, I had to provide full registry paths to my container images, rather than just referring to them as they appear in the docker library. That wasn't so bad.
My compose file declares a bridged network, and each service attaches to that network.
Any other computer on the network can reach any service whose ports are exposed from the container. Ports 8080, 8443, and 3306 are all reachable from my laptop.
The problem lies in the inability of the containers to communicate with each other. With docker networks, the containers could resolve one another using just the container name as the host name. I've installed ping in each of my containers and am finding that they can all ping themselves when referring to their own container name, but are unable to ping the other containers. That really puts a damper on my plans for rootless containers.
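For reference, here is roughly how I have been testing name resolution (a sketch; getent may not exist in every image):
podman exec httpd ping -c 1 httpd       # works: the container resolves its own name
podman exec httpd ping -c 1 mariadb     # fails: name does not resolve
podman exec httpd getent hosts mariadb  # empty output, so it looks like a DNS problem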
In my compose file, I'm declaring the network first:
version: "3.1"
networks:
  neta:
    driver: bridge
Each service declares a container name and attaches to that network, for example:
container_name: httpd
networks:
  - neta
...
container_name: mariadb
networks:
  - neta
I didn't post my full compose file, because I believe this issue isn't specific to my file, but rather to the rootless nature of podman.
My issue is that the httpd container can't reach mariadb, nor the other way around.
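In case it is relevant, here is how I have been inspecting the network itself (a sketch; podman-compose may prefix the network name with the project name, and the NetworkBackend field may not exist in every podman version):
podman network ls                                  # confirm the compose network exists
podman network inspect neta                        # check the driver and dns settings
podman info --format '{{.Host.NetworkBackend}}'    # prints cni or netavark on podman 4.x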
I'm less than 24 hours into my podman journey; really, less than 4 hours. I just assumed that container networking would be something that would "just work", and I now see that I was mistaken.
Any input, links or advice would be appreciated.
Thanks

Related

When creating (not running) a Docker container, does assigning it to a network have any real effect?

When I create a container (but do not run it yet) with docker container create ... (not docker run), and I include the option --network my_network_name, will the container be connected to the network I specified when I later start it?
If you say 'no', then it means --network my_network_name has no real effect.
More specifically, if I create a container by:
docker container create --name mysql_container --network my_network mysql
then when i run it by:
docker container start -ai mysql_container
will mysql_container be automatically connected to my_network?
From the Docker docs:
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. See bridge networks.
host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. See use the host network.
overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. See Macvlan networks.
none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services. See disable container networking.
Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
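As for the question itself: one way to check whether the network assignment takes effect is to inspect the created (but not yet started) container. A sketch using the names from the question:
docker network create my_network
docker container create --name mysql_container --network my_network mysql
docker container inspect --format '{{json .NetworkSettings.Networks}}' mysql_container
# my_network is already listed here, and it stays attached once you start the container
docker container start mysql_container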

Openshift OKD 4.5 on VMware

I am getting a connection timeout when running the bootstrap command.
Any suggestions on the networking configuration, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
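For the DNS part specifically, the records OKD expects can be verified with something like this (a sketch; replace the placeholders with your cluster name and base domain):
dig +short api.<cluster>.<domain>        # must resolve to the API VIP / load balancer
dig +short api-int.<cluster>.<domain>    # internal API record
dig +short foo.apps.<cluster>.<domain>   # any name should hit the *.apps wildcard record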
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
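Put together, a debugging session might look roughly like this (a sketch; the IP and container ID are placeholders):
ssh core@<bootstrap-ip>                                    # nodes are provisioned with the 'core' user
sudo crictl ps -a | grep -E 'kube-apiserver|etcd|machine'  # find the relevant containers, including exited ones
sudo crictl logs <container-id>                            # read the logs of a failing component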

Is it possible to containerize a cluster?

Is it possible to containerize a minishift or minikube cluster? So that I can docker run -it the container, and oc/kubectl get the resources inside?
The Dockerfile could look something like:
FROM alpine:latest
RUN **minishift installation**
ENTRYPOINT ["minishift", "start"]
We currently have a product that has a minishift cluster in a VM, so I was wondering if we can transition it from VM to Container.
Theoretically, yes, this might be possible, but it should not be done this way.
While I do understand running Minikube or Minishift inside a virtual machine, I cannot understand why you would want to run it inside a container. Both of those are just single-node, easy-to-use Kubernetes or OpenShift distributions.
If you already have a cluster, why not use it to run the app that you want?
By deploying Minikube or Minishift into a container, you are creating a huge overhead for your application.
You might be interested in reading the blog post Running systemd within a Docker Container. It is a bit dated, but it might be what you are looking for. Daniel Walsh also posted a 2019 update to that post from 2014; you can find it here.
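For reference, the pattern from those posts looks roughly like this (a sketch; the image name is a placeholder and the exact mounts vary between the 2014 and 2019 versions):
docker run -d --name systemd-test \
  --tmpfs /run --tmpfs /tmp \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  my-systemd-image /usr/sbin/init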

Restart Docker Containers in Sequence after Server Reboot

There are 3 docker containers that need to be restarted automatically whenever the server reboots.
We can start the containers using restart policies, such as
sudo docker run --restart=always -d your_image
but because one container is linked to another, they need to be started in sequence.
Question: Is there a way to automatically restart Docker containers in sequence?
Docker doesn't have an option for this, and doing so is an anti-pattern for microservices. Instead, each container should gracefully return errors when its dependencies aren't available, or as a fallback, you can use something like a wait-for-it command in your container's entrypoint to wait for your dependencies to become available. I'd also recommend against using "links"; instead, place all your services on their own docker network and let the built-in DNS resolution handle service discovery for you.
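A minimal version of that entrypoint wait loop could look like this (a sketch; the host db, port 3306, and the presence of nc in the image are all assumptions):
#!/bin/sh
# entrypoint.sh: block until the dependency accepts TCP connections, then start the app.
until nc -z db 3306; do
  echo "waiting for db..."
  sleep 2
done
exec "$@"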

Exposing local Docker containers on the internet (two containers linked to each other)

I have created two docker containers: one is MySQL and the other is Phabricator; they are linked, and both run locally. I have bound the MySQL port to 0.0.0.0. Now I want to expose Phabricator to the internet so that everyone can use it. The --net=host option does not work with links. Can anyone tell me how I can achieve this?
You need to start your phabricator container with the -p option, which defines the port mapping. Let's say your container internally exposes port 8080; then you can map it with -p 8080:8080, which means port 8080 is also externally accessible (as long as your host is reachable from the internet on port 8080 and no firewall interferes).
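For example (a sketch; the image name is a placeholder, and 8080 assumes that is the port Phabricator listens on inside the container):
docker run -d --name phabricator --link mysql_container:mysql -p 8080:8080 my-phabricator-image
# port 8080 inside the container is now reachable on port 8080 of the host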