Why does a container work in Docker but not in GKE?

I have a Containerfile installing a Go binary[1].
When I build & execute the container via docker run on my desktop, it works fine.
However, when I deploy the same container in a GKE pod, I get an error:
/bin/sh: /root/service: not found
I would assume that this is some kind of security lockdown, but I am not sure how to get it working on GKE.
[1]:
FROM golang:1.19-alpine AS build
RUN go install github.com/QubitProducts/exporter_exporter@v0.4.5
FROM alpine
COPY --from=build --chown=root:root /go/bin/exporter_exporter /root/service
CMD /root/service

This is because of volume permission issues for your container. When you run the container with Docker, the Docker daemon has root access, so running your container does not throw any error. In Kubernetes, pods and containers do not have root access by default, so when building an image for Kubernetes you need to provide the required ConfigMaps for mounting root volumes and for executing your code on those volumes.
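As a rough, hypothetical illustration (an editor's sketch, not part of the original answer): if the diagnosis above is what is biting you, namely that the non-root pod cannot execute a binary placed under /root, one workaround is to install the binary in a world-executable location instead and use an exec-form CMD:

FROM golang:1.19-alpine AS build
RUN go install github.com/QubitProducts/exporter_exporter@v0.4.5

FROM alpine
# /usr/local/bin is readable and executable by any user, unlike /root
COPY --from=build /go/bin/exporter_exporter /usr/local/bin/service
CMD ["/usr/local/bin/service"]

If the workload genuinely has to run as root, the pod's securityContext (runAsUser: 0) is the place to configure that, subject to whatever policies your GKE cluster enforces.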

Related

Calling one Docker container from another Docker container: connection refused

I am using Kubernetes (on Windows 10, via Docker Desktop).
I am using MySQL, which is run by Helm 3 (loaded from the Bitnami repository).
I am creating another application.
For now, I am testing it on Docker (not in Kubernetes yet).
Everything is fine, except when trying to connect to the database from my project.
(BTW, the project works fine, just not when running on Docker.)
Something like:
docker run --name test-docker --rm my-image:tag --db "root:12345@tcp(127.0.0.1:3306)/test"
(--db is a parameter used to connect to the database).
I get the message:
2022-02-21T12:18:17.205Z FATAL failed to open db: could not setup schema: cannot create jobs table: dial tcp 127.0.0.1:3306: connect: connection refused
I have investigated a little and found that the problem may be that the running containers need to be on the same network.
(Nonetheless, they are both containers, even though one is run by the Helm tool for K8s.)
This relates to:
kubernetes networking
When I run:
nsenter -t your-container-pid -n ip addr
the pid directory does not exist, so I get the message:
/proc/<pid>/ns/net - No such file or directory
How can I run my project so that it can use the MySQL instance (running in containers on K8s)?
Thanks.
Docker containers are isolated from other containers and the external network by default. There are several options to establish connection between Docker containers:
Docker sets up a default bridge network automatically, through which communication is possible between containers and between containers and the host machine. Both your containers should be on the bridge network so that the container with your project can connect to your DB container; note that on the default bridge, containers reach each other by IP address (name resolution only works on user-defined networks). More details on this approach and how it can be set up are here.
You can also create a user-defined bridge network, basically your own custom bridge network, and attach your Docker containers to it. In this way, both containers won't be connected to the default bridge network at all. An example of this approach is described in detail here.
First, the user-defined network should be created:
docker network create <network-name>
List your newly created network and use the inspect command to check its IP address and that no containers are connected to it yet:
docker network ls
docker network inspect <network-name>
You can either connect your containers at start time with the --network flag:
docker run -dit --name <container-name1> --network <network-name> <image>
docker run -dit --name <container-name2> --network <network-name> <image>
Or attach already-running containers to your newly created network by their name or ID with docker network connect - more options are listed here:
docker network connect <network-name> <container-name1>
docker network connect <network-name> <container-name2>
To verify that your containers are connected to the network, run docker network inspect again.
Once connected to the network, containers can communicate with each other, and you can reach one container from another using its IP address or name.
EDIT: As suggested by @Eitan, instead of referring to a changing IP address in root:12345@tcp(127.0.0.1:3306)/test, the special DNS name host.docker.internal can be used - it resolves to the internal IP address used by the host.
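For example, a sketch reusing the docker run command from the question (this assumes the MySQL port is actually reachable on the host at 3306, e.g. via a port-forward or a LoadBalancer service; the --add-host flag is needed on Linux with Docker 20.10+, while Docker Desktop resolves host.docker.internal automatically):

docker run --name test-docker --rm \
  --add-host=host.docker.internal:host-gateway \
  my-image:tag --db "root:12345@tcp(host.docker.internal:3306)/test"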

Spring-boot failing to connect to mysql container even after supplying proper mysql service name

This might sound like a repeated question, but it is not, and it feels like a crazy bug. However, let me quickly explain my setup:
A simple Spring Boot application that runs fine on my local machine; the JDBC connection string in the application.properties file is as follows.
spring.datasource.url=jdbc:mysql://minesql:3306/datamachine?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=****
The running Docker instances are:
I copied the war file (with the help of the docker cp command) to an Alpine (Unix) container and am running it in interactive mode to test; it throws an exception because it is unable to reach the MySQL server. I am certain that the database configuration is fine and am clueless why the Spring Boot app fails to connect to the MySQL container instance. Note, the MySQL container does have the "datamachine" database, created manually.
This is the error reported:
Please help me understand what I am missing here or what is going wrong.
Just in-case if you wish to know how I started these containers.
For mysql:
docker run -d --name minesql -e MYSQL_ROOT_PASSWORD=**** -p 3306:3306 mysql
I am running the Java app from the Alpine container, and this is how I started Alpine:
docker run -it --name unix alpine
Interactive mode presents me with a shell prompt to run the Spring Boot war file (and I run the war file after installing Java 8 in Alpine).
You have two Docker containers which are running and connected via the default bridge network. From the Docker bridge documentation:
Containers on the default bridge network can only access each other by
IP addresses, unless you use the --link option, which is considered
legacy. On a user-defined bridge network, containers can resolve each
other by name or alias.
If you need the second container to be able to resolve the name minesql from inside it, you need to create a user-defined bridge network and connect the containers to that.
Create a new network using
docker network create my-net
And add your containers as specified here
The other alternative is to use docker-compose and avoid manual creation of bridge networks for name resolution. For a production environment, that would be ideal.
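As an illustration of the docker-compose alternative, a minimal, hypothetical docker-compose.yml (the app image name is an assumption) puts both services on a shared network created by compose, so the app can reach the database by its service name, here minesql:

version: "3.8"
services:
  minesql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "****"
  app:
    image: my-spring-app:latest   # hypothetical image containing the war file
    depends_on:
      - minesql

With this, the JDBC URL jdbc:mysql://minesql:3306/datamachine?serverTimezone=UTC from the question resolves as-is.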

How to connect docker container with host machine's localhost mysql database?

I have a war file that uses the MySQL database in the backend.
I have deployed my war file in a docker container and I am able to ping this from my browser.
I want to connect my app with the MySQL database. This database exists on my host machine's localhost:3306
Since I am unable to reach it via the container's localhost, what I tried is:
I run a command docker inspect --format '{{ .NetworkSettings.IPAddress }}' 213be777a837
This command gave me the IP address 172.17.0.2. I went to the MySQL server options, put this IP address in the bind-address field and restarted the server. After that, I updated my project's database connection string to 172.17.0.2:3306.
But it is not working. Could anyone please tell me what I am missing?
I have also tried adding a new DB user 'root'@'%' and then running a command to grant all permissions to 'root'@'%', but nothing worked.
Follow these steps:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run -p 8082:8080 --network dockernet -d 6ab907c973d2
In your project, set the connection string: jdbc:mysql://host.docker.internal:3306/....
And then deploy.
tl;dr: Use 172.17.0.1:3306 if you're on Linux.
Longer description:
As I understand it, what you need to do is connect from your Docker container to a port on the host. But what you have done is try to bind the host process (MySQL) to the container's networking interface. I'm not sure what the implications are of a host process trying to bind to another process's network namespace, but IIUC your MySQL process should not be able to bind to that address.
When you start MySQL with default settings that bind it to 0.0.0.0, it is available to Docker containers through the Docker virtual bridge. Therefore, what you should do is route your requests from the WAR process to the host process through that virtual bridge (if this is the networking mode you're using; if you have not changed any Docker networking settings, it should be). This is done by specifying the bridge gateway address as the MySQL address, together with the port it was started with.
You can get the bridge IP address by checking your network interfaces. When Docker is installed, it configures the virtual bridge by default, and that should show up as docker0 if you're on Linux. The IP address for this will most probably be 172.17.0.1. So your MySQL address from the container's point of view is jdbc:mysql://172.17.0.1:3306/....
1 - https://docs.docker.com/network/
2 - https://docs.docker.com/network/bridge/
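To double-check the bridge gateway address on a given machine (a quick sketch; 172.17.0.1 is the usual default but not guaranteed):

# show the docker0 interface; the inet address is the gateway containers can reach
ip addr show docker0
# or ask Docker directly for the default bridge's gateway
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'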
From your question, I am assuming both your war file and MySQL are deployed locally, and you want to connect them. One way to allow both locally deployed containers to talk to each other is:
Create your own network docker network create <network-name>
Then, when you run your war file and MySQL, deploy both of them using the --network flag. E.g.
War File: docker run --name war-file --network <network-name> <war file image>
MySQL: docker run --name mysql --network <network-name> <MySQL image>
After that, you should be able to connect to your MySQL using mysql:3306 from inside your war-file docker container, since they are both on the same custom network.
If you want to read up more about this, you can take a look at the Docker documentation on networking (https://docs.docker.com/network/bridge/).
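Concretely, a hypothetical sequence (the network name, the image names and the password are placeholders) might look like:

docker network create app-net
docker run -d --name mysql --network app-net -e MYSQL_ROOT_PASSWORD=**** mysql
docker run -d --name war-file --network app-net -p 8082:8080 <war file image>
# connection string inside the war-file container:
#   jdbc:mysql://mysql:3306/<your-database>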
Your setup is fine. You just need to make this one change.
While running the application container (the one in which you are deploying your war file), you need to add the following argument to its docker run command:
--net=host
Example:
docker run -itd -p 8082:8080 --net=host --name myapp myimage
With this change, you do not need to change the connection string either; localhost:3306 will work fine, and you will be able to set up a connection with MySQL.

Connecting to percona docker from a java docker container

I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My java project (which accesses mysql) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard; it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a percona docker container with the command given in the link with
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can either extend this image if you need specific changes (such as a custom configuration), for example;
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through the docker container-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used for most situations. The --link option has a number of limitations;
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking (e.g.) credentials to other containers.
Docker 1.9 introduced custom docker networks, which allow containers to communicate with, and resolve, each other by name.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name
as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (also how to dynamically attach and detach a container from a network) in the "docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/
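Applied to the containers from the question (a sketch; the database name in the JDBC URL is an assumption), the same pattern would look roughly like this:

docker network create mynet
docker run -d --name projectname-mysql-server --network mynet -e MYSQL_ROOT_PASSWORD="" percona:5.5
docker run -d --name projectname-local --network mynet projectname-testing
# the Java application can then reach the database at
#   jdbc:mysql://projectname-mysql-server:3306/<database>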

Installation of system tables failed! boot2docker tutum/mysql mount file volume on Mac OS

I have trouble mounting a volume on tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error:
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
Also note that as the Dockerfile declares volumes, mounting is taking place whether or not you use -v argument to docker run - it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
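A quick, hypothetical way to act on the first suggestion (the uid 999 is only an example of what might be printed; also note that with boot2docker the mounted directory lives inside the VM, so the chown may need to happen there rather than on the Mac itself):

docker run --rm tutum/mysql /bin/bash -c "id mysql"   # e.g. uid=999(mysql) gid=999(mysql)
sudo chown -R 999:999 $HOME/mysql-data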
I've also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there was no recent update on my machine or Docker. One thing I was doing differently was using images that could take up 5GB of memory or more during testing.
After cleaning dangling images and volumes, I was able to start mysql image as usual.
This blog seems to have good instructions and explains all the variations of cleanup with Docker.
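For reference, dangling images and unused volumes can be cleaned up with standard Docker commands (double-check what you are pruning before running them):

docker image prune               # remove dangling images
docker volume prune              # remove unused local volumes
docker system prune --volumes    # more aggressive: stopped containers, unused networks, dangling images and unused volumes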