Running Pumba in OpenShift

I am currently trying to install Pumba (https://github.com/gaia-adm/pumba) into my Minishift 1.7.0 cluster. After making the developer user a cluster-admin and allowing volumes to use hostPath with /var/run/docker.sock, I was able to deploy the Pumba pod. The problem is that when Pumba tries to connect to the Docker socket, there is an error:
time="2017-10-19T13:42:30Z" level=debug msg="Retrieving running containers"
time="2017-10-19T13:42:30Z" level=error msg="Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/containers/json?limit=0: dial unix /var/run/docker.sock: connect: permission denied"
It seems to be some kind of permissions problem, which I have tried to fix without much success.
I have created a gist so you can see how Docker image of Pumba is created as well as the kubernetes file: https://gist.github.com/lordofthejars/14b6999395fb3986694c05bf48453d08
Probably it is something really simple to fix, but I cannot find a way.
Thank you very much for your help

The solution was to grant the privileged SCC instead of anyuid:
oc adm policy add-scc-to-user privileged system:serviceaccount:fasttest:default
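For reference, the relevant part of the pod spec looks roughly like the following. This is a minimal sketch rather than the exact gist contents; the image name, the fasttest namespace, and the kill arguments are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pumba
  namespace: fasttest
spec:
  containers:
  - name: pumba
    image: gaiaadm/pumba
    # example chaos command; adjust to your own scenario
    args: ["--interval", "30s", "kill", "re2:^test"]
    volumeMounts:
    - name: dockersocket
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersocket
    hostPath:
      # mounting the host's Docker socket is what requires the privileged SCC
      path: /var/run/docker.sock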

Related

Why does a container work in Docker but not in GKE

I have a Containerfile installing a go binary[1].
When I build & execute the container via docker run on my Desktop it works fine.
When I however deploy the same container on a GKE pod I get an error:
/bin/sh: /root/service: not found
I would assume this is some kind of security lockdown, but I am not sure how to get it working on GKE.
[1]:
FROM golang:1.19-alpine AS build
RUN go install github.com/QubitProducts/exporter_exporter@v0.4.5
FROM alpine
COPY --from=build --chown=root:root /go/bin/exporter_exporter /root/service
CMD /root/service
This is because of file permission issues for your container. When you run your container with Docker, the daemon runs it as root by default, so executing a binary placed under /root does not throw any error. In Kubernetes, pods and containers do not have root access by default, so when building an image for Kubernetes you need to make sure the binary lives somewhere a non-root user can read and execute it, or explicitly configure the pod's security context to run as the required user.
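One way to avoid the problem entirely, sketched under the assumption that the binary itself does not need root, is to install it outside /root and drop to an unprivileged user:
FROM golang:1.19-alpine AS build
RUN go install github.com/QubitProducts/exporter_exporter@v0.4.5

FROM alpine
# /usr/local/bin is world-readable and world-executable, unlike /root
COPY --from=build /go/bin/exporter_exporter /usr/local/bin/service
# run as a non-root user so no root access is needed on GKE either
USER 65534
CMD ["/usr/local/bin/service"]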

How to connect a Docker container to the host machine's localhost MySQL database?

I have a war file that uses the MySQL database in the backend.
I have deployed my war file in a docker container and I am able to ping this from my browser.
I want to connect my app with the MySQL database. This database exists on my host machine's localhost:3306
Since I am unable to connect to it at localhost from inside the container, here is what I tried:
I run a command docker inspect --format '{{ .NetworkSettings.IPAddress }}' 213be777a837
This command gave me the IP address 172.17.0.2. I went to the MySQL server options, put this IP address in the bind-address field, and restarted the server. After that, I updated my project's database connection string to 172.17.0.2:3306.
But it is not working. Could anyone please tell what I am missing?
I have also tried adding a new DB user 'root'@'%' and then granting all permissions to 'root'@'%', but nothing worked.
Follow these steps:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run -p 8082:8080 --network dockernet -d 6ab907c973d2
In your project, set the connection string: jdbc:mysql://host.docker.internal:3306/....
And then deploy.
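Note that host.docker.internal resolves out of the box only on Docker Desktop (Windows/macOS). On Linux you have to add the mapping yourself (supported since Docker 20.10); reusing the image ID from the steps above:
# map host.docker.internal to the host's gateway address on Linux
docker run -p 8082:8080 --network dockernet --add-host=host.docker.internal:host-gateway -d 6ab907c973d2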
tl;dr: Use 172.17.0.1:3306 if you're on Linux.
Longer description:
As I understand it, what you need to do is connect from your Docker container to a port on the host. But what you have done is try to bind the host process (MySQL) to the container's networking interface. I'm not sure exactly what happens when a host process tries to bind into another process's network namespace, but as far as I understand, your MySQL process should not even be able to bind to that address.
When you start MySQL with default settings, it binds to 0.0.0.0 and is therefore reachable from Docker containers through the Docker virtual bridge. So what you should do is route requests from the WAR process to the host through that virtual bridge (assuming bridge is the networking mode in use, which it is unless you have changed Docker's networking settings). This is done by specifying the bridge gateway address as the MySQL address, together with the port MySQL was started with.
You can get the bridge IP address by checking your network interfaces. When Docker is installed, it configures the virtual bridge by default; on Linux it shows up as docker0, and its IP address is most probably 172.17.0.1. So from the container's point of view, your MySQL address is jdbc:mysql://172.17.0.1:3306/....
1 - https://docs.docker.com/network/
2 - https://docs.docker.com/network/bridge/
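To confirm the bridge gateway address on your own machine (docker0 is the default interface name on Linux):
# show the docker0 interface address
ip addr show docker0
# or ask Docker directly for the default bridge gateway
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'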
From your question, I am assuming both your war file and MySQL are deployed locally in containers, and you want to connect them. One way to allow both locally deployed containers to talk to each other is by:
Create your own network docker network create <network-name>
Then when you run your war file and MySQL, deploy both of them using the --network. E.g.
War File: docker run --name war-file --network <network-name> <war file image>
MySQL: docker run --name mysql --network <network-name> <MySQL image>
After that, you should be able to connect to MySQL at mysql:3306 from inside your war file's Docker container, since they are both on the same custom network.
If you want to read up more about this, can take a look at docker documentation on network. (https://docs.docker.com/network/bridge/).
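A concrete version of those steps, as a sketch where the image names, password and database name are placeholders:
docker network create app-net
# the MySQL container's name becomes its hostname on the shared network
docker run --name mysql --network app-net -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=mydb -d mysql:8
docker run --name war-file --network app-net -p 8080:8080 -d my-war-image
# inside the app, the JDBC URL is then jdbc:mysql://mysql:3306/mydb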
Your setup is fine. You just need to do this one change.
While running the application container (the one in which you deploy your war file), you need to add the following argument to its docker run command.
--net=host
Example:
docker run -itd --net=host --name myapp myimage
(With --net=host the container shares the host's network stack directly, so -p port mappings are ignored and the app is reachable on whatever port it listens on. This also only works as described on Linux, since Docker Desktop runs containers inside a VM.)
With this change, you do not need to change the connection string at all: localhost:3306 will work fine, and you will be able to set up a connection with MySQL.

docker login to openshift internal docker registry - Gateway Timeout

I am running the OpenShift cluster using Minishift on Ubuntu. The Minishift IP is "192.168.42.48". I am following the URL for accessing the internal Docker registry.
After Minishift started successfully, I logged in as administrator using "oc login -u system:admin" and then added the cluster role to user "chak".
~/github/cheatsheets$ oc adm policy add-cluster-role-to-user cluster-admin chak
cluster role "cluster-admin" added: "chak"
Then I copied the token for user "chak" and tried to log in to the Docker registry, but it failed with the error below. The Minishift IP and the IP in the error output are different. In the terminal I am already logged in as administrator and have added the cluster-admin role.
So I expect the Docker daemon to log in to the OpenShift cluster IP started by Minishift. Why is the Docker daemon trying to log in to the IP in the error rather than the Minishift IP?
I also have http_proxy, https_proxy and no_proxy set, since I am connected to a corporate network.
~/github/cheatsheets$ docker login -u chak -p C5u5F1iwA6gl4va1K8OZ01DaRPdMYMnDQklErn2FzjY docker-registry-default.127.0.0.1.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post https://192.168.42.253:2376/v1.39/auth: Gateway Timeout
Edit 1:
~/github/hashitvault$ docker login -u chak -p Naqp6NScYF7zOcKN41SuYQ045qR9zBN6lfGVnvxhrU docker-registry-default.192.168.42.186.nip.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get http://docker-registry-default.192.168.42.186.nip.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
The oc internal Docker registry route is exposed. When hit in a browser, it returns a 502 server error.
what am i doing wrong here?
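One thing worth checking, given the proxy variables mentioned above (a guess, not a confirmed fix): the docker CLI here talks to the daemon inside the Minishift VM over TCP (the 192.168.42.253:2376 address in the error), and that connection honours http_proxy/https_proxy. Make sure the daemon IP and the registry hostnames are excluded, for example:
# IPs taken from the question; substitute the values from your own setup
export NO_PROXY=$NO_PROXY,192.168.42.48,192.168.42.253,.nip.io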

OpenShift Enterprise 3.11 installation problem when running deploy_cluster.yml

I tried to install the OpenShift 3.11 Enterprise version on RHEL 7.6. prerequisites.yml completed successfully. When I then started deploy_cluster.yml:
$ ansible-playbook -i playbooks/deploy_cluster.yml
The ansible-playbook execution failed with an error stating "wait for control plane pods to appear".
When I try to log in with oc login -u sysadmin:admin, it says "unable to connect to the server: net/http: TLS handshake timeout".
When I run docker images I can see the OpenShift images listed, but when I run docker ps I do not see any running containers.
I am not sure how to resolve this TLS handshake error. Please help me with this.
Thanks.
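A hedged place to start debugging, assuming a standard RPM-based 3.11 install: the control plane runs as static pods started by the node service, so if docker ps shows nothing, check why the node service did not start them:
# inspect the node (kubelet) service logs for startup failures
journalctl -u atomic-openshift-node --no-pager | tail -50
# once the control plane containers do start, the API logs are available via
master-logs api api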

Failed to deploy Artifactory OSS image in OpenShift Online 3 Starter with error "Creating user artifactory failed"

I'm trying to set up Artifactory on OpenShift Online 3 Starter using the Docker image docker.bintray.io/jfrog/artifactory-oss:latest from here
But when deploying, I got the error "Creating user artifactory failed".
I tried to create the artifactory user with oc create serviceaccount artifactory and then oc adm policy add-scc-to-user anyuid -z artifactory, but got another error:
Error from server (Forbidden): User "xxxx" cannot get securitycontextconstraints at the cluster scope
You need to be cluster admin in order to be able to run:
oc adm policy add-scc-to-user anyuid -z artifactory
This is because it grants the right to run things as any user ID, including root, which is something that you as a normal user aren't allowed to do.
Further, in OpenShift Online you can only run things in the user ID range you are assigned. You cannot override that, nor will you be granted additional privileges.
You would need to find a version of the image which doesn't require being run as root and which can run as an arbitrary user ID.
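To see which user ID range your project has been assigned (myproject is a placeholder name), inspect the project's annotations:
# prints something like openshift.io/sa.scc.uid-range: 1000150000/10000
oc get project myproject -o yaml | grep sa.scc.uid-range
An image that can run as an arbitrary UID in that range works without any extra SCCs.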