Error: timed out waiting for the condition - openshift

On Ubuntu 18.04 with Docker 18 and OpenShift OKD oc v3.11.0, a local cluster will not start successfully and produces a timeout error message.
Is it possible to start a local cluster on Ubuntu 18.04 using oc cluster up? Is it supported? How should a cluster be started on Ubuntu 18.04?
myuser:~] $ oc cluster up --public-hostname='ocp.127.0.0.1.nip.io' --routing-suffix='apps.ocp.127.0.0.1.nip.io'
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Pulling image openshift/origin-control-plane:v3.11
Pulled 1/5 layers, 23% complete
Pulled 2/5 layers, 43% complete
Pulled 3/5 layers, 80% complete
Pulled 4/5 layers, 96% complete
Pulled 5/5 layers, 100% complete
Extracting
Image pull complete
Pulling image openshift/origin-cli:v3.11
Image pull complete
Pulling image openshift/origin-node:v3.11
Pulled 5/6 layers, 88% complete
Pulled 6/6 layers, 100% complete
Extracting
Image pull complete
Creating shared mount directory on the remote host ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I0416 08:32:35.747717 22853 config.go:40] Running "create-master-config"
I0416 08:32:37.456151 22853 config.go:46] Running "create-node-config"
I0416 08:32:38.721454 22853 flags.go:30] Running "create-kubelet-flags"
I0416 08:32:39.763094 22853 run_kubelet.go:49] Running "start-kubelet"
I0416 08:32:39.972403 22853 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I0416 08:33:13.978672 22853 interface.go:26] Installing "kube-proxy" ...
I0416 08:33:13.978684 22853 interface.go:26] Installing "kube-dns" ...
I0416 08:33:13.978689 22853 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I0416 08:33:13.978694 22853 interface.go:26] Installing "openshift-apiserver" ...
I0416 08:33:13.978704 22853 apply_template.go:81] Installing "openshift-apiserver"
I0416 08:33:13.978752 22853 apply_template.go:81] Installing "kube-dns"
I0416 08:33:13.978758 22853 apply_template.go:81] Installing "kube-proxy"
I0416 08:33:13.978788 22853 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I0416 08:33:15.418545 22853 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
Error: timed out waiting for the condition
[myuser:~] 1 $ docker version
Client:
Version: 18.09.1
API version: 1.39
Go version: go1.10.6
Git commit: 4c52b90
Built: Wed Jan 9 19:35:31 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 4c52b90
Built: Wed Jan 9 19:02:44 2019
OS/Arch: linux/amd64
Experimental: false
[myuser:~] $ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
[myuser:~] $ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"
[myuser:~] $
I think this has to do with Ubuntu 18.04 now using systemd-resolved by default, so /etc/resolv.conf points at a local stub resolver. I had some success using a script similar to the one below.
#!/bin/bash
# Point the host at a public resolver before bringing the cluster up the first time
sudo /bin/sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
oc cluster up --public-hostname='ocp.127.0.0.1.nip.io' --routing-suffix='apps.ocp.127.0.0.1.nip.io'
oc cluster down
# Re-apply the resolver on the host and in the kube-dns config created by the first run
sudo /bin/sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo /bin/sh -c 'echo "nameserver 8.8.8.8" > ~/openshift.local.clusterup/kubedns/resolv.conf'
oc cluster up --public-hostname='ocp.127.0.0.1.nip.io' --routing-suffix='apps.ocp.127.0.0.1.nip.io'
So the problem seems to be that /etc/resolv.conf is different under Ubuntu 18.04 and is not suitable for oc cluster up.
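For reference, you can see what that looks like with a quick check (a sketch; the 127.0.0.53 address is the systemd-resolved stub listener on a default 18.04 install):
# Show which resolver the host is actually using
cat /etc/resolv.conf
# On a default Ubuntu 18.04 install this typically contains only
#   nameserver 127.0.0.53
# which points at the host's loopback stub and is not usable once copied into the cluster's containers.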
After you try the above workaround, you can test whether DNS is working correctly using a script similar to the one below:
#!/bin/bash
# Log in as cluster admin and find the docker-registry pod in the default project
oc login -u system:admin -n default
podname=$(oc get pods | grep registry | awk '{print $1;}')
# Resolve an external hostname from inside the pod to verify cluster DNS
oc exec $podname host github.com

You can also solve this issue by stopping all running containers on your machine (or only the containers started by OpenShift, if you have other important containers running) and then running the oc cluster up command again:
docker container stop $(docker ps -q)
oc cluster up --skip-registry-check=true

After trying to fix every possible issue (timeouts, EOFs, errors, panics and other random problems; I hit all of them, roughly 300 failures during oc cluster up), I reverted my VM to the state before installing the things below and redid the installation. I guess I got it right this time, because now it works like it should and I can run oc cluster up/down without stress.
PS: It can also be Docker Hub's limit of 100 image pulls (try docker run hello-world to check whether pulls still work).
sudo apt update && sudo apt upgrade
sudo apt install curl
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update && sudo apt -y install docker-ce
sudo usermod -aG docker XXXX
groups XXXX
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cat << EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries" : [ "172.30.0.0/16" ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl is-enabled docker
sudo systemctl is-active docker
Regards,
Artur

Cannot `ssh` from container with `openvpn`

Basic setup
Using:
Fedora 30, fully upgraded (kernel 5.1.19)
Podman 1.4.4
I have this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
&& dnf install -y \
openssh-clients \
openvpn \
slirp4netns \
&& dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Which I build with:
podman build -t peque/vpn .
Now, in order to be able to run it successfully, I have to take care of some SELinux issues (see Connect to VPN with Podman).
Fixing SELinux permission issues
sudo dnf install udica
I define this ovpn_container.cil custom policy for the VPN container:
(block ovpn_container
(blockinherit container)
(blockinherit restricted_net_container)
(allow process process (capability (chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin)))
(allow process default_t (dir (open read getattr lock search ioctl add_name remove_name write)))
(allow process default_t (file (getattr read write append ioctl lock map open create)))
(allow process default_t (sock_file (getattr read write append open)))
(allow process tun_tap_device_t (chr_file (ioctl open read write)))
(allow process self (netlink_route_socket (nlmsg_write)))
(allow process unreserved_port_t (tcp_socket (name_connect)))
)
I apply the policy with:
sudo semodule -r ovpn_container
sudo semodule -i ovpn_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
Running the container
Now I can successfully run the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun --security-opt label=type:ovpn_container.process -it peque/vpn
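To double-check that the container really runs under the custom SELinux type, a quick check could look like this (a sketch, not part of the original post; it assumes the ovpn_container policy from above is loaded):
# Processes inside the container should be labelled with the ovpn_container.process type
ps -eZ | grep ovpn_container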
Issues
Once the container is running, I open a terminal within the container, from which I want to ssh to remote servers:
podman exec -it container_name bash
From the container I am able to ssh to remote servers successfully, but only if they are not within the VPN.
When I try to ssh to servers in the VPN, it gets stuck for a while and then throws this error:
$ ssh server.domain.com
ssh: connect to host server.domain.com port 22: Connection refused
kex_exchange_identification: Connection closed by remote host
What could I be missing?

Installation Requirements for mysql with DBIish on rakudo-star docker image

I was creating my own Docker image based on the latest rakudo-star Docker image. I wanted to use DBIish to connect to a MySQL database. Unfortunately I am not able to get DBDish::mysql to work.
I've installed default-libmysqlclient-dev, as you can see here:
# find / -name 'libmysqlclient*.so'
/usr/lib/x86_64-linux-gnu/libmysqlclient_r.so
/usr/lib/x86_64-linux-gnu/libmysqlclient.so
The error I am facing is:
# perl6 -Ilib -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot locate native library 'mysqlclient': mysqlclient: cannot open shared object file: No such file or directory
in method setup at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 289
in method CALL-ME at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 539
in method connect at /root/DBIish/lib/DBDish/mysql.pm6 (DBDish::mysql) line 12
in block <unit> at -e line 1
Short answer: you need the package libmysqlclient20 (I added the documentation request to a similar DBIish issue). Debian 9 (stable at the moment) uses an older version than Ubuntu 18.04 (stable at the moment) and Debian Unstable, and it refers to mariadb instead of mysql. Pick libmariadbclient18 on images based on Debian Stable and create a link with the mysql name (see below).
On Debian Testing/Unstable and recent derivatives:
$ sudo apt-get install libmysqlclient20
$ dpkg -L libmysqlclient20
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.9
/usr/share
/usr/share/doc
/usr/share/doc/libmysqlclient20
/usr/share/doc/libmysqlclient20/NEWS.Debian.gz
/usr/share/doc/libmysqlclient20/changelog.Debian.gz
/usr/share/doc/libmysqlclient20/copyright
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
On Debian 9 and derivatives:
$ dpkg -L libmariadbclient18
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18.0.0
/usr/lib/x86_64-linux-gnu/mariadb18
/usr/lib/x86_64-linux-gnu/mariadb18/plugin
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/client_ed25519.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/dialog.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/mysql_clear_password.so
/usr/share
/usr/share/doc
/usr/share/doc/libmariadbclient18
/usr/share/doc/libmariadbclient18/changelog.Debian.gz
/usr/share/doc/libmariadbclient18/copyright
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18
Create the link:
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
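Putting the Debian 9 case together for a rakudo-star-based image, a minimal sketch might look like this (that the rakudo-star base is Debian 9 is an assumption here; adjust the package name if your base differs):
# Install the MariaDB client library and expose it under the name the mysql driver looks for
apt-get update
apt-get install -y libmariadbclient18
ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18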
In order to illustrate this, I created an Ubuntu 18.04 container for the occasion*:
docker run -ti --rm --entrypoint=bash rakudo/ubuntu-amd64-18.04
And the abbreviated commands and output:
# apt-get install -y libmysqlclient20 build-essential
# zef install DBIish
# perl6 -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot look up attributes in a DBDish::mysql type object
[...]
The error appears because I didn't pass the correct parameters to connect, as I didn't have a DB running. The important thing is that no .so file is missing.
*: I uploaded it to Docker Hub; a normal run will put you right in the REPL:
$ docker run -ti --rm rakudo/ubuntu-amd64-18.04
To exit type 'exit' or '^D'
>
(I didn't use the Star image when debugging, but it does not matter because this is a more generic problem.)

Run statsd as a daemon on EC2 instances programmatically

EDIT: My goal is to be able to emit metrics from my spring-boot application and have them sent to a Graphite server. For that I am trying to set up statsd. If you can suggest a cleaner approach, that would be better.
I have a Beanstalk application which requires statsd to run as a background process. I was able to specify commands and packages through ebextensions config file as follows:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_run_statsd:
    command: node stats.js exampleConfig.js
    cwd: /home/ec2-user/statsd
When I try to deploy the application to a new environment, the EC2 node never comes up fully. I logged in to check what might be going on and noticed in /var/log/cfn-init.log that 01_nodejs_install, 02_mkdir_statsd and 03_fetch_statsd were executed successfully. So I guess the system was stuck on the fourth command (04_run_statsd).
2016-05-24 01:25:09,769 [INFO] Yum installed [u'git']
2016-05-24 01:25:37,751 [INFO] Command 01_nodejs_install succeeded
2016-05-24 01:25:37,755 [INFO] Command 02_mkdir_statsd succeeded
2016-05-24 01:25:38,700 [INFO] Command 03_fetch_statsd succeeded
cfn-init.log (END)
I need help with the following:
If there is a better way to install and run statsd while instantiating an environment, I would appreciate if you could provide details on that approach. This current scheme seems hacky.
If this is the approach I need to stick with, how can I run the fourth command so that statsd can be run as a background process?
Tried a few things and found that the following ebextensions configs work:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_change_config:
    command: cat exampleConfig.js | sed 's/2003/<graphite server port>/g' | sed 's/graphite.example.com/my.graphite.server.hostname/g' > config.js
    cwd: /home/ec2-user/statsd
  05_run_statsd:
    command: setsid node stats.js config.js >/dev/null 2>&1 < /dev/null &
    cwd: /home/ec2-user/statsd
Note that I added another command (04_change_config) so that I may configure my own Graphite server and port in statsd configs. This change is not needed to address the original question, though.
The actual run command uses setsid to run the command as a daemon.
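If you want to confirm after deployment that statsd is actually up and receiving data, a quick manual check on the instance could look like this (a sketch; 8125 is statsd's default UDP port, and nc being installed is an assumption):
# List the detached statsd process started by 05_run_statsd
pgrep -f "node stats.js"
# Send a test counter to the local statsd UDP listener
echo "test.deploy.check:1|c" | nc -u -w1 127.0.0.1 8125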

How to stop services on Travis CI running by default?

The machine instances running for Travis CI start some services by default which are not useful to my project. Therefore I want to stop these services. My first idea was to use the following block in my .travis.yml to do so:
before_script:
  # Disable services enabled by default
  - sudo service mysql stop
  - sudo service postgresql stop
However, this was successful on one machine and failed on another:
$ sudo service mysql stop
mysql stop/waiting
$ sudo service postgresql stop
* Stopping PostgreSQL 9.1 database server
...done.
* Stopping PostgreSQL 9.2 database server
...done.
* Stopping PostgreSQL 9.3 database server
...done.
...
$ sudo service mysql stop
stop: Unknown instance:
The command "sudo service mysql stop" failed and exited with 1 during .
Another option is /etc/init.d/mysql stop but this could fail on a machine which started the process via the service command. Is there a try-catch I can use in the .travis.yml script?
It turns out that using the mentioned /etc/init.d/ ... approach works more reliably. There are some warnings that one should use sudo service ... instead, but I was not successful with those. So here is what I am running now:
language: android
jdk:
  - oraclejdk7
  - openjdk7
android:
  components:
    # All the build system components should be at the latest version
    - tools
    - platform-tools
    - build-tools-21.1.1
    - android-19
    # The libraries we can't get from Maven Central or similar
    - extra-android-support
notifications:
  email: true
before_script:
  # Disable services enabled by default
  # http://docs.travis-ci.com/user/database-setup/#MySQL
  - sudo /etc/init.d/mysql stop
  - sudo /etc/init.d/postgresql stop
  # The following did not work reliably
  # - sudo service mysql stop
  # - sudo service postgresql stop
  # Ensure Gradle wrapper is executable
  - chmod +x gradlew
  # Ensure signing configuration is present
  - mv app/gradle.properties.example app/gradle.properties
script:
  - ./gradlew clean assembleDebug
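As for the try-catch part of the question: the shell's || operator gives a simple fallback, so a stop step never fails the build even if the service is not running (a sketch, not part of the original answer; these lines would go into before_script entries):
# Try the service wrapper first, fall back to the init script, and never fail the build
sudo service mysql stop || sudo /etc/init.d/mysql stop || true
sudo service postgresql stop || sudo /etc/init.d/postgresql stop || true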
sudo service mysql stop
I am using Travis with Laradock and MySQL, and that command did the job for me.

Ubuntu 12.04 in Docker: "service mysql start"

I need Ubuntu 12.04 with the web services I develop against (sshd, Apache 2.2, PHP 5.3, mysql-server) running. I have Ubuntu 14.04 and installed Docker.
Then I started a container:
docker run -t -i ubuntu:12.04 /bin/bash
Then:
apt-get update && apt-get install -y mysql-server
After that, service mysql start and service mysql status do not work. If I run a container with Ubuntu 14.04, it works well. The same issue occurs with the sshd server.
service apache2 status, service apache2 stop and service apache2 start work well.
There is no init process running inside the container, therefore the runlevel can't be determined.
If the runlevel is unknown, upstart cannot start mysql ... see /etc/init/mysql.conf:
...
start on runlevel [2345]
...
If you try to check the runlevel:
$ runlevel
unknown
... you see it is unknown.
In Docker, the common way is to start the application in the foreground:
/usr/bin/mysqld_safe
If you want to start more than one application, you can use supervisord.
http://supervisord.org/
https://docs.docker.com/articles/using_supervisord/
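If you go the supervisord route, a minimal setup inside the 12.04 container might look like this (a sketch; program names and paths are assumptions based on the services mentioned in the question):
# Install supervisor and describe the services it should keep running
apt-get update && apt-get install -y supervisor openssh-server
mkdir -p /var/run/sshd
cat << 'EOF' > /etc/supervisor/conf.d/services.conf
[program:mysqld]
command=/usr/bin/mysqld_safe
autorestart=true

[program:sshd]
command=/usr/sbin/sshd -D
autorestart=true
EOF
# Run supervisord in the foreground as the container's main process
/usr/bin/supervisord -n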
Additionally, I've found a Dockerfile which starts an init inside an ubuntu:12.04 Docker container. Really nice work:
https://github.com/tianon/dockerfiles/blob/master/sbin-init/ubuntu/upstart/12.04/Dockerfile