Deploy router status Pending - OpenShift

I built an openshift-origin environment with the Ansible playbook, using the following hosts.
[hosts]
openshift-master.example.com
openshift-node01.example.com
openshift-node02.example.com
openshift-etcd.example.com
[/etc/ansible/hosts]
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
[masters]
openshift-master.example.com
[etcd]
openshift-etcd.example.com
# host group for nodes, includes region info
[nodes]
openshift-master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
openshift-node01.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
openshift-node02.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
I logged in to OpenShift with the following command.
[login command]
oc login -u system:admin -n default
Then I scaled the router to two replicas with the following command.
[scale router command]
oc scale dc/router --replicas=2
The following event occurs, and the router replica cannot be scheduled.
[scheduling event]
Failed scheduling
pod (router-2-ievkl) failed to fit in any node
fit failure on node (openshift-node01.example.com): CheckServiceAffinity
fit failure on node (openshift-node02.example.com): CheckServiceAffinity
fit failure on node (openshift-master.example.com): PodFitsHostPorts
Given this situation, what do I need to change so that the router replica can be scheduled?

Got the same issue after a clean install of origin.
Uncordoning the masters does the trick. Thanks to lorenzvth7.
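For reference, a sketch of making the master schedulable again (hostname taken from the question's inventory; use whichever form your origin version supports):
# newer clients
oc adm uncordon openshift-master.example.com
# older origin releases
oadm manage-node openshift-master.example.com --schedulable=true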
During advanced installation, the openshift_hosted_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default. The default router and registry will only be automatically deployed if a node exists that matches the region=infra label.
Also, regarding the "PodFitsHostPorts" error in the topic starter's case:
Routers directly attach to port 80 and 443 on all interfaces on a host. Restrict routers to hosts where port 80/443 is available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
So this means that you should re-label e.g. openshift-node01.example.com with region=infra.
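A sketch of that relabeling, using the hostname from the question (--overwrite is required because the node already carries a region label):
oc label node openshift-node01.example.com region=infra --overwrite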

Related

How to view Routes pod in OpenShift

I have created a route for my service in OpenShift:
oc get routes
NAME              HOST/PORT                              PATH   SERVICES          PORT
simplewebserver   simpleweb.apps.devcluster.os.fly.com          simplewebserver   9999
When I run the command curl http://simpleweb.apps.devcluster.os.fly.com/world,
it fails to access my web service. I suspect my route has some problem, but I could not find any route debug information.
My question is: how do I find the router pod in OpenShift, or how can I see router activity when I access the route?
You can check the router logs in the logs container of the router pods. In our OCP cluster the router pods are in the openshift-ingress namespace.
oc get pods -n openshift-ingress
NAME                               READY   STATUS    RESTARTS   AGE
router-default-5f9c4b6cb4-12121a   2/2     Running   0          40h
router-default-5f9c4b6cb4-12133a   2/2     Running   0          40h
To get the logs, use the command below:
oc logs -f <router_pod_name> -c logs -n openshift-ingress
Also make sure HAProxy access logging is enabled so you can see which URLs are being hit via the router.
https://access.redhat.com/solutions/3397701
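For reference, on OCP 4 access logging is enabled on the default IngressController; a hedged sketch (field names follow the ingress operator's access-logging API, so check the docs for your version):
oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch '{"spec":{"logging":{"access":{"destination":{"type":"Container"}}}}}'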
As there is limited information about your problem, here are a few things you can try.
Try to curl using the port:
curl -kv http://simpleweb.apps.devcluster.os.fly.com:9999
Access the logs of the pod for which the route was created. Check that the service simplewebserver is using the correct selector to route traffic to the pod.
Run oc describe service simplewebserver to see the selectors being used.
Check if any network policy is blocking the external traffic.
Check if you can access the target pod through that service from within the same namespace. You can do that by rsh'ing into a pod and then accessing the service using:
curl -kv http://servicename.projectname.svc.cluster.local
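Putting that last check together, a sketch of the in-cluster test (the pod name is a placeholder; 9999 is the port from the route listing above):
oc rsh <some-pod-in-the-same-namespace>
curl -kv http://simplewebserver.<projectname>.svc.cluster.local:9999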

Openshift-origin does not create ha-router

I tried to create an OpenShift HAProxy router in OpenShift Origin with the CLI:
oadm router router-ha --service-account=router --type="haproxy-router" --subdomain="${name}-${namespace}.op37.dev.procempa.com.br" --replicas=2 --selector='region=infra' --selector='zone=default'
But the router is not created. I have 2 nodes in region=infra, and the error is:
2 nodes are available: 1 CheckServiceAffinity, 1 MatchNodeSelector, 2 PodFitsHostPorts.
My openshift-origin version is:
OpenShift Master: v3.7.0+7ed6862
Kubernetes Master: v1.7.6+a08f5eeb62
OK, let's take a look. We'll need to understand what the error is with the deployment of the router, which I'm guessing is most likely related to the nodes and their labels. Can you please run each of the commands below and add the output to your question?
# Get the nodes, showing what labels they have.
oc get nodes -o wide
# Get the recent deployments.
oc get deploy
# For good measure, let's check some status and recent events.
oc status
oc get events
That should provide a lot of diagnostic data to help!
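Since the scheduling failures are label-related, it can also help to list the node labels directly:
oc get nodes --show-labels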

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation or tutorial describing how to set up this remote server properly, or puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas why the docker command is not found (it is installed, running, and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an ssh server is running on your localhost and that keys have been generated.
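On macOS, a sketch of those prerequisites (Remote Login is Apple's name for the built-in SSH server):
# enable the built-in SSH server
sudo systemsetup -setremotelogin on
# generate a key pair if you don't have one, and authorize it for localhost
ssh-keygen -t rsa
ssh-copy-id localhost   # or append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys manually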
I didn't find any documentation for puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you ssh to a machine with a specific command (rather than a shell), you get that default PATH. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
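A condensed sketch of those two steps (remember to restart sshd after editing the config; how to do that depends on your OS):
# allow per-user environment files (edit requires sudo)
sudo sh -c 'echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config'
# give ssh sessions a PATH that includes /usr/local/bin
echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment
# verify
ssh localhost env | grep PATH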

Openshift Origin registry: how to make it accessible?

We are setting up a test cloud Openshift Origin which we created using the openshift ansible playbook. We are following the documentation at: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html
We have not done anything special concerning the openshift registry or router.
We are pretty new to this topic and have been trying for a few days to make the OpenShift registry accessible...
We have 3 hosts:
master (unschedulable)
node-1 which is set to the region 'infra' and has the registry and router services
node-2 (other region).
Here are the services running in the default project:
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.78.66     <none>        5000/TCP                  3h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     3h
registry-console   172.30.190.63    <none>        9000/TCP                  3h
router             172.30.197.135   <none>        80/TCP,443/TCP,1936/TCP   3h
When we SSH directly to node-1, where the registry and router are running, we can access the registry without problem and push some images, exactly as described here: docs.openshift.org/latest/install_config/registry/accessing_registry.html
But we cannot access the registry from the other hosts (master or node-2), and we really do not understand how to make it accessible. We have of course read: docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
We have used this command:
oc expose service docker-registry --hostname=<hostname> -n default
The documentation says: You must be able to resolve this name externally via DNS to the router’s IP address.
As the router does not have any EXTERNAL-IP address attached to it, we do not understand how to reach it.
Is there any oc or oadm command for exposing the router through an external-ip address?
Thanks a lot in advance
Emmanuel
Based on your stated configuration, I would expect the hostname of your OpenShift UI/API (openshift.yourdomain.com) to resolve to the same IP as your node-1, because that is where you are running the router.
If that is the case, then in DNS you would point the hostname you are passing via the command at the same IP, or add it as a CNAME to that host.
oc expose service docker-registry --hostname=<hostname> -n default
In a larger setup with a dedicated set of load balancer (lb) nodes, you might have a specific A record for the set. You could then make the hostname a CNAME to that record.
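A sketch of the full flow (registry.example.com is a placeholder; once DNS is set up, dig should return node-1's IP):
oc expose service docker-registry --hostname=registry.example.com -n default
# verify the route exists
oc get route docker-registry -n default
# verify the hostname resolves to the node running the router
dig +short registry.example.com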

How to make oc cluster up persistent?

I'm using "oc cluster up" to start my Openshift Origin environment. I can see, however, that once I shutdown the cluster my projects aren't persisted at restart. Is there a way to make them persistent ?
Thanks
There are a couple of ways to do this; persisting resources isn't oc cluster up's primary use case:
Leverage capturing etcd as described in the oc cluster up README
There is a wrapper tool that makes it easy to do this.
There is now an example in the oc cluster up --help output; it is bound to stay up to date, so check that first:
oc cluster up --help
...
Examples:
# Start OpenShift on a new docker machine named 'openshift'
oc cluster up --create-machine
# Start OpenShift using a specific public host name
oc cluster up --public-hostname=my.address.example.com
# Start OpenShift and preserve data and config between restarts
oc cluster up --host-data-dir=/mydata --use-existing-config
So, specifically, in v1.3.2 use --host-data-dir and --use-existing-config.
Assuming you are using Docker Machine with a VM such as VirtualBox, the easiest way I found is to take a VM snapshot WHILE the VM and the OpenShift cluster are up and running. The snapshot backs up memory in addition to disk, so you can restore the entire cluster later by restoring the VM snapshot and then running docker-machine start ...
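A rough sketch of that snapshot flow with VBoxManage (the VM name 'openshift' is a placeholder; exact resume behavior varies by VirtualBox version):
# take a live snapshot while the cluster is running
VBoxManage snapshot openshift take cluster-running --live
# later: power off, restore the snapshot, and start the machine again
VBoxManage controlvm openshift poweroff
VBoxManage snapshot openshift restore cluster-running
docker-machine start openshift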
By the way, as of the latest origin image openshift/origin:v3.6.0-rc.0 and oc CLI, --host-data-dir=/mydata as suggested in the other answer doesn't work for me.
I'm using:
VirtualBox 5.1.26
Kubernetes v1.5.2+43a9be4
openshift v1.5.0+031cbe4
It didn't work for me using --host-data-dir (and the other flags):
oc cluster up --logging=true --metrics=true --docker-machine=openshift --use-existing-config=true --host-data-dir=/vm/data --host-config-dir=/vm/config --host-pv-dir=/vm/pv --host-volumes-dir=/vm/volumes
With output:
-- Checking OpenShift client ... OK
-- Checking Docker client ...
Starting Docker machine 'openshift'
Started Docker machine 'openshift'
-- Checking Docker version ...
WARNING: Cannot verify Docker version
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.0 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
Using docker-machine IP 192.168.99.100 as the host IP
Using 192.168.99.100 as the server IP
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
FAIL
Error: could not start OpenShift container "origin"
Details:
Last 10 lines of "origin" container log:
github.com/openshift/origin/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc4202a1600, 0x42b94c0, 0x1f, 0xc4214d9f08, 0x2, 0x2)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/pkg/capnslog/pkg_logger.go:75 +0x16a
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.newBackend(0xc4209f84c0, 0x33, 0x5f5e100, 0x2710, 0xc4214d9fa8)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:106 +0x341
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.NewDefaultBackend(0xc4209f84c0, 0x33, 0x461e51, 0xc421471200)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:100 +0x4d
github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver.NewServer.func1(0xc4204bf640, 0xc4209f84c0, 0x33, 0xc421079a40)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver/server.go:272 +0x39
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver.NewServer
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver/server.go:274 +0x345
OpenShift writes to the /vm/... directories (which are also defined in VirtualBox), but it won't start successfully.
See https://github.com/openshift/origin/issues/12602
It worked for me too, using VirtualBox snapshots and restoring them.
To make it persistent across shutdowns you need to provide the --base-dir parameter.
$ mkdir ~/openshift-config
$ oc cluster up --base-dir=~/openshift-config
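Then, to come back to the same projects after a shutdown, point cluster up at the same directory again (a sketch; --base-dir exists in newer oc clients):
$ oc cluster down
$ oc cluster up --base-dir=~/openshift-config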
From the help:
$ oc cluster up --help
...
Options:
--base-dir='': Directory on Docker host for cluster up configuration
--enable=[*]: A list of components to enable. '*' enables all on-by-default components, 'foo' enables the component named 'foo', '-foo' disables the component named 'foo'.
--forward-ports=false: Use Docker port-forwarding to communicate with origin container. Requires 'socat' locally.
--http-proxy='': HTTP proxy to use for master and builds
--https-proxy='': HTTPS proxy to use for master and builds
--image='openshift/origin-${component}:${version}': Specify the images to use for OpenShift
--no-proxy=[]: List of hosts or subnets for which a proxy should not be used
--public-hostname='': Public hostname for OpenShift cluster
--routing-suffix='': Default suffix for server routes
--server-loglevel=0: Log level for OpenShift server
--skip-registry-check=false: Skip Docker daemon registry check
--write-config=false: Write the configuration files into host config dir
But you shouldn't use it, because "cluster up" was removed in version 4.0.0. More here: https://github.com/openshift/origin/pull/21399