Cannot externally access the OpenShift 4.2 built-in docker registry

I have a kubeadmin account for OpenShift 4.2 and am able to successfully login via oc login -u kubeadmin.
I exposed the built-in docker registry through DefaultRoute as documented in https://docs.openshift.com/container-platform/4.2/registry/securing-exposing-registry.html
My docker client runs on macOS and is configured to trust the default self-signed certificate of the registry
openssl s_client -showcerts -connect $(oc registry info) </dev/null 2>/dev/null|openssl x509 -outform PEM > tls.pem
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain tls.pem
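As a sanity check (not part of the original steps; the /openshift/token path simply mirrors the request that shows up in the registry logs further down, and kubeadmin is used literally here), the route and credentials can be probed without docker at all:
# expect a 401 with a Www-Authenticate challenge rather than a TLS error
curl --cacert tls.pem -i https://$(oc registry info)/v2/
# expect a JSON document containing a bearer token if the credentials are accepted
curl --cacert tls.pem -u kubeadmin:$(oc whoami -t) "https://$(oc registry info)/openshift/token?account=kubeadmin&client_id=docker"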
Now when I try logging into the built-in registry, I get the following error
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
Error response from daemon: Get https://my-openshift-registry.com/v2/: unauthorized: authentication required
The registry logs report the following errors
error authorizing context: authorization header required
invalid token: Unauthorized
And more specifically
oc logs -f -n openshift-image-registry deployments/image-registry
time="2019-11-29T18:03:25.581914855Z" level=warning msg="error authorizing context: authorization header required" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=aa41909a-4aa0-42a5-9568-91aa77c7f7ab http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:25.581958296Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=d2216e3a-0e12-4e77-b3cb-fd47b6f9a804 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration="923.654µs" http.response.status=401 http.response.written=87
time="2019-11-29T18:03:26.187770058Z" level=error msg="invalid token: Unauthorized" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=638fc003-1d4a-433c-950e-f9eb9d5328c4 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:26.187818779Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=5486d94a-f756-401b-859d-0676e2a28465 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype=application/json http.response.duration=6.97799ms http.response.status=401 http.response.written=0
My oc client is
oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-07-06T03:16:01Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+2e5ed54", GitCommit:"2e5ed54", GitTreeState:"clean", BuildDate:"2019-10-10T22:04:13Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
My docker info is
docker info
Client:
Debug Mode: false
Server:
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 179
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.184-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 5.818GiB
Name: docker-desktop
ID: JRNE:4IBW:MUMK:CGKT:SMWT:27MW:D6OO:YFE5:3KVX:AEWI:QC7M:IBN4
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 29
Goroutines: 44
System Time: 2019-11-29T21:12:21.3565037Z
EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
I have tried adding the registry-viewer role to kubeadmin, but this did not make any difference
oc policy add-role-to-user registry-viewer kubeadmin
oc policy add-role-to-user registry-viewer kube:admin
Is there any suggestion as to what I could try or how to diagnose the problem further? I am able to access the registry from within the cluster, however, I need to access it externally through docker login.

As silly as it sounds, the problem was that $(oc whoami) evaluated to kube:admin instead of kubeadmin, and only the latter works as the registry username. For example, to log in successfully I had to replace
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
with
docker login $(oc registry info) -u kubeadmin -p $(oc whoami -t)
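A minimal scriptable workaround (a sketch, assuming the only mismatch is the kube:admin vs kubeadmin spelling) is to normalise the username before calling docker login:
REGISTRY_USER=$(oc whoami | sed 's/^kube:admin$/kubeadmin/')
docker login "$(oc registry info)" -u "$REGISTRY_USER" -p "$(oc whoami -t)"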
The relevant role is registry-viewer; however, I think the kubeadmin user has it pre-configured:
oc policy add-role-to-user registry-viewer kubeadmin
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin

To add the registry-viewer role, the command is:
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin
You can refer to their documentation to work with the internal registry.
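As a hedged aside, recent 4.x oc clients also ship a shortcut that avoids typing the username at all; whether it is available depends on your client version:
oc registry login
# authenticates the local container client against the integrated registry using the current oc session token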

Related

OpenShift Login failed (401 Unauthorized)

I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on CentOS 7.
Running prerequisites.yml and deploy_cluster.yml completed successfully, and I have updated htpasswd and granted the cluster-admin role to my user:
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
I have also created the user and identity with the commands below:
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
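Note that the ldap_provider prefix in the identity has to match the name of an identity provider actually configured on the masters; a quick way to check this (a sketch, assuming the default 3.11 config path):
grep -A 10 identityProviders /etc/origin/master/master-config.yaml
oc get identity
oc get user bob -o yaml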
When I try to log in with oc login -u bob -p password, it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state.
Is that causing the problem? (cmd: oc get pods)
Please suggest how I can fix the issue. Thank you.
UPDATE:
I have run deploy_cluster.yml once again; the login issue is solved and I am able to log in. But it fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console the logging pod has the event below.
But all the servers have enough memory; more than 65% is free.
The Ansible version is 2.6.5.
1 Master node config:
4CPU, 16GB RAM, 50GB HDD
2 Slave and 1 infra node config:
4CPU, 16GB RAM, 20GB HDD

To create a new user, try to follow these steps:
1. Create the password entry in the htpasswd file on each master node:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2. Restart the master API and master controllers on each master node:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3. Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4. Log in as myUser:
$ oc login -u myUser -p myPassword
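A quick way to confirm the new account took effect (a hedged extra check; the exact binding name can vary):
$ oc whoami
$ oc get clusterrolebindings | grep cluster-admin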
Running deploy_cluster.yml again after configuring the htpasswd file forced the restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem: the registry-console and logging-es-data-master pods are not running because you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already have all the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is here.
If, after all this procedure, the logging-es-data-master pod still does not run, uninstall the logging component with
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD installation and install it again.
If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
Red Hat's detailed instructions are here.

executing kompose in openshift throws Unauthorized at the end

I am trying to use kompose to deploy the docker-compose file below into OpenShift.
At the last line I just get an Unauthorized error, which doesn't point to the root cause.
Please note that before this I had successfully authenticated using oc login and docker login, as below:
[centos@master user_interface]$ docker login -u `oc whoami` -p `oc whoami -t` docker-registry.default.svc:5000
Login Succeeded
[centos#master user_interface]$ kompose up -v --provider=openshift -f rak-docker-compose.yml --build build-config --namespace=rak
DEBU Docker Compose version: 3
DEBU Compose file dir: /home/centos/user_interface
INFO Buildconfig using http://abc/prj.git::master as source.
DEBU Compose file dir: /home/centos/user_interface
INFO Buildconfig using http://abc/prj.git::master as source.
INFO We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
FATA Error while deploying application: Unauthorized
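One way to narrow this down (a hedged suggestion; the project name just mirrors the --namespace flag above): the docker login only covers the registry, while kompose itself talks to the OpenShift API through the current kubeconfig context, so it is worth confirming that this token is still valid and allowed to create the objects kompose listed:
oc whoami
oc project rak
# oc auth can-i is available on reasonably recent oc clients
oc auth can-i create deploymentconfigs -n rak
oc auth can-i create buildconfigs -n rak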

pods can't resolve DNS after 'oc cluster up'

On a fresh install of RHEL7.4:
# install the oc client and docker
[root@openshift1 ~]# yum install atomic-openshift-clients.x86_64 docker
# configure and start docker
[root@openshift1 ~]# sed -i '/^\[registries.insecure\]/!b;n;cregistries = ['172.30.0.0\/16']' /etc/containers/registries.conf
[root@openshift1 ~]# systemctl start docker; systemctl enable docker
# these links recommend running 'iptables -F' as a workaround for pod DNS issues
# https://github.com/openshift/origin/issues/12110
# https://github.com/openshift/origin/issues/10139
[root@openshift1 ~]# iptables -F; iptables -F -t nat
[root@openshift1 ~]# oc cluster up --public-hostname 192.168.146.200
Attempting a test Apache build gives me this error:
Cloning "https://github.com/openshift/httpd-ex.git " ...
WARNING: timed out waiting for git server, will wait 1m4s
error: fatal: unable to access 'https://github.com/openshift/httpd-ex.git/': Could not resolve host: github.com; Unknown error
DNS server is present
[root@openshift1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.146.2
I can confirm that the host machine can resolve names:
[root@openshift1 ~]# host github.com
github.com has address 192.30.255.113
github.com has address 192.30.255.112
However, this DNS server didn't make its way down to the pods:
[root@openshift1 ~]# oc get pods
NAME READY STATUS RESTARTS AGE
docker-registry-1-rqm9h 1/1 Running 0 38s
persistent-volume-setup-fdbv5 1/1 Running 0 50s
router-1-m6z8w 1/1 Running 0 31s
[root@openshift1 ~]# oc rsh docker-registry-1-rqm9h
sh-4.2$ cat /etc/resolv.conf
nameserver 172.30.0.1
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Is there anything I am missing?

You should not flush the rules; instead, create a new zone and open the additional ports, e.g.:
firewall-cmd --permanent --new-zone dockerc
firewall-cmd --permanent --zone dockerc --add-source $(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge)
firewall-cmd --permanent --zone dockerc --add-port 8443/tcp --add-port 53/udp --add-port 8053/udp
firewall-cmd --reload
Source:
https://github.com/openshift/origin/blob/release-3.7/docs/cluster_up_down.md#linux
EDIT:
Also the DNS server in your /etc/resolv.conf should be routable from your OCP instance.
Source: kubernetes skydns failure to forward request
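To verify the result afterwards (a rough check; the pod name comes from the oc get pods output above, and getent is used because the registry image may not ship nslookup):
oc rsh docker-registry-1-rqm9h getent hosts github.com
host github.com 192.168.146.2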

Gitlab CI + DinD + MySQL services permission issue

I created two GitLab jobs:
Test unit (using a PHP Docker image registered on GitLab)
Sonar (using the Docker service to run "letsdeal/docker-sonar-scanner")
I use the following gitlab-ci-multi-runner configuration:
concurrent = 1
check_interval = 0

[[runners]]
  name = "name-ci"
  url = "https://uri/ci"
  token = "token"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
The test unit job works correctly, but the Sonar job failed with the following messages:
service runner-f66e3b66-project-227-concurrent-0-docker-wait-for-service did timeout
2017-07-05T16:13:18.543802416Z mount: mounting none on /sys/kernel/security failed: Permission denied
2017-07-05T16:13:18.543846406Z Could not mount /sys/kernel/security.
2017-07-05T16:13:18.543855189Z AppArmor detection and --privileged mode might break.
2017-07-05T16:13:18.543861712Z mount: mounting none on /tmp failed: Permission denied
When I change the 'privileged' param of 'runners.docker' to false, the Sonar job works but the test unit job fails:
service runner-f66e3b66-project-227-concurrent-0-mysql-wait-for-service did timeout
2017-07-05T15:08:49.178114891Z
2017-07-05T15:08:49.178257497Z ERROR: mysqld failed while attempting to check config
2017-07-05T15:08:49.178266378Z command was: "mysqld --verbose --help"
2017-07-05T15:08:49.178271850Z
2017-07-05T15:08:49.178276837Z mysqld: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Permission denied
The param "privileged" has to be true to be able to use docker in docker. But I don't understand why it makes permission broken for services like MySQL.
Here is my gitlab-ci file:
stages:
  - test-unit
  - analyse

.php_job_template: &php_job_template
  image: custom_docker_image
  before_script:
    - eval $(ssh-agent -s) && ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  services:
    - mysql
  variables:
    MYSQL_DATABASE: blabla
    MYSQL_USER: blabla
    MYSQL_PASSWORD: blabla
    MYSQL_ROOT_PASSWORD: blabla

test_phpunit_dev:
  <<: *php_job_template
  stage: test-unit
  script:
    - mysql -h mysql -u blabla -pblabla <<< "SET GLOBAL sql_mode = '';"
    - php composer.phar install -q
    - php vendor/bin/phpunit -c tests/phpunit.xml

sonar:
  stage: analyse
  image: docker:1.12.6
  services:
    - docker:dind
  script:
    - docker run --rm -v `pwd`:/build -w /build letsdeal/sonar-scanner:2.7 scan -e
How do I fix this?

Why not use ciricihq/gitlab-sonar-scanner instead?
It doesn't require DinD or privileged mode.
official github repository
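For illustration, the sonar job might then look roughly like this (a sketch only; the SONAR_* variables are placeholders and the sonar-scanner invocation is an assumption about the image, so adapt it to the image's documented usage):
sonar:
  stage: analyse
  image: ciricihq/gitlab-sonar-scanner
  script:
    # assumes the image ships the standard sonar-scanner CLI
    - sonar-scanner -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
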
I had the same issue and was able to resolve it by removing MySQL (as I don't need it on my CI server, anyway) and disabling AppArmor. On Ubuntu, you can run:
# Remove Mysql
sudo apt-get remove mysql-server
# Disable AppArmor for MySQL
sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
Source: https://www.cyberciti.biz/faq/ubuntu-linux-howto-disable-apparmor-commands/

openshift import-images error "! error: Import failed (Unauthorized): you may not have access to the Docker image"

I need some input/help to bring up my (customized) container on OpenShift.
[mag-vm@mag-vm-centos-2 ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mag_main latest e039447d7212 13 days ago 1.316 GB
[mag-vm@mag-vm-centos-2 ~]$ oc import-image mag_main:latest
error: no image stream named "mag_main" exists, pass --confirm to create and import
[mag-vm@mag-vm-centos-2 ~]$ oc import-image mag_main:latest --confirm
The import completed with errors.
Name: mag_main
Namespace: cirrus
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2017-06-07T16:24:49Z
Docker Pull Spec: 172.30.124.119:5000/cirrus/mag_main
Unique Images: 0
Tags: 1
latest
tagged from mag_main:latest
! error: Import failed (Unauthorized): you may not have access to the Docker image "mag_main:latest"
Less than a second ago
[mag-vm@mag-vm-centos-2 ~]$
Could you please help me overcome this issue? Is it something to do with the "secret" settings?
Thanks in advance.
Also, as an additional input, I am able to bring up a container from this Docker image using the "docker run" command:
STEP #1 : sudo docker run -t mag_main:latest /bin/bash
STEP #2 : Once the container is up, I used "./bin/karaf" to run the services inside this docker container
Please let me know how I can do the same from OpenShift.
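Since the image only exists in the local Docker daemon, one option (a sketch using the pull spec reported above; it assumes the integrated registry is reachable from this host and that the account is allowed to push to the cirrus project) is to push the image into the integrated registry instead of importing it:
sudo docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.124.119:5000
sudo docker tag mag_main:latest 172.30.124.119:5000/cirrus/mag_main:latest
sudo docker push 172.30.124.119:5000/cirrus/mag_main:latest
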
OpenShift Details:
[mag-vm@mag-vm-centos-2 ~]$ oc version
oc v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://10.100.71.160:8443
openshift v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
[mag-vm@mag-vm-centos-2 ~]$
Docker Details:
[mag-vm@mag-vm-centos-2 ~]$ sudo docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:11 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:11 2016
OS/Arch: linux/amd64
[mag-vm@mag-vm-centos-2 ~]$