Problems with system:admin login after changing to HTPasswd Identity Provider in Openshift Origin - configuration

Wanting to switch to the HTPasswd identity provider, I have updated the master-config.yaml to look like this:
identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /path/to/users.htpasswd
I'm bringing the cluster up with:
oc cluster up --host-data-dir=/opt/openshift_data --host-config-dir=/opt/openshift_conf --use-existing-config
but when I try to log in with the system:admin user, this happens:
oc login -u system:admin
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login failed (401 Unauthorized)
You must obtain an API token by visiting https://:8443/oauth/token/request

I got this error when I changed the authentication provider of my OpenShift cluster and had already logged in as the admin user with the old authentication provider settings.
I had to add the mappingMethod: add option to my configuration so it could map the existing user:
identityProviders:
- challenge: true
  login: true
  mappingMethod: add
  name: my_htpasswd_provider
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /var/openshift/users.htpasswd
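After restarting the cluster with the new configuration, you can verify how identities get mapped to users (a quick sketch using standard oc commands; the htpasswd user is whatever name you put in the file):
oc login -u system:admin
oc get users
oc get identities
With mappingMethod: add, the identity created by my_htpasswd_provider is attached to the already existing user instead of the login being rejected.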
This is the OpenShift documentation URL:
https://docs.openshift.com/enterprise/3.2/install_config/configuring_authentication.html#mapping-identities-to-users
Hope this helps

Related

Using a CA cert for pulling builder image for S2I build

I have a BuildConfig that has the following strategy block:
strategy:
  sourceStrategy:
    from:
      kind: DockerImage
      name: <insecure registry pullspec>
    forcePull: true
    incremental: true
  type: Source
The builder image is coming from a registry that uses a self-signed certificate. How do I tell the build config to either A) use a CA certificate for the registry or B) ignore the certificate errors?
I have tried adding the CA certificate as an opaque secret, and then using pullSecret, but that didn't work:
strategy:
  sourceStrategy:
    forcePull: true
    from:
      kind: DockerImage
      name: <insecure registry pullspec>
    incremental: true
    pullSecret:
      name: <name of opaque secret with ca cert>
  type: Source
I am running this build in an OpenShift 3.11 cluster.
How to add your own root CA as an additionalTrustedCA is actually described in the documentation:
Setting up additional trusted certificate authorities for builds
Here are the relevant parts:
Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the registry’s hostname in the hostname[..port] format:
$ oc create configmap registry-cas -n openshift-config \
--from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
--from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
Update the cluster image configuration:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
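To confirm both pieces are in place afterwards (assuming the names used above), you can inspect them:
$ oc get configmap registry-cas -n openshift-config
$ oc get image.config.openshift.io/cluster -o yaml
The additionalTrustedCA stanza should now reference the registry-cas ConfigMap.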

Basic Auth doesn't work in kubernetes ingress

I have created a pypiserver in a Kubernetes cluster, using the https://hub.docker.com/r/pypiserver/pypiserver Docker image. I need to set up basic auth for the server I created, so I followed this method: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pypiserver
  labels:
    app: pypiserver
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: secret
    ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiservice
          servicePort: 8080
  tls:
  - hosts:
    - example.com
    secretName: secret-tls
But my hostname would be "www.example.com/8080", and I don't see that the ingress has any pod in the Kubernetes cluster. The ingress is running fine, but I don't get an auth prompt for this host. (I also have http://IP-address:8080, which I pointed to the domain through Cloudflare.)
Please let me know what I am doing wrong.
I don't know exactly which nginx ingress controller version you're using, but I can share what worked for me. I've reproduced it on my GKE cluster.
I installed my nginx ingress controller following this guide. Basically it came down to running the following commands:
If you're using GKE you need to initialize your user as a
cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
I'm using version 1.13 on my GKE cluster, so this tip also applies in my case:
Tip
If you are using a Kubernetes version previous to 1.14, you need to
change kubernetes.io/os to beta.kubernetes.io/os at line 217 of
mandatory.yaml, see Labels details.
But I dealt with it quite differently. Basically you need your nodes to have the kubernetes.io/os=linux label, so you can simply label them. The following command will do the job:
kubectl label node --all kubernetes.io/os=linux
Then we move on to the Provider Specific Steps, which in the case of GKE came down to applying the following yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Then you may want to verify your installation:
To check if the ingress controller pods have started, run the
following command:
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
or simply run:
kubectl get all -n ingress-nginx
It will also tell you whether all the required resources are properly deployed.
Next we need to write our ingress (ingress object/resource) containing the basic-auth related annotations. I was following the same tutorial as the one mentioned in your question.
First we need to create our auth file containing username and hashed password:
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Once we have it, we need to create a Secret object which then we'll use in our ingress:
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
Once it is created we can check if everything went well:
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
Alright, so far so good...
Then we need to create our ingress resource/object.
My ingress-with-auth.yaml file looks slightly different from the one in the instructions; namely, I just added kubernetes.io/ingress.class: nginx to make sure my nginx ingress controller is used rather than the built-in GKE solution:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    kubernetes.io/ingress.class: nginx
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiserver
          servicePort: 80
In your example you may need to add the nginx prefix to your basic-auth related annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: secret
ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
so it looks like this:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: secret
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
First I used the address listed in my ingress resource (it doesn't appear there any more once I added the kubernetes.io/ingress.class: nginx annotation to my ingress definition):
$ kubectl get ingress
NAME                HOSTS         ADDRESS   PORTS   AGE
ingress-with-auth   foo.bar.com             80      117m
When I tried to access the pypi-server using this IP it brought me directly to the page without any need for authentication. It looks like if you don't define the proper ingress class, the default one is used instead, so in practice your ingress definition with the basic-auth details isn't taken into consideration and isn't passed to the nginx ingress controller we installed in one of the previous steps.
So what IP address should be used to access your app? Run the following command, which will show you both the CLUSTER-IP (which can be accessed within your cluster from any Pod or Node) and the EXTERNAL-IP of your nginx ingress controller:
$ kubectl get service --namespace ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.0.3.220   35.111.112.113   80:30452/TCP,443:30006/TCP   18h
You can basically host many different websites in your cluster and all of them will be available through this IP. All of them can be served on the default http port 80 (or https 443 in your case). The only difference between them will be the hostname that you pass in the header of your http request.
Since I don't have a domain pointing to this external IP address and can't simply access my website by going to http://foo.bar.com, I need to somehow pass the hostname I'm requesting to the 35.111.112.113 address. It can be done in a few ways:
I installed the ModHeader extension in my Google Chrome browser, which allows me to modify my http request headers and set the hostname I'm requesting to any value I want.
You can do it also using curl as follows:
curl -v http://35.111.112.113 -H 'Host: foo.bar.com' -u 'foo:bar'
You should be prompted for authentication.
If you don't provide -u username:password flag you should get 401 Authorization Required.
Basically that's all.
Let me know if it helped you. Don't hesitate to ask additional questions if something isn't completely clear.
One more thing. If something still doesn't work, you may start by attaching to your nginx ingress controller Pod (check your Pod name first by running kubectl get pods -n ingress-nginx):
kubectl exec -ti -n ingress-nginx nginx-ingress-controller-pod /bin/bash
and checking the content of your /etc/nginx/nginx.conf file. Look for foo.bar.com (or in your case example.com). It should contain similar lines:
auth_basic "Authentication Required - foo";
auth_basic_user_file /etc/ingress-controller/auth/default-ingress-with-auth.passwd;
Then check if the file is present in the indicated location /etc/ingress-controller/auth/default-ingress-with-auth.passwd.
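If you prefer not to scroll through the whole file, a quick grep inside the controller pod works too (a sketch only; the pod name is a placeholder for your actual controller pod):
kubectl exec -n ingress-nginx nginx-ingress-controller-pod -- grep -A 1 auth_basic /etc/nginx/nginx.conf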
One note on your Service definition. The fact that the pypiserver container exposes port 8080 doesn't mean that you need to use this port when accessing it via the ingress. In the Service definition the port exposed by the container is called targetPort. You need to specify it when defining your Service, but the Service itself can expose a completely different port. I defined my Service using the following command:
kubectl expose deployment pypiserver --type=LoadBalancer --port=80 --target-port=8080
Note that the type should be set to NodePort or LoadBalancer. Then in your ingress definition you don't have to use 8080 but 80, which is the port exposed by your pypiserver Service. Note that there is servicePort: 80 in my ingress object/resource definition. Your example.com domain in Cloudflare should point with its A record to your nginx ingress controller LoadBalancer Service IP (kubectl get svc -n ingress-nginx), without specifying any ports.
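For completeness, the Service created by that command would look roughly like this written out as yaml (a sketch; the app: pypiserver selector assumes that is the label your Deployment puts on its pods):
apiVersion: v1
kind: Service
metadata:
  name: pypiserver
spec:
  type: LoadBalancer
  selector:
    app: pypiserver
  ports:
  - port: 80          # port exposed by the Service, referenced as servicePort in the ingress
    targetPort: 8080  # port the pypiserver container actually listens on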

How to configure FIWARE Components to avoid AZF domain not created for application response

Summary of the question: How can we get the FIWARE IdM Keyrock and the FIWARE Authzforce to set up the AZF domains properly, so that we don't get the "AZF domain not created for application XYZ" response?
I'm trying to configure a server with FIWARE Orion, FIWARE PepProxy Wilma, FIWARE IdM Keyrock, FIWARE Authzforce properly.
I have reached the point where the first 3 components work properly and interact with each other, but now I'm trying to add authorization and I get the following error:
AZF domain not created for application.
I've already tried all the solutions presented at the following links, but none of them works:
https://fiware-pep-proxy.readthedocs.io/en/latest/user_guide/#level-2-basic-authorization
https://www.youtube.com/watch?v=coxFQEY0_So
How to configure the Fiware PEP WILMA proxy to use a Keyrock and Orion instance on my own servers
Fiware IDM+AuthZForce+PEP-Proxy-Wilma
Fiware - how to connect PEP proxy to Orion and configure both with HTTPS?
Fiware AuthZForce error: "AZF domain not created for application"
AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application"
https://www.slideshare.net/daltoncezane/integrating-fiware-orion-keyrock-and-wilma
“AZF domain not created for application” AuthZforce
Fiware AuthZForce error: "AZF domain not created for application"
Fiware suitable Components
https://www.slideshare.net/FI-WARE/adding-identity-management-and-access-control-to-your-app-70523086
The official documentation is not usable because it refers to the (maybe) old Python version of the IdM.
In the following you can find the instructions to reproduce my scenario:
Install Orion by using the Docker container
Create a directory on your system on which to work (for example, /home/fiware-orion-docker).
Create a new file called docker-compose.yml inside your directory with the following contents:
mongo:
  image: mongo:3.4
  command: --nojournal
orion:
  image: fiware/orion
  links:
  - mongo
  ports:
  - "1026:1026"
  command: -dbhost mongo -logLevel DEBUG
  dns:
  - 208.67.222.222
  - 208.67.220.220
PAY ATTENTION: without the DNS it will never send notifications!!!
PAY ATTENTION 2 (source): Connections from Docker containers get routed into the (iptables) FORWARD chain; this needs to be configured to allow connections through it. The default is to DROP the connections, so if you use a firewall you have to change it:
sudo nano /etc/default/ufw
Set DEFAULT_FORWARD_POLICY to "ACCEPT":
DEFAULT_FORWARD_POLICY="ACCEPT"
Save the file.
Reload ufw
sudo ufw reload
Within the directory you created, type the following command in the command line: sudo docker-compose up -d.
After a few seconds you should have your Context Broker running and listening on port 1026.
Check that everything works with
curl localhost:1026/version
Install FIWARE IdM Keyrock (used for authentication over the Orion Context Broker):
https://github.com/ging/fiware-idm
WARNING -1: if the next command doesn't work, first run:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu artful stable"
WARNING 0: if you have a firewall: DISABLE IT, otherwise docker-compose will not work
sudo apt-get install docker-compose
mkdir fiware-idm
cd fiware-idm
create docker-compose.yml
nano docker-compose.yml
version: "3.5"
services:
keyrock:
image: fiware/idm:7.6.0
container_name: fiware-keyrock
hostname: keyrock
networks:
default:
ipv4_address: 172.18.1.5
depends_on:
- mysql-db
ports:
- "3000:3000"
environment:
- DEBUG=idm:*
- IDM_DB_HOST=mysql-db
- IDM_HOST=http://localhost:3000
- IDM_PORT=3000
# Development use only
# Use Docker Secrets for Sensitive Data
- IDM_DB_PASS=secret
- IDM_DB_USER=root
- IDM_ADMIN_USER=admin
- IDM_ADMIN_EMAIL=admin#test.com
- IDM_ADMIN_PASS=1234
mysql-db:
restart: always
image: mysql:5.7
hostname: mysql-db
container_name: db-mysql
expose:
- "3306"
ports:
- "3306:3306"
networks:
default:
ipv4_address: 172.18.1.6
environment:
# Development use only
# Use Docker Secrets for Sensitive Data
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_ROOT_HOST=172.18.1.5"
volumes:
- mysql-db:/var/lib/mysql
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mysql-db: ~
sudo docker-compose up -d (This will automatically download the two images and run the IdM Keyrock service. (-d is used to run it in background)).
Now you should be able to access the Identity Management tool through the website http://localhost:3000
username: admin@test.com
password: 1234
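Before moving on, you can also check from the command line that Keyrock is answering (a quick check; the /version endpoint is exposed by Keyrock 7.x):
curl http://localhost:3000/version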
Register a new user and enable it through the interface
Then use the GUI to:
Create an "Organization" (e.g., ORGANIZ1)
Create an "application"
Step 1:
Name: Orion Idm
Description: Orion Idm
URL: http://localhost
Callback URL: http://localhost
Grant Type: Authorization Code, Implicit, Resource Owner Password, Client Credentials, Refresh Token
Provider: newuser
Step 2: leave empty
Step 3: choose "Provider"
Step 4:
click on "OAuth2 Credentials" and take notes of "Client ID" (94480bc9-43e8-4c15-ad45-0bb227e42e63) and "Client Secret" (4f6ye5y7-b90d-473a-3rr7-ea2f6dd43246)
Click on "PEP Proxy" and then on "Register a new PEP Proxy"
take notes of "Application Id" (94480bc9-43e8-4c15-ad45-0bb227e42e63), "Pep Proxy Username" (pep_proxy_dad356d2-dasa-4f95-a9hf-9ab06tccf929), and "Pep Proxy Password" (pep_proxy_a33667ec-57y1-498k-85aa-ef77ue5f6234)
Click on "Authorize" (Users) and authorize all the existing users with both roles (Purchaser and Provider for all the options)
Click on "Authorize" (Organizations) and authorize all the existing organizations with both roles (Purchaser and Provider for all the options)
Install the FIWARE Authzforce
sudo docker pull authzforce/server:latest (latest was 8.1.0 at the moment of writing)
sudo docker run -d -p 8085:8080 --name authzforce_server authzforce/server
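You can check that Authzforce is reachable and list the domains it currently knows about (a sketch; /authzforce-ce/domains is the REST API root of the Authzforce CE server image):
curl http://localhost:8085/authzforce-ce/domains
If Keyrock is correctly configured to talk to Authzforce, a domain for your application should eventually appear in this list.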
Install the FIWARE PEP Proxy Wilma (used to enable https and authentication for Orion):
git clone https://github.com/ging/fiware-pep-proxy.git
cd fiware-pep-proxy
cp config.js.template config.js
nano config.js
var config = {};
// Used only if https is disabled
config.pep_port = 5056;
config.https = undefined;
config.idm = {
  host: 'localhost',
  port: 3000,
  ssl: false
}
config.app = {
  host: 'localhost',
  port: '1026',
  ssl: false // Use true if the app server listens in https
}
config.response_type = 'code';
// Credentials obtained when registering PEP Proxy in app_id in Account Portal
config.pep = {
  app_id: '91180bc9-43e8-4c14-ad45-0bb117e42e63',
  username: 'pep_proxy_dad356d2-dasa-4f95-a9hf-9ab06tccf929',
  password: 'pep_proxy_a33667ec-57y1-498k-85aa-ef77ue5f6234',
  trusted_apps: []
}
// in seconds
config.cache_time = 300;
// list of paths that will not check authentication/authorization
// example: ['/public/*', '/static/css/']
config.public_paths = [];
config.magic_key = undefined;
module.exports = config;
config.authorization = {
  enabled: true,
  pdp: 'authzforce', // idm|authzforce
  azf: {
    protocol: 'http',
    host: 'localhost',
    port: 8085,
    custom_policy: undefined // use undefined to default policy checks (HTTP verb + path).
  }
}
install all the dependencies
npm install
run the proxy
sudo node server
Create a user role:
Reconnect to the IdM http://localhost:3000:
click on your application
click on Manage rules at the top of the page
click on the + button near Roles
Name: "trial"
Save
click on the + button near Permission
Permission Name: trial1
Description: trial1
HTTP action: GET
Resource: version
Save
come back to the application
Click on "Authorize" near "Authorized users"
Assign the "trial" role to your user
Now use PostMan to get a token:
connect to localhost:3000/oauth2/token and send the following parameters
Body:
username: <your Keyrock user email>
password: <your Keyrock password>
grant_type: password
Header:
Content-Type: application/x-www-form-urlencoded
Authorization: BASIC <base64 of "Client ID:Client Secret">
take note of the obtained access_token
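The same token request can be made with curl, which is easier to script (a sketch; CLIENT_ID and CLIENT_SECRET are the OAuth2 credentials noted earlier, and the user/password belong to the Keyrock account you registered):
curl -X POST http://localhost:3000/oauth2/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -u "CLIENT_ID:CLIENT_SECRET" \
  -d "grant_type=password&username=myuser@test.com&password=mypassword"
The access_token field of the JSON response is the value to send in the X-auth-token header below.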
Try to connect to Orion through http://localhost:5056/version with the following parameters:
Header:
X-auth-token: <the access_token obtained above>
You will obtain the following response:
AZF domain not created for application 91180bc9-43e8-4c14-ad45-0bb117e42e63
You appear to have a timing issue with your local setup. More specifically, it appears that docker-compose on your machine is not waiting for Keyrock to be available before the PEP Proxy times out.
There are multiple strategies for dealing with these issues, such as adding a wait in the start-up entrypoint, adding a restart policy within the docker-compose file, amending the infrastructure, or using some third-party script. A good list of strategies can be found in the answer here.
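A minimal sketch of the "wait in the start-up entrypoint" approach, assuming the PEP proxy can reach Keyrock under the hostname keyrock on port 3000, would be something like:
#!/bin/sh
# Block until Keyrock responds, then start the PEP proxy
until curl -s -o /dev/null http://keyrock:3000/version; do
  echo "Waiting for Keyrock..."
  sleep 5
done
node server
Any of the other strategies (restart policies, healthchecks, third-party wait scripts) achieves the same effect of not starting the proxy before the IdM is ready.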

See service hostname from OpenShift CLI

In OpenShift Container Platform v3.11 I am able to see the service hostname from the web console interface by inspecting the service.
In the web console, go to Applications > Services > service-name > Details.
You see the following info:
Selectors: app=nexus3, deploymentconfig=nexus3
Type: ClusterIP
IP: 172.30.154.6
Hostname: nexus3.xm-nexus.svc
Session affinity: None
Is there a way to see the service hostname from the CLI using the oc tool? I haven't been able to find it from reading the docs or online.
Example Hostname: nexus3.xm-nexus.svc
If you run oc get svc you will see the following, but not the hostname:
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
nexus   ClusterIP   172.30.186.244   <none>        3000/TCP   2h
Not directly. The hostname doesn't exist on the service object itself, so you won't see it via the CLI. However, it is just a concatenation of (service-name).(service-namespace).svc. See the docs on DNS for services.
You could template it out via the cli if desired.
oc get svc nexus -o go-template --template='{{.metadata.name}}.{{.metadata.namespace}}.svc{{println}}'
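If you want to double-check that the name actually resolves, you can look it up from any pod in the cluster (a sketch; my-pod is a placeholder for an existing pod whose image ships getent):
oc rsh my-pod getent hosts nexus3.xm-nexus.svc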
Use oc describe service <service-name> -n <namespace>,
e.g. oc describe service nexus3 -n <namespace>
Services are given DNS names like this.
I think the simplest way is:
oc get routes
and get the hostname that you need to access by URL:
NAME          HOST/PORT                                      PATH   SERVICES      PORT   TERMINATION   WILDCARD
demowildfly   demowildfly-swarmdemo2.192.168.42.87.nip.io           demowildfly   8080                 None

Why can't I access my Kubernetes service via its IP?

I have a Kubernetes service on GKE as follows:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: <none>
Selector: app=jupiter
Type: NodePort
IP: 10.11.246.27
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31683/TCP
Endpoints: 10.8.0.33:1337
Session Affinity: None
No events.
I can access the service from a VM directly via one of its endpoints (10.8.0.21:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access 10.11.246.27:80, I get nothing. I've also tried ports 1337 and 31683.
Why can't I access the service via its IP? Do I need a firewall rule or something?
Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver).
Pod IPs on GCE/GKE are managed by GCE Routes, which is more like an "underlay" of all VMs in the network.
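As a quick sanity check (a sketch; it assumes you can schedule a throwaway busybox pod), you can confirm the virtual IP does answer from inside the cluster, where kube-proxy is present:
kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://10.11.246.27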
There are a couple of ways to access non-public services from outside the cluster. Here they are in more detail, but in short:
Create a bastion GCE route for your cluster's services.
Install your cluster's kube-proxy anywhere you want to access the cluster's services.