ssl_client_certificate failed 403 forbidden - kubernetes-ingress

I was trying to install my organization's client certificate chain (the matching certificate is installed on all organization users' laptops, so only organization users can access the service) in an ingress. I have created a secret with the command below:
kubectl create secret generic auth-tls-chain --from-file=org_client_chain.pem --namespace=default
secret/auth-tls-chain created
And created my ingress as follows
metadata:
  name: my-new-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-one
            port:
              number: 80
  tls:
  - hosts:
    - example.com
    secretName: echo-tls
But when I try to access my domain, I get a "403 Forbidden" error. I opened the nginx config file and can see the certificate has some issues:
kubectl exec ingress-nginx-controller-5fbf49f7d7-sjvpw -- cat /etc/nginx/nginx.conf
# error obtaining certificate: local SSL certificate default/auth-tls-chain was not found
return 403;
My client certificate chain looks like the one below, in PEM format.
-----BEGIN CERTIFICATE-----
sdfkhdskhflds
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
saflsafhl
sfadfasdf
-----END CERTIFICATE-----
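As a side note, every certificate in a PEM chain like this can be listed with openssl to verify the chain file is intact (a generic sketch, not specific to this file's contents):
openssl crl2pkcs7 -nocrl -certfile org_client_chain.pem | openssl pkcs7 -print_certs -noout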
I tried creating the secret with the following command.
kubectl create secret generic ca-secret --from-file=org_client_chain.pem=org_client_chain.pem
but no luck. Can somebody help me here?
Thanks

As mentioned in the GitHub link, the key inside the secret must be named ca.crt and contain the full Certificate Authority chain.
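A minimal sketch of re-creating the secret with the expected key name (the secret and file names are taken from the question):
kubectl delete secret auth-tls-chain --namespace=default
kubectl create secret generic auth-tls-chain --from-file=ca.crt=org_client_chain.pem --namespace=default
You can confirm the key name with kubectl describe secret auth-tls-chain --namespace=default, which lists the data keys; once ca.crt shows up there, the "local SSL certificate default/auth-tls-chain was not found" error should go away after the controller reloads its configuration.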

Related

OpenSSL SSL_connect: SSL_ERROR_SYSCALL with Istio on OpenShift

I'm trying to set up TLS on Istio, as per the Istio docs.
But when I call the service with curl, I get this:
* Connected to my-dataservice.mydomain.net (10.167.46.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: C:/Program Files/Git/mingw64/ssl/certs/ca-bundle.crt
* CApath: none
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my-dataservice.mydomain.net:443
* Closing connection 0
Using Chrome I get ERR_CONNECTION_CLOSED
The virtual service looks like this:
spec:
  gateways:
  - my-dataservice-gateway
  hosts:
  - >-
    my-dataservice.mydomain.net
  http:
  - route:
    - destination:
        host: my-dataservice
And the gateway looks like this:
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - >-
      my-dataservice.mydomain.net
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: istio-ingressgateway-certs
      mode: SIMPLE
If I switch the gateway config to http, everything works (but on http, not https).
port:
  number: 80
  name: http
  protocol: HTTP
The logs on the envoy proxy sidecar show nothing.
The logs on the istio ingress gateway show this:
2022-03-15T16:11:09.201217Z info sds resource:default pushed key/cert pair to proxy
When I examined the istio-ingressgateway-certs secret (which is in the same namespace as the istio ingress gateway), it did not use the secret key names 'cert' and 'key' from the Istio documentation; instead it had keys 'tls.crt' and 'tls.key', because the secret is of type kubernetes.io/tls. These key-value pairs are duplicated in the secret as 'cert' and 'key' respectively. Istio's documentation on how to create the keys doesn't use the (apparently) standard key names used in TLS secrets, but it should pick up either.
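For reference, a kubernetes.io/tls secret with the standard key names can be created like this (a sketch; the certificate/key file names and the istio-system namespace are assumptions based on a default Istio install):
kubectl create secret tls istio-ingressgateway-certs \
  --cert=my-dataservice.crt \
  --key=my-dataservice.key \
  -n istio-system
With mode: SIMPLE and credentialName pointing at this secret, the gateway should pick it up from the same namespace as the ingress gateway pod.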

Docker Desktop Nginx ingress has no external IP sometimes

I reset my entire Docker Desktop to factory settings and enable Kubernetes.
Then, I run kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml and wait for the ingress to be ready.
Then, I deploy my application, which includes several services and an ingress definition.
The ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  ingressClassName: nginx
  rules:
  - host: test.project.com
    http:
      paths:
      - path: "/.*"
        pathType: "Prefix"
        backend:
          service:
            name: test-frontend
            port:
              number: 80
Checking on the service, I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-frontend ClusterIP 10.104.106.210 <none> 80/TCP 40m
kubectl get services -n ingress-nginx returns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.44.33 <pending> 80:30753/TCP,443:31632/TCP 51m
ingress-nginx-controller-admission ClusterIP 10.97.85.58 <none> 443/TCP 51m
kubectl get ingresses returns
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress nginx test.project.com 80 31m
As you can see, Docker Desktop or the ingress is not properly binding the ingress to localhost, as it usually does. For the last several weeks I have been constantly stopping, restarting, rebuilding and resetting my deployments, services, ingresses, nodes, my computer, and Docker Desktop until it suddenly starts working. I have never been able to find out what actually fixes it; it seems almost random whether it works or not, and when it stops working.
The only interesting thing I can find involves the events of the test-ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 35m (x3 over 42m) nginx-ingress-controller Scheduled for sync
Normal Sync 27m (x2 over 28m) nginx-ingress-controller Scheduled for sync
Normal Sync 7m55s (x2 over 14m) nginx-ingress-controller Scheduled for sync
Edit: It started working again after a restart of my desktop. Leaving this up for any ideas as to how to prevent this or how to fix it faster next time, as this is the 5th or 6th time this has happened.
Maybe try
kubectl expose deployment test-ingress-deployment --type=NodePort --port=8080 --name=test-ingress-service -n demo --dry-run=client -o yaml > mypod-service.yaml
to generate a YAML template for the service. Then create the service by applying that YAML file, and then apply the ingress YAML file.
On Windows 10 this assigns a random NodePort (9999 in my case), so the ingress can be reached at "minikube ip":9999/* URLs. The hostname is not really set anywhere except in the hosts file, and the ingress can be accessed via the IP. An ingress is an entry point to multiple services regardless of namespace, but each service has to be exposed directly. If the hosts file is not updated with the minikube IP and the hostname, the ingress stays stuck at "Scheduled for sync" (see the hosts-file sketch below).
It should also work with a Hyper-V VM:
https://local/hello
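A sketch of that hosts-file step (the IP shown is just example output from minikube ip; use whatever your cluster reports):
minikube ip
# e.g. 192.168.49.2, then add this line to C:\Windows\System32\drivers\etc\hosts:
192.168.49.2 test.project.com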

Using a CA cert for pulling builder image for S2I build

I have a BuildConfig that has the following strategy block:
strategy:
  sourceStrategy:
    from:
      kind: DockerImage
      name: <insecure registry pullspec>
    forcePull: true
    incremental: true
  type: Source
The builder image is coming from a registry that uses a self-signed certificate. How do I tell the build config to either A) use a CA certificate for the registry or B) ignore the certificate errors?
I have tried adding the CA certificate as an opaque secret, and then using pullSecret, but that didn't work:
strategy:
  sourceStrategy:
    forcePull: true
    from:
      kind: DockerImage
      name: <insecure registry pullspec>
    incremental: true
    pullSecret:
      name: <name of opaque secret with ca cert>
  type: Source
I am running this build in an OpenShift 3.11 cluster.
This is actually described in the documentation, which explains how to add your own root CA as an additionalTrustedCA:
Setting up additional trusted certificate authorities for builds
Here are the relevant parts:
Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the registry’s hostname in the hostname[..port] format:
$ oc create configmap registry-cas -n openshift-config \
--from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
--from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
Update the cluster image configuration:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
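To confirm the patch was applied, you can read the cluster image config back, e.g.:
oc get image.config.openshift.io/cluster -o yaml
Note that image.config.openshift.io is an OpenShift 4.x API. On a 3.11 cluster like the one in the question, the usual approach is instead to place the registry CA on each node under /etc/docker/certs.d/<registry hostname>:<port>/ca.crt.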

Basic Auth doesn't work in kubernetes ingress

I have created a pypiserver in my Kubernetes cluster, using the https://hub.docker.com/r/pypiserver/pypiserver Docker image. I need to set up basic auth for this server, so I used this method: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pypiserver
  labels:
    app: pypiserver
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: secret
    ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiservice
          servicePort: 8080
  tls:
  - hosts:
    - example.com
    secretName: secret-tls
But my hostname would be "www.example.com/8080", and I don't see any ingress pod in the Kubernetes cluster. The ingress is running fine, but I don't get an auth prompt for this host. (Also, I have http://<IP address>:8080, which I converted to a domain through Cloudflare.)
Please let me know what I am doing wrong.
I don't know exactly which nginx ingress controller version you are using, but I can share what worked for me. I've reproduced it on my GKE cluster.
I installed my nginx ingress controller following this guide. Basically it came down to running the following commands:
If you're using GKE you need to initialize your user as a
cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
I'm using version 1.13 on my GKE cluster, so this tip also applies in my case:
Tip
If you are using a Kubernetes version previous to 1.14, you need to
change kubernetes.io/os to beta.kubernetes.io/os at line 217 of
mandatory.yaml, see Labels details.
But I dealt with it quite differently. Basically you need your Nodes to have the kubernetes.io/os=linux label, so you can simply label them. The following command will do the job:
kubectl label node --all kubernetes.io/os=linux
Then we're heading to the Provider Specific Steps, which in the case of GKE come down to applying the following yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Then you may want to verify your installation:
To check if the ingress controller pods have started, run the
following command:
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
or simply run:
kubectl get all -n ingress-nginx
It will also tell you if all the required resources are properly deployed.
Next we need to write our ingress (ingress object/resource) containing the basic-auth related annotations. I was following the same tutorial mentioned in your question.
First we need to create our auth file containing a username and a hashed password:
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Once we have it, we need to create a Secret object, which we'll then use in our ingress:
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
Once it is created we can check if everything went well:
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
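If you want to double-check what is stored there, the auth key can be decoded back (a quick sketch):
$ kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 -d
foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0
which is exactly the foo:<hash> line produced by htpasswd.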
Alright, so far so good...
Then we need to create our ingress resource/object.
My ingress-with-auth.yaml file looks slightly different from the one in the instruction: I just added kubernetes.io/ingress.class: nginx to make sure my nginx ingress controller is used rather than the built-in GKE solution:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    kubernetes.io/ingress.class: nginx
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiserver
          servicePort: 80
In your example you may need to add the nginx prefix to your basic-auth related annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: secret
ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
so that they look like this:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: secret
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
First I used the address listed in my ingress resource (it no longer appears there once I added the kubernetes.io/ingress.class: nginx annotation to my ingress definition):
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-with-auth foo.bar.com 80 117m
When I tried to access pypi-server using this IP, it brought me directly to the page without any authentication. It looks like if you don't define the proper ingress class, the default one is used instead, so in practice your ingress definition with the auth-basic details isn't taken into consideration and isn't passed to the nginx ingress controller we installed in one of the previous steps.
So which IP address should be used to access your app? Run the following command, which will show you both the CLUSTER-IP (accessible within your cluster from any Pod or Node) and the EXTERNAL-IP of your nginx ingress controller:
$ kubectl get service --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.0.3.220 35.111.112.113 80:30452/TCP,443:30006/TCP 18h
You can basically host many different websites in your cluster and all of them will be available through this IP. All of them can be available on the default HTTP port 80 (or HTTPS 443 in your case). The only difference between them will be the hostname that you pass in the header of your HTTP request.
Since I don't have a domain pointing to this external IP address and can't simply access my website by going to http://foo.bar.com, I need to somehow pass the hostname I'm requesting to the 35.111.112.113 address. It can be done in a few ways:
I installed the ModHeader extension in my Google Chrome browser, which allows me to modify my HTTP request headers and set the hostname I'm requesting to any value I want.
You can also do it using curl as follows:
curl -v http://35.111.112.113 -H 'Host: foo.bar.com' -u 'foo:bar'
You should be prompted for authentication.
If you don't provide -u username:password flag you should get 401 Authorization Required.
Basically that's all.
Let me know if it helped you. Don't hesitate to ask additional questions if something isn't completely clear.
One more thing. If something still doesn't work, you may start by attaching to your nginx ingress controller Pod (check your Pod name first by running kubectl get pods -n ingress-nginx):
kubectl exec -ti -n ingress-nginx nginx-ingress-controller-pod -- /bin/bash
and checking the content of your /etc/nginx/nginx.conf file. Look for foo.bar.com (or in your case example.com). It should contain similar lines:
auth_basic "Authentication Required - foo";
auth_basic_user_file /etc/ingress-controller/auth/default-ingress-with-auth.passwd;
Then check if the file is present in the indicated location /etc/ingress-controller/auth/default-ingress-with-auth.passwd.
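For example (a sketch; substitute your actual controller Pod name):
kubectl exec -n ingress-nginx nginx-ingress-controller-pod -- cat /etc/ingress-controller/auth/default-ingress-with-auth.passwd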
One note about your Service definition. The fact that the pypiserver container exposes port 8080 doesn't mean that you need to use this port when accessing it via the ingress. In a Service definition the port exposed by the container is called targetPort. You need to specify it when defining your Service, but the Service itself can expose a completely different port. I defined my Service using the following command:
kubectl expose deployment pypiserver --type=LoadBalancer --port=80 --target-port=8080
Note that the type should be set to NodePort or LoadBalancer. Then in your ingress definition you don't have to use 8080 but 80, which is the port exposed by your pypiserver Service. Note that there is servicePort: 80 in my ingress object/resource definition. Your example.com domain in Cloudflare should point with its A record to your nginx ingress controller's LoadBalancer Service IP (kubectl get svc -n ingress-nginx), without specifying any ports.
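For reference, the kubectl expose command above generates roughly the following Service manifest (a sketch; the app: pypiserver selector is an assumption about how the deployment labels its pods):
apiVersion: v1
kind: Service
metadata:
  name: pypiserver
spec:
  type: LoadBalancer
  selector:
    app: pypiserver
  ports:
  - port: 80         # the port the Service exposes; matches servicePort in the ingress
    targetPort: 8080 # the port the pypiserver container actually listens on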

Problems with system:admin login after changing to HTPasswd Identity Provider in Openshift Origin

Wanting to switch to the HTPasswd identity provider, I have updated master-config.yaml to look like this:
identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /path/to/users.htpasswd
I'm starting the cluster with:
oc cluster up --host-data-dir=/opt/openshift_data --host-config-dir=/opt/openshift_conf --use-existing-config
but when I try to log in with the system:admin user, this happens:
oc login -u system:admin
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login failed (401 Unauthorized)
You must obtain an API token by visiting https://:8443/oauth/token/request
I got this error when I changed the authentication provider of my OpenShift cluster, having already logged in as the admin user with the old authentication provider settings.
I had to add the mappingMethod: add option to my configuration so it could map the existing user.
identityProviders:
- challenge: true
  login: true
  mappingMethod: add
  name: my_htpasswd_provider
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /var/openshift/users.htpasswd
This is the OpenShift documentation URL:
https://docs.openshift.com/enterprise/3.2/install_config/configuring_authentication.html#mapping-identities-to-users
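For completeness, the users.htpasswd file referenced above can be created or updated with the htpasswd utility (a sketch; the username admin is just an example):
htpasswd -c -b /var/openshift/users.htpasswd admin <password>
(-c creates the file, so omit it when adding further users.)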
Hope this helps