OpenSSL SSL_connect: SSL_ERROR_SYSCALL with Istio on OpenShift

I'm trying to set up TLS on Istio, as per the Istio docs.
But when I call the service with curl, I get this:
* Connected to my-dataservice.mydomain.net (10.167.46.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: C:/Program Files/GitWP/mingw64/ssl/certs/ca-bundle.crt
* CApath: none
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my-dataservice.mydomain.net:443
* Closing connection 0
Using Chrome, I get ERR_CONNECTION_CLOSED.
The virtual service looks like this:
spec:
  gateways:
    - my-dataservice-gateway
  hosts:
    - my-dataservice.mydomain.net
  http:
    - route:
        - destination:
            host: my-dataservice
And the gateway looks like this:
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - my-dataservice.mydomain.net
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: istio-ingressgateway-certs
        mode: SIMPLE
If I switch the gateway config to http, everything works (but on http, not https).
port:
  number: 80
  name: http
  protocol: HTTP
The logs on the envoy proxy sidecar show nothing.
The logs on the istio ingress gateway show this:
2022-03-15T16:11:09.201217Z info sds resource:default pushed key/cert pair to proxy
When I examined the istio-ingressgateway-certs secret (which is in the same namespace as the istio ingress gateway), I found that instead of using the secret key names 'cert' and 'key' as per the Istio documentation, it had the keys 'tls.crt' and 'tls.key', because the secret is of type kubernetes.io/tls. The same key/value pairs are duplicated in the secret as 'cert' and 'key' respectively. Istio's documentation on creating the secret doesn't use the (apparently) standard key names of a TLS secret, but the gateway should pick up either.
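For reference, this is the general shape of a kubernetes.io/tls secret as produced by kubectl create secret tls; the istio-system namespace and the placeholder values are illustrative, and the secret must live in the same namespace as the ingress gateway:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-ingressgateway-certs
  namespace: istio-system   # assumed: same namespace as the ingress gateway
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate chain>
  tls.key: <base64-encoded private key>
```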

Related

ssl_client_certificate failed 403 forbidden

I was trying to install my organization's client certificate chain in an ingress (the matching certificate is installed on all organization user laptops, so only organization users can access the service). I created a secret with the command below:
kubectl create secret generic auth-tls-chain --from-file=org_client_chain.pem --namespace=default
secret/auth-tls-chain created
And created my ingress as follows:
metadata:
  name: my-new-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: hello-one
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: echo-tls
But when I try to access my domain, I get a "403 Forbidden" error. I opened the nginx config file and can see the certificate has an issue:
kubectl exec ingress-nginx-controller-5fbf49f7d7-sjvpw cat /etc/nginx/nginx.conf
# error obtaining certificate: local SSL certificate default/auth-tls-chain was not found
return 403;
My client certificate chain looks like the one below in .PEM format.
-----BEGIN CERTIFICATE-----
sdfkhdskhflds
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
saflsafhl
sfadfasdf
-----END CERTIFICATE-----
I tried creating the secret with the following command.
kubectl create secret generic ca-secret --from-file=org_client_chain.pem=org_client_chain.pem
but no luck. Can somebody help me here?
Thanks
As mentioned in the GitHub link, you must name the certificate file ca.crt, and it must contain the full Certificate Authority chain.
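In other words, recreating the secret with kubectl create secret generic auth-tls-chain --from-file=ca.crt=org_client_chain.pem --namespace=default should produce a secret shaped like the sketch below (the key name ca.crt is what the nginx ingress controller looks for):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: auth-tls-chain
  namespace: default
type: Opaque
data:
  ca.crt: <base64 of org_client_chain.pem>   # key must be named ca.crt
```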

Graphhopper server not accessing on a same network

I have successfully deployed GraphHopper on my local server. The problem is that I can access the server using localhost on the server itself, but I'm unable to access it using the server IP, either locally or from other machines on the same network. On the same port it works if I use Docker, but not otherwise. Here is my configuration:
# Dropwizard server configuration
server:
  applicationConnectors:
    - type: http
      port: 8989
  requestLog:
    appenders: []
  adminConnectors:
    - type: http
      port: 8991
You need to bind the host: you may need to change localhost to the IP address of your server. Avoid using 0.0.0.0.
server:
  application_connectors:
    - type: http
      port: 8989
      bind_host: localhost
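For example, to accept connections from other machines on the LAN, the same block would use the server's own address instead of localhost (the IP below is hypothetical):

```yaml
server:
  application_connectors:
    - type: http
      port: 8989
      bind_host: 192.168.1.50   # hypothetical LAN IP of the server
```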

Docker Desktop Nginx ingress has no external IP sometimes

I reset my entire Docker Desktop to factory settings and enabled Kubernetes.
Then, I run kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml and wait for the ingress to be ready.
Then, I deploy my application, which includes several services and an ingress definition.
The ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  ingressClassName: nginx
  rules:
    - host: test.project.com
      http:
        paths:
          - path: "/.*"
            pathType: "Prefix"
            backend:
              service:
                name: test-frontend
                port:
                  number: 80
Checking on the service, I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-frontend ClusterIP 10.104.106.210 <none> 80/TCP 40m
kubectl get services -n ingress-nginx returns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.44.33 <pending> 80:30753/TCP,443:31632/TCP 51m
ingress-nginx-controller-admission ClusterIP 10.97.85.58 <none> 443/TCP 51m
kubectl get ingresses returns
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress nginx test.project.com 80 31m
As you can see, Docker Desktop or the Ingress is not properly binding the ingress to localhost, as it usually does. What I've been doing for the last several weeks is constantly stopping, restarting, rebuilding and resetting my deployments, services, ingresses, nodes, my computer, and Docker Desktop until it suddenly starts working. I have never been able to find out what actually fixes it; it seems almost random whether it works or not, and when it stops working.
The only interesting thing I can find involves the events of the test-ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 35m (x3 over 42m) nginx-ingress-controller Scheduled for sync
Normal Sync 27m (x2 over 28m) nginx-ingress-controller Scheduled for sync
Normal Sync 7m55s (x2 over 14m) nginx-ingress-controller Scheduled for sync
Edit: It started working again after a restart of my desktop. Leaving this up for any ideas as to how to prevent this or how to fix it faster next time, as this is the 5th or 6th time this has happened.
Maybe try
kubectl expose deployment test-ingress-deployment --type=NodePort --port=8080 --name=test-ingress-service -n demo --dry-run=client -o yaml > mypod-service.yaml
to generate the YAML template for the service. Then start the service by applying that YAML file, and then apply the ingress YAML file.
On Windows 10 this will assign a random port, e.g. 9999, which can then be accessed at the "minikube ip":9999/* URL.
The host name is not really set anywhere except in the hosts file. The ingress can be accessed via the IP. An ingress is an endpoint giving access to multiple services regardless of namespace, but the services have to be exposed directly.
If the hosts file is not updated with the minikube IP and the host name, the ingress stays stuck in "Scheduled for sync".
It should also work with Hyper-V, e.g. https://local/hello

Service working via Cluster-IP, but not via Ingress

I need help figuring out what I'm doing wrong with my Ingress setup Minikube/ingress-nginx-controller. Kubectl version is 1.19. Minikube version is 1.13.1
I have 2 services: one that I created from an image I built myself in .NET Core, and another that I pulled from an example. The example gives me no problems: I can reach it via http://myapp.com/web2. The one I built can be reached directly via its Cluster IP (port 80) in the browser, but can't be reached from the browser via http://myapp.com/datasvc (404 error). Here's a snippet from my Ingress yaml:
- host: myapp.com
  http:
    paths:
      - path: /web2 # works
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
      - path: /datasvc
        pathType: Prefix
        backend:
          service:
            name: datasvc
            port:
              number: 80
And here's what my backends look like:
Rules:
Host Path Backends
---- ---- --------
myapp.com
/web2 web2:8080 172.17.0.7:8080)
/datasvc datasvc:80 172.17.0.8:80)
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
datasvc ClusterIP 10.100.7.119 <none> 80/TCP 11h
web2 NodePort 10.98.6.48 <none> 8080:31122/TCP 12h
CURL output from curl -H "HOST: myapp.com" localhost/web2 -v:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /web2 HTTP/1.1
> Host: myapp.com
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.19.1
< Date: Sun, 04 Oct 2020 16:16:55 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 61
< Connection: keep-alive
<
Hello, world!
Version: 1.0.0
Hostname: web2-7d85fb54bf-f26p2
* Connection #0 to host localhost left intact
CURL output from curl -H "HOST: myapp.com" localhost/datasvc -v
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /datasvc HTTP/1.1
> Host: myapp.com
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx/1.19.1
< Date: Sun, 04 Oct 2020 16:20:13 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
The only difference I can see between the 2 is the fact that the example (web2) service uses type: NodePort and mine uses the default (type: ClusterIP). I tried doing the same with my service, but that made no difference.
I don't know what else to look at diagnostically or where to go from here. I've checked out many Medium posts but haven't come across anything that describes my situation. Please let me know if I should provide more information.
Ingress supports either the NodePort or LoadBalancer service type. A ClusterIP service will be available only inside the minikube VM.
For minikube you should use the NodePort service type configuration. To set up ingress on minikube you can follow the official documentation.
To expose your deployment you can use kubectl:
kubectl expose deployment <deployment_name> --type=NodePort --port=<port_number>
To check if the service is working correctly, you can run minikube service list and curl the URL that you have exposed.
If everything is working correctly, you can set up your ingress and add the IP address and host to /etc/hosts.
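For example, the /etc/hosts entry simply pairs the minikube IP with the ingress host name (both values below are hypothetical):

```
192.168.49.2  myapp.com    # use the output of `minikube ip`
```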

Basic Auth doesn't work in kubernetes ingress

I have created pypiserver in kubernetes cluster, I have used https://hub.docker.com/r/pypiserver/pypiserver docker image. I need to create basic auth for the server which I created. I used this method https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pypiserver
  labels:
    app: pypiserver
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: secret
    ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: pypiservice
              servicePort: 8080
  tls:
    - hosts:
        - example.com
      secretName: secret-tls
But my host name would be "www.example.com/8080", and I don't see that the ingress has any pod in the kubernetes cluster. The ingress is running fine, but I don't get an auth prompt for this host. (Also, I have http://<IP address>:8080, which I converted to a domain through Cloudflare.)
Please let me know what am I doing wrong?
I don't know exactly which nginx ingress controller version you have, but I can share what worked for me. I've reproduced it on my GKE cluster.
I installed my nginx ingress controller following this guide. Basically it came down to running the following commands:
If you're using GKE you need to initialize your user as a
cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
I'm using 1.13 version on my GKE so this tip is also applied in my case:
Tip
If you are using a Kubernetes version previous to 1.14, you need to
change kubernetes.io/os to beta.kubernetes.io/os at line 217 of
mandatory.yaml, see Labels details.
But I dealt with it quite differently. Basically you need your Nodes to have kubernetes.io/os=linux label so you can simply label them. Following command will do the job:
kubectl label node --all kubernetes.io/os=linux
Then we're heading to Provider Specific Steps which in case of GKE came down to applying the following yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Then you may want to verify your installation:
To check if the ingress controller pods have started, run the
following command:
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
or simply run:
kubectl get all -n ingress-nginx
It will also tell you if all the required resources are properly deployed.
Next we need to write our ingress (ingress object/resource) containing basic-auth related annotations. I was following same tutorial as mentioned in your question.
First we need to create our auth file containing username and hashed password:
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Once we have it, we need to create a Secret object which then we'll use in our ingress:
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
Once it is created we can check if everything went well:
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
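As a further sanity check, you can base64-decode the auth value locally; it should decode back to the foo user's htpasswd entry that was fed into the secret:

```shell
# Decode the data.auth field of the basic-auth secret
echo 'Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK' | base64 -d
# prints: foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0
```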
Alright, so far so good...
Then we need to create our ingress resource/object.
My ingress-with-auth.yaml file looks slightly different than the one in the instruction, namely I just added kubernetes.io/ingress.class: nginx to make sure my nginx ingress controller is used rather than built-in GKE solution:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    kubernetes.io/ingress.class: nginx
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: pypiserver
              servicePort: 80
In your example you may need to add nginx prefix in your basic-auth related annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: secret
ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
so it looks like this:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: secret
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
First I used the address listed in my ingress resource (it no longer appears there once I added the kubernetes.io/ingress.class: nginx annotation to my ingress definition):
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-with-auth foo.bar.com 80 117m
When I tried to access pypi-server using this IP, it brought me directly to the page without requiring any authentication. It looks like if you don't define the proper ingress class, the default one is used instead, so in practice your ingress definition with its basic-auth details isn't taken into consideration and isn't passed to the nginx ingress controller we installed in one of the previous steps.
So what IP address should be used to access your app? Run the following command, which will show you both the CLUSTER-IP (accessible within your cluster from any Pod or Node) and the EXTERNAL-IP of your nginx ingress controller:
$ kubectl get service --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.0.3.220 35.111.112.113 80:30452/TCP,443:30006/TCP 18h
You can basically host many different websites in your cluster and all of them will be available through this IP. All of them can be available on default http 80 port (or https 443 in your case). The only difference between them will be the hostname that you pass in http header of your http request.
Since I don't have a domain pointing to this external IP address and can't simply access my website by going to http://foo.bar.com, I need to somehow pass the hostname I'm requesting from the 35.111.112.113 address. It can be done in a few ways:
I installed the ModHeader extension in my Google Chrome browser, which allows me to modify my http request headers and set the hostname I'm requesting to any value I want.
You can do it also using curl as follows:
curl -v http://35.111.112.113 -H 'Host: foo.bar.com' -u 'foo:bar'
You should be prompted for authentication.
If you don't provide -u username:password flag you should get 401 Authorization Required.
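Under the hood, the -u flag just adds an Authorization: Basic header containing base64(user:password), which you can compute yourself:

```shell
# Build the Basic auth token that curl would send for -u 'foo:bar'
token=$(printf 'foo:bar' | base64)
echo "Authorization: Basic $token"
# prints: Authorization: Basic Zm9vOmJhcg==
```

Passing that header explicitly (curl -H "Authorization: Basic Zm9vOmJhcg==" …) is equivalent to using -u 'foo:bar'.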
Basically that's all.
Let me know if it helped you. Don't hesitate to ask additional questions if something isn't completely clear.
One more thing. If something still doesn't work you may start from attaching to your nginx ingress controller Pod (check your Pod name first by running kubectl get pods -n ingress-nginx):
kubectl exec -ti -n ingress-nginx nginx-ingress-controller-pod /bin/bash
and checking the content of your /etc/nginx/nginx.conf file. Look for foo.bar.com (or in your case example.com). It should contain similar lines:
auth_basic "Authentication Required - foo";
auth_basic_user_file /etc/ingress-controller/auth/default-ingress-with-auth.passwd;
Then check if the file is present in the indicated location /etc/ingress-controller/auth/default-ingress-with-auth.passwd.
One note on your Service definition. The fact that the pypiserver container exposes port 8080 doesn't mean that you need to use this port when accessing it via the ingress. In a Service definition, the port exposed by the container is called targetPort. You need to specify it when defining your Service, but the Service itself can expose a completely different port. I defined my Service using the following command:
kubectl expose deployment pypiserver --type=LoadBalancer --port=80 --target-port=8080
Note that the type should be set to NodePort or LoadBalancer. Then in your ingress definition you don't have to use 8080 but 80, which is the port exposed by your pypiserver Service; note the servicePort: 80 in my ingress object/resource definition. Your example.com domain in Cloudflare should point, with its A record, to your nginx ingress controller's LoadBalancer Service IP (kubectl get svc -n ingress-nginx) without specifying any ports.
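The kubectl expose command above is roughly equivalent to this Service manifest (the selector is an assumption based on typical deployment labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pypiserver
spec:
  type: LoadBalancer
  selector:
    app: pypiserver      # assumed pod label from the deployment
  ports:
    - port: 80           # port the Service exposes; matches servicePort in the ingress
      targetPort: 8080   # port the pypiserver container listens on
```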