Service working via Cluster-IP, but not via Ingress - kubernetes-ingress

I need help figuring out what I'm doing wrong with my Ingress setup on Minikube with the ingress-nginx controller. The kubectl version is 1.19 and the Minikube version is 1.13.1.
I have two services: one that I created from an image I built myself in .NET Core, and another that I pulled from an example. The example gives me no problems: I can reach it via http://myapp.com/web2. The one I built can be reached directly via its Cluster IP (port 80) in the browser, but can't be reached from the browser via http://myapp.com/datasvc (404 error). Here's a snippet from my Ingress yaml:
- host: myapp.com
  http:
    paths:
      - path: /web2   # works
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
      - path: /datasvc
        pathType: Prefix
        backend:
          service:
            name: datasvc
            port:
              number: 80
And here's what my backends look like:
Rules:
  Host        Path       Backends
  ----        ----       --------
  myapp.com
              /web2      web2:8080 (172.17.0.7:8080)
              /datasvc   datasvc:80 (172.17.0.8:80)
Services:
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
datasvc   ClusterIP   10.100.7.119   <none>        80/TCP           11h
web2      NodePort    10.98.6.48     <none>        8080:31122/TCP   12h
CURL output from curl -H "HOST: myapp.com" localhost/web2 -v:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /web2 HTTP/1.1
> Host: myapp.com
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.19.1
< Date: Sun, 04 Oct 2020 16:16:55 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 61
< Connection: keep-alive
<
Hello, world!
Version: 1.0.0
Hostname: web2-7d85fb54bf-f26p2
* Connection #0 to host localhost left intact
CURL output from curl -H "HOST: myapp.com" localhost/datasvc -v
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /datasvc HTTP/1.1
> Host: myapp.com
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx/1.19.1
< Date: Sun, 04 Oct 2020 16:20:13 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
The only difference I can see between the two is that the example (web2) service uses type: NodePort while mine uses the default (type: ClusterIP). I tried switching my service to NodePort as well, but that made no difference.
I don't know what else to look at diagnostically or where to go from here. I've checked many Medium posts but haven't come across anything that describes my situation. Please let me know if I should provide more information.

Ingress supports either the NodePort or the LoadBalancer service type. A ClusterIP service is available only inside the minikube VM.
For minikube you should use the NodePort service type. To set up ingress on minikube you can follow the official documentation.
To expose your deployment you can use kubectl:
kubectl expose deployment <deployment_name> --type=NodePort --port=<port_number>
To check that the service works correctly, run minikube service list and curl the URL you exposed.
If everything is working correctly you can set up your ingress and add the IP address and host to /etc/hosts.
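For reference, a NodePort Service manifest for the asker's datasvc might look like the sketch below (the selector label app: datasvc is an assumption and must match the deployment's actual pod labels):

```yaml
# Sketch of a NodePort Service for datasvc.
# Assumption: the pods carry the label app: datasvc.
apiVersion: v1
kind: Service
metadata:
  name: datasvc
spec:
  type: NodePort
  selector:
    app: datasvc
  ports:
    - port: 80        # service port referenced by the Ingress
      targetPort: 80  # container port
```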

Related

OpenSSL SSL_connect: SSL_ERROR_SYSCALL with Istio on OpenShift

I'm trying to set up TLS on Istio, as per the Istio docs.
But when I call the service with curl, I get this:
* Connected to my-dataservice.mydomain.net (10.167.46.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: C:/Program Files/Git/mingw64/ssl/certs/ca-bundle.crt
* CApath: none
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my-dataservice.mydomain.net:443
* Closing connection 0
Using Chrome I get ERR_CONNECTION_CLOSED
The virtual service looks like this:
spec:
  gateways:
    - my-dataservice-gateway
  hosts:
    - >-
      my-dataservice.mydomain.net
  http:
    - route:
        - destination:
            host: my-dataservice
And the gateway looks like this:
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - >-
          my-dataservice.mydomain.net
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: istio-ingressgateway-certs
        mode: SIMPLE
If I switch the gateway config to http, everything works (but on http, not https).
port:
  number: 80
  name: http
  protocol: HTTP
The logs on the envoy proxy sidecar show nothing.
The logs on the istio ingress gateway show this:
2022-03-15T16:11:09.201217Z info sds resource:default pushed key/cert pair to proxy
When I examined the istio-ingressgateway-certs secret (which is in the same namespace as the istio ingress gateway), it did not use the secret key names 'cert' and 'key' from the Istio documentation; instead it had the keys 'tls.crt' and 'tls.key', because the secret is of type kubernetes.io/tls. Those key-value pairs were then duplicated in the secret as 'cert' and 'key' respectively. Istio's documentation on creating the secret doesn't use the (apparently) standard key names of TLS secrets, but it should pick up either.
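For reference, a secret of type kubernetes.io/tls always carries the standard key names tls.crt and tls.key; a sketch of such a secret is below (the data values are placeholders):

```yaml
# Sketch of a kubernetes.io/tls secret; the data values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: istio-ingressgateway-certs
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```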

Adding HSTS header to my internal website doesn't work as expected

I have created a website where I am trying to add the HSTS security header via httpd.conf:
<IfModule mod_headers.c>
Header always set Strict-Transport-Security 'max-age=4000; includeSubDomains'
</IfModule>
After adding the above code, I am able to see the Strict-Transport-Security header in my HTTPS response headers:
host> curl -v https://172.21.218.67 --insecure
* About to connect() to 172.21.218.67 port 443 (#0)
* Trying 172.21.218.67... connected
* Connected to 172.21.218.67 (172.21.218.67) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
* subject: ****************************************
* start date: Oct 21 06:42:49 2019 GMT
* expire date: Nov 20 06:42:49 2019 GMT
* common name: Insights
* issuer: *****************************************
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 172.21.218.67
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 21 Oct 2019 10:50:54 GMT
< Server: Apache
< Strict-Transport-Security: max-age=4000; includeSubDomains
< Last-Modified: Mon, 21 Oct 2019 08:58:58 GMT
< ETag: "8f3-59567e4f07362"
< Accept-Ranges: bytes
< Content-Length: 2291
< Content-Type: text/html
But this has no effect on my website in the browser: the browser does not redirect to HTTPS when a user tries to access my website via HTTP.
I also could not see my website listed in Chrome's HSTS checker:
chrome://net-internals/#hsts
Do I need to add any other configuration in order to make it work?
As suggested by IMSoP, my test server was not trusted by the browser, which prevented HSTS from taking effect.
Solved: I made my test server a trusted source for the browser by adding its self-signed certificate to the browser's trust store.
Now HSTS works as expected.
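Worth noting: the Strict-Transport-Security header only tells the browser to upgrade future requests, and browsers ignore it unless it arrives over a trusted HTTPS connection; the very first HTTP-to-HTTPS redirect has to come from the server itself. A minimal Apache sketch (example.com is a placeholder hostname):

```apache
<VirtualHost *:80>
    # Placeholder hostname. The browser only honors HSTS after it has
    # reached the site over trusted HTTPS, so redirect plain HTTP here.
    ServerName example.com
    Redirect permanent / https://example.com/
</VirtualHost>
```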

Ethereum client-go RPC response 403 "invalid host specified"

I'm running ethereum/client-go docker image with the following flags:
docker run -p 8545:8545 ethereum/client-go --rpcapi personal,db,eth,net,web3 --rpc --rpcaddr 0.0.0.0 --rpccorsdomain "*" --rinkeby
This image is running on machine A and I can query the RPC within it. But when I try to query it from machine B I receive the following response:
Request:
curl -X POST http://<machine_A_address>:8545 -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_coinbase","params":[],"id":64}' --verbose
Response:
< HTTP/1.1 403 Forbidden
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Wed, 18 Apr 2018 14:58:44 GMT
< Content-Length: 23
<
invalid host specified
* Connection #0 to host ... left intact
How can I query the ethereum client hosted on machine A from machine B? And where can I find the ethereum client logs so I can debug this?
Adding the --rpcvhosts=* flag solved the issue.
Since --rpcvhosts is deprecated, you now need to specify the flag --http.vhosts=<YOUR_DOMAIN>.
If you need an easy workaround, you can set --http.vhosts=*, but this is bad security practice.
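Putting the two notes together, the original docker command rewritten with the newer --http.* flag names might look like the sketch below (flag spellings should be verified against your geth version; the original --rinkeby flag is omitted here because the Rinkeby testnet has since been retired):

```shell
# Sketch using the newer geth HTTP flags; verify against your geth version.
docker run -p 8545:8545 ethereum/client-go \
  --http --http.addr 0.0.0.0 \
  --http.api personal,db,eth,net,web3 \
  --http.corsdomain "*" \
  --http.vhosts "*"
```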

Connectivity problems between FILAB VMs and Cosmos global instance

I have the same kind of connectivity problem discussed in the question "Cygnus can not persist data on Cosmos global instance". However, I found no solution after reading it.
I have recently deployed two virtual machines in FILAB (both VMs contain Orion Context Broker 0.26.1 and Cygnus 0.11.0).
When I try to persist data on Cosmos via Cygnus, I get the following error message (the same in both VMs):
2015-12-17 19:03:00,221 (SinkRunner-PollingRunner-DefaultSinkProcessor)
[ERROR - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:305)]
Persistence error (The /user/rmartinezcarreras/def_serv/def_serv_path/room1_room
directory could not be created in HDFS. Server response: 503 Service unavailable)
On the other hand, when I fire a request from the command line of either VM, I get the following response:
[root@orionlarge centos]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
* Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?
op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXX
>
* Closing connection #0
* Failure when receiving data from the peer
curl: (56) Failure when receiving data from the peer
Nevertheless, from an external VM (outside FILAB):
[root@dsieBroker orion]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
* Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?
op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXXX
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
< Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-
ID, Authorization
< server: Apache-Coyote/1.1
< set-cookie:
hadoop.auth="u=rmartinezcarreras&p=rmartinezcarreras&t=simple&e=XXXXXX&s=
XXXXhD 8="; Version=1; Path=/
< Content-Type: application/json; charset=utf-8
< transfer-encoding: chunked
< date: Thu, 17 Dec 2015 18:52:46 GMT
< connection: close
< Content-Length: 243
< ETag: W/"f3-NL9+bYJLweyFpoJfNgjQrg"
<
{"FileStatuses":{"FileStatus":
[{"pathSuffix":"def_serv","type":"DIRECTORY","length":0,"owner":
"rmartinezcarreras","group":"rmartinezcarreras","permission":"740",
"accessTime":0,"modificationTime":1450349251833,"blockSize":0,
"replication":0}]}}
* Closing connection #0
I also get good results from my Cosmos account.
How can I solve this? It seems to be a connectivity problem. Could you help me?
Thank you in advance.
Finally, this was a problem with the OAuth2 proxy we are using for authentication and authorization. The underlying Express module it is based on was adding a Content-Length header when a Transfer-Encoding: chunked header was already present. As researched in this other question, that combination is not allowed by the RFC, and it was causing certain fully compliant client implementations to reset the connection.

HHVM inside Docker and MySQL (outside) on host

I set up HHVM inside Docker on a cPanel server with the configuration from:
http://wiki.mikejung.biz/HHVM
Section - "How to run HHVM in a Ubuntu docker container on cPanel"
My Docker configuration is:
"Name": "/sad_wozniak",
"NetworkSettings": {
    "Bridge": "docker0",
    "Gateway": "172.17.42.1",
    "IPAddress": "172.17.0.13",
    "IPPrefixLen": 16,
    "MacAddress": "02:42:ac:11:00:0d",
    "PortMapping": null,
    "Ports": {
        "9000/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "9000"
and in the VirtualHost of the domain I added:
<IfModule mod_proxy_fcgi.c>
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://172.17.0.13:9000/home/username/public_html/
</IfModule>
I checked that it works with curl:
HTTP/1.1 200 OK
Date: Sat, 04 Apr 2015 22:34:52 GMT
Server: Apache
X-Powered-By: HHVM/3.6.1
X-Mod-Pagespeed: 1.9.32.3-4448
Cache-Control: max-age=0, no-cache
Content-Length: 17
Content-Type: text/html; charset=utf-8
But I get a MySQL error:
Error establishing a database connection
The MySQL server outside Docker is permitted to accept remote connections from any host. I'd like the site on the server to reach the MySQL server as "localhost".
MySQL runs on:
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 498 19250361 3206/mysqld
How can I connect HHVM to this MySQL server, which is outside the Docker container?
As the documentation states, you need to add the host address as an alias to the container:
http://docs.docker.com/reference/commandline/cli/#adding-entries-to-a-container-hosts-file
Note: Sometimes you need to connect to the Docker host, which means getting the IP address of the host. You can use the following shell commands to simplify this process:
$ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
$ docker run --add-host=docker:$(hostip) --rm -it debian
After that you should be able to connect from the container to the host's MySQL using the address docker:3306.
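Applied to this setup, the whole flow might look like the sketch below (the image name and MySQL credentials are placeholders):

```shell
# Resolve the Docker host's IP and start the HHVM container with an alias for it.
alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
docker run --add-host=docker:$(hostip) -p 9000:9000 <your-hhvm-image>

# Inside the container, the host's MySQL is then reachable as docker:3306, e.g.:
#   mysql -h docker -P 3306 -u <user> -p
```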