How to configure ingress to direct traffic to an HTTPS backend ... with Istio ingress - kubernetes-ingress

Hi, I have deployed Elasticsearch in Kubernetes with a self-signed certificate. I want to expose the Elasticsearch URL; I was able to do this with an NGINX ingress, but I have not been successful with Istio. Can anyone explain how to do that?
This is the VirtualService:
apiVersion: networking.istio.io/v1beta1 # restored; assuming the same API version as the Gateway below
kind: VirtualService
metadata:
  name: elasticsearch
  namespace: istio-system
spec:
  hosts:
  - elasticsearch.domain.com
  gateways:
  - monitor-gateway
  http:
  - match:
    - port: 443
    route:
    - destination:
        host: elasticsearch.monitor.svc.cluster.local
        port:
          number: 9200
The Gateway:
# Source: istio-ingress/templates/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: monitor-gateway
  namespace: istio-system
  labels:
    app: istio-ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: istio-ingress
    app.kubernetes.io/version: 1.15.3
    helm.sh/chart: gateway-1.15.3
    istio: ingress
spec:
  selector:
    istio: ingress
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: tpc
      number: 15021
      protocol: TCP
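Note that the 443 server in this Gateway is declared with protocol: HTTP. If the intent is to terminate TLS for elasticsearch.domain.com at the gateway, the server would normally be declared as HTTPS with a credential; a minimal sketch, assuming a TLS secret (hypothetically named elasticsearch-tls here) exists in the gateway's namespace:

  - hosts:
    - elasticsearch.domain.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: elasticsearch-tls # hypothetical secret holding the gateway certificate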

I resolved it by adding the DestinationRule below:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: elasticsearch
  namespace: istio-system
spec:
  host: elasticsearch.monitor.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 9200
      tls:
        clientCertificate: /etc/istio/ingress/ca.cert
        mode: SIMPLE
        privateKey: /etc/istio/ingress/tls.key
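Since the Elasticsearch backend presents a self-signed certificate, SIMPLE TLS origination from the gateway may still fail certificate verification. A minimal sketch of the two usual options, assuming the CA bundle is mounted into the gateway pod (both are standard Istio ClientTLSSettings fields):

      tls:
        mode: SIMPLE
        caCertificates: /etc/istio/ingress/ca.cert # verify the backend against the self-signed CA
        # or, for testing only (assumption, skips upstream certificate verification):
        # insecureSkipVerify: true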

Related

I am trying to create basic path-based routing with an ingress controller and an AKS managed load balancer. I need to create consistent path-based routing.

## Working ingress file ##
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signaler-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.org/websocket-services: "websocket"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - i2adevcluster-dns.westus2.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: i2adevcluster-dns.westus2.cloudapp.azure.com
    http:
      paths:
      - path: /signaler(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3000
      - path: /websocket(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3001
## Want to define a consistent path prefix: /signaler/websocket ##
## Expecting the configuration below to work the same ##
--------------------------------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signaler-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.org/websocket-services: "websocket"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - i2adevcluster-dns.westus2.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: i2adevcluster-dns.westus2.cloudapp.azure.com
    http:
      paths:
      - path: /signaler(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3000
      - path: /signaler/websocket(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3001
Details about the solution I am looking for:
My ingress routes work with the inconsistent paths, but I want to make the paths consistent, with each subpath under the /signaler prefix.
The first (working) configuration is not path-consistent: the WebSocket path should be /signaler/websocket/ instead of /websocket/.
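One detail worth checking (an observation, not a verified fix): nginx.org/websocket-services is an annotation for the NGINX Inc. ingress controller, and it references a service named "websocket" that does not exist in these manifests. With the community ingress-nginx controller (which ingressClassName: nginx and the nginx.ingress.kubernetes.io/* annotations suggest), WebSocket upgrades are proxied automatically, and long-lived connections usually just need longer timeouts, e.g.:

  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" # keep idle WebSocket connections open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"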

Problem with ALB Ingress Controller in redirecting to the right path

I have set up an ALB (Application Load Balancer) using the Ingress Controller (version docker.io/amazon/aws-alb-ingress-controller:v1.1.8) for my AWS EKS cluster (v1.20) running with a Fargate profile.
I can access my service using the load balancer link:
http://5e07dbe1-default-nginxingr-29e9-1260427999.us-east-1.elb.amazonaws.com/
I have 2 different services configured in my Ingress as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-014b302d73097d083
    # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:195725532069:certificate/b6a9e691-b807-4f10-a0bf-0449730ecdf4
    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/load-balancer-attributes: "60"
    # alb.ingress.kubernetes.io/rewrite-target: /
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      # - path: /*
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: ssl-redirect
      #       port:
      #         number: use-annotation
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: "mydocker-svc"
            port:
              number: 8080
Now the problem: if I put /foo at the end of the LB link, nothing happens and I get a 404 Not Found error.
Both my services are fine, with their respective Pods running behind their respective Kubernetes NodePort services, but they are not accessible through the Ingress. If I swap the path of the other service (nginx-service) from /foo to /*, I can then access that one, but it breaks the previous service (mydocker-svc).
Please let me know where my mistake is so that I can fix this issue. Thank you.
ALB Controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=eks-fargate-alb-demo
        - --aws-vpc-id=vpc-0dc46d370e38de475
        - --aws-region=us-east-1
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
      serviceAccountName: alb-ingress-controller
Nginx service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
mydocker-svc:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  name: mydocker-svc
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    eks.amazonaws.com/fargate-profile: fp-default
    run: mydocker
  type: NodePort
status:
  loadBalancer: {}
TargetGroups become unhealthy if an annotation such as alb.ingress.kubernetes.io/target-type: ip is missing from the Kubernetes NodePort service.
You can try this one, which I am using as a reference:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Important note: add health check path annotations at the service level if you plan to use multiple targets in a load balancer
    # alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
  - http:
      paths:
      - path: /app1/*
        backend:
          serviceName: app1-nginx-nodeport-service
          servicePort: 80
      - path: /app2/*
        backend:
          serviceName: app2-nginx-nodeport-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: usermgmt-restapp-nodeport-service
          servicePort: 8095
Read more at: https://www.stacksimplify.com/aws-eks/aws-alb-ingress/kubernetes-aws-alb-ingress-context-path-based-routing/
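One possible cause of the 404 (a suggestion, assuming the controller version above): aws-alb-ingress-controller v1.1.x translates each path into an ALB path-pattern condition, which is matched literally apart from * wildcards, so pathType: Prefix semantics are not applied the way ingress-nginx applies them. A sketch that covers both /foo and its subpaths with the same backend:

      - path: /foo
        pathType: ImplementationSpecific # passed through to the ALB as-is
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80
      - path: /foo/*
        pathType: ImplementationSpecific # wildcard pattern for subpaths
        backend:
          service:
            name: "nginx-service"
            port:
              number: 80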

Istio routing HTTP to HTTPS upstream causes 302

My ingress gateway is at port 80 (HTTP) and routes to an HTTPS destination.
With the following configuration, requesting
http://ingress-gateway.example.com/zzz
gives a 302 and the URL changes to:
https://my-site.example.com/products
Why the 302, and what am I missing?
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port: # Note: I am entering using this port
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port: # Note: I am NOT entering using this port
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: my-credential
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: apps-domain
spec:
  hosts:
  - my-site.example.com
  ports:
  - number: 443
    name: https-my-site
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /zzz
    rewrite:
      uri: /products
    route:
    - destination:
        port:
          number: 443
        host: my-site.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-https-backend
spec:
  host: my-site.example.com
  trafficPolicy:
    tls:
      mode: SIMPLE
      sni: my-site.example.com
You have a rewrite rule pointing to port 443:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /zzz
    rewrite:
      uri: /products
    route:
    - destination:
        port:
          number: 443 # here
        host: my-site.example.com
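A quick way to confirm whether the 302 originates at the upstream site rather than inside Istio (a diagnostic sketch, assuming curl is available and my-site.example.com serves /products):

curl -sI https://my-site.example.com/products
# If the 302 and its Location header already appear here, the redirect is
# issued by my-site.example.com itself, not by the gateway or VirtualService.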

502 Bad Gateway on WorkerNode without the pod

I am trying to understand NGINX Ingress on a K8S cluster.
I set up the nginx-ingress controller based on the instructions here.
My cluster has 3 nodes:
kubernetes-master, kubernetes-node1, kubernetes-node2
I have an IoT pod running with 1 replica (on kubernetes-node1). I have created a ClusterIP service for accessing this pod via REST. Below is the manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: myiotgarden
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ioteventshandler
  namespace: myiotgarden
  labels:
    app: ioteventshandler
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ioteventshandler
  template:
    metadata:
      labels:
        app: ioteventshandler
    spec:
      containers:
      - name: ioteventshandler
        image: 192.168.56.105:5000/ioteventshandler:latest
        resources:
          limits:
            memory: "1024M"
          requests:
            memory: "128M"
        imagePullPolicy: "Always"
---
apiVersion: v1
kind: Service
metadata:
  name: iotevents-service
  namespace: myiotgarden
  labels:
    app: iotevents-service
spec:
  selector:
    app: ioteventshandler
  ports:
  - port: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ioteventshandler-ingress
  namespace: myiotgarden
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kubernetes-master
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node1
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
  - host: kubernetes-node2
    http:
      paths:
      - path: /iotEvents/sonoff
        backend:
          serviceName: iotevents-service
          servicePort: 8080
I have HAProxy running on the master node, which has been configured with both knode1 and knode2 as worker nodes:
frontend http_front
  bind *:80
  stats uri /haproxy?stats
  default_backend http_back

backend http_back
  balance roundrobin
  server worker1 192.168.56.207:80 # server names must be unique within a backend
  server worker2 192.168.56.208:80
When there are 2 replicas of ioteventshandler running, the command below works fine:
curl -kL http://kubernetes-master/iotEvents/sonoff/sonoff11
I get back the response perfectly.
However, when I reduce the replicas to 1, the curl command intermittently returns 502 Bad Gateway. I am assuming this happens when HAProxy forwards the request to knode2, where no replica of ioteventshandler is running.
Question:
Is there a way for the ingress controller on knode2 to forward the request to knode1? And how do we do it?
Thank you.
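One way to keep HAProxy from sending traffic to a node whose ingress controller is not answering is an active health check; a sketch, assuming the controller's health endpoint is reachable on each node at its default healthz port 10254 (adjust to your deployment):

backend http_back
  balance roundrobin
  option httpchk GET /healthz
  server worker1 192.168.56.207:80 check port 10254
  server worker2 192.168.56.208:80 check port 10254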

Accessing TCP port using istio ingress gateway outside the cluster

I have my gateway set up this way:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - hosts:
    - "bitcoin-testnet-zmq.my.net"
    port:
      number: 48832
      protocol: tcp
      name: bitcoin-zmq-testnet
  - hosts:
    - "*"
    port:
      number: 80
      protocol: http
      name: bitcoin-mainnet
The VirtualService is like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bitcoin-testnet-zmq
  namespace: dev
spec:
  hosts:
  - "bitcoin-testnet-zmq.my.net"
  gateways:
  - my-gateway
  tcp:
  - match:
    - port: 48832
    route:
    - destination:
        port:
          number: 48832
          name: bitcoin-zmq-testnet
        host: bitcoinrpc-testnet-dev-service
and my service is as follows:
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-testnet-dev-service
  namespace: dev
spec:
  selector:
    app: bitcoin-node-testnet
  ports:
  - name: bitcoin-testnet
    protocol: TCP
    port: 80
    targetPort: 18332
  - name: bitcoin-zmq-testnet
    protocol: TCP
    port: 48832
    targetPort: 48832
  type: NodePort
When I log in to a pod in the same namespace and run telnet bitcoinrpc-testnet-dev-service 48832, it can connect.
Also, I found that all the other HTTP services can be accessed correctly through the istio-gateway.
I don't see an issue with your configurations; this is in fact what an Istio Gateway is for: allowing external access to your services.
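One thing worth verifying for access from outside the cluster (an assumption; the ingress gateway's own Service is not shown above): a Gateway resource only configures listeners on the Envoy pods, it does not open the port on the istio-ingressgateway Kubernetes Service. For port 48832 to be reachable externally, that Service needs a matching entry, sketched here:

  ports:
  - name: bitcoin-zmq-testnet # added alongside the existing gateway ports
    port: 48832
    targetPort: 48832
    protocol: TCP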