I am running EKS in a private subnet and thus am unable to create an internet-facing load balancer, although I was able to create an internal load balancer.
Is there any way I can create a load balancer (perhaps manually) in a public subnet and point it to the pods running on EKS in the private subnet?
I was thinking of chaining load balancers, with an external load balancer pointing to the internal one, but that is not possible either, because the internal load balancer's IP address is a reserved (private) IP.
Is there some other way to route traffic from the internet to the pods?
I had the same issue, and it was because I had not tagged the VPC subnets properly:
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
I had to add the tag key kubernetes.io/cluster/{eks-cluster-name} with value shared to the VPC subnets.
Then you can create a load balancer using a Service of type LoadBalancer:
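For an internet-facing load balancer, the public subnets also need an ELB role tag in addition to the cluster tag. A sketch of the full tag set, with the cluster name as a placeholder (tag keys as documented in the EKS networking guide):

```yaml
# Subnet tags for EKS load balancer placement ("my-eks-cluster" is a placeholder)
kubernetes.io/cluster/my-eks-cluster: shared   # all subnets used by the cluster
kubernetes.io/role/elb: "1"                    # public subnets: internet-facing LBs
kubernetes.io/role/internal-elb: "1"           # private subnets: internal LBs
```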
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: helloworld
  type: LoadBalancer
This might help during the service creation: https://blog.giantswarm.io/load-balancer-service-use-cases-on-aws/
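If the load balancer still comes up with the wrong scheme, the in-tree AWS cloud provider also honors a Service annotation that controls it. A sketch (the annotation name is from the Kubernetes AWS provider docs; accepted values vary by Kubernetes version, so treat this as a starting point):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  annotations:
    # Omit this annotation for an internet-facing ELB;
    # set it to "true" to force an internal one.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: helloworld
```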
I am trying to create basic path-based routing with an ingress behind an AKS managed load balancer. The trouble is figuring out how to route from the load balancer to the ingress controller.
Here is my Ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cpr
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /green/
        pathType: Prefix
        backend:
          service:
            name: nginx-green-clusterip-service
            port:
              number: 80
      - path: /red/
        pathType: Prefix
        backend:
          service:
            name: nginx-red-clusterip-service
            port:
              number: 80
As you can see, the ingress routes traffic to the appropriate app based on the incoming path.
But how do I get this connected to the managed load balancer?
apiVersion: v1
kind: Service
metadata:
  name: loadbal-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ingress-cpr
I know the line "app: ingress-cpr" isn't correct. But what do I have to do so that the load balancer forwards to the ingress controller?
Thanks in advance,
Jake.
In the Service manifest, the selector should point to the backend of the ingress. In this particular case, instead of ingress-cpr, the Service manifest's selector should reference either of the two backends (nginx-green-clusterip-service or nginx-red-clusterip-service). Any traffic arriving via the external IP of the managed LB on port 80 should then be routed to one of the backends defined in the ingress.
There is also a Microsoft example of creating a basic ingress controller in AKS.
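Another common pattern, for what it's worth, is to have the LoadBalancer Service select the ingress controller pods themselves, so that the Ingress rules decide which ClusterIP backend receives each path. A sketch, assuming the controller's pods carry the label app: nginx-ingress-controller (the label name is an assumption; check your controller's pod template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbal-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-ingress-controller   # hypothetical label on the controller pods
```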
Currently I am using Oracle Cloud to host an Oracle Kubernetes Engine (OKE) cluster managed by Rancher. I also have an Oracle MySQL DB that is outside of the cluster.
The Kubernetes cluster and DB instance are on the same VCN and subnet, and in the same compartment.
The DB instance does not have an external IP, but it has an internal IP.
I have deployed an Endpoints object and a ClusterIP Service in an effort to expose the DB instance to the application.
apiVersion: v1
kind: Service
metadata:
  name: mysql-dev
  namespace: development
spec:
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-dev
  namespace: development
subsets:
- addresses:
  - ip: <DB INTERNAL IP>
  ports:
  - port: 3306
In my application properties file I referenced the Service:
datasource.dev.db=dev
datasource.dev.host=mysql-dev
datasource.dev.username=<USERNAME>
datasource.dev.password=<PASSWORD>
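One thing to check: if the application pods run in a namespace other than development, the short name mysql-dev won't resolve. Standard cluster DNS (assuming the default cluster.local domain) gives the fully qualified form:

```
datasource.dev.host=mysql-dev.development.svc.cluster.local
```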
I can't seem to get my application to communicate with the db.
Any help would be much appreciated!
It looks like the MySQL version referenced was not compatible with this version of OKE.
I updated the MySQL version and it is working well.
I would like to capture the external IP addresses of clients visiting my application. I am using Kubernetes on AWS with kops. The ingress setup is Voyager-configured HAProxy, exposed through a LoadBalancer Service.
I configured HAProxy through Voyager to add the X-Forwarded-For header by using the ingress.appscode.com/default-option: '{"forwardfor": "true"}' annotation.
The issue is that when I test, the header comes through with an internal IP address of one of my Kubernetes nodes, rather than the external IP as desired.
I'm not sure what load balancer Voyager is using under the covers; there's no associated pod, just one for the ingress controller.
kubectl describe svc voyager-my-app outputs
Name:                     <name>
Namespace:                <namespace>
Labels:                   origin=voyager
                          origin-api-group=voyager.appscode.com
                          origin-name=<origin-name>
Annotations:              ingress.appscode.com/last-applied-annotation-keys:
                          ingress.appscode.com/origin-api-schema: voyager.appscode.com/v1beta1
                          ingress.appscode.com/origin-name: <origin-name>
Selector:                 origin-api-group=voyager.appscode.com,origin-name=<origin-name>,origin=voyager
Type:                     LoadBalancer
IP:                       100.68.184.233
LoadBalancer Ingress:     <aws_url>
Port:                     tcp-443  443/TCP
TargetPort:               443/TCP
NodePort:                 tcp-443  32639/TCP
Endpoints:                100.96.3.204:443
Port:                     tcp-80  80/TCP
TargetPort:               80/TCP
NodePort:                 tcp-80  30263/TCP
Endpoints:                100.96.3.204:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Typically with Kubernetes ingresses, there are a couple of relevant settings:
xff_num_trusted_hops, which specifies the number of hops that are "trusted", i.e., internal. This way you can distinguish between internal and external IP addresses.
You'll also want to make sure you set externalTrafficPolicy: Local on your load balancer Service (you didn't specify what your LB is); with the default Cluster policy, traffic can be SNAT'd to a node IP, which is exactly the symptom you describe.
Note that I'm mostly familiar with Ambassador (built on Envoy Proxy), which does this by default.
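As a sketch, the policy change on the generated Service would look like this (this assumes you can patch or regenerate the Voyager-created Service; Local preserves the client source IP, at the cost of only routing to nodes that have a local endpoint):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: voyager-my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # default is Cluster, which can SNAT to a node IP
  ports:
  - port: 443
    targetPort: 443
  - port: 80
    targetPort: 80
```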
How can a service that does not use HTTP/S be exposed in OpenShift 3.11 or 4.x?
I think routes only support HTTP/S traffic.
I have read about using the ExternalIP configuration for Services, but that makes operating the cluster more complicated and static compared to routes/ingress.
For example, the NGINX ingress controller allows it with special configuration: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
What are the options in OpenShift 3.11 or 4.x?
Thank you.
There is a section in the official OpenShift documentation for this called Getting Traffic Into the Cluster.
The recommendation, in order of preference, is:
- If you have HTTP/HTTPS, use a router.
- If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use a router.
- Otherwise, use a load balancer, an external IP, or a NodePort.
NodePort exposes the Service on each node's IP at a static port (30000-32767).
You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  ports:
  - name: "8080"
    protocol: "TCP"
    port: 8080
    targetPort: 80
    nodePort: 30000
  selector:
    labelName: targetname
We have an issue where connecting to AWS RDS inside an Istio service mesh results in upstream connect error or disconnect/reset before headers.
Our egress rule is as below:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    service: <RDS End point>
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
  - port: 3306
    protocol: https
The connection to MySQL works fine with a standalone MySQL on EC2. The connection to AWS RDS works fine without Istio. The problem only occurs inside the Istio service mesh.
We are using Istio with mutual TLS disabled.
The protocol in your EgressRule definition should be tcp, since MySQL is not an HTTP-based protocol. The service should contain the IP address, or a range of IP addresses in CIDR notation, that the RDS endpoint resolves to.
Alternatively, you can use the --includeIPRanges flag of istioctl kube-inject to specify which IP ranges are handled by Istio. Istio will not interfere with the not-included IP addresses and will just allow the traffic to pass through.
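Put together, a corrected rule might look like the sketch below (the CIDR is a placeholder; you'd substitute the range your RDS endpoint actually resolves to):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    # Placeholder CIDR covering the RDS endpoint's IPs
    service: 172.31.0.0/16
  ports:
  - port: 3306
    protocol: tcp   # MySQL is a plain TCP protocol, not http/https
```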
References:
https://istio.io/latest/blog/2018/egress-tcp/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services