Istio has VirtualService for pods with istio-proxy sidecars, but what about the istio-ingressgateway pod itself? How can I enable retries from the istio-ingressgateway pod?
The use case is that I am seeing 503 errors during downscaling and want the ingressgateway to retry for a specific destination.
https://istio.io/docs/concepts/traffic-management/
Basically, the Istio mesh models ingress communication from an external load balancer through istio-ingressgateway and the logical traffic-management CRDs, which define network routes, authentication/authorization aspects, and service-to-service interactions.
The Istio Gateway, as the major contributor together with the edge istio-ingressgateway service, describes the ports and protocols for HTTP/HTTPS/TCP connections entering the service mesh and the way further routing scenarios are managed; therefore istio-ingressgateway does not itself decide on the network traffic workflow or the target application endpoints.
The retries concept in Istio is actually enclosed in the routing rules composed within a VirtualService resource, which defines how network requests are re-attempted, and with what timeouts, when the initial call fails.
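For illustration only, here is a minimal sketch of how retries could be attached to traffic entering through the ingress gateway; the host, gateway, service names, and retry values are all placeholders:

```yaml
# Hypothetical Gateway bound to the default istio-ingressgateway pods.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway        # default ingress gateway workload label
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-app.example.com"
---
# Hypothetical VirtualService attached to that Gateway with a retry policy.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "my-app.example.com"
  gateways:
  - my-gateway                   # traffic entering via istio-ingressgateway
  http:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local   # placeholder destination
        port:
          number: 8080
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure,refused-stream
```

Because the VirtualService is bound to the Gateway through its gateways field, the retry policy is applied by the Envoy running inside the istio-ingressgateway pod rather than by a sidecar.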
When the istio-ingressgateway Pod starts, it retrieves discovery data about Envoy sidecars from Pilot, converging on the desired state through pilot-agent specific flags.
However, I couldn't reproduce the reported 503 error while down-scaling istio-ingressgateway replicas in Istio 1.3.
Related
In communication between a front-end micro-service and a back-end micro-service, which of the following is the better approach?
1. Define a ClusterIP service for the back-end, give it a DNS name, and send HTTP requests from the client micro-service using that DNS name?
2. Send the request to the Ingress controller, which will know which micro-service to forward the request to?
Both are possible, but it's more convenient and secure to simply use the service name and keep traffic inside the cluster network.
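As a minimal sketch of approach #1 (the name, namespace, and port are assumptions), the front-end would simply call the back-end's cluster DNS name:

```yaml
# Hypothetical back-end Service; the front-end calls
# http://backend.default.svc.cluster.local:8080 from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  type: ClusterIP              # reachable only from inside the cluster
  selector:
    app: backend               # assumed label on the back-end pods
  ports:
  - port: 8080
    targetPort: 8080
```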
I would take approach #2, for several reasons: usage of certificates, a load balancer, and an external DNS name rather than a "local" service name, among other things.
I want to allow/block traffic to a few endpoints in an egress network policy within OpenShift using a pod selector. Is that possible? As per the documentation it is possible per namespace.
It has to be done per pod, as per the requirement, so that each pod can access only its specific endpoints.
I'm currently trying to build my services on Kubernetes using Istio and have trouble whitelisting all the host IPs that are allowed to connect to the MySQL database through the mysql.user table.
I always get the following error after a new deployment:
Host 'X.X.X.X' is not allowed to connect to this MySQL server
Every time I deploy my service a new pod IP appears, and I have to replace the old user entry with the new host IP. I would really like to avoid using '%' for the host.
Is there any way I could just register the node IP instead, so that it stays persistent?
Both Kubernetes and Istio provide network-level protections, so setting the allowed hosts to "all" is safe.
A Kubernetes network policy is probably the best cluster-level match for what you're looking for. You'd set the database itself to accept connections from all addresses, but then would set a network policy to refuse connections except from pods that have a specific set of labels. Since you control this by label, any new pods that have the appropriate set of labels will be automatically granted access without manual changes.
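A rough sketch of such a policy, assuming the database pods are labeled app: mysql and the clients carry role: db-client (both labels and the port are assumptions):

```yaml
# Hypothetical NetworkPolicy: the database accepts connections only from
# pods labeled role=db-client in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-clients
spec:
  podSelector:
    matchLabels:
      app: mysql              # assumed label on the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: db-client     # any pod carrying this label may connect
    ports:
    - protocol: TCP
      port: 3306
```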
Depending on your needs, the default protection given by a ClusterIP service may be enough for you. If a service is ClusterIP but not any other type, it is unreachable from outside the cluster; there is no network path to make it accessible. This is often enough to prevent casual network snoopers from finding your database.
Istio's authorization system is a little bit more powerful and robust at a network level. It can limit calls by the Kubernetes service account of the caller, and it uses TLS certificates rather than just IP addresses to identify the caller. However, it doesn't come enabled by default, and in my limited experience with it, it's very easy to accidentally configure it to do things like block Kubernetes health checks or Prometheus metric probes. If you're satisfied with IP-level security this might be more power than you need.
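If you do go the Istio route on a recent release, a hedged sketch using the AuthorizationPolicy API (Istio 1.4+) could look like this; the labels, namespace, and service-account name are all assumptions:

```yaml
# Hypothetical policy: only workloads running as the "backend" service
# account may reach the MySQL pods on port 3306.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: mysql-allow-backend
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql                                  # assumed label on the database pods
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/backend     # caller's service-account identity
    to:
    - operation:
        ports: ["3306"]
```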
I have a web app that makes HTTP and WebSocket (ws) requests. I am trying to deploy it to OpenShift v3. Hence, I need my requests to be mapped to ports 80 and 90 in the pod. However:
As mentioned in a related thread, it is not possible for a route to expose multiple ports, so I cannot just map requests to different services based on the port.
I tried setting up one route mapping any port to a service with multiple ports, but I get a warning:
Route has no target port, but service has multiple ports. The route
will round robin traffic across all exposed ports on the service
I cannot use different routes for HTTP and ws, because the session cookie obtained for HTTP would not be attached to WebSocket requests.
Solutions (?):
In the related thread an Ingress Controller is suggested, but it seems that it can only be set up by a cluster administrator.
I could use two routes and set a separate cookie for each route, but this does not seem right -- why do I have to use 2 cookies for 2 domains, when essentially there is a single domain with a single authentication?
Switch to token authentication?
So, what am I missing? What would be the optimal way to handle this?
If any WebSocket endpoints are under a unique sub URL path, you could add a second route which has a path definition for the sub URL path that the route applies to. You could then have requests under that sub URL path routed to the alternate port. You will need to have a definition for the alternate port on the service in addition to the primary port, or create a separate service for the alternate port; a sketch follows below. I would need to see your current service definition to be more specific.

It is odd that you would be using ports 80 and 90 on the pod, as that would imply you are running the container as root, which is not normal practice on OpenShift because of the security risks of running any container as root on a container hosting platform.
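As a rough sketch of the two-route idea, assuming non-privileged container ports (8080/8090) and placeholder names:

```yaml
# Hypothetical Service with both ports named so routes can target them individually.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                 # assumed pod label
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: ws
    port: 8090
    targetPort: 8090
---
# Hypothetical second Route matching only the WebSocket sub-path on the same host
# and pointed at the named "ws" port.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-ws
spec:
  host: myapp.example.com      # same host as the main route
  path: /ws                    # only requests under this sub-path use this route
  to:
    kind: Service
    name: myapp
  port:
    targetPort: ws
```

The main route would keep targeting the http port, while requests under /ws on the same host match the more specific path and are sent to the ws port.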
I have multiple kubernetes clusters that have Google powered load balancers (ingress lbs).
So right now, to access my k8s cluster service(s), I just have to hit the public IP given by $ kubectl get service, cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services, those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen, I cannot comment on when). You can, however, create DNS records with the static IP of a load balancer.
If I've understood your problem correctly, you're using an L4 loadbalancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc and continue using the same IP (because you have DNS records for it). In other words, you want a loadbalancer not tied to the service lifecycle. This is possible through an L7 loadbalancer [1] & [2], or by recreating the service with an existing IP [3].
Note that [1] divorces the loadbalancer from service lifetime, but if you take down your entire cluster you will lose the loadbalancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the loadbalancer controller pod, and recreate the same Ingress resource, it will use the existing loadbalancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with kubernetes 1.1, I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] https://github.com/kubernetes/kubernetes/issues/10323
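For [3] on a present-day cluster, a rough sketch would be to reserve a static IP in your cloud project, point a DNS record at it, and pin the Service to it via loadBalancerIP (the address, names, and ports below are placeholders, and the field must be supported by your provider):

```yaml
# Hypothetical Service re-using a pre-reserved static external IP, so the
# DNS record keeps working even if the Service is deleted and recreated.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # previously reserved static IP (placeholder)
  selector:
    app: my-app                  # assumed pod label
  ports:
  - port: 443
    targetPort: 8443
```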