Kubernetes on GCE - google-compute-engine

I am trying to set up Kubernetes on GCE. Let's say there are 20 minions (nodes) in the cluster, and two services are deployed with type LoadBalancer, each backed by 2 replicas. K8s will basically put 2 pods on two different minions per service. My question is: would the rest of the minions, which are not running any of these pods, also get the iptables rules in the KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST chains for these two services? At least that is my observation, but I would like to confirm whether this is just how Kubernetes works on GCE or how K8s behaves irrespective of where it is deployed. Is the reason that any service should be reachable from any minion, no matter whether that minion runs the service's pods or not? Let me know if there is a better community for this question.

Would the rest of the minions which are not running any pods also get the iptables rules in the KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST chains for these two services?
Yes. The load balancer can forward external traffic to any node in the cluster, so every node needs to be able to receive traffic for a service and forward it to the appropriate pod.
I would like to confirm whether this is just how Kubernetes works on GCE or how K8s behaves irrespective of where it is deployed.
The iptables rules for services within the cluster are the same regardless of where Kubernetes is deployed. Making a service externally accessible differs slightly depending on where you deploy your cluster (e.g. on AWS you'd create the service as type NodePort instead of type LoadBalancer), so the iptables rules for externalized services can vary a bit.
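As a hedged way to confirm this yourself, you can list the portal chains on any minion, including one that hosts none of the service's pods; these chain names belong to the older userspace kube-proxy, and the exact rule format varies by version:
# List the portal chains that kube-proxy maintains in the nat table.
sudo iptables -t nat -L KUBE-PORTALS-CONTAINER -n
sudo iptables -t nat -L KUBE-PORTALS-HOST -n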

Related

Which URL/IP to use when accessing Kubernetes nodes on Rancher?

I am trying to expose services to the world outside the Rancher clusters.
api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters. I am trying to use one cluster specifically. It spans 3 nodes: node1cluster1, node2cluster1 and node3cluster1.
I have added an ingress inside the Rancher cluster to forward service requests for api1.mydomain.com to a specific workload.
In our DNS I entered api1.mydomain.com to be forwarded, but it doesn't work yet.
Which IP or URL should I enter in the DNS? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Should it be a single node of the cluster that has the ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: create a DNS entry with the IP address of node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet, so you can use either any one of the cluster nodes' IP addresses or all of them. (Don't expect DNS to load balance, though.)
The other option is to have a load balancer in front with all the node IP addresses configured to actually distribute the traffic.
Another strategy I have seen is to dedicate a handful of nodes to running Ingress, by means of taints/tolerations, and not use them for scheduling regular workloads.
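For illustration, a minimal sketch of the Ingress resource described above, assuming a hypothetical Service named api1-service on port 80 (older clusters may need the networking.k8s.io/v1beta1 or extensions/v1beta1 API instead). The DNS A record for api1.mydomain.com would then point at one or more node IPs, or at a load balancer placed in front of them:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api1-ingress            # hypothetical name
spec:
  rules:
  - host: api1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api1-service  # hypothetical Service fronting the workload
            port:
              number: 80
EOF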

How to access services in K8s from the internal non-K8s network?

Question: How can I provide reliable access from (non-K8s) services running in a GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup on the Google Cloud Platform. Most services are 12-factor apps and run just fine within K8s. Some backing stores (databases) are run outside of K8s. Accessing them is easy, using headless services with manually defined endpoints pointing to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
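As a minimal sketch of such a headless Service with manually defined Endpoints (the name, port and IP below are hypothetical):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: external-db          # hypothetical name for the external database
spec:
  clusterIP: None            # headless: cluster DNS resolves directly to the endpoint IPs
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
- addresses:
  - ip: 10.128.0.15          # fixed internal IP of the database VM (hypothetical)
  ports:
  - port: 5432
EOF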
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to that port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is an even better way to solve this issue.
This setup is probably not a typical use case for a K8s deployment, but we'd like to go this way until PetSets and persistent storage in K8s have matured enough.
As we are talking about internal services, I'd like to avoid using an external load balancer in this case.
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
One possible way is to use an Ingress controller. Ingress controllers are designed to provide access from outside a Kubernetes cluster to services running inside it. An Ingress controller runs as a pod within the cluster and routes requests from outside to the correct services inside, based on the configured rules. This provides a secure and reliable way for non-Kubernetes services running in a GCE network to reach services running in Kubernetes.
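If you do go the NodePort route from the question, a hedged sketch of discovering the assigned port and the nodes' internal IPs with kubectl (the Service name my-service is hypothetical):
# Read the NodePort assigned to the first port of the Service.
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'
# List the internal IPs of all worker nodes.
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'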

Google Compute Engine - How to allow access from (only) other project instances?

With Google Compute Engine, how do I create a firewall rule so that only instances within the same project are allowed access? Access from other clusters (within the same project) should be allowed.
The scenario is to allow a GKE cluster to access a cluster of RethinkDB database servers that run on GCE instances.
"So that only instances within the same project are allowed access" to what?
I assume you don't mean access to the cluster's apiserver, since that IP should already be accessible from all your instances.
If you mean accessing a container in a cluster from an instance outside the cluster, you can create a firewall rule to be more permissive about allowing traffic within your GCE network. You can either be very permissive or a little more fine-grained when doing this:
Very permissive - just create a firewall rule that allows traffic from the source IP range 10.0.0.0/8 to all instances in your network (don't add any "target tags") on all the protocols and ports you care about (e.g. tcp:1-65535,udp:1-65535,icmp). The 10.0.0.0/8 range will cover all instances and containers in your network (and nothing outside of it).
Separate firewall per cluster - do the same thing as in the first option, but add the target tag that is on all nodes in the cluster. You can get this tag by looking at one of the instances' tags, or at the target tags on the firewall rules that GKE created for your cluster. The benefit of this approach is that it lets everything in your network talk to your cluster without also opening up anything else in your network quite so much.
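As a hedged sketch, the permissive variant could look like the following gcloud command (the rule name is hypothetical; for the per-cluster variant, add --target-tags with your cluster's node tag):
# Allow all traffic from the 10.0.0.0/8 range to every instance in the network.
gcloud compute firewall-rules create allow-internal-10 \
  --network default \
  --source-ranges 10.0.0.0/8 \
  --allow tcp:1-65535,udp:1-65535,icmp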
If you mean accessing a service from outside the cluster, then it's a little tougher since you need to run the kube-proxy on the instances outside the cluster and configure it to talk to the cluster's apiserver in order to route the service packets properly.
Turns out the problem was that I was accessing the RethinkDB instances via external IPs. For some reason, this causes the firewall rule with internal source IPs not to match. The solution was to access the instances via internal DNS names instead, in which case the firewall rule applies.
Furthermore, there is a default firewall rule already, default-allow-internal, which allows any traffic between instances on the same project. Therefore I do not need to create my own rule.

Run multiple MySQL pods in Kubernetes

I would like to create a Kubernetes cluster to deploy MySQL databases, like a MySQL farm. These databases should be accessible from the internet.
All databases on the same node will be listening on port 3306; could kube-proxy or the DNS add-on redirect each request to a specific container?
I would like to create URLs like myDB1.example.com:3306 and myDB2.example.com:3306 that go to a specific container.
I'm deploying this environment in AWS.
Is it possible to create this cluster?
Yes. A starting point would be a (customized) MySQL Docker image with EBS-backed volumes, run under a ReplicationController to handle failover. On top of that you would have a Service that provides a stable, routable interface to the outside world. Optionally, put an AWS Elastic Load Balancer in front of it.
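A minimal sketch of what that Service could look like, assuming one Service per database and hypothetical names; myDB1.example.com would then simply be a DNS record pointing at the load balancer created for mydb1:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mydb1                # hypothetical name, one Service per database
spec:
  type: LoadBalancer
  selector:
    app: mydb1               # must match the labels on the mydb1 MySQL pods
  ports:
  - port: 3306
    targetPort: 3306
EOF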

Name Kubernetes-generated Google Cloud ingress load balancer

I have multiple Kubernetes clusters that have Google-powered load balancers (ingress LBs).
So right now, to access my k8s cluster service(s), I just have to hit the public IP given by $ kubectl get service, cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services, those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen; I cannot comment on when). You can, however, create DNS records with the static IP of a load balancer.
If I've understood your problem correctly, you're using an L4 load balancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc. and continue using the same IP (because you have DNS records for it). In other words, you want a load balancer that is not tied to the service lifecycle. This is possible through an L7 load balancer [1] & [2], or by recreating the service with an existing IP [3].
Note that [1] divorces the load balancer from the service lifetime, but if you take down your entire cluster you will lose the load balancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the load balancer controller pod, and recreate the same Ingress resource, it will use the existing load balancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with Kubernetes 1.1; I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] https://github.com/kubernetes/kubernetes/issues/10323
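For the DNS-record approach mentioned above, a hedged sketch with Cloud DNS (the Service, zone and domain names below are hypothetical):
# Read the load balancer IP that was assigned to the Service.
LB_IP=$(kubectl get svc my-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Point an A record at it in a Cloud DNS managed zone.
gcloud dns record-sets transaction start --zone my-zone
gcloud dns record-sets transaction add --zone my-zone \
  --name app.example.com. --ttl 300 --type A "$LB_IP"
gcloud dns record-sets transaction execute --zone my-zone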