What are the relationships between Kubernetes services and clusters and Google Compute Engine objects?

I am setting up a couple of services running on Google Container Engine, with traffic coming in through a Google HTTP Load Balancer, using path mapping.
There is a good Google tutorial on setting up content-based load-balancing here, but it is all in terms of plain Google Compute objects like instance groups and backend services. I, however, have Kubernetes services, pods and clusters.
What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically?
(I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.)

What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically?
https://github.com/kubernetes/contrib/tree/master/Ingress/controllers/gce#overview
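To map between the two programmatically, one hedged sketch: the ingress controller typically names the GCE resources it creates with a "k8s-" prefix (this naming convention is an assumption and can vary by version), so you can list them with gcloud filters:

# List the GCE resources the GKE ingress controller created (assumed "k8s-" prefix)
gcloud compute backend-services list --filter="name ~ ^k8s-"
gcloud compute url-maps list --filter="name ~ ^k8s-"
gcloud compute target-http-proxies list --filter="name ~ ^k8s-"
gcloud compute forwarding-rules list --filter="name ~ ^k8s-"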
(I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.)
Ingress will support HTTPS in 1.2. This is what the resource will look like: https://github.com/kubernetes/kubernetes/issues/19497#issuecomment-174112834. In the meantime you can set up HTTP load balancing with the Ingress and hand-modify it to support HTTPS. Apologies beforehand that this is convoluted; it will get better soon.
First create an HTTP Ingress (steps 1, 3 and 4 are sketched below):
1. Create Services of Type=NodePort
2. Make sure you have BackendService quota
3. Create an HTTP Ingress
4. Expose the node port(s) of the service in the firewall (also as mentioned in https://cloud.google.com/container-engine/docs/tutorials/http-balancer)
5. Wait till kubectl describe ing shows HEALTHY for your backends.
At this point you should be able to curl your Ingress loadbalancer IP and hit the nginx service (or whatever service you created in step 1).
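A minimal sketch of steps 1, 3 and 4, assuming an existing nginx deployment; the service and Ingress names here are hypothetical:

# Step 1: expose the deployment as a Type=NodePort service
kubectl expose deployment nginx --name=nginx-svc --type=NodePort --port=80

# Step 3: create an HTTP Ingress with nginx-svc as the default backend
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ing
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
EOF

# Step 4: open the NodePort range to the load balancer's health checks
# (130.211.0.0/22 was the documented source range at the time)
gcloud compute firewall-rules create allow-lb-to-nodeports \
    --source-ranges 130.211.0.0/22 --allow tcp:30000-32767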
Then do the following, manually through the GCE console (gcloud equivalents are sketched below):
1. Change the IP of the Ingress resource from "Ephemeral" to "Static" (look for the IP from kubectl get ing in the "External IP addresses" tab)
2. Create your SSL cert. If you just want a self-signed cert you can do:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
3. Create a new target HTTPS proxy and forwarding rule for the HTTPS load balancer, and assign them the same (static) IP as the HTTP load balancer.
At this point you should be able to curl https://loadbalancer-ip -k
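For reference, a gcloud sketch of those manual steps; all resource names below are hypothetical, the URL map name must be looked up (the Ingress controller generates it), and flag spellings may differ across gcloud versions:

# 1. Promote the Ingress's ephemeral IP (from kubectl get ing) to a static address
gcloud compute addresses create ingress-ip --global --addresses 130.211.10.20

# 2. Upload the self-signed cert as a GCE SSL certificate resource
gcloud compute ssl-certificates create nginx-cert \
    --certificate /tmp/nginx.crt --private-key /tmp/nginx.key

# 3. Create the HTTPS proxy on the Ingress's existing URL map,
#    then a forwarding rule on the same static IP
gcloud compute url-maps list    # find the k8s-generated URL map name
gcloud compute target-https-proxies create nginx-https-proxy \
    --url-map <k8s-url-map-name> --ssl-certificates nginx-cert
gcloud compute forwarding-rules create nginx-https-fw --global \
    --address ingress-ip --ports 443 --target-https-proxy nginx-https-proxy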

Related

GCE managed group (autoscaling) - Proxy/Load Balancer for both HTTP(S) and TCP requests

I have an autoscaling instance group, and I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought of using a load balancer, but I need to handle both HTTP(S) and TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that in the TCP LB settings I can set the backend service (the managed group that I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will be able to serve the purpose. On top of that, since you are using a managed instance group (autoscaling), it cannot be used as the backend for two different load balancers.
As far as I understand, the closest you can get is to use network (TCP) load balancing and install an SSL certificate at the instance level to handle HTTPS requests.
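A hedged sketch of that setup with gcloud (names, region, and zone are made up): network load balancing uses target pools, a managed instance group can be attached to one, and separate forwarding rules on a shared static IP can cover each port:

# Attach the managed instance group to a target pool
gcloud compute target-pools create web-pool --region us-central1
gcloud compute instance-groups managed set-target-pools my-group \
    --target-pools web-pool --zone us-central1-b

# One forwarding rule per port, sharing a static regional IP;
# instances terminate SSL on 443 themselves
gcloud compute addresses create web-ip --region us-central1
gcloud compute forwarding-rules create web-http --region us-central1 \
    --address web-ip --ports 80 --target-pool web-pool
gcloud compute forwarding-rules create web-https --region us-central1 \
    --address web-ip --ports 443 --target-pool web-pool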

Using HTTP Load Balancer with Kubernetes on Google Cloud Platform

I have followed the GKE tutorial for creating an HTTP Load Balancer using the beta Ingress type and it works fine when using the nginx image. My question is about why Ingress is even necessary.
I can create a container engine cluster and then create an HTTP Load Balancer that uses the Kubernetes-created instance group as the service backend, and everything seems to work fine. Why would I go through all of the hassle of using Ingress when using Kubernetes for only part of the process seems to work just fine?
While you can create an "unmanaged" HTTP Load Balancer yourself, what happens when you add new deployments (pods with services) and want traffic to be routed to them as well (perhaps using URL maps)?
What happens when one of your services goes down for some reason and the new service allocates another node port?
The great thing about Ingress is that it manages the HTTP Load Balancer for you, keeping track of Kubernetes resources and updating the HTTP Load Balancer accordingly.
The ingress object serves two main purposes:
It is simpler to use for repeatable deployments than configuring the HTTP balancer yourself, because you can write a short declarative YAML file describing what you want your balancing to look like, rather than a script of 7 gcloud commands.
It is (at least somewhat) portable across cloud providers.
If you are running on GKE and don't care about the second one, you can weigh the ease of use of the ingress object and the declarative syntax vs. the additional customization that you get from configuring the load balancer manually.
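For comparison, the kind of short declarative YAML file meant above: a minimal content-based Ingress sketch, with hypothetical service names, fanning out across two URL paths:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ing
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-svc      # hypothetical Type=NodePort service
          servicePort: 8080
      - path: /static
        backend:
          serviceName: static-svc   # hypothetical Type=NodePort service
          servicePort: 80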

How to install the API Gateway client certificate into Elastic Beanstalk

I have a scalable application on Elastic Beanstalk running on Tomcat. I read that in front of Tomcat there is an Apache server acting as a reverse proxy. I guess I have to install the client certificate on Apache and configure it to accept only requests encrypted with this certificate, but I have no idea how to do that.
Can you help me?
After much research I found a solution. Given how difficult it was to discover, I want to share my experience.
My platform on elastic beanstalk is Tomcat 8 with load balancer.
To use the client certificate (at the time of writing) you have to terminate HTTPS on the instance:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html
then
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-tomcat.html
I used this configuration to use both client and server certificates (it seems that it doesn't work with only the client certificate):
SSLEngine on
# Server certificate, private key, and CA chain
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
SSLCertificateChainFile "/etc/pki/tls/certs/GandiStandardSSLCA2.pem"
# Restrict ciphers and disable legacy SSL protocols
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
# Require a valid client certificate, verified against the file below
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
And one last thing: API Gateway doesn't work with a self-signed certificate (thanks to Client certificates with AWS API Gateway), so you have to buy one from a CA.
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
This is where you should point to the client-side certificate provided by API Gateway.
You might have to configure the ELB's listener for vanilla TCP on the same port instead of HTTPS: basically TCP pass-through at your ELB, so that your instance handles the SSL itself and can authorize requests that present a valid client certificate.
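A hedged sketch of switching a classic ELB listener to TCP pass-through with the AWS CLI (the load balancer name is hypothetical):

# Remove the HTTPS listener and replace it with plain TCP on 443,
# so TLS (and client-certificate verification) happens on the instance
aws elb delete-load-balancer-listeners \
    --load-balancer-name my-eb-elb --load-balancer-ports 443
aws elb create-load-balancer-listeners --load-balancer-name my-eb-elb \
    --listeners Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443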

Access external client IP from behind Google Compute Engine network load balancer

I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external Client IP from inside the Rails app.
The Github issue here seems to present a solution, but I ran the suggested:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  kubectl annotate node $node \
    net.beta.kubernetes.io/proxy-mode=iptables;
  gcloud compute ssh --zone=us-central1-b $node \
    --command="sudo /etc/init.d/kube-proxy restart";
done
but I am still getting a REMOTE_ADDR header of 10.140.0.1.
Any ideas on how I could get access to the real client IP (for geolocation purposes)?
Edit: To be more clear, I am aware of the ways of accessing the client IP from inside Rails; however, all of these solutions are getting me the internal Kubernetes IP. I believe the GCE network load balancer is not configured (or perhaps is unable) to send the real client IP.
A Googler's answer to another version of my question confirms that what I am trying to do is not currently possible with the Google Container Engine network load balancer.
EDIT (May 31, 2017): as of Kubernetes v1.5 and up this is possible on GKE with the beta annotation service.beta.kubernetes.io/external-traffic. This was answered on SO here. Please note when I added the annotation the health checks were not created on the existing nodes. Recreating the LB and restarting the nodes solved the issue.
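A minimal sketch of applying that beta annotation (the service name is hypothetical; the value OnlyLocal preserves the client source IP by keeping traffic on the node that received it):

# Preserve client IPs for a Type=LoadBalancer service (Kubernetes 1.5 beta annotation)
kubectl annotate service my-rails-svc \
    service.beta.kubernetes.io/external-traffic=OnlyLocal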
It seems as though this is not a Rails problem at all, but a GCE one. You can try the first part of
request.env["HTTP_X_FORWARDED_FOR"]
Explanation
Getting Origin IP From Load Balancer advises that https://cloud.google.com/compute/docs/load-balancing/http/ has the text:
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses, depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the <client IP(s)> section shows the origin address.
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
Parameters for Stackdriver Trace.
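So the origin address is the first comma-separated element of that header. As a quick illustration of the parsing, with made-up addresses:

# The first X-Forwarded-For element is the client; later ones are proxies
XFF="203.0.113.7, 130.211.1.2"
echo "$XFF" | cut -d, -f1 | xargs    # prints: 203.0.113.7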

Naming a Kubernetes-generated Google Cloud ingress load balancer

I have multiple kubernetes clusters that have Google powered load balancers (ingress lbs).
So right now, to access my k8s cluster service(s), I just have to ping the public IP given by $ kubectl get service, cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services, those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen, I cannot comment on when). You can however create DNS records with the static IP of a loadbalancer.
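A hedged sketch of that manual route (zone, domain, region, and IP are made up): promote the loadbalancer's ephemeral IP to a static address, then point a DNS record at it:

# Promote the existing ephemeral IP (from kubectl get svc/ing) to a static address
gcloud compute addresses create lb-ip --region us-central1 --addresses 104.197.5.6

# Point an A record at it via Cloud DNS (assumes a managed zone already exists)
gcloud dns record-sets transaction start --zone my-zone
gcloud dns record-sets transaction add --zone my-zone \
    --name svc.example.com. --ttl 300 --type A 104.197.5.6
gcloud dns record-sets transaction execute --zone my-zone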
If I've understood your problem correctly, you're using an L4 loadbalancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc and continue using the same IP (because you have DNS records for it). In other words, you want a loadbalancer not tied to the service lifecycle. This is possible through an L7 loadbalancer [1] & [2], or by recreating the service with an existing IP [3].
Note that [1] divorces the loadbalancer from service lifetime, but if you take down your entire cluster you will lose the loadbalancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the loadbalancer controller pod, and recreate the same Ingress resource, it will use the existing loadbalancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with kubernetes 1.1, I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] github.com/kubernetes/kubernetes/issues/10323