Google Identity-Aware Proxy and Firewall Rules for Google Kubernetes Engine

I want to configure Google Identity-Aware Proxy for an application running on Google Kubernetes Engine. To do that, I added an Ingress to my Kubernetes configuration so that I get a load balancer to configure as an Identity-Aware Proxy.
Now GCP shows me a few warnings about problematic firewall rules. As all of these rules were configured by GKE, I'm not quite sure whether they are actually a problem.
As far as I understand it, 10.128.0.0/9 is the default VPC range for projects and 10.56.0.0/14 is the IP range for all containers in my Kubernetes cluster.
To me this means that "only" internal traffic inside my project/Kubernetes cluster can bypass the IAP. Is that correct?

You're correct. However, keep in mind that if you have set up an internal load balancer, the traffic will bypass IAP.
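To double-check, you can list the firewall rules and confirm that their source ranges are limited to those internal ranges; a quick sketch:

    # List firewall rules with the source ranges they allow, to verify that
    # only internal ranges (10.128.0.0/9, 10.56.0.0/14) can bypass IAP.
    gcloud compute firewall-rules list \
        --format='table(name, direction, sourceRanges.list(), targetTags.list())'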

Note that you can natively integrate with IAP through Ingress:
https://cloud.google.com/iap/docs/enabling-kubernetes-howto
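A minimal sketch of that Ingress/IAP wiring, following the linked how-to; the names and the OAuth secret are placeholders you'd replace with your own:

    # BackendConfig that turns IAP on, plus a Service that references it.
    # The secret must contain your OAuth client ID and secret.
    kubectl apply -f - <<'EOF'
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: iap-config
    spec:
      iap:
        enabled: true
        oauthclientCredentials:
          secretName: my-oauth-secret
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        cloud.google.com/backend-config: '{"default": "iap-config"}'
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    EOF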

Related

In Google Cloud Platform, is there any service that can replace a web server (Nginx or Apache) and a load balancer (HAProxy)?

Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection is achieved and pointed to different load balancers (HAProxy) as specified in the Nginx vhost file.
Is there any option available in GCP to provide the redirection services without using Nginx or Apache? And also, let me know what the alternative options in GCP are for HAProxy.
From what I understand, you have a few services (and maybe some static content) served to the Internet via HAProxy, which is doing the load balancing.
Based on that, I assume that someone going to "yourservice.com/example1" would be directed by the load balancer to service1, someone requesting "yourservice.com/static1" would be served static content by a different backend, and so on.
GCP has exactly the service you're asking for: an HTTP(S) load balancer that can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which will run your services as containers.
Also, both GCE and GKE can do autoscaling if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're using some internal load balancing now, I believe that could also be handled by the one load balancer, which could simplify your configuration (just some VMs or containers running your services and one load balancer).
You can read more about load balancing concepts and specifically about setting up GCP to do that.
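As a rough sketch, the path-based routing that replaces the Nginx vhosts looks like this (the URL map and backend service names are hypothetical, and the backends are assumed to exist already):

    # Send /static1/* to one backend and everything else to another.
    gcloud compute url-maps create example-url-map \
        --default-service=service1-backend
    gcloud compute url-maps add-path-matcher example-url-map \
        --path-matcher-name=paths \
        --new-hosts='*' \
        --default-service=service1-backend \
        --path-rules='/static1/*=static1-backend'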

Firewall rule to allow GKE -> GCR traffic in separate projects

I am running Kubernetes in GCP, and I have the GKE cluster and the container registry in separate projects. I added the GKE service account to my GCR project and everything works great.
Now, I would like to restrict all outgoing traffic from my GKE project at the compute level. I have added an egress firewall rule to drop any traffic going out of my VPC network; as a consequence, GKE can't pull images from the registry anymore. I added another firewall rule to allow egress traffic for the GKE service account, but to get it to work I had to use "0.0.0.0/0, all ports" as the destination filter. Is there a better way to do this? Is there an IP address range / port for GCR?
Thanks!
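For reference, the two rules described above would look roughly like this (names, network, and service account are illustrative):

    # Deny all egress from the VPC by default ...
    gcloud compute firewall-rules create deny-all-egress \
        --network=my-vpc --direction=EGRESS --action=DENY \
        --rules=all --destination-ranges=0.0.0.0/0 --priority=65534

    # ... then allow egress for the GKE service account only. The catch:
    # the destination has to stay 0.0.0.0/0 because GCR has no fixed range.
    gcloud compute firewall-rules create allow-gke-egress \
        --network=my-vpc --direction=EGRESS --action=ALLOW \
        --rules=all --destination-ranges=0.0.0.0/0 --priority=1000 \
        --target-service-accounts=gke-sa@my-project.iam.gserviceaccount.com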
GCR does not have a dedicated IP address range.
I am unaware of a way to restrict traffic only for GCR.
Sorry.
There is actually a way to do it.
Create a VPC network and enable Private Google Access. As you can read in the documentation:
Accessible Services
Google services that you can reach using Private Google access include:
Container registry services, a private Docker image repository on Google Cloud Platform
Then don't allow any connections in the firewall; they will be blocked by default. With this you get a GKE cluster that isn't reachable from outside but is still able to pull images from GCR.
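A sketch of that setup, assuming a subnet named gke-subnet in us-central1:

    # Enable Private Google Access so nodes without external IPs can still
    # reach Google APIs (including GCR) over Google's internal network.
    gcloud compute networks subnets update gke-subnet \
        --region=us-central1 \
        --enable-private-ip-google-access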
A little old, but you can use a GKE private cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
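For example, a minimal private cluster (name and master CIDR are placeholders):

    # Nodes get internal IPs only; image pulls from GCR go over Google's network.
    gcloud container clusters create my-private-cluster \
        --enable-private-nodes \
        --enable-ip-alias \
        --master-ipv4-cidr=172.16.0.16/28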
I found that, for some reason, gcr.io resolved to an AWS FQDN, so Private Google Access did not work. In my case the cluster is private, so I had to add a Cloud NAT and allow 443 out. I was able to pull after the firewall rule was created.
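A sketch of that Cloud NAT plus firewall setup (router, NAT, and network names are placeholders):

    # Give the private nodes a NAT path to the Internet ...
    gcloud compute routers create nat-router \
        --network=my-vpc --region=us-central1
    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

    # ... and allow HTTPS egress, which is what made the image pulls work.
    gcloud compute firewall-rules create allow-egress-https \
        --network=my-vpc --direction=EGRESS --action=ALLOW \
        --rules=tcp:443 --destination-ranges=0.0.0.0/0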

How to access a service in Google Container Engine from a Google Compute Engine instance

I have a cluster on Google Container Engine. There is an internal service with the domain app.superproject and exposed port 9999.
I also have an instance in Google Compute Engine.
How can I access the service by its domain name from the Google Compute Engine instance?
GKE is built on top of GCE; a GKE instance is also a GCE instance. You can view all your instances either in the web console or with the gcloud compute instances list command.
Note that they may not be in the same GCE virtual network, but in your use case it's better to put them in one, e.g. the default network (I guess they already are, but check their network properties if you are not sure); then they're accessible to each other through their internal IPs (if not, check the firewall settings).
You can also use instance names, which resolve to internal IPs, e.g., ping instance1.
If they're not in the same GCE virtual network, you have to treat the service as an external one by exposing an external IP, which is not recommended in your use case.
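A quick way to check the setup from the GCE instance (the node name below is hypothetical; instance names resolve through GCE internal DNS on the same network):

    # List all instances, including the GKE nodes, then ping one by name.
    gcloud compute instances list
    ping -c 3 gke-mycluster-default-pool-abc123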

DNS & Routing for Google Compute Engine VMs from Kubernetes on Container Engine

My Kubernetes pods are all able to resolve hostnames and ping servers on the wider Internet, but they can't do either for our VMs running in the same zone and region on Google Compute Engine.
How does one tell Kubernetes / Docker to allow outbound traffic to the Google Compute Engine environment (our subnet is 10.240.0.0) and to resolve hostnames for that subnet using 10.240.0.1?
Very silly mistake on my part.
Our Google Container Cluster was configured to use a custom network in the Google Developer Console, while our Google Compute Engine VMs were all configured to use the default network.
That explains it. Make sure the machines are all on the same network, and then everything works as you'd hope.
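A quick sanity check for this kind of mismatch:

    # Show which network each instance (VMs and GKE nodes) is attached to;
    # they should all be on the same one, e.g. "default".
    gcloud compute instances list \
        --format='table(name, networkInterfaces[0].network)'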

How to access services in K8s from the internal non-K8s network?

Question: How can I provide reliable access from (non-K8s) services running in a GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup in the Google Cloud Platform. Most services are 12-factor apps and run just fine within K8s. Some backing stores (databases) run outside of K8s. Accessing them is easy: we use headless services with manually defined endpoints pointing to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to this port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is an even better way to solve this issue.
This setup is probably not a typical use case for a K8s deployment, but we'd like to go this way until PetSets and persistent storage in K8s have matured enough.
As we are talking about internal services, I'd like to avoid using an external load balancer in this case.
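For reference, finding the NodePort and a node IP by hand looks like this (the service name is hypothetical):

    # Read the NodePort assigned to the service ...
    kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'

    # ... and grab the internal IP of any worker node.
    kubectl get nodes \
        -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'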
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
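A bastion-route sketch, assuming the cluster's service CIDR is 10.0.0.0/20 and gke-worker-1 is one of the nodes:

    # Route the service CIDR to a single worker node; kube-proxy on that
    # node forwards the traffic on to the right pods.
    gcloud compute routes create k8s-service-route \
        --network=default \
        --destination-range=10.0.0.0/20 \
        --next-hop-instance=gke-worker-1 \
        --next-hop-instance-zone=us-central1-a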
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
Another option is to use an Ingress controller. Ingress controllers are designed to provide access from outside a Kubernetes cluster to services running inside it: the controller runs as a pod within the cluster and routes incoming requests to the correct services based on the configured rules. This gives non-Kubernetes services in a GCE network a secure and reliable way to reach services running in Kubernetes.
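A minimal sketch (service name and host are placeholders; on GKE, the gce-internal ingress class keeps the resulting load balancer internal, which fits the no-external-load-balancer requirement):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-ingress
      annotations:
        kubernetes.io/ingress.class: "gce-internal"
    spec:
      rules:
      - host: app.internal.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
    EOF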