Firewall rule to allow GKE -> GCR traffic in separate projects

I am running Kubernetes in GCP and I have the GKE cluster and the container registry in separate projects. I added the GKE service account to my GCR project and everything works great.
Now, I would like to restrict any outgoing traffic from my GKE project at the compute level. I have added an egress firewall rule to drop any traffic going out of my VPC network. As a consequence, GKE can't pull images from the registry anymore. I added another firewall rule to allow egress traffic for the GKE service account, but to get it to work I had to add "0.0.0.0/0, all ports" as the destination filter. Is there a better way to do this? Is there an IP address range / port for GCR?
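For reference, the setup described above corresponds roughly to rules like these (network, rule, and service account names are placeholders):

```
# Deny all egress from the VPC network
gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65534

# Allow egress for the GKE node service account (currently wide open)
gcloud compute firewall-rules create allow-gke-sa-egress \
    --network=my-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --target-service-accounts=gke-nodes@my-project.iam.gserviceaccount.com \
    --priority=1000
```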
Thanks!

GCR does not have a dedicated IP address range.
I am unaware of a way to restrict traffic only for GCR.
Sorry.

There is actually a way to do it.
Create a VPC network and enable Private Google Access. As you can read in the documentation:
Accessible Services
Google services that you can reach using Private Google access include:
Container registry services, a private Docker image repository on Google Cloud Platform
Then don't allow any connections in the firewall, so everything is blocked by default. With this you will get a GKE cluster that isn't reachable from outside, but that is still able to pull images from GCR.
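A minimal sketch of enabling Private Google Access on the cluster's subnet (subnet and region names are placeholders):

```
gcloud compute networks subnets update my-gke-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```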

A little old, but you can use a GKE private cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
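A sketch of creating one, assuming a VPC-native cluster (cluster name, zone, and master CIDR are placeholders):

```
gcloud container clusters create private-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```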

I found that, for some reason, gcr.io resolved to an AWS FQDN in my environment, so Private Google Access did not work. In my case the cluster is private, so I had to add a Cloud NAT and allow 443 out. I was able to pull images once the firewall rule was created.
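A rough sketch of that Cloud NAT plus 443 egress setup (network, router, and region names are placeholders):

```
gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges

gcloud compute firewall-rules create allow-egress-443 \
    --network=my-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --destination-ranges=0.0.0.0/0
```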

Related

Google Identity-Aware-Proxy and Firewall Rules for Google Kubernetes Engine

I want to configure Google Identity-Aware Proxy for an application running on Google Kubernetes Engine. To do that, I added an Ingress to my Kubernetes configuration so that I get a load balancer to configure with the Identity-Aware Proxy.
Now GCP shows me a few warnings about problematic firewall rules. As all of these rules were configured by GKE, I'm not quite sure whether they are a problem.
As far as I understand it, 10.128.0.0/9 is the default VPC range for projects and 10.56.0.0/14 is the IP range for the containers in my Kubernetes cluster.
To me this means that "only" internal traffic inside my project/Kubernetes cluster can bypass the IAP. Is that correct?
You’re correct. However, keep in mind that if you have set up an internal load balancer, the traffic will bypass the IAP.
Note that you can natively integrate IAP through Ingress:
https://cloud.google.com/iap/docs/enabling-kubernetes-howto
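A minimal sketch of that native integration, assuming the IAP OAuth client ID and secret are already stored in a Kubernetes Secret (all names here are placeholders):

```
# BackendConfig that turns IAP on for the backend service
kubectl apply -f - <<EOF
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: iap-oauth-secret
EOF

# Attach the BackendConfig to the Service exposed by the Ingress
kubectl annotate service my-service \
    cloud.google.com/backend-config='{"default": "iap-config"}'
```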

Google Cloud - Adding additional Internal IP to VM

I'm trying to build a webserver in Google Cloud Platform that hosts multiple websites (GBP, IE, FR, DK etc.)
Generally, we assign a range of IPs to the server statically, set the bindings in IIS, then load-balance using a virtual IP.
It seems near enough impossible to assign another internal IP in GCP. Lots of guides about additional external IPs, but we don't want a public facing webserver like this.
Anybody have any idea on how to add additional internal IPs to a VM / Instance?
Also, I have tried changing the internal address assigned to the instance to static in the network adapter settings; the next thing I knew, I couldn't access my VM for love nor money and had to delete and re-create it. If I go into the advanced settings to add additional static IPs, we're apparently set to DHCP, so I can't add additional IPs.
Thanks all.
Answer that I received from the GCE discussion group on Google Groups:
"You can add additional internal IP addresses to a VM instance. This is possible by enabling IP forwarding for the VM, creating a static network route, adding appropriate firewall rules, and setting additional internal IP addresses to network adapter of Windows. These steps are described in this article for Linux machines (https://cloud.google.com/compute/docs/networking#set_a_static_target_ip_address). The same steps are valid for Windows VMs. You will need to keep the initial internal IP address, subnet mask, gateway address and DNS settings of the adapter and manually enter them in properties of IPv4 of the network adapter. The below is a screenshot of my configuration on a VM instance (Windows 2008 R2) that perfectly works."
Update:
Now you can create instances with multiple network interfaces on Google Compute Engine and assign IPs. For more information, refer to this public documentation link. However, it currently has the following limitations:
Alias IP ranges are not supported on any network interface on a VM that has multiple network interfaces enabled.
You cannot modify or delete the network interfaces after the VM has been created.
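A sketch of creating a VM with two network interfaces (names are placeholders; note that each interface must be attached to a different VPC network):

```
gcloud compute instances create multi-nic-vm \
    --zone=us-central1-a \
    --network-interface network=default,subnet=default \
    --network-interface network=second-net,subnet=second-subnet
```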

Networking across Google Cloud projects

Is it possible to route/forward all TCP traffic for a specific port originating from one instance group to that TCP port for a specific instance in a second project? In a single project this is not difficult, but without static IPs (auto-scaling instance group with hundreds of instances) it is not clear how to route across projects.
Use Shared VPC. It allows you to share a VPC network across projects in the same organization.
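A rough sketch of that setup with gcloud, assuming you have the Shared VPC Admin role at the organization level (project IDs are placeholders):

```
# In the host project that owns the VPC network
gcloud compute shared-vpc enable host-project-id

# Attach the other project as a service project
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project host-project-id
```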
I found these answers in need of further detail, or perhaps outdated. First, for those who don't know, a VPC is a Virtual Private Cloud network. Yes, you need a VPC, but not necessarily a Shared VPC, which requires an organization configuration. An easier solution is VPC Network Peering.
When you create a Compute Engine instance, it is attached to a VPC network, by default the "default" network. If you have instances in more than one project and you want them to communicate, their VPC networks must not use overlapping subnets; if both projects use the same default subnet ranges, you will need to create a new VPC network in at least one of them.
One VPC might have 10.142.0.0/20 for its network and the other 10.143.0.0/20, which would be fine; but if they both have 10.142.0.0/20, that won't work and you'd need to create a new VPC.
Now, go to the VPC network menu option in the console and add a new VPC, if needed. If you do that, you need to set up firewall rules and routes similar to those of the default VPC; if you don't, traffic between Compute Engine instances on the same VPC will not be possible.
Now, go to the VPC Network Peering option and create an entry in one project that points to the VPC of the other project. It will show as waiting to connect. Then go to the other project and create a peering entry with the opposite configuration. For example, with project A using VPC AA and project B using VPC BB, you create an entry in project A that uses AA and points to BB, and an entry in project B that uses BB and points to AA. After some validation, the connection, if valid, will be established. Once connected, it creates all of the routes necessary to get between the two projects' VPCs.
Now, if your firewall settings are correct, you should be able to send and receive traffic between projects.
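The same peering setup can be sketched with gcloud (project, network, and peering names are placeholders):

```
# In project A (network AA), peer towards project B's network BB
gcloud compute networks peerings create aa-to-bb \
    --project=project-a \
    --network=aa \
    --peer-project=project-b \
    --peer-network=bb

# In project B (network BB), peer back towards project A's network AA
gcloud compute networks peerings create bb-to-aa \
    --project=project-b \
    --network=bb \
    --peer-project=project-a \
    --peer-network=aa
```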
The "only" way to connect between your instances on different Google Cloud projects is either through VPN or using the public IP. By using the Public IP, I mean either through a NAT gateway or directly from instance to instance using the public IP. You can have more information about Google Cloud VPN in this Help Center article.

How to access service in google container engine from google compute engine instance

I have a cluster on Google Container Engine. There is an internal service with the domain app.superproject and exposed port 9999.
Also, I have an instance in Google Compute Engine.
How can I access the service by its domain name from the Google Compute Engine instance?
GKE is built on top of GCE; a GKE node is also a GCE instance. You can view all your instances either in the web console or with the gcloud compute instances list command.
Note that they may not be in the same GCE virtual network, but in your use case, it's better to put them in, e.g., the default network (I guess they are already, but check their network properties if you are not sure), then they're accessible to each other through the internal IPs (if not, check firewall settings).
You can also use instance names, which resolve to internal IPs, e.g., ping instance1.
If they're not in the same GCE virtual network, you have to treat the service as an external service by exposing an external IP, which is not recommended in your use case.
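A quick way to check which network each instance is on (instance names and zone are examples):

```
gcloud compute instances describe gke-cluster-default-pool-node-1 \
    --zone=us-central1-a \
    --format="value(networkInterfaces[0].network)"

gcloud compute instances describe my-gce-instance \
    --zone=us-central1-a \
    --format="value(networkInterfaces[0].network)"
```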

How to access services in K8s from the internal non-K8s network?

Question: How can I provide reliable access from (non-K8s) services running in an GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup in the Google Cloud Platform. Most services are 12factor apps and run just fine within K8s. Some backing stores (databases) are run outside of K8s. Accessing them is easy by using headless services with manually defined endpoints to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to this port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is even a better way to solve this issue.
This setup is probably not a typical use-case for a K8s deployment, but we'd like to go this way until PetSets and Persistent Storage in K8s have matured enough.
As we are talking about internal services I'd like to avoid using an external loadbalancer in this case.
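For the "automatically find the right NodePort and a node IP" part of the question, a kubectl-based sketch (the service name is a placeholder):

```
# NodePort assigned to the service's first port
kubectl get service my-service \
    -o jsonpath='{.spec.ports[0].nodePort}'

# Internal IP of the first worker node
kubectl get nodes \
    -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
```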
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
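A sketch of the "bastion route" approach: route the cluster's service CIDR to one of the Kubernetes nodes so that kube-proxy on that node forwards the traffic (the CIDR, node name, and zone are examples; use your cluster's actual service range):

```
gcloud compute routes create k8s-service-bastion \
    --network=default \
    --destination-range=10.59.240.0/20 \
    --next-hop-instance=gke-cluster-default-pool-node-1 \
    --next-hop-instance-zone=us-central1-a
```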
One possible way is to use an Ingress Controller. Ingress Controllers are designed to provide access from outside a Kubernetes cluster to services running inside the cluster. An Ingress Controller runs as a pod within the cluster and will route requests from outside the cluster to the correct services inside the cluster, based on the configured rules. This provides a secure and reliable way for non-Kubernetes services running in a GCE network to access services running in Kubernetes.
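A minimal Ingress sketch, assuming an ingress controller is already installed in the cluster (host, service name, and port are placeholders):

```
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-apps
spec:
  rules:
  - host: app.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 9999
EOF
```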