I have set up a VPN tunnel from my on-prem datacenter to a Google Cloud project.
I have set up a BGP session between my on-prem router and a Google Cloud Router. It works: they can see each other's subnets and I can ping instances from each side.
The problem comes when I advertise a default route 0.0.0.0/0 from my on-prem datacenter to my Cloud Router. I have already removed the 0.0.0.0/0 default route from my Google Cloud network, so this setup should route all egress traffic from Google Cloud instances to the on-prem network. That default route is not accepted by the Cloud Router and is not added to the routes table.
Can someone explain whether Cloud Router has a filter against default route advertisements via BGP?
I ran into the same problem. Google filters the 0.0.0.0/0 route from BGP. However, you can work around this by announcing 0.0.0.0/1 and 128.0.0.0/1 via BGP, which together cover the entire address space.
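To confirm what the Cloud Router actually accepted, you can list the routes it learned over the BGP session; a minimal check, assuming a router named my-cloud-router in us-central1 (both placeholders):

gcloud compute routers get-status my-cloud-router --region=us-central1

After announcing the two /1 prefixes, they should show up among the learned routes where 0.0.0.0/0 did not.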
I connect to a Google Cloud MySQL DB from my laptop, but my public IP address changes both at home and when I travel. I have to specify the allowed public IP address in Google Cloud, so every time I reconnect I have to log in and update Google Cloud with my new IP address.
What is the best solution so that I don't have to do that?
To solve this issue you should use the Cloud SQL Auth proxy:
The Cloud SQL Auth proxy provides secure access to your instances without the need for Authorized networks or for configuring SSL.
You can find more details on how it works in the documentation:
The Cloud SQL Auth proxy works by having a local client running in the
local environment. Your application communicates with the Cloud SQL
Auth proxy with the standard database protocol used by your database.
The Cloud SQL Auth proxy uses a secure tunnel to communicate with its
companion process running on the server.
While the Cloud SQL Auth proxy can listen on any port, it only creates
outgoing connections to your Cloud SQL instance on port 3307. If your
client machine has an outbound firewall policy, make sure it allows
outgoing connections to port 3307 on your Cloud SQL instance's IP.
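A minimal sketch of how this looks in practice, assuming the v1 proxy binary and a placeholder instance connection name my-project:us-central1:my-instance:

./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306
mysql --host=127.0.0.1 --port=3306 --user=myuser -p

The proxy authenticates to the instance with your IAM credentials over port 3307, so no authorized-network entry for your changing laptop IP is needed.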
I am trying to establish a site-to-site VPN from Google Cloud to my home. I am using the route-based VPN option in Google Cloud and I see that the connection is established from my home to Google Cloud. When I ping my home network from a Google Compute instance, I can see the incoming traffic at home, but the Google Compute instance is not receiving any reply traffic. I have the following routes:
Default route destination 0.0.0.0/0 next hop Internet gateway (automatically created)
Subnet route destination 192.168.2.0/24 next hop vpc-network (automatically created)
Route destination 192.168.1.0/24 next hop vpn-tunnel (I created this to route traffic from GCP to my home)
The firewall is open from any IP to the VPC network.
I am thinking it is a routing/firewall problem, but I am lost on the next steps to debug. Any help is appreciated.
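A few checks that can help narrow this down (the network name vpc-network is taken from the routes above, the rule name is a placeholder):

gcloud compute routes list
gcloud compute firewall-rules list
# Return traffic from home must be allowed by an ingress rule similar to this:
gcloud compute firewall-rules create allow-from-home --network=vpc-network \
    --direction=INGRESS --action=ALLOW --rules=icmp,tcp,udp \
    --source-ranges=192.168.1.0/24

If the echo request reaches home but the reply never shows up in GCP, the usual suspects are the home router not sending 192.168.2.0/24 back through the tunnel, or a GCP ingress firewall rule not covering 192.168.1.0/24.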
I have a web app that does HTTP and WebSocket requests. I am trying to deploy it to OpenShift v3, so I need my requests to be mapped to ports 80 and 90 in the pod. However:
As mentioned in a related thread, it is not possible for a route to expose multiple ports, so I cannot just map requests to different services based on the port.
I tried setting one route mapping any port to a service with multiple ports, but I get a warning:
Route has no target port, but service has multiple ports. The route
will round robin traffic across all exposed ports on the service
I cannot use different routes for http and ws, because the session cookie obtained for http would not be attached for web socket requests.
Solutions (?):
In the related thread an Ingress Controller is suggested, but it seems that it can only be set up by a cluster administrator.
I could use two routes and set a separate cookie for each route, but this does not seem right -- why do I have to use 2 cookies for 2 domains, when essentially there is a single domain with a single authentication?
Switch to token authentication?
So, what am I missing? What would be the optimal way to handle this?
If any websocket endpoints are under a unique sub-URL path, you could add a second route which has a path definition for the sub-URL path that the route applies to. You could then have requests under that sub-URL path routed to the alternate port, as in the sketch below. You will need a definition for the alternate port on the service in addition to the primary port, or you can create a separate service for the alternate port. I would need to see your current service definition to be more specific.

It is odd that you would be using ports 80 and 90 on the pod, as that implies you are running the container as root, which is not normal practice on OpenShift because of the security risks of running any container as root on a container hosting platform.
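A rough sketch of that second route, assuming the websocket endpoints live under /ws, the service is called myapp, and it exposes the alternate port 90 (all names here are placeholders):

# Route for everything under /ws, targeting port 90 on the myapp service
oc expose service myapp --name=myapp-ws --path=/ws --port=90 --hostname=myapp.example.com
# The existing route keeps serving the rest of the app on port 80
oc expose service myapp --name=myapp-http --port=80 --hostname=myapp.example.com

Because both routes share the same hostname, the session cookie set over HTTP is also sent on the websocket upgrade request.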
I am running Kubernetes in GCP and I have the GKE cluster and the container registry in separate projects. I added the GKE service account to my GCR project and everything works great.
Now, I would like to restrict any outgoing traffic from my GKE project at the compute level. I have added an egress firewall rule to drop any traffic going out of my VPC network. As a consequence, GKE can't pull images from the registry anymore. I added another firewall rule to allow egress traffic for the GKE service account, but to get it to work I had to use "0.0.0.0/0, all ports" as the destination filter. Is there a better way to do this? Is there an IP address range / port for GCR?
Thanks!
GCR does not have a dedicated IP address range.
I am unaware of a way to restrict traffic only for GCR.
Sorry.
There is actually a way to do it.
Create a VPC network and enable Private Google Access. As you can read in the documentation:
Accessible Services
Google services that you can reach using Private Google access include:
Container registry services, a private Docker image repository on Google Cloud Platform
Then don't allow any connections in the firewall, and traffic will be blocked by default. With this you get a GKE cluster that isn't reachable from outside but is still able to pull images from GCR.
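As a minimal sketch (the subnet name and region are placeholders), Private Google Access is enabled per subnet:

gcloud compute networks subnets update my-subnet --region=us-central1 --enable-private-ip-google-access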
A little old, but you can also use a GKE private cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
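For example, a sketch of creating one (the cluster name and master CIDR are placeholders):

gcloud container clusters create my-private-cluster \
    --enable-private-nodes --enable-ip-alias \
    --master-ipv4-cidr=172.16.0.32/28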
I found that, for some reason, gcr.io resolved to an AWS FQDN, so Private Google Access did not work for me. In my case the cluster is private, so I had to add a Cloud NAT and allow 443 out. I was able to pull after the firewall rule was created.
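For reference, a Cloud NAT setup along these lines (the router, NAT, network and rule names are placeholders) lets private nodes reach gcr.io on 443:

gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
gcloud compute firewall-rules create allow-egress-443 --network=my-vpc \
    --direction=EGRESS --action=ALLOW --rules=tcp:443 --destination-ranges=0.0.0.0/0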
I have set up firewall rules but I still cannot receive traffic on my VM instance. I want to allow incoming connections to an HTTP server. By default Google Compute Engine does not allow incoming traffic from outside the network, so you have to create firewall rules. The Google Cloud Platform documentation suggests disabling the operating system firewall. To disable it I need my user password, which I never created. So what do I do now? I need the password for my user, and I am the creator of the VM instance. Any help?
These are my firewall settings:
saad_hussain@saad:~$ gcloud compute firewall-rules list
NAME                    NETWORK  SRC_RANGES    RULES                         SRC_TAGS  TARGET_TAGS
default-allow-http      default  0.0.0.0/0     tcp:80                                  http-server
default-allow-https     default  0.0.0.0/0     tcp:443                                 https-server
default-allow-icmp      default  0.0.0.0/0     icmp
default-allow-internal  default  10.128.0.0/9  tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  0.0.0.0/0     tcp:3389
default-allow-ssh       default  0.0.0.0/0     tcp:22
http                    default  0.0.0.0/0     tcp:80
https                   default  0.0.0.0/0     tcp:80
Open Google Cloud Platform and log in.
Click Console at the top right.
Click Compute Engine in the left menu.
Then click VM instances in the left menu.
Click the three-dot menu (...) of the virtual machine instance for which you want to allow the port connection.
Select "View network details". (Now you can see the firewall rules.)
Click "Firewall rules" in the left menu.
Click the "CREATE FIREWALL RULE" button at the top of the page.
Here you can allow any IP to connect to your VM instance, or open any port. The command-line equivalent is sketched below. Good luck.
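For reference, the same thing from the command line might look like this (the rule name, VM name and zone are placeholders; since the default-allow-http rule above already exists, tagging the instance is usually the missing step):

gcloud compute firewall-rules create allow-http --network=default \
    --direction=INGRESS --action=ALLOW --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 --target-tags=http-server
gcloud compute instances add-tags my-vm --zone=us-central1-a --tags=http-server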
Here is some advice to troubleshoot similar issues. Have a look at:
a) The Google firewall. As per the comments and the output provided, port 80 is already opened, but the rule only applies to instances that carry the tag "http-server".
b) Making sure that a firewall inside the VM is not filtering packets. As also mentioned in the comments, most of the public images provided by Google allow the traffic by default.
c) Making sure that the service is not listening only on localhost and that it is using an IPv4 address.
Using nmap can help determine whether the issue is caused by a firewall or by the server not listening on the appropriate port. The latter can also be verified using "netstat --listen".
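For example (the external IP below is a placeholder), first probe from outside the VM, then check the listener from inside it:

nmap -Pn -p 80 203.0.113.10
sudo netstat --listen --numeric --tcp
# or, on newer images without netstat:
ss -ltn

A "filtered" result from nmap points at a firewall; "closed" means the packet arrived but nothing is listening on port 80.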