Routing on Google Compute Engine from a machine without a public IP to the internet

On Google Compute Engine we have machines which do not have public IPs (because a quota limits the number of machines that can have public IP addresses). We need these non-public-IP machines to access data from Google Storage buckets, which appears to mean that we have to route to the Internet. But we can't reach anything outside of our network from these non-public-IP machines; all packets are dropped.
We've found some documentation https://developers.google.com/compute/docs/networking#routing that describes setting up routing from machines that do not have public IP addresses to one that does.
We tried creating a machine "proxy" that has IP forwarding turned on and firewall rules that allow HTTP and HTTPS (I don't think this detail matters, but we did it). We created a network "nat" with a route that forwards 0.0.0.0/0 to "proxy". Our hope was that the non-public-IP machines on the "nat" network would forward their packets to "proxy", and "proxy" would then act as a gateway to the Internet somehow, but this does not work.
I suspect that "proxy" needs some routing instruction we aren't providing, one that tells it to forward traffic on to the Google Internet gateway, but I'm not sure what this should be. Perhaps a rule in iptables? Or some sort of NAT program?

You may be able to use iptables NAT to get it working. On the proxy instance (as root):
# masquerade traffic leaving through the internet-facing interface
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
# accept traffic to be forwarded that arrives on the internal interface
# (this assumes a second NIC; with a single NIC, match on eth0 instead)
iptables --append FORWARD --in-interface eth1 -j ACCEPT
# enable kernel IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
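Two GCE-specific details are easy to miss here. The instance must have been created with IP forwarding enabled (the canIpForward flag can't be changed after creation), and the sysctl above doesn't persist across reboots. A sketch, assuming a Debian-based image and the "nat" network from the question:
# create the proxy with IP forwarding enabled (immutable after creation)
gcloud compute instances create proxy --can-ip-forward --network nat
# on the proxy, persist kernel forwarding across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system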

Related

Opening port 19132 on an Oracle compute instance (ubuntu-20.04)

I've created an Oracle Cloud infrastructure compute instance running Ubuntu 20.04. I am trying to open port 19132.
As per another question I found
Opening port 80 on Oracle Cloud Infrastructure Compute node
I've created a public subnet which has an internet gateway, and added ingress rules for port 19132 (in the security lists).
netstat looks good:
netstat -tulpn
Proto Recv-Q Send-Q Local Address    Foreign Address   State   PID/Program name
udp        0      0 0.0.0.0:19132    0.0.0.0:*                 1007/./bedrock_serv
I installed ufw and added rules to allow 19132, but I still can't connect to it from the outside world. Can anyone point out where I am going wrong?
I got the same issue on the Oracle cloud. Here is what worked for me.
First, install firewalld:
sudo apt install firewalld
Then open the port in the public zone:
sudo firewall-cmd --zone=public --permanent --add-port=19132/tcp
Finally, reload firewalld:
sudo firewall-cmd --reload
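One caveat: the netstat output in the question shows the server listening on UDP (the bedrock_serv process), so the TCP rule alone may not be enough. Opening the UDP port as well should cover that case:
sudo firewall-cmd --zone=public --permanent --add-port=19132/udp
sudo firewall-cmd --reload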
Looks like you need to have a Public IP configured on that VM for it to be reachable from the internet.
Please look at
https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm
For an instance to communicate directly with the internet, all of the following are required:
The instance must be in a public subnet.
The instance must have a public IP address.
The instance's VCN must have an internet gateway.
The public subnet must have route tables and security lists configured accordingly.
You haven't mentioned anything about the route table. If the route is missing, add one with destination=0.0.0.0/0 and target=the Internet Gateway.
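If you prefer the CLI over the console, something along these lines should set that route (a sketch; the OCIDs are placeholders, and the exact JSON keys can vary between CLI versions, so check oci network route-table update --help):
oci network route-table update --rt-id <route-table-ocid> \
  --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "<internet-gateway-ocid>"}]'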
Two questions come to mind:
1. You have specified two rules, one for TCP and one for UDP. Your netstat shows that something is listening for UDP traffic. Is there also something listening on TCP, or are you using UDP only for the test?
2. Can you tell us anything about the traffic characteristics on this port? I'm asking because if it is UDP traffic, the only way for connection tracking to work is to track the source/dest IP and port. Since the port will not be present in fragments, the traffic will be dropped. This could be happening on the ingress or egress side. To verify, you could create test ingress/egress rules for all UDP traffic to/from your test IP.
Since your ingress rules are stateful, the egress rules shouldn't matter, but it wouldn't hurt to double-check them. If none of these things work, you might try a tool like echoping to get more insight into whether the traffic is having trouble on the ingress or egress side.
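As a rough probe of the fragmentation theory (using ICMP as a stand-in for your UDP traffic), you can send pings just below and above the typical 1500-byte MTU and see whether only the large ones vanish:
# 1472 bytes of payload + 28 bytes of ICMP/IP headers = 1500; should get replies
ping -M do -s 1472 <instance-ip>
# this one must fragment; if it's silently dropped, fragments are being filtered
ping -s 2000 <instance-ip>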
Please check the order of your iptables rules. Could you post the following command's output for the INPUT chain?
sudo iptables -S INPUT
I have seen iptables rules as the single most prominent reason for these issues.
I think you have to allow the user, or add a user who can connect, like this:
create user 'user'@'publicIP' identified by 'password';
grant all privileges on *.* to 'user'@'publicIP' with grant option;
flush privileges;
Here publicIP can be the wildcard '%' or your system's IP address.
Don't use the wildcard as it is open to all; I have faced various breaches on my GCP machine which led to my account being blocked.

Handling requests to and from non-default network interface

I am working on a project that requires me to have multiple network interfaces. I followed the documentation and created three interfaces. I also changed the firewall rules. But even after changing the firewall rules, I am not getting a reply to an ICMP request sent to the second interface's external IP.
As seen in the screenshot, I have allowed all protocols from anywhere to any instance in my network.
If you look at the routing table of your VM instance, you'll see that the default route is configured on the primary network interface eth0:
vm-instance:$ ip route
default via 10.156.0.1 dev eth0
...
Whether an Ephemeral or a Static External IP address is configured, this External IP is unknown to the operating system of the VM instance. The External IP address is mapped to the VM's Internal address transparently by the VPC. You can verify this with the command:
vm-instance:$ ip -4 address show
You'll see that there are no External IPs bound.
Furthermore, IP packet forwarding is disabled both at the VM level and within the guest OS of Google-provided Linux images. The commands below can verify that:
CloudShell:$ gcloud compute instances describe vm-instance --zone=your-zone | grep canIpForward
vm-instance:$ sudo sysctl net.ipv4.ip_forward
Therefore, when a ping packet is received on a secondary interface, it can't be replied to.
To explore this behavior a bit, you may launch tcpdump on the VM instance so that it listens on a secondary interface, for example eth1:
vm-instance:$ sudo apt-get install tcpdump
vm-instance:$ sudo tcpdump -i eth1
then find out the External IP of your Cloud Shell appliance and ping the secondary External IP of your VM instance from Cloud Shell:
CloudShell:$ curl ifconfig.me/ip
CloudShell:$ ping [secondary_ip_of_vm_instance]
You'll see in the tcpdump output on the console of your VM instance how ICMP packets arrive at the eth1 interface from the External IP address of your workstation. But they are never answered.
Google provides an explanation of this behavior in the Troubleshooting section of the VPC documentation and suggests possible workarounds:
Virtual Private Cloud > Doc > Creating instances with multiple network interfaces > Troubleshooting > I am not able to connect to secondary interface using external IP:
The DHCP server programs a default route only on the primary network interface of the VM. If you want to connect to the secondary interface using an external IP, there are two options. If you only need to connect outside the network on the secondary network interface, you can set a default route on that network interface. Otherwise, you can use Configuring Policy Routing to configure a separate routing table using source-based policy routing in your VM.
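A minimal sketch of that source-based policy routing, assuming the secondary interface is eth1 with internal address 10.132.0.2 and subnet gateway 10.132.0.1 (hypothetical values; substitute your own):
# send traffic originating from eth1's address through its own gateway
sudo ip route add default via 10.132.0.1 dev eth1 table 100
sudo ip rule add from 10.132.0.2/32 table 100
sudo ip rule add to 10.132.0.2/32 table 100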

Load Balancer not able to connect with backend

I have deployed a Spring Boot app on an OCI compute instance and it's coming up nicely. The compute instance is created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24) with its own routing table and security list. I configured the LB's security list to send all protocol packets to the compute instance's CIDR (10.0.0.0/24) and configured the compute instance's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it's not.
I am able to hit the LB from the internet.
The LB's routing table routes all IPs through the internet gateway. There is no routing defined for the compute instance's CIDR as it's within the VCN.
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet (see screenshot).
The compute instance's security list accepts packets from the LB (see screenshot).
Let me know if I am missing something here.
My internet gateway and the backend set connection configuration from the LB are shown in the screenshots.
The LB fails to make a connection with the backend, and there seems to be no logging info available.
The app is working fine if I access it from the compute node.
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and report the critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
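If your app listens on a port SELinux doesn't associate with httpd, a check along these lines may help (port 8081 is a hypothetical example; 8080 is usually predefined):
# list the ports SELinux allows for httpd
sudo semanage port -l | grep http_port_t
# add a custom port if it isn't listed
sudo semanage port -a -t http_port_t -p tcp 8081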
Additionally, setting up a second host in the same subnet to log in to and test connecting to the other host will help troubleshooting, since it verifies whether your app is accessible at all outside the host it's on. Once it is, the LB should come up fine.
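For example, from that second instance in the backend subnet (address and port are hypothetical):
# verbose output shows whether the TCP connection itself succeeds
curl -v http://10.0.0.5:8080/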
TL;DR: In my case it helped to switch the Security List rules from stateful to stateless on the two relevant subnets (where the load balancer was hosted and where the backends were located).
In our deployment I had a load balancer with a public IP located on one subnet, while the backend to this load balancer was on another subnet. Both subnets had one ingress and one egress rule allowing everything (i.e. 0.0.0.0/0 and all ports). The backends were still not reachable from the load balancer and the health checks were failing.
Even though, per the documentation, switching between stateful and stateless should not have had an effect in my case, it solved my issue.

Cannot access Google Cloud Compute Instance External IP

I have set up a Google Cloud Compute Instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed vncserver and can access it on port 5901 from localhost as well as from the internal IP.
I am trying to access it from the static, external IP address but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that neither the Google firewall nor your VM is blocking packets, you must make sure that the VNC service is configured to listen on the external IP address.
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
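For example, run from a machine outside the project (5901 being the VNC port from the question):
# -Pn skips the host-discovery ping, which GCP firewalls often block
nmap -Pn -p 5901 <external-ip>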
Enable HTTP/HTTPS traffic from the firewall as per your need; that should make it work.
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute Instance External IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my CE instance's External IP property, which takes you directly to the https version, and I hadn't set up https, so that was causing the 'site not found' error.
Create an entry in your local ssh config file as below, with the local forward port mentioned. In my case it's an example of YARN's address, which I want to access in a browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
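With that entry in place, usage looks like this (the tunnel stays open for the life of the SSH session):
ssh hadoop
# then browse http://localhost:8089 locally; traffic is forwarded to <Internal-IP>:8088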
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, you connect to the instance using SSH and verify you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 (plus ICMP) are exposed. If your server is running on a different port, create a firewall rule for it, as shown below.
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your need.
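For example, to open a custom port such as the VNC port 5901 discussed in this question (the rule name is hypothetical):
gcloud compute firewall-rules create allow-vnc \
  --direction=INGRESS --allow=tcp:5901 --source-ranges=0.0.0.0/0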
For me the problem was that I had set the firewall rule's traffic direction to 'Egress' instead of 'Ingress'.
If you have already initiated 'https', just disable it and check again.

Multiple IP addresses on a single Google Compute Engine instance

I'm trying to have my GCE instance listen on multiple IP addresses (for SEO reasons - to host multiple low traffic sites on the same instance).
Final objective: mydomain.com points to IP1, myotherdomain.es points to IP2, the GCE instance will listen on both IP1 and IP2 and serve content accordingly.
I added a target instance pointing to my main instance and managed to create a forwarding rule like this:
gcloud compute forwarding-rules create another-ip --port 80 --target-instance MY_TARGET_INSTANCE_URL
It actually created an ephemeral IP address; I tried to promote it to static, but I exceeded my quota (I'm currently on my 2-month free trial).
Is this correct, though? Will I be able to create any number of static IPs and point them at my only instance once the trial ends? I also couldn't find anything about pricing: I know an IP assigned to an active instance is free, but what about additional ones?
Since this is a necessary configuration for a site I'm managing, I'd like to be sure it works before committing to moving everything on GCE.
You can get multiple external IPs for one VM instance with forwarding rules.
By default, the VM will be assigned an ephemeral external IP; you can promote it to a static external IP, which will remain unchanged after a stop and restart.
Extra external IPs have to be attached to forwarding rules that point to the VM. You can use (or promote to) static IPs for these as well.
The command you may want to use:
Create a TargetInstance for your VM instance:
gcloud compute target-instances create <target-instance-name> --instance <instance-name> --zone=<zone>
Create a ForwardingRule pointing to the TargetInstance:
gcloud compute forwarding-rules create <forwarding-rule-name> --target-instance=<target-instance-name> --ip-protocol=TCP --ports=<ports>
See Protocol Forwarding.
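Putting it together with a reserved static address, a sketch with hypothetical names and region (note the extra address attaches to the forwarding rule, not to the VM itself):
# reserve a regional static external IP
gcloud compute addresses create extra-ip --region=us-east1
# attach it to a forwarding rule pointing at the target instance
gcloud compute forwarding-rules create another-ip \
  --region=us-east1 --address=extra-ip \
  --target-instance=<target-instance-name> --target-instance-zone=us-east1-c \
  --ip-protocol=TCP --ports=80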
I also need two static IPs for one Compute Engine instance, but Google's quota does not allow this.
You can see your quotas at https://console.cloud.google.com/iam-admin/quotas
Another possibility is to have multiple network interfaces on the VM.
This requires adding a new VPC network; the range 10.130.0.0/20 is not used on the current infrastructure and can serve as an additional network. You would add the proper firewall rules and the proper routing rules (you can copy the default ones to avoid any misconfiguration).
Note that you cannot add a network interface to an existing machine; you would need to:
Turn off the current machine
Detach the disk and network (without deleting them!)
Create a new machine with 2 or more network cards
Attach the old disk and network to the new machine
Finally, you would need to pay attention to the default gateway: the classic network behavior makes everything go through the first network interface, and the second won't be accessible until you change the default gateway and/or create the proper routing rules.
Typically you have eth0 and eth1; this example makes eth1 available to services that bind to it:
# assign the secondary address to eth1 and bring the interface up
ip addr add 10.130.0.2/32 broadcast 10.130.0.2 dev eth1
ip link set eth1 up
# reach the gateway, both in the main table and in dedicated table 100
ip route add 10.130.0.1 src 10.130.0.2 dev eth1
ip route add 10.130.0.1 src 10.130.0.2 dev eth1 table 100
# default routes through the secondary gateway (low priority in the main table)
ip route add default via 10.130.0.1 dev eth1 metric 10
ip route add default via 10.130.0.1 dev eth1 table 100
# policy rules: traffic from/to 10.130.0.2 consults table 100
ip rule add from 10.130.0.2/32 table 100
ip rule add to 10.130.0.2/32 table 100
# verify: each interface should report a different external IP
curl --interface eth1 ifconfig.co
curl --interface eth0 ifconfig.co
ping -I eth1 8.8.8.8
Here is the documentation; alternatively, this guide may help.