Handling requests to and from non-default network interface - google-compute-engine

I am working on a project that requires me to have multiple network interfaces. I followed the documentation and created three interfaces. I also changed the firewall rules. But even after changing the firewall rules, I am not getting a reply for an ICMP request to the second interface's external IP.
As seen in the screenshot, I have allowed all protocols from anywhere to any instance in my network.

If you look at the routing table of your VM instance, you'll see that the default route is configured on the primary network interface eth0:
vm-instance:$ ip route
default via 10.156.0.1 dev eth0
...
Whether an Ephemeral or a Static External IP address is configured, this External IP is unknown to the operating system of the VM instance: the External IP address is transparently mapped to the VM's Internal address by the VPC. You can verify this with the command
vm-instance:$ ip -4 address show
You'll see that there are no External IPs bound.
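For illustration, the output looks something like this (the internal address 10.156.0.2 is hypothetical):
vm-instance:$ ip -4 address show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 state UP
    inet 10.156.0.2/32 brd 10.156.0.2 scope global eth0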
Furthermore, IP packet forwarding is disabled both at the VM level and within the guest OS of the Google-provided Linux image. The commands below verify that:
CloudShell:$ gcloud compute instances describe vm-instance --zone=your-zone | grep canIpForward
vm-instance:$ sudo sysctl net.ipv4.ip_forward
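With forwarding disabled (the default), these report:
canIpForward: false
net.ipv4.ip_forward = 0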
Therefore, when a ping packet is received on a secondary interface, the VM can't reply, because the reply would be routed out via the default route on eth0.
To explore this behavior a bit, you can launch tcpdump on the VM instance so that it listens on a secondary interface, for example eth1:
vm-instance:$ sudo apt-get install tcpdump
vm-instance:$ sudo tcpdump -i eth1
then find out the External IP of your Cloud Shell appliance and ping the secondary External IP of your VM instance from Cloud Shell:
CloudShell:$ curl ifconfig.me/ip
CloudShell:$ ping [secondary_ip_of_vm_instance]
You'll see in the tcpdump output on the console of your VM instance how ICMP packets arrive at the eth1 interface from the External IP address of your workstation, but they are never answered.
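The captured packets look something like this (addresses here are hypothetical):
13:27:01.954743 IP 203.0.113.5 > 10.132.0.2: ICMP echo request, id 1534, seq 1, length 64
with no matching echo reply lines.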
Google provides an explanation of this behavior in the Troubleshooting section of the VPC documentation and suggests possible workarounds:
Virtual Private Cloud > Doc > Creating instances with multiple network interfaces > Troubleshooting > I am not able to connect to secondary interface using external IP:
The DHCP server programs a default route only on the primary network interface of the VM. If you want to connect to the secondary interface using an external IP, there are two options. If you only need to connect outside the network on the secondary network interface, you can set a default route on that network interface. Otherwise, you can use Configuring Policy Routing to configure a separate routing table using source-based policy routing in your VM.
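A minimal sketch of that policy-routing workaround, assuming eth1 has the internal address 10.132.0.2 and the gateway 10.132.0.1 (both hypothetical; adapt them to your subnet):
vm-instance:$ sudo ip route add 10.132.0.1 src 10.132.0.2 dev eth1 table 100
vm-instance:$ sudo ip route add default via 10.132.0.1 dev eth1 table 100
vm-instance:$ sudo ip rule add from 10.132.0.2/32 table 100
vm-instance:$ sudo ip rule add to 10.132.0.2/32 table 100
With these rules in place, replies to packets that arrived on eth1 leave through eth1 instead of the default route on eth0.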

Related

Load Balancer not able to connect with backend

I have deployed a Spring Boot app on an OCI compute instance and it's coming up nicely. The compute instance is created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24) with its own routing table and security list. I configured the LB's security list to send all protocol packets to the compute instance's CIDR (10.0.0.0/24) and configured the compute instance's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it's not.
I am able to hit the LB from the internet:
The LB's routing table routes all IPs through the internet gateway. There is no routing defined for the compute instance's CIDR as it's within the VCN.
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet, as below:
The compute instance's security list accepts packets from the LB:
Let me know if I am missing something here.
My internet gateway:
My backend set connection configuration on the LB:
The LB fails to make a connection with the backend; there seems to be no logging info available:
The app is working fine if I access it from the compute node:
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and report the Critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
Additionally, setting up a second host in the same subnet to login to and test connecting to the other host will help troubleshooting, since it will verify if your app is accessible at all outside the host that it's on. Once it is, the LB should come up fine.
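For example, from that second host (the path and port are placeholders for whatever your app serves):
curl -v http://<backend-private-ip>:8080/
If this returns your app's response, the service is reachable outside its own host and the LB health check should pass.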
TL;DR: In my case it helped to switch the Security List rules from stateful to stateless on the two relevant subnets (where the load balancer was hosted and where the backends were located).
In our deployment I had a load balancer with a public IP on one subnet, while the backends for this load balancer were on another subnet. Both subnets had one ingress and one egress rule allowing everything (i.e. 0.0.0.0/0 and all ports). The backends were still not reachable from the load balancer and the health checks were failing.
Even though, per the documentation, switching between stateful and stateless should not have had an effect in my case, it solved my issue.

IAP tunnel to VM

I have a question regarding a Compute VM and its associated privileges. I have ‘Owner’ privileges at the Project level. I created a VM but was not able to assign an external IP address to it. From the Google Cloud docs, it appears that I'll still be able to connect to this VM using VPN or IAP. Upon clicking the SSH link next to the VM, I see that it uses a Cloud IAP tunnel, but the connection fails.
Here is the error message
External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.start-iap-tunnel) Error while connecting [4003: u'failed to connect to backend'].
ssh_exchange_identification: Connection closed by remote host
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
How do I go about connecting to this VM?
Appreciate your help with this
https://hodari.be/posts/2019_09_30_access_private_gke_nodes_with_ssh/
https://cloud.google.com/iap/docs/using-tcp-forwarding
Firewall rules that are configured to allow access from Cloud IAP's TCP forwarding netblock, 35.235.240.0/20, on all ports of your machine. This ensures that connections are allowed from Cloud IAP's TCP forwarding IP addresses to the TCP port of the admin service on your resource. Note that you might not need to adjust your firewall rules if the default-allow-ssh and default-allow-rdp default rules are applied to ports used for SSH and RDP.
Since you probably already have default-allow-ssh, instead of trying:
gcloud compute start-iap-tunnel stage-es-kibana 5601 --local-host-port=localhost:5601
jump to the port via an extra SSH layer:
gcloud compute ssh stage-es-kibana -- -N -L 5601:localhost:5601
or open the Google firewall between host/port stage-es-kibana:5601 and the subnet 35.235.240.0/20.
This is a permissions issue.
You are trying to SSH into your VM through Google's IAP proxy.
You don't have permission to create the tunnel from your computer to the proxy server.
You need the role "roles/iap.tunnelResourceAccessor" to SSH to your VM:
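A minimal sketch of granting it at the project level (project ID and member are placeholders):
gcloud projects add-iam-policy-binding <project-id> --member=user:<your-email> --role=roles/iap.tunnelResourceAccessor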
It seems that a GCP Compute Engine instance still needs to initialize SSH and other services after it reaches RUNNING status.
I used a workaround by adding a sleep (60 sec) command, after starting the VM and before SSH using the IAP tunnel.
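Something along these lines (instance and zone names are placeholders):
gcloud compute instances start <instance> --zone=<zone>
sleep 60
gcloud compute ssh <instance> --zone=<zone> --tunnel-through-iap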
In my case I solved or worked around it by omitting the --tunnel-through-iap parameter that is passed to gcloud compute ssh.
Try opening the Google firewall to the IAP subnet 35.235.240.0/20.
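For example (the rule name is arbitrary):
gcloud compute firewall-rules create allow-ingress-from-iap --direction=INGRESS --action=allow --rules=tcp:22 --source-ranges=35.235.240.0/20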

Multiple IP addresses on a single Google Compute Engine instance

I'm trying to have my GCE instance listen on multiple IP addresses (for SEO reasons - to host multiple low traffic sites on the same instance).
Final objective: mydomain.com points to IP1, myotherdomain.es points to IP2, the GCE instance will listen on both IP1 and IP2 and serve content accordingly.
I added a target instance pointing to my main instance and managed to create a forwarding rule like this:
gcloud compute forwarding-rules create another-ip --port 80 --target-instance MY_TARGET_INSTANCE_URL
It actually created an ephemeral IP address; I tried to promote it to static but I exceeded my quota (I'm currently on my 2 months free trial).
Is this correct though? Will I be able to create any number of static IPs and point them to my only instance once the trial ends? I also couldn't find anything about pricing: I know an IP assigned to an active instance is free, but what about additional ones?
Since this is a necessary configuration for a site I'm managing, I'd like to be sure it works before committing to moving everything on GCE.
You can get multiple external IPs for one VM instance with forwarding rules.
By default, the VM is assigned an ephemeral external IP; you can promote it to a static external IP, which remains unchanged across a stop and restart.
Extra external IPs have to be attached to forwarding rules which point to the VM. You can use (or promote to) static IPs for these as well.
The commands you may want to use:
Create a TargetInstance for your VM instance:
gcloud compute target-instances create <target-instance-name> --instance <instance-name> --zone=<zone>
Create a ForwardingRule pointing to the TargetInstance:
gcloud compute forwarding-rules create <forwarding-rule-name> --target-instance=<target-instance-name> --ip-protocol=TCP --ports=<ports>
See Protocol Forwarding.
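If the forwarding rule ended up with an ephemeral IP, you can promote it to a static one by reserving the in-use address (name and region are placeholders):
gcloud compute addresses create <address-name> --addresses=<ephemeral-ip> --region=<region>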
I also need 2 static IPs for one Compute Engine instance, but Google's quota does not allow this.
You can see your quotas from https://console.cloud.google.com/iam-admin/quotas
Another possibility is to have multiple network interfaces on the VM.
This requires adding a new VPC network. If the range 10.130.0.0/20 is not used in the current infrastructure, it can be used as an additional network; add the proper firewall rules and the proper routing rules (you can copy the default ones to avoid any misconfiguration).
Note that you cannot add a network interface to an existing machine; you would need to:
Turn off the current machine
Detach disk and network (without deleting them !!!)
Create a new machine with 2 network cards or more
Attach the old disk and network to the new machine
Finally, you would need to pay attention to the default gateway: the classic network behavior makes everything go through the first network interface, and the second won't be accessible until you change the default gateway and/or create the proper routing rules.
Typically you have eth0 and eth1; this example makes eth1 available to services that bind to it:
# assign eth1 its address and bring it up
ip addr add 10.130.0.2/32 broadcast 10.130.0.2 dev eth1
ip link set eth1 up
# route to the gateway through eth1, in the main table and in table 100
ip route add 10.130.0.1 src 10.130.0.2 dev eth1
ip route add 10.130.0.1 src 10.130.0.2 dev eth1 table 100
# default routes via the eth1 gateway
ip route add default via 10.130.0.1 dev eth1 metric 10
ip route add default via 10.130.0.1 dev eth1 table 100
# traffic from or to eth1's address uses table 100
ip rule add from 10.130.0.2/32 table 100
ip rule add to 10.130.0.2/32 table 100
# verify: each interface should report its own external IP
curl --interface eth1 ifconfig.co
curl --interface eth0 ifconfig.co
ping -I eth1 8.8.8.8
Here is the documentation; alternatively, this guide may help.

Not able to connect to kafka server on google compute engine from local machine

I am running my ZooKeeper and Kafka server on Google Compute Engine. Both are running on the default ports (ZooKeeper on 2181 and Kafka on 9092). Both are running on the same instance. I have opened up both ports as well. In my server.properties I have configured
zookeeper.connect=<InternalIP>:2181
host.name=localhost
If I try to push/consume messages from the same server, I am able to do so.
To push/consume I use
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
However, if I try the same from my local machine, I get kafka.common.FailedToSendMessageException in the producer and java.net.ConnectException: Connection refused in the case of the consumer.
I try to push/consume via
bin/kafka-console-producer.sh --broker-list <ExternalIP>:9092 --topic topic1
bin/kafka-console-consumer.sh --zookeeper <ExternalIP>:2181 --topic topic1 --from-beginning
Please note that I am able to ping the external IP from my local system.
I have configured the below firewall rules in Compute Engine:
Description: kafka port enabled
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:9092
Description: zookeeper port enabled
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:2181
You must access the Compute Engine VM instance through SSH, then edit the Kafka configuration file:
$ sudo vim /opt/bitnami/kafka/config/server.properties
Uncomment the line # advertised.listeners=PLAINTEXT://:9092 and replace it with advertised.listeners=PLAINTEXT://[instance_public_ip_address]:9092
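After the edit, the relevant lines should look roughly like this (203.0.113.10 stands in for your instance's external IP; the listeners line, if present, keeps the broker bound on all interfaces):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://203.0.113.10:9092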
As a last step, restart the Kafka service:
sudo /opt/bitnami/ctlscript.sh restart
It's important to consider that the default external IP address of the GCP compute VM is ephemeral, so you must change it to static in the GCP configuration panel of the Kafka instance, in order to avoid changing the configuration file each time the IP address changes.
Have you configured any firewall rules? You don't mention it, so I assume not.
From https://cloud.google.com/compute/docs/networks-and-firewalls#firewalls :
"By default, all incoming traffic from outside a network is blocked and no packet is allowed into an instance without an appropriate firewall rule."

Routing on Google Compute Engine from a machine that doesn't have public IP to the internet

On Google Compute Engine we have machines which do not have public IPs (because a quota limits the number of machines that can have public IP addresses). We need these non-public-IP machines to access data from Google Storage buckets which appears to mean that we have to route to the Internet. But we can't get to anything outside of our network from these non-public-IP machines. All packets drop.
We've found some documentation https://developers.google.com/compute/docs/networking#routing that describes setting up routing from machines that do not have public IP addresses to one that does.
We tried creating a machine "proxy" that has IP forwarding turned on and has firewall rules that allow HTTP and HTTPS (I don't think this detail matters, but we did it). We created a network "nat" that has a 0.0.0.0/0 route to "proxy". Our hope was that machines on the "nat" network without public IPs would forward their packets to "proxy", and that "proxy" would then act as a gateway to the Internet somehow, but this does not work.
I suspect that we have to do some kind of routing instruction on "proxy" that we aren't doing that tells proxy to forward to the Google Internet gateway, but I'm not sure what this should be. Perhaps a rule in iptables? Or some sort of NAT program?
You may be able to use iptables NAT to get it working. On the proxy instance (as root):
# masquerade forwarded traffic behind the proxy's own address
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
# accept packets arriving from the internal interface for forwarding
iptables --append FORWARD --in-interface eth1 -j ACCEPT
# enable IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
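For reference, the 0.0.0.0/0 route through "proxy" that the question describes can be created like this (the zone and the no-ip tag are placeholders; instances carrying the tag will then send their internet-bound traffic to the proxy):
gcloud compute routes create no-ip-internet-route --network=nat --destination-range=0.0.0.0/0 --next-hop-instance=proxy --next-hop-instance-zone=<zone> --priority=800 --tags=no-ip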