I have this Google Compute Engine instance that an ephemeral IP is forwarded to; in fact, all TCP ports on that IP are being forwarded to the target instance.
Now what I need to do is to forward all UDP ports from the same IP to the same instance.
Unfortunately running this command:
gcutil --service_version="v1" --project="trainer-484" addforwardingrule "eu-rule-1-1-udp" --region="europe-west1" --protocol="UDP" --target="eu-pool" --ip="x.y.x.x"
I get the following error:
Invalid value for field 'resource.natIP': 'natIP/x.y.x.x'. Resource was not found.
This is a serious problem, as we need to be able to forward all protocols, not just a subset.
You can't add another forwarding rule to that ephemeral IP (see the documentation at [1]).
You need to reserve an IP with this command:
gcutil --project="trainer-484" reserveaddress --region="europe-west1" ip-name
Then you can use the reserved IP to add forwarding rules.
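For example, the command that failed in the question should then succeed with the reserved address in place of the ephemeral one (the placeholder below stands for your reserved IP):
gcutil --service_version="v1" --project="trainer-484" addforwardingrule "eu-rule-1-1-udp" --region="europe-west1" --protocol="UDP" --target="eu-pool" --ip="<reserved-ip>"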
Kind Regards,
Paolo
[1] - https://developers.google.com/compute/docs/gcutil/reference/forwardingrule
I've created an Oracle Cloud Infrastructure compute instance running Ubuntu 20.04. I am trying to open port 19132.
As per another question I found:
Opening port 80 on Oracle Cloud Infrastructure Compute node
I've created a public subnet which has an internet gateway and added ingress rules for port 19132 (in the security lists)
netstat looks good
netstat -tulpn
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:19132 0.0.0.0:* 1007/./bedrock_serv
I installed ufw and added rules to allow 19132 but I still can't connect to it from the outside world. Can anyone point out where I am going wrong?
I got the same issue on Oracle Cloud.
Here is what worked for me:
First, install firewalld
sudo apt install firewalld
Then open the port in the public zone; note that the Bedrock server in the question is listening on UDP (per the netstat output), so the port should be opened for UDP:
sudo firewall-cmd --zone=public --permanent --add-port=19132/udp
Finally, reload firewalld
sudo firewall-cmd --reload
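To double-check that the port is now open, you can list the ports in the zone (standard firewall-cmd options):
sudo firewall-cmd --zone=public --list-ports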
Looks like you need to have a Public IP configured on that VM for it to be reachable from the internet.
Please look at
https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm
For an instance to communicate directly with the internet, all of the following are required:
The instance must be in a public subnet.
The instance must have a public IP address.
The instance's VCN must have an internet gateway.
The public subnet must have route tables and security lists configured accordingly.
You haven't mentioned anything about the route table. If the route is missing, add one with destination 0.0.0.0/0 and the Internet Gateway as the target.
You have specified two rules, one for TCP and one for UDP, and two questions come to mind:
1. Your netstat shows that something is listening for UDP traffic. Is there also something listening on TCP, or are you using UDP only for the test?
2. Can you tell us anything about the traffic characteristics on this port? I'm asking because if it is UDP traffic, the only way for connection tracking to work is to track the source/destination IP and port. Since the port will not be present in fragments, the traffic will be dropped. This could be happening on the ingress or egress side. To verify, you could create test ingress/egress rules for all UDP traffic to/from your test IP.
Since your ingress rules are stateful, the egress rules shouldn't matter but it wouldn't hurt to double check them. If none of these things work, you might try a tool like echoping to get more insight into whether or not the traffic is having trouble on the ingress or egress side.
Please check the order of your iptables rules. Could you post the output of the following command for the INPUT chain?
sudo iptables -S INPUT
I have seen iptables rule ordering as the single most common cause of these issues.
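If the output shows a REJECT rule ahead of your ACCEPT rule (a common situation on Oracle-provided images), you can insert an ACCEPT rule above it; the position number 5 below is only an illustration, so pick the position just before the REJECT in your own output:
sudo iptables -I INPUT 5 -p udp --dport 19132 -j ACCEPT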
Regards
Muthu
I think you have to allow the user, or add a user who can connect, like this:
create user 'user'@'publicIP' identified by 'password';
grant all privileges on *.* to 'user'@'publicIP' with grant option;
flush privileges;
Here publicIP can be '0.0.0.0' or your system's IP address.
Don't use '0.0.0.0', as it is open to all; I have faced various breaches on my GCP machine, which led to my account being blocked.
I am working on a project that requires me to have multiple network interfaces. I followed the documentation and created three interfaces. I also changed the firewall rules. But even after changing the firewall rules, I am not getting a reply for an ICMP request to the second interface's external IP.
As seen in the screenshot, I have allowed all protocols from anywhere to any instance in my network.
If you look at the routing table of your VM instance, you'll see that the default route is configured on the primary network interface eth0:
vm-instance:$ ip route
default via 10.156.0.1 dev eth0
...
Whether an Ephemeral or a Static External IP address is configured, this External IP is unknown to the operating system of the VM instance. The External IP address is mapped to the VM's Internal address transparently by VPC. You can verify this with the command
vm-instance:$ ip -4 address show
You'll see that there are no External IPs bound.
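For illustration, the output typically looks something like this (the address is a placeholder; note that only the internal address appears):
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 ...
    inet 10.156.0.2/32 scope global eth0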
Furthermore, IP packet forwarding is disabled both at the VPC level for the VM instance and within the network stack of the Google-provided Linux image. The commands below can verify that:
CloudShell:$ gcloud compute instances describe vm-instance --zone=your-zone | grep canIpForward
vm-instance:$ sudo sysctl net.ipv4.ip_forward
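With the defaults, both commands report that forwarding is disabled:
canIpForward: false
net.ipv4.ip_forward = 0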
Therefore, when a ping packet is received on a secondary interface, the VM can't reply.
To explore this behavior a bit, you may launch tcpdump on the VM instance so that it listens on a secondary interface, for example eth1:
vm-instance:$ sudo apt-get install tcpdump
vm-instance:$ sudo tcpdump -i eth1
then find out the External IP of your Cloud Shell appliance and ping the secondary External IP of your VM instance from Cloud Shell:
CloudShell:$ curl ifconfig.me/ip
CloudShell:$ ping [secondary_ip_of_vm_instance]
You'll see in the tcpdump output on the console of your VM instance how ICMP packets arrive at the eth1 interface from the External IP address of your workstation, but no replies are sent.
Google provides explanation of this behavior in the Troubleshooting section of the VPC documentation and suggests possible workarounds:
Virtual Private Cloud > Doc > Creating instances with multiple network interfaces > Troubleshooting > I am not able to connect to secondary interface using external IP:
The DHCP server programs a default route only on the primary network interface of the VM. If you want to connect to the secondary interface using an external IP, there are two options. If you only need to connect outside the network on the secondary network interface, you can set a default route on that network interface. Otherwise, you can use Configuring Policy Routing to configure a separate routing table using source-based policy routing in your VM.
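A minimal sketch of such source-based policy routing, assuming eth1 has the internal address 10.128.0.5 and the subnet gateway is 10.128.0.1 (both placeholders; substitute your subnet's values):
vm-instance:$ sudo ip route add default via 10.128.0.1 dev eth1 table 100
vm-instance:$ sudo ip rule add from 10.128.0.5/32 table 100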
I am trying to follow this tutorial. You do not have to read the whole tutorial; my immediate goal is to create a firewall rule on Google Compute Engine and connect to the instance using telnet.
I did create the firewall rule:
But when I type telnet X.X.X.X 5901, I get back
Connecting To X.X.X.X...Could not open connection to the host, on port 5901: Connect failed
I replaced actual ip with X.X.X.X in the above.
Any suggestions how I can troubleshoot it?
That should work!
I suspect vncserver isn't running (correctly) on the instance.
Or you're using the internal IP rather than the external IP address.
Did you confirm the server was running before you tried to access it remotely? The tutorial suggests:
nc localhost 5901
But, you could also try:
ss --tcp --listening | grep 5901
and you should see something similar to
LISTEN 0 5 *:5901
You need to tag the GCE instance with vnc-server in order for the rule to apply. Setting the source IP range to your home network is tighter than permitting 0.0.0.0/0. You can use Stackdriver to log whenever a firewall rule applies. The host firewall might also be preventing access (e.g., when Stackdriver logs the hit but it still doesn't work).
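A sketch of tagging the instance with gcloud (the instance name and zone are placeholders):
gcloud compute instances add-tags vnc-instance --tags vnc-server --zone us-central1-a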
I've set up my VM to use a network that only allows a whitelist of IP addresses on the SSH protocol on port 22.
If I try to SSH into my instance via the web browser within the developer console the connection is correctly refused, as it isn't originating from one of my permitted IP addresses.
I'm curious if there is a way to have my whitelist of IP addresses and still SSH into the VM via the browser. I know I can still connect using gcutil, and it would obviously work if I had the IP address.
Looking at the documentation, it isn't listed as a known issue.
When connecting from the Developer Console SSH tool, the instance receives the connection from a Google IP range; I made a test and it was from the 74.125.0.0/16 range. You could try to temporarily whitelist this range and see if you can get access.
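A sketch of such a temporary rule using gcloud (the rule name and network are placeholders; the source range is the one observed in my test):
gcloud compute firewall-rules create allow-console-ssh --network my-network --allow tcp:22 --source-ranges 74.125.0.0/16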
Regards
Paolo
On Google Compute Engine we have machines which do not have public IPs (because a quota limits the number of machines that can have public IP addresses). We need these non-public-IP machines to access data from Google Storage buckets which appears to mean that we have to route to the Internet. But we can't get to anything outside of our network from these non-public-IP machines. All packets drop.
We've found some documentation https://developers.google.com/compute/docs/networking#routing that describes setting up routing from machines that do not have public IP addresses to one that does.
We tried creating a machine "proxy" that has IP forwarding turned on and firewall rules that allow HTTP and HTTPS (I don't think this detail matters, but we did it). We created a network "nat" with a route that forwards 0.0.0.0/0 to "proxy". Our hope was that the machines without public IPs on the "nat" network would forward their packets to "proxy", and "proxy" would then act as a gateway to the Internet somehow, but this does not work.
I suspect that we have to do some kind of routing instruction on "proxy" that we aren't doing that tells proxy to forward to the Google Internet gateway, but I'm not sure what this should be. Perhaps a rule in iptables? Or some sort of NAT program?
You may be able to use iptables NAT to get it working. On the proxy instance (as root):
# Rewrite the source address of outbound packets to the proxy's own address
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
# Allow forwarding of packets arriving on the internal-facing interface
iptables --append FORWARD --in-interface eth1 -j ACCEPT
# Enable IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
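To make the forwarding setting survive a reboot, you can also persist it via sysctl (standard on most Linux distributions):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p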