How to assign multiple outgoing IP addresses to a single instance on GCE? - google-compute-engine

How does one assign multiple ephemeral external IP addresses to the same machine on Google Compute Engine? The web interface only covers the primary IP address; I see no way to add more.
I found a related question over at https://stackoverflow.com/a/39963576/14731 but it focuses on routing multiple incoming IPs to the same instance.
My application is a web client that needs to make multiple outgoing connections from multiple source IPs.

Yes, it's possible, with the following steps:
Create as many VPCs (networks) as you need interfaces
Create a subnet inside each VPC and make sure they do not overlap
Add a firewall rule in the first VPC to allow SSH from your location
Create an instance with multiple interfaces (one in each VPC) and assign an external address to each one
SSH to your instance via the address located in the first VPC
Configure a separate routing table for each network interface
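The steps above can be sketched with gcloud. This is a minimal sketch, not the original setup: the names (vpc-a, subnet-a, ...), the region/zone, and the CIDR ranges are all placeholder assumptions.

```shell
# 1-2. Two VPCs with non-overlapping custom subnets (assumed names/ranges)
gcloud compute networks create vpc-a --subnet-mode=custom
gcloud compute networks create vpc-b --subnet-mode=custom
gcloud compute networks subnets create subnet-a --network=vpc-a \
    --range=192.168.0.0/24 --region=us-central1
gcloud compute networks subnets create subnet-b --network=vpc-b \
    --range=192.168.1.0/24 --region=us-central1
# 3. Allow SSH into the first VPC (restrict --source-ranges to your own IP)
gcloud compute firewall-rules create vpc-a-allow-ssh --network=vpc-a \
    --allow=tcp:22 --source-ranges=0.0.0.0/0
# 4. One instance with a NIC in each VPC; each NIC gets an ephemeral
#    external IP by default unless you pass no-address.
gcloud compute instances create test-multiple-ip --zone=us-central1-a \
    --network-interface=subnet=subnet-a \
    --network-interface=subnet=subnet-b
```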
Things you have to know:
You can add interfaces only at instance creation
I got an error while configuring the routing table (RTNETLINK answers: File exists), but it worked anyway
Routing tables for secondary interfaces are not persisted; you have to manage that yourself
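The last step (a separate routing table per interface) can be sketched with Linux policy routing. The gateway address and table number below are assumptions based on the subnets shown in the output that follows (GCE hands out /32 addresses and uses the subnet's .1 as gateway), not commands from the original answer:

```shell
# Assumed: eth1 has 192.168.1.2/32, gateway 192.168.1.1, table 100 unused.
# The address is a /32, so add an on-link route to the gateway first:
ip route add 192.168.1.1 dev eth1 scope link table 100
ip route add default via 192.168.1.1 dev eth1 table 100
# Send anything sourced from eth1's address through table 100:
ip rule add from 192.168.1.2/32 table 100
```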
Results
yann@test-multiple-ip:~$ ip a
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/32 brd 192.168.0.2 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::4001:c0ff:fea8:2/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/32 brd 192.168.1.2 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::4001:c0ff:fea8:102/64 scope link
valid_lft forever preferred_lft forever
yann@test-multiple-ip:~$ curl --interface eth0 ifconfig.co
35.241.195.172
yann@test-multiple-ip:~$ curl --interface eth1 ifconfig.co
35.241.253.41

Related

Two interfaces inside a pod, midhaul_ker@midhaul_edk and midhaul_edk@midhaul_ker: is this a kind of veth pair loop?

Below is part of the output of "ip a" in a pod with SR-IOV and DPDK. I have trouble understanding the midhaul_ker and midhaul_edk connection. Are they a veth pair connected to each other, and if so, what would the traffic flow be?
4: midhaul_edk@midhaul_ker: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f2:2b:96:c5:20:83 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f02b:96ff:fec5:2083/64 scope link
valid_lft forever preferred_lft forever
5: midhaul_ker@midhaul_edk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ae:70:3b:ac:7d:66 brd ff:ff:ff:ff:ff:ff
inet 10.97.62.18/27 brd 10.97.62.31 scope global midhaul_ker
valid_lft forever preferred_lft forever
inet6 fe80::40c8:23ff:fe1e:13c2/64 scope link
valid_lft forever preferred_lft forever
6: backhaul_edk@backhaul_ker: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:50:df:52:1e:b2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9050:dfff:fe52:1eb2/64 scope link
valid_lft forever preferred_lft forever
7: backhaul_ker@backhaul_edk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ba:1c:4c:1b:18:88 brd ff:ff:ff:ff:ff:ff
inet 10.97.62.113/27 brd 10.97.62.127 scope global backhaul_ker
valid_lft forever preferred_lft forever
inet6 fe80::ecce:33ff:fec7:fa05/64 scope link
valid_lft forever preferred_lft forever
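Whether two interfaces really are a veth pair can be checked from inside the pod. A generic sketch (not from the original output); the interface name is taken from the listing above:

```shell
# Show detailed link info; for a veth, "link/ether ... veth" appears and
# the "@ifN" suffix in the name points at the peer's interface index.
ip -d link show midhaul_ker
# The veth driver exposes the peer's ifindex via ethtool statistics:
ethtool -S midhaul_ker | grep peer_ifindex
```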

Docker: Accessing a mysql container from another container on the same host (No route to host error)

I have a virtual machine with this IP: 10.23.23.23
On this VM, Docker is running and 2 containers are created:
Container1 (Apache running): this container exposes port 13080 and binds it to port 80 of Apache inside the container.
Container2 (mysql): this container exposes port 5555 and binds it to port 3306 of mysql inside the container.
From Container1, I am trying to access Container2, but I get the following error: SQLSTATE[HY000] [2002] No route to host
Notes:
The following command on VM host:
ip addr show docker0
returns:
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:cf:7e:ea:b7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::41:cfff:fe7e:eab7/64 scope link
valid_lft forever preferred_lft forever
What can I do to reach the second container (mysql) from the first one?
This looks like an IP:port exposure issue. If you want to address containers via the host IP, make sure mysql is listening on all interfaces (i.e. 0.0.0.0) inside the container, then try again using the host ports.
These issues normally occur when the service in the container is bound only to localhost/127.0.0.1.
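Two things worth trying, sketched below with assumed container names (container1, container2): first check what address mysqld is actually bound to, and as an alternative, put both containers on a user-defined bridge network so they can reach each other by name on port 3306, without going through the host IP at all.

```shell
# Inspect listening sockets inside the mysql container
# (0.0.0.0:3306 means it accepts connections on all interfaces):
docker exec container2 sh -c 'ss -ltn || netstat -ltn'
# Alternative: user-defined network with built-in DNS; container1 can
# then connect to host "container2" port 3306 directly.
docker network create appnet
docker network connect appnet container1
docker network connect appnet container2
```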

Can't connect to port 80 on Google Cloud Compute instance despite firewall rule

In summary, although I've set a firewall rule that allows tcp:80, my GCE instance, which is on the "default" network, is not accepting connections to port 80. It appears only port 22 is open on my instance. I can ping it, but can't traceroute to it in under 64 hops.
What follows is my investigation that led me to those conclusions.
gcloud beta compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-http default INGRESS 1000 tcp:80
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
temp default INGRESS 1000 tcp:8888
gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
ssrf3 us-west1-c f1-micro true 10.138.0.4 35.197.33.182 RUNNING
gcloud compute instances describe ssrf3
...
name: ssrf3
networkInterfaces:
- accessConfigs:
- kind: compute#accessConfig
name: external-nat
natIP: 35.197.33.182
type: ONE_TO_ONE_NAT
kind: compute#networkInterface
name: nic0
network: https://www.googleapis.com/compute/v1/projects/hack-170416/global/networks/default
networkIP: 10.138.0.4
subnetwork: https://www.googleapis.com/compute/v1/projects/hack-170416/regions/us-west1/subnetworks/default
...
tags:
fingerprint: 6smc4R4d39I=
items:
- http-server
- https-server
I ssh into 35.197.33.182 (which is the ssrf3 instance) and run:
sudo nc -l -vv -p 80
On my local machine, I run:
nc 35.197.33.182 80 -vv
hey
but nothing happens.
So I try to ping the host. That looks healthy:
ping 35.197.33.182
PING 35.197.33.182 (35.197.33.182): 56 data bytes
64 bytes from 35.197.33.182: icmp_seq=0 ttl=57 time=69.172 ms
64 bytes from 35.197.33.182: icmp_seq=1 ttl=57 time=21.509 ms
Traceroute quits after 64 hops, without reaching the 35.197.33.182 destination.
So I check which ports are open with nmap:
nmap 35.197.33.182
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.06 seconds
nmap 35.197.33.182 -Pn
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Nmap scan report for 182.33.197.35.bc.googleusercontent.com (35.197.33.182)
Host is up (0.022s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 6.84 seconds
… even when I’m running nc -l -p 80 on 35.197.33.182.
Ensure that VM level firewall is not intervening. For example, Container-Optimized OS is a bit special in comparison to all other default images:
By default, the Container-Optimized OS host firewall allows only outgoing connections, and accepts incoming connections only through the SSH service. To accept incoming connections on a Container-Optimized OS instance, you must open the ports your services are listening on.
https://cloud.google.com/container-optimized-os/docs/how-to/firewall
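On Container-Optimized OS that means adding a host-level iptables rule in addition to the VPC firewall rule. A minimal sketch, not part of the original answer:

```shell
# Accept incoming TCP connections on port 80 in the host firewall.
# This does not survive a reboot; on COS you would typically run it
# from a startup script or cloud-init.
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```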
Checking the two checkboxes "Allow HTTP traffic" and "Allow HTTPS traffic" did the trick. This created two firewall rules that opened ports 80 and 443.
Manually adding rules for those ports didn't work for some reason, but it worked after checking the boxes.
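For reference, the rules those checkboxes create can also be expressed with gcloud. The rule names and target tags below match GCE's usual defaults (they also appear in the firewall-rules listing above), but verify against your own project:

```shell
# Equivalent of the "Allow HTTP/HTTPS traffic" checkboxes: tag-scoped
# ingress rules on the default network.
gcloud compute firewall-rules create default-allow-http \
    --network=default --allow=tcp:80 \
    --source-ranges=0.0.0.0/0 --target-tags=http-server
gcloud compute firewall-rules create default-allow-https \
    --network=default --allow=tcp:443 \
    --source-ranges=0.0.0.0/0 --target-tags=https-server
```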
On a quick glance, your setup seems to be correct.
You have allowed INGRESS tcp:80 for all instances in the default network.
Your VM is on the default network.
Traceroute will not give a good indication for VMs running on cloud providers, unfortunately, because of SDNs, virtual networks, and a whole bunch of intermediate networking infrastructure.
One thing I notice is that your instance has two tags, http-server and https-server. These could possibly be used by other firewall rules that somehow block traffic to your VM's tcp:80 port.
There are other variables in your setup, and I'm happy to debug further if needed.
Tag based firewall rules
You can try tag based firewall rules which will apply the firewall rule only to instances which have the specified target tag.
Network tags are used by networks to identify which instances are
subject to certain firewall rules and network routes. For example, if
you have several VM instances that are serving a large website, tag
these instances with a shared word or term and then use that tag to
apply a firewall rule that allows HTTP access to those instances. Tags
are also reflected in the metadata server, so you can use them for
applications running on your instances. When you create a firewall
rule, you can provide either sourceRanges or sourceTags but not both.
# Add a new tag based firewall rule to allow ingress tcp:80
gcloud compute firewall-rules create rule-allow-tcp-80 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-80 --allow tcp:80
# Add the allow-tcp-80 target tag to the VM ssrf3
gcloud compute instances add-tags ssrf3 --tags allow-tcp-80
It might take a few seconds to a couple of minutes for the changes to take effect.
NOTE: Since you're opening up ports of VM's external IPs to the internet, take care to restrict access accordingly as per the needs of your application running on these ports.
After lots of trial and error, the following worked for me on ubuntu-1404-trusty-v20190514, with a Node.js app listening on port 8080. Accept ports 80 and 8080, then redirect 80 to 8080.
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
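Note that iptables rules like these do not survive a reboot on their own. One common way to persist them on Ubuntu (an assumption on my part, not part of the original answer) is:

```shell
# Install the persistence helper and save the current ruleset so it is
# restored at boot.
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save
```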
In case you are on a Windows Server instance, you could try turning off Windows Defender and check whether it's blocking the incoming connection.

zabbix_sender: send value error: ZBX_TCP_READ() timed out

I have some problems sending my test data to Zabbix Server.
My configuration is set up, and I even have the Zabbix agent installed and working correctly (it sends CPU, memory, etc. monitoring data).
This is the situation: Zabbix is installed on a Debian VM, and I configured a host with the correct IP, port, and item (Zabbix trapper).
I want to send a value from my Windows 10 PC using "zabbix_sender", just for testing; later I want to find a way to read data from a .txt file for monitoring.
Used command from my cmd:
zabbix_sender -vv -z XXX.XXX.X.X -p XXXX -s "IT-CONS-PC4" -k trap -o "test"
Error:
zabbix_sender [8688]: DEBUG: send value error: ZBX_TCP_READ() timed out
Has anyone else had this issue?
This fails at the network level:
check that the local firewall on the Zabbix server allows incoming connections on the server port (10051 by default)
check that the VM network connectivity is correct
As a simple test, you can telnet from the box running zabbix_sender to the Zabbix server on port 10051. If that fails, you have a basic network issue.
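Those checks can be sketched like this; the server address is a placeholder:

```shell
# On the Zabbix server: is the trapper port listening, and does the
# local firewall accept it?
ss -ltn | grep 10051
sudo iptables -L INPUT -n | grep 10051
# From the box running zabbix_sender: basic reachability test
telnet ZABBIX_SERVER_IP 10051
```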
After many an hour, this is what fixed it.
Active agent checks were failing with the following error or similar:
active check configuration update from [zabbix.verticalcomputers.com:10051] started to fail (ZBX_TCP_READ() timed out)
For whatever reason, active agents won't be able to connect properly (active checks won't work, only passive) if the server is behind a firewall and NAT'd and you don't have the following in your ifcfg-eth0 (or whatever NIC) file. It does work if you bypass the firewall and put a public IP directly on the Zabbix server.
NM_CONTROLLED=no
BOOTPROTO=static
If you use the CentOS 7 wizard, or nmtui, to configure your NIC instead of doing it manually, those lines don't get added.
I noticed this because when running "ip add", I'd get the following:
2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
Notice the "noprefixroute". That was unsettling, so I dug around online for a long time with no leads. After adding the two lines to the NIC config mentioned above and restarting the network, it now looks like this:
2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global eth0
valid_lft forever preferred_lft forever
Late, but I hope this helps.
Check the last sections of the scripts docs.
Make sure your Zabbix server has connectivity with your Zabbix agent (problems here are normally caused by firewalls):
1.1 port 10050 for passive checks
1.2 port 10051 for active checks
# you can do it with telnet from your zabbix server
> telnet <agent ip> <10050 or 10051>
Trying <agent ip>...
Connected to <agent ip>.
Escape character is '^]'.
You can modify your server/agent config files to increase the Timeout directive. The default is 3, and you can set it up to 30 seconds. If you do this, be sure to change it on both the server and the agent.
2.1 Don't forget to restart the services: service zabbix-agent restart and service zabbix-server restart
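A sketch of that change, assuming the default config paths of Debian-style packages (adjust paths to your install):

```shell
# Raise Timeout from the default 3 to 30 seconds on both sides;
# the sed pattern assumes the stock commented-out "# Timeout=" line.
sudo sed -i 's/^# *Timeout=.*/Timeout=30/' /etc/zabbix/zabbix_server.conf
sudo sed -i 's/^# *Timeout=.*/Timeout=30/' /etc/zabbix/zabbix_agentd.conf
sudo service zabbix-server restart
sudo service zabbix-agent restart
```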

Running Multiple Websites on The same Server with Multiple IPv4

From what I understand, since I have a server with 29 usable IPs, I should be able to run multiple websites on the same machine, each on a different IP.
My only question is how to do this. Everything I've found is about people trying to do it with only one IP, but I have a server running Apache2 and 29 IPs.
The first thing you need to do is check whether the IP addresses are set up on the NIC. Type ifconfig at the command line over SSH and look for interfaces with your IP addresses; if only one public address is present, do the following.
On Debian, the interface configuration lives in the file /etc/network/interfaces (a file, not a directory).
Look for eth0, and check whether you also have alias interfaces eth0:0, eth0:1, eth0:2. If you do not have them, do the following.
Let's assume that we want to create five additional virtual interfaces to bind five extra IP addresses (172.16.16.126 through 172.16.16.130) to the NIC.
Open the interfaces file in an editor:
Type vi /etc/network/interfaces
The following /etc/network/interfaces example assigns three IP addresses to eth0; extend the same pattern for the remaining aliases.
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.1.42
netmask 255.255.255.0
gateway 192.168.1.1
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 192.168.1.43
netmask 255.255.255.0
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.1.44
netmask 255.255.255.0
An alias interface should not have "gateway" or "dns-nameservers"; dynamic IP assignment is permissible.
The above configuration is the traditional method, reflecting the traditional use of ifconfig to configure network devices. ifconfig introduced the concept of aliased or virtual interfaces; these have names of the form interface:integer, and ifconfig treats them very similarly to real interfaces.
Save it and restart the networking service.
Type service networking restart (on Debian; on RHEL-style systems the command is service network restart)
Verify that virtual interfaces are created with IP Address.
Type ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.125 Bcast:172.16.16.100 Mask:255.255.255.224
inet6 addr: fe80::20c:29ff:fe28:fd4c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1385 errors:0 dropped:0 overruns:0 frame:0
TX packets:1249 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:127317 (124.3 KiB) TX bytes:200787 (196.0 KiB)
Interrupt:18 Base address:0x2000
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.126 Bcast:172.16.16.100 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:18 Base address:0x2000
eth0:1 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.127 Bcast:172.16.16.100 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:18 Base address:0x2000
eth0:2 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.128 Bcast:172.16.16.100 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:18 Base address:0x2000
eth0:3 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.129 Bcast:172.16.16.100 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:18 Base address:0x2000
eth0:4 Link encap:Ethernet HWaddr 00:0C:29:28:FD:4C
inet addr:172.16.16.130 Bcast:172.16.16.100 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:18 Base address:0x2000
For HTTP:
To set up name-based virtual hosting, you need to tell Apache which IP address it will use to receive requests for all the websites or domain names. You do this with the NameVirtualHost directive. Open the main Apache configuration file with the vi editor.
Type vi /etc/httpd/conf/httpd.conf
Search for NameVirtualHost and uncomment this line by removing the # sign in front of it.
NameVirtualHost
Next, add the IP (and optionally the port) on which you want Apache to receive requests. After the change, the line should look like this:
NameVirtualHost 192.168.0.100:80
Now it's time to set up virtual host sections for your domains. Move to the bottom of the file by pressing Shift + G. In this example, we set up virtual host sections for two domains:
www.example1.com
www.example2.com
Add virtual host directives like the following at the bottom of the file for each site and IP address; below are two samples.
<VirtualHost 192.168.0.100:80>
ServerAdmin webmaster@example1.com
DocumentRoot /var/www/html/example1.com
ServerName www.example1.com
ErrorLog logs/www.example1.com-error_log
CustomLog logs/www.example1.com-access_log common
</VirtualHost>
<VirtualHost *:80>
ServerAdmin webmaster@example2.com
DocumentRoot /var/www/html/example2.com
ServerName www.example2.com
ErrorLog logs/www.example2.com-error_log
CustomLog logs/www.example2.com-access_log common
</VirtualHost>
You are free to add as many directives as you want inside each domain's virtual host section. When you are done with the changes in httpd.conf, check the syntax of the file with the following command.
Type httpd -t
Syntax OK
It is recommended to check the syntax after making changes and before restarting the web server: if there is a syntax error, Apache will refuse to start and your existing sites will go down for a while. If the syntax is OK, restart your web server and add it to chkconfig so it starts in runlevels 3 and 5 at boot.
Type service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
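Enabling httpd at boot in runlevels 3 and 5, as mentioned above, can be sketched with chkconfig on RHEL/CentOS-style systems:

```shell
# Start httpd automatically in runlevels 3 and 5
sudo chkconfig --level 35 httpd on
# Verify the runlevel settings
chkconfig --list httpd
```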