OpenVPN not creating default route on Fedora 21

On Fedora 21 I'm not getting a new default route set when I establish an OpenVPN connection. On Fedora 20 it was fine (with the exact same .ovpn configuration file).
Any ideas?

Add this to your .ovpn configuration file; it makes OpenVPN wait five seconds after the connection comes up before adding routes, which gives the tun interface time to be ready.
route-delay 5
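For context, here is a minimal sketch of where the directive sits in a client config (the remote hostname is a placeholder, and the surrounding directives are typical client boilerplate, not your exact file):
client
dev tun
proto udp
remote vpn.example.com 1194
redirect-gateway def1
route-delay 5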

I experienced this myself. In my .ovpn client config file, I have redirect-gateway def1, yet route -n still showed my default route was not changed.
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 1024 0 0 em1
10.1.0.0 172.22.22.57 255.255.0.0 UG 20 0 0 tun0
A traceroute verified this. So I found a workaround to manually create the route for now. Below, the first line shows how to construct the needed command, and the second one is an example command based on my example route output above.
sudo route add -net {Destination1} netmask {Genmask1} gw {Gateway2} dev {Iface2}
sudo route add -net 0.0.0.0 netmask 0.0.0.0 gw 172.22.22.57 dev tun0
When you're done with VPN, run the same thing again, except with add replaced by del.
sudo route del -net 0.0.0.0 netmask 0.0.0.0 gw 172.22.22.57 dev tun0
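On newer Fedora releases the route command comes from the deprecated net-tools package; a rough equivalent with iproute2, assuming the same gateway and device as above, would be:
sudo ip route add default via 172.22.22.57 dev tun0
sudo ip route del default via 172.22.22.57 dev tun0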
I don't know why OpenVPN's behavior changed. I did try going to Fedora's Settings > Network > Wired > gear > IPv4 > Routes and turning off Automatic, but that did not help.

QEMU hostfwd works only for some ports

I compiled qemu-system-x86_64 on an aarch64 host, and was able to run an x86_64 guest with a command like
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on -net nic,model=virtio-net-pci \
-net user,hostfwd=tcp::8080-:80,hostfwd=tcp::22222-:22
I could ssh into the guest using
ssh -p22222 user@localhost
Meanwhile, port 80 was not forwarded successfully.
For debugging, I used nc to listen on port 80 inside the guest:
nc -l 80
Then on the host, I connected to the forwarded port:
nc localhost 8080
However, it could not connect to the guest's nc.
I tried the monitor interface. When the host nc command is executed, info usernet shows the following:
(qemu) info usernet
Hub 0 (#net162):
Protocol[State] FD Source Address Port Dest. Address Port RecvQ SendQ
TCP[SYN_SENT] 33 127.0.0.1 8080 10.0.2.15 80 0 0
TCP[ESTABLISHED] 21 127.0.0.1 22222 10.0.2.15 22 0 0
TCP[HOST_FORWARD] 12 * 8080 10.0.2.15 80 0 0
TCP[HOST_FORWARD] 11 * 22222 10.0.2.15 22 0 0
...
I believe the SYN_SENT entry (FD 33) corresponded to the host nc command, and it matched the HOST_FORWARD line (FD 12). However, it never became ESTABLISHED. A few seconds later, nc died with "Connection reset by peer", and the FD 33 line disappeared.
If I nc localhost 22222, I can see the OpenSSH banner.
So it seems only port 22 is forwarded. Any idea about the cause or how to debug?
Neither host nor guest had any firewall/iptables rules configured, and SELinux is permissive.
Thanks
Edit:
As a temporary workaround, I configured a second nic and used port 22 of the new interface to forward my service. I also switched to the newer -nic option, but hostfwd still worked for port 22 only.
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on \
-nic user,model=virtio-net-pci,hostfwd=tcp::60022-:22 \
-nic user,model=virtio-net-pci,net=10.0.3.0/24,hostfwd=tcp::8080-10.0.3.15:22
To forward successfully, I also needed to:
Configure sshd to listen on port 22 of the first nic only.
Configure my service to listen on port 22 of the second nic.
Configure the second nic to use a different network. Otherwise, both nics were assigned the same IP (10.0.2.15); I may be better off hardcoding the IP for both nics.
The problem was actually the firewall. My VM (based on Oracle Linux 8.5 from the Oracle Linux VM Templates) had firewall rules in both iptables and nft. After disabling both, the port forwarding worked.
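For anyone hitting the same thing, a quick way to inspect and clear this inside the guest (assuming firewalld manages the rules, as it does by default on Oracle Linux 8):
# inspect the active rules
sudo iptables -L -n
sudo nft list ruleset
# stop and disable the firewall service
sudo systemctl disable --now firewalld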

How to prevent snmpd from listening on port 161?

I am trying to force snmpd to listen on port 1610 (instead of the default port 161).
When I turn on debugging, it looks like snmpd insists on listening on port 161, in addition to any other agent address I specify.
I am running net-snmp 5.7.2 on Ubuntu.
Here is my snmpd.conf:
agentaddress dtlsudp:localhost:1610
agentuser root
agentgroup root
Here's how I launch snmpd:
snmpd -f -r -DALL -c snmpd.conf
I can see that snmpd parses the config file and recognizes the desired port 1610, but it tries to listen on port 161 as well!
read_config:parser: Found a parser. Calling it: agentaddress / dtlsudp:localhost:1610
snmpd_ports: port spec: udp:127.0.0.1:161,udp:localhost:1610,dtlsudp:localhost:1610,udp:localhost:1610,dtlsudp:localhost:1610
netsnmp_ds_set_string: Setting APP:2 = "udp:127.0.0.1:161,udp:localhost:1610,dtlsudp:localhost:1610,udp:localhost:1610,dtlsudp:localhost:1610"
snmp_agent: final port spec: "udp:127.0.0.1:161,udp:localhost:1610,dtlsudp:localhost:1610,udp:localhost:1610,dtlsudp:localhost:1610"
How can I prevent snmpd from listening on port 161?
Any help is appreciated.
I discovered that snmpd always reads /etc/snmp/snmpd.conf (and the other default locations) unless you explicitly disable that with the -C option.
The following command worked; it reads only my local config file:
snmpd -f -DALL -C -c snmpd.conf
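To verify, you can list the UDP sockets snmpd has bound (assuming iproute2's ss is available; netstat -lunp is an alternative):
sudo ss -lunp | grep snmpd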

Can't connect to port 80 on Google Cloud Compute instance despite firewall rule

In summary, although I've set a firewall rule that allows tcp:80, my GCE instance, which is on the "default" network, is not accepting connections to port 80. It appears only port 22 is open on my instance. I can ping it, but can't traceroute to it in under 64 hops.
What follows is my investigation that led me to those conclusions.
gcloud beta compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-http default INGRESS 1000 tcp:80
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
temp default INGRESS 1000 tcp:8888
gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
ssrf3 us-west1-c f1-micro true 10.138.0.4 35.197.33.182 RUNNING
gcloud compute instances describe ssrf3
...
name: ssrf3
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: 35.197.33.182
    type: ONE_TO_ONE_NAT
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/hack-170416/global/networks/default
  networkIP: 10.138.0.4
  subnetwork: https://www.googleapis.com/compute/v1/projects/hack-170416/regions/us-west1/subnetworks/default
...
tags:
  fingerprint: 6smc4R4d39I=
  items:
  - http-server
  - https-server
I ssh into 35.197.33.182 (which is the ssrf3 instance) and run:
sudo nc -l -vv -p 80
On my local machine, I run:
nc 35.197.33.182 80 -vv
hey
but nothing happens.
So I try to ping the host. That looks healthy:
ping 35.197.33.182
PING 35.197.33.182 (35.197.33.182): 56 data bytes
64 bytes from 35.197.33.182: icmp_seq=0 ttl=57 time=69.172 ms
64 bytes from 35.197.33.182: icmp_seq=1 ttl=57 time=21.509 ms
Traceroute quits after 64 hops, without reaching the 35.197.33.182 destination.
So I check which ports are open with nmap:
nmap 35.197.33.182
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.06 seconds
nmap 35.197.33.182 -Pn
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Nmap scan report for 182.33.197.35.bc.googleusercontent.com (35.197.33.182)
Host is up (0.022s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 6.84 seconds
… even when I’m running nc -l -p 80 on 35.197.33.182.
Ensure that the VM-level firewall is not intervening. For example, Container-Optimized OS is a bit special compared to the other default images:
By default, the Container-Optimized OS host firewall allows only outgoing connections, and accepts incoming connections only through the SSH service. To accept incoming connections on a Container-Optimized OS instance, you must open the ports your services are listening on.
https://cloud.google.com/container-optimized-os/docs/how-to/firewall
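On Container-Optimized OS, opening the port on the host firewall is a single iptables rule along these lines (a sketch, run on the instance itself):
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT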
Checking the two checkboxes "Allow HTTP traffic" and "Allow HTTPS traffic" did the trick. This created two firewall rules that opened ports 80 and 443.
Manually adding rules for those ports didn't work for some reason, but checking the boxes did.
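If I read the console behaviour correctly, those checkboxes just attach the http-server and https-server network tags (the same tags visible in the instance description above), which the default-allow-http and default-allow-https rules target, so the gcloud equivalent should be something like:
gcloud compute instances add-tags ssrf3 --tags http-server,https-server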
At a quick glance, your setup seems to be correct.
You have allowed INGRESS tcp:80 for all instances in the default network.
Your VM is on the default network.
Traceroute will not give a good indication when you have VMs running on cloud providers, because of SDNs, virtual networks, and a whole bunch of intermediate networking infrastructure, unfortunately.
One thing I notice is that your instance has 2 tags, http-server and https-server. These could possibly be referenced by other firewall rules that are somehow blocking traffic to your VM's tcp:80 port.
There are other variables in your setup and I'm happy to debug further if needed.
Tag-based firewall rules
You can try tag-based firewall rules, which apply the firewall rule only to instances that have the specified target tag.
Network tags are used by networks to identify which instances are subject to certain firewall rules and network routes. For example, if you have several VM instances that are serving a large website, tag these instances with a shared word or term and then use that tag to apply a firewall rule that allows HTTP access to those instances. Tags are also reflected in the metadata server, so you can use them for applications running on your instances. When you create a firewall rule, you can provide either sourceRanges or sourceTags but not both.
# Add a new tag based firewall rule to allow ingress tcp:80
gcloud compute firewall-rules create rule-allow-tcp-80 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-80 --allow tcp:80
# Add the allow-tcp-80 target tag to the VM ssrf3
gcloud compute instances add-tags ssrf3 --tags allow-tcp-80
It might take anywhere from a few seconds to a couple of minutes for the changes to take effect.
NOTE: Since you're opening up ports on VMs' external IPs to the internet, take care to restrict access according to the needs of the application running on these ports.
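For example, to later tighten the rule from the sketch above to a single source CIDR (203.0.113.0/24 is a placeholder):
gcloud compute firewall-rules update rule-allow-tcp-80 --source-ranges 203.0.113.0/24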
After lots of trial and error, the following worked for me on ubuntu-1404-trusty-v20190514, with a Node.js app listening on port 8080: accept ports 80 and 8080, then redirect 80 to 8080.
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
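Note that plain iptables rules are lost on reboot; one way to persist them on Ubuntu 14.04 (assuming the iptables-persistent package is available in your archive) is:
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'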
In case you are running a Windows Server instance, you could try turning off Windows Defender and checking whether it's blocking the incoming connection.

zabbix_sender: send value error: ZBX_TCP_READ() timed out

I have some problems sending my test data to the Zabbix server.
My configuration is set up, and I even have the Zabbix agent installed and working correctly (it sends monitoring data for CPU, memory, ...).
This is the situation: I have Zabbix installed on a Debian VM, and I configured a host with the correct IP, port, and item (Zabbix trapper).
I want to send a value from my Windows 10 PC using "zabbix_sender", just for testing; later I want to find a way to read data from a .txt file for monitoring.
The command used from my cmd:
zabbix_sender -vv -z XXX.XXX.X.X -p XXXX -s "IT-CONS-PC4" -k trap -o "test"
Error:
zabbix_sender [8688]: DEBUG: send value error: ZBX_TCP_READ() timed out
Has someone else had this issue?
This fails at the network level.
check that the local firewall on the Zabbix server allows incoming connections on the server port (10051 by default)
check that the VM network connectivity is correct
As a simple test, you can telnet from the box with zabbix_sender to the Zabbix server on port 10051. If that fails, you have a basic network issue.
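For example, from the Windows box running zabbix_sender (the hostname is a placeholder for your Zabbix server):
telnet zabbix.example.com 10051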
After many an hour, this is what fixed it.
Active agent checks were failing with the following error or similar:
active check configuration update from [zabbix.verticalcomputers.com:10051] started to fail (ZBX_TCP_READ() timed out)
For whatever reason, active agents won't be able to connect properly (active checks won't work, only passive) if the server is behind a firewall/NAT'd and you don't have the following in your ifcfg-eth0 (or whatever NIC) file. It will work if you bypass the firewall and put a public IP right on the Zabbix server.
NM_CONTROLLED=no
BOOTPROTO=static
If you use the CentOS 7 installer wizard or nmtui to configure your NIC, instead of doing it manually, those lines don't get added.
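For reference, a minimal static ifcfg-eth0 along those lines (the address matches the ip output below; the gateway value is an assumption):
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
IPADDR=10.32.2.25
PREFIX=24
GATEWAY=10.32.2.1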
I noticed this because when running "ip add", I'd get the following:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
Notice the "noprefixroute". That was unsettling, so I dug around online for a long time with no leads. After adding the two lines mentioned above to the NIC config and restarting the network, it now looks like this:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global eth0
valid_lft forever preferred_lft forever
Late, but I hope this helps you.
Check the last sections of the scripts docs.
Be sure that your Zabbix server and agent can reach each other (problems here are normally caused by firewalls):
1.1 port 10050 for passive checks (server connects to agent)
1.2 port 10051 for active checks (agent connects to server)
# you can do it with telnet from your zabbix server
> telnet <agent ip> <10050 or 10051>
Trying <agent ip>...
Connected to <agent ip>.
Escape character is '^]'.
You can modify your server/agent config file to increase the Timeout directive. By default it is 3, and you can set it up to 30 seconds. If you do this, be sure to modify it on both server and agent.
2.1 Don't forget to restart the services: service zabbix-agent restart and service zabbix-server restart
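The directive is spelled the same in both files (paths here assume a default package install):
# /etc/zabbix/zabbix_server.conf and /etc/zabbix/zabbix_agentd.conf
Timeout=30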

tcpdump doesn't capture properly on a specific port

I'm on a network and I want to capture FTP packets from another server on the network, but I have a problem with tcpdump.
I've used this command:
tcpdump -i eth0 dst X.X.X.X -A and port 21
But it doesn't show anything! (I tested and I'm sure the FTP port is 21.)
But if I use this on my own server, it works properly:
tcpdump -i eth0 -A and port 21
I have this problem whenever I put "port" in the command, but if I enter a command without a specific port, it works and captures properly.
What is the problem?
Thanks.
I don't have enough reputation to ask a question, so this is part question and part insight.
Is the IP you're filtering on the client or the server for the FTP connection?
For the first command, try using src x.x.x.x or just host x.x.x.x and port 21.
For the second command, the "and" is not necessary with the -A flag. It should look more like one of these:
tcpdump -A -i eth0 port 21
tcpdump -Ai eth0 port 21
Another thing I've seen is if there are vlan tags, normal filtering won't work without adding "vlan and " to your filter. For example:
tcpdump -A -i eth0 "vlan and host x.x.x.x and port 21"
Also keep in mind that FTP uses a control and data connection. The control is over port 21, but the data can vary depending on whether you're using active or passive FTP.
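Since the FTP data connection can land on an unpredictable port (especially with passive FTP), one approach is to capture all traffic to and from the host into a file and inspect it afterwards; a sketch, using the interface and address from the question:
tcpdump -i eth0 -w ftp.pcap host x.x.x.x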