Writing data to a Linux TUN interface

I have created a Linux TUN interface and set the IP address, broadcast address, etc. using the open/ioctl APIs.
This is what the TUN interface looks like:
TEST_TUN: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 45.45.45.1/24 scope global TEST_TUN
valid_lft forever preferred_lft forever
Any message written by a virtual host (bound to addr 45.45.45.1, UDP port 7070) is received on tun_fd (the fd returned during TUN device creation).
However, if a message (IP(dst=45.45.45.1) + transport(udp_dst=7070) + payload) is written to tun_fd, it is not received by the virtual host. A Wireshark capture shows that the packet is received on the kernel side, but the virtual host never receives it.
What could be the reasons for the kernel not forwarding the packet to the virtual host?
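For reference, the open/ioctl creation sequence referred to above looks roughly like this minimal Python sketch (the constants come from <linux/if_tun.h>, the interface name matches the output shown, and the program must run as root):

import fcntl, os, struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001    # layer-3 TUN device (raw IP packets)
IFF_NO_PI = 0x1000  # no extra packet-information header

tun_fd = os.open('/dev/net/tun', os.O_RDWR)
ifr = struct.pack('16sH', b'TEST_TUN', IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun_fd, TUNSETIFF, ifr)

# Reads yield raw IP packets that the kernel routes to the interface;
# writes inject raw IP packets into the kernel's network stack.
packet = os.read(tun_fd, 2048)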

Have you brought your TUN device into the up state? If not, you can simply do that via:
$ sudo ifconfig tun0 up
You can also do it via ioctl commands (see the sketch at the end of this answer).
If your device is already in the up state, then your Linux distro must have IP forwarding enabled. You can turn it on via:
# echo "1" > /proc/sys/net/ipv4/ip_forward
or
$sudo sysctl net.ipv4.ip_forward=1
$sudo sysctl -p
Then it comes to routing rules: you must set routing rules so that the traffic is sent via your virtual interface, e.g.:
$ sudo ip route add 0.0.0.0/1 dev tun0
$ sudo ip route add 128.0.0.0/1 dev tun0
Together these two /1 routes cover the whole IPv4 address space and, being more specific than the default route, take precedence over it, so all traffic will flow through your virtual interface.
If I understood you correctly, then maybe this will be helpful for you.
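For the ioctl route mentioned above, here is a minimal sketch, assuming the interface is named TEST_TUN as in the question (the ioctl numbers are the standard Linux values):

import fcntl, socket, struct

SIOCGIFFLAGS = 0x8913  # get interface flags
SIOCSIFFLAGS = 0x8914  # set interface flags
IFF_UP = 0x1

def bring_up(ifname):
    # Any AF_INET datagram socket works as a handle for interface ioctls.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        ifr = struct.pack('16sh', ifname.encode(), 0)
        flags = struct.unpack('16sh', fcntl.ioctl(s, SIOCGIFFLAGS, ifr))[1]
        fcntl.ioctl(s, SIOCSIFFLAGS, struct.pack('16sh', ifname.encode(), flags | IFF_UP))
    finally:
        s.close()

bring_up('TEST_TUN')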

Related

Ubuntu 20.04 NetworkManager ignoring eth0 configuration? Netplan configuration abandoned?

Until sometime last night, my Ubuntu 20.04 system was working fine with this configuration file at /etc/netplan/01-network-manager-all.yaml:
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.6/24
      gateway4: 192.168.1.254
      nameservers:
        addresses: [192.168.1.2]
This has been working great for a couple of years, but sometime during the evening, connection to 192.168.1.6 was lost from other servers (I know because I had ssh connections that were dropped during the night).
Upon investigation I found that the (normally headless) server had a new IP address (.92 rather than .6), and apparently this configuration file is no longer applicable.
I found that network-manager is in the /etc/init.d/ directory which seems to mean that, for whatever reason, the system is now ignoring that previous configuration. It's a mystery to me why this would suddenly change.
Anyway, I found how to configure NetworkManager for the result I want, and came up with this, which I placed into (new file) /etc/NetworkManager/conf.d/ethernet.conf:
[802-3-ethernet]
auto-negotiate=true
mac-address=b4:2e:99:a2:58:77
[connection]
id=Wired connection 1
uuid=06563f32-7cd9-3ee1-ac71-e5bb775a4840
type=802-3-ethernet
timestamp=0
[ipv6]
method=ignore
[ipv4]
method=manual
dns=192.168.1.2
address1=192.168.1.6/24,192.168.1.254
(I got the uuid value from 'nmcli conn show' and I got the mac addr from 'ip a show eth0')
dennis@velmicro:/etc/NetworkManager 01/10 10:01:12
> nmcli conn show
NAME UUID TYPE DEVICE
Wired connection 1 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ethernet eth0
ls2021.lovelady.com a4fa8d23-a06d-4955-bfd9-5d7de76584c2 wifi wlan0
dennis@velmicro:/etc/NetworkManager 01/10 10:01:30
> ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether b4:2e:99:a2:58:77 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.92/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
valid_lft 84428sec preferred_lft 84428sec
Here's what /etc/NetworkManager/NetworkManager.conf looks like:
[main]
plugins=ifupdown,keyfile
[ifupdown]
managed=false
[device]
wifi.scan-rand-mac-address=no
[keyfile]
unmanaged-devices=*,except:type:wifi,except:type:wwan,except:type:ethernet
Restarting NetworkManager, and even a complete reboot, produces no errors, and yet the configuration is apparently ignored: the 192.168.1.92 address persists.
What am I missing to set this system's static IP to the address I need?
Bonus points: How would I determine what caused the sudden (apparent) switch to NetworkManager from netplan?
The following commands, entered at the command line as user root, solved this problem:
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv4.address 192.168.1.6/24
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv4.gateway 192.168.1.254
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv4.dns "192.168.1.2"
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv4.method manual
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv4.dns-search "lovelady.com"
nmcli con modify 06563f32-7cd9-3ee1-ac71-e5bb775a4840 ipv6.method disabled
nmcli connection up 06563f32-7cd9-3ee1-ac71-e5bb775a4840
After executing those commands, the following configuration could be found in /etc/NetworkManager/system-connections/Wired\ connection\ 1.nmconnection (it was created for me):
[connection]
id=Wired connection 1
uuid=06563f32-7cd9-3ee1-ac71-e5bb775a4840
type=ethernet
autoconnect-priority=-999
interface-name=eth0
permissions=
timestamp=1673626973
[ethernet]
mac-address-blacklist=
[ipv4]
address1=192.168.1.6/24,192.168.1.254
dns=192.168.1.2;
dns-search=lovelady.com;
method=manual
[ipv6]
method=disabled
[proxy]
The placement of a device configuration into the /etc/NetworkManager/conf.d structure was a mistake. No device configurations should be placed into that directory.
The command journalctl -u NetworkManager.service (executable by any user) ultimately helped reveal what was going on, and why the configuration file I created did not have the desired effect.
All device configurations should go under /etc/NetworkManager/system-connections/ and ideally should be created via a sequence of nmcli commands as above.
This device is now (again) configured as I like. Note that the former configuration file (/etc/netplan/01-network-manager-all.yaml) is no longer used by the system, for an unknown reason. ls -lu /etc/netplan/01* reveals that it has not been read in several days, despite a series of reboots.
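As an optional quick check (the connection name is the one shown earlier), something like this should now print the static IPv4 configuration:
> nmcli -f ipv4 connection show "Wired connection 1"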

Is it possible to access -chardev without going via -serial in qemu?

I am using qemu-system-arm to execute a bare-metal Cortex-M3 binary, using a custom machine populated with emulations of memory-mapped devices.
To exchange data between the host and the M3 binary running in QEMU, I start QEMU with
-chardev udp,id=ch0,port=x,localport=y -serial chardev:ch0
Then in QEMU I bind a device to serial_hds[0]. Writing to the serial device then results in UDP packets being sent to the host.
My question is: do I have to make the connection through -serial? Can I in some way access the created chardevs without going via -serial?
I want to set up QEMU to listen on 10 UDP ports, but as I understand it, the -serial option is limited to a maximum of 4 devices.
QEMU's chardev abstraction has "front ends" and "back ends".
The "back end" is whatever you connect to on the host side (which might be a UDP port, stdin/stdout, a UNIX domain socket, etc). The -chardev option is what creates and configures this back end.
The "front end" is the part on the QEMU side of this. The most common use is a UART (serial port), but you can also use chardevs to specify how to talk to the QEMU monitor, or to a guest parallel port.
In this case your problem is "what are the N things that the guest sees", i.e. what are the front ends? There has to be something here, which means your board needs to actually create multiple UARTs or something similar. -serial has a limit of 4 (you can probably raise that with a local hack changing MAX_SERIAL_PORTS), but if your device model is written to take a QEMU chardev rather than to look directly at serial_hds[], it should be possible to configure it other than via -serial (either with -device ... or -global ... to set the chardev as the device property).
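As a sketch of what that could look like on the command line (the device name my-mmio-uart is hypothetical, and the device model would need to expose a chardev property for this to work):
qemu-system-arm -M mymachine \
  -chardev udp,id=ch0,port=7000,localport=7001 \
  -chardev udp,id=ch1,port=7002,localport=7003 \
  -device my-mmio-uart,chardev=ch0 \
  -device my-mmio-uart,chardev=ch1
Each -chardev creates a back end bound to its own UDP port pair, and each -device instance picks one of them up as its front-end connection, with no -serial involved.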

What is the "destination address" for a TAP/TUN device?

What is the purpose of the "destination address" for a TAP/TUN device?
Pytun lets you easily set parameters of a tap/tun device:
from pytun import TunTapDevice

tun = TunTapDevice(name='mytun')
tun.addr = '10.66.66.1'
tun.dstaddr = '10.66.66.2'
tun.netmask = '255.255.255.0'
tun.up()
Doing this will result in a device configured as such:
$ ifconfig mytun
mytun: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500
inet 10.66.66.1 netmask 255.255.255.0 destination 10.66.66.2
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I understand that the system now has a virtual interface with IP 10.66.66.1. And presumably, in this scenario, the TUN device would be "connected" to a device (e.g. a VPN gateway) whose IP address is 10.66.66.2.
But what purpose, specifically, does it serve for the kernel to know that this is a "point-to-point" interface, and the IP address of the destination? Does it impact routing in some way that simply configuring the route table would not achieve?
Setting the dstaddr property results in a SIOCSIFDSTADDR ioctl.
The netdevice(7) man page simply says:
SIOCGIFDSTADDR, SIOCSIFDSTADDR
Get or set the destination address of a point-to-point device
using ifr_dstaddr. For compatibility, only AF_INET addresses
are accepted or returned. Setting the destination address is
a privileged operation.
I don't care about all this, I want to configure my interface
You don't need to set a destination address. If you want to configure 10.66.66.1/24 on the interface, you can do:
tun = TunTapDevice(name='mytun')
tun.addr = '10.66.66.1'
tun.netmask = '255.255.255.0'
tun.up()
This interface only connects two hosts, so you don't actually need a whole /24. You can simply say that 10.66.66.1 is connected to 10.66.66.2 (10.66.66.1 peer 10.66.66.2):
tun = TunTapDevice(name='mytun')
tun.addr = '10.66.66.1'
tun.dstaddr = '10.66.66.2'
tun.netmask = '255.255.255.255'
tun.up()
In this setup, the two IP addresses do not need to be in the same range at all.
Alternatively, you could use a /31 (RFC 3021):
tun = TunTapDevice(name='mytun')
tun.addr = '10.66.66.2'
tun.dstaddr = '10.66.66.3'
tun.netmask = '255.255.255.254'
tun.up()
Notice how I had to change the IP addresses in order for them to be in the same /31.
What is a POINTOPOINT device?
POINTOPOINT means that there is no Layer 2 addressing (no MAC address) on this interface:
no ARP requests (IPv4);
no NDP requests (IPv6);
the neighbour table is useless for this interface (ip neighbour);
in routing table entries for this interface, the via directive is ignored;
packets on this interface are always sent to the same (only) next hop.
Examples of POINTOPOINT devices
PPP interfaces: There is no Layer 2 address for PPP, as this type of interface connects a single host to another host (hence the name "point-to-point protocol").
TUN interfaces: They are IP-only interfaces without a Layer 2.
POINTOPOINT means that this is a point-to-point interface (surprise!), i.e. there can be only one peer connected at the other side of the interface: you have one neighbor on this interface, and you do not need ARP/NDP to map an IP address to a link-layer address (you do not have a link-layer address at all).
In contrast, an Ethernet device is not a point-to-point interface, because multiple hosts can be directly reachable via this interface. When you send an IP packet on such a device, the network stack has to find a Layer 2 identifier (using ARP or NDP) for the intended IP address and send the message to that link-layer address.
Say this is your routing table (on Ethernet):
default via 192.0.2.1 dev eth0 proto static metric 100
192.0.2.0/24 dev eth0 proto kernel scope link src 192.0.2.2 metric 100
Multiple hosts can be directly connected to you via the eth0 interface. If you want to send a packet to 198.51.100.1, this route is selected:
default via 192.0.2.1 dev eth0 proto static metric 100
which means that among all your neighbors on the eth0 device, you have to send the packet to 192.0.2.1. In order to do that, your network stack has to find the MAC address of 192.0.2.1 by using ARP.
On a POINTOPOINT device there is always only one neighbor, so you don't need ARP; you only need to send the packet.
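For contrast, a routing table on a point-to-point link needs no via at all; a typical entry looks something like this (a sketch):
default dev ppp0 scope link
Every packet simply goes to the single peer at the other end of the link.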
TUN and PPP interfaces are POINTOPOINT devices. Ethernet, Ethernet TAP devices and Wifi interfaces are not POINTOPOINT.
What is the destination (peer) address?
Usually the IP configuration of an interface is in the form 192.0.2.1/24. This means that the IP address of this interface is 192.0.2.1 and that all IPs in the 192.0.2.0/24 subnet are directly reachable via this interface: this adds a routing rule 192.0.2.0/24 dev tun0.
The Linux kernel supports another type of configuration, where the local IP address and the peer address do not belong to the same IP subnet: 192.0.2.1 peer 198.51.100.1. This means that the IP address of this interface is 192.0.2.1 and that the IP address of the peer is 198.51.100.1: this adds a routing rule 198.51.100.1 dev tun0. A more general form can be used: 192.0.2.1 peer 198.51.100.1/24.
$ ip address show tun0
14: tun0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 192.0.2.1 peer 198.51.100.1/24 scope global tun0
valid_lft forever preferred_lft forever
The dstaddr parameter (and the SIOCSIFDSTADDR ioctl) can be used to set such a destination address.
This is useful if you don't want to allocate a common subnet for the two peers. You don't have to use a special destination address with a point-to-point interface: you could use a standard IP subnet, or you could allocate a /31. But using the destination address/peer configuration, you can avoid allocating a subnet for the point-to-point link.
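With iproute2, the equivalent peer configuration and its effect on the routing table would look something like this (a sketch; tun0 is assumed to already exist):
$ sudo ip address add 192.0.2.1 peer 198.51.100.1/32 dev tun0
$ ip route show dev tun0
198.51.100.1 proto kernel scope link src 192.0.2.1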
What is the relation between the peer/destination address and POINTOPOINT devices?
These are independent. You don't have to set a destination address on a POINTOPOINT interface; you can set one on a POINTOPOINT interface, and you can do it on a normal one as well.
However, using a peer/destination address is especially useful for POINTOPOINT interfaces.
If you add an interface with
inet 10.66.66.1 netmask 255.255.255.0
then no matter whether you create it as point-to-point or not, a new routing entry for 10.66.66.0/24 via the new interface will be added to the kernel routing table.
So I don't think there is a difference there.

Randomly can't connect to guest vm in libvirt

I cannot reliably trigger this, but if I spin up many VMs at a time and then attempt to connect to some of them, I run into this condition:
$ ping 192.168.122.135
PING 192.168.122.135 (192.168.122.135) 56(84) bytes of data.
From 192.168.122.1 icmp_seq=1 Destination Host Unreachable
From 192.168.122.1 icmp_seq=2 Destination Host Unreachable
From 192.168.122.1 icmp_seq=3 Destination Host Unreachable
Note that this does not happen for all VMs that I create and start, only a handful of them (randomly).
The VM that has obtained the IP 192.168.122.135 has the following network definition in its domain XML:
<interface type='network'>
  <mac address='52:54:00:3d:72:ab'/>
  <source network='default'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
And the default network is defined as follows (and yes, 22 VMs are currently running):
<network connections='22'>
  <name>default</name>
  <uuid>69674b8b-f067-4513-b594-3e52360f391b</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
The output from ifconfig for vnet0 (referenced by the VM's network domain xml) and virbr0 (used by the default network as shown above):
$ sudo ifconfig vnet0
vnet0 Link encap:Ethernet HWaddr fe:54:00:3d:72:ab
inet6 addr: fe80::fc54:ff:fe3d:72ab/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:425 errors:0 dropped:0 overruns:0 frame:0
TX packets:1304 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:57503 (57.5 KB) TX bytes:67257 (67.2 KB)
and
$ sudo ifconfig virbr0
virbr0 Link encap:Ethernet HWaddr fe:54:00:08:e9:a4
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:882508 errors:0 dropped:0 overruns:0 frame:0
TX packets:2527165 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:93980992 (93.9 MB) TX bytes:3047773583 (3.0 GB)
Below is the partial output from ip route list:
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
The route output above makes me think that it should be working. BUT IT'S NOT. It only fails sometimes, and works most of the time.
Why can't I connect to the guest (192.168.122.135) from the host?
I was originally using filters, but removing the filters from the VM's domain xml has no effect on this condition randomly showing up. If I spin up many VMs at the same time I can get it to happen to a lot of them. Some of the VMs work just fine though and allow me to connect.
Also, I am using Ubuntu 14.04.3:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
With kernel 3.19.0-30-generic.
More info - virsh version:
$ virsh --version
1.2.2
libvirtd version:
$ libvirtd --version
libvirtd (libvirt) 1.2.2
I don't have enough reputation to comment... But I have a few suggestions on things you could try to further explore the problem.
Question: does assigning an IP address in the 192.168.122.x subnet to vnet0 do anything? The configured route suggests that your traffic will go to virbr0, since it has the 192.168.122.1 IP address. If you can't ping any other devices in that subnet, then I suspect that's the issue.
If that doesn't get you anywhere...
Packet trace on host / VM
Try doing a packet dump on virbr0 and on the internal VM interface when this occurs. Ping the VM, and see what kind of traffic you see.
sudo tcpdump -n -i virbr0 -v "icmp or arp"
What you see there will help narrow down the source of the problem. If you're not even seeing your pings arrive on that interface, then it's a routing issue on the host. If pings are going in but the VM isn't seeing them, then it's a network/routing issue with the libvirt network.
I recommend also doing the above with a working VM, so you have a reference to compare the traffic against.
Check ARP Cache
Check your ARP cache on the host when this occurs. Does the mac address exist in the cache? Maybe it's getting mangled...
To dump the arp cache:
# arp
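(On newer systems, ip neigh show prints the same neighbour-cache information via iproute2.)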
Check your libvirt logs
If configured, libvirt will log to syslog using the 'libvirtd' tag. Check your configuration to be sure this is enabled. It seems unlikely it's a libvirt issue, but it wouldn't hurt to turn on the logging.
To enable this setting
# vi /etc/libvirt/libvirtd.conf
Add the line
log_outputs="1:syslog:libvirtd"
Restart libvirt
# service libvirt-bin restart
I had a similar issue. I tried the following command to check whether the machine was set up properly:
lsmod | grep kvm
If it shows the kvm modules, then the machine is set up properly.
After that, restart the service:
service libvirtd restart
Also check the gateway with the command below:
netstat -rn
I had the same network setup and a similar problem on a CentOS 7 host. It turned out that the guest VM's firewall was blocking echo requests and other external connections. After changing the firewall settings, the problem was solved.
In my case, I have a hardware server where libvirt is installed.
On this server I created a VM, installed libvirt in it as well, and after that I got random network interruptions and ping responses of:
From 192.168.122.1 icmp_seq=1 Destination Host Unreachable
I fixed this by deleting the default libvirt network on the hardware server, like this:
virsh net-destroy default
virsh net-undefine default

How to allow protocol-41 (6in4) through the GCE firewall?

As a stop-gap until Google supports native IPv6 on Google Compute Engine, I'd like to configure a 6in4 (IP protocol 41) tunnel.
I added a firewall rule to allow protocol 41 on my VM's network:
Name Source tag / IP range Allowed protocols / ports Target tags
allow-6in4 216.66.xxx.xxx 41 Apply to all targets
And configured the tunnel in /etc/network/interfaces:
auto 6in4
iface 6in4 inet6 v4tunnel
    address 2001:470:xxxx:xxxx::2
    netmask 64
    endpoint 216.66.xxx.xxx
    gateway 2001:470:xxxx:xxxx::1
    ttl 64
    up ip link set mtu 1280 dev $IFACE
Then I ran ping6 2001:470:xxxx:xxxx::1 and verified that 6in4 traffic was outbound:
$ sudo tcpdump -pni eth0 host 216.66.xxx.xxx
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
22:52:03.732841 IP 10.240.xxx.xxx > 216.66.xxx.xxx: IP6 2001:470:xxxx:xxxx::2 > 2001:470:xxxx:xxxx::1: ICMP6, echo request, seq 1, length 64
22:52:04.740726 IP 10.240.xxx.xxx > 216.66.xxx.xxx: IP6 2001:470:xxxx:xxxx::2 > 2001:470:xxxx:xxxx::1: ICMP6, echo request, seq 2, length 64
22:52:05.748690 IP 10.240.xxx.xxx > 216.66.xxx.xxx: IP6 2001:470:xxxx:xxxx::2 > 2001:470:xxxx:xxxx::1: ICMP6, echo request, seq 3, length 64
I changed the endpoint temporarily to an address where I can run tcpdump, and confirmed that packets are not arriving at the destination. I even tried NAT myself in case GCE wasn't doing this for 6in4 packets, but no luck (iptables -t nat -A POSTROUTING -p ipv6 -j SNAT --to-source 130.211.xxx.xxx).
Has anyone gotten a 6in4 tunnel to work on a GCE VM? Is there some magic setting I missed somewhere?
TL;DR: You can't.
Per Networking and Firewalls:
Traffic that uses a protocol other than TCP, UDP, and ICMP is blocked, unless explicitly allowed through Protocol Forwarding.
Per Protocol Forwarding:
Google Compute Engine supports protocol forwarding for the following
protocols:
AH: Specifies the IP Authentication Header protocol.
ESP: Specifies the IP Encapsulating Security Payload protocol.
SCTP: Specifies the Stream Control Transmission Protocol.
TCP: Specifies the Transmission Control Protocol.
UDP: Specifies the User Datagram Protocol.
Hence, a Protocol Forwarding rule needs to be for one of the following IP protocol numbers:
51 (AH)
50 (ESP)
132 (SCTP)
6 (TCP)
17 (UDP)
The Protocol Forwarding page makes it clear that other protocol numbers, such as 41 (6in4), are not supported:
Note: This is an exhaustive list of supported protocols. Only protocols that appear here are supported for protocol forwarding.