tcpdump whitelist filter for UPnP

How do I capture all UPnP traffic with tcpdump? I'd like to take a "whitelist" approach and collect only UPnP traffic, nothing else.
So I started by writing this filter:
tcpdump -i eth0 -nevvv -s 0 '(udp port 1900) or (tcp port 2869)'
I used the following info from Wikipedia:
UPnP uses UDP port 1900 and TCP port 2869.
How can I refine the filter further?

It's not quite as simple as that. SSDP (the discovery protocol) uses port 1900 (and apparently, in some cases, 2869), but the actual UPnP service can run on any port: SSDP is just a way to discover that port and other details about the service.
See UPnP Device Architecture spec (pdf) for more details.
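To see this in practice, here is a minimal Python sketch (purely illustrative; the timeout and search target are arbitrary choices) that sends an SSDP M-SEARCH to the standard multicast address 239.255.255.250:1900 and prints the LOCATION headers from the replies. The port in those URLs is where the actual UPnP service is served, and it is usually not 1900:

import socket

# Standard SSDP multicast address and port; the MX/ST values are illustrative.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(65535)
        for line in data.decode(errors="replace").splitlines():
            if line.lower().startswith("location:"):
                # e.g. LOCATION: http://192.168.1.1:49152/rootDesc.xml
                # 49152 here is the port the UPnP service actually uses.
                print(addr[0], line.strip())
except socket.timeout:
    pass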

Related

I need help understanding port vs. protocol

My question is as follows: why do we need a port when there's a protocol, which is exactly what defines the terms of transferring or receiving data?
I didn't really get it; I'm new to web processes. :)
A protocol is a specification for how two devices should exchange data in a way that they can both understand. A port is kind of a numbered 'tag' that helps a computer decide who should receive an incoming piece of data.
Many protocols have a port that they run on by default; this makes it easier to discover them or configure applications that use them. But that's not a hard rule; they could always listen on a different port, as long as anyone contacting them knew about the change.
A protocol is an agreement on how to interpret data and how to respond to messages. They generally specify message formats and legal messages. Examples of protocols include:
TCP/IP
HTTP
SSH
A port is part of a socket endpoint in TCP and UDP. Ports allow the operating system to distinguish which TCP or UDP service on the host should receive incoming messages.
The confusion generally arises because a number of ports are reserved (e.g. port 80) and are generally listened to by servers expecting a particular protocol (HTTP in the case of port 80). While messages sent to port 80 are generally expected to be HTTP messages, there is nothing stopping a non-HTTP server from listening on port 80, or an HTTP server from listening on an alternative port (for example 8080 or 8088).
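To make the distinction concrete, here is a minimal Python sketch (the port 8080 is just an arbitrary example) of a server that speaks an HTTP-style protocol but listens on a non-default port; the protocol defines what the bytes mean, while the port only tells the operating system which program should receive them:

import socket

# A server that speaks (a tiny subset of) HTTP but listens on port 8080
# rather than the conventional port 80. Protocol and port are independent
# choices; a client just has to know which port to contact.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)

conn, addr = srv.accept()
request = conn.recv(4096)                      # e.g. b"GET / HTTP/1.1 ..."
print("request from", addr, ":", request[:40])
conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")
conn.close()
srv.close()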

RSYSLOG listening on ephemeral (high) port

I've been poking around the internet trying to get an answer to this one, but so far I've only seen it described as "normal" behavior.
I have a Fedora 29 host configured to send rsyslog messages over the default port 514. That works as intended and has for some time now. A client of mine noticed that the host would "listen" on an ephemeral port that appears to change with each reboot:
ss -tulnp | grep 46852
udp   UNCONN   1536   0   0.0.0.0:46852   0.0.0.0:*   users:(("rsyslogd",pid=676,fd=15))
also:
lsof -i :46852 -P
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd 676 root 15u IPv4 24836 0t0 UDP *:46852
Anyone know why rsyslog is doing this? It appears to be default behavior, and I'm not worried about it as the port can't be hit externally (firewall prohibits it) but just wanted to understand it. I also couldn't find anything in the rsyslog docs that talked about it.
Thanks!
This is just observed behavior I am curious about.
This isn't something that rsyslog is doing, but rather your OS.
Clients are assigned port numbers (randomly or sequentially) by your operating system as part of the sequence of system calls that create a network connection. For example, TCP and UDP typically use an "ephemeral" port for the client end of a client–server communication.
These port numbers are - as you said - called "ephemeral" because they are valid only for the life of the connection and have no special significance.
As to why ephemeral ports are used... I don't know. Maybe someone on ServerFault or Network Engineering can answer that question.
From my understanding, ephemeral ports can be used as either temporary or private ports. So if a service temporarily needs a port, it can use an ephemeral one. After the service has finished its requests and has been idle for some time, the port is released and can be used by another service. This way a service doesn't permanently tie up a port that it uses only occasionally or not at all.
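One way to see this in action is the small Python sketch below (the destination address and port are just placeholders): the socket is never bound to an explicit port, so the operating system assigns an ephemeral one at send time, which is the kind of high port that then shows up in ss or lsof:

import socket

# Create a UDP socket and send a datagram without binding a port first;
# the OS picks an ephemeral source port at send time.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(b"hello", ("127.0.0.1", 514))

# getsockname() now shows the ephemeral port the OS assigned,
# e.g. ('0.0.0.0', 46852) - a different number on each run.
print(s.getsockname())
s.close()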

Opening port 19132 on an Oracle compute instance (ubuntu-20.04)

I've created an Oracle Cloud infrastructure compute instance running Ubuntu 20.04. I am trying to open port 19132.
As per another question I found
Opening port 80 on Oracle Cloud Infrastructure Compute node
I've created a public subnet which has an internet gateway and added ingress rules for port 19132 (in the security lists)
netstat looks good
netstat -tulpn
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:19132 0.0.0.0:* 1007/./bedrock_serv
I installed ufw and added rules to allow 19132 but I still can't connect to it from the outside world. Can anyone point out where I am going wrong?
I got the same issue on Oracle Cloud.
Here is what worked for me:
First, install firewalld:
sudo apt install firewalld
Then open the port in the public zone:
sudo firewall-cmd --zone=public --permanent --add-port=19132/tcp
Finally, reload firewalld:
sudo firewall-cmd --reload
Looks like you need to have a Public IP configured on that VM for it to be reachable from the internet.
Please look at
https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm
For an instance to communicate directly with the internet, all of the following are required:
The instance must be in a public subnet.
The instance must have a public IP address.
The instance's VCN must have an internet gateway.
The public subnet must have route tables and security lists configured accordingly.
You haven't mentioned anything about the route table. If it's missing, add a route with destination = 0.0.0.0/0 and target = the Internet Gateway.
Two questions come to mind:
You have specified two rules, one for TCP and one for UDP. Your netstat shows that something is listening for UDP traffic. Is there also something listening on TCP, or are you using UDP only for the test?
Can you tell us anything about the traffic characteristics on this port? I'm asking because, if it is UDP traffic, the only way for connection tracking to work is to track the source/destination IP and port. Since the port will not be present in fragments after the first, that traffic will be dropped. This could be happening on the ingress or egress side. To verify, you could create test ingress/egress rules for all UDP traffic to/from your test IP.
Since your ingress rules are stateful, the egress rules shouldn't matter, but it wouldn't hurt to double-check them. If none of these things work, you might try a tool like echoping to get more insight into whether the traffic is having trouble on the ingress or egress side.
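If you want a quick end-to-end UDP test, a rough Python sketch like the following could also help (the port, messages, and script name are placeholders, and it assumes nothing else is bound to 19132 while you test): run it in server mode on the instance and in client mode from outside; if no reply comes back, the traffic is being dropped somewhere on the ingress or egress path:

import socket
import sys

# Usage (hypothetical): udptest.py server    (on the instance)
#                       udptest.py <host>    (from outside)
PORT = 19132    # placeholder: the port you are trying to reach

if len(sys.argv) > 1 and sys.argv[1] == "server":
    # Run this on the Oracle instance: echo back every datagram received.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", PORT))
    while True:
        data, addr = s.recvfrom(2048)
        print("got", data, "from", addr)
        s.sendto(b"pong", addr)
else:
    # Run this from outside: send a probe and wait briefly for the echo.
    host = sys.argv[1]
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    s.sendto(b"ping", (host, PORT))
    try:
        print("reply:", s.recvfrom(2048))
    except socket.timeout:
        print("no reply - traffic is being dropped somewhere on the path")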
Please check the order of your iptables rules. Could you post the following command's output for the INPUT chain?
sudo iptables -S INPUT
I have seen iptables rules be the single most prominent reason for these issues.
Regards,
Muthu
I think you have to allow the user, or add a user who can connect, like this:
create user 'user'@'publicIP' identified by 'password';
grant all privileges on *.* to 'user'@'publicIP' with grant option;
flush privileges;
Here publicIP can be '0.0.0.0' or your system's IP address.
Don't use '0.0.0.0' as it is open to all; I have faced various breaches on my GCP machine which led to my account being blocked.

Routing on Google Compute Engine from a machine that doesn't have public IP to the internet

On Google Compute Engine we have machines which do not have public IPs (because a quota limits the number of machines that can have public IP addresses). We need these non-public-IP machines to access data from Google Storage buckets, which appears to mean that we have to route to the Internet. But we can't get to anything outside of our network from these non-public-IP machines; all packets are dropped.
We've found some documentation https://developers.google.com/compute/docs/networking#routing that describes setting up routing from machines that do not have public IP addresses to one that does.
We tried creating a machine "proxy" that has ip-forwarding turned on and has firewall rules that allow http and https (I don't think this detail matters, but we did it). We created a network "nat" that has a 0.0.0.0/0 forward to "proxy" rule. Our hope was that data from the non-public-IP machine on the "nat" network would forward their packets to "proxy" and then "proxy" would act as a gateway to the Internet somehow, but this does not work.
I suspect that we have to do some kind of routing instruction on "proxy" that we aren't doing that tells proxy to forward to the Google Internet gateway, but I'm not sure what this should be. Perhaps a rule in iptables? Or some sort of NAT program?
You may be able to use iptables NAT to get it working. On the proxy instance (as root):
# Rewrite outgoing packets so they appear to come from the proxy's own address
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
# Allow forwarding of packets arriving from the internal-facing interface
iptables --append FORWARD --in-interface eth1 -j ACCEPT
# Enable IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward

Rabbitmq listen to UDP connection

Is there a way to have RabbitMQ listen for UDP connections and put those packets into some sort of default queue which can then be pulled from by a standard client? Would ActiveMQ or ZeroMQ be better for this?
Consider using a simple proxy front-end for receiving incoming UDP packets and sending them off to RabbitMQ via AMQP. E.g. in Python you can set up a UDP server and then use the AMQP Pika library to speak with your RabbitMQ server.
Cheers!
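As a rough illustration (not a production design), such a proxy could look like the Python sketch below; it assumes the pika package, a RabbitMQ broker on localhost, and placeholder values for the UDP port and queue name. Each received datagram is published to a plain queue that any standard AMQP client can consume:

import socket

import pika   # third-party AMQP client; assumed to be installed

UDP_PORT = 5140        # placeholder port to listen on for raw datagrams
QUEUE = "udp_in"       # placeholder queue name

# Connect to a RabbitMQ broker on localhost and make sure the queue exists.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)

# Receive UDP datagrams and republish each one as an AMQP message.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", UDP_PORT))

while True:
    data, addr = sock.recvfrom(65535)
    channel.basic_publish(exchange="", routing_key=QUEUE, body=data)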
Someone also built a udp-exchange plugin for RabbitMQ.
I haven't personally used this, but it seems like it would do the job for you without having to write your own UDP-to-AMQP forwarder.
https://github.com/tonyg/udp-exchange
Here's the excerpt:
Extends RabbitMQ Server with support for a new experimental exchange type, x-udp.
Each created x-udp exchange listens on a specified UDP port for incoming messages, and relays them on to the queues bound to the exchange. It also takes messages published to the exchange and relays them on to a specified IP address and UDP port.