How to use tcpdump to retrieve the MCS index

I am trying to figure out how to use tcpdump to find the MCS index for packets. I have gotten as far as figuring out that I need to extract something from the radiotap header (I am on a Mac, so I should have access to this). I have gotten as far as:
tcpdump -i en1 -I -y ieee802_11_radio
I am also open to using something like tshark. With tshark I have gotten as far as:
tshark -i en1 -Y radiotap.mcs.index -I
I do not know where to go from here, or whether this might already be giving me what I need without me knowing it. This might just be a question about deciphering the output, but I am not really sure. I have done a lot of searching but have not found much explicit documentation.

Tcpdump doesn't currently extract the 11n or 11ac information from a radiotap header, so you can't get the MCS index with it. This is a bug; I'll fix it.
The TShark command you would want is something such as
tshark -i en0 -I -Y radiotap.mcs.index -T fields -e radiotap.mcs.index
-Y radiotap.mcs.index means "discard packets that don't have radiotap.mcs.index"; -T fields -e radiotap.mcs.index means "print the value of radiotap.mcs.index if it's present in the packet" (it prints a blank line if it's not present in the packet, which is why you also use the -Y flag).
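If you want a rough distribution of MCS indexes rather than one value per packet, you could tally the field values; as a sketch (the interface name and the 1000-packet cutoff are assumptions):
tshark -i en0 -I -Y radiotap.mcs.index -T fields -e radiotap.mcs.index -c 1000 | sort -n | uniq -c
Each line of output is then a count followed by an MCS index.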

Related

How to specify multiple images as source to signalstats filter with FFProbe?

The signalstats of a single image can be calculated via ffprobe with this command:
ffprobe -f lavfi -i "movie=0000002110.jpg,signalstats,metadata=print"
I'd like to do this in a folder with sequential file numbers. In ffmpeg I'm able to achieve this sort of input like this:
ffmpeg -start_number 1036 -i %010d.jpg -vf "crop=100:100:0:0" \cropped\%010d.jpg
The filter documentation for the movie parameter implies that if I had the right format or stream specifier, things might go better.
I am able to get ffmpeg to do it with a standard -i via:
ffmpeg -start_number 2110 -i %010d.jpg -vf signalstats,metadata=print -f null -
FFProbe would be better for my target environment.
This would be the syntax for reading an image sequence with ffprobe.
ffprobe -f lavfi -i "movie='%010d.jpg':f=image2:format_opts='start_number=2110',signalstats,metadata=print" -v 0 -show_entries frame_tags
The data will be printed to stdout.
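If you only need one statistic per frame, you can narrow the output further; a sketch (lavfi.signalstats.YAVG is just one of the tags signalstats emits):
ffprobe -f lavfi -i "movie='%010d.jpg':f=image2:format_opts='start_number=2110',signalstats,metadata=print" -v 0 -show_entries frame_tags=lavfi.signalstats.YAVG -of csv=p=0
This prints one YAVG value per image to stdout.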

How to limit tcpdump to collect data for a set time (only collect for 60 sec, for example)

I am trying to run tcpdump to collect all packets for a set time (i.e. 60 seconds), but I am not sure how to make it capture all packets and then write them to a file.
So far I have tried:
tcpdump -s0 -i 0.0 -c 5 -vv -n host XXX.XXX.XXX.XXX -w /var/log/XXX.pcap -v
but I don't think that is the best option.
Any advice much appreciated!
You can combine the options -W ("Used in conjunction with the -G option, this will limit the number of rotated dump files that get created, exiting with status 0 when reaching the limit.") and -G rotate_seconds to that effect, i.e. change -c 5 to -W1 -G60.
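Applied to the command from the question, that would look something like
tcpdump -s0 -i 0.0 -vv -n host XXX.XXX.XXX.XXX -w /var/log/XXX.pcap -W1 -G60
tcpdump then exits on its own, with status 0, after the first 60-second rotation.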

Stress test API using multiple JSON files

I am trying to fire 40000 requests towards an API using 40000 different JSON files.
Normally I could do something like this:
for file in /dir/*.json
do
    #ab -p $file -T application/json -c1 -n1 <url>
    curl -X POST -d @"$file" <url> -H "Content-Type: application/json"
done
My problem is that I want to run simultaneous requests, e.g. 100, and I want the total time it took to send all requests etc. recorded. I can't use -c 100 -n 40000 in ab since it's the same URL with different files.
The files/requests all look something like
{"source":"000000000000","type":"A"}
{"source":"000000000001","type":"A"}
{"source":"000000000003","type":"A"}
I was not able to find any tool that supports this out of the box (e.g. Apache Benchmark - ab).
I came across this example here on SO (modded for this question).
Not sure I understand why that example would "cat /tmp" when mkfifo tmp is a file and not a dir though. Might work?
mkfifo tmp
counter=0
for file in /dir/*.json
do
    if [ $counter -lt 100 ]; then
        curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
        let $[counter++];
    else
        read x < tmp
        curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
    fi
done
cat /tmp > /dev/null
rm tmp
How should I go about achieving this in perl, ksh, bash or similar or does anyone know any tools that supports this out of the box?
Thanks!
If your requirement is just to time the total time taken for sending these 40000 curl requests with a different JSON file each time, you can make good use of GNU parallel. The tool has great ways to achieve job concurrency by making use of multiple cores on your machine.
The download procedure is quite simple. Follow How to install GNU parallel (noarc.rpm) on CentOS 7 for a quick and easy list of steps. The tool has a lot more complicated flags to solve multiple use-cases. For your requirement though, just go to the folder containing these JSON files and do
parallel --dry-run -j10 curl -X POST -H "Content-Type: application/json" -d @{} <url> ::: *.json
The above command does a dry run of your command, showing how parallel sets up the flags, processes its arguments, and would start running your command. Here {} represents one of your JSON files. We've specified 10 jobs at a time; increase the number depending on how fast it runs on your machine and how many cores you have. There are also flags to limit how much CPU parallel is allowed to use, so that it doesn't totally choke your system.
Remove --dry-run to run your actual command. To clock the time taken for the whole run, use the time command: just prefix it to the actual command, as in time parallel ...
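Putting it together, a timed run would look like
time parallel -j10 curl -X POST -H "Content-Type: application/json" -d @{} <url> ::: *.json
The "real" line at the end of time's output is the total wall-clock time for all 40000 requests.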

Simple way to verify valid BPF filter

What is the simplest way to verify a BPF filter as a normal user?
Easiest I have found is to run tcpdump with a small pcap file as input to the -r option.
$ tcpdump -r one_packet.pcap -F invalid_bpf.conf 2> /dev/null ; echo $?
1
$ tcpdump -r one_packet.pcap -F valid_bpf.conf 2> /dev/null ; echo $?
0
This returns standard exit codes for valid or invalid BPF filters, but it requires that I have a pcap file to provide as input.
Is there a way to do this simple test without a PCAP file or special privileges?
If you have a shell with a built-in "echo" command that supports escape sequences, one somewhat-perverse way of doing this would be to do
echo -en "\0324\0303\0262\0241\02\0\04\0\0\0\0\0\0\0\0\0\0377\0377\0\0\01\0\0\0"|\
./tcpdump -r - -F bpf.conf 2>/dev/null; echo $?
This worked for me on OS X 10.8, which has bash 3.2.48(1)-release (x86_64-apple-darwin12).
That "echo" command writes out a short pcap file with no packets in it, and with a link-layer header type of DLT_EN10MB. That will test whether the filter is valid for Ethernet; there are filters that are valid for some link-layer header types but not valid for others, such as "not broadcast", which is valid for Ethernet but not for PPP, so you'll need to choose some link-layer header type to use when testing.

How to capture network packets to MySQL

I'm going to design a network analyzer for WiFi (802.11).
Currently I use tshark to capture and parse the WiFi frames and then pipe the output to a Perl script that stores the parsed information in a MySQL database.
I just found out that I miss a lot of frames in this process. I checked, and the frames seem to be lost during the pipe (when the output is delivered to Perl to be stored in MySQL).
Here is how it goes
(Tshark) -------frames are lost----> (Perl) --------> (MySQL)
this is how I pipe the output of tshark to the script:
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length | perl tshark-sql-capture.pl
this is a simple template of the Perl script I use (tshark-sql-capture.pl):
# preparing the MySQL
use DBI;
my $dns = "DBI:mysql:capture;localhost";
my $dbh = DBI->connect($dns, user, pass);
my $db = "captured";

while (<STDIN>) {
    chomp($data = <STDIN>);
    ($time, $frame_len, $cap_len, $radiotap_len) = split " ", $data;
    my $sth = $dbh->prepare("INSERT INTO $db VALUES (str_to_date('$time','%M %d, %Y %H:%i:%s.%f'), '$frame_len', '$cap_len', '$radiotap_len'\n)");
    $sth->execute;
}

#Terminate MySQL
$dbh->disconnect;
Any idea that can help improve the performance is appreciated. Or maybe there is an alternative mechanism that can do better.
Right now my performance is 50%, meaning I can store in MySQL around half of the packets I've captured.
Things written to a pipe don't get lost. What's probably really going on is that tshark tries to write to the pipe, but Perl+MySQL is too slow to process the input, so the pipe fills up; the write would block, so tshark just drops the packets.
The bottleneck could be either MySQL or Perl itself, but it's probably the DB. Check CPU usage and measure the insert rate. Then pick a faster DB or write to multiple DBs. You can also try batch inserts and increasing the size of the pipe buffer.
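As a sketch of the batch-insert idea (a hypothetical rewrite, assuming the same table layout as the script above and tab-separated fields from tshark -T fields), something like this buffers rows and issues one multi-row INSERT per 500 packets:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("DBI:mysql:capture;localhost", "user", "pass",
                       { RaiseError => 1 });
my @batch;

# one multi-row INSERT per batch instead of one round-trip per packet
my $flush = sub {
    return unless @batch;
    my $values = join ",",
        ("(str_to_date(?, '%M %d, %Y %H:%i:%s.%f'), ?, ?, ?)") x @batch;
    $dbh->do("INSERT INTO captured VALUES $values", undef, map { @$_ } @batch);
    @batch = ();
};

while (my $line = <STDIN>) {
    chomp $line;
    push @batch, [ split /\t/, $line ];   # tshark -T fields separates with tabs
    $flush->() if @batch >= 500;
}
$flush->();
$dbh->disconnect;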
Update
while (<STDIN>)
this reads a line into $_, which you then ignore; the chomp($data = <STDIN>) inside the loop reads a second line, so you only process every other packet.
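A minimal fix (keeping the rest of the script the same) reads each line exactly once:
while (my $data = <STDIN>) {
    chomp $data;
    my ($time, $frame_len, $cap_len, $radiotap_len) = split " ", $data;
    # prepare and execute the INSERT as in the original script
}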
For pipe problems, you can improve packet capture with GULP http://staff.washington.edu/corey/gulp/
From the Man pages:
1) reduce packet loss of a tcpdump packet capture:
(gulp -c works in any pipeline as it does no data interpretation)
tcpdump -i eth1 -w - ... | gulp -c > pcapfile
or if you have more than 2, run tcpdump and gulp on different CPUs
taskset -c 2 tcpdump -i eth1 -w - ... | gulp -c > pcapfile
(gulp uses CPUs #0,1 so use #2 for tcpdump to reduce interference)
You can also use a FIFO file, then read the packets from it and insert them into MySQL using INSERT DELAYED.
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO
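For example (a sketch; the FIFO name and the reader side are assumptions):
mkfifo MYFIFO
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO &
perl tshark-sql-capture.pl < MYFIFO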