How to limit tcpdump to collect data for a set time (e.g., only collect for 60 seconds) - tcpdump

I am trying to run tcpdump to collect all packets for a set time (e.g., 60 seconds), but I am not sure how to make it capture all packets for that period and then write them to a file.
So far I have tried:
tcpdump -s0 -i 0.0 -c 5 -vv -n host XXX.XXX.XXX.XXX -w /var/log/XXX.pcap -v
but don't think that is the best option.
Any advice much appreciated!

How to limit tcpdump to collect data for a set time
You can combine the options -W (used in conjunction with the -G option, this limits the number of rotated dump files that get created, exiting with status 0 when reaching the limit) and -G rotate_seconds to that effect, i.e. change -c 5 to -W 1 -G 60.
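Applied to the command from the question, that would look something like this (a sketch; eth0 stands in for the real capture interface):
tcpdump -s0 -i eth0 -vv -n host XXX.XXX.XXX.XXX -G 60 -W 1 -w /var/log/XXX.pcap
-G 60 rotates the dump file every 60 seconds and -W 1 limits the capture to one such file, so tcpdump exits after the first 60 seconds.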

Related

How can I limit strace output size?

I am running this command on Plesk Ubuntu via an SSH terminal:
strace -p 1234567 -Tf 2>&1 | grep -v select > /path/file.log
This traces a running process, filters out select commands, and writes the output to a file.
How can I limit the size of that file to 8M? The goal is to capture the last 8M of output before the process dies. The file grows quickly, so logrotate's daily cycle won't do, and it can't be a manual process because it may have to run for days. I tried piping into "tail -c 8M", but I think buffering is preventing any output from that. How can I accomplish this?
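For reference, the attempted pipeline from the question would look something like this (a sketch; the PID and path are the question's placeholders):
strace -p 1234567 -Tf 2>&1 | grep -v select | tail -c 8M > /path/file.log
Note that tail cannot know which bytes are the last 8M until its input ends, so it necessarily produces no output until the traced process exits; that, rather than buffering alone, is why nothing appears in the file while the pipeline runs.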

Stress test API using multiple JSON files

I am trying to fire 40000 requests towards an API using 40000 different JSON files.
Normally I could do something like this:
for file in /dir/*.json
do
    # ab -p "$file" -T application/json -c1 -n1 <url>
    curl -X POST -d @"$file" <url> -H "Content-Type: application/json"
done
My problem is that I want to run simultaneous requests, e.g. 100 at a time, and I want the total time it took to send all the requests recorded. I can't use -c 100 -n 40000 in ab, since that sends the same payload to the URL every time, while I have a different file for each request.
The files/requests all look something like
{"source":"000000000000","type":"A"}
{"source":"000000000001","type":"A"}
{"source":"000000000003","type":"A"}
I was not able to find any tool that supports this out of the box (e.g. Apache Benchmark - ab).
I came across this example here on SO (modded for this question).
I am not sure I understand why that example would "cat /tmp" when tmp, created by mkfifo, is a file and not a directory, though. Might it work?
mkfifo tmp
counter=0
for file in /dir/*.json
do
    if [ $counter -lt 100 ]; then
        curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
        counter=$((counter+1))
    else
        read x < tmp
        curl -X POST -H "Content-Type: application/json" -d @"$file" <url> &
    fi
done
cat /tmp > /dev/null
rm tmp
How should I go about achieving this in Perl, ksh, bash or similar, or does anyone know of any tool that supports this out of the box?
Thanks!
If your request is just to time the total time taken to send these 40000 curl requests with a different JSON payload each time, you can make good use of GNU parallel. The tool has great ways to achieve job concurrency by making use of multiple cores on your machine.
The download procedure is quite simple. Follow How to install GNU parallel (noarc.rpm) on CentOS 7 for a quick and easy list of steps. The tool has many more flags to solve multiple use-cases. For your requirement though, just go to the folder containing these JSON files and do
parallel --dry-run -j10 curl -X POST -H "Content-Type: application/json" -d @{} <url> ::: *.json
The above command does a dry run of your command, showing how parallel sets up the flags, processes its arguments, and would start running your command. Here {} represents one of your JSON files. We've specified 10 jobs at a time; increase the number depending on how fast it runs on your machine and how many cores you have. There are also flags to limit the overall CPU parallel is allowed to use, so that it doesn't totally choke your system.
Remove --dry-run to run your actual command. To clock the time taken for the process to complete, use the time command: just prefix it before the actual command, as in time parallel ...
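Putting that together, a timed run might look like this (a sketch; <url> stands in for your endpoint as in the question):
time parallel -j10 curl -X POST -H "Content-Type: application/json" -d @{} <url> ::: *.json
The total wall-clock time for all 40000 requests appears on the "real" line of time's output.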

How to use wireshark to capture mysql query sql clearly

Because we develop against a remote MySQL server, we cannot check the query SQL easily; with a local server you can tail -f the general_log_file to see which SQL statements are executed when some HTTP interface is called. So I installed Wireshark to capture the query SQL sent from my local machine. At first I used a local MySQL server to verify it.
The capture filter is as shown in the screenshot; then I executed two SQL queries in the mysql terminal:
select version();
select now();
but, very disappointingly, I cannot find these two SQL packets in Wireshark.
I only found these four packets:
But from a post I learned:
To filter out the MySQL packets you just use the filter 'mysql' or 'mysql.query != ""' when you only want packets that request a query. After that you can add a custom column with the field name 'mysql.query' to have a list of queries that were executed.
and the effect is like the screenshot: it's convenient to capture only the query SQL, and these queries are displayed very clearly. So how can I use Wireshark to achieve this?
Hi @Jeff S.,
I tried your command; please see below:
#terminal 1
tshark -i lo0 -Y "mysql.command==3"
Capturing on 'Loopback'
# terminal 2
mysql -h127.0.0.1 -u root -p
select version();
#result: nothing output in terminal 1
and tshark -i lo0 -Y "mysql.command==3" -T fields -e mysql.query is the same as tshark -i lo -Y "mysql.command==3": also nothing output. But if I only use tshark -i lo0, there is output:
Capturing on 'Loopback'
1 0.000000 127.0.0.1 -> 127.0.0.1 TCP 68 57881 → 3306 [SYN] Seq=0 Win=65535 Len=0 MSS=16344 WS=32 TSval=1064967501 TSecr=0 SACK_PERM=1
2 0.000062 127.0.0.1 -> 127.0.0.1 TCP 68 3306 → 57881 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=16344 WS=32 TSval=1064967501 TSecr=1064967501 SACK_PERM=1
3 0.000072 127.0.0.1 -> 127.0.0.1 TCP 56 57881 → 3306 [ACK] Seq=1 Ack=1 Win=408288 Len=0 TSval=1064967501 TSecr=1064967501
4 0.000080 127.0.0.1 -> 127.0.0.1 TCP 56 [TCP Window Update] 3306 → 57881 [ACK] Seq=1 Ack=1 Win=408288 Len=0 TSval=1064967501 TSecr=1064967501
...
You can use tshark and save to a pcap or just export the fields you're interested in.
To save to a pcap (if you want to use wireshark to view later):
tshark -i lo -Y "mysql.command==3" -w outputfile.pcap
tshark -i lo -R "mysql.command==3" -w outputfile.pcap
-R is deprecated in favor of single-pass filters (-Y), but which one works will depend on your version.
-i is the interface, so replace it with whatever interface you are using (e.g. -i eth0).
To save to a text file:
tshark -i lo -Y "mysql.command==3" -T fields -e mysql.query > output.txt
You can also use BPF filters with tcpdump (and Wireshark pre-capture filters). They are more complex, but less taxing on your system if you're capturing a lot of traffic.
sudo tcpdump -i lo "dst port 3306 and tcp[(((tcp[12:1]&0xf0)>>2)+4):1]=0x03" -w outputfile.pcap
NOTE:
*This looks for 0x03 (similar to mysql.command==3) within the TCP payload.
**Since this is a pretty loose filter, I also added port 3306 to restrict to only traffic destined for that port.
***The filter is based on your screenshot. I cannot validate it right now, so let me know if it doesn't work.
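For reference, here is how that BPF expression reaches the MySQL command byte, based on the standard MySQL packet layout (3-byte payload length, 1-byte sequence number, then the command byte):
# (tcp[12:1]&0xf0)>>2 -> TCP header length in bytes (the data-offset field counts 32-bit words)
# +4                  -> skip the 3-byte MySQL packet length and the 1-byte sequence number
# ...:1               -> read one byte at that offset
# =0x03               -> that byte must be 0x03, i.e. COM_QUERY (what mysql.command==3 matches)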
Useful answers here:
https://serverfault.com/questions/358978/how-to-capture-the-queries-run-on-mysql-server
In particular: SoMoSparky's answer of:
tshark -T fields -R mysql.query -e mysql.query
and user1038090's answer of:
tcpdump -i any -s 0 -l -vvv -w - dst port 3306 | strings | perl -e '
  while (<>) {
    chomp;
    next if /^[^ ]+[ ]*$/;
    if (/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
      if (defined $q) { print "$q\n"; }
      $q = $_;
    } else {
      $_ =~ s/^[ \t]+//;
      $q .= " $_";
    }
  }'
I had a similar "problem".
Try checking your MySQL SSL settings; the SSL was probably turned on, hence the traffic was encrypted.
You can refer to this post to check the ssl: https://dba.stackexchange.com/questions/36776/how-can-i-verify-im-using-ssl-to-connect-to-mysql
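For example, a quick check from the mysql client, mirroring the connection used above (a sketch; a non-empty Ssl_cipher value means the connection is encrypted, so the queries will not be visible in a plain capture):
mysql -h127.0.0.1 -u root -p -e "SHOW SESSION STATUS LIKE 'Ssl_cipher'"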
I tried another tshark command from this post, and it could capture query sql from local to remote mysql server.
tshark -i en0 -d tcp.port==3306,mysql -T fields -e mysql.query 'port 3306'
Capturing on 'Wi-Fi'
select version()
select now()
select rand()
but it also outputs some blank lines between these SQL statements. I tried the command below, hoping to remove the blank lines, but it failed:
tshark -i en0 -d tcp.port==6006,mysql -Y "frame.len>10" -T fields -e mysql.query 'port 6006'
And unfortunately this command cannot capture the query SQL sent to a local MySQL server (5.7.12):
tshark -i lo -d tcp.port==3306,mysql -T fields -e mysql.query 'port 3306'
Capturing on 'Loopback'
Nothing is output except blank lines.
The Wireshark tool supports the MySQL protocol:
https://www.wireshark.org/docs/dfref/m/mysql.html
Then configure Wireshark:
a. Menu Analyze --> Decode As --> add "field=tcp_port value=3306 current=MySQL"
b. Filter 'mysql' or 'mysql.query != ""'

How to use tcpdump to retrieve mcs index

I am trying to figure out how to use tcpdump to find the MCS index for packets. I have figured out that I need to extract something from the radiotap header (I am on a Mac, so I should have access to this), and I have gotten as far as:
tcpdump -i en1 -I -y ieee802_11_radio
I am also open to using something like tshark. With tshark I have gotten as far as:
tshark -i en1 -Y radiotap.mcs.index -I
I do not know where to go from here, or whether this might already be giving me what I need without my knowing it. This might just be a question about deciphering the output, but I am not really sure. I have done a lot of searching but have not found much explicit documentation.
Tcpdump doesn't currently extract the 11n or 11ac information from a radiotap header, so you can't get the MCS index with it. This is a bug; I'll fix it.
The TShark command you would want would be something such as
tshark -i en0 -I -Y radiotap.mcs.index -T fields -e radiotap.mcs.index
-Y radiotap.mcs.index means "discard packets that don't have radiotap.mcs.index"; -T fields -e radiotap.mcs.index means "print the value of radiotap.mcs.index if it's present in the packet" (it prints a blank line if it's not present in the packet, which is why you also use the -Y flag).

How To Capture network packets to MySQL

I'm going to design a network analyzer for WiFi (802.11).
Currently I use tshark to capture and parse the WiFi frames, and then pipe the output to a Perl script that stores the parsed information in a MySQL database.
I just found out that I miss a lot of frames in this process. I checked, and the frames seem to be lost during the pipe (when the output is delivered to Perl to get stored in MySQL).
Here is how it goes
(Tshark) -------frames are lost----> (Perl) --------> (MySQL)
This is how I pipe the output of tshark to the script:
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length | perl tshark-sql-capture.pl
This is a simple template of the Perl script I use (tshark-sql-capture.pl):
# preparing the MySQL connection
use DBI;
my $dns = "DBI:mysql:capture;localhost";
my $dbh = DBI->connect($dns, user, pass);
my $db = "captured";
while (<STDIN>) {
    chomp($data = <STDIN>);
    ($time, $frame_len, $cap_len, $radiotap_len) = split " ", $data;
    my $sth = $dbh->prepare("INSERT INTO $db VALUES (str_to_date('$time','%M %d, %Y %H:%i:%s.%f'), '$frame_len', '$cap_len', '$radiotap_len')");
    $sth->execute;
}
# terminate the MySQL connection
$dbh->disconnect;
Any idea that can help make the performance better is appreciated, or maybe there is an alternative mechanism that can do better.
Right now my performance is 50%, meaning I can store in MySQL around half of the packets I've captured.
Things written to a pipe don't get lost. What's probably really going on is that tshark tries to write to the pipe, but Perl+MySQL is too slow to process the input, so the pipe fills up; the write would block, so tshark just drops the packets.
The bottleneck could be either MySQL or Perl itself, but it's probably the DB. Check CPU usage and measure the insert rate, then pick a faster DB or write to multiple DBs. You can also try batch inserts and increasing the size of the pipe buffer.
Update
while (<STDIN>)
This reads a line into $_, which you then ignore; the chomp($data = <STDIN>) inside the loop reads a second line, so half of your input lines are silently dropped.
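A corrected loop might look like this (a sketch that keeps the question's table layout; it splits on tabs, since tshark -T fields uses tab as its default separator and frame.time itself contains spaces, and it prepares the INSERT once with placeholders, which also helps insert speed):
# prepare once, with placeholders, outside the loop
my $sth = $dbh->prepare("INSERT INTO $db VALUES (str_to_date(?, '%M %d, %Y %H:%i:%s.%f'), ?, ?, ?)");
while (my $data = <STDIN>) {
    chomp $data;
    my ($time, $frame_len, $cap_len, $radiotap_len) = split /\t/, $data;
    $sth->execute($time, $frame_len, $cap_len, $radiotap_len);
}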
For pipe problems, you can improve packet capture with GULP http://staff.washington.edu/corey/gulp/
From the Man pages:
1) reduce packet loss of a tcpdump packet capture:
(gulp -c works in any pipeline as it does no data interpretation)
tcpdump -i eth1 -w - ... | gulp -c > pcapfile
or if you have more than 2, run tcpdump and gulp on different CPUs
taskset -c 2 tcpdump -i eth1 -w - ... | gulp -c > pcapfile
(gulp uses CPUs #0,1 so use #2 for tcpdump to reduce interference)
You can use a FIFO file, then read the packets and insert them into MySQL using INSERT DELAYED.
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO
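A sketch of the full arrangement (assuming the FIFO is named MYFIFO as above and the Perl script reads standard input as in the question):
mkfifo MYFIFO
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO &
perl tshark-sql-capture.pl < MYFIFO
The named pipe decouples capture from insertion, and INSERT DELAYED (on storage engines that support it, such as MyISAM) lets each INSERT return without waiting for the row to be written.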