I am trying to programmatically capture a stream of packets by using Tshark. The simplified terminal command I am using is:
tshark -i 2 -w output.pcap
This is pretty straightforward, but I then need to get a .csv file in order to easily analyze the information captured.
By opening the .pcap file in Wireshark and exporting it as .csv, what I get is a file structured as follows:
"No.","Time","Source","Destination","Protocol","Length","Info"
but, again, I need to do this automatically. So I tried using the command:
tshark -r output.pcap -T fields -e frame.number -e ip.src -e ip.dst -e frame.len -e frame.time -e frame.time_relative -E header=y -E separator=, > output.csv
but I cannot find anywhere the name of the field that corresponds to the "Info" column I get when exporting the .csv manually.
Any ideas? Thanks!
Yes, you can if you use the latest Development Release.
See Wireshark Bug 2892.
Download the Development Release Version 1.9.0.
Use the following command:
$ tshark -i 2 -T fields -e frame.time -e col.Info
Output
Feb 28, 2013 20:58:24.604635000 Who has 10.10.128.203? Tell 10.10.128.1
Feb 28, 2013 20:58:24.678963000 Who has 10.10.128.163? Tell 10.10.128.1
Note: use -e col.Info (with a capital I).
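For what it's worth, in more recent tshark releases the column fields carry a _ws.col. prefix (as in the commands further down this page), so the equivalent there should be:
tshark -i 2 -T fields -e frame.time -e _ws.col.Info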
How about directly exporting the packets to a .csv file?
sudo tshark > fileName.csv
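Note that this writes tshark's default one-line summaries, which are not comma-separated. To get something shaped like Wireshark's CSV export, a sketch along these lines should work (assuming a tshark version with the _ws.col. column fields):
sudo tshark -T fields -e frame.number -e frame.time_relative -e ip.src -e ip.dst -e _ws.col.Protocol -e frame.len -e _ws.col.Info -E header=y -E separator=, -E quote=d > fileName.csv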
So I've got the following project, OpenFHE-development, and when I run the build process there are lots of warnings. However, most of these warnings are fine to ignore (we vet them before pushing to the main branch).
Specifically, is there a way to take
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia
and convert it to
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia 2> errors.txt
(redirect stderr to a file instead of having it displayed on the terminal)?
This does not seem to be possible at the moment. See the git discussion.
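If you are running Sphinx yourself rather than through the Read the Docs build system, plain shell redirection already does what the question asks; for example, to save the warnings while still seeing them (a bash-only sketch, not specific to Read the Docs):
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia 2> >(tee errors.txt >&2)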
I have a simple pcap with some web traffic and am using tshark to obtain some header information from it:
I use the following command:
tshark -r ./capture-1-5 -Y "http2" -o tls.keylog_file:ssl-key.log \
-T fields -e frame.number -e _ws.col.Time -e ip.src -e tcp.srcport \
-e ip.dst -e tcp.dstport -e _ws.col.Protocol -e frame.len \
-e _ws.col.Info -E header=y -E separator="," -E quote=d \
-E occurrence=f > desegmented.csv
I realized that in this case all fragments are reassembled, resulting in huge packets. However, I do not want reassembled packets, so I add an extra option to tshark:
tshark -r ./capture-1-5 -Y "http2" -o tls.keylog_file:ssl-key.log \
-T fields -e frame.number -e _ws.col.Time -e ip.src -e tcp.srcport \
-e ip.dst -e tcp.dstport -e _ws.col.Protocol -e frame.len \
-e _ws.col.Info -E header=y -E separator="," -E quote=d \
-E occurrence=f -o tcp.desegment_tcp_streams:FALSE > segmented.csv
My intuition is that the resulting segmented.csv file should be larger and contain more rows, given that the "packets above the MTU" will be shown as more than one packet.
However, I observe the opposite: the file without reassembly is smaller and has almost half the number of rows.
-rw-r--r-- 1 root root 210K May 18 18:21 desegmented.csv
-rw-r--r-- 1 root root 97K May 18 18:21 segmented.csv
# cat desegmented.csv |wc -l
2635
# cat segmented.csv |wc -l
1233
Is this normal behavior? Inspecting manually, I can't see where (or why) the packets start to disappear, nor any pattern, because of the two-way communication (packets are missing here and there).
I assume that maybe, in the segmented.csv case, every packet, or even the whole packet stream, that resulted in at least one packet above the MTU is completely dropped.
I also tried applying ip.defragment:FALSE, but the results are the same.
Thanks
For reproducing, the files can be downloaded from here
Thanks, @JimD., I had already come to a similar conclusion!
The packet capture itself has to be segmented to do this precisely.
So I tried to go one layer below and make the capture itself segmented via
ethtool -K eth0 gso off tso off gro off sg off tx off rx off
(just to make sure).
The problem is that the packet capturing is done in a Docker container, so I have to issue this command in multiple places for it to work fully.
These places include the docker0 bridge, eth0 inside the container, and the corresponding vethXXXXXX on the host; the second of these requires a privileged container, which poses further issues :)
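For reference, what this boils down to is roughly the following (a sketch; vethXXXXXX and <container> are placeholders, and the nsenter trick is my own workaround for avoiding a privileged container rather than something from this thread):
# on the host: the docker0 bridge and the container's veth peer
sudo ethtool -K docker0 gso off tso off gro off sg off tx off rx off
sudo ethtool -K vethXXXXXX gso off tso off gro off sg off tx off rx off
# eth0 inside the container, entered from the host via its network namespace
sudo nsenter -t $(docker inspect -f '{{.State.Pid}}' <container>) -n ethtool -K eth0 gso off tso off gro off sg off tx off rx off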
I apologize if my question does not meet the standards for asking here.
I have two files (products-data.json, orders-data.json) inside the following directory:
G:\kb\Couchbase\CB121
and I imported the products-data.json successfully using the following command:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://products-data.json -t 4 -g %type%::%variety%::#MONO_INCR#
But when I try to import orders-data.json in the same way as follows:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://orders-data.json -t 4 -g %type%::%order_id%
I am getting the following error:
2018-01-21T12:01:31.211+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- jsondata.(*Parallelizer).Execute() at source.go:198
2018-01-21T12:01:31.212+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- plan.(*data).execute() at data.go:89
Json import failed: open orders-data.json: The system cannot find the file specified.
It kills my day. Any help is appreciated. Thanks.
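For what it's worth (a guess on my part, not something from this thread): since the error is a plain "cannot find the file", it may be worth confirming from the same directory that the file name matches exactly, e.g.:
G:\kb\Couchbase\CB121>dir orders-data.json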
I have just dumped a Riak DB (backup), but the backup file is a binary file.
Is there a library that deserializes it into a human-readable format (JSON or whatever)?
I haven't found anything on Google, nor on Stack Overflow.
Found a solution for my current problem:
Connect to the environment and then run the following commands:
wget https://s3-us-west-2.amazonaws.com/ps-tools/riak-data-migrator-0.2.9-bin.tar.gz
tar -xvzf riak-data-migrator-0.2.9-bin.tar.gz
cd riak-data-migrator-0.2.9
java -jar riak-data-migrator-0.2.9.jar -d -r /var/riak_export -a -h 127.0.0.1 -p 8087 -H 8098
(source: https://github.com/basho-labs/riak-data-migrator)
EDIT
Another way to export a Riak DB: https://www.npmjs.com/package/riak-bucket-exporter
#!/bin/bash
# Export every bucket of a local Riak node to <bucket>.json using riak-bucket-exporter
for bucket in $(curl http://localhost:8098/riak?buckets=true | sed -e 's/[{}:"]//gi' -e 's/buckets\[//' -e 's/\]//' -e 's/,/ /g')
do
  echo "Exporting bucket $bucket"
  rm -f "$bucket.json"
  riak-bucket-exporter -H localhost -p 8098 "$bucket"
done
echo "Export done"
As all the suggestions listed so far appear to be broken in one way or another (at least for me on riak-kv 2.x), I ultimately resorted to home-growing a bash shell script that leverages riak-kv's HTTP API, with no prerequisites other than curl and jq, to accomplish an export of sorts.
It can be found in this gist: https://gist.github.com/cueedee/0b26ec746c4ef578cd98e93c93d2b6e8. I hope someone will find it useful.
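The gist is the authoritative version; as a rough illustration of the idea only (my own sketch, assuming riak-kv's HTTP API on localhost:8098 and bucket/key names without whitespace), it amounts to something like:
#!/bin/bash
# walk every bucket and key over the HTTP API, appending each object to <bucket>.json
for bucket in $(curl -s 'http://localhost:8098/buckets?buckets=true' | jq -r '.buckets[]'); do
  for key in $(curl -s "http://localhost:8098/buckets/$bucket/keys?keys=true" | jq -r '.keys[]'); do
    curl -s "http://localhost:8098/buckets/$bucket/keys/$key" >> "$bucket.json"
    echo >> "$bucket.json"
  done
done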
How is it possible to run this and write the innobackupex output to a file (but still send it to the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to output the innobackupex log, with "... completed OK!" in the last line, to a file. How can I do that?
I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to the log file, as the Perl script plays with the tty. Here is what worked for me.
If you need to execute innobackupex from the command line, you can do:
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz 2>/path/mybkp.log
If you need to script it and get the OK message, you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>.
Prepend
2> >(tee file)
to your command.
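Applied to the command from the question, that looks something like this (a bash-only sketch, slightly adjusted so it composes with the pipe: the >&2 inside the process substitution keeps the log on the terminal's stderr instead of feeding it into gzip):
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ 2> >(tee /path/mybkp.log >&2) | gzip -c -1 > /var/backup/backup.tar.gz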