How to calculate the number of bytes going through the network with tcpdump?

I have a tcpdump command like this:
sudo tcpdump tcp -n -i eth0 -w test.dmp
I want to calculate the number of TCP bytes going through eth0. I capture all the packets using tcpdump as above. Is the file size equal to the number of bytes, or does tcpdump add additional information to the dump file?

Yes, tcpdump adds additional information to the file.
It (currently) writes only in pcap format, which means there's a 24-byte header at the beginning of the file, giving information such as the link-layer header type for packets in the file, so the first thing you'd need to do would be to subtract 24 from the size of the file.
In addition, each packet has a 16-byte header giving an arrival time stamp for the packet, the length of the packet, and the number of bytes of packet data that was captured. This means that you would need to subtract 16*{number of packets} from the length - but the only way to get the number of packets is to read the file, so you can't get the number of bytes just by looking at the file size!
Note also that some versions of tcpdump do not default to a "snapshot length" of 0 (which newer versions treat as "capture the entire packet"), so the number of bytes of packet data that is captured may be less than the number of packet bytes on the network.
Therefore, what you should do is write a program (use libpcap, as it already understands pcap format, so you don't have to write your own code to parse it) that reads all the packets and adds up the "length of the packet" values. That value is the len field in the struct pcap_pkthdr structure; do not use caplen, as that is the number of bytes of packet data that was captured, which may be less than the length of the packet on the network.
You say eth0, so the link-layer header type is probably Ethernet, and there is, for example, no radio meta-data, as might be the case if you were capturing in monitor mode on a Wi-Fi adapter. In the cases where there's extra meta-data in the link-layer header, you'd need to subtract that.
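For example, a minimal sketch of such a program, assuming the capture file is test.dmp as written by the command above (the argument handling and output format are just illustrative), might look like this:
/* Sum the on-the-wire length of every packet in a pcap savefile. */
#include <pcap.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    const char *file = (argc > 1) ? argv[1] : "test.dmp";
    pcap_t *p = pcap_open_offline(file, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    uint64_t total_bytes = 0, packets = 0;
    int ret;

    /* pcap_next_ex() returns 1 for each packet and -2 at the end of a savefile. */
    while ((ret = pcap_next_ex(p, &hdr, &data)) == 1) {
        total_bytes += hdr->len;   /* len = bytes on the network; caplen = bytes captured */
        packets++;
    }
    if (ret == -1)
        fprintf(stderr, "error reading %s: %s\n", file, pcap_geterr(p));

    printf("%llu packets, %llu bytes on the network\n",
           (unsigned long long)packets, (unsigned long long)total_bytes);
    pcap_close(p);
    return 0;
}
Link with -lpcap.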

Related

Tcpdump dynamic filter based on length

I'm trying to capture all DHCP Discover packets that don't end with 0xff, which should be the last byte of a correct BOOTP request.
I can filter all DHCP Discovers by using the correct offset ether[284:1], because that field sits at a fixed position from the beginning of the packet, but what changes is obviously the length of the entire request.
Is there a way to dynamically calculate the length of the packet and use it as a proper offset?
Thanks

Sharing big binary data between processes

I have a big binary of IP data, about X MB. Processes use the binary to run a search algorithm that looks up IP addresses. I have three methods:
1. Put it in ETS, but I suppose every read access will copy the big binary into the process. :(
2. Put it in gen_server state; processes use gen_server:call to get an address. The shortcoming is concurrency.
3. Compile the binary into a BEAM module, but when I compile I get
eheap_alloc: Cannot allocate 1318267840 bytes of memory (of type "heap")
Which is the best practice for sharing big data in Erlang?
Binaries over 64 bytes in size are stored as reference counted binaries and their data is stored outside the heap of any process. If such a binary is sent to any process, the underlying data is not duplicated. So, if you store such a binary in an ETS table and then access it from various processes, the underlying data will not be copied, only its reference count will be incremented/decremented. I'd suggest going with the ETS table solution.
Here's a demonstration of the memory usage at boot, after inserting a 100MB binary into an ETS table, and after fetching a copy of the binary into the shell process. The memory usage does not change after we have a copy of the binary stored in the shell process. The same would not be true if it were a million-character string (a list of integers) that we were copying in from ETS or another process.
1> erlang:memory().
[{total,21912472},
{processes,5515456},
{processes_used,5510816},
{system,16397016},
{atom,223561},
{atom_used,219143},
{binary,844872},
{code,4808780},
{ets,301232}]
2> ets:new(foo, [named_table, set]).
foo
3> ets:insert(foo, {foo, binary:copy(<<".">>, 104857600)}).
true
4> erlang:memory().
[{total,127038632},
{processes,5600320},
{processes_used,5599952},
{system,121438312},
{atom,223561},
{atom_used,220445},
{binary,105770576},
{code,4908097},
{ets,308416}]
5> X = ets:lookup(foo, foo).
[{foo,<<"........................................................................................................"...>>}]
6> erlang:memory().
[{total,127511632},
{processes,6082360},
{processes_used,6081992},
{system,121429272},
{atom,223561},
{atom_used,220445},
{binary,105761504},
{code,4908097},
{ets,308416}]
You can find a lot more info about how to work efficiently with binaries in the Erlang documentation on binary handling.

why does libpcap/tcpdump add/pad '0x00' bytes at the end of IP/TCP packets?

I use both tcpdump and a program of my own that uses libpcap to capture TCP packets, and I notice that some packets are padded with additional 0x00 bytes at the end. For example, while the length indicated in the IP header says that the packet is 40 bytes, tcpdump captures 46 bytes, and I notice there are six 0x00 bytes at the end of the TCP packet.
They don't add those bytes.
The machine sending the packets does, because that's required on Ethernet.
A 40-byte IP packet, when sent on Ethernet, would be 54 bytes long, because there's a 14-byte Ethernet header before the IP header and payload.
However, the minimum packet length on Ethernet is 60 bytes (not including the 4-byte FCS at the end). That means that the packet has to be padded to 60 bytes, which means adding 6 bytes of padding at the end.
(That's one reason why the IP header has a length field - so that the receiver of the packet knows how much is IP and how much is padding.)
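If you are processing such frames with your own libpcap code and want to ignore the padding, a rough sketch (assuming untagged IPv4-over-Ethernet; the function name is purely illustrative) is to trust the IP total-length field rather than the captured frame length:
#include <stdint.h>
#include <stddef.h>

#define ETH_HDR_LEN 14   /* untagged Ethernet header */

/* Return the number of trailing padding bytes in a captured frame,
 * or -1 if the frame is too short, not IPv4, or truncated. */
int trailing_padding(const uint8_t *frame, size_t frame_len)
{
    if (frame_len < ETH_HDR_LEN + 20)
        return -1;
    uint16_t ethertype = ((uint16_t)frame[12] << 8) | frame[13];
    if (ethertype != 0x0800)    /* not IPv4 */
        return -1;
    /* Total Length is bytes 2-3 of the IP header, in network byte order. */
    uint16_t ip_total_len = ((uint16_t)frame[ETH_HDR_LEN + 2] << 8) | frame[ETH_HDR_LEN + 3];
    if ((size_t)ETH_HDR_LEN + ip_total_len > frame_len)
        return -1;              /* capture shorter than the IP packet claims */
    return (int)(frame_len - (ETH_HDR_LEN + ip_total_len));
}
For the 40-byte IP packet above, a 60-byte frame gives 60 - (14 + 40) = 6 bytes of padding, matching the six 0x00 bytes you saw.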

Compressed Minecraft packets in AS3 (ActionScript 3)

I'm trying to connect to a Minecraft server with AS3.
The server returns a ByteArray that I am unable to understand.
Here's an example:
«¢00
*H÷
0[ÞJí"
nöí_Jí2Q»÷/½KW9ó`ä¦ËJ!ôàNÄÇgkÉÚY`*u\êRáåLøjTp9ÔÅڕQ̺ÐWÊýƶ[Ð5æsövxåIIæ>Z
u¾C­ӷ.C¹i΍PWûóM×
I tried the following to interpret the data:
bytes.uncompress();
but I got this error:
Error: Error #2058: There was an error decompressing the data.
According to http://wiki.vg/Protocol#Packet_format, the packet format for Minecraft is as shown below. You need to interpret the fields of the data you receive as shown below and then, if the packet is compressed, pass only the compressed data to zlib.
Packet format
Without compression
Field Name Field Type Notes
------------ ------------ -----------------------------------------------------------------------
Length VarInt Length of packet data + length of the packet ID
Packet ID VarInt
Data Byte Array Depends on the connection state and packet ID, see the sections below
With compression
Once a Set Compression packet is sent, zlib compression is enabled for all following packets. The format of a packet changes slightly to include the size of the uncompressed packet.
Field Name Field Type Notes
------------ ------------ -----------------------------------------------------------------------
Packet Length VarInt Length of Data + length of Data Length
Data Length VarInt Length of uncompressed Data or 0
Data Byte Array zlib compressed packet, including packet ID (see the sections below)
How do you know that your packet is compressed? According to this same documentation, compression isn't enabled until a Set Compression is sent.
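If it helps, here is a rough C sketch of that framing (you would translate the same logic to AS3): decode the two leading VarInts, then hand only the bytes after them to zlib, and only when Data Length is non-zero. The function names and buffer handling are illustrative, not part of the protocol documentation.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

/* Decode one VarInt starting at buf[*pos]; advances *pos.
 * VarInts carry 7 bits per byte, least-significant group first;
 * the high bit is set on every byte except the last. */
static int32_t read_varint(const uint8_t *buf, size_t *pos)
{
    int32_t value = 0;
    int shift = 0;
    uint8_t b;
    do {
        b = buf[(*pos)++];
        value |= (int32_t)(b & 0x7F) << shift;
        shift += 7;
    } while (b & 0x80);
    return value;
}

/* packet points at the start of one framed packet (the Packet Length VarInt). */
void handle_packet(const uint8_t *packet)
{
    size_t pos = 0;
    int32_t packet_len = read_varint(packet, &pos);  /* Data Length + Data */
    size_t body_start = pos;
    int32_t data_len = read_varint(packet, &pos);    /* uncompressed size, or 0 */
    uLong remaining = (uLong)(packet_len - (pos - body_start));

    if (data_len == 0) {
        /* Below the compression threshold: packet ID + data follow as-is. */
        printf("uncompressed body, %lu bytes\n", (unsigned long)remaining);
        return;
    }

    /* Compressed: the rest of the packet is one zlib stream. */
    uLongf out_len = (uLongf)data_len;
    uint8_t *out = malloc(out_len);
    if (out && uncompress((Bytef *)out, &out_len,
                          (const Bytef *)(packet + pos), remaining) == Z_OK)
        printf("decompressed %lu bytes (starts with the packet ID VarInt)\n",
               (unsigned long)out_len);
    free(out);
}
In AS3 terms, the likely fix is to strip the two VarInts first and call uncompress() only on the remaining bytes, and only when the Data Length field is non-zero; calling it on the whole ByteArray, VarInts included, will fail even when the packet really is compressed.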

tcpdump time precision: how to?

I want to get tcpdump output at a lower time resolution (milliseconds) instead of the default microseconds.
In the tcpdump manual I found the -j argument, with acceptable precisions such as 'host_lowprec' and 'host_hiprec':
tcpdump -i any -n -j host_lowprec "tcp"
I have 2 questions:
What precision does host_lowprec correspond to, and what precision does host_hiprec correspond to?
Can I set the precision to milliseconds or nanoseconds? If yes, how?
In answer to your first question:
The tcpdump man page says of the -j option:
-j tstamp_type
--time-stamp-type=tstamp_type
Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(7); not all the types listed there will necessarily be valid for any given interface.
and the pcap-tstamp(7) man page says:
... The time stamp types are listed here; the first value is the #define to use in code, the second value is the value returned by pcap_tstamp_type_val_to_name() and accepted by pcap_tstamp_type_name_to_val().
PCAP_TSTAMP_HOST - host
Time stamp provided by the host on which the capture is being done. The precision of this time stamp is unspecified; it might or might not be synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_LOWPREC - host_lowprec
Time stamp provided by the host on which the capture is being done. This is a low-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_HIPREC - host_hiprec
Time stamp provided by the host on which the capture is being done. This is a high-precision time stamp; it might or might not be synchronized with the host operating system's clock. It might be more expensive to fetch than PCAP_TSTAMP_HOST_LOWPREC.
PCAP_TSTAMP_ADAPTER - adapter
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_ADAPTER_UNSYNCED - adapter_unsynced
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp; it is not synchronized with the host operating system's clock.
Neither host_lowprec nor host_hiprec specify an exact precision. The precision set with -j does NOT affect the way time stamps are stored in a capture file; they will be stored as seconds and microseconds, unless you have a newer version of tcpdump that supports the --time-stamp-precision option and the OS can deliver nanosecond time stamps, in which case they will be stored as seconds and nanoseconds and the file will have a different "magic number" so that tcpdump/Wireshark/etc. can read the time stamps properly.
All the -j option controls is how much of the microseconds (or nanoseconds) value is significant.
In answer to your second question:
There is no mechanism for storing times in pcap files as seconds and milliseconds, and there's no explicit option to request that the microseconds (or nanoseconds) value have only 3 significant figures.
There is an option to request that the time stamps be stored as seconds and nanoseconds. If you are doing a live capture, this will work only if the operating system supports delivering seconds and nanoseconds time stamps when capturing; this currently only works on newer versions of Linux.
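For example, assuming a tcpdump new enough to have that option and an OS that can deliver nanosecond time stamps, such a capture would be requested with something like (the file name is illustrative):
sudo tcpdump -i any -n --time-stamp-precision=nano -w capture.pcap "tcp"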
What is it that you are trying to accomplish here?