How to set tcpdump time precision? - tcpdump

I want tcpdump to capture with a lower time resolution (milliseconds) instead of the default microseconds.
In the tcpdump manual I found the -j argument, which accepts time stamp types such as 'host_lowprec' and 'host_hiprec':
tcpdump -i any -n -j host_lowprec "tcp"
I have 2 questions:
What precision does host_lowprec give?
and
What precision does host_hiprec give?
Can I set the precision to milliseconds or nanoseconds? If yes, how?

In answer to your first question:
The tcpdump man page says of the -j option:
-j tstamp_type
--time-stamp-type=tstamp_type
Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(7); not all the types listed there will necessarily be valid for any given interface.
and the pcap-tstamp(7) man page says:
... The time stamp types are listed here; the first value is the #define to use in code, the second value is the value returned by pcap_tstamp_type_val_to_name() and accepted by pcap_tstamp_type_name_to_val().
PCAP_TSTAMP_HOST - host
Time stamp provided by the host on which the capture is being done. The precision of this time stamp is unspecified; it might or might not be synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_LOWPREC - host_lowprec
Time stamp provided by the host on which the capture is being done. This is a low-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_HIPREC - host_hiprec
Time stamp provided by the host on which the capture is being done. This is a high-precision time stamp; it might or might not be synchronized with the host operating system's clock. It might be more expensive to fetch than PCAP_TSTAMP_HOST_LOWPREC.
PCAP_TSTAMP_ADAPTER - adapter
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_ADAPTER_UNSYNCED - adapter_unsynced
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp; it is not synchronized with the host operating system's clock.
Neither host_lowprec nor host_hiprec specifies an exact precision.
The precision set with -j does NOT affect the way time stamps are stored in a capture file; they will be stored as seconds and microseconds, unless you have a newer version of tcpdump that supports the --time-stamp-precision option and the OS can deliver nanosecond time stamps, in which case they will be stored as seconds and nanoseconds and the file will have a different "magic number" so that tcpdump/Wireshark/etc. can read the time stamps properly.
All the -j option controls is how much of the microseconds (or nanoseconds) value is significant.
In answer to your second question:
There is no mechanism for storing times in pcap files as seconds and milliseconds, and there's no explicit option to request that the microseconds (or nanoseconds) value have only 3 significant figures.
There is an option to request that the time stamps be stored as seconds and nanoseconds. If you are doing a live capture, this will work only if the operating system supports delivering seconds and nanoseconds time stamps when capturing; this currently only works on newer versions of Linux.
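With tcpdump itself that is the --time-stamp-precision=nano option (in versions that support it). If you are using libpcap directly, the same knobs are exposed as API calls; below is a minimal sketch, assuming libpcap 1.5.0 or later (which added pcap_set_tstamp_precision()):
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Create a capture handle; "any" is the Linux pseudo-interface used in the question. */
    pcap_t *p = pcap_create("any", errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }

    /* Request nanosecond-resolution time stamps for the capture. */
    if (pcap_set_tstamp_precision(p, PCAP_TSTAMP_PRECISION_NANO) != 0)
        fprintf(stderr, "nanosecond time stamps not supported here\n");

    /* Optionally pick one of the pcap-tstamp(7) types, e.g. host_hiprec. */
    if (pcap_set_tstamp_type(p, PCAP_TSTAMP_HOST_HIPREC) != 0)
        fprintf(stderr, "host_hiprec not supported on this interface\n");

    if (pcap_activate(p) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        return 1;
    }

    /* ... capture with pcap_loop() or pcap_next_ex() here ... */

    pcap_close(p);
    return 0;
}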
What is it that you are trying to accomplish here?

Related

Kafka Consumer - How to set fetch.max.bytes higher than the default 50mb?

I want my consumers to process large batches, so I aim to have the consumer listener wake up on, say, 1800 MB of data or every 5 minutes, whichever comes first.
Mine is a Kafka + Spring Boot application, the topic has 28 partitions, and this is the configuration I explicitly change:
Parameter                  | Value I set  | Default value | Why I set it this way
fetch.max.bytes            | 1801mb       | 50mb          | fetch.min.bytes + 1mb
fetch.min.bytes            | 1800mb       | 1b            | desired batch size
fetch.max.wait.ms          | 5min         | 500ms         | desired cadence
max.partition.fetch.bytes  | 1801mb       | 1mb           | unbalanced partitions
request.timeout.ms         | 5min + 1sec  | 30sec         | fetch.max.wait.ms + 1sec
max.poll.records           | 10000        | 500           | 1500 found too low
max.poll.interval.ms       | 5min + 1sec  | 5min          | fetch.max.wait.ms + 1sec
Nevertheless, when I produce ~2 GB of data to the topic, I see the consumer listener (a Batch Listener) being called many times per second -- way more than the desired rate.
I logged the serialized size of the ConsumerRecords<?,?> argument and found that it is never more than 55 MB.
This hints that I was not able to set fetch.max.bytes above the default 50 MB.
Any idea how I can troubleshoot this?
Edit:
I found this question: Kafka MSK - a configuration of high fetch.max.wait.ms and fetch.min.bytes is behaving unexpectedly
Is it really impossible as stated?
Finally found the cause.
There is a broker-side fetch.max.bytes setting, and it defaults to 55 MB. I had only changed the consumer settings, unaware of the broker-side limit.
See also the Kafka KIP and the actual commit.

JMeter: Capturing Throughput in Command Line Interface Mode

In JMeter v2.13, is there a way to capture Throughput via non-GUI/command-line mode?
I have the jmeter.properties file configured to output via the Summariser and I'm also outputting another [more detailed] .csv results file.
call ..\..\binaries\apache-jmeter-2.13\bin\jmeter -n -t "API Performance.jmx" -l "performanceDetailedResults.csv"
The performanceDetailedResults.csv file provides:
timeStamp
elapsed time
responseCode
responseMessage
threadName
success
failureMessage
bytes sent
grpThreads
allThreads
Latency
However, no amount of tweaking the .properties file or the test itself seems to provide Throughput results like I get via the GUI Summary Report's Save Table Data button.
All articles, postings, and blogs seem to indicate it isn't possible without manual manipulation in a spreadsheet. But I'm hoping someone out there has figured out a way to do this with no, or minimal, manual manipulation, as the client doesn't want to have to calculate the Throughput value by hand each time.
Throughput is calculated by JMeter Listeners, so it isn't something you can enable via properties files. The same applies to other calculated metrics, such as:
Average response time
50, 90, 95, and 99 percentiles
Standard Deviation
Basically, throughput is calculated by simply dividing the total number of requests by the elapsed time.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time)
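To make that arithmetic concrete, here is a tiny C illustration with made-up numbers (they are not taken from any real JTL file):
#include <stdio.h>

int main(void)
{
    /* Hypothetical values you could pull from a JMeter results CSV:
       start of the first sample and end of the last sample (ms since epoch),
       plus the total number of samples. */
    long long first_sample_start_ms = 1700000000000LL;
    long long last_sample_end_ms    = 1700000060000LL;   /* 60 seconds later */
    long long total_samples         = 3000;

    double elapsed_s  = (last_sample_end_ms - first_sample_start_ms) / 1000.0;
    double throughput = total_samples / elapsed_s;        /* requests per second */

    printf("Throughput: %.2f requests/s\n", throughput);  /* prints 50.00 here */
    return 0;
}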
Hopefully it won't be too hard for you.
References:
Glossary #1
Glossary #2
Did you take a look at JMeter-Plugins?
This tool can generate an aggregate report through the command line.

Time difference between two packets using Radiotap header MAC timestamp

I am trying to parse the MAC timestamp field of the radiotap header of 802.11 packets captured in monitor mode.
The TSFT field of the radiotap header is a 64-bit value in microseconds. Wireshark displays the MAC timestamp value in decimal; for one frame that decimal value corresponds to the raw hex value 2b1c20cb00000000.
What I'm trying to do is get the time difference between two frames using the hex value in the radiotap header's MAC timestamp field.
For example:
frame #2 has a decimal value of 3106049021945315329 (2b1ae72100000001) and
frame #3 has 3106066889009266689 (2b1af76100000001).
Subtracting these values gives 1AC47FFFFF5C1, and assuming this is in microseconds, that is equal to 470900214.330817 seconds.
What are the steps to get the time difference of 0.000071 seconds using the values in the MAC timestamp field of the radiotap header?
Thank you
The "MAC timestamp" field in the radiotap header is the value in microseconds of the MAC's 64-bit 802.11 Time Synchronization Function timer when the first bit of the MPDU arrived at the MAC.
This is taken directly from the MAC via the device driver for the particular WiFi card you have, and may or may not be accurate or correct, depending on the driver implementation.
The "Time" column displays the elapsed time since the first frame was received. This is calculated by libpcap using the system clock on the host and is the time the frame was first seen by libpcap.
Both of these time values are computed using different clocks, so they cannot be directly compared. If the MAC timestamp field is correct and accurate (which yours appears not to be - maybe a driver issue), then it should be used as the reference time, and the libpcap time should only be used as a rough guide.
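For what it's worth, once the TSFT field is decoded correctly the subtraction itself is plain 64-bit arithmetic. Here is a minimal C sketch, assuming you have already located the 8-byte TSFT field in each frame; radiotap fields are little-endian, and the byte values below are made up purely to illustrate a 71-microsecond gap:
#include <stdint.h>
#include <stdio.h>

/* Decode a radiotap TSFT field: 8 bytes, little-endian, microseconds. */
static uint64_t tsft_to_host(const uint8_t *field)
{
    uint64_t v = 0;
    for (int i = 7; i >= 0; i--)
        v = (v << 8) | field[i];
    return v;
}

int main(void)
{
    /* Hypothetical raw TSFT bytes from two captured frames (not the values in the question). */
    const uint8_t frame2_tsft[8] = { 0x10, 0x27, 0, 0, 0, 0, 0, 0 }; /* 10000 us */
    const uint8_t frame3_tsft[8] = { 0x57, 0x27, 0, 0, 0, 0, 0, 0 }; /* 10071 us */

    uint64_t t2 = tsft_to_host(frame2_tsft);
    uint64_t t3 = tsft_to_host(frame3_tsft);

    /* Difference in microseconds; the 64-bit TSF counter wraps only after centuries. */
    uint64_t delta_us = t3 - t2;

    printf("delta = %llu us (%.6f s)\n",
           (unsigned long long)delta_us, delta_us / 1e6);  /* 71 us -> 0.000071 s */
    return 0;
}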

How to calculate the number of bytes going through the network with tcpdump?

I have a tcpdump command like this:
sudo tcpdump tcp -n -i eth0 -w test.dmp
I want to calculate the number of TCP bytes going through eth0. I capture all the packets using tcpdump as above. Is the file size equal to the number of bytes, or does tcpdump add additional information to the dump file?
Yes, tcpdump adds additional information to the file.
It (currently) writes only in pcap format, which means there's a 24-byte header at the beginning of the file, giving information such as the link-layer header type for packets in the file, so the first thing you'd need to do would be to subtract 24 from the size of the file.
In addition, each packet has a 16-byte header giving an arrival time stamp for the packet, the length of the packet, and the number of bytes of packet data that was captured. This means that you would need to subtract 16*{number of packets} from the file size - but the only way to get the number of packets is to read the file, so you can't get the number of bytes just by looking at the file size!
Note also that some versions of tcpdump did not default to a "snapshot length" of 0, so the number of bytes of packet data that is captured may be less than the number of packet bytes on the network.
Therefore, what you should do is write a program (use libpcap, as it already knows pcap format and you don't have to write your own code to understand it) that reads all the packets and adds up the "length of the packet" field (it's the len field in the struct pcap_pkthdr structure; do not use caplen, as that's the number of bytes of packet data that was captured) values for all the packets.
You say eth0, so the link-layer header type is probably Ethernet, and there is, for example, no radio meta-data, as might be the case if you were capturing in monitor mode on a Wi-Fi adapter. In the cases where there's extra meta-data in the link-layer header, you'd need to subtract that.
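Here is a minimal sketch of such a program. It uses standard libpcap calls (pcap_open_offline(), pcap_next_ex()), but treat it as an illustration of the approach rather than production code:
#include <pcap/pcap.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <capture file>\n", argv[0]);
        return 1;
    }

    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    unsigned long long total_bytes = 0, packets = 0;
    int ret;

    /* Walk the file and add up the on-the-wire length of every packet.
       Use hdr->len (original length), not hdr->caplen (captured length). */
    while ((ret = pcap_next_ex(p, &hdr, &data)) == 1) {
        total_bytes += hdr->len;
        packets++;
    }
    if (ret == PCAP_ERROR)
        fprintf(stderr, "error reading capture: %s\n", pcap_geterr(p));

    printf("%llu packets, %llu bytes on the wire\n", packets, total_bytes);

    pcap_close(p);
    return 0;
}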

getrusage vs. clock_gettime()

I am trying to obtain the CPU time consumed by a process on Ubuntu. As far as I know, there are two functions that can do this job: getrusage() and clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp). In my code, calling getrusage() immediately after clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp) always gives different results.
Can anyone please help me understand which function gives higher resolution, and what advantages/disadvantages these functions have?
Thanks.
getrusage(...)
Splits CPU time into system and user components in ru_utime and ru_stime respectively.
Roughly microsecond resolution: struct timeval has the field tv_usec, but this resolution is usually limited to about 4ms/250Hz (source)
Available on SVr4, 4.3BSD, POSIX.1-2001: this means it is available on both Linux and OS X
See the man page
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...)
Combined total of system and user time with no way to separate it into system/user time components.
Nanosecond resolution: struct timespec is a clone of struct timeval but with tv_nsec instead of tv_usec. Exact resolution depends on how the timer is implemented on given system, and can be queried with clock_getres.
Requires you to link to librt
Clock may not be available. In this case, clock_gettime will return -1 and set errno to EINVAL, so it's a good idea to provide a getrusage fallback. (source)
Available on SUSv2 and POSIX.1-2001: this means it is available on Linux, but not OS X.
See the man page
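For comparison, here is a small sketch that calls both back to back on Linux (link with -lrt on older glibc) and prints what each reports:
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>
#include <sys/time.h>

int main(void)
{
    /* Burn a little CPU so there is something to measure. */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++)
        x += i * 0.5;

    struct rusage ru;
    struct timespec ts;

    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return 1;
    }
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) != 0) {
        perror("clock_gettime");   /* e.g. EINVAL if the clock is unavailable */
        return 1;
    }

    /* getrusage: user and system time separately, microsecond fields. */
    printf("getrusage:     user %ld.%06ld s, sys %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);

    /* clock_gettime: combined user+system CPU time, nanosecond fields. */
    printf("clock_gettime: cpu  %ld.%09ld s\n",
           (long)ts.tv_sec, (long)ts.tv_nsec);

    return 0;
}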