Time difference between two packets using Radiotap header MAC timestamp - libpcap

I am trying to parse the MAC timestamp field of the radiotap headers of 802.11 packets captured in monitor mode.
The TSFT field of the radiotap header is a 64-bit value in microseconds. Wireshark shows both the raw hex value of the field and the MAC timestamp as a decimal number; for one of my frames the displayed decimal value is the decimal equivalent of 2b1c20cb00000000.
What I'm trying to do is compute the time difference between two frames using the hex values in the radiotap header MAC timestamp field.
For example:
frame #2 has a decimal value of 3106049021945315329 (2b1ae72100000001) and
frame #3 has 3106066889009266689 (2b1af76100000001).
Subtracting these values gives 1AC47FFFFF5C1, and assuming this is in microseconds it equals 470900214.330817 seconds.
What is the process for getting the time difference of 0.000071 seconds from the values in the MAC timestamp field of the radiotap header?
Thank you

The "MAC timestamp" field in the radiotap header is the value in microseconds of the MAC's 64-bit 802.11 Time Synchronization Function timer when the first bit of the MPDU arrived at the MAC.
This is taken directly from the MAC via the device driver for the particular WiFi card you have, and may or may not be accurate or correct, depending on the driver implementation.
The "Time" column displays the elapsed time since the first frame was received. This is calculated by libpcap using the system clock on the host and is the time the frame was first seen by libpcap.
Both of these time values are computed using different clocks, so cannot be directly compared. If the MAC timestamp field is correct and accurate (which yours appears not to be - maybe a driver issue) then it should be used as the reference time, and the libpcap time should only be used as a rough guide.
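If the driver does fill the TSFT field sensibly, the gap between two frames is just a subtraction in microseconds. Below is a minimal Python sketch, not a full radiotap parser: it assumes the TSFT bit (bit 0) is set, that there is only one it_present word (bit 31 clear), and therefore that the 8-byte little-endian TSFT value sits immediately after the fixed 8-byte radiotap header.

import struct

def radiotap_tsft_us(frame):
    # Fixed radiotap header: version (u8), pad (u8), length (u16 LE), present (u32 LE)
    _version, _pad, _it_len, it_present = struct.unpack_from("<BBHI", frame, 0)
    if not (it_present & 0x1):
        raise ValueError("TSFT not present in this frame")
    if it_present & 0x80000000:
        raise NotImplementedError("extended it_present words not handled in this sketch")
    # TSFT is the first data field, 8-byte aligned, stored little-endian, in microseconds
    (tsft,) = struct.unpack_from("<Q", frame, 8)
    return tsft

def tsft_delta_seconds(tsft1_us, tsft2_us):
    # allow for 64-bit wrap-around, then convert microseconds to seconds
    return ((tsft2_us - tsft1_us) % (1 << 64)) / 1e6

With two sane TSFT values, tsft_delta_seconds() gives the inter-frame gap directly; differences on the order of 10^18 microseconds, as in the question, are another hint that the driver is not filling the field correctly.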

Related

Chrome Dev tools .har file for _webSocketTraffic has a "time" field - what does it mean?

I am trying to understand the websocketTraffic data exported from my Chrome dev tools. An example looks like this:
{
'type': 'receive',
'time': 1640291138.212745,
'opcode': 1,
'data': '<r xmlns=\'urn:xmpp:sm:3\'/>',
}
I see a "time" field but I actually cant find anything about what it means except this from the spec (http://www.softwareishard.com/blog/har-12-spec/):
time [number] - Total elapsed time of the request in milliseconds. This is the sum of all timings available in the timings object (i.e. not including -1 values) .
Is this really milliseconds, down to the millionth of a millisecond? I am trying to see how much time has elapsed between two WS events, so any insight would be very helpful. Thanks
Disclaimer:
This answer is not backed by official docs. However, I studied this problem for quite some time now, and my solution seems to make sense.
Answer:
Move the dot 3 places to the right (i.e. 1640291138.212745 -> 1640291138212.745) and you will get the actual time. Try running this:
new Date(1640291138212.745).toISOString()
and see if it fits your startedDateTime in the parent WebSocket entry in your har.
Chrome probably saves the "time" field as seconds since the epoch, instead of milliseconds since the epoch. So "moving the dot 3 places to the right" actually means multiplying by 1000, which converts seconds to milliseconds.
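Here is a small Python sketch of that interpretation, treating the value as seconds since the Unix epoch; the second event's value below is made up purely to illustrate computing an elapsed time.

from datetime import datetime, timezone

t_recv = 1640291138.212745   # "time" value from the HAR entry
# compare this with startedDateTime on the parent WebSocket entry
print(datetime.fromtimestamp(t_recv, tz=timezone.utc).isoformat())

t_other = 1640291139.012345  # hypothetical "time" of a later WS event
elapsed_ms = (t_other - t_recv) * 1000
print(elapsed_ms)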

Why is MySQL's maximum time limit 838:59:59?

I've run into the limit myself, but despite lots of chatter online, I've never seen an explanation for why the upper and lower limits of the TIME data type are what they are. The official reference at http://dev.mysql.com/doc/refman/5.7/en/time.html says:
TIME values may range from '-838:59:59' to '838:59:59'. The hours part may be so large because the TIME type can be used not only to represent a time of day (which must be less than 24 hours), but also elapsed time or a time interval between two events (which may be much greater than 24 hours, or even negative).
But I'm wondering not why the hours part is allowed to be "so large", but why it's cut off where it is. There doesn't seem to be any significance to that many hours in regards to days, or if I try to imagine possible cutoffs for how many seconds could be stored as an integer. So why the range?
TIME values have always been stored in 3 bytes in MySQL, but the format changed in version 5.6.4. I suspect this was not the first time it changed, but the other change, if there was one, happened a long time ago and there is no public evidence of it. The MySQL source code history on GitHub starts with version 5.5 (the oldest commit is from May 2008), while the change I am looking for happened somewhere around 2001-2002 (MySQL 4 was launched in 2003).
The current format, as described in the documentation, uses 6 bits for seconds (possible values: 0 to 63), 6 bits for minutes, 10 bits for hours (possible values: 0 to 1023), 1 bit for the sign (which adds the negative counterparts of the ranges above), and 1 bit that is unused and labelled "reserved for future extensions".
It is optimized for working with the time components (hours, minutes, seconds) and doesn't waste much space. Using this format it's possible to store values between -1023:59:59 and +1023:59:59. However, MySQL limits the number of hours to 838, probably for backward compatibility with applications written back when, I believe, that was the limit.
Until version 5.6.4, TIME values were also stored in 3 bytes, with the components packed as days * 24 * 3600 + hours * 3600 + minutes * 60 + seconds. This format was optimized for working with timestamps (because it was, in fact, a timestamp). Using it, values in the range of about -2330 to +2330 hours could be stored, yet MySQL still limited the values to -838 to +838 hours.
There was bug #11655 on MySQL 4. It was possible to return TIME values outside the -838..+838 range using nested SELECT statements. It was not a feature but a bug and it was fixed.
The only reason to limit the values to this range and to actively change any piece of code that produces TIME values outside it was backward compatibility.
I suspect MySQL 3 used a different format that, due to the way the data was packed, limited the valid values to the range -838..+838 hours.
Looking into the current MySQL source code, I found this interesting formula:
#define TIME_MAX_VALUE (TIME_MAX_HOUR*10000 + TIME_MAX_MINUTE*100 + TIME_MAX_SECOND)
Let's ignore the MAX part of the names used above for the moment and remember only that TIME_MAX_MINUTE and TIME_MAX_SECOND are numbers between 00 and 59. The formula just concatenates the hours, minutes and seconds into a single integer. For example, the value 170:29:45 becomes 1702945.
This formula raises the following question: given that the TIME values are stored on 3 bytes with sign, what is the maximum positive value that can be represented this way?
The value we are looking for is 0x7FFFFF, which in decimal notation is 8388607. Since the last four digits (8607) should be read as minutes (86) and seconds (07), and their maximum valid value is 59, the greatest value that can be stored in 3 bytes with a sign using the formula above is 8385959, which, as a TIME, is +838:59:59. Ta-da!
Guess what? The fragment of C code listed above was extracted from this:
/* Limits for the TIME data type */
#define TIME_MAX_HOUR 838
#define TIME_MAX_MINUTE 59
#define TIME_MAX_SECOND 59
#define TIME_MAX_VALUE (TIME_MAX_HOUR*10000 + TIME_MAX_MINUTE*100 + TIME_MAX_SECOND)
I am sure this is how MySQL 3 kept TIME values internally. That format imposed the limitation on the range, and the backward-compatibility requirement of subsequent versions has carried the limitation through to the present day.
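A quick way to check that reasoning in Python (a sketch of the packing described above, not actual MySQL code):

TIME_MAX_HOUR, TIME_MAX_MINUTE, TIME_MAX_SECOND = 838, 59, 59

def pack_hhmmss(hours, minutes, seconds):
    # the presumed MySQL 3 style packing: decimal "concatenation" of H, M, S
    return hours * 10000 + minutes * 100 + seconds

print(pack_hhmmss(170, 29, 45))                                      # 1702945
print(pack_hhmmss(TIME_MAX_HOUR, TIME_MAX_MINUTE, TIME_MAX_SECOND))  # 8385959
print(0x7FFFFF)  # 8388607: the largest positive value in a signed 3-byte field
# 8388607 would decode to 86 minutes and 07 seconds, which is invalid,
# so 8385959 (838:59:59) is the largest valid packed value.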
DATETIME is stored using a base-10 encoding; see Date and Time Data Type Representation:
DATETIME: Eight bytes: A four-byte integer for date packed as YYYY×10000 + MM×100 + DD and a four-byte integer for time packed as HH×10000 + MM×100 + SS
For convenience and some other reasons, the (old) TIME format was encoded in the same way, using 3 bytes:
Hours * 10000 + Minutes * 100 + Seconds
This means:
3 bytes = 2^24 = 16,777,216
with sign: 2^23 = 8,388,608
Using this encoding, that maximum represents the magical 838 hours; the remaining digits (8608) exceed the valid 59:59 for the minutes and seconds, so the largest valid time is 838:59:59. One nice thing about this is that the integer representation of that time, 8385959, is easily readable to a human. But this encoding of course leaves gaps: invalid (unused) integer values such as 8309999.
As of MySQL 5.6.4, the TIME format changed its encoding to:
1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59)
6 bits second (0-59)
---------------------
24 bits = 3 bytes
Even though it could now store more hours, for compatibility it still just allows 838 hours.
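A Python sketch of that bit layout (assuming the fields are packed most-significant bit first, in the order listed above):

def pack_time(nonnegative, hours, minutes, seconds):
    # 1 bit sign | 1 bit unused | 10 bits hour | 6 bits minute | 6 bits second
    return (int(nonnegative) << 23) | (hours << 12) | (minutes << 6) | seconds

def unpack_time(value):
    return (bool(value >> 23 & 1),  # sign (1 = non-negative)
            value >> 12 & 0x3FF,    # hour: 10 bits could hold 0..1023
            value >> 6 & 0x3F,      # minute
            value & 0x3F)           # second

packed = pack_time(True, 838, 59, 59)
print(hex(packed), unpack_time(packed))  # fits comfortably in 3 bytes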
Obviously, it's hard to answer these types of questions without getting direct feedback from the designers of the database.
But there is some documentation regarding how the different data types are stored internally, and it can help us understand this a little.
For instance, regarding the TIME data type, notice how it's stored internally according to the documentation:
TIME encoding for nonfractional part:
1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59)
6 bits second (0-59)
---------------------
24 bits = 3 bytes
So, as you can see, the goal is to fit the information within 3 bytes. And, of those 3 bytes, 10 bits are reserved for the hours, which pretty much determines the overall range.
That said, 10 bits does allow values up to 1023, so technically, without any change to the storage size, the range could have been -1023:59:59 to 1023:59:59. Why they didn't do that and chose 838 as the cutoff instead, I have no idea.

tcpdump time precision how to?

I want tcpdump to capture at a lower time resolution (milliseconds) instead of the default microseconds.
In the tcpdump manual I found the -j argument, with accepted time stamp types such as 'host_lowprec' and 'host_hiprec':
tcpdump -i any -n -j host_lowprec "tcp"
I have 2 questions:
host_lowprec = ? precision
and
host_hiprec = ? precision
And can I set the precision to milliseconds or nanoseconds? If yes, how?
In answer to your first question:
The tcpdump man page says of the -j option:
-j tstamp_type
--time-stamp-type=tstamp_type
Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(7); not all the types listed there will necessarily be valid for any given interface.
and the pcap-tstamp(7) man page says:
... The time stamp types are listed here; the first value is the #define to use in code, the second value is the value returned by pcap_tstamp_type_val_to_name() and accepted by pcap_tstamp_type_name_to_val().
PCAP_TSTAMP_HOST - host
Time stamp provided by the host on which the capture is being done. The precision of this time stamp is unspecified; it might or might not be synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_LOWPREC - host_lowprec
Time stamp provided by the host on which the capture is being done. This is a low-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_HOST_HIPREC - host_hiprec
Time stamp provided by the host on which the capture is being done. This is a high-precision time stamp; it might or might not be synchronized with the host operating system's clock. It might be more expensive to fetch than PCAP_TSTAMP_HOST_LOWPREC.
PCAP_TSTAMP_ADAPTER - adapter
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp, synchronized with the host operating system's clock.
PCAP_TSTAMP_ADAPTER_UNSYNCED - adapter_unsynced
Time stamp provided by the network adapter on which the capture is being done. This is a high-precision time stamp; it is not synchronized with the host operating system's clock.
Neither host_lowprec nor host_hiprec specifies an exact precision. The precision set with -j does NOT affect the way time stamps are stored in a capture file; they will be stored as seconds and microseconds, unless you have a newer version of tcpdump that supports the --time-stamp-precision option and the OS can deliver nanosecond time stamps, in which case they will be stored as seconds and nanoseconds and the file will have a different "magic number" so that tcpdump/Wireshark/etc. can read the time stamps properly.
All the -j option controls is how much of the microseconds (or nanoseconds) value is significant.
In answer to your second question:
There is no mechanism for storing times in pcap files as seconds and milliseconds, and there's no explicit option to request that the microseconds (or nanoseconds) value have only 3 significant figures.
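If millisecond resolution is only needed for analysis, the extra digits can simply be thrown away after the fact. A minimal Python sketch, assuming you already have the (ts_sec, ts_usec) pair that a classic pcap record header carries:

def to_milliseconds(ts_sec, ts_usec):
    # collapse a pcap (seconds, microseconds) timestamp to whole milliseconds
    return ts_sec * 1000 + ts_usec // 1000

print(to_milliseconds(1456789012, 345678))  # 1456789012345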
There is an option to request that the time stamps be stored as seconds and nanoseconds. If you are doing a live capture, this will work only if the operating system supports delivering seconds and nanoseconds time stamps when capturing; this currently only works on newer versions of Linux.
What is it that you are trying to accomplish here?

How to cancel rounding big numbers at Zabbix graphs?

I have a graph that displays the count of files in a directory. I need the exact number of files in the graph but I cannot find that feature in the Zabbix configuration.
Any suggestions?
I've just found the answer.
We need to change ZBX_UNITS_ROUNDOFF_UPPER_LIMIT in /include/defines.inc.php. It controls the number of digits after the decimal point when the value is greater than the roundoff threshold. By default it is 2.

Zabbix trigger expression - detect a drop and stay in problem state

I have this trigger that fires upon a match of the rule below:
{monitoring:test.item.change(0)}<-100
When my graph goes down by over 100 units, an event gets created. The event should switch to OK status when the graph goes back up. The graph has different average values at different times of day, and besides, the item is a trapper value, which does not support flexible intervals. My problem is this: when the graph falls by over 100 units, let's say from 300 to 10, a PROBLEM situation is created. At the next interval, if the value is still low (e.g. 13), Zabbix creates an OK event, because although the value is still low, the expression does not return true, since the graph hasn't gone down by a further 100 units. Any ideas on how I could fix this? I have been trying to use
{{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100}
but Zabbix wouldn't take that expression. This is supposed to compare the last value of test.item to the average value of the past 30 minutes and raise an alert when the difference exceeds 100.
This, I believe, would sort out my problem situation of a false OK status when the graph remains at a low value.
EDIT: I think I have cracked it. Zabbix has accepted the below expression:
{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100
I think you'll soon realize that this expression won't achieve the behavior you're after and will keep flapping between PROBLEM and OK.
You have just shifted the 'did a -100 change occur' check from 'the last value and the previous value' to 'the last value and the average of the last half hour'.
Checking whether either there was an abrupt change OR the value is still too low will probably better match your expected scenario:
{monitoring:test.item.last(0)}>100 | {monitoring:test.item.max(#2)}<20
max(#2)<20 checks if the maximum of the last 2 values is bellow 20.
EDIT: After reading your comment, maybe this approach (after some tweaking for your expected values) will serve you better:
({monitoring:test.item.avg(1800)}<10 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>20) | ({monitoring:test.item.avg(1800)}>100 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100)
This way, the trigger will better fit the different volumes at different times of the day.