Chrome DevTools .har file for _webSocketTraffic has a "time" field - what does it mean?

I am trying to understand the webSocketTraffic data exported from Chrome DevTools. An example looks like this:
{
  "type": "receive",
  "time": 1640291138.212745,
  "opcode": 1,
  "data": "<r xmlns='urn:xmpp:sm:3'/>"
}
I see a "time" field but I can't actually find anything about what it means, except this from the spec (http://www.softwareishard.com/blog/har-12-spec/):
time [number] - Total elapsed time of the request in milliseconds. This is the sum of all timings available in the timings object (i.e. not including -1 values).
Is this really milliseconds, down to the millionth of a millisecond? I am trying to see how much time has elapsed between two WS events, so any insight would be very helpful. Thanks

Disclaimer:
This answer is not backed by official docs. However, I have studied this problem for quite some time, and my solution seems to make sense.
Answer:
Move the dot 3 places to the right (i.e., 1640291138.212745 -> 1640291138212.745) and you will get the actual time. Try running this:
new Date(1640291138212.745).toISOString()
and see if it matches the startedDateTime of the parent WebSocket entry in your HAR.
Chrome probably saves the "time" field as seconds since the epoch instead of milliseconds since the epoch, so "moving the dot 3 places to the right" really means multiplying by 1000, i.e., converting seconds to milliseconds.
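On that assumption, converting a "time" value and diffing two events is straightforward. A minimal sketch (the two event objects below are made up for illustration):

const first = { type: 'send', time: 1640291138.112745 };
const second = { type: 'receive', time: 1640291138.212745 };

// Multiply by 1000 to convert seconds -> milliseconds since the epoch.
console.log(new Date(second.time * 1000).toISOString());
// -> 2021-12-23T20:25:38.212Z

// Elapsed time between the two WS events:
const elapsedMs = (second.time - first.time) * 1000;
console.log(elapsedMs); // ~100 (milliseconds)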

Related

converting strange timestamp from JSON to normal date time

I'm getting some very odd-looking timestamps from Netflix's video API; they look like this:
527637852762
I assume it's a timestamp, as in the JSON it looks like this:
"time": 527780548207
How can I convert this? It should equate to some day around September 2017.
So far I've tried dividing by 100 and 1000, with no luck.
Thanks
Seconds since 1.1.1970?
It is a common Unix timestamp.
Referring to the Netflix API documentation, they say:
For times since the epoch both seconds and milliseconds are supported because both are in common use and it helps to avoid confusion when copy and pasting from another source.
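Given that both units are in play, a common heuristic (a sketch only, not something from the Netflix docs) is to treat implausibly large values as milliseconds:

function epochToDate(t) {
  // Anything above ~1e11 cannot plausibly be seconds since 1970
  // (that would be past the year 5000), so treat it as milliseconds.
  return new Date(t > 1e11 ? t : t * 1000);
}
console.log(epochToDate(1640291138).toISOString());    // seconds input
console.log(epochToDate(1640291138212).toISOString()); // milliseconds input

Note that this heuristic would not help here, because the Netflix value turns out not to be a Unix timestamp at all (see below).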
Milliseconds Timestamp to Date
I figured it out: it's a 2001 timestamp, i.e., milliseconds since 2001-01-01 (Apple's Core Data reference date). So take that figure / 1000 to get seconds since 2001, and that converts correctly :)
epochconverter.com/coredata
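In code, the conversion is just an epoch shift. A sketch (coreDataMsToDate is a made-up helper name):

// Apple's Core Data epoch: 2001-01-01T00:00:00Z.
const CORE_DATA_EPOCH_MS = Date.UTC(2001, 0, 1); // 978307200000

function coreDataMsToDate(ms) {
  return new Date(CORE_DATA_EPOCH_MS + ms);
}

console.log(coreDataMsToDate(527780548207).toISOString());
// -> 2017-09-22T13:42:28.207Z, i.e., a day in September 2017 as expected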

What governs playback speed when encoding with Android's MediaCodec + mp4parser?

I'm trying to record, encode and finally create a short movie on Android (using API 16) with a combination of MediaCodec and Mp4Parser (to encapsulate into .mp4).
Everything is working just fine, except for the duration of the .mp4: it's always 3 seconds long and runs at about twice the 'right' speed.
The input to the encoder is 84 frames (taken 100 ms apart).
The last frame sets the 'end of stream' flag.
I set the presentation time for each frame on queueInputBuffer.
I've tried tweaking every conceivable parameter, but nothing seems to make a difference: the film is always 3 seconds long and always plays way too fast.
So what governs the playback speed? How do I generate a film with 'natural' speed?
I figured it out: when encapsulating with mp4parser (needed if you target API < 18), you need to set the speed through mp4parser's API. The presentation time you provide to queueInputBuffer apparently makes no difference if you're not using Android's built-in muxer (available only from API 18).
I stumbled on this question on GitHub, which indicates the following is required:
// timescale = 100 ticks per second, frametick = 10 ticks per frame -> one frame every 100 ms
new H264TrackImpl(new FileDataSourceImpl(rawDataFile), "eng", 100, 10);
The last two parameters (timeScale and frameTick) set the playback speed to 'normal': each frame lasts frameTick/timeScale = 10/100 = 0.1 s, matching the 100 ms capture interval.
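The arithmetic behind those two parameters, sketched in JavaScript purely for illustration:

const timeScale = 100; // ticks per second
const frameTick = 10;  // ticks per frame
const msPerFrame = (frameTick / timeScale) * 1000;  // 100 ms per frame, matching the capture rate
const totalSeconds = 84 * (frameTick / timeScale);  // 84 frames -> 8.4 s of video
console.log(msPerFrame, totalSeconds); // 100 8.4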

Error: clock scan "-100000 minutes" -base 1

I just stumbled upon this legacy code, which uses the deprecated free-form clock scan:
clock scan "-100000 minutes" -base 1
which leads to an error. However,
clock scan "-99999 minutes" -base 1
seems to work. I would be interested in the reason for this limit, or is this a bug?
It's a misfeature really, and one that isn't going to be fixed.
The issue is that a six-or-more-digit number can be interpreted as a plain number, a timestamp, a time, or a date. The parser (something horrible, hacked from the output of yacc) gets confused, and when we hit confusion, we spit out an error. We could theoretically have fixed it, but this was hardly the worst problem in the parser. (That free-text parser is definitely stupid.)
When we worked out just how badly broken it all was, we created the defined-format parser and clock add as replacements. They're less magical, and much less wrong. (For example, clock add 1 -100000 minutes handles this case without complaint.)

understanding getByteTimeDomainData and getByteFrequencyData in web audio

The documentation for both of these methods is very generic wherever I look. I would like to know what exactly I'm looking at in the arrays returned by each method.
For getByteTimeDomainData, what time period is covered with each pass? I believe most oscilloscopes cover a 32-millisecond span per pass. Is that what is covered here as well? For the element values themselves, the range seems to be 0-255. Is this equivalent to -1 to +1 volts?
For getByteFrequencyData, the frequencies covered are based on the sampling rate, so each index is an actual frequency, but what about the element values themselves? Is there a dB range that the values in the returned array correspond to?
getByteTimeDomainData (and the newer getFloatTimeDomainData) return an array of the size you requested - it's frequencyBinCount, which is calculated as half of the requested fftSize. That array is, of course, at the current sampleRate exposed on the AudioContext, so if it's the default 2048 fftSize, frequencyBinCount will be 1024, and if your device is running at 44.1kHz, that will equate to around 23ms of data.
The byte values do range between 0-255, and yes, that maps to -1 to +1, so 128 is zero. (It's not volts, but full-range unitless values.)
If you use getFloatFrequencyData, the values returned are in dB; if you use the Byte version, the values are mapped based on minDecibels/maxDecibels (see the minDecibels/maxDecibels description).
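For context, here is a minimal sketch of how these calls are usually wired up (audioCtx, poll, and the array names are illustrative):

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048; // frequencyBinCount is then 1024

const timeData = new Uint8Array(analyser.fftSize);           // time-domain samples
const freqData = new Uint8Array(analyser.frequencyBinCount); // frequency bins

function poll() {
  analyser.getByteTimeDomainData(timeData); // bytes 0-255; (b - 128) / 128 recovers the -1..+1 sample
  analyser.getByteFrequencyData(freqData);  // bytes 0-255, scaled between minDecibels and maxDecibels
  requestAnimationFrame(poll);
}
poll();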
Mozilla's documentation describes the difference between getFloatTimeDomainData and getFloatFrequencyData, which I summarize below. The Mozilla docs reference the Web Audio experiment Voice-change-o-matic, which illustrates the conceptual difference to me (it only works in my Firefox browser; it does not work in my Chrome browser).
TimeDomain/getFloatTimeDomainData
TimeDomain functions are over some span of time.
We often visualize TimeDomain data using oscilloscopes.
In other words:
we visualize TimeDomain data with a line chart,
where the x-axis (aka the "original domain") is time
and the y axis is a measure of a signal (aka the "amplitude").
Change the voice-change-o-matic "visualizer setting" to Sinewave to see getFloatTimeDomainData(...)
Frequency/getFloatFrequencyData
Frequency functions (getByteFrequencyData) are at a point in time, i.e., right now: "the current frequency data"
We sometimes see these in mp3 players / "winamp bar-graph style" music players (aka "equalizer" visualizations).
In other words:
we visualize Frequency data with a bar graph
where the x-axis (aka "domain") are frequencies or frequency bands
and the y-axis is the strength of each frequency band
Change the voice-change-o-matic "visualizer setting" to Frequency bars to see getFloatFrequencyData(...)
Fourier Transform (aka Fast Fourier Transform/FFT)
Another way to think about "time domain vs frequency" is shown in the diagram below, from the Fast Fourier Transform Wikipedia article:
getFloatTimeDomainData gives you the chart on the top (x-axis is Time)
getFloatFrequencyData gives you the chart on the bottom (x-axis is Frequency)
A Fast Fourier Transform (FFT) converts the time-domain data into frequency data; in other words, the FFT converts the first chart into the second chart.
cwilso has it backwards: the time data array is the longer one (fftSize), and the frequency data array is the shorter one (half that, frequencyBinCount).
An fftSize of 2048 at the usual sample rate of 44.1 kHz means each sample lasts 1/44100 seconds; you have 2048 samples at hand and are thus covering a duration of 2048/44100 seconds, which is about 46 milliseconds, not 23 milliseconds. The frequencyBinCount is indeed 1024, but that refers to the frequency domain (as the name suggests), not the time domain, and the computation 1024/44100 is, in this context, about as meaningful as adding your birth date to the fftSize.
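A quick check of that arithmetic:

const fftSize = 2048;
const sampleRate = 44100; // Hz
const windowMs = (fftSize / sampleRate) * 1000;
console.log(windowMs.toFixed(1)); // "46.4" - about 46 ms, not 23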
A little math illustrating what's happening: the Fourier transform is a 'vector space isomorphism', that is, a bijective (i.e., reversible) mapping between two vector spaces of the same dimension: the 'time domain' and the 'frequency domain'. The vector space dimension we have here (in both cases) is fftSize.
So where does the 'half' come from? The frequency-domain coefficients 'count double': either because they 'actually are' complex numbers, or because you have the 'sin' and the 'cos' flavor, or because you have a 'magnitude' and a 'phase', which you'll understand if you know how complex numbers work. (Those are three ways of saying the same thing in different jargon, so to speak.)
I don't know why the API only gives us half of the relevant numbers when it comes to frequency - I can only guess. And my guess is that those are the 'magnitude' numbers, and the 'phase' numbers are thrown out. The reason that this is my guess is that in applications, magnitude is far more important than phase. Still, I'm quite surprised that the API throws out information, and I'd be glad if some expert who actually knows (and isn't guessing) can confirm that it's indeed the magnitude. Or - even better (I love to learn) - correct me.
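To make the magnitude/phase picture concrete, here is a sketch of how one complex FFT coefficient would collapse into the single number exposed per bin (binToDb is a made-up helper, not part of the Web Audio API; this illustrates the guess rather than confirming it):

function binToDb(re, im) {
  const magnitude = Math.hypot(re, im); // |re + i*im|
  return 20 * Math.log10(magnitude);    // express the magnitude in dB
}
console.log(binToDb(0.5, 0.5).toFixed(2)); // "-3.01"; the phase, atan2(im, re), is what gets discarded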

Zabbix trigger expression - detect a drop and stay in problem state

I have this trigger that fires upon a match of the rule below:
{monitoring:test.item.change(0)}<-100
When my graph goes down by over 100 units, an event gets created. The event should switch to OK status when the graph goes back up. The graph has different average values at different times of day, and besides, the item is a trapper value, which does not support flexible intervals. My problem is this: when the graph falls by over 100 units, say from 300 to 10, a PROBLEM situation is created. At the next interval, if the value is still low (e.g. 13), Zabbix creates an OK event, because although the value is still low, the expression does not return true: the graph hasn't gone down by a further 100 units. Any ideas on how I could fix this? I have been trying to use
{{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100}
but Zabbix wouldn't take that expression (the extra outer pair of curly braces is not valid trigger syntax). It is supposed to compare the last value of test.item to the average value of the past 30 minutes and raise an alert when the difference exceeds 100.
This, I believe, would sort out my problem situation of a false OK status when the graph remains at a low value.
EDIT: I think I have cracked it. Zabbix has accepted the expression below (the same comparison without the outer braces):
{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100
I think you'll soon realize that expression won't solve your targeted behavior and will keep on flapping between PROBLEM and OK.
You have just shifted the 'did a -100 change occur' check from 'the last value vs. the previous value' to 'the last value vs. the average of the last half hour'.
Checking whether either an abrupt change occurred OR the value is still too low will probably mimic your expected scenario better:
{monitoring:test.item.change(0)}<-100 | {monitoring:test.item.max(#2)}<20
max(#2)<20 checks whether the maximum of the last 2 values is below 20.
EDIT: After reading your comment, maybe this approach (after some tweaking for your expected values) will serve you better:
({monitoring:test.item.avg(1800)}<10 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>20) | ({monitoring:test.item.avg(1800)}>100 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100)
This way, you'll better fit your trigger to the different volumes during the day.