What could be the reason that the total time of a request is higher than the Connection Setup time + Request/Response time?
357.32ms > (19.04 + 0.56 + 8.56 + 23.46 + 0.41)ms = 52.03ms
Am I missing something?
I had a similar problem. In my case I noticed that the "gap" always appeared right after the "DNS lookup" phase. Somehow Chrome couldn't measure the time taken to translate localhost into 127.0.0.1.
I solved the problem by replacing localhost with 127.0.0.1 in the URL bar. This removed 80%~90% of the overall load time.
I want my consumers to process large batches, so I aim to have the consumer listener "wake up", say, on 1800mb of data or every 5min, whichever comes first.
Mine is a Kafka Spring Boot application, the topic has 28 partitions, and this is the configuration I explicitly change (see the properties sketch after the table):
| Parameter | Value I set | Default Value | Why I set it this way |
| --- | --- | --- | --- |
| fetch.max.bytes | 1801mb | 50mb | fetch.min.bytes+1mb |
| fetch.min.bytes | 1800mb | 1b | desired batch size |
| fetch.max.wait.ms | 5min | 500ms | desired cadence |
| max.partition.fetch.bytes | 1801mb | 1mb | unbalanced partitions |
| request.timeout.ms | 5min+1sec | 30sec | fetch.max.wait.ms + 1sec |
| max.poll.records | 10000 | 500 | 1500 found too low |
| max.poll.interval.ms | 5min+1sec | 5min | fetch.max.wait.ms + 1sec |
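For concreteness, here is roughly how those values land in a Spring Boot application.properties (a sketch, assuming the generic spring.kafka.consumer.properties.* pass-through for raw Kafka client settings; the byte values are the mb figures above converted to bytes):

# raw Kafka consumer properties passed through Spring Boot
spring.kafka.consumer.properties.fetch.max.bytes=1888485376
spring.kafka.consumer.properties.fetch.min.bytes=1887436800
spring.kafka.consumer.properties.fetch.max.wait.ms=300000
spring.kafka.consumer.properties.max.partition.fetch.bytes=1888485376
spring.kafka.consumer.properties.request.timeout.ms=301000
spring.kafka.consumer.properties.max.poll.records=10000
spring.kafka.consumer.properties.max.poll.interval.ms=301000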
Nevertheless, I produce ~2gb of data to the topic, and I see the consumer listener (a Batch Listener) being called many times per second, way more often than the desired cadence.
I logged the serialized size of the ConsumerRecords<?,?> argument and found that it is never more than 55mb.
This hints that I was not able to raise fetch.max.bytes above the default 50mb.
Any idea how I can troubleshoot this?
Edit:
I found this question: Kafka MSK - a configuration of high fetch.max.wait.ms and fetch.min.bytes is behaving unexpectedly
Is it really impossible as stated?
Finally found the cause.
There is a broker fetch.max.bytes setting, and it defaults to 55mb. I had only changed the consumer configuration, unaware of the broker-side limit.
See also the Kafka KIP and the actual commit.
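Concretely, the broker-side fix is to raise that ceiling, e.g. in server.properties (a sketch; 1888485376 is 1801mb in bytes, matching the consumer-side value, and the broker default is 57671680 bytes = 55mb):

# broker-side cap on the data returned per fetch response;
# raise it to match the consumer's fetch.max.bytes
fetch.max.bytes=1888485376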
In the official documentation, it is set to None. What does None mean?
I believe that the default forward-request timeout of None does not mean "no timeout".
As specified in the same official documentation, forward-request is a root element and is required in the operation-level policy.
Any value greater than 240 seconds won't be reliable, and the request can still time out anywhere within the 0 to 240 second range. If it's in your control, the fix for a timeout issue can be to change the implementation a bit, based on the number of requests forwarded by the APIM gateway, the backend instances, or the proxy response time.
The documentation shows a default of 300 seconds, but the maximum is 240 seconds. I got a timeout after 30 seconds, so I believe it is a mistake in the docs; the default should be 30 seconds.
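Either way, it seems safer to set the timeout explicitly than to rely on the default. A minimal operation-level policy sketch (the timeout attribute is in seconds; given the 240-second ceiling discussed above, larger values won't be honored reliably):

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <!-- explicit 120-second timeout instead of relying on the default -->
        <forward-request timeout="120" />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>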
I am trying to understand the websocketTraffic data exported from my Chrome dev tools. An example looks like this:
{
  'type': 'receive',
  'time': 1640291138.212745,
  'opcode': 1,
  'data': '<r xmlns=\'urn:xmpp:sm:3\'/>',
}
I see a "time" field but I actually can't find anything about what it means, except this from the spec (http://www.softwareishard.com/blog/har-12-spec/):
time [number] - Total elapsed time of the request in milliseconds. This is the sum of all timings available in the timings object (i.e. not including -1 values).
Is this really milliseconds, down to the millionth of a millisecond? I am trying to see how much time has elapsed between two WS events, so any insight would be very helpful. Thanks.
Disclaimer:
This answer is not backed by official docs. However, I have studied this problem for quite some time, and my solution seems to make sense.
Answer:
Move the dot 3 places to the right (i.e. 1640291138.212745 -> 1640291138212.745) and you will get the actual time in milliseconds since the epoch. Try running this:
new Date(1640291138212.745).toISOString()
and see if it fits the startedDateTime of the parent WebSocket entry in your HAR.
Chrome probably saves the "time" field as seconds since the epoch instead of milliseconds since the epoch. So "moving the dot 3 places to the right" actually means multiplying by 1000, which converts seconds to milliseconds.
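So to measure the time elapsed between two WS events, subtract the raw "time" values and scale once (a sketch with made-up entries; the variable names are hypothetical):

// two entries from the exported websocketTraffic array
const first = { time: 1640291138.212745 };
const second = { time: 1640291139.512745 };
// "time" appears to be seconds since epoch, so scale by 1000 to get ms
const elapsedMs = (second.time - first.time) * 1000; // ~1300 ms here
console.log(elapsedMs);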
I would love to be able to wait for a random amount of time (let's say a number between 5-12 seconds, chosen at random each time) before executing my next action in Puppeteer, in order to make the behaviour seem more authentic/real world user-like.
I'm aware of how to do it in plain Javascript (as detailed in the Mozilla docs here), but can't seem to get it working in Puppeteer using the waitFor call (which I assume is what I'm supposed to use?).
Any help would be greatly appreciated! :)
You can use vanilla JS to randomly wait between 5 and 12 seconds between actions.
await page.waitFor((Math.floor(Math.random() * 8) + 5) * 1000)
Where:
5 is the start number (the minimum wait in seconds)
8 is the size of the range (12 - 5 + 1), so the result is an integer between 5 and 12 inclusive
1000 converts seconds to milliseconds
(PS: However, if your question is about waiting 5-12 seconds randomly before every action, then you should wrap your actions in a class or helper, as sketched below, which is a different issue until you update your question.)
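A minimal sketch of such a wrapper (randomDelay and humanClick are hypothetical helpers, not part of Puppeteer's API):

// resolves after a random 5-12 second pause
const randomDelay = () =>
  new Promise(resolve => setTimeout(resolve, (Math.floor(Math.random() * 8) + 5) * 1000));

// waits like a human would, then delegates to page.click
async function humanClick(page, selector) {
  await randomDelay();
  await page.click(selector);
}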
I have this trigger that fires upon a match of the rule below:
{monitoring:test.item.change(0)}<-100
When my graph goes down by over 100 units, an event gets created. The event should switch to OK status when the graph goes back up. The graph has different average values at different times of day, and besides, the item is a trapper value, which does not support flexible intervals.
My problem is this: when the graph falls by over 100 units, let's say from 300 to 10, a PROBLEM situation is created. At the next interval, if the value is still low (e.g. 13), Zabbix creates an OK event, because although the value is still low, the expression does not return true, since the graph hasn't gone down by a further 100 units. Any ideas on how I could fix this? I have been trying to use
{{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100}
but Zabbix wouldn't accept that expression. This is supposed to compare the last value of test.item to the average value of the past 30 minutes and raise an alert when the difference exceeds 100.
This, I believe, would sort out my problem situation of a false OK status when the graph remains at a low value.
EDIT: I think I have cracked it. Zabbix has accepted the below expression:
{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100
I think you'll soon realize that this expression won't produce your targeted behavior and will keep flapping between PROBLEM and OK.
You have just shifted the 'did a -100 change occur' check from 'the last and the previous value' to 'the last value and the average of the last half hour'.
Checking whether either there was an abrupt change OR the value is still too low will probably better mimic your expected scenario:
{monitoring:test.item.change(0)}<-100 | {monitoring:test.item.max(#2)}<20
max(#2)<20 checks whether the maximum of the last 2 values is below 20.
EDIT: After reading your comment, maybe this approach (after some tweaking for your expected values) will serve you better:
({monitoring:test.item.avg(1800)}<10 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>20) | ({monitoring:test.item.avg(1800)}>100 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100)
This way, you'll better fit your trigger for the different volumes during the day.