Chrome DevTools network throttling seems slower than setting

When using Chrome DevTools, the network throttling functionality seems to simulate a slower connection than the kb/s down setting defines.
For example, when simulating with the 50kb/s GPRS preset and downloading a 256kb file, Chrome shows the content download taking a total of 42.89 sec. Yet 256 / 50 would come to 5.12 seconds. Am I missing something here?
Thanks for reading,
-cybo

Internet connection speeds are measured in kilobits per second, not kilobytes per second. That explains the 8x difference between the value you got and the value you expected.
Here's another example: downloading the 181-kilobyte Stack Overflow sprites file.
50kb/s is 6.25KB/s. We'd expect the download to take 181KB / (6.25KB/s) = 28.96s, which closely matches the actual value of 28.83s.
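Applying the same correction to the 256kb example from the question (a quick sketch of the arithmetic; the numbers come from the post, the variable names are just for illustration):

    // Worked arithmetic for the 256 KB download throttled to the 50 kb/s GPRS preset.
    const fileKB = 256;         // file size in kilobytes
    const throttleKbps = 50;    // DevTools preset in kilobits per second

    const naiveSeconds = fileKB / throttleKbps;          // 5.12 s -- treats kb/s as KB/s
    const correctSeconds = (fileKB * 8) / throttleKbps;  // 40.96 s -- convert kilobytes to kilobits first

    console.log(naiveSeconds, correctSeconds); // 5.12 40.96 (observed: ~42.89 s, the rest is latency)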

Related

Three.js freezes Chrome completely, huge texture in GLTF model

I want to load a ludicrously large binary glTF object with only a few polygons (~250) and a huge texture of 10,000 x 5,500 pixels. The file is "only" 20MB in size.
When I load it using Three.js, Chrome hangs in its entirety for nearly 15 seconds. When I look in the profiler, pretty much nothing is going on during the freeze.
If you want to load the file yourself, you can download it at https://phychi.com/uni/threejs/models/freezing-monster.glb, and the whole scene can be visited at https://phychi.com/uni/threejs/ (until I've found a solution or given up).
The behavior stays the same whether I call GLTFLoader.load(), GLTFLoader.loadAsync(), or create my own Promise and call .then(addToScene), without any awaits.
Does somebody have a magical solution? Or if not, how could I profile it more efficiently, seeing the internal calls? Or should I just open a bug report for Chrome/Three.js?
PS: Windows 10 Personal, Ryzen 5 2600, 32 GB RAM, RX 580 8GB.
The issue should be resolved by upgrading the library to r135 (the current release).
The releases r133 and r134 contain a change that introduced a performance regression on Windows when using sRGB-encoded textures.
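For reference, a minimal sketch of the loading pattern described in the question, plus a revision check for the fix above (the import paths and the scene variable are assumptions, not the asker's actual code):

    // Check which Three.js release is actually being used.
    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    console.log(THREE.REVISION); // should print 135 or later after the upgrade

    // The async loading pattern mentioned in the question.
    const loader = new GLTFLoader();

    async function addModel(scene) {
      const gltf = await loader.loadAsync('models/freezing-monster.glb');
      scene.add(gltf.scene);
    }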

IMX6ULL 4G download stalling issue

We have been using IMX6ULL processors along with a Quectel 4G module in our custom-made boards. The 4G module can be initialised and brought up, and the ppp0 interface can also be initialised, which in turn provides us with internet connectivity. But when we start downloading files (of about 10 MB - 200 MB), we have observed that the download begins to stall at irregular intervals. While the download stalls, the ppp0 interface is still up but we lose internet connectivity, so we have to kill pppd and re-initialise ppp0.
We have tried different variations of ppp0 initialisation scripts that we could get our hands on, but the issue persisted. However, recently, when we wanted to dump the traffic on the ppp0 interface using tcpdump in order to analyse it, we observed that the download no longer stalls, and we also saw much better 4G throughput. We have still not been able to figure out why this is the case. Any inputs or guidance would be of great help.
P.S.: The kernel version we have been using is 4.1.15, but we have observed similar behaviour with the 5.4.70 kernel too.
Thanks in advance
Regards
Nitin
Check the 4G network first with AT+COPS? and AT+CSQ. Does the module disconnect from the base station?
Do not kill pppd and set up ppp0 again right away; try AT+CFUN=0 followed by AT+CFUN=1 to restart the network registration first.
Also, for its 4G modules Quectel provides a tool named quectel-CM to set up the internet connection; it performs better than PPP.
By the way, have you checked the memory usage and the CPU status?
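As a hedged illustration (not part of the original answer), those AT checks can be scripted over the module's AT port; the device path, the baud rate, and the availability of Node.js with a recent serialport package are all assumptions here:

    // Send the suggested AT diagnostics to the Quectel module's AT port.
    // /dev/ttyUSB2 and 115200 baud are guesses -- adjust for your board.
    const { SerialPort } = require('serialport');

    const port = new SerialPort({ path: '/dev/ttyUSB2', baudRate: 115200 });
    port.on('data', (chunk) => process.stdout.write(chunk.toString()));

    const commands = ['AT+COPS?', 'AT+CSQ', 'AT+CFUN=0', 'AT+CFUN=1'];
    commands.forEach((cmd, i) => {
      // Space the commands out so each response can be read before the next is sent.
      setTimeout(() => port.write(cmd + '\r\n'), i * 2000);
    });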

Chrome memory measurement now almost flat for longer test runs

In order to check our web application for memory leaks, I run a machine which does the following:
it runs automated end-to-end tests over (almost) the entire application in Chrome
after each block of tests, it goes to a state of the web application where almost nothing happens
it triggers gc() for garbage collection
it saves totalJSHeapSize and usedJSHeapSize to a file
it plots out the results for each test run to a graph
That way, we can see how much the memory increases and which parts of our application are problematic: at some points the memory increases, at others it decreases.
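For context, a minimal sketch of that measurement step as it might look inside the page under test (the function name is made up; gc() only exists when Chrome is started with --js-flags="--expose-gc", and performance.memory is a Chrome-specific, coarse-grained API):

    // Force a garbage collection pass, then record the heap numbers.
    function sampleHeap() {
      gc(); // only defined with --expose-gc
      const { totalJSHeapSize, usedJSHeapSize } = performance.memory;
      return { totalJSHeapSize, usedJSHeapSize }; // the test runner appends this to a file
    }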
Till yesterday, it looked like this:
Bright red (upper line): totalJSHeapSize, light red (lower line): usedJSHeapSize
Yesterday, I updated Chrome to version 69. And now the chart looks quite different:
The start and end amounts of memory used (usedJSHeapSize) are almost the same. But as you can clearly see, the way it changes over the course of the test (ca. 1.5 h) is quite different.
My questions are now:
Is this a change in reality or in measurement? I.e., did Chrome change its memory handling, or just the way it reports memory values via totalJSHeapSize and usedJSHeapSize?
Concerning memory leaks, is it good news or bad news for me? Before, I had dozens of spots where memory increased; now I have just three. Is this true? Or are the memory leaks in the now-flat areas still there, just hidden?
I'm also thankful for any background information on how Chrome changed its memory measurement.
Some additional info:
The VM runs under Kubuntu 18.04
It's a single-page application built with AngularJS 1.6
The outcome of the memory measurement is quite stable - both before and after the update of Chrome
EDIT:
It seems this was a bug in Chrome 69. At least, after updating to Chrome 70, this strange behavior is gone and everything looks almost as it did before.
I don't think you should worry about it. This can happen because of the memory manager used inside Chrome. You didn't mention the Chrome version behind your first memory graph, so it's possible that the memory manager differs between the two versions. Chrome has used TCMalloc, which takes a large chunk of memory from the OS and manages it itself; once TCMalloc runs short of memory, it asks the OS for another big chunk and starts managing that. So the later graph you are seeing has fewer ups and downs (but bigger ones than before) because of that. Hope it answers your query.
As you mentioned that
The outcome of the memory measurement is quite stable - both before and after the update of Chrome
you don't really need to worry about it; the way Chrome allocated memory previously and the way it does with the new version are simply different (possibly a different memory manager), that's it.

Chrome DevTools showing high cache storage utilisation

I was analyzing the service workers and cache storage we have implemented on our website.
Going through the process, I found out that the amount of cache storage used by the website is huge.
The cumulative size of the files that I am adding to cache storage is not more than 5-6 MB. But in Chrome DevTools, it shows approximately 131 MB of storage used.
Chrome 63 on OS X.
In incognito mode it shows usage as high as 100 MB, which causes a QuotaExceeded error.
Even after clearing browsing data from Chrome settings and reloading the web page (bandwidth speed: 1 MBps), just 4-5 seconds later the storage use is shown as 130 MB, which is practically not possible because:
1) as mentioned above, the actual data I add to the cache is 5-6 MB;
2) even if it were somehow getting 130 MB (I don't know how), downloading 130 MB is just not possible given my bandwidth limitations.
What might be the issue here?
Why does it show such high cache storage usage?
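For reference, the figure reported by DevTools can be cross-checked from the page itself; a minimal sketch using the standard StorageManager API (not from the original post):

    // Log how much storage the origin is using versus its quota.
    // In Chrome this covers Cache Storage, IndexedDB, service worker registrations, etc.
    async function logStorageUsage() {
      if (navigator.storage && navigator.storage.estimate) {
        const { usage, quota } = await navigator.storage.estimate();
        console.log(`Using ${(usage / 1048576).toFixed(1)} MB of ${(quota / 1048576).toFixed(1)} MB`);
      }
    }
    logStorageUsage();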
This question is a duplicate of Chrome shows high cache storage use, and until it gets closed I'm going to leave an answer here for visibility. Feel free to delete it after closing.
See also limitations of opaque responses.
TL;DR
Each opaque response (the result of a request made to a remote origin when CORS is not enabled), even for a 100-byte GIF, takes on average 7 MB of cache.
Solutions include adding crossorigin="anonymous" to script and img tags, and removing { mode: 'no-cors' } from fetch requests.
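A hedged sketch of what those two changes can look like in practice (the CDN URL and cache name are placeholders, not taken from the site in question):

    // Inside the service worker: cache cross-origin assets with CORS so the
    // stored responses are not opaque (and therefore not padded).
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('static-v1').then((cache) =>
          cache.add(new Request('https://cdn.example.com/logo.gif', { mode: 'cors' }))
          // previously: { mode: 'no-cors' }, which stores an opaque response
        )
      );
    });

    // In the page, the equivalent of crossorigin="anonymous" on a dynamically created tag:
    const img = document.createElement('img');
    img.crossOrigin = 'anonymous';
    img.src = 'https://cdn.example.com/logo.gif';
    document.body.appendChild(img);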

Swap space used while physical memory is free

I recently migrated between two servers (the new one has lower specs), and it freezes all the time even though there is no load on the server. Below are my specs:
HP DL120G5 / Intel Quad-Core Xeon X3210 / 8GB RAM
free -m output:
             total       used       free     shared    buffers     cached
Mem:          7863       7603        260          0        176       5736
-/+ buffers/cache:        1690       6173
Swap:         4094        412       3681
As you can see, 412 MB of swap is used while almost 80% of the physical RAM is available.
I don't know if this should cause any trouble, but almost no swap was used on my old server, so I'm thinking this does not seem right.
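For reference, the "-/+ buffers/cache" line in the output above is derived from the other figures; a quick sketch of that arithmetic (values in MiB, off by one only because of rounding):

    // How free(1) derives the "-/+ buffers/cache" line from the numbers above.
    const total = 7863, used = 7603, free = 260, buffers = 176, cached = 5736;

    const usedByApps = used - buffers - cached;       // 1691 ~= the reported 1690
    const availableToApps = free + buffers + cached;  // 6172 ~= the reported 6173

    console.log(usedByApps, availableToApps);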
I have a cPanel licence, so I contacted their support and they noted that I have high iowait. And yes, when I ran sar I noticed it sometimes exceeds 60%; most often it's around 20%, but sometimes it reaches 60% or even 70%.
I don't really know how to diagnose that. I suspected my drive was slow and that this might be causing the latency, so I ran a test using dd and the speed was 250 MB/s, so I think the transfer speed is OK; plus, the hardware is supposed to be brand new.
The high load usually happens when I use gzip or tar to extract files (backing up or restoring a cPanel account).
One important thing to mention is that top reports MySQL using 100% to 125% of the CPU, and sometimes much more. If I trace the mysql process, I keep getting this error continually:
setsockopt(376, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
I don't know what that means, nor did I get useful information googling it.
I forgot to mention that it's a web hosting server, for what it's worth, so it has the standard setup for web hosting (Apache, PHP, MySQL, etc.).
So how do I properly diagnose this issue and find the solution, or what might be the possible causes?
As you may have realized by now, the free -m output shows 7603 MiB (~7.6 GiB) USED, not free.
You're out of memory and the system has started swapping, which will drastically slow things down. Since most applications are unaware that their virtual memory is now coming from much slower disk, the system may very well appear to "hang" with no feedback describing the problem.
From your description, the first process I'd kill in order to regain control would be MySQL. If you have ssh/rsh/telnet connectivity to this box from another machine, you may have to log in from there in order to get a usable command line to kill it from.
My first thought (hypothesis?) for what's happening is...
MySQL is trying to do something that is not supported as this machine is currently configured. It could be a missing library, an environment variable that is not set, or any number of things.
That operation allocates some memory but fails and does not clean up the allocation when it does. If this were a shell script, it could be fixed by putting a trap command at the beginning that runs a function to release the memory and clean up.
The code is written to keep retrying on failure, so it rapidly uses up all your memory. Referring back to the shell-script illustration, the trap function might also prompt to see if you really want to keep retrying.
Not a complete answer, but hopefully it will help.