Chrome dev tools showing high cache storage utilisation

I was analyzing the service workers and cache storage we have implemented on our website.
Going through the process, I found out that the amount of cache storage used by the website is huge.
The cumulative size of the files that I am adding to cache storage is no more than 5-6 MB. But Chrome DevTools shows approximately 131 MB of storage used.
Chrome 63 on OS X.
In incognito mode it shows usage as high as 100 MB, which causes a QuotaExceededError.
Even after clearing browsing data from Chrome settings and reloading the webpage (bandwidth - 1 MBps), the storage use is shown as 130 MB after just 4-5 seconds, which is practically not possible because:
1) As mentioned above, my actual data size added to cache is 5-6 MB.
2) Even if it was somehow getting 130 MB (I don't know how), downloading 130 MB is just not possible given my bandwidth limitations.
What might be the issue here?
Why does it show such high cache storage usage?
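A quick way to check what the browser itself reports, independent of the DevTools UI, is the Storage API. This is a generic sketch to run from the page's console, not code from the site in question:

```js
// Ask the browser for its own storage accounting (supported in Chrome 61+).
navigator.storage.estimate().then(({ usage, quota }) => {
  console.log(`Used:  ${(usage / 1024 / 1024).toFixed(1)} MB`);
  console.log(`Quota: ${(quota / 1024 / 1024).toFixed(1)} MB`);
});

// List each cache and how many entries it holds, for comparison.
caches.keys().then(async (names) => {
  for (const name of names) {
    const cache = await caches.open(name);
    console.log(`${name}: ${(await cache.keys()).length} entries`);
  }
});
```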

This question is a duplicate of Chrome shows high cache storage use, and until it gets closed, I'm going to leave an answer here for visibility. Feel free to delete after closing.
See also limitations of opaque responses.
TL;DR
Each opaque response (the result of a request made to a remote origin when CORS is not enabled), even a 100-byte GIF, is charged around 7 MB of cache quota on average.
Solutions include adding crossorigin="anonymous" to script and img tags, and removing { mode: 'no-cors' } from fetch() calls.
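A minimal sketch of the service-worker side of the fix, assuming the asset lives on a remote origin that sends Access-Control-Allow-Origin (the URL and cache name below are placeholders):

```js
self.addEventListener('install', (event) => {
  event.waitUntil((async () => {
    const cache = await caches.open('assets-v1');
    const url = 'https://cdn.example.com/logo.gif'; // hypothetical asset
    // Before: fetch(url, { mode: 'no-cors' }) yields an opaque response,
    // which Chrome pads heavily in its quota accounting.
    // After: the default 'cors' mode yields a response charged at its real size.
    const response = await fetch(url);
    if (response.ok) await cache.put(url, response);
  })());
});
```

On the page itself, the equivalent is the crossorigin="anonymous" attribute mentioned above.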

Related

Three.js freezes Chrome completely, huge texture in GLTF model

I want to load a ludicrous binary glTF object with only a few polygons (~250) and a huge texture of 10,000 x 5,500 pixels. The file is "only" 20 MB in size.
When I load it using Three.js, Chrome hangs in its entirety for nearly 15 seconds. When looking in the profiler, pretty much nothing is going on during the freezing time.
If you want to load the file yourself, you can download it at https://phychi.com/uni/threejs/models/freezing-monster.glb, and the whole scene can be visited at https://phychi.com/uni/threejs/ (until I've found a solution or given up).
The behavior stays the same whether I call GLTFLoader.load(), GLTFLoader.loadAsync(), or create my own Promise and call .then(addToScene), without any awaits.
Does somebody have a magical solution? Or if not, how could I profile it more efficiently, seeing the internal calls? Or should I just open a bug report for Chrome/Three.js?
PS: Windows 10 Personal, Ryzen 5 2600, 32 GB RAM, RX 580 8GB.
The issue should be resolved by upgrading the library to r135 (the current release).
Releases r133 and r134 contained a change that introduced a performance regression on Windows when using sRGB-encoded textures.
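For completeness, a minimal load path to retest after upgrading. The model URL is from the question; the import paths assume the standard three npm package at version 0.135.0 (r135):

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

console.log(THREE.REVISION); // should print "135" after the upgrade

const scene = new THREE.Scene();
new GLTFLoader()
  .loadAsync('https://phychi.com/uni/threejs/models/freezing-monster.glb')
  .then((gltf) => scene.add(gltf.scene));
```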

View the contents of the memory footprint of a Chrome tab

I've got a memory problem in my application. Symptoms are weird: the JS heap stays stable at around 30 MB, while the memory footprint of the browser tab grows over time out of bounds and eventually crashes the entire macOS (I'm looking at the memory footprint numbers provided by Chrome's Task Manager). It takes several hours and I can reproduce it consistently. As the heap review doesn't give me any clues at all, I'd like to examine the memory allocations for the browser tab where my application is running. Does Chrome provide any tools for that?

Chrome dev-tools network throttling seems slower than setting

When using Chrome DevTools, the network throttling functionality seems to simulate a slower connection than the kb/s down setting defines.
For example, when simulating with the 50 kb/s GPRS preset and downloading a 256 kb file, Chrome shows the file taking a total of 42.89 s for the content download. Yet 256 / 50 would come to 5.12 seconds. Am I missing something here?
Thanks for reading,
-cybo
Internet connection speeds are conventionally measured in kilobits per second, not kilobytes per second. That explains the roughly 8x difference between the value you got and what you expected.
Here's another example: downloading the 181-kilobyte Stack Overflow sprites file.
50 kb/s is 6.25 KB/s. We'd expect the download to take 181 KB / (6.25 KB/s) = 28.96 s, which closely matches the actual value of 28.83 s.
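The same arithmetic as a sketch, using the figures from both examples:

```js
// Expected download time: file size in kilobytes divided by the
// throttling preset converted from kilobits/s to kilobytes/s.
const expectedSeconds = (fileKB, presetKbps) => fileKB / (presetKbps / 8);

console.log(expectedSeconds(256, 50)); // 40.96 -- close to the observed 42.89 s
console.log(expectedSeconds(181, 50)); // 28.96 -- close to the observed 28.83 s
```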

Huge difference between memory usage displayed in Chrome's task manager and memory timeline

I am trying to profile the memory usage of my application using Chrome's memory timeline.
However, I am seeing a huge difference between the values reported in the timeline and the values displayed by Chrome's task manager. When I profiled my web page on Chrome yesterday, the memory usage in the timeline was between 11 MB and 14 MB, while Chrome's task manager was hovering at around 135 MB. Why is this difference? If the memory usage displayed in the timeline is correct, what is the memory being used by the task manager?

Swap space used while physical memory is free

I recently migrated between two servers (the newer one has lower specs), and it freezes all the time even though there is no load on the server. Below are my specs:
HP DL120G5 / Intel Quad-Core Xeon X3210 / 8GB RAM
free -m output:
             total       used       free     shared    buffers     cached
Mem:          7863       7603        260          0        176       5736
-/+ buffers/cache:       1690       6173
Swap:         4094        412       3681
As you can see, there is 412 MB used in swap while almost 80% of the physical RAM is available.
I don't know if this should cause any trouble, but almost no swap was used on my old server, so I'm thinking this does not seem right.
I have a cPanel license, so I contacted their support, and they noted that I have high iowait. And yes, when I ran sar I noticed it is most often around 20%, but sometimes it reaches 60% or even 70%.
I don't really know how to diagnose that. I suspected my drive was slow and might be causing the latency, so I ran a test using dd and the speed was 250 MB/s, so I think the transfer speed is OK; plus the hardware is supposed to be brand new.
The high load usually happens when I use gzip or tar to extract files (backing up or restoring a cPanel account).
One important thing to mention is that top reports MySQL using 100% to 125% of the CPU, and sometimes much more. If I trace the MySQL process, I keep getting this error continually:
setsockopt(376, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
I don't know what that means, nor did I find useful information googling it.
I forgot to mention, for what it's worth, that it's a web hosting server, so it has the standard setup for web hosting (Apache, PHP, MySQL, etc.).
So how do I properly diagnose this issue and find the solution, or what might be the possible causes?
As you may have realized by now, the free -m output shows 7603 MiB (~7.4 GiB) USED, not free.
You're out of memory and it has started swapping, which will drastically slow things down. Since most applications are unaware that their virtual memory is now coming from much slower disk, the system may very well appear to "hang" with no feedback describing the problem.
From your description, the first process I'd kill in order to regain control would be MySQL. If you have ssh/rsh/telnet connectivity to this box from another machine, you may have to log in from there in order to get a usable command line to kill from.
My first thought (hypothesis?) for what's happening is...
MySQL is trying to do something that is not supported as this machine is currently configured. It could be a missing library, an unset environment variable, or any number of things.
That operation allocates some memory but is failing and not cleaning up the allocation when it does. If this were a shell script, it could be fixed by putting an event trap command at the beginning that runs a function that releases memory and cleans up.
The code is written to keep retrying on failure, so it rapidly uses up all your memory. Referring back to the shell script illustration, the trap function might also prompt to see if you really want to keep retrying.
Not a complete answer, but hopefully it will help.