Can I lower a Chrome tab's memory capacity?

Websites often take up too much memory and can make the browser slow. I'd prefer that the tab not crash (a laggy UI is preferable to a crash). I could turn off JavaScript if the site became unusable under the memory cap.

This kind of functionality is not available, nor is it ever likely to be. A per-tab memory limit is an extremely niche feature, and it wouldn't have the return on investment to justify implementing and rolling it out. The team is far better off working on the memory issues themselves and reducing usage internally.
It is difficult to simply "cap memory" for web apps. When memory is available it will get used, and when it isn't, something is going to get dumped out of memory. Part of the memory issue lies with the Chrome team; however, some of it is because web developers fail to program responsibly and waste user resources.
There simply isn't much that can be done here in user space. It is up to the Chrome team to optimize internally, and up to web developers to be responsible and thoughtful about what they produce.

Related

How to know when to use Chrome Dev Tools: Performance vs Memory tab

Let's say I have a web app which is slow and I want to identify possible bottlenecks. First I would go into the Network tab and see if the server is the problem; if the network calls are okay, should I then proceed to the Performance and Memory tabs?
What are the use cases of the Performance tab, and what are the use cases of the Memory tab?
What are the use cases of the Performance tab, and what are the use cases of the Memory tab?
The Performance panel gives you a complete view of the performance of a page during a recording. This includes network requests, JS execution, parsing, rendering, painting, etc.
The Memory panel gives you detailed views into how a page is using memory. People mostly use it to debug memory issues. When a page gets progressively slower as you use it, that's sometimes a memory leak. When a page is consistently slow, that's sometimes a page that is using too much memory.
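For a concrete picture of what the Memory panel is hunting for, here is a minimal sketch of a classic leak pattern (all names are made up for illustration): a module-level array accumulates closures that are never released, so the heap grows with every interaction and the page gets progressively slower.

    // Sketch: every call stores another closure in a module-level array that
    // is never cleared, so each dialog's state stays reachable forever and
    // the JS heap grows on every interaction.
    const handlers = [];

    function onOpenDialog() {
      const bigBuffer = new Array(1e6).fill(0); // large per-dialog state
      handlers.push(() => bigBuffer.length);    // closure keeps bigBuffer alive
    }

    document.addEventListener("click", onOpenDialog);

Comparing two heap snapshots taken before and after a few clicks would show the retained arrays piling up.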
Let's say I have a web app which is slow and I want to identify possible bottlenecks. First I would go into the Network tab...
Actually, I recommend starting with the Performance panel. It can show you network activity, as well as a bunch of other page activity. Go to the Network panel after you've identified that the problem is a network problem.
See Get Started With Runtime Performance to get familiar with the Performance panel.
Record the page load.
Once you've got a recording, there's a bunch of different sections on the Performance panel that can help you spot various bottlenecks:
The Network section can help you spot network bottlenecks.
The Memory section can help you see memory usage.
The Main section shows you JS, parsing, rendering, and painting activity.
See Performance Analysis Reference for lots more on the Performance panel.
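One more trick that is easy to miss: marks and measures added with the User Timing API show up in the Performance panel's recording (in the Timings track in Chrome), so you can line your app's own milestones up against the network, rendering, and JS activity. A minimal sketch; the mark names and endpoint are arbitrary:

    // Sketch: User Timing marks and measures appear in the Performance
    // panel's Timings track, alongside the recorded page activity.
    performance.mark("data-fetch-start");
    fetch("/api/data")                        // hypothetical endpoint
      .then((r) => r.json())
      .then(() => {
        performance.mark("data-fetch-end");
        performance.measure("data-fetch", "data-fetch-start", "data-fetch-end");
      });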

What contributes to 'Other' section in Native memory snapshot in Chrome developer tools?

As I load more data into my web app, the 'Other' section keeps growing. What does this mean? I have checked my code for memory leaks and couldn't find any. Also, the objects I hold in my application are all very much needed.
How are you checking your app for leaks?
Have you seen Tool to track down JavaScript memory leak?
The native memory snapshot is an experimental feature and can't be used for detecting memory leaks at the moment.
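If you just want a rough signal while the native snapshot is off the table, one option is to sample the JS heap size while repeating the suspect workflow. Note that performance.memory is a non-standard, Chrome-only API, so treat this as a quick sanity check rather than proof; a sketch:

    // Sketch (Chrome-only, non-standard API): log the JS heap size every few
    // seconds while exercising the app. If usedJSHeapSize climbs steadily and
    // never drops back after garbage collection, something is being retained.
    setInterval(() => {
      if (performance.memory) {
        console.log("used JS heap (MB):",
          (performance.memory.usedJSHeapSize / 1048576).toFixed(1));
      }
    }, 5000);

Heap snapshots in the Memory panel (take one, exercise the app, take another, and diff) remain the way to find out what is actually being retained.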

Is it possible to control LCD components in software?

Is it possible, say, using a programming language like C or C++, to write a program that directly interacts with the power inverter or controller in a modern LCD monitor?
I'm told that it used to be possible to forcefully overclock the oscillator in a CRT to make it catch on fire. I'm curious as to whether the same principle can be applied to a modern monitor.
Being able to inflict real damage on a modern external monitor is highly unlikely.
Connections like VGA, DVI and HDMI don't provide sufficiently direct access to the screen's hardware.
The hardware design of a consumer product can be considered flawed if it allows a killer poke, i.e. destruction of a hardware component by issuing software instructions.
In modern PC hardware, laptops have a tightly integrated display. It may be possible to write a program that has harmful effects on a laptop's backlight, e.g. by flicking it on and off rapidly via the ACPI interface (a sketch of that software path follows the quoted warning below).
From http://ibm-acpi.sourceforge.net/README:
Whatever you do, do NOT ever call thinkpad-acpi backlight-level change
interface and the ACPI-based backlight level change interface
(available on newer BIOSes, and driven by the Linux ACPI video driver)
at the same time. The two will interact in bad ways, do funny things,
and maybe reduce the life of the backlight lamps by needlessly kicking
its level up and down at every change.
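For concreteness, here is a minimal Node.js sketch of the software path being described: on Linux, the backlight is exposed through the sysfs interface (ACPI-backed on most laptops). The device directory name is an assumption and varies by driver, and writing these files requires root. A tight loop toggling the value is exactly the needless "kicking up and down" the quoted README warns about, so this sketch changes the level once and restores it.

    // Sketch: adjusting a laptop backlight via the Linux sysfs interface.
    // The device name (intel_backlight) is hypothetical; it varies by driver.
    const fs = require("fs");

    const dev = "/sys/class/backlight/intel_backlight";
    const max = parseInt(fs.readFileSync(`${dev}/max_brightness`, "utf8"), 10);
    const old = parseInt(fs.readFileSync(`${dev}/brightness`, "utf8"), 10);

    fs.writeFileSync(`${dev}/brightness`, String(Math.floor(max / 2))); // dim
    fs.writeFileSync(`${dev}/brightness`, String(old));                 // restore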
Since the inputs are digital, or at least go through analog-to-digital converters, it is unlikely. That might have worked with really old VGA monitors without any digital logic. VGA in general does not even carry a clock signal, just hsync and vsync, which set the timing for the returning electron beam and directly controlled it. Most modern CRT monitors had automatic detection of improper inputs, so no, it is not possible to kill an LCD this way.
http://www.epanorama.net/documents/pc/vga_timing.html

How is dynamic memory allocation handled when extreme reliability is required?

It looks like dynamic memory allocation without garbage collection is a road to disaster. Dangling pointers there, memory leaks here. It is very easy to plant an error that is sometimes hard to find and that has severe consequences.
How are these problems addressed when mission-critical programs are written? I mean, if I write a program that controls a spacecraft like Voyager 1, one that has to run for years, and I leave in even the smallest leak, that leak can accumulate and halt the program sooner or later, and when that happens it translates into an epic failure.
How is dynamic memory allocation handled when a program needs to be extremely reliable?
Usually in such cases memory won't be dynamically allocated. Fixed sections of memory are used to store arguments and results, and memory usage is tightly controlled and highly tested.
This is the same problem as with a long-running web server, or something like an embedded control system in a heating and ventilation system.
When I worked for Potterton and then Schlumberger in the Buildings Energy Management Sector we did not use dynamic memory allocation. We had fixed size blocks. A given block would be used for a specified purpose and nothing else. The sizes of the blocks dictated how many of them there could be, so you could choose to have X of this and Y of that functionality etc.
Sounds constrained, but for the fixed, discrete tasks it was enough.
It's important, because if you get it wrong you could blow up a boiler and take half a school building with you :-(
Summary: In some situations, you avoid dynamic memory altogether.
Even without garbage collection and memory leaks, classic malloc/free can fail if you have fragmentation, so a static memory layout is the only sure way to guarantee that no problem arises.
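The fixed-block scheme is language-independent; here is a minimal sketch in JavaScript for continuity with the rest of this page (a real controller in this space would use C with statically allocated arrays; all names are illustrative). All storage is allocated once, up front, so usage is bounded and there is nothing to leak or fragment.

    // Sketch of a fixed-size block pool: one upfront allocation, a free list,
    // and a hard, predictable failure when the pool is exhausted.
    class BlockPool {
      constructor(blockCount, blockSize) {
        this.blockSize = blockSize;
        this.storage = new ArrayBuffer(blockCount * blockSize);
        this.freeList = [];
        for (let i = 0; i < blockCount; i++) this.freeList.push(i);
      }
      acquire() {
        if (this.freeList.length === 0) throw new Error("block pool exhausted");
        const i = this.freeList.pop();
        return new Uint8Array(this.storage, i * this.blockSize, this.blockSize);
      }
      release(block) {
        this.freeList.push(block.byteOffset / this.blockSize);
      }
    }

    // "X of this and Y of that": one pool per purpose, sized at design time.
    const sensorReadings = new BlockPool(16, 64);
    const buf = sensorReadings.acquire();
    // ... use buf ...
    sensorReadings.release(buf);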
One could also design the system with fault tolerance in mind, in case bugs get past testing. Checkpoint and recovery techniques could conceivably be used for long-running programs like the Voyager example, but they are likely tricky to implement when there are stringent real-time requirements.

How much is File I/O a performance factor in web development?

I know the mantra is that the database is always the long pole in the tent anytime a page is being generated server-side.
But there's also a good bit of file I/O going on on a web server. Scripted code is replete with include/require statements. Moreover, it's a typical practice to store templated HTML outside the application, in files which are loaded and filled in accordingly.
How much of a role does file I/O play where web development is concerned? Does it ever become an issue? When is it too much? Do web servers/languages cache anything?
Has it ever really mattered in your experience?
10 years ago, disks were so much faster than processors that you didn't have to worry about it so much. You'd run out of CPU (or saturate your NIC) before disk became an issue. Nowadays, CPUs and gigabit NICs could make disk the bottleneck, BUT....
Most non-database disk usage is so easily parallelizable. If you haven't designed your hosting architecture to scale horizontally by adding more systems, that's more important than fine-tuning disk access.
If you have designed to scale horizontally, usually just buying more servers is cheaper than trying to figure out how to optimize disk. Not to mention, things like SSD or even RAM disks for your templates will make it a non-issue.
It's very rare to have a serving architecture that scales horizontally, is popular enough to cause scalability problems, but is not profitable enough to afford another 1U in your rack.
File I/O will only become a factor (for static content and static page includes) if your bandwidth to the outside world is similar to your disk bandwidth. This would imply either you have a really fast connection, are serving content on a fast LAN, or have really slow disks (or are having a lot of disk contention). So most likely the answer is no.
Of course, this assumes that you are not loading a large file only for a small portion of the file.
File I/O is one of many factors, including bandwidth, network connectivity, memory, etc., which may affect the performance of a web application. The most effective way to determine whether file I/O is causing you any issues is to run some profiling on the server and see whether it is the bounding factor on your performance.
A lot of this will depend upon what types of files you are loading from disk, lots of small files will have very different properties to a few large files. Web servers can cache files, both internally in memory and can indicate to a client that a file (e.g. an image) can be cached and so does not need to be requested every time.
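As a concrete illustration of both kinds of caching, here is a minimal Node.js sketch (the file path and cache policy are assumptions, not a production design): the server reads a template from disk once and keeps it in memory, and the Cache-Control header tells clients they may cache the response too.

    // Sketch: serve a template from an in-memory cache, hitting the disk only
    // on the first request, and let the client cache the response as well.
    const fs = require("fs");
    const http = require("http");

    const cache = new Map(); // path -> file contents held in memory

    function readCached(path) {
      if (!cache.has(path)) cache.set(path, fs.readFileSync(path, "utf8"));
      return cache.get(path);
    }

    http.createServer((req, res) => {
      const body = readCached("./templates/page.html"); // hypothetical template
      res.writeHead(200, {
        "Content-Type": "text/html",
        "Cache-Control": "max-age=3600", // client may reuse for an hour
      });
      res.end(body);
    }).listen(8080);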
Do not prematurely optimize. It's evil, or something.
However, I/O is about the slowest thing you can do on a computer. Try to keep it to a minimum, but don't let Knuth see what you're doing.
I would say that file I/O speed only becomes an issue if you are serving tons of static content. When you are processing data and executing code to render the pages, the time to read the page itself from disk is negligible. File I/O is important in cases where the static files you are serving are unable to fit into memory, such as when you are serving video or image files. It could also happen with HTML files, but since HTML files are so small, this is less likely.