It makes sense to run multiple processes when multiple tabs are open, but in my Google Chrome I found multiple processes with only a single tab. I thought some thread was stuck, so I restarted my PC and opened only Google Chrome, and found the same behavior. I am using Windows 7.
Chrome has plugins, web apps, rendering engines and others as separate processes from the browser itself.
That is done so that if one of those processes fails, it won't affect the whole browser, or even the whole tab, because those are separate processes too.
For example, Firefox doesn't have that; instead it detects the script on the page that is probably causing the problem and shows you a dialog asking if you want to stop it.
In summary:
Chrome treats these as different processes:
The browser
The browser (yes, again. Chrome by itself is already 2 processes)
Each tab
Each extension (at least one per extension)
Each web app
Each plugin
Each whatever, everybody is a process, yay!
And that lets things run in parallel and keeps one piece from crashing the whole browser.
In short: fewer crashes (or at least, when one process crashes, the whole browser doesn't go down with it), better security, and things can run in parallel.
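You can see this for yourself on Windows 7 from the Command Prompt (this is standard tasklist filtering; how many entries you get depends on your tabs, extensions and plugins):

tasklist /FI "IMAGENAME eq chrome.exe"

Every chrome.exe line in the output is a separate process, even if you only have a single tab open.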
Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering engine crash in one web app won’t affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won’t lock up if a particular web app or plug-in stops responding. It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.
https://www.howtogeek.com/124218/why-does-chrome-have-so-many-open-processes/
On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time. These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash. Also, rendering engines routinely face untrusted and even malicious code from the web, which may try to exploit these bugs to install malware on your computer.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security. If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open. Web apps often have to compete with each other for CPU time on a single thread, sometimes causing the entire browser to become unresponsive. Security is also a concern, because a web page that exploits a vulnerability in the rendering engine can often take over your entire computer.
https://blog.chromium.org/2008/09/multi-process-architecture.html
Related
How should Process-per-site-instance and Process-per-site be understood? I read the explanation here, but I don't see any difference between the two. Could someone give a simpler explanation, ideally with examples?
Process-per-site-instance:
Chromium creates a renderer process for each instance of a site the user visits. This ensures that pages from different sites are rendered independently, and that separate visits to the same site are also isolated from each other. Thus, failures (e.g., renderer crashes) or heavy resource usage in one instance of a site will not affect the rest of the browser. This model is based on both the origin of the content and relationships between tabs that might script each other. As a result, two tabs may display pages that are rendered in the same process, while navigating to a cross-site page in a given tab may switch the tab's rendering process.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. This model is based on the origin of the content and not the relationships between tabs.
link:
One of the most obvious differences is that in Process-per-site mode each site uses no more than one process, while in Process-per-site-instance (the default mode) one site can have more than one process.
Here is a simple experiment to show the difference:
Process-per-site-instance
First, open Chromium in the normal/default mode (I did this on Chromium build 778138; I think the result will be the same on any recent Chrome build).
Then open two github.com tabs.
Open the Task Manager (under More Tools in Chromium).
As you can see, these two tabs have two different process IDs, 86892 and 86894.
Process-per-site
Quit Chromium and reopen it with an additional argument by running this in the Terminal to enter process-per-site mode (I'm using macOS; for Windows you could follow this, or see the note after these steps):
open -a "Chromium" --args --process-per-site
Then open two github.com tabs.
Open the Task Manager same as the previous step.
As you can see, the two github.com tabs share the same process ID, 86831.
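(On Windows the equivalent is to pass the same flag to the Chrome or Chromium executable; the path below assumes a default install location and may differ on your machine:

"C:\Program Files\Google\Chrome\Application\chrome.exe" --process-per-site

Make sure no other Chrome process is already running first, otherwise the new invocation joins the existing process and the flag is ignored.)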
Another interesting observation is that in the first case (Process-per-site-instance) the total memory footprint is around 125 MB, while in the second (Process-per-site) it is around 88 MB, which is 30% less! But the downside, as stated on the Chromium website, is:
(Process-per-site) Can result in large renderer processes. Sites like google.com host a wide variety of applications that may be open concurrently in the browser, all of which would be rendered in the same process. Thus, resource contention and failures in these applications could affect many tabs, making the browser seem less responsive. It is unfortunately hard to identify site boundaries at a finer granularity than the registered domain name without breaking backwards compatibility.
Further reading:
In this article the author ran some interesting experiments under the Process-per-site-instance mode, and I think it could further improve your understanding of the Chrome process models.
With an increasing mobile user base, I would like to be able to gauge a baseline for site performance. Typically I can do this using Chrome DevTools, checking when DOMContentLoaded finishes and checking all my JavaScript tags to make sure they're within acceptable thresholds. How would I go about automating this so I can create performance dashboards?
Maybe PhantomJS or Selenium can do this? What headless Chrome implementation could I use to achieve this?
You can use Lighthouse to capture a variety of performance metrics.
For real user metrics, you can instrument your app however you see fit with the User Timing API.
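For example, a minimal User Timing sketch looks like this (the mark/measure names and the runSearch function are made up for illustration):

// measure how long a critical operation takes
performance.mark('search-start');
runSearch(); // your own application code (hypothetical)
performance.mark('search-end');
performance.measure('search', 'search-start', 'search-end');

// read the measurement back, e.g. to POST it to a dashboard endpoint
var entry = performance.getEntriesByName('search')[0];
console.log('search took ' + entry.duration + ' ms');

Lighthouse can also be run from its CLI or as a Node module, which makes it straightforward to schedule runs and feed the JSON results into a dashboard.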
I have a complex web app, which is working fine in desktop browsers, as well as in the Android native browser (which is part of why I got so long into this project before noticing this problem). The server setup is using the Typesafe Stack (Play/Akka/Scala), but I suspect that's not relevant to the question. Suffice it to say, it uses bog-standard transient session cookies to keep your login.
The problem is, in Chrome and Safari, that transient session appears to be too fragile, and very unpredictably so. In both cases, so long as I am working actively in the browser, everything is fine. But if I switch away from the browser for a while and return to it, it often loses the session cookie, forcing a re-login. Sometimes it takes an hour or two, sometimes just a few minutes -- I haven't yet been able to figure out a pattern.
Note that this doesn't involve closing the tab with my app in it, or manually closing the browser process. I would expect to be able to switch away from Chrome and come back to it using the app switcher and still have my session there; for some reason, though, it seems to be frequently and quickly losing the session cookie. This is a killer problem: users shouldn't be forced to re-login too often.
Any ideas or pointers to why these browsers might be losing their session cookies so easily? I've done lots of web development, but this is my first time seriously targeting mobile browsers, and I'm clearly missing something...
Today I observed an interesting behavior. I am using Windows XP SP3.
When I open a new tab in Google Chrome and view the Task Manager, a new process is created.
But after some time, this process is terminated.
Why does it behave this way? Is it due to the vfork() system call? Does the child process immediately call exec()?
Does this happen only with Google Chrome, or do all other browsers behave in a similar fashion?
AFAIK Chrome maintains one process for each tab, and one process for some plugins too. They preferred a multi-process architecture over a multi-threaded one because an application that talks to the network all the time has to expect malformed packets that can garble memory. With separate processes, such a failure takes down only the one affected process, whereas in a multi-threaded browser it would kill all the tabs. (As for vfork(): that is a Unix system call; on Windows, processes are created with the Win32 CreateProcess API.)
You can enlighten yourself with the following blog post:
http://blog.chromium.org/2008/09/multi-process-architecture.html
What is new in HTML 5’s “offline web applications” feature which was not already available in all browsers?
Offline caching is the job of the browser — how did it become a job of HTML?
A web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.
As written in Wikipedia’s article for Web cache.
And this is written for offline web cache in the W3C website:
In order to enable users to continue interacting with Web applications and documents even when their network connection is unavailable — for instance, because they are traveling outside of their ISP's coverage area — authors can provide a manifest which lists the files that are needed for the Web application to work offline and which causes the user's browser to keep a copy of the files for use offline.
What is HTML5 doing better and differently for caching?
Is it similar to the offline mode in Internet Explorer 5? And can we cache data beyond the amount of space the browser normally allows?
Please give me an example so that I can understand the difference between the HTML5 offline cache and ordinary browser caches.
Web browser caching is when browsers decide to store files locally to improve performance. HTTP allows web servers to suggest to browsers how long to store files, and allows browsers to ask the server whether a file has changed (so that they can avoid re-downloading it).
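As a sketch, the exchange looks like this (header values invented for illustration):

HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "v1-abc123"

...and an hour later the browser revalidates instead of re-downloading:

GET /app.js HTTP/1.1
If-None-Match: "v1-abc123"

HTTP/1.1 304 Not Modified

A 304 tells the browser its cached copy is still good, so only headers cross the wire.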
However, it’s not designed to reliably store assets required by an offline application. It’s ultimately up to the browser whether, and for how long, it caches the files. And browsers will often stop using their cached version if they can’t contact the server to check that it’s up-to-date.
The HTML5 offline web applications spec gives web authors the ability to tell browsers what to store for offline access, and requires browsers to keep those files up to date while the browser is online. It also provides a DOM property that tells the developer whether the browser is online or offline, and events that fire when the online status changes.
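Concretely, a page opts in by pointing at a manifest file (the file name here is just an example):

<html manifest="todo.appcache">

...and todo.appcache lists what the browser must keep:

CACHE MANIFEST
# v1 (change this comment to force an update)

CACHE:
index.html
style.css
app.js

NETWORK:
/api/

Whenever a single byte of the manifest changes, the browser re-downloads the listed files; until then it serves them from the application cache, even with no connection at all.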
As Peeter describes in his answer, this allows web app developers to store user-inputted data whilst the user is offline, then sync it with the server when they’re online again. The developer has to do this storage and syncing manually, as the browser only provides the events indicating online status, but if the browser also supports localStorage, the developer can store the data there.
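A minimal sketch of that pattern (sendToServer and the 'pendingTodos' key are hypothetical names, not part of any spec):

function saveTodo(todo) {
  if (navigator.onLine) {
    sendToServer(todo); // hypothetical function that POSTs to your backend
  } else {
    // offline: queue the change in localStorage for later
    var queue = JSON.parse(localStorage.getItem('pendingTodos') || '[]');
    queue.push(todo);
    localStorage.setItem('pendingTodos', JSON.stringify(queue));
  }
}

// when connectivity returns, flush the queue to the server
window.addEventListener('online', function () {
  var queue = JSON.parse(localStorage.getItem('pendingTodos') || '[]');
  for (var i = 0; i < queue.length; i++) sendToServer(queue[i]);
  localStorage.removeItem('pendingTodos');
});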
I can do no better than point you to the relevant chapter of Dive into HTML5: http://diveintohtml5.ep.io/offline.html
You can now cache dynamic data, instead of just JS/CSS/HTML files and images.
Let's say you've got a todo list application open in your browser. You're connected to the internet and you're adding a bunch of stuff you have to do.
Boom, you're on an airplane without a connection. You've got 6 hours of time to kill so you decide to get some work done. You finish all of the things on your todo list (the list was still open in your browser). You select all of the items and change their state to "finished".
Your plane lands, you open up your laptop and refresh the page. All the changes you made without a connection are now synced to the server, as you have an internet connection again.