How should Process-per-site-instance and Process-per-site be understood? I have read the explanation here, but I can't see any difference between the two models. Could someone give a simpler, more concrete explanation, ideally with examples?
Process-per-site-instance:
Chromium creates a renderer process for each instance of a site the user visits. This ensures that pages from different sites are rendered independently, and that separate visits to the same site are also isolated from each other. Thus, failures (e.g., renderer crashes) or heavy resource usage in one instance of a site will not affect the rest of the browser. This model is based on both the origin of the content and relationships between tabs that might script each other. As a result, two tabs may display pages that are rendered in the same process, while navigating to a cross-site page in a given tab may switch the tab's rendering process.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. This model is based on the origin of the content and not the relationships between tabs.
The most obvious difference is that in Process-per-site mode each site uses no more than one process, while in Process-per-site-instance (the default mode) one site can have more than one process.
Here is a simple experiment to show the difference:
Process-per-site-instance
First, open Chromium (I used Chromium build 778138, but the result should be the same on any recent Chrome build) in the normal/default mode.
Then open two github.com tabs.
Open the Task Manager (under More Tools in Chromium).
As you can see, these two tabs have two different process IDs, 86892 and 86894.
Process-per-site
Quit Chromium and reopen it with an additional argument by running this in the Terminal to enter process-per-site mode (I'm using macOS; a Windows equivalent is sketched below the command):
open -a "Chromium" --args --process-per-site
Then open two github.com tabs.
Open the Task Manager as in the previous step.
As you can see, the two github.com tabs now share the same process ID, 86831.
Another interesting observation is that in the first case (Process-per-site-instance) the total memory footprint is around 125 MB, while in the second (Process-per-site) it is around 88 MB, which is 30% less! But the downside is, as stated on the Chromium website:
(Process-per-site) Can result in large renderer processes. Sites like google.com host a wide variety of applications that may be open concurrently in the browser, all of which would be rendered in the same process. Thus, resource contention and failures in these applications could affect many tabs, making the browser seem less responsive. It is unfortunately hard to identify site boundaries at a finer granularity than the registered domain name without breaking backwards compatibility.
Further reading:
In this article the author ran some interesting experiments under the Process-per-site-instance mode, and I think it could further improve your understanding of the Chrome process models.
Related
It is logical to run multiple processes when multiple tabs are open, but in my Google Chrome I found multiple processes with only a single tab open. I thought some thread was stuck, so I restarted my PC and opened only Google Chrome, but found the same behavior. I am using Windows 7.
Chrome runs plugins, web apps, rendering engines, and other components as separate processes from the browser itself.
That is done so that if one of those processes fails, it won't affect the whole browser, or even the whole tab, because those are separate processes too.
For example, Firefox doesn't have that; instead, it detects the script in the page that seems to be causing the problem and shows you a dialog asking whether you want to stop it.
In summary:
Chrome treats these as different processes:
The browser
The browser (yes, again; Chrome by itself is already two processes)
Each tab
Each extension (at least one per extension)
Each web app
Each plugin
Each whatever, everybody is a process, yay!
And that helps things run in parallel and keeps that stuff from crashing the whole browser.
Fewer crashes (or at least, when one process crashes, the whole browser doesn't crash), increased security, and the ability to run things in parallel.
Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering engine crash in one web app won't affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding. It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.
https://www.howtogeek.com/124218/why-does-chrome-have-so-many-open-processes/
On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time. These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash. Also, rendering engines routinely face untrusted and even malicious code from the web, which may try to exploit these bugs to install malware on your computer.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security. If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open. Web apps often have to compete with each other for CPU time on a single thread, sometimes causing the entire browser to become unresponsive. Security is also a concern, because a web page that exploits a vulnerability in the rendering engine can often take over your entire computer.
https://blog.chromium.org/2008/09/multi-process-architecture.html
Operating System Concepts, 9th edition, page 123, "MULTIPROCESS ARCHITECTURE—CHROME BROWSER"
In this part, the author says that each tab represents a separate process, but when I look at Task Manager (Windows), there's only one process under "Google Chrome". For example, it's showing Stack Overflow now, while I still have other tabs open, so why can't I find them in Task Manager? There are also some other processes, but I think they have nothing to do with these tabs, because when there's only one tab, they're still there. So how should I understand what the book says?
Chromium supports four different models that affect how the browser allocates pages into renderer processes. By default, Chromium (Chrome) uses a separate OS process for each instance of a web site the user visits. However, users can specify command-line switches when starting Chromium to select one of the other architectures: one process for all instances of a web site, one process for each group of connected tabs, or everything in a single process.
In my case I have the following situation (screenshots of the task managers on macOS and on Windows, including the Windows Details view): as you can see, each of the tasks has its own PID (process ID).
You can also refer to the questions "Chrome is using 1 process per website instead of per tab" and "Chrome tabs and processes".

And here is the official documentation about the process models of Chrome/Chromium.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. To use this model, users should specify a --process-per-site command-line switch when starting Chromium. This creates fewer renderer processes, trading some robustness for lower memory overhead. This model is based on the origin of the content and not the relationships between tabs.
Process-per-tab:
The process-per-site-instance and process-per-site models both consider the origin of the content when creating renderer processes. Chromium also supports a simpler model which dedicates one renderer process to each group of script-connected tabs. This model can be selected using the --process-per-tab command-line switch.
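For completeness, each non-default model is selected with its own switch at launch, the same way as in the experiment above. A hedged example on macOS (the same switches can be appended to chrome.exe on Windows); --single-process is the fourth, everything-in-one-process model and is generally useful only for testing:

open -a "Chromium" --args --process-per-tab
open -a "Chromium" --args --single-process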
Yes, it shows the currently opened tab's MainWindowTitle.
A simple console program to check:
using System;
using System.Diagnostics;

// List every running Chrome process with its name, its main window
// title (the active tab's title), and its PID. Note the friendly
// process name for chrome.exe is "chrome".
Process[] localByName = Process.GetProcessesByName("chrome");
foreach (Process p in localByName)
{
    Console.WriteLine(
        "Process: {0}\n Title: {1}\n PID: {2}\n",
        p.ProcessName,
        p.MainWindowTitle,
        p.Id
    );
}
I'm using the Couchbase admin tool, and one of the most useful features for me is the ability to go into the documents of a particular bucket, type a document prefix that I've reserved for a particular document type into the document filter dialog, and immediately get a filtered list of just the documents of that type.
For instance, if I had a bucket called "sports" which had data for all sorts of sports, I might have a set of records related to tennis, football, etc., and let's assume that the IDs of these documents were all prefixed with the particular sport in question. So in this case I'd simply put football into the Document Filter dialog and would expect to see just those documents whose IDs start with "football", filtering live as I type. This functionality works perfectly fine on my main development machine, but on my laptop and in my production environment typing produces nothing at all. I can press the "Lookup Id" button in any environment and, as long as a proper ID has been specified, it will load the document, but the real-time filtering is critical to making the admin functionality useful to me.
It's worth mentioning that both my main dev machine and laptop are on OS X and production is on Ubuntu. Also of note, my main development environment is still creeping along on version 2.0.1 because I'm afraid of losing this functionality, but my laptop is running 2.5.1 and I think prod is the same.
Also, looking at the network panel in the debugger I do notice an important variation:
Both laptop and main dev machines load the document viewer without any JS errors
Independent of typing into the filter dialog, my main dev machine periodically fires off REST calls to: http://couchserver:8091/pools/default?uuid=xxxxxx&waitChange=20000&etag=xxxxxx
As soon as I type into the filter dialog, I see network requests that look like this (a prefix-range query, sketched in code after this list): http://couchserver:8091/couchBase/reference_data/_all_docs?startkey=%22football%22&endkey=%22football%EF%BF%BF%22&skip=0&include_docs=true&limit=21&_=1399627171015
My laptop, where the functionality doesn't work, also sends the basic polling message listed above, but when I type into the filter dialog no request is sent (and no JS error is thrown either). Just silence. :(
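For anyone curious, the request above is a plain _all_docs range query: the endkey is the prefix plus U+FFFF (that's the %EF%BF%BF in the URL), so every ID that starts with the prefix sorts inside [startkey, endkey]. A minimal C# sketch that reproduces the working machine's request, assuming the host and bucket names from the URLs above:

using System;
using System.Net;

class PrefixFilterSketch
{
    static void Main()
    {
        string prefix = "football";
        // JSON-string keys, URL-encoded: "football" and "football\uFFFF".
        string startKey = Uri.EscapeDataString("\"" + prefix + "\"");
        // U+FFFF sorts after every other character, so this endkey is an
        // upper bound for all IDs beginning with the prefix.
        string endKey = Uri.EscapeDataString("\"" + prefix + "\uFFFF\"");
        string url = "http://couchserver:8091/couchBase/reference_data/_all_docs"
                   + "?startkey=" + startKey
                   + "&endkey=" + endKey
                   + "&skip=0&include_docs=true&limit=21";

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(url)); // raw JSON result
        }
    }
}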
It appears from IRC and other channels that this functionality has been removed because it was causing stability problems with large datasets. This is a bit worrisome to me, and I still feel strongly that this functionality is highly desirable in an admin tool (at least in development environments, although I would argue for both prod and dev).
Anyway, while the UI still uses the "filter" terminology, I think it's fair to say the filter functionality has been removed. I will now have to write my own admin interface. :(
Our web analytics package includes detailed information about user's activity within a page, and we show (click/scroll/interaction) visualizations in an overlay atop the web page. Currently this is an IFrame containing a live rendering of the page.
Since pages change over time, older data no longer corresponds to the current layout of the page. We would like to run a spider to occasionally take snapshots of the pages, allowing us to maintain a record of interactions with various versions of the page.
We have a working implementation of this (Linux), but the snapshot process is a hideous Python/JavaScript/HTML hack that opens a Firefox window, then screenshots, scrolls, merges, and saves to a file. This requires us to install the X stack on our normally headless servers, and takes over a minute per page.
We would prefer a headless implementation with performance closer to that of the rendering time in a regular web browser, but haven't found anything.
There's some movement towards building something using Mozilla source as a starting point, but that seems like overkill to me, as well as a maintenance nightmare if we try to keep it up to date.
Suggestions?
An article on Digital Inspiration points towards CutyCapt which is cross-platform and uses the Webkit rendering engine as well as IECapt which uses the present IE rendering engine and requires Windows, natch. Nothing off the top of my head which uses Gecko, Firefox's rendering engine.
I doubt you're going to be able to get away from X, however. Since CutyCapt requires Qt, it requires either X or a Windows installation. And, similarly, IECapt will require Windows (or Wine if you want to try to run it under Linux, and then you're back to needing X). I doubt you'll be able to find a rendering engine which doesn't require Qt, Gtk, GDI, or Cocoa, and therefore requires a full install of display libraries.
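That said, if the objection is to running a full display server rather than to installing the X libraries themselves, the usual workaround (a sketch, assuming CutyCapt is built and in the current directory; the URL is a placeholder) is a virtual framebuffer:

xvfb-run --server-args="-screen 0 1024x768x24" ./CutyCapt --url=http://example.com --out=snapshot.png

This still needs the X libraries installed, but no physical display or window manager.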
Why not store the HTML that is sent out to the client? You could then use that to redisplay it in a web browser to show what the page looked like.

Using your web analytics data about user actions, you could then default the combo boxes, fields, etc. to the values the client would have had, and even change the CSS on buttons to mark them as pushed.

As a benefit, you don't need the X stack, and you don't need to do any crawling or storing of images.
EDIT (Re Andrew Moore):
This is where you store the current CSS/images under a version number. Place an easily parsable version number in a comment in the HTML. If you change your CSS/images but keep the existing names, increment the version number in the HTML output sent out.

The system that stores the HTML will know that it needs to grab a new copy of the assets and store them under a new number. When redisplaying, it simply uses the version number to determine which CSS/image set to use.
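A minimal sketch of the redisplay side (the comment format and directory layout here are hypothetical, just to make the idea concrete):

using System.Text.RegularExpressions;

static class AssetVersioning
{
    // Given stored HTML containing a marker like <!-- assets-version: 7 -->,
    // return the directory holding the frozen CSS/image set for that version.
    public static string AssetDirFor(string storedHtml)
    {
        Match m = Regex.Match(storedHtml, @"<!--\s*assets-version:\s*(\d+)\s*-->");
        return m.Success ? "assets/v" + m.Groups[1].Value : "assets/current";
    }
}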
We currently have a system here that uses a very similar scheme so we can track users' actions and provide better support when they call our help desk, as staff can bring up the user's session and follow what they did, even somewhat live.

You can even code it to auto-censor sensitive fields when the HTML is stored.
Depending on the specifics of your needs, perhaps you could get away with using one of the many free web page thumbnail services? SnapCasa, for example, lets you generate thousands per month, with no charge and no advertising (I've never used it; I just googled 'free thumbnail service' to find it).

Just a thought.
I was wondering, how would you go about writing an application that basically houses other applications inside of it?
The reason I ask is that I'd love to build an app that 'conquers' my current explosion of open windows. I've used virtual window managers before and they're nice and all, but I could do so many things with an app like I mention.
Alternatively, does anyone know of an easy-to-use, intuitive application for confining windows to 'regions' of your screen? Something like GridMove, but more intuitive and less flaky?
You could create a window, then enumerate all windows that have the WS_OVERLAPPEDWINDOW style, select the ones belonging to the application you want to house, and call SetParent on each, setting the parent to the window you created. You could also use FindWindow to find a window by its title (see the sketch below).
All the windows inside the house can never leave the house window's boundaries, but they still follow all the same rules. You can still click-and-drag windows etc.
The problem here is that if the application inside the house creates another window, its parent will most likely be the desktop window, not the house window.
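Here is a minimal C# sketch of that approach via P/Invoke (the window title and the house handle are placeholders; error handling is omitted):

using System;
using System.Runtime.InteropServices;

static class HouseWindow
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

    // Re-parent the top-level window with the given title into the house.
    public static bool Adopt(IntPtr houseHandle, string windowTitle)
    {
        // A null class name matches windows of any class.
        IntPtr child = FindWindow(null, windowTitle);
        if (child == IntPtr.Zero)
            return false;                 // no window with that title
        return SetParent(child, houseHandle) != IntPtr.Zero;
    }
}

// Usage, e.g. from a WinForms form acting as the house:
// HouseWindow.Adopt(this.Handle, "Untitled - Notepad");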
I think what you are describing is generally called a Window Manager. The Windows shell is itself a (poor) example of a window manager. You might want to investigate some alternatives. I know there has been some success in getting KDE ported to Windows, so you might want to look at the current state of that project.
Microsoft also provides a PowerToy (IIRC) that gives you virtual desktop support, but it's really bad. Have you considered just getting a second monitor (and perhaps a utility such as MultiMon Taskbar to get a second task bar on the other monitor)?
Here is code that uses FindWindow/SetParent to create a tabbed view combining different applications: Jedi Window Dock.
I also wrote an application (not free, not open source) that takes this idea a bit further called WindowTabs.
The only caution I would give you is that not all applications like being parented. If you're writing .NET, there are some gotchas there (which is why WindowTabs doesn't use parenting).

Also, in general, once you do a SetParent, you are joining the threads at the Win32 level, meaning that if one hangs, all of them are toast.
Multiple Document Interfaces could help you out.
Despite the multiple downvotes, I stand by this answer, because the OP never stated the source of the "explosion of windows." I've seen business apps that open several windows at a time (or users that would open several instances "to save time") where MDI would have been a nice feature for them.
If the OP is a power user who has a need for another window manager because he runs many apps at once, then this really doesn't apply. It also isn't the problem I'd be addressing -- it would be finding a way to have fewer windows.
In general, there's always a VM.
It may be overkill or it may not work depending on the specifics of what you're trying to do. But VMWare will let you copy/paste files and text between your VM and local machine, so it's not that far off of being a true window manager. The system requirements aren't even that outrageous, considering how much memory iTunes + a typical browser eat up.