Couchbase admin tool falling over with filtered document list - couchbase

I'm using the Couchbase admin tool, and one of its most useful features for me is the ability to go into the documents of a particular bucket, type a document prefix I've reserved for a particular document type into the document filter dialog, and immediately get a filtered list of just the documents of that type.
For instance, if I had a bucket called "sports" with data for all sorts of sports, I might have a set of records related to tennis, football, etc., and let's assume the IDs of these documents were all prefixed with the sport in question. In that case I'd simply put "football" into the Document Filter dialog and would expect to see just those documents whose IDs start with "football", filtered as I type. This functionality works perfectly on my main development machine, but on my laptop and in my production environment typing produces nothing. I can press the "Lookup Id" button in any environment and, as long as a proper ID has been specified, it will load the document, but the real-time filtering is critical to making the admin functionality useful to me.
It's worth mentioning that both my main dev machine and my laptop are on OS X and production is on Ubuntu. Also of note, my main development environment is still creeping along on version 2.0.1 because I'm afraid of losing this functionality, but my laptop is running 2.5.1 and I think prod is the same.
Also, looking at the network panel in the debugger I do notice an important variation:
Both laptop and main dev machines load the document viewer without any JS errors
Independent of typing into the filter dialog, my main dev machine periodically fires off REST calls to: http://couchserver:8091/pools/default?uuid=xxxxxx&waitChange=20000&etag=xxxxxx
As soon as I type into the filter dialog I see network requests that look like this: http://couchserver:8091/couchBase/reference_data/_all_docs?startkey=%22football%22&endkey=%22football%EF%BF%BF%22&skip=0&include_docs=true&limit=21&_=1399627171015
My laptop, where the functionality doesn't work, also seems to send the basic polling request listed above, but when I type into the filter dialog no request is sent (and no JS error is thrown either). Just silence. :(
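Incidentally, the filtering is easy to reproduce outside the UI by calling the same _all_docs endpoint the console hits. A minimal Python sketch, assuming the host and bucket names from the trace above and that the endpoint is reachable without additional auth:

# Reproduce the admin UI's prefix filter against the _all_docs REST endpoint.
# The host, bucket name and lack of authentication are assumptions taken from
# the network trace above; adjust for your environment.
import json
import urllib.parse
import urllib.request

def docs_with_prefix(prefix, host="couchserver", bucket="reference_data", limit=21):
    # endkey appends U+FFFF so the key range covers every ID starting with the prefix,
    # exactly like the request the console used to issue.
    params = urllib.parse.urlencode({
        "startkey": json.dumps(prefix),
        "endkey": json.dumps(prefix + "\uffff"),
        "skip": 0,
        "include_docs": "true",
        "limit": limit,
    })
    url = "http://%s:8091/couchBase/%s/_all_docs?%s" % (host, bucket, params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))["rows"]

for row in docs_with_prefix("football"):
    print(row["id"])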

It appears from IRC and other channels that this functionality has been removed because it was causing stability problems with large datasets. This is a bit worrisome to me and I still feel strongly that this functionality is highly desirable in an admin tool (at least in development environments although I would argue both prod and dev).
Anyway, while the UI still uses the "filter" terminology, I think it's fair to say the filter functionality has been removed. I will now have to write my own admin interface. :(

Related

Chrome Process Models : Process-per-site-instance and Process-per-site

How should Process-per-site-instance and Process-per-site be understood? I read the explanation here, but I feel there is no difference between the two. I hope someone can give a simpler explanation, ideally with examples.
Process-per-site-instance:
Chromium creates a renderer process for each instance of a site the user visits. This ensures that pages from different sites are rendered independently, and that separate visits to the same site are also isolated from each other. Thus, failures (e.g., renderer crashes) or heavy resource usage in one instance of a site will not affect the rest of the browser. This model is based on both the origin of the content and relationships between tabs that might script each other. As a result, two tabs may display pages that are rendered in the same process, while navigating to a cross-site page in a given tab may switch the tab's rendering process.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. This model is based on the origin of the content and not the relationships between tabs.
One of the most obvious differences is that Process-per-site mode makes sure each site uses no more than one process, while in Process-per-site-instance (which is the default mode) one site can have more than one process.
Here is a simple experiment to show the difference:
Process-per-site-instance
First, open Chromium in the normal/default mode (I did this on Chromium build 778138, but I think the result will be the same on any recent Chrome build).
Then open two github.com tabs.
Open the Task Manager (under More Tools in Chromium).
As you can see, these two tabs have two different process IDs, 86892 and 86894.
Process-per-site
Quit Chromium and reopen it with an additional argument by running this in the Terminal to enter process-per-site mode (I'm using macOS; for Windows you could follow this):
open -a "Chromium" --args --process-per-site
Then open two github.com tabs.
Open the Task Manager same as the previous step.
As you can see, the two github.com tabs have the same process ID, 86831.
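If you'd rather not eyeball the Task Manager, here is a rough sketch (assuming the third-party psutil package) that lists Chromium's renderer processes and their PIDs so you can compare the two modes:

# List Chromium renderer processes and their PIDs. psutil is a third-party
# package, and the process name may differ per platform (e.g. on macOS the
# renderers can appear as "Chromium Helper (Renderer)").
import psutil

for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    name = proc.info["name"] or ""
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "Chromium" in name and "--type=renderer" in cmdline:
        print(proc.info["pid"], name)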
Another interesting observation is that with the first model (Process-per-site-instance) the total memory footprint is around 125 MB, while with the second (Process-per-site) it is around 88 MB, which is about 30% less! But the downside is, as stated on the Chromium website:
(Process-per-site) Can result in large renderer processes. Sites like google.com host a wide variety of applications that may be open concurrently in the browser, all of which would be rendered in the same process. Thus, resource contention and failures in these applications could affect many tabs, making the browser seem less responsive. It is unfortunately hard to identify site boundaries at a finer granularity than the registered domain name without breaking backwards compatibility.
Further reading:
In this article the author ran some interesting experiments under the Process-per-site-instance mode, and I think it could further improve your understanding of the Chrome process models.

Does chrome really create a process for each tab?

Operating System Concepts, 9th edition, page 123, "MULTIPROCESS ARCHITECTURE—CHROME BROWSER"
In that section, the author says that each tab represents a separate process, but when I look at the Task Manager (Windows), there's only one process under "Google Chrome". For example, I'm on Stack Overflow now and still have other tabs open, so why can't I find them in the Task Manager? There are also some other "processes", but I think they have nothing to do with these tabs, because when there's only one tab they're still there. So how should I understand what the book says?
Chromium supports four different models that affect how the browser allocates pages into renderer processes. By default, Chromium (Chrome) uses a separate OS process for each instance of a web site the user visits. However, users can specify command-line switches when starting Chromium to select one of the other architectures: one process for all instances of a web site, one process for each group of connected tabs, or everything in a single process.
In my case I have the following situation (Task Manager screenshots on macOS and on Windows): as you can see, each of the tasks has its own PID (process ID).
You can also refer to the questions Chrome is using 1 process per website instead of per tab and Chrome tabs and processes.
And here is official documentation about process model of Chrome / Chromium.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. To use this model, users should specify a --process-per-site command-line switch when starting Chromium. This creates fewer renderer processes, trading some robustness for lower memory overhead. This model is based on the origin of the content and not the relationships between tabs.
Process-per-tab:
The process-per-site-instance and process-per-site models both consider the origin of the content when creating renderer processes. Chromium also supports a simpler model which dedicates one renderer process to each group of script-connected tabs. This model can be selected using the --process-per-tab command-line switch.
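As a rough illustration, the three models can be selected from a script simply by passing the corresponding switch; the Chromium binary path below is a macOS-style assumption, so point it at your own install:

# Launch Chromium under a chosen process model by passing the documented switch.
# The binary path is an assumption for macOS; adjust it for your platform.
import subprocess

CHROMIUM = "/Applications/Chromium.app/Contents/MacOS/Chromium"

def launch(model_switch=None):
    args = [CHROMIUM]
    if model_switch:
        args.append(model_switch)
    return subprocess.Popen(args)

launch()                        # default: process-per-site-instance
# launch("--process-per-site")  # group all instances of a site into one process
# launch("--process-per-tab")   # one process per group of script-connected tabs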
Yes, it shows the currently opened tab's title as its MainWindowTitle.
A simple console program to check:
using System;
using System.Diagnostics;

// List every running Chrome process along with its PID and window title
// (the title of the tab currently shown in that window, if any).
Process[] localByName = Process.GetProcessesByName("Chrome");
foreach (Process p in localByName)
{
    Console.WriteLine(
        string.Format(
            "Process: {0}\n Title: {1}\n PID: {2}\n",
            p.ProcessName,
            p.MainWindowTitle,
            p.Id
        )
    );
}

Can Chrome DevTools preferences be set automatically, or imported?

I've been experimenting with using Chrome DevTools as my primary authoring tool, and am now mostly using them.
As I continue to increase my usage, I'm running into some pain points.
Usually, when I begin working on a project, I now create a dedicated Chrome profile for it. I do this automatically by invoking Chrome with the --user-data-dir flag and storing the browser profile right within the project.
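For reference, the launch step looks roughly like this; the binary path and the ".chrome-profile" directory name are just my own choices, not a convention:

# Launch Chrome with a per-project profile stored inside the project directory.
# The CHROME path and the ".chrome-profile" folder name are assumptions.
import os
import subprocess

CHROME = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

def open_project(project_dir, url="http://localhost:8000"):
    profile = os.path.join(project_dir, ".chrome-profile")
    os.makedirs(profile, exist_ok=True)
    subprocess.Popen([CHROME, "--user-data-dir=" + profile, url])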
Then I go into the tools, set up my workspace, map my local directories, and so forth. This works great.
What doesn't work so great is that this is a very repetitive process. I'd love to be able to specify the workspace mappings within the project somehow, and then generate the appropriate profile. I'd also love to be able to set other preferences (like indentation, and various other settings on the DevTools "General" page) in a standard way.
I've thought of three ways this might be possible:
There might be an API for this, but I doubt it, as programmatic manipulation of browser preferences obviously is disfavored (but would someone have carved out an exception for DevTools?),
There might be a way to import/export DevTools preferences, and I might be able to generate the import format,
I might be able to figure out where they're stored in the user directory, and manipulate them myself (so far I haven't, though).
There's also one partial solution I've considered:
I might be able to copy a "template" browser profile to get some of the shared settings above. Then I'd still have to do the workspace mapping each time, but I might be able to get away with not doing the rest.
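A minimal sketch of that template-copy idea, with the template location and profile directory name being assumptions of mine:

# Seed a project's Chrome profile from a previously configured "template"
# profile so the shared settings carry over; both paths are assumptions.
import shutil
from pathlib import Path

TEMPLATE = Path.home() / "chrome-profile-template"

def seed_profile(project_dir):
    target = Path(project_dir) / ".chrome-profile"
    if not target.exists():
        shutil.copytree(TEMPLATE, target)
    return target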
One really elaborate strategy I could try would be to use browser automation, as suggested in Google Chrome - how can i programmatically enable chrome://flags some of the modules from disable mode to enabled mode? ... but that seems like overkill even as I start using the stuff more heavily; I don't think I'm quite ready to invest that kind of up-front effort in it.
Is anyone familiar enough with how the Chrome DevTools preferences work to judge which strategies might be most promising?
There is no easy way to sync DevTools settings. They are stored in localStorage scoped to DevTools, which means they live in a special SQLite DB that isn't easy to transfer between machines (plus you'd bring all the other stuff along with it).
Sadly, you are left porting all of this around by hand with each new machine.

What does "headless" mean?

While reading the QTKit Application Programming Guide I came across the term 'headless environments' - what does this mean? Here is the passage:
...including applications with a GUI and tools intended to run in a “headless” environment. For example, you can use the framework to write command-line tools that manipulate QuickTime movie files.
"Headless" in this context simply means without a graphical display. (i.e.: Console based.)
Many servers are "headless" and are administered over SSH for example.
Headless means that the application is running without a graphical user interface (GUI) and sometimes without user interface at all.
There are similar terms for this, used in slightly different contexts. Here are some examples.
Headless / Ghost / Phantom
This term is mostly used for heavyweight clients. The idea is to run a client in a non-graphical mode, with a command line for example. The client will then run until its task is finished or will interact with the user through a prompt.
Eclipse, for instance, can be run in headless mode. This mode comes in handy for running jobs in the background, or in a build factory.
For example, you can run Eclipse in graphical mode to install plugins. That is OK if you just do it for yourself. However, if you're packaging Eclipse to be used by the devs of a large company and want to keep up with all the updates, you probably want to find a more reproducible, automatic, easier way.
That's where the headless mode comes in: you can run Eclipse from the command line with parameters that indicate which plugins to install.
The nice thing about this method is that it can be integrated into a build factory!
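For illustration, a headless plugin install driven from a script might look something like this; the repository URL and feature ID are placeholders, and the p2 director invocation should be checked against your Eclipse version's documentation:

# Sketch: install an Eclipse plugin headlessly via the p2 director application.
# The repository URL and feature ID below are placeholders.
import subprocess

subprocess.run([
    "eclipse", "-nosplash",
    "-application", "org.eclipse.equinox.p2.director",
    "-repository", "https://download.eclipse.org/releases/latest",
    "-installIU", "org.eclipse.egit.feature.group",
], check=True)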
Faceless
This term is mostly used for larger-scale applications. It was coined by UX designers. A faceless app interacts with users through channels traditionally dedicated to human-to-human communication, like email, SMS, or phone calls... but NOT a GUI.
For example, some companies use SMS as an entry point for dialogue with users: the user sends an SMS containing a request to a certain number. This triggers automated services that run and reply to the user.
It's a nice user experience, because you can run some errands from your telephone. You don't necessarily need an internet connection, and the interaction with the app is asynchronous.
On the back-end side, the service can decide that it does not understand the user's request and drop out of the automated mode. The user then enters an interactive mode with a human operator without changing their communication tool.
You most likely know what a browser is. Now take away the GUI, and you have what’s called a headless browser. Headless browsers can do all of the same things that normal browsers do, but faster. They’re great for automating and testing web pages programmatically.
Headless can refer to a browser or other program that doesn't require a GUI. There is nothing for a person to view; it just passes information, as code, to another program.
So why use a headless program?
Simply because it improves speed and performance and works everywhere, including on machines without access to a graphics card. It allows testing in browserless setups and helps you multitask.
Guide to Headless Browser
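As a small sketch of what that looks like in practice (assuming the third-party Selenium package and a local Chrome install):

# Drive a headless browser programmatically; Selenium and Chrome are assumed.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")   # render pages without any visible GUI
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    print(driver.title)              # the page loaded and rendered, just not displayed
finally:
    driver.quit()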
What is GUI?
In software development, headless also describes an architectural design that completely separates the backend from the front end. The front end (the GUI, or UI) is a standalone piece and communicates with the backend through an API. This allows for a multi-server architecture, flexibility in the software stack, and performance optimization.

How to take screenshot of rendered HTML page

Our web analytics package includes detailed information about user's activity within a page, and we show (click/scroll/interaction) visualizations in an overlay atop the web page. Currently this is an IFrame containing a live rendering of the page.
Since pages change over time, older data no longer corresponds to the current layout of the page. We would like to run a spider to occasionally take snapshots of the pages, allowing us to maintain a record of interactions with various versions of the page.
We have a working implementation of this (Linux), but the snapshot process is a hideous Python/JavaScript/HTML hack which opens a Firefox window, screenshotting and scrolling and merging and saving to a file. This requires us to install the X stack on our normally headless servers, and takes over a minute per page.
We would prefer a headless implementation with performance closer to that of the rendering time in a regular web browser, but haven't found anything.
There's some movement towards building something using Mozilla source as a starting point, but that seems like overkill to me, as well as a maintenance nightmare if we try to keep it up to date.
Suggestions?
An article on Digital Inspiration points towards CutyCapt which is cross-platform and uses the Webkit rendering engine as well as IECapt which uses the present IE rendering engine and requires Windows, natch. Nothing off the top of my head which uses Gecko, Firefox's rendering engine.
I doubt you're going to be able to get away from X, however. Since CutyCapt requires Qt, it requires either X or a Windows installation. And, similarly, IECapt will require Windows (or Wine if you want to try to run it under Linux, and then you're back to needing X). I doubt you'll be able to find a rendering engine which doesn't require Qt, Gtk, GDI, or Cocoa, and therefore requires a full install of display libraries.
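If you do go the CutyCapt route, one way to run it on a display-less box is to wrap it in xvfb-run, so all you need is the Xvfb virtual framebuffer rather than a full X session. A hedged sketch (flag spellings are from memory, so verify them against your installed versions):

# Take a snapshot with CutyCapt under Xvfb so no real display is required.
import subprocess

def snapshot(url, out_file):
    subprocess.run([
        "xvfb-run", "--server-args=-screen 0 1024x768x24",
        "CutyCapt", "--url=" + url, "--out=" + out_file,
    ], check=True)

snapshot("http://www.example.org/", "example.png")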
Why not store the HTML that is sent out to the client? You could then use that to redisplay in a web browser to show what the page looked like.
Using your web analytics data about user actions, you could then use it to set the combo boxes, fields, etc. to the values the client would have had, and even change the CSS on buttons to mark them as having been pushed.
As a benefit, you don't need the X stack, and you don't need to do any crawling or storing of images.
EDIT (Re Andrew Moore):
This is where you store the current CSS/images under a version number. Place an easily parsable version number in a comment in the HTML. If you change your CSS/images and reuse the existing names, increment the version number in the HTML output sent out.
The system that stores the HTML will know that it needs to grab a new copy and store under a new number. When redisplaying, it simply uses the version number to determine which CSS/image set to use.
We currently have a system here that uses a very similar approach so we can track users' actions and provide better support when they call our help desk, as support staff can bring up the user's session and follow what they did, even somewhat live.
You can even code it to auto-censor sensitive fields when the HTML is stored.
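To make the versioning concrete, here is a small sketch of the lookup side; the "asset-version" comment marker and the directory layout are only illustrative, not an existing convention:

# Pick the CSS/image set that matches the version number embedded in the
# stored HTML. The comment marker and folder layout are illustrative only.
import re

VERSION_RE = re.compile(r"<!--\s*asset-version:\s*(\d+)\s*-->")

def asset_dir(stored_html, base="assets"):
    match = VERSION_RE.search(stored_html)
    if match is None:
        return base + "/current"
    return "%s/v%s" % (base, match.group(1))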
Depending on the specifics of your needs, perhaps you could get away with using one of the many free web-page thumbnail services? SnapCasa, for example, lets you generate thousands per month at no charge and with no advertising (I've never used it; I just googled "free thumbnail service" to find it).
Just a thought.