I am using Chrome DevTools to generate an audit report and save it as a .json file. How can I reopen that report in Chrome? I cannot find a way to open the .json file there.
Open-AudIT intelligently scans an organization's network and stores the configurations of the discovered devices.
A powerful reporting framework enables information such as software licensing, configuration changes, non-authorized devices, capacity utilization and hardware warranty status to be extracted and explored.
Open-AudIT Enterprise comes with additional features including Business Dashboards, Report filtering, Scheduled discovery, Scheduled Reports and Maps.
Operating System Concepts, 9th edition, page 123, "MULTIPROCESS
ARCHITECTURE—CHROME BROWSER"
In this section the author says that each tab represents a separate process, but when I look at Task Manager (Windows) there is only one process under "Google Chrome", even though I have several tabs open (Stack Overflow among them). Why can't I find a process per tab in Task Manager? There are also some other processes, but I don't think they have anything to do with these tabs, because they are still there when only one tab is open. So how should I understand what the book says?
Chromium supports four different models that affect how the browser allocates pages into renderer processes. By default, Chromium (Chrome) uses a separate OS process for each instance of a web site the user visits. However, users can specify command-line switches when starting Chromium to select one of the other architectures: one process for all instances of a web site, one process for each group of connected tabs, or everything in a single process.
In my case I have the following situation (screenshots of the macOS and Windows process lists omitted): each of the Chrome tasks has its own PID (process ID).
You can also refer to the questions Chrome is using 1 process per website instead of per tab and Chrome tabs and processes.
And here is the official documentation on the process model of Chrome/Chromium.
Process-per-site:
Chromium also supports a process model that isolates different sites from each other, but groups all instances of the same site into the same process. To use this model, users should specify a --process-per-site command-line switch when starting Chromium. This creates fewer renderer processes, trading some robustness for lower memory overhead. This model is based on the origin of the content and not the relationships between tabs.
Process-per-tab:
The process-per-site-instance and process-per-site models both consider the origin of the content when creating renderer processes. Chromium also supports a simpler model which dedicates one renderer process to each group of script-connected tabs. This model can be selected using the --process-per-tab command-line switch.
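For reference, these models can be selected by launching the browser with the corresponding switch (Windows executable name shown; the first two switches are quoted from the documentation above, and --single-process is the documented switch for the single-process model):

chrome.exe --process-per-site
chrome.exe --process-per-tab
chrome.exe --single-process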
Yes, it shows the title of the currently opened tab (the process's MainWindowTitle). Here is a simple console program to check:
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // List every running Chrome process with its main window title.
        // Note: the process name is given without the ".exe" suffix.
        foreach (Process p in Process.GetProcessesByName("chrome"))
        {
            Console.WriteLine("Process: {0}\n Title: {1}\n",
                              p.ProcessName, p.MainWindowTitle);
        }
    }
}
I have an embedded system which runs firmware and exposes a USB mass storage device of 79 kB. So when you plug the device into any computer (Mac/Windows), it shows up as a 79 kB flash drive. The firmware creates files which hold transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed the options down to a web browser: the user (on a Mac/Windows PC) plugs in the USB mass storage device, opens an HTML file on the drive, and views all the transactions as tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
This is the minimum requirement. Further, through the web browser the user wants to write some configuration parameters, e.g. through a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were selected so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB class or some application run locally on the Mac/Windows desktop/laptop. In that case the application should run on both Mac and Windows, i.e. the code should be the same but can be built into separate packages, one for Mac and the other (.exe) for Windows. Please suggest a platform like that, with a single source that can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing were possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the device contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence:
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by having a bunch of essentially identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.) and/or by having the drive unmount/remount itself if necessary to clear out any caching.
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
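A minimal host-side sketch of that read sequence, assuming the volume mounts at a hypothetical drive letter E: and the directory layout above (the firmware side, which watches which sectors are read, is not shown):

using System;
using System.IO;

class SetDeviceClock
{
    static void Main()
    {
        // Hypothetical mount point of the device's mass-storage volume.
        string root = @"E:\SET_DATE\";
        DateTime now = DateTime.Now;

        // Reading each (empty) file signals one field of the date to the
        // firmware; the read itself is the message.
        string[] sequence =
        {
            "YEAR" + now.ToString("yy"),
            "MON"  + now.ToString("MM"),
            "DATE" + now.ToString("dd"),
            "H"    + now.ToString("HH"),
            "M"    + now.ToString("mm"),
            "S"    + now.ToString("ss"),
            "SET"  // the final read commits the new clock value
        };

        foreach (string name in sequence)
        {
            File.ReadAllBytes(root + name);
        }
    }
}

Note that the host OS may satisfy repeated reads from its cache rather than touching the device again, which is exactly why the unmount/remount trick above matters when setting the clock more than once.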
I'm using the Couchbase admin tool, and one of the most useful features for me is the ability to go into the documents of a particular bucket and then, using the document filter dialog, type a document-ID prefix that I've reserved for a particular document type, immediately getting a filtered list of just the documents of that type.
For instance, if I had a bucket called "sports" which held data for all sorts of sports, I might have a set of records related to tennis, football, etc., with the IDs of these documents all prefixed with the sport in question. In that case I'd simply type football into the Document Filter dialog and would expect to see, as I type, just those documents whose IDs start with "football". This works perfectly on my main development machine, but on my laptop and in my production environment typing produces no results at all. I can press the "Lookup Id" button in any environment, and as long as a proper ID has been specified it will load the document, but the real-time filtering is critical to making the admin functionality useful to me.
It's worth mentioning that both my main dev machine and laptop are on OSX and production is Ubuntu. Also of note, my main development environment is still creeping along on version 2.0.1 because I'm afraid of losing this functionality, but my laptop is running 2.5.1 and I think prod is the same.
Also, looking at the network panel in the debugger I do notice an important variation:
Both laptop and main dev machines load the document viewer without any JS errors.
Independent of typing into the filter dialog, my main dev machine periodically fires off REST calls to: http://couchserver:8091/pools/default?uuid=xxxxxx&waitChange=20000&etag=xxxxxx
As soon as I type into the filter dialog, I see network requests that look like this: http://couchserver:8091/couchBase/reference_data/_all_docs?startkey=%22football%22&endkey=%22football%EF%BF%BF%22&skip=0&include_docs=true&limit=21&_=1399627171015
My laptop, where the functionality doesn't work, also shows the basic polling traffic listed above, but when I type into the filter dialog no request is sent (and no JS error is thrown either). Just silence. :(
It appears from IRC and other channels that this functionality has been removed because it was causing stability problems with large datasets. This is a bit worrisome to me and I still feel strongly that this functionality is highly desirable in an admin tool (at least in development environments although I would argue both prod and dev).
Anyway, while the UI still uses the "filter" terminology, I think it's fair to say the filter functionality has been removed. I will now have to write my own admin interface. :(
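As a starting point, the prefix filtering can be reproduced by calling the _all_docs endpoint from the network trace directly. A minimal sketch (the host and bucket are the ones from my trace above, and the prefix is the running example, so adjust for your environment):

using System;
using System.Net;

class PrefixFilter
{
    static void Main()
    {
        // Reproduce the admin UI's prefix filter with the startkey/endkey
        // trick seen in the network panel (%EF%BF%BF is U+FFFF, an upper
        // bound for all keys that start with the prefix).
        string prefix = "football";
        string url = "http://couchserver:8091/couchBase/reference_data/_all_docs"
                   + "?startkey=%22" + prefix + "%22"
                   + "&endkey=%22" + prefix + "%EF%BF%BF%22"
                   + "&include_docs=true&limit=21";

        using (WebClient client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(url));
        }
    }
}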
Is there a technical reason, why a Google Drive application must be installed through the Chrome Web Store (which severely limits the number of potential users)?
The reason that installation is required is to give users the ability to access applications from within the Google Drive user interface. Without installation, users would have no starting point for most applications, as they would not be able to start at a specific file, and then choose an application.
That said, I realize it can be difficult to work with in early development. We (the Google Drive team) are evaluating if we should remove this requirement or not. I suspect we'll have a final answer/solution in the next few weeks.
Update: We have removed the installation requirement. Chrome Web Store installation is no longer required for an app to work with a user's Drive transparently, but it is still required to take advantage of Google Drive UI integrations.
To provide the create->xxx behaviour that makes a new application document from the Drive interface, and to be able to open existing documents from links, there must be some kind of manifest registered with Google's systems and some kind of agreement from the user that an application can access their documents and work with specific file types. There's little way around this when you think about the consequences of not doing it.
That said, there are two high level issues that make for compatibility problems.
As the poster says, the requirement to install in the chrome store
severely limits the number of potential users.
But why? Why do the majority of Chrome Web Store applications say that they only work on Chrome? Most of these are wrappers around web applications that work on a range of browsers, yet click through a selection and most display "works on chrome", i.e. they only install on Chrome.
Before we launched our application on Chrome, we found that someone had created an "xxxxxxx launcher" in the store that simply forwards to our web app page. We're still wondering why it only "works on chrome". I suspect that some default template for the web store has:
"container" : "CHROME",
in it, which is the configuration option that means Chrome only. That said, I can't find such a template, so I'm very confused about why this is. It would be healthier if people picked Chrome because it's the better browser (which it is in a number of regards), not because their choice is limited if they don't. People can always write to the application vendor and ask if this limitation is really necessary.
The second thought is that a standardised manifest format across cloud storage providers would mean much higher take-up among web app vendors. Although it isn't hugely complex to integrate with, for example, Google Drive, the back end and the ironing out of details took over a week in total. Multiply that by lots of storage providers and you lose an engineer for two months, plus the maintenance afterwards. The more that is common across vendor integrations, the more likely integration is to happen.
And while I'm on it, a JavaScript widget for opening and saving (I know Google has one for opening) from each cloud storage provider would improve integration by web app vendors. We should be using one storage provider across multiple applications, not one web application across multiple storage providers; the file UI should be common to the storage provider.
In order to sync with the local file system, one would need to install a browser plug-in in order to bridge the Web with the local computer. By default, Web applications don't have file I/O permissions on the user's hard drive for security reasons. Browser extensions, on the other hand, do not suffer from this limitation as it's assumed that when you, the user, give an application permission to be installed on your computer, you give it permissions to access more resources on the local computer.
Considering that the add-on architectures for different browsers differ, Google decided to build this application for their own platform first. You can also find Google Drive in the Android/Play marketplace, one of Google's other app marketplaces.
In the future, if Google Drive is successful, there may very well be add-ons created for Firefox and Internet Explorer, but this of course has yet to be done and depends on whether or not Google either releases the API's to the public or internally makes a decision to develop add-ons for other browsers as well.
Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated and I'd like to reduce the number of steps in order to make the process more user-friendly/foolproof. Under ideal circumstances the above steps would be replaced with a single one: clicking, say, "Scan and upload" in the webapp would initiate the scan, complete it, and upload the resulting image automatically. Unfortunately the state of "web/scanner integration" seems quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously if you have the time or a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and using an existing TWAIN library rather than writing your own TWAIN code.
Once the ActiveX control or plugin is installed, the rest of the work is a combination of JavaScript & HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
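On the server side, a minimal sketch of such a handler in ASP.NET might look like the following (the handler class and the "image" field name are hypothetical; any server stack that parses multipart form posts will do):

using System.Web;

public class ScanUploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The AJAX upload arrives as an ordinary multipart form post.
        HttpPostedFile file = context.Request.Files["image"];
        if (file != null && file.ContentLength > 0)
        {
            // A real handler would validate the content before saving.
            string name = System.IO.Path.GetFileName(file.FileName);
            file.SaveAs(context.Server.MapPath("~/App_Data/" + name));
        }
        context.Response.StatusCode = 200;
    }

    public bool IsReusable
    {
        get { return false; }
    }
}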
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at a bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded which said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage, the page changed to say Scanning, and then it showed the scanned document for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location, and the ActiveX control was polling for it to appear; once it appeared, the control showed the image on screen, and once the staff member had approved it, the control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work, but it can be done.
Silverlight 4 is coming out soon. It is supposed to be able to interact with COM objects on the user's computer (provided they are running Windows). In theory you could call WIA methods from your Silverlight web page.
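A hedged sketch of what that COM interaction might look like (this requires an elevated-trust, out-of-browser Silverlight 4 app, since AutomationFactory is only available there; "WIA.CommonDialog" is the standard WIA automation ProgID, and the save path is made up):

using System.Runtime.InteropServices.Automation;

public static class ScannerInterop
{
    public static void AcquireAndSave()
    {
        // AutomationFactory.IsAvailable is false outside elevated trust.
        if (AutomationFactory.IsAvailable)
        {
            dynamic wia = AutomationFactory.CreateObject("WIA.CommonDialog");
            dynamic image = wia.ShowAcquireImage();  // shows the scan dialog
            image.SaveFile(@"C:\scans\scan.jpg");    // hypothetical path
        }
    }
}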
We implemented a Remote Deposit solution for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts all the TWAIN minutiae. This approach is slightly better than using an ActiveX control. The .NET Framework is needed on the client. The scanned images are posted back to a hidden variable on the page and are processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps would probably be a customized scan utility that the user downloads and runs on their local machine.
SANE or TWAIN would handle getting the scanned image, and cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file becomes available.
If that isn't an option, you might look into what options your users are likely to have in their scanner software. I believe many programs now support scanning to email or FTP.
The solution I have used for an intranet app with multifunction scanner/copiers was to scan to an SMB share that the web server had access to. The user just goes to the copier, scans to the share, and when they get back to their desk they open the new-scans page, which shows a list of all the new, unprocessed files.
Since your audience is a controlled environment, you can write your own browser extension/program based on WIA/TWAIN that does the scanning. If you choose browser extensions (BHO/ActiveX/XPCOM, etc.), you need to get the user's permission to install your extension. If you choose to write a program, you may need web-deployment technologies like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display a GUI written by the various scanner driver developers. It may be the only way to support old scanners or features not exposed via other interfaces, like full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier thanks to a scripting object model, although scanner-specific features are not available and some old scanners do not support the interface.
After scanning, you can call a web service to notify the server, and the web page can refresh periodically to check for new images.
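To illustrate the WIA scripting model, a rough desktop sketch might look like this (it assumes a COM reference to the Microsoft Windows Image Acquisition Library v2.0; the save path and upload URL are placeholders):

using System.Net;
using WIA;  // COM reference: Microsoft Windows Image Acquisition Library v2.0

class WiaScanUpload
{
    static void Main()
    {
        // The WIA common dialog handles device selection and scanning.
        CommonDialog dialog = new CommonDialog();
        ImageFile image = dialog.ShowAcquireImage();

        string path = @"C:\temp\scan.jpg";  // placeholder local path
        image.SaveFile(path);

        // Hand the result to the web app; the URL is a placeholder.
        using (WebClient client = new WebClient())
        {
            client.UploadFile("http://example.com/scans/upload", path);
        }
    }
}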
We have done something similar. We used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php, $49).
1) We developed a small .NET application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses the Scan button, a prompt appears asking for the file name. The user enters the transaction ID as the file name.
4) Another .NET application (or maybe the same one mentioned before) reads this file and uploads it into the database, treating the file name as the transaction ID.
Worked like a hot knife through butter!
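Step 1 might look roughly like this (the install path is an assumption; QuickScan itself prompts for the file name, which is where the user types the transaction ID):

using System.Diagnostics;

class ScanButton
{
    static void Scan()
    {
        // Launch the command-line TWAIN program as a shell command and
        // wait for the scan (and the file-name prompt) to finish.
        using (Process p = Process.Start(@"C:\Tools\QuickScan.exe"))
        {
            p.WaitForExit();
        }
    }
}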
You could also try displaying the transaction ID in IE and having the user select the ID and then press Scan. Your application would read the SELECTED text and save the file using the SELECTED text as the file name. We haven't tried it, but it should work.
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can involve a lot of different technologies besides HTML and JavaScript.
The cool way of solving this problem -- in fact, I have already used it for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies addressing, since you use the JID (Jabber ID) to look up the software agent rather than some host+port addressing scheme. The second interesting part is that XMPP makes it easier for the server to push information to the client.
But if you don't want to deal with XMPP, you can still do the same thing with a lightweight embedded HTTP server -- HTTP::Server::Simple in Perl -- and somehow register the current scanner address with the server so it can call back.
And a last option, which is not so cute, is to have the software agent poll the server to see whether there is a "scan document and upload" order for that specific machine, and perform the operation when one is present.
In summary, having a local software agent interact with the local hardware doesn't make your webapp any less "web", as long as you use web standards -- like XML, SOAP and others -- for the communication.
You can put a Java applet on your website. It can access the scanner and send the data via REST to your web server.