I want to get all the open URLs from the browsers running on the device without having to develop extensions. There are two reasons I don't want to develop extensions. First, for Chrome, the user has to go to the Chrome Web Store to install the extension. Second, I would have to write an extension for every browser installed.
So I started off by looking into Scripting Bridge, but it turns out it doesn't work for Chrome without GUI scripting (for which users have to enable assistive devices).
So I am looking into building a plugin instead. The catch is that plugins only handle certain MIME types. How do I make sure my plugin is loaded by any webpage? Unless there is a universal MIME type present in all webpages, I am not sure how to solve this problem.
In any case, do you guys think this is the best way to go? Or is there any other way to get the URLs of all open tabs?
The only way to get a plugin automatically added to all pages would be with an extension, and without the plugin being loaded in every page there is no way for it to know about any page other than the one a given plugin instance is loaded in.
Plugins are not aware of the browser, only of the page they are inserted into (or loaded to handle, in the case of a plugin that handles a MIME type such as application/pdf). See http://npapi.com/extensions for more information on the capabilities of a plugin vs. an extension.
Because plugins only know about a single page, they can't find out about other pages in the same browser process, including other tabs. They simply don't have any method for doing this, and that is by design: the API developers didn't want a plugin that handles some media type to be able to tie into your banking site in another tab without you realizing it. Certain extension frameworks might let you find a way to do that anyway, but a plugin by itself cannot.
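To illustrate the extension-plus-plugin combination described above, here is a minimal sketch of a content script an extension would inject into every page; the MIME type and the idea of reporting back to a native process are hypothetical, not something NPAPI gives you for free:

    // content script (sketch), declared in the extension manifest to run on
    // all URLs; "application/x-url-collector" is a hypothetical MIME type
    // that the NPAPI plugin would register for
    var embed = document.createElement("embed");
    embed.type = "application/x-url-collector";
    embed.width = 0;
    embed.height = 0;
    document.body.appendChild(embed);
    // each page now hosts its own plugin instance, which can read
    // window.location.href and report it to a shared native process

Note that this still requires an extension per browser, which is exactly what the original question was trying to avoid.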
I have developed a simple NPAPI plugin using FireBreath; the plugin is used to launch another process when a particular URL is visited.
I would like to extend the plugin to detect and log what users (on an intranet) are copying, and optionally restrict which users can copy content based on their role.
I have not been able to work out how to detect the copy event (if such a thing exists); any help and direction would be appreciated. I'm specifically looking at Chrome on Windows initially.
Many thanks,
Jon
None of what you're describing can be done with an NPAPI plugin. NPAPI plugins don't know anything about the browser's copying and pasting, nor do they know anything about what URLs the browser visits, unless they happen to be embedded via an object tag on the page.
At best, what you're trying to do would require an extension, though I don't know whether it's possible even then.
See http://npapi.com/extensions
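For what it's worth, the extension route would start with a content script listening for the DOM copy event; this is a sketch assuming a Chrome extension, and the message format is made up:

    // content script (sketch): detect what the user copies on a page
    document.addEventListener("copy", function (e) {
        var copied = window.getSelection().toString();
        // hand the copied text to the extension's background page,
        // which could log it or forward it to an intranet server
        chrome.runtime.sendMessage({ copied: copied, url: location.href });
    });

Restricting copying by role would then mean calling e.preventDefault() in that handler for users who aren't allowed to copy.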
I appreciate this question may appear broad, but that is because I am looking anywhere and everywhere for a possible solution to something very simple.
The goal is, from a web page opened in Chrome, to scan the DOM, extract specific elements, and save them silently in some way that I can then access.
There is no intention for any of this to be published as an app or extension; it is simply me wanting to access my own rendered browser data and extract and store it on my own computer. For this reason, I am currently finding Chrome's exhaustive sandboxing security frustrating and irrelevant, to say the least.
I have a working Chrome extension which extracts all of the data I want; it has a list of 5 strings that I want to save, and that's as far as I have gotten.
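For context, the extraction side of such an extension is just a content script along these lines (a sketch; the selector and message shape are hypothetical):

    // content script (sketch): pull the five values out of the rendered DOM;
    // ".target-value" stands in for whatever selector matches the real page
    var strings = [];
    var nodes = document.querySelectorAll(".target-value");
    for (var i = 0; i < nodes.length; i++) {
        strings.push(nodes[i].textContent.trim());
    }
    // hand the strings to the background page; getting them from there
    // onto the local filesystem is exactly the unsolved part
    chrome.runtime.sendMessage({ strings: strings });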
I have looked into these areas:
Existing NPAPI plugins (could not get NPAPI file I/O to work).
Creating my own NPAPI plugin - seems like a huge overhead and learning exercise simply to get external access to 5 strings.
Every aspect of the Chrome extension (and even app) APIs (particularly localStorage, which is not accessible from outside the extension).
Any other thoughts?
I realise there is a solution in creating my own NPAPI plugin, but I would like to believe there is another approach that lets me link a constructed DOM with my local system. I am open to any other option. (I have considered a purely bash approach on Linux, but I need to generate the DOM as though it were in my browser.)
I just want to be able to access specifically extracted parts of a DOM on my local system, not write an entirely new C++ plugin to facilitate this very basic feature.
I have recently converted a GWT web application to work in HTML5 offline mode. So far it seems to work fine, but I'm wondering whether it's a good idea to serve different cache.manifest versions for different browsers.
As we know, GWT needs only one permutation per target browser (assuming a single language, to keep it simple), so it's obvious we would need to download just one XXXXXX.cache.html for a given target browser.
I can see it's possible: on the server side I could check the User-Agent HTTP header and return the contents of the appropriate version of my cache.manifest, setting all headers accordingly so as not to break the offline status-checking behavior. The rest of the resources would be served with no custom filtering.
Is it a good idea to optimize it this way? Is there anything I could be missing?
By accident, I came across the following project: Mobile GWT. A quick review of the documentation (HTML5 Manifest) and code (HTML5ManifestServletBase) reveals that the manifest is prepared per client, so that only the required resources are sent over the network. A pity; I was just about to write my own open source solution...
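For illustration, the User-Agent idea from the question could look like the sketch below; I'm using a bare Node.js handler rather than a GWT/Java servlet, and the manifest file names and UA test are made up and much cruder than GWT's real permutation selection:

    // Node.js sketch: serve a different cache manifest per browser family
    var http = require("http");
    var fs = require("fs");

    http.createServer(function (req, res) {
        if (req.url === "/cache.manifest") {
            var ua = req.headers["user-agent"] || "";
            // hypothetical file names; real GWT permutations are keyed on
            // more properties than just the browser engine
            var file = ua.indexOf("Firefox") !== -1
                ? "manifest-gecko.appcache"
                : "manifest-webkit.appcache";
            res.writeHead(200, {
                "Content-Type": "text/cache-manifest",
                "Cache-Control": "no-cache" // the manifest itself must stay fresh
            });
            fs.createReadStream(file).pipe(res);
        } else {
            res.writeHead(404);
            res.end();
        }
    }).listen(8080);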
Is it possible to access Google Chrome's cache from within an extension?
I'd like to write an extension that loads a cached version of a page when the online one can't be accessed (e.g. Internet connectivity issue).
Updated: I know I could write an NPAPI plugin accessible through an extension to accomplish this, but I'd rather not suffer through writing one... I am after a solution without resorting to NPAPI, please.
Note: as far as I can tell, Google Chrome doesn't support this functionality (at least not out of the box): I just had an episode of "no Internet access" and I was stranded...
Unfortunately, I'm 99% sure this is impossible without using an NPAPI plugin in your extension.
Chrome extensions are sandboxed to their own process, and can only access files within the extension's folder.
There is some support for things like chrome://favicon/, but that's about it, at least for now.
Source (Google Chrome Extensions Reference)
P.S. I just had a crazy idea. Extensions only have access to files in their own folder... but Chrome stores its cache in the Cache folder. What you might try is copying (or moving) the Cache folder into a subfolder within the extension. The extension should then be able to access the cache.
Whether this is enough to actually enable offline mode... I don't know. I do see some HTML files (and obviously a lot of images) within my Cache folder, though.
In fact, even without using an extension, I can open up the HTML files in Chrome. And because they're stored on your computer, you should be able to access them even without internet.
P.S. The Cache folder is stored at PATH-TO-CHROME/Default/Cache.
P.P.S. There is a way to store an entire webpage and archive it for later use. Check out this extension:
https://chrome.google.com/extensions/detail/mpiodijhokgodhhofbcjdecpffjipkle
Just make a simple extension manifest with a content script that loads jQuery from a CDN, then uses it to grab all the <a> elements on the page and alter the href values to have this prefix: http://webcache.googleusercontent.com/search?q=cache:
So <a href="http://stackoverflow.com/questions/blah"> becomes:
<a href="http://webcache.googleusercontent.com/search?q=cache:http://stackoverflow.com/questions/blah">
Voilà, you are cache surfing, but you still need to be able to reach Google. I understand this answer is a bit outside the scope of the question, but it still solves a lot of web connectivity issues.
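The rewrite itself is only a few lines once jQuery is on the page; a sketch of the content script described above:

    // content script (sketch), assuming jQuery has already been loaded
    var PREFIX = "http://webcache.googleusercontent.com/search?q=cache:";
    $("a[href^='http']").each(function () {
        var link = $(this);
        link.attr("href", PREFIX + link.attr("href"));
    });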
I'm tempted to just go write this plugin but I bet it'd be taboo in Google's eyes, so it'd get blocked or removed rather quickly. :)
I've got Firebug (my team does not have Firefox) and the IE Developer Toolbar (IE7), but I cannot seem to figure out how to easily validate whether the files referenced by a page are loading (I see JavaScript errors, but that doesn't succinctly point me to the exact file in a hierarchy of jQuery - jQuery UI - datepicker files).
Additionally, I'd like to be able to do this remotely, because on our corporate domain some files load fine for me but not for anyone else, because they sometimes get encrypted to my domain user. So it would be nice if this process were either simple enough for my teammates to do very quickly, or, even better, somehow automated from a remote machine or web service request.
I thought I had seen a simple place in Firebug to validate what loaded and what did not, but I can't find it now.
What are my options?
Have you tried JavaScript Lint?
Or the JavaScript plugin for Eclipse.
Do you know YSlow?
It provides a set of excellent tools for web development, and I think it solves your question.
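As a quick automated check of the kind the original question asks about (my sketch, not part of either suggestion; it needs a modern console with fetch, so it won't run inside IE7 itself):

    // paste into the browser console: re-request every script, stylesheet
    // and image referenced by the page and report anything that fails
    var els = document.querySelectorAll("script[src], link[href], img[src]");
    Array.prototype.forEach.call(els, function (el) {
        var url = el.src || el.href;
        fetch(url, { method: "HEAD" })
            .then(function (res) {
                if (!res.ok) console.warn("FAILED (" + res.status + "): " + url);
            })
            .catch(function () {
                // cross-origin resources without CORS headers land here
                console.warn("FAILED (network/CORS): " + url);
            });
    });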