How to get appWindow from Chrome.app.window.create - google-chrome

I am trying to write a Chrome app that can open and close chrome.app windows on both displays of a system configured with two monitors. When launched, the Chrome app establishes a socket connection with a native application running on the same computer; I also open a hidden window via chrome.app.window.create to keep the Chrome app up and running. The native application then reads a configuration file and sends a series of 'openBrowser' commands to the Chrome app over the socket.
When the Chrome app receives an 'openBrowser' command, it calls the API method chrome.app.window.create, passing the create parameters AND a callback function. A code snippet is below:
NPMBrowserManager.prototype.openBrowser = function (browserId, htmlFile, browserBounds, hidden, grabFocus)
{
    var browserManager = this;
    var createParameters = {};
    createParameters.bounds = browserBounds;
    createParameters.hidden = hidden;
    chrome.app.window.create(htmlFile, createParameters, function (appWindow)
    {
        // Check to see if I got a non-undefined appWindow.
        if (appWindow !== undefined)
        {
            browserManager.browsers.push({"browserId": browserId, "window": appWindow});
            console.info("NPMBrowserManager.openBrowser: Added browser, id = " + browserId + ", count = " + browserManager.browsers.length);
        }
    });
};
Unfortunately, the appWindow parameter passed to the create callback is always undefined. I suspect it has something to do with the fact that the method openBrowser is itself being called by another method that processes commands received from the native application. The window opens exactly where and when I want it to; I just can't seem to cache away any information about the new window that I can use later to close or move it.
I want to be able to cache away the appWindow so that I can close or modify the created window later on in the workflow.
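For illustration, here is a rough sketch of the close path I have in mind against that cache (closeBrowser is just an illustrative name; it assumes the cached window entries are valid AppWindow objects):
NPMBrowserManager.prototype.closeBrowser = function (browserId)
{
    var browserManager = this;
    // Look up the cached entry for this browserId and close its AppWindow.
    for (var i = 0; i < browserManager.browsers.length; i++)
    {
        if (browserManager.browsers[i].browserId === browserId)
        {
            browserManager.browsers[i].window.close();
            browserManager.browsers.splice(i, 1);
            return;
        }
    }
    console.warn("NPMBrowserManager.closeBrowser: No cached browser with id = " + browserId);
};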
As a side note, I’ve noticed that appWindow is NOT undefined if I call the openBrowser method from within the callback that is associated with the chrome.app.runtime.onLaunched event. I suspect it has something to do with the current script context. I was not able to find any chrome.app documentation that goes into any detail about the chrome app architecture.
I would GREATLY appreciate it if anyone out there could explain how I can get the appWindow of the window created by chrome.app.window.create. By the way, I have also tried calling chrome.app.window.current() to no avail… Very frustrating!!!
I’d also be interested in any documentation that might exist. I am aware of developer.chrome.com, but could not find much documentation other than reference documentation.
Thanks for the help!
Jim

Related

See what file/function deals with a WebSocket?

Is there a way, possibly using the Chrome DevTools, to see in which JavaScript file or function a WebSocket is handled?
For instance, I am able to see the frames of the data in the Frames tab, but I am not able to find where they are handled. Is this even possible using only Chrome's DevTools?
I think doing a full-text search of the page source for "onmessage" is the easiest way of doing this.
Other than that, a more accurate method is to overwrite the native WebSocket object and put in a debugger statement:
// Keep a reference to the native constructor so it can still be called or restored later.
var nativeWebSocket = window.WebSocket;
window.WebSocket = function () {
    debugger;
};
Paste this in the console before the WebSocket is created. You can use "Script First Statement" in Event Listener Breakpoints to pause when the page starts loading.
Chrome will pause when the WebSocket object is created, and you can go up the call stack to find the source code that's responsible.
This may be very different from where the onmessage handler is defined. However, you can then put a manual breakpoint on the line that contains new WebSocket, reload the page, and put this code in the console when the breakpoint is hit:
// "e" is whatever variable holds the newly created WebSocket at the breakpoint.
Object.defineProperty(e, "onmessage", {
    set: function () {
        debugger;
    }
});
Now the debugger will pause when the onmessage property is set on that WebSocket object.
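As a combined sketch of both ideas, assuming the page constructs its sockets through the global WebSocket constructor and assigns onmessage directly, you can wrap the constructor so it breaks both on construction and when the handler is installed:
var NativeWebSocket = window.WebSocket;
window.WebSocket = function (url, protocols) {
    debugger; // the call stack here shows who creates the socket
    var ws = protocols ? new NativeWebSocket(url, protocols) : new NativeWebSocket(url);
    Object.defineProperty(ws, "onmessage", {
        set: function (handler) {
            debugger; // the call stack here shows who installs the onmessage handler
            ws.addEventListener("message", handler); // keep the page working
        }
    });
    return ws;
};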

Chrome Extension, messaging: getting port status

I am trying to get a port's status in an application (not a content script). When I do:
this.port = chrome.runtime.connect("okcbadfdlhldjgkbafhnkcpofabckgde");
I get a valid port object, but I can't find any way to determine whether the port can be used at all (I don't even get a disconnect event if the extension can't be reached).
The only way I have found to get the connectivity state is to trap the exception thrown when calling this.port.postMessage.
Is there a better way?
https://developer.chrome.com/extensions/runtime#method-connect
Update
Running Version 48.0.2564.97 (64-bit) on Linux Ubuntu
No cross-extension messaging, just application to/from extension
Extension source code but note I have since moved on to implement another strategy for the extension because of the issue raised in this question.
Your extension uses a background script that provides a listener function for the chrome.runtime.onMessageExternal event. This event is used to listen for incoming messages sent from external web page scripts (or other extensions) via the chrome.runtime.sendMessage method.
Since your extension does not provide a listener function for the chrome.runtime.onConnectExternal event, chrome.runtime.connect cannot work for your extension.
As far as knowing the connection status is concerned, a simple try-catch block is enough in this case to find out whether the extension supports the port or not. If it does, you also need to check the extension's manifest to see whether a particular host is allowed to send messages to it.
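A minimal sketch of that try-catch check, reusing the extension ID from the question (the ping payload is illustrative, and the exception may only surface once the port has actually been disconnected):
var port = chrome.runtime.connect("okcbadfdlhldjgkbafhnkcpofabckgde");
try {
    // If the port has already been disconnected, postMessage throws.
    port.postMessage({ping: true});
    console.log("postMessage did not throw; port looks usable");
} catch (e) {
    console.log("Port is not usable: " + e.message);
}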
I was able to send a message to your extension by adding the following lines of code to the background script. In addition to this, I also added a matches entry for the host www.example.org in the manifest.
chrome.runtime.onMessageExternal.addListener(
    function (request, _sender, sendResponse) {
        console.log(request);
        ...
    }
);
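If you did want chrome.runtime.connect to work as in the question, a minimal sketch of the missing listener could look like this (the reply payload is illustrative):
chrome.runtime.onConnectExternal.addListener(function (port) {
    // Every external chrome.runtime.connect call now yields a live port.
    port.onMessage.addListener(function (msg) {
        console.log("External message:", msg);
        port.postMessage({ack: true}); // illustrative reply
    });
});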

Retrieving selenium logs and screenshots from grid back in Intern

There are two parts to my question regarding the Intern workflow in case of an exception:
1- Per the Selenium Desired Capabilities specification, RemoteWebDriver captures screenshots on exceptions by default (unless this is disabled by setting webdriver.remote.quietExceptions). Is it possible to retrieve these screenshots in Intern?
2- I have set up a Selenium Grid with multiple platforms/browsers and can execute Intern tests on the grid successfully. However, I am trying to gather the logs back in my Intern environment so that I don't have to sign on to each machine on the grid to see them. I am particularly interested in the server, driver, and browser logs described in the Selenium logging guide. I tried adding the following Intern configuration based on the Selenium Desired Capabilities guide but wasn't able to get any logs:
capabilities: {
    'selenium-version': '2.39.0',
    'driver': 'ALL',
    'webdriver.log.driver': 'INFO',
    'webdriver.chrome.logfile': 'C:\\intern\\logs\\chromedriver.log',
    'webdriver.firefox.logfile': 'C:\\intern\\logs\\firefox.log'
}
To get a screenshot yourself you can call remote.takeScreenshot().then(function (base64Png) {}), but there is no way that I am aware of to retrieve the automatically generated screenshots—there appears to be nothing in the WebDriver JsonWireProtocol to do so.
To retrieve logs, you can call remote.log(typeOfLog).then(function (logs) {}). See the JsonWireProtocol on log for more information on what you get back.
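For example, a minimal sketch of saving both to disk, assuming remote is the Intern remote object from a functional test and that Node's fs module can be loaded in your setup (the file names and the plain require call are illustrative):
var fs = require('fs'); // adjust to your loader (e.g. an AMD plugin) if needed

remote.takeScreenshot().then(function (base64Png) {
    // base64-encoded PNG data, per the callback described above
    fs.writeFileSync('manual-screenshot.png', new Buffer(base64Png, 'base64'));
});

remote.log('browser').then(function (logs) {
    // an array of log entries, as defined by the JsonWireProtocol
    fs.writeFileSync('browser.log', JSON.stringify(logs, null, 2));
});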
There is a way to capture automatically generated screenshots. Using a custom reporter (https://github.com/theintern/intern/wiki/Using-and-Writing-Reporters#custom-reporters) I was able to save a screenshot and write the browser console logs to a file.
As mentioned in the link above, when the '/test/fail' topic callback is called, it is passed a test object. If WebDriver failed internally, this object will have a test.error.cause.value.screen property, which holds the WebDriver-generated screenshot. So the following is what I did:
var fs = require('fs');
if (test.error.cause.value.screen) {
    // The screenshot is base64-encoded PNG data; write it out with Node's fs library
    fs.writeFileSync(test.id + '.png', new Buffer(test.error.cause.value.screen, 'base64'));
}
If you look at the error object, you will also get to see more error information that the webdriver has logged.
Regarding the browser logs, @C Snover has hit the nail on the head, but that information is only available inside the remote object. The remote object is available when the '/session/start' topic callback is called, so I created a map from the session ID in the remote object to the remote object itself. Luckily, the test object passed to '/test/fail' also carries the session ID, so I retrieved the remote object from my map using test.sessionId as the key and logged the browser logs too. In short, this is what I did:
'/session/start': function (remote) {
    sessions[remote.sessionId] = { remote: remote };
},
'/test/fail': function (test) {
    var remote = sessions[test.sessionId].remote;
    remote._wd.log('browser', function (err, logs) {
        // Store the logs array in a file using Node's fs library
    });
}

Call Gnome Shell Shortcuts programmatically

GNOME Shell has great shortcuts; however, I can't find a way to call them programmatically.
Assume that I want to use a GJS script to start Google Chrome, move it to workspace 1, and maximize it, then start Emacs, move it to workspace 2, and maximize it.
This could be done using wm.keybindings: move-to-workspace-1, move-to-workspace-2 and maximize. However, how can I call them programmatically?
I notice that in GJS, Meta.prefs_get_keybinding_action('move-to-workspace-1') will return the guint of action move-to-workspace-1, but I did not find any function to call the action.
In https://github.com/GNOME/mutter/blob/master/src/core/keybindings.c, I found a function meta_display_accelerator_activate, but I could not find a GJS binding for this function.
So, is there any way to call gnome shell shortcuts programmatically?
The best bet to move an application is to grab its Meta.Window object, which is created after it's started.
This is done by getting the active workspace, starting the application, then getting the application's window from the active workspace and moving it.
Sample code for a quick implementation:
const workspace = global.screen.get_active_workspace();
const Gio = imports.gi.Gio;

//CLIname: name used to open the app from a terminal
//wsIndex: workspace you want it on
function openApp(CLIname, wsIndex) {
    let context = new Gio.AppLaunchContext();
    //Flags: 0 = none, 1 = needs terminal, 2 = supports URIs, 4 = supports startup notification
    //null because setting an application name has no use here
    Gio.AppInfo.create_from_commandline(CLIname, null, 2).launch([], context);
    //Unfortunately, there is no way I know of to grab a specific window if you don't know its index,
    //and the new window may not appear in the list immediately after launch.
    for (let w of workspace.list_windows()) {
        //Check whether the window title or the window manager class matches the CLIname.
        if (w.title.toLowerCase().includes(CLIname.toLowerCase()) ||
            w.get_wm_class().toLowerCase().includes(CLIname.toLowerCase())) {
            //Found a match? Move it!
            w.change_workspace(global.screen.get_workspace_by_index(wsIndex));
        }
    }
}
/*init(), enable() and disable() aren't relevant here*/
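The sample only moves the window; as far as I know the same Meta.Window object can also be maximized directly, so something like the following could go inside the matching branch (assuming Meta is imported):
const Meta = imports.gi.Meta;
//maximize the matched window in both directions
w.maximize(Meta.MaximizeFlags.HORIZONTAL | Meta.MaximizeFlags.VERTICAL);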
To answer the actual question: it might be possible to do this from an extension by forcing the GNOME on-screen keyboard to emit those keys, but that would require matching the right keys and emulating I/O for every keybinding you wish to execute, and the bindings can change whenever the user wants them to.

Chrome extension : access local storage more than once in content script

I know how to access localStorage in a content script, but only once. I access it via sendRequest, but when I try to use this method in an event handler, the JavaScript file doesn't even load.
Is it possible to access the options many times, for example whenever the onclick event is fired?
I looked on the Google Code website and found something to create a connection between the content script and the background page using chrome.extension.connect(). Do I need to use that?
Thanks!
Actually you can use sendRequest as many times as you want, but if you want to do it another way you can open a long-lived channel (or what I call a "message tunnel") between the content script and the background page to communicate.
In your content script, you can use
var port = chrome.extension.connect({name: "myChannel"});
to open up a channel.
Then you can use
port.postMessage({message: "This is a message."});
to send a new message to the background page.
port.onMessage.addListener(function(msg) { }) listens for incoming messages.
In your background page,
chrome.extension.onConnect.addListener(function(port) {
    port.onMessage.addListener(function(msg) {
        if (port.name == "myChannel") {
            console.log(msg.message + " from port " + port.name); // gives you the message
        }
    });
});
listens for new messages on a specific port.
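Putting this together for the localStorage/options use case, a minimal sketch (the "getOptions" action name and the "myOption" key are just illustrative):
In the content script:
// Ask the background page for an option value on every click.
var port = chrome.extension.connect({name: "myChannel"});
port.onMessage.addListener(function (msg) {
    console.log("Option value from background page: " + msg.value);
});
document.addEventListener("click", function () {
    port.postMessage({action: "getOptions", key: "myOption"});
});
In the background page:
// Answer each request with the value stored in localStorage.
chrome.extension.onConnect.addListener(function (port) {
    if (port.name !== "myChannel") { return; }
    port.onMessage.addListener(function (msg) {
        if (msg.action === "getOptions") {
            port.postMessage({value: localStorage.getItem(msg.key)});
        }
    });
});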