GNOME Shell has great keyboard shortcuts; however, I can't find a way to invoke them programmatically.
Assume that I want to use a GJS script to start Google Chrome, move it to workspace 1, and maximize it, then start Emacs, move it to workspace 2, and maximize it.
This could be done with the wm.keybindings move-to-workspace-1, move-to-workspace-2 and maximize. However, how do I invoke them programmatically?
I noticed that in GJS, Meta.prefs_get_keybinding_action('move-to-workspace-1') returns the guint of the move-to-workspace-1 action, but I did not find any function to invoke that action.
In https://github.com/GNOME/mutter/blob/master/src/core/keybindings.c, I found a function meta_display_accelerator_activate, but I could not find a GJS binding for this function.
So, is there any way to call gnome shell shortcuts programmatically?
Your best bet for moving an application is to grab its Meta.Window object, which is created once the application has started.
This is done by getting the active workspace, starting the application, then getting the application's window from the active workspace and moving it.
Sample code for a quick implementation:
const Gio = imports.gi.Gio;

const workspace = global.screen.get_active_workspace();

// CLIname: name used to open the app from a terminal
// wsIndex: index of the workspace you want it on
function openApp(CLIname, wsIndex) {
    let context = new Gio.AppLaunchContext();
    // Flags: 0 = no flags, 1 = needs a terminal window, 2 = supports URIs,
    // 4 = supports startup notification (there is no 3).
    // null because setting a name has no use here.
    Gio.AppInfo.create_from_commandline(CLIname, null, 2).launch([], context);
    // Unfortunately, there is no way I know of to grab a specific window
    // if you don't know its index.
    for (let w of workspace.list_windows()) {
        // Check whether the window title or the window manager class matches
        // the CLIname. I haven't found any that don't match either yet.
        if (w.title.toLowerCase().includes(CLIname.toLowerCase()) ||
            w.get_wm_class().toLowerCase().includes(CLIname.toLowerCase())) {
            // Found a match? Move it!
            w.change_workspace(global.screen.get_workspace_by_index(wsIndex));
        }
    }
}

/* init(), enable() and disable() aren't relevant here */
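For the scenario in the question, usage might look like the sketch below. The command names and the zero-based workspace indices are assumptions, and the maximize hint is just that: as far as I know Meta.Window exposes maximize(), but check the Meta documentation for your Shell version.

// Hypothetical usage (command names are guesses; "workspace 1" is index 0):
openApp('google-chrome', 0);
openApp('emacs', 1);

// To also maximize a matched window inside the loop above, something like
// this should work (Meta.MaximizeFlags comes from imports.gi.Meta):
// w.maximize(Meta.MaximizeFlags.BOTH);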
To answer the actual question, way at the end: it might be possible to do this from an extension by forcing the GNOME on-screen keyboard to emit those keys, but that would require matching the right keys and emulating input for every keybinding you wish to execute, and those bindings can change whenever the user wants them to.
I would like to offer the opportunity to view output from the same data in a spreadsheet, in a TBA sidebar and, ideally, in another type of HTML window for output created, for example, with a JavaScript library like THREE.
The non-Google version I made is a web page with iframes that can be resized, dragged and opened/closed and, most importantly, whose content shares the same record object in the top window. So I believe, perhaps naively, that something similar could be made an option inside this established and popular application.
At the very least, the TBA trial has shown me it is useful to view and manipulate information from either the sheet or the TBA. The facility to navigate large building projects, clone rooms and floors, and combine JSON records (stored in repositories like myjson) for collaborative work is particularly inspiring for me.
I have tried using the sidebar for different HTML files, but the fact that only one stays open is not very useful, and frankly, sharing record objects is still beyond me. So that is the main question. Whether the Google people would consider an extra window type is probably a bit ambitious, but I think it is worth asking.
You can't maintain a global variable across calls to HtmlService. When you fire off an HtmlService instance, which runs in the browser, the server side code that launched it exits.
From that point control is client side, in the HtmlService code. If you then launch a server side function (using google.script.run from client side), a new instance of the server side script is launched, with no memory of the previous instance - which means that any global variables are re-initialized.
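Here is a minimal sketch of that behaviour (the function name is only illustrative): a server-side global is rebuilt on every google.script.run call, so it never accumulates state between calls.

// Server side (Code.gs) -- illustrative only
var counter = 0;   // re-initialized every time a new server instance starts

function increment() {
  counter = counter + 1;
  return counter;  // always returns 1 when called repeatedly via google.script.run
}

// Client side (inside the HtmlService page)
// google.script.run.withSuccessHandler(function(n) {
//   console.log(n);  // logs 1 every time, not 1, 2, 3, ...
// }).increment();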
There are a number of techniques for persisting values across calls.
The simplest one, of course, is to pass the value to the HtmlService page in the first place, then pass it back to the server side as an argument to google.script.run.
Another is to use the Properties Service to hold your values; they will still be there when you go back, but there is a 9k maximum entry size.
If you need more space, the Cache Service can hold 100k in a single entry and you can use it in the same way, as sketched below (there is a slight chance the entry will be cleaned away, although that has never happened to me).
If you need even more space, there are techniques for compressing and/or spreading a single object across several cache entries, as documented here: http://ramblings.mcpher.com/Home/excelquirks/gassnips/squuezer. The same method supports Google Drive, or Google Cloud Storage, if you need to persist data even longer.
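A rough sketch of the Cache Service option mentioned above (the key name is arbitrary; cached values are strings, and an entry may occasionally be evicted):

// Put a large-ish string into the script cache for up to 6 hours (21600 s is the maximum)
function cacheValue(bigString) {
  CacheService.getScriptCache().put('bigValue', bigString, 21600);
}

// A later execution can read it back, but should allow for the entry being gone
function readCachedValue() {
  return CacheService.getScriptCache().get('bigValue'); // null if evicted or expired
}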
Of course you can't pass non-stringifiable objects like functions and so on, but you can postpone their evaluation and allow the initialized server-side script to evaluate them, and even share the same code between server and client or across projects (a rough illustration follows the links below).
Some techniques for that are described in these articles
http://ramblings.mcpher.com/Home/excelquirks/gassnips/nonstringify
http://ramblings.mcpher.com/Home/excelquirks/gassnips/htmltemplateresuse
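As a very rough illustration of the general idea (this is not the exact method from those articles, and runDeferred is a made-up name): the function travels as source text, and the server evaluates it only when it needs it.

// Client side: a function can't be passed as data, but its source can
var work = function(x) { return x * 2; };
google.script.run.runDeferred(work.toString(), 21);

// Server side: rebuild and evaluate the function when needed
function runDeferred(fnSource, arg) {
  var fn = eval('(' + fnSource + ')');
  return fn(arg); // 42
}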
However, in your specific example, it seems that the global data you want is fetched from an external API call. Why not just retrieve it client side in any case? If you need to do something with it server side, pass it to the server using google.script.run.
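A minimal sketch of that suggestion (the URL and the processData function are placeholders): fetch the data in the HtmlService page itself, and only hand it to the server when server-side work is actually needed.

// Client side (inside the HtmlService page)
fetch('https://example.com/api/data')  // placeholder URL
  .then(function(response) { return response.json(); })
  .then(function(data) {
    // use the data client side; involve the server only when necessary
    google.script.run
      .withSuccessHandler(function(result) { console.log(result); })
      .processData(data);  // hypothetical server-side function
  });

// Server side (Code.gs)
function processData(data) {
  return 'received ' + JSON.stringify(data).length + ' characters';
}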
window.open and window.postMessage() solved both the problems I described above.
I hope the screenshot and code will assure you that the usefulness of Google Sheets can be extended for the common good. At the core are the two methods for inputting, copying and reviewing textual data: the spreadsheet for a slice through a set of data, and the TBA for navigating associations in the Trail (x axis) and Branches (y axis), and for working on Aspects (z axis) of the current selection that require attention, in collaborations, from different interests.
So, for example, a nurse would find TBA useful for recording many aspects of an examination of a patient, whereas a pharmacist might find a spreadsheet more useful for stock control. Both record their data in a common object I call 'nset' (hierarchy of named sets), saved in the cloud and available for distribution in collaborative activities.
TBA is also useful for cloning large sets of records. For example, one room, complete with furniture can be replicated on one floor, then that floor, complete with rooms can be replicated for a complete tower.
Being able to maintain parallel nset objects in multiple monitor windows by postMessage means unrivalled opportunities to display the same data in different forms of multimedia, including interactive animation, augmented reality, CNC machine instruction, IOT controls ...
Here is the related code:
From the TBA in sidebar:
window.addEventListener("message", receiveMessage, false);

var popup;

function openMonitor(nset){
  var params = [
    'height=400',
    'width=400'
  ].join(',');
  let file = 'http://glasier.hk/blazer/model.html';
  popup = window.open(file, 'popup_window', params);
  popup.moveTo(100, 100);
}

function receiveMessage(event) {
  let ed, nb;
  ed = event.data;
  nb = typeof ed === "string" ? ed : ed[0];
  switch(nb){
    case "Post":
      console.log("Post");
      popup.postMessage(["Refreshing nset", nset], "http://glasier.hk");
      break;
  }
}
function importNset(){
  google.script.run
    .withSuccessHandler(function (code) {
      root = '1grsin';
      trial = 'msm4r';
      orig = 'ozs29';
      code = orig;
      path = "https://api.myjson.com/bins/" + code;
      $.get(path)
        .done((data, textStatus, jqXHR) => {
          nset = data;
          openMonitor(nset);
          cfig = nset.cfig;
          start();
        })
    })
    .sendCode();
}
From the popup window:
$(document).ready(function(){
  name = $(window).attr("name");
  if(name === "workshop"){
    tgt = opener.location.href;
  }
  else{
    tgt = "https://n-rxnikgfd6bqtnglngjmbaz3j2p7cbcqce3dihry-0lu-script.googleusercontent.com"
  }
  $("#notice").html(tgt);
  opener.postMessage("Post", tgt);
  $(window).on("resize", function(){
    location.reload();
  })
})

window.addEventListener("message", receiveMessage, false);

function receiveMessage(event) {
  let ed, nb;
  ed = event.data;
  nb = typeof ed === "string" ? ed : ed[0];
  switch(nb){
    case "Post": popup.postMessage(["nset" + nset], "*"); break;
    default :
      src = event.origin;
      notice = [ed[0], " from ", src];
      console.log(notice);
      // $("#notice").html(notice).show();
      nset = ed[1];
      cfig = nset.cfig;
      reloader(src);
  }
}
I should explain that the HTML part of the sidebar was built in a localhost workshop, with all styles and scripts compiled into a single file for pasting into a sidebar HTML file. The workshop is also available online. The Google target is provided by event.origin in postMessage; this would have to be issued to anyone wishing to make different monitors. For now, I have just made the 3D modelling monitor with Three.js.
I think, after much research and questioning around here, this should be the proper answer.
The best way to implement global variables in GAS is through user properties or script properties: https://developers.google.com/apps-script/reference/properties/properties-service. If you'd rather deal with just one property, write them all to an object and then JSON.stringify the object (and JSON.parse to get it back).
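A minimal sketch of that approach (the property key and the object's contents are arbitrary):

// Write several "globals" as one JSON blob
function saveGlobals() {
  var globals = { userName: 'Jo', lastRow: 42 };
  PropertiesService.getScriptProperties()
    .setProperty('globals', JSON.stringify(globals));
}

// Read them back in a later execution
function loadGlobals() {
  var raw = PropertiesService.getScriptProperties().getProperty('globals');
  return raw ? JSON.parse(raw) : {};
}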
I'm having trouble with the autocomplete feature in Google Apps Script.
Built-in services like SpreadsheetApp. will provide an autocomplete menu with methods to choose from.
However, if I create my own child object, autocomplete works for a little while, and then it just stops working.
for example:
var skywardRoster = SpreadsheetApp.getActiveSheet();
Typing skywardRoster. will produce method options for a while, and then it stops.
However, the code still functions, and methods work if I type them out manually, so I know the declarations must be right. The menu simply won't appear, and it's just very inconvenient to have to look up each method individually as I go.
I have tried: breaking the variable and retyping that line; copying and pasting the code back into the editor; using different browsers; copying the gs file itself and working within the copy; and signing out completely and signing back in. Nothing seems to get it working again.
I'm really new to coding, and I'm not sure what could be causing this.
Does anyone know how to fix this issue?
You might want to check Built-in Google Services: Using autocomplete:
The script editor provides a "content assist" feature, more commonly called "autocomplete," which reveals the global objects as well as methods and enums that are valid in the script's current context. To show autocomplete suggestions, select the menu item Edit > Content assist or press Ctrl+Space. Autocomplete suggestions also appear automatically whenever you type a period after a global object, enum, or method call that returns an Apps Script class. For example:
If you click on a blank line in the script editor and activate autocomplete, you will see a list of the global objects.
If you type the full name of a global object or select one from autocomplete, then type . (a period), you will see all methods and enums for that class.
If you type a few characters and activate autocomplete, you will see all valid suggestions that begin with those characters.
Since this was the first result on Google for non-working Google Apps Script autocompletion, I will post my solution here, as it may help someone in the future.
The autocompletion stopped working for me when I assigned a value to a variable for a second time.
Example:
var cell = tableRow.appendTableCell();
...
cell = tableRow.appendTableCell();
So maybe create a new variable for the second assignment just while you are implementing, so that autocompletion keeps working. When you are done with the implementation, you can replace it with the original variable.
Example:
var cell = tableRow.appendTableCell();
...
var replaceMeCell = tableRow.appendTableCell(); // new variable during the implementation
And when the implementation is done:
var cell = tableRow.appendTableCell();
...
cell = tableRow.appendTableCell(); // replace the newly created variable with the original one when you are done
Hope this helps!
I was looking for a way to improve the Google Apps Script development experience. Sometimes autocomplete misses context, for example for Google Spreadsheet trigger event parameters. I solved the problem by using clasp and @ts-check.
clasp lets you edit sources in VS Code on your local machine; it can pull and push Google Apps Script code. Here is an article on how to try it.
Once you have moved to VS Code and set up the environment, you can add // @ts-check at the beginning of the JavaScript file to help autocomplete with special instructions. Here is the instruction set.
My trigger example looks like this (note that autocompletion works only in VS Code; the Google Apps Script cloud editor doesn't understand the @ts-check instruction):
// @ts-check
/**
 * @param {GoogleAppsScript.Events.SheetsOnEdit} e
 */
function onEditTrigger(e) {
  var spreadsheet = e.source;
  var activeSheet = spreadsheet.getActiveSheet();
  Logger.log(e.value);
}
I agree, Google Apps Script's autocomplete feature is pretty poor compared with most other implementations. However, the limitation is understandable in most cases, and sometimes the functionality can be preserved.
Losing context
Autocomplete is limited to Google objects (Spreadsheets, Files, etc.). When working with them you get autocomplete hints, unless you pass such an object instance to a function as an argument. The context is then lost and the editor will not give you suggestions inside the called function. That is because JavaScript doesn't have static typing.
You can pass an id into the function instead of the object (not the File instance but the fileId) and get the instance inside the function, but in most cases such an operation will slow down the script.
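A small sketch of that alternative (the function is hypothetical; DriveApp is the standard Drive service): pass the id and re-fetch the instance inside the function, at the cost of an extra lookup.

function logFileNameById(fileId) {
  var file = DriveApp.getFileById(fileId); // the extra lookup that slows the script
  Logger.log(file.getName());              // autocomplete works here, because the type is known
}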
Better solution by Cameron Roberts
Cameron Roberts came up with something that could be Google's intent or a kind of hack, I don't know. At the beginning of the function, assign a proper object instance to the parameter variable and comment that assignment out as a block:
function logFileChange(sheet, fileId){
  /*
  sheet = SpreadsheetApp.getActiveSheet();
  */
  sheet.appendRow([fileId]); // auto completion works here
}
Autocompletion is preserved.
I have a UI test which involves the dismissal of a system-generated UIAlertController. This alert asks the user for permission to access the device's calendar. The objective of the test is the behaviour after a tap on the OK button:
1 let app = XCUIApplication()
...
// this code was basically generated by the recording feature of Xcode 7
2 app.alerts.elementBoundByIndex(0).collectionViews.buttons["OK"].tap()
Now, instead of clicking the OK button, line 2 makes the simulator tap the first button, which happens to be the Cancel button...
Additionally, I found out that the testing framework does not accurately recognize the appearing alert. So if I check the current count of alerts I always get 0:
// ...tap...
let count = app.alerts.count // == 0
This also happens if I use an NSPredicate for the condition and wait for several seconds.
Is it possible that UI tests do not work reliably with system-generated alerts? I am using Xcode 7.0.1.
Xcode 7.1 has finally fixed the issue with system alerts. There are, however, two small gotchas.
First, you need to set up a "UI Interruption Handler" before presenting the alert. This is our way of telling the framework how to handle an alert when it appears.
Second, after presenting the alert you must interact with the interface. Simply tapping the app works just fine, but is required.
addUIInterruptionMonitorWithDescription("Location Dialog") { (alert) -> Bool in
    alert.buttons["Allow"].tap()
    return true
}

app.buttons["Request Location"].tap()
app.tap() // need to interact with the app for the handler to fire
The "Location Dialog" is just a string to help the developer identify which handler was accessed, it is not specific to the type of alert. I believe that returning true from the handler marks it as "complete", which means it won't be called again.
I managed to dismiss access prompt for contacts like this:
let alert = app.alerts["\u{201c}******\u{201d} Would Like to Access Your Contacts"].collectionViews.buttons["OK"]
if alert.exists
{
    alert.tap()
}
Replace asterisks with your app's name. It might work the same for calendar.
This is what I ended up using to get the prompt to appear and then allow access to Contacts:
func allowAccessToContacts(textFieldsName: String)
{
    let app = XCUIApplication()
    let textField = app.textFields[textFieldsName]
    textField.tap()
    textField.typeText("aaa")
    let alert = app.alerts["\u{201c}******\u{201d} Would Like to Access Your Contacts"].collectionViews.buttons["OK"]
    if alert.exists
    {
        alert.tap()
    }
    textField.typeText("\u{8}\u{8}\u{8}")
}
I'm developing a class library for Windows 10 universal apps (mobile and desktop device families only). I need to raise an event if the user has been idle (no touch, mouse move, key press, etc.) for x number of seconds. This method can be used to solve the problem on Android, but I couldn't find a solution for Windows UWP.
Is there an API available in UWP to achieve this?
You can detect global input with various events on the app's CoreWindow:
Touch and mouse input with CoreWindow.PointerPressed, PointerMoved, and PointerReleased.
Keyboard input: KeyUp and KeyDown (the soft keys) and CharacterReceived (for characters generated via chords & text suggestions)
Use these to detect that the user is active, and treat the user as idle if too long passes without any of these events.
I know this is a really old question, but I think you can now get the same result with RegisterBackgroundTask.
Just set:
new TimeTrigger(15, false) // for the time trigger
new SystemCondition(SystemConditionType.UserNotPresent) // so you know when the user is not present
Example usage in App.xaml.cs:
var builder = new BackgroundTaskBuilder();
builder.Name = "Is user Idle";
builder.SetTrigger(new TimeTrigger(2, false)); //two mins
builder.AddCondition(new SystemCondition(SystemConditionType.UserNotPresent));
// Do not set builder.TaskEntryPoint for in-process background tasks
// Here we register the task and work will start based on the time trigger.
BackgroundTaskRegistration task = builder.Register();
task.Completed += (sender, args) =>
{
    // Handle user not present (idle) here.
};
I am trying to write a Chrome App that is able to open and close Chrome App windows on both displays of a system that is configured with two monitors. When launched, the Chrome application establishes a socket connection with a native application running on the same computer; I also open a hidden window via chrome.app.window.create to keep the Chrome application up and running. The native application then reads a configuration file and sends a series of ‘openBrowser’ commands to the Chrome application via the socket.
When the chrome application receives an ‘openBrowser’ command, the chrome application makes a call to the chrome API method chrome.app.window.create, passing the create parameters AND a callback function. A code snippet is below:
NPMBrowserManager.prototype.openBrowser = function (browserId, htmlFile, browserBounds, hidden, grabFocus)
{
    var browserManager = this;
    var createParameters = {};
    createParameters.bounds = browserBounds;
    createParameters.hidden = hidden;
    chrome.app.window.create(htmlFile, createParameters, function(appWindow)
    {
        // Check to see if I got a non-undefined appWindow.
        if (appWindow !== null)
        {
            browserManager.browsers.push({"browserId": browserId, "window": appWindow});
            console.info("NPMBrowserManager.openBrowser: Added browser, id =" + browserId + ", count =" + browserManager.browsers.length);
        }
    });
}
Unfortunately, the ‘appWindow’ parameter passed to the create callback is always undefined. I suspect it has something to do with the fact that the openBrowser method is itself being called by another method that processes commands received from the native application. However, the window opens exactly where and when I want it to; I just can't seem to cache away any information about the new window that could be used later to close or move the window.
I want to be able to cache away the appWindow so that I can close or modify the created window later on in the workflow.
As a side note, I’ve noticed that appWindow is NOT undefined if I call the openBrowser method from within the callback that is associated with the chrome.app.runtime.onLaunched event. I suspect it has something to do with the current script context. I was not able to find any chrome.app documentation that goes into any detail about the chrome app architecture.
I would GREATLY appreciate it if anyone out there can explain to me how I can get the appWindow of the window that is created in the chrome.app.window.create method. By the way, I have also tried calling chrome.app.window.current to no avail… Very frustrating!!!
I’d also be interested in any documentation that might exist. I am aware of developer.chrome.com, but could not find much documentation other than reference documentation.
Thanks for the help!
Jim