As of 2021, with the V8 engine, I'm wondering whether it's a good idea to use Global Variables in Google Apps Script. And if it is, how should they be used? Is my way (described below) a good way?
Now I checked, of course, all other similar questions. But there are still some details I couldn't find:
Basically what I tried to do is to not repeat the code: I have multiple functions, and I'm storing the active spreadsheet and the current sheet like so:
const ss = SpreadsheetApp.getActiveSpreadsheet();
const sheet = ss.getActiveSheet();
which leads to:
1. repetition
2. wasted resources - instantiating the spreadsheet and the sheet over and over
3. increasing variable-name inconsistency (ss / spreadsheet / spreadSheet - when copy/pasting from a snippet on the web)
Right?
So I thought of using Global Variables (GVs) whenever I have common local variables across multiple functions.
However, since they will be unnecessarily allocated on every function call (there are also other functions that don't need the GVs), I tried to define them only when needed (only when there's a function call that uses them) - by not using a declaring keyword (var, const, let):
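For example, something like this (a minimal sketch of the pattern; names are illustrative):

function initGlobals_() {
  // no var/const/let on purpose, so these become globals on the first call
  ss = SpreadsheetApp.getActiveSpreadsheet();
  sheet = ss.getActiveSheet();
}

function doSomethingWithSheet() {
  initGlobals_(); // the GVs are only created when a function actually needs them
  sheet.getRange('A1').setValue('hello');
}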
According to this link, it seems to be a good approach (pattern 1).
Anyway, I'm wondering whether there are other considerations or downsides I'm not aware of. Is it really a good idea to go down this road? So far I haven't seen any snippet that implements this, and I've seen a lot of GAS snippets.
One downside I'm aware of is the lack of autocompletion for my GVs in the new GAS editor (since I deliberately didn't define them using 'var' or 'let', in order to make their scope global).
Otherwise, I'm aware of PropertiesService and CacheService. However, I want to reuse my script (where I defined my GVs) as a library for my other scripts.
Plus, you can only store values as strings in PropertiesService and CacheService (and not something like SpreadsheetApp.getActiveSpreadsheet()), right? Not to mention that I don't need persistence after the script execution.
So I'm also hesitant to use them instead of GVs.
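For instance, storing anything non-string there means serializing it yourself (a quick sketch):

// PropertiesService only stores strings, so objects must round-trip through JSON
PropertiesService.getScriptProperties().setProperty('config', JSON.stringify({ locale: 'en' }));
const config = JSON.parse(PropertiesService.getScriptProperties().getProperty('config'));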
You can use the lazy-loading technique in my answer.
To make it dynamic and avoid repetition, you can use enclosing arrow functions (() => {}) to avoid direct execution, and Object.defineProperty() to add a getter.
One of the significant advantages of this method is modular lazy loading: if an object isn't needed, it isn't loaded. If you have ss, sheet1, sheet2, rangeOfSheet1 and rangeOfSheet2 as global variables and you access rangeOfSheet2, only its dependencies are loaded, i.e., sheet2 and ss. The rest are untouched.
const g = {}; // global object
const addGetter_ = (name, value, obj = g) => {
  Object.defineProperty(obj, name, {
    enumerable: true,
    configurable: true,
    get() {
      // on first access, remove the getter and cache the computed value,
      // so the factory function runs at most once
      delete this[name];
      return (this[name] = value());
    },
  });
  return obj;
};
//MY GLOBAL VARIABLES in g
[
  ['ss', () => SpreadsheetApp.getActive()],
  ['MasterSheet', () => g.ss.getSheetByName('Sheet1')],
  ['MasterRangeColA1_5', () => g.MasterSheet.getRange('A1:A5')],
  ['MasterRangeColAValues', () => g.MasterRangeColA1_5.getValues()],
].forEach(([n, v]) => addGetter_(n, v));
const test = () => {
  console.info('start');
  console.log({ g });
  console.info('Accessing MasterSheet');
  console.log(g.MasterSheet);
  console.log({ g }); // note ss is loaded as well
  console.info('Accessing MasterRangeColAValues');
  console.log(g.MasterRangeColAValues);
  console.log({ g }); // note MasterRangeColA1_5 is loaded as well
};
Instead of a global object g, we can also use the global this, in which case all variables directly become members of the global object:
const addGetter_ = (name, value, obj = this) => {
  Object.defineProperty(obj, name, {
    enumerable: true,
    configurable: true,
    get() {
      // same lazy caching trick as before, but on the global this
      delete this[name];
      return (this[name] = value());
    },
  });
  return obj;
};
[
  ['ss', () => SpreadsheetApp.getActive()],
  ['MasterSheet', () => ss.getSheetByName('Sheet1')],
  ['MasterRangeColA1_5', () => MasterSheet.getRange('A1:A5')],
  ['MasterRangeColAValues', () => MasterRangeColA1_5.getValues()],
].forEach(([n, v]) => addGetter_(n, v));
const test = () => {
  console.info('start');
  console.log(this);
  console.info('Accessing MasterSheet');
  console.log(MasterSheet);
  console.log(this); // note ss is loaded as well
  console.info('Accessing MasterRangeColAValues');
  console.log(MasterRangeColAValues);
  console.log(this); // note MasterRangeColA1_5 is loaded as well
};
Advantage: you don't have to prefix variables with g. But the global namespace is polluted.
I need your help with writing good composables in Vue 3. Looking at the documentation, I can see that composables should be a function. That's OK.
However, I don't feel comfortable with this because I lose IDE help and autocompletion.
For example, if I have a useUtils() composable like this:
// composables/useUtils.js
export default function useUtils() {
  const isAdmin = () => true;
  const isUser = () => false;

  return {
    isAdmin,
    isUser,
  };
}
Then, when writing code in PhpStorm/WebStorm, the IDE does not autocomplete (or auto-import) the utility functions defined inside my useUtils() composable :(
For example, if I start to write:
const canCreate = isAdm // <-- here I would like the IDE to autocomplete and auto-import!
That doesn't work because the IDE has no way of knowing what it should autocomplete.
Workaround
If I define the composable as a bunch of exported functions instead, it works correctly:
// composables/useUtils.js
export const isAdmin = () => true;
export const isUser = () => false;
Now, the IDE knows all available functions and does a good work autocompleting and autoimporting everything.
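For example (a sketch; the import path is illustrative):

// SomeComponent.vue
import { isAdmin } from '@/composables/useUtils';

const canCreate = isAdmin(); // autocompleted and auto-imported by the IDE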
In addition, when using this approach, I also get the ability to know which parts of my composable are being used and which are not; that's very cool. It doesn't happen when defining a function. But I feel bad because the Vue docs say that composables should be a function T_T
So, here is my question:
What do you guys do? Is there a way to configure the IDE for better integration when writing composables? Is it very bad to use a bunch of functions?
Please give me any tips.
Thanks!
I usually do:
import { ref } from 'vue';

const useMyComposable = () => {
  const myData = ref("something");

  const myFunction = () => {
    // ...
  };

  return {
    myData,
    myFunction,
  };
};

export { useMyComposable };
export default useMyComposable;
So now in a random component I get the autocompletion working.
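For example, in a component (a sketch; the import path is illustrative):

<script setup>
import useMyComposable from '@/composables/useMyComposable';

const { myData, myFunction } = useMyComposable();
</script>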
I want to create a form on an index page that can store data via session storage. I also want to make sure that whatever data (let's say name) ... is remembered and used throughout the site with Angular. I have researched pieces of this process, but I do not understand how to write it or really even what it's called.
Any help in the right direction would be useful, as I am in the infant stages of all of this Angular business. Let me know.
The service you want is angular-local-storage.
Just configure it in your app.js file:
// in app.js, inside your module's config block
angular.module("pstat")
    .config(function (localStorageServiceProvider) {
        localStorageServiceProvider
            .setStorageType('sessionStorage');
    });
And then use it in the controller that contains whatever data you want to remember. Here is an example of a controller that loads the session storage data on initialization, and saves it when a user fires $scope.doSearch through the UI. This should give you a good place to start.
(function () {
    angular.module("pstat")
        .controller("homeCtrl", homeCtrl);

    homeCtrl.$inject = ['$scope', '$log', 'dataService', 'localStorageService', '$http'];

    function homeCtrl($scope, $log, dataService, localStorageService, $http) {
        var query = localStorageService.get("query"); // returns null for a missing 'query' entry
        if (query) {
            // Or store the results directly if they aren't too large.
            // Do something with your saved query on page load, probably get data.
            // Example:
            dataService.getData(query)
                .success(function (data) {})
                .error(function (err) {});
        }

        $scope.doSearch = function (query) {
            localStorageService.set("query", query);
            // Then actually do your search
        };
    }
})();
With gulp you often see patterns like this:
gulp.watch('src/*.jade',['templates']);
gulp.task('templates', function() {
    return gulp.src('src/*.jade')
        .pipe(jade({
            pretty: true
        }))
        .pipe(gulp.dest('dist/'))
        .pipe(livereload(server));
});
Does this actually pass the watched files into the templates task? How do these overwrite/extend/filter the files picked up by gulp.src?
I had the same question some time ago and came to the following conclusion after digging for a bit.
gulp.watch is an eventEmitter that emits a change event, and so you can do this:
var watcher = gulp.watch('src/*.jade', ['templates']);
watcher.on('change', function (f) {
    console.log('Change Event:', f);
});
and you'll see this:
Change Event: { type: 'changed',
path: '/Users/developer/Sites/stackoverflow/src/touch.jade' }
This information could presumably be passed to the template task either via its task function, or the behavior of gulp.src.
The task function itself can only receive a callback (https://github.com/gulpjs/gulp/blob/master/docs/API.md#fn) and cannot receive any information about vinyl files (https://github.com/wearefractal/vinyl-fs) that are used by gulp.
The source starting a task (.watch in this case, or gulp command line) has no effect on the behavior of gulp.src('src-glob', [options]). 'src-glob' is a string (or array of strings) and options (https://github.com/isaacs/node-glob#options) has nothing about any file changes.
Hence, I don't see any way in which .watch could directly affect the behavior of a task it triggers.
If you want to process only the changed files, you can use gulp-changed (https://www.npmjs.com/package/gulp-changed) if you want to keep using gulp.watch, or you could use gulp-watch.
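For example, a minimal sketch with gulp-changed (assuming it is installed alongside gulp-jade; the extension option maps .jade sources to their .html outputs):

var gulp = require('gulp');
var changed = require('gulp-changed');
var jade = require('gulp-jade');

gulp.task('templates', function () {
    return gulp.src('src/*.jade')
        .pipe(changed('dist/', { extension: '.html' })) // only pass files newer than their dist/ counterpart
        .pipe(jade({ pretty: true }))
        .pipe(gulp.dest('dist/'));
});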
Alternatively, you could do this as well:
var gulp = require('gulp');
var jade = require('gulp-jade');
var livereload = require('gulp-livereload');

gulp.watch('src/*.jade', function (event) {
    template(event.path); // pass only the changed file's path
});

gulp.task('templates', function () {
    return template('src/*.jade');
});

function template(files) {
    return gulp.src(files)
        .pipe(jade({
            pretty: true
        }))
        .pipe(gulp.dest('dist/'));
}
One possible way to pass a parameter or data from your watcher to a task is through a global variable, or a variable that is in scope for both blocks. Here is an example:
gulp.task('watch', function () {
    //....
    // json comments
    watch('./app/tempGulp/json/**/*.json', function (evt) {
        jsonCommentWatchEvt = evt; // we set the global variable first
        gulp.start('jsonComment'); // then we start the task
    });
});

// global variable
var jsonCommentWatchEvt = null;

// json comments task
gulp.task('jsonComment', function () {
    jsonComment_Task(jsonCommentWatchEvt);
});
And here is the function doing the task's work, in case it interests anyone. Note that I didn't need to put the work in a separate function; I could have implemented it directly in the task, where the global variable is available. If you don't use a function as I did, a good practice is to assign the value of the global variable to a local one at the very top of the task and use that instead of the global itself. This avoids the problem of the value being changed by another watch event firing while the current task is still running.
// assumes: var path = require('path'), source = require('vinyl-source-stream'),
// and stripJsonComments coming from the comment-stripping plugin
function jsonComment_Task(evt) {
    console.log('handling : ' + evt.path);
    gulp.src(evt.path, {
        base: './app/tempGulp/json/'
    })
    .pipe(stripJsonComments({ whitespace: false })).on('error', console.log)
    .on('data', function (file) { // here we want to manipulate the resulting stream
        var str = file.contents.toString();
        var stream = source(path.basename(file.path));
        // collapse multiple successive line breaks into one blank line
        stream.end(str.replace(/\n\s*\n/g, '\n\n'));
        stream
            .pipe(gulp.dest('./app/json/')).on('error', console.log);
    });
}
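To illustrate the copy-to-a-local practice mentioned above, the version without a separate function could look like this (a sketch):

gulp.task('jsonComment', function () {
    // copy the global to a local right away, so another watch event
    // can't change it while this task is still running
    var evt = jsonCommentWatchEvt;
    console.log('handling : ' + evt.path);
    // ... process evt.path exactly as in jsonComment_Task above ...
});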
I had a directory of JSON files in which I use comments, and I'm watching them. When a file is modified, the watch handler is triggered, and I then need to process only the file that was modified: remove the comments (I used the json-comment-strip plugin for that), plus one more treatment, removing multiple successive line breaks. In any case, I first needed the path to the modified file, which we can recover from the event parameter, and I passed it to the task through a global variable that does only that: allow passing the data.
Note: even though this has no relation to the question, in my example I needed to manipulate the stream coming out of the plugin, so I used the on('data') event, which is asynchronous. The task is therefore marked as finished before the work has completely ended (the task block reaches its end, but the asynchronous handler keeps processing a little longer). So the time you get in the console at task end isn't the time for the whole processing, just for the task block. Just so you know; for me it doesn't matter.
There aren't many examples demonstrating indexedDB in a ServiceWorker yet, but the ones I saw were all structured like this:
const request = indexedDB.open('myDB', 1);
var db;
request.onupgradeneeded = ...
request.onsuccess = function () {
    db = this.result; // Average 8ms
};

self.onfetch = function (e) {
    const requestURL = new URL(e.request.url),
        path = requestURL.pathname;
    if (path === '/test') {
        const response = new Promise(function (resolve) {
            console.log(performance.now(), typeof db); // Average 15ms
            db.transaction('cache').objectStore('cache').get('test').onsuccess = function () {
                resolve(new Response(this.result, { headers: { 'content-type': 'text/plain' } }));
            };
        });
        e.respondWith(response);
    }
};
Is this likely to fail when the ServiceWorker starts up, and if so what is a robust way of accessing indexedDB in a ServiceWorker?
Opening the IDB every time the ServiceWorker starts up is unlikely to be optimal; you'll end up opening it even when it isn't used. Instead, open the db when you need it. A singleton is really useful here (see https://github.com/jakearchibald/svgomg/blob/master/src/js/utils/storage.js#L5), so you don't need to open IDB twice if it's used twice in its lifetime.
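A minimal sketch of that singleton (untested; the database and store names are illustrative):

let dbPromise;
function getDb() {
  if (!dbPromise) {
    dbPromise = new Promise((resolve, reject) => {
      const request = indexedDB.open('myDB', 1);
      request.onupgradeneeded = () => request.result.createObjectStore('cache');
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
  }
  return dbPromise;
}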
The "activate" event is a great place to open IDB and let any "onupdateneeded" events run, as the old version of ServiceWorker is out of the way.
You can wrap a transaction in a promise like so:
var tx = db.transaction(scope, mode);
var p = new Promise(function (resolve, reject) {
    tx.onabort = function () { reject(tx.error); };
    tx.oncomplete = function () { resolve(); };
});
Now p will resolve/reject when the transaction completes/aborts. So you can do arbitrary logic in the tx transaction, and p.then(...) and/or pass a dependent promise into e.respondWith() or e.waitUntil() etc.
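Putting it together with a fetch handler like the one in the question (a sketch; assumes db is an already-opened IDBDatabase):

self.onfetch = (e) => {
  e.respondWith(new Promise((resolve, reject) => {
    const tx = db.transaction('cache');
    const req = tx.objectStore('cache').get('test');
    tx.onabort = () => reject(tx.error);
    tx.oncomplete = () => resolve(new Response(req.result, {
      headers: { 'content-type': 'text/plain' }
    }));
  }));
};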
As noted by other commenters, we really do need to promisify IndexedDB. But the composition of its post-task autocommit model and the microtask queues that Promises use make it... nontrivial to do so without basically completely replacing the API. But (as an implementer and one of the spec editors) I'm actively prototyping some ideas.
I don't know of anything special about accessing IndexedDB from the context of a service worker versus accessing IndexedDB via a controlled page.
Promises obviously makes your life much easier within a service worker, so I've found using something like, e.g., https://gist.github.com/inexorabletash/c8069c042b734519680c to be useful instead of the raw IndexedDB API. But it's not mandatory as long as you create and manage your own promises to reflect the state of the asynchronous IndexedDB operations.
The main thing to keep in mind when writing a fetch event handler (and this isn't specific to using IndexedDB), is that if you call event.respondWith(), you need to pass in either a Response object or a promise that resolves with a Response object. As long as you're doing that, it shouldn't matter whether your Response is constructed from IndexedDB entries or the Cache API or elsewhere.
Are you running into any actual problems with the code you posted, or was this more of a theoretical question?
Is it possible to reset a resolved jQuery object to an 'unresolved' state and kick off its initialization and callbacks all over again?
The specific thing I'm doing is that I have a jQuery deferred wrapper over the local file system api. From there I build up higher level deferreds for the things I care about:
getFs = defFs.requestQuota(PERSISTENT, 1024 * 1024)
  .pipe (bytes) -> defFs.requestFs(PERSISTENT, bytes)

getCacheContents = getFs.pipe (fileSystem) ->
  defFs.getDirectory('Cache', fileSystem.root).pipe (dir) ->
    defFs.readEntries(dir)
Now most of the time, when I call getCacheContents, I don't mind the memoized values being returned; in fact I prefer it. But on the occasion when I want to write to the cache, I really would like the ability to reset that pipe and have it re-select and rescan the cache the next time it's accessed.
I could cobble something together from $.Callbacks but a deferred-based solution would really be ideal.
No. A Promise is by definition a thing that resolves only once - from unresolved to fulfilled OR rejected. You will not be able to do this with jQuery's Deferreds.
What you are actually searching for are Signals. They can be fired more than once, but provide a similar interface. There are some implementations around; you might check out js-signals or wire.js.
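For example, with js-signals (a minimal sketch, assuming the library is loaded):

var cacheChanged = new signals.Signal();

cacheChanged.add(function (entries) {
    console.log('cache now contains:', entries);
});

// unlike a Deferred, a signal can be dispatched any number of times
cacheChanged.dispatch(['a.txt', 'b.txt']);
cacheChanged.dispatch(['a.txt']);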
The only solution I could find is to reset the $.Deferred object and return a new Promise from it. It works together with some internal API dirty checking (if something gets edited/deleted), but it would be more performant to just reset the existing $.Deferred and let it re-resolve on the next Promise request.
An example of a possible solution is:
$.myDeferredList = {};

$.createRestorableDeferred = function (a, b) {
    // JUST BY SIMPLE $.when().then();
    $.myDeferredList[a] = {
        deferred: $.Deferred(),
        then: b,
        restore: function () {
            $.myDeferredList[a].deferred = $.Deferred();
            $.when($.myDeferredList[a].deferred).then($.myDeferredList[a].then);
        },
        resolve: function () {
            $.myDeferredList[a].deferred.resolve();
        }
    };
    $.when($.myDeferredList[a].deferred).then($.myDeferredList[a].then);
    window[a] = $.myDeferredList[a];
};
var counter = 0;
$.createRestorableDeferred('myReady', function () {
    console.log('>> myReady WHEN called', ++counter);
    $.myDeferredList['myReady'].restore();
});
// RESOLVING ways
$.myDeferredList['myReady'].deferred.resolve();
$.myDeferredList.myReady.deferred.resolve();
myReady.resolve();
Results in console:
>> myReady WHEN called 1
>> myReady WHEN called 2
>> myReady WHEN called 3