project/library physical (edit access) and logical (execution access) scoping - google-apps-script

I have read some of the basic Google Apps Script documentation/tutorials. I have not found any explanation of the "scope" of code execution.
Here is what I understand so far:
All code consists of one or more statements
All statements must (?) be contained in a "function"
(a slight difference from non-Google JavaScript? - is this a false assumption?)
All functions reside in a container called a "file"
Each "file" is part of a "project" container (or library)
Each project container is stored in a "spreadsheet" container.
These are the "physical" (edit access) boundaries.
My question is: what are the "logical" boundaries of statements
during execution?
I started by assuming all variables/objects were global in scope,
similar to the way that JavaScript operates in a web-page. I did not think that
"edit access" containers limited the scope of variable/object definitions.
I was wrong.
I thought that a "library" structure was similar to a PHP "include" operation.
By that, I mean I thought it would save me from having to copy the same set of
code into every application (spreadsheet container) in which I needed to
use the already "tested" code. I assumed the resources available to the
included project were the same as those available to the including
project (common resources). In short, I was wrong. The "properties" were stored in the defining spreadsheet and are considered to be "owned" by the containing project.
From playing with it, I now understand that the "project key" just adds
a new "namespace" to the spreadsheet container. What happens in each
namespace remains in that namespace. The only (simplest) communication
between the namespaces is via function parameters and return values.
In particular, User and Project properties are scoped to and remain in
their containing spreadsheet document. Each project within the spreadsheet
document has a separate set of User and Project properties.
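Here is a minimal sketch of that separation (Apps Script, using the legacy ScriptProperties service this thread refers to; the library identifier "MyLib" and key names are made up):

    // In the library project:
    function setLibValue(value) {
      // Goes into the library's own Project properties, invisible to callers.
      ScriptProperties.setProperty('sharedKey', value);
    }

    // In the spreadsheet's own project, with the library included as "MyLib":
    function demo() {
      MyLib.setLibValue('hello');                            // communication via parameters
      Logger.log(ScriptProperties.getProperty('sharedKey')); // null - a different property store
    }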
The same project/library name can appear in multiple spreadsheet files,
and the instances are totally independent.
Is this documented somewhere? And are there other things I need to know
about scoping (communication across threads from different applications
sharing the same project/library)?
Also, if a function passes back a reference to an object defined in the
library scope, will it persist in the calling project? Can I pass/return
a variable that points to a project's UserProperties "service" object and have access
to that data in another project?

This is documented towards the end of the User Guide in Libraries:
https://developers.google.com/apps-script/guide_libraries
When deciding how to scope things, we tried to think really hard about the most common use cases and to reduce surprise as much as possible, but we weren't perfect.
Regarding your question on whether you can use parameter-passing to share ScriptProperties objects between the library and the project, that's currently not possible. You can always expose getters/setters for particular properties.
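For example (a rough sketch with made-up names), the library can wrap its own ScriptProperties behind getter/setter functions, so a calling project reads and writes those values through ordinary calls instead of sharing the service object itself:

    // Inside the library:
    function getSetting(key) {
      return ScriptProperties.getProperty(key);
    }
    function setSetting(key, value) {
      ScriptProperties.setProperty(key, value);
    }

    // Inside a project that includes the library as "MyLib":
    function useLibrarySetting() {
      MyLib.setSetting('mode', 'test');
      Logger.log(MyLib.getSetting('mode'));  // 'test', stored in the library's own property store
    }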
If you have an interesting use case in mind which is impossible to achieve without the requested behavior, please file a bug in our issue tracker. Thanks!

I too have wrestled with this. While the documentation is very good, it is a bit confusing.
The shared and not-shared concept took me a bit of trial and error to get.
I did want to mention that UserProperties exist at the user level and are not tied to just the Project or Library. There is no need to pass UserProperties around as you would with ScriptProperties when using a library.
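If the legacy UserProperties service really is shared per user across projects, as described above, a quick way to see it is a sketch like this (hypothetical key name; "MyLib" is the library identifier):

    // In the library:
    function saveFavorite(color) {
      UserProperties.setProperty('favColor', color);
    }

    // In the including project - same user, so the same store, per the behavior described above:
    function readFavorite() {
      MyLib.saveFavorite('blue');
      return UserProperties.getProperty('favColor');  // 'blue'
    }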
Neat stuff all around :-)
Jim

Related

Clojurescript decoupled file & namespace?

I'm using reagent to build several alternate root components, only one of which will be mounted on any given page; definitely either/or. These have a degree of commonality in their makeup, hence it will be convenient to move what is common among them to a common namespace.
What would be ideal is if, in the file for each of these components, I had the option to switch the namespace into common, add defs particular to the component, then switch back, thus avoiding circular dependencies and the need for any kind of inheritance.
I recalled this being possible in Common Lisp, and how wonderful it was, and it also seems possible in Clojure.
From the ClojureScript docs: ns must be the first form and can only be used once, and in-ns is only usable from the REPL.
I'm wondering if there's a way to achieve this kind of thing in clojurescript which is still eluding me.
If not I may need to reconsider my assumptions behind multiple alternate root components; the "many builds within one build" kind of idea, if that makes sense.
Update after some further experimentation and confusion:
another option might be to split a single namespace across multiple files (is this possible?). Not sure what direction to turn in here.
The fact that in reagent I am using atoms in the global namespace is what's creating the need for circular dependencies if I use a separate namespace for common. Hence, wonder about one global namespace, in which case multiple files might help. Or is the way forward one giant file and one namespace??
Update: I've realised there is a great tension between keeping all app state globally (in my current case, multiple atoms), and passing app state around. My pattern currently is everything global, don't pass any of it around. Passing the necessary state as parameters to fns in the common namespace would solve the problem here (duh!), but then there's the question of what principles are being followed here regarding app state. If I just added a param whenever I needed one, but started with the idea that everything was global, there'd be no real principle to it...
In ClojureScript, everything is pre-compiled into a single static JavaScript "executable", so there is nothing like the REPL you are used to in Clojure. Indeed, in CLJS the "Var" concept doesn't really exist after compilation; vars are just static (constant) variables and cannot be rebound.
Having said that, CLJS does emulate the behavior of Clojure dynamic variables via the binding form, so that may help you to reach your goal. As in CLJ, it creates what amounts to a (thread-local) global variable. This is a degenerate case in CLJS since there is only one thread. However, the source code looks identical to the CLJ case.
Another way to accomplish this is to just use a plain atom as a global variable so you don't have to pass a parameter around.
As always, using a global variable reduces the number of parameters in function call trees, but it creates invisible dependencies between different parts of the code. Sometimes convenient, but usually a bad tradeoff.

Azure ARM Template (JSON) Self-Reference

I'm creating some default "drag and drop" templates for our developers, and one section is the required tags. Most of the tags reference a variable: nice and easy. But one needs to reference the resource itself, and I cannot figure out a way to do it. Does anyone have any suggestions?
The tag itself is called "Context" and its value should be the "type" of the resource it is in, e.g. "Microsoft.Web/serverfarms". This is desired to aid with billing. Obviously I could either create a different template per resource type (not ideal considering the number of different resources) or rely on the devs to update the field manually (not ideal either, as relying on them to add the tags manually hasn't worked so far in a lot of cases), but I am trying to automate it.
Extrapolating from the [variables('< variablename >')] function I did try [resources('type')] but Azure complained that "resources is not a valid selection". I thought it might have complained that it couldn't tell which resource to look at, but it didn't get that far. Internet searches have not turned up anything useful so far.
I can't find a way to do this cleanly either (I hope someone corrects me though! This is a topic for us too). The reference and resourceId functions look promising, but both are unavailable inside the resources block, would require some parsing, and also require the API version, which you would probably also need to vary by resource, so you're just back where you started. ARM won't even let you use a variable for the resource type property (probably a good thing), so that option is out too.
As such, you'll either have to live with your team having to replace that chunk of text manually or pursue some alternative.
The simplest thing that comes to mind would be to write a script in a language that understands JSON. That script reads the template, adds the tag to the resource, then saves the template again.
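For instance, a rough sketch in Node.js (the file name, and the assumption that tags live directly on each top-level resource, are mine) that stamps each resource's own type into its Context tag:

    // add-context-tag.js - run with: node add-context-tag.js azuredeploy.json
    const fs = require('fs');

    const path = process.argv[2];
    const template = JSON.parse(fs.readFileSync(path, 'utf8'));

    (template.resources || []).forEach(function (resource) {
      resource.tags = resource.tags || {};
      resource.tags.Context = resource.type;   // e.g. "Microsoft.Web/serverfarms"
    });

    fs.writeFileSync(path, JSON.stringify(template, null, 2));

Nested resources, copy loops, and templates containing comments would need extra handling, but it shows the idea.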
A similar approach would be to do it after the resources are deployed by writing a script that loops through all resources and making sure they have the tag. You can use automation to schedule this on a regular basis if you're concerned about it being missed. If you're deploying the templates using a script, you could add it in that script too.
There are some things you could probably do with nested templates, but you probably wouldn't be making anyone's life easier or making the process more reliable.
This could potentially be achieved with some PowerShell, specifically around Resource and Resource Group. You would need to run Get-AzResource either at the subscription level or potentially just at the resource group level, then pull the ResourceType field from the returned object and use a Set-AzResource command, passing in the ResourceId from above and the new tag mapped to the returned ResourceType field.

How does one make a self-contained package / library of functions

I am writing some support code in the common subset of Matlab/Octave, which comes in the form of a bunch of functions. Let's call it a package.
I want to be able to organize the package, i.e.,
put all the relevant function files in a single place, where users
are not supposed to store their code;
have some internal organization ('subpackages');
prevent namespace pollution;
have some mechanism for user code to 'import' parts of the package;
I don't necessarily want all functions I provide to be
visible from user clients.
On the Matlab side of things, this functionality is pretty much provided by package directories and the 'import' mechanism. This functionality doesn't appear to be available in Octave though (as of 3.6.1).
Given that, I wonder what options remain for organizing my support code package in Octave.
The option of putting everything in a directory and just having the user code do an ADDPATH feels rather unrefined, and doesn't give the level of control I want -- it only addresses point #1 of the list above.
There is plenty of documentation here and examples in OctaveForge. Just browse the SVN.
Also there are personal packages all around. For example this one
Happy coding!

Why make global Lua functions local?

I've been looking at some Lua source code, and I often see things like this at the beginning of the file:
local setmetatable, getmetatable, etc.. = setmetatable, getmetatable, etc..
Do they only make the functions local to let Lua access them faster when often used?
Local data are on the stack, and therefore access to them is faster. However, I seriously doubt that the time to look up and call setmetatable is actually a significant issue for any program.
Here are the possible explanations for this:
Prevention from polluting the global environment. Modern Lua convention for modules is to not have them register themselves directly into the global table. They should build a local table of functions and return them. Thus, the only way to access them is with a local variable. This forces a number of things:
One module cannot accidentally overwrite another module's functions.
If a module does accidentally do this, the original functions in the table returned by the module will still be accessible. Only by using local modname = require "modname" will you be guaranteed to get exactly and only what that module exposed.
Modules that include other modules can't interfere with one another. The table you get back from require is always what the module stores.
A premature optimization by someone who read "local variables are accessed faster" and then decided to make everything local.
In general, this is good practice. Well, unless it's because of #2.
In addition to Nicol Bolas's answer, I'd add on to the 3rd point:
It allows your code to be run from within a sandbox after it's been loaded.
If the functions have been excluded from the sandbox and the code is loaded from within the sandbox, then it won't work. But if the code is loaded first, the sandbox can then call the loaded code and be able to exclude setmetatable, etc, from the sandbox.
I do it because it allows me to see the functions used by each of my modules.
Additionally, it protects you from others changing the functions in the global environment.
That it is a free (premature) optimisation is a bonus.
Another subtle benefit: It clearly documents which variables (functions, modules) are imported by the module. And if you are using the module statement, it enforces such declarations, because the global environment is replaced (so globals are not available).

Is the use of a config file equivalent to the use of globals?

I've read many times about, and agree with, avoiding the use of globals to keep code orthogonal. Is using a config file to keep read-only information that your program uses similar to using globals?
If you're using config files in place of globals, then yes, they are similar.
Config files should only be used in cases where the end-user (presumably a computer-savvy user, like a developer) needs to declare settings for an application or piece of code, while keeping their hands out of the code itself.
My first reaction would be that it is not the same. I think the problem with globals is the read+write scenario. Config files are read-only (at least in terms of execution).
In the same way, constants are not considered bad programming practice. Config files, at least in the way I use them, are just easily changeable constants.
Well, since a config file and a global variable can both have the effect of propagating changes throughout a system - they are roughly similar.
But... in the case of a configuration file that change is usually going to take place in a single, highly visible (to the developer) location, whereas global variables can effect change in very sneaky and hard-to-track-down ways -- so in this way the two concepts are not similar.
Having a configuration file usually helps with DRY concepts, and it shouldn't hurt the orthogonality of the system, either.
Bonus points for using the $25 word 'orthogonal'. I had to look that one up in Wikipedia to find out the non-Euclidean definition.
Configuration files are really meant to be easily editable by the end user as a way of telling the program how to run.
A more specialized form of configuration files, user preferences, are used to remember things between program executions.
A global refers to a unique object instance that will never change, whereas a config file is used as a container of reference values for objects within the application that can change.
One kind of "global" object never changes during runtime; the other kind of object is initialized from a config file but can change later on.
In fact, those objects not only can change during the lifetime of the application, they can also monitor the config file in order to support "hot changes" (modification of their values without stopping/restarting the application) if that config file is modified.
They are absolutely not the same, nor are they replacements for each other. A config file, or config object, can be used non-globally, i.e. passed explicitly.
You can of course have a global variable that refers to a config object, but that would defeat the purpose.
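A small illustration of that last point in plain JavaScript (the names are invented for the example): the configuration can be loaded once and handed to the code that needs it, instead of being reached for through a global.

    // Loaded once at startup, e.g. via JSON.parse of a config file:
    var config = { host: 'db.example.com', port: 5432 };

    // Passed explicitly - the dependency is visible in the signature:
    function describeConnection(cfg) {
      return cfg.host + ':' + cfg.port;
    }

    // Reached through a global - the same data, but the dependency is hidden:
    var CONFIG = config;
    function describeConnectionGlobal() {
      return CONFIG.host + ':' + CONFIG.port;
    }

    console.log(describeConnection(config));   // "db.example.com:5432"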