Questions on extending GAS spreadsheet usefulness - json

I would like to offer the opportunity to view output from the same data in a spreadsheet, in a TBA sidebar and, ideally, in another type of HTML window for output created, for example, with a JavaScript library like THREE.
The non-Google version I made is a web page with iframes that can be resized, dragged and opened/closed and, most importantly, whose content shares the same record object in the top window. So I believe, perhaps naively, that something similar could be made an option inside this established and popular application.
At the very least, the TBA trial has shown me it is useful to view and manipulate information from either the sheet or the TBA. The facility to navigate large building projects, clone rooms and floors, and combine JSON records (stored in repositories like myjson) for collaborative work is particularly inspiring for me.
I have tried using the sidebar for different HTML files, but the fact that only one stays open is not very useful, and frankly, sharing record objects is still beyond me. So that is the main question. Whether the Google people would consider an extra window type is probably a bit ambitious, but I think it is worth asking.

You can't maintain a global variable across calls to HtmlService. When you fire off an HtmlService instance, which runs in the browser, the server-side code that launched it exits.
From that point control is client side, in the HtmlService code. If you then launch a server-side function (using google.script.run from the client side), a new instance of the server-side script is launched, with no memory of the previous instance, which means that any global variables are re-initialized.
There are a number of techniques for persisting values across calls.
The simplest one, of course, is to pass the value to the HtmlService page in the first place, then to pass it back to the server side as an argument to google.script.run.
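For example (a minimal sketch; the template property and function names are illustrative):
// Server side: inject the value when creating the sidebar
function showSidebar() {
  var t = HtmlService.createTemplateFromFile('sidebar');
  t.payload = JSON.stringify({greeting: 'hello'});
  SpreadsheetApp.getUi().showSidebar(t.evaluate());
}
// Server side: the value comes back as an argument on a later call,
// since the globals from showSidebar() no longer exist here
function doSomething(payload) {
  var data = JSON.parse(payload);
}
// Client side (in sidebar.html):
//   var payload = <?!= payload ?>;  // printed into the page as a JS literal
//   google.script.run.doSomething(JSON.stringify(payload));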
Another is to use the Properties Service to hold your values; they will still be there when you come back, but there is a 9KB maximum entry size.
If you need more space, the Cache Service can hold 100KB in a single entry and you can use it in the same way (although there is a slight chance an entry will be cleaned away, it has never happened for me).
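A minimal sketch of both (the key name and stateObject are illustrative):
// Properties: persists indefinitely, but ~9KB per value
PropertiesService.getScriptProperties()
    .setProperty('state', JSON.stringify(stateObject));
var state = JSON.parse(
    PropertiesService.getScriptProperties().getProperty('state'));
// Cache: up to 100KB per entry, expires (here after 6 hours, the maximum)
CacheService.getScriptCache().put('state', JSON.stringify(stateObject), 21600);
var cached = CacheService.getScriptCache().get('state');
if (cached !== null) state = JSON.parse(cached);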
If you need even more space, there are techniques for compressing and/or spreading a single object across several cache entries, as documented here: http://ramblings.mcpher.com/Home/excelquirks/gassnips/squuezer. The same method supports Google Drive, or Google Cloud Storage if you need to persist data even longer.
Of course you can't pass non-stringifiable objects such as functions, but you can postpone their evaluation and allow the initialized server-side script to evaluate them, and even share the same code between server and client or across projects.
Some techniques for that are described in these articles
http://ramblings.mcpher.com/Home/excelquirks/gassnips/nonstringify
http://ramblings.mcpher.com/Home/excelquirks/gassnips/htmltemplateresuse
However, in your specific example it seems that the global data you want is fetched from an external API call. Why not just retrieve it client side in any case? If you need to do something with it server side, pass it to the server using google.script.run.
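Something along these lines, sketched in the same jQuery style as your code (apiUrl and handleData are placeholders):
// Client side, in the HtmlService page
$.get(apiUrl).done(function(data) {
  // use the data client side; only hand it to the server when needed
  google.script.run.handleData(JSON.stringify(data));
});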

window.open and window.postMessage() solved both of the problems I described above.
I hope the screenshot and code will assure you that the usefulness of Google Sheets can be extended for the common good. At the core are the two methods for inputting, copying and reviewing textual data: the spreadsheet for a slice through a set of data, and the TBA for navigating associations in the Trail (x axis) and Branches (y axis), and for working on Aspects (z axis) of the current selection that require attention, in collaborations, from different interests.
So, for example, a nurse would find the TBA useful for recording the many aspects of an examination of a patient, whereas a pharmacist might find a spreadsheet more useful for stock control. Both record their data in a common object I call an 'nset' (a hierarchy of named sets), saved in the cloud and available for distribution in collaborative activities.
The TBA is also useful for cloning large sets of records. For example, one room, complete with furniture, can be replicated across one floor, and that floor, complete with rooms, can then be replicated for a complete tower.
Being able to maintain parallel nset objects in multiple monitor windows by postMessage means unrivalled opportunities to display the same data in different forms of multimedia, including interactive animation, augmented reality, CNC machine instructions, IoT controls ...
Here is the related code:
From the TBA in the sidebar:
window.addEventListener("message", receiveMessage, false);

var popup;

function openMonitor(nset){
  var params = [
    'height=400',
    'width=400'
  ].join(',');
  let file = 'http://glasier.hk/blazer/model.html';
  popup = window.open(file, 'popup_window', params);
  popup.moveTo(100, 100);
}

function receiveMessage(event){
  let ed, nb;
  ed = event.data;
  nb = typeof ed === "string" ? ed : ed[0];
  switch(nb){
    case "Post":
      console.log("Post");
      popup.postMessage(["Refreshing nset", nset], "http://glasier.hk");
      break;
  }
}
function importNset(){
  google.script.run
    .withSuccessHandler(function (code) {
      root = '1grsin';
      trial = 'msm4r';
      orig = 'ozs29';
      code = orig;
      path = "https://api.myjson.com/bins/" + code;
      $.get(path)
        .done((data, textStatus, jqXHR) => {
          nset = data;
          openMonitor(nset);
          cfig = nset.cfig;
          start();
        });
    })
    .sendCode();
}
From the popup window:
$(document).ready(function(){
  name = $(window).attr("name");
  if(name === "workshop"){
    tgt = opener.location.href;
  }
  else{
    tgt = "https://n-rxnikgfd6bqtnglngjmbaz3j2p7cbcqce3dihry-0lu-script.googleusercontent.com";
  }
  $("#notice").html(tgt);
  opener.postMessage("Post", tgt);
  $(window).on("resize", function(){
    location.reload();
  });
});

window.addEventListener("message", receiveMessage, false);

function receiveMessage(event){
  let ed, nb;
  ed = event.data;
  nb = typeof ed === "string" ? ed : ed[0];
  switch(nb){
    case "Post":
      // reply to the window that opened this popup
      opener.postMessage(["nset", nset], "*");
      break;
    default:
      src = event.origin;
      notice = [ed[0], " from ", src];
      console.log(notice);
      // $("#notice").html(notice).show();
      nset = ed[1];
      cfig = nset.cfig;
      reloader(src);
  }
}
I should explain that the HTML part of the sidebar was built in a localhost workshop, with all styles and scripts compiled into a single file for pasting into a sidebar HTML file. The workshop is also available online. The Google target is provided by event.origin in postMessage. This would have to be issued to anyone wishing to make different monitors. For now I have just made the 3D modelling monitor with Three.js.
I think, after much research and questioning around here, this should be the proper answer.

The best way to implement global variables in GAS is through user properties or script properties: https://developers.google.com/apps-script/reference/properties/properties-service. If you'd rather deal with just one value, write them all to an object and then JSON.stringify it (and JSON.parse to get it back).
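For example (a sketch; the property key and values are arbitrary):
var props = PropertiesService.getUserProperties();
// write all your "globals" as one stringified object
props.setProperty('globals', JSON.stringify({count: 3, label: 'demo'}));
// ...later, in a different invocation, read them back
var globals = JSON.parse(props.getProperty('globals'));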

Obtain list of My Places from Google Maps

I am trying to obtain the list of places the user has saved on Google Maps. Now I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
That apparently there used to be a way to obtain the URL, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there --- I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, then processing the HTML and storing the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the array/map structure, I managed to get the Google name and Google place ID out of that array. That seems to work, but for lists over 20 items long Google only returns the first 20 and waits for the user to 'scroll down' before loading the next 20. That scroll triggers another call to get the next 20 results, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID being included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items.... Any ideas?
Your saved places list actually has what you'd call a feature ID attribute. This isn't common practice and Google frowns upon the technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
Highlighted is the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're actually manually triggering the same callback that Google's own scripts in Maps use to parse the data fed back to the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list. Depending on the programming language you're using, all it takes is a little tweaking (find/replace, loop through, lint and trim, etc.) of this array and you can pull your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint the result is updated in real time.
Some people may call it a "hack", but it gets the job done. :)
I hope I've pointed you in a direction, in the event you haven't found a solution; give this a shot.
Note that the URL has to be pasted in its entirety; SO truncated the hyperlink. Copy and paste the whole thing in one shot and Google will produce a text file with the arrays; in my case I curl the URLs I need and parse the returned strings as needed to pull data from Google where their API has limitations. Just a tip. :)
Also check Joel's answer, who did some research and refined some of the following information.
Pagination
You can use this tool to decrypt the pb parameter. PB stands for protocol buffer (protobuf), and Google uses its own kind of it for Maps. You can find various decoders for it by googling.
In my case, the pagination was done via one parameter (8iX0). It always seems to come with another, similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information, I tried 7i20 8i00 for page 1, and that seemed to work. For lists with >100 items, it just continues like that (8i120, 8i140 etc.)
Here's a code snippet in Python (quick & dirty). Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Notice the 8i%s0 in the URL; make sure to put the %s back when you paste your pb block.
import html
import re
import requests

url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}

def fetch_stops_from_maps():
    new_results = -1
    page = 0
    stops = []
    while new_results != 0:
        new_results = 0
        x = requests.get(url % page, headers=headers)
        txt = html.unescape(x.text)
        txt = txt.split("\n")[1]
        # each place shows up as a [null,null,<lat>,<lon>] fragment
        coords = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
        print(len(coords))
        for cord in coords:
            # curr = the description you can manually type in when saving
            curr = txt.split(cord)[1].split("\"]]")[0]
            curr = curr[curr.rindex(",\"") + 2:]
            parts = str(cord).split(",")
            lat = parts[2]
            lon = parts[3][:-1]
            stops.append((curr, lat, lon))
            new_results += 1
        page += 2
    return stops
Actually getting the correct url
Getting the correct URL currently seems to be the hardest part of doing this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb block once and called it a day.
As explained in the other answers, the ID of the list is visible in the basic URL (here, the 2sXX...) when you navigate to the list in your browser. It usually seems to be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this ID, you can put it into an existing protobuf block and it may work (I have only tested this with 3 different lists, which were all created by the same account, so this theory is far from proven).
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process of getting it. For this I use Burp Suite. It's a program mainly used for web security testing and has a free community edition; for our use case it is the perfect tool, because with it you can easily tinker with requests, change small parts of a request, send it again and immediately see whether your changes changed the response. For extracting the pb block, though, any program that can intercept browser traffic should do.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until maps loaded completely
In Burp, turn "Intercept" on, then in Google Maps scroll down in the list until it starts loading new results (always in blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and go through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click the request text and click "Send to Repeater", then switch to the Repeater tab. Here you can edit the request and send it again, seeing the response immediately. For example, after removing the authuser, hl, gl, q, ech and psi URL parameters, the request still works flawlessly. If you remove the tch=1 parameter, the response you get will be in a more human-readable format.
In the request text you should now be able to just search for the list ID you got from the link previously and replace it with the ID of another list (the search bar is at the bottom in Burp). As I said, this worked for me, but it may be that the pb block contains some additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with specific pb blocks. Just a theory, though. Let me know how it goes!
Further automating
I have theorised that one could automate getting the pb block using requests-html, because it can fully load HTML pages, but it isn't maintained anymore. Another option (probably the better one) is Selenium Wire, with which you should be able to load the page and intercept the requests, like we did in Burp. Seems like a whole lot of work though :D
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would first have to log in through Google's OAuth. It would then return your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
I was able to extract the data I needed from my google maps list. Below are some comments that expand on some of the other comments here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the size of the page, and the 8iXX parameter is the starting point. I haven't tested how large you can make the page limit, but I tested 100 and it seemed to work fine. This should make dealing with larger lists much easier.
Parsing out the data
Instead of using regex to parse out the relevant data from the response, I found that the response is basically just a massive JSON object and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. These may of course change over time if the network response changes format, so use them as a starting point in case the specific values aren't at those indexes anymore; hopefully they will still be close to those locations.
Script to parse the data (javascript / node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null

function prepare(input) {
  // There are 5 random characters before the JSON object we need to remove
  // Also I found that the newlines were messing up the JSON parsing,
  // so I removed those and it worked.
  const preparedForParsing = input.substring(5).replace(/\n/g, '')
  const json = JSON.parse(preparedForParsing)
  const results = json[0][1].map(array => array[14])
  return results
}

function prepareLookup(data) {
  // this function takes a list of indexes as arguments,
  // constructs them into a line of code and then
  // execs the retrieval in a try/catch to handle data not being present
  return function lookup(...indexes) {
    const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
    const cmd = `data${indexesWithBrackets}`
    try {
      const result = eval(cmd)
      return result
    } catch (e) {
      return null
    }
  }
}

function buildResults(preparedData) {
  const results = []
  for (const place of preparedData) {
    const lookup = prepareLookup(place)
    // Use the indexes below to extract certain pieces of data
    // or as a starting point of exploring the data response.
    const result = {
      address: {
        street_address: lookup(183, 1, 2),
        city: lookup(183, 1, 3),
        zip: lookup(183, 1, 4),
        state: lookup(183, 1, 5),
        country_code: lookup(183, 1, 6),
      },
      name: lookup(11),
      tags: lookup(13),
      notes: lookup(25, 15, 0, 2),
      placeId: lookup(78),
      phone: lookup(178, 0, 0),
      coordinates: {
        long: lookup(208, 0, 2),
        lat: lookup(208, 0, 3)
      }
    }
    results.push(result)
  }
  return results
}

const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the correct number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints, within a single getData() call (they should all be the same). E.g.
I can't post the code here (client project), but it's substantially the same framework as the Data Connector code in Google's docs. I have caching and backoff implemented.
Looking for any ideas, or has anyone experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimension/metric configurations, then GDS might fire a separate getData call for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter is built to retrieve data via API calls to third-party services, data which is agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's request for getData (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it will store a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it has finished, it will delete the cache key "cache_{hashOfReportParameters}_building" and will cache the final merged results it buffered under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing prior to the primary getData call, so we add a little delay for requests that look like search filters / widgets after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that, we compute a hash of all the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
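In Apps Script that could look roughly like this (a sketch; which request fields feed the hash depends on your connector):
function hashOfReportParameters(request) {
  // serialize the moving parts that influence the API response
  var parts = JSON.stringify([request.dateRange, request.configParams]);
  var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, parts);
  // hex-encode the signed byte array so it can be used inside a cache key
  return digest.map(function(b) {
    var v = (b + 256) % 256;
    return (v < 16 ? '0' : '') + v.toString(16);
  }).join('');
}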
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop, we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which is to allow it to traverse the API again. We have encountered a ~2% error rate when retrieving data we cached...
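Roughly (a sketch following the same key naming; rebuildFromApi stands in for whatever your primary getData path does):
var finalKey = 'cache_' + hashOfReportParameters(request) + '_final';
var cached = cache.getString(finalKey);
if (cached !== null) {
  response = JSON.parse(cached);
} else {
  // the ~2% fallback: traverse the API again, as the primary call would
  response = rebuildFromApi(request);
}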
With the cached result (or buffered API responses), you just transform your response to the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem... the Google cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache... and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the one big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieval is necessary.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
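The core idea of that splitting is roughly this (a bare sketch; the linked EnhancedCache library handles expiry and edge cases properly):
function putLarge(cache, key, value, ttl) {
  var chunkSize = 90 * 1024; // stay safely under the 100KB per-key limit
  var chunks = Math.ceil(value.length / chunkSize);
  for (var i = 0; i < chunks; i++) {
    cache.put(key + '_' + i, value.substr(i * chunkSize, chunkSize), ttl);
  }
  cache.put(key + '_chunks', String(chunks), ttl);
}
function getLarge(cache, key) {
  var chunks = cache.get(key + '_chunks');
  if (chunks === null) return null;
  var parts = [];
  for (var i = 0; i < Number(chunks); i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null; // a chunk was evicted; treat as a miss
    parts.push(part);
  }
  return parts.join('');
}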
I cannot share the final solution we implemented, as it is too specific to a client, but I hope this at least gives you a good idea of how to approach the problem.
Caching the full API result is a good idea in general, to avoid round trips and server load for no good reason, if near-realtime data is good enough for your needs.

What is background to have ServerHandler.addCallbackElement method?

Frequently, GAS users (me too) do not use the ServerHandler.addCallbackElement method, or use it in a way which does not cover all controls.
What is the background for having this method at all? Why did the GAS developers introduce it? Would it not be simpler to pass all input widget values to all server handlers as parameters?
The documentation does not provide answers to these questions.
I see the following possible causes:
Adding widgets as callback elements reduces traffic between browsers and GAS servers in the case of several handlers which handle different sets of controls. Here is a question: how much traffic does it save? I think a few kilobytes at maximum, usually hundreds of bytes. Is it worth it, considering the speed of modern internet connections, even mobile ones?
A form contains table-like edit controls with multiple buttons, and it is convenient to handle row elements with the same name. This issue is easily avoided by using tags; see the following example. If the tags are used for other purposes, it is not a problem to parse the source button ID and extract the row number.
Limits of the technology used behind the scenes. If there are such limits, then what are they?
function doGet(e) {
  var app = UiApp.createApplication();
  var vPanel = app.createVerticalPanel();
  var handler = app.createServerHandler("onBtnClick");
  var lstWidgets = [];
  for (var i = 0; i < 10; i++) {
    var hPanel = app.createHorizontalPanel().setTag('id_' + i);
    var text = app.createTextBox().setName("text_" + i);
    text.setText(new Date().valueOf());
    var btn = app.createButton("click me").addClickHandler(handler);
    btn.setTag(i).setId('id_btn' + i);
    var lbl = app.createLabel().setId("lbl_" + i);
    hPanel.add(text);
    hPanel.add(btn);
    hPanel.add(lbl);
    lstWidgets.push(text);
    lstWidgets.push(btn);
    vPanel.add(hPanel);
  }
  // The addCallbackElement calls simulate the situation when all widget
  // values are passed to a single server handler.
  for (var j = 0; j < lstWidgets.length; j++) {
    handler.addCallbackElement(lstWidgets[j]);
  }
  app.add(vPanel);
  return app;
}

function onBtnClick(e) {
  var app = UiApp.getActiveApplication();
  var i = e.parameter[e.parameter.source + '_tag'];
  var lbl = app.getElementById("lbl_" + i);
  lbl.setText("Source ButtonID: " + e.parameter.source + ', Text: ' + e.parameter["text_" + i]);
  return app;
}
Great question.
"How much traffic does it save?" I don't think we know yet, but I expect it will get more efficient over time. Here is another discussion on performance. Only extensive testing and improvements from Google will really allow us to identify best practices; for now all I can say is that ClientHandlers are clearly going to be better than ServerHandlers whenever possible.
As JavaScript developers, I think we are predominantly used to doing stuff client-side, and we think of PHP/ASP as server-side tools. My understanding so far is that our GAS code is actually running both client- and server-side (at the very least it's calling server-side functionality), but it sure seems like there's more going on server-side than we realize, and on the client side this seems to result in somewhat "compiled" code. I recognize some of this multi-tier deployment from my Java experience.
Since there are a lot of ways of doing the same thing, Google can take advantage of the fact that our code is not directly interpreted (by either side) to do things that would not necessarily make sense if we were writing the code by hand. This is why I think it will eventually become more efficient than other solutions, though probably not yet. For now I'd suggest steering clear of GAS if you are worried about performance. Maybe, just for fun, try looking at the source of your client-side web apps at runtime (view source). In order to do things most efficiently, I imagine they benefit from having us define things in a very high-level way; this gives them the most flexibility in how they interpret our code.
To specifically address your second question, I personally think of the handler function onBtnClick() as running on the server side, whereas the tags you refer to (and most of the doGet) live in the browser's engine on the client side. I can see how the functionality would be much more flexible (efficient and powerful) on the server side if they have an idea ahead of time of how much memory they would need to handle specific events/requests. (Clearly, if each getElementById() call ran a separate request, that would be like clicking a link to a new mini-webpage each time.)
So now the question is: why can't my handler just automatically create parameters with just the stuff I use in my handler function? The only reason we are asking this question in the first place is that there is some stuff in the UiApp which seems to be available on both ends. The UiApp is already in the scope of both doGet and onBtnClick, but the variables defined in doGet are not, so these values need to be either
explicitly saved, like ScriptProperties.setProperty(), or
put into the UiApp somewhere with an ID, or
explicitly given to the handler function using addCallbackElement().
Notice how you had to addCallbackElement(lstWidget), because it was not created with an app.create... constructor within the UiApp object. My guess is that GAS is implementing XML-compliant SOAP calls to a web service on the Google end; we may be able to figure this out by really studying the client-side source code. Just to reiterate, we could also use setProperty(); it does not really matter. We could even save the values via JDBC and then retrieve them with another connection from within the handler function, but somehow the data needs to be passed from the client to the server and vice versa.
From a programming perspective, there is a lot of stuff in the scope of your client-side doGet function that you probably would never want to pass to the server, and there may be functions in the scope of the server-side onBtnClick() with the same names as functions on the client side that are actually calls to totally different library functions, maybe even on totally different hardware (even though from the developer's perspective they work the same way).
Maybe the Google team has not yet really settled on how the UiApp works; otherwise they would just force, or at least allow, us to put everything in there. Yet another observation: when we call UiApp.getActiveApplication(), based on its name it does not seem like a constructor, but rather a method that returns a private instance from the UiApp object (object in the sense of a class that was previously instantiated and presumably initialized somewhere). I may not have 100% answered your question, but I sure did try; any further insight from the community would clearly be appreciated.
Now I may be straying off-topic, but I also imagine the actual product will continue to change as they do more to improve performance in the long term, and if we still feel like we are writing client-side code as developers, then that is a success for Google. Please correct me if I have stated anything wrong; I have only recently started using these tools and plan to follow up on this question with more specifics as I learn more, but as of right now this is my best interpretation.
If you use a FormPanel, all the sub-elements will be sent to your doPost function, with the button as the source, and your UiApp will be cleaned.
If you don't want that, use a callback to specify which element and siblings will be sent.
This is how the UiApp is designed.
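A minimal sketch of that design (UiApp is long deprecated, but for the record):
function doGet() {
  var app = UiApp.createApplication();
  var form = app.createFormPanel();
  var panel = app.createVerticalPanel();
  panel.add(app.createTextBox().setName('comment'));
  panel.add(app.createSubmitButton('Send'));
  form.add(panel);
  app.add(form);
  return app;
}
function doPost(e) {
  // every named sub-element of the FormPanel arrives in e.parameter
  var app = UiApp.getActiveApplication();
  app.add(app.createLabel('You wrote: ' + e.parameter.comment));
  return app;
}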

Why can't Web Worker call a function directly?

We can use the web worker in HTML5 like this:
var worker = new Worker('worker.js');
but why can't we call a function like this?
var worker = new Worker(function(){
  // do something
});
This is the way web workers are designed. They must have their own external JS file and their own environment, initialized by that file. They cannot share an environment with your regular global JS space, for multi-threading conflict reasons.
One reason that web workers are not allowed direct access to your global variables is that it would require thread synchronization between the two environments, which is not something that is available (and would seriously complicate things). Because web workers have their own separate global variables, they cannot mess with the main JS thread except through the messaging queue, which is properly synchronized with the main JS thread.
Perhaps someday more advanced JS programmers will be able to use traditional thread-synchronization techniques to share access to common variables, but for now all communication between the two threads must go through the message queue, and the web worker cannot access the main JavaScript thread's environment.
This question has been asked before, but for some reason, the OP decided to delete it.
I repost my answer, in case one needs a method to create a Web worker from a function.
In this post, three ways were shown to create a Web worker from an arbitrary string. In this answer, I'm using the third method, since it's supported in all environments.
A helper file is needed:
// Worker-helper.js
self.onmessage = function(e) {
  self.onmessage = null; // Clean-up
  eval(e.data);
};
In your actual Worker, this helper file is used as follows:
// Create a Web Worker from a function, which fully runs in the scope of a new
// Worker
function spawnWorker(func) {
  // Stringify the code. Example: (function(){/*logic*/}).call(self);
  var code = '(' + func + ').call(self);';
  var worker = new Worker('Worker-helper.js');
  // Initialise worker
  worker.postMessage(code);
  return worker;
}

var worker = spawnWorker(function() {
  // This function runs in the context of a separate Worker
  self.onmessage = function(e) {
    // Example: Throw any messages back
    self.postMessage(e.data);
  };
  // etc..
});

worker.onmessage = function() {
  // logic ...
};

worker.postMessage('Example');
Note that the scopes are strictly separated. Variables can only be passed back and forth using worker.postMessage and worker.onmessage. All messages are structured clones.
This answer might be a bit late, but I wrote a library to simplify the usage of web workers and it might suit OP's need. Check it out: https://github.com/derekchiang/simple-worker
It allows you to do something like:
SimpleWorker.run({
  func: intensiveFunction,
  args: [123456],
  success: function(res) {
    // do whatever you want
  },
  error: function(err) {
    // do whatever you want
  }
})
WebWorkers Essentials
WebWorkers are executed in an independent thread, so they have no access to the main thread where you declare them (and vice versa). The resulting scope is isolated and restricted. That's why you can't, for example, reach the DOM from inside the worker.
Communication with WebWorkers
Because communication between threads is necessary, there are mechanisms to accomplish it. The standard communication mechanism is messaging, using the worker.postMessage() function and the worker.onmessage event handler.
More advanced techniques are available, involving SharedArrayBuffers, but it is not my objective to cover them here. If you are interested in them, read here.
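In its simplest form the message mechanism looks like this (a minimal sketch):
// main.js
var worker = new Worker('worker.js');
worker.onmessage = function(e) { console.log('from worker:', e.data); };
worker.postMessage({n: 42});

// worker.js
self.onmessage = function(e) {
  self.postMessage(e.data.n * 2); // reply through the same message channel
};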
Threaded Functions
That's what the standard brings us.
However, ES6 provides us enough tools to implement an on-demand callable threaded function.
Since you can build a Worker from a Blob, and your function can be converted into one (using URL.createObjectURL), you only need to implement some kind of communication layer in both threads to handle the messages for you and obtain natural interaction.
Promises, of course, are your friend, considering that everything will happen asynchronously.
Applying this theory, you can easily implement the scenario you describe.
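A condensed sketch of that idea (Blob + URL.createObjectURL, with a Promise wrapped around the message round trip):
function threaded(fn) {
  // the worker body applies the serialized function to the posted arguments
  var src = 'self.onmessage = function(e){' +
            '  self.postMessage((' + fn + ').apply(null, e.data));' +
            '};';
  var url = URL.createObjectURL(new Blob([src], {type: 'text/javascript'}));
  var worker = new Worker(url);
  return function() {
    var args = [].slice.call(arguments);
    return new Promise(function(resolve, reject) {
      worker.onmessage = function(e) { resolve(e.data); };
      worker.onerror = reject;
      worker.postMessage(args);
    });
  };
}
// usage:
// var double = threaded(function(x) { return 2 * x; });
// double(21).then(function(r) { console.log(r); }); // 42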
My personal approach : ParallelFunction
I've recently implemented and published a tiny library which does exactly what you describe, in less than 2KB (minified).
It's called ParallelFunction, and it's available on GitHub, npm, and a couple of CDNs.
As you can see, it totally matches your request:
// Your function...
let calculatePi = new ParallelFunction(function(n) {
  // n determines the precision, and in consequence
  // the computing time to complete
  var v = 0;
  for (let i = 1; i <= n; i += 4) v += (1 / i) - (1 / (i + 2));
  return 4 * v;
});

// Your async call...
calculatePi(1000000).then(r => console.log(r));

// if you are inside an async function you can use await...
(async function() {
  let result = await calculatePi(1000000);
  console.log(result);
})();

// once you are done with it...
calculatePi.destroy();
After initialization, you can call your function as many times as you need. A Promise will be returned, which will resolve when your function finishes execution.
By the way, many other libraries exist.
Just use my tiny plugin https://github.com/zevero/worker-create
and do
var worker_url = Worker.create(function(e){
  self.postMessage('Example post from Worker'); // your code here
});
var worker = new Worker(worker_url);
While it's not optimal, and it's been mentioned in the comments, an external file is not needed if your browser supports blob URLs for Web Workers. HTML5Rocks was the inspiration for my code:
function sample(e)
{
  postMessage(sample_dependency());
}

function sample_dependency()
{
  return "BlobURLs rock!";
}

var blob = new Blob(["onmessage = " + sample + "\n" + sample_dependency]);
var blobURL = window.URL.createObjectURL(blob);
var worker = new Worker(blobURL);

worker.onmessage = function(e)
{
  console.log(e.data);
};

worker.postMessage("");
Caveats:
Blob workers will not successfully use relative URLs. The HTML5Rocks link covers this, but it was not part of the original question.
People have reported problems using blob URLs with Web Workers. I've tried it with IE11 (whatever shipped with FCU), MS Edge 41.16299 (Fall Creators Update), Firefox 57, and Chrome 62. No clue as to Safari support. The ones I've tested worked.
Note that the "sample" and "sample_dependency" references in the Blob constructor call implicitly invoke Function.prototype.toString() as sample.toString() and sample_dependency.toString(), which is very different from calling toString(sample) and toString(sample_dependency).
I posted this because it's the first Stack Overflow result that came up when searching for how to use Web Workers without requesting an additional file.
Took a look at Zevero's answer, and the code in his repo appears similar. If you prefer a clean wrapper, this is approximately what his code does.
Lastly -- I'm a noob here so any/all corrections are appreciated.
By design, web workers run in separate threads, while JavaScript itself is single-threaded: "multiple scripts cannot run at the same time."
Refer to: http://www.html5rocks.com/en/tutorials/workers/basics/

wpf form freezing with get data from database

I created a WPF application using C#. I have to load a lot of data from a MySQL database, using the ODBC 3.51 connector. When the data loads, my application freezes.
I tried to fix the problem using a thread, but I wasn't able to get it working. Please suggest a way to solve this problem...
Use the BackgroundWorker class. Its usage is very simple, and it is used quite often for tasks such as loading data. The following example shows its usage:
BackgroundWorker bgWorker = new BackgroundWorker() { WorkerReportsProgress = true };
bgWorker.DoWork += (s, e) => {
    // Load your data here
    // Use bgWorker.ReportProgress(); to report the current progress
};
bgWorker.ProgressChanged += (s, e) => {
    // Here you will be informed about progress, and here it is safe to change/show progress.
    // You can safely access ProgressBars or other controls from here.
};
bgWorker.RunWorkerCompleted += (s, e) => {
    // Here you will be informed when the job is done.
    // Use this event to unlock your GUI
};
bgWorker.RunWorkerAsync();
The use of the BackgroundWorker allows the UI thread to continue processing, so the application stays responsive during loading. But because of this, you also have to ensure that no actions can take place that rely on the loaded data. A very simple solution is to set your main UI elements' IsEnabled property to false, and set it back to true in RunWorkerCompleted. With a little imagination you can improve this crude behaviour into a nice UI experience (depending on the app).
It is generally good advice to run long operations in a separate thread (BackgroundWorker). One caveat applies: do not create WPF elements in the DoWork event. This will not work, because all types derived from DependencyObject must be created in the same thread in which they are used.
There are other solutions, for example creating a thread directly or using the event-based async pattern, but I recommend the BackgroundWorker for your task because it handles the plumbing for you. In the end the result is the same, but the road there is much easier.