Drive API files.list returning nextPageToken with empty item results - google-drive-api

In the last week or so we got a report of a user missing files in the file list in our app. We were a bit confused at first because they said they only had a couple of files that matched our query string, but with a bit of work we were able to reproduce their issue by adding a large number of files to our own Google Drive. Previously we had been assuming people would have fewer than 100 files and hadn't been doing paging, to avoid multiple files.list requests.
After switching to paging, we noticed that one of our test accounts was sending hundreds and hundreds of files.list requests, and most of the responses did not contain any files but did contain a nextPageToken. I'll update as soon as I can get a screenshot, but the client was sending enough requests to heat the computer up and drain the battery fairly quickly.
We also found that the form of the query, even when it matches the same files, can have a drastic effect on the number of requests needed to retrieve our full file list. For example, switching '=' to 'contains' in the query param significantly reduces the number of requests made, but we don't see any guarantee that this is a reasonable and generalizable solution.
Is this the intended behavior? Is there anything we can do to reduce the number of requests that we are sending?
We're using the following code to retrieve files created by our app, and it is what triggers the issue.
runLoad: function (pageToken)
{
    gapi.client.drive.files.list(
    {
        'maxResults': 999,
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results)
    {
        this.filePageRequests++;
        if (results.error || !results.nextPageToken || this.filePageRequests >= MAX_FILE_PAGE_REQUESTS)
        {
            this.isLoading(false);
        }
        else
        {
            this.runLoad(results.nextPageToken);
        }
    }.bind(this));
}

It is, but probably shouldn't be, the correct behaviour.
It generally occurs when using the drive.file scope. What (I think) is happening is that the API layer is fetching all files, and then removing those that are outside of the current scope/query, and returning the remainder to your client app. In theory, a particular page of files could have no files in-scope, and so the returned array is empty.
As you've seen, it's a horribly inefficient way of doing it, but that seems to be the way it is. You simply have to keep following the next page link until it's null.
As to "Is there anything we can do to reduce the number of requests that we are sending?"
You're already setting maxResults to 999, which is the obvious step. Just be aware that I have seen this value trigger internal errors (timeouts?) which manifest themselves as 500 errors. You might want to sacrifice efficiency for reliability and stick to the default of 100, which seems to be better tested.
I don't know if the code you posted is your actual code or just a simplified illustration, but you need to make sure you are dealing with 401 errors (auth expiry) and 500 errors (sometimes recoverable with a retry).
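For illustration only, here's a rough sketch (not your actual code) of a paging loop that accumulates items across empty pages and retries 5xx responses with exponential backoff; this.files as an accumulator, and the surrounding object, are assumptions based on your snippet:
runLoad: function (pageToken, attempt)
{
    attempt = attempt || 0;
    gapi.client.drive.files.list(
    {
        'maxResults': 100, // the better-tested default
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results)
    {
        if (results.error)
        {
            if (results.error.code >= 500 && attempt < 5)
            {
                // transient server error: retry the same page with backoff
                var wait = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
                setTimeout(function () { this.runLoad(pageToken, attempt + 1); }.bind(this), wait);
            }
            else
            {
                // 401 (expired auth) or persistent failure: re-authorize or give up
                this.isLoading(false);
            }
            return;
        }
        this.filePageRequests++;
        if (results.items && results.items.length)
        {
            this.files = this.files.concat(results.items); // accumulate items; empty pages add nothing
        }
        if (results.nextPageToken && this.filePageRequests < MAX_FILE_PAGE_REQUESTS)
        {
            this.runLoad(results.nextPageToken);
        }
        else
        {
            this.isLoading(false);
        }
    }.bind(this));
}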


Obtain list of My Places from Google Maps

I am trying to obtain the list of places the user has saved on Google Maps. Now I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
that apparently there used to be a way to obtain a URL for the list, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there --- I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, then processing the HTML and storing the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the array/map structure, I managed to get the Google name and Google place ID out of that array. That works to a point, but for lists over 20 items long Google only returns the first 20 and waits for the user to 'scroll down' to get the next 20. That scroll seems to trigger another call to get the next 20 results, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID being included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items... Any ideas?
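For reference, the first step (grabbing window.APP_INITIALIZATION_STATE from the shared page) looks roughly like this in Node.js; the regex, and the assumption that the share URL returns the full page without redirects, may need adjusting:
const https = require('https');

// Hedged sketch: fetch the shared-list page and pull out the inline
// window.APP_INITIALIZATION_STATE assignment described above.
function fetchPage(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      let body = '';
      res.on('data', chunk => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

async function getInitState(shareUrl) {
  const html = await fetchPage(shareUrl);
  // The state is assigned inline in a <script> tag; this pattern is brittle
  // and may need tweaking if Google changes the page markup.
  const match = html.match(/window\.APP_INITIALIZATION_STATE\s*=\s*(\[[\s\S]*?\]);window/);
  if (!match) throw new Error('APP_INITIALIZATION_STATE not found');
  return JSON.parse(match[1]); // walking the nested arrays for names/place IDs is up to you
}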
Your saved places list actually has what you could call a feature ID attribute. This isn't a common practice and Google frowns upon this technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
Embedded in it is the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're actually manually triggering the same callback that Google's own Maps scripts use to parse the data and feed it back to the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list. Depending on the programming language you're using, all it takes is a little tweaking of this array (find/replace, loop through, lint and trim, etc.) and you can pull your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint it's updated in real time.
Some people may call it a "hack"; but it gets the job done. :)
Hope this points you in a direction in case you haven't found a solution; give it a shot.
Note the URL has to be pasted in its entirety; SO truncated the hyperlink. Copy and paste the whole thing in one shot and Google will return a text file with the arrays. In my case I curl the URLs I need and parse the returned strings as needed to pull data from Google where their API has limitations. Just a tip. :)
Also check Joel's answer; he did some research and refined some of the following information.
Pagination
You can use this tool to decode the pb parameter. PB stands for protocol buffer (protobuf) and Google uses its own flavor of it for Maps. You can find different decoders for it by googling.
In my case, the pagination was done via one parameter (8iX0). It seems that it always comes with another similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information, I tried 7i20 8i00 for page 1, and that seemed to work. For lists with >100 items, it just continues like that (8i120, 8i140, etc.).
Here's a code snippet in Python (quick & dirty). Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Notice the 8i%s0 in the URL; make sure to put the %s back when you paste your pb block.
url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}
def fetch_stops_from_maps():
new_results = -1
page = 0
results = []
while new_results != 0:
new_results = 0
x = requests.get(url % page, headers=headers)
txt = html.unescape(x.text)
txt = txt.split("\n")[1]
results = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
print(len(results))
for cord in results:
# curr = the description you can manually type in when saving
curr = txt.split(cord)[1].split("\"]]")[0]
curr = curr[curr.rindex(",\"") + 2:]
cords = str(cord).split(",")
lat = cords[2]
lon = cords[3][:-1]
results.append(s)
new_results += 1
page += 2
Actually getting the correct URL
Getting the correct URL currently seems to be the hardest part of doing this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb block once and called it a day.
As explained in the other answers, the id of the list is visible in the basic url (here, the 2sXX...) when you navigate to the list in your browser. It seems to usually be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this id, you can put it into an existing protobuf-block and it may work (I only tested this with 3 different lists, which were all created by the same account, so this theory is far from proven).
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process of getting it. For this I use Burp Suite, a program mainly used for web-security testing that has a free community edition. For our use case it is the perfect tool, because it lets you easily tinker with a request, change small parts of it, send it again and immediately see whether your change affected the response. For extracting the pb block, though, any program that can intercept browser traffic should work.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until Maps has loaded completely
In Burp, turn "Intercept" on, then in Google Maps scroll down in the list until it starts loading new results (always blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and go through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click in the request text and click "Send to Repeater", then switch to the Repeater tab. There you can edit the request and send it again, seeing the response immediately. For example, after removing the authuser, hl, gl, q, ech and psi URL parameters, the request still works flawlessly. If you remove the tch=1 parameter, the response you get will be in a more human-readable format.
In the request text you should now be able to search for the list ID you got from the link previously and replace it with the ID of another list (the search bar is at the bottom in Burp). As I said, this worked for me, but it may be that the pb block contains some additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with specific pb blocks. Just a theory though. Let me know how it goes!
Further automating
I have theorised that one could automate getting the pb block using requests-html, because it can fully render HTML pages, but that project isn't updated anymore. Another option (probably the better one) is Selenium Wire, as you should be able to load the page and intercept the requests, like we did in Burp. Seems like a whole lot of work tho :D
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would first have to log in through Google's OAuth. It would then return your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
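As a very rough, unverified sketch of calling that endpoint outside the browser (the cookie values are placeholders you would have to copy from an already signed-in session, and whether that is sufficient auth is an assumption):
const https = require('https');

// Hedged sketch: request the bookmarks XML with a session cookie copied
// from a browser where you are already logged in.
const options = {
  hostname: 'www.google.com',
  path: '/bookmarks/?output=xml',
  headers: { 'Cookie': 'SID=...; HSID=...; SSID=...' } // placeholders, copy from your browser
};

https.get(options, res => {
  let xml = '';
  res.on('data', chunk => (xml += chunk));
  res.on('end', () => console.log(xml)); // parse the <bookmark> entries from here
});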
I was able to extract the data I needed from my google maps list. Below are some comments that expand on some of the other comments here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the page size and the 8iXX parameter is the starting offset. I haven't tested how large you can make the page, but I tried 100 and it seemed to work fine. This should make dealing with larger lists much easier.
Parsing out the data
Instead of using regex to parse the relevant data out of the response, I found that the response is basically just a massive JSON object, and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. These may of course change over time if the network response changes format, so use them as a starting point in case the specific values aren't at those indexes anymore. Hopefully they would still be close to those locations.
Script to parse the data (javascript / node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null

function prepare(input) {
  // There are 5 random characters before the JSON object we need to remove
  // Also I found that the newlines were messing up the JSON parsing,
  // so I removed those and it worked.
  const preparedForParsing = input.substring(5).replace(/\n/g, '')
  const json = JSON.parse(preparedForParsing)
  const results = json[0][1].map(array => array[14])
  return results
}

function prepareLookup(data) {
  // this function takes a list of indexes as arguments,
  // constructs them into a line of code and then
  // execs the retrieval in a try/catch to handle data not being present
  return function lookup(...indexes) {
    const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
    const cmd = `data${indexesWithBrackets}`
    try {
      const result = eval(cmd)
      return result
    } catch (e) {
      return null
    }
  }
}

function buildResults(preparedData) {
  const results = []
  for (const place of preparedData) {
    const lookup = prepareLookup(place)
    // Use the indexes below to extract certain pieces of data
    // or as a starting point of exploring the data response.
    const result = {
      address: {
        street_address: lookup(183, 1, 2),
        city: lookup(183, 1, 3),
        zip: lookup(183, 1, 4),
        state: lookup(183, 1, 5),
        country_code: lookup(183, 1, 6),
      },
      name: lookup(11),
      tags: lookup(13),
      notes: lookup(25, 15, 0, 2),
      placeId: lookup(78),
      phone: lookup(178, 0, 0),
      coordinates: {
        long: lookup(208, 0, 2),
        lat: lookup(208, 0, 3)
      }
    }
    results.push(result)
  }
  return results
}

const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the correct number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints in a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
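If that's what's happening, a rough mitigation (a sketch only, assuming the getFields() helper from Google's sample connector) is to short-circuit getData for semantic-detection requests and return a tiny static sample instead of hitting your API:
function getData(request) {
  var requestedFields = getFields().forIds(
      request.fields.map(function (field) { return field.name; }));

  // Hedged sketch: semantic type detection requests carry
  // request.scriptParams.sampleExtraction === true, so skip the real fetch.
  // (Only sensible if your schema already declares field types explicitly.)
  if (request.scriptParams && request.scriptParams.sampleExtraction === true) {
    return {
      schema: requestedFields.build(),
      rows: [{ values: requestedFields.asArray().map(function () { return ''; }) }]
    };
  }

  // ...normal fetch + transform path here...
}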
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter is built to retrieve data via API calls from third-party services, data which is agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically asking for more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it has finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it has buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for anything that looks like a search filter / widget going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
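A minimal sketch of that hashing step; which request parts you include is up to your connector:
function hashOfReportParameters(request) {
  // Hedged sketch: derive a stable key fragment from the moving parts of the request.
  var movingParts = JSON.stringify({
    dateRange: request.dateRange,
    configParams: request.configParams
  });
  var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, movingParts);
  // turn the signed byte array into a hex string that is safe inside a cache key
  return digest.map(function (b) {
    var v = (b < 0 ? b + 256 : b).toString(16);
    return v.length === 1 ? '0' + v : v;
  }).join('');
}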
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which is to allow it to traverse the API again. We have encountered a ~2% error rate when retrieving data we cached...
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
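That transformation step, roughly sketched (field IDs matching keys in your API records is an assumption):
function buildRows(requestedFields, apiRecords) {
  // Hedged sketch: map buffered/cached API objects to the row format GDS expects,
  // using only the fields this particular getData call asked for.
  return apiRecords.map(function (record) {
    var values = requestedFields.asArray().map(function (field) {
      return record[field.getId()];
    });
    return { values: values };
  });
}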
As you start implementing this, you'll notice yet another problem... Google's cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache... and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieval is necessary.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
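For illustration, a bare-bones version of that chunking idea (the linked EnhancedCache does this more robustly) could look like:
// Hedged sketch: split a large string across several cache keys and
// reassemble it on read; CacheService values are capped at ~100KB each.
var CHUNK_SIZE = 90 * 1024; // stay safely under the per-key limit

function putLargeString(cache, key, value, ttlSeconds) {
  var chunkCount = Math.ceil(value.length / CHUNK_SIZE);
  var chunks = {};
  for (var i = 0; i < chunkCount; i++) {
    chunks[key + '_' + i] = value.substr(i * CHUNK_SIZE, CHUNK_SIZE);
  }
  chunks[key + '_count'] = String(chunkCount);
  cache.putAll(chunks, ttlSeconds); // one round trip for all chunk keys
}

function getLargeString(cache, key) {
  var chunkCount = Number(cache.get(key + '_count'));
  if (!chunkCount) return null;
  var parts = [];
  for (var i = 0; i < chunkCount; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null; // a chunk expired: treat the whole entry as a miss
    parts.push(part);
  }
  return parts.join('');
}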
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general to avoid round trips and server load for no good reason if near-realtime is good enough for your needs.

Service invoked too many times for one day: urlfetch

I am getting the following error in my Sheets Add-on:
Service invoked too many times for one day: urlfetch
I'm aware of the limits here, but how can I tell if I am hitting the "URLFetch calls" limit of 100,000 or the "URLFetch data received" limit of 100MB? They are two very different issues, and if I'm hitting the first one I must be making requests unintentionally somewhere, because there's no way I'm intentionally making the call 100k times a day. It is possible I'm hitting the 100MB limit, but the way the error is phrased makes me think it's the first. Is there any way to know for sure which one I'm hitting?
I have run into that too. I only have 1000 rows going out to a web service. The data did not change, neither in my sheet nor in the service, but at some point today most of my cells showed #Error with this cause.
I feel like it's going out to re-fetch the results way too often. Is there not some caching that can be employed?
UPDATE (long overdue): adding a cache was exactly what was needed. So I implemented a function fetch(url) which uses a cache and thereby avoids the duplicate calls.
function fetch(url) {
  var cache = CacheService.getScriptCache();
  var result = cache.get(url);
  if (!result) {
    var response = UrlFetchApp.fetch(url);
    result = response.getContentText();
    cache.put(url, result, 21600); // cache for up to 6 hours (the maximum expiry)
  }
  return result;
}
You currently cannot tell which limit you are hitting from the error message alone.
Perhaps run a counter in your script.
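For example, a rough sketch of such a counter using PropertiesService (the wrapper name and property key are made up); route your UrlFetchApp calls through it and check the tally at the end of the day:
// Hedged sketch: count UrlFetchApp calls per day in script properties.
function countedFetch(url, params) {
  var props = PropertiesService.getScriptProperties();
  var key = 'urlfetch_count_' + Utilities.formatDate(new Date(), 'UTC', 'yyyy-MM-dd');
  var count = Number(props.getProperty(key) || 0) + 1;
  props.setProperty(key, String(count));
  return UrlFetchApp.fetch(url, params || {});
}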

How to extend AFNetworking 2.0 to perform request combining

I have a UI where the same image URL could be requested by several UIImageViews at varying times. Obviously if a request from one of them has finished then returning the cached version works as expected. However, especially with slower networks, I'd like to be able to piggy-back requests for an image URL onto any currently running/waiting HTTP request for the same URL.
On an HTTP server this is called request combining, and I'd love to do the same in the client: combine the different requests for the same URL into a single request and then call back separately to each of the callers. The requests for that URL don't happen to start at the same time.
What's the best way to accomplish this?
I think re-writing UIImageView+AFNetworking might be the easiest way:
check the af_sharedImageRequestOperationQueue to see if it has an operation with the same request
if I do already have an operation in the queue or running then add myself to some list of callbacks/blocks to be called on success/failure
if I don't have the operation, then create it as normal
in setCompletionBlockWithSuccess, call each of the blocks in turn.
Any simpler alternatives?
I encountered a similar problem and decided that your way was the most straightforward. One added bit of complexity is that these downloads require special credentials and so must go through their own operation queue. Here's the code from my UIImageView category to check whether a particular URL is in flight:
NSUInteger foundOperation = [[ConnectionManager sharedConnectionManager].operationQueue.operations indexOfObjectPassingTest:^BOOL(AFHTTPRequestOperation *obj, NSUInteger idx, BOOL *stop) {
    BOOL URLAlreadyInFlight = [obj.request.URL.absoluteString isEqualToString:URL.absoluteString];
    if (URLAlreadyInFlight) {
        NSBlockOperation *updateUIOperation = [NSBlockOperation blockOperationWithBlock:^{
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                self.image = [[ImageCache sharedImageCache] cachedImageForURL:URL];
            }];
        }];
        // Makes updating the UI dependent on the completion of the matching operation.
        [updateUIOperation addDependency:obj];
    }
    return URLAlreadyInFlight;
}];
Were you able to come up with a better solution?
EDIT: Well, it looks like my method of updating the UI just can't work, as the operation's completion blocks are run asynchronously, so the operation finishes before the blocks are run. However, I was able to modify the image cache to be able to add callbacks for when certain URLs are cached, which seems to work correctly. So this method will properly detect when certain URLs are in flight and be able to take action with that knowledge.

New Google Sheets custom functions sometimes display "Loading..." indefinitely

SPECIFIC FOR: "NEW" google sheets only.
This is a known issue in the new Sheets, as highlighted by Google.
Issue: if you write complex* custom functions in google-apps-script for Google Sheets, you will occasionally run into cells which display a red error box around the cell with the text "Loading..."
Google has suggested:
If this occurs, try reloading the page or renaming the function and changing all references to the new name.
However for other developers experiencing this issue (and who are unable to escape the "loading..." error), I've written my findings in the answer below on how to get past this (with limitations) consistently.
*We're treating this question as the canonical answer for Google Sheet's indefinite "Error... Loading data" problem. It's not limited to complex or slow functions.
Important tip: create multiple copies of your entire spreadsheet as you experiment. I have had 3 Google spreadsheets corrupted and rendered completely inaccessible (stuck in a refresh loop). This happened while I was experimenting with custom functions, so YOU HAVE BEEN WARNED!
You will want to try one or many of the following ways to fix this issue:
As suggested by Google, try reloading the spreadsheet, renaming the function, or changing the parameters in the cell to see if this fixes the issue.
Surround ALL your custom functions in a try-catch block. This will help detect code issues you may not have tested properly. E.g.:
try {
  // methods
} catch (ex) {
  return "Exception:" + ex;
}
Revert to the old sheets and test your functions and check for any other type of error such as an infinite loop or invalid data format. If the function does not work in the old sheets, it will not work in the new sheets and it will be more difficult to debug.
Ensure NONE of your parameters refer to, can be expected to, or will ever contain a number larger than 1 million (1,000,000). No idea why, but using a number larger than a million as any parameter will cause your function to fail to execute. If you have to, ask for the input to be reduced in size (maybe divide by 1000, or ask for m instead of mm).
Check for numeric or floating point issues where numbers may exceed a normal set of significant figures. The new sheets seems to be a little glitchy with numbers so if you are expecting very large or very complex numbers, your functions may not work.
Finally, if none of the above work, switch to the old google sheets and continue working.
If you find any other limitations or causes for functions to fail to execute, please write them below for me and other users who are heavy g-sheet users!
I also had the infinite loading issue with the following function.
// check if an item can be checked off
function checkedOff( need, have ) {
  var retStr = "nope";
  if( have >= need ){
    retStr = "yep";
  }
  return retStr;
};
Turns out you shouldn't have a trailing ";". Removing the semicolon solved the problem.
// check if an item can be checked off
function checkedOff( need, have ) {
  var retStr = "nope";
  if( have >= need ){
    retStr = "yep";
  }
  return retStr;
}
This runs as one would expect.
FWIW, I just ran into this and the culprit ended up being a getRange() call that pulled several thousand rows into an array. Periodically it would get hung on the "Loading..." message.
I worked around it by putting that range into the document cache. It's a little kludgy because the cache only stores strings, not arrays, but you can force it back into an array using .split(',') when you need to access the array.
(In my case it's a single array. There's probably a way to do it with a double array, either by sending each row or column into its own cache entry, or by reading the cache value back N items at a time, each N becoming its own array; a rough sketch of that idea follows the code below.)
Here's the relevant bit from my code:
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("mySheet"); // search the "mySheet" sheet
// is the big list already in the cache?
var cache = CacheService.getDocumentCache();
var cached = cache.get("columnValues");
if (cached != null) {
  var columnValues = cached.split(','); // take the cached string and make it an array
} else { // it's not in the cache, so put it there
  var column = 1; // the column with your index
  var columnValues = sheet.getRange(2, column, sheet.getLastRow()).getValues(); // first row is header
  cache.put("columnValues", columnValues, 21600); // max expiry is 21600 s (6 h); this forces the array into a string as if you used .join() on it
}
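And here's a rough sketch of the double-array idea mentioned above, assuming the cached range has a fixed number of columns (numCols is made up):
// Hedged sketch: flatten a 2D range into one cached string, rebuild it on read.
var numCols = 3; // assumed, fixed width of the cached range

function cacheRange(cache, values) {
  // values is the 2D array from getValues(); join() flattens it row by row
  cache.put("rangeValues", values.join(','), 21600);
}

function readRange(cache) {
  var cached = cache.get("rangeValues");
  if (cached == null) return null;
  var flat = cached.split(',');
  var rows = [];
  for (var i = 0; i < flat.length; i += numCols) {
    rows.push(flat.slice(i, i + numCols)); // every numCols items become one row
  }
  return rows;
}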
This is definitely a bug in Apps Script -- getRange() shouldn't hang without a timeout or error message. But at least there's a workaround. Here's the bug I opened against it, where I've also put the full code.gs from my sheet.
One cause: Permissions needing authorizing.
As for this problem -- better phrased as the cell result(s) of a custom function displaying the disgustingly-vague message 'Loading... Error: loading data...' -- in the case where all instances of the same or similar custom function calls display this error, the cause can be that Google Sheets needs permissions to run the script (often, additionally, meaning that in the past it didn't need these). Instead of acting appropriately -- prompting the user for these permissions, or else returning a clear error -- Sheets just hangs with this disgustingly vague message.
Additional permissions can be needed from one or more of:
Google Apps Script has since rewritten its permission structure -- this is how the problem just happened to me, per my internal note O80U3Z.
Your code, or some library it uses, made changes that require more access ...but in this case you have a much better chance of guessing the cause of this disgustingly-vague error, so hopefully you won't be reading here.
To fix it, I explicitly ran my GAS spreadsheet code by both clicking one of my custom menu functions and, in the script editor, running one of my custom JS functions -- notably onOpen(), since that is the most comprehensive. The first prompted me for the new permissions, via the popup 'Authorization Required. The application "MM6ZBT(MM6Z83 script)" needs authorization to run.', though onOpen() also did this in cases where GAS had revised its permissions since we last used that sheet. Then, as I was still getting this 'Loading...' error, I reloaded the web page (so the sheet), and, at least in these cases of this disgustingly vague error, it was gone and the computations worked fine :-)
TL;DR - Try duplicating the sheet tab and delete the old one
I struggled with this issue today, and tried some of the approaches mentioned. For various reasons, renaming the function wasn't possible for me.
It was very clear that if I called my function like this in cell X25:
=myFunction("a", 1, "b", 2, "c", 3)
The cell would be stuck "Loading...", while just changing a parameter slightly (e.g. converting a number to a string) made the cell evaluate fine.
=myFunction("a", "" & 1, "b", 2, "c", 3)
Just copying the formula into another cell (e.g. X24) and executing it there seemed to bypass the problem. As soon as I moved it back to the original parameters or cell, it got stuck "Loading..." again.
So I would assume it's some kind of caching of "Cell ID", function and parameters that go bonkers on Google's side.
My good-enough solution was to simply duplicate the Sheet tab, delete the old one, and finally rename the new one back to the original name. This solved the problem for me.
I also had the "loading data..." error but none of the fixes described here worked for me. It didn't seem to be caused by the issues described here. In my case, I narrowed it down to a specific floating point operation issue (it looks like a real bug in Google Sheets to me), and documented one possible work around at
Google Sheets / Apps "Loading data" error: is there a better workaround?
To summarize (at the request of commenter Steve), if a cell with
= myfunction(B10)
generated a "loading data" error, then for me it could be fixed by wrapping the argument in a "value()" function:
= myfunction(value(B10))
which converts the number in cell B10 (which seemed like a normal number but generated problems somehow) into a normal number that works fine.
I also had the problem that you explained. It seems that it can be caused in more than one way.
I ended up finding that my custom function was displaying that error because it relied on data from an =IMPORTRANGE() call, and that call was failing.
I eventually found that the =IMPORTRANGE() call was failing because I had forgotten to update the URL that it was importing from when I had uploaded a new version of that imported-from sheet. It seems that trying to IMPORTRANGE from a trashed file can cause the infinite "Loading..." error.
Update 2022
It looks like this bug is still happening. I tried ALL the solutions mentioned here but none worked.
What worked was to start with a blank slate. I recreated the file, copy-pasted my data, reapplied my preferred style and format, and lo-and-behold the sheet finally managed to pull the data using my custom functions.
This is definitely a bug on Google's end, and it's all the more annoying because they removed the "Report a problem" button from the "Help" section.
Nevermind
The newer sheet has stopped working too. This is so annoying ..
The problem is that when a custom function formula cell starts showing Loading..., the custom function does not get called at all. The code in the script project thus does not come into play. Even the simplest custom functions sometimes suffer from the issue.
The problem usually goes away if you clear the formula cell and undo, or slightly edit the custom function's parameters to cause it to get re-evaluated. But that does not solve the issue. Google has been dragging their feet solving the underlying cause for many years.
To help the issue get Google's attention, star issue 233124478 in the issue tracker. Click the star icon ☆ in the top left-hand corner to vote for fixing the issue and get notified of status changes. Please do not post a "me too" or "+1" reply, but just click the star icon. Google prioritizes issues with the most stars.
Add-ons
I had two add-ons, and no function was loading.
I removed them, and all is well!
For me, renaming the custom function solved the problem. For now at least.
Just to add to Azmo's answer...
I in fact removed all trailing semi-colons from the code:
// check if an item can be checked off
function checkedOff( need, have ) {
  var retStr = "nope"
  if( have >= need ){
    retStr = "yep"
  }
  return retStr
}
And discovered that, when doing this over a large range, you can also max out the acceptable number of calls to the API.
To get around it I added an IF THEN check around my custom script call.
So instead of:
=checkedOff(H10,H11)
Use something like this to check for a populated field before execution:
=if(H17<>"-",checkedOff(H10,H11),0)
My Apps Script pulling data from my MSSQL database displayed just fine in Google Sheets in my laptop browser, but then did not display in the Android GS app.
Per this thread it looks like there are a number of issues that could cause this, but DestinyArchitect's answer above re: permissions seemed like the simplest fix.
While testing my Apps Script, sharing was off for this Google Sheet file.
Once I moved it to my team's folder, where we have default sharing switched on with a few team members, the MSSQL data showed right up in the Google Sheet in my Android GS app.
Easy fix, this time...
In my case, the cell was stuck with a Loading... message due to (probably) a race condition between function and formula resolution.
This is my custom function:
function ifBlank(value1, value2) {
  return !!value1 ? value1 : value2;
}
This is the formula calling it:
=IFBLANK(VLOOKUP($A2,Overrides!$A$2:$E,5,FALSE),VLOOKUP($A2,'_resourceReq'!$A$2:$C,3))
Those VLOOKUP values could be pretty complex and could also take some time to resolve.
Solution: (in my case)
Wrapping the VLOOKUP() into TO_TEXT() or VALUE() for example.
So, changing the formula to =IFBLANK(TO_TEXT(VLOOKUP($A2,Overrides!$A$2:$E,5,FALSE)),TO_TEXT(VLOOKUP($A2,'_resourceReq'!$A$2:$C,3))) worked well.
If that doesn't work maybe try resolving the value from a function into a cell before using it as the argument of your custom function.
In my case, multiple cells using functions experienced this issue, but the simple answer was... wait.
In my case, I was scraping data via importXML functions across multiple rows and
columns. I was thrilled with the results, feeling on top of the world, then "Loading..." started showing its ugly face. For way too long. That's how I wound up here in troubleshooting mode, impatient and upset that Google was doing me wrong.
I tried many of the solutions here, only to find my "Loading..." antagonist acting unpredictably, popping up at random like a game of whack-a-mole, with nothing to do with the code itself.
So. In my case, it was a matter of waiting it out (towards an hour for some rows, but I had so many cells fetching url data).
My layman's guess is that fetching data like this gets put in their bandwidth pipeline, lesser priority than typing a url into a search bar or other user requests.