I'm using this tutorial (https://scrapediary.com/find-local-leads-with-google-places-api-and-sheets/) to scrape data from the Google Places API into a Google Sheet. I copied the code exactly:
var output = [ ["Name", "Place ID", "Latitude", "Longitude", "Types"]]
var url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?types=food&location=51.4977836,-0.1522502&radius=200&key=AIzaSyBtepY6mCTkHr3m4UCacxSkePkli5yEbCM";
var response = UrlFetchApp.fetch(url)
payload = JSON.parse(response)
for (var x = 0; x < payload['results'].length; x++){
var inner = [ payload['results'][x]['name'], payload['results'][x]['place'],payload['results'][x]['latitude'],payload['results'][x]['longitude'],payload['results'][x]['types']]
output.push(inner)}
}
and I'm trying to run it in Google Sheets like this:
=placeSearch("Golf Course","51.4977836","-0.1522502","20000","i_put_my_api_key_here")
and it shows "Loading" and then returns nothing. I've double checked that the url itself works by pasting it into the browser and it returns the results in JSON format. I feel like there's a problem with pushing the results to the sheet but I can't find it
There is no doubt that the code you copied is working. I tested the exact code you posted to replicate the problem, and the only change I needed was adding a return statement so the function populates the cell.
See my exact code, which worked and returned the data in Sheets.
function placesAPI(keyword, latitude, longitude, radius, api_key, depth) {
  var output = [["Name", "Place ID", "Latitude", "Longitude", "Types"]];
  var url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?types=food&location=51.4977836,-0.1522502&radius=200&key=AIzaSyBtepY6mCTkHr3m4UCacxSkePkli5yEbCM";
  var response = UrlFetchApp.fetch(url);
  var payload = JSON.parse(response);
  for (var x = 0; x < payload['results'].length; x++) {
    var inner = [payload['results'][x]['name'], payload['results'][x]['place'], payload['results'][x]['latitude'], payload['results'][x]['longitude'], payload['results'][x]['types']];
    output.push(inner);
  }
  return output; // added this line to put the values in the cell
}
In the function call, you need to use the API key from the URL first to establish a connection. I have confirmed in my testing that if you use other API keys in the function call, it will not return anything.
=placesAPI("Golf Course","51.4977836","-0.1522502","20000","AIzaSyBtepY6mCTkHr3m4UCacxSkePkli5yEbCM",20)
After that, it should return the same output below. Same with what we see when visiting the url manually.
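Note that the sample above ignores its arguments and always fetches the hardcoded URL. If you want the cell arguments to actually drive the search, a minimal sketch could look like this (assuming the Nearby Search parameters keyword, location, radius, and key, and reading place_id and the coordinates from their documented positions in the response):

function placeSearch(keyword, latitude, longitude, radius, api_key) {
  // Build the Nearby Search URL from the function arguments instead of hardcoding it.
  var url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
    + '?keyword=' + encodeURIComponent(keyword)
    + '&location=' + latitude + ',' + longitude
    + '&radius=' + radius
    + '&key=' + api_key;
  var payload = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  var output = [["Name", "Place ID", "Latitude", "Longitude", "Types"]];
  for (var x = 0; x < payload.results.length; x++) {
    var r = payload.results[x];
    // place_id is a top-level field of each result; coordinates live under geometry.location.
    output.push([r.name, r.place_id, r.geometry.location.lat, r.geometry.location.lng, r.types.join(",")]);
  }
  return output;
}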
I have built a simple custom function in Apps Script using UrlFetchApp to get the follower count for TikTok accounts.
function tiktok_fans() {
  var raw_data = new RegExp(/("followerCount":)([0-9]+)/g);
  var handle = '@charlidamelio';
  var web_content = UrlFetchApp.fetch('https://www.tiktok.com/' + handle + '?lang=en').getContentText();
  var match_text = raw_data.exec(web_content);
  var result = match_text[2];
  Logger.log(result);
  return result;
}
The Log comes back with the correct number for followers.
However, when I change the code to:
function tiktok_fans(handle) {
  var raw_data = new RegExp(/("followerCount":)([0-9]+)/g);
  //var handle = '@charlidamelio';
  var web_content = UrlFetchApp.fetch('https://www.tiktok.com/' + handle + '?lang=en').getContentText();
  var match_text = raw_data.exec(web_content);
  var result = match_text[2];
  Logger.log(result);
  return result;
}
and use it in a spreadsheet, for example =tiktok_fans(A1) where A1 contains @charlidamelio, I get an #ERROR response in the cell:
TypeError: Cannot read property '2' of null (line 6).
Why does it work in the logs but not in the spreadsheet?
--additional info--
Still getting the same error after testing @Tanaike's answer below: "TypeError: Cannot read property '2' of null (line 6)."
I have mapped it out manually to see the error: each time the code below runs, a different log returns "null". I believe this has to do with the ContentText size or caching. I have tried utilising Utilities.sleep() in between the fetches with no luck; I still get nulls.
code
var raw_data = new RegExp(/("followerCount":)([0-9]+)/g);
//tiktok urls
var qld = UrlFetchApp.fetch('https://www.tiktok.com/@thisisqueensland?lang=en').getContentText();
var nsw = UrlFetchApp.fetch('https://www.tiktok.com/@visitnsw?lang=en').getContentText();
var syd = UrlFetchApp.fetch('https://www.tiktok.com/@sydney?lang=en').getContentText();
var tas = UrlFetchApp.fetch('https://www.tiktok.com/@tasmania?lang=en').getContentText();
var nt = UrlFetchApp.fetch('https://www.tiktok.com/@ntaustralia?lang=en').getContentText();
var nz = UrlFetchApp.fetch('https://www.tiktok.com/@purenz?lang=en').getContentText();
var aus = UrlFetchApp.fetch('https://www.tiktok.com/@australia?lang=en').getContentText();
var vic = UrlFetchApp.fetch('https://www.tiktok.com/@visitmelbourne?lang=en').getContentText();
//find followers with regex
var match_qld = raw_data.exec(qld);
var match_nsw = raw_data.exec(nsw);
var match_syd = raw_data.exec(syd);
var match_tas = raw_data.exec(tas);
var match_nt = raw_data.exec(nt);
var match_nz = raw_data.exec(nz);
var match_aus = raw_data.exec(aus);
var match_vic = raw_data.exec(vic);
Logger.log(match_qld);
Logger.log(match_nsw);
Logger.log(match_syd);
Logger.log(match_tas);
Logger.log(match_nt);
Logger.log(match_nz);
Logger.log(match_aus);
Logger.log(match_vic);
Issue:
From your situation, I remembered that a request from UrlFetchApp run as a custom function is different from a request from UrlFetchApp run in the script editor. So I thought that the reason for your issue might be related to this thread: https://stackoverflow.com/a/63024816. Your situation seems to be the opposite of that thread, but it is considered that this issue is due to the specification of the site.
In order to check this difference, I checked the file size of the retrieved HTML data.
The file size of HTML data retrieved by UrlFetchApp executing with the script editor is 518k bytes.
The file size of HTML data retrieved by UrlFetchApp executing with the custom function is 9k bytes.
It seems that a request from UrlFetchApp executed as a custom function is the same as a request from UrlFetchApp executed from Web Apps; the 9k bytes of data are retrieved in that case.
From the above result, it is found that the retrieved HTML differs between the script editor and the custom function. Namely, the HTML data retrieved by the custom function doesn't include text matching the regex of ("followerCount":)([0-9]+). By this, such an error occurs. I thought that this might be the reason for your issue.
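A quick way to see this difference yourself is to log the size of the fetched HTML in both contexts. A minimal diagnostic sketch (the handle is the one from the question):

function checkFetchSize() {
  // Run this once from the script editor and once as a custom function (=checkFetchSize())
  // and compare the logged sizes (about 518k vs 9k bytes in the test above).
  var web_content = UrlFetchApp.fetch('https://www.tiktok.com/@charlidamelio?lang=en').getContentText();
  Logger.log(web_content.length);
  return web_content.length;
}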
Workaround:
When I tested your situation with Web Apps and with triggers, the same issue occurred. So, at the current stage, I think the methods for automatically executing the script cannot be used. As a workaround, how about using a button or a custom menu? When the script is run from a button or a custom menu, it works; this execution path seems to behave the same as the script editor.
The sample script is as follows.
Sample script:
Before you run the script, please set range. For example, please assign this function to a button on the Spreadsheet; when you click the button, the script is run. This sample supposes that values like @charlidamelio are put in column "A".
function sample() {
  var range = "A2:A10"; // Please set the range of "handle".
  var raw_data = new RegExp(/("followerCount":)([0-9]+)/g);
  var sheet = SpreadsheetApp.getActiveSheet();
  var r = sheet.getRange(range);
  var values = r.getValues();
  var res = values.map(([handle]) => {
    if (handle != "") {
      var web_content = UrlFetchApp.fetch('https://www.tiktok.com/' + handle + '?lang=en').getContentText();
      var match_text = raw_data.exec(web_content);
      return [match_text[2]];
    }
    return [""];
  });
  r.offset(0, 1).setValues(res);
}
When this script is run, the values are retrieved from the URL and put to the column "B".
Note:
This is a simple script. So please modify it for your actual situation.
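If you prefer a custom menu to a button, a minimal hook could look like this (a sketch; the menu and item titles are arbitrary):

function onOpen() {
  // Adds a menu that runs the sample() function above.
  SpreadsheetApp.getUi()
    .createMenu('TikTok')
    .addItem('Update follower counts', 'sample')
    .addToUi();
}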
Reference:
Related thread.
UrlFetchApp request fails in Menu Functions but not in Custom Functions (connecting to external REST API)
Added:
About the following additional question,
whilst this works for 1 TikTok handle, when trying to run a list of multiple it fails each time, with the error TypeError: Cannot read property '2' of null. After doing some investigating and manually mapping out 8 handles, I can see that each time it runs, it returns "null" for one or more of the web_content variables. Is there a way to slow the script down/run each UrlFetchApp one at a time to ensure each returns content?
I've tried this and am still getting an error; I have tried up to 10000ms. I've added some more detail to the original question; hope this makes sense as to the error. The nulls appear in a different log each time, hence why I think it's a timing or cache issue.
In this case, how about the following sample script?
Sample script:
In this sample script, when the value cannot be retrieved from the URL, the fetch is retried. This sample retries up to 2 times; when the value still cannot be retrieved after 2 retries, an empty value is returned.
function sample() {
  var range = "A2:A10"; // Please set the range of "handle".
  var raw_data = new RegExp(/("followerCount":)([0-9]+)/g);
  var sheet = SpreadsheetApp.getActiveSheet();
  var r = sheet.getRange(range);
  var values = r.getValues();
  var res = values.map(([handle]) => {
    if (handle != "") {
      var web_content = UrlFetchApp.fetch('https://www.tiktok.com/' + handle + '?lang=en').getContentText();
      var match_text = raw_data.exec(web_content);
      if (!match_text || match_text.length != 3) {
        var retry = 2; // Number of retries.
        for (var i = 0; i < retry; i++) {
          Utilities.sleep(3000);
          web_content = UrlFetchApp.fetch('https://www.tiktok.com/' + handle + '?lang=en').getContentText();
          match_text = raw_data.exec(web_content);
          if (match_text && match_text.length == 3) break; // stop retrying once a match is found
        }
      }
      return [match_text && match_text.length == 3 ? match_text[2] : ""];
    }
    return [""];
  });
  r.offset(0, 1).setValues(res);
}
Please adjust the value of retry and Utilities.sleep(3000).
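If you find yourself tuning those numbers often, the retry logic could be pulled into a small helper. A sketch under the same assumptions (the helper name and parameters are mine):

// Hypothetical helper: fetch a URL and apply a regex, retrying on failure.
function fetchAndMatch(url, regex, retries, waitMs) {
  for (var attempt = 0; attempt <= retries; attempt++) {
    if (attempt > 0) Utilities.sleep(waitMs);
    var content = UrlFetchApp.fetch(url).getContentText();
    regex.lastIndex = 0; // reset the /g regex state between calls
    var m = regex.exec(content);
    if (m && m.length == 3) return m[2];
  }
  return ""; // give up after the retries are exhausted
}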
This works for me as a Custom Function:
function MYFUNK(n = 2) {
  const url = 'my website url';
  const re = new RegExp(`<p id="un${n}.*\/p>`, 'g');
  const r = UrlFetchApp.fetch(url).getContentText();
  const v = r.match(re);
  Logger.log(v);
  return v;
}
I used my own website, which has several paragraphs with ids from un1 to un7, and I'm taking the value of A1 as the only parameter. It returns the correct string each time I change it.
I'm attempting to scrape options pricing data from Yahoo Finance in Google Sheets. Although I'm able to pull the options chain just fine, i.e.
=IMPORTHTML("https://finance.yahoo.com/quote/TCOM/options?date=1610668800","table",2)
I find that it's returning results that don't completely match what's actually shown on Yahoo Finance. Specifically, the scraped results are incomplete: they're missing some strikes. For instance, the first 5 rows of the chart may match, but then it will start returning only every other strike (i.e. skipping every other strike).
Why would IMPORTHTML be returning "abbreviated" results, which don't match what's actually shown on the page? And more importantly, is there some way to scrape complete data (i.e. that doesn't skip some portion of the available strikes)?
In Yahoo Finance, all the data is available in a big JSON called root.App.main. So to get the complete set of data, proceed as follows:
var url = 'https://finance.yahoo.com/quote/TCOM/options?date=1610668800'; // the page you are scraping
var source = UrlFetchApp.fetch(url).getContentText();
var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}';
var data = JSON.parse(jsonString);
You can then choose to fetch the information you need. Take a copy of this example: https://docs.google.com/spreadsheets/d/1sTA71PhpxI_QdGKXVAtb0Rc3cmvPLgzvXKXXTmiec7k/copy
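If you are unsure which path holds the data you want, a simple way to explore the parsed object is to log its keys one level at a time. A minimal sketch (the function name is mine; the URL is the one from the question):

function logYahooRootKeys() {
  var url = 'https://finance.yahoo.com/quote/TCOM/options?date=1610668800';
  var source = UrlFetchApp.fetch(url).getContentText();
  var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}';
  var data = JSON.parse(jsonString);
  // Log the top-level keys, then drill down level by level to find the path you need.
  Logger.log(Object.keys(data));
}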
edit
If you want to get a full list of the available data, you can retrieve it with this simple script:
// mike.steelson
let result = [];

function getAllDataJSON(url = 'https://finance.yahoo.com/quote/TCOM/options?date=1610668800') {
  var source = UrlFetchApp.fetch(url).getContentText();
  var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}';
  var data = JSON.parse(jsonString);
  getAllData(eval(data), 'data');
  var sh = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  sh.getRange(1, 1, result.length, result[0].length).setValues(result);
}

function getAllData(obj, id) {
  const regex = new RegExp('[^0-9]+');
  for (let p in obj) {
    var newid = (regex.test(p)) ? id + '["' + p + '"]' : id + '[' + p + ']';
    if (obj[p] != null) {
      if (typeof obj[p] != 'object' && typeof obj[p] != 'function') {
        result.push([newid, obj[p]]);
      }
      if (typeof obj[p] == 'object') {
        getAllData(obj[p], newid);
      }
    }
  }
}
Here's a simpler way to get the last market price of a given option. Add this function to your Google Sheets Script Editor.
function OPTION(ticker) {
  var ticker = ticker + "";
  var URL = "https://finance.yahoo.com/quote/" + ticker; // UrlFetchApp needs the scheme
  var html = UrlFetchApp.fetch(URL).getContentText();
  // Count the occurrences of "regularMarketPrice" in the page.
  var count = (html.match(/regularMarketPrice/g) || []).length;
  var query = "regularMarketPrice";
  var loc = 0;
  // Walk forward to the (count - 2)th occurrence.
  var n = parseInt(count) - 2;
  for (var i = 0; i < n; i++) {
    loc = html.indexOf(query, loc + 1);
  }
  // Skip past 'regularMarketPrice":{"raw":' (9 characters after the query) and read up to the next comma.
  var value = html.substring(loc + query.length + 9, html.indexOf(",", loc + query.length + 9));
  return value * 100;
}
In your Google Sheet, input the Yahoo Finance option ticker like below:
=OPTION("AAPL210430C00060000")
I believe your goal is as follows.
You want to retrieve the complete table from the URL of https://finance.yahoo.com/quote/TCOM/options?date=1610668800, and want to put it to the Spreadsheet.
Issue and workaround:
I could replicate your issue. When I saw the HTML data, unfortunately, I couldn't find a difference in the HTML between the rows that are shown and the rows that are not. I could also confirm that the complete table is included in the HTML data. By the way, when I tested it using =IMPORTXML(A1,"//section[2]//tr"), the same result as IMPORTHTML occurs. So I thought that in this case, IMPORTHTML and IMPORTXML might not be able to retrieve the complete table.
So, in this answer, as a workaround, I would like to propose putting in the complete table by parsing it with the Sheets API. In this case, Google Apps Script is used. By this, I could confirm that the complete table can be retrieved by parsing the HTML table with the Sheets API.
Sample script:
Please copy and paste the following script into the script editor of the Spreadsheet, and please enable the Sheets API under Advanced Google services. Then run the function myFunction in the script editor. The retrieved table is put on the sheet named by sheetName.
function myFunction() {
  // Please set the following variables.
  const url = "https://finance.yahoo.com/quote/TCOM/options?date=1610668800";
  const sheetName = "Sheet1"; // Please set the destination sheet name.
  const sessionNumber = 2; // Please set the number of the section. In this case, the table of the 2nd section is retrieved.

  const html = UrlFetchApp.fetch(url).getContentText();
  const section = [...html.matchAll(/<section[\s\S\w]+?<\/section>/g)];
  if (section.length >= sessionNumber) {
    if (section[sessionNumber - 1].length == 1) {
      const table = section[sessionNumber - 1][0].match(/<table[\s\S\w]+?<\/table>/);
      if (table) {
        const ss = SpreadsheetApp.getActiveSpreadsheet();
        const body = {requests: [{pasteData: {html: true, data: table[0], coordinate: {sheetId: ss.getSheetByName(sheetName).getSheetId()}}}]};
        Sheets.Spreadsheets.batchUpdate(body, ss.getId());
      }
    } else {
      throw new Error("No table.");
    }
  } else {
    throw new Error("No table.");
  }
}
const sessionNumber = 2; corresponds to the 2 in =IMPORTHTML("https://finance.yahoo.com/quote/TCOM/options?date=1610668800","table",2).
References:
Method: spreadsheets.batchUpdate
PasteDataRequest
I'm using ImportJSON to parse JSONSchema documents and load them into a Google Sheet.
I have JSON documents with paths as in the snip below.
I want to output the names of properties in one column and the type in another.
I wanted to see if someone has done this already before I start hacking about with the parseJSON or defaultTransform functions of ImportJSON.
I added an example GSheet here. It shows the source, the currently parsed output, and what I need in terms of required output:
/data/schema/properties/plan_id/type
/data/schema/properties/plan_id/maxLength
/data/schema/properties/plan_name/type
/data/schema/properties/plan_name/maxLength
/data/schema/properties/type/type
/data/schema/properties/type/maxLength
/data/schema/properties/quantity_ranges/type
/data/schema/properties/quantity_ranges/maximum
/data/schema/properties/quantity_ranges/minimum
/data/schema/properties/pricing_option/type
/data/schema/properties/pricing_option/maxLength
/data/schema/properties/currency/type
/data/schema/properties/currency/enum
/data/schema/properties/value/type
/data/schema/properties/value/maximum
/data/schema/properties/value/minimum
Thanks in advance!
You want to convert the source JSON ("From" in the question's screenshots) into the desired table layout ("To"), and you want to achieve this using Google Apps Script.
I understood it like the above. If my understanding is correct, how about this answer? Please think of this as just one of several possible answers.
Sample script:
When you use this sample script, please put =parseObject("SourceJSON!A1") in a cell of your shared Spreadsheet.
function parseObject(range) {
  var range = SpreadsheetApp.getActiveSpreadsheet().getRange(range);
  var value = range.getValue();
  var object = JSON.parse(value);
  var res = [];
  var headers = ["type", ["maxLength", "maximum"], "minimum", "enum"];
  // var headers = ["type", "maxLength", "maximum", "minimum", "enum"];
  for (var i in object.data.schema.properties) {
    var obj = object.data.schema.properties[i];
    for (var j = 0; j < headers.length; j++) {
      var temp = [object.data.id, object.data.version];
      if (Array.isArray(headers[j])) {
        for (var k = 0; k < headers[j].length; k++) {
          if (obj[headers[j][k]]) res.push(temp.concat([i, "", obj[headers[j][k]], "", ""]));
        }
      } else {
        if (obj[headers[j]]) {
          var ar = [i, "", "", "", ""];
          ar.splice(j + 1, 1, Array.isArray(obj[headers[j]]) ? obj[headers[j]].join(",") : obj[headers[j]]);
          res.push(temp.concat(ar));
        }
      }
    }
  }
  return res;
}
Note:
This sample script retrieves the data from the Spreadsheet.
In your DesiredOutput, the values of "maxLength" and "maximum" are put in the same column, and the sample script above produces the same result. If you want to separate the values of "maxLength" and "maximum", please modify var headers = ["type", ["maxLength", "maximum"], "minimum", "enum"]; to var headers = ["type", "maxLength", "maximum", "minimum", "enum"]; (the commented-out line in the script).
This sample script is for the value in your shared Spreadsheet. When you use it for data with a different structure, an error might occur and/or a result you don't want might be returned. Please be careful about this.
I have a project where I have scanned 10,000 family pictures from as far back as the 1900s, and I am organizing them in Google Photos. I have a spreadsheet where I was keeping track of the proper dates and captions for the entire collection. I would organize a few at a time, but then I recently found out about the Google Photos API.
I would like to use methods like mediaItems.list or mediaItems.search to get the data from my photos into the spreadsheet to manage.
The output from these examples is exactly what I'm looking for, and I would want to load it into a spreadsheet.
It would be super awesome if there was a way to update back from the sheet again as well.
I found this article but the code provided does not work for me.
I have this function now in my sheet
function photoAPI() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var albums_sh = ss.getSheetByName("albums") || ss.insertSheet("albums", ss.getSheets().length);
  albums_sh.clear();
  var narray = [];
  var api = "https://photoslibrary.googleapis.com/v1/albums";
  var headers = { "Authorization": "Bearer " + ScriptApp.getOAuthToken() };
  var options = { "headers": headers, "method": "GET", "muteHttpExceptions": true };
  var param = "", nexttoken;
  do {
    if (nexttoken)
      param = "?pageToken=" + nexttoken;
    var response = UrlFetchApp.fetch(api + param, options);
    var json = JSON.parse(response.getContentText());
    json.albums.forEach(function (album) {
      var data = [
        album.title,
        album.mediaItemsCount,
        album.productUrl
      ];
      narray.push(data);
    });
    nexttoken = json.nextPageToken;
  } while (nexttoken);
  albums_sh.getRange(1, 1, narray.length, narray[0].length).setValues(narray);
}
When I run it in debug mode, I get the following error
({error:{code:403, message:"Request had insufficient authentication scopes.", status:"PERMISSION_DENIED"}})
I know this means I need to authenticate but don't know how to make that happen.
I have an API key and a secret from the Google photos API pages.
Edit
I used the links from @Tanaike to figure out how to add scopes to my project.
I added these three.
spreadsheets.currentonly
photoslibrary
script.external_request
Now when I run in debug mode, I get a 403 error indicating I need to set up my API. A summary of the error is below:
error:
  code: 403
  message: "Photos Library API has not been used in project 130931490217 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/photoslibrary.googleapis.com/overview?project=130931490217"
  details: type.googleapis.com/google.rpc.Help (Google developers console API activation)
  status: "PERMISSION_DENIED"
When I try to go to the listed URL though, I just get a message that says "Failed to load."
I got my code working with the help of @Tanaike in my comments above. I had two issues.
1) I needed to specify the oauthScopes in appsscript.json, which is hidden by default in Apps Script. It can be revealed by going to the menu and selecting View > Show Manifest File.
2) I was using a default GCP project which did not have authorization to use the photos API and could not be enabled. I needed to switch to a standard GCP project which I had created earlier and had enabled the photos API.
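For reference, the relevant part of the manifest looks roughly like this (a sketch; the timeZone and other fields will differ in your project):

{
  "timeZone": "America/New_York",
  "exceptionLogging": "STACKDRIVER",
  "oauthScopes": [
    "https://www.googleapis.com/auth/spreadsheets.currentonly",
    "https://www.googleapis.com/auth/photoslibrary",
    "https://www.googleapis.com/auth/script.external_request"
  ]
}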
Here is my original posted function with additional comments after I got it working:
function photoAPI_ListAlbums() {
// Modified from code by Stackoverflow user Frç Ju at https://stackoverflow.com/questions/54063937/0auth2-problem-to-get-my-google-photos-libraries-in-a-google-sheet-of-mine
// which was originally Modified from http://ctrlq.org/code/20068-blogger-api-with-google-apps-script
/*
This function retrieves all albums from your personal google photos account and lists each one with the name of album, count of photos, and URL in a new sheet.
Requires Oauth scopes. Add the below line to appsscript.json
"oauthScopes": ["https://www.googleapis.com/auth/spreadsheets.currentonly", "https://www.googleapis.com/auth/photoslibrary", "https://www.googleapis.com/auth/photoslibrary.readonly", "https://www.googleapis.com/auth/script.external_request"]
Also requires a standard GCP project with the appropriate Photo APIs enabled.
https://developers.google.com/apps-script/guides/cloud-platform-projects
*/
//Get the spreadsheet object
var ss = SpreadsheetApp.getActiveSpreadsheet();
//Check for presence of target sheet, if it does not exist, create one.
var albums_sh = ss.getSheetByName("albums") || ss.insertSheet("albums", ss.getSheets().length);
//Make sure the target sheet is empty
albums_sh.clear();
var narray = [];
//Build the request string. Default page size is 20, max 50. set to max for speed.
var api = "https://photoslibrary.googleapis.com/v1/albums?pageSize=50";
var headers = { "Authorization": "Bearer " + ScriptApp.getOAuthToken() };
var options = { "headers": headers, "method" : "GET", "muteHttpExceptions": true };
var param= "", nexttoken;
//Make the first row a title row
var data = [
"Title",
"Item Count",
"ID",
"URL"
];
narray.push(data);
//Loop through JSON results until a nextPageToken is not returned indicating end of data
do {
//If there is a nextpagetoken, add it to the end of the request string
if (nexttoken)
param = "&pageToken=" + nexttoken;
//Get data and load it into a JSON object
var response = UrlFetchApp.fetch(api + param, options);
var json = JSON.parse(response.getContentText());
//Loop through the JSON object adding desired data to the spreadsheet.
json.albums.forEach(function (album) {
var data = [
"'"+album.title, //The prepended apostrophe makes albums with a name such as "June 2007" to show up as that text rather than parse as a date in the sheet.
album.mediaItemsCount,
album.id,
album.productUrl
];
narray.push(data);
});
//Get the nextPageToken
nexttoken = json.nextPageToken;
//Continue if the nextPageToken is not null
} while (nexttoken);
//Save all the data to the spreadsheet.
albums_sh.getRange(1, 1, narray.length, narray[0].length).setValues(narray);
}
And here is another function which I created in the same style to pull photo metadata directly. This is what I was originally trying to accomplish.
function photoAPI_ListPhotos() {
//Modified from above function photoAPI_ListAlbums
/*
This function retrieves all photos from your personal google photos account and lists each one with the Filename, Caption, Create time (formatted for Sheet), Width, Height, and URL in a new sheet.
it will not include archived photos, which can be confusing: if you happen to have a large chunk of archived photos, some pages may return only a next page token with no media items.
Requires Oauth scopes. Add the below line to appsscript.json
"oauthScopes": ["https://www.googleapis.com/auth/spreadsheets.currentonly", "https://www.googleapis.com/auth/photoslibrary", "https://www.googleapis.com/auth/photoslibrary.readonly", "https://www.googleapis.com/auth/script.external_request"]
Also requires a standard GCP project with the appropriate Photo APIs enabled.
https://developers.google.com/apps-script/guides/cloud-platform-projects
*/
//Get the spreadsheet object
var ss = SpreadsheetApp.getActiveSpreadsheet();
//Check for presence of target sheet, if it does not exist, create one.
var photos_sh = ss.getSheetByName("photos") || ss.insertSheet("photos", ss.getSheets().length);
//Make sure the target sheet is empty
photos_sh.clear();
var narray = [];
//Build the request string. Max page size is 100. set to max for speed.
var api = "https://photoslibrary.googleapis.com/v1/mediaItems?pageSize=100";
var headers = { "Authorization": "Bearer " + ScriptApp.getOAuthToken() };
var options = { "headers": headers, "method" : "GET", "muteHttpExceptions": true };
//This variable is used if you want to resume the scrape at some page other than the start. This is needed if you have more than 40,000 photos.
//Uncomment the line below and add the next page token for where you want to start in the quotes.
//var nexttoken="";
var param= "", nexttoken;
//Start counting how many pages have been processed.
var pagecount=0;
//Make the first row a title row
var data = [
"Filename",
"description",
"Create Time",
"Width",
"Height",
"ID",
"URL",
"NextPage"
];
narray.push(data);
//Loop through JSON results until a nextPageToken is not returned indicating end of data
do {
//If there is a nextpagetoken, add it to the end of the request string
if (nexttoken)
param = "&pageToken=" + nexttoken;
//Get data and load it into a JSON object
var response = UrlFetchApp.fetch(api + param, options);
var json = JSON.parse(response.getContentText());
//Check if there are mediaItems to process.
if (typeof json.mediaItems === 'undefined') {
//If there are no mediaItems, Add a blank line in the sheet with the returned nextpagetoken
//var data = ["","","","","","","",json.nextPageToken];
//narray.push(data);
} else {
//Loop through the JSON object adding desired data to the spreadsheet.
json.mediaItems.forEach(function (MediaItem) {
//Check if the mediaitem has a description (caption) and make that cell blank if it is not present.
if(typeof MediaItem.description === 'undefined') {
var description = "";
} else {
var description = MediaItem.description;
}
//Format the create date as appropriate for spreadsheets.
var d = new Date(MediaItem.mediaMetadata.creationTime);
var data = [
MediaItem.filename,
"'"+description, //The prepended apostrophe makes captions that are dates or numbers save in the sheet as a string.
d,
MediaItem.mediaMetadata.width,
MediaItem.mediaMetadata.height,
MediaItem.id,
MediaItem.productUrl,
json.nextPageToken
];
narray.push(data);
});
}
//Get the nextPageToken
nexttoken = json.nextPageToken;
pagecount++;
//Continue if the nextPageToken is not null
//Also stop if you reach 400 pages processed, this prevents the script from timing out. You will need to resume manually using the nexttoken variable above.
} while (pagecount<400 && nexttoken);
//Continue if the nextPageToken is not null (This is commented out as an alternative and can be used if you have a small enough collection that it will not time out.)
//} while (nexttoken);
//Save all the data to the spreadsheet.
photos_sh.getRange(1, 1, narray.length, narray[0].length).setValues(narray);
}
Because of the limitations of the ListPhotos function and the fact that my library is so enormous, I am still working on a third function to pull photo metadata from all the photos in specific albums. I'll edit this answer once I pull that off.
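In the meantime, here is a minimal sketch of how that third function might start, paging through mediaItems:search with an albumId in the POST body (the function and variable names are mine; it assumes the same OAuth scopes and standard GCP project as above):

function photoAPI_ListPhotosInAlbum(albumId) {
  //Hypothetical sketch: collect filename, description, ID, and URL for one album.
  var api = "https://photoslibrary.googleapis.com/v1/mediaItems:search";
  var headers = { "Authorization": "Bearer " + ScriptApp.getOAuthToken() };
  var narray = [["Filename", "Description", "ID", "URL"]];
  var nexttoken;
  do {
    //mediaItems:search takes the albumId, pageSize (max 100), and pageToken in the request body.
    var body = { albumId: albumId, pageSize: 100 };
    if (nexttoken) body.pageToken = nexttoken;
    var options = {
      method: "post",
      contentType: "application/json",
      headers: headers,
      payload: JSON.stringify(body),
      muteHttpExceptions: true
    };
    var json = JSON.parse(UrlFetchApp.fetch(api, options).getContentText());
    (json.mediaItems || []).forEach(function (MediaItem) {
      narray.push([MediaItem.filename, "'" + (MediaItem.description || ""), MediaItem.id, MediaItem.productUrl]);
    });
    nexttoken = json.nextPageToken;
  } while (nexttoken);
  return narray;
}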
I have created a sheet to keep track of my crypto holdings. I use this IMPORTJSON function I found on YouTube (I have changed the help text for myself):
/**
 * Imports JSON data to your spreadsheet. Ex: IMPORTJSON("https://api.coinmarketcap.com/v2/ticker/1/?convert=EUR","data/quotes/EUR/price")
 * @param url URL of your JSON data as string
 * @param xpath simplified xpath as string
 * @customfunction
 */
function IMPORTJSON(url, xpath) {
  try {
    // /rates/EUR
    var res = UrlFetchApp.fetch(url);
    var content = res.getContentText();
    var json = JSON.parse(content);
    var patharray = xpath.split("/");
    //Logger.log(patharray);
    for (var i = 0; i < patharray.length; i++) {
      json = json[patharray[i]];
    }
    //Logger.log(typeof(json));
    if (typeof(json) === "undefined") {
      return "Node Not Available";
    } else if (typeof(json) === "object") {
      var tempArr = [];
      for (var obj in json) {
        tempArr.push([obj, json[obj]]);
      }
      return tempArr;
    } else if (typeof(json) !== "object") {
      return json;
    }
  } catch (err) {
    return "Error getting data";
  }
}
I use this function to read out an API. This is a piece of my script:
var btc_eur = IMPORTJSON("https://api.coinmarketcap.com/v2/ticker/1/?convert=EUR","data/quotes/EUR/price");
var btc_btc = IMPORTJSON("https://api.coinmarketcap.com/v2/ticker/1/?convert=BTC","data/quotes/BTC/price");
ss.getRange("B2").setValue([btc_eur]);
ss.getRange("H2").setValue([btc_btc]);
var bhc_eur = IMPORTJSON("https://api.coinmarketcap.com/v2/ticker/1831/?convert=EUR","data/quotes/EUR/price");
var bhc_btc = IMPORTJSON("https://api.coinmarketcap.com/v2/ticker/1831/?convert=BTC","data/quotes/BTC/price");
ss.getRange("B3").setValue([bhc_eur]);
ss.getRange("H3").setValue([bhc_btc]);
The last few days I get "Error getting data" errors. When I run the script manually, it works.
I then tried this code I found here:
ImportJson
function IMPORTJSON(url, xpath) {
  var res = UrlFetchApp.fetch(url);
  var content = res.getContentText();
  var json = JSON.parse(content);
  var patharray = xpath.split("/");
  var res = [];
  for (var i in json[patharray[0]]) {
    res.push(json[patharray[0]][i][patharray[1]]);
  }
  return res;
}
But this gives an error: TypeError: Cannot read property "quotes" from null. What am I doing wrong?
The big problem is that your script calls the API at least 4 times per run. When a few users do the same, the Google servers call the API far too many times.
The Coinmarketcap API has limited bandwidth. When a client reaches this limit, the API returns HTTP error 429. Google Apps Script runs on shared Google servers, which means lots of users look like a single client to the Coinmarketcap API.
When the API declines your request, your script fails; the error message corresponds to the assumed error (the xpath can't find the quotes component in an empty variable).
This is ruthless behavior. Please don't ruin the API with mass calls.
You can load the data from the API once and re-use it for each lookup in the data.
I have a similar spreadsheet automatically filled from the Coinmarketcap API; you can copy it for yourself:
Coins spreadsheet
Google Script on GitHub.
This script of mine asks the API strictly once for the whole runtime and reuses the one response for all queries.
Changes to your script
You can also make a few changes in your code to save resources:
Change IMPORTJSON function from this:
function IMPORTJSON(url,xpath){
var res = UrlFetchApp.fetch(url);
var content = res.getContentText();
var json = JSON.parse(content);
...
to this:
function IMPORTJSON(json, xpath) {
...
and the runtime section of the code you can change like this:
var res = UrlFetchApp.fetch("https://api.coinmarketcap.com/v2/ticker/1/?convert=EUR");
var content = res.getContentText();
var json = JSON.parse(content);
var btc_eur = IMPORTJSON(json,"data/quotes/EUR/price");
var btc_btc = IMPORTJSON(json,"data/quotes/BTC/price");
ss.getRange("B2").setValue([btc_eur]);
ss.getRange("H2").setValue([btc_btc]);
...
Main benefit is: the UrlFetchApp.fetch is called only once.
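For completeness, here is a sketch of the reworked IMPORTJSON under this approach; the body simply reuses the path-walking logic of the original function above, minus the fetch:

function IMPORTJSON(json, xpath) {
  try {
    var patharray = xpath.split("/");
    // Walk down the already-parsed object instead of fetching the URL again.
    for (var i = 0; i < patharray.length; i++) {
      json = json[patharray[i]];
    }
    if (typeof(json) === "undefined") {
      return "Node Not Available";
    } else if (typeof(json) === "object") {
      var tempArr = [];
      for (var obj in json) {
        tempArr.push([obj, json[obj]]);
      }
      return tempArr;
    }
    return json;
  } catch (err) {
    return "Error getting data";
  }
}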
Yes, I know, this code does not work 1:1 like yours: it receives prices only for EUR and not for BTC. Fetching the comparison between BTC and BTC is unnecessary anyway, because it is always 1, and the other values you can compute mathematically from the EUR response. Please don't abuse an API for such queries.
As Jakub said, the main issue is that all requests are counted as coming from the same Google server.
One solution which I consider easier is to put a proxy server in the middle. This can be done either by purchasing a server and setting it up (which is quite complex) or by using a service like Proxycrawl, which includes some free requests; after that, unless you run thousands of queries per month, it should cost you less than 1 USD per month.
To do that, you just need to edit one line of the script:
var res = UrlFetchApp.fetch(url);
This line becomes this:
var res = UrlFetchApp.fetch(`https://api.proxycrawl.com/?token=YOUR_TOKEN&url=${encodeURIComponent(url)}`);
Make sure to replace YOUR_TOKEN with your actual service token.
Just this simple change means the requests will never fail, as each request will be sent from a different IP instead of all coming from Google.