I am using the "Discover the Google Analytics Platform" site to generate queries in order to make callouts to GA from inside a Salesforce application. Upon creating a custom report, an API Query URI is generated which presents the data from the report in JSON format.
An example URI looks like the following:
https://www.googleapis.com/analytics/v3/data/ga?ids=[my ga id]&start-date=[start date]&end-date=[end date]&metrics=ga%3Asessions%2Cga%3AsessionDuration&dimensions=ga%3AdaysSinceLastSession%2Cga%3Acampaign%2Cga%3AsourceMedium%2Cga%3AsocialNetwork%2Cga%3Adimension2&filters=ga%3AsessionDuration%3E1&access_token=[my access token]
The issue is that the returned data is limited to a maximum of 1,000 rows, and I am not sure how to get past this limit.
The Google Analytics API has a parameter you can send called max-results. If you add
&max-results=10000
to your request, you will get pages of 10,000 rows. That is the maximum you can set it to; if there are more results, a nextLink is returned in the response that you can use to make further requests for the additional data.
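For illustration, here is a rough sketch of that paging loop in Apps Script style JavaScript (the same pattern translates to an Apex HTTP callout); the function name and the assumption that you already hold a valid access token are mine:

```javascript
// Sketch only: fetch every page of a Core Reporting API v3 query by
// following nextLink. Assumes `baseUrl` already contains ids, dates,
// metrics, dimensions and filters, and that `accessToken` is valid.
function fetchAllRows(baseUrl, accessToken) {
  var rows = [];
  var url = baseUrl + '&max-results=10000';
  while (url) {
    var response = UrlFetchApp.fetch(url, {
      headers: { Authorization: 'Bearer ' + accessToken }
    });
    var data = JSON.parse(response.getContentText());
    if (data.rows) {
      rows = rows.concat(data.rows);   // up to 10,000 rows per page
    }
    url = data.nextLink || null;       // nextLink is present only when more pages exist
  }
  return rows;
}
```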
I have a Google Sheets add-on with custom formulas that fetch data from my API to show in their results.
The problem for many users is that the add-on frequently hits the UrlFetch quota. So I'm trying to use another data source for my formulas, and I've been trying to set up BigQuery for that (I know it is not meant to be used like that).
My approach would be something like this:
When a user executes a formula, I first look in BigQuery to see if the data is already there; if not, I fetch it from the API and then store the result in BigQuery.
I tried a proof of concept where I added a custom function to my add-on using the code sample at https://developers.google.com/apps-script/advanced/bigquery, replaced projectId with my own, and queried a sample table.
When I executed the formula I got this error:
GoogleJsonResponseException: API call to bigquery.jobs.query failed with error: Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
I also tried executing the same code from a function called from the sidebar frontend, where I got this error instead:
User does not have bigquery.jobs.create permission in project
From these errors I'm guessing I have to assign a BigQuery role in IAM to each user, but I have hundreds of them.
Is there any way for any add-on user to access the BigQuery project, or is my whole approach wrong?
How about using the Cache service instead of BigQuery? The Cache service is simple to use, does not require authentication, and also works in a custom function context.
Note these limitations:
The maximum length of a key is 250 characters.
The maximum amount of data that can be stored per key is 100KB.
The maximum expiration time is 21,600 seconds (6 hours), but cached data may be removed before this time if a lot of data is cached.
The cap for cached items is 1,000. If more than 1,000 items are written, the cache stores the 900 items farthest from expiration.
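Within those limits, a minimal sketch of the pattern inside a custom function could look like this (fetchFromApi() is a stand-in for your existing UrlFetch logic, not a real API):

```javascript
// Sketch only: check the script cache before hitting the external API.
function MY_FORMULA(key) {
  var cache = CacheService.getScriptCache();
  var cached = cache.get(key);                 // null if missing or expired
  if (cached !== null) {
    return JSON.parse(cached);
  }
  var result = fetchFromApi(key);              // placeholder for your UrlFetch logic
  // Value must be a string under 100KB; 21600 s (6 h) is the maximum expiration.
  cache.put(key, JSON.stringify(result), 21600);
  return result;
}
```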
I can now retrieve step count data from the Google Fitness REST API:
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate
However, I can't tell which data is reliable (data not generated by user input).
After some research I found there is an originDataSourceId field in the documentation:
https://developers.google.com/fit/rest/v1/reference/users/dataSources/datasets#resource
But the description says:
WARNING: do not rely on this field for anything other than debugging. The value of this field, if it is set at all, is an implementation detail and is not guaranteed to remain consistent.
So I really don't know what to do. How do I filter out step counts that the user entered manually from the Google Fitness REST API?
I found a solution. The originDataSourceId of a dataset is not reliable when you get it via aggregate (you may get data merged from different sources).
So you can get the data source list first.
https://developers.google.com/fit/rest/v1/reference/users/dataSources/list
You can filter the data sources by dataTypeName, device, etc.
Then you can use the dataStreamId of the Data Source as dataSourceId to aggregate data.
https://developers.google.com/fit/rest/v1/reference/users/dataset/aggregate
(aggregateBy)
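To illustrate, here is a sketch of that flow in Apps Script; the OAuth token handling and the check on "user_input" in the stream id are my assumptions about how the manual-entry sources could be filtered out:

```javascript
// Sketch only: list step-count data sources, skip manually entered streams,
// then aggregate using their dataStreamId values as dataSourceId.
function getAggregatedSteps(startMillis, endMillis, accessToken) {
  var headers = { Authorization: 'Bearer ' + accessToken };

  var listUrl = 'https://www.googleapis.com/fitness/v1/users/me/dataSources';
  var list = JSON.parse(UrlFetchApp.fetch(listUrl, { headers: headers }).getContentText());
  var stepSources = (list.dataSource || []).filter(function (s) {
    return s.dataType && s.dataType.name === 'com.google.step_count.delta' &&
           s.dataStreamId.indexOf('user_input') === -1;   // drop manual entries
  });

  var body = {
    aggregateBy: stepSources.map(function (s) {
      return { dataSourceId: s.dataStreamId };
    }),
    bucketByTime: { durationMillis: 86400000 },            // daily buckets
    startTimeMillis: startMillis,
    endTimeMillis: endMillis
  };

  var aggUrl = 'https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate';
  var response = UrlFetchApp.fetch(aggUrl, {
    method: 'post',
    contentType: 'application/json',
    headers: headers,
    payload: JSON.stringify(body)
  });
  return JSON.parse(response.getContentText());
}
```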
I need to retrieve data from an API source that has a massive number of entries (1800+). The problem is that the source has no pagination or way to group the entries. We then save the entries as posts on the site, and the job will run daily through a cron job.
We are using curl_init() to retrieve the data from the API source, but we keep getting a 503 error from the request timing out. When it works, it retrieves the data as JSON, saving the important info as post metadata and the rest as JSON.
Is there a more reliable way to retrieve the data? On other sites I have worked on, we have been able to programmatically run through an API page by page in the backend.
You might try saving the JSON to a file first, then running the post creation on the JSON in the file vs. the direct cURL connection. I ran into similar issues in the past, even with an API that had pagination.
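A rough sketch of that split, written in Node-style JavaScript for consistency with the other examples here (in a PHP/WordPress setup the same two steps would use curl plus file_put_contents and a separate import run by the cron job); the URL and createPost() are placeholders:

```javascript
// Sketch only: step 1 downloads the feed once and stores it; step 2 creates
// posts from the saved file instead of holding the API connection open.
const fs = require('fs');

async function downloadFeed(url, file) {
  const res = await fetch(url);                      // Node 18+ global fetch
  if (!res.ok) throw new Error('API returned ' + res.status);
  fs.writeFileSync(file, await res.text());
}

function importFeed(file) {
  const entries = JSON.parse(fs.readFileSync(file, 'utf8'));
  entries.forEach(function (entry) {
    createPost(entry);                               // placeholder for your post-creation logic
  });
}
```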
I have an ASP.NET Web API that gets a list of data from the database with a very heavy SQL query (a stored procedure) and then serializes it to JSON. The result can sometimes be more than 100,000 rows, which exceeds the 4MB maximum size of the HTTP JSON response. I've been trying to use pagination to limit the result size, but that hurts performance because every click to the next page triggers the heavy SQL command again. If I don't use pagination, the result is sometimes larger than 4MB and my client-side grid won't render properly, and I have no way to check the JSON data size before the Web API sends it back to the client. So my questions are:
Is there any way to check the data size in ASP.NET Web API before sending it back to the client? For example, if it's more than 4MB, send a response saying "please modify your date range to have less data"? Would this be a good idea in terms of application design?
Is there any way to save the entire data result in a cache or somewhere in ASP.NET Web API so that when the user paginates, the result comes from the cache instead of being fetched from the database again?
Is there any way to store the entire data result in a cache or a temp file on the client side (using Angular 5) so that when the user paginates, no additional HTTP call is made to the Web API?
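For that third option, roughly what I have in mind on the client is something like this plain JavaScript sketch (the endpoint, page size, and helper names are hypothetical):

```javascript
// Sketch only: fetch the full result once, keep it in memory, and slice
// pages locally so paging never triggers another HTTP call.
var fullResult = null;

function loadAll(http) {                       // http = Angular HttpClient or similar
  return http.get('/api/report').toPromise().then(function (rows) {
    fullResult = rows;
    return rows;
  });
}

function getPage(pageIndex, pageSize) {
  var start = pageIndex * pageSize;
  return fullResult.slice(start, start + pageSize);
}
```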
I would be more than happy to hear about any experience or suggestions! Thank you very much!
Does the Google Data Studio Community Connector support pagination?
I work with an external data service. The service returns data page by page; it requires start and next parameters and is limited to 2 requests per second. Can I override a method like getData, or extend the request argument, to implement this feature?
If not, is there a best practice for getting data of this kind?
Community Connectors do not support pagination for web APIs at present.
The best practice would depend on your use case. If you want to get the full dataset for the user, you can make multiple UrlFetch calls to retrieve it, merge the pages, and return the merged set as the getData() response. It might also make sense to cache this result to avoid making a large number of requests in a short period. You can cache using the Apps Script Cache service, a Sheet, or even BigQuery. Keep in mind that Apps Script has a 6 min/execution limit.
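A rough sketch of that merge-and-cache pattern (the endpoint, parameters, and cache key are hypothetical):

```javascript
// Sketch only: pull every page from the paged service, merge the rows,
// and cache the merged set so repeated getData() calls stay cheap.
function fetchFullDataset() {
  var cache = CacheService.getScriptCache();
  var cached = cache.get('full-dataset');
  if (cached !== null) {
    return JSON.parse(cached);
  }

  var rows = [];
  var start = 0;
  while (start !== null) {
    // Hypothetical paged endpoint using the service's start/next parameters.
    var url = 'https://api.example.com/data?start=' + start;
    var page = JSON.parse(UrlFetchApp.fetch(url).getContentText());
    rows = rows.concat(page.items);
    start = (page.next !== undefined) ? page.next : null;
    Utilities.sleep(500);                      // respect the 2 requests/second limit
  }

  // Cached values must be strings under 100KB; larger sets need a Sheet or BigQuery.
  cache.put('full-dataset', JSON.stringify(rows), 21600);
  return rows;
}
```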
However, if you want to return only specific pages, the only way to configure that would be through getConfig(), since configParams are passed with the getData() request. An example use case would be returning only the first n pages, where n is selected by the user in the config.
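For example, getConfig() could expose the page count like this (plain-object connector config; the parameter name is mine), and getData() would then read request.configParams.pages:

```javascript
// Sketch only: let the user pick how many pages getData() should fetch.
function getConfig(request) {
  return {
    configParams: [
      {
        type: 'TEXTINPUT',
        name: 'pages',
        displayName: 'Number of pages to fetch',
        helpText: 'getData() stops after this many pages of the source API.'
      }
    ]
  };
}
```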