How to get latest Google Fit params in one API call

I need to fetch multiple params from Google Fit, like "Heart Rate", "Body Temperature", etc., but it looks like there is no single API to fetch all of them.
There is one API, dataset aggregate, shown below. We can pass multiple params to it (like heart rate and body temperature), but it returns only aggregate values such as "Max Heart Rate", "Min Heart Rate", "Average Heart Rate", and "Max Body Temperature" for the specified time frame, not the latest heart rate and body temperature readings themselves.
https://www.googleapis.com/fitness/v1/users/userId/dataset:aggregate
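For context, the aggregate request I am making looks roughly like this (a Python sketch using the requests library; the access token is a placeholder and userId can just be "me"):

import requests

ACCESS_TOKEN = "ya29...."  # placeholder OAuth 2.0 token with the fitness scopes

# Several data types can be passed in aggregateBy, but the response only
# contains aggregated buckets (min/max/average), not the latest raw readings.
body = {
    "aggregateBy": [
        {"dataTypeName": "com.google.heart_rate.bpm"},
        {"dataTypeName": "com.google.body.temperature"},
    ],
    "bucketByTime": {"durationMillis": 86400000},  # one bucket per day
    "startTimeMillis": 1600000000000,
    "endTimeMillis": 1600086400000,
}

resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=body,
)
print(resp.json())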
Is there any way I can fetch the latest values for multiple params? The API below fetches the latest value, but only one param can be passed at a time.
https://www.googleapis.com/fitness/v1/users/userId/dataSources/dataSourceId
Kindly let me know if there's any way I can fetch multiple params through a single REST API call.
Thanks in advance.

Related

How to get top X number of items according to a property in a paginated API?

Please note the question is purposely vague.
Assume I have an API that returns paginated data of books, 100 book objects per page (the total page count is present in the response headers, e.g. 'total-pages'). The response body is a list of JSON objects:
[
{},
{},
.
.
.
]
Assume the book object has all the necessary data such as title, description, number of sales, and so on. How do I return the top-selling n books (top-selling => highest number of sales)?
You cannot provide any parameters such as
GET endpoint/orderby=popularity&order=desc
to this endpoint and you cannot make any database query optimizations. How do we solve this problem with just pure code?
I'll be grateful for any ideas or suggestions.
Since the endpoint is limited in the parameters it accepts, your only option is to fetch all the pages (by looping continuously until the API errors) and then sort them in your program itself.
If the above solution is too slow, you can optimize by relaxing your requirement from "return the top-selling n books" to "return a few top-selling books that are in the top X%".
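A rough Python sketch of the fetch-everything-then-sort approach (the endpoint URL, the page parameter, and the "sales" field name are assumptions, since the question is deliberately vague):

import heapq
import requests

BASE_URL = "https://example.com/api/books"  # hypothetical paginated endpoint

def top_selling(n):
    """Fetch every page, then keep the n books with the most sales."""
    books = []
    page = 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page})
        if resp.status_code != 200:   # stop once the API errors out
            break
        batch = resp.json()
        if not batch:                 # or once a page comes back empty
            break
        books.extend(batch)
        page += 1
    # heapq.nlargest avoids fully sorting the list when n is small
    return heapq.nlargest(n, books, key=lambda b: b["sales"])

If the API really does expose a total-pages header, you can read it from the first response and loop a known number of times instead of waiting for an error.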

How do I get the hourly forecast from api.weather.gov for Texas stations or zones or offices or gridpoints or anything else?

I am trying to get the hourly forecast from api.weather.gov
I have one gridpoint working for Indiana.
https://api.weather.gov/gridpoints/IND/56,65/forecast/hourly
I was given this information and it is valid. What I need is the hourly forecast for the state of Texas, for every station, zone, office, gridpoint, or anything else.
How do I do that?
As long as you have the latitude/longitude of the location you want the forecast for, then:
Get the point metadata from https://api.weather.gov/points/{lat},{lon}
Follow the link in the forecastHourly property to get the forecast
This is preferable to constructing the URL as in the other answer, as your program won't break if the URL scheme changes in the future.
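A minimal sketch of that flow in Python (the coordinates are just an example point in Texas, and the User-Agent string is a placeholder; api.weather.gov asks clients to identify themselves):

import requests

HEADERS = {"User-Agent": "my-weather-app (contact@example.com)"}  # identify your client

lat, lon = 31.76, -99.33  # example point in Texas

# Step 1: point metadata for the coordinates
point = requests.get(f"https://api.weather.gov/points/{lat},{lon}", headers=HEADERS)
point.raise_for_status()

# Step 2: follow the forecastHourly link rather than building the URL by hand
hourly_url = point.json()["properties"]["forecastHourly"]
hourly = requests.get(hourly_url, headers=HEADERS).json()

for period in hourly["properties"]["periods"][:3]:
    print(period["startTime"], period["temperature"], period["temperatureUnit"])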
Looking at the api documentation found HERE and HERE, you are calling the /gridpoints/{wfo}/{x},{y}/forecast/hourly endpoint, which returns the hourly weather forecast for the specified weather office {wfo} at the specified x-y grid coordinates. You can find a list of the weather offices HERE. The x-y grid coordinates for a weather office may be a bit more tedious to find on the web.
If you happen to have access to the GPS coordinates you are working with, you can use the /points/{lat},{lon} API call to get the information on the closest weather office, and then pass that to the /gridpoints/{wfo}/{x},{y}/forecast/hourly API call.
The flow of your application can look something like this:
Step 1: Get your map Geo coordinates. In my case, I am at 35,-106
Step 2: Make a call to the weather.gov API: https://api.weather.gov/points/35,-106. You will be presented with some JSON data. Look for the cwa key in the properties object. That will be the forecast office to pass into the next api call. In my case, the key is ABQ. You also need to find the gridX and gridY keys in the properties. These are the XY coordinates that you will use for the {X},{Y} parameters in the API call. In my case X = 121 and Y = 112.
Step 3: Make the final call to the weather.gov API: https://api.weather.gov/gridpoints/ABQ/121,112/forecast/hourly
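Put together, steps 2 and 3 look roughly like this in Python (same example values as above; the User-Agent string is a placeholder):

import requests

HEADERS = {"User-Agent": "my-weather-app (contact@example.com)"}  # identify your client

# Step 2: point metadata for 35,-106 gives the forecast office and grid coordinates
props = requests.get("https://api.weather.gov/points/35,-106", headers=HEADERS).json()["properties"]
wfo, x, y = props["cwa"], props["gridX"], props["gridY"]  # e.g. ABQ, 121, 112

# Step 3: hourly forecast for that office and grid cell
url = f"https://api.weather.gov/gridpoints/{wfo}/{x},{y}/forecast/hourly"
hourly = requests.get(url, headers=HEADERS).json()
print(hourly["properties"]["periods"][0])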

Pagination yields no results in Google Fit

I am using the REST API of Google Fit. I want to list sessions with the fitness.users.sessions.list method. This gives me a few dozen results.
Now I would like to get more results, so I set the pageToken to the value I got from the previous response. But the new results do not contain any data points, just yet another pageToken:
{
  "session": [],
  "deletedSession": [],
  "nextPageToken": "1541027616563"
}
The same happens when I use the pagination function of the Google Python API Client: I iterate on results but never get any new data.
request = self.service.users().sessions().list(userId='me')
while request is not None:
    response = request.execute()
    for ds in response['session']:
        yield ds
    request = self.service.users().sessions().list_next(request, response)
I am sure there is much(!) more session data in Google Fit for my account. Am I missing something regarding pagination?
Thanks
I think that the description of the pageToken parameter is actually rather confusing in the documentation (this answer was written prior to the documentation being updated).
The continuation token, which is used to page through large result sets. To get the next page of results, set this parameter to the value of nextPageToken from the previous response.
This is conflating two concepts: continuation, and paging. There isn't actually any paging in the implementation of Users.sessions.
Sessions are indexed by their modification timestamp. There are two (or three, depending on how you count) ways to interact with the API:
Pass a start and/or end time. Omitted start and end times are taken to be the start and end of time respectively. In this case, you will get back all sessions falling between those times.
Pass neither start nor end times. In this case, you will receive all sessions between some time in the past and now. That time is:
pageToken, if provided
Otherwise, it's 7 days ago (this doesn't actually appear in the documentation, but it is the behavior)
In any of these cases, you receive a nextPageToken back which is just after the most recent session in the results. As such, nextPageToken is really a continuation token, because what it is saying is that you have been told about all sessions modified up to now: pass that token back to be told about anything modified between nextPageToken and "current time" to get updates.
As such, if you issue a request that fetches all sessions for the last 7 days (no start/end time, no page token) and get a nextPageToken, you will only get something back in a request using that nextPageToken if any sessions have been modified in between the first and second requests.
So, if you're making these requests in quick succession, it is expected that you won't see anything in the second response.
In terms of the validity of the startTime you were passing in your comment, that's a bug. RFC3339 defines that fractional seconds should be optional.
I'll see about getting that fixed; but in the interim, just make sure you pass a fractional number of seconds (even if it is just .0, e.g. 2018-10-18T00:00:00.0+00:00).
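For example, with the Python client from the question, requesting an explicit window (keeping the .0 fractional seconds per the workaround above) would look roughly like this:

# 'service' is the authorized googleapiclient Fitness client from the question
# (self.service there). Explicit start/end times return every session in the
# window, independent of the continuation-token behaviour described above.
request = service.users().sessions().list(
    userId='me',
    startTime='2018-10-01T00:00:00.0+00:00',  # fractional seconds, per the workaround
    endTime='2018-11-01T00:00:00.0+00:00',
)
response = request.execute()
for session in response.get('session', []):
    print(session.get('name'), session.get('startTimeMillis'))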
It may be because the format of the URL you're using is different from the example in the documentation.
You are using:
startTime=2018-10-18T00:00:00+00:00
Wherein the one in the documentation has it as:
startTime=2014-04-01T00:00:00.00Z
The documentation also states that both the startTime and endTime query parameters are required.

Google Analytics Core API results sampling level

I have a script pulling data from the Google Analytics Core API. Since I am using the results to successfully populate a sheet in Google Sheets, I know that my data pull is a success.
I'm reading the documentation here.
In particular, the table of query parameters there.
However, I would like to Logger.log() the sampling level of the query:
// check sampling for each report
if (!results.containsSampledData) {
  Logger.log('sampling: none');
} else {
  Logger.log('sampling: ' + results.query.samplingLevel);
}
When I view the logs I get 'sampling: undefined'.
How do I get the sampling results from the results object?
Here is what generates the results object, though I don't think it's relevant (but I may be wrong):
// get GA data from core api
function getReportDataForProfile(profile, len_results, start_num) {
  var startDate = getLastNdays(30); // set date range here
  var endDate = getLastNdays(0);
  var optArgs = {
    'dimensions': 'ga:dimension5, ga:dimension4', // Comma-separated list of dimensions.
    'start-index': start_num,
    'max-results': len_results,
    'filters': 'ga:source==cj'
  };
  // Make a request to the API.
  var results = Analytics.Data.Ga.get( // mcf for multi channel api, Ga for core
    profile,   // Table id (format ga:xxxxxx).
    startDate, // Start-date (format yyyy-MM-dd).
    endDate,   // End-date (format yyyy-MM-dd).
    'ga:goalCompletionsAll, ga:users, ga:sessions', // Comma-separated list of metrics.
    optArgs);
  return results;
}
I think you missed this sentence:
The following table summarizes all the query parameters accepted by
the Core reporting API.
Those are query parameters. In other words, values that YOU supply. So, you should already know what the sampling level is since you determine it.
Here's the doc on sampling level. If not supplied, it sets samplingLevel to DEFAULT.
EDIT: Here's the doc on the response. I see that it indeed includes a samplingLevel field, but if you scroll further down, samplingLevel isn't one of the fields described in the Response Fields table. I suspect it is either included in the response by accident or you cannot rely upon that field given the lack of documentation.
Ah. If I had read further down I would have seen this paragraph:
Sampling
Google Analytics calculates certain combinations of dimensions and
metrics on the fly. To return the data in a reasonable time, Google
Analytics may only process a sample of the data.
You can specify the sampling level to use for a request by setting the
samplingLevel parameter.
If a Core Reporting API response contains sampled data, then the
containsSampledData response field will be true. In addition, 2
properties will provide information about the sampling level for the
query: sampleSize and sampleSpace. With these 2 values you can
calculate the percentage of sessions that were used for the query. For
example, if sampleSize is 201,000 and sampleSpace is 220,000 then the
report is based on (201,000 / 220,000) * 100 = 91.36% of sessions.
See Sampling for a general description of sampling and how it is used
in Google Analytics.
So to get the sample size as a percentage (what I'm used to seeing) I do this: (results.sampleSize / results.sampleSpace) * 100
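For anyone doing the same check from Python with the google-api-python-client rather than Apps Script, it would look roughly like this (a sketch; creds and the view ID are placeholders):

from googleapiclient.discovery import build

# 'creds' is assumed to be an authorized OAuth2 credential object
analytics = build('analytics', 'v3', credentials=creds)

result = analytics.data().ga().get(
    ids='ga:XXXXXXXX',                 # placeholder view (profile) id
    start_date='30daysAgo',
    end_date='today',
    metrics='ga:sessions',
    samplingLevel='HIGHER_PRECISION',  # ask for less sampling
).execute()

if result.get('containsSampledData'):
    # sampleSize and sampleSpace come back as strings
    pct = int(result['sampleSize']) / int(result['sampleSpace']) * 100
    print('sampling: %.2f%% of sessions' % pct)
else:
    print('sampling: none')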

Set UITableViewCell data from remote JSON file

I have a UITableView representing a list of cities (100 cities).
For each city I want to call a specific remote JSON URL to get that city's weather information and populate the response data into the city's cell in the UITableView.
When I run the application, I want to see my table as fast as possible, so I don't want to wait for all the JSON responses. I want the information fetched asynchronously (when a specific JSON response loads, set its information on the corresponding city cell in the UITableView).
Note: It is important for me to call separate remote JSON files.
Which technique is best for this task?
I would start with the following approach:
Create a data structure to hold city information, including:
path to your data service,
service call "state" (idle, waiting, completed, error),
weather information (from JSON returned by service call)
When you first show the table, you will want to:
initialize your array (of the aforementioned data structure),
initiate each service call asynchronously,
set each row (city) state to waiting.
You will also probably want to return a custom UITableCellView with the city name (if you already have it) and a spinning activity indicator. This will be your best option to have a fast load time (not waiting for services to complete) and give some visual indication that the data is loading.
Each service call should use the ViewController as its delegate; you will need a key field so that when the services return, they can identify with which row/city they are associated.
As each service completes and calls the delegate, it will send the data to the ViewController, which (in turn) will update the array and initiate a UITableView update.
The UITableView update is, in my opinion, the most difficult part. Typically cells are drawn or updated when they become visible; the table pre-fetches all visible cells' geometry and then queries the actual contents when it's ready to draw each cell; as a result, your strategy for updating cells will depend on how your table is used.
If your cell geometry changes, you will most likely need to redraw your entire table; I shudder to think about what 50 simultaneous UITableView redraws will do for your app, so you might need to set a time-threshold to "chunk" updates and handle drawing more intelligently.
[theTableView reloadData] will cause the entire table to be re-queried and redrawn.
If your cell geometry does not change, you can try to be more surgical and update only the visible cells (the non-visible ones aren't an issue, since their data will be queried when they become visible).
[theTableView visibleCells] returns an array of visible cells; when your service call returns, you could update the data and then search the array to see if the cell in question is visible; if it is, you will probably need to send the specific UITableCellView a setNeedsDisplay message.
There is a good explanation of setNeedsDisplay, setNeedsLayout, and 'reloadData' at http://iosdevelopertips.com/cocoa/understanding-reload-repaint-and-re-layout-for-uitableview.html.
There is a relevant SO question at How to refresh UITableViewCell?
Lastly, you will probably want to implement some updating logic in the service delegate error routine, just so you don't create endlessly spinning activity indicators.
I do this now while searching multiple servers. I use Core Data, but you can use an NSMutableArray to accumulate your JSON responses.
Every time you finish receiving data from one of your servers (for example, when connectionDidFinishLoading executes), take the JSON data object and add it to an NSMutableArray (let's call it weatherResults) using the addObject method. You may want to convert the JSON to an NSDictionary before adding it to the mutable array weatherResults.
Assuming your dataSource delegate methods refer to what is in the weatherResults NSMutableArray (for example, getting the number of rows from the size of the array using [weatherResults count]) you can do the following:
After inserting the object into the array, you can simply call reloadData in the dataSource controller. You will see the table update as each new JSON result arrives. The results should append to the bottom of the table as they come in. If you want to sort the NSMutableArray each time a JSON result arrives, you can do that too.
I do this and the time it takes to resort and reload the table is insignificant on my iPad. If you do not resort, it should be even faster.
By the way, in this explanation, I assume that the JSON response contains all of the information that you need to fill in your table cell. That may not be the case. If it's not, you will have to correlate the response with other information you have, such as a list of cities that your program is presenting.