Got the following in the logcat while using the Google Fit API for Android:
No live data sources available for Sensory Registration Request{type Data Type{com.google.calories.expended[calories(f)]}
I have registered a listener and built the following request to get the calories expended by the user:
SensorRequest sensorRequest = new SensorRequest.Builder().setDataType(DataType.TYPE_CALORIES_EXPENDED)
.setSamplingRate(10, TimeUnit.SECONDS).build();
Are you trying to fetch the data from the Google Fit cloud or from the sensors? If it's from the cloud, use the HistoryApi. If it's from the sensors, the device may not have a built-in sensor to measure calories burned, so try adding a sensor via BLE with the help of the BleApi and then make this call again.
You should get a proper response then, I believe.
I'm trying to get the used capacity for a particular file share from the Azure Monitor REST API.
But all I get is the summarized capacity. Does anyone know the trick?
This request delivers the UsedCapacity for the entire storage account:
GET https://management.azure.com/subscriptions/<MYSUBSCRIPTION>/resourceGroups/<MYRG>/providers/Microsoft.Storage/storageAccounts/<MYSTORAGEACCOUNT>/fileServices/default/providers/Microsoft.Insights/metrics?api-version=2018-01-01
You can get the share usage by using the Get Share Stats API: Get Share Stats (FileREST API) - Azure Files | Microsoft Docs
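The Get Share Stats call suggested above targets a single share rather than the whole storage account. A minimal sketch of how its request URL is put together is below; "myaccount" and "myshare" are placeholder names, and a real call additionally needs an x-ms-version header plus SAS or Shared Key authentication before it will succeed:

```java
// Sketch: building the FileREST "Get Share Stats" request URL for one share.
// The response body of this call contains <ShareUsageBytes> for that share only.
public class ShareStatsUrl {
    static String shareStatsUrl(String account, String share) {
        // restype=share&comp=stats selects the per-share statistics resource.
        return "https://" + account + ".file.core.windows.net/" + share
                + "?restype=share&comp=stats";
    }

    public static void main(String[] args) {
        System.out.println(shareStatsUrl("myaccount", "myshare"));
    }
}
```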
I'm new to IBM Watson Assistant. I have a use case where I have to ask the user about their interests, and I need to save the response in a database or call some API and post the user's response. Either way, I want to capture the user's response in my own system.
I have done some basic hands-on work with Watson Assistant, but I'm not able to figure out how to save the user's response in an external system.
For example, if the question is "What is your favorite mobile brand?"
Options: Apple, Samsung, Sony
If the user responds with Apple, I need to save that in my system, so that in the future I can offer products according to the customer's interest.
The messages API Response includes everything you'd need. You could store that response in your database each time, or configure your application to only store specific responses (e.g. when a certain intent or entity is detected, since the response includes those data elements).
Put more simply, your application is already controlling the flow of information between the user and the Watson Assistant API, so you're in full control over when and how to capture the API Response data and store it wherever you'd like.
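As a sketch of that idea in plain Java: the Watson Assistant message response is JSON that includes the detected intents and entities, so your application can decide per turn whether to persist the answer. The intent name "favorite_brand", the user ids, and the in-memory map standing in for your database are all hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class CaptureResponse {
    // Hypothetical in-memory "database" standing in for your real store or API post.
    static final Map<String, String> db = new HashMap<>();

    // topIntent is assumed to have been extracted from the message API response
    // by your application before this method is called.
    static void captureIfInteresting(String userId, String topIntent, String userText) {
        // Only persist turns where the assistant detected the intent we care about.
        if ("favorite_brand".equals(topIntent)) {
            db.put(userId, userText);
        }
    }

    public static void main(String[] args) {
        captureIfInteresting("user-1", "favorite_brand", "Apple");
        captureIfInteresting("user-2", "greeting", "hello");
        System.out.println(db); // only user-1's answer is stored
    }
}
```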
If I go to google api playground I do the following steps:
Step 1: Select & authorize APIs. I select the two scopes
https://www.googleapis.com/auth/fitness.blood_glucose.read
https://www.googleapis.com/auth/fitness.blood_pressure.read
because I need to read blood glucose and blood pressure from the user.
I select a google user and authorize the application to read the data.
Step 2: Exchange authorization code for tokens. I exchange the authorization token for the access and refresh token.
Step 3: Configure request to API. From "List possible operations" I choose the dataSources list operation: "Lists all data sources that are visible to the developer, using the OAuth scopes provided. The list is not exhaustive; the user may have private data sources that are only visible to other developers, or calls using other scopes."
Request: GET https://www.googleapis.com/fitness/v1/users/{userId}/dataSources
I replace {userId} with me and expect to retrieve the data sources for reading blood glucose and blood pressure, but what I receive is an empty array:
{
"dataSource": []
}
I need to test reading these two values (pressure and glucose). What steps do I have to take in the Google playground to achieve these two readings?
Thanks in advance
To get blood pressure and glucose data, you first need to create that data; you can see in the GET response that your data source list is empty. Here is what you can do to create and then read a record, using heart rate as an example:
Install the "Instant Heart Rate Monitor" application from the Play Store,
sign up using your Google account,
connect the "Instant Heart Rate Monitor" application to Google Fit (go to the Instant Heart Rate Monitor profile and tap Google Fit, ...),
measure your heart BPM using this application. Since this app is now connected to Google Fit, you will have a record in your data source.
To get this record, take the current epoch time in nanoseconds (https://www.epochconverter.com/) and make the following GET call:
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.heart_rate.bpm:com.google.android.gms:merge_heart_rate_bpm/datasets/000000-1518561964000000000
Note that in the URL above, 1518561964000000000 is an epoch time in nanoseconds, and you need to change it to the current time. You can do the same thing for glucose data with an application that measures blood glucose :)
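The URL construction above can be sketched in plain Java. The data stream id is the merged heart-rate stream quoted in the call above; the dataset id is startNanos-endNanos, with 0 meaning "from the beginning". The helper name is mine, not part of any SDK:

```java
public class DatasetUrl {
    // Builds the Fit REST datasets URL for the merged heart-rate stream.
    static String heartRateDatasetUrl(long startNanos, long endNanos) {
        return "https://www.googleapis.com/fitness/v1/users/me/dataSources/"
                + "derived:com.google.heart_rate.bpm:com.google.android.gms:merge_heart_rate_bpm"
                + "/datasets/" + startNanos + "-" + endNanos;
    }

    public static void main(String[] args) {
        // System.currentTimeMillis() is milliseconds, so multiply by 1,000,000
        // to get the nanoseconds the Fit REST API expects as the end time.
        long nowNanos = System.currentTimeMillis() * 1_000_000L;
        System.out.println(heartRateDatasetUrl(0L, nowNanos));
    }
}
```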
To get blood pressure from Google Fit, the endpoint is:
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.blood_pressure:com.google.android.gms:merged/datasets/0-1550664667715000000
dataStreamId: derived:com.google.blood_pressure:com.google.android.gms:merged
time: 0-1550664667715000000
This is startTime-endTime in UNIX epoch nanoseconds format.
To get an epoch time, this converter can be used:
https://www.freeformatter.com/epoch-timestamp-to-date-converter.html
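Since most epoch converters output milliseconds, the conversion to the nanoseconds expected in the dataset id is worth spelling out. A small sketch, with the helper names being mine rather than part of any SDK, using the merged blood-pressure stream from the endpoint above:

```java
public class BloodPressureUrl {
    // A millisecond epoch timestamp (what most converters give you) must be
    // multiplied by 1,000,000 to get the nanoseconds the Fit REST API expects.
    static long millisToNanos(long epochMillis) {
        return epochMillis * 1_000_000L;
    }

    static String bloodPressureDatasetUrl(long startNanos, long endNanos) {
        return "https://www.googleapis.com/fitness/v1/users/me/dataSources/"
                + "derived:com.google.blood_pressure:com.google.android.gms:merged"
                + "/datasets/" + startNanos + "-" + endNanos;
    }

    public static void main(String[] args) {
        // 1550664667715 ms -> 1550664667715000000 ns, matching the example above.
        System.out.println(bloodPressureDatasetUrl(0L, millisToNanos(1550664667715L)));
    }
}
```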
I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure but couldn't quite figure out how the JSON schema and Event Hubs work together to display the info in PowerBI.
Can I create a schema, upload it to PowerBI, and then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you've mentioned. If your gateway is able to make a RESTful call to the Event Hubs entry point, Event Hubs will receive this data and store it temporarily for the retention period specified. Stream Analytics then consumes the data from Event Hubs and enables you to do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI.
The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side, and it propagates from Event Hubs to Stream Analytics to PowerBI with the first JSON package sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
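To make the schema point concrete, here is a hypothetical sensor payload (field names are invented for illustration). The field names and types of the first event sent to Event Hubs become the columns of the PowerBI dataset, so every later event should keep the same shape:

```json
{
  "deviceId": "sensor-42",
  "timestamp": "2016-05-01T12:00:00Z",
  "temperature": 21.7,
  "humidity": 48.2
}
```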
There may be a way of altering the schema afterwards using the PowerBI REST API. Kindly find the links below; I haven't tried it myself, though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Event Hubs, then read it and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long. I suggest that you take a look at a series of blog posts I wrote describing how I built a demo similar to what you're trying to achieve. That should give you enough info to get started.
http://guyb.ca/IoTAzureDemo
I am using the Google Drive SDK for .NET. Everything is working as expected, except that whenever I get the permission feed for a particular document, I only get the id, kind, name, role, selfLink, and type fields on each permission.
There is no mention of the email address of the user, which stops me from determining, by reading the permissions, whether a file has been shared inside the domain or outside of it.
I can't use the Google Docs API to get the ACL on the doc because I am writing an app for over 200,000 users and I will need the speed provided by the Google API Console.
What should I do?
Thanks
You can use the Documents List API to get the email addresses. I'm curious why you believe Drive is faster than the Documents List API; for most API calls they have comparable response times.
https://developers.google.com/google-apps/documents-list/#retrieving_the_acl_for_a_document_file_or_collection
Thanks for the idea, but can I use the Documents List API with the service accounts provided through the API Console?
Why do I believe that the Google Drive SDK is better? Because we don't have any control over the Documents List API where we could set QPS (queries per second) limits, or not that I know of.
Secondly, with the Documents List API, when making requests to the servers, in the past I had to create a fault-tolerant algorithm such that if one request failed, the next went after 2 seconds; if that failed, the next went after 5 seconds, and so on up to a 7-second delay.
So I don't think the Documents List API would be a good fit for processing documents for over 200,000 users every day, unless Google has changed the way their API used to behave?