I'm trying to acquire historical traffic data recorded during an emergency evacuation. Is it possible to retrieve this kind of data with the Google Maps Platform Roads API? I'm specifically interested in this API, not others like the ones in this link. Thank you.
We are a digital health company developing an API that gives other companies, such as health insurers, access to health data. One of our customers reported discrepancies between the daily step counts shown in the Google Fit app and the data we provided for some of their users. However, when we queried the Google Fit API through Postman at https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate, we received the same values we had provided. Hence, the discrepancy is between the Google Fit app and the Google Fit API. The difference in step counts between the API and the app is significant and can reach up to 9,262 steps. Moreover, these discrepancies were observed for multiple users and across several days.
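For reference, the request we issue looks roughly like the following sketch (Python rather than Postman; the access token and time window are placeholders, and the estimated_steps data source is the one the Google Fit FAQ recommends for matching the app's totals):

```python
import requests

# ACCESS_TOKEN is a placeholder for a real OAuth 2.0 token with the
# fitness.activity.read scope; the time window below is illustrative.
ACCESS_TOKEN = "ya29.placeholder"

body = {
    "aggregateBy": [{
        "dataTypeName": "com.google.step_count.delta",
        # The data source the Google Fit FAQ recommends for matching
        # the step totals shown in the app.
        "dataSourceId": ("derived:com.google.step_count.delta:"
                         "com.google.android.gms:estimated_steps"),
    }],
    "bucketByTime": {"durationMillis": 86400000},  # one-day buckets
    "startTimeMillis": 1700000000000,  # placeholder epoch milliseconds
    "endTimeMillis": 1700086400000,
}

resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
    timeout=30,
)
resp.raise_for_status()

# Each bucket covers one day; sum the intVal step counts it contains.
for bucket in resp.json().get("bucket", []):
    steps = sum(
        value.get("intVal", 0)
        for dataset in bucket.get("dataset", [])
        for point in dataset.get("point", [])
        for value in point.get("value", [])
    )
    print(bucket["startTimeMillis"], steps)
```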
I read here https://developers.google.com/fit/faq#how_do_i_get_the_same_values_step_count_calories_distance_etc_as_the_google_fit_app (and I also checked related issues on Stack Overflow, e.g., https://stackoverflow.com/questions/69030278/google-fit-rest-apis-giving-incorrect-steps-count) that these discrepancies might be caused by syncing delays. However, the user synced again and the data in the app did not change. Moreover, the user does not use multiple devices, and the discrepancies appear to be permanent rather than temporary.
Can you please look into this and let us know what might be causing these discrepancies? Is there any troubleshooting we or the user can perform to resolve it?
Thank you :)
Is it possible to get historical data for the traffic layer in Google Maps, i.e. specify a time and location and receive whether traffic was normal, slow, etc.? I don't need the typical traffic conditions for a given time and day; I need concrete historical data as described above.
Some APIs provide historical data only for traffic incidents such as congestion and accidents.
I am developing a GPS application that tracks a user, sort of like Runkeeper, which records where you have been on your run.
In order to do this, should I store the GPS coordinates in a SQLite database on the phone, or on Google App Engine and then, when the user selects the data, send the entire set to the phone?
What would be a better design?
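To make the first option concrete, here is a minimal sketch of the on-phone SQLite approach I have in mind (the table and column names are just illustrative):

```python
import sqlite3
import time

# Local buffer for GPS fixes: append as they arrive, read back the
# whole track when the user selects a run.
conn = sqlite3.connect("tracks.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS gps_fix (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        run_id INTEGER NOT NULL,
        recorded_at REAL NOT NULL,  -- epoch seconds
        lat REAL NOT NULL,
        lon REAL NOT NULL
    )
    """
)

def record_fix(run_id, lat, lon):
    """Append one GPS fix for the current run."""
    conn.execute(
        "INSERT INTO gps_fix (run_id, recorded_at, lat, lon) VALUES (?, ?, ?, ?)",
        (run_id, time.time(), lat, lon),
    )
    conn.commit()

def load_run(run_id):
    """Fetch the whole track in recording order, e.g. to render or upload."""
    cur = conn.execute(
        "SELECT recorded_at, lat, lon FROM gps_fix "
        "WHERE run_id = ? ORDER BY recorded_at",
        (run_id,),
    )
    return cur.fetchall()

record_fix(1, 40.7128, -74.0060)
print(load_run(1))
```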
I have a lot of experience in this area, as I am developing a similar app. I save data on the phone, periodically check for a connection, and then upload the data to App Engine, after which I delete the data from the phone (so it doesn't consume too much phone memory). I store the uploaded file names in the Datastore and the files themselves in Cloud Storage (which accepts larger objects than the Datastore). Then I load the data into BigQuery, which serves as the main store of all data and the basis for analysis. I also track which data has been successfully loaded into BigQuery. So there are really four storage layers just to get the data into BQ; it's quite an effort. My next phase is doing the analytics in BigQuery and sending info back to the phone app. I also use Tableau, as it has a good solution for reading BigQuery; it recognizes geo data, so you can plot lat/lon on an implementation of OpenStreetMap.
Certainly there are other solutions. The downside of my solution is that the load into BigQuery is slow; it can take minutes before the data is available. On the other hand, Google's cloud tools are quite good, and they integrate well with third-party analytics and viewer tools.
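To make the batch leg concrete, here is a minimal sketch of loading already-uploaded fixes from Cloud Storage into BigQuery (the bucket, project, dataset, and table names are placeholders):

```python
from google.cloud import bigquery

# Load newline-delimited JSON GPS fixes, previously uploaded to Cloud
# Storage by the phone, into a BigQuery table for analysis.
client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    schema=[
        bigquery.SchemaField("device_id", "STRING"),
        bigquery.SchemaField("recorded_at", "TIMESTAMP"),
        bigquery.SchemaField("lat", "FLOAT"),
        bigquery.SchemaField("lon", "FLOAT"),
    ],
)

load_job = client.load_table_from_uri(
    "gs://example-gps-uploads/fixes-2024-01-01.json",
    "example_project.gps_tracking.fixes",
    job_config=job_config,
)
load_job.result()  # Batch loads can take a while, as noted above.
print(f"Loaded {load_job.output_rows} rows.")
```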
Keep me updated on how you progress.
I have a large volume of data (100,000 points). Is it possible to show all the data points at the same time? Someone said that the Google Maps JavaScript API v3 cannot load more than 3,000 data points. Is this true? How many data points can it show at one time?
You might want to take a look at this article from the Google Geo APIs team, discussing various strategies to display a large number of markers on a map.
Storing the data using Fusion Tables in particular might be an interesting solution.
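To illustrate one strategy from that family, here is a minimal sketch of server-side grid clustering, which collapses 100,000 raw points into a few hundred cluster markers before the map ever renders them (the cell size and sample points are purely illustrative):

```python
import collections

def grid_cluster(points, cell_deg=0.05):
    """Group (lat, lon) points into grid cells and return one
    centroid plus a member count per cell."""
    cells = collections.defaultdict(list)
    for lat, lon in points:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells[key].append((lat, lon))

    clusters = []
    for members in cells.values():
        n = len(members)
        clusters.append({
            "lat": sum(p[0] for p in members) / n,
            "lon": sum(p[1] for p in members) / n,
            "count": n,  # shown on the cluster marker
        })
    return clusters

# Example: the two nearby points collapse into one cluster marker.
print(grid_cluster([(40.01, -75.00), (40.02, -75.01), (41.50, -73.90)]))
```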
I'm working on the visualization of a large spatial data set (around 2.5 million records) using Google Maps on a website. The data points correspond to geographic coordinates within a city. I'm trying to group the data points by zip code so that I can visualize some form of statistical information for each zip code within the city.
The Google API provides an option for reverse-geocoding a point to get its zip code, but it enforces a usage limit of 2,500 requests per IP per day. Given the size of my data set, I don't think Google or any other rate-limited web API is practical.
Now, after a few rounds of Google searching, I've found that it is possible to set up a spatial database if I have the boundary data for the zip codes. I've downloaded said boundary data, but it is in what is known as "shapefile" format. I'm trying to use PostGIS to set up a spatial database that I can query, for each data point, to find out which zip code it belongs to.
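To show where I am, this is the kind of lookup I'm attempting once the shapefile is loaded (e.g. with shp2pgsql); the table, column, and connection details below are placeholders:

```python
import psycopg2

# Assumes the shapefile was loaded into a "zip_boundaries" table with
# a "zipcode" column and a "geom" polygon column in SRID 4326.
conn = psycopg2.connect("dbname=spatial user=postgres")
cur = conn.cursor()

lat, lon = 40.7128, -74.0060  # example data point

cur.execute(
    """
    SELECT zipcode
    FROM zip_boundaries
    WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
    """,
    (lon, lat),  # PostGIS points are (longitude, latitude)
)
row = cur.fetchone()
print(row[0] if row else "no zipcode found")

cur.close()
conn.close()
```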
Am I headed in the right direction?
If so, can someone please share examples of similar work, as I seem to have reached a dead end with PostGIS?
If not, can you please point me to the right course of action given my requirements?
Thanks.