I'm building a mobile app that lists posts; each post has a place attached to it.
I want the list to be able to show the distance from the user's location.
Without caching anything, I would have to store the place reference for each post and fetch the place's geometry from the Google Places API while building the list, which sounds like a very bad idea.
Am I allowed to store the place's ID, reference, name, and geometry in my DB and deliver them with my API?
This is for performance purposes only.
Another implementation might be to cache this data in a local SQLite DB on the mobile device, but then the user would have to download the information for each uncached place, so for a list of X different places the client would make X API calls, which sounds slow and battery-draining.
Am I allowed to have a central cache in my DB, in a table that is refreshed every once in a while and evicted if not accessed for, let's say, 30 days?
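To make this concrete, here is a rough sketch of the kind of central cache I have in mind (Python/sqlite3 purely for illustration; the table layout and the fetch_from_places_api placeholder are made up):

    import sqlite3
    import time

    THIRTY_DAYS = 30 * 24 * 3600

    db = sqlite3.connect("place_cache.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS place_cache (
            place_id      TEXT PRIMARY KEY,
            name          TEXT,
            lat           REAL,
            lng           REAL,
            last_accessed INTEGER
        )
    """)

    def get_place(place_id, fetch_from_places_api):
        # fetch_from_places_api is a placeholder for whatever call
        # retrieves (name, lat, lng) for a place ID from the Places API.
        row = db.execute(
            "SELECT name, lat, lng FROM place_cache WHERE place_id = ?",
            (place_id,)).fetchone()
        if row is None:
            name, lat, lng = fetch_from_places_api(place_id)
            db.execute(
                "INSERT OR REPLACE INTO place_cache VALUES (?, ?, ?, ?, ?)",
                (place_id, name, lat, lng, int(time.time())))
        else:
            name, lat, lng = row
            db.execute(
                "UPDATE place_cache SET last_accessed = ? WHERE place_id = ?",
                (int(time.time()), place_id))
        db.commit()
        return name, lat, lng

    def evict_stale():
        # Drop rows not accessed for 30 days, as described above.
        db.execute("DELETE FROM place_cache WHERE last_accessed < ?",
                   (int(time.time()) - THIRTY_DAYS,))
        db.commit()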
Google's page on Places states that caching of the place ID is allowed.
The terms in 10.5.d state that you may store limited amounts of content for no more than 30 calendar days for performance reasons. Since this is what you are trying to do, I would expect that you are OK to store the ID, location, and name.
If you start to cache more information than that, you'll breach the terms of the API. It's not entirely clear where the line is, but I think as long as you are being reasonable you'll be OK.
As per the current policy, the place ID is exempt from the caching restriction.
Pre-Fetching, Caching, or Storage of Content

Applications using the Directions API are bound by the Google Maps Platform Terms of Service. Section 3.2.4(a) of the terms states that you must not pre-fetch, cache, index, or store any Content except under the limited conditions stated in the terms.

Note that the place ID, used to uniquely identify a place, is exempt from the caching restriction. You can therefore store place ID values indefinitely. Place ID values are returned in the place_id field in Directions API responses.
"Note that the place ID, used to uniquely identify a place, is exempt from the caching restriction. You can therefore store place ID values indefinitely. The place ID is returned in the place_id field in Places API responses." https://developers.google.com/places/web-service/policies#usage_limits
Since the latest changes to the Google Fit REST API, location data can only be read by the application that initially wrote the data. See "Google Fit permission problems".
This is too bad, since it allowed for useful features such as tracking routes for workouts. But I can live without that.
However, what I really need for my data analysis is elevation data, i.e., tracking the elevation gain of the user during a workout session. Altitude gain is a rather important metric in the analysis of certain activities, often more important than distance.
My question is thus: without access to the location data (which included the user's altitude), how can I query elevation data from Google Fit for a given session?
Thanks for any ideas.
I'm looking for a solution to gather data on local businesses. In a nutshell, I need to input a street address or coordinates and get a listing of all other businesses that exist within a (for example) 3-mile radius. Will the Google Maps API work for this?
This will be a manual process, so the requests will be very minimal: maybe 1 or 2 per month. This isn't a script that I intend to run over and over to create a high-volume requirement.
The Google Places API is certainly not designed to gather all businesses within a specified radius. The API returns only the most prominent results and doesn't guarantee a complete list. This is not a traditional database search.
In addition, there are restrictions in the Terms of Service that prohibit this kind of search. Have a look at paragraph 10.4(c)(ii):
No creation or augmentation of data sets based on Google’s Content or Services. You will not use Google’s Content or Services to create or augment your own mapping-related dataset (or that of a third party), including a mapping or navigation dataset, business listings database, mailing list, or telemarketing list.
https://developers.google.com/maps/terms#section_10_4
I have read Google's Terms and Conditions here: https://developers.google.com/maps/documentation/geocoding/support#comunity-support, but I am still a little unclear on how long we can store latitude and longitude in our own database.
I thought I found the answer here: Terms and Conditions Google Maps: Can I store lat/lng and address components?, but reading some of the recent comments raised doubts once again.
Specifically, if the sole intent is to use the latitude and longitude retrieved from the API with a Google map, can I store those attributes in my own database indefinitely or only for 30 days?
How do I make contact with someone at Google directly so I can get a definitive answer to this question and don't need to contact a lawyer to interpret the terms and conditions?
Thank you,
Terry
The Terms of Service, section 10.5, clause d, states this:
No caching or storage. You will not pre-fetch, cache, index, or store any Content to be used outside the Service, except that you may store limited amounts of Content solely for the purpose of improving the performance of your Maps API Implementation due to network latency (and not for the purpose of preventing Google from accurately tracking usage), and only if such storage:
is temporary (and in no event more than 30 calendar days);
is secure;
does not manipulate or aggregate any part of the Content or Service; and
does not modify attribution in any way.
This appears to me to specify that the caching must be temporary; you can't simply decide up front that you're going to cache the data for a maximum of 30 days. By your own words you want to cache it to avoid API hits, but that is explicitly prohibited by this clause ("not for the purpose of preventing Google from accurately tracking usage").
If you were caching for a short duration for a specific purpose, such as knowing that a given user will be using the data again in a relatively short period of time, caching would be allowed. Caching just for the sake of caching is not allowed.
You are allowed to store data indefinitely if it's related to a user preference. For example, storing lat/long information is okay if you're saving a user's home coordinates, but only the actual preference data, not any results generated by the API from that personal data.
I am not a lawyer, but this section appears rather clear to me.
I'm new to geocoding, so I'm not certain this is even the question I should be asking, but all of the other discussions I've seen on this topic (here and on the Google API forum) are so application-specific that I feel like I might be missing a very elementary step. I don't need to know how to implement a store finder; I need to know if I should.
Here is my specific situation - I have been contracted to design an application wherein we will build a database of shops (say, independently owned bars and pubs). This list will continually grow and change as shops close and new ones open. The user can enter his/her point of origin (zip code or address) and be shown a list or map containing all the various shops within a given radius in order of proximity.
I know how to deliver these results from a static database:
One would store the longitude and latitude as columns for each row and then just use that information to check distances.
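For example, something along these lines (a rough Python sketch of the haversine distance check I mean; the shop rows are assumed to be (name, lat, lng) tuples pulled from the database):

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(lat1, lng1, lat2, lng2):
        # Great-circle distance between two points, in miles.
        lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2)
        return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 = Earth radius in miles

    def shops_within(user_lat, user_lng, shops, radius_miles=3):
        # shops: iterable of (name, lat, lng) rows from the database.
        nearby = [(haversine_miles(user_lat, user_lng, lat, lng), name)
                  for name, lat, lng in shops]
        return sorted(pair for pair in nearby if pair[0] <= radius_miles)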
But I have inherited an (already fairly large) database of shops which have addresses but not coordinates, so I'm not sure what the best way to get those coordinates is. I could write a script to query them one at a time against Google's geocoding service, I could have a data-entry person manually look up the coordinates for each one and populate the data that way, or maybe there is a third option I'm not aware of.
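The script option would look roughly like this (only a sketch of what I mean; it assumes a shops table with an address column and empty lat/lng columns, uses the Geocoding web service over HTTP, and sidesteps the question of whether storing the results is even allowed):

    import sqlite3
    import time
    import requests

    API_KEY = "YOUR_KEY"  # placeholder
    GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

    db = sqlite3.connect("shops.db")
    rows = db.execute("SELECT id, address FROM shops WHERE lat IS NULL").fetchall()

    for shop_id, address in rows:
        resp = requests.get(GEOCODE_URL, params={"address": address, "key": API_KEY})
        data = resp.json()
        if data.get("status") == "OK":
            loc = data["results"][0]["geometry"]["location"]
            db.execute("UPDATE shops SET lat = ?, lng = ? WHERE id = ?",
                       (loc["lat"], loc["lng"], shop_id))
            db.commit()
        time.sleep(0.2)  # pace the requests to stay within the query limits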
Is this the right place to be asking this question? Google Maps Geocoding doesn't host a forum of its own, but refers people to Stack Overflow. Other forums on the net dealing with this topic all relate to a specific technical question, but no one seems to be talking about it from a top-down perspective (i.e. the big picture).
Google imposes a 2,500-query-per-day limit on free users and a 100,000-query-per-day limit on paying ones; neither of these seems up to the task for a site with even moderate traffic if, every time a user makes a request, the entire database (perhaps thousands of shops) is checked against Google's data. It seems certain we must store the coordinates locally, but even then there will have to be calls to Google in order to plot them on a map. If I had a finite number of locations (if, for example, I had six hardware shops) and I wanted to make a store locator, there would be a wealth of discussions, tutorials, and Stack Overflow questions to point the way, but I'm dealing with a potentially vast number of records and I'm not sure how to proceed or where to begin.
Any advice would be welcome. Additionally, if this is not the best place to ask this question, a helpful response would be to point to a better place to post it. I've searched for three days but haven't found what looks like a good resource for such subjective questions.
The best way, of course, would be to use a geocoding service to get the coordinates and store them in your DB. But that's not possible with Google's geocoding service, because you are not permitted to store geocoded data permanently.
There are free services without this restriction; some keywords to search for: MapQuest, Nominatim, GeoNames (though these services are less accurate than Google).
Another option would be to use a Fusion Table. The geocoding runs automatically (but the daily limits are the same as for the geocoding service). The benefit: the geocoding is permanent (although you can't access the locations directly by, e.g., downloading a DB dump), and you may use the coordinates for plotting markers (via a FusionTablesLayer) or for filtering (e.g. by distance).
The number of entries shouldn't be an issue; 100k rows are no problem for a database.
I was reading a Wikipedia article on the usage of Bloom filters. It mentioned that Bloom filters are used by Google Chrome to detect whether an entered URL is malicious. Because of the possibility of false positives:
The Google Chrome web browser uses a Bloom filter to identify malicious URLs. Any URL is first checked against a local Bloom filter, and only upon a hit is a full check of the URL performed.
I am guessing a "full check" means that Google stores a hash table of the list of malicious URLs, and the URL is hashed to check whether it is present in that table. If this is the case, isn't it better to just have the hash table instead of a hash table plus a Bloom filter?
Please enlighten me on this: is my understanding of the full check correct?
A Bloom filter is a probabilistic data structure that tells us that an element either is definitely not in the set or may be in the set. A Bloom filter takes less space (depending on the hash functions configured and the error rate) than a hash map. A hash map can determine whether an element exists or not, whereas a Bloom filter can only deterministically confirm the non-existence of an element.
Let's look at the Google Chrome use case. When a user enters a URL, Chrome should validate whether the URL is safe. To validate the URL, Chrome could make a call to a Google server (internally, Google can maintain any data structure to find this out). However, the challenges with this approach are multi-fold: for every URL opened in Chrome, validation would go through a Google server, which adds a dependency on Google's servers, network round-trip time, and the requirement to maintain high availability to validate every URL opened in Chrome browsers across the world.
Since this data does not change very often (it may be updated every hour or so), Chrome might have chosen to bundle all the malicious-site data as a Bloom filter, with Google syncing this data periodically with the clients (the set of malicious sites is small compared to the full set of websites). When the user opens a URL, Chrome checks the Bloom filter; if the URL is not in the filter, it is definitely safe. If it is in the filter, the Bloom filter is not sure, so the request goes to the Google server for validation (this traffic is far smaller than routing all the traffic).
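A minimal sketch of that check-locally-then-verify flow (a hand-rolled Bloom filter in Python for illustration only; check_with_google_server stands in for the server-side full check):

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1_000_000, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8 + 1)

        def _positions(self, item):
            # Derive num_hashes bit positions from salted SHA-256 digests.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    def is_malicious(url, bloom, check_with_google_server):
        if not bloom.might_contain(url):
            return False  # definitely not on the malicious list, no network call
        return check_with_google_server(url)  # maybe malicious: do the full check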
A Bloom filter for all malicious URLs is small enough to be kept on your computer, and even in memory. Because almost all sites you enter are not malicious, it would be better not to make an extra request for them; that's where the Bloom filter comes in.
You might not notice it, but for slow internet connections it's very useful.
Not only is the Bloom filter much smaller and faster than a web query, it also protects Google's malicious URL API from what would otherwise be a tremendous workload.
From my understanding, Bloom filters can represent set membership efficiently in a limited amount of space. The contract of a Bloom filter is that it never returns a false negative; however, depending on the size of its bit vector, it might return some false positives.
To resolve those false positives, Google either uses hashing locally or sends the suspect URLs to its servers to be rechecked, thereby avoiding the load of sending all URLs to its servers.
Google stores the list of known malicious URLs in a Bloom filter.
See here:
Chromium Code Reviews Issue 10896048
Alex Yakunin's blog: Nice Bloom filter application