Our server uses the Google Maps API a few dozen times a day to turn street addresses into lat/lng coordinates, for later use on a client-side Google map.
We've been doing this for years, and we're hitting this URL to do it:
http://maps.googleapis.com/maps/api/geocode/json?address=<street address>&sensor=false
We do this from a Perl script that runs once every 15 minutes, processing whatever has queued up (again, we're talking a couple dozen requests a DAY). If multiple requests are queued, it hits the API serially in bursts of 5, pausing 5 seconds between bursts if any remain.
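For reference, the throttling behaves roughly like this sketch (Python standing in for our actual Perl, with the queue handling simplified; geocode_queue and its storage step are illustrative):

import time
import requests

GEOCODE_URL = "http://maps.googleapis.com/maps/api/geocode/json"

def geocode(address):
    # Single lookup against the same endpoint we have used for years.
    resp = requests.get(GEOCODE_URL, params={"address": address, "sensor": "false"})
    return resp.json()

def geocode_queue(addresses):
    # Serial bursts of 5 requests, with a 5-second pause between bursts.
    for i, address in enumerate(addresses):
        result = geocode(address)
        # ... store result["results"][0]["geometry"]["location"] for the client map ...
        if (i + 1) % 5 == 0 and (i + 1) < len(addresses):
            time.sleep(5)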
This has worked fine for years now.
Today, we started regularly receiving an HTTP 500 response code, with this JSON in the body:
{
   "results" : [],
   "status" : "UNKNOWN_ERROR"
}
I have taken one of the actual failing requests and repeatedly hit it from my local dev box, and it works fine every time.
Yet on our server (colocated at a major service provider), I repeat the test, and I can regularly duplicate the 500 error in the browser.
Both my dev box and our server are currently resolving maps.googleapis.com to the same IP:
googleapis.l.google.com [74.125.198.95]
So I don't think it's an issue of our server hitting a bad Google server.
I would suspect a quota issue, except our use is very, very minimal. Perhaps they are blocking a large swath of Rackspace customers, but we have no idea how to find that out or rectify it if so.
After further testing on other servers we have colocated at the same facility, only a single server is exhibiting this issue, though the other servers are in an entirely different network class.
Does Google typically use this kind of error message for quota issues? If not, any other ideas as to what the problem is, so we can fix it? At this point, we may need to switch to Bing Maps or some other provider if there's no way to know why it's happening or when it will return to normal.
I checked Google's status page, and "Google Maps" is green with no issues indicated, but I have no idea whether that covers their Maps API services or not.
I'd like to ask for help/ideas on the issue described below.
Our iOS app allows users to access their Google Drive files.
We use the Changes API (https://developers.google.com/drive/api/v3/reference/changes). The main precondition to using this API is building a local DB that holds a snapshot of the user's Drive file tree plus the page token. To initially fill the DB we must request the list of all files from the user's Drive. Getting the list of all files (with metadata) takes too long for many of our users. This is the issue I want to address.
We request files with a series of Files.list requests (https://developers.google.com/drive/api/v3/reference/files/list). Most requests are plain files?q=trashed%20%3D%20false.
For example, at my own private Google Drive:
69K files
initial request of all files takes 5+ minutes with my current network speed (Download 527 Mbps, Upload 417 Mbps; ping www.googleapis.com – 40–45 ms)
~150 requests
each request brings information about ~460 files
each request takes around 2-2.5 seconds
Sometimes I observed requests taking up to 6 seconds, which means getting the full file list took 15 minutes for my account.
If I look at the Developer Console, the latency is below 0.1s
Many of our users have Drives far bigger than mine. A typical iOS app session is not long enough to complete the initial request. We save every intermediate page token, so data received during a single app session is not lost if the user leaves the app; the next session continues downloading from the last saved token. But there are still cases where our app needs the DB filled before starting some operations – in those cases our users see a "Pending..." progress indicator and complain that our app is slow.
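For context, the paging loop is roughly equivalent to this sketch (a Python rendering of the HTTP calls; store_files, save_token, and load_token are hypothetical stand-ins for our local-DB layer):

import requests

FILES_URL = "https://www.googleapis.com/drive/v3/files"

def fetch_all_files(access_token, store_files, save_token, load_token):
    # Resumable listing: persist nextPageToken after every page so the
    # next app session can continue from where this one left off.
    page_token = load_token()  # None on the very first run
    while True:
        params = {"q": "trashed = false", "pageSize": 1000}
        if page_token:
            params["pageToken"] = page_token
        resp = requests.get(
            FILES_URL,
            params=params,
            headers={"Authorization": "Bearer " + access_token},
        ).json()
        store_files(resp.get("files", []))  # write this page into the local DB
        page_token = resp.get("nextPageToken")
        save_token(page_token)              # checkpoint for the next session
        if not page_token:
            break  # snapshot complete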
So, questions:
is it possible to improve the described request speed/latency?
maybe there's some quota we're missing that could be changed?
maybe someone can advise a more effective way of getting the full file list?
P.S. We could potentially reduce the number of requests. We have to perform some double checks for Shared with Me folders, as we observed that sometimes the all-files request doesn't list every file from shared folders. That's a bit of a side story, and I don't think it would dramatically improve the situation for us. I can provide more details on the actual set of requests we perform if necessary.
Are you returning all the fields? I would assume so, since the only query parameter provided is trashed=false. Do you need all the fields? Can you try reducing the query to return only the fields you really care about (using a field mask) and see if that improves your performance?
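For example, something along these lines (an illustrative Python request; the exact field list is an assumption, so keep only what your app actually reads):

import requests

# Ask only for the fields the local DB needs; nextPageToken must be
# included at the top level or paging breaks.
params = {
    "q": "trashed = false",
    "pageSize": 1000,
    "fields": "nextPageToken,files(id,name,mimeType,parents,modifiedTime)",
}
resp = requests.get(
    "https://www.googleapis.com/drive/v3/files",
    params=params,
    headers={"Authorization": "Bearer ACCESS_TOKEN"},  # placeholder token
)
print(len(resp.json().get("files", [])))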
I am using the Google Maps JavaScript API, v3, and everything works well up to the point where the requests for the map images are forbidden with a status of 403. Usually the map stops loading after the page/session has been open for some period: it may be 24 hours, it may be more than 48; I couldn't pin down a more accurate period.
Since we want a live website and a testing one – different domains – I generated 2 different keys and load them conditionally, and the rendered HTML is the one expected.
var mapKey = VanillaRate.Domain.Settings.AppSettings.GoogleMapsApiKey;
and the script tag is:
<script src="https://maps.googleapis.com/maps/api/js?key=#(mapKey)&libraries=places" async defer></script>
The usage limits were not exceeded, and the referrer is set correctly.
The error appears when the map is zoomed, and it is:
Failed to load resource: the server responded with a status of 403 () - maps.googleapis.com/maps/api/js/StaticMapService.GetMapImage?....
Since I couldn't find any exact matching report or documentation about it: could this be a timeout on Google's servers for security reasons, and is that why the requests are forbidden for a session open longer than a day?
EDIT: I forgot to mention that after refreshing the tab, everything works well. If it was indeed the usage limit, would the server respond with success after a refresh? I've read that in that case the map wouldn't work all day. Is that right?
If the response is still an HTTP 403 (Forbidden) error, the signature was not necessarily the problem; it may be related to usage limits instead.
This typically means your access to the web service has been blocked on the grounds that your application has been exceeding usage limits for too long or otherwise abused the web service.
I found this answer in the Google developer documentation. There is no simple way to resolve this problem. Google recommends two solutions:
Reduce requests to the server; or
purchase 'additional allowance for your Google Maps APIs for Work license.'
You can also try accessing the Google Cloud Support Portal to report your problem.
I found this information in the Google developer documentation here. At that link you can find the solutions I described above and an explanation of your problem.
"The usage limits were not exceeded"
Are you sure? You're loading the places library, in which case this applies:
Google Places API Web Service
Default 1,000 free requests per day,
increased to 150,000 free requests per day after identity
verification.
https://developers.google.com/maps/pricing-and-plans/
See also:
https://developers.google.com/places/web-service/usage
https://developers.google.com/maps/documentation/javascript/places#UsageLimits
I'm currently using the Google Places API to pull reviews onto a webpage. Everything is working fine except for the photos of the people leaving reviews. When trying to get the photo of a reviewer, it returns a 403 Forbidden on every other page load. It seems there might be a rate limit?
The problem is I can't find any documentation about rate limits and how to get the picture to display without issue. Am I missing something in the docs?
My API call is this:
https://maps.googleapis.com/maps/api/place/details/json?placeid=PLACE_ID&key=API_KEY
That returns quite a long JSON object (I've cut it down). One of the fields is:
{
  "result" : {
    "reviews" : [
      {
        "profile_photo_url" : "//lh5.googleusercontent.com/url/photo.jpg"
      }
    ]
  }
}
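For completeness, pulling those photo URLs out looks roughly like this (a Python sketch; PLACE_ID and API_KEY are placeholders):

import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/details/json",
    params={"placeid": "PLACE_ID", "key": "API_KEY"},
)
for review in resp.json().get("result", {}).get("reviews", []):
    # profile_photo_url is protocol-relative (it starts with //).
    print(review.get("profile_photo_url"))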
Like I said, if I refresh a couple of times it causes a 403 error for the image GET request. Any way to cache or allow more requests?
I found out why this was happening. It's due to rate limits on the photo media, which is why it was giving a 403 error in the console. The developer docs outline the limits for the media requested.
An excerpt of the docs...
The Google Places API Web Service enforces a default limit of 1,000
free requests per 24 hour period, calculated as the sum of client-side
and server-side requests. If your app exceeds the initial limit, the
app will start failing. You can increase this limit free of charge, up
to 150,000 requests per 24 hour period, by enabling billing on the
Google API Console to verify your identity. A credit card is required
for verification. We ask for your credit card purely to validate your
identity. Your card will not be charged for use of the Google Places
API Web Service.
The best thing to do is to cache the media once it's requested, to avoid going over the limit. This is especially useful if you're reloading the page many times while testing local development changes.
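A minimal caching sketch along those lines (Python; the cache directory and file-naming scheme are assumptions):

import os
import requests

CACHE_DIR = "photo_cache"  # hypothetical local cache location

def cached_photo(photo_url, photo_id):
    # Fetch each reviewer photo once; afterwards serve the cached copy so
    # repeated page loads don't count against the request limit.
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, photo_id + ".jpg")
    if not os.path.exists(path):
        resp = requests.get("https:" + photo_url)  # URL is protocol-relative
        resp.raise_for_status()
        with open(path, "wb") as f:
            f.write(resp.content)
    return path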
Is anybody else using the Google CDN option from the Load Balancer?
In the last hour, it has stopped working for me completely.
I am receiving the message:
That’s an error.
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds. That’s all we know.
I have not changed anything in my setup; it just stopped working. I thought my IP address might be blocked, but it's also happening from websites that check HTTP headers. I have also switched my IP address and it's still the same.
All 3 of my VM instances load the file fine. When I go to the file through my CDN IP, it does not work.
Here are all three of my VM instances:
http://104.154.79.149/round.gif
http://104.198.106.79/round.gif
http://104.196.138.170/round.gif
Here is the CDN IP address:
http://130.211.31.236/round.gif
What is wrong and how can I fix this?
I have created the firewall rule for 130.211.0.0/22, tcp:1-5000, applied to all targets, as suggested in other posts, with no success.
This is not something I believe you are able to resolve, as this is also affecting my instances as of around an hour ago. Some of our instances are accessible from some locations, as evidenced by the RPS still ticking over (even if at a reduced rate) and the database activity.
[Edit] I can access my instances by IP (not via the load balancer) but not via the load balancer / by domain. I am able to wget from other instances and get the correct responses.
Google are having an incident
https://status.cloud.google.com/incident/compute/16020
Google Compute Engine Incident #16020 502s from HTTP(s) Load Balancer
Incident began at 2016-10-13 15:00 (all times are US/Pacific).
I'd dispute their start time, though; I believe it began earlier (maybe 14:30 Pacific).
I've been working with a website that uses geocode lookups via Google. I've been testing this for a while now.
https://maps.googleapis.com/maps/api/geocode/json?address=1600+Pennsylvania+Ave+NW,+Washington,+DC&key=XXXXXXXXXXXXXXXXXXXXXXXXXXX
I've got that key locked to particular servers. All of a sudden I'm seeing geocode lookup errors. The response back from Google is:
{
   "error_message" : "Browser API keys cannot have referer restrictions when used with this API.",
   "results" : [],
   "status" : "REQUEST_DENIED"
}
When I try a simple request without the API key at all, it seems to work fine. You can try this yourself: copy and paste the next line into your browser's URL bar and hit return.
https://maps.googleapis.com/maps/api/geocode/json?address=1600+Pennsylvania+Ave+NW,+Washington,+DC
Now, I probably shouldn't look a gift horse in the mouth, but the whole thing seems odd. If I remove my API keys today, will my websites that rely on address-to-lat/lng conversion all fail tomorrow?
Is anybody else experiencing odd failures with Google Maps and geocode lookups? Is anyone aware of a systemic content or policy change from the Google mapping/geocoding team?
Edit, update:
So this defect lasted about 40 minutes, from around 9:10 PM PST until a bit before 10 PM PST. It seems to be fixed now.
Response to comment: Hmmm. I've been looking at the API keys as:
Server keys: Create and use a Server key if your application runs on a
server. Do not use this key outside of your server code. For example,
do not embed it in a web page. To prevent quota theft, restrict your
key so that requests are only allowed from your servers' source IP
addresses.
Browser keys: Create and use a Browser key if your application runs on
a client, such as a web browser. To prevent your key from being used
on unauthorized sites, only allow referrals from domains you
administer.
I'm definitely doing this lookup directly from the user to Google without a server in the middle, so there is no way I can safely use a Server key there. I've read your input, and it definitely says Server key for geocoding. But that really implies that no one should ever allow a browser/client interaction to perform a geocode lookup. Frankly, I just assumed the writeup was out of date and a bit inaccurate.
While you may be right, the whole thing just looks odd. I would have thought that if geocoding required a lookup from a server (only) and never from a web application via the browser (ever), there would have been some direct comment to that effect.
Oh, and the browser keys, with referrer fencing, seem to be working again. Again, I'm just saying the whole thing is odd. I'm treating this as a temporary hiccup at the Google geocode servers.
And yes, I can certainly introduce an API server for a round-trip Ajax call to do the lookup safely with a server key, but what's the point? Is there a benefit I'm just not seeing? I guess I could add elements like a nonce to protect my intermediate geocode lookup server from somebody else using it, etc. But at this point, I'm just confused.
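For what it's worth, the intermediate server I have in mind would be a thin proxy, roughly like this (a Python/Flask sketch with hypothetical names; the server key stays server-side and the browser only ever sends the address):

import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
SERVER_KEY = os.environ["GEOCODE_SERVER_KEY"]  # hypothetical env var

@app.route("/api/geocode")
def geocode_proxy():
    # The browser sends only the address; the server key never leaves here.
    address = request.args.get("address", "")
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": SERVER_KEY},
    )
    return jsonify(resp.json())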
Update #2: 16 Jun 2016
Again, this whole thing is not clear. I filed a feature request with the Google geocode team asking for a clarifying update to the documentation to address the use of browser API keys for geocode lookups.
The documentation for the Geocoding Web Service states:
Standard API users: If you're using the API under the standard plan, you must use a server key (a type of API key) set up in a project of your choice.
The error message indicates you are using a browser key.