Traffic info distanceMatrix API java client - google-maps

I'm trying to use the Java client to get the duration between two cities, but I also need the duration with traffic information. I'm using Java client 0.1.11 and I only get the duration without traffic information; there is no method for a traffic model, so how can I do this?
EDIT: I want to use this part of the documentation:
traffic_model (defaults to best_guess), which is an optional parameter.

According to the documentation provided by Google, you can get "duration_in_traffic" but not the traffic details. Moreover, this facility is only available to Google Premium Plan users.
Read this https://developers.google.com/maps/documentation/javascript/distancematrix#distance_matrix_results
The "traffic_model" option can take the following values
trafficModel (optional) specifies the assumptions to use when calculating time in traffic. This setting affects the value returned in the duration_in_traffic field in the response, which contains the predicted time in traffic based on historical averages. Defaults to best_guess.
The following values are permitted:
google.maps.TrafficModel.BEST_GUESS (default) or string value best_guess indicates that the returned duration_in_traffic should be the best estimate of travel time given what is known about both historical traffic conditions and live traffic. Live traffic becomes more important the closer the departureTime is to now.
google.maps.TrafficModel.PESSIMISTIC or string value pessimistic indicates that the returned duration_in_traffic should be longer than the actual travel time on most days, though occasional days with particularly bad traffic conditions may exceed this value.
google.maps.TrafficModel.OPTIMISTIC or string value optimistic indicates that the returned duration_in_traffic should be shorter than the actual travel time on most days, though occasional days with particularly good traffic conditions may be faster than this value.
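For what it's worth, here is a minimal sketch of such a request in Java. It assumes a release of the google-maps-services-java client newer than 0.1.11, since newer releases expose departureTime() and trafficModel() on the Distance Matrix request (which would explain why you see no such method in 0.1.11), and duration_in_traffic is only populated when a departure time is set:

import com.google.maps.DistanceMatrixApi;
import com.google.maps.GeoApiContext;
import com.google.maps.model.DistanceMatrix;
import com.google.maps.model.TrafficModel;
import com.google.maps.model.TravelMode;
import java.time.Instant;

public class TrafficDurationSketch {
    public static void main(String[] args) throws Exception {
        // Builder-style context is the newer client API; 0.1.x used new GeoApiContext().setApiKey(...)
        GeoApiContext context = new GeoApiContext.Builder().apiKey("YOUR_API_KEY").build();
        DistanceMatrix matrix = DistanceMatrixApi.newRequest(context)
                .origins("Paris, France")
                .destinations("Lyon, France")
                .mode(TravelMode.DRIVING)
                .departureTime(Instant.now())          // required for duration_in_traffic
                .trafficModel(TrafficModel.BEST_GUESS) // or PESSIMISTIC / OPTIMISTIC
                .await();
        // At the time of this question, durationInTraffic also required a premium (Maps for Work) key.
        System.out.println(matrix.rows[0].elements[0].durationInTraffic);
    }
}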

The Google Maps Distance Matrix API is designed for calculating distances and durations between multiple origins and multiple destinations.
This API does not return traffic information. Please read the official documentation (before asking).
https://developers.google.com/maps/documentation/distance-matrix/intro

Related

How to get public transport time schedule from The Google Maps Directions API?

I am trying to make a public transport time schedule app using the Google Maps Directions API.
What's the best way to get all of the possible departure_times for a specific route from one place to another, starting from a specific time?
The problem is, the server always responds with only one route for one specific time. How can I get all of the following departure_times?
The worst way to do this is asking the server every minute whether there is some new travel link. But hey, it's going to take a lot of time!
So I thought Google might provide some kind of transport schedules, but I can't find any info on the Google Developers sites. I only saw a way to give Google schedule information with the help of the General Transit Feed Specification (GTFS) here or here.
But I can't find the way to get it from them.
I don't believe the Google Maps Directions API will return the information you are looking for as a collection.
The problem with transit data is that calculating a future schedule can require a lot of processing (especially if there are multiple routes involved in the rider reaching their destination) because, basically, the system needs to do a trip plan for each scheduled trip at the starting point over the time range.
Google hints at this in their API documentation regarding the alternatives parameter:
alternatives — If set to true, specifies that the Directions service may provide more than one route alternative in the response. Note that providing route alternatives may increase the response time from the server.
Also, the different future departure times may actually be different routes or combinations of routes (e.g. where multiple routes come together on the same street for a while - for instance, near a college campus or other transit hub).
In order to get the underlying route data that would have the actual stop times you are looking for, you would need to download the transit agencies' GTFS data directly and process it yourself (check out GTFS Data Exchange). This is what your competitors are already doing (e.g. Transit App, Moovit, etc.). There are packages that will do some of this processing for you (e.g. One Bus Away). However, even with the use of existing libraries, there is some heavy lifting involved here (from a development point of view).
As a final note, if you want to pursue using the Google Maps Directions API, you wouldn't need to query it for each minute within a time range in order to get a series of departure times. You should be able to make a series of calls with the departure time set just past the departure time you got back in the previous call. For example, if the first trip time was 1:00pm, set departure_time to 1:05pm and request again; then if the second trip time was 1:20pm, set the next departure_time to 1:25pm and request again, and so on to build your list of future trips.
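A sketch of that loop with the Java client (google-maps-services-java; method and field names are taken from recent versions of the client and may differ in yours, and error handling for empty routes is omitted):

import com.google.maps.DirectionsApi;
import com.google.maps.GeoApiContext;
import com.google.maps.model.DirectionsResult;
import com.google.maps.model.TravelMode;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;

public class TransitScheduleSketch {
    public static void main(String[] args) throws Exception {
        GeoApiContext context = new GeoApiContext.Builder().apiKey("YOUR_API_KEY").build();
        List<Instant> departures = new ArrayList<>();
        Instant probe = Instant.now();
        for (int i = 0; i < 10; i++) {  // collect the next 10 departures
            DirectionsResult result = DirectionsApi.newRequest(context)
                    .origin("some origin address")
                    .destination("some destination address")
                    .mode(TravelMode.TRANSIT)
                    .departureTime(probe)
                    .await();
            Instant departure = result.routes[0].legs[0].departureTime.toInstant();
            departures.add(departure);
            probe = departure.plus(5, ChronoUnit.MINUTES);  // ask again just past this trip
        }
        departures.forEach(System.out::println);
    }
}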
Okay. Firstly, your question is not quite in the spirit that Stack Overflow demands. Check Google's developer console, API section, and see whether they offer any API that gives you all transport schedules for 24 hours. If there is such an API, good, you can hit that; if not, I'm afraid you won't be able to get it unless you hit the API at intervals.
Another suggestion: you can try Yahoo or Bing Maps and check whether they have any such API for your query.

Mismatch in Latitude-longitude obtained from Google API

I was trying to obtain "latitude-longitude" information for multiple addresses using the Google API. However, I observed that for a few addresses I was getting different "latitude-longitude" values over a period of time. Can "latitude-longitude" change over a period of time? If yes, why? Or is it just a bug (or maybe due to an update) in the Google API?
A latitude-longitude pair defines a geo-location on Earth, and that point is generally not moving. If you get different results for the same address information, then it is the service that, for some reason, gives different values. So do check what is actually happening: either you are giving a different address, or the service is interpreting it differently. The best way to determine what has happened is to see what values were actually used in the query and to check all the data given in the reply (many APIs also return the address details for the location given as the result value).

Is it worth excluding null fields from a JSON server response in a web application to reduce traffic?

Let's say that the API is well documented and every possible response field is described.
Should web application's server API exclude null fields in a JSON response to lower the amount of traffic? Is this a good idea at all?
I was trying to calculate the amount of traffic this would save for a large app like Twitter, and the numbers are actually quite convincing.
For example: if you exclude a single response field, "someGenericProperty":null, which is 26 bytes, from every single API response, while Twitter reportedly serves 13 billion API requests per day, the traffic reduction will be >300 GB.
More than 300 GB less traffic every day is quite a money saver, isn't it? That's probably the most naive and simplistic calculation ever, but still.
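For what it's worth, the arithmetic holds up; a trivial check using only the numbers from the post:

public class SavingsEstimate {
    public static void main(String[] args) {
        long bytesPerField = 26L;               // "someGenericProperty":null, plus separator
        long requestsPerDay = 13_000_000_000L;  // reported Twitter API volume
        long savedBytes = bytesPerField * requestsPerDay;
        System.out.println(savedBytes / 1_000_000_000L + " GB per day"); // ~338 GB per day
    }
}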
In general, no. The more public the API and the more potential consumers it has, the more invariant the API should be.
Developers getting started with the API are confused when a field shows up sometimes, but not other times. This leads to frustration and ultimately wastes the API owner's time in the form of support requests.
There is no way to know exactly how downstream consumers are using an API. Often, they are not using it just as the API developer imagines. Elements that appear or disappear based on the context can break applications that consume the API. The API developer usually has no way to know when a downstream application has been broken, short of complaints from downstream developers.
When data elements appear or disappear, uncertainty is introduced. Was the data element not sent because the API considered it to be irrelevant? Or has the API itself changed? Or is some bug in the consumer's code not parsing the response correctly? If the consumer expects a field and it isn't there, how does that get debugged?
On the server side, extra code is needed to strip out those fields from the response. What if the logic that strips out the data is wrong? It's a chance to inject defects, and it means there is more code that must be maintained.
In many applications, network latency is the dominating factor, not bandwidth. For performance reasons, many API developers will favor a few large request/responses over many small ones. At my last company, the sales and billing systems would routinely exchange messages of 100 KB, 200 KB or more. Sometimes only a few KB of the data was needed, but overall system performance was still better than fetching some data, discovering more was needed, and then sending an additional request for that data.
For most applications some inconsistency is more dangerous than superfluous data is wasteful.
As always, there are a million exceptions. I once interviewed for a job at a torpedo maintenance facility. They had underwater sensors on their firing range to track torpedoes. All sensor data were relayed via acoustic modems to a central underwater data collector. Acoustic underwater modems? Yes. At 300 baud, every byte counts.
There are battery-powered embedded applications where every byte counts, as well as low-frequency RF communication systems.
Another exception is sparse data. For example, imagine a matrix with 4,000,000 rows and 10,000 columns where 99.99% of the values of the matrix are zero. The matrix should be represented with a sparse data structure that does not include the zeros.
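As an illustration of that last case, a sparse structure keeps only the non-zero cells and treats every missing key as zero (a minimal sketch, not tuned for production):

import java.util.HashMap;
import java.util.Map;

public class SparseMatrix {
    // key = row * numColumns + column; only non-zero cells are stored
    private final Map<Long, Double> cells = new HashMap<>();
    private final long numColumns;

    public SparseMatrix(long numColumns) {
        this.numColumns = numColumns;
    }

    public void set(long row, long col, double value) {
        if (value == 0.0) {
            cells.remove(row * numColumns + col);  // zeros stay implicit
        } else {
            cells.put(row * numColumns + col, value);
        }
    }

    public double get(long row, long col) {
        return cells.getOrDefault(row * numColumns + col, 0.0);
    }
}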
It definitely depends on the service and the amount of data it provides; you should evaluate the ratio of null to non-null data and set a threshold above which it is worth excluding those elements.
Thanks for sharing, it's an interesting point, at least to me.
The question comes at this from the wrong side: JSON is not the best format for compressing or reducing traffic, but something like Google Protocol Buffers or BSON is.
I am carefully re-evaluating nullables in the API schema right now. We use Swagger (OpenAPI), and JSON Schema does not really have something like a nullable type; I think there is a good reason for this.
If you have a JSON response that maps a DB integer field which is suddenly NULL (or can be, according to the DB schema), well, that is indeed OK for a relational DB but not at all healthy for your API.
I suggest adopting and following a much more elegant approach: make better use of "required", also for the response.
If a field is optional in the response schema and has a null value in the DB, do not return that field.
We have enabled strict schema checks for API responses as well, and this gives us much better control of our data and forces us not to rely on states in the API.
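Note that omitting nulls on the server does not have to mean hand-written stripping code. If your stack serializes with Jackson, for example, a single annotation does it (the class and field names here are made up for illustration):

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

@JsonInclude(JsonInclude.Include.NON_NULL)  // null properties are not serialized at all
public class UserResponse {
    public String name;
    public String someGenericProperty;  // optional in the schema; may be null

    public static void main(String[] args) throws Exception {
        UserResponse r = new UserResponse();
        r.name = "alice";
        System.out.println(new ObjectMapper().writeValueAsString(r)); // {"name":"alice"}
    }
}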
For the API client that of course means doing checks like:
if ("key" in response) {
console.log("Optional key value:" + response[key]);
} else {
console.log("Optional key not found");
}

Google Elevation API - limits

I'm developing a map application that uses the Google Elevation API. Today I spotted that I'm getting an OVER_QUERY_LIMIT response. It is clear that I have reached my quota. Of course, I have read the documentation: http://code.google.com/apis/maps/documentation/elevation/#Limits.
There is one thing I cannot understand though, so I have a question for you.
I pass only two points as a path, but I want it to be divided into 250 steps. Does the following query get info about 250 locations, or only two?
http://maps.googleapis.com/maps/api/elevation/json?path=90.828934,-33.938923|92.983400,-2.552155&mapclient=flashapi&sensor=false&samples=250&key=KEY
I don't think it was possible for me to check 25,000 locations in one day, but if the above query counts as 250 locations instead of two, then I have a problem :)
Thanks
In my experience, and according to the documentation, your request does count as 250 locations. Maybe you should use a lower number of steps and interpolate.
Keep in mind that even if it didn't, you are also subject to a 2,500 requests per day limit.
A bit late, but someone might find it useful...
From the API documentation:
"Use of the Google Elevation API is subject to a limit of 2,500 requests per day... In each given request you may query the elevation of up to 512 locations"
I read that as saying a batch request counts as a single request, so that shouldn't be the problem.
However, the Google Elevation API (like their other map APIs) also returns OVER_QUERY_LIMIT if you access it too often in a short period of time.
"Additionally, we enforce a request rate limit to prevent abuse of the service."
To deal with this, I build a wait parameter into my functions. It progressively increases the length of time between calls until either a response which isn't OVER_QUERY_LIMIT is received, or the wait exceeds 500 ms (or some other duration, depending on the application). If it's still returning OVER_QUERY_LIMIT at that point, I return OVER_HARD_QUERY_LIMIT to show that I've reached the limit for the day.
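A sketch of that wait logic (the method names and thresholds here are illustrative, not from any particular library):

public class ElevationBackoff {
    public String queryWithBackoff() throws InterruptedException {
        long waitMs = 50;
        while (waitMs <= 500) {                      // give up once the wait exceeds 500 ms
            String status = sendElevationRequest();  // placeholder for the real HTTP call
            if (!"OVER_QUERY_LIMIT".equals(status)) {
                return status;                       // OK, or a different error to handle
            }
            Thread.sleep(waitMs);
            waitMs *= 2;                             // progressively increase the wait
        }
        return "OVER_HARD_QUERY_LIMIT";              // treat as the daily limit
    }

    private String sendElevationRequest() {
        // issue the Elevation API request here and return the "status" field of the response
        return "OK";
    }
}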

How to get the equivalent of the accuracy in Google Map Geocoder V3

I want to get geocodes from Google, and I used to do it with V2 of the API.
Google sent pretty good information in the JSON, the accuracy; reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy
In V3, Google doesn't seem to send me exactly the same information. There is the "address_components" array, which seems bigger when the accuracy is better, but not exactly.
For example, I have a request accurate to the street number, and the array is of size 8.
Another query is accurate only to the route, so less accurate, but the array is still of size 8, as there is a 'sublocality' row which does not appear in the first case.
OK, for a result Google sends a 'types' field, which holds the 'best' accuracy. These types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types
But there is no real order, and if I want a result better than postal_code, I have no clue how to do that.
So, how can I get the equivalent of the V2 accuracy, without some dumb and horrible code?
Well, there is the location_type, which is not so bad:
location_type stores additional data about the specified location. The following values are currently supported:
"ROOFTOP" indicates that the returned result is a precise geocode for which we have location information accurate down to street address precision.
"RANGE_INTERPOLATED" indicates that the returned result reflects an approximation (usually on a road) interpolated between two precise points (such as intersections). Interpolated results are generally returned when rooftop geocodes are unavailable for a street address.
"GEOMETRIC_CENTER" indicates that the returned result is the geometric center of a result such as a polyline (for example, a street) or polygon (region).
"APPROXIMATE" indicates that the returned result is approximate.
I test whether the location_type is different from APPROXIMATE, and it gives some good results.
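That check is as simple as it sounds; on the location_type string from the V3 response it is just the following (a sketch; how you reach the string depends on how you parse the JSON):

public class GeocodeFilter {
    // location_type as found in result.geometry.location_type of the V3 response
    public static boolean isPreciseEnough(String locationType) {
        return !"APPROXIMATE".equals(locationType);
    }
}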
With Google deprecating their Geocoding v2 API later this year, there's going to be a ton of people migrating their geocoding logic to v3 and this very question is going to crop up: How to map the 'location_type' string to an equivalent 'accuracy'?
Here's a decent mapping:
"ROOFTOP" -> 9
[Everything else] -> 4 to 8 (aka the text string might as well read "GARBAGE")
If something other than ROOFTOP is specified, use the area of 'northeast' and 'southwest' to decide if it is accurate enough for you.
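One way to express that mapping in code; the non-ROOFTOP numbers below are my own rough guesses within the 4-to-8 band, to be refined with the viewport size as suggested above:

public class AccuracyMapper {
    public static int toV2Accuracy(String locationType) {
        switch (locationType) {
            case "ROOFTOP":            return 9;  // street-address precision
            case "RANGE_INTERPOLATED": return 8;  // rough guess
            case "GEOMETRIC_CENTER":   return 6;  // rough guess
            case "APPROXIMATE":        return 4;  // rough guess
            default:                   return 0;  // unknown value: treat as garbage
        }
    }
}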
Now what should happen if you don't get something "accurate"? Run a Google Places text search query for the same address. Google Places does geocoding as well and, with Billing enabled, you can get 10,000 Places text search queries per day (no rate limit) and Google claims they won't charge the card (they supposedly just use it for verifying the account). With Billing, you get 100,000 queries, but Places text search queries have a "cost" of 10 times the amount of a regular Places query, hence the aforementioned 10,000 limit. Places can be finicky though and you should only consider responses with one result.
Sometimes Places queries will not return a zipcode - especially if one isn't sent. If you need the zipcode, take the lat/lng results of the Places query and feed it back into the geocoder, which will usually spit out an address with a zipcode (and very frequently a ROOFTOP match).
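A sketch of that feedback step with the Java client (GeocodingApi.reverseGeocode from google-maps-services-java; exact signatures vary by client version, and the coordinates are placeholders):

import com.google.maps.GeoApiContext;
import com.google.maps.GeocodingApi;
import com.google.maps.model.GeocodingResult;
import com.google.maps.model.LatLng;

public class ZipFromLatLng {
    public static void main(String[] args) throws Exception {
        GeoApiContext context = new GeoApiContext.Builder().apiKey("YOUR_API_KEY").build();
        LatLng fromPlaces = new LatLng(40.714224, -73.961452);  // lat/lng from the Places query
        GeocodingResult[] results = GeocodingApi.reverseGeocode(context, fromPlaces).await();
        // The first result usually carries a full address, postal_code component included
        System.out.println(results[0].formattedAddress);
    }
}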
It should be noted that the official Geocoding API courtesy limit is 2,500 requests per day with a rate limit of one per second per IP address. Therefore, following the above formula will likely decimate and may even halve the number of geocodings available to you.
If you need more than the Google Geocoding limit (who doesn't?), invent your own mini geocoding service with something like the OpenStreetMap database. Clone the parts of OpenStreetMap that you need and write your own geocoder (or use a library). Then you can geocode to your heart's content with no quantity limit or rate limit. If you still use Google Maps, you can use Google's geocoder as a fallback should the OSM geocoder not be accurate enough for all cases.
Alternatively, if you trust your users to not submit bogus data (really?) and have to use the Google geocoding service, you could also abuse a user's web browser by having the browser geocode information for you and then feed the results to your server. You might burn through the user's daily limit and you risk someone pushing bogus data, but if you are going to all that trouble, do you actually care?
At any rate, the tips above should suffice for interim usage for most users to get a working v3 API set up. Ran into this issue myself, so figured I'd share with the community a halfway decent solution. I still think v2 was the better API - integer accuracy ratings instead of ugly text strings always wins out.
Paula's answer is good but you do need to also consider John's comment that ROOFTOP can return garbage.
I use a post-geocode-query sanity checker to get rid of those cases where location_type is 'ROOFTOP' but the address has nothing to do with the address you sent to Google. This sanity checker compares the new address with the old address and considers what changed and by how much. The Google geocoder is good at fixing typos, sometimes, but it can also make some nonsensical decisions - for instance, choosing a different city, a different state, or even a different country. You need to decide whether the result is a typo fix or whether the geocode logic went astray.
So, don't just assume ROOFTOP == 9. It can also be garbage if the new address is way off from the original that you sent.
For things like apartments or buildings with multiple units, location_type = 'RANGE_INTERPOLATED' may also be accurate when the result type is 'subpremise'.
Remember, geocoding is not the same thing as address validation. They have some overlap, but Google's geocode logic tries too hard to get you an answer, even when your input is garbage.