Google Maps kml file limit - google-maps

Is there a limit to the number of kml files which can be rendered? I know there is a file size limit but I seem to be hitting another limit.
The error thrown is
GET https://mts1.googleapis.com/mapslt?hl=en-US&lyrs=kml%3AcXOw0bjKUSgN5kcEMpDT…7Capi%3A3%7Cclient%3A2&x=67&y=98&z=8&w=256&h=256&source=apiv3&token=127990 414 (Request-URI Too Large)
Below is an example of what I am attempting to accomplish.
http://tinyurl.com/qg5enx8

From the documentation:
There is a limit on the number of KML Layers that can be displayed on a single Google Map.
If you exceed this limit, none of your layers will display. The limit is based on the total
length of all URLs passed to the KMLLayer class, and consequently will vary by application;
on average, you should be able to load between 10 and 20 layers without hitting the limit.

Try using Network links: https://developers.google.com/kml/documentation/kml_tut?csw=1#network_links
I know it sounds a bit more cumbersome, but it helps when loading lots of KML data at once.
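For example, rather than creating one KmlLayer per file (and eating into the combined-URL budget), you can point a single layer at one master KML that pulls the others in via NetworkLinks. A minimal sketch; the master-file URL is a placeholder for a file you host yourself:
// One KmlLayer instead of many: the master file you host contains a
// <NetworkLink><Link><href>...</href></Link></NetworkLink> entry per individual
// KML file, so only this single URL counts toward the combined-URL length limit.
// 'https://example.com/master.kml' is a placeholder.
var masterLayer = new google.maps.KmlLayer({
  url: 'https://example.com/master.kml',
  map: map  // assumes `map` is an existing google.maps.Map instance
});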
Another option is to change your approach and use Google Fusion Tables through the JavaScript API. Basically, to start down this route you'll need to load the script https://apis.google.com/js/client.js?onload=init, where onload refers to YOUR JavaScript function to run once the script is loaded. Mine looks something like this:
function init() {
  gapi.client.load('fusiontables', 'v1', function() {
    gapi.client.setApiKey( YOUR_API_KEY_AS_STRING );
    gapi.client.fusiontables.query.sql({sql: ["SELECT * FROM", TABLE_NAME].join(' '), fields: 'rows, columns'}).execute( function(json) {
      // Do what you need to parse the json response
      // Set up KML Layers
      // etc... make sure to check if maps is loaded too
      json.rows.forEach( function(t) {
        console.log( t );
      });
    });
  });
}

Related

Obtain list of My Places from Google Maps

I am trying to obtain the list of places the user has saved on Google Maps. Now I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
that apparently there used to be a way to obtain the URL, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there. I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, then processing the HTML and storing the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the array/map structure, I managed to get the Google name and Google place ID out of that array. That seems to work a bit, but when trying with lists over 20 items long, Google only returns the first 20 and waits for the user to 'scroll down' before fetching the next 20. That scroll seems to trigger another call to get the next 20 results, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID being included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items. Any ideas?
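For reference, here is a rough sketch of that first step (fetching the shared link and cutting out the window.APP_INITIALIZATION_STATE blob); the URL is a placeholder for your own shared link, and how you turn the blob into an array is left open:
// Node.js 18+ (global fetch): download the shared-list page and extract the
// inline window.APP_INITIALIZATION_STATE assignment as raw text.
const sharedUrl = 'https://www.google.com/maps/...'; // placeholder: your shared-list link

async function getInitState() {
  const page = await (await fetch(sharedUrl)).text();
  const marker = 'window.APP_INITIALIZATION_STATE=';
  const start = page.indexOf(marker);
  if (start === -1) throw new Error('APP_INITIALIZATION_STATE not found');
  // Take everything up to the end of the inline <script>; turning this into an
  // actual array (JSON.parse, a sandboxed eval, ...) depends on how strictly
  // JSON-like the blob is for your particular list.
  return page.slice(start + marker.length, page.indexOf('</script>', start));
}

getInitState().then(raw => console.log(raw.length));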
Your saved places list actually has what you call a feature ID attribute. This isn't a common practice and Google frowns upon this technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
Highlighted is the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're actually manually triggering the same callback that Google's own Maps scripts use to parse the data fed back to the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list. Depending on the programming language you're using, all it takes is a little tweaking of this array (find/replace, looping through, linting and trimming, etc.) and you can pull out your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint it's updated in real time.
Some people may call it a "hack", but it gets the job done. :)
Hope this points you in a direction in case you haven't found a solution; give it a shot.
Note that the URL has to be pasted in its entirety (SO truncated the hyperlink); copy and paste the whole thing in one shot and Google will return a text file with the arrays. In my case I curl the URLs I need and parse the returned strings as needed to pull data from Google where their API has limitations. Just a tip. :)
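If you'd rather script that than paste the URL into a browser, here is a minimal sketch of the fetch-and-parse step; the handling of the junk prefix before the JSON is an assumption based on how these endpoints usually respond, so check the raw output first:
// Node.js 18+ (global fetch): pull the preview/entity endpoint and strip the
// prefix Google puts in front of the JSON payload before parsing it.
const entityUrl = 'https://www.google.com/maps/preview/entity?...'; // the full URL from above

async function fetchEntity() {
  const raw = await (await fetch(entityUrl)).text();
  // Assumption: everything before the first '[' is an anti-JSON prefix.
  return JSON.parse(raw.slice(raw.indexOf('[')));
}

fetchEntity().then(data => console.log(data[2])); // item 2 / {c:..} holds the stringified list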
Also check Joel's answer; he did some research and refined some of the following information.
Pagination
You can use this tool to decode the pb parameter. PB stands for protocol buffer (protobuf), and Google uses its own flavour of it for Maps. You can find different decoders for this by googling.
In my case, the pagination was done via one parameter (8iX0). It seems that it always comes with another similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information, I tried 7i20 8i00 for page 1, and that seemed to work. For lists with >100 items, it just continues like that (8i120, 8i140, etc.).
Here's a code snippet in Python (quick & dirty). Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Notice the 8i%s0 in the URL; make sure to put the %s back when you paste in your pb-block.
url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}
def fetch_stops_from_maps():
new_results = -1
page = 0
results = []
while new_results != 0:
new_results = 0
x = requests.get(url % page, headers=headers)
txt = html.unescape(x.text)
txt = txt.split("\n")[1]
results = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
print(len(results))
for cord in results:
# curr = the description you can manually type in when saving
curr = txt.split(cord)[1].split("\"]]")[0]
curr = curr[curr.rindex(",\"") + 2:]
cords = str(cord).split(",")
lat = cords[2]
lon = cords[3][:-1]
results.append(s)
new_results += 1
page += 2
Actually getting the correct url
Getting the correct URL currently seems to be the hardest part of doing this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb-block once and called it a day.
As explained in the other answers, the ID of the list is visible in the basic URL (here, the 2sXX...) when you navigate to the list in your browser. It usually seems to be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this id, you can put it into an existing protobuf-block and it may work (I only tested this with 3 different lists, which were all created by the same account, so this theory is far from proven).
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process of getting it. For this I use Burp Suite, a program mainly used for web-security testing that has a free community edition. For our use case it is the perfect tool, because it lets you easily tinker with a request, change small parts of it, send it again and immediately see whether your changes changed the response. For extracting the pb-block, though, any program that can intercept browser traffic should work.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until Maps has loaded completely
In Burp, turn "Intercept" on, then in google maps, scroll down in the list, until it starts loading new results (always blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and step through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click the request text and choose "Send to Repeater", then switch to the Repeater tab. Here you can edit the request and send it again, seeing the response immediately. For example, after removing the authuser, hl, gl, q, ech and psi URL parameters, the request still works flawlessly. If you remove the tch=1 parameter, the response comes back in a more human-readable format.
In the request text you should now be able to search for the list ID you got from the link earlier and replace it with the ID of another list (the search bar is at the bottom in Burp). As I said, this worked for me, but it is possible that the pb-block contains additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with specific pb-blocks. Just a theory though. Let me know how it goes!
Further automating
I have theorised that one could automate getting the pb-block using requests-html, because it can fully render HTML pages, but that library isn't updated anymore. Another option (probably the better one) is Selenium Wire, as you should be able to load the page and intercept the requests, like we did in Burp. Seems like a whole lot of work tho :D
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would first have to log in through Google's OAuth. It then returns your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
I was able to extract the data I needed from my google maps list. Below are some comments that expand on some of the other comments here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the size of the page, and the 8iXX parameter is the starting point. I haven't tested how large you can make the page limit, but I tested 100 and it seemed to work fine. This should make dealing with larger lists much easier.
Parsing out the data
Instead of using regex to parse out the relevant data from the response, I found that the response is basically just a massive JSON object and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. These may of course change over time if the network response changes format, so use them as a starting point in case the specific values aren't at those indexes anymore; hopefully they would still be close to those locations.
Script to parse the data (javascript / node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null

function prepare(input) {
  // There are 5 random characters before the JSON object we need to remove
  // Also I found that the newlines were messing up the JSON parsing,
  // so I removed those and it worked.
  const preparedForParsing = input.substring(5).replace(/\n/g, '')
  const json = JSON.parse(preparedForParsing)
  const results = json[0][1].map(array => array[14])
  return results
}

function prepareLookup(data) {
  // this function takes a list of indexes as arguments
  // constructs them into a line of code and then
  // execs the retrieval in a try/catch to handle data not being present
  return function lookup(...indexes) {
    const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
    const cmd = `data${indexesWithBrackets}`
    try {
      const result = eval(cmd)
      return result
    } catch (e) {
      return null
    }
  }
}

function buildResults(preparedData) {
  const results = []
  for (const place of preparedData) {
    const lookup = prepareLookup(place)
    // Use the indexes below to extract certain pieces of data
    // or as a starting point of exploring the data response.
    const result = {
      address: {
        street_address: lookup(183, 1, 2),
        city: lookup(183, 1, 3),
        zip: lookup(183, 1, 4),
        state: lookup(183, 1, 5),
        country_code: lookup(183, 1, 6),
      },
      name: lookup(11),
      tags: lookup(13),
      notes: lookup(25, 15, 0, 2),
      placeId: lookup(78),
      phone: lookup(178, 0, 0),
      coordinates: {
        long: lookup(208, 0, 2),
        lat: lookup(208, 0, 3)
      }
    }
    results.push(result)
  }
  return results
}

const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)

Continuously emulate GPS Locations on Chrome

For a mobile web application I would like to emulate location movements of the device. While it is possible to override a single location using the Sensor Tab in Chrome's Developer Console (See: https://developers.google.com/web/tools/chrome-devtools/device-mode/device-input-and-sensors) I would like to override the location continuously, say for instance update the device's location every second.
Is there a possibility to achieve this in Chrome (or any other Desktop Browser)?
I am looking for a solution similar to the Android Emulator, which allows you to replay recorded GPS tracks (from GPX or KML files):
(See: https://developer.android.com/guide/topics/location/strategies.html#MockData)
DevTools has no feature for this, but if you happen to be using getCurrentPosition() you can pretty much recreate this by overriding the function in a snippet.
I suppose this workflow won't work if you're using watchPosition() (which you probably are), because I believe that's basically a listener that fires when the browser updates the coordinates, and there's no way to update the browser's coordinates.
However, I'll record the workflow below because it may be useful to somebody else.
Store your script in a snippet.
Override navigator.geolocation.getCurrentPosition() to the target coordinates.
So, you could store the coordinates and timestamps in JSON (either within the snippet, or just fetch the JSON from the snippet using XHR / Fetch), and then use setTimeout() to update the coordinates at the specified times.
// Each entry: at `time` ms after the snippet runs, report these coords.
var history = [
  {
    time: 1000,
    coords: { latitude: 0, longitude: 0, accuracy: 1 } // placeholder: your recorded position
  },
  {
    time: 3000,
    coords: { latitude: 0, longitude: 0, accuracy: 1 } // placeholder: your recorded position
  }
];

// Re-point getCurrentPosition at each recorded position on schedule.
// (forEach avoids the classic var-in-a-loop closure bug with setTimeout.)
history.forEach(function (entry) {
  setTimeout(function () {
    navigator.geolocation.getCurrentPosition = function (success, failure) {
      success({
        coords: entry.coords,
        timestamp: Date.now()
      });
    };
  }, entry.time);
});

Program-generated KML file validates, but doesn't work

I had a co-worker that normally worked with Google Maps and now I am creating my first map. I am using what they developed in the past and making the changes for what I need. They created a script that sets some of the map defaults, so that is why things might look slightly different.
var map = new Map();
map.loadMap();
var kml = new google.maps.KmlLayer({ url: 'http://api.mankatomn.gov/api/engineeringprojectskml', suppressInfoWindows: true });
kml.setMap(map.map);
The map loads, but my KML file doesn't. I don't get any errors in the console. When I replace the URL with a different one, http://www.mankato-mn.gov/Maps/StreetConstruction/streetconstruction.ashx?id=122, it works just fine. My new feed does validate. Is there an issue with my web service?
Update: After a few days, I am still having the issue. So I am pretty sure this isn't a DNS issue anymore. I created a jsFiddle to see if it is my code or something else. I started with Google's sample code and changed the URL of the KML file to both my web service and to a static version of the generated file. Both are valid KML files. Neither work. If there was a syntax error, wouldn't the API report that?
You can get the status of a KML layer with
kml.getStatus();
which in this case returns:
"INVALID_DOCUMENT"
Now, if I request your URL from the browser, I get
<Error>
<Message>An error has occurred.</Message>
</Error>
So it seems that if there ever was a valid KML file there, it isn't there anymore. Going by your question, I can only guess it was over the size limit, or you weren't associating it with a valid map instance.
For getStatus to return something useful, you must wait for the Google Maps API to try to load the KML layer you declared. For example, you can add a listener for the status_changed event.
var kmloptions = {
  url: 'https://dl.dropboxusercontent.com/u/2732434/engineeringprojectskml.kml',
  suppressInfoWindows: true
};
var newKml = new google.maps.KmlLayer(kmloptions);
newKml.setMap(map);
google.maps.event.addListenerOnce(newKml, 'status_changed', function () {
  console.log('KML status is', newKml.getStatus());
});
In this case (note that I'm using the alternative URL you used in the jsFiddle), I still get INVALID_DOCUMENT.
Update: it seems the problem was the encoding of the file (it was UTF-16 BE, which ends up being treated as binary). I converted it to UTF-8 and re-indented it (it's in my public Dropbox now).
You can check whether DNS is set up by:
Going to the URL in your browser. Do this with the cache emptied and history erased (private mode is best). If it ends up at your server and the right file, it is not a DNS problem.
Moving the file to a location you're sure is reachable without any DNS issues, e.g. http://www.mankato-mn.gov/Maps/StreetConstruction/engineeringprojectskml
If the problem persists, make sure that your KML syntax and JavaScript are 100% correct. Also check out https://developers.google.com/maps/documentation/javascript/examples/layer-kml if you're still having issues.

Get exact geo coordinates along an entire route, Google Maps or OpenStreetMap

Suppose I have a route defined from one town to another. From the Google Maps API I can recover a route between the two. However, the route returned from Google is a driving route that includes geo-coordinates only at places where there is another step in a leg (for example, where I have to turn from one highway to another).
What I need is geo-locations (lat/long) along the entire route, at specific intervals (for example, every 1/4 mile or 100m).
Is there a way to accomplish this via the Google Maps API / web services?
Or would the OpenStreetMap database be the way to do it?
Kind regards,
Madeleine.
OSRM gives you routes with road geometries as they are in the OpenStreetMap database. For example, you can get the route as GPX (and post-process this file if you want). This would look like the following:
GET http://router.project-osrm.org/viaroute?hl=en&loc=47.064970,15.458470&loc=47.071100,15.476760&output=gpx
Read more: OSRM API docs.
Since the accepted answer is outdated and does not work anymore, here is how all nodes along a road can be queried using the route service from Project OSRM.
Given an arbitrary number of lon,lat pairs, for instance the following three (in Berlin):
13.388860,52.517037
13.397634,52.529407
13.428555,52.523219
The route service calculates the fastest route between these points, and it is possible to return all nodes along the road using the following query:
http://router.project-osrm.org/route/v1/driving/13.388860,52.517037;13.397634,52.529407;13.428555,52.523219?alternatives=false&annotations=nodes
This returns a JSON response containing the node IDs of all the nodes along the route. The result should look something like this:
{
  "routes": [
    {
      ...
      "legs": [
        {
          "annotation": {
            "nodes": [
              2264199819,
              2045820592,
              21487242,
              ...
            ]
          }
To get the lat,lon coordinates of the nodes, the Overpass API can be used.
[out:json];
(
  node(2264199819);
  node(...);
  node(...);
  ...
);
(._;>;);
out;
Here is a sample request using overpass-turbo: http://overpass-turbo.eu/s/toe
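If you'd rather do both steps from code, here is a minimal JavaScript sketch (Node.js 18+ with global fetch); the field paths into the OSRM and Overpass responses follow the examples above, but treat them as assumptions and verify against a live response:
// Step 1: ask OSRM for the route and collect the node IDs from the annotation.
// Step 2: resolve those node IDs to lat/lon via the public Overpass API.
const osrmUrl = 'http://router.project-osrm.org/route/v1/driving/' +
  '13.388860,52.517037;13.397634,52.529407;13.428555,52.523219' +
  '?alternatives=false&annotations=nodes';

async function routeNodeCoordinates() {
  const route = await (await fetch(osrmUrl)).json();
  const nodeIds = route.routes[0].legs.flatMap(leg => leg.annotation.nodes);

  const query = '[out:json];(' + nodeIds.map(id => `node(${id});`).join('') + ');out;';
  const overpass = await fetch('https://overpass-api.de/api/interpreter', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'data=' + encodeURIComponent(query)
  });
  const elements = (await overpass.json()).elements;
  // Each element is { type: "node", id, lat, lon }.
  return elements.map(n => [n.lat, n.lon]);
}

routeNodeCoordinates().then(coords => console.log(coords.length, 'points'));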
It's simply the google.maps.DirectionsService().route() method. You pass it the service request and a callback which executes upon completion of the request.
https://developers.google.com/maps/documentation/javascript/directions
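Since the question asks for points at fixed intervals, here is a rough sketch of how you might resample the returned path every N metres with the geometry library (assumes the Maps JS API is loaded with &libraries=geometry; overview_path is a simplified polyline, so walk each step's path instead if you need full fidelity):
// Request a route, then walk its overview_path and emit a point every stepMeters.
function samplePointsAlongRoute(origin, destination, stepMeters, callback) {
  new google.maps.DirectionsService().route(
    { origin: origin, destination: destination, travelMode: 'DRIVING' },
    function (result, status) {
      if (status !== 'OK') return callback(status, null);
      var path = result.routes[0].overview_path;
      var points = [path[0]];
      var carry = 0; // distance covered since the last emitted point
      for (var i = 1; i < path.length; i++) {
        var segment = google.maps.geometry.spherical
          .computeDistanceBetween(path[i - 1], path[i]);
        var d = stepMeters - carry; // distance into this segment of the next point
        while (d <= segment) {
          points.push(google.maps.geometry.spherical
            .interpolate(path[i - 1], path[i], d / segment));
          d += stepMeters;
        }
        carry = (carry + segment) % stepMeters;
      }
      callback(null, points);
    }
  );
}

// Example usage: a point every 100 m between two towns.
samplePointsAlongRoute('Graz', 'Vienna', 100, function (err, points) {
  if (!err) points.forEach(function (p) { console.log(p.lat(), p.lng()); });
});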
While not an API, at https://www.nmeagen.org/ one can create a "Multi-point line", set the distance between points and download the route (coordinates) as CSV.
Adding to Marlio's answer:
You can use Google Maps Directions API itself.
For a given origin and destination, look for the following in the JSON output:
"polyline" : {
"points" : ""
}
You can use a decoder to get the coordinates from the polyline:
https://github.com/emcconville/google-map-polyline-encoding-tool
Or you can use the googleway package in R to decode it:
https://cran.r-project.org/web/packages/googleway/googleway.pdf
I am not sure how to set the resolution to your desired level, but the resolution in the API output is really good.
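If you're already in the browser with the Maps JavaScript API loaded, the geometry library can also decode the encoded string directly (a small sketch; routeJson stands for the parsed Directions API response, and the API must be loaded with &libraries=geometry):
// Decode the "points" string from the Directions JSON into LatLng objects.
var encoded = routeJson.routes[0].overview_polyline.points; // from the API response
var path = google.maps.geometry.encoding.decodePath(encoded);
path.forEach(function (latLng) {
  console.log(latLng.lat(), latLng.lng());
});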

Google Maps API Javascript pull data into variable from MySQL table

I'm working on a script that will pull a variable from my MySQL db table; here's the example code Google gives me:
var beaches = [
['Bondi Beach', -33.890542, 151.274856, 4],
];
And now here is what I'm trying to achieve:
I have a table with the fields ID, Beach, Long, Lat.
How would I replace the beaches variable to pull from my MySQL table instead of having to go in and manually add each beach to the variable? That way, when users add a beach name with longitude and latitude to the DB through my form, it automatically adds a marker on my Google map.
I am using the complex icon overlays from Google Maps API v3:
https://developers.google.com/maps/documentation/javascript/overlays#ComplexIcons
I'm guessing I'm going to be using some AJAX? I have never used AJAX before, so if this is the case I guess I'd better pull up my AJAX tuts :)
JavaScript in a user's browser can't get at your database directly (which is good!), but it can ask your server for data, and AJAX may well be the way to go if you want dynamic updates. Google has an article: https://developers.google.com/maps/articles/phpsqlajax_v3
However it seems more likely that you will simply need to get the marker data when you send the page to the browser, and you can do that by constructing the page in PHP or another server-side language, using similar techniques to get data out of the database and use the values directly in the page code.
You may need to do both, to create the initial page the user gets and update it via AJAX so just the data changes and you don't have to refresh the whole page.
[Note: you don't have to use XML to transfer data asynchronously, you could use JSON to format it or any other format you can code for. But XML is easy and there are plenty of examples.]
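For example, here is a minimal client-side sketch of that approach, assuming a hypothetical getbeaches.php endpoint that runs the query and returns the rows as JSON:
// getbeaches.php is a placeholder endpoint you would write server-side,
// e.g. SELECT ID, Beach, Long, Lat FROM beaches and echo json_encode($rows).
$.getJSON('getbeaches.php', function (beaches) {
  beaches.forEach(function (b) {
    new google.maps.Marker({
      position: new google.maps.LatLng(parseFloat(b.Lat), parseFloat(b.Long)),
      map: map,       // assumes `map` is your google.maps.Map instance
      title: b.Beach
    });
  });
});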
Yes, you have to do it with AJAX.
For example, using jQuery:
$.ajax({
  url: formSubmitUrl,
  data: 'id=1&lat=23.444...',
  type: 'GET',
  success: function (data)
  {
    // data = response from the script
    // if everything is ok, I return the string 'OK'
    if (data == 'OK')
    {
      var pMarker = new google.maps.Marker({
        position: new google.maps.LatLng(lat, lng),
        map: map,
        icon: yourMarkerimage
      });
    }
  }
});
And that's all from the client side.
Submit the form via AJAX and then add the marker to the map