Loading 100-200K markers on google map - google-maps

At the moment I'm using Google Maps v.3 API for drawing markers on the map.
I have around 500 markers in total.
For display purposes I use MarkerClusterer and group markers with this tool on the client side, in the browser.
However, I plan to expand the number of locations and assume it can grow to 100K or even 200K quickly.
I did some stress tests and realized that the current solution basically kills the browser at about 10-20K markers.
So my question is: what is the best approach for drawing that many markers (not necessarily with Google Maps)?
I've read posts with similar questions, e.g.:
Showing many markers in Google Maps
Best solution for too many pins on google maps
Basically, people suggest using some clusterer for display purposes, which I already do.
Or they suggest using Fusion Tables for retrieving the data, which is not an option, as the data has to stay on my server. I also assume display functionality is limited with Fusion Tables.
I'm thinking about implementing the following scenario:
on every map zoom / load, send an AJAX request with the bounds of the visible view, padded by about 30% on all sides, and retrieve only the markers that fall within this geographic area.
The 30% is added in case the user zooms out, so that I can quickly display the surrounding markers and then retrieve the rest of the broader territory in the background.
When the number of markers exceeds 50, I plan to apply clustering for display purposes. But since clustering in JavaScript is quite slow (not MarkerClusterer itself so much as Google Maps, which still has to process the locations of all the markers), I plan to do the clustering on the server side: split the bounds of the displayed map into roughly a 15×15 grid, drop markers into the cells, and send the client just the clusters with the number of markers inside each (much like a heatmap), then display those clusters as markers.
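A rough sketch of the server-side clustering I have in mind (plain Node-style JavaScript; the grid size and field names are placeholders, not a final design):

// Split the requested bounds into a 15x15 grid and return one cluster per
// non-empty cell: its centroid and the number of markers it contains.
// (Ignores the antimeridian for simplicity.)
function clusterMarkers(markers, bounds, gridSize) {
  gridSize = gridSize || 15;
  var cellW = (bounds.east - bounds.west) / gridSize;
  var cellH = (bounds.north - bounds.south) / gridSize;
  var cells = {}; // "col:row" -> { count, sumLat, sumLng }

  markers.forEach(function (m) {
    if (m.lat < bounds.south || m.lat > bounds.north ||
        m.lng < bounds.west || m.lng > bounds.east) {
      return; // outside the requested area
    }
    var col = Math.min(gridSize - 1, Math.floor((m.lng - bounds.west) / cellW));
    var row = Math.min(gridSize - 1, Math.floor((m.lat - bounds.south) / cellH));
    var key = col + ':' + row;
    var cell = cells[key] || (cells[key] = { count: 0, sumLat: 0, sumLng: 0 });
    cell.count++;
    cell.sumLat += m.lat;
    cell.sumLng += m.lng;
  });

  // One "cluster marker" per occupied cell, placed at the cell's centroid.
  return Object.keys(cells).map(function (key) {
    var c = cells[key];
    return { lat: c.sumLat / c.count, lng: c.sumLng / c.count, count: c.count };
  });
}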
Could anyone who has done something similar please give some insight? Does it make sense in general, or is it a poor approach, given that AJAX requests will be sent to the server on every map zoom and pan and could basically overload it with redundant requests?
What I want to achieve is a good user experience on big datasets of markers (loading in less than 2 seconds).

Your approach is solid. If at all possible, you'll want to precompute the clusters and cache them server-side, with their update strategy determined by how often the underlying dataset changes.
Google Maps has ~20 zoom levels, depending on where you are on the planet. Depending on how clustered your data is, if you have 200,000 markers total and are willing to show about 500 on the map at a given time, then counting all the cluster locations plus the original markers, you'll only end up storing roughly 2n = 400,000 locations server-side across all zoom levels combined: each zoom level out needs at most about half as many points as the level below it, so the total is bounded by the geometric series n + n/2 + n/4 + … ≈ 2n.
Possible cluster updating strategies:
Update on every new marker added. Possible for a read-heavy application with few writes, if you need a high degree of data timeliness.
Update on a schedule
Kick off an update if ((there are any new markers since the last clustering pass && the cache is older than X) || there are more than Y new markers since the last clustering pass)
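That last rule could look something like this (a minimal sketch; the X and Y thresholds are placeholders):

// Hybrid trigger: recluster when stale data has aged past X, or
// immediately once more than Y new markers are waiting.
function shouldRecluster(newMarkersSinceLastPass, cacheAgeMs) {
  var MAX_CACHE_AGE_MS = 60 * 60 * 1000; // "X": e.g. one hour
  var MAX_PENDING_MARKERS = 1000;        // "Y": e.g. a thousand new markers
  return (newMarkersSinceLastPass > 0 && cacheAgeMs > MAX_CACHE_AGE_MS) ||
         newMarkersSinceLastPass > MAX_PENDING_MARKERS;
}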
Storing these markers in a database that natively supports geo-data may be beneficial. This allows querying locations with SQL-like statements.
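Even without native geo types, the viewport fetch is a simple range query; a sketch in the style of the Node mysql driver (the table, columns, and connection object are assumptions):

// A composite index on (lat, lng) already helps here; a database with
// native geo support (e.g. PostGIS or MySQL spatial) can use an R-tree.
var sql =
  'SELECT id, lat, lng FROM markers ' +
  'WHERE lat BETWEEN ? AND ? AND lng BETWEEN ? AND ?';
connection.query(sql, [south, north, west, east], function (err, rows) {
  if (err) throw err;
  // hand rows to the clustering step before responding to the client
});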
Client-side, I would consider fetching a 50% margin on either side, not 30%. Google zooms in powers of 2, so zooming out one level doubles the visible span, and a 50% margin on each side is exactly enough to display one full zoom level.
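Computing the padded bounds client-side is straightforward; a sketch against the Maps API (pass 0.5 for the 50% margin):

// Pad the current viewport by a fraction of its span on each side.
// Zooming out one level doubles the visible span, so 0.5 covers it exactly.
function paddedBounds(map, fraction) {
  var b = map.getBounds();
  var ne = b.getNorthEast();
  var sw = b.getSouthWest();
  var latPad = (ne.lat() - sw.lat()) * fraction;
  var lngPad = (ne.lng() - sw.lng()) * fraction;
  return new google.maps.LatLngBounds(
      new google.maps.LatLng(sw.lat() - latPad, sw.lng() - lngPad),
      new google.maps.LatLng(ne.lat() + latPad, ne.lng() + lngPad));
}
// e.g. send paddedBounds(map, 0.5) with each AJAX request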
Next, if this application will get heavy use and optimization is worthwhile, I would log zoom actions server-side. Try to profile your usage so you can determine whether users zoom in and out often. With solid numbers (like "70% of users zoom in after retrieving initial results, and 20% zoom out"), you can decide whether it would be worthwhile to preload the next zoom level of data for your users to gain UI responsiveness.

Related

Which technology to use with Google maps

I want to develop a map application with about 500 or so polygons and some data associated with each polygon. The data for each polygon, to give an approximation, would be about 100 string fields of moderate length (say each under 50 characters). I expect this data (but not the polygons) to change arbitrarily every 10 minutes or so.
There seem to be a number of technologies for building this -- Fusion Tables, the "raw" Maps API with a SQL database to hold the data, Maps Engine, etc. I was wondering which of these options is appropriate (and why) for the amount of data and level of churn I have mentioned.
I think for your case the raw Maps API is very appropriate. 500 polygons is a low number; a map server would certainly be overkill. I would go with Fusion Tables if I had thousands of markers, but not for 500. I am currently using roughly this number of polygons with the raw API without problems.
PS: For drawing polygons with the raw API there is a trick that took me a long time to research and might come in handy for you later: Handle when drawing of polygons is complete in google maps api v3

Getting all streets visible in Google map's viewport

I'm trying to build a map with the following algorithm:
Wait for a pan or zoom to occur.
Query for all streets visible in the viewport (extent).
Color every visible street with a predefined color.
Example:
I want to show the number of businesses on each street, or the number of crimes committed on each street.
I have a DB which holds this kind of information (street name, data), but the rows don't carry location data.
Therefore, after each map zoom or pan, I cannot query it all by a geographic bounding rectangle; it would be far more efficient to use Google's own DB and query it by street names.
I know how to register to pan and zoom events.
I know how to calculate the viewport coordinates.
I know how to color a single street.
How can I get a list of all streets visible in the viewport?
Any other solutions or architectures are welcome.
The preferred solution will use neither Google's DirectionsService nor DirectionsRenderer, since they slow down the map.
My understanding is that what you are asking is not possible with Google's APIs. Reverse geocoding inside a polygon is not a service they offer. There are some posts on other sites (e.g. https://gis.stackexchange.com/questions/22816/how-to-reverse-geocode-without-google) referencing gisgraphy.com, which looks like a pretty neat reverse-geocoding tool.
This still does not address your all-streets-in-a-polygon problem, however. I think your only option would be to get your hands on the data (OpenStreetMap) and write the code yourself. Further, if you are going to do this for a large area, I would take a grid-based approach like the one I recommended here: https://stackoverflow.com/a/18420564/1803682
I would create my grid elements and, for each street, calculate all the grid cells it belongs to and store those in the database. Then, when you search a polygon, you calculate all the grid cells the polygon overlaps and can test just the subset of road data in each of those cells to determine overlap.
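A sketch of that precomputation step (the grid resolution and data shapes are assumptions; it buckets sampled points along the street, so very sparse sampling could miss a cell a long segment crosses):

var CELL_DEG = 0.01; // grid resolution in degrees, an assumption

// points: [{lat, lng}, ...] sampled along the street's geometry
function cellsForStreet(points) {
  var keys = {};
  points.forEach(function (p) {
    var col = Math.floor(p.lng / CELL_DEG);
    var row = Math.floor(p.lat / CELL_DEG);
    keys[col + ':' + row] = true;
  });
  return Object.keys(keys); // store these keys alongside the street row
}
// Query time: compute the same keys for the viewport rectangle, fetch only
// streets sharing at least one key, then run the exact overlap test on those.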
I looked into this, and abandoned a similar requirement, a few months back, and I still have a desire to implement it. Most of the point/line-in-polygon work in my application happens on data created in the application itself (i.e. not street data), and right now that is the only data I will be including. What I am trying to say is: I hope someone gives you a better answer.
Update:
For what you are asking, I still believe you will need a mix of your own database based on OpenStreetMap and some kind of grid analysis carried out in advance. If you have some time to commit to the project, this should not be too awful to process. The database will be large, and the calculations will likely require a significant amount of one-time, upfront processing. As far as highlighting routes/roads/whatever within the viewport, there are lots of ways to accomplish this with the API -- an example I found useful: polyline snap to road using google maps api v3
Also useful: http://econym.org.uk/gmap/snap.htm
Note that one-way streets may give you some grief if you use the Directions API to snap to a street; you will likely have to watch for this and correct or reverse the start/end points.
Google would recommend using its Geocoding Service to populate your database with the coordinates. You can then use the LatLngBounds class's contains() method to check whether your points lie within the viewport. The advantage of this approach is that you only need to geocode the information once and store it, versus sending geocoding requests each time the viewport changes.
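Once the coordinates are stored, the viewport check is short; a sketch (streets and highlightStreet are hypothetical):

// Re-filter whenever the map settles after a pan or zoom.
google.maps.event.addListener(map, 'idle', function () {
  var bounds = map.getBounds();
  streets.forEach(function (s) { // streets: [{name, latLng}, ...], assumed
    if (bounds.contains(s.latLng)) {
      highlightStreet(s); // hypothetical coloring routine
    }
  });
});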
An alternative, efficient way of displaying this kind of data may be to use Google Fusion Tables. This greatly simplifies integrating the data with the map.

Google Maps API - Slow Max Zoom Service

I am using the Max Zoom Service of the Google Maps API to get the maximum zoom level for given coordinates. Most of the time it is fast, and each request takes only around 150 ms. However, there have been a few occasions when the service became extremely slow, taking around 20 seconds.
var maxZoomService = new google.maps.MaxZoomService();
maxZoomService.getMaxZoomAtLatLng(center, function(response) {
  // response.status is OK or ERROR; response.zoom holds the max zoom level
  // my process
});
Have you experienced similar issue?
Yes, we have experienced the same problems.
We're using wkhtmltoimage to generate map images for inclusion into PDF files using wkhtmltopdf.
We have to set a maximum time in which the image is generated. If it's too short, you often will not have all the map tiles downloaded in time to create the image. Too long, and there's a really noticeable delay for users (with our connection, 5 seconds seems optimal).
We wanted to make sure that generated satellite maps did not exceed the maximum zoom level, using the MaxZoomService, and that's when we ran into problems with the service.
As it is an asynchronous service, ideally we wait for the service to report the max zoom level BEFORE creating the map and thereby triggering the correct map tile downloads.
Setting a default "fallback" zoom for the map in case the service is really slow is not much of an option, as subsequently updating the zoom level when the service finally returns will in most cases cause new tiles to be loaded, adding more delay...
If, like us, you're interested in specific, repeatable locations (e.g. from a database), then one option might be to cache (store) the max zoom levels in advance and periodically check for updated values.
In our specific case, the only other option would be to allow the service a specific amount of time (say 2 seconds) and, if it does not respond, fall back to a default zoom.
Not ideal, but it can handle the service's "bad hair days"...
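A sketch of that timeout fallback (the default zoom, the timeout, and buildMap are placeholders):

var DEFAULT_ZOOM = 17; // placeholder fallback zoom
var TIMEOUT_MS = 2000; // the "2 seconds" above
var settled = false;

// If the service misses the deadline, build the map with the fallback zoom.
var timer = setTimeout(function () {
  settled = true;
  buildMap(DEFAULT_ZOOM); // hypothetical map-construction routine
}, TIMEOUT_MS);

new google.maps.MaxZoomService().getMaxZoomAtLatLng(center, function (response) {
  if (settled) return; // too late, the fallback already ran
  clearTimeout(timer);
  settled = true;
  buildMap(response.zoom);
});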

Handling Many Google Maps v3 Markers

I have a lot of markers on my Google Maps v3 map. My map is running very slowly because there are too many markers on it.
So, what about indexing the database to speed up loading the markers? Will that have any effect? If not, do you have any ideas? Thanks :D
It depends on where the bottleneck is.
If the bottleneck is at the database seek, adding indexes to your database would definitely help. You might want to refer to the MySQL indexing documentation for guidance.
Otherwise, if the bottleneck is at the front end, i.e. the JavaScript is too heavy from having to load too many markers at once, here are some tips:
Try not to have too many markers displayed at once.
If the markers are related to time, you may want to remove some old markers from your map to free up memory.
If there are a lot of markers to be displayed all at once, you may wish to use a timer (setTimeout) to stagger the display, for example showing a marker every 100 ms instead of all at once (see the sketch after this list).
If possible, redesign your UI to show only, say, the 20 most relevant markers at a time, while hinting to your users that they can load another 20 if needed.
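A minimal sketch of the staggered display from the third tip (batch size and interval are arbitrary):

// Add markers in small batches so the UI thread stays responsive.
function addMarkersStaggered(map, positions, batchSize, delayMs) {
  var i = 0;
  (function addBatch() {
    var end = Math.min(i + batchSize, positions.length);
    for (; i < end; i++) {
      new google.maps.Marker({ position: positions[i], map: map });
    }
    if (i < positions.length) setTimeout(addBatch, delayMs);
  })();
}
// e.g. addMarkersStaggered(map, positions, 10, 100);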

Same address Google Maps - any ideas how to facilitate?

So I am trying to think of a way to facilitate two things. It may end up being a two-step process in the end, but I was looking for input.
The first thing I need to do is accommodate locations with the same address. The two scenarios that come to mind are businesses that share a location and apartment buildings.
The second thing I need to accommodate is a business/nonprofit with no headquarters, just a town. Right now I just map them to the town center, but if multiple businesses have no headquarters I run into the first problem.
So I did some Googling and found a solution that involved having a list of locations alongside the map, so you can click on them and the info window pops up. This isn't a solution for me, though.
What I was thinking of was using the location to map the first point, and for the second and subsequent points, moving the marker over 0.05 degrees or something marginal so that the marker shows up. The inherent problem with that is: what happens if 12 Main Street turns into 13 Main Street?
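Roughly what I mean, as a sketch (the radius is arbitrary):

// Fan markers that share an address out around a small circle
// so each one remains visible and clickable.
function spreadDuplicates(latLng, count) {
  var radius = 0.0005; // degrees, arbitrary
  var positions = [];
  for (var i = 0; i < count; i++) {
    var angle = (2 * Math.PI * i) / count;
    positions.push(new google.maps.LatLng(
        latLng.lat() + radius * Math.sin(angle),
        latLng.lng() + radius * Math.cos(angle)));
  }
  return positions;
}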
So any thoughts on what I could do?
Thanks
Levi
There's an extension by Martin Pearman called ClusterMarker that detects any groups of two or more markers whose icons visually intersect when displayed. Each group of intersecting markers is then replaced with a single cluster marker that looks different. The cluster marker, when clicked, simply centres and zooms the map in on the markers whose icons previously intersected.
A more advanced approach to this problem might be SQL: same address = same coordinates, so a GROUP BY with HAVING COUNT(*) > 1 would let you identify multi-record coordinates.
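For example, a sketch assuming MySQL-style SQL and a Node driver (table and column names are assumptions):

// Find coordinates shared by more than one record, server-side.
var sql =
  'SELECT lat, lng, COUNT(*) AS cnt, GROUP_CONCAT(id) AS ids ' +
  'FROM markers GROUP BY lat, lng HAVING COUNT(*) > 1';
connection.query(sql, function (err, rows) {
  if (err) throw err;
  // each row is one multi-record coordinate: render a single marker
  // whose info window lists all the ids at that point
});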
In fact, before you can cluster client-side, you need to send out all the data first, which means transferring much more than required in this case. That results in higher loading times and higher RAM utilization client-side, plus all the useless JS processing by the clusterer.
Client-side clustering is only recommended when the coordinates are close to each other, not when they are absolutely identical.
Think about it...