Which technology to use with Google Maps - google-maps

I want to develop a map application with 500 or so polygons and some data associated with each polygon. The data for each polygon, to give an approximation, would be about 100 string fields of moderate length (say each under 50 characters). I expect this data (but not the polygons) to change arbitrarily every 10 minutes or so.
There seem to be a number of technologies for building this -- Fusion Tables, the "raw" Maps API with an SQL database to hold the data, Maps Engine, etc. I was wondering which of these options is appropriate (and why) for the amount of data and level of churn I have described.

I think the raw Maps API is very appropriate for your case. 500 polygons is a low number; a map server would certainly be overkill. I would go for Fusion Tables if I had thousands of markers, but not for 500. I am currently using roughly this number of polygons with the raw API without problems.
PS: For drawing polygons with the raw API there is a trick that took me a long time to research and might come in handy for you later: Handle when drawing of polygons is complete in google maps api v3
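To give a feel for the raw-API route, here is a minimal sketch of drawing the polygons and wiring up their attribute data. It assumes a hypothetical /polygons endpoint on your own server that returns each record as an id, a path of lat/lng pairs, and an info object (all names illustrative):

```
// Minimal sketch: ~500 polygons with the raw Maps API v3.
// Assumes a hypothetical /polygons endpoint returning
// [{ id, paths: [{lat, lng}, ...], info: { name, ... } }, ...].
const map = new google.maps.Map(document.getElementById('map'), {
  center: { lat: 51.5, lng: -0.1 },
  zoom: 10,
});

fetch('/polygons')
  .then((res) => res.json())
  .then((records) => {
    records.forEach((rec) => {
      const poly = new google.maps.Polygon({
        paths: rec.paths,      // array of {lat, lng} literals
        strokeWeight: 1,
        fillOpacity: 0.3,
        map,
      });
      // Show the per-polygon attribute data on click.
      poly.addListener('click', () => {
        new google.maps.InfoWindow({
          content: rec.info.name,   // hypothetical attribute field
          position: rec.paths[0],
        }).open(map);
      });
    });
  });
```

Since only the attribute data changes every ten minutes or so, you could re-fetch just the info objects on a timer and leave the polygon geometry in place.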

Related

Loading 100-200K markers on google map

At the moment I'm using Google Maps v.3 API for drawing markers on the map.
I have around 500 markers in total.
For display purposes I use markerCluster and group markers with this tool on the client side in the browser.
However, I plan to expand the number of locations and assume it can grow to 100K or even 200K quickly.
I did some stress tests and realized that the current solution basically kills the browser at about 10-20K markers.
So my question is: what is the best approach to drawing that many markers (not necessarily Google Maps)?
I've read posts with similar questions, e.g.:
Showing many markers in Google Maps
Best solution for too many pins on google maps
Basically, people suggest using some clusterer for display purposes, which I already do.
Or they suggest using Fusion Tables for retrieving data, which is not an option, as the data has to stay on my server. Also, I assume the display functionality is limited with Fusion Tables.
I'm thinking about implementing the following scenario:
on every page zoom / load, send an AJAX request with the bounds of the displayed view, add about 30% on all sides, and retrieve only the markers that fall into this geographic area.
The 30% is added in case the user zooms out, so that I can display the surrounding markers quickly and then retrieve the rest (the broader territory) in the background.
When the number of markers is more than 50, I plan to apply clustering for display purposes. But since clustering in JavaScript is quite slow (not markerCluster itself so much as Google Maps, which still has to process the locations of all the markers), I plan to do the clustering on the server side by splitting the bounds of the displayed map into roughly a 15x15 grid, dropping markers into the individual cells, and then sending the client just the clusters with the number of markers inside each (e.g. like for a heatmap), and displaying those clusters as markers.
Could anyone who has done something similar please give some insight? Does it make sense in general, or is it a poor approach, since AJAX requests will be sent to the server on every map zoom and pan and will basically overload the server with redundant requests?
What I want to achieve is a nice user experience on big datasets of markers (loading in less than 2 seconds).
Your approach is solid. If at all possible, you'll want to precompute the clusters and cache them server-side, with their update strategy determined by how often the underlying dataset changes.
Google Maps has ~20 zoom levels, depending on where you are on the planet. Depending on how clustered your data is, if you have 200,000 markers in total and are willing to show about 500 on the map at any given time, then counting all the cluster locations plus the original markers, you'll only end up storing roughly 2n = 400,000 locations server-side across all your zoom levels combined.
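For illustration, here is a minimal sketch of the kind of server-side grid clustering the question describes, run over the markers that fall inside one set of requested bounds (all names hypothetical):

```
// Bucket markers into a gridSize x gridSize grid over the requested bounds
// and return one cluster per non-empty cell, placed at the cell centroid.
// `markers` is assumed to be [{lat, lng}, ...] already filtered to the bounds.
function clusterIntoGrid(markers, bounds, gridSize = 15) {
  const latStep = (bounds.north - bounds.south) / gridSize;
  const lngStep = (bounds.east - bounds.west) / gridSize;
  const cells = new Map();

  for (const m of markers) {
    const row = Math.min(gridSize - 1, Math.floor((m.lat - bounds.south) / latStep));
    const col = Math.min(gridSize - 1, Math.floor((m.lng - bounds.west) / lngStep));
    const key = `${row}:${col}`;
    const cell = cells.get(key) || { count: 0, latSum: 0, lngSum: 0 };
    cell.count += 1;
    cell.latSum += m.lat;
    cell.lngSum += m.lng;
    cells.set(key, cell);
  }

  return [...cells.values()].map((c) => ({
    count: c.count,
    lat: c.latSum / c.count,
    lng: c.lngSum / c.count,
  }));
}
```

Precomputing the same thing per zoom level is what gives you the roughly 2n stored locations mentioned above.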
Possible cluster updating strategies:
Update on every new marker added. Possible for a read-heavy application with few writes, if you need a high degree of data timeliness.
Update on a schedule
Kick off an update if ((there are any new markers since the last clustering pass && the cache is older than X) || there are more than Y new markers since the last clustering pass)
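The last strategy, expressed as code (the X and Y thresholds are hypothetical placeholders you would tune):

```
// Trigger a re-cluster only when the cache is stale or enough new markers
// have arrived since the last clustering pass.
function shouldRecluster({ newMarkersSinceLastPass, cacheAgeMs }) {
  const MAX_CACHE_AGE_MS = 10 * 60 * 1000; // "X": e.g. 10 minutes
  const MAX_NEW_MARKERS = 500;             // "Y": e.g. 500 new markers
  return (newMarkersSinceLastPass > 0 && cacheAgeMs > MAX_CACHE_AGE_MS)
      || newMarkersSinceLastPass > MAX_NEW_MARKERS;
}
```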
Storing these markers in a database that supports geo-data natively may be beneficial, as it lets you query locations with SQL-like statements.
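As a sketch of what that buys you, assuming PostGIS and the node-postgres client (table and column names are hypothetical):

```
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the environment

// Fetch only the markers whose geometry falls inside the requested bounds.
async function markersInBounds({ west, south, east, north }) {
  const sql = `
    SELECT id, ST_Y(geom) AS lat, ST_X(geom) AS lng
    FROM markers
    WHERE geom && ST_MakeEnvelope($1, $2, $3, $4, 4326)`;
  const { rows } = await pool.query(sql, [west, south, east, north]);
  return rows;
}
```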
Client-side, I would consider fetching a 50% margin on either side, not 30%. Google zooms in powers of 2, so this allows you to cover one full zoom level out.
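A minimal sketch of that client-side padding, using the current map bounds (it ignores the antimeridian for simplicity):

```
// Expand the visible bounds by `margin` (0.5 = 50%) on each side before
// asking the server for markers, so the response still covers one zoom-out.
function expandedBounds(map, margin = 0.5) {
  const b = map.getBounds();
  const ne = b.getNorthEast();
  const sw = b.getSouthWest();
  const latPad = (ne.lat() - sw.lat()) * margin;
  const lngPad = (ne.lng() - sw.lng()) * margin;
  return {
    north: ne.lat() + latPad,
    south: sw.lat() - latPad,
    east: ne.lng() + lngPad,
    west: sw.lng() - lngPad,
  };
}
```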
Next, if this application will get heavy use and optimization is worthwhile, I would log server-side when a client zooms in. Profile your usage so you can determine whether users zoom in and out often. With solid numbers (like "70% of users zoom in after retrieving initial results, and 20% zoom out"), you can decide whether it is worthwhile to preload the next layer of zoomed data for your users in order to improve UI responsiveness.

Plotting huge number of locations on a map

Is there any good library that will enable me to see the coarse location distribution?
I have a few million data points to plot on a map. Doing it on Google Maps is going to be heavy. I want this to be like a dot plot.
The closest thing I can think of to the functionality you are looking for would be to implement a heatmap layer: https://developers.google.com/maps/documentation/javascript/examples/layer-heatmap.
This would allow you to see the distribution of points without plotting them individually, thus saving you some client-side work.
However, I am not sure how well this will work with a million+ points. You might be best off looking into a Fusion Tables combination.
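A minimal sketch of the heatmap layer (it assumes the Maps API script was loaded with the visualization library, and that `points` is an array of {lat, lng} objects already thinned or aggregated on your own server):

```
// Render the point distribution as a heatmap instead of individual markers.
const heatmap = new google.maps.visualization.HeatmapLayer({
  data: points.map((p) => new google.maps.LatLng(p.lat, p.lng)),
  radius: 20, // pixel radius of influence per point
});
heatmap.setMap(map);
```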

For MVC Array - Polygons - which is the best way to go?

I need to build 2,800 polygons (using an MVC Array) for a Google Maps (v3) application and display some attribute information. It seems to me that there are two ways to do this:
1) Just load an array of 15,000+ points from a MySQL DB file. I am planning on generating these points using a vertices command in a GIS platform. All points that are vertices of a common building polygon will share the same identifier (i.e. building ID). I will generate the lat/longs of each of these points in the WGS84 coordinate system (i.e. EPSG:4326), http://spatialreference.org/ref/epsg/4326.
OR
2) Parse a MySQL geometry file (which comes in BLOB format) into a readable set of coordinates for Google Maps. I am still fuzzy on how to do this (especially dealing with the BLOB), but I suspect I need to load the file in the WGS84 coordinate system and start from there. I would also need to load the DBF file (in order to get the attributes) and match it up with the building ID.
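For what it's worth, a minimal sketch of how option 1 might look on the client, assuming the server returns flat vertex rows of the form { building_id, lat, lng }, already ordered per building (all names hypothetical):

```
// Group flat vertex rows by building ID and build one Polygon per building.
function buildPolygons(rows, map) {
  const byBuilding = new Map();
  for (const r of rows) {
    if (!byBuilding.has(r.building_id)) byBuilding.set(r.building_id, []);
    byBuilding.get(r.building_id).push({ lat: r.lat, lng: r.lng });
  }
  return [...byBuilding.entries()].map(([id, path]) =>
    new google.maps.Polygon({ paths: path, map })
  );
}
```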
My questions are:
Of option 1 & 2 what would be the best course of action to take for a low-experience programmer and the volume of data (i.e. 2000 polygons, which breakdown into 15000 points)?
IS WGS 84 (EPSG 4326) what I need my polygon coordinates to be in??? I've found a lot of confusion on this (some sites suggest WGS 84 - Mercator, EPSG 3857). This is a rather critical processing step to know...so I really hope for some guidance if possible.
Thanks!!!

Is it possible to get cities' polygonal boundaries like in Google Maps?

I would like to be able to tell whether a GPS location is in an inhabited or uninhabited zone.
I have tried some reverse geocoding services out there, but all of them proved useless, because they select the nearest address possible. (I understand why this is so; it is useful for the purpose of reverse geocoding.)
I have noticed that in Google Maps, when I search for a city, its boundary is outlined with a well-defined red dotted line. I would love to use this, or something similar.
Is there any way that Google Maps can provide such a service, or something else that can solve my problem?
Are there any other web solutions or databases that you know of that can give me this information?
Or maybe I can use one of the reverse geocoding solutions with some parameters (such as restricting the search radius) to determine whether or not the location is in a populated area?
If you do not find a public service, then it gets interesting, and expensive in terms of development effort.
Public data (worldwide) is only available from OpenStreetMap; I think they have such a layer (it could be named land_use (rural, etc.)). This layer is usually used to color a map; look at the OpenStreetMap web page to see if you can find a suitable coloring that corresponds to your task (e.g. look at green or gray areas).
These data are stored as polygons; you would have to extract them (I assume millions of them). Then you need a fast-searching spatial index, like a region quadtree.
Then you do a "point(lat, lon) in polygon" call, and get the polygon related to your position.
Probably not all of those polygons will fit into main memory, so you must load them on demand (e.g. by country).
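The "point in polygon" test itself is straightforward once the candidate polygons have been narrowed down by the spatial index; here is a sketch using the standard ray-casting algorithm, with the polygon given as an array of {lat, lng} vertices:

```
// Returns true if `point` ({lat, lng}) lies inside `polygon` (array of vertices).
function pointInPolygon(point, polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const yi = polygon[i].lat, xi = polygon[i].lng;
    const yj = polygon[j].lat, xj = polygon[j].lng;
    const intersects = (yi > point.lat) !== (yj > point.lat)
      && point.lng < ((xj - xi) * (point.lat - yi)) / (yj - yi) + xi;
    if (intersects) inside = !inside;
  }
  return inside;
}
```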
A variant of this approach is to use a geospatial database like Postgres to store the polygons and do a DB query.
With that approach, most of the work will be extracting the polygons from the OpenStreetMap DB file.
More accurate data is available from TomTom, but it can be really expensive.

GIS with Bing Silverlight and SQL 2008?

I have data from which I want to create a GIS-type application with the typical features of adding and removing layers of different types. What is the best architectural approach?
The data consists of property locations in Eastings and Northings.
I also have Ordnance Survey data in GML and shapefiles.
I know this is a very broad question but the subject area also seems very broad to me and I am unsure which direction to go.
I was thinking of using SQL 2008 spatial and the Bing Silverlight control to visualise the maps. To do this, would I have to convert the eastings and northings to the WGS84 geography data type? But then, if I converted the shapefiles to GML and imported all the GML files into SQL using GeomFromGML, they would be geometry data types. Wouldn't the two types be incompatible?
Also, should the ESRI ArcGIS API for Silverlight feature in the equation? Is this a good environment for creating maps that I can point at SQL Server 2008 as the data source (using a WCF service if necessary)?
Any advice greatly appreciated!
This is something I've done several times, using OS data from SQL Server in both Bing Maps AJAX and Silverlight controls. Some general comments below (in no particular order!):
Don't expect to implement full-blown GIS functionality using Bing Maps. Simple querying, retrieval and display of the data is all fine (plus some simple editing), but after that you'll be struggling with what can be achieved in the browser.
All vector shapes supplied to Bing Maps need to be in (geography) WGS84 coordinates, EPSG:4326. However, all data will be projected and displayed using the (projected) Spherical Mercator system, EPSG:3857.
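The reprojection itself is simple if you ever need to do it by hand (for example, when rasterizing tiles as described below); here is a sketch of the standard WGS84-to-Spherical-Mercator conversion:

```
// Convert a WGS84 lat/lng (EPSG:4326, degrees) to Spherical Mercator
// metres (EPSG:3857).
const EARTH_RADIUS = 6378137; // metres

function toSphericalMercator(lat, lng) {
  const x = (EARTH_RADIUS * lng * Math.PI) / 180;
  const y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360));
  return { x, y };
}
```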
In terms of vector shapes, you can expect to achieve a similar level of performance to what you get in the SSMS spatial results tab - that is, (with careful architecture) you can plot up to about 5,000 features on the map at once, zoom / pan around them, click on them to bring up various properties and attributes, etc. After that, however, you'll find the UI becomes fairly unresponsive (which I imagine is the reason why the spatial results tab itself limits you to displaying 5,000 records at once).
If you want to display more features than this, one approach is to rasterize them by projecting them into the EPSG:3857 projection, creating a .PNG/.JPG image file of the features, and then cutting that image into tiles according to the Bing Maps quadkey tile numbering system, as explained here: http://msdn.microsoft.com/en-us/library/bb259689.aspx, and displaying them as a tile layer. Tile layers are significantly faster than displaying the equivalent vector shapes, although it means the data is static.
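A sketch of the quadkey numbering from that MSDN article, going from a WGS84 coordinate and zoom level to the tile's quadkey string:

```
// lat/lng + zoom level -> tile XY -> Bing Maps quadkey.
function latLngToQuadKey(lat, lng, zoom) {
  const sinLat = Math.sin((lat * Math.PI) / 180);
  const x = (lng + 180) / 360;
  const y = 0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
  const mapSize = 256 * Math.pow(2, zoom);
  const tileX = Math.floor((x * mapSize) / 256);
  const tileY = Math.floor((y * mapSize) / 256);

  let quadKey = '';
  for (let i = zoom; i > 0; i--) {
    let digit = 0;
    const mask = 1 << (i - 1);
    if ((tileX & mask) !== 0) digit += 1;
    if ((tileY & mask) !== 0) digit += 2;
    quadKey += digit;
  }
  return quadKey;
}
```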
If you do create raster tiles, you can either render them dynamically or pre-render them to improve performance - i.e. you could set up a job to render and update the tileset for slowly-changing data every night / every month, etc.
If you're talking about OS MasterMap data, the sheer level of detail involved means that you need to think more carefully about what features you want to display and how you want to display them. Take Greater London, for example, which covers an area of about 50km x 40km. To create raster tiles (each of which is 256px x 256px) at zoom level 19 covering this area, you'd need to render and store 1.3 million separate tiles. If each of those is generated from a database query that takes, say, 200ms to run, it's going to take a very long time to prepare all your data. Also, once the files have been generated, you might want to think about storing them in a DB rather than saving them on a file system.
As for loading the OS data into SQL Server in the first place - there are several tools that will import from either GML or shapefile into SQL Server and handle the reprojection from EPSG:27700 (Ordnance Survey National Grid) to WGS84 along the way. Try GDAL/OGR or Safe FME for starters.
I have a blog at http://alastaira.wordpress.com with several posts that you may find useful, describing various aspects of integrating Bing Maps and SQL Server. In particular, you might want to look at:
http://alastaira.wordpress.com/2011/02/16/loading-ordnance-survey-open-data-into-sql-server-2008/
http://alastaira.wordpress.com/2011/01/23/the-google-maps-bing-maps-spherical-mercator-projection/
http://alastaira.wordpress.com/2011/02/21/using-ogr2ogr-to-convert-reproject-and-load-spatial-data-to-sql-server/