Efficiently loading ZCTAs from a shape file - google-maps

I'm working on a Google Maps application that uses GeoJSON files storing the ZCTAs (Zip Code Tabulation Areas) of states. These files are fairly large and take some time to load, so I was trying to find ways to reduce the loading time of the GeoJSON files.
I have looked at this question here:
Is there a memory efficient and fast way to load big json files in python?
so I am aware that reducing the loading time isn't easy to do.
But I came across this site, which loads ZCTAs quickly:
http://www.trulia.com/home_prices/
So my question is this: What has the developer done on this site to quickly load the ZCTA data? Can anyone see offhand how it was done?

You are basically asking, "how do I write the user interface for a GIS that will offer high performance?"
The developer of this website is not storing this information in a JSON file and then parsing it in response to each user's clicks. The developer is probably keeping all of these objects in memory. To make things even faster, the developer has probably generated the various KML files for the Google map in advance. The developer may be interfacing directly with Google, or may be using a commercial GIS with an output format that targets Google Maps.
For example, you might check out https://gis.stackexchange.com/questions/190709/arcgis-to-google-maps.
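To make the in-memory caching idea concrete, here is a minimal sketch (the names `boundaryCache`, `loadRegionFromDisk`, and `getRegion` are illustrative, not taken from that site) of serving pre-generated boundary data from memory instead of re-reading and re-parsing a file on every request:

```javascript
// Cache of pre-generated boundary data, keyed by region.
const boundaryCache = new Map();

function loadRegionFromDisk(regionId) {
  // Stand-in for the expensive step: a file read plus JSON.parse,
  // or a database query, or generating a KML/GeoJSON document.
  return { type: "FeatureCollection", features: [], region: regionId };
}

function getRegion(regionId) {
  // Only the first request for a region pays the full cost;
  // every later request is a dictionary lookup.
  if (!boundaryCache.has(regionId)) {
    boundaryCache.set(regionId, loadRegionFromDisk(regionId));
  }
  return boundaryCache.get(regionId);
}
```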

Related

What would be the best Web Mapping API for the following requirements?

I have a fairly simple, and what I would think to be common, web mapping project to complete. I'm struggling with my selection of a web mapping API. Thus far, I've not been able to find one that meets the following requirements.
Able to display thousands of points in one view without crippling the browser. To be specific, I'd like to display roughly 30,000 points at one time and still be able to navigate around a slippy map without degraded performance.
Local maps. The web server will run on the local client, so being able to display a map without reaching out to the internet (even if it's a very basic map) is an absolute requirement.
Render dynamic data from a database onto a map (most APIs meet this requirement).
Draw polygons directly on the map, and export the lat/lon values of all vertices.
In your experience working with map APIs, do any of them meet the requirements above?
I've looked at OpenLayers 3, Leaflet, and Polymaps. Aside from reading every piece of documentation ahead of time, I can't discern if any of these would fill all requirements. Again, I'm hoping someone with experience with any API could point me in the right direction.
Thanks!
As for Leaflet:
Thousands of points: you could use one of these plugins:
Leaflet.markercluster, clusters your points into "groups of points" at low zoom levels.
Leaflet MaskCanvas, replaces all your points with a single canvas layer.
Local maps: as long as you provide a way to create image tiles (even on the local machine), most mapping libraries should work.
Dynamic data: depending on what you call "dynamic", all mapping libraries should provide you with built-in methods to display your data.
Drawing polygons and exporting lat/lon vertices: use the Leaflet.draw plugin.
OpenLayers 3 would very probably provide all of this functionality as well.
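As a sketch of the fourth requirement (exporting vertex lat/lon values): a layer drawn with Leaflet.draw can be converted to GeoJSON via the layer's `toGeoJSON()` method, and the vertices pulled out with a small helper like the one below. The helper name is mine; note that GeoJSON orders coordinates as [lng, lat], and that a polygon ring repeats its first vertex at the end.

```javascript
// Given a drawn polygon as a GeoJSON Feature (e.g. from layer.toGeoJSON()),
// return the lat/lon of every vertex of the outer ring.
function polygonVertices(geojsonPolygon) {
  const ring = geojsonPolygon.geometry.coordinates[0]; // outer ring
  // Drop the closing vertex, which duplicates the first one.
  return ring.slice(0, -1).map(([lng, lat]) => ({ lat, lng }));
}
```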

Speeding up a very large Bing Maps polyline based layer

I am writing a Bing Map based Universal App for Windows Phone and Windows 8 that shows some quite large map layers.
Writing the initial app was no problem (the tutorial I followed is at http://blogs.msdn.com/b/rbrundritt/archive/2014/06/24/how-to-make-use-of-maps-in-universal-apps.aspx), however I now am experiencing major problems rendering a layer that contains thousands of polylines, with tens of thousands of co-ordinates.
The data is just too big - on Windows 8.1, the map crashes the application, while on Windows Phone 8.1, the layer takes a very long time to render.
According to http://blogs.msdn.com/b/bingdevcenter/archive/2014/04/23/visualize-large-complex-data-with-local-tile-layers-in-bing-maps-windows-store-apps-c.aspx, I should speed it up by converting it to a local tile layer; however, the program mentioned in the article (MapCruncher) requires a PNG as input. The question is, how do I convert my map data to a PNG? I can have the data as a shapefile, KML file, or CSV file. Is there another way I should be doing this? I know I can do this via GeoServer, however my app has to have offline support and so cannot download the appropriate files from a web server as needed.
If anyone has any other ways I could approach this speed issue with large layers, that would be greatly appreciated. I am aware that I can speed up rendering of a layer in Bing Maps via quadtrees, however most of what I have found is theoretical. If anyone has some code I can plug in to this, that would be very helpful.
Local tile layers are fine if you only have data in a small area, or only want to show the data for a few zoom levels. Otherwise the number of tiles grows drastically and will make your app huge. If your data changes regularly, or you want to support all zoom levels of the map, you should store your data on a server and expose it as a dynamic tile layer. A dynamic tile layer is a web service that generates a tile on demand from your data. You can add caching of the tiles for performance. This is the best way to handle large data sets, and one I have used a lot. In fact, I have a demo here: http://onsbingmapsdemo.cloudapp.net/ This data set consists of 175,000 complex polygons that equate to about 2GB of data.
I have an old blog post on how to do this here: http://rbrundritt.wordpress.com/2009/11/26/dynamic-tile-layers-in-the-bing-maps-silverlight-control/
If you prefer working with MVC, you might find these projects useful:
https://ajaxmapdataconnector.codeplex.com/
https://dataconnector.codeplex.com/
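For anyone building the dynamic tile layer described above, the core addressing math is small. Below is a sketch of the standard Bing Maps tile-system formulas: convert a lat/lon at a given zoom level to tile XY coordinates, then to the quadkey string that names the tile.

```javascript
// Convert a WGS84 lat/lon to Bing Maps tile XY at a zoom level.
function latLonToTileXY(lat, lon, zoom) {
  const n = Math.pow(2, zoom); // number of tiles along each axis
  const x = Math.floor(((lon + 180) / 360) * n);
  const sinLat = Math.sin((lat * Math.PI) / 180);
  const y = Math.floor(
    (0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * n
  );
  return { x, y };
}

// Interleave the bits of x and y into the quadkey that identifies a tile.
function tileXYToQuadKey(x, y, zoom) {
  let key = "";
  for (let i = zoom; i > 0; i--) {
    const mask = 1 << (i - 1);
    let digit = 0;
    if (x & mask) digit += 1;
    if (y & mask) digit += 2;
    key += digit;
  }
  return key;
}
```

A tile request handler would parse the quadkey back to a bounding box, render only the polylines intersecting it, and return the PNG, caching the result.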

Poor rendering time when plotting large shape files on Google Maps

First, I'd like to let it be known that this is my first journey into developing mapping applications with such a large set of data, so I am in no way locked into the techniques or technologies listed below...
The Overview
What I am trying to accomplish is the ability to load my company's market information for the United States onto some form of web-based mapping software. Currently I am trying to accomplish this with the Google Maps JS API and GeoJSON data, but am not against other alternatives. There are roughly 163 files (ranging from 200KB to 6MB) that I have exported from ArcGIS into GeoJSON files and then loaded into a database (currently as GeoJSON strings). We have developed a UI that loads the data based on the current map bounds and Max/Min calculations in the corresponding records.
The problem I'm running into is the render time on the map itself, which is annoying when switching between different regions, states, or zoom levels. The API calls to load the data are acceptable in regards to the size of the data being retrieved. My boss said it is great for a proof of concept, but would like to see it much, much faster.
The Questions
What suggestions could you offer to increase the render time?
Are there better options (3rd party libs, techniques, etc) for what I'm trying to accomplish?
It was suggested by a co-worker to export the map shapes and just use the images for overlaying the information based on the coords. Any input on this?
One solution to your problem could be reducing the number of vertices in polygon or polyline layers (files) while preserving each feature's shape. That way the GeoJSON file would be smaller and easier to render.
You can use tools available in ArcGIS for Desktop.
See
How to Simplify Line or Polygon
or
Simplify Polygon
I'm sure there are similar tools in QGIS or any other open source GIS software.
There are other solutions to your problem, like prerendering the data into map tiles that can be overlaid on Google Maps basemap(s). That is probably the fastest solution, but more complicated.
Maybe this could be a good starting point.
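The simplification those tools perform is typically a variant of the Ramer-Douglas-Peucker algorithm: drop vertices that deviate from the overall line by less than a tolerance, shrinking the GeoJSON without visibly changing the shape. A minimal sketch of the idea, with coordinates as [x, y] pairs and the tolerance in the same units as the coordinates:

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpendicularDistance(p, a, b) {
  const dx = b[0] - a[0], dy = b[1] - a[1];
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
  return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
}

// Ramer-Douglas-Peucker: keep only vertices that matter at the tolerance.
function simplify(points, tolerance) {
  if (points.length < 3) return points;
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], points[0], points[points.length - 1]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  // Everything within tolerance of the chord: collapse to the endpoints.
  if (maxDist <= tolerance) return [points[0], points[points.length - 1]];
  // Otherwise keep the farthest vertex and recurse on both halves.
  const left = simplify(points.slice(0, index + 1), tolerance);
  const right = simplify(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right);
}
```

For production use, a library such as Turf.js offers the same operation already tuned for GeoJSON geometries.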
Just a couple of random thoughts.
On one project I worked on, I noticed terrible performance with GeoJSON loaded via a REST service.
I found out that the browser's JavaScript engine had a hard time parsing large GeoJSON responses. I finally decided to load the data in batches (multiple requests). Performance increased enormously.
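As an illustration of that batching idea (the helper and batch size are hypothetical, not from that project): the server can split one huge FeatureCollection into smaller ones, so no single response forces the browser to parse everything at once.

```javascript
// Split a GeoJSON FeatureCollection into smaller collections of at most
// batchSize features each; the client then requests them one at a time.
function splitFeatureCollection(fc, batchSize) {
  const batches = [];
  for (let i = 0; i < fc.features.length; i += batchSize) {
    batches.push({
      type: "FeatureCollection",
      features: fc.features.slice(i, i + batchSize),
    });
  }
  return batches;
}
```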
I noticed you use 163 files (layers?). Do you display them all at the same time (no no no), or do you have some control panel to switch layers on/off? If you display them at the same time, performance will suffer for sure, and you should look into generating map tiles with thematically grouped layers. That way you'd end up with a couple of tile layers that you can switch on and off.
Also, if the data used to generate the tiles changes frequently, then tiles aren't a good idea. Then maybe you should look into Web Map Services. A WMS generates maps on the fly from data stored in a database. GeoServer supports WMS, and it's easy to configure and use.

Where to store information about routes and points?

I'm working on a project where I need to display maps. These maps are going to have routes and points of interest, and there will be many maps to display in the future. At the moment, the project uses MySQL. At first, I was thinking of keeping the points of interest and the points from the routes in a table on the MySQL server and displaying them with Leaflet.js and OSM, but doing some research I found information about using GeoJSON to store points and routes, and I also noticed that Leaflet can display information in GeoJSON format.
I am a novice at this maps topic, so what do you recommend?
To store points in MySQL?
To have a database of GeoJSON files that store information about each map?
The project is a web application where the user will find detailed information about some routes; this information will be displayed in text format (HTML) and will be accompanied by a map (that will display the route and some points).
We (https://play.google.com/store/apps/details?id=org.aph.avigenie&hl=en) used a SQLite DB on the device to store favourites. We also store OSM (OpenStreetMap) data in MySQL. Having the geospatial extensions is helpful; however, we don't use them, as we have been able to find ways around them.
A really useful answer would require some more insight into your application. For example, is your application somehow coupled with a web service? Is your application itself a web service? In that case, should the various calculations be performed client-side or server-side? Are you dealing with a few POIs and routes that don't have many waypoints? Or are you working on a planetary GIS?
Assuming you are probably somewhere between those last two extremes, you should definitely take a look at:
MySQL spatial extension
or preferably PostGIS
Those will allow you to store lots of data and to efficiently retrieve them.
BTW, remember the earth is not flat. So storing points is one thing, but displaying them on a flat display will require some kind of projection. GeoJSON itself is only a file format. You will definitely require a tool that can handle that for you. That will be one key feature to check, I think.
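To illustrate the projection point: most web maps, Leaflet included, draw lat/lon through the Web Mercator projection, and the mapping library normally owns this step, so storing raw lat/lon in MySQL is fine. A sketch of the formula, normalized so x and y fall in [0, 1] for easy scaling to pixels:

```javascript
// Web Mercator projection of a WGS84 lat/lon, normalized to the unit square.
// Multiply by (tileSize * 2^zoom) to get pixel coordinates at a zoom level.
function webMercator(lat, lon) {
  const x = (lon + 180) / 360;
  const sinLat = Math.sin((lat * Math.PI) / 180);
  const y = 0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
  return { x, y };
}
```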

Google maps Javascript API vs. KML layer?

Is there any general advice on when to use KML layers versus the JavaScript API? There are several needs for the app. I need to display markers for moving objects on the map. These will be refreshed every minute on average. But there will also be a lot of static markers to be displayed on demand. These are just markers. But on another screen I need to draw the driven routes.
Actually, I'm not sure what's the best way of displaying all the needed data. KML? Only fetch the data and do the rest with the API on the fly? Generate the code on the server and just send it to the browser? What would be the best in terms of performance and ease of use for the developer?
I think JavaScript layers are faster than KML.
(The fastest speed is to generate image tiles - but that takes a lot of time and needs to be done statically).
I'd use AJAX to generate the data on the server and send it to the API on the fly.
I think KML is useful if you want to share the data with other websites and with people using desktop GIS software. It is extremely easy to use someone else's KML layer. It is much harder to use their JavaScript array.
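As a sketch of the AJAX option: the server-side endpoint would typically turn database rows into a GeoJSON FeatureCollection that the Maps API (or Leaflet) can consume directly. The row shape here (id, lat, lon, name) is hypothetical.

```javascript
// Convert plain database rows into a GeoJSON FeatureCollection.
// Remember GeoJSON coordinate order is [longitude, latitude].
function rowsToGeoJson(rows) {
  return {
    type: "FeatureCollection",
    features: rows.map((r) => ({
      type: "Feature",
      geometry: { type: "Point", coordinates: [r.lon, r.lat] },
      properties: { id: r.id, name: r.name },
    })),
  };
}
```

The client then refreshes the moving markers by re-fetching this payload every minute and updating the layer in place.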