I am working on an application using SDK version 1.0.2 that needs to manage 400-500 markers. Generally speaking the performance is very good, except when a custom info window is visible. The large number of markers combined with the custom info window causes the UI to stutter.
I realize the issue is the number of markers: when I reduce the number of markers, the stutter does not appear. I have tried to reduce the marker count by only adding those that are visible during the didChangeCameraPosition callback, but I found that adding and removing markers has an even bigger performance hit.
I'm not sure what else I can try, and any advice on how to proceed would be very helpful.
The new SDK version 1.4.0, released in July 2013, adds a delegate method mapView:idleAtCameraPosition: which is called once a camera movement ends. You can shift the marker-adding logic to this method instead of didChangeCameraPosition, which is called many times during the course of a camera change. This should improve performance somewhat.
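The same idea carries over outside the iOS SDK as well. Purely as an illustration, here is a rough TypeScript sketch using the JavaScript Maps API's equivalent `idle` event; the `allMarkers` array and the cull-by-bounds strategy are assumptions for the sketch, not part of the original question:

```ts
// Illustration only (JavaScript Maps API analog of idleAtCameraPosition:):
// do the marker work once per settled camera, not on every intermediate change,
// and only attach the markers that are actually inside the viewport.
// `allMarkers` is assumed to already hold the 400-500 markers with positions set.
function attachIdleMarkerCulling(
  map: google.maps.Map,
  allMarkers: google.maps.Marker[]
): void {
  map.addListener("idle", () => {
    const bounds = map.getBounds();
    if (!bounds) return;
    for (const marker of allMarkers) {
      const pos = marker.getPosition();
      const visible = pos != null && bounds.contains(pos);
      // setMap(null) detaches off-screen markers; setMap(map) re-attaches visible ones.
      marker.setMap(visible ? map : null);
    }
  });
}
```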
Related
I have a fairly simple, and what I would think to be common, web mapping project to complete. I'm struggling with my selection of a web mapping API. Thus far, I've not been able to find one that meets the following requirements.
Able to display thousands of points in one view without crippling the browser. To be specific, I'd like to display roughly 30,000 points at one time and still be able to navigate around a slippy map without degraded performance.
Local maps. The web server will run on the local client, so being able to display a map without reaching out to the internet (even if it's a very basic map) is an absolute requirement.
Render dynamic data from a database onto a map (most APIs meet this requirement).
Draw polygons directly on the map, and export the lat/lon values of all vertices.
In your experience working with map APIs, do any of them meet the requirements above?
I've looked at OpenLayers 3, Leaflet, and Polymaps. Short of reading every piece of documentation ahead of time, I can't tell whether any of these would meet all the requirements. Again, I'm hoping someone with experience with any of these APIs can point me in the right direction.
Thanks!
As for Leaflet:
Thousands of points: you could use one of these plugins:
Leaflet.markercluster, which clusters your points into "groups of points" at low zoom levels.
Leaflet MaskCanvas, which replaces all your points with a single canvas layer.
Local maps: as long as you provide a way to create image tiles (even on the local machine), most mapping libraries should work.
Dynamic data: depending on what you call "dynamic", all mapping libraries should provide you with built-in methods to display your data.
Drawing polygons and exporting lat/lon vertices: use the Leaflet.draw plugin (see the sketch below).
OpenLayers 3 would very probably provide you with all these functionalities as well.
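To make the Leaflet side concrete, here is a minimal TypeScript sketch combining a locally served tile layer, Leaflet.markercluster for the point load, and Leaflet.draw for capturing polygon vertices. The tile URL, map container id, and the empty points array are placeholders, not working data:

```ts
// Minimal Leaflet setup: local tiles, clustered markers for a large point
// set, and a draw control whose polygons are read back as lat/lon vertices.
import * as L from "leaflet";
import "leaflet.markercluster";
import "leaflet-draw";

const map = L.map("map").setView([45.0, -93.0], 8);

// "Local maps": tiles served by the local web server, no internet needed.
L.tileLayer("/tiles/{z}/{x}/{y}.png", { maxZoom: 18 }).addTo(map);

// Thousands of points: cluster them so only a handful of markers render at once.
const points: Array<[number, number]> = []; // placeholder: fill from your database
const cluster = L.markerClusterGroup();
for (const [lat, lng] of points) cluster.addLayer(L.marker([lat, lng]));
map.addLayer(cluster);

// Polygon drawing plus exporting the vertices.
const drawnItems = new L.FeatureGroup().addTo(map);
map.addControl(new L.Control.Draw({ edit: { featureGroup: drawnItems } }));

map.on("draw:created", (e: any) => {
  drawnItems.addLayer(e.layer);
  if (e.layer instanceof L.Polygon) {
    // getLatLngs() returns the polygon's rings as arrays of {lat, lng} objects.
    console.log(JSON.stringify(e.layer.getLatLngs()));
  }
});
```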
I've completely failed to understand why I get this exception, so it would be nice if someone would be so kind as to look at my project and suggest possible causes. I understand it's quite difficult to dig into an unfamiliar project, so even general tips would be welcome.
Problem description: I've ported my own ZoomControl from WPF to a Store app and use it to zoom a graph in and out using ScaleTransform. At the default/far zoom level all seems fine and no memory spikes occur, but as I zoom in deeper, unmanaged memory usage bursts very high, very fast, resulting in an OOM exception. I've profiled with dotMemory and tried to isolate the different parts related to zooming, including cutting off the animation and simplifying the template, to no avail.
Another strange thing I've noticed: if you zoom in without crashing and Alt+Tab to Task Manager, you'll see memory usage drop significantly; then if you go back to the app and pan the content with the mouse (without touching the zoom), you'll get lag and can see a huge memory usage spike in Task Manager.
The weird thing is that it works fine at one zoom level and crashes at another. I just don't understand why memory usage is so high in the zoomed-in state.
I'm working on an open-source project you can get here. Run the METRO.SimpleGraph project and use the mouse wheel to zoom in.
I am writing a Bing Maps-based Universal App for Windows Phone and Windows 8 that shows some quite large map layers.
Writing the initial app was no problem (the tutorial I followed is at http://blogs.msdn.com/b/rbrundritt/archive/2014/06/24/how-to-make-use-of-maps-in-universal-apps.aspx); however, I am now experiencing major problems rendering a layer that contains thousands of polylines with tens of thousands of coordinates.
The data is just too big - on Windows 8.1, the map crashes the application, while on Windows Phone 8.1, the layer takes a very long time to render.
According to http://blogs.msdn.com/b/bingdevcenter/archive/2014/04/23/visualize-large-complex-data-with-local-tile-layers-in-bing-maps-windows-store-apps-c.aspx, I should speed it up by converting it to a local tile layer; however, the program mentioned in the article (MapCruncher) requires a PNG as input. The question is: how do I convert my map data to a PNG? I can have the data as a shapefile, a KML file, or a CSV file. Is there another way I should be doing this? I know I can do this via GeoServer, but my app has to have offline support, so it cannot download the appropriate files from a web server as needed.
If anyone has other ways I could approach this speed issue with large layers, that would be greatly appreciated. I am aware that I can speed up rendering of a layer in Bing Maps via quadtrees, but most of what I have found is theoretical. If anyone has some code I can plug into this, that would be very helpful.
Local tile layers are fine if you only have data in a small area, or only want to show the data for a few zoom levels. Otherwise the number of tiles grows drastically and will make your app huge. If your data changes regularly, or you want to support all zoom levels of the map, you should store your data on a server and expose it as a dynamic tile layer. A dynamic tile layer is a web service that generates a tile on demand from your data. You can add caching to the tiles for performance. This is the best way to handle large data sets and one I have used a lot. In fact I have a demo here: http://onsbingmapsdemo.cloudapp.net/ That data set consists of 175,000 complex polygons, which equates to about 2GB of data.
I have an old blog post on how to do this here: http://rbrundritt.wordpress.com/2009/11/26/dynamic-tile-layers-in-the-bing-maps-silverlight-control/
If you prefer working with MVC, you might find these projects useful:
https://ajaxmapdataconnector.codeplex.com/
https://dataconnector.codeplex.com/
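The same pattern can also be sketched outside .NET. The following TypeScript/Express sketch is an illustration only (the linked posts use .NET; the data query and drawing step are left as comments, and the port is arbitrary): map a {z}/{x}/{y} request to a geographic bounding box, render a 256x256 PNG from the data that intersects it, and send it with caching headers.

```ts
// Rough sketch of a dynamic tile service: each /tiles/{z}/{x}/{y}.png request
// is rendered on demand from whatever data falls inside that tile.
import express from "express";
import { createCanvas } from "canvas";

const TILE_SIZE = 256;

// Standard slippy-map tile -> longitude/latitude conversion.
function tileToLon(x: number, z: number): number {
  return (x / Math.pow(2, z)) * 360 - 180;
}
function tileToLat(y: number, z: number): number {
  const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z);
  return (180 / Math.PI) * Math.atan(Math.sinh(n));
}

const app = express();

app.get("/tiles/:z/:x/:y.png", async (req, res) => {
  const z = Number(req.params.z);
  const x = Number(req.params.x);
  const y = Number(req.params.y);

  // Geographic bounding box covered by this tile.
  const west = tileToLon(x, z), east = tileToLon(x + 1, z);
  const north = tileToLat(y, z), south = tileToLat(y + 1, z);

  const canvas = createCanvas(TILE_SIZE, TILE_SIZE);
  const ctx = canvas.getContext("2d");

  // Hypothetical step: query your spatial database for features intersecting
  // [west, south, east, north] and draw them onto `ctx` here.

  res.set("Cache-Control", "public, max-age=86400"); // let rendered tiles be cached
  res.type("png").send(canvas.toBuffer("image/png"));
});

app.listen(8080);
```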
First, I'd like to let it be known that this is my first journey into developing mapping applications with such a large set of data. So, I am in no way locked into the techniques or technologies listed below...
The Overview
What I am trying to accomplish is the ability to load my company's market information for the United States onto some form of web-based mapping software. Currently I am trying to accomplish this with the Google Maps JS API and GeoJSON data, but I am not against other alternatives. There are roughly 163 files (ranging from 6MB down to 200KB) that I have exported from ArcGIS into GeoJSON files and then loaded into a database (currently as GeoJSON strings). We have developed a UI that loads the data based on the current map bounds and max/min calculations in the corresponding records.
The problem I'm running into is the render time on the map itself, which is annoying when switching between different regions, states, or zoom levels. The API calls to load the data are acceptable given the size of the data being retrieved. My boss said it is great for a proof of concept, but would like to see it much, much faster.
The Questions
What suggestions could you offer to improve the render time?
Are there better options (3rd party libs, techniques, etc) for what I'm trying to accomplish?
A co-worker suggested exporting the map shapes as images and just overlaying those images based on their coordinates. Any input on this?
One solution to your problem could be reducing the number of vertices in your polygon or polyline layers (files) while preserving each feature's shape. That way the GeoJSON files would be smaller and easier to render.
You can use tools available in ArcGIS for Desktop.
See How to Simplify Line or Polygon, or Simplify Polygon.
I'm sure there are similar tools in QGIS or any other open source GIS software.
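For reference, the vertex reduction those tools perform is essentially the Ramer-Douglas-Peucker algorithm. Here is a small self-contained TypeScript sketch of it (generic x/y points, with the tolerance `epsilon` in the same units as the coordinates):

```ts
type Point = { x: number; y: number };

// Perpendicular distance from point p to the line through a and b.
function perpendicularDistance(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer-Douglas-Peucker: keep only the vertices that deviate from the
// simplified shape by more than `epsilon`.
function simplify(points: Point[], epsilon: number): Point[] {
  if (points.length < 3) return points;
  const last = points.length - 1;
  let maxDist = 0;
  let index = 0;
  for (let i = 1; i < last; i++) {
    const d = perpendicularDistance(points[i], points[0], points[last]);
    if (d > maxDist) {
      maxDist = d;
      index = i;
    }
  }
  if (maxDist > epsilon) {
    // Split at the farthest vertex and simplify both halves.
    const left = simplify(points.slice(0, index + 1), epsilon);
    const right = simplify(points.slice(index), epsilon);
    return left.slice(0, -1).concat(right);
  }
  // All intermediate vertices are within tolerance: keep only the endpoints.
  return [points[0], points[last]];
}
```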
There are other solutions to your problem, like pre-rendering the data into map tiles that can be overlaid on the Google Maps basemap(s). That is probably the fastest solution, but it is more complicated.
Maybe this could be a good starting point.
Just a couple of random thoughts.
On one project I worked on, I noticed terrible performance with GeoJSON loaded via a REST service.
I found out that the browser's JavaScript engine had a hard time parsing large GeoJSON responses. I finally decided to load the data in batches (multiple requests), and performance increased enormously.
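As a rough illustration of the batching idea with the Google Maps JS API data layer (TypeScript; the paged `/api/geojson` endpoint and the page count are hypothetical):

```ts
// Load a large GeoJSON dataset in several smaller requests, so the browser
// never has to parse one huge response at once. Assumes the Google Maps JS
// API is already loaded and the server can return the data split into pages.
async function loadGeoJsonInBatches(map: google.maps.Map, pages: number): Promise<void> {
  for (let page = 0; page < pages; page++) {
    const response = await fetch(`/api/geojson?page=${page}`); // hypothetical endpoint
    const batch = await response.json();
    map.data.addGeoJson(batch); // render each batch as soon as it arrives
  }
}

// Usage: loadGeoJsonInBatches(map, 10);
```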
I noticed you use 163 files (layers?). Do you display them all at the same time (no no no), or do you have some control panel to switch layers on and off? If you display them all at the same time, then performance will suffer for sure, and you should look into generating map tiles with thematically grouped layers. That way you'd end up with a couple of tile layers that you can switch on and off.
Also, if the data used to generate the tiles changes frequently, then tiles aren't a good idea. In that case, maybe you should look into Web Map Services (WMS). A WMS generates maps on the fly from data stored in a database. GeoServer supports WMS and is easy to configure and use.
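If you do end up serving the data through GeoServer's WMS, one common way to show it on top of Google Maps is an ImageMapType whose tile URLs are WMS GetMap requests. A rough TypeScript sketch, with a placeholder GeoServer URL and layer name:

```ts
// Overlay a WMS layer (e.g. served by GeoServer) on a Google Map by turning
// each 256x256 tile request into a WMS 1.1.1 GetMap request.
const WMS_BASE = "http://localhost:8080/geoserver/wms"; // placeholder

function tileToLon(x: number, z: number): number {
  return (x / Math.pow(2, z)) * 360 - 180;
}
function tileToLat(y: number, z: number): number {
  const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z);
  return (180 / Math.PI) * Math.atan(Math.sinh(n));
}

const wmsLayer = new google.maps.ImageMapType({
  name: "WMS overlay",
  tileSize: new google.maps.Size(256, 256),
  getTileUrl: (coord, zoom) => {
    // Bounding box of this tile in lon/lat (WMS 1.1.1 expects lon,lat order).
    const bbox = [
      tileToLon(coord.x, zoom), tileToLat(coord.y + 1, zoom),
      tileToLon(coord.x + 1, zoom), tileToLat(coord.y, zoom),
    ].join(",");
    return `${WMS_BASE}?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap` +
      `&LAYERS=myworkspace:mylayer&SRS=EPSG:4326&BBOX=${bbox}` +
      `&WIDTH=256&HEIGHT=256&FORMAT=image/png&TRANSPARENT=true`;
  },
});

// Usage: map.overlayMapTypes.push(wmsLayer);
```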
I'm trying to make my WP8 app more accurate in terms of location detection by using the accelerometer to detect whether the device has stopped moving (in the context of driving, so presumably horizontally only).
I've read about the Motion API, which is apparently the thing to use, as it removes some of the complexity and calculations required when reading the accelerometer directly. But I'm unsure as to how to use the data (MotionReading.DeviceAcceleration etc.) in the correct way.
Can anyone suggest the best/simplest way to determine whether the device has stopped moving using this API, or a better approach if there is one?
Thanks.
From my point of view, it really depends on where you want to use your app. If it is an outdoor application then, as Igrali mentioned, using GPS would be the better option. However, if you are still sure you want to use the accelerometer, one approach I have tried is to save the last few accelerometer readings and compare each new reading with the previous ones: if the readings are essentially the same, the device has stopped; otherwise it is moving. That may not be the best way, but it works.
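The Motion API itself is a .NET API, but the buffering idea is language-agnostic. Here is a rough TypeScript sketch of it; the window size and threshold are arbitrary values you would need to tune for your readings:

```ts
// Rough "has the device stopped?" detection from raw accelerometer samples:
// keep the last few readings and treat the device as stationary when their
// magnitudes barely differ from one another.
type Reading = { x: number; y: number; z: number };

class StillnessDetector {
  private readonly samples: Reading[] = [];

  constructor(
    private readonly windowSize = 10,   // how many recent readings to keep
    private readonly threshold = 0.05,  // max deviation (same units as readings) to count as "still"
  ) {}

  // Call this for every new accelerometer reading.
  addReading(r: Reading): void {
    this.samples.push(r);
    if (this.samples.length > this.windowSize) this.samples.shift();
  }

  // True when the buffered readings are all close to their average magnitude.
  isStationary(): boolean {
    if (this.samples.length < this.windowSize) return false;
    const magnitudes = this.samples.map((s) => Math.hypot(s.x, s.y, s.z));
    const mean = magnitudes.reduce((a, b) => a + b, 0) / magnitudes.length;
    return magnitudes.every((m) => Math.abs(m - mean) < this.threshold);
  }
}
```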