I am using the Max Zoom Service of the Google Maps API to get the maximum zoom level for a given coordinate. Most of the time it is fast and each request only takes around 150 ms. However, on a few occasions the service has become extremely slow, taking around 20 seconds.
maxZoomService = new google.maps.MaxZoomService();
maxZoomService.getMaxZoomAtLatLng(center, function(response) {
// my process
});
Have you experienced a similar issue?
Yes, we have experienced the same problem.
We're using wkhtmltoimage to generate map images for inclusion into PDF files using wkhtmltopdf.
We have to set a maximum time in which the image is generated. If it's too short, you often will not have all the map tiles downloaded in time to create the image. Too long, and there's a really noticeable delay for users (with our connection, 5 seconds seems optimal).
We wanted to make sure that generated satellite maps do not exceed the maximum zoom level using the MaxZoomService and that's when we ran into problems with the service.
As it is an asynchronous service, ideally we wait for the service to report the max zoom level BEFORE creating the map and thereby triggering the correct map tile downloads.
Setting a default "fallback" zoom for the map in case the service is being really slow is not really an option, as subsequently updating the zoom level when the service does return a value will in most cases cause new tiles to be reloaded, requiring yet more delay...
If like us you're interested in specific, repeatable locations (e.g. from a database) then one option might be to cache (store) the max zoom levels in advance and periodically check for updated values.
In our specific case the only other option would be to allow the service a specific amount of time (say 2 seconds) and, if it does not respond, fall back to a default zoom; a rough sketch of that is below.
Not ideal, but it can handle the service's "bad hair days"...
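A minimal sketch of that fallback, assuming a 2-second limit and a default zoom of 18 (both figures are ours, not anything the API prescribes):

// Ask MaxZoomService for the max zoom, but fall back to a default
// value if it has not answered within a time limit.
function getMaxZoomWithTimeout(latLng, timeoutMs, defaultZoom, callback) {
  var done = false;
  var service = new google.maps.MaxZoomService();
  // Fallback: if the service is having a "bad hair day", use the default.
  var timer = setTimeout(function() {
    if (!done) {
      done = true;
      callback(defaultZoom);
    }
  }, timeoutMs);
  service.getMaxZoomAtLatLng(latLng, function(response) {
    if (done) return; // the fallback already fired
    done = true;
    clearTimeout(timer);
    if (response.status === google.maps.MaxZoomStatus.OK) {
      callback(response.zoom);
    } else {
      callback(defaultZoom);
    }
  });
}

// Usage: create the map only once the zoom to use is known.
getMaxZoomWithTimeout(center, 2000, 18, function(zoom) {
  var map = new google.maps.Map(document.getElementById('map'), {
    center: center,
    zoom: zoom,
    mapTypeId: google.maps.MapTypeId.SATELLITE
  });
});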
I am using Flutter to write a running app like Strava. In this app, I use the location and Google Maps plugins for Flutter. Everything works well, but when I call getCurrentLocation every 10 s to track my location, I receive a different LatLng each time even if I stand still.
Has anybody here faced the same problem? I think it is probably caused by GPS accuracy issues.
Every few seconds, Android (and iOS) gets a new location fix, either by connecting to cell towers or to GPS satellites. Based on that data it determines the most likely location for you on the globe. Since these measurements are not 100% accurate, every time it recalculates the user's location there will be a slight difference, so even if you stand still your lat/lng values will change slightly. That is normal. You can decide to discard the new value if it is too close to the previous one; maps_toolkit is a good library for calculating the distance between two locations.
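The thread is about Flutter, but the filtering idea is language-agnostic; here is a rough JavaScript sketch of it (the 10 m threshold and the haversine helper are our own choices; in Dart you would get the distance from maps_toolkit instead):

// Haversine distance in metres between two {lat, lng} points.
function distanceMeters(a, b) {
  var R = 6371000; // mean Earth radius in metres
  var toRad = Math.PI / 180;
  var dLat = (b.lat - a.lat) * toRad;
  var dLng = (b.lng - a.lng) * toRad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

var lastAccepted = null;
var MIN_MOVE_METERS = 10; // assumed threshold: treat anything below this as jitter

function onLocationUpdate(latLng) {
  if (lastAccepted && distanceMeters(lastAccepted, latLng) < MIN_MOVE_METERS) {
    return; // GPS noise: keep the previous fix
  }
  lastAccepted = latLng;
  // ...append the point to the recorded track...
}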
At the moment I'm using Google Maps v.3 API for drawing markers on the map.
I have around 500 markers in total.
For display purposes I use MarkerClusterer to group markers on the client side in the browser.
However, I plan to expand the locations number and assume it can grow to 100K or even 200K quickly.
I did some stress tests and realized that the current solution basically kills the browser at about 10-20K markers.
So my question is: what is the best approach to draw that many markers (not necessarily with Google Maps)?
I've read posts with similar questions, e.g.:
Showing many markers in Google Maps
Best solution for too many pins on google maps
Basically people suggest to use some clusterer for display purposes, which I already use.
Or to use fusion tables for retrieving data, which is not an option, as the data has to stay on my server. Also I assume the display functionality is limited with fusion tables.
I'm thinking about implementing the following scenario:
on every page zoom / load, send an AJAX request with the bounds of the visible view, expanded by about 30% on all sides, and retrieve only the markers that fall into this geographic area.
The 30% is added in case the user zooms out, so that I can display the surrounding markers quickly and then retrieve the rest of the broader territory in the background.
When the number of markers is more than 50, I plan to apply clustering for display purposes. But since marker clustering in JavaScript is quite slow (not MarkerClusterer itself so much as Google Maps, since it still has to process the locations of all the markers), I plan to do the clustering on the server side: split the bounds of the displayed map into roughly a 15x15 grid, drop markers into their cells, and send the client only the clusters with the number of markers inside each one (much like for a heatmap), then display those clusters as markers.
Could anyone who has done something similar give some insight? Does it make sense in general, or is it a poor approach because AJAX requests will be sent to the server on every map zoom and pan, basically overloading the server with redundant requests?
What I want to achieve is a nice user experience with big datasets of markers (loading in less than 2 seconds).
Your approach is solid. If at all possible, you'll want to precompute the clusters and cache them server-side, with their update strategy determined by how often the underlying dataset changes.
Google Maps has ~20 zoom levels, depending on where you are on the planet. Depending on how clustered your data is, if you have 200,000 markers total and are willing to show about 500 on the map at a given time, then counting all the cluster locations plus the original markers you'll only end up storing roughly 2n = 400,000 locations server-side with all your zoom levels combined (each coarser zoom level needs roughly half as many cluster points as the level below it, so the levels sum to about n + n/2 + n/4 + ... ≈ 2n).
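A minimal sketch of the grid clustering described in the question, in Node-style JavaScript (the grid size, the bounds object and the marker shape are assumptions for illustration):

// Group markers into an N x N grid over the requested bounds and return
// one "cluster" per non-empty cell: its centroid and the marker count.
function clusterMarkers(markers, bounds, gridSize) {
  var cellW = (bounds.east - bounds.west) / gridSize;
  var cellH = (bounds.north - bounds.south) / gridSize;
  var cells = {};
  markers.forEach(function(m) {
    if (m.lng < bounds.west || m.lng > bounds.east ||
        m.lat < bounds.south || m.lat > bounds.north) return;
    var col = Math.min(gridSize - 1, Math.floor((m.lng - bounds.west) / cellW));
    var row = Math.min(gridSize - 1, Math.floor((m.lat - bounds.south) / cellH));
    var key = row + ':' + col;
    var cell = cells[key] || (cells[key] = { latSum: 0, lngSum: 0, count: 0 });
    cell.latSum += m.lat;
    cell.lngSum += m.lng;
    cell.count += 1;
  });
  return Object.keys(cells).map(function(key) {
    var c = cells[key];
    return { lat: c.latSum / c.count, lng: c.lngSum / c.count, count: c.count };
  });
}

// e.g. clusterMarkers(allMarkers, { west: -74.1, south: 40.6, east: -73.8, north: 40.9 }, 15);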
Possible cluster updating strategies:
Update on every new marker added. Possible for a read-heavy application with few writes, if you need a high degree of data timeliness.
Update on a schedule
Kick off an update if ((there are any new markers since the last clustering pass && the cache is older than X) || there are more than Y new markers since the last clustering pass)
Storing these markers in a database that supports geo-data natively may be beneficial. That lets you query by location with SQL-like statements.
Client-side, I would consider fetching a 50% margin on either side, not 30%. Google zooms in powers of 2, so a 50% margin on each side lets you display one full extra zoom level without another fetch.
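A rough sketch of that padding with the Maps API (the 50% fraction matches the advice above; the fetchMarkers endpoint is a hypothetical placeholder):

// Expand the current viewport bounds by a fraction on every side and use
// the expanded box when asking the server for markers/clusters.
function expandBounds(bounds, fraction) {
  var ne = bounds.getNorthEast();
  var sw = bounds.getSouthWest();
  var latPad = (ne.lat() - sw.lat()) * fraction;
  var lngPad = (ne.lng() - sw.lng()) * fraction;
  return new google.maps.LatLngBounds(
    new google.maps.LatLng(sw.lat() - latPad, sw.lng() - lngPad),
    new google.maps.LatLng(ne.lat() + latPad, ne.lng() + lngPad)
  );
}

map.addListener('idle', function() {
  var padded = expandBounds(map.getBounds(), 0.5); // 50% margin per side
  var sw = padded.getSouthWest(), ne = padded.getNorthEast();
  // Hypothetical endpoint returning markers or clusters for the box.
  fetchMarkers('/markers?south=' + sw.lat() + '&west=' + sw.lng() +
               '&north=' + ne.lat() + '&east=' + ne.lng() +
               '&zoom=' + map.getZoom());
});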
Next, if this application will get heavy use and optimization is worthwhile, I would log server-side when a client zooms in. Try to profile your usage, so you can determine if users zoom in and out often. With solid numbers (like "70% of users zoom in after retrieving initial results, and 20% zoom out"), you can determine if it would be worthwhile to preload the next layer of zoomed data for your users, in order to gain UI responsiveness.
I have a very large array (20 million numbers, output of a sql query) in my MVC application and I need to send it to the client browser (it will be visualized on a map using webGL and the user is supposed to play with the data locally). What is the best approach to send the data? (Please just do not suggest this is a bad idea! I am looking for an answer to this specific question, not alternative suggestions)
This is my current code (called via AJAX), but when the array size goes above 3 million I get an OutOfMemoryException. It seems the serialization (StringBuilder?) fails.
// Pull every point (tens of millions of doubles) out of the database.
List<double> results = DomainModel.GetPoints();
// Serialize the whole list into one JSON string in memory; this is where the
// OutOfMemoryException occurs once the array grows past a few million items.
JsonResult result = Json(results, JsonRequestBehavior.AllowGet);
result.MaxJsonLength = Int32.MaxValue;
return result;
I do not have much experience with web programming/javascript/MVC. I have been researching for the past 24 hours but did not get anywhere, so I need a hint/sample code to continue my research.
NO, NO, NO, you do not send that much information to the browser:
it results in huge memory usage that will most likely crash the web browser (and in your case it in fact does)
it takes a long time to retrieve; not everyone has a good internet connection, and even good connections can fluctuate over time
If you're building a map tool, then I'd recommend splitting the map into tiles and sending only the data corresponding to the portion of the map the user is currently working on. Also for larger zooms you can filter out data, as surely you can't place it all on the map.
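As a rough sketch of that tiling idea (the tile scheme is the standard slippy-map one; the /points endpoint and the WebGL drawing routine are placeholders): convert the viewport to tile indices and fetch one small, cacheable chunk per tile, so the browser never holds the full dataset.

// Standard slippy-map tile index for a lat/lng at a given zoom level.
function tileIndex(lat, lng, zoom) {
  var n = Math.pow(2, zoom);
  var x = Math.floor((lng + 180) / 360 * n);
  var latRad = lat * Math.PI / 180;
  var y = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
  return { x: x, y: y };
}

// Fetch only the tiles that cover the current viewport.
function loadVisiblePoints(map) {
  var zoom = map.getZoom();
  var ne = map.getBounds().getNorthEast();
  var sw = map.getBounds().getSouthWest();
  var min = tileIndex(ne.lat(), sw.lng(), zoom); // top-left tile
  var max = tileIndex(sw.lat(), ne.lng(), zoom); // bottom-right tile
  for (var x = min.x; x <= max.x; x++) {
    for (var y = min.y; y <= max.y; y++) {
      // Placeholder endpoint that returns the pre-filtered points for one tile.
      fetch('/points/' + zoom + '/' + x + '/' + y + '.json')
        .then(function(r) { return r.json(); })
        .then(drawPointsWithWebGL); // placeholder WebGL drawing routine
    }
  }
}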
Edit: Another alternative would be to ask your users to use machines with at least 16 GB of RAM, or whatever RAM size is needed to deal with your huge data.
I'm trying to load a 500 MB shapefile into GeoServer and get it to respond to a client request within a reasonable time frame (it currently doesn't respond even after 30 minutes of waiting). I want it to deliver image tiles; I'm using the Google Maps API v3 ImageMapType to automatically request the correct tiles from the GeoServer WMS URL. The layer consists of hundreds of thousands of polygons for coastal Tasmania, so the layer is very sparse. I've tried:
Creating a tile cache (but the ETA is 15 years in zoom range 13 to 18) and it creates a lot of blank tiles (est. >95%)
Removing all attributes in the layer before loading into GeoServer (still waited an hour for it to begin seeding tile cache and still gave no progress)
Merging the polygons so there are only 10 polygons in the layer (same behaviour)
Using the bounds options in the tile cache (same behaviour)
[edit] Reprojecting the layer into EPSG:900913 (same behaviour)
Cutting the layer into 12 sections to reduce empty space, loading them as a Layer Group, and seeding the tile cache from this (even 1 of these layers wouldn't begin seeding - too big still?)
The next option we're looking at is breaking the layer into 1km grids and loading all 8000 layers as a Layer Group. I'm doubtful this will work. However 1 of these layers DID work when seeding the cache - and it only took a few seconds for all zoom levels.
How do I get GeoServer to serve this large, sparse data? Surely other people have this issue? Do I need to do something special with the layer itself? Or is there something in GeoServer that I should be configuring?
To start: a 500MB map should be peanuts for GeoServer, unless you bought your hardware well over a decade ago. I work with much larger datasets on a daily basis.
Perhaps you are letting GeoServer access the shapefile directly from disk?
I'd recommend the following setup:
Make sure you have enough RAM installed. I just saw that I could buy 24GB for less than 80 euros. That should be enough to cache your database entirely;
Install Postgres with PostGIS extensions;
To make sure that no re-projection is necessary, you can pre-convert all coordinates to Google's Mercator projection (EPSG:900913);
Make sure you have a spatial index on the geometry column;
If your map is static, you can pre-render the tiles. This is really going to be a big boost in performance. Try to find out up to which zoom level you can pre-render within a reasonable time. The zoomed-in images are usually faster anyway, because fewer elements are involved in creating each image;
Furthermore, I doubt that you're ever going to get a result if it has already taken 30 minutes; the web server will have timed out long before that. Download the image manually by pasting the URL into your browser. If you don't see a picture, open the downloaded file in a text editor: there may be a textual error message instead of binary image data, and that message usually describes what the problem is.
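For reference, the ImageMapType/WMS wiring from the question looks roughly like this, and you can copy a single tile URL out of getTileUrl and paste it into the browser as suggested above (the GeoServer host, workspace and layer name are placeholders):

// Convert a Google tile coordinate to a bounding box in EPSG:900913
// and request that tile from the GeoServer WMS.
var MERCATOR_EXTENT = 20037508.342789244;

function wmsTileUrl(coord, zoom) {
  var tileSpan = 2 * MERCATOR_EXTENT / Math.pow(2, zoom); // metres per tile
  var west = -MERCATOR_EXTENT + coord.x * tileSpan;
  var north = MERCATOR_EXTENT - coord.y * tileSpan;
  var bbox = [west, north - tileSpan, west + tileSpan, north].join(',');
  return 'http://your-geoserver/geoserver/wms' +        // placeholder host
         '?service=WMS&version=1.1.1&request=GetMap' +
         '&layers=workspace:coastal_tasmania' +         // placeholder layer
         '&styles=&srs=EPSG:900913&format=image/png&transparent=true' +
         '&width=256&height=256&bbox=' + bbox;
}

var wmsLayer = new google.maps.ImageMapType({
  getTileUrl: wmsTileUrl,
  tileSize: new google.maps.Size(256, 256),
  name: 'GeoServer WMS'
});
map.overlayMapTypes.push(wmsLayer); // map is your google.maps.Map instance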
I have a lot of markers on my Google Maps v3 map. My map is running very slowly because there are too many markers on it.
So, what about indexing the database to speed up loading the markers? Will that have any effect? If not, do you have any other ideas? Thanks :D
It depends on where the bottleneck is.
If the bottleneck is at the database seek, adding indexes to your database would definitely help. You might want to refer to the MySQL indexing documentation for guidance.
Otherwise, if the bottleneck is at the front end, i.e. the JavaScript is too heavy from having to load too many markers at once, here are some tips:
Try not to have too many markers displayed at once.
If the markers are related to time, you may want to remove some old markers off your maps to free up some memory.
If there are a lot of markers to be displayed all at once, you may wish to use a timer (setTimeout) to stagger the display, for example showing a marker every 100 ms instead of all at once (see the sketch after this list).
If possible, redesign your UI to show only, say, the 20 most relevant markers at a time while hinting to your users that they can load another 20 if needed.
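A small sketch of the staggered display from the setTimeout tip above (the 100 ms interval is just the example figure):

// Add markers one at a time instead of all at once, so the UI thread
// gets a chance to breathe between additions.
function addMarkersStaggered(map, positions, intervalMs) {
  var i = 0;
  function addNext() {
    if (i >= positions.length) return;
    new google.maps.Marker({ position: positions[i], map: map });
    i++;
    setTimeout(addNext, intervalMs);
  }
  addNext();
}

// e.g. addMarkersStaggered(map, latLngArray, 100); // one marker every 100 ms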