So I am trying to think of a way to handle two things. It may end up being a two-step process in the end, but I was looking for input.
The first thing I need to do is accommodate locations with the same address. The two scenarios that come to mind are businesses that share a location and apartment buildings.
The second thing I need to accommodate is a business/nonprofit with no headquarters, just a town. Right now I just map them to the town center, but if multiple businesses have no headquarters I run into the first problem.
So I did some Googling and found a solution that involved having a list of locations alongside the map so you can click on them and the info window will pop up. This isn't a solution for me though.
What I was thinking of was using the location to map the first point, and for the second point and any after that, moving the marker over .05 degrees or something marginal so that each marker shows up. The inherent problem with that is: what happens if the offset makes 12 Main Street turn into 13 Main Street?
So any thoughts on what I could do?
Thanks
Levi
There's an extension by Martin Pearman called ClusterMarker that detects any groups of two or more markers whose icons visually intersect when displayed. Each group of intersecting markers is then replaced with a single cluster marker that looks different. The cluster marker, when clicked, simply centres and zooms the map in on the markers whose icons previously intersected.
A more advanced approach to this problem might be SQL: the same address means the same coordinates, so a GROUP BY (or HAVING COUNT(*) > 1) query would let you collapse them into multi-record coordinates, one row per unique location.
In fact, before you can cluster client-side you have to send all the data to the browser first, which in this case means transferring much more than required. That results in longer loading times and higher RAM utilization client-side, plus all the useless JS processing in the clusterer.
Client-side clustering is only recommended when the coordinates are close to each other, not when they are absolutely identical.
Think about it...
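For illustration, here is a minimal client-side sketch of how such grouped records could be rendered with the Maps JavaScript API v3: one marker per unique coordinate, with every record at that address listed in its info window. The shape of the groups payload and its field names are assumptions, not something from the original posts.

```
// Assumes the server has already grouped rows by identical lat/lng,
// e.g. via GROUP BY, and returns one entry per unique coordinate.
var groups = [
  { lat: 40.7128, lng: -74.0060, records: ['Business A', 'Business B'] },
  { lat: 40.7130, lng: -74.0050, records: ['Nonprofit C'] }
];

var infoWindow = new google.maps.InfoWindow();

groups.forEach(function (group) {
  var marker = new google.maps.Marker({
    position: { lat: group.lat, lng: group.lng },
    map: map // an existing google.maps.Map instance
  });
  google.maps.event.addListener(marker, 'click', function () {
    // One marker per address; the info window lists every record at it.
    infoWindow.setContent(group.records.join('<br>'));
    infoWindow.open(map, marker);
  });
});
```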
At the moment I'm using Google Maps v.3 API for drawing markers on the map.
I have around 500 markers in total.
For displaying purposes I use markerCluster and group markers using this tool on the client side in the browser.
However, I plan to expand the locations number and assume it can grow to 100K or even 200K quickly.
I did some stress tests and realized that the current solution basically kills the browser at about 10-20K markers.
So my question is: what is the best approach to drawing that many markers (not necessarily with Google Maps)?
I've read posts with similar questions, e.g.:
Showing many markers in Google Maps
Best solution for too many pins on google maps
Basically people suggest using some clusterer for display purposes, which I already use.
Or to use fusion tables for retrieving data, which is not an option, as the data has to stay on my server. Also I assume the display functionality is limited with fusion tables.
I'm thinking about implementing the following scenario:
on every map zoom / pan, send an AJAX request with the bounds of the visible view, expanded by about 30% on all sides, and retrieve only the markers that fall into this geographic area.
The 30% is added in case the user zooms out, so that I can quickly display the surrounding markers and then retrieve the rest (a broader territory) in the background.
When the number of markers is more than 50, I plan to apply clustering for display purposes. But since the JavaScript markerCluster is quite slow (not so much markerCluster itself as Google Maps, which still has to place all the marker locations), I plan to do the clustering on the server side: split the bounds of the displayed map into roughly a 15*15 grid, drop markers into the matching cells, and send the client just the clusters with the number of markers inside each (e.g. like for a heatmap), then display those clusters as markers.
Could anyone who has done something similar give some insight? Does it make sense in general, or is it a bad approach because AJAX requests will be sent to the server on every map zoom and pan, basically overloading the server with redundant requests?
What I want to achieve is a nice user experience on big datasets of markers (loading in less than 2 seconds).
Your approach is solid. If at all possible, you'll want to precompute the clusters and cache them server-side, with their update strategy determined by how often the underlying dataset changes.
Google Maps has ~20 zoom levels, depending on where you are on the planet. Depending on how clustered your data is, if you have 200,000 markers total and are willing to show about 500 on the map at a given time, then counting all the cluster locations and original markers, you'll only end up storing roughly 2n = 400,000 locations server-side across all your zoom levels combined (each coarser zoom level holds roughly half as many points as the one below it, so the totals form a geometric series that sums to about 2n).
Possible cluster updating strategies:
Update on every new marker added. Possible for a read-heavy application with few writes, if you need a high degree of data timeliness.
Update on a schedule
Kick off an update if ((there are any new markers since the last clustering pass && the cache is older than X) || there are more than Y new markers since the last clustering pass)
Storing these markers in a database that supports geo-data natively may be beneficial. It lets you query locations with SQL-like statements.
Client-side, I would consider fetching a 50% margin on either side, not 30%. Google zooms in powers of 2. This will allow you to display one full zoom level.
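As a rough sketch of that margin, here is one way the client could re-query on the map's idle event, padding the visible bounds by 50% of their width and height on each side; the /markers endpoint and the renderMarkersOrClusters callback are assumptions for illustration.

```
// Re-query the server whenever the viewport settles, padding the
// visible bounds by 50% on each side (one full zoom-out step).
map.addListener('idle', function () {
  var bounds = map.getBounds();
  var ne = bounds.getNorthEast();
  var sw = bounds.getSouthWest();
  var latPad = (ne.lat() - sw.lat()) * 0.5;
  var lngPad = (ne.lng() - sw.lng()) * 0.5;

  // Hypothetical endpoint returning markers inside the padded box.
  fetch('/markers?north=' + (ne.lat() + latPad) +
        '&south=' + (sw.lat() - latPad) +
        '&east=' + (ne.lng() + lngPad) +
        '&west=' + (sw.lng() - lngPad))
    .then(function (res) { return res.json(); })
    .then(renderMarkersOrClusters); // existing rendering function
});
```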
Next, if this application will get heavy use and optimization is worthwhile, I would log server-side when a client zooms in. Try to profile your usage, so you can determine if users zoom in and out often. With solid numbers (like "70% of users zoom in after retrieving initial results, and 20% zoom out"), you can determine if it would be worthwhile to preload the next layer of zoomed data for your users, in order to gain UI responsiveness.
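For reference, here is a minimal sketch of the server-side grid clustering described in the question, written as a plain JavaScript function; the grid size and the marker/bounds shapes are assumptions.

```
// Bin markers that fall inside the requested bounds into a grid and
// return one cluster (centroid + count) per non-empty cell.
function clusterMarkers(markers, bounds, gridSize) {
  gridSize = gridSize || 15;
  var latStep = (bounds.north - bounds.south) / gridSize;
  var lngStep = (bounds.east - bounds.west) / gridSize;
  var cells = {};

  markers.forEach(function (m) {
    if (m.lat < bounds.south || m.lat > bounds.north ||
        m.lng < bounds.west || m.lng > bounds.east) {
      return; // outside the requested viewport
    }
    var row = Math.min(gridSize - 1, Math.floor((m.lat - bounds.south) / latStep));
    var col = Math.min(gridSize - 1, Math.floor((m.lng - bounds.west) / lngStep));
    var key = row + ':' + col;
    var cell = cells[key] || (cells[key] = { latSum: 0, lngSum: 0, count: 0 });
    cell.latSum += m.lat;
    cell.lngSum += m.lng;
    cell.count += 1;
  });

  // Each cluster is sent to the client as { lat, lng, count }.
  return Object.keys(cells).map(function (key) {
    var c = cells[key];
    return { lat: c.latSum / c.count, lng: c.lngSum / c.count, count: c.count };
  });
}
```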
I have lists of between 100 and 10000 GPS locations from vehicles driving around during some timespan.
I want to display that on a Google Map, using their API (with the Business licence if that matters).
As I see it, there are 3 options, all with problems:
1) Draw a polyline between all positions. Some positions are not that accurate so it looks like the route hits some buildings next to the road. I know that all positions are on a road. Also, it cuts some corners, and it doesn't look professional.
2) Display just the GPS positions in the map. This is not good either since the GPS positions are off the road (which they shouldn't be).
3) Draw the route using Maps API. This limits us to using 23 waypoints between the start and end positions. The route looks excellent and it follows the road (GPS positions next to the road are moved to the road automatically). But especially for longer time spans, this option means that the route displayed is incorrect (Google guesses the route taken between the waypoints - so from the 10000 GPS positions it only uses 23). And we can't display a clearly incorrect route.
Does anyone have a good/better way to show a driven route on Google Maps that follows the road but takes into account all/many given GPS positions?
Could you not chain the route using the Maps API? It's not something I've done before, so this answer could be a little vague, but would it not be possible to segment your list of coordinates into chunks of 23, fire the requests, and then display the resultant routes on the map?
I'm not overly sure of the return format, so it may be necessary to mess with the output in order to give the illusion of a single route. Also, you will likely not need to use every coordinate (perhaps exclude those that are within a small distance of each other, for example while stuck at lights); otherwise the requests may take a long time.
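To make the chunking idea concrete, here is a very rough sketch using the v3 DirectionsService: each request carries an origin, a destination, and up to 23 intermediate waypoints (25 points per leg), and consecutive legs share an endpoint so they join up. Error handling and rate limiting are omitted, and the point format is an assumption.

```
// Split the GPS trace into legs of at most 25 points each
// (origin + up to 23 waypoints + destination) and render each leg.
function drawChunkedRoute(map, points) {
  var service = new google.maps.DirectionsService();

  for (var i = 0; i < points.length - 1; i += 24) {
    var chunk = points.slice(i, i + 25);
    var waypoints = chunk.slice(1, chunk.length - 1).map(function (p) {
      return { location: new google.maps.LatLng(p.lat, p.lng), stopover: false };
    });

    service.route({
      origin: new google.maps.LatLng(chunk[0].lat, chunk[0].lng),
      destination: new google.maps.LatLng(chunk[chunk.length - 1].lat, chunk[chunk.length - 1].lng),
      waypoints: waypoints,
      travelMode: google.maps.TravelMode.DRIVING
    }, function (result, status) {
      if (status === google.maps.DirectionsStatus.OK) {
        // One renderer per leg; suppress default markers so the legs
        // read as a single continuous route.
        new google.maps.DirectionsRenderer({
          map: map,
          directions: result,
          suppressMarkers: true,
          preserveViewport: true
        });
      }
    });
  }
}
```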
We're actually moving away from option 3. The reason is that when the positions get moved to the nearest road, that is not always correct (for example if you're driving in a parking lot), and since that doesn't always give the correct route, we won't take that path.
So I don't know if it's possible to chain several routes in the same map.
I'm trying to build a map with the following algorithm:
Wait for a pan or zoom to occur.
Query for all streets visible in the viewport (extent).
Color every visible street with a predefined color.
Example:
I want to show the number of businesses on each street, or the number of crimes committed on each street.
I have a DB which holds this kind of information (streetname, data), but each row doesn't have the location data.
Therefore, after each map zoom or pan, I cannot query all of it by a geographical bounding rectangle; it would be far more efficient to use Google's own DB and query it by street names.
I know how to register to pan and zoom events.
I know how to calculate the viewport coordinates.
I know how to color a single street.
How can I get a list of all streets visible in the viewport?
Any other solutions or architectures are welcome.
The preferred solution will not use Google's DirectionsService or DirectionsRenderer, since they slow down the map.
My understanding is that what you are asking is not possible with Google's APIs. Reverse geocoding inside a polygon is not a service they offer. There are some posts on other sites (e.g. https://gis.stackexchange.com/questions/22816/how-to-reverse-geocode-without-google) that reference gisgraphy.com, which looks like a pretty neat reverse geocoding tool.
This still does not address your all-streets-in-a-polygon problem, however. I think your only option would be to get your hands on the data (OpenStreetMap) and write the code yourself. Further, if you are going to do this for a large area, I would take an approach like the one I recommended here with grids: https://stackoverflow.com/a/18420564/1803682
I would create my grid elements, and for each street calculate all the grids to which it belongs and store in the database. Then when you search a polygon, you would calculate all the grids the polygon overlaps, and can then test the subset of road data in each of those squares to determine overlap.
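A rough sketch of that grid indexing, using bounding boxes and a fixed cell size in degrees; the cell size and key format are assumptions. Streets and query polygons are both indexed this way, so the exact overlap tests only have to run against streets that share at least one cell key with the query.

```
// Map a bounding box to the set of grid-cell keys it touches.
var CELL_SIZE = 0.01; // degrees of latitude, roughly 1 km; tune as needed

function cellKeysForBox(minLat, minLng, maxLat, maxLng) {
  var keys = [];
  for (var r = Math.floor(minLat / CELL_SIZE); r <= Math.floor(maxLat / CELL_SIZE); r++) {
    for (var c = Math.floor(minLng / CELL_SIZE); c <= Math.floor(maxLng / CELL_SIZE); c++) {
      keys.push(r + ':' + c);
    }
  }
  return keys;
}
```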
I looked into this and abandoned a similar requirement a few months back and still have a desire to implement it. Most of the point/line in polygon work is happening on data created in my application (i.e. not street data) and right now that is the only data I will be including. What I am trying to say is - I hope someone gives you a better answer.
Update:
For what you are asking I still believe you will need to use a mix of your own database based on OpenStreetMap and some kind of grid analysis carried out in advance. If you have some time to commit to the project this should not be too awful to process. The database will be large, and the calculations needed will likely require a significant amount of one-time / upfront processing time. As far as highlighting routes/roads/whatever within the viewport, there are lots of way to accomplish this using the API - example here which I found useful: polyline snap to road using google maps api v3
Also useful: http://econym.org.uk/gmap/snap.htm
Note that one-way streets may give some grief if you use the Directions API to snap to a street, and you will likely have to watch for this and correct or reverse the start/end points.
Google would recommend using its Geocoding Service in order to populate your database with the co-ordinates. You can then use the LatLngBounds class method contains to check whether your points lie within the viewport. The advantage of this approach is that you only need to geocode the information once and then store it, versus sending geocoding requests each time the viewport changes.
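A minimal sketch of that check, assuming the street coordinates have already been geocoded once and stored as simple { name, lat, lng } objects (that array shape is an assumption):

```
// streets were geocoded once and stored as { name, lat, lng }.
function streetsInViewport(map, streets) {
  var bounds = map.getBounds();
  return streets.filter(function (s) {
    return bounds.contains(new google.maps.LatLng(s.lat, s.lng));
  });
}

map.addListener('idle', function () {
  var visible = streetsInViewport(map, streets);
  // visible[i].name can now be matched against the (streetname, data)
  // rows in the database and each visible street colored accordingly.
});
```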
An alternative, efficient way of displaying this kind of data may be to use Google Fusion Tables. This greatly simplifies the integration of the data with the map.
I want a list of locations (coordinates) for all possible colonies/neighborhoods of some Indian cities. Take for example Delhi. Can this data be obtained with the Places API?
The only thing that comes to my mind is to use a query like -
https://maps.googleapis.com/maps/api/place/search/xml?location=28.540346,77.210026&radius=500&types=administrative_area_level_1|administrative_area_level_2|administrative_area_level_3|locality|neighborhood|street_address|sublocality|sublocality_level_4|sublocality_level_5|sublocality_level_3|sublocality_level_2|sublocality_level_1|subpremise&sensor=false&key=MYKEY
and then keep changing the radius by 500 till the whole city is covered.
Is there a better way of doing this?
Given how often you would need to do this for your map, and since caching that data goes against the terms of service, this is not a great approach. If your map gets any decent usage, you'll rapidly hit your quota. Plus, you'd only get the center points of the colonies/neighborhoods. I'd recommend trying to find another source of that data you can download. The Places API was not designed with this in mind.
If I had lat/long data for all our leads in Salesforce, is there a way to write a query to group them, or say list all the leads within 10 miles of San Francisco, CA ?
[EDIT: Clarification]
I have thousands of leads with both a full address, and long/lats.
I want to build a query on these leads that will give me all of the leads near San Francisco, CA. This means doing GIS type work within salesforce.
I could of course filter specifically on city, zip codes, or area code, but this presents some problems when trying to roll up a whole metro area.
Yes. You need to reverse geocode them with a tool/service. In the past I have used Maporama's service, but it was quite expensive, and that was before Google Maps and Virtual Earth existed, so I am sure there is something cheaper (free) out there now.... Googling around I have found this and this
EDIT:
OK, from what I understand you are trying to calculate the distance between 2 lat/long points. I would start by discounting the ones that are outside your sphere of (let's say) 10 miles. So from your central point you will want to get the coordinates 10 miles east, west, south, and north. To do this you need to use the great-circle distance formula.
From that point you have your Salesforce data; if you wish to break this data up further, you then need to order the points by distance from the central point. To do this you can use the Haversine formula.
I am not sure what your language preference is, so I just included some examples in SQL (mainly) and C#
Haversine Formula in C# and in SQL
Determine the distance between ZIP codes using C#
Great Circle SQL
Great Circle 2
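For reference, a plain JavaScript version of the Haversine calculation (3959 is the mean Earth radius in miles; use 6371 for kilometres):

```
// Great-circle distance between two lat/long points via the Haversine formula.
function haversineMiles(lat1, lon1, lat2, lon2) {
  var R = 3959; // mean Earth radius in miles
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// e.g. leads within 10 miles of San Francisco (37.7749, -122.4194) satisfy
// haversineMiles(37.7749, -122.4194, lead.lat, lead.lng) <= 10
```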
Use GeoHash.org (either as a web service or implement the algorithm). It hashes your lat-long coords into a form that appears similar for nearby places. For example A may have a hash like "akusDf3af" and B might have a hash like "akusDf3b2" if they are nearby. Then do a SOQL query that looks for places starting with the same n characters as a known location. Your n will determine the radius of the lookup.
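A tiny sketch of that prefix match, assuming each record already stores a precomputed geohash; note that points near a cell boundary can end up with different prefixes even when they are close together, so treat this as an approximation.

```
// Nearby records share a geohash prefix, so a "nearby" lookup becomes
// a simple prefix match on a stored field.
function nearby(records, targetGeohash, precision) {
  var prefix = targetGeohash.slice(0, precision); // shorter prefix = wider radius
  return records.filter(function (r) {
    return r.geohash.indexOf(prefix) === 0;
  });
}
```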
These are some great technical solutions that can provide very exact answers, but two things to consider:
geospatial proximity does not map neatly to responsibility
Ownership calculation seems to be done best through postal code lookups or other rules that don't allow for gaps or overlaps. Otherwise, you'll have two (or more) salespeople fighting over leads that are close to both of them, and ignore those leads that are far away from both of them.
So, if you're using geo-calculations like those above to assign ownership, just acknowledge that the system will leak and create business rules to accommodate that. But a simple postal lookup to define territories (as Salesforce's own territory management feature does) might be better.
I'd suggest the problem we're trying to solve geospatially is not who owns which lead. Rather, given all the leads you own, which are nearby?
maps often offer more data per pixel than columnar reports
Again, geospatial data in a report may not be the best answer. A lead 50km away, but along a major road, is more interesting than another lead 50km away on the other side of a mountain or lake. Or a lead close to other leads is more interesting than a lead by itself.
A report can't show this, but a map can.
Salesforce has some great examples of Google Maps integrations. Instead of a columnar report called "My Nearby Leads", why not a Visualforce page with a Google map inside? You're giving the user far more information than a columnar report could. They might like it better, and it's easier to implement than trying to calculate some of the equations above.
Just another perspective that may (or may not) be appropriate to the problem at hand.
This post is really old, but is showing up at the top of Google results, so I figured I would post some info to it anyways.
2 nice mapping tools are batchgeo.com and geocod.io. Geocod.io can even give you lat and long coordinates from an address.
If you just need a one time calculation, you can use Excel. Export all your leads with the lat and long. Then go to Google Maps and get the lat and long in decimal degrees for the city center of wherever you want to measure to.
Then use this formula in Excel to calculate the distance between the coordinates in miles. Lat1dd and Long1dd are the coordinates for one point, and lat2dd and long2dd are the coordinates for the other point.
=3963*ACOS(COS(RADIANS(90-lat1dd))*COS(RADIANS(90-lat2dd))+SIN(RADIANS(90-lat1dd))*SIN(RADIANS(90-lat2dd))*COS(RADIANS(long1dd-long2dd)))
After you run it, just sort the results from smallest to largest to get those results that are the closest.
I haven't done this next part yet, but conceptually it should work. We have a field that lists the major market each account is in. Example, Chicago IL. I am going to build a trigger or formula field that essentially says IF(Market="Chicago IL") then use X and Y for the lat and long. These will be hardcoded as the city center for that specific market. The query will then run each individual account's lat and long against the one from the city center to calculate a distance.
If you wanted to break the market into different zones, you could adjust your formula so it uses < and > on the lat and long fields. Everything less than X but greater than Y goes in Zone A, etc.
Hope this helps someone.