SQL db vs. Fusion Tables performance in Google Maps

I'm developing an application that stores geolocation data in a SQL table and generates a map of all the points/addresses entered by users. I would like this to scale to a large number of points, possibly 50,000+, while still performing well. Looking over the Google Maps API articles, however, I see they say performance can be greatly improved by using Fusion Tables instead.
Does anyone have experience with this? Would performance suffer if I loaded thousands of markers onto a map from a SQL table? Does KML or some other strategy seem a better fit?
Once I'm zoomed out enough I could use MarkerClusterer, but I'm not sure whether that helps performance either, since I'm still loading all the geocodes into the page.

You can't really compare the two technologies.
When you load thousands of markers from a SQL database, you have to create every single marker, which of course performs badly, because you need to send the data for thousands of markers to the client and create the markers on the client side.
When you use Fusion Tables, you don't load markers, you load tiles. It doesn't matter how many markers are visible on the tiles; the performance will always be the same.
KML is not an option, because the number of features is limited (currently to 1,000).
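To make the difference concrete, here is a minimal sketch contrasting the two approaches; the `points` array (rows fetched from your SQL backend) and the table id are placeholders, not anything from the original question:

```javascript
// Per-marker approach: every row from the SQL backend becomes its own
// client-side object, so the cost grows with the number of points.
points.forEach(function (p) {
  new google.maps.Marker({
    position: new google.maps.LatLng(p.lat, p.lng),
    map: map
  });
});

// Fusion Tables approach: the browser only downloads pre-rendered image
// tiles, so the cost stays flat no matter how many rows the table holds.
var ftLayer = new google.maps.FusionTablesLayer({
  query: { select: 'geometry', from: 'YOUR_FUSION_TABLE_ID' },
  map: map
});
```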

Well, perhaps the only advantage is that your data records can be kept private if you use your own SQL table instead of Fusion Tables - but only if this is a concern in your project.
Daniel

Related

Way to use Fusion Tables with the Google Maps API while maintaining privacy?

It's my understanding that the only way to use a private Fusion Table with the Maps API is if you're using the Business version of the API. Only public and unlisted tables can be used normally. I would really like to realize the performance benefits of Fusion Tables but am not interested in paying for a business license.
My question is this: How secure could an unlisted table be? The data I'll be displaying is sensitive, but not critical. It is unlikely to be sought out specifically, or scraped by a bot. (no addresses, names of people, phone numbers, etc).
If Fusion Tables really won't be an option for me and my sensitive data, at what point with MySQL would I start to see serious degradation in an average browser, based on the number of markers? I estimate the maximum number of points in the table to be somewhere around 1,000-2,000.
The privacy settings (public or unlisted) are only required for a FusionTablesLayer.
You may use two tables: one Fusion Table (public or unlisted) to store the geometry and plot the markers, and a second (private) table where you store the sensitive data. Use a common key for the rows, so you'll be able to request the sensitive data from table #2 based on the key returned by table #1; a sketch of this pattern follows below.
Which kind of table you use for table #2 is up to you. Data in a private Fusion Table is accessible after authentication, but I would prefer my own (MySQL) DB for sensitive data. (It once happened to me that the data of a Fusion Table was accessible via the FT API although the download option was disabled, so I currently wouldn't rely too much on the security - note that Fusion Tables are still experimental.)
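A minimal sketch of that two-table pattern, assuming a hypothetical /details endpoint in front of the private table and a column named key holding the common key (both names are illustrative, not from the answer):

```javascript
// Table #1: public/unlisted Fusion Table holding only geometry + join key.
var layer = new google.maps.FusionTablesLayer({
  query: { select: 'geometry', from: 'YOUR_FUSION_TABLE_ID' },
  map: map,
  suppressInfoWindows: true   // we build our own window from private data
});

// On click, read the common key from table #1 and fetch the sensitive
// columns from your own server in front of table #2 (e.g. MySQL).
google.maps.event.addListener(layer, 'click', function (e) {
  var key = e.row['key'].value;   // 'key' column name is an assumption
  fetch('/details?key=' + encodeURIComponent(key))
    .then(function (res) { return res.json(); })
    .then(function (data) {
      new google.maps.InfoWindow({
        content: data.html,       // server returns only what may be shown
        position: e.latLng
      }).open(map);
    });
});
```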

How many data points can the Google Maps API show at one time?

I have a large volume of data (100,000 points). Is it possible to show all the data points at the same time? Someone said that the Google Maps API 3 cannot load more than 3000 data points. Is this true? How many data points can it show at one time?
You might want to take a look at this article from the Google Geo APIs team, discussing various strategies to display a large number of markers on a map.
Storing the data using Fusion Tables in particular might be an interesting solution.
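For instance, one of the strategies that article covers is client-side clustering with the open-source MarkerClusterer utility library. A minimal sketch, where the `points` array of loaded records is a placeholder:

```javascript
// Build plain markers without attaching them to the map directly;
// the clusterer decides what is actually drawn at each zoom level.
var markers = points.map(function (p) {
  return new google.maps.Marker({
    position: new google.maps.LatLng(p.lat, p.lng)
  });
});

// Renders a handful of cluster icons instead of thousands of markers.
var clusterer = new MarkerClusterer(map, markers, {
  gridSize: 60,   // pixel size of each clustering cell
  maxZoom: 15     // beyond this zoom, show individual markers
});
```

Note that even with clustering you still have to download and hold all 100,000 records in the browser, which is why server-side approaches like Fusion Tables tiles scale further.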

Advantages of ScriptDb over Fusion Tables

I have been looking through the new ScriptDb functionality. I am sure it is 'better' than Fusion Tables as a data store; I am just not sure how or why. Would anyone be able to suggest why it would be preferable (although not universally so, I am sure) over a Fusion Table?
Here are a few points to justify "why use ScriptDb":
1. You do not have to use URLFetch to fetch the data, as you do with Fusion Tables, and the quota for URLFetch is relatively low (as per my observation).
2. ScriptDb is natively supported in Apps Script, so it is faster and more robust than your own implementation for accessing Fusion Tables (see the sketch below).
3. ScriptDb is a key-value store (in the form of JSON) whose latency increases roughly linearly as the DB size grows, whereas RDBMS latency grows much faster with DB size. I am not sure how Fusion Tables behave as the data size increases, though.
4. The ScriptDb service has a far higher quota than URLFetch.
5. You can make at most 5 queries per second against a Fusion Table, but ScriptDb declares no such query limit.
6. The size limits of ScriptDb are 50 MB for consumer accounts, 100 MB for Google Apps accounts, and 200 MB for Google Apps for Business/Education/Government accounts. I think this is sufficient for applications developed with Apps Script.
You can check the FAQ section at the link below for more detail.
https://developers.google.com/apps-script/scriptdb
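For reference, a minimal Apps Script sketch of ScriptDb's native API; the record fields are arbitrary examples:

```javascript
function scriptDbExample() {
  var db = ScriptDb.getMyDb();                 // no URLFetch involved

  // Save a JSON-like record; ScriptDb assigns it an id automatically.
  db.save({ type: 'location', city: 'Berlin', lat: 52.52, lng: 13.405 });

  // Query by example: matches all records with these field values.
  var results = db.query({ type: 'location', city: 'Berlin' });
  while (results.hasNext()) {
    var item = results.next();
    Logger.log(item.lat + ', ' + item.lng);
  }
}
```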

Best Practice around Geocoding?

Can anyone weigh in on best practice around geocoding location data?
We have several hundred "locations" (and growing) that are regularly inserted into the database.
We will occasionally run queries to plop these onto a Google Map using the JavaScript API.
Do you think it's better to geocode these addresses when they're inserted into the database (i.e., add lat and lng fields, populate them, and store them; the addresses won't be modified manually), or to call a geocoding service whenever we generate our maps?
Storing the lat/lng values will be much faster when it comes to using them. As long as you're happy that they won't be moving around on their own between data entry and map drawing, then do that.
We call the geocoding api and store the latitude/longitude upon insert of a new record or update of any address field. Not only is this quicker when adding the location to a map, but it reduces repeated calls to the API. Given that there is a limit to the number of calls you may make in a day (enforced as calls per second) you don't want to do it any more than you have to.
Storing the geocodes (but updating them occasionally) is a recommended best practice by Google.
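A minimal client-side sketch of the geocode-once-at-insert approach; saveLocation() is a hypothetical call that writes to your own lat/lng columns:

```javascript
var geocoder = new google.maps.Geocoder();

function geocodeAndStore(address) {
  geocoder.geocode({ address: address }, function (results, status) {
    if (status === google.maps.GeocoderStatus.OK) {
      var loc = results[0].geometry.location;
      // Persist once, so drawing the map never re-calls the geocoder.
      saveLocation(address, loc.lat(), loc.lng()); // hypothetical persistence call
    } else {
      // e.g. OVER_QUERY_LIMIT: queue a retry instead of losing the record.
      console.log('Geocode failed for "' + address + '": ' + status);
    }
  });
}
```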

MySQL GIS and Spatial Extensions - how to map regions and query against them

I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information.
This is going to be time consuming if there are many users involved. So I would like to map areas (countries, cities, I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time.
I have read the basics of GIS and spatial querying on the mysql website but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area.
Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any pre-existing databases of polygons describing countries would be really helpful too.
Many thanks to anyone who takes the time :)
I would recommend against using MySQL for spatial analysis. It has only implemented bounding-box comparisons, and precise geometry is not implemented for most spatial functions. I would recommend using either PostGIS, or perhaps SpatialCouch, or extending MongoDB; these would be much better for what you look to be doing.
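That said, if you have to stay on plain MySQL, a common workaround is to prefilter with a cheap bounding box on indexed lat/lng columns and only then rank by exact haversine distance. Here is a sketch that builds such a query; the users table and its columns are assumptions, and in production you should use bound parameters rather than string concatenation:

```javascript
// Returns SQL that finds users within radiusMeters of (lat, lng).
function proximitySql(lat, lng, radiusMeters) {
  // Rough degree spans for the bounding-box prefilter
  // (~111,320 m per degree of latitude).
  var degLat = radiusMeters / 111320;
  var degLng = radiusMeters / (111320 * Math.cos(lat * Math.PI / 180));

  return (
    'SELECT id, lat, lng, ' +
    // Haversine great-circle distance in metres (Earth radius ~6371 km).
    '6371000 * 2 * ASIN(SQRT(' +
    'POW(SIN(RADIANS(lat - (' + lat + ')) / 2), 2) + ' +
    'COS(RADIANS(' + lat + ')) * COS(RADIANS(lat)) * ' +
    'POW(SIN(RADIANS(lng - (' + lng + ')) / 2), 2))) AS dist_m ' +
    'FROM users ' +
    'WHERE lat BETWEEN ' + (lat - degLat) + ' AND ' + (lat + degLat) + ' ' +
    'AND lng BETWEEN ' + (lng - degLng) + ' AND ' + (lng + degLng) + ' ' +
    'HAVING dist_m <= ' + radiusMeters + ' ' +
    'ORDER BY dist_m'
  );
}
```

The bounding box lets MySQL use ordinary B-tree indexes on lat and lng to discard most rows cheaply, so the expensive trigonometry only runs on the small candidate set.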