I have been looking through the new ScriptDb functionality - I am sure it is 'better' than Fusion Tables as a data store; I am just not sure how or why. Would anyone be able to suggest why it would be preferable (although not universally so, I am sure) over a Fusion Table?
Here are a few points to justify why you might use ScriptDB:
You do not have to use UrlFetch to get at your data, as you do with Fusion Tables; in my observation, the quota for UrlFetch is relatively low.
ScriptDB is natively supported in Apps Script, so it is faster and more robust than your own implementation for accessing Fusion Tables.
ScriptDB is a key-value store (in the form of JSON) whose latency increases linearly as the DB size increases, which is faster than an RDBMS, whose latency increases exponentially with DB size. I am not sure how Fusion Tables behave as data size increases, though.
The ScriptDB service has a far higher quota than UrlFetch.
You can run at most 5 queries per second against a Fusion Table, but for ScriptDB no such query limit is declared.
Size limits for ScriptDB:
50 MB for consumer accounts,
100 MB for Google Apps accounts,
200 MB for Google Apps for Business/Education/Government accounts. I think this is sufficient for applications developed using Apps Script.
You may check the FAQ section at the link below for more detail.
https://developers.google.com/apps-script/scriptdb
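For a sense of what the ScriptDB API looks like, here is a minimal sketch of saving and querying records with the documented ScriptDb service; the record fields are made up for illustration:

```
function scriptDbExample() {
  var db = ScriptDb.getMyDb();

  // Save a JSON record; ScriptDb assigns it an ID automatically.
  db.save({type: 'city', name: 'Dublin', population: 527612});

  // Query by example: returns every stored object matching the given fields.
  var results = db.query({type: 'city'});
  while (results.hasNext()) {
    var city = results.next();
    Logger.log(city.name + ': ' + city.population);
  }
}
```

Note there is no UrlFetch call and no authentication dance; the service is directly available to every script.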
We are Google Cloud SQL users.
We have a table in Cloud SQL whose size is approximately 400 GB.
The maximum size of an instance is 500 GB.
The table will grow to about 2 TB by the end of this year (just an estimate).
We want to create multiple instances to handle this huge table.
Can we allocate more instances for this table?
Please advise.
I'll leave sharding strategies to other answers; I just want to provide an alternative. Google Cloud Platform has some other solutions that might help you scale:
Datastore is great for denormalised application storage with fast access.
BigQuery is great for advanced offline analytics.
If you are willing to manage your own database you can run MySQL in GCE with any size disk supported by GCE.
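That said, for a flavour of what application-level sharding across several Cloud SQL instances involves, here is a rough sketch using Apps Script's Jdbc service; the instance URLs, credentials, and hash scheme are all made up:

```
// Hypothetical shard instances; each holds a slice of the big table.
var SHARD_URLS = [
  'jdbc:google:mysql://my-project:shard-0/mydb',
  'jdbc:google:mysql://my-project:shard-1/mydb',
  'jdbc:google:mysql://my-project:shard-2/mydb'
];

// Route each key to a fixed shard, so reads find what writes stored.
function getConnectionForKey(key) {
  var shard = Math.abs(hashCode(String(key))) % SHARD_URLS.length;
  return Jdbc.getCloudSqlConnection(SHARD_URLS[shard], 'user', 'password');
}

// Simple 32-bit string hash for shard selection.
function hashCode(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h;
}
```

The catch with any scheme like this is that cross-shard queries and rebalancing become your problem, which is why the managed options above are worth considering first.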
It's my understanding that the only way to use a private Fusion Table with the Maps API is if you're using the Business version of the API. Only public and unlisted tables can be used normally. I would really like to realize the performance benefits of Fusion Tables but am not interested in paying for a business license.
My question is this: How secure could an unlisted table be? The data I'll be displaying is sensitive, but not critical. It is unlikely to be sought out specifically, or scraped by a bot. (no addresses, names of people, phone numbers, etc).
If Fusion Tables really won't be an option for my sensitive data, at what point with MySQL would I start to see serious degradation based on the number of markers in an average browser? I estimate the maximum number of points in the table to be somewhere around 1000-2000.
The privacy setting (public or unlisted) is only required for a FusionTablesLayer.
You may use two tables: one FusionTable (public or unlisted) to store the geometry and plot the markers, and a second, private table where you store the sensitive data. Use a common key for the rows, so you'll be able to request the sensitive data from table #2 based on the key returned by table #1.
Which kind of table you use for table #2 is up to you. Data in a private FusionTable are accessible after authentication, but I would prefer my own (MySQL) DB for sensitive data. (It happened to me that the data of a FT was accessible via the FT API although the download option was disabled, so I currently wouldn't rely too much on the security - note that Fusion Tables are still experimental.)
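To make the two-table pattern concrete, here is a rough sketch with the Maps API v3 FusionTablesLayer; the table ID, the 'key' column name, and the /sensitive endpoint on your own server are all hypothetical:

```
// Public/unlisted Fusion Table: geometry plus a join key, nothing sensitive.
var layer = new google.maps.FusionTablesLayer({
  query: {select: 'geometry', from: 'PUBLIC_TABLE_ID'},
  suppressInfoWindows: true // we render our own window with the private data
});
layer.setMap(map);

// On click, read the row's key and ask your own server (which holds the
// private table) for the sensitive columns.
google.maps.event.addListener(layer, 'click', function(e) {
  var key = e.row['key'].value; // 'key' is the hypothetical common column
  var xhr = new XMLHttpRequest();
  xhr.onload = function() {
    var data = JSON.parse(xhr.responseText);
    new google.maps.InfoWindow({content: data.html, position: e.latLng})
        .open(map);
  };
  xhr.open('GET', '/sensitive?key=' + encodeURIComponent(key));
  xhr.send();
});
```

This way the publicly readable table never contains anything beyond coordinates and an opaque key.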
I'm developing an application that stores geolocation data in a SQL table to generate a map of all entered points/addresses by users. I would like this to scale to a large amount of points, possibly 50,000+ and still have great performance. Looking over the google maps API articles, however, they say performance can be greatly improved using fusion tables instead.
Does anyone have experience with this? Would performance suffer if I have thousands of markers loaded on a map from a SQL table? Does KML or any other strategy seem a better fit?
Once I'm zoomed out enough I could use MarkerClusterer, but I'm not sure whether that helps performance either, since I'm still loading all the geocodes onto the page.
You can't really compare the two technologies.
When you load thousands of markers from a SQL database, you have to create each single marker, which of course performs badly, because you need to send the data for thousands of markers to the client and create the markers client-side.
When you use Fusion Tables, you don't load markers, you load tiles. It doesn't matter how many markers are visible on the tiles; the performance will always be the same.
KML is not an option, because the number of features is limited (currently to 1000).
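A minimal sketch of the difference (YOUR_TABLE_ID and the locations array are placeholders):

```
// SQL-backed approach: every row becomes a client-side Marker object,
// so the cost grows with the number of rows ('locations' would come
// from your own backend).
locations.forEach(function(loc) {
  new google.maps.Marker({position: loc, map: map});
});

// Fusion Tables approach: markers are rendered into map tiles
// server-side, so the client does the same amount of work no matter
// how many rows the table has.
new google.maps.FusionTablesLayer({
  query: {select: 'geometry', from: 'YOUR_TABLE_ID'}
}).setMap(map);
```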
Well, perhaps the only advantage is that your data records may be kept private if you use a SQL table instead of Fusion Tables - but only if that is a concern in your project.
Daniel
I've read that Google Cloud SQL is recommended for small to medium-sized applications. I was wondering whether it's possible to spread my data across multiple instances in Google Cloud SQL. Say in instance 1 I have 10 tables of 1 GB each, and after a while table A needs more space, say 1.5 GB. Now there's not enough space for all this data in a single instance; how do you spread table A's data across different instances? Is it possible to do so?
Thank you,
Rodrigo.
As per the Google storage documentation:
If you reach the free storage limit, everything in Google Drive, Gmail and Picasa will still be accessible, but you won't be able to create or add anything new over the free storage limit.
I'm developing a web application that will use a Google spreadsheet as a database.
This will mean parsing up to 30,000 (guesstimated) rows in regular operations, for searching IDs etc.
I'm worried about the response times I'll be looking at. Does anybody have experience with this? I don't want to waste my time on something that will hit a dead end at an issue like that.
Thanks in advance
Using spreadsheets as a database for this data set is probably not a good idea. Do you already have this spreadsheet set up?
30,000 rows will allow you to have only 66 columns (given the 2,000,000-cell limit per spreadsheet: 2,000,000 ÷ 30,000 ≈ 66); is that enough for you? Check the Google Docs size limits help page for more info.
Anyway, Google Spreadsheets have a "live concurrent editing" nature that makes them a much slower database than any other option. You should probably consider something else.
Do you intend to use the spreadsheet to display data, or only as a storage place?
In the second case, the relative slowness of the spreadsheet will not be an issue, since you'll only have to read its data once to get it into an array and then play with that...
This implies, of course, that you build every aspect of data reading and writing into a dedicated UI and never show the spreadsheet itself. The speed will then depend only on the JavaScript engine working on arrays, the speed of the UI, and the speed of your internet connection - all three factors not very performant compared to a 'normal' application, but with the advantage of being easily shareable and available anywhere. :-)
That said, I have written such a database app with about 20 columns and 1000 rows of data, and it is perfectly usable, although it has some latency even for simple next/previous requests. On the other hand, the app can send mails and create docs... the advantages of Google service integration :-)
You could have a look at this example to see what I'm talking about
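As a minimal sketch of the read-once-into-an-array idea (the sheet name 'Data' and the ID-in-the-first-column layout are assumptions for illustration):

```
// Read the whole sheet once, then search the in-memory array.
function findRowById(id) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Data');
  var rows = sheet.getDataRange().getValues(); // one server call for all rows

  for (var i = 1; i < rows.length; i++) { // skip the header row
    if (rows[i][0] === id) {
      return rows[i]; // the whole matching row
    }
  }
  return null; // not found
}
```

The single getValues() call is the expensive part; once the data is in the array, scanning even tens of thousands of rows is fast.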