I've read that Google Cloud SQL is recommended for small to medium-sized applications. I was wondering if it's possible to spread my data across multiple instances in Google Cloud SQL. Say in instance 1 I have 10 tables of 1 GB each, and after a while table A needs more space, say 1.5 GB. Now there isn't enough space for all this data in a single instance. How do you spread table A's data across different instances? Is it possible to do so?
Thank you,
Rodrigo.
As per the Google storage documentation:
If you reach the free storage limit, everything in Google Drive, Gmail and Picasa will still be accessible, but you won't be able to create or add anything new over the free storage limit.
We are Google Cloud SQL users.
We have a table in Cloud SQL whose size is approximately 400 GB.
The maximum size of an instance is 500 GB.
The table will grow to about 2 TB by the end of this year (just an estimate).
We want to create multiple instances to handle this huge table.
Can we allocate more instances for this table?
Please advise.
I'll leave sharding strategies to other answers, I just want to provide an alternative. Google Cloud Platform has some other solutions that might help you scale:
Datastore is great for denormalised application storage with fast access.
BigQuery is great for advanced offline analytics.
If you are willing to manage your own database you can run MySQL in GCE with any size disk supported by GCE.
Not sure how to ask this question, but as I understand it, Google Cloud SQL supports the idea of instances, which are located throughout their global infrastructure... so I can have a single database spread across multiple instances all over the world.
Our app serves a few geographic regions... the data doesn't really need to be aggregated as a whole and could be stored individually in a separate database per region.
Does it make sense to serve all regions off one database/multiple instances? Or should I segregate each region into its own database and host the data the old-fashioned way?
If by “scaling” you mean memory size, then you can start with a smaller instance (less RAM) and move up to a more powerful instance (more RAM) later.
But if you mean more operations per second, there is a certain maximum size and maximum number of operations that one Cloud SQL instance can support. You cannot infinitely scale one instance. Internally the data for one instance is indeed stored over multiple machines, but that is more related to reliability and durability, and it does not scale the throughput beyond a certain limit.
If you really need more throughput than what one Cloud SQL instance can provide, and you do need a SQL based storage, you’ll have to use multiple instances (i.e. completely separate databases) but your app will have to manage them.
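To illustrate what "your app will have to manage them" can mean in practice, here is a minimal sketch of application-level sharding across several separate Cloud SQL instances. The instance addresses, the database names, and the choice of `user_id` as the sharding key are all hypothetical; adapt them to your own schema.

```python
# Minimal sketch of application-level sharding across several
# completely separate Cloud SQL instances. Hosts, database names,
# and the user_id sharding key are illustrative placeholders.
import hashlib

# One entry per Cloud SQL instance.
SHARDS = [
    {"host": "10.0.0.1", "db": "app_shard_0"},
    {"host": "10.0.0.2", "db": "app_shard_1"},
    {"host": "10.0.0.3", "db": "app_shard_2"},
]

def shard_for(user_id: str) -> dict:
    """Pick a shard deterministically from the sharding key."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The app must route every query through shard_for(); cross-shard
# joins are not possible, so queries spanning many users need
# explicit fan-out logic in the application.
```

The main design cost of this approach is exactly what the answer warns about: joins and transactions only work within a single shard, so the application carries all the routing and fan-out complexity.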
Note that the advantages of Cloud go beyond just scalability. Cloud SQL instances are managed for you (e.g. failover, backups, etc. are taken care of). And you get billing based on usage.
(Cloud SQL team)
First, regarding the overall architecture: An "instance" in Google Cloud SQL is essentially one MySQL database server. There is no concept of "one database/multiple instances". Think of your Cloud SQL "instance" as the database itself. At any point in time, the data from a Cloud SQL instance is served out from one location -- namely where your instance happens to be running at that time. Now, if your app is running in Google App Engine or Google Compute Engine, then you can configure your Cloud SQL instance so that it is located close to your app.
Regarding your question of one database vs. multiple databases: If your database is logically one database and is served by one logical app, then you should probably have one Cloud SQL instance. (Again, think of one Cloud SQL instance as one database). If you create multiple Cloud SQL instances, they will be siloed from one another, and your app will have to do all the complex logic of managing them as completely different databases.
(Google Cloud SQL team)
I'm working on a SaaS application. Each user will buy a plan on this application and will be given a certain amount of storage corresponding to the amount of information they can keep in the app. For example, the Free user will get 1 GB of free storage, and the Basic user will get 5 GB of storage.
Currently, all information is stored in a MySQL database as plain text, without any binary data on disk such as images or videos.
Imagine Gmail without attachments as an example of this application.
How can I implement this function in my application? Do I need a method that somehow calculates the amount of data stored in the database for a specific user and does some validation on that?
Thank you in advance!
You should keep a running tally of how much space each user has consumed, updated every time a write counts against their quota. Recomputing the total from scratch on every request is not going to be efficient.
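A minimal sketch of that running-tally idea follows. The plan names, limits, and class layout are hypothetical; in the real app the `used` counter would be a column in MySQL, updated in the same transaction as the write itself.

```python
# Sketch of quota bookkeeping: keep a per-user running tally of
# bytes consumed and update it on every write, instead of summing
# the user's rows on each request. Plan limits are illustrative.
PLAN_QUOTA_BYTES = {
    "free": 1 * 1024**3,   # 1 GB
    "basic": 5 * 1024**3,  # 5 GB
}

class QuotaExceeded(Exception):
    pass

class UserQuota:
    def __init__(self, plan: str):
        self.limit = PLAN_QUOTA_BYTES[plan]
        self.used = 0  # in a real app this lives in a MySQL column

    def record_write(self, nbytes: int) -> None:
        """Reject the write if it would push the user over quota."""
        if self.used + nbytes > self.limit:
            raise QuotaExceeded("plan limit reached")
        self.used += nbytes

    def record_delete(self, nbytes: int) -> None:
        self.used = max(0, self.used - nbytes)
```

In MySQL the tally update could be a plain `UPDATE users SET used_bytes = used_bytes + ? WHERE id = ?` executed in the same transaction as the insert, so the counter never drifts from the actual data.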
I'm developing a web application that will use a google spreadsheet as a database.
This will mean parsing up to 30,000 rows (a guesstimate) in regular operations, for searching IDs etc...
I'm worried about the response times I will be looking at. Does anybody have experience with this? I don't want to waste my time on something that will hit a dead end at an issue like that.
Thanks in advance
Using spreadsheets as a database for this data set is probably not a good idea. Do you already have this spreadsheet set up?
30K rows will allow you to have only 66 columns, because of the spreadsheet's overall cell limit. Is that enough for you? Check the Google Docs size limits help page for more info.
Anyway, Google Spreadsheets' "live concurrent editing" nature makes them a much slower database than any other option. You should probably consider something else.
Do you intend to use the spreadsheet to display data, or only as a storage place?
If it's the latter, the relative slowness of the spreadsheet will not be an issue, since you'll only have to read it once to get its data into an array and play with that...
This implies, of course, that you build every aspect of data reading and writing in a dedicated UI and never show the spreadsheet itself. The speed will then depend only on the JavaScript engine working on arrays, the speed of the UI, and the speed of your internet connection... all three factors are not very performant compared to a "normal" application, but come with the advantage of being easily shareable and available anywhere. :-)
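The read-once pattern described above can be sketched as follows. This is an illustration of the idea rather than Apps Script code: the row layout and the ID-based lookup key are made up for the example.

```python
# Sketch of the read-once pattern: fetch all spreadsheet rows a
# single time, index them in memory, and answer ID lookups from
# the index instead of re-reading the sheet. Row layout is made up.
def build_index(rows):
    """rows: list of [id, name, ...] lists, as from one bulk read."""
    return {row[0]: row for row in rows}

# Stand-in for the result of a single bulk fetch of the sheet.
rows = [["id-1", "Alice"], ["id-2", "Bob"]]
index = build_index(rows)

# Constant-time lookups after the single fetch, instead of scanning
# 30,000 spreadsheet rows for every request.
assert index["id-2"][1] == "Bob"
```

In Apps Script the bulk fetch itself would be a single range read of the whole sheet; the point is that the slow spreadsheet access happens once, and every search afterwards runs against the in-memory array.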
That said, I have written such a database app with about 20 columns and 1000 rows of data, and it is perfectly usable, although it has some latency even for simple next/previous requests. On the other hand, the app can send mails and create docs... the advantages of Google service integration :-)
You could have a look at this example to see what I'm talking about
I have been looking through the new ScriptDb functionality - I am sure it is 'better' than Fusion Tables as a data store, I am just not sure how/why? Would anyone be able to suggest why it would be preferable (although not universally so, I am sure) over a Fusion Table?
Here are a few points to justify "why use ScriptDB":
You do not have to use URLFetch to fetch data from Fusion Tables; the URLFetch quota is relatively low (as per my observation).
ScriptDB is natively supported in Apps Script, so it is faster and more robust than your own implementation for accessing Fusion Tables.
ScriptDB is a key-value store (in the form of JSON) whose latency increases linearly as the DB size increases, which is faster than an RDBMS whose latency grows more steeply with DB size. But I am not sure how Fusion Tables behave as the data size increases.
The ScriptDB service has a far higher quota than URLFetch.
You can do at most 5 queries per second against a Fusion Table, but for ScriptDB there is no such declared query limit.
Size limits of ScriptDB:
50 MB for consumer accounts,
100 MB for Google Apps accounts,
200 MB for Google Apps for Business/Education/Government accounts.
I think this is sufficient for applications developed using Apps Script.
You may check the FAQ section at the link below for more detail.
https://developers.google.com/apps-script/scriptdb