The Google Compute Engine guide says that Google may migrate a VM in order to perform maintenance:
https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options
By default, standard instances are set to live migrate, where Google
Compute Engine automatically migrates your instance away from an
infrastructure maintenance event, and your instance remains running
during the migration. Your instance might experience a short period of
decreased performance, although generally most instances should not
notice any difference.
There is a disruption during migration.
Is it possible that Google decides to migrate all instances within a zone at the same time? Is there a maximum number of concurrent migrations?
Q: Is there a disruption during migration?
A: Yes, there is a small period of time during which the instance is running neither on the old host nor on the new one. Here [1] you can see how the process works.
Q: Is it possible that Google decides to migrate all instances within a zone at the same time?
A: It is very unlikely that this scenario happens, as it would imply that all the Google Compute Engine instances in your project are on the same physical host.
Q: Is there a maximum number of concurrent migrations?
A: I don't know the answer to that question, but I have forwarded it to the proper team, so maybe they can answer it.
You can find more about the live migration procedure here [2].
[1] https://cloud.google.com/compute/docs/instances/live-migration#how_does_the_live_migration_process_work
[2] https://cloud.google.com/compute/docs/instances/live-migration
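As a side note, the per-instance maintenance behaviour is something you can set yourself with the gcloud CLI. Here is a minimal sketch in Python (the instance name and zone are placeholders), assuming you want to make sure an instance is set to live migrate rather than terminate during maintenance:

    import subprocess

    # Set the on-host maintenance behaviour of a placeholder instance to
    # live migration (MIGRATE) instead of TERMINATE.
    subprocess.run(
        ["gcloud", "compute", "instances", "set-scheduling", "my-instance",
         "--zone", "us-central1-a",
         "--maintenance-policy", "MIGRATE"],
        check=True,
    )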
Related
I set up Google Cloud MySQL; I store just one user there (email, password, address) and I query it quite often for testing purposes of my website. I set up minimal zone availability, the lowest SSD storage, 3.75 GB of memory, 1 vCPU, and disabled automatic backups, but running that database for the last 6 days has cost me £15... How can I decrease the cost of having a MySQL database in the cloud? I'm pretty sure paying that amount is way too much. Where is my mistake?
I suggest using the Google Pricing Calculator to check the different configurations and pricing you could have for a MySQL database in Cloud SQL.
Choosing Instance type
As you've said in your question, you're currently using the lowest standard instance, which is based on CPU and memory pricing.
As you're currently using your database for testing purposes, I would suggest configuring it with the lowest shared-core machine type, which is db-f1-micro, as shown here. But note that:
The db-f1-micro and db-g1-small machine types are not included in the Cloud SQL SLA. These machine types are designed to provide low-cost test and development instances only. Do not use them for production instances.
Choosing Storage type
As you have selected the lowest allowed disk space, you could lower costs by changing the storage type to HDD instead of SSD, if you haven't done so already, as stated in the documentation:
Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
Note that the storage type can only be selected when you create the instance and cannot be changed later, as stated in the message shown when creating your instance:
Choice is permanent. Storage type affects performance.
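Putting the two suggestions above together, a minimal sketch of creating such a low-cost test instance with the gcloud CLI might look as follows (the instance name, region and 10 GB disk size are placeholder choices):

    import subprocess

    # Create a low-cost test instance: shared-core db-f1-micro tier with
    # HDD storage. Remember the storage type cannot be changed afterwards.
    subprocess.run(
        ["gcloud", "sql", "instances", "create", "my-test-db",
         "--tier", "db-f1-micro",
         "--storage-type", "HDD",
         "--storage-size", "10",
         "--region", "europe-west1"],
        check=True,
    )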
Stop the instance when it is not in use
Finally, you could lower costs by stopping the database instance when it is not in use, as pointed out in the documentation:
Stopping an instance suspends instance charges. The instance data is unaffected, and charges for storage and IP addresses continue to apply.
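A small sketch of how the stop/start could be scripted, for example from a scheduled job (the instance name is a placeholder):

    import subprocess

    def set_activation(instance: str, policy: str) -> None:
        """Stop (NEVER) or start (ALWAYS) a Cloud SQL instance."""
        subprocess.run(
            ["gcloud", "sql", "instances", "patch", instance,
             "--activation-policy", policy],
            check=True,
        )

    set_activation("my-test-db", "NEVER")    # stop outside working hours
    # set_activation("my-test-db", "ALWAYS") # start it again when needed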
Using Google Pricing Calculator
The following information is presented as a calculation exercise based on the Google Pricing Calculator:
The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate. A more detailed and specific list of fees will be provided at time of sign up
Following the suggestions above, you could get a monthly estimate of 6.41 GBP, based on an instance running 24 hours a day, 7 days a week.
Using an SSD, this increases to 7.01 GBP. As said before, the only way to change the storage type would be to create a new instance and load your data into it.
This drops to 2.04 GBP if you run the instance on HDD for only 8 hours a day, 5 days a week.
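As a back-of-the-envelope check on why the part-time figure is so much lower, note that only the running hours scale down, while storage is billed continuously. The rates below are illustrative assumptions, not Google's actual prices:

    # Illustrative rates only, not actual Google prices.
    hourly_rate_gbp = 0.0075          # assumed instance cost per hour
    storage_gbp_per_month = 0.9       # assumed fixed storage charge

    full_time_hours = 24 * 7 * 4.35   # ~730 hours in a month
    part_time_hours = 8 * 5 * 4.35    # 8 hours a day, 5 days a week

    full_time = full_time_hours * hourly_rate_gbp + storage_gbp_per_month
    part_time = part_time_hours * hourly_rate_gbp + storage_gbp_per_month

    print(f"24/7:    {full_time:.2f} GBP/month")   # roughly 6.4
    print(f"8h x 5d: {part_time:.2f} GBP/month")   # roughly 2.2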
I'm starting a project where a Cloud SQL instance would be a great fit; however, I've noticed they are twice the price of a VM with the same specification on GCP.
Several DevOps engineers I work with have told me that Cloud SQL is billed by usage only, which would be perfect for me. However, the pricing page states "Instance pricing for MySQL is charged for every second that the instance is running".
https://cloud.google.com/sql/pricing#2nd-gen-pricing
I also see several people around the web saying it is billed by usage only.
Cloud SQL or VM Instance to host MySQL Database
Am I interpreting Google's pricing pages incorrectly?
Am I going to be billed for the instance being on, or for its usage?
Billed by usage
It all depends on what you mean by usage. When you run a Cloud SQL instance, it's like a server (Compute Engine): until you stop it, you pay for it. It's not pay-per-request pricing, as you can have with BigQuery.
With Cloud SQL, you also pay for the storage that you use, and the storage can grow automatically according to usage. Be careful: the storage can't be reduced, even if you delete data from the database!
Price is twice that of a similar Compute Engine instance
True! A Compute Engine n1-standard-1 is about $20 per month, and the same configuration on Cloud SQL is about $45.
BUT, what about the price of the management of your own SQL instance?
You have to update/patch the OS
You have to update/patch the DB engine (MySQL or Postgres)
You have to manage the security/network access
You have to perform snapshots and ensure that restoration works
You have to ensure high availability (people on call in case of a server issue)
You have to tune the Database parameters
You have to watch your storage and increase it when needed
You have to set up your replicas manually
Is it worth twice the price? For me, yes, but it all depends on your skills and your opinion.
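To make that judgement concrete, here is a rough break-even sketch; the engineer rate, and the figures carried over from above, are assumptions you would replace with your own numbers:

    # Rough break-even sketch with assumed numbers.
    self_managed_vm = 20.0     # USD/month, n1-standard-1 (figure above)
    cloud_sql = 45.0           # USD/month, similar config (figure above)
    managed_premium = cloud_sql - self_managed_vm

    ops_hourly_rate = 50.0     # assumed cost of one engineer-hour
    break_even_hours = managed_premium / ops_hourly_rate

    # If patching, backups, monitoring, etc. cost you more than this many
    # engineer-hours per month, the managed service is cheaper overall.
    print(f"Break-even: {break_even_hours:.1f} engineer-hours/month")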
There are a lot of hidden configuration options which, when modified, can each quickly halve your costs.
Practically speaking, GCP's SQL product only works by running 24/7; there is no time-based 'by usage' option, short of manually stopping and restarting the instance.
There are a lot of tricks you can follow to lower costs; you can read many of them here: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332
I want to update the machine type of my Google Cloud SQL instance, but this takes several minutes (second-generation instance), and the instance will be unavailable until it has restarted. Because of this downtime, we have to update the machine type at night, so that our visitors are disturbed as little as possible.
Is there a workflow by which we can minimise this downtime to zero, or maybe a few seconds? I have already thought about possible solutions like adding a temporary failover or making use of a read replica.
I contacted Google Cloud support about this question, and they told me that Cloud SQL isn't built to perform this change without downtime. If I want to be able to make these changes, I should look at Cloud Spanner, which is a horizontally scalable SQL solution provided by Google.
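For completeness, the read-replica idea from the question is the closest workaround I am aware of: create a replica on the new machine type, let it catch up, then promote it and repoint the application. This is only a sketch with placeholder names, and the cutover still causes a brief interruption rather than zero downtime:

    import subprocess

    # Create a read replica of a placeholder primary on the new tier.
    subprocess.run(
        ["gcloud", "sql", "instances", "create", "my-db-replica",
         "--master-instance-name", "my-db",
         "--tier", "db-n1-standard-2"],
        check=True,
    )

    # ...wait for replication lag to reach zero, then promote the replica
    # to a standalone instance and point the application at it.
    subprocess.run(
        ["gcloud", "sql", "instances", "promote-replica", "my-db-replica"],
        check=True,
    )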
Due to some mix-up during planning we ended up with several worker nodes running 23TB drives which are now almost completely unused (we keep data on external storage). As the drives are only wasting money at the moment, we need to shrink them to a reasonable size.
Using weresync I was able to fully clone the drive to a much smaller one, but apparently you can't swap the boot drive in GCE (which makes no sense to me). Is there a way to achieve that, or do I need to create new workers from the images? If so, is there any other config I need to copy to the new instances for them to automatically join the cluster?
Dataproc does not support VM configuration changes in running clusters.
I would advise you to delete the old cluster and create a new one with the worker disk size that you need.
I ended up creating a ticket with GCP support - https://issuetracker.google.com/issues/120865687 - to get an official answer to that question. The answer was that this is not currently possible, but that it should be available shortly (within months) in the beta GCP CLI, and possibly in the Console at a later date as well.
Went on with a complete rebuild of the cluster.
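For anyone in the same situation, recreating the cluster with right-sized worker disks is a single gcloud call. The cluster name, region, worker count and disk size below are placeholders, and any initialization actions or properties from the old cluster need to be copied over as well:

    import subprocess

    # Recreate the Dataproc cluster with smaller worker boot disks.
    subprocess.run(
        ["gcloud", "dataproc", "clusters", "create", "my-cluster-v2",
         "--region", "europe-west1",
         "--num-workers", "2",
         "--worker-boot-disk-size", "100GB"],
        check=True,
    )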
Not sure how to ask this question, but as I understand it, Google Cloud SQL supports the idea of instances, which are located throughout their global infrastructure... so I can have a single database spread across multiple instances all over the world.
Our app serves a few geographic regions. The data doesn't really need to be aggregated as a whole and could be stored in separate databases in the corresponding regions.
Does it make sense to serve all regions off one database/multiple instances? Or should I segregate each region into its own database and host the data the old-fashioned way?
If by “scaling” you mean memory size, then you can start with a smaller instance (less RAM) and move up to a more powerful instance (more RAM) later.
But if you mean more operations per second, there is a certain maximum size and maximum number of operations that one Cloud SQL instance can support. You cannot infinitely scale one instance. Internally the data for one instance is indeed stored over multiple machines, but that is more related to reliability and durability, and it does not scale the throughput beyond a certain limit.
If you really need more throughput than what one Cloud SQL instance can provide, and you do need a SQL based storage, you’ll have to use multiple instances (i.e. completely separate databases) but your app will have to manage them.
Note that the advantages of the cloud go beyond just scalability. Cloud SQL instances are managed for you (e.g. failover, backups, etc. are taken care of), and you get billing based on usage.
(Cloud SQL team)
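As an illustration of the first kind of scaling mentioned above, moving an instance to a more powerful tier is a single patch operation. The names below are placeholders, and note the instance restarts while the change is applied:

    import subprocess

    # Vertical scaling sketch: move a placeholder instance to a larger tier.
    subprocess.run(
        ["gcloud", "sql", "instances", "patch", "my-db",
         "--tier", "db-n1-standard-2"],
        check=True,
    )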
First, regarding the overall architecture: An "instance" in Google Cloud SQL is essentially one MySQL database server. There is no concept of "one database/multiple instances". Think of your Cloud SQL "instance" as the database itself. At any point in time, the data from a Cloud SQL instance is served out from one location -- namely where your instance happens to be running at that time. Now, if your app is running in Google App Engine or Google Compute Engine, then you can configure your Cloud SQL instance so that it is located close to your app.
Regarding your question of one database vs. multiple databases: If your database is logically one database and is served by one logical app, then you should probably have one Cloud SQL instance. (Again, think of one Cloud SQL instance as one database). If you create multiple Cloud SQL instances, they will be siloed from one another, and your app will have to do all the complex logic of managing them as completely different databases.
(Google Cloud SQL team)
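If you do decide to silo each region into its own Cloud SQL instance, the management logic mentioned above can start out as simple as a lookup from region to connection settings. A hypothetical sketch with made-up hostnames:

    # Hypothetical sketch: one siloed Cloud SQL instance per region, with
    # the app picking a connection by the user's region.
    REGION_DB_HOSTS = {
        "us": "10.0.0.10",     # instance in a US region
        "eu": "10.0.1.10",     # instance in an EU region
        "asia": "10.0.2.10",   # instance in an Asian region
    }

    def db_host_for(region: str) -> str:
        # Fall back to the US instance for unknown regions.
        return REGION_DB_HOSTS.get(region, REGION_DB_HOSTS["us"])

    print(db_host_for("eu"))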