I set up Google Cloud MySQL. I store just one user there (email, password, address) and I query it quite often while testing my website. I set up minimal zone availability, the lowest SSD storage, 3.75 GB memory, 1 vCPU, and automatic backups disabled, but running that database for the last 6 days has cost me £15... How can I decrease the cost of having a MySQL database in the cloud? I'm pretty sure paying that amount is way too much. Where is my mistake?
I suggest using the Google Pricing Calculator to check the different configurations and pricing you could have for a MySQL database in Cloud SQL.
Choosing an instance type
As you've said in your question, you're currently using the lowest standard machine type, whose price is based on its CPU and memory.
As you're currently using your database for testing purposes, I would suggest configuring it with the lowest shared-core machine type, which is db-f1-micro, as shown here. But note that:
The db-f1-micro and db-g1-small machine types are not included in the Cloud SQL SLA. These machine types are designed to provide low-cost test and development instances only. Do not use them for production instances.
Choosing a storage type
As you have already selected the lowest allowed disk space, you could lower costs by changing the storage type from SSD to HDD, if you haven't done so, as stated in the documentation:
Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
Note that the storage type can only be selected when you're creating the instance and cannot be changed later, as stated in the message shown when creating your instance:
Choice is permanent. Storage type affects performance.
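If you prefer to set this up from code rather than the console, the sketch below uses the Cloud SQL Admin API through the google-api-python-client library to create a db-f1-micro instance on HDD storage. This is a minimal sketch, not a production setup; the project name, instance name, and region are placeholders.

```python
# Sketch: create a low-cost test instance via the Cloud SQL Admin API.
# Assumes google-api-python-client is installed and Application Default
# Credentials are configured; "my-project" and "test-db" are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

body = {
    "name": "test-db",
    "region": "europe-west2",            # placeholder region
    "databaseVersion": "MYSQL_5_7",
    "settings": {
        "tier": "db-f1-micro",           # shared-core, cheapest machine type
        "dataDiskType": "PD_HDD",        # HDD storage; permanent, set at creation
        "dataDiskSizeGb": "10",          # minimum allowed disk size
        "backupConfiguration": {"enabled": False},  # backups disabled
    },
}

response = service.instances().insert(project="my-project", body=body).execute()
print(response)
```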
Stop the instance when it is not in use
Finally, you could lower costs by stopping the database instance when it is not in use, as pointed out in the documentation.
Stopping an instance suspends instance charges. The instance data is unaffected, and charges for storage and IP addresses continue to apply.
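Stopping and starting can also be scripted: the same Admin API exposes the instance's activation policy. A minimal sketch, assuming the same placeholder names as above:

```python
# Sketch: "stop" a Cloud SQL instance by setting its activation policy to
# NEVER, and "start" it again by setting it back to ALWAYS.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

def set_activation(project, instance, policy):
    """policy is 'ALWAYS' (running) or 'NEVER' (stopped)."""
    body = {"settings": {"activationPolicy": policy}}
    return service.instances().patch(
        project=project, instance=instance, body=body
    ).execute()

set_activation("my-project", "test-db", "NEVER")    # stop: suspends instance charges
# set_activation("my-project", "test-db", "ALWAYS") # start again before testing
```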
Using Google Pricing Calculator
The following figures are a calculation exercise based on the Google Pricing Calculator:
The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate. A more detailed and specific list of fees will be provided at time of sign up
Following the suggestions above, you could get a monthly estimate of 6.41 GBP for an instance running 24 hours a day, 7 days a week.
Using an SSD instead, the estimate increases to 7.01 GBP. As said before, the only way to change the storage type is to create a new instance and load your data into it.
And this drops to 2.04 GBP if you only run the instance 8 hours a day, 5 days a week, on HDD storage.
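To see how those figures relate: only the instance charges scale with running time, while storage charges continue either way. The split below is back-solved from the two calculator estimates, purely as an illustration:

```python
# Back-of-envelope check of the calculator's numbers: instance charges
# scale with running hours, storage charges do not. The split is solved
# from the two estimates above, for illustration only.
full_week_hours = 24 * 7          # 168 h
part_week_hours = 8 * 5           # 40 h (8 h/day, 5 days/week)
ratio = part_week_hours / full_week_hours

estimate_always_on = 6.41         # GBP/month, HDD, 24/7 (from the calculator)
estimate_part_time = 2.04         # GBP/month, HDD, 8x5  (from the calculator)

# total = instance_part * running_ratio + storage_part
instance_part = (estimate_always_on - estimate_part_time) / (1 - ratio)
storage_part = estimate_always_on - instance_part
print(f"instance ~ {instance_part:.2f} GBP, storage ~ {storage_part:.2f} GBP")
# instance ~ 5.74 GBP, storage ~ 0.67 GBP per month
```

So most of the bill is instance-hours, which is exactly the part you can cut by stopping the instance outside testing hours.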
I wanted to create an Always Free-eligible VM instance (VM.Standard.E2.1.Micro) on the Oracle Cloud, but it's not on my list.
And when I check my limit for VM.Standard.E2.1.Micro under "Governance > Limits, Quotas and Usage", it says 0.
How can I create one? My Home Region is Canada Southeast (Montreal), ca-montreal-1.
My account's trial is not over yet. Should I wait till my trial is over to create it?
As per the Always Free website, at any time you can have up to the following:
Two Oracle Autonomous Databases with powerful tools like Oracle Application Express (APEX) and Oracle SQL Developer
Two Oracle Cloud Infrastructure Compute VMs; Block, Object, and Archive Storage; Load Balancer and data egress; Monitoring and Notifications
If you are already at capacity, then you will not be able to add another. Further details of Always Free resources can be found here: https://docs.oracle.com/en-us/iaas/Content/FreeTier/resourceref.htm
The Always Free tier provides you with the following:
2 Compute virtual machines with 1/8 OCPU and 1 GB memory each.
2 Block Volumes, 100 GB total storage.
10 GB Object Storage.
10 GB Archive Storage.
Resource Manager: managed Terraform.
Pay attention to the specs of the free shape.
VM.Standard.E2.1.Micro is not available in ca-montreal-1 at this time (January 2021).
I created a new account in the Ashburn region where VM.Standard.E2.1.Micro is available.
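If you want to check programmatically what your tenancy actually allows, here is a rough sketch using the OCI Python SDK. The filter on "micro" is an assumption about the limit's name; list the compute limits first to find the exact name used in your tenancy.

```python
# Sketch: query service limits for the micro shape with the OCI Python SDK.
# Assumes ~/.oci/config is set up. The name filter below is an assumption;
# inspect the listed definitions to find the exact limit name.
import oci

config = oci.config.from_file()
limits = oci.limits.LimitsClient(config)

defs = limits.list_limit_definitions(
    compartment_id=config["tenancy"], service_name="compute"
)
for d in defs.data:
    if "micro" in d.name.lower():
        vals = limits.list_limit_values(
            compartment_id=config["tenancy"],
            service_name="compute",
            name=d.name,
        )
        for v in vals.data:
            print(d.name, v.availability_domain, v.value)  # 0 = none available here
```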
I'm starting a project where a Cloud SQL instance would be a great fit; however, I've noticed they are twice the price of a VM with the same specification on GCP.
I've been told by several DevOps guys I work with that they are billed by usage only, which would be perfect for me. However, their pricing page states "Instance pricing for MySQL is charged for every second that the instance is running".
https://cloud.google.com/sql/pricing#2nd-gen-pricing
I also see several people around the web saying they are usage only.
Cloud SQL or VM Instance to host MySQL Database
Am I interpreting Google's pricing pages incorrectly?
Am I going to be billed for the instance being on or for its usage?
Billed by usage
It all depends on what you mean by usage. When you run a Cloud SQL instance, it's like a server (Compute Engine): until you stop it, you pay for it. It's not pay-per-request pricing, as you can have with BigQuery.
With Cloud SQL, you also pay for the storage that you use, and the storage can grow automatically with usage. Be careful: the storage can't be reduced, even if you delete data in the database!
Price is twice that of a similar Compute Engine instance
True! An n1-standard-1 Compute Engine instance is about $20 per month, and the same configuration on Cloud SQL is about $45.
BUT, what about the cost of managing your own SQL instance?
You have to update/patch the OS
You have to update/patch the DB engine (MySQL or Postgres)
You have to manage the security/network access
You have to perform snapshots and ensure that restoration works
You have to ensure high availability (people on call in case of server issues)
You have to tune the database parameters
You have to watch your storage and increase it when needed
You have to set up your replicas manually
Is it worth twice the price? For me, yes, but it all depends on your skills and your opinion.
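As a back-of-envelope comparison, the managed premium is small next to the ops time it replaces; the prices are the ones quoted above, and the hourly rate is an assumption:

```python
# Illustrative break-even: managed-service premium vs. your own ops time.
compute_engine = 20.0   # USD/month, self-managed MySQL on an n1-standard-1
cloud_sql = 45.0        # USD/month, same configuration on Cloud SQL
ops_rate = 50.0         # USD/hour for whoever patches, backs up, tunes (assumed)

premium = cloud_sql - compute_engine
print(f"Premium: ${premium:.0f}/month ~ {premium / ops_rate:.1f} h of ops time")
# Premium: $25/month ~ 0.5 h of ops time -- less than one patch cycle takes
```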
There are a lot of hidden configuration options, each of which can quickly halve your costs when changed.
Practically speaking, GCP's Cloud SQL product only works by running 24/7; there is no time-based 'by usage' option, short of manually stopping and restarting the instance yourself.
There are a lot of tricks you can follow to lower costs; you can read many of them here: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332
I have been trying to determine which instances I should choose, for Compute Engine and Cloud SQL, for when we launch our product.
Initially I'm working on handling a maximum of 500 users per day, with peak traffic likely to occur in the evenings. Users are expected to stay on the site with constant interactions for a lengthy period of time (10 min+).
So far my guesses lead me to the following:
Compute engine:
n1-standard-2 ->
2 virtual CPUs, 3.75 GB memory
Cloud SQL:
D2 ->
1 GB RAM, max 250 concurrent users
Am I in the right ball park, or can I use smaller/larger instances?
I'd suggest using appropriate performance-testing tools to simulate the traffic that will hit your server and to estimate the amount of resources you will need to handle the requests.
For the Compute Engine VM instance, you can go with a lighter machine type and take advantage of the GCE Autoscaler to automatically add more resources to your front end when traffic goes up.
I recommend watching this video.
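Before running full tests, you can also sanity-check the sizing with Little's law (concurrent users ≈ arrival rate × session length), using the numbers from the question; the peak-hour share below is an assumption:

```python
# Rough sizing check via Little's law: L = arrival_rate * session_time.
# 500 users/day and 10-minute sessions come from the question; assuming
# the busiest hour sees ~20% of the day's traffic (an assumption).
users_per_day = 500
session_minutes = 10
peak_hour_share = 0.20

peak_arrivals_per_min = users_per_day * peak_hour_share / 60
concurrent_users = peak_arrivals_per_min * session_minutes
print(f"~{concurrent_users:.0f} concurrent users at peak")  # ~17
```

Even with generous assumptions, that is well below the 250-concurrent-user ceiling quoted for the D2 tier, so real measurements may let you start with smaller instances.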
There are only 7 performance tiers in Google Cloud SQL (D0, D1, D2, D4, D8, D16, D32), and RAM maxes out at 16 GB (D32), as they are based on Google Compute Engine (GCE) machine types.
By comparison, Amazon has 13 performance tiers, with RAM maxing out at 244 GB on db.r3.8xlarge.
So my question is, what is the rough equivalent performance tier in AWS RDS for MySQL for Google Cloud SQL's D32 tier?
Disclaimer: I am new to Google Cloud SQL. I only started to use Cloud SQL because I started a new job that's 100% Google Cloud. Previously I had been an AWS user since the early days.
The D0-D32 Cloud SQL tiers are not based on GCE VMs, so a direct comparison is not straightforward. Note that the storage for D0-D32 is replicated geographically, and that makes writes a lot slower. The ASYNC mode improves the performance for small commits. The upside is that the instances can be relocated quickly between locations that are far apart.
The connectivity for Cloud SQL is also different from RDS. RDS can be accessed using IPs, and the latency is comparable to VMs talking over local IPs. Cloud SQL uses only external IPs. That makes the latency from GCE higher (~1.25 ms), but it provides a slightly better experience for connections coming from outside Google Cloud, because the TCP connections are terminated closer to the clients.
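If you want to verify the latency for your own setup, a quick sketch that times raw TCP connects to the database port (the host below is a placeholder):

```python
# Sketch: measure TCP connect latency to a database host. The connect time
# approximates the per-round-trip cost discussed above; HOST is a placeholder.
import socket
import time

HOST, PORT, SAMPLES = "203.0.113.10", 3306, 20  # placeholder Cloud SQL IP

times = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    times.append((time.perf_counter() - start) * 1000)

print(f"median connect: {sorted(times)[len(times) // 2]:.2f} ms")
```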
That being said, from a memory point of view, the db.m3.xlarge from RDS is the closest match for the D32 from Cloud SQL. If the working set fits in memory, the performance of some queries will be similar.
Currently in alpha, there is a new feature of Cloud SQL that offers performance comparable to GCE machine types.
A Google Cloud SQL Performance Class instance is a new Google Cloud SQL instance type that provides better performance and more storage capacity, and is optimized for heavy workloads running on Google Compute Engine. The new instance type is an always-on, long-lived database as compared to a Standard Class instance type. Performance Class instances are based on tiers that correspond to Google Compute Engine (GCE) standard or highmem machine types.
Link: https://cloud.google.com/sql/docs/getting-started-performance-class
Anyway, very good question. Comparing prices with AWS, I found out that there is a huge difference in resources for the smallest instances at the same price:
Cloud SQL, D0 = $0.025 per hour (0.128 GB RAM + "an appropriate amount of CPU")
AWS, db.t2.micro = $0.02 per hour (1 GB RAM + 1 vCPU)
For 1 GB of RAM in Cloud SQL, one would have to pay about $0.19 per hour. Unfortunately, Google does not specify anything about SSD storage, something very important for performance comparison.
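Normalizing the two quoted prices by RAM makes the gap concrete:

```python
# Comparing the smallest tiers on price per GB of RAM (prices quoted above).
d0_price, d0_ram = 0.025, 0.128   # Cloud SQL D0: $/h, GB RAM
t2_price, t2_ram = 0.02, 1.0      # AWS db.t2.micro: $/h, GB RAM

print(f"Cloud SQL D0: ${d0_price / d0_ram:.3f} per GB-hour")  # $0.195
print(f"db.t2.micro: ${t2_price / t2_ram:.3f} per GB-hour")   # $0.020
# Roughly a 10x difference per GB of RAM at the low end.
```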
So I've read elsewhere that LoadRunner is well known to support 2-4k users easily enough, but that didn't tell me what sort of environment LoadRunner needs to do that. Is there any guidance available on what the environment needs to be for various loads?
For example, would a single dual-core 2.4 GHz CPU with 4 GB RAM support 1,000 concurrent vUsers easily? What about if we were testing something at a larger scale (say 10,000 users), where I assume we'd need a small server farm to generate the load? What would be the effect of fewer machines but with more network cards?
There have been tests run with LoadRunner well into the several-hundred-thousand-user range. You can imagine the logistical effort and infrastructure required to run such tests.
Your question of how many users a server can support is actually quite complex. Just like any other piece of engineered software, each virtual user takes a slice of resources to operate from the finite pool of CPU, disk, network and RAM. So simply adding more network cards doesn't buy you anything if your limiting factor is CPU for your virtual users.

Each virtual user type has a base weight, and your own development and deployment models alter that weight. I have observed a single load generator that could take 1,000 Winsock users easily, with less than 50% of all resources used, and then drop to 25 users for a web application which had significantly high network data flows, lots of state-management variables, and the need for some disk activity related to loading files as part of the business process. You also don't want to max-load your virtual-user hosts, in order to limit the possibility of test-bed influences on your test results.
If you have immature LoadRunner users, then you can virtually guarantee you will be running less-than-optimal virtual user code in terms of resource utilization, which could result in as few as 10% of the users you should expect to run on a given host, because of choices made in virtual user type, development, and deployment run-time settings.
I know this is not likely the answer you wanted to hear, i.e., "for your hosts you can get 5732 of virtual user type xfoo," but there is no definitive answer without holding both the application and the skills of the tool's user constant. Only then can you move from protocol to protocol and from host to host and find out how many users you can get per box.
As a rule of thumb, each virtual user needs around 4 MB of RAM, so you can calculate roughly how many virtual users your existing machine can support.
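Applying that rule of thumb to the 4 GB machine from the question (the OS overhead figure is an assumption):

```python
# Rule-of-thumb capacity check using the 4 MB/vUser figure above, applied
# to the 4 GB machine in the question; the OS overhead is an assumption.
ram_gb = 4
os_overhead_gb = 1      # assumed headroom for the OS and LoadRunner itself
mb_per_vuser = 4

usable_mb = (ram_gb - os_overhead_gb) * 1024
print(f"~{usable_mb // mb_per_vuser} vUsers by RAM alone")  # ~768
# Remember CPU, network and disk can become the limit well before RAM does.
```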