We currently have a database hosted in Google Cloud SQL. We are paying almost $100 a month, but we use less than 5% of our storage. Our config is:
Version: MySQL 8.0
Machine Type: High Memory 4 vCPU, 26 GB
Storage Type: SSD
Storage capacity: 100GB
I was thinking of switching the machine type from High Memory to Lightweight.
Would this delete my current database data?
You can scale the memory and the CPU up and down without data loss. Your database will be briefly unavailable, because the instance needs to be stopped and started again with the new machine type configuration. Don't be afraid; you can do this safely.
Conversely, you can scale the storage capacity up but not down. If you want to shrink it, you need to export the data out of your database, delete the current Cloud SQL instance, create a new one with a smaller disk, and then re-import the data.
Related
I have a dataset of 100GB and I want to use AWS (RDS MySQL).
I would like to understand the minimum amount of RAM required to load the dataset and later run queries on it.
It is quite expensive to run an RDS MySQL instance with RAM close to 100 GB. If there is a way to work with an instance that has a smaller amount of RAM, I would like to hear about it.
Databases keep their data on disk (Amazon RDS uses Amazon Elastic Block Store volumes). You can easily add 100 GB of disk storage to any Amazon RDS instance.
Databases use RAM for processing data and for keeping often-used data in memory ('cache'). More RAM generally improves the performance of a database, but the amount of RAM you need is not related to the amount of data that is being stored in the database.
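The caching idea above can be illustrated with a toy sketch: a fixed-capacity LRU cache, where more capacity (RAM) means more hits, while the amount of data on disk is unaffected. This is only an illustration of the principle, not how MySQL's buffer pool is actually implemented:

```python
from collections import OrderedDict

class ToyCache:
    """A tiny fixed-capacity LRU cache. Capacity controls how much
    often-used data stays in memory; it says nothing about how much
    data fits on disk."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, load_from_disk):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)       # mark as recently used
            return self.store[key]
        self.misses += 1
        value = load_from_disk(key)           # slow path: read from disk
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return value

# 3 items on "disk", but only room for 2 in "RAM":
cache = ToyCache(capacity=2)
disk = {1: "a", 2: "b", 3: "c"}
for k in [1, 2, 1, 3, 1]:
    cache.get(k, disk.__getitem__)
print(cache.hits, cache.misses)  # prints: 2 3
```

A bigger capacity would turn some of those misses into hits, which is exactly why adding RAM speeds up queries without being tied to total data size.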
Start with a small RDS instance and, if your queries run too slowly, change it to an instance type that has more RAM and CPU.
I have an application hosted in AWS ECS, with the database in AWS RDS. I'm using a microservice-based container architecture. The frontend of the application is in Angular, and the backends are in Java and Python. Right now the database size is ~1 GB, and it will grow day by day as scraped data is inserted daily.
Right now, some queries take 4-6 seconds to execute. We need to open this application to the public, and a lot of users will be using it. When we load-tested the application with 50 users, the CPU of the RDS instance reached 100%, and some queries took more than 60 seconds to execute and then timed out. Meanwhile, the CPU and memory of the other microservices (frontend and backend) were normal. I have tried vertically scaling the database up to 64 GB RAM and 4 vCPUs, but the problem persists.
Is this an issue with the queries, or is there anything I can do with the database server configuration?
The RDS storage I'm using is 100 GB of general-purpose SSD, so I guess there will be only 300 IOPS, right? I'm planning to use RDS read replicas, but before that, I need to know whether there is anything else I should do to improve performance. Any database configuration changes, etc.?
I also don't have a good understanding of the MySQL connection count. Right now the application uses a total of 24 connections. Do I need to change the connection count as well?
Query Optimisation
As Tim pointed out, try to optimise the queries. Since more data is inserted every day, consider indexing the tables and making the queries use indexed columns where possible. Also consider archiving old, unused data.
Number of connections
If you have control over the code, you can use database connection pools to control the number of connections your applications open.
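A minimal sketch of the pooling idea, using only Python's standard library. The `connect` factory here is a stand-in; a real application would pass its database driver's connect call (or, more likely, use the pooling built into its framework or driver):

```python
import queue

class ConnectionPool:
    """Hands out at most max_size connections. Callers block until a
    connection is free, so the database never sees more than max_size
    connections from this application, regardless of load."""
    def __init__(self, connect, max_size):
        self._pool = queue.Queue(maxsize=max_size)
        for _ in range(max_size):
            self._pool.put(connect())  # open all connections up front

    def acquire(self, timeout=None):
        # Blocks until a connection is available (or timeout expires).
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse instead of closing it.
        self._pool.put(conn)

# Illustration with a dummy "connection" object:
pool = ConnectionPool(connect=lambda: object(), max_size=5)
conn = pool.acquire()
try:
    pass  # run queries with conn here
finally:
    pool.release(conn)
```

Capping the pool size protects the database from connection storms under load, and reusing connections avoids the cost of reconnecting for every request.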
CPU usage
CPU usage is closely tied to query performance; once you optimise the queries, CPU usage should come down.
Disk usage
Use CloudWatch metrics to monitor disk usage and IOPS; based on that, you can decide whether you need a provisioned-IOPS disk.
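On the 300 IOPS guess: general-purpose (gp2) volumes have historically had a baseline of 3 IOPS per GiB with a floor of 100 IOPS (the exact limits have changed over the years, so check the current EBS documentation). A quick sketch of that arithmetic:

```python
def gp2_baseline_iops(size_gib):
    """Approximate gp2 baseline IOPS: 3 IOPS per GiB, minimum 100.
    Illustrative only; AWS has adjusted floors/ceilings over time."""
    return max(100, 3 * size_gib)

print(gp2_baseline_iops(100))  # 300, matching the guess in the question
print(gp2_baseline_iops(20))   # small volumes are floored at 100
```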
Hope this helps.
I set up replication on a Microsoft Azure virtual machine.
I'm using MySQL and working with MySQL Workbench (Windows).
Yesterday I discovered that my 250 GB of storage was full and replication had stopped.
The log shows:
Timestamp, Thread, Type, Details
2015-07-29 23:26:44, 1672, Warning, Disk is full writing '.\database123-relay-bin.000164' (Errcode: 28 - No space left on device). Waiting for someone to free space...
I have since created another 250 GB external storage disk.
I have two questions:
How can I run queries and use data across two different storage disks?
Is creating another disk the right thing to do, or is there a way to create flexible storage that can grow?
One thing I found is this: http://www.psce.com/blog/2012/05/07/running-out-of-disk-space-on-mysql-partition-a-quick-rescue/
but it did not help. I need help and guidance.
Here is another option I found:
how to extend C drive size of Azure VM
Use data disks for storing application data from your Azure VMs. The largest data disk size available is 1TB. If you require more space, you can stripe more than one data disk together. Avoid using the OS disk for storing application data because you will run into this issue of limited space. Also avoid using the temporary disk for storing application data as it is not persistent. You cannot extend the OS disk size, but if you use data disks, you can start with a smaller data disk and increase its size as your application grows.
Learn more about Azure disks here: https://msdn.microsoft.com/en-us/library/azure/Dn790303.aspx
When I launch an Amazon MySQL database instance in RDS, I choose the amount of Allocated Storage for it.
When I create a snapshot (either manually or with the automatic backup), it says under "Storage" the same size as the size allocated for the instance, even though my database did not reach that size.
Since the pricing (or the free tier) in Amazon is dependent on the amount of storage used, I would like to know the real storage size I'm using, rather than the size allocated by the original database.
From looking at the Account Activity, and from knowing how mysqldump works, I would guess the snapshot does not really include the empty space allocated.
I was interested in the answer to this question, and a Google search brought me here. I was surprised to see that although there is an accepted, upvoted answer, it does not actually answer the question that was asked.
The question asked is:
How can I tell the raw size of a MySQL DB snapshots in Amazon RDS?
However, the accepted answer is actually the answer to this question:
Am I charged for the allocated size of the source database when I take an RDS snapshot from it?
As to the original question, AFAICT there is no API or console function to determine the storage used by an RDS snapshot. The DBSnapshot resource has allocated_storage (ruby, java), but this returns the maximum storage requested when the database was created. This matches what the AWS RDS console shows.
One might have thought this would be broken out on the AWS bill, but the bill provides very little detail for the RDS line items, and the S3 part of the bill is even less helpful.
In conclusion: there is no way to tell the raw size of a MySQL DB snapshot in Amazon RDS.
RDS storage is backed by EBS, according to the FAQ:
Amazon RDS uses EBS volumes for database and log storage.
EBS doesn't store empty blocks, according to its pricing page:
Because data is compressed before being saved to Amazon S3, and Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size.
And snapshots take space only for the blocks changed since the previous snapshot, according to the details page:
If you have a device with 100 GB of data but only 5 GB has changed after your last snapshot, a subsequent snapshot consumes only 5 additional GB and you are billed only for the additional 5 GB of snapshot storage, even though both the earlier and later snapshots appear complete.
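The incremental behaviour quoted above is simple arithmetic: after the initial snapshot, only the blocks changed since the previous snapshot add to your billed snapshot storage. A sketch (simplified: it ignores the compression mentioned in the pricing page, which reduces the real figure further):

```python
def billed_snapshot_storage_gb(initial_data_gb, changed_gb_per_snapshot):
    """Cumulative billed snapshot storage: the initial snapshot stores
    the data once, and each later snapshot adds only the changed blocks.
    Simplified model; ignores compression and block granularity."""
    return initial_data_gb + sum(changed_gb_per_snapshot)

# The example from the quote: 100 GB of data, 5 GB changed since the
# last snapshot -> the second snapshot adds only 5 GB of billed storage.
print(billed_snapshot_storage_gb(100, [5]))  # prints: 105
```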
RDS backups are block-level full virtual machine snapshots; no mysqldump at all is involved. Given this fact, each of your snapshots will use exactly the same amount of storage as your production instance at the moment the backup took place.
We have an EC2 instance running both Apache and MySQL at the moment. I am wondering whether moving MySQL to another EC2 instance will increase or decrease the performance of the site. I am mostly worried about network speed issues between the two instances.
EC2 instances in the same availability zone are connected via a 10 Gbps network - that's faster than a good solid-state drive on a SATA-3 interface (6 Gb/s).
You won't see any performance drop by moving a database to another server, in fact you'll probably see a performance increase because of having separate memory and cpu cores for the two servers.
If your worry is network latency then forget about it - not a problem on AWS in the same availability zone.
Another consideration: you're probably storing your website and DB files on an EBS-mounted volume. That EBS volume is stored off-instance, so your data is already travelling over that same fast 10 Gbps network.
So what I'm saying is: with EBS, your website and database are already talking across the network to get their data, so putting them on separate instances won't really change anything in that respect - besides giving more resources to both servers. More resources means more data cached in memory and more performance.
The answer depends largely on what resources Apache and MySQL are using. They can happily cohabit if the demands on your website are low and each is configured with enough memory that it doesn't spill into swap. In that case, they are best kept together.
As traffic or your application grows, you will benefit from splitting them out, because each can then run inside dedicated memory. Provided the instances are in the same region, you should see fast performance between them. I have even run a web application in Europe with the DB in the USA, and performance wasn't noticeably bad - though I wouldn't recommend that!
Because AWS is easy and cheap, your best bet is to set it up and benchmark it!