Aurora MySQL DB backup size increased drastically

I am running Aurora MySQL engine version 5.6.10a in our production environment. The automated DB snapshot size on 9th May was 120 GB, and since then it has increased by 27 GB to 147 GB. I have checked that the DB size itself did not increase by even 1 GB. I searched online for why this might happen but found nothing.
Graph of snapshot size for the last two weeks:
It's pretty consistent up to 9th May, then jumps from 10th May onward. Does anyone have insight into this issue?
The rate of increase in DB size:
VolumeBytesUsed Graph:
Your help will be much appreciated!!
Thanks,
Mayurkumar

There is a bug in Aurora storage provisioning; we have also faced the same issue for one of our clients.
In our case the storage grew from 17 GB to 22,000 GB in 15 days, while the DB size was just 16 GB. We were charged a lot for that period; you can contact AWS Support for a resolution of this problem.

How were you checking your snapshot size? One possibility is that you have configured daily backups or similar, which might be why your total backup size is growing. I would love to help more if you can provide details such as the number of snapshots and how you measured the size of the cluster and of the snapshot.
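If it helps to narrow things down: one rough way to get the logical size of the data from inside the instance is the query below (a sketch only; it will not account for the free or fragmented space that Aurora's storage layer still counts in VolumeBytesUsed and in the snapshot size).

    -- Approximate logical size per schema, in GB, for comparison with
    -- VolumeBytesUsed and the reported snapshot size.
    SELECT
        table_schema AS db_name,
        ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS data_and_index_gb,
        ROUND(SUM(data_free) / 1024 / 1024 / 1024, 2) AS reported_free_gb
    FROM information_schema.tables
    GROUP BY table_schema
    ORDER BY data_and_index_gb DESC;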

Related

MySQL Configuration Issue - 1 GB RAM - Ubuntu

We have 1 GB RAM and a 2-core CPU, running Ubuntu.
The website is new, so traffic is very low, but storage is increasing by about 1% daily, which is alarming. Could you please evaluate this SQL tuner report and suggest a configuration? It's for an e-commerce website.
https://pastebin.com/tVw80PKu
1% per day compounded is roughly 38x per year (1.01^365 ≈ 37.8). You have 23.5MB of data now; that would be about 900MB after a year. How much disk do you have? If you have a few GB, what is the problem?
RAM is used as a cache, so you do not have to have enough RAM for the entire dataset. (Performance may suffer.)
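To see where that 1% a day is actually going, something along these lines should work on any MySQL instance (a sketch; run it a few days apart and compare the numbers):

    -- Ten largest tables by data + index size, in MB, with row counts.
    SELECT
        table_schema,
        table_name,
        ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb,
        table_rows
    FROM information_schema.tables
    WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
    ORDER BY (data_length + index_length) DESC
    LIMIT 10;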

MySQL freezing for 40 seconds every 30 minutes

We are running MySQL 5.6 on Windows Server 2008r2.
Every 30 minutes it runs very slowly for around 40 seconds and then goes back to normal for another 30 minutes. It is happening like clockwork with each ‘hang’ being 30 minutes after the last one finished.
Any ideas? We are stumped and don’t know where next to look.
Background / things we have ruled out below.
Thanks.
• Our initial thoughts were a locking query but we have eliminated this.
• The slow query log shows affected queries but with zero lock time.
• General logs show nothing (as an aside, is there a way to increase the logging level to get it to log when it is flushing the cache etc? What does MySQL run every 30 minutes?)
• When it is running slowly, it is still running, but even simple queries like SELECT 'Hello World'; take over a second to run.
• All MySQL operations run slowly at the time in question including monitoring tools and especially making new connections. InnoDB and MyISAM are equally affected.
• We have switched from using the SAN array to using local SSD and it has made no difference ruling out disk / spindles.
• The machine has Sophos Endpoint Protection but this is not scanning anything on the database drives.
• It is as if the machine is maxed out, but local performance monitoring does not show any unusual system metrics. CPU, disk queue, disk throughput, memory, network activity etc. are all flat.
• The machine is a VM running on VMware. Hypervisor monitoring is not showing any performance issues – but I am not convinced it is granular enough to pick up a 30 second spike.
• We have tried adjusting MySQL settings like the InnoDB cache size, log size etc and this has made no difference.
• The server runs nothing other than a couple of MySQL instances.
• The other instances are unaffected - as far as we can tell.
There's some decent advice here on Server Fault:
https://serverfault.com/questions/733590/mysql-stops-responding-periodically
Have you monitored disk I/O? Is there an increase in I/O wait times or queued transactions? It's possible that requests are queueing up at the storage level due to an I/O limitation imposed by your host. Also, have you checked whether you're hitting your maximum allowable MySQL clients? If these queries are suddenly taking a lot longer to complete, it's also possible that they aren't leaving enough available connections for normal site traffic because the other connections aren't closing fast enough.
I'd recommend using iostat to see whether you're saturating your disks. It should show whether your disks are at 100% utilization, etc.
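On the max-clients point, the counters below are a quick way to check whether the connection limit is being approached during a hang, and what every connection is doing at that moment (a sketch; healthy thresholds depend on your workload):

    -- Compare the configured limit with the high-water mark and current usage.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    -- Aborted_connects climbing during a hang suggests clients are being refused.
    SHOW GLOBAL STATUS LIKE 'Aborted_connects';

    -- Run this during a hang to see what each connection is waiting on.
    SHOW FULL PROCESSLIST;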

Google Cloud SQL Storage Usage High Issue

We have set up Cloud SQL in Google Cloud with tier db-n1-standard-4 and 100 GB of SSD storage. My actual database is only about 160 MB at most, but the Cloud SQL instance shows up to 72 GB used, and I don't know why; it is still increasing by about 10 GB per day. Can anyone explain this issue?
Thanks
Most of the time this is due to the binary logs that are used for replication.
The growth of the binary logs is roughly proportional to the number of modified rows.
Binary logs are purged after 7 days, so the space usage will stabilize after that.
Possibly you have enabled the general_log option. Check EDIT -> Cloud SQL flags -> general_log. If it is on, turn it off.
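Both hypotheses are easy to confirm from a MySQL client, assuming you can connect to the instance (retention of the binary logs themselves is managed by Cloud SQL):

    -- Space currently held by each binary log file, in bytes.
    SHOW BINARY LOGS;

    -- Is the general query log enabled, and where is it written?
    SHOW VARIABLES LIKE 'general_log';
    SHOW VARIABLES LIKE 'log_output';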

couchbase nodes RAM is getting full frequently

We have a 4-node cluster with 24 GB RAM per node, of which 18 GB has been given to Couchbase, with zero replication.
We have approximately 10M records in this cluster, with ~2.5M new records per hour, and we expire old items.
The cluster's RAM usage (~72 GB in total) fills up roughly every 12 days, and I need to restart the cluster to fix this. After a restart the RAM usage drops back to ~20 GB.
Can someone please help me understand the reason for this?
FYI: auto-compaction is set to a 40% fragmentation level and the metadata purge interval was set to 1 day, which we reduced to 2 hours, but it didn't help.
Under scenarios with very high memory-allocation churn, Couchbase can experience memory fragmentation, which would cause the effects you are describing. This was addressed in the 4.x release by switching to jemalloc on non-Windows OSes and using tcmalloc with aggressive decommit on Windows. I would suggest you download the RC version of Couchbase 4 (http://www.couchbase.com/nosql-databases/downloads#Couchbase_Server) and give it a try to see if that fixes the issue.

Amazon Aurora memory overflow

I created an Amazon Aurora instance a few weeks ago. It was doing great at the beginning, and the current memory value was 13,000 MB. Every day the current memory value drops by a certain amount, and now it is only 781 MB.
I don't know whether this is caching behaviour or something wrong with my configuration. Any ideas?
I found that MySQL tries to cache data and use memory as long as it has free space, so if you want to change this, you can use a custom parameter group rather than the default Amazon RDS configuration.
By default, innodb_buffer_pool_size is {DBInstanceClassMemory*3/4}.
I changed it to innodb_buffer_pool_size = {DBInstanceClassMemory*1/2},
and that solved my problem.
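For reference, a quick way to verify what innodb_buffer_pool_size actually resolved to on a running instance, and how much of the pool is populated (a sketch; note that the buffer pool is separate from whatever "freeable memory" metric the RDS console is reporting):

    -- Configured buffer pool size, in MB.
    SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

    -- How much of the pool currently holds data pages.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';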