Amazon AWS RDS 120GB free storage suddenly became 0GB within less than 2 hours - mysql

I've been using an AWS RDS db.m4.large instance with 128 GB of storage for MySQL (InnoDB) for more than a year now.
My database is less than 2 GB, and the free storage had always been above 125 GB, as captured below, but two weeks ago it suddenly dropped to 0 GB in under two hours.
So I upgraded the storage to 250 GB, but it dropped to 0 GB again two days ago.
(screenshot: free storage suddenly dropped)
So I asked AWS support about it, and they told me that mysqlInnoDbTablespace had grown to 243 GB and that I could reduce it by running OPTIMIZE TABLE on all of my tables. They sent these figures:
dataVolumeAvailableSize    43.3 GB
dataVolumeTotalSize        295 GB
dataVolumeUsedSize         251 GB
mysqlErrorLogFileSize      72.4 KB
mysqlGeneralLogBackupSize  0 bytes
mysqlGeneralLogFileSize    0 bytes
mysqlGeneralLogSize        0 bytes
mysqlInnoDbLogSize         256 MB
mysqlInnoDbTablespace      243 GB
mysqlSlowLogBackupSize     124 MB
mysqlSlowLogFileSize       35.9 KB
mysqlSlowLogSize           4.03 GB
I did that, but the free storage hasn't increased, so I asked them again; they haven't replied to my inquiry yet.
Below is the size of my database. It's less than 2 GB.
(screenshot: the size of my database)
Has anybody experienced this kind of issue, or can somebody help me free up the storage?
Thanks.
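For reference, a minimal diagnostic sketch in SQL (hedged: the database and table names are placeholders, and the log-rotation procedure is RDS-specific). One common culprit: if innodb_file_per_table is OFF, OPTIMIZE TABLE rebuilds each table inside the shared system tablespace, which never shrinks back, which would explain why running it freed nothing.

-- check whether each table gets its own .ibd file (OFF means the shared
-- ibdata tablespace holds everything and can only grow)
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- find the largest tables and how much reclaimable space they report
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;

-- rebuild one table to reclaim its space (effective with per-table files)
OPTIMIZE TABLE mydb.mytable;

-- the slow log above is 4.03 GB; on RDS it can be rotated away with the
-- RDS-provided procedure (if available on your engine version)
CALL mysql.rds_rotate_slow_log;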

Related

MySQL/MariaDB: load data with more than 100 million rows

I hope you can help me; I would be very grateful. I need to load a text file with more than 100 million records. What would be the ideal configuration (my.cnf) and storage engine for this load?
Windows 10 server
64 GB RAM
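A hedged sketch of how such a bulk load is often done (the file path, table name, and delimiters are placeholders, and LOCAL INFILE must be enabled on both client and server):

-- relax per-row checks for the duration of the load
SET SESSION unique_checks = 0;
SET SESSION foreign_key_checks = 0;

-- LOAD DATA is much faster than row-by-row INSERTs for files this size
LOAD DATA LOCAL INFILE '/data/records.txt'
INTO TABLE imports
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

-- restore the defaults afterwards
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;

For the engine, InnoDB with a generously sized innodb_buffer_pool_size is the usual starting point on a 64 GB machine.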

How many resources does a MySQL event use?

I'm working on a browser game, and I have written 7 MySQL events for each player.
Five of the events update a table row every 5 seconds,
and two of them update 2 other tables every second.
I have a Linux VPS with 512 MB of RAM and a single CPU core.
How many online players can this VPS support?
Thanks.
The minimum system requirement for MySQL on Windows (x86 and x64) is 800 MB of RAM and 500 MB of hard disk space.
Read this article for further details:
dev.mysql.com/doc/mysql-monitor/3.0/en/system-prereqs-reference.html
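On the design itself: rather than 7 events per player, a single global event per interval is the usual pattern, since every event carries scheduler overhead. A hypothetical sketch (table and column names invented):

-- the event scheduler must be enabled
SET GLOBAL event_scheduler = ON;

-- one event updates every player's row in one statement,
-- instead of one event per player
CREATE EVENT regen_energy
ON SCHEDULE EVERY 5 SECOND
DO
  UPDATE players SET energy = LEAST(energy + 1, 100);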

Google Compute Engine VM disk is very slow

We just switched over to Google Compute Engine and are having major issues with disk speed: it's been about 5% of Linode's, or worse. It has never exceeded 20 MB/s for writing and 10 MB/s for reading; most of the time it's 15 MB/s for writing and 5 MB/s for reading.
We're currently running an n1-highmem-4 (4 vCPUs, 26 GB memory) machine. CPU and memory aren't the bottleneck: we're just running a script that reads rows from a PostgreSQL database, processes them, and writes back to PostgreSQL - a routine batch job to update database rows. We tried running 20 processes to take advantage of the multiple cores, but overall progress was still slow.
We're thinking the disk may be the bottleneck because its traffic is abnormally low.
Finally, we decided to do some benchmarking. We found that it's not only slow, but there also seems to be a major, reproducible bug:
create & connect to instance
run the benchmark at least three times:
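# writes 5,000,000 blocks of 1,024 bytes (~5.1 GB) of zeros to ~/5Gb.file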
dd if=/dev/zero bs=1024 count=5000000 of=~/5Gb.file
We found it became extremely slow, and we weren't able to finish the benchmark at all.
Persistent Disk performance is proportional to the size of the disk itself and the VM that it is attached to. The larger the disk (or the VM), the higher the performance, so in essence, the price you are paying for the disk or the VM pays not only for the disk/CPU/RAM but also for the IOPS and throughput.
Quoting the Persistent Disk documentation:
Persistent disk performance depends on the size of the volume and the
type of disk you select. Larger volumes can achieve higher I/O levels
than smaller volumes. There are no separate I/O charges as the cost of
the I/O capability is included in the price of the persistent disk.
Persistent disk performance can be described as follows:
IOPS performance limits grow linearly with the size of the persistent disk volume.
Throughput limits also grow linearly, up to the maximum bandwidth for the virtual machine that the persistent disk is attached to.
Larger virtual machines have higher bandwidth limits than smaller virtual machines.
There's also a more detailed pricing chart on the page which shows what you get per GB of space that you buy (data below is current as of August 2014):
                                         Standard disks   SSD persistent disks
Price (USD/GB per month)                 $0.04            $0.025
Maximum Sustained IOPS
Read IOPS/GB                             0.3              30
Write IOPS/GB                            1.5              30
Read IOPS/volume per VM                  3,000            10,000
Write IOPS/volume per VM                 15,000           15,000
Maximum Sustained Throughput
Read throughput/GB (MB/s)                0.12             0.48
Write throughput/GB (MB/s)               0.09             0.48
Read throughput/volume per VM (MB/s)     180              240
Write throughput/volume per VM (MB/s)    120              240
and the page gives a concrete example of what a particular disk size will get you:
As an example of how you can use the performance chart to determine
the disk volume you want, consider that a 500GB standard persistent
disk will give you:
(0.3 × 500) = 150 small random reads
(1.5 × 500) = 750 small random writes
(0.12 × 500) = 60 MB/s of large sequential reads
(0.09 × 500) = 45 MB/s of large sequential writes
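Reading the chart in reverse gives a rough sense of what disk size would be needed to match the speeds mentioned in the question (a back-of-the-envelope estimate with an assumed target, not official sizing guidance). To sustain, say, 100 MB/s of large sequential writes:
(100 ÷ 0.09) ≈ 1,111 GB of standard persistent disk
(100 ÷ 0.48) ≈ 208 GB of SSD persistent disk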

Heavy MySQL usage: CPU or memory?

I have an Amazon EC2 instance, and the project we have on the server does a lot of INSERTs and UPDATEs and a few complex SELECTs.
We are finding that MySQL will quite often take up a lot of the CPU.
I am trying to establish whether more memory or more CPU is better for the setup described below.
Here is the output of cat /proc/meminfo:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: 2202544 kB
SwapCached: 0 kB
Active: 6483248 kB
Inactive: 415888 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 168264 kB
Writeback: 0 kB
AnonPages: 4617848 kB
Mapped: 21212 kB
Slab: 129444 kB
SReclaimable: 86076 kB
SUnreclaim: 43368 kB
PageTables: 54104 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3673876 kB
Committed_AS: 5384852 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 180 kB
VmallocChunk: 34359738187 kB
Current setup:
High-CPU Extra Large Instance (API name: c1.xlarge)
7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High

Possible setup:
High-Memory Double Extra Large Instance (API name: m2.2xlarge)
34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
I would go for the 32 GB of memory, and maybe more hard disks in RAID. More CPU won't help that much - you have enough CPU power already. You also need to configure MySQL correctly:
Leave 1-2 GB for the OS cache and for temp tables.
Increase tmp_table_size.
Remove swap.
Tune query_cache_size (don't make it too big - see the MySQL documentation about it).
Periodically run FLUSH QUERY CACHE: if your query cache is under 512 MB, run it every 5 minutes (a scheduling sketch follows the quote below). This doesn't clear the cache; it defragments it. From the MySQL docs:
Defragment the query cache to better utilize its memory. FLUSH QUERY CACHE does not remove any queries from the cache, unlike FLUSH TABLES or RESET QUERY CACHE.
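A sketch of the tuning above in SQL (hedged: the sizes are illustrative, the event name is invented, the event scheduler and EVENT privilege must be available, and query_cache_size only applies to MySQL versions that still have the query cache):

-- illustrative sizes - tune them to your workload
SET GLOBAL tmp_table_size = 268435456;       -- 256 MB
SET GLOBAL max_heap_table_size = 268435456;  -- keep in sync with tmp_table_size
SET GLOBAL query_cache_size = 134217728;     -- 128 MB

-- defragment the query cache every 5 minutes
SET GLOBAL event_scheduler = ON;
CREATE EVENT defrag_query_cache
ON SCHEDULE EVERY 5 MINUTE
DO FLUSH QUERY CACHE;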
However, I noticed that the other option has half the disk space (850 GB), which probably means fewer hard disks. That's generally a bad idea: the biggest bottleneck for databases is the hard disks. If you use RAID 5, make sure you don't end up with fewer disks. If you don't use RAID at all, I would suggest RAID 0.
Use vmstat and iostat to find out whether CPU or I/O is the bottleneck (if it's I/O, add more RAM and load the data into memory). Run these from a shell and check the results:
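# system-wide stats every 5 seconds; watch the "us" (user CPU) column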
vmstat 5
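# extended per-device stats every 5 seconds; watch the "%util" column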
iostat -dx 5
If CPU is the problem, vmstat will show high values in the us column, and iostat will show low disk utilization (%util).
If I/O is the problem, vmstat will show low values in the us column, and iostat will show high disk utilization (%util) - by high I mean over 50%.
It depends on the application.
You could use memcached to cache MySQL query results. This would ease CPU usage a bit, though with this method you would want more RAM for storing the cached results.
On the other hand, if that's not feasible for your type of application, then I would recommend more CPU.
There are not many reasons for MySQL to use a lot of CPU: it is either the processing of stored routines (stored procedures or stored functions) or sorting that eats CPU.
If you are burning a lot of CPU in stored routines, you are doing it wrong, and your soul cannot be saved anyway.
If you are burning a lot of CPU on sorting, some things can be done, depending on the nature of your queries: you can extend indexes to include the ORDER BY columns at the end, or you can drop the ORDER BY clauses and sort in the client (an index sketch follows below).
Which approach to choose depends on the actual cause of the CPU usage - is it queries and sorting? - and on the actual queries themselves. So in any case you will need better monitoring first.
Without monitoring information, the general advice is always: buy more memory, not more CPU, for a database.
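A hypothetical illustration of the index approach (table, column, and index names invented):

-- this query currently filters on one column and sorts on another,
-- forcing a filesort that burns CPU
SELECT id, title FROM articles WHERE author_id = 42 ORDER BY published_at;

-- appending the ORDER BY column to the index lets rows come back
-- already sorted, with no filesort
CREATE INDEX idx_author_published ON articles (author_id, published_at);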
Doesn't the on-demand nature of EC2 make it rather straightforward to rent the possible setup for a day, and do some load testing? Measurements speak louder than words.
Use "High-CPU Extra Large Instance".
In your current setup, MySQL is not constrained by memory:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: **2202544 kB**
Out of 7 GB of memory, about 2 GB is not needed by your applications and is being used by the OS as I/O cache.
In this case, increasing CPU count would give you more bang for buck.

How much data can be stored in MySQL?

I am just a beginner in MySQL, and I need to know how much data can be stored in MySQL. I am developing a web crawler: can I store all the data in MySQL, or do I need to use another database? Which is faster - that is, which has the higher write/read rate? Do I need to reconfigure anything to store more data?
Depends on the operating system.

**Operating System**          **File-size Limit**
Win32 w/ FAT/FAT32            2 GB / 4 GB
Win32 w/ NTFS                 2 TB (possibly larger)
Linux 2.2, Intel 32-bit       2 GB (LFS: 4 GB)
Linux 2.4+                    4 TB
Solaris 9/10                  16 TB
MacOS X w/ HFS+               2 TB
NetWare w/ NSS file system    8 TB
http://dev.mysql.com/doc/refman/5.0/en/full-table.html
Your write/read rate is of pretty much no concern here; your bottleneck is going to be your internet connection.
https://forums.mysql.com/read.php?22,379547,381106
InnoDB Size Limits
Max # of tables: 4 G
Max size of a table: 32 TB
Columns per table: 1000
Max row size: 8 kB if stored on the same page; n × 4 GB with n BLOBs
Max key length: 3500
Maximum tablespace size: 64 TB
Max # of concurrent transactions: 1023
- Nanda Kishore Toomula, Sr. DBA, Nokia India, CMDBA 5.0