I was trying to run the following statement, hoping to create a join of two existing tables.
create table CRS_PAIR
select concat_ws(',', a.TESTING_ID, b.TRAINING_ID, a.TESTING_C) as k, concat_ws(',', a.YTG, b.YTG) as YTG
from CRS_TESTING a, CRS_TRAINING b
where a.TESTING_C=b.TRAINING_C;
Currently the sizes of these two tables are:
mysql> SELECT table_name, round(((data_length + index_length) / (1024*1024)),2) as "size in megs" FROM information_schema.tables WHERE table_schema = "crs";
+----------------+---------------+
| table_name | size in megs |
+----------------+---------------+
| CRS_TESTING | 36.59 |
| CRS_TRAINING | 202.92 |
+----------------+---------------+
After a little over a day, the query ended and I found the following errors in the MySQL error log.
140330 2:53:50 [ERROR] /usr/sbin/mysqld: The table 'CRS_PAIR' is full
140330 2:53:54 InnoDB: ERROR: the age of the last checkpoint is 9434006,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
It turned out that /var/lib/mysql had grown to 246 GB, and the disk ran out of space. However, for some reason, the CRS_PAIR table does not show up in the mysql client, even when I try to get the size of all databases.
mysql> SELECT table_schema "Data Base Name", sum( data_length + index_length ) / (1024 * 1024) "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema ;
+--------------------+----------------------+
| Data Base Name | Data Base Size in MB |
+--------------------+----------------------+
| crs | 1426.4531 |
| information_schema | 0.0088 |
| mysql | 0.6453 |
| performance_schema | 0.0000 |
+--------------------+----------------------+
4 rows in set (0.74 sec)
This is the output of the show tables command.
mysql> show tables;
+----------------+
| Tables_in_crs |
+----------------+
| CRS_TESTING |
| CRS_TRAINING |
some other tables
+----------------+
9 rows in set (0.00 sec)
CRS_PAIR is not there.
Can anyone help me figure out where this mysterious table went, so that I can clean up my disk space?
If you don't have innodb_file_per_table enabled (or it is set to 0), InnoDB puts all your InnoDB tables into the shared system tablespace file (usually /var/lib/mysql/ibdata1), expanding it as required to fit the written data. However, the engine never returns that space to the operating system: the ibdata1 file only ever grows, it never shrinks.
The only way to reduce the size of this file is to back up your data, shut down MySQL, delete the file, restart MySQL, and then reload your data.
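If you go through that rebuild, it is worth enabling file-per-table before reloading, so each table lands in its own .ibd file that can later be shrunk individually. A sketch (setting the option in my.cnf before the restart is the reliable route; the ALTER shown uses a table name from the question as an example):

```sql
-- Make new InnoDB tables use their own .ibd files
-- (also add innodb_file_per_table = 1 to my.cnf so it persists):
SET GLOBAL innodb_file_per_table = 1;
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Rebuild an existing table so it moves out of ibdata1 into its own file:
ALTER TABLE crs.CRS_TESTING ENGINE=InnoDB;
```

Note that rebuilding tables this way moves their data out of ibdata1 but does not shrink ibdata1 itself; only the dump/delete/reload cycle does that.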
Related
My available storage doesn't seem to match up with the Instance storage size in RDS.
When I run:
SELECT table_schema "database_name",
       sum( data_length + index_length ) / 1024 / 1024 "Database Size in MB",
       sum( data_free ) / 1024 / 1024 "Free Space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
I get:
+--------------------+----------------------+------------------+
| database_name      | Database Size in MB  | Free Space in MB |
+--------------------+----------------------+------------------+
| fx                 |        6787.34375000 |    3239.00000000 |
| information_schema |           0.21875000 |       0.00000000 |
| mysql              |          10.04687500 |       0.00000000 |
| performance_schema |           0.00000000 |       0.00000000 |
+--------------------+----------------------+------------------+
So the total used plus free space is about 10 GB.
But the storage I have provisioned in RDS for this database instance is 29 GB (about three times the space I can account for).
This is after I've cleared the slow query log and general log.
Can someone clarify the discrepancy here? At the moment I risk running out of space.
Thanks
It turns out it was the general log filling up: I had some batch delete jobs that could potentially run thousands of times daily, which was causing it to grow fast. Switching the log off and purging it stopped the storage space from continuously dropping. Hope this helps someone in future (probably just me again).
This is also probably the best write up of options to try:
https://aws.amazon.com/premiumsupport/knowledge-center/view-storage-rds-mysql-mariadb/
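On RDS the logging switches live in the DB parameter group rather than in SET GLOBAL, but you can inspect and purge the table-based logs from the client. A sketch (the rds_rotate_* stored procedures exist only on RDS):

```sql
-- See whether the general log is on and whether logs go to TABLE or FILE:
SHOW VARIABLES LIKE 'general_log';
SHOW VARIABLES LIKE 'log_output';

-- With log_output = TABLE, these RDS stored procedures rotate the log
-- tables; calling each one twice discards both the current and the
-- backup copy:
CALL mysql.rds_rotate_general_log();
CALL mysql.rds_rotate_slow_log();
```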
This is not a duplicate of Why is InnoDB table size much larger than expected? The answer to that question states that if I don't specify a primary key then 6 bytes are added to the row. I did specify a primary key, and there are more than 6 extra bytes to explain here.
I have a table that is expecting millions of records, so I paid close attention to the storage size of each column. Each row should take 15 bytes (two smallints = 4 bytes, date = 3 bytes, datetime = 8 bytes).
CREATE TABLE archive (
  `customer_id` smallint(5) unsigned NOT NULL,
  `calendar_date` date NOT NULL,
  `inserted` datetime NOT NULL,
  `value` smallint(5) unsigned NOT NULL,
  PRIMARY KEY (`customer_id`,`calendar_date`,`inserted`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The table now has a half million records in it and is taking more storage than expected. I ran this query to get more details from the system:
SELECT *
FROM information_schema.TABLES
WHERE table_name = 'archive';
information_schema.index_length = 0
information_schema.avg_row_length = 37
information_schema.engine = InnoDB
information_schema.table_type = BASE TABLE
HOW!?
I was expecting 15 bytes per row, and it's taking 37. Can anyone give me an idea of where to look next for an explanation? I've done a lot of reading on this and I've seen explanations for an extra 6 or 10 bytes being added to a row, but that doesn't explain the 22 extra bytes.
One explanation is that indexes also take up storage. There are no secondary indexes on this table.
Another explanation is that the information_schema.tables query returns an unreliable row count, which would throw off the avg_row_length. I checked the row count it is using against a count(*) query and it is only off by a little (1/20 of 1%), so that's not the whole story.
Another explanation is fragmentation. Of note, this table has been rebuilt from an SQL dump, so there hasn't been any hammering of updates, inserts and deletes.
Because avg_row_length is data_length / rows.
data_length is basically the total size of the table on disk. An InnoDB table is more than just a list of rows. So there's that extra overhead.
Because an InnoDB row is more than the data.
Similar to above, each row comes with some overhead. So that's going to add to the size of a row. An InnoDB table also isn't just a list of data crammed together. It needs a little extra empty space to work efficiently.
Because stuff is stored on disks in blocks and those blocks aren't always full.
Disks usually store things in 4K, 8K or 16K blocks. Sometimes things don't fit perfectly in those blocks, so you can get some empty space.
As we'll see below, MySQL is going to allocate the table in blocks. And it's going to allocate a lot more than it needs to avoid having to grow the table (which can be slow and lead to disk fragmentation which makes things even slower).
To illustrate this, let's start with an empty table.
mysql> create table foo ( id smallint(5) unsigned NOT NULL );
mysql> select data_length, table_rows, avg_row_length from information_schema.tables where table_name = 'foo';
+-------------+------------+----------------+
| data_length | table_rows | avg_row_length |
+-------------+------------+----------------+
| 16384 | 0 | 0 |
+-------------+------------+----------------+
It uses 16K, or four 4K blocks, to store nothing. The empty table doesn't need this space, but MySQL allocated it on the assumption that you're going to put a bunch of data in it. This avoids having to do an expensive reallocation on each insert.
Now let's add a row.
mysql> insert into foo (id) VALUES (1);
mysql> select data_length, table_rows, avg_row_length from information_schema.tables where table_name = 'foo';
+-------------+------------+----------------+
| data_length | table_rows | avg_row_length |
+-------------+------------+----------------+
| 16384 | 1 | 16384 |
+-------------+------------+----------------+
The table didn't get any bigger; there's still all that unused space within those 4 blocks it has. There's one row, which means an avg_row_length of 16K. Clearly absurd. Let's add another row.
mysql> insert into foo (id) VALUES (1);
mysql> select data_length, table_rows, avg_row_length from information_schema.tables where table_name = 'foo';
+-------------+------------+----------------+
| data_length | table_rows | avg_row_length |
+-------------+------------+----------------+
| 16384 | 2 | 8192 |
+-------------+------------+----------------+
Same thing. 16K is allocated for the table, 2 rows using that space. An absurd result of 8K per row.
As I insert more and more rows, the table size stays the same; the rows use up more and more of the allocated space, and the avg_row_length comes closer to reality.
mysql> select data_length, table_rows, avg_row_length from information_schema.tables where table_name = 'foo';
+-------------+------------+----------------+
| data_length | table_rows | avg_row_length |
+-------------+------------+----------------+
| 16384 | 2047 | 8 |
+-------------+------------+----------------+
Here we also start to see table_rows become inaccurate: I definitely inserted 2048 rows.
Now when I insert some more...
mysql> select data_length, table_rows, avg_row_length from information_schema.tables where table_name = 'foo';
+-------------+------------+----------------+
| data_length | table_rows | avg_row_length |
+-------------+------------+----------------+
| 98304 | 2560 | 38 |
+-------------+------------+----------------+
(I inserted 512 rows, and table_rows has snapped back to reality for some reason)
MySQL decided the table needs more space, so it got resized and grabbed a bunch more disk space. avg_row_length just jumped again.
It grabbed a lot more space than it needs for those 512 rows; it's now 96K, or 24 4K blocks, on the assumption that it will need the space later. This minimizes how many potentially slow reallocations it needs to do and minimizes disk fragmentation.
This doesn't mean all that space was filled. It just means MySQL thought it was full enough to need more space to run efficiently. If you want an idea why that's so, look into how a hash table operates. I don't know if InnoDB uses a hash table, but the principle applies: some data structures operate best when there's some empty space.
The disk used by a table is directly related to the number of rows and types of columns in the table, but the exact formula is difficult to figure out and will change from version to version of MySQL. Your best bet is to do some empirical testing and resign yourself that you'll never get an exact number.
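As a sanity check, the reported avg_row_length can be recomputed directly from the other two columns. A sketch against the toy table foo from the walkthrough above:

```sql
-- avg_row_length is just data_length / table_rows (integer division),
-- so the recomputed column should match what MySQL reports:
SELECT data_length,
       table_rows,
       avg_row_length,
       FLOOR(data_length / table_rows) AS recomputed
FROM information_schema.tables
WHERE table_name = 'foo';
```

Since table_rows is itself an estimate for InnoDB, both the reported and the recomputed values wobble as rows are inserted.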
I wanted to switch from MySQL to MariaDB. To do so, I exported the old databases and imported them into a new MariaDB server. Now I have the problem that the newly imported database is smaller than the original.
I did the following:
Created a backup with mysqldump from mysql (mysqldump --all-databases --user=root --password --master-data > backupdatabase.sql)
Imported it into a new MariaDB server (mysql -u root -p < backupdatabase.sql)
If I check how big the databases are, I see the following:
On the original mysql-server:
mysql> SELECT table_schema "database", sum(data_length + index_length)/1024/1024 "size in MB" FROM information_schema.TABLES GROUP BY table_schema;
+----------+--------------+
| database | size in MB   |
+----------+--------------+
| DB-1     |   0.40625000 |
| DB-2     |   4.09375000 |
| DB-3     | 506.60937500 |
+----------+--------------+
6 rows in set (0.90 sec)
If I now do the same on the MariaDB host:
MariaDB [(none)]> SELECT table_schema "database", sum(data_length + index_length)/1024/1024 "size in MB" FROM information_schema.TABLES GROUP BY table_schema;
+----------+--------------+
| database | size in MB   |
+----------+--------------+
| DB-1     |   0.39062500 |
| DB-2     |   3.03125000 |
| DB-3     | 416.39062500 |
+----------+--------------+
6 rows in set (0.09 sec)
Where can the difference come from? Did I do something wrong?
The on-disk storage formats of MySQL and MariaDB for InnoDB and MyISAM (the engines you are probably using) are identical. Furthermore, you can install MariaDB right on top of MySQL and it will work with the same database files.
Your issue most likely comes from the fact that when you create a table from scratch and insert all the rows, there is no fragmentation or wasted space. On a live server, as you delete rows the data becomes somewhat fragmented, and sometimes the engine even keeps "empty" space without giving it back to the OS (InnoDB).
Try this: create a new database in MySQL as DB1-test and import the dump into it, then compare the DB1-test size against the MariaDB DB-1; they should coincide.
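That comparison can be done with one query per server. A sketch (DB1-test is the hypothetical re-imported copy; the schema names are taken from the question):

```sql
-- Compare used and reclaimable space for the original and the copy;
-- a large data_free on the original points at fragmentation:
SELECT table_schema,
       SUM(data_length + index_length) / 1024 / 1024 AS size_mb,
       SUM(data_free) / 1024 / 1024 AS free_mb
FROM information_schema.TABLES
WHERE table_schema IN ('DB-1', 'DB1-test')
GROUP BY table_schema;
```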
Your tables are not optimized on the MySQL server, and MySQL never frees unused disk space on its own. The only way to free it is to drop and recreate a table, which is what OPTIMIZE TABLE does. Another factor is the innodb_file_per_table option: if it is set, MySQL uses one file per table and OPTIMIZE TABLE can return disk space to the OS; otherwise you have only one big tablespace file.
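A sketch of reclaiming the space (the schema and table names are placeholders; for InnoDB, OPTIMIZE TABLE is internally mapped to a table rebuild):

```sql
-- See how much reclaimable space each table reports:
SELECT table_name, data_free / 1024 / 1024 AS free_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb';

-- Rebuild one table; with innodb_file_per_table enabled this shrinks
-- its .ibd file. InnoDB prints "Table does not support optimize,
-- doing recreate + analyze instead", which is expected:
OPTIMIZE TABLE mydb.mytable;
```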
For performance reasons I want to put a 188 MB table (rebuilt on disk every day) with ~550,000 rows into a MEMORY table. Whenever I try this, I run into a HEAP "table is full" error.
My server has 1.3 GB of free RAM (32-bit OS, only 4 GB total).
Have you checked the configured mysql heap table size? Have a look at this:
mysql> show variables like "%heap%";
+---------------------+----------+
| Variable_name | Value |
+---------------------+----------+
| max_heap_table_size | 16777216 |
+---------------------+----------+
1 row in set (0.02 sec)
The default value is 16 MB.
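A sketch of raising the limit before loading the MEMORY table (512 MB is an example value sized above the 188 MB table; set it in my.cnf to make it permanent):

```sql
-- Raise the cap for this session only; MEMORY tables created in this
-- session may then grow up to ~512 MB:
SET SESSION max_heap_table_size = 512 * 1024 * 1024;
SHOW VARIABLES LIKE 'max_heap_table_size';
```

Note the limit applies when the MEMORY table is created, so an existing table must be recreated (or altered) after raising it.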
I am running NDB Cluster and I see that on the MySQL API nodes there is a very big binary log index table.
+---------------------------------------+--------+-------+-------+------------+---------+
| CONCAT(table_schema, '.', table_name) | rows | DATA | idx | total_size | idxfrac |
+---------------------------------------+--------+-------+-------+------------+---------+
| mysql.ndb_binlog_index | 83.10M | 3.78G | 2.13G | 5.91G | 0.56 |
Is there any recommended way to reduce the size of that without breaking anything? I understand that this will limit the window for point-in-time recovery, but the data is growing out of hand and I need to do a bit of cleanup.
It looks like this is possible. I don't see anything here: http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-replication-pitr.html that says you can't, based on the last epoch.
Some additional information might be gained by reading this article:
http://www.mysqlab.net/knowledge/kb/detail/topic/backup/id/8309
The mysql.ndb_binlog_index is a MyISAM table. If you are cleaning it up, make sure you don't delete entries for binary logs that you still need.
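A hedged sketch of such a cleanup, assuming you know the oldest epoch you still need for point-in-time recovery (the epoch value and the retention window below are placeholders, not recommendations):

```sql
-- Delete index rows older than the oldest epoch you still need:
DELETE FROM mysql.ndb_binlog_index WHERE epoch < 123456789;

-- Purge the binary log files themselves as well, otherwise the bulk
-- of the disk space stays allocated:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 30 DAY;
```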