I foolishly tried to add a column to a table that I did not have enough disk space to copy, and had to kill the query and expand my RDS instance's storage capacity to avert a site crash. I would like to try it again (this time with enough disk space), but I can't seem to get back to my pre-query free storage levels. My approach was to create a table like the giant table, add a column to the copy, and then insert the entire contents of the old table (together with NULL for the new column) into the new table. I tried CALL mysql.rds_rotate_slow_log; and CALL mysql.rds_rotate_general_log;, but judging by my AWS CloudWatch panel, I'm still down ~10GB from my pre-query levels. No rows were successfully inserted into the new table. Is there some "clear hdd cache" command or something like that? Since it's RDS, I don't have access to the instance that's running it, but I do have master user and RDS CLI access.
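For reference, the statements were roughly along these lines (table and column names here are placeholders, not the real ones):

CREATE TABLE new_table LIKE giant_table;
ALTER TABLE new_table ADD COLUMN new_col INT NULL;
-- this INSERT is the step that filled the disk before I killed it
INSERT INTO new_table SELECT gt.*, NULL FROM giant_table gt;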
EDIT: It seems my problem may be related to giant ibdata files, but since I don't have root access, I can't really apply the solutions mentioned in How to shrink/purge ibdata1 file in MySQL.
The solution was to drop the new table. I didn't think anything was stored in the new table because select count(*) from new_table; returned 0, but I guess the temporary data was tied to the new table anyway. I'm not sure exactly how this works from a database structural point of view, but fortunately it did what I wanted.
Bottom line: killed inserts still use storage space.
If somebody can explain why this is the case, it would be helpful for the future.
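For anyone who lands here with the same problem, this is roughly how I would check which tables are holding the space and then reclaim it (new_table is a placeholder name):

-- largest tables by on-disk footprint, as reported by the data dictionary
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;

-- dropping the half-populated copy is what actually freed the space for me
DROP TABLE new_table;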
Related
I have a large table of around 60GB. This table had a lot of unused rows, and after deleting about 6GB of them I noticed that the table size stayed the same (60GB) and there was an "optimize" message within phpMyAdmin. So I clicked to optimize, but I didn't have enough space on the hard drive, so I had to halt the process and restart MySQL.
After I logged back in, I have a problem with my table: when I try to access it I get a message similar to:
"Table 'table_name' is marked as crashed and last (automatic?) repair failed"
Right now I have 3.5GB of space to use on the hard drive. What would be the best way forward to repair, fix and shrink this particular table?
At the moment my plan is to download the full database from the server onto a local hard drive, after which I will delete the unused data (most likely 59.99GB of it) and then either copy or re-import the data back into the live database.
Thanks.
Could you free up space by exporting only another heavy table and then truncating it?
That gets you more space, and after repairing the affected table you only have to re-import that one table, not the full database.
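A rough sketch of that idea, where other_big_table is a table you can afford to unload for a while, table_name is the crashed one, and the export path is a placeholder:

-- export the other table to a location that still has free space
SELECT * INTO OUTFILE '/path/with/space/other_big_table.txt' FROM other_big_table;

-- TRUNCATE recreates the MyISAM files, so the space is returned immediately
TRUNCATE TABLE other_big_table;

-- the repair may need up to the table's size in scratch space, which you now have
REPAIR TABLE table_name;

-- reload the exported table afterwards
LOAD DATA INFILE '/path/with/space/other_big_table.txt' INTO TABLE other_big_table;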
We are running a service where we have to set up a new database for each new site. The database is exactly the same each time, so we can simply load it from a backup file or clone it from a sample database (which is created only for cloning purposes; no transactions are run there, so there is no worry about corrupting data) on the same server. The database itself contains around 100 tables with some data, and takes around 1–2 minutes to import, which is too slow.
I'm trying to find a way to do it as fast as possible. The first thought that came to mind was to copy the files within the sample database's data_dir, but it seems I also need to somehow edit the table lists, or MySQL won't be able to read my new database's tables even though it still shows them there.
You're duplicating the database the wrong way; it will be much faster if you do it properly.
Here is how you duplicate a database:
create database new_database;
create table new_database.table_one select * from source_database.table_one;
create table new_database.table_two select * from source_database.table_two;
create table new_database.table_three select * from source_database.table_three;
...
I just did a performance test: this takes 81 seconds to duplicate 750MB of data across 7 million table rows. Presumably your database is smaller than that?
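One caveat with CREATE TABLE ... SELECT: it copies the data but not the indexes, primary key, or AUTO_INCREMENT attributes, so if the new sites need the full schema each copy looks more like this instead:

create table new_database.table_one like source_database.table_one;
insert into new_database.table_one select * from source_database.table_one;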
I don't think you are going to find anything faster. One thing you could do is keep a queue of duplicate databases on standby, ready to be picked up and used at any time. That way you don't need to create a new database at all; you just rename an existing database from the queue of available ones, and have a cron job running to make sure the queue never runs empty.
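MySQL has no single "rename database" statement, so handing a standby database to a new site would mean renaming its tables into a freshly created schema, roughly like this (standby_001 and site_123 are placeholder names):

create database site_123;
rename table standby_001.table_one   to site_123.table_one,
             standby_001.table_two   to site_123.table_two,
             standby_001.table_three to site_123.table_three;
-- standby_001 is now empty; the cron job can drop it and build a replacement
drop database standby_001;

RENAME TABLE is a near-instant metadata operation, which is what makes the queue approach fast.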
Why is MySQL not able to read them, and what did you change in the table lists?
I think there may be a problem with permissions preventing MySQL from reading them; otherwise it should be fine.
Thanks
MySQL 5.05, hosting an older application that still gets lots of love from the users. Unfortunately, I'm nothing more than a hack DBA at best, and I don't trust my skills enough to safely migrate to a new version of the database unless absolutely necessary. We are in the process of procuring a new application to take over responsibilities from the old one, but are probably a year or so out.
Anyway, I was patching the application the other day and added a column to a table. The command took a while to complete and in the meantime nearly filled up the drive hosting the data files (the table is roughly 25G); I believe this was a function of the creation of a temporary table. For reasons I'm not clear on, the space did not become free again after the column was added; i.e., I lost roughly 25G of disk space. I believe this was because the database was created with a single data file; I'm not really sure of the whys, but I do know that I had to free up some space elsewhere to get the drive back to an operable state.
That all being said, I've got the column added, but it is worthless to the application without an index. I held off adding the index while trying to figure out whether it is going to create another massive, persistent 'temporary' table at index creation time. Can anyone out there give me insight into:
Will a CREATE INDEX or ALTER TABLE ... ADD INDEX statement result in the creation of a temporary table the same size as the existing table?
How can I recover the space that got added to ibdata1 when I added the column?
Any and all advice is greatly appreciated.
MySQL prior to version 5.1 adds/removes indices on InnoDB tables by building temporary tables. It's very slow and expensive. The only way around this is to either upgrade MySQL to 5.1, or to dump the table with e.g. mysqldump, drop it, recreate it with the new indices, and then restore it from the dump.
You can't shrink ibdata1 at all. Your only solution is to rebuild from scratch. It is possible to configure MySQL so it doesn't use one giant ibdata1 file for all the databases - read that answer and it will explain how to configure MySQL/InnoDB so this doesn't happen again, and also how to safely dump and recreate all your databases.
Ultimately, you probably want to (see the sketch after this list):
Make a complete dump of your database
Upgrade to MySQL 5.1 or newer
Turn on InnoDB one-file-per-table mode
Restore the dump.
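A minimal sketch of the SQL side of that plan, assuming the dump and restore themselves are done with mysqldump or a similar tool:

-- after the upgrade, confirm file-per-table is on before restoring
-- (set innodb_file_per_table = 1 in my.cnf and restart if it is not)
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- once each table lives in its own .ibd file, rebuilding a table can
-- actually return space to the operating system (db.tbl is a placeholder)
OPTIMIZE TABLE db.tbl;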
I have a few very large MySQL tables on an Amazon standard EBS 1TB volume (the file-per-table flag is ON and each .ibd file is about 150 GB). I need to move all these tables from database db1 to database db2. Along with this, I would also like to move the tables out to a different Amazon volume (which I think is considered a different partition/file system, even if the file-system type is the same). The reason I am moving to another volume is so I can get another 1TB of space.
Things I have tried:
RENAME TABLE db1.tbl1 TO db2.tbl1 does not help because I cannot move the table out to a different volume. I cannot mount a volume at db2 because then it is considered a different file system and MySQL fails with an error:
"Invalid cross-device link" error 18
Created a stub db2.tbl1, stopped MySQL, deleted db2's tbl1 and copied over db1's tbl1.ibd. Doesn't work (is the database information buried in the .ibd?).
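From what I have read, a raw .ibd copy can only be made to work via transportable tablespaces, which require MySQL 5.6+ and so may not apply to my setup. The supported sequence is roughly:

CREATE TABLE db2.tbl1 LIKE db1.tbl1;        -- matching empty table on the target
ALTER TABLE db2.tbl1 DISCARD TABLESPACE;    -- throws away its empty .ibd

USE db1;
FLUSH TABLES tbl1 FOR EXPORT;               -- quiesces the table and writes tbl1.cfg
-- copy db1/tbl1.ibd and db1/tbl1.cfg into the db2 directory at this point
UNLOCK TABLES;

ALTER TABLE db2.tbl1 IMPORT TABLESPACE;     -- adopts the copied file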
I do not want to try the obvious mysqldump import or SELECT ... INTO OUTFILE / LOAD DATA INFILE, because each table takes a day and a half to move even with most optimizations (foreign-key checks off, etc.). If I take the indexes out before the import, re-indexing takes long and the overall time is still too long.
Any suggestions would be much appreciated.
Usually what I would suggest in this case is to create an EC2 snapshot of the volume and then create your larger volume from that snapshot.
You'll need to resize the partition afterwards.
As a side note, if your database is that large, EBS might be a major bottleneck. You're better off with locally attached storage, but unfortunately the migration process is a bit different.
You might want to use Percona xtrabackup for this:
https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html
I'm trying to convert a 10-million-row MySQL MyISAM table into InnoDB.
I tried ALTER TABLE, but that made my server get stuck, so I killed MySQL manually. What is the recommended way to do this?
Options I've thought about:
1. Making a new table which is InnoDB and inserting parts of the data each time.
2. Dumping the table into a text file and then loading it back with LOAD DATA INFILE
3. Trying again and just keeping the server unresponsive until it finishes (I tried for 2 hours, and the server is a production server, so I prefer to keep it running)
4. Duplicating the table, removing its indexes, then converting, and then adding the indexes back
Changing the engine of the table requires a rewrite of the table, and that's why the table is unavailable for so long. Removing indexes, then converting, and adding the indexes back may speed up the initial conversion, but adding an index creates a read lock on your table, so the effect in the end will be the same.

Making a new table and transferring the data is the way to go. Usually this is done in two parts: first copy the records, then replay any changes that were made while copying them. If you can afford disabling inserts/updates on the table while leaving reads enabled, this is not a problem. If not, there are several possible solutions. One of them is to use Facebook's online schema change tool. Another option is to set the application to write to both tables while migrating the records, then switch over to the new table only. This depends on the application code, and the crucial part is handling unique keys / duplicates, since in the old table you may update a record while in the new one you need to insert it (here the transaction isolation level may also play a crucial role; lower it as much as you can).

The "classic" way is to use replication, which, as far as I know, is also done in two parts: you record the master position, import a dump of the database on the second server, then start it as a slave to catch up with the changes.
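A minimal sketch of the copy-and-swap approach (old_table and new_table are placeholder names, id is assumed to be an integer primary key, and the change-replay step is left out):

CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ENGINE=InnoDB;

-- copy in chunks so no single statement holds a long read lock on the source
INSERT INTO new_table SELECT * FROM old_table WHERE id BETWEEN 1      AND 100000;
INSERT INTO new_table SELECT * FROM old_table WHERE id BETWEEN 100001 AND 200000;
-- ... continue through the full id range, then replay any rows that
-- changed in old_table while the copy was running

-- atomic swap once new_table has caught up
RENAME TABLE old_table TO old_table_myisam, new_table TO old_table;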
Have you tried ordering your data by the PK first? e.g.:
ALTER TABLE tablename ORDER BY PK_column;
should speed up the conversion.