I have an InnoDB table used to pre-process some data before rendering a web page. It is a very bulky table, and I do not need to keep some of the old records in it.
To keep the database size slim, I need to delete the data, but only partially.
TRUNCATE does not work for this, but DELETE does.
Will the database create bigger log files because of this?
Is there a way to delete the data without producing a binary log?
You can use the MySQL option --binlog-ignore-db so that a particular database is not written to the binary log, and create your bulky table in that ignored database.
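For example, if the bulky table lived in a database named scratch (a hypothetical name), the option could go into my.cnf roughly like this:

# my.cnf -- do not write statements for the "scratch" database to the binary log
[mysqld]
binlog-ignore-db = scratch

Keep in mind that with statement-based logging this filter is applied based on the default database selected with USE, so run the deletes with that database as the default.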
Also, you can use CREATE TEMPORARY TABLE, so that the temporary table is dropped automatically when the session closes.
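A minimal sketch of the temporary-table approach (the table and column names below are made up):

-- exists only for this session and vanishes when the connection closes
CREATE TEMPORARY TABLE page_prerender (
    page_id INT UNSIGNED NOT NULL,
    payload MEDIUMBLOB,
    PRIMARY KEY (page_id)
) ENGINE=InnoDB;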
Long story short: I had a very large, corrupt InnoDB table and tried a number of things to rebuild/recover it (I was finally successful). However, one of the things I tried was building a new table with the MyISAM engine while forced InnoDB recovery was on; there was another crash along the way, and I'm left with a .TMD file for that table.
Just curious whether I'm safe to delete this file? The table does not show up anywhere in the database (it is not in SHOW TABLES, DROP TABLE does nothing, etc.). At this point it's just taking up disk space in my data directory.
The .TMD file is an intermediate data file for a table that needs to recreate its data file.
So you can remove it, because it's normally used as a temporary file, but you could rename the file and check what happens first, just in case.
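For example, something along these lines from the shell (the path and file name are assumptions for illustration):

# move the orphaned file aside first instead of deleting it outright
mv /var/lib/mysql/mydb/mytable.TMD /var/lib/mysql/mydb/mytable.TMD.bak
# once you are confident nothing depends on it, remove the backup
rm /var/lib/mysql/mydb/mytable.TMD.bak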
I have a 12 GB table full of pictures. I'm trying to rename the blob column that holds the data, and it is taking forever. Can someone give me a blow-by-blow account of why it takes so long to rename the column? I would have thought this operation would be pretty quick, no matter the size of the table.
EDIT: The query I ran is as follows
alter table `rails_production`.`pictures` change `data` `image_file_data` mediumblob NULL
It appears that most of the time is spent waiting for MySQL to make a temporary copy of the pictures table, which, since the table is very large, takes a while.
Moving the picture storage from the database to the filesystem is on the list of things to do.
EDIT2: MySQL server version: 5.0.51a-24+lenny2 (Debian)
I can't give you the blow-by-blow (feature request #34354 would help, except that it probably wouldn't be back-ported to MySQL 5.0), but the extra time is due to the fact that an ALTER ... CHANGE may change the type of the column (and column attributes, if any), which necessitates converting the values stored in the column and other checks. MySQL 5.0 doesn't include optimizations for when the new type and attributes are the same as the old. From the documentation for ALTER under MySQL 5.0:
In most cases, ALTER TABLE works by making a temporary copy of the original table. The alteration is performed on the copy, and then the original table is deleted and the new one is renamed. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table are stalled until the new table is ready, and then are automatically redirected to the new table without any failed updates.
[...]
If you use any option to ALTER TABLE other than RENAME, MySQL always creates a temporary table, even if the data wouldn't strictly need to be copied (such as when you change the name of a column).
Under 5.1, ALTER has some additional optimizations:
In some cases, no temporary table is necessary:
Alterations that modify only table metadata and not table data can be made immediately by altering the table's .frm file and not touching table contents. The following changes are fast alterations that can be made this way:
Renaming a column, except for the InnoDB storage engine.
[...]
Because MySQL will rebuild the entire table when you make schema changes.
This is done because in some cases it's the only way to apply the change, and it makes the rebuild much simpler for the server anyway.
Yes, MySQL makes a temporary copy of the table, and I don't think there's an easy way around that. You should really think about storing the pictures on the filesystem and keeping only the paths in MySQL. That's the only way to speed it up, I guess.
In my production database, the Alerts-related tables were created with a default charset of "latin"; because of this we get an error when we try
to insert Japanese characters into them. We need to change the default charset of the tables and columns to UTF-8.
As these tables hold a huge amount of data, the ALTER command might take a very long time (it took 5 hours in my local DB with the same amount of data)
and lock the table, which could cause data loss. Can we plan a mechanism to change the charset to UTF-8 without data loss?
Which is the better way to change the charset for huge data tables?
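For reference, the statement in question would be along these lines (the table name alerts is an assumption); CONVERT TO CHARACTER SET rewrites every row, which is why it is so slow on large tables:

ALTER TABLE alerts CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;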
I found this on mysql manual http://dev.mysql.com/doc/refman/5.1/en/alter-table.html:
In most cases, ALTER TABLE makes a temporary copy of the original table. MySQL waits for other operations that are modifying the table, then proceeds. It incorporates the alteration into the copy, deletes the original table, and renames the new one. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table that begin after the ALTER TABLE operation begins are stalled until the new table is ready, then are automatically redirected to the new table without any failed updates.
So yes, it's tricky to minimize downtime while doing this. It depends on the usage profile of your table: are there more reads or writes?
One approach I can think of is to use some sort of replication: create a new Alerts table that uses UTF-8, and find a way to replicate the original table into the new one without affecting availability or throughput. When the replication is complete (or close enough), switch the tables by renaming them.
Of course, this is easier said than done; it needs more research to see if it's even possible.
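If that route works out, the final switch could be done as a single atomic rename, for example (the table names here are assumptions):

-- swap the old and new tables in one atomic step
RENAME TABLE alerts TO alerts_old, alerts_utf8 TO alerts;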
You may take a look at the Percona Toolkit::online-schema-change tool:
pt-online-schema-change
It does exactly this ("alters a table’s structure without blocking reads or writes"), with some limitations (only InnoDB tables, etc.) and risks involved.
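A rough sketch of how it might be invoked for a charset change (the database and table names are assumptions, and a --dry-run pass is advisable before --execute):

pt-online-schema-change --alter "CONVERT TO CHARACTER SET utf8" D=mydb,t=alerts --execute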
Create a replicated copy of your database on another machine or instance. Once replication is set up, issue a STOP SLAVE command and alter the table on the replica. If you have more than one table, consider issuing START SLAVE again between each conversion to keep the two databases synchronised (if you do not, catching up may take longer). When you complete the conversion, the replicated copy can replace your old production database and you can remove the old one. This is the way I found to minimize downtime.
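On the replica, the sequence would look roughly like this for each table (the table name is an assumption):

STOP SLAVE;
ALTER TABLE alerts CONVERT TO CHARACTER SET utf8;
START SLAVE;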
When I run the following command, the output only consists of the create syntax for 'mytable', but none of the data:
mysqldump --single-transaction -umylogin -p mydatabase mytable > dump.sql
If I drop --single-transaction, I get an error as I can't lock the tables.
If I drop 'mytable' (and dump the whole DB), it looks like it's creating the INSERT statements, but the entire DB is huge.
Is there a way I can dump the table -- schema & data -- without having to lock the table?
(I also tried INTO OUTFILE, but lacked access for that as well.)
The answer might depend on the database engine that you are using for your tables. InnoDB has some support for non-locking backups. Given your comments about permissions, you might lack the permissions required for that.
The best option that comes to mind would be to create a duplicate table without the indices. Copy all of the data from the table you would like to back up over to the new table. If you write the query in a way that can easily page through the data, you can control the duration of the locks. I have found that 10,000 rows per iteration is usually pretty darn quick. Once you have this query, you can just keep running it until all rows are copied.
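A sketch of that batched copy, assuming the table has an auto-increment primary key named id (the table and column names here are made up):

-- one-time setup; secondary indexes can be dropped on the copy to speed up inserts
CREATE TABLE mytable_copy LIKE mytable;
-- copy one batch of 10,000 rows; repeat with the next id range until no rows remain
INSERT INTO mytable_copy SELECT * FROM mytable WHERE id > 0 AND id <= 10000;
INSERT INTO mytable_copy SELECT * FROM mytable WHERE id > 10000 AND id <= 20000;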
Now that you have a duplicate, you can either drop it, truncate it, or keep it around and try to update it with the latest data and leverage it as a backup source.
Jacob