ERROR 1878 (HY000): Temporary file write failure - mysql

I am executing a query
ALTER TABLE message ADD COLUMN syncid int(10) NOT NULL DEFAULT 0;
MySQL returned error:
ERROR 1878 (HY000): Temporary file write failure.
message table info:
engine type: InnoDB
rows: 15786772
index length: 1006.89 MB
data length: 11.25 GB
How to fix it?

MySQL implements ALTER TABLE as a table re-creation, so two copies of the table exist on the system at some stage during the process. You will need over 12 GB of free space for this operation.
Free some space. Alternatively, set your server to use a different temporary directory, where there is enough space.
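To check which directory is currently in use (note that tmpdir is not a dynamic variable, so changing it means setting it under [mysqld] in your configuration file and restarting the server):
SHOW VARIABLES LIKE 'tmpdir';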
Alternative to the alternative (the WHILE might need to be wrapped in a stored procedure):
create a new table (temp_table) with the new structure
transfer data in small batches from original_table into temp_table
drop original_table and rename temp_table
-- useful only if concurrent access is allowed during migration
LOCK TABLES original_table WRITE, temp_table WRITE;
SELECT COUNT(*) INTO @anythingleft FROM original_table;
WHILE @anythingleft > 0 DO
-- transfer data
INSERT INTO temp_table
SELECT
original_table.old_stuff,
"new stuff"
FROM original_table
ORDER BY any_sortable_column_with_unique_constraint -- very important!
LIMIT 1000; -- batch size, adjust to your situation
DELETE FROM original_table
ORDER BY any_sortable_column_with_unique_constraint
LIMIT 1000; -- ORDER BY and LIMIT clauses MUST be exactly the same as above
SELECT COUNT(*) INTO @anythingleft FROM original_table;
END WHILE;
-- delete, rename
DROP TABLE original_table;
UNLOCK TABLES;
RENAME TABLE temp_table TO original_table;
If your table uses InnoDB, a more elaborate solution is possible with SELECT ... FOR UPDATE instead of table locks, but I trust you get the idea.
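Since WHILE only works inside stored programs, here is a minimal sketch of such a wrapper, assuming id is the unique sortable column (also note that LOCK TABLES is not allowed inside stored programs, so take the locks in the calling session if you need them):
DELIMITER //
CREATE PROCEDURE migrate_in_batches()
BEGIN
  DECLARE anythingleft INT;
  SELECT COUNT(*) INTO anythingleft FROM original_table;
  WHILE anythingleft > 0 DO
    -- move one batch, lowest id first
    INSERT INTO temp_table
    SELECT old_stuff, 'new stuff' FROM original_table
    ORDER BY id LIMIT 1000;
    DELETE FROM original_table ORDER BY id LIMIT 1000;
    SELECT COUNT(*) INTO anythingleft FROM original_table;
  END WHILE;
END //
DELIMITER ;
CALL migrate_in_batches();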

Sorry for the late answer and for digging up this old topic, but the following tools can help you with that:
pt-online-schema-change
github/gh-ost
Both tools recreate the table in the fashion that @RandomSeed proposed, but in a simpler way.
However, please ensure that there is enough space on the file system. These tools don't need extra space in the temporary folder, which matters if your temporary folder is mounted on a separate drive or RAM disk.

Related

Mysql Batch insert around 11 GB data from one table to another [duplicate]

Is there a more efficient, less laborious way of copying all records from one table to another than doing this:
INSERT INTO product_backup SELECT * FROM product
Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns in them. I'd like to point out this is not my database design, I have inherited a legacy system.
There's just one thing you're missing, especially if you're using InnoDB: you want to explicitly add an ORDER BY clause to your SELECT statement, to ensure you're inserting rows in primary key (clustered index) order:
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id
Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.
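For instance, a sketch assuming the backup table has a secondary index idx_name on a name column (both names are placeholders):
ALTER TABLE product_backup DROP INDEX idx_name;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
ALTER TABLE product_backup ADD INDEX idx_name (name);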
Finally, if you are using InnoDB, reduce the number of row locks that are required and just explicitly lock both tables:
LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;
The locking stuff probably won't make a huge difference, as row locking is very fast (though not as fast as table locks), but since you asked.
mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql
This will take a dump of the specified table, with a DROP option so the existing table is deleted when you import it. Then do:
mysql db_name < filepath/file_name.sql
DROP the destination table:
DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
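Keep in mind that CREATE TABLE ... AS SELECT does not copy indexes or the primary key. A sketch that preserves the full table structure instead:
CREATE TABLE DESTINATION_TABLE LIKE SOURCE_TABLE;
INSERT INTO DESTINATION_TABLE SELECT * FROM SOURCE_TABLE;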
I don't think this will be worth it for a 50k-row table, but:
If you have a database dump you can reload the table from it. Since you want to load one table into another, you could change the table name in the dump with a sed command.
Some hints here:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
An alternative (depending on your design) would be to use triggers on the original table inserts so that the duplicated table gets the data as well.
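A minimal sketch of that trigger idea, reusing the product tables from this question and listing only the first two of the 31 columns (extend the column lists in practice):
CREATE TRIGGER product_backup_sync
AFTER INSERT ON product
FOR EACH ROW
INSERT INTO product_backup (product_id, name)
VALUES (NEW.product_id, NEW.name);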
And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration or in a daily dump master/load slave fashion.

Fastest way to replace data in a table from a temporary table in MySQL

I have a need to "update" some table data I receive from an external source (every time, I receive "all" the data, with some fields for some records updated).
There's no unique field or combination of fields, so I figured the best way would be to wipe out all the data from the DB each time and write it all (now updated) in again. There are up to 1,000 records (there will never be more than that), with about 15 short fields each: text, numbers, datetime. And I'm writing to a remote DB (so it's slow).
Currently I'm doing:
delete from `table` where `date_dt` > ?
and then for each row
INSERT INTO `table` ( `field_0`,`field_1`,... ) VALUES (?,?,...)
It's not only slow, but it's possible that the end user may not see the complete data while I'm still inserting.
I figured I could do:
CREATE TEMPORARY TABLE `temp_table` ( ... ); -- same structure as in main table
INSERT INTO `temp_table` ( `field_0`,`field_1`,... ) VALUES (?,?,...) -- repeat 1000x
START TRANSACTION;
DELETE FROM `table`;
INSERT INTO `table` SELECT * FROM `temp_table`;
DROP TEMPORARY TABLE temp_table;
COMMIT;
Does this make any sense? What is a better way of solving this?
The speed of filling up the temp table with data is not crucial, but filling the main table with data is (so users don't see incomplete data, or the period of time they do is minimal).
mysqlimport --delete will truncate the table first, and then load your external data from a CSV file. It runs many times faster than doing INSERT one row at a time.
See https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html
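mysqlimport is essentially a command-line interface to the LOAD DATA statement, so the rough SQL equivalent of the --delete variant looks like this (file path and delimiters are assumptions):
DELETE FROM `table`;
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE `table`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';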
I did a presentation in April 2017 about performance of bulk data loads for MySQL:
https://www.slideshare.net/billkarwin/load-data-fast
P.S.: Don't use the temp table solution if you have a MySQL replication environment. This is a well-known way of breaking replication. If the slave restarts in between your creation of the temp table and the INSERT...SELECT that reads from the temp table, then the slave will find the temp table is gone, and this will result in an error and stop replication. This might seem unlikely, but it does happen eventually.

Optimize mySql for faster alter table add column

I have a table that has 170,002,225 rows with about 35 columns and two indexes. I want to add a column. The alter table command took about 10 hours. The processor didn't seem busy during that time, nor were there excessive IO waits. This is on a 4-way high-performance box with tons of memory.
Is this the best I can do? Is there something I can look at to optimize the add column in tuning of the db?
I faced a very similar situation in the past, and I improved the performance of the operation this way (a quick sketch follows the list):
create a new table (using the structure of the current table) with the new column(s) included
execute an INSERT INTO new_table (column1, ..., columnN) SELECT column1, ..., columnN FROM current_table;
rename the current table
rename the new table using the name of the current table
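A quick sketch of those steps, with placeholder names and the new column taking its default during the copy:
CREATE TABLE new_table LIKE current_table;
ALTER TABLE new_table ADD COLUMN new_col INT NOT NULL DEFAULT 0;
INSERT INTO new_table (column1 /* , ..., columnN */)
SELECT column1 /* , ..., columnN */ FROM current_table;
RENAME TABLE current_table TO old_table, new_table TO current_table;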
ALTER TABLE in MySQL is actually going to create a new table with the new schema, then re-INSERT all the data and delete the old table. You might save some time by creating the new table, loading the data, and then renaming the table.
From "High Performance MySQL book" (the percona guys):
The usual trick for loading MyISAM table efficiently is to disable keys, load the data and renalbe the keys:
mysql> ALTER TABLE test.load_data DISABLE KEYS;
-- load data
mysql> ALTER TABLE test.load_data ENABLE KEYS;
Well, I would recommend using the latest Percona MySQL builds. Plus, there is the following note in the MySQL manual:
In other cases, MySQL creates a temporary table, even if the data wouldn't strictly need to be copied. For MyISAM tables, you can speed up the index re-creation operation (which is the slowest part of the alteration process) by setting the myisam_sort_buffer_size system variable to a high value.
You can do ALTER TABLE ... DISABLE KEYS first, then add the column, and then ALTER TABLE ... ENABLE KEYS. I don't see anything else that can be done here.
BTW, can't you go with MongoDB? It doesn't rebuild anything when you add a column.
Maybe you can remove the indexes before altering the table, since building the indexes is what takes most of the time?
Combining some of the comments on the other answers, this was the solution that worked for me (MySQL 5.6):
create table mytablenew like mytable;
alter table mytablenew add column col4a varchar(12) not null after col4;
alter table mytablenew drop index index1, drop index index2,...drop index indexN;
insert into mytablenew (col1,col2,...colN) select col1,col2,...colN from mytable;
alter table mytablenew add index index1 (col1), add index index2 (col2),...add index indexN (colN);
rename table mytable to mytableold, mytablenew to mytable;
On a 75M row table, dropping the indexes before the insert caused the query to complete in 24 minutes rather than 43 minutes.
Other answers/comments have insert into mytablenew (col1) select (col1) from mytable, but this results in ERROR 1241 (21000): Operand should contain 1 column(s) if you have the parenthesis in the select query.
Other answers/comments have insert into mytablenew select * from mytable;, but this results in ERROR 1136 (21S01): Column count doesn't match value count at row 1 if you've already added a column.

MySQL temporary vs memory table in stored procedures

What's is better to use in a stored procedure: a temporary table or a memory table?
The table is used to stored summary data for reports.
Are there any trade offs that developers should be aware off?
CREATE TEMPORARY TABLE t (avg DOUBLE);
or
CREATE TABLE t (avg DOUBLE) ENGINE=MEMORY;
Why is this restricted to just the two options? You can do:
CREATE TEMPORARY TABLE t (avg double) ENGINE=MEMORY;
Which works, although I'm not sure how to check if the memory engine is actually being used here.
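For what it's worth, SHOW CREATE TABLE works on temporary tables too and prints the engine in use:
SHOW CREATE TABLE t;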
Of the two, I'd use a temporary table for reports.
A memory table holds data across user sessions and connections, so you'd have to truncate it every time to make sure you aren't using data from someone else. Assuming you put in what's necessary to maintain a memory table for your needs, it's fine; the temp table is a little safer from a maintenance perspective.
A temporary table will only exist for the duration of your session. A table declared with Engine=Memory will persist across user sessions / connections but will only exist in the lifetime of the MySQL instance. So if MySQL gets restarted the table goes away.
In MySQL, temporary tables are seriously crippled:
http://dev.mysql.com/doc/refman/5.6/en/temporary-table-problems.html
You cannot refer to a TEMPORARY table more than once in the same query.
For example, the following does not work:
mysql> SELECT * FROM temp_table, temp_table AS t2;
ERROR 1137: Can't reopen table: 'temp_table'
I just wanted to point out that, in 2021, using MariaDB 10.3.27, the limitation @biziclop described no longer applies; this is possible:
CREATE TEMPORARY TABLE tmp1 AS
SELECT * FROM products LIMIT 10;
SELECT * FROM tmp1, tmp1 AS t2;
(I just tested it)

Quickest way to delete enormous MySQL table

I have an enormous MySQL (InnoDB) database with millions of rows in the sessions table that were created by an unrelated, malfunctioning crawler running on the same server as ours. Unfortunately, I have to fix the mess now.
If I try to truncate table sessions; it seems to take an inordinately long time (upwards of 30 minutes). I don't care about the data; I just want to have the table wiped out as quickly as possible. Is there a quicker way, or will I have to just stick it out overnight?
(As this turned up high in Google's results, I thought a little more instruction might be handy.)
MySQL has a convenient way to create empty tables like existing tables, and an atomic table rename command. Together, this is a fast way to clear out data:
CREATE TABLE new_foo LIKE foo;
RENAME TABLE foo TO old_foo, new_foo TO foo;
DROP TABLE old_foo;
Done
The quickest way is to use DROP TABLE to drop the table completely and recreate it using the same definition. If you have no foreign key constraints on the table then you should do that.
If you're using MySQL version greater than 5.0.3, this will happen automatically with a TRUNCATE. You might get some useful information out of the manual as well, it describes how a TRUNCATE works with FK constraints. http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html
EDIT: TRUNCATE is not the same as a drop or a DELETE FROM. For those that are confused about the differences, please check the manual link above. TRUNCATE will act the same as a drop if it can (if there are no FK's), otherwise it acts like a DELETE FROM with no where clause.
EDIT: If you have a large table, your MariaDB/MySQL is running with a binlog_format as ROW and you execute a DELETE without a predicate/WHERE clause, you are going to have issues to keep up the replication or even, to keep your Galera nodes running without hitting a flow control state. Also, binary logs can get your disk full. Be careful.
The best way I have found of doing this with MySQL is:
DELETE from table_name LIMIT 1000;
Or 10,000 (depending on how fast it happens).
Put that in a loop until all the rows are deleted.
Please do try this as it will actually work. It will take some time, but it will work.
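A minimal sketch of that loop as a stored procedure, using the sessions table from the question and ROW_COUNT() to detect when nothing is left (the batch size is a placeholder):
DELIMITER //
CREATE PROCEDURE purge_sessions()
BEGIN
  REPEAT
    DELETE FROM sessions LIMIT 10000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END //
DELIMITER ;
CALL purge_sessions();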
Couldn't you grab the schema, drop the table, and recreate it?
DROP TABLE should be the fastest way to get rid of it.
Have you tried to use "drop"? I've used it on tables over 20GB and it always completes in seconds.
If you just want to get rid of the table altogether, why not simply drop it?
Truncate is fast, usually on the order of seconds or less. If it took 30 minutes, you probably had a case of some foreign keys referencing the table you were truncating. There may also be locking issues involved.
Truncate is effectively as efficient as one can empty a table, but you may have to remove the foreign key references unless you want those tables scrubbed as well.
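To see whether foreign keys are the culprit, a query like this against information_schema lists the tables referencing yours (the schema name is an assumption):
SELECT table_name, constraint_name
FROM information_schema.KEY_COLUMN_USAGE
WHERE referenced_table_name = 'sessions'
  AND table_schema = 'your_db';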
We had these issues. Since Rails 2.x and the cookie store, we no longer use the database as a session store. However, dropping the table is a decent solution. You may want to consider stopping the MySQL service, temporarily disabling logging, starting things up in safe mode, and then doing your drop/create. When done, turn your logging back on.
I'm not sure why it's taking so long. But perhaps try a rename, and recreate a blank table. Then you can drop the "extra" table without worrying how long it takes.
searlea's answer is nice, but as stated in the comments, you lose the foreign keys along the way.
This solution is similar: the truncate is executed within a second, but you keep the foreign keys.
The trick is that we disable and re-enable the FK checks.
SET FOREIGN_KEY_CHECKS=0;
CREATE TABLE NewFoo LIKE Foo;
insert into NewFoo SELECT * from Foo where What_You_Want_To_Keep;
truncate table Foo;
insert into Foo SELECT * from NewFoo;
SET FOREIGN_KEY_CHECKS=1;
Extended answer - delete all but some rows
My problem was: because of a crazy script, my table was filled with 7,000,000 junk rows. I needed to delete 99% of the data in this table, which is why I needed to copy what I wanted to keep into a tmp table before deleting.
The Foo rows I needed to keep depended on other tables, which have foreign keys and indexes.
Something like this:
insert into NewFoo SELECT * from Foo where ID in (
SELECT distinct FooID from TableA
union SELECT distinct FooID from TableB
union SELECT distinct FooID from TableC
);
But this query was always timing out after 1 hour.
So I had to do it like this:
CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT distinct FooID from TableA);
insert into tmpFooIDS SELECT distinct FooID from TableB;
insert into tmpFooIDS SELECT distinct FooID from TableC;
insert into NewFoo SELECT * from Foo where ID in (select ID from tmpFooIDS);
In theory, because the indexes were set up correctly, I think both ways of populating NewFoo should have performed the same, but in practice they didn't.
This is why, in some cases, you could do it like this:
SET FOREIGN_KEY_CHECKS=0;
CREATE TABLE NewFoo LIKE Foo;
-- Alternative way of keeping some data.
CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT ID from Foo where What_You_Want_To_Keep);
insert into tmpFooIDS SELECT ID from Foo left join Bar on Your_Join_Condition where OtherStuff_You_Want_To_Keep_Using_Bar;
insert into NewFoo SELECT * from Foo where ID in (select ID from tmpFooIDS);
truncate table Foo;
insert into Foo SELECT * from NewFoo;
SET FOREIGN_KEY_CHECKS=1;