I have a very large (900GB) table in MySQL that I need to perform an ALTER on, but I lack the temp space necessary on my data partition to hold a temporary copy of the table (or an extra copy for an online schema change). I have a newly formatted partition just mounted on my box; is there a way I can have the ALTER write the new table's schema and data to the new partition?
Hi, I have a MySQL RDS database server, and one of my tables has 3 million+ records. When I try to add a new column, it always fails with the following error:
Query: ALTER TABLE user_notifications ADD program_id int(11);
Error: Temporary file write failure.
My RDS DB instance is db.t2.medium.
ALTER TABLE is a table re-creation, so two copies of the table will exist on the system at some stage during the process.
So you will need more free space than the table's data length for this operation.
You can check the data length in the table info.
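For example, something along these lines against information_schema reports the data plus index size per table ('your_database' is a placeholder):
SELECT table_name, ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY (data_length + index_length) DESC;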
There is an alternative solution, in which you create a new table with the desired schema and move the data from the old table to the new table in chunks, as sketched below.
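A minimal sketch of that approach, assuming user_notifications has an auto-increment primary key named id and that program_id may be NULL (the chunk boundaries are illustrative):
CREATE TABLE user_notifications_new LIKE user_notifications;
ALTER TABLE user_notifications_new ADD program_id int(11);
-- copy one range at a time; repeat with the next range until the maximum id is reached
-- (u.* supplies the old columns; the trailing NULL fills the new program_id)
INSERT INTO user_notifications_new
SELECT u.*, NULL FROM user_notifications u WHERE u.id BETWEEN 1 AND 100000;
-- when all chunks are copied, swap the tables in one atomic statement
RENAME TABLE user_notifications TO user_notifications_old,
             user_notifications_new TO user_notifications;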
MySQL will re-create the table when running an ALTER, so you need enough free space on the disk for another copy of the table.
I suspect the table is large, as it has 3 million records, and your disk does not have sufficient space.
This article shows how to determine how much disk space your tables are using.
I have quite a big table in MySQL 5.5, ~200M rows, and I want to add an index (B-tree) to one of the columns in this table. The column is of type integer and contains a wide distribution of integers.
My question is: when is the B-tree computed?
When I execute the simple create index query:
ALTER TABLE bigtable ADD INDEX (column3);
It returns immediately. Is the computation of the B-tree happening in the background? I can't imagine that MySQL is that fast at creating a B-tree over ~200M values with a wide distribution of integers.
Short answer: Yes.
Long answer: A look at the MySQL documentation for ALTER TABLE reveals the following:
In most cases, ALTER TABLE makes a temporary copy of the original table. MySQL waits for other operations that are modifying the table, then proceeds. It incorporates the alteration into the copy, deletes the original table, and renames the new one. While ALTER TABLE is executing, the original table is readable by other sessions (with the exception noted shortly). Updates and writes to the table that begin after the ALTER TABLE operation begins are stalled until the new table is ready, then are automatically redirected to the new table without any failed updates. The temporary copy of the original table is created in the database directory of the new table. This can differ from the database directory of the original table for ALTER TABLE operations that rename the table to a different database.
So, when you create your index, the index is built on a temporary copy of the table, which is then imported in place of the now-dropped original table when it completes.
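If you want to watch this happen, two quick checks (using the bigtable name from the question) are:
SHOW PROCESSLIST;         -- a rebuild in progress shows a state like 'copy to tmp table'
SHOW INDEX FROM bigtable; -- lists the new index once the ALTER has completed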
I'm aware that copying data will only copy the data and no constraints; I'll lose my PK and FK, etc.
But what if I were to copy the data back into a table that already has its primary and foreign keys? I want to remodel my DB in Workbench, but I don't want to lose the data I have put into my tables. So I was thinking of making a copy, deleting the original, remodelling the DB, forward engineering it, and then copying the data back into the tables. Will this work?
Not sure if I got your question right, but it looks like you only need a new table with the data to mess with. Have you tried using CREATE TABLE ... AS SELECT?
http://dev.mysql.com/doc/refman/5.0/en/create-table-select.html
CREATE TABLE t AS SELECT * FROM old_t;
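Note that CREATE TABLE ... AS SELECT copies only the column data, not indexes or keys. If you want the copy to keep the same index definitions, a variant (same placeholder names as above) is:
CREATE TABLE t LIKE old_t;         -- copies columns, indexes, and the primary key (but not foreign keys)
INSERT INTO t SELECT * FROM old_t;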
I have a MySQL database that is up to about 17 GB in size and has 38 million entries. At the moment, I need to both increase the size of one column (varchar 40 to varchar 80) and add more columns.
Many of the fields are indexed, including the one that I need to change. It is part of a unique pair that is necessary for the applications to work. When I attempted to just make the change yesterday, the query ran for almost four hours without finishing, at which point I decided to cut the outage short and just bring the service back up.
What is the most efficient way to make changes to something of this size?
Many of these entries are also old, and if there were a good way to shard off old entries while still keeping them available, that might help with this problem by making the table a much more manageable size.
You have some choices.
In any case you should take a backup before you do this stuff.
One possibility is to take your service offline and do it in place, as you have tried. If you do that, you should disable key checks and constraints first:
ALTER TABLE bigtable DISABLE KEYS;   -- defer non-unique index maintenance until ENABLE KEYS
SET FOREIGN_KEY_CHECKS=0;            -- skip foreign key validation for this session
ALTER TABLE (whatever);
ALTER TABLE (whatever else);
...
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE bigtable ENABLE KEYS;    -- rebuild the deferred indexes in one pass
This will allow the ALTER TABLE operation to go faster. It will regenerate the indexes all at once when you do ENABLE KEYS.
Another possibility is to create a new table with the new schema you want, then disable the keys on the new table, then do as #Bader suggested and insert the contents of the old table.
After your new table is built, you will re-enable the keys on it, then rename the old table to some name like "old_bigtable" and rename the new table to "bigtable" (see the sketch below).
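Both renames can go in a single statement, which MySQL performs atomically, so clients never see a missing table (this assumes the new table was created as new_bigtable):
RENAME TABLE bigtable TO old_bigtable, new_bigtable TO bigtable;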
It's possible that you can keep your service online while you're populating the new table. But that might work poorly.
A third possibility is to dump your giant table to a flat file and then load it into a new table with the new layout. That is pretty much like the second possibility, except that you get a table backup for free. You can make this go pretty fast with SELECT ... INTO OUTFILE and LOAD DATA INFILE. You'll need access to your server machine's file system to do this.
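A sketch of the dump and reload, assuming the MySQL server can write to /tmp and the new table is named new_bigtable (both are placeholders):
SELECT * INTO OUTFILE '/tmp/bigtable.dump' FROM bigtable;
LOAD DATA INFILE '/tmp/bigtable.dump' INTO TABLE new_bigtable;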
In all cases, disable, then re-enable, the constraints and keys to get things to go fast.
Create a new table with the new structure you want, under a different name, for example NewTable.
Then insert data into this new table from the old table using the following query:
INSERT INTO NewTable (field1, field2, ...) SELECT field1, field2, ... FROM OldTable;
After this is done, you can drop the old table and rename the new table to the original name:
DROP TABLE `OldTable`;
RENAME TABLE `NewTable` TO `OldTable`;
I have tried this approach on a very large table and it's much much faster than altering the table.
With MySQL 5.1, and again with 5.5, certain ALTER statements were enhanced to modify just the structure without rewriting the entire table (http://dev.mysql.com/doc/refman/5.5/en/alter-table.html - search for "in-place"). Whether this is available varies by the type of change you are making and the engine in use; the most value comes from the InnoDB Plugin. In the case of your specific changes, though, the entire table would be rewritten.
When we encounter these issues, we typically try to leverage replica databases. As long as you are adding and not removing, you can run your DDL against the replica first and then schedule a brief outage to promote the replica to the master role. If you happen to be on RDS, this is even one of their suggested uses for replica instances: http://aws.amazon.com/about-aws/whats-new/2012/10/11/amazon-rds-mysql-rr-promotion/.
Some other alternatives include:
Selecting a subset of records out into a new table with the desired structure (use INTO OUTFILE to avoid a table lock). Once that is complete, you can schedule a maintenance window and REPLACE INTO or UPDATE any records that have changed in the origin table since the initial data copy. Once the update is complete, a RENAME TABLE... of both tables wraps the changes up (see the sketch after this list).
Using a tool like Percona's pt-online-schema-change: http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html. This tool works with triggers, so if you already have triggers on the tables you want to change, it may not fit your needs (an example invocation follows).
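For the first alternative, the catch-up and swap might look like this sketch (it assumes the new table was built as bigtable_new and that a hypothetical updated_at column identifies rows changed since the initial copy; the timestamp is a placeholder):
REPLACE INTO bigtable_new
SELECT * FROM bigtable WHERE updated_at > '2013-06-01 00:00:00';
RENAME TABLE bigtable TO bigtable_old, bigtable_new TO bigtable;
For pt-online-schema-change, an invocation might look like this (database, table, and column names are placeholders):
pt-online-schema-change --alter "ADD COLUMN program_id INT" D=mydb,t=bigtable --execute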
Is it safe to rename a table with partitions? Would the partitions need to be rebuilt?
Basically, I have a large database at 87G. I need to do some upgrades/schema changes. My intention was to rename the large data tables, make the necessary changes/upgrades, and then backfill the tables afterwards.