MySQL drop foreign key too slow?

Dropping a foreign key (with ALTER TABLE) on a table with 215k+ records seems to take a long time (17+ minutes). Is it possible to somehow speed up the process?
SQL: ALTER TABLE sales_flat_order_grid DROP FOREIGN KEY FK_SALES_FLAT_ORDER_GRID_STORE;
It is a Magento upgrade that takes ages.

Unless you are using the InnoDB Plugin (and by default, in MySQL 5.0 and 5.1 you are not), removing an index requires rebuilding the whole table.
If you can't upgrade MySQL, you should either look at an online schema change (transferring all of the data to a new table without the index) or stop the site, minimize any I/O activity, and wait for the operation to complete.
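A minimal sketch of the copy-to-a-new-table approach, reusing the table and constraint names from the question. CREATE TABLE ... LIKE copies columns and indexes but not foreign key definitions, so the new table is created without the constraint; writes have to be stopped while the data is copied, and the whole thing should be rehearsed on a staging copy first.
CREATE TABLE sales_flat_order_grid_new LIKE sales_flat_order_grid;
-- Copy the data; no writes should hit the original table during this step.
INSERT INTO sales_flat_order_grid_new SELECT * FROM sales_flat_order_grid;
-- Swap the tables atomically, then drop the old copy once verified.
RENAME TABLE sales_flat_order_grid TO sales_flat_order_grid_old,
             sales_flat_order_grid_new TO sales_flat_order_grid;
DROP TABLE sales_flat_order_grid_old;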

Related

Create foreign key on MySQL table takes forever with copy to tmp table

I am trying to set a foreign key constraint on a 5.7 InnoDB table with 30M+ rows.
It has now been running for 45 minutes on a quad-core 64GB server. The processlist shows the state "copy to tmp table" for the issued ALTER TABLE command.
innodb_buffer_pool_size is set to 32G and has room.
Why does the system create a tmp table, and can this somehow be sped up?
It's likely that the time is being taken building an index for that foreign key. If you already had an index where the foreign key column(s) were the leftmost columns of the index, then it would use that index and not build a new one.
45 minutes doesn't sound like an unusual amount of time to build an index on such a large table. You haven't said what the data type of the foreign key column(s) are, so perhaps it's a large varchar or something and it is taking many gigabytes to build that index.
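If the column isn't indexed yet, a rough sketch (with invented table, column, and constraint names) is to build the index as an explicit first step, so the subsequent ADD FOREIGN KEY finds an index whose leftmost column matches and doesn't have to build its own:
-- Build the secondary index first (online where the server supports it).
ALTER TABLE child_table ADD INDEX idx_parent_id (parent_id), ALGORITHM=INPLACE, LOCK=NONE;
-- The constraint can then reuse idx_parent_id instead of creating a new index.
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);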
Perhaps your server's disk is too slow. If you're using non-SSD storage, or remote storage (like Amazon EBS), it's slow by modern standards.
The number of CPU cores isn't going to make any difference, because the work is being done in one thread anyway. A faster CPU clock speed would help, but not more cores.
At my company, we use pt-online-schema-change to apply all schema changes or index builds. This allows clients to read and write the table concurrently, so it doesn't matter that it takes 45 minutes or 90 minutes or even longer. Eventually it finishes, and swaps the new table for the old table.
Attention! This disables key checking, so know what you are doing; in some cases it is not recommended, but it can help many people, so I think it's worth answering.
I had this problem this week. I have a client that is still on MySQL 5.5, so I had to make it work. You just need to disable key checking and take your application down for maintenance (so no other operations are running).
Before creating your FK or adding a column, use:
ALTER TABLE table_name DISABLE KEYS;
Then run your command; my table with 1M rows took only 57 seconds.
Then you run:
ALTER TABLE table_name ENABLE KEYS;
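Put together, the sequence looks roughly like this (table, column, and constraint names are placeholders; on InnoDB tables DISABLE KEYS may be reported as having no effect, so rehearse it on a copy first):
ALTER TABLE orders DISABLE KEYS;
-- The actual change, e.g. adding the foreign key.
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customers (id);
ALTER TABLE orders ENABLE KEYS;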

Creating an index before a FK in MySQL

I have a not-so-big table, around 2M rows.
Because of a business rule I had to add a new reference to this table.
Right now the application is writing values but not using the column.
Now I need to update all null rows to the correct values, create a FK, and start using the column.
But this table has a lot of reads, and when I try to alter table to add the FK the table is locked and the read queries get blocked.
Is there any way to speed this up?
Does leaving all the fields NULL help (since I think there will be no need to check whether the values are valid)?
Does creating an index beforehand help?
In Postgres I could create a NOT VALID FK and then validate it (which caused only a row lock, not a table lock); is there anything similar in MySQL?
What's taking time is building the index. A foreign key requires an index. If there is already an index on the appropriate column(s), the FK will use it. If there is no index, then adding the FK constraint implicitly builds a new index. This takes a while, and the table is locked in the meantime.
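A quick way to check whether a usable index already exists (the table name here is a placeholder) is to list the indexes and confirm the FK column(s) appear as the leftmost column(s) of one of them:
SHOW INDEX FROM mytable;
-- Or the same information from the data dictionary:
SELECT INDEX_NAME, SEQ_IN_INDEX, COLUMN_NAME
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'mytable'
ORDER BY INDEX_NAME, SEQ_IN_INDEX;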
Starting in MySQL 5.6, building an index should allow concurrent read and write queries. You can try to make this explicit:
ALTER TABLE mytable ADD INDEX (col1, col2), LOCK=NONE;
If this doesn't work (for example, if it gives an error because it doesn't recognize the LOCK=NONE syntax), then you aren't using a version of MySQL that supports online DDL. See https://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl-operations.html
If you can't build an index or define a foreign key without locking the table, then I suggest trying the free tool pt-online-schema-change. We use this at my job, and we make many schema changes per day in production, without blocking any queries.

Geodjango and Innodb, mixed innodb and myisam models

I keep hearing that InnoDB is better for data integrity, but unfortunately as of MySQL 5.6 it does not yet support SPATIAL indexes. A fast SPATIAL index is pretty critical to my app, though what's nice about my model is that it pretty much results in a fairly static (write once, read many) table of (ID, POINT), so I could use MyISAM and not care too much.
I'd like to restrict the use of MyISAM to just that table, and migrate it over when InnoDB support for SPATIAL is ready. Problem is, if I ALTER TABLE after my models are migrated (by having an app/sql/app_model.sql) to switch the table to MyISAM, MySQL complains:
ERROR 1217 (23000): Cannot delete or update a parent row: a foreign key constraint fails
That makes sense, my other models refer to this one and Django automatically makes FOREIGN KEY constraints between those models and this one.
What's the best strategy here? Should I abandon InnoDB and switch everything back to MyISAM? Can I just drop all the FOREIGN KEY constraints?
I tried automating the FOREIGN KEY drops by looking in INFORMATION_SCHEMA.TABLE_CONSTRAINTS, but that only lists the tables that have the constraints, not the tables referred to by those constraints. I would have to do some fuzzy column name matching which feels very brittle.
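(For reference, INFORMATION_SCHEMA.KEY_COLUMN_USAGE does record the referenced table, so a lookup along these lines is possible; the schema and table names below are placeholders.)
SELECT TABLE_NAME, CONSTRAINT_NAME, COLUMN_NAME, REFERENCED_COLUMN_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE CONSTRAINT_SCHEMA = 'myapp_db'
  AND REFERENCED_TABLE_NAME = 'app_pointmodel';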
To solve this I gave up on using InnoDB by default. Because Amazon RDS makes InnoDB the default, I did this by adding an init_command in my settings.py:
DATABASES = {
    'default': {
        # ... ENGINE, NAME, USER, and other connection settings ...
        'OPTIONS': {
            'init_command': 'SET storage_engine=MYISAM',  # Can't make SPATIAL keys on InnoDB
        },
    },
}
Then for all but the table with a SPATIAL index I created a $modelname.sql file under the $appname/sql directory that changes the engine after it's created.
-- Alter to InnoDB so we can make concurrent insertions w/o full table lock.
ALTER TABLE <modeltable> ENGINE=INNODB;
Switching to a MyISAM default means Django doesn't automatically create the FOREIGN KEY constraints for your InnoDB tables, which isn't ideal. I wish there were a way to make Django create them after the fact.
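(One way, left as a sketch with placeholder table, column, and constraint names, is to recreate them by hand in another per-model .sql file once the tables are back on InnoDB:)
ALTER TABLE app_order
  ADD CONSTRAINT app_order_customer_id_fk
  FOREIGN KEY (customer_id) REFERENCES app_customer (id);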

MySQL: Dropping a foreign key constraint on an InnoDB table

I want to remove a foreign key constraint from a table. It is taking a very long time, and I wonder what bad things can happen when doing this in a production environment.
ALTER TABLE table DROP FOREIGN KEY fk_my_foreign_key;
Why is it taking that long?
Can I speed it up?
Is it safe to interrupt the process in the middle?
Is there any side effect to running such an operation on a production server?
Is there any consistency issue if the ALTER TABLE fails (e.g. lost connection to the server)? What should I do in that case, when I cannot restart the server with a different configuration (max packet size)?
More information as requested:
MySQL server version: 5.5.34
The foreign key references a column on the same table
The table has around 80 million rows
Key + constraint on the table, with ON UPDATE CASCADE ON DELETE CASCADE
In most cases, ALTER TABLE works by making a temporary copy of the original table. The alteration is performed on the copy, and then the original table is deleted and the new one is renamed. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table are stalled until the new table is ready, and then are automatically redirected to the new table without any failed updates.
Thanks.
What about the other cases? Can I prevent such locks?
Firstly, I must say that best practice is always to test such a change in an offline environment.
Is the table used by replication? If so, you would need to remove it first. Also, if the table is currently being used, it could be locked by a process; check the activity monitor and look for deadlocks. It would be a good idea to ensure that the key is not referenced by any index.
Many detailed articles on how to safely and correctly remove a foreign key can be found on Google.
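On a plain MySQL server, "check the activity monitor" boils down to statements like these, run from the mysql client:
-- See which sessions are running and whether any are touching the table.
SHOW FULL PROCESSLIST;
-- Look for lock waits and recent deadlocks in the InnoDB status output.
SHOW ENGINE INNODB STATUS\G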

CREATE INDEX MySQL 5.6.13 On Production Database

I am running MySQL 5.6.13 and I would like to run a CREATE INDEX ... BTREE statement on my production database.
The table is InnoDB and has ~ 4 million rows, and I would like very much not to lock it.
According to the docs, it appears as if this statement will not completely lock my table and will return quickly. But I wanted a second opinion before I made this change.
Would it be safe to create this index?
By default, InnoDB in MySQL 5.6 will perform a read lock while creating the index, so you can still have other concurrent clients SELECT from the table, but not do insert/update/delete on that table while the index is being created.
You can optionally allow the index creation to be completely online and not even do the read lock:
ALTER TABLE my_table ADD INDEX a (a), LOCK=NONE;
See http://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html for more details about online DDL statements in MySQL.
Also see this blog post from a MySQL Community Manager, posted today: Top 10 advances to availability since MySQL 5.5
PS: It's not necessary to specify BTREE for the index type. InnoDB supports only BTREE indexes, so it ignores that option.
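For reference, the same online build can also be written as a CREATE INDEX statement, which in 5.6 accepts the ALGORITHM and LOCK clauses as well (index and column names are placeholders):
CREATE INDEX a ON my_table (a) ALGORITHM=INPLACE LOCK=NONE;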