"Copy to tmp table" during ALTER TABLE on AUTO_INCREMENT - MySQL

We have a problem with MySQL (5.5.x), and I hope you can help us.
At some point during the day, I notice a process in the state "copy to tmp table", running this query:
ALTER TABLE `MyTableName` AUTO_INCREMENT=200000001
After this, all other queries go into "Waiting for table metadata lock", everything freezes, and nothing gets processed.
I have to kill that process, and from that point all queries resume.
Why? How can I fix this problem?

In MySQL 5.5, an ALTER TABLE such as the one you ran copies the whole table. The larger the table, the longer this takes, especially if you have slow storage.
What is the size of your table? You can get this from SHOW TABLE STATUS LIKE 'MyTableName'\G by looking at data_length + index_length.
I just did a test on my laptop: I filled a table in a MySQL 5.5 instance until it was about 175 MB. Running an ALTER TABLE to set the auto-increment value took about 5-6 seconds. Your results may differ, depending on the power of your server and the speed of your storage.
While the alter table is running, the thread doing that operation holds a metadata lock on the table, which blocks all other queries, even read-only SELECT statements.
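You can watch this happen from another session; a sketch (the thread id is a placeholder):

```sql
-- The ALTER thread shows State "copy to tmp table"; every other query
-- on the table shows State "Waiting for table metadata lock":
SHOW FULL PROCESSLIST;

-- If you must unblock the server, kill the ALTER thread using the Id
-- column from the processlist output (12345 is illustrative):
KILL 12345;
```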
ALTER TABLE was improved in 2013, as a feature of MySQL 5.6. Some types of alters were optimized to be done "in-place" so they don't have to copy the whole table if it's not necessary. Changing the AUTO_INCREMENT is one of these operations. No matter how large the table, if you alter table to change the AUTO_INCREMENT, it's quick because it only changes an attribute of the table, without requiring copying any rows of data.
See https://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl-operations.html
In MySQL 5.5, these optimizations were not implemented. So any alter table takes a long time, proportional to the size of the table.
The best way to fix this issue in your case is to upgrade to a newer version. MySQL 5.5 is beyond its end-of-life, and even MySQL 5.6 reaches its end-of-life in February 2021. It's time to upgrade.
If you can't upgrade, then you should investigate which client is running this ALTER TABLE statement. You said you noticed it at some point during the day; track that down. The processlist will tell you the client host the SQL statement is being run from, and the MySQL user it logged in as. You may also need to search the source code of any apps or scripts that use this database, or ask your teammates.
Once you have found the client that is running that ALTER TABLE, try to reschedule the statement to a time of day when it won't block important queries, or ask the responsible developer whether it's really necessary to run this ALTER TABLE so often.
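In MySQL 5.5 you can also query the processlist as a table, which makes it easier to pick out the offending client (the WHERE clause is just an illustration):

```sql
SELECT ID, USER, HOST, DB, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE INFO LIKE 'ALTER TABLE%';
-- HOST shows the machine the statement came from;
-- USER shows the MySQL account that ran it.
```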

The problem could be due to a server restart, as InnoDB stores the last auto-increment value in memory and recalculates it at server restart (InnoDB AUTO_INCREMENT Counter Initialization):
If you specify an AUTO_INCREMENT column for an InnoDB table, the table handle in the InnoDB data dictionary contains a special counter called the auto-increment counter that is used in assigning new values for the column. This counter is stored only in main memory, not on disk.
To initialize an auto-increment counter after a server restart, InnoDB executes the equivalent of the following statement on the first insert into a table containing an AUTO_INCREMENT column.
SELECT MAX(ai_col) FROM table_name FOR UPDATE;
InnoDB increments the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table. By default, the value is incremented by 1. This default can be overridden by the auto_increment_increment configuration setting.
If the table is empty, InnoDB uses the value 1. This default can be overridden by the auto_increment_offset configuration setting.
Look at the MySQL logs and try to find out whether the server is restarting, causing the ALTER TABLE to reset the auto-increment counter.
Try restarting the mysql server to see if you get this behaviour.
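If you want to verify the restart behaviour on a MySQL 5.5 test instance, a sketch (table and column names are illustrative):

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO t (v) VALUES (1), (2), (3);  -- assigns ids 1, 2, 3
DELETE FROM t WHERE id = 3;

-- Restart the server here. On the first insert after the restart, InnoDB
-- re-initializes the counter with SELECT MAX(id) FROM t FOR UPDATE, which
-- now returns 2, so the previously used id 3 is handed out again:
INSERT INTO t (v) VALUES (4);            -- gets id 3, not id 4
```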
If this is the case, you could try to:
Prevent the mysql server from restarting; maybe there is a cron job that restarts it once a day
Upgrade your MySQL version to 8 (auto-increment counter saved to table metadata):
In MySQL 8.0, this behavior is changed. The current maximum auto-increment counter value is written to the redo log each time it changes and is saved to an engine-private system table on each checkpoint.
On a server restart following a normal shutdown, InnoDB initializes the in-memory auto-increment counter using the current maximum auto-increment value stored in the data dictionary system table.
You could also try to speed up "copy to tmp table" operations (see: "skip copying to tmp table on disk mysql").
References
How to make InnoDB table not reset autoincrement on server restart?
https://serverfault.com/questions/228690/mysql-auto-increment-fields-resets-by-itself

Related

Aurora MySQL - Add column to table getting stuck

I have an instance of Aurora MySQL v2.10.2
I am trying to alter a small table (3k rows) to add a new column.
This is a prod table and is constantly queried/updated.
The alter command is getting stuck, and it also blocks all the other running queries in the background. By stuck, I mean it's been running for more than 1 minute and all the queries, including the alter statement, are in the "Waiting for table metadata lock" state.
This should not take more than a few seconds though.
I can not upgrade to version 3 or change the lab settings as described here to enable Fast/Instant DDL: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.FastDDL.html#AuroraMySQL.Managing.FastDDL-v2
Is there anything I can check to get this alter to run?
I have tried these so far, and each of them gets stuck:
ALTER TABLE table ADD COLUMN `my_col` int DEFAULT 100 AFTER another_col;
ALTER TABLE table ADD COLUMN `my_col` int;
ALTER TABLE table ADD COLUMN `my_col` int NULL;
It sounds like you are running into an issue with the table metadata lock when trying to alter your table in an Aurora MySQL v2.10.2 instance. This can happen when the table is being constantly queried/updated, as you mentioned.
Here are a few things you can try to resolve this issue:
Try to reduce the workload on the table during the alter operation. You can do this by temporarily disabling updates to the table or by redirecting queries to a replica.
Increase the innodb_buffer_pool_size parameter in the MySQL configuration file. This parameter controls the amount of memory used for caching data and index pages, and increasing it can help reduce the impact of table locks.
Increase the innodb_lock_wait_timeout parameter in the MySQL configuration file. This parameter controls the time that a session waits for a lock before giving up and returning an error. By increasing this value, you can allow the alter statement more time to complete.
Try running the alter statement during a maintenance window or low-usage period.
Try breaking the ALTER into multiple steps. For example, you can create a new table with the new column, use an INSERT ... SELECT statement to transfer the data, then drop the original table and rename the new table.
If none of the above solutions work, you might consider using the "pt-online-schema-change" tool from Percona. This tool performs the alter on a copy of the table and swaps it in at the end, so the original table stays available while the change runs.
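For reference, a pt-online-schema-change invocation for this alter might look like the sketch below (host, credentials, and the schema/table names are placeholders; test on a non-production copy first):

```shell
pt-online-schema-change \
  --alter "ADD COLUMN my_col INT DEFAULT 100" \
  --host my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com \
  --user admin --ask-pass \
  D=mydb,t=mytable \
  --execute
```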

What tools are available to free allocated space in a MySQL database after deleting data?

I am using MySQL Server 5.1.58 (community-log). The problem is that after deleting data, the allocated space of the MySQL database is not being freed, and as a result the backup size of my database is increasing day by day.
Please let me know of any tool that can resolve this issue.
Remember that MySQL locks the table while OPTIMIZE TABLE is running.
From the official documentation for your MySQL version:
OPTIMIZE TABLE should be used if you have deleted a large part of a table or if you have made many changes to a table with variable-length rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns). Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file.
Additional notes for InnoDB:
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE, which rebuilds the table to update index statistics and free unused space in the clustered index. Beginning with MySQL 5.1.27, this is displayed in the output of OPTIMIZE TABLE when you run it on an InnoDB table, as shown here:
mysql> OPTIMIZE TABLE foo;
Table does not support optimize, doing recreate + analyze instead
So:
OPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE tbl_name [, tbl_name] ...
By default, OPTIMIZE TABLE statements are written to the binary log so that they will be replicated to replication slaves. Logging can be suppressed with the optional NO_WRITE_TO_BINLOG keyword or its alias LOCAL.
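Putting that together for your case (the table name is illustrative):

```sql
-- Rebuilds the table, updating index statistics and freeing unused space;
-- the table is locked while this runs:
OPTIMIZE TABLE mytable;

-- The same, but suppressed from the binary log so it is not replicated:
OPTIMIZE NO_WRITE_TO_BINLOG TABLE mytable;
```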

MySQL performance of adding a column to a large table

I have MySQL 5.5.37 with InnoDB installed locally with apt-get on Ubuntu 13.10. My machine is i7-3770 + 32Gb memory + SSD hard drive on my desktop. For a table "mytable" which contains only 1.5 million records the following DDL query takes more than 20 min (!):
ALTER TABLE mytable ADD some_column CHAR(1) NOT NULL DEFAULT 'N';
Is there a way to improve it?
I checked
show processlist;
and it was showing that it is copying my table for some reason.
It is disturbingly inconvenient. Is there a way to turn off this copy?
Are there other ways to improve performance of adding a column to a large table?
Other than that my DB is relatively small with only 1.3Gb dump size. Therefore it should (in theory) fit 100% in memory.
Are there settings which can help?
Would migration to Percona change anything for me?
Update: I have
innodb_buffer_pool_size = 134217728
Are there other ways to improve performance of adding a column to a large table?
Short answer: no. You may add ENUM and SET values instantly, and you may add secondary indexes while locking only for writes, but altering table structure always requires a table copy.
Long answer: your real problem isn't really performance, but the lock time. It doesn't matter if it's slow, it only matters that other clients can't perform queries until your ALTER TABLE is finished. There are some options in that case:
You may use pt-online-schema-change from the Percona toolkit. Back up your data first! This is the easiest solution, but it may not work in all cases.
If you don't use foreign keys and it's slow because you have a lot of indexes, it might be faster for you to create a copy of the table with the changes you need but no secondary indexes, populate it with the data, and create all indexes with a single alter table at the end.
If it's easy for you to create replicas, for example if you're hosted at Amazon RDS, you can create a master-master replica, run the alter table there, let it get back in sync, and switch instances once it's finished.
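The copy-without-secondary-indexes option above might be sketched like this (the index and column names are hypothetical, and it assumes writes can be paused during the copy):

```sql
-- 1. Create a copy with the new column but without secondary indexes:
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new
  ADD some_column CHAR(1) NOT NULL DEFAULT 'N',
  DROP INDEX idx_a,
  DROP INDEX idx_b;

-- 2. Copy the data; only the clustered index has to be maintained, so this
--    is the fast part (list the original columns explicitly; the new
--    column picks up its default):
INSERT INTO mytable_new (id, col1, col2)
  SELECT id, col1, col2 FROM mytable;

-- 3. Recreate all secondary indexes in a single pass:
ALTER TABLE mytable_new ADD INDEX idx_a (col1), ADD INDEX idx_b (col2);

-- 4. Atomically swap the tables:
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
```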
UPDATE
As others mentioned, InnoDB in MySQL 8.0 added support for instant column adds. It's not a magical solution, and it has limitations and side effects -- it can only be the last column, the table must not have a full-text index, etc. -- but it should help in many cases.
You can specify an explicit ALGORITHM=INSTANT clause, and if an instant schema change isn't possible, MySQL will fail with an error instead of silently falling back to INPLACE or COPY. Example:
ALTER TABLE mytable
ADD COLUMN mycolumn varchar(36) DEFAULT NULL,
ALGORITHM=INSTANT;
https://mysqlserverteam.com/mysql-8-0-innodb-now-supports-instant-add-column/
MariaDB 10.3, MySQL 8.0, and probably other MySQL variants to follow have an "Instant ADD COLUMN" feature whereby most columns (there are a few constraints, see the docs) can be added instantly with no table rebuild.
MariaDB: https://mariadb.com/resources/blog/instant-add-column-innodb
MySQL: https://mysqlserverteam.com/mysql-8-0-innodb-now-supports-instant-add-column/
I know this is a rather old question, but today I encountered a similar problem. I decided to create a new table and copy the old table into it. Something like:
CREATE TABLE New_mytable LIKE mytable ;
ALTER TABLE New_mytable ADD some_column CHAR(1) NOT NULL DEFAULT 'N';
insert into New_mytable select * from mytable ;
Then
START TRANSACTION;
insert into New_mytable select * from mytable where id > (Select max(id) from New_mytable) ;
RENAME TABLE mytable TO Old_mytable;
RENAME TABLE New_mytable TO mytable;
COMMIT;
This does not make the update process go any faster, but it does minimize downtime.
Hope this helps.
What about Online DDL?
http://www.tocker.ca/2013/11/05/a-closer-look-at-online-ddl-in-mysql-5-6.html
Maybe you would use TokuDB instead:
http://www.tokutek.com/products/tokudb-for-mysql/
There is no way to avoid copying the table when adding or removing columns because the structure changes. You can add or remove secondary indexes without a table copy.
Your table data doesn't reside in memory. The indexes can reside in memory.
1.5 million records is not a lot of rows, and 20 minutes seems quite long, but perhaps your rows are large and you have many indexes.
While the table is being copied, you can still select rows from the table. However, if you try to do any updates, they will be blocked until the ALTER is complete.
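By contrast, in MySQL 5.6 and later you can request the in-place path explicitly when adding a secondary index, so the statement errors out instead of silently copying the table (the index and column names are illustrative):

```sql
ALTER TABLE mytable
  ADD INDEX idx_some_column (some_column),
  ALGORITHM=INPLACE, LOCK=NONE;
```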

Adding Index to 3 million rows MySQL

I need to add at least 1 index to a column of type int(1) on an InnoDB table. There are about 3 million rows that it would need to index. This is a database on my production server, and it is in use by thousands of people everyday. I tried to add an index the standard way, but it was taking up too much time (I let it run for about 7 minutes before killing the process) and locking rows, meaning a frozen application for many users.
My VPS that runs all of this has 512mb of RAM and has an Intel Xeon E5504 processor.
How can I add an index to this production database without interrupting my users' experience?
Unless the table is either read-only or write-only, you'll probably need to take down the site: lock the database, run the operation, and wait.
If the table is write-only, swap the writes to a temporary table, run the operation on the old table, then swap the writes back and insert the data from the temporary table.
If the table is read-only, duplicate the table and run the operation on the copy.
If the table is read/write, then a messy alternative that might work is to create a new table with the indexes, set the primary-key start point to the next value in the original table, add a join to your read requests to select from both tables, but write exclusively to the new table. Then write a script that copies rows from the old table to the new one and deletes them from the old table. It'll take far, far longer than the downtime approach, and plenty can go wrong, but it should be doable.
You can set the start point of a primary key with:
ALTER TABLE `my_table` AUTO_INCREMENT = X;
Hope that helps.
Take a look at pt-online-schema-change. I think this tool can be quite useful in your case. It will obviously put additional load on your database server, but it should not block access to the table for most of the operation.

MySQL Drop INDEX and REPLICATION

In a MySQL MASTER MASTER scenario using InnoDB
When dropping an index on one instance will the same table on the other instance be available?
What is the sequence of activities?
I assume the following sequence:
DROP INDEX on 1st instance
Added to the binary log
DROP INDEX on 2nd instance
Can anyone confirm?
I believe the following will happen:
Your DROP INDEX (which really runs an ALTER TABLE ... DROP INDEX) runs on the master
If the ALTER completes successfully the statement will then be added to the binlog and will be run on the slave
This means that the ALTER TABLE on the other machine won't start until the ALTER TABLE has successfully completed on the first (master) machine.
While the ALTER TABLE is running on either machine, the table will be readable for a while, and then neither readable nor writeable, as MySQL first makes a copy of the table internally and then applies the changes.
From http://dev.mysql.com/doc/refman/5.0/en/alter-table.html
In most cases, ALTER TABLE works by making a temporary copy of the original table. The alteration is performed on the copy, and then the original table is deleted and the new one is renamed. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table are stalled until the new table is ready, and then are automatically redirected to the new table without any failed updates. The temporary table is created in the database directory of the new table. This can be different from the database directory of the original table if ALTER TABLE is renaming the table to a different database.
In a MySQL MASTER MASTER scenario using InnoDB
As far as I'm aware, such a thing is not possible. You'd need to use NDB, or be in a multi-master-slave environment with auto-incrementing fields configured to increment by 1/2/3/more. So I'm assuming the latter. (Note: if you're aware of an InnoDB-based solution, please share.)
When dropping an index on one instance will the same table on the other instance be available?
Dropping an index only means that index won't be available; the table will be. The statement will be written (and propagated) to the binary log, and that's that.