Why can MySQL InnoDB update data while the table structure is being altered?

When I add a new column to a table and, before the ALTER TABLE finishes, update data in that same table, the update succeeds. Why?
Why doesn't the MySQL InnoDB engine lock the table while altering its structure? And if it does lock the table, why can I still update its data?
Conditions:
My table is quite large, about 16,000,000 records.
MySQL version: 5.7.15.

Certain ALTERs do not require locking the table; some don't even modify any part of the data. If you show us the exact ALTER statement, we can be more specific.
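For context, since MySQL 5.6 InnoDB supports online DDL: ADD COLUMN is typically executed with ALGORITHM=INPLACE and LOCK=NONE, which permits concurrent reads and writes while the table is rebuilt. A sketch (the table and column names here are hypothetical):

```sql
-- Request an online ALTER explicitly; if the requested algorithm or lock
-- level is not possible for this operation, MySQL raises an error instead
-- of silently blocking concurrent DML.
ALTER TABLE orders
    ADD COLUMN note VARCHAR(255) NULL,
    ALGORITHM=INPLACE, LOCK=NONE;
```

This is why an UPDATE issued during the ALTER can succeed: the table is never locked against DML for the bulk of the operation.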


MySQL: batch insert around 11 GB of data from one table to another

Is there a more efficient, less laborious way of copying all records from one table to another than doing this:
INSERT INTO product_backup SELECT * FROM product
Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns. I'd like to point out that this is not my database design; I have inherited a legacy system.
There's just one thing you're missing, especially if you're using InnoDB: you want to explicitly add an ORDER BY clause to your SELECT statement to ensure you're inserting rows in primary key (clustered index) order:
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id
Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.
Finally, if you are using InnoDB, you can reduce the number of row locks required by explicitly locking both tables. Note that a single LOCK TABLES statement must name every table you need, because issuing a second LOCK TABLES implicitly releases the locks taken by the first:
LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;
The locking probably won't make a huge difference, since row locking is very fast (though not as fast as table locks), but since you asked.
mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql
This takes a dump of the specified table with a DROP TABLE statement included, so the existing table is deleted when you import it. Then do:
mysql db_name < filepath/file_name.sql
DROP the destination table:
DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
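One caveat worth noting: CREATE TABLE ... AS SELECT copies the column definitions and the data, but not the indexes or the primary key. If the destination table should keep the same index structure, a sketch using CREATE TABLE ... LIKE instead (table names follow the example above):

```sql
-- LIKE copies the full table definition, including indexes and the
-- primary key; the data is then copied in a second step.
DROP TABLE IF EXISTS DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE LIKE SOURCE_TABLE;
INSERT INTO DESTINATION_TABLE SELECT * FROM SOURCE_TABLE;
```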
I don't think this will be worthwhile for a 50k-row table, but: if you have a dump of the database you can reload the table from it. Since you want to load the table into another one, you can change the table name in the dump with a sed command.
Here are some hints:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
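A minimal sketch of that sed trick, assuming the dump file is named dump.sql and the tables are named product and product_backup as in the question:

```shell
# Hypothetical example: create a tiny stand-in for a mysqldump file.
printf 'DROP TABLE IF EXISTS `product`;\nCREATE TABLE `product` (id INT);\nINSERT INTO `product` VALUES (1);\n' > dump.sql

# Rewrite the backquoted table name everywhere in the dump, so that
# restoring it creates and fills `product_backup` instead of `product`.
sed 's/`product`/`product_backup`/g' dump.sql > dump_backup.sql

cat dump_backup.sql
```

The rewritten file can then be loaded with `mysql db_name < dump_backup.sql`. Including the backticks in the pattern avoids accidentally rewriting column names or data that merely contain the word "product".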
An alternative (depending on your design) would be to use triggers on the original table inserts so that the duplicated table gets the data as well.
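For the trigger approach, a minimal sketch; the column list here is an assumption, and a real trigger would enumerate the actual 31 columns (with similar triggers for UPDATE and DELETE if full synchronization is needed):

```sql
-- Keep product_backup in sync with new inserts into product.
-- Column names (product_id, name) are placeholders for illustration.
CREATE TRIGGER product_after_insert
AFTER INSERT ON product
FOR EACH ROW
    INSERT INTO product_backup (product_id, name)
    VALUES (NEW.product_id, NEW.name);
```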
And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration, or dump from the master and load into the slave on a daily schedule.

Does it make sense to optimize the table after alter table drop column?

I dropped the column name in table employees.
If I run OPTIMIZE TABLE employees, will it reduce space usage?
My thoughts:
The documentation says that OPTIMIZE TABLE is equivalent to ALTER TABLE for InnoDB (if I read https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html#optimize-table-innodb-details correctly).
Also, ALTER TABLE ... DROP COLUMN changes the row structure, so it should rewrite all rows. This is where, I assume, the optimization happens.
It's not necessary to run OPTIMIZE TABLE on an InnoDB table after an ALTER TABLE that changes the row size.
The ALTER TABLE copies the rows into a new tablespace and rebuilds the indexes. This accomplishes the same defragmentation you hoped to get from OPTIMIZE TABLE.
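In fact, for InnoDB the manual notes that OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table the same way; MySQL even says so in the command's result:

```sql
OPTIMIZE TABLE employees;
-- Typically returns a note such as:
-- "Table does not support optimize, doing recreate + analyze instead"
```

So running it after the DROP COLUMN would simply rebuild the table a second time for no benefit.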

change engine type from MyISAM to InnoDB

I want to change the engine type from MyISAM to InnoDB.
What I did:
Method 1:
Copy the table structure into a new database.
Change the table engine from MyISAM to InnoDB.
Export the data from the existing (MyISAM) table.
Import the data into the new (InnoDB) table.
Here, I can see the total row count and the size of the table, but I don't see any records when browsing.
Method 2:
Copy the table structure into a new database.
Export the data from the existing database.
Import the data into the new database.
Change the table engine from MyISAM to InnoDB.
Here, I notice that after changing the engine type many records appear to be missing.
The customer table had 310,749 imported records; after changing the engine type I see only 243,898, a loss of 66,851 records.
What is wrong here?
Is there any other way to change the engine from MyISAM to InnoDB without losing data?
Simply do ALTER TABLE foo ENGINE=InnoDB; But that converts the table 'in place'. If you want the new table in a different database:
CREATE TABLE db2.foo LIKE db1.foo;
ALTER TABLE db2.foo ENGINE=InnoDB; -- and possibly other changes, see blog below
INSERT INTO db2.foo
SELECT * FROM db1.foo; -- copy data over
SELECT COUNT(*) FROM db1.foo;
SELECT COUNT(*) FROM db2.foo; -- compare exact number of rows
The number of rows: if you are using SHOW TABLE STATUS to check it, be aware that MyISAM stores an exact row count, but InnoDB only estimates it. Use SELECT COUNT(*) FROM foo to get the exact number of rows.
Here, let me knock the cobwebs off my old blog on moving from MyISAM to InnoDB: http://mysql.rjweb.org/doc.php/myisam2innodb
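When converting a whole database table by table, it can help to see what is left to do. One option (the schema name 'db1' is a placeholder) is to query information_schema:

```sql
-- List the tables in a given schema that are still MyISAM.
SELECT TABLE_NAME, ENGINE, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db1'
  AND ENGINE = 'MyISAM';
```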

Switching to InnoDB from MyISAM via phpMyAdmin very slow

I have been switching my tables one by one to InnoDB on phpMyAdmin. Each table took a max of 30 seconds.
One table is stuck and has taken over 15 minutes (still going).
In the mysql process list, it shows:
status:
copy to tmp table
info:
ALTER TABLE `table` auto_increment = 2446976 ROW_FORMAT = DYNAMIC
Why is this process taking so long?
Can I kill this process? Or should I just let it go? The table is hot so some rows are waiting to be inserted.
The table does have a unique index on a varchar(30) column. Could this be the problem?
It takes a long time because MySQL needs to create a new table with the new structure, then copy all the data from the old (MyISAM) table into the new (InnoDB) table. When all the records have been copied, it swaps the tables.
I don't recommend killing it, because the rollback (the new table is InnoDB) can take even longer than the ALTER itself. Just wait until it finishes. Once the ALTER is done, the table will be InnoDB and in good condition.
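While waiting, MySQL 5.7 can report rough progress for the copy through performance_schema stage instrumentation, assuming the relevant stage instruments and the events_stages_current consumer are enabled:

```sql
-- Compare estimated vs. completed work units for the running ALTER.
SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
FROM performance_schema.events_stages_current;
```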

Optimize mySql for faster alter table add column

I have a table that has 170,002,225 rows, with about 35 columns and two indexes. I want to add a column. The ALTER TABLE command took about 10 hours. The processor didn't seem busy during that time, nor were there excessive IO waits. This is on a 4-way high-performance box with plenty of memory.
Is this the best I can do? Is there something I can look at to optimize the add column in tuning of the db?
I faced a very similar situation in the past and improved the performance of the operation this way:
Create a new table (using the structure of the current table) with the new column(s) included.
Execute INSERT INTO new_table (column1, ..., columnN) SELECT column1, ..., columnN FROM current_table;
Rename the current table.
Rename the new table to the name of the current table.
ALTER TABLE in MySQL will actually create a new table with the new schema, re-insert all the data, and delete the old table. You might save some time by creating the new table, loading the data, and then renaming it.
From the High Performance MySQL book (the Percona guys):
The usual trick for loading a MyISAM table efficiently is to disable the keys, load the data, and re-enable the keys:
mysql> ALTER TABLE test.load_data DISABLE KEYS;
-- load data
mysql> ALTER TABLE test.load_data ENABLE KEYS;
Well, I would recommend using the latest Percona MySQL builds, plus note the following from the MySQL manual:
In other cases, MySQL creates a temporary table, even if the data wouldn't strictly need to be copied. For MyISAM tables, you can speed up the index re-creation operation (which is the slowest part of the alteration process) by setting the myisam_sort_buffer_size system variable to a high value.
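A sketch of that setting; the value shown is an arbitrary example, and it only helps index rebuilds on MyISAM tables:

```sql
-- Give MyISAM index rebuilds a larger sort buffer for this session.
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;  -- 256 MB
```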
You can run ALTER TABLE ... DISABLE KEYS first, then add the column, and then ALTER TABLE ... ENABLE KEYS. Beyond that, I don't see anything else that can be done here.
BTW, could you move to MongoDB? It doesn't rebuild anything when you add a column.
Maybe you can remove the indexes before altering the table, since rebuilding the indexes may be what takes most of the time?
Combining some of the comments on the other answers, this was the solution that worked for me (MySQL 5.6):
create table mytablenew like mytable;
alter table mytablenew add column col4a varchar(12) not null after col4;
alter table mytablenew drop index index1, drop index index2,...drop index indexN;
insert into mytablenew (col1,col2,...colN) select col1,col2,...colN from mytable;
alter table mytablenew add index index1 (col1), add index index2 (col2),...add index indexN (colN);
rename table mytable to mytableold, mytablenew to mytable;
On a 75M row table, dropping the indexes before the insert caused the query to complete in 24 minutes rather than 43 minutes.
Other answers/comments suggest insert into mytablenew (col1) select (col1) from mytable, but this produces ERROR 1241 (21000): Operand should contain 1 column(s) if you keep the parentheses in the SELECT part.
Other answers/comments suggest insert into mytablenew select * from mytable;, but this produces ERROR 1136 (21S01): Column count doesn't match value count at row 1 if you've already added the new column.