Switching to InnoDB from MyISAM via phpMyAdmin very slow - mysql

I have been switching my tables one by one to InnoDB on phpMyAdmin. Each table took a max of 30 seconds.
One table is stuck and has taken over 15 minutes (still going).
In the mysql process list, it shows:
status:
copy to tmp table
info:
ALTER TABLE `table` auto_increment = 2446976 ROW_FORMAT = DYNAMIC
Why is this process taking so long?
Can I kill this process? Or should I just let it go? The table is hot so some rows are waiting to be inserted.
The table does have a unique index on a varchar(30) column. Could this be the problem?

It takes a long time because MySQL needs to create a new table with the new structure, then copy the data from the old table (MyISAM) into the new one (InnoDB). Once all records are copied, it swaps the tables.
I don't recommend killing it, because the rollback (the new table is InnoDB) would take even longer. Just wait until it finishes; once the ALTER is done, the table will be InnoDB and in good condition.
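If you want to keep an eye on it while it runs, something like the following shows the state and elapsed time of the copy (a minimal sketch; the WHERE filter is just a convenience and assumes the statement text starts with ALTER TABLE):
SHOW FULL PROCESSLIST;                -- look for the session in the "copy to tmp table" state
SELECT ID, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE INFO LIKE 'ALTER TABLE%';       -- same data, filterable with SQL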

Related

change engine type from MyISAM to InnoDB

I want to change the engine type from MyISAM to InnoDB.
What I Did:
Method 1:
Copy the table structure into a new database.
Change the table engine from MyISAM to InnoDB.
Export the data from the existing table (MyISAM).
Import the data into the new table (InnoDB).
Here, I can see the total row count and the size of the table, but no records show up when I browse it.
Method 2:
Copy the table structure into a new database.
Export the data from the existing database.
Import the data into the new database.
Change the table engine from MyISAM to InnoDB.
Here, I notice that after changing the engine type many records are missing.
In the customer table, 310749 records were imported; after changing the engine type I see only 243898, a loss of 66851 records.
What is wrong with this?
Is there any other way to change the engine from MyISAM to InnoDB without losing data?
Simply do ALTER TABLE foo ENGINE=InnoDB; but that converts the table 'in place'. If you want the new table in a different database:
CREATE TABLE db2.foo LIKE db1.foo;
ALTER TABLE db2.foo ENGINE=InnoDB; -- and possibly other changes, see blog below
INSERT INTO db2.foo
SELECT * FROM db1.foo; -- copy data over
SELECT COUNT(*) FROM db1.foo;
SELECT COUNT(*) FROM db2.foo; -- compare exact number of rows
The number of rows -- If you are using SHOW TABLE STATUS to see that, be aware that MyISAM provides an exact number of rows, but InnoDB only approximates the number. Use SELECT COUNT(*) FROM foo to get the exact number of rows.
Here, let me knock the cobwebs off my old blog on moving from MyISAM to InnoDB: http://mysql.rjweb.org/doc.php/myisam2innodb
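As a quick sanity check after converting, a query like this lists any tables that are still MyISAM (a sketch; replace db1 with your schema name):
-- List remaining MyISAM tables in a schema
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db1'
  AND ENGINE = 'MyISAM';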

Why can MySQL InnoDB update data while altering the table structure?

When I add a new column to a table and, at the same time, update data in the table before the ALTER TABLE has finished, the update succeeds. Why?
Why doesn't the MySQL InnoDB engine lock the table when altering the table structure? And if it does lock the table, why can I still update the table data?
Conditions:
My table is quite large, about 16,000,000 records.
MySQL version: 5.7.15.
Certain ALTERs do not require locking the table; some don't even modify any part of the data. If you would like to show us the ALTER and provide the MySQL version number, we can be more specific.
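For reference, on MySQL 5.6/5.7 you can ask for an online ALTER explicitly and have the server refuse rather than silently fall back to a table copy; this is just a sketch with a hypothetical table t and column new_col:
-- Request an in-place, non-locking ADD COLUMN; MySQL returns an error
-- instead of copying the table if it cannot honour ALGORITHM/LOCK.
ALTER TABLE t
  ADD COLUMN new_col INT NULL,
  ALGORITHM=INPLACE, LOCK=NONE;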

Update or insert a mysql database with 60 million entries

I have a MySQL database with a table of around 60 million entries and a primary key, say 'x'. I have a data set (CSV file) that also has around 60 million entries, keyed by 'x' as well. For values of 'x' common to both the MySQL table and the dataset, the corresponding rows in the table just get a counter column incremented. The new keys from the dataset are to be inserted.
A simple serial execution that updates the entry if present and otherwise inserts it takes around 8 hours to complete. What can I do to speed up this whole procedure?
Plan A: IODKU (INSERT ... ON DUPLICATE KEY UPDATE), as @Rogue suggested; see the sketch after this list.
Plan B: Two SQL statements; they might run faster because part of the 8 hours is gathering a huge amount of undo information in case of a crash. The normalization section comes close to those 2 queries.
Plan C: Walk through the pair of tables, using the PRIMARY KEY of one of them to do IODKU in chunks of, say, 1000 rows. See my Chunking code (and adapt it from DELETE to IODKU).
In Plans B and C, turn on autocommit so that you don't build up a huge redo log.
Plan D: Build a new table as you merge the two tables with a JOIN. Finish with an atomic
RENAME TABLE real TO old,
new TO real;
DROP TABLE old; -- when happy with the result.
Plan E: Plan D + Chunking of the INSERT ... SELECT real JOIN tmp ...
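Here is a rough sketch of Plan A, assuming the CSV is first loaded into a staging table; the names (live_counts, staging, x, cnt) and the file path are placeholders, and new keys are assumed to start their counter at 1:
-- Staging table holding just the keys from the CSV (placeholder names throughout)
CREATE TABLE staging (
    x BIGINT NOT NULL,
    PRIMARY KEY (x)
) ENGINE=InnoDB;

LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE staging
FIELDS TERMINATED BY ','
(x);

-- IODKU: insert new keys with cnt = 1, bump the counter for keys that already exist
INSERT INTO live_counts (x, cnt)
SELECT x, 1 FROM staging
ON DUPLICATE KEY UPDATE cnt = cnt + 1;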

MySQL Locking Tables with millions of rows

I've been running a website that handles a large amount of data.
Users save data like ip, id, and date to the server, and it is stored in a MySQL database. Each entry is stored as a single row in a table.
Right now there are approximately 24 million rows in the table.
Problem 1:
Things are getting slow now; a full table scan can take many minutes, even though I have already indexed the table.
Problem 2:
If a user runs a SELECT against the table, it can potentially block all other users' access to the site (as the table is locked) until the query completes.
Our server:
32 GB RAM
12-core CPU with 24 threads
The table uses the MyISAM engine.
EXPLAIN SELECT SUM(impresn), SUM(rae), SUM(reve), `date` FROM `publisher_ads_hits` WHERE date between '2015-05-01' AND '2016-04-02' AND userid='168' GROUP BY date ORDER BY date DESC
To the locking point from @Max P.: if you write to MyISAM tables, ALL SELECTs are blocked; there is only a table lock. If you use InnoDB, there is row locking that only locks the rows it needs. Also, show us the EXPLAIN of your queries, because it is possible you need to create some new indexes. MySQL generally uses only one index per query, so if you use more fields in the WHERE condition it can be useful to have a composite index over those fields.
According to the EXPLAIN output, the query doesn't use an index. Try adding a composite index on (userid, date); a sketch follows below.
If you have many update and delete operations, try changing the engine to InnoDB.
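For example, something along these lines (a sketch against the table from the question; the index name is made up, and on a table this size either ALTER will take a while to rebuild):
-- Composite index covering the WHERE (userid) and GROUP BY/ORDER BY (date) columns
ALTER TABLE publisher_ads_hits
  ADD INDEX idx_userid_date (userid, `date`);

-- Optionally convert to InnoDB at the same time to get row-level locking
ALTER TABLE publisher_ads_hits ENGINE=InnoDB;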
The basic problem is the full table scan. Some suggestions:
Partition the table based on date and don't keep more than 6-12 months of data in the live system (see the partitioning sketch below).
Add an index on userid.
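A rough sketch of what date-based partitioning could look like here; note that MySQL requires the partitioning column to be part of every unique key (including the primary key), so whether this applies directly depends on the table's key layout, and the partition names and boundaries below are only examples:
-- Partition by month on `date` so old months can be dropped cheaply
ALTER TABLE publisher_ads_hits
PARTITION BY RANGE (TO_DAYS(`date`)) (
    PARTITION p2015_05 VALUES LESS THAN (TO_DAYS('2015-06-01')),
    PARTITION p2015_06 VALUES LESS THAN (TO_DAYS('2015-07-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);

-- Later, removing an old month is a metadata operation rather than a big DELETE:
-- ALTER TABLE publisher_ads_hits DROP PARTITION p2015_05;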

query execution taking too long for altering table

I am using MySQL.
I have a table called address with a column called zip5 of type varchar(6).
I am using the query
alter table address change zip5 zip5 varchar(14);
but the query execution is taking too long. I have been waiting for almost 15 minutes for it to finish, and the address table has 9.7 million records. Does it take this long for this amount of data, or am I doing something wrong here?
Hmm, I don't know why, but
ALTER TABLE address MODIFY zip5 varchar(14)
seems to be a bit faster, at least on my system with a comparable table structure.
ALTER TABLE makes a copy of your table. Perhaps the bottleneck is your HD? Do you use SSDs? Or is your temporary table storage not on a fast disk? Is the disk full?
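One more thing worth checking, if the server is MySQL 5.7 or newer: extending a VARCHAR can sometimes be done in place, without the table copy, as long as the number of length bytes doesn't change (both 6 and 14 stay within the 0-255 byte range for typical character sets). This is only a sketch; MySQL returns an error instead of copying the table if the in-place path isn't available:
-- Ask for an in-place, non-locking change (keep any NOT NULL / DEFAULT attributes
-- the column already has, since MODIFY redefines the column completely)
ALTER TABLE address
  MODIFY zip5 VARCHAR(14),
  ALGORITHM=INPLACE, LOCK=NONE;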