MySQL InnoDB row/table lock when performing ALTER

I created a sysbench table shown below with 25,000,000 records (5.7G in size):
Create Table: CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=25000001 DEFAULT CHARSET=latin1
Then I added an index on c using an ALTER statement directly, which took about 18 minutes to update the table, as shown below:
mysql> alter table sbtest1 add index c_1(c);
Query OK, 0 rows affected (18 min 47.32 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> show create table sbtest1\G
*************************** 1. row ***************************
Table: sbtest1
Create Table: CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`),
KEY `c_1` (`c`)
) ENGINE=InnoDB AUTO_INCREMENT=25000002 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
During the 18 minutes that the ALTER was running, I tried to perform some transactions on the table, inserting new records and updating existing records on column c, and to my surprise they all worked, when I expected a lock to prevent this. I have always understood that performing an ALTER on an InnoDB table, especially a large table, can result in a lock for the duration of the process, so why was I able to perform inserts and updates without any problems?
Here is some info about my server:
mysql> show variables like '%isolation%';
+-----------------------+-----------------+
| Variable_name | Value |
+-----------------------+-----------------+
| transaction_isolation | REPEATABLE-READ |
| tx_isolation | REPEATABLE-READ |
+-----------------------+-----------------+
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.25-28 |
+-----------+
To me, it now seems like in MySQL 5.7 it's okay to run the ALTER statement directly without any worries about locks. Is this an accurate conclusion?
UPDATED
When I tried to drop the added index c_1, it took less than a second, which also surprised me because I expected this to take longer than adding the index. I have always believed that adding an index is simple and quick, yet deleting or updating one takes a long time because the entire table structure has to be altered. So I am a bit confused about this too.

Adding a secondary index can be done in place and permits concurrent DML.
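That matches MySQL 5.7's online DDL behaviour: ADD INDEX on an InnoDB table runs in place and allows concurrent DML, and dropping a secondary index only modifies metadata, which is why the DROP in the update above returned in under a second. A minimal sketch on the same sbtest1 table, asking for the online behaviour explicitly so the statement errors out instead of silently falling back to a table copy:
-- Request in-place, non-locking DDL explicitly; concurrent DML stays allowed.
ALTER TABLE sbtest1 ADD INDEX c_1 (c), ALGORITHM=INPLACE, LOCK=NONE;
-- Dropping a secondary index only modifies metadata, hence the sub-second DROP.
ALTER TABLE sbtest1 DROP INDEX c_1, ALGORITHM=INPLACE, LOCK=NONE;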

Related

"Lost" 30% of records after partitioning

I've got a MyISAM table of 90 million records and over 18 GB of data, and tests suggest it's a candidate for partitioning.
Original schema:
CREATE TABLE `email_tracker` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`hash` varchar(65) COLLATE utf8_unicode_ci NOT NULL,
`userId` int(11) NOT NULL,
`dateSent` datetime NOT NULL,
`dateViewed` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `userId` (`userId`),
KEY `dateSent` (`dateSent`),
KEY `dateViewed` (`dateViewed`),
KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
1 row in set (0.01 sec)
I've previously partitioned the table on a test server with "ALTER TABLE email_tracker PARTITION BY HASH..." and run typical queries against it, and there were no problems with the queries. To avoid locking the table on the production DB, I'm testing again on the test server using this approach as we can afford to lose some tracking data while this runs:
RENAME TABLE email_tracker TO email_tracker_orig;
CREATE TABLE email_tracker LIKE email_tracker_orig;
CREATE TABLE email_tracker_part LIKE email_tracker_orig;
ALTER TABLE email_tracker_part DROP PRIMARY KEY, ADD PRIMARY KEY (id, userId);
ALTER TABLE email_tracker_part PARTITION BY HASH (id + userId) partitions 30;
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig);
The _orig table has 90,795,103 records. After the query, the _part table only has 68,282,298. And I have no idea why that might be. Any ideas?
mysql> select count(*) from email_tracker_orig;
+----------+
| count(*) |
+----------+
| 90795103 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from email_tracker_part;
+----------+
| count(*) |
+----------+
| 68274818 |
+----------+
1 row in set (0.00 sec)
(On subsequent tests, the _part table contains slightly different numbers of records, which is weirder still.)
Edit #1: Just realised that half of the partitions are empty due to auto_increment_increment = 2 for replication, so I'm going to repartition BY KEY (userId) and see how that works out.
Edit #2: Still the same after re-partitioning, so I'm trying to identify the missing rows to establish a pattern.
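One way to look for that pattern, assuming id is still the common key between the two copies, is a simple anti-join; a rough sketch, not run against this data:
-- Sketch: sample rows that exist in the original but not in the partitioned copy.
SELECT o.id, o.userId, o.dateSent
FROM email_tracker_orig AS o
LEFT JOIN email_tracker_part AS p ON p.id = o.id
WHERE p.id IS NULL
LIMIT 100;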
I am not sure of your requirements, but the MySQL documentation states that "the use of hashing expressions involving multiple columns is not particularly recommended." I would recommend that you just partition by id. Partitioning by id + userId doesn't give any obviously better distribution of your rows across the partitions.
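On the working copy above, that suggestion would amount to something like the following sketch (keeping the 30 partitions from the original attempt; id is already part of the primary key, so no key change is needed):
-- Sketch of the suggestion above: hash on the AUTO_INCREMENT column alone.
ALTER TABLE email_tracker_part PARTITION BY HASH (id) PARTITIONS 30;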
Looks like the INSERT query simply terminated prematurely, at exactly 40 minutes in this case. Re-running it for just the missing records is doing the trick:
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig WHERE id > 148893974);
There's nothing in the my.cnf that suggests a timeout of 40 minutes, and I've run longer queries than this on this test server, but I have my solution, so I'll close this even though the underlying reason remains unclear to me.
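If something really is cutting the statement off after a fixed time, one workaround is to copy in bounded id ranges so that no single INSERT runs for 40 minutes; a rough sketch, with arbitrary chunk boundaries:
-- Rough sketch: copy the table in id ranges instead of one long-running INSERT.
INSERT INTO email_tracker_part SELECT * FROM email_tracker_orig WHERE id < 50000000;
INSERT INTO email_tracker_part SELECT * FROM email_tracker_orig WHERE id >= 50000000 AND id < 100000000;
INSERT INTO email_tracker_part SELECT * FROM email_tracker_orig WHERE id >= 100000000;
-- Adjust or extend the ranges to cover MAX(id) in email_tracker_orig.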

The insert APPEARS to work, but doesn't, in MySQL

This is a very strange issue...
select count(*) from imprint_users;
count 461
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
Query OK, 1 row affected (0.00 sec)
select count(*) from imprint_users;
count 461
1) Cannot see anything in the mysql-error log.
2) Checked the status of the table just in case:
+--------------------------+-------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+--------------------------+-------+----------+----------+
| coresource.imprint_users | check | status | OK |
+--------------------------+-------+----------+----------+
3) Here is the CREATE statement:
CREATE TABLE "imprint_users" (
"imprint_sid" bigint(20) NOT NULL,
"users_sid" bigint(20) NOT NULL,
PRIMARY KEY ("imprint_sid","users_sid"),
KEY "FK__IMPRINT_U__IMPRI__47E69B3D" ("imprint_sid"),
KEY "FK__IMPRINT_U__USERS__48DABF76" ("users_sid"),
CONSTRAINT "fk__imprint_u__impri__47e69b3d" FOREIGN KEY ("imprint_sid") REFERENCES "imprint" ("imprint_sid"),
CONSTRAINT "fk__imprint_u__users__48dabf76" FOREIGN KEY ("users_sid") REFERENCES "users" ("users_sid")
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
4) This is in a master-master (M-M) setup and we are writing to M1.
Server version: 5.6.24-72.2-log Percona Server (GPL), Release 72.2,
Any help or direction would be appreciated.
Try a test.
First flush the table, and if autocommit is set to zero, set it to 1:
flush tables;
set autocommit = 1;
select count(*) from imprint_users;
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
select count(*) from imprint_users;
If the problem still remains, check whether any TRIGGER or EVENT is running in the background. So disable all triggers and disable the event scheduler (all events):
SET @disable_triggers = 1;  -- only has an effect if your triggers check this user variable
SET GLOBAL event_scheduler = OFF;
Run the query and test.
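Since the count never changes, it is also worth ruling out an uncommitted transaction in the inserting session; a quick sketch along those lines:
-- Sketch: confirm the inserting session actually commits its work.
SELECT @@autocommit;   -- 0 means nothing is visible to other sessions until COMMIT
START TRANSACTION;
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387, 165);
-- (this will raise a duplicate-key error if the earlier insert really did persist)
COMMIT;                -- make the row visible outside this session
SELECT COUNT(*) FROM coresource.imprint_users;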

Where oh where is my FULLTEXT index?

Okay, I'm fully prepared to be told this is something dumb. I've got a table like so:
mysql> show create table node\G
*************************** 1. row ***************************
Table: node
Create Table: CREATE TABLE `node` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`graph` varchar(100) CHARACTER SET latin1 DEFAULT NULL,
`subject` varchar(200) NOT NULL,
`predicate` varchar(200) NOT NULL,
`object` mediumtext NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `nodeindex` (`graph`(20),`subject`(100),`predicate`(100),`object`(100)),
KEY `ix_node_subject` (`subject`),
KEY `ix_node_graph` (`graph`),
KEY `ix_node_object` (`object`(255)),
KEY `ix_node_predicate` (`predicate`),
KEY `node_po` (`predicate`,`object`(130)),
KEY `node_so` (`subject`,`object`(130)),
KEY `node_sp` (`subject`,`predicate`(130)),
FULLTEXT KEY `node_search` (`object`)
) ENGINE=MyISAM AUTO_INCREMENT=574161093 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
Note the line FULLTEXT KEY `node_search` (`object`). When I try the query
mysql> select count(*) from node where match(object) against ('Entrepreneur');
I get the error
ERROR 1191 (HY000): Can't find FULLTEXT index matching the column list
Duh?
Update
I tried an ANALYZE TABLE, to no avail:
mysql> analyze table node;
+------------------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+------------------+---------+----------+----------+
| xxxxxxxxxxx.node | analyze | status | OK |
+------------------+---------+----------+----------+
1 row in set (21 min 13.86 sec)

getting locks on table due to select queries?

We have been using the SELECT queries below for a long time.
But today we are seeing many locks on the database.
Please help me work out how to resolve the locks caused by these SELECT queries.
The table is very small, about 300 KB.
We optimized the table, but no luck.
The query info and table structure are below.
Req-SQL:[select max(fullname) from prod_sets where name='view_v01' for update]
Req-Time: 5 sec
Blocker-SQL:[]
Blocker-Command:[Sleep]
Blocker-Time: 73 sec
Req-SQL:[select max(fullname) from prod_sets where name='view_v01' for update]
Req-Time: 22 sec
Blocker-SQL:[]
Blocker-Command:[Sleep]
Blocker-Time: 73 sec
CREATE TABLE `prod_sets` (
`modified` datetime DEFAULT NULL,
`create` datetime DEFAULT NULL,
`name` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`fullname` decimal(12,0) DEFAULT NULL,
UNIQUE KEY `idx_n` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
Explain Plan:
mysql> explain select max(fullname) from prod_sets where name='view_v01' for update;
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
| 1 | SIMPLE | prod_sets | const | idx_name | idx_name | 53 | const | 1 | |
+----+-------------+---------------+-------+---------------+----------+---------+-------+------+-------+
1 row in set (0.01 sec)
If you are locking some rows of a table, then you must explicitly release the locks after your work is done.
Use:
UNLOCK TABLES;
or use:
KILL put_process_id_here;
Refer to these links for further reading:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
http://lmorgado.com/blog/2008/09/10/mysql-locks-and-a-bit-of-the-query-cache/
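Since the blocker here shows [Sleep] for 73 seconds, it usually helps to find the transaction actually holding the lock before killing anything; a sketch using the InnoDB tables available in MySQL 5.5-5.7 (they moved in 8.0):
-- Sketch: map the waiting query to the connection holding the lock.
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_started         AS blocking_trx_started
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;
-- Then COMMIT/ROLLBACK in the blocking session, or KILL the blocking_thread.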
Assuming you know what FOR UPDATE means: is there any reason name is DEFAULT NULL? If not, I would make name the PRIMARY KEY. InnoDB's primary key is clustered, so it makes access to fullname faster:
CREATE TABLE `prod_sets` (
`modified` datetime DEFAULT NULL,
`create` datetime DEFAULT NULL,
`name` varchar(50) COLLATE latin1_bin NOT NULL,
`fullname` decimal(12,0) DEFAULT NULL,
PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
Or simply add the following index:
ALTER TABLE prod_sets ADD INDEX(name, fullname);

MySQL Refusing to Use Index for Simple Query

I have a table that I'm running a very simple query against. I've added an index to the table on a high cardinality column, so MySQL should be able to narrow the result almost instantly, but it's doing a full table scan every time. Why isn't MySQL using my index?
mysql> select count(*) FROM eventHistory;
+----------+
| count(*) |
+----------+
| 247514 |
+----------+
1 row in set (0.15 sec)
CREATE TABLE `eventHistory` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`whatID` varchar(255) DEFAULT NULL,
`whatType` varchar(255) DEFAULT NULL,
`whoID` varchar(255) DEFAULT NULL,
`createTimestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `whoID` (`whoID`,`whatID`)
) ENGINE=InnoDB;
mysql> explain SELECT * FROM eventHistory where whoID = 12551\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: eventHistory
type: ALL
possible_keys: whoID
key: NULL
key_len: NULL
ref: NULL
rows: 254481
Extra: Using where
1 row in set (0.00 sec)
I have tried adding FORCE INDEX to the query as well, and it still seems to be doing a full table scan. The performance of the query is also poor. It's currently taking about 0.65 seconds to find the appropriate row.
The above answers led me to realize two things.
1) When using a VARCHAR index, the query value needs to be quoted or MySQL will refuse to use the index (implicit casting behind the scenes?); see the EXPLAIN sketch after this list.
SELECT * FROM foo WHERE column = '123'; # do this
SELECT * FROM foo where column = 123; # don't do this
2) You're better off using/indexing an INT if at all possible.
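For instance, re-running the earlier EXPLAIN with the value quoted should switch the plan from a full scan to a ref lookup on the (whoID, whatID) index; a sketch, with the expected (not captured) output noted as a comment:
-- Sketch: with the string quoted, the optimizer can compare string-to-string
-- and use the whoID index.
EXPLAIN SELECT * FROM eventHistory WHERE whoID = '12551'\G
-- Expected: type: ref, key: whoID  (instead of type: ALL, key: NULL as shown above)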