Single INSERT to MySQL table locks whole database

I have a MyISAM table with a few records (about 20):
CREATE TABLE `_cm_dtstd_37` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`NUMBER` int(10) NOT NULL COMMENT 'str',
`DESCRIPTION` char(32) NOT NULL COMMENT 'str',
PRIMARY KEY (`id`),
UNIQUE KEY `PHONE` (`NUMBER`)
) ENGINE=MyISAM AUTO_INCREMENT=15 DEFAULT CHARSET=utf8 COMMENT='==CORless Numbers=='
Single insert:
INSERT IGNORE INTO _cm_dtstd_37 VALUES(NULL, 55555, '55555')
takes a very long time to execute (about 5 to 7 minutes) and makes the MySQL server put every subsequent query into a 'waiting' state. No other query (even those that read/write other tables) is executed until the first INSERT is done.
I have no idea how to debug this and where to search for any clue.
All inserts into other tables work fine; the whole database works great as long as nothing is inserting into this one problematic table.

That is one big reason for moving from MyISAM to InnoDB.
MyISAM allows multiple simultaneous reads (SELECT), but any type of write locks the entire table, even against reads.
InnoDB uses "row locking", so most simultaneous accesses to a table have no noticeable impact on each other.
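If the table needs concurrent access, the conversion itself is a single statement (a sketch using the table from the question; take a backup first, since the copy can take a while on large tables):
ALTER TABLE _cm_dtstd_37 ENGINE=InnoDB;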

Related

MySQL performance - Selecting and deleting from a large table

I have a large table called "queue". It has 12 million records right now.
CREATE TABLE `queue` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userid` varchar(64) DEFAULT NULL,
`action` varchar(32) DEFAULT NULL,
`target` varchar(64) DEFAULT NULL,
`name` varchar(64) DEFAULT NULL,
`state` int(11) DEFAULT '0',
`timestamp` int(11) DEFAULT '0',
`errors` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unique` (`userid`,`action`,`target`),
KEY `idx_userid` (`userid`),
KEY `idx_state` (`state`)
) ENGINE=InnoDB;
Multiple PHP workers (150) use this table simultaneously.
They select a record, perform a network request using the selected data and then delete the record.
I get mixed execution times from the select and delete queries. Is the delete command locking the table?
What would be the best approach for this scenario?
Option 1: SELECT record + NETWORK request + DELETE the record.
Option 2: SELECT record + NETWORK request + MARK record as completed + DELETE completed records using a cron from time to time (I don't want an even bigger table).
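To make option 2 concrete, I imagine something like this (a sketch; the state values here are invented):
-- worker marks the row instead of deleting it:
UPDATE queue SET state = 2 WHERE id = 123;
-- cron purges completed rows in batches later:
DELETE FROM queue WHERE state = 2 LIMIT 1000;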
Note: The queue gets new records every minute but the INSERT query is not the issue here.
Any help is appreciated.
"Don't queue it, just do it". That is, if the tasks are rather fast, it is better to simply perform the action and not queue it. Databases don't make good queuing mechanisms.
DELETE does not lock an InnoDB table. However, you can write a DELETE that behaves that badly. Let's see your actual SQL so we can work on improving it.
12M records? That's a huge backlog; what's up?
Shrink the datatypes so that the table is not gigabytes:
Does action take only a small set of possible values? If so, normalize it down to a 1-byte ENUM or TINYINT UNSIGNED.
Ditto for state -- surely it does not need a 4-byte code?
There is no need for INDEX(userid) since there is already an index (UNIQUE) starting with userid.
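Those changes as one statement (a sketch; the ENUM values are invented, substitute your real set of actions):
ALTER TABLE queue
MODIFY action ENUM('create','update','delete') DEFAULT NULL,
MODIFY state TINYINT UNSIGNED DEFAULT '0',
DROP INDEX idx_userid;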
If state has only a few values, the index won't be used. Let's see your enqueue and dequeue queries so we can discuss how to either get rid of that index or make it 'composite' (and useful).
What's the current value of MAX(id)? Is it threatening to exceed your current limit of about 4 billion for INT UNSIGNED?
How does PHP use the queue? Does it hang onto an item via an InnoDB transaction? That defeats any parallelism! Or does it change state? Show us the code; perhaps the lock & unlock can be made less invasive. It should be possible to run a single autocommitted UPDATE to grab a row and its id, then, later, do an autocommitted DELETE with very little impact.
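A sketch of that grab-then-delete pattern (the state values, 0 = pending and 1 = in progress, are my assumption):
-- claim one pending row and stash its id, all in one autocommitted statement:
UPDATE queue SET state = 1, id = LAST_INSERT_ID(id) WHERE state = 0 LIMIT 1;
SELECT ROW_COUNT();       -- 0 means nothing was pending
SELECT LAST_INSERT_ID();  -- the id that was just claimed
-- ... perform the network request, then:
DELETE FROM queue WHERE id = <claimed id>;  -- autocommitted, brief row lock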
I do not see a good index for grabbing a pending item. Again, let's see the code.
150 seems like a lot -- have you experimented with fewer? They may be stumbling over each other.
Is the Slowlog turned on (with a low value for long_query_time)? If so, I wonder what is the 'worst' query. In situations like this, the answer may be surprising.
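If it is not, a minimal my.cnf sketch (the file path and threshold are assumptions):
slow_query_log      = 1
long_query_time     = 0.2
slow_query_log_file = /var/log/mysql/slow.log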

MySQL I/O writes too much

I have a table that uses the MyISAM engine on my server. There are 10 UPDATE statements per second on average. I found that the mysql process's disk writes are much higher than the theoretical value. After experimenting, I suspect that modifying any column rewrites the entire row. The following is an experiment.
My table:
CREATE TABLE `test_update` (
`id` int(11) NOT NULL DEFAULT '0',
`str1` blob,
`str2` blob,
`str3` blob,
`update_time` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `update_time` (`update_time`)
) ENGINE=MyISAM;
I inserted 100,000 rows, each holding about 30 KB of string data (10 KB per blob). After that I randomly update the update_time column at 1 row/sec:
import random
import time

# assumes an open DB-API connection `conn` with cursor `cur` (e.g. MySQLdb/pymysql)
end = time.time()
while True:
    now = int(time.time())
    randomid = random.randint(1, 100000)  # table holds ids 1..100000
    sql = "update test_update set update_time=%d where id=%d" % (now, randomid)
    cur.execute(sql)
    conn.commit()
    slp_t = 1 - (time.time() - end)  # aim for one update per second
    if slp_t > 0:
        time.sleep(slp_t)
    end = time.time()
and iotop shows:
https://i.stack.imgur.com/sJa8y.png
It seems that modifying a single int column rewrites the entire row (or even more). Is that true? If so, why was it designed like this, and what should I do to avoid this waste?

INSERT IGNORE MySQL slow

I am using this query:
insert ignore into CategoryLinks (article_id,category_id) values ('$art_id','$id')
$art_id and $id are two integers.
CategoryLinks has one unique index (both columns).
Unfortunately the query is sometimes very slow and sometimes fast, and I don't know why!
The table has around 100,000 records. The query takes anywhere between 1*10^-5 seconds and over two seconds.
And it's strange that phpMyAdmin shows: Index usage: 0B.
show create table CategoryLinks
CREATE TABLE `CategoryLinks` (
`article_id` int(10) NOT NULL,
`category_id` int(7) NOT NULL,
UNIQUE KEY `Unique` (`article_id`,`category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Btw: Is it possible to check whether the index is used?
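One way to check: EXPLAIN reports, in its key column, which index MySQL plans to use for a statement (a sketch with invented values):
EXPLAIN SELECT * FROM CategoryLinks
WHERE article_id = 123 AND category_id = 45;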
MyISAM is faster!

Slow MySQL InnoDB Inserts and Updates

I am using Magento and having a lot of slowness on the site. There is very, very light load on the server. I have verified that CPU, disk I/O, and memory are light: less than 30% of available at all times. APC caching is enabled. I am using New Relic to monitor the server, and the issue is very clearly inserts/updates.
I have isolated the slowness to all INSERT and UPDATE statements. SELECT is fast. Very simple inserts/updates into tables take 2-3 seconds whether run from my application or the mysql command line.
Example:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id='8');
This table has 9 rows, a primary key, and 1 index on it.
The slowness occurs with all inserts/updates. I have run mysqltuner and everything looks good. I have also changed innodb_flush_log_at_trx_commit to 2.
The activity on this server is very light; it's a dv box with 1 GB RAM. I have Magento installs that run 100x better with 5x the load on a similar setup.
I started logging all queries over 2 seconds and it seems to be all inserts and full text searches.
Anyone have suggestions?
Here is table structure:
CREATE TABLE IF NOT EXISTS `index_process` (
`process_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`indexer_code` varchar(32) NOT NULL,
`status` enum('pending','working','require_reindex') NOT NULL DEFAULT 'pending',
`started_at` datetime DEFAULT NULL,
`ended_at` datetime DEFAULT NULL,
`mode` enum('real_time','manual') NOT NULL DEFAULT 'real_time',
PRIMARY KEY (`process_id`),
UNIQUE KEY `IDX_CODE` (`indexer_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
First: in (process_id='8'), '8' is a char/varchar literal, not an int, so MySQL has to convert the value first.
On my system, updates to users.last_active_time took a long time (greater than one second).
The reason was that a few long-running queries joined against the users table, which blocked the table for reading: effectively a lockup caused by SELECT.
I rewrote the query from a JOIN to sub-queries and the problem was gone.
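As a purely hypothetical illustration of that kind of rewrite (these table and column names are invented; the original queries were never shown):
-- before: a long-running report that joins against users for its whole run:
SELECT u.name, COUNT(*)
FROM users u
JOIN orders o ON o.user_id = u.id
GROUP BY u.name;
-- after: aggregate first, touch users only through a short sub-query per group:
SELECT (SELECT name FROM users WHERE id = t.user_id) AS name, t.cnt
FROM (SELECT user_id, COUNT(*) AS cnt FROM orders GROUP BY user_id) AS t;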

InnoDB inserts very slow and slowing down

I have recently switched my project tables to InnoDB (thinking the relations would be a nice thing to have). I'm using a PHP script to index about 500 products at a time.
A table storing word/id associations:
CREATE TABLE `windex` (
`word` varchar(64) NOT NULL,
`wid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`count` int(11) unsigned NOT NULL DEFAULT '1',
PRIMARY KEY (`wid`),
UNIQUE KEY `word` (`word`)
) ENGINE=InnoDB AUTO_INCREMENT=324551 DEFAULT CHARSET=latin1
Another table stores product id/word id associations:
CREATE TABLE `indx_0` (
`wid` int(7) unsigned NOT NULL,
`pid` int(7) unsigned NOT NULL,
UNIQUE KEY `wid` (`wid`,`pid`),
KEY `pid` (`pid`),
CONSTRAINT `indx_0_ibfk_1` FOREIGN KEY (`wid`) REFERENCES `windex` (`wid`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `indx_0_ibfk_2` FOREIGN KEY (`pid`) REFERENCES `product` (`ID`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1
The script was tested using MyISAM and indexes products relatively fast (much, much faster than InnoDB). The first time it ran on InnoDB it was ridiculously slow, but after batching more values together I ended up speeding it up a lot (though not enough).
I would have assumed InnoDB would be much faster for this type of thing because of row-level locking, but that's not the case.
I construct a query that looks something like:
SELECT
title,keywords,upc,...
FROM product
WHERE indexed = 0
LIMIT 500
I create a loop and fill an array with all the words that need to be added to windex and all the word id/product id pairs that need to be added to indx_0.
Because InnoDB keeps increasing my auto-increment value whenever a REPLACE INTO or INSERT IGNORE INTO fails because of duplicate values, I need to make sure the values I add don't already exist. To do that, I first select all values that already exist using a query like this:
SELECT wid,word
FROM windex
WHERE
word = "someword1" or word = "someword2" or word = "someword3" ... ...
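As an aside, the same existence check can be written with IN(), which reads better and is handled at least as well by the optimizer. A sketch:
SELECT wid, word
FROM windex
WHERE word IN ('someword1', 'someword2', 'someword3');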
Then I filter out my array against the results which exist so all the new words I add are 100% new.
This takes about 20% of overall execution time. The other 80% goes into adding the pair values into indx_0, for which there are many more values.
Here's an example of what I get.
0.4806 seconds to select products. (0.4807 sec total).
0.0319 seconds to gather 500 items. (0.5126 sec total).
5.2396 seconds to select windex values for comparison. (5.7836 sec total).
1.8986 seconds to update count. (7.6822 sec total).
0.0641 seconds to add 832 windex records. (7.7464 sec total).
17.2725 seconds to add index of 3435 pid/wid pairs. (25.7752 sec total).
Operation took 26.07 seconds to index 500 products.
The 3435 pairs are all executed in a single query such as:
INSERT INTO indx_0(pid,wid)
VALUES (1,4),(3,9),(9,2)... ... ...
Why is InnoDB so much slower than MyISAM in my case?
InnoDB maintains a more complex key structure than MyISAM (FOREIGN KEYs), and regenerating keys is really slow in InnoDB. You should enclose all update/insert statements in a single transaction (those are actually quite fast in InnoDB). I once had about 300,000 insert queries on an InnoDB table with 2 indexes, and it took around 30 minutes; once I enclosed every 10,000 inserts in START TRANSACTION and COMMIT, it took less than 2 minutes.
I recommend using:
START TRANSACTION;
SELECT ... FROM products;
UPDATE ...;
INSERT INTO ...;
INSERT INTO ...;
INSERT INTO ...;
COMMIT;
This will cause InnoDB to refresh its indexes just once, not a few hundred times.
Let me know if it worked
I had a similar problem: by default InnoDB runs with innodb_flush_log_at_trx_commit = 1, which flushes the log to disk on every insert/update commit. The write speed of your hard disk is the bottleneck for this process.
So try modifying your MySQL config file:
innodb_flush_log_at_trx_commit = 0
and restart the MySQL service. I experienced about a 100x speedup on inserts. Be aware of the trade-off: with 0 the log is flushed to disk only about once per second, so a mysqld crash can lose up to a second of committed transactions; 2 is a common middle ground.
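The variable is also dynamic, so for a quick test it can be changed without a restart (a sketch; it needs a privileged account, and it reverts at the next restart unless also written to the config file):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;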