I am using Magento and seeing a lot of slowness on the site. There is very, very light load on the server: I have verified that CPU, disk I/O, and memory usage are all light (less than 30% of available at all times). APC caching is enabled. I am using New Relic to monitor the server, and the issue is very clearly inserts/updates.
I have isolated the slowness to all INSERT and UPDATE statements; SELECTs are fast. Very simple inserts/updates into tables take 2-3 seconds, whether run from my application or from the mysql command line.
Example:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id='8');
This table has 9 rows, a primary key, and 1 index on it.
The slowness occurs with all inserts/updates. I have run mysqltuner and everything looks good. I also changed innodb_flush_log_at_trx_commit to 2.
The activity on this server is very light: it's a dv box with 1 GB RAM. I have Magento installs that run 100x better with 5x the load on a similar setup.
I started logging all queries taking over 2 seconds, and it seems to be all inserts and full-text searches.
Anyone have suggestions?
Here is table structure:
CREATE TABLE IF NOT EXISTS `index_process` (
`process_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`indexer_code` varchar(32) NOT NULL,
`status` enum('pending','working','require_reindex') NOT NULL DEFAULT 'pending',
`started_at` datetime DEFAULT NULL,
`ended_at` datetime DEFAULT NULL,
`mode` enum('real_time','manual') NOT NULL DEFAULT 'real_time',
PRIMARY KEY (`process_id`),
UNIQUE KEY `IDX_CODE` (`indexer_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
First: in (process_id='8'), '8' is a char/varchar literal, not an int, so MySQL has to convert the value first.
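For comparison, a sketch of the same statement using an integer literal, which avoids that conversion:

UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id = 8);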
On my system, I had long times (greater than one second) updating users.last_active_time.
The reason was that I had a few queries that took that long to run, and they JOINed against the users table. This blocked the table for reads: effectively a deadlock caused by SELECTs.
I rewrote the query from a JOIN to sub-queries and the problem was gone.
I have a MyISAM table with a few records (about 20):
CREATE TABLE `_cm_dtstd_37` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`NUMBER` int(10) NOT NULL COMMENT 'str',
`DESCRIPTION` char(32) NOT NULL COMMENT 'str',
PRIMARY KEY (`id`),
UNIQUE KEY `PHONE` (`NUMBER`)
) ENGINE=MyISAM AUTO_INCREMENT=15 DEFAULT CHARSET=utf8 COMMENT='==CORless Numbers=='
Single insert:
INSERT IGNORE INTO _cm_dtstd_37 VALUES(NULL, 55555, '55555')
takes a very long time to execute (about 5 to 7 minutes) and makes the MySQL server put every subsequent query into the 'waiting' state. No other query (even ones that read/write other tables) is executed until the first INSERT is done.
I have no idea how to debug this and where to search for any clue.
All inserts into other tables work well; the whole database works great when nothing is inserting into this one table.
That is one big reason for moving from MyISAM to InnoDB.
MyISAM allows multiple simultaneous reads (SELECT), but any type of write locks the entire table, blocking both reads and other writes.
InnoDB uses "row locking", so most simultaneous accesses to a table have no noticeable impact on each other.
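If converting is an option, the switch itself is a one-liner; this is a sketch to try on a copy of the table first, since it rebuilds the whole table:

ALTER TABLE `_cm_dtstd_37` ENGINE=InnoDB;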
I have a large table called "queue". It has 12 million records right now.
CREATE TABLE `queue` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userid` varchar(64) DEFAULT NULL,
`action` varchar(32) DEFAULT NULL,
`target` varchar(64) DEFAULT NULL,
`name` varchar(64) DEFAULT NULL,
`state` int(11) DEFAULT '0',
`timestamp` int(11) DEFAULT '0',
`errors` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unique` (`userid`,`action`,`target`),
KEY `idx_userid` (`userid`),
KEY `idx_state` (`state`)
) ENGINE=InnoDB;
Multiple PHP workers (150) use this table simultaneously.
They select a record, perform a network request using the selected data and then delete the record.
I get mixed execution times from the SELECT and DELETE queries. Is the DELETE command locking the table?
What would be the best approach for this scenario?
SELECT record + NETWORK request + DELETE the record
SELECT record + NETWORK request + MARK record as completed + DELETE completed records using a cron from time to time (I don't want an even bigger table).
Note: The queue gets new records every minute but the INSERT query is not the issue here.
Any help is appreciated.
"Don't queue it, just do it". That is, if the tasks are rather fast, it is better to simply perform the action and not queue it. Databases don't make good queuing mechanisms.
DELETE does not lock an entire InnoDB table. However, you can write a DELETE that behaves that badly. Let's see your actual SQL so we can work on improving it.
12M records? That's a huge backlog; what's up?
Shrink the datatypes so that the table is not gigabytes:
Does action have only a small set of possible values? Normalize it down to a 1-byte ENUM or TINYINT UNSIGNED.
Ditto for state -- surely it does not need a 4-byte INT?
There is no need for INDEX(userid) since there is already an index (UNIQUE) starting with userid.
If state has only a few values, the index won't be used. Let's see your enqueue and dequeue queries so we can discuss how to either get rid of that index or make it 'composite' (and useful).
What's the current value of MAX(id)? Is it threatening to exceed your current limit of about 4 billion for INT UNSIGNED?
How does PHP use the queue? Does it hang onto an item via an InnoDB transaction? That defeats any parallelism! Or does it change state? Show us the code; perhaps the lock & unlock can be made less invasive. It should be possible to run a single autocommitted UPDATE to grab a row and its id, then later do an autocommitted DELETE with very little impact.
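A sketch of that grab-then-delete pattern, with autocommit on; the state values here (0 = pending, 1 = claimed) are assumptions, and the documented LAST_INSERT_ID(expr) trick carries the claimed id into the same session's later statements:

UPDATE queue SET state = 1, id = LAST_INSERT_ID(id) WHERE state = 0 LIMIT 1;
SELECT LAST_INSERT_ID();  -- the id of the row just claimed
-- ... perform the network request in the worker ...
DELETE FROM queue WHERE id = LAST_INSERT_ID();  -- same connection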
I do not see a good index for grabbing a pending item. Again, let's see the code.
150 seems like a lot -- have you experimented with fewer? They may be stumbling over each other.
Is the Slowlog turned on (with a low value for long_query_time)? If so, I wonder what is the 'worst' query. In situations like this, the answer may be surprising.
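For reference, the slow log can be switched on at runtime in 5.1+; the half-second threshold below is just an assumption to tune:

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.5;  -- seconds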
I have a database table that is giving me those headaches, errors when inserting lots of data. Let me break down what exactly happens and I'm hoping someone will have some insight into how I can get this figured out.
Basically I have a table that has 11+ million records in it and it's growing everyday. We track how times a user is viewing a video and their progress in that video. You can see below what the structure is like. Our setup is a master db with two slaves attached to it. Nightly we run a cron script to compile some statistical data out of this table and compile them into a couple other tables we use just for reporting. These cron scripts only do SELECT statements on the slave and will do the insert into our statistical tables on the master (so it'll propagate down). Like clockwork every time we run this script it will lock up our production table. I thought moving the SELECT to a slave would fix this issue and since we aren't even writing into the main table with the cron but rather other tables, I'm now perplexed what could possibly cause this locking up.
It's almost as if it seems that every time a large read on the main table (master or slave) it locks up the master. As soon as the cron is complete, the table goes back to normal performance.
My question spans several levels of InnoDB. I've suspected that indexing might be causing this issue, but maybe it's other InnoDB settings that I'm not fully understanding. As you would imagine, I want to keep the master from getting locked up like this. I don't really care if the slave is pegged during the script run, as long as it won't affect my master DB. Is this something that can happen with slave/master relationships in MySQL?
The tables that get the compiled information are stats_daily and stats_grouped, for reference.
The biggest issue I have here, to restate a little, is that I don't understand what can cause locking like this. Taking the reads off the master and only inserting into other tables shouldn't do anything to the original master table. I can watch the errors start streaming in 3 minutes after the script starts, however, and they stop immediately when the script ends.
The table I'm working with is below.
CREATE TABLE IF NOT EXISTS `stats` (
`ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
`VID` int(10) unsigned NOT NULL DEFAULT '0',
`UID` int(10) NOT NULL DEFAULT '0',
`Position` smallint(10) unsigned NOT NULL DEFAULT '0',
`Progress` decimal(3,2) NOT NULL DEFAULT '0.00',
`ViewCount` int(10) unsigned NOT NULL DEFAULT '0',
`DateFirstView` int(10) unsigned NOT NULL DEFAULT '0', -- unix timestamp
`DateLastView` int(10) unsigned NOT NULL DEFAULT '0', -- unix timestamp
PRIMARY KEY (`ID`),
KEY `VID` (`VID`,`UID`),
KEY `UID` (`UID`),
KEY `DateLastView` (`DateLastView`),
KEY `ViewCount` (`ViewCount`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=15004624 ;
Does anyone have any thoughts or ideas on this?
UPDATE:
The errors I get from the master DB
MysqlError: Lock wait timeout exceeded; try restarting transaction
Uncaught exception 'Exception' with message 'invalid query UPDATE stats SET VID = '13156', UID = '73859', Position = '0', Progress = '0.8', ViewCount = '1', DateFirstView = '1375789950', DateLastView = '1375790530' WHERE ID = 14752456
The update query fails because of the locking; the query itself is actually valid. I'll get hundreds of these, and afterwards I can randomly copy/paste the failed queries and they will work.
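When one of these timeouts fires, it can help to look at which transactions hold and are waiting for locks at that moment; a sketch of the usual diagnostics:

SHOW ENGINE INNODB STATUS;  -- see the TRANSACTIONS section
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';  -- default is 50 seconds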
UPDATE 2
Queries and Explains from Cron Script
Query Ran on the Slave (leaving php variables in curly brackets for reference):
SELECT
VID,
COUNT(ID) as ViewCount,
DATE_FORMAT(FROM_UNIXTIME(DateLastView), '%Y-%m-%d') AS YearMonthDay,
{$today} as DateModified
FROM stats
WHERE DateLastView >= {$start_date} AND DateLastView <= {$end_date}
GROUP BY YearMonthDay, VID
EXPLAIN of the SELECT Stat
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE stats range DateLastView DateLastView 4 NULL 25242 Using where; Using temporary; Using filesort
That result set is looped and inserted into the compiled table. Unfortunately I don't have support for batched inserts with this (I tried) so I have to loop through these one at a time instead of sending a batch of 100 or 500 to the server at a time. This is inserted into the master DB.
foreach ($results as $result)
{
$query = "INSERT INTO stats_daily (VID, ViewCount, YearMonthDay, DateModified) VALUES ({$result->VID}, {$result->ViewCount}, '{$result->YearMonthDay}', {$today})";
DoQuery($query);
}
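For what it's worth, if batching ever becomes possible on this setup, a single multi-row INSERT (the values below are made up) would cut the round trips dramatically:

INSERT INTO stats_daily (VID, ViewCount, YearMonthDay, DateModified) VALUES
(13156, 42, '2013-08-06', 1375790530),
(13157, 17, '2013-08-06', 1375790530),
(13158, 9, '2013-08-06', 1375790530);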
The GROUP BY is the culprit. Apparently MySQL decides to use a temporary table in this case (perhaps because the table has exceeded some limit) which is very inefficient.
I ran into similar problems, but no clear solution. You could consider splitting your stats table into two tables, a 'daily' and a 'history' table. Run your query on the 'daily' table which only contains entries from the latest 24 hours or whatever your interval is, then clean up the table.
To get the info into your permanent 'history' table, either write your stats into both tables from code, or copy them over from daily into history before cleanup.
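A sketch of the copy-then-clean-up step, assuming hypothetical stats_recent and stats_history tables with the same structure as stats and a 24-hour window:

INSERT INTO stats_history SELECT * FROM stats_recent WHERE DateLastView < UNIX_TIMESTAMP() - 86400;
DELETE FROM stats_recent WHERE DateLastView < UNIX_TIMESTAMP() - 86400;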
A simple mysql update query is very slow sometimes. Here is the query:
update produse
set vizite = '135'
where id = '71238'
My simplified table structure is:
CREATE TABLE IF NOT EXISTS `produse`
(
`id` int(9) NOT NULL auto_increment,
`nume` varchar(255) NOT NULL,
`vizite` int(9) NOT NULL default '1',
PRIMARY KEY (`id`),
KEY `vizite` (`vizite`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=945179;
I use MySQL 5.0.77 and the table is MyISAM.
The table is about 752.6 MB and has 642,442 rows at the moment.
The database runs on a dedicated VPS with 3 GB of RAM and 4 processors at 2 GHz each. There are no more than 6-7 queries of that type per second when we have high traffic, but the query is slow not only then.
First, try rebuilding your indexes; it may be that the query is not using them (you can check using the EXPLAIN statement).
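One wrinkle: EXPLAIN on MySQL 5.0 only accepts SELECT, so a common workaround is to EXPLAIN a SELECT with the same WHERE clause as the update:

EXPLAIN SELECT * FROM produse WHERE id = '71238';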
Another possibility is that you have many SELECTs on that table, or long-running SELECTs, which cause long locks. You could try using replication: have your SELECT queries executed only on a slave database and updates only on the master. That way you avoid the table locks caused by updates while you are doing selects, and vice versa.
See this previous question for some background. I'm trying to renumber a corrupted MPTT tree using SQL. The script is working fine logically, it is just much too slow.
I repeatedly need to execute these two queries:
UPDATE `tree`
SET `rght` = `rght` + 2
WHERE `rght` > currentLeft;
UPDATE `tree`
SET `lft` = `lft` + 2
WHERE `lft` > currentLeft;
The table is defined as such:
CREATE TABLE `tree` (
`id` char(36) NOT NULL DEFAULT '',
`parent_id` char(36) DEFAULT NULL,
`lft` int(11) unsigned DEFAULT NULL,
`rght` int(11) unsigned DEFAULT NULL,
... (a couple of more columns) ...,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `lft` (`lft`),
KEY `rght` (`rght`),
... (a few more indexes) ...
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The database is MySQL 5.1.37. There are currently ~120,000 records in the table. Each of the two UPDATE queries takes roughly 15 - 20 seconds to execute. The WHERE condition may apply to a majority of the records, so that almost all records need to be updated each time. In the worst case both queries are executed as many times as there are records in the database.
Is there a way to optimize this query by keeping the values in memory, delaying writing to disk, delaying index updates or something along these lines? The bottleneck seems to be hard disk throughput right now, as MySQL seems to be writing everything back to disk immediately.
Any suggestion appreciated.
I've never used it, but if you have enough memory, try a MEMORY table.
Create a table with the same structure as tree, INSERT INTO ... SELECT FROM ..., run your script against the memory table, then write it back.
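A sketch of that round trip (tree_mem is a hypothetical name; this assumes none of the remaining columns are BLOB/TEXT, which MEMORY does not support, and note a MEMORY table's contents vanish on server restart, so keep the original table until the write-back is verified):

CREATE TABLE tree_mem ENGINE=MEMORY SELECT * FROM tree;
-- ... run the renumbering UPDATEs against tree_mem ...
DELETE FROM tree;
INSERT INTO tree SELECT * FROM tree_mem;
DROP TABLE tree_mem;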
Expanding on some ideas from comment as requested:
The default is to flush to disk after every commit. You can wrap multiple updates in a single transaction, or change this parameter:
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
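A sketch of batching the updates into one transaction (the literal 10 stands in for currentLeft):

SET autocommit = 0;
START TRANSACTION;
UPDATE `tree` SET `rght` = `rght` + 2 WHERE `rght` > 10;
UPDATE `tree` SET `lft` = `lft` + 2 WHERE `lft` > 10;
-- ... repeat for many nodes before committing once ...
COMMIT;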
The isolation level is simple to change; just make sure the level fits your design. It probably won't help here, because a range update is being used, but it's good to know about when looking for more concurrency:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html
Ultimately, given the range update in the query, your best bet is the MEMORY table that andrem pointed out. You'll probably also be able to find some performance by using BTREE indexes instead of the MEMORY default of HASH:
http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/
You're updating indexed columns, and indexes negatively impact (read: slow down) INSERTs and UPDATEs.
If this is a one time need to get things correct:
Drop/delete the indexes on the columns being updated (lft, rght)
Run the update statements
Re-create the indexes (this can take time, possibly comparable to what you already experience in total)
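A sketch of that sequence:

ALTER TABLE `tree` DROP INDEX `lft`, DROP INDEX `rght`;
-- ... run all the renumbering UPDATE statements ...
ALTER TABLE `tree` ADD INDEX `lft` (`lft`), ADD INDEX `rght` (`rght`);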