Strange InnoDB performance - MySQL

I'm doing some tests with a really simple InnoDB table (named Test) with the following structure:
Id int(10) unsigned NOT NULL AUTO_INCREMENT
UserId int(10) NOT NULL
Body varchar(512) COLLATE utf8_unicode_ci NOT NULL
CreatedAt datetime NOT NULL
one additional index on UserId:
KEY Idx_Test_UserId (UserId) USING BTREE
When I try to execute this query...
INSERT INTO Comments (UserId,Body,CreatedAt) VALUES (1,'This is a test',NOW());
...sometimes the operation completes in a few milliseconds, but other times it takes around a second.
I'm the only person running tests on this specific table, and I really don't understand why the execution times differ so much.
One last note: when I run the same tests against a MyISAM table, I don't have any issues.

InnoDB works by default in AUTOCOMMIT mode, which means that every insert requires two separate write-to-disk operations. If you have only one disk drive in your machine, you sometimes have to wait a bit for that. Also, AFAIR InnoDB used to have (not sure if it's still the case) some performance problems with writing to disk on Windows, but I think that only appeared at concurrency higher than 1.
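If the per-commit log flush is the suspect, two quick experiments (a sketch only; the second trades up to about a second of durability for speed) are to batch inserts into one explicit transaction, or to relax the flush setting:
-- Group several inserts into one transaction so they share a single log flush
START TRANSACTION;
INSERT INTO Comments (UserId, Body, CreatedAt) VALUES (1, 'This is a test', NOW());
INSERT INTO Comments (UserId, Body, CreatedAt) VALUES (1, 'Another test', NOW());
COMMIT;
-- Or flush the InnoDB log to disk roughly once per second instead of at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;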

Related

MySQL performance - Selecting and deleting from a large table

I have a large table called "queue". It has 12 million records right now.
CREATE TABLE `queue` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userid` varchar(64) DEFAULT NULL,
`action` varchar(32) DEFAULT NULL,
`target` varchar(64) DEFAULT NULL,
`name` varchar(64) DEFAULT NULL,
`state` int(11) DEFAULT '0',
`timestamp` int(11) DEFAULT '0',
`errors` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unique` (`userid`,`action`,`target`),
KEY `idx_userid` (`userid`),
KEY `idx_state` (`state`)
) ENGINE=InnoDB;
Multiple PHP workers (150) use this table simultaneously.
They select a record, perform a network request using the selected data and then delete the record.
I get mixed execution times from the select and delete queries. Is the delete command locking the table?
What would be the best approach for this scenario?
SELECT record + NETWORK request + DELETE the record
SELECT record + NETWORK request + MARK record as completed + DELETE completed records using a cron from time to time (I don't want an even bigger table).
Note: The queue gets new records every minute but the INSERT query is not the issue here.
Any help is appreciated.
"Don't queue it, just do it". That is, if the tasks are rather fast, it is better to simply perform the action and not queue it. Databases don't make good queuing mechanisms.
DELETE does not lock an InnoDB table. However, you can write a DELETE that behaves as if it did. Let's see your actual SQL so we can work on improving it.
12M records? That's a huge backlog; what's up?
Shrink the datatypes so that the table is not gigabytes:
action is only a small set of possible values? Normalize it down to a 1-byte ENUM or TINYINT UNSIGNED.
Ditto for state -- surely it does not need a 4-byte code?
There is no need for INDEX(userid) since there is already an index (UNIQUE) starting with userid.
If state has only a few values, the index won't be used. Let's see your enqueue and dequeue queries so we can discuss how to either get rid of that index or make it 'composite' (and useful).
What's the current value of MAX(id)? Is it threatening to exceed your current limit of about 4 billion for INT UNSIGNED?
How does PHP use the queue? Does it hang onto an item via an InnoDB transaction? That defeats any parallelism! Or does it change state? Show us the code; perhaps the lock & unlock can be made less invasive. It should be possible to run a single autocommitted UPDATE to grab a row and its id, and then, later, do an autocommitted DELETE with very little impact (a sketch of this pattern follows at the end of this answer).
I do not see a good index for grabbing a pending item. Again, let's see the code.
150 seems like a lot -- have you experimented with fewer? They may be stumbling over each other.
Is the Slowlog turned on (with a low value for long_query_time)? If so, I wonder what is the 'worst' query. In situations like this, the answer may be surprising.
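A minimal sketch of that autocommitted grab-then-delete pattern; the claimed_by column and the numeric state values are hypothetical, added only to illustrate the idea:
-- Claim one pending row; autocommit stays on, so no long-lived transaction
UPDATE queue
SET state = 1, claimed_by = 'worker-42'   -- 1 = "in progress" (assumed value)
WHERE state = 0 AND claimed_by IS NULL    -- 0 = "pending" (assumed value)
LIMIT 1;
-- Read back the row this worker just claimed, then do the network request
SELECT id, userid, action, target, name
FROM queue
WHERE claimed_by = 'worker-42' AND state = 1;
-- When the request succeeds, remove the row with another autocommitted statement
DELETE FROM queue WHERE id = 123;         -- 123 stands in for the id selected above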

MySQL: Deadlock found when trying to get lock; do I need to remove a key?

I count page view statistics in MySQL and sometimes get a deadlock.
How can I resolve this problem? Maybe I need to remove one of the keys?
But what would happen to read performance? Or would it not be affected?
Table:
CREATE TABLE `pt_stat` (
`stat_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`post_id` int(11) unsigned NOT NULL,
`stat_name` varchar(50) NOT NULL,
`stat_value` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`stat_id`),
KEY `post_id` (`post_id`),
KEY `stat_name` (`stat_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8
Error: "Deadlock found when trying to get lock; try restarting transaction".
UPDATE pt_stat SET stat_value = stat_value + 1 WHERE post_id = "21500" AND stat_name = 'day_20170111';
When dealing with deadlocks, the first thing to do, always, is to see whether you have complex transactions deadlocking against each other. This is the normal case. I assume based on your question that the update statement, however, is in its own transaction and therefore there are no complex interdependencies among writes from a logical database perspective.
Certain multi-threaded databases (including MySQL) can have single statements deadlock against themselves due to write dependencies within threads on the same query. MySQL is not alone here btw. MS SQL Server has been known to have similar problems in some cases and workloads. The problem (as you seem to grasp) is that a thread updating an index can deadlock against another thread that updates an index (and remember, InnoDB tables are indexes with leaf-nodes containing the row data).
In these cases there are three things you can look at doing:
If the problem is not severe, then the best option is generally to retry the transaction in case of deadlock.
You could reduce the number of background threads but this will affect both read and write performance, or
You could try removing an index (key). However, keep in mind that unindexed scans on MySQL are slow.
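A sketch of the first option: the retry loop itself lives in the application, which simply re-runs this block whenever MySQL returns error 1213 (ER_LOCK_DEADLOCK):
START TRANSACTION;
UPDATE pt_stat
SET stat_value = stat_value + 1
WHERE post_id = 21500 AND stat_name = 'day_20170111';
COMMIT;
-- If the UPDATE fails with error 1213, ROLLBACK and run the block again;
-- a small random sleep between attempts helps avoid an immediate re-collision.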

Slow MySQL InnoDB Inserts and Updates

I am using Magento and having a lot of slowness on the site. There is very, very light load on the server. I have verified that CPU, disk I/O, and memory usage are light: less than 30% of available at all times. APC caching is enabled. I am using New Relic to monitor the server, and the issue is very clearly inserts/updates.
I have isolated the slowness to all insert and update statements. SELECTs are fast. Very simple inserts/updates into tables take 2-3 seconds, whether run from my application or from the mysql command line.
Example:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id='8');
This table has 9 rows, a primary key, and 1 index on it.
The slowness occurs with all inserts/updates. I have run mysqltuner and everything looks good. I have also changed innodb_flush_log_at_trx_commit to 2.
The activity on this server is very light- it's a dv box with 1 GB RAM. I have magento installs that run 100x better with 5x the load on a similar setup.
I started logging all queries over 2 seconds and it seems to be all inserts and full text searches.
Anyone have suggestions?
Here is table structure:
CREATE TABLE IF NOT EXISTS `index_process` (
`process_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`indexer_code` varchar(32) NOT NULL,
`status` enum('pending','working','require_reindex') NOT NULL DEFAULT 'pending',
`started_at` datetime DEFAULT NULL,
`ended_at` datetime DEFAULT NULL,
`mode` enum('real_time','manual') NOT NULL DEFAULT 'real_time',
PRIMARY KEY (`process_id`),
UNIQUE KEY `IDX_CODE` (`indexer_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
First: in (process_id='8'), '8' is a char/varchar literal, not an int, so MySQL has to convert the value first.
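A small illustration of that point: passing the id as a numeric literal avoids the implicit string-to-integer conversion (a micro-optimization at most, since process_id is the primary key either way):
UPDATE `index_process`
SET `status` = 'working', `started_at` = '2012-02-10 19:08:31'
WHERE process_id = 8;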
On my system, I saw long times (greater than one second) when updating users.last_active_time.
The reason was that a few long-running queries joined against the users table, so the rows were effectively locked by those SELECTs and the update had to wait.
I rewrote the query from a JOIN to sub-queries and the problem was gone.

Should I use MyISAM or InnoDB Tables for my MySQL Database?

I have the following two tables in my database (the indexing is not complete as it will be based on which engine I use):
Table 1:
CREATE TABLE `primary_images` (
`imgId` smallint(6) unsigned NOT NULL AUTO_INCREMENT,
`imgTitle` varchar(255) DEFAULT NULL,
`view` varchar(45) DEFAULT NULL,
`secondary` enum('true','false') NOT NULL DEFAULT 'false',
`imgURL` varchar(255) DEFAULT NULL,
`imgWidth` smallint(6) DEFAULT NULL,
`imgHeight` smallint(6) DEFAULT NULL,
`imgDate` datetime DEFAULT NULL,
`imgClass` enum('jeans','t-shirts','shoes','dress_shirts') DEFAULT NULL,
`imgFamily` enum('boss','lacoste','tr') DEFAULT NULL,
`imgGender` enum('mens','womens') NOT NULL DEFAULT 'mens',
PRIMARY KEY (`imgId`),
UNIQUE KEY `imgDate` (`imgDate`)
)
Table 2:
CREATE TABLE `secondary_images` (
`imgId` smallint(6) unsigned NOT NULL AUTO_INCREMENT,
`primaryId` smallint(6) unsigned DEFAULT NULL,
`view` varchar(45) DEFAULT NULL,
`imgURL` varchar(255) DEFAULT NULL,
`imgWidth` smallint(6) DEFAULT NULL,
`imgHeight` smallint(6) DEFAULT NULL,
`imgDate` datetime DEFAULT NULL,
PRIMARY KEY (`imgId`),
UNIQUE KEY `imgDate` (`imgDate`)
)
Table 1 will be used to create a thumbnail gallery with links to larger versions of the image. imgClass, imgFamily, and imgGender will refine the thumbnails that are shown.
Table 2 contains images related to those in Table 1. Hence the use of primaryId to relate a single image in Table 1 with one or more images in Table 2. This is where I was thinking of using the Foreign Key ability of InnoDB, but I'm also familiar with the ability of Indexes in MyISAM to do the same.
Without delving too much into the remaining fields, imgDate is used to order the results.
Last, but not least, I should mention that this database is READ ONLY. All data will be entered by me. I have been told that if a database is read only, it should be MyISAM, but I'm hoping you can shed some light on what you would do in my situation.
Always use InnoDB by default.
In MySQL 5.1 and later, you should use InnoDB. In MySQL 5.1, you should enable the InnoDB plugin. In MySQL 5.5, the InnoDB plugin is enabled by default, so just use it.
The advice years ago was that MyISAM was faster in many scenarios. But that is no longer true if you use a current version of MySQL.
There may be some exotic corner cases where MyISAM performs marginally better for certain workloads (e.g. table-scans, or high-volume INSERT-only work), but the default choice should be InnoDB unless you can prove you have a case that MyISAM does better.
Advantages of InnoDB besides the support for transactions and foreign keys that is usually mentioned include:
InnoDB is more resistant to table corruption than MyISAM.
Row-level locking. In MyISAM, readers block writers and vice-versa.
Support for large buffer pool for both data and indexes. MyISAM key buffer is only for indexes.
MyISAM is stagnant; all future development will be in InnoDB.
See also my answer to MyISAM versus InnoDB
MyISAM won't let you do this kind of check at the MySQL level. For instance, if you want to update the imgId in both tables as a single transaction:
START TRANSACTION;
UPDATE primary_images SET imgId=2 WHERE imgId=1;
UPDATE secondary_images SET imgId=2 WHERE imgId=1;
COMMIT;
Another advantage is integrity checking: with InnoDB, a statement or transaction that hits an error, such as a duplicate value in the field covered by UNIQUE KEY imgDate (imgDate), is rolled back cleanly instead of leaving partial changes behind. Trust me, this really comes in handy and is far less error prone. In my opinion MyISAM is for playing around, while more serious work should rely on InnoDB.
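For the schema above, a sketch of the foreign key that InnoDB (unlike MyISAM) will actually enforce; the constraint name and the ON DELETE behaviour are illustrative choices, not requirements:
ALTER TABLE secondary_images
ADD CONSTRAINT fk_secondary_primary
FOREIGN KEY (primaryId) REFERENCES primary_images (imgId)
ON DELETE CASCADE;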
Hope it helps
A few things to consider :
Do you need transaction support?
Will you be using foreign keys?
Will there be a lot of writes on a table?
If answer to any of these questions is "yes", then you should definitely use InnoDB.
Otherwise, you should answer the following questions :
How big are your tables?
How many rows do they contain?
What is the load on your database engine?
What kind of queries you expect to run?
Unless your tables are very large and you expect large load on your database, either one works just fine.
I would prefer MyISAM because it scales pretty well for a wide range of data-sizes and loads.
I would like to add something that people may benefit from:
I've just created an InnoDB table (leaving everything at the default, except changing the collation to Unicode), and populated it with about 300,000 records (rows).
Queries like SELECT COUNT(id) FROM table would hang until they ended with an error message, never returning a result.
I've cloned the table and its data into a new MyISAM table,
and that same query, along with other large SELECT queries, returned quickly, and everything worked fine.

How to optimize a 'col = col + 1' UPDATE query that runs on 100,000+ records?

See this previous question for some background. I'm trying to renumber a corrupted MPTT tree using SQL. The script is working fine logically, it is just much too slow.
I repeatedly need to execute these two queries:
UPDATE `tree`
SET `rght` = `rght` + 2
WHERE `rght` > currentLeft;
UPDATE `tree`
SET `lft` = `lft` + 2
WHERE `lft` > currentLeft;
The table is defined as such:
CREATE TABLE `tree` (
`id` char(36) NOT NULL DEFAULT '',
`parent_id` char(36) DEFAULT NULL,
`lft` int(11) unsigned DEFAULT NULL,
`rght` int(11) unsigned DEFAULT NULL,
... (a couple of more columns) ...,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `lft` (`lft`),
KEY `rght` (`rght`),
... (a few more indexes) ...
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The database is MySQL 5.1.37. There are currently ~120,000 records in the table. Each of the two UPDATE queries takes roughly 15 - 20 seconds to execute. The WHERE condition may apply to a majority of the records, so that almost all records need to be updated each time. In the worst case both queries are executed as many times as there are records in the database.
Is there a way to optimize this query by keeping the values in memory, delaying writing to disk, delaying index updates or something along these lines? The bottleneck seems to be hard disk throughput right now, as MySQL seems to be writing everything back to disk immediately.
Any suggestion appreciated.
I have never used it myself, but if you have enough memory, try a MEMORY table.
Create a table with the same structure as tree, fill it with INSERT INTO ... SELECT FROM ..., run your scripts against the MEMORY table, and then write the results back.
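A rough sketch of that approach, assuming the unlisted columns fit the MEMORY engine's restrictions (no TEXT/BLOB) and the whole table fits in RAM; 100 stands in for currentLeft:
CREATE TABLE tree_mem LIKE tree;
ALTER TABLE tree_mem ENGINE=MEMORY;
-- lft/rght are range-scanned, so make those indexes BTREE (MEMORY defaults to HASH)
ALTER TABLE tree_mem DROP INDEX `lft`, DROP INDEX `rght`,
  ADD INDEX `lft` (`lft`) USING BTREE, ADD INDEX `rght` (`rght`) USING BTREE;
INSERT INTO tree_mem SELECT * FROM tree;
-- Run the renumbering passes against the in-memory copy
UPDATE tree_mem SET rght = rght + 2 WHERE rght > 100;
UPDATE tree_mem SET lft = lft + 2 WHERE lft > 100;
-- Write the corrected values back to the InnoDB table in one pass
UPDATE tree t JOIN tree_mem m ON m.id = t.id
SET t.lft = m.lft, t.rght = m.rght;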
Expanding on some ideas from the comments, as requested:
The default is to flush to disk after every commit. You can wrap multiple updates in a commit or change this parameter:
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
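A sketch of both ideas; the first groups many renumbering statements under one commit, the second is a server-wide setting that flushes the log about once per second instead of at every commit (risking up to a second of transactions on a crash). Again, 100 stands in for currentLeft:
SET autocommit = 0;
UPDATE `tree` SET `rght` = `rght` + 2 WHERE `rght` > 100;
UPDATE `tree` SET `lft` = `lft` + 2 WHERE `lft` > 100;
-- ... further renumbering statements ...
COMMIT;
-- Or relax the per-commit flush
SET GLOBAL innodb_flush_log_at_trx_commit = 2;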
The isolation level is simple to change. Just make sure the level fits your design. This probably won't help because a range update is being used. It's nice to know though when looking for some more concurrency:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html
Ultimately, after noticing the range update in the query, your best bet is the MEMORY table that andrem pointed out. Also, you'll probably be able to gain some performance by using BTREE indexes instead of the MEMORY engine's default of HASH:
http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/
You're updating indexed columns - indexes negatively impact (read: slow down) INSERT/UPDATEs.
If this is a one time need to get things correct:
Drop/delete the indexes on the columns being updated (lft, rght)
Run the update statements
Re-create the indexes (this can take time, possibly equivalent to what you already experience in total)
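A sketch of those three steps for this table, using the index names from the CREATE TABLE above:
ALTER TABLE `tree` DROP INDEX `lft`, DROP INDEX `rght`;
-- ... run all of the renumbering UPDATE statements ...
ALTER TABLE `tree` ADD INDEX `lft` (`lft`), ADD INDEX `rght` (`rght`);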