A really slow UPDATE to a MySQL server

I am running a CMS, but this has nothing to do with it.
I have a simple query which is:
UPDATE e107_online SET `online_location` = 'http://page.com/something.php?', `online_pagecount` = 133 WHERE `online_ip` = '175.44.*.*' AND `online_user_id` = '0' LIMIT 1;
but the same query, as reported by my website's support from the slow query log, shows this:
# User@Host: cosyclim_website[cosyclim_website] @ localhost []
# Thread_id: 7493739  Schema: cosyclim_website
# Query_time: 12.883518  Lock_time: 0.000028  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1  Rows_read: 1
It takes 12 (almost 13) seconds for a simple update query? Is there a way I could optimize it somehow? If I run it through phpMyAdmin it takes 0.0003s.
The table:
CREATE TABLE IF NOT EXISTS `e107_online` (
`online_timestamp` int(10) unsigned NOT NULL default '0',
`online_flag` tinyint(3) unsigned NOT NULL default '0',
`online_user_id` varchar(100) NOT NULL default '',
`online_ip` varchar(15) NOT NULL default '',
`online_location` varchar(255) NOT NULL default '',
`online_pagecount` tinyint(3) unsigned NOT NULL default '0',
`online_active` int(10) unsigned NOT NULL default '0',
KEY `online_ip` (`online_ip`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Your query is updating one row which meets certain criteria:
UPDATE e107_online
SET `online_location` = 'http://page.com/something.php?', `online_pagecount` = 133
WHERE `online_ip` = '175.44.*.*' AND `online_user_id` = '0'
LIMIT 1;
Given that you have ip addresses, I'm guessing that this table is pretty big. Millions and millions and millions of rows. There are many reasons why an update can take a long time -- such as server load, blocking transactions, and log file performance. In this case, let's make the assumption that the problem is finding one of the rows. You can test this just by doing a select with the same conditions and see how long that takes.
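A minimal sketch of that test, reusing the exact predicates (read-only, so safe to run in production):

SELECT online_ip, online_user_id
FROM e107_online
WHERE online_ip = '175.44.*.*' AND online_user_id = '0'
LIMIT 1;
-- If this SELECT is also slow, the time is going into locating the row,
-- not into writing the update.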
Assuming the select is consistently slow, then the problem can probably be fixed with indexes. If the table has no indexes -- or if MySQL cannot use existing indexes -- then it needs to do a full table scan. And, perhaps the one record that matches is at the end of the table. It takes a while to find it.
I would suggest adding an index on either e107_online(online_ip) or e107_online(online_user_id, online_ip) to help it find the record faster. The index needs to be a b-tree index, as explained here.
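A sketch of the composite option (the index name is illustrative):

ALTER TABLE e107_online
ADD INDEX idx_user_ip (online_user_id, online_ip);
-- A composite index lets MySQL narrow on both WHERE columns at once
-- instead of scanning every row.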
One consequence of using an index is that the ip with the lowest matching value will probably be the one chosen. I don't know if this lack of randomness makes a difference in your application.

Is it just this query that is slow, or are queries from your website generally slower? phpMyAdmin is most likely running queries directly on the machine your database lives on, which means network latency is effectively 0 ms. I would have suggested adding an index covering the two columns in your WHERE clause, but with <50 rows that doesn't make any sense. This is likely to come down to a bottleneck between your website and database server.
Also make sure you're not doing anything silly like running without connection pooling turned on (or creating a ton of connections unnecessarily). I've seen connection pools that had run out of space cause problems similar to this.
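A quick way to sanity-check the connection count from the database side (a sketch):

SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';
-- A connection count sitting near the limit suggests the application or its
-- pool is opening far more connections than it needs.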

Related

What could explain a 30+ second MySQL SELECT query latency when profiling records an execution time of <1 second?

I'm trying to figure out why a simple select with a LIMIT 1 clause (admittedly, on a really bloated table with a lot of rows and indices) is sometimes taking 30+ seconds (even 2 minutes, sometimes) to execute on an AWS RDS Aurora instance. This is on a writer instance.
It seems to occur for the first query from a client, only on a particular select that looks through hundreds of thousands of rows, and only sometimes.
The query is in the form:
SELECT some_table.col1, some_table.col2, some_table.col3, some_table.col4,
MAX(some_table.col2) AS SomeValue
FROM some_table
WHERE some_table.col3=123456 LIMIT 1;
And 'explain' outputs:
+----+-------------+------------+------+---------------+------+---------+-------+--------+-------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref   | rows   | Extra |
+----+-------------+------------+------+---------------+------+---------+-------+--------+-------+
|  1 | SIMPLE      | some_table | ref  | col1          | col1 | 4       | const | 268202 | NULL  |
+----+-------------+------------+------+---------------+------+---------+-------+--------+-------+
I managed to reproduce the issue and captured the profile for the query in phpMyAdmin. phpMyAdmin recorded the query as taking 30.1 seconds to execute, but the profiler shows that execution itself takes less than a second.
So it looks like the execution itself isn't taking much time; what could be causing this latency issue? I also found the same query recorded in RDS Performance Insights.
This seems to occur for the first query in a series of identical or similar queries. Could it be a caching issue? I've tried running RESET QUERY CACHE; in an attempt to reproduce the latency but with no success. Happy to provide more information about the infrastructure if that would help.
More info
SHOW VARIABLES LIKE 'query_cache%';
SHOW GLOBAL STATUS LIKE 'Qc%';
Rows examined and sent (from Performance Insights):
SHOW CREATE TABLE output:
CREATE TABLE `some_table` (
`col1` int(10) unsigned NOT NULL AUTO_INCREMENT,
`col2` int(10) unsigned NOT NULL DEFAULT '0',
`col3` int(10) unsigned NOT NULL DEFAULT '0',
`col4` int(10) unsigned NOT NULL DEFAULT '0',
`col5` mediumtext COLLATE utf8mb4_unicode_ci NOT NULL,
`col6` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
`col7` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`col1`),
KEY `col2` (`col2`),
KEY `col3` (`col3`),
KEY `col4` (`col4`),
KEY `col6` (`col6`),
KEY `col7` (`col7`)
) ENGINE=InnoDB AUTO_INCREMENT=123456789 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Possible explanations are:
The query is delayed from executing because it's waiting for a lock. Even a read-only query like SELECT may need to wait for a metadata lock (see the diagnostic sketch after this list).
The query must examine hundreds of thousands of rows, and it takes time to read those rows from storage. Aurora is supposed to have fast storage, but it can't be zero cost.
The system load on the Aurora instance is too high, because it's competing with other queries you are running.
The system load on the Aurora instance is too high, because the host is shared by other Aurora instances owned by other Amazon customers. This case is sometimes called "noisy neighbor" and there's practically nothing you can do to prevent it. Amazon automatically colocates virtual machines for different customers on the same hardware.
It's taking a long time to transfer the result set to the client. Since you use LIMIT 1, that single row would have to be huge to take 30 seconds, or else your client must be on a very slow network.
The query cache is not relevant the first time you run the query. Subsequently executing the same query will be faster, until some later time after the result has been evicted from the cache, or if any data in that table is updated, which forces the result of all queries against that table to be evicted from the query cache.
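For the first of these, a diagnostic sketch that can be run while a query is stuck (the performance_schema table shown assumes MySQL 5.7 compatibility and the metadata-lock instrument being enabled):

SHOW FULL PROCESSLIST;
-- A blocked query shows "Waiting for table metadata lock" in the State column.

SELECT object_schema, object_name, lock_type, lock_status
FROM performance_schema.metadata_locks;
-- lock_status is GRANTED for the session holding the lock, PENDING for the waiters.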
It seems that your understanding of the LIMIT clause isn't quite right in this scenario.
If you were to run a simple query like SELECT * FROM tablea LIMIT 1; then the database would hand you the first row it comes across and stop there, giving you a quick return.
However, in your example above, you have both an aggregate function and a WHERE clause.
Therefore, before your database can return that first row, it must read the whole matching data set and compute the aggregate; only then can it work out what the first row is.
You can read more about this in this earlier answer:
https://dba.stackexchange.com/a/62444
If you were to run this same query without LIMIT 1 on the end, you'd likely find that it takes about the same time to return the result.
As you mentioned in your comment, it would be best to look at the schema and work out how this query can be amended to be more efficient.
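A hedged sketch of one possible amendment, assuming the intent is simply "the largest col2 where col3 = 123456" (the index name is invented):

ALTER TABLE some_table ADD INDEX idx_col3_col2 (col3, col2);

SELECT MAX(col2) AS SomeValue
FROM some_table
WHERE col3 = 123456;
-- With a composite (col3, col2) index, MySQL can read the maximum straight off
-- the index instead of examining all ~268,000 matching rows.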

MySQL: Deadlock found when trying to get lock; do I need to remove a key?

I count page view statistics in MySQL and sometimes get a deadlock.
How can I resolve this problem? Maybe I need to remove one of the keys?
But what would happen to read performance? Or would it not be affected?
Table:
CREATE TABLE `pt_stat` (
`stat_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`post_id` int(11) unsigned NOT NULL,
`stat_name` varchar(50) NOT NULL,
`stat_value` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`stat_id`),
KEY `post_id` (`post_id`),
KEY `stat_name` (`stat_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8
Error: "Deadlock found when trying to get lock; try restarting transaction".
UPDATE pt_stat SET stat_value = stat_value + 1 WHERE post_id = "21500" AND stat_name = 'day_20170111';
When dealing with deadlocks, the first thing to do, always, is to see whether you have complex transactions deadlocking against each other. This is the normal case. I assume, based on your question, that the update statement is in its own transaction and therefore there are no complex interdependencies among writes from a logical database perspective.
Certain multi-threaded databases (including MySQL) can have single statements deadlock against themselves due to write dependencies within threads on the same query. MySQL is not alone here btw. MS SQL Server has been known to have similar problems in some cases and workloads. The problem (as you seem to grasp) is that a thread updating an index can deadlock against another thread that updates an index (and remember, InnoDB tables are indexes with leaf-nodes containing the row data).
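When a statement does deadlock, InnoDB records the details, and they can be inspected directly; a minimal sketch:

SHOW ENGINE INNODB STATUS;
-- The LATEST DETECTED DEADLOCK section shows both transactions, the statements
-- they were running, and the index records each one held and waited for.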
In these cases there are three things you can look at doing:
If the problem is not severe, then the best option is generally to retry the transaction in case of deadlock.
You could reduce the number of background threads but this will affect both read and write performance, or
You could try removing an index (key). However, keep in mind that unindexed scans on MySQL are slow.

Large INNODB Tables Locking Up

I have a database table that is giving me headaches: errors when inserting lots of data. Let me break down what exactly happens, and I'm hoping someone will have some insight into how I can get this figured out.
Basically I have a table that has 11+ million records in it, and it's growing every day. We track how many times a user has viewed a video and their progress in that video. You can see below what the structure is like. Our setup is a master db with two slaves attached to it. Nightly we run a cron script to compile some statistical data out of this table into a couple of other tables we use just for reporting. These cron scripts only do SELECT statements on the slave and do their inserts into our statistical tables on the master (so it'll propagate down). Like clockwork, every time we run this script it locks up our production table. I thought moving the SELECT to a slave would fix this issue, and since we aren't even writing into the main table with the cron but rather into other tables, I'm now perplexed as to what could possibly cause this locking.
It almost seems as if every large read on the main table (master or slave) locks up the master. As soon as the cron is complete, the table goes back to normal performance.
My question has several levels, all about InnoDB. I've had thoughts that indexing might be causing this issue, but maybe it's other InnoDB settings that I'm not fully understanding. As you would imagine, I want to keep the master from getting this lockup. I don't really care if the slave is pegged out during this script run, as long as it won't affect my master db. Is this something that can happen with slave/master relationships in MySQL?
For reference, the tables receiving the compiled information are stats_daily and stats_grouped.
The biggest issue I have here, to restate a little, is that I don't understand what can cause locking like this. Taking the reads off the master and just doing inserts into another table doesn't seem like it should affect the original master table at all. I can watch the errors start streaming in 3 minutes after the script starts, however, and they stop immediately when the script ends.
The table I'm working with is below.
CREATE TABLE IF NOT EXISTS `stats` (
`ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
`VID` int(10) unsigned NOT NULL DEFAULT '0',
`UID` int(10) NOT NULL DEFAULT '0',
`Position` smallint(10) unsigned NOT NULL DEFAULT '0',
`Progress` decimal(3,2) NOT NULL DEFAULT '0.00',
`ViewCount` int(10) unsigned NOT NULL DEFAULT '0',
`DateFirstView` int(10) unsigned NOT NULL DEFAULT '0', -- Unix timestamp
`DateLastView` int(10) unsigned NOT NULL DEFAULT '0', -- Unix timestamp
PRIMARY KEY (`ID`),
KEY `VID` (`VID`,`UID`),
KEY `UID` (`UID`),
KEY `DateLastView` (`DateLastView`),
KEY `ViewCount` (`ViewCount`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=15004624 ;
Does anyone have any thoughts or ideas on this?
UPDATE:
The errors I get from the master DB
MysqlError: Lock wait timeout exceeded; try restarting transaction
Uncaught exception 'Exception' with message 'invalid query UPDATE stats SET VID = '13156', UID = '73859', Position = '0', Progress = '0.8', ViewCount = '1', DateFirstView = '1375789950', DateLastView = '1375790530' WHERE ID = 14752456
The update query fails because of the locking. The query is actually valid. I'll get 100s of these and afterwards I can randomly copy/paste these queries and they will work.
UPDATE 2
Queries and Explains from Cron Script
Query Ran on the Slave (leaving php variables in curly brackets for reference):
SELECT
VID,
COUNT(ID) as ViewCount,
DATE_FORMAT(FROM_UNIXTIME(DateLastView), '%Y-%m-%d') AS YearMonthDay,
{$today} as DateModified
FROM stats
WHERE DateLastView >= {$start_date} AND DateLastView <= {$end_date}
GROUP BY YearMonthDay, VID
EXPLAIN of the SELECT statement:
id  select_type  table  type   possible_keys  key           key_len  ref   rows   Extra
1   SIMPLE       stats  range  DateLastView   DateLastView  4        NULL  25242  Using where; Using temporary; Using filesort
That result set is looped and inserted into the compiled table. Unfortunately I don't have support for batched inserts with this (I tried) so I have to loop through these one at a time instead of sending a batch of 100 or 500 to the server at a time. This is inserted into the master DB.
foreach ($results as $result)
{
    $query = "INSERT INTO stats_daily (VID, ViewCount, YearMonthDay, DateModified)
              VALUES ({$result->VID}, {$result->ViewCount}, '{$result->YearMonthDay}', {$today})";
    DoQuery($query);
}
The GROUP BY is the culprit. Apparently MySQL decides to use a temporary table in this case (perhaps because the table has exceeded some limit), which is very inefficient.
I ran into similar problems, but found no clear solution. You could consider splitting your stats table into two tables, a 'daily' and a 'history' table. Run your query on the 'daily' table, which only contains entries from the latest 24 hours or whatever your interval is, then clean up the table.
To get the info into your permanent 'history' table, either write your stats into both tables from code, or copy them over from daily into history before cleanup.
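A hedged sketch of that split's nightly cleanup (table names and the 24-hour window are illustrative; stats_day here is the proposed split table, distinct from the existing stats_daily reporting table):

INSERT INTO stats_history
SELECT * FROM stats_day
WHERE DateLastView < UNIX_TIMESTAMP() - 86400;
-- copy entries older than 24 hours into the permanent table...

DELETE FROM stats_day
WHERE DateLastView < UNIX_TIMESTAMP() - 86400;
-- ...then purge them, so the daily table stays small enough to scan cheaply.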

Simple MySQL UPDATE query - very low performance

A simple MySQL update query is sometimes very slow. Here is the query:
update produse
set vizite = '135'
where id = '71238'
My simplified table structure is:
CREATE TABLE IF NOT EXISTS `produse`
(
`id` int(9) NOT NULL auto_increment,
`nume` varchar(255) NOT NULL,
`vizite` int(9) NOT NULL default '1',
PRIMARY KEY (`id`),
KEY `vizite` (`vizite`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=945179;
I use MySQL 5.0.77 and the table is MyISAM.
The table is about 752.6 MB and has 642,442 rows at the moment.
The database runs on a dedicated VPS with 3 GB of RAM and four 2 GHz processors. There are no more than 6-7 queries of that type per second when we have high traffic, but the query is slow not only at peak times.
First, try rebuilding your indexes; it might be that the query is not using them. You can check with EXPLAIN, but note that MySQL 5.0 cannot EXPLAIN an UPDATE statement (that arrived in 5.6), so run EXPLAIN on the equivalent SELECT instead.
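A sketch of both suggestions against the table above:

EXPLAIN SELECT * FROM produse WHERE id = '71238';
-- The key column should show PRIMARY; a NULL key would mean a full table scan.

OPTIMIZE TABLE produse;
-- For MyISAM this defragments the data file and rebuilds the index file.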
Another possibility is that you have many selects on that table, or long-running selects, which cause long locks. You could try using replication: have your SELECT queries executed only on a slave database, and updates only on the master. That way, you will avoid table locks caused by updates while you are doing selects, and vice versa.

Slow MySQL InnoDB Inserts and Updates

I am using Magento and having a lot of slowness on the site. There is very, very light load on the server. I have verified that CPU, disk I/O, and memory usage are light: less than 30% of available at all times. APC caching is enabled. I am using New Relic to monitor the server, and the issue is very clearly inserts/updates.
I have isolated the slowness to insert and update statements. SELECTs are fast. Very simple inserts/updates into tables take 2-3 seconds, whether run from my application or from the mysql command line.
Example:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id='8');
This table has 9 rows, a primary key, and 1 index on it.
The slowness occurs with all inserts/updates. I have run mysqltuner and everything looks good. I also changed innodb_flush_log_at_trx_commit to 2.
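For reference, a quick way to confirm that setting took effect (a value of 2 means the log is written at commit but flushed to disk only about once per second, trading a little durability for commit speed):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';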
The activity on this server is very light- it's a dv box with 1 GB RAM. I have magento installs that run 100x better with 5x the load on a similar setup.
I started logging all queries over 2 seconds and it seems to be all inserts and full text searches.
Anyone have suggestions?
Here is table structure:
CREATE TABLE IF NOT EXISTS `index_process` (
`process_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`indexer_code` varchar(32) NOT NULL,
`status` enum('pending','working','require_reindex') NOT NULL DEFAULT 'pending',
`started_at` datetime DEFAULT NULL,
`ended_at` datetime DEFAULT NULL,
`mode` enum('real_time','manual') NOT NULL DEFAULT 'real_time',
PRIMARY KEY (`process_id`),
UNIQUE KEY `IDX_CODE` (`indexer_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
First: in (process_id='8'), the value '8' is a char/varchar literal while process_id is an int, so MySQL has to convert the value first.
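A sketch of the suggested fix: pass the id as a numeric literal so no conversion is needed:

UPDATE `index_process`
SET `status` = 'working', `started_at` = '2012-02-10 19:08:31'
WHERE process_id = 8;
-- process_id is an int column, so an unquoted integer literal matches its type.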
On my system, I had long times (greater than one second) updating users.last_active_time.
The reason was that I had a few long-running queries that JOINed against the users table. This resulted in the table being blocked for reads: effectively a deadlock caused by SELECTs.
I rewrote the query from a JOIN to sub-queries and the problem was gone.
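For illustration only, a hypothetical shape of that rewrite (all table and column names are invented, and whether it helps depends on the workload):

-- Before: one long-running JOIN that touches `users` for its whole duration:
--   SELECT u.id, SUM(o.total)
--   FROM users u JOIN orders o ON o.user_id = u.id
--   GROUP BY u.id;
-- After: the heavy aggregation moves into a correlated sub-query per row:
SELECT u.id,
       (SELECT SUM(o.total) FROM orders o WHERE o.user_id = u.id) AS total
FROM users u;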