I have an InnoDB table in MySQL 5.5.53 where simple updates like
UPDATE mytable SET acol = 'value' WHERE id = 42;
hang for several seconds. id is the primary key of the table.
If I enable query profiling using
SET profiling = 1;
then run the query and look at the profile, I see something like:
show profile;
+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| starting                     | 0.000077 |
| checking permissions         | 0.000008 |
| Opening tables               | 0.000024 |
| System lock                  | 0.000008 |
| init                         | 0.000346 |
| Updating                     | 0.000108 |
| end                          | 0.000004 |
| Waiting for query cache lock | 0.000002 |
| end                          | 3.616845 |
| query end                    | 0.000016 |
| closing tables               | 0.000015 |
| freeing items                | 0.000023 |
| logging slow query           | 0.000003 |
| logging slow query           | 0.000048 |
| cleaning up                  | 0.000004 |
+------------------------------+----------+
That is, nearly all of the time is spent in the end state.
The documentation says:
end
This occurs at the end but before the cleanup of ALTER TABLE, CREATE VIEW, DELETE, INSERT, SELECT, or UPDATE statements.
How can such a simple statement spend such a long time in this state?
It turns out that the problem is the query cache.
If I disable it with
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
the problem goes away.
It must be the invalidation of query cache entries that makes the query hang for so long.
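If you want to confirm that the query cache is to blame before disabling it globally, its configuration and invalidation behaviour can be inspected directly (a quick sketch using the standard 5.5 variables):
SHOW VARIABLES LIKE 'query_cache%';   -- size and type currently in effect
SHOW GLOBAL STATUS LIKE 'Qcache%';    -- hits, inserts, lowmem prunes, free blocks
A large cache that writes keep invalidating typically shows many Qcache_inserts relative to Qcache_hits.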
Related
Let's say I have a table like the following:
+------------+--------+------+-----+---------+
| Field      | Type   | Null | Key | Default |
+------------+--------+------+-----+---------+
| datetime   | double | NO   | PRI | NULL    |
| some_value | float  | NO   |     | NULL    |
+------------+--------+------+-----+---------+
The date needs to be a double: it is stored as a Unix timestamp with fractional seconds (installing MySQL 5.6 to get fractional DATETIME is not an option). In addition, the values in the datetime column are not only the primary key, they are also always increasing. I would like to find the row closest to a certain value. Normally you would use something like:
select * from table order by abs(datetime - $myvalue) limit 1
However, I'm afraid this implementation will be slow for hundreds of thousands of rows, because it has to scan the whole table. Since I have an ordered list, I know a binary search could speed up the process, but I have no idea how to tell MySQL to perform that kind of search.
To test the performance, I run the following:
SET profiling = 1;
SELECT * FROM table order by abs(datetime - $myvalue) limit 1;
SHOW PROFILE FOR QUERY 1;
With the following results:
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000122 |
| Waiting for query cache lock   | 0.000051 |
| checking query cache for query | 0.000191 |
| checking permissions           | 0.000038 |
| Opening tables                 | 0.000094 |
| System lock                    | 0.000047 |
| Waiting for query cache lock   | 0.000085 |
| init                           | 0.000103 |
| optimizing                     | 0.000031 |
| statistics                     | 0.000057 |
| preparing                      | 0.000049 |
| executing                      | 0.000023 |
| Sorting result                 | 2.806665 |
| Sending data                   | 0.000359 |
| end                            | 0.000049 |
| query end                      | 0.000033 |
| closing tables                 | 0.000050 |
| freeing items                  | 0.000089 |
| logging slow query             | 0.000067 |
| cleaning up                    | 0.000032 |
+--------------------------------+----------+
As I understand it, sorting the result takes 2.8 seconds, even though my data is already sorted. For additional context, I have around 240,000 rows.
It won't scan the entire database. A primary key is indexed by a B-tree. Forcing it into a binary search would be slower, if you could do it, which you can't.
Try making it a field:
select abs(datetime - $myvalue) as date_diff, table.*
from table
order by date_diff
limit 1
Indexes are supported in RDBMSs. Define an index on the datetime column (or whichever field you are interested in) and the database will not do a complete table scan.
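If the goal is to make MySQL use the primary-key index instead of sorting every row, one common workaround is to fetch the nearest row from below and the nearest row from above with two bounded lookups and keep whichever is closer. A sketch, where mytable stands in for the real table name and @myvalue for the target timestamp:
SELECT c.*
FROM (
    (SELECT t.*, ABS(t.datetime - @myvalue) AS diff
       FROM mytable t
      WHERE t.datetime <= @myvalue
      ORDER BY t.datetime DESC LIMIT 1)
  UNION ALL
    (SELECT t.*, ABS(t.datetime - @myvalue) AS diff
       FROM mytable t
      WHERE t.datetime >= @myvalue
      ORDER BY t.datetime ASC LIMIT 1)
) AS c
ORDER BY c.diff
LIMIT 1;
Each inner query is a range lookup on the primary key that reads a single row, so the statement touches two rows no matter how large the table grows.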
I recently added a lot of triggers to various MySQL tables in order to enforce integrity everywhere. I am worried that I might have killed my engine, because simple updates are now taking very, very long.
Consider:
UPDATE `partner_stats` SET earnings=1 WHERE date=CURRENT_DATE()
0 rows affected. ( Query took 0.6523 sec )
SELECT * FROM `partner_stats` WHERE date = CURRENT_DATE()
1 total, Query took 0.0004 sec
The SELECT takes 0.0004 sec, but a simple UPDATE takes 0.65 sec!
This particular table has only one row and has no triggers associated with it. Switching the engine to MyISAM fixes the problem but I will need to add triggers for this table in the future so I want to stick with InnoDB.
What is wrong with my engine? Is it too busy working with the other tables? What profiling or debugging options do I have?
EDIT: I ran the profiler and it shows this:
mysql> show profile for QUERY 2;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000064 |
| checking permissions | 0.000008 |
| Opening tables       | 0.000032 |
| System lock          | 0.000007 |
| init                 | 0.000051 |
| Updating             | 0.000069 |
| end                  | 0.011682 |
| query end            | 0.218070 |
| closing tables       | 0.000016 |
| freeing items        | 0.000017 |
| logging slow query   | 0.000003 |
| cleaning up          | 0.000002 |
+----------------------+----------+
12 rows in set (0.00 sec)
You should try to optimize the InnoDB engine as explained here. On a production server with no replication, you can use:
innodb_flush_log_at_trx_commit = 2
# A value of 1 is required for ACID compliance. You can achieve better performance by setting the value different from 1, but then you can lose at most one second worth of transactions in a crash.
innodb_buffer_pool_size = [75% of total memory]
innodb_log_file_size = [25% of innodb_buffer_pool_size]
innodb_log_buffer_size = [10% of innodb_log_file_size]
innodb_thread_concurrency = [(2 x Number of CPUs) + Number of Disks, or 0 for autodetect]
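Of these, innodb_flush_log_at_trx_commit is the one most likely to matter for a slow end / query end on a single-row autocommit UPDATE, since with the default value of 1 every commit waits for a disk flush. It and innodb_thread_concurrency are dynamic, so they can be tried at runtime before being written into my.cnf; the buffer pool and log file sizes need a config change and a restart. A sketch:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- commits no longer wait for a flush; up to ~1 s of transactions can be lost in a crash
SET GLOBAL innodb_thread_concurrency = 0;       -- 0 lets InnoDB manage concurrency itself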
Is there a profiler for MySQL, like the SQL Server Profiler for T-SQL, to trace queries?
I am using MySQL from XAMPP on Windows. I have a PHP command that is very simple, yet it does not update properly. I want to see whether it is being run correctly, so I would like to trace it the way I would in MSSQL.
MySQL has a built-in profiler that shows in detail how much time was spent in each part of a query.
To enable it, use this statement:
SET profiling = 1;
Then these steps:
(1) Execute your query.
(2) Find out the query id for profiling:
SHOW PROFILES;
It will return something like this:
Query_ID | Duration  | Query
---------+-----------+-----------------------
       2 | 0.0006200 | SHOW STATUS
       3 | 0.3600000 | (your query here)
     ... | ...       | ...
Now you know that the query id is 3.
(3) Profile the query:
SHOW PROFILE FOR QUERY 3; -- example
This will return the details, which might look like this:
Status                          | Duration
--------------------------------+-------------------
starting                        | 0.000010
checking query cache for query  | 0.000078
Opening tables                  | 0.000051
System lock                     | 0.000003
Table lock                      | 0.000008
init                            | 0.000036
optimizing                      | 0.000020
statistics                      | 0.000013
preparing                       | 0.000015
Creating tmp table              | 0.000028
executing                       | 0.000602
Copying to tmp table            | 0.000176
Sorting result                  | 0.000043
Sending data                    | 0.080032
end                             | 0.000004
removing tmp table              | 0.000024
end                             | 0.000006
query end                       | 0.000003
freeing items                   | 0.000148
removing tmp table              | 0.000019
closing tables                  | 0.000005
logging slow query              | 0.000003
cleaning up                     | 0.000004
In this example, most of the time was actually spent sending the data from the server back to the client.
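SHOW PROFILE can also break the same steps down by resource; adding type clauses such as CPU and BLOCK IO shows processor time and block I/O operations per step:
SHOW PROFILE CPU, BLOCK IO FOR QUERY 3;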
I have a large database (approx 50GB). It is on a server I have little control over, but I know they are using mysqldump to do backups nightly.
I have a query that takes hours to finish. I set it to run, but it never actually finishes.
I've noticed that after the backup time, all the tables have a lock request (SHOW OPEN TABLES WHERE in_use > 0; lists all tables).
The tables from my query have in_use = 2, all other tables have in_use = 1.
So... what is happening here?
a) my query is running normally, blocking the dump from happening. I should just wait?
b) the dump is causing the server to hang (maybe lack of memory/disk space?)
c) something else?
EDIT: using MyISAM tables
There is a server admin who is not very competent, but if I ask him specific things he does them. What should I get him to check?
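A few read-only checks he could run while the backup is in progress, to see what is actually blocking (a sketch):
SHOW FULL PROCESSLIST;                          -- what each connection is doing or waiting on
SHOW OPEN TABLES WHERE in_use > 0;              -- which tables are locked or have lock requests pending
SHOW GLOBAL STATUS LIKE 'Table_locks_waited';   -- cumulative count of waits for table locks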
EDIT: adding query
SELECT citing.article_id as citing, citing.year, r.id_when_cited, cited_issue.country
FROM isi_lac_authored_articles as citing # 1M records
JOIN isi_citation_references r ON (citing.article_id = r.article_id) # 400M records
JOIN isi_articles cited ON (cited.id_when_cited = r.id_when_cited) # 25M records
JOIN isi_issues cited_issue ON (cited.issue_id = cited_issue.issue_id) # 1M records
This is what EXPLAIN has to say:
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
| 1 | SIMPLE | cited_issue | ALL | NULL | NULL | NULL | NULL | 1156856 | |
| 1 | SIMPLE | cited | ref | isi_articles_id_when_cited,isi_articles_issue_id | isi_articles_issue_id | 49 | func | 19 | Using where |
| 1 | SIMPLE | r | ref | isi_citation_references_article_id,isi_citation_references_id_when_cited | isi_citation_references_id_when_cited | 17 | mimir_dev.cited.id_when_cited | 4 | Using where |
| 1 | SIMPLE | citing | ref | isi_lac_authored_articles_article_id | isi_lac_authored_articles_article_id | 16 | mimir_dev.r.article_id | 1 | |
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
I actually don't understand why it needs to look at all the records in the isi_issues table. Shouldn't it just match against isi_articles (cited) on issue_id? Both fields are indexed.
For a MySQL database of that size, you may want to consider setting up replication to a slave node, and then have your nightly database backups performed on the slave.
Yes -- some options to mysqldump will have the effect of locking all MyISAM tables while the backup is in progress, so that the backup is a consistent "snapshot" of a point in time.
InnoDB supports transactions, which make this unnecessary. It's also generally faster than MyISAM. You should use it. :)
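With InnoDB, a backup can read a consistent snapshot inside a transaction instead of locking every table, which is what mysqldump's --single-transaction option relies on. In plain SQL terms (a sketch):
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- all subsequent reads see the data as of this moment
-- ... SELECT the data to back up ...
COMMIT;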
I have recently upgraded a Drupal site to a multi-webhead environment and am trying to tune MySQL with the InnoDB engine. I notice SELECT queries are faster on production than on staging, but UPDATE queries are slower on production.
Staging: On a virtual machine with LAMP stack on it.
Production: Double webheads with load balancer. Dedicated MySQL server and a second hot stand-by DB server.
My system admin tells me that the latency is due to 1) remote DB connection and 2) binary logging for data replication between two DB servers.
I am new to InnoDB and multi-server environments. I'd like to see whether the MySQL profile output confirms my server settings, or whether there is any more room for optimizing the production MySQL server.
This is what I ran on both the staging and production databases; I merged the output so the timing columns sit side by side for comparison. Note that the query is faster on production for every row in the table except the one with status "end". Is the "end" phase where binary logging is performed?
mysql> SET profiling = 1;
mysql> UPDATE node SET created = created + 1 WHERE nid = 100;
mysql> SHOW profile;
+----------------------+----------+------------+
| Status               | Staging  | Production |
+----------------------+----------+------------+
| starting             | 0.000100 | 0.000037   |
| checking permissions | 0.000014 | 0.000006   |
| Opening tables       | 0.000042 | 0.000017   |
| System lock          | 0.000007 | 0.000004   |
| Table lock           | 0.000009 | 0.000003   |
| init                 | 0.000076 | 0.000030   |
| Updating             | 0.000062 | 0.000022   |
| end                  | 0.000031 | 0.002159   |
| query end            | 0.000006 | 0.000003   |
| freeing items        | 0.000010 | 0.000003   |
| closing tables       | 0.000009 | 0.000002   |
| logging slow query   | 0.000005 | 0.000001   |
| cleaning up          | 0.000004 | 0.000001   |
+----------------------+----------+------------+
| Total                | 0.000385 | 0.002288   |
+----------------------+----------+------------+
You're on the money. The "end" state will include binary logging.
For the end state, the following operations could be happening:
Removing query cache entries after data in a table is changed
Writing an event to the binary log
Freeing memory buffers, including for blobs
http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html
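If you want to confirm how much of that end time is the binary log, check whether logging is enabled and how often it is synced to disk (a sketch):
SHOW VARIABLES LIKE 'log_bin';      -- ON when binary logging (needed for replication) is enabled
SHOW VARIABLES LIKE 'sync_binlog';  -- 1 forces a disk sync per binary log write, the safest and slowest setting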