I have recently upgraded a Drupal site to a multi-webhead environment and am trying to tune MySQL with the InnoDB engine. I notice SELECT queries are faster on production than on staging, but UPDATE queries are slower on production.
Staging: a virtual machine running a LAMP stack.
Production: two webheads behind a load balancer, plus a dedicated MySQL server and a second hot-standby DB server.
My system admin tells me that the latency is due to 1) the remote DB connection and 2) binary logging for data replication between the two DB servers.
I am new to InnoDB and multi-server environments. I'd like to see whether the MySQL profiling output confirms my server settings, or whether there is room for further optimization of the production MySQL server.
This is what I ran on the staging and production databases. I merged the output into side-by-side number columns for comparison. Note that the query runs faster on production on every row of the table except the one with status "end". Is "end" the phase where binary logging is performed?
mysql> SET profiling = 1;
mysql> UPDATE node SET created = created + 1 WHERE nid = 100;
mysql> SHOW profile;
+----------------------+-------------+----------------+
| Status               | Staging (s) | Production (s) |
+----------------------+-------------+----------------+
| starting             | 0.000100    | 0.000037       |
| checking permissions | 0.000014    | 0.000006       |
| Opening tables       | 0.000042    | 0.000017       |
| System lock          | 0.000007    | 0.000004       |
| Table lock           | 0.000009    | 0.000003       |
| init                 | 0.000076    | 0.000030       |
| Updating             | 0.000062    | 0.000022       |
| end                  | 0.000031    | 0.002159       |
| query end            | 0.000006    | 0.000003       |
| freeing items        | 0.000010    | 0.000003       |
| closing tables       | 0.000009    | 0.000002       |
| logging slow query   | 0.000005    | 0.000001       |
| cleaning up          | 0.000004    | 0.000001       |
+----------------------+-------------+----------------+
| Total                | 0.000385    | 0.002288       |
+----------------------+-------------+----------------+
You're on the money: the "end" state includes binary logging.
For the end state, the following operations could be happening:
Removing query cache entries after data in a table is changed
Writing an event to the binary log
Freeing memory buffers, including for blobs
http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html
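If you want to confirm how much the binary log costs on production, you can check whether it is enabled and how aggressively it is flushed; both variables below are standard MySQL:
mysql> SHOW VARIABLES LIKE 'log_bin';      -- ON when binary logging is enabled
mysql> SHOW VARIABLES LIKE 'sync_binlog';  -- 1 = fsync the binlog at every commit (safest, slowest)
With sync_binlog = 1, every committed write waits on a disk flush of the binary log, which would show up exactly in the "end" state; setting it to 0 leaves flushing to the operating system, trading durability for speed.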
Related
I have an InnoDB table in MySQL 5.5.53 where simple updates like
UPDATE mytable SET acol = 'value' WHERE id = 42;
hang for several seconds. id is the primary key of the table.
If I enable query profiling using
SET profiling = 1;
then run the query and look at the profile, I see something like:
show profile;
+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| starting                     | 0.000077 |
| checking permissions         | 0.000008 |
| Opening tables               | 0.000024 |
| System lock                  | 0.000008 |
| init                         | 0.000346 |
| Updating                     | 0.000108 |
| end                          | 0.000004 |
| Waiting for query cache lock | 0.000002 |
| end                          | 3.616845 |
| query end                    | 0.000016 |
| closing tables               | 0.000015 |
| freeing items                | 0.000023 |
| logging slow query           | 0.000003 |
| logging slow query           | 0.000048 |
| cleaning up                  | 0.000004 |
+------------------------------+----------+
That is, all the time is spent in end.
The documentation says:
end
This occurs at the end but before the cleanup of ALTER TABLE, CREATE VIEW, DELETE, INSERT, SELECT, or UPDATE statements.
How can such a simple statement spend such a long time in this state?
It turns out that the problem is the query cache.
If I disable it with
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
the problem goes away.
It must be the invalidation of query cache entries that causes the query to hang for such a long time.
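Note that SET GLOBAL only lasts until the server restarts. To make the change permanent, the same settings can go in the server config file (my.cnf / my.ini); a minimal sketch:
[mysqld]
query_cache_type = 0
query_cache_size = 0
(The query cache was deprecated in MySQL 5.7.20 and removed entirely in 8.0, largely because of contention like this.)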
I recently added a lot of triggers to various MySQL tables in order to enforce integrity everywhere. I am worried that I might have crippled my engine, because simple updates are now taking a very long time.
Consider:
UPDATE `partner_stats` SET earnings=1 WHERE date=CURRENT_DATE()
0 rows affected. (Query took 0.6523 sec)
SELECT * FROM `partner_stats` WHERE date = CURRENT_DATE()
1 total, Query took 0.0004 sec
The SELECT takes 0.0004 s, but a simple UPDATE takes 0.65 s!
This particular table has only one row and has no triggers associated with it. Switching the engine to MyISAM fixes the problem, but I will need to add triggers to this table in the future, so I want to stick with InnoDB.
What is wrong with my engine? Is it too busy working with the other tables? What profiling or debugging options do I have?
EDIT: I ran a profile and it shows this:
mysql> show profile for QUERY 2;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000064 |
| checking permissions | 0.000008 |
| Opening tables       | 0.000032 |
| System lock          | 0.000007 |
| init                 | 0.000051 |
| Updating             | 0.000069 |
| end                  | 0.011682 |
| query end            | 0.218070 |
| closing tables       | 0.000016 |
| freeing items        | 0.000017 |
| logging slow query   | 0.000003 |
| cleaning up          | 0.000002 |
+----------------------+----------+
12 rows in set (0.00 sec)
You should try to optimize the InnoDB engine as explained here. On a production server with no replication, you can use:
innodb_flush_log_at_trx_commit = 2
# A value of 1 is required for full ACID compliance. You can achieve better performance with a value other than 1, but you can then lose up to one second's worth of transactions in a crash.
innodb_buffer_pool_size = [75% of total memory]
innodb_log_file_size = [25% of innodb_buffer_pool_size]
innodb_log_buffer_size = [10% of innodb_log_file_size]
innodb_thread_concurrency = [(2 x number of CPUs) + number of disks, or 0 for autodetect]
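For example, on a hypothetical dedicated server with 16 GB of RAM, applying those ratios gives a my.cnf sketch like this (the numbers are illustrative, not a recommendation for your hardware):
[mysqld]
innodb_flush_log_at_trx_commit = 2  # may lose up to ~1 s of transactions in a crash
innodb_buffer_pool_size = 12G       # ~75% of 16 GB RAM
innodb_log_file_size = 1G           # the 25% ratio says 3G, but MySQL 5.5 caps combined redo logs below 4 GB
innodb_log_buffer_size = 64M        # roughly the 10% ratio; larger rarely helps
innodb_thread_concurrency = 0       # 0 = let InnoDB autodetect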
Is there a profiler for MySQL, like the SQL Server Profiler for T-SQL, to trace queries?
I am using MySQL with XAMPP on Windows. I have a PHP command that is very simple, yet it does not update properly. I want to see whether it is being run properly, so I would like to trace it the way I would in MSSQL.
MySQL has a built-in profiler, which shows in detail how much time was spent on each part of a query.
To enable it, use this statement:
SET profiling = 1;
Then follow these steps:
(1) Execute your query.
(2) Find out the query id for profiling:
SHOW PROFILES;
It will return something like this:
Query_ID | Duration  | Query
---------+-----------+-----------------------
       2 | 0.0006200 | SHOW STATUS
       3 | 0.3600000 | (your query here)
     ... | ...       | ...
Now you know the query ID is 3.
(3) Profile the query.
SHOW PROFILE FOR QUERY 3; -- example
This will return the details, which might look like this:
Status                          | Duration
--------------------------------+-------------------
starting                        | 0.000010
checking query cache for query  | 0.000078
Opening tables                  | 0.000051
System lock                     | 0.000003
Table lock                      | 0.000008
init                            | 0.000036
optimizing                      | 0.000020
statistics                      | 0.000013
preparing                       | 0.000015
Creating tmp table              | 0.000028
executing                       | 0.000602
Copying to tmp table            | 0.000176
Sorting result                  | 0.000043
Sending data                    | 0.080032
end                             | 0.000004
removing tmp table              | 0.000024
end                             | 0.000006
query end                       | 0.000003
freeing items                   | 0.000148
removing tmp table              | 0.000019
closing tables                  | 0.000005
logging slow query              | 0.000003
cleaning up                     | 0.000004
In this example, most of the time was actually spent sending the data from the server back to the client.
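Note that SHOW PROFILE and SHOW PROFILES are deprecated as of MySQL 5.6.7; the Performance Schema is the recommended replacement. A rough equivalent, sketched on the assumption that the statement and stage instruments and the history_long consumers are enabled:
-- find the statement's event id and total time (timers are in picoseconds)
SELECT EVENT_ID, TRUNCATE(TIMER_WAIT/1000000000000, 6) AS duration_s, SQL_TEXT
FROM performance_schema.events_statements_history_long;

-- then list its stages, using that event id (31 here is hypothetical)
SELECT EVENT_NAME AS stage, TRUNCATE(TIMER_WAIT/1000000000000, 6) AS duration_s
FROM performance_schema.events_stages_history_long
WHERE NESTING_EVENT_ID = 31;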
I found that my MySQL server has many connections that are sleeping. I want to delete them all.
How can I configure my MySQL server so that it deletes or disposes of connections that are sleeping and not currently processing anything?
Is this possible in MySQL? If so, tell me how I can do the following:
a connection allows a data reader to be opened only once, and the connection (process) is destroyed after the query's response has been returned.
If you want to do it manually, you can do it like this:
Log in to MySQL as admin:
mysql -uroot -ppassword
Then run the command:
mysql> show processlist;
You will get something like this:
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| Id | User        | Host               | db       | Command | Time | State | Info             |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| 49 | application | 192.168.44.1:51718 | XXXXXXXX | Sleep   |  183 |       | NULL             |
| 55 | application | 192.168.44.1:51769 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 56 | application | 192.168.44.1:51770 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 57 | application | 192.168.44.1:51771 | XXXXXXXX | Sleep   |  148 |       | NULL             |
| 58 | application | 192.168.44.1:51968 | XXXXXXXX | Sleep   |   11 |       | NULL             |
| 59 | root        | localhost          | NULL     | Query   |    0 | NULL  | show processlist |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
You will see the complete details of the different connections. Now you can kill a sleeping connection as below (55 is one of the sleeping IDs from the list above):
mysql> kill 55;
Query OK, 0 rows affected (0.00 sec)
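If there are many sleeping connections, killing them one at a time gets tedious. You can generate the KILL statements in bulk from information_schema.PROCESSLIST instead (a sketch; the 100-second threshold is arbitrary):
SELECT CONCAT('KILL ', id, ';') AS kill_statement
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep' AND time > 100;
Then paste the generated statements back into the client and run them.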
Why would you want to delete a sleeping thread? MySQL creates threads for connection requests, and when the client disconnects, the thread is put back into the cache to wait for another connection.
This avoids much of the overhead of creating threads on demand, and it's nothing to worry about. A sleeping thread uses only about 256 KB of memory.
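You can watch the thread cache at work with two standard status queries:
mysql> SHOW VARIABLES LIKE 'thread_cache_size';
mysql> SHOW STATUS LIKE 'Threads_%';  -- Threads_cached, Threads_connected, Threads_created, Threads_running
If Threads_created keeps climbing, the cache is too small; if it stays flat, connections are being recycled from the cache as described above.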
You can list all running processes by executing this SQL:
SHOW PROCESSLIST;
You will see the sleeping processes. If you want to terminate one, note its process ID and execute:
KILL <processid>;
But you can also set timeout variables in my.cnf:
wait_timeout=15
connect_timeout=10
interactive_timeout=100
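If you can't edit my.cnf right away, the same variables can usually be changed at runtime (this needs the SUPER privilege and only affects connections opened afterwards):
mysql> SET GLOBAL wait_timeout = 15;
mysql> SET GLOBAL interactive_timeout = 100;
Existing sleeping connections keep their old timeout, so they still have to be killed by hand or left to expire.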
For me, with a MySQL server on Windows, I updated the config file (because I could not set the variables with an SQL statement, due to privileges):
D:\MySQL\mysql-5.6.48-winx64\my.ini
and added the lines:
wait_timeout=61
interactive_timeout=61
Then restart the service, and confirm the new values with:
SHOW VARIABLES LIKE '%_timeout';
I did a connection test, and after one minute all 10+ sleeping connections had disappeared!
I have a MySQL query that is copying data from one table to another for processing. For some reason, this query that normally takes a few seconds locked up overnight and ran for several hours. When I logged in this morning, I tried to kill the query, but it is still listed in the process list.
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
| Id      | User     | Host      | db   | Command | Time  | State        | Info                                                                                  |
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
| 1061763 | tb_admin | localhost | dw   | Killed  | 45299 | Sending data | INSERT INTO email_data_inno_stage SELECT * FROM email_data_test LIMIT 4480000, 10000  |
| 1062614 | tb_admin | localhost | dw   | Killed  |   863 | Sending data | INSERT INTO email_data_inno_stage SELECT * FROM email_data_test LIMIT 4480000, 10000  |
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
What could have caused this, and how can I kill this process so I can get on with my work?
If the table email_data_test is MyISAM and it was locked, that would have held up the INSERT.
If the table email_data_test is InnoDB, then a lot of MVCC data was being written to the ib_logfiles, which may not have completed yet.
In both cases, the LIMIT clause had to scroll through 4,480,000 rows just to get to the 10,000 rows you actually needed to INSERT.
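A common way to avoid that offset scan is to paginate on the primary key rather than with LIMIT. A sketch, assuming email_data_test has an auto-increment primary key named id:
INSERT INTO email_data_inno_stage
SELECT * FROM email_data_test
WHERE id > 4480000 AND id <= 4490000;
Each chunk then starts with an index seek instead of re-reading the first 4,480,000 rows.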
Killing the query only causes the InnoDB table email_data_inno_stage to execute a rollback.
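You can watch the rollback progress while you wait: the TRANSACTIONS section of the InnoDB monitor shows the transaction marked as rolling back, along with a count of the rows still to be undone.
mysql> SHOW ENGINE INNODB STATUS\G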