MySQL query not going away after being killed

I have a MySQL query that is copying data from one table to another for processing. For some reason, this query that normally takes a few seconds locked up overnight and ran for several hours. When I logged in this morning, I tried to kill the query, but it is still listed in the process list.
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
| Id      | User     | Host      | db   | Command | Time  | State        | Info                                                                                  |
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
| 1061763 | tb_admin | localhost | dw   | Killed  | 45299 | Sending data | INSERT INTO email_data_inno_stage SELECT * FROM email_data_test LIMIT 4480000, 10000 |
| 1062614 | tb_admin | localhost | dw   | Killed  |   863 | Sending data | INSERT INTO email_data_inno_stage SELECT * FROM email_data_test LIMIT 4480000, 10000 |
+---------+----------+-----------+------+---------+-------+--------------+---------------------------------------------------------------------------------------+
What could have caused this, and how can I kill this process so I can get on with my work?

If the table email_data_test is MyISAM and it was locked, that would have held up the INSERT.
If the table email_data_test is InnoDB, then a lot of MVCC data was being written to the ib_logfiles, and that work may not have completed yet.
In both cases, the LIMIT clause had to scroll through 4,480,000 rows just to reach the 10,000 rows you actually needed to INSERT.
Killing the query only causes the InnoDB table email_data_inno_stage to roll back what was already inserted, which is why the thread stays in the process list in the Killed state until that rollback finishes.
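As a rough sketch of how to avoid that expensive offset scan, assuming email_data_test has an indexed auto-increment primary key called id (an assumption; adjust to your actual schema), you could chunk by key range instead of by LIMIT offset:
-- Sketch only: id is assumed to be an indexed auto-increment primary key.
-- A range seek jumps straight to the next batch instead of scanning
-- 4,480,000 rows before copying anything.
INSERT INTO email_data_inno_stage
SELECT * FROM email_data_test
WHERE id > 4480000 AND id <= 4490000;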

Related

InnoDB Engine Simple Update Query Takes Way Too Long (query_end)

I recently added a lot of triggers to various MySQL tables in order to enforce integrity everywhere. I am worried that I might have killed my engine, because simple updates are now taking very, very long.
Consider:
UPDATE `partner_stats` SET earnings=1 WHERE date=CURRENT_DATE()
0 rows affected. ( Query took 0.6523 sec )
SELECT * FROM `partner_stats` WHERE date = CURRENT_DATE()
1 total, Query took 0.0004 sec
The SELECT takes 0.0004 sec, but a simple UPDATE takes 0.65 sec!
This particular table has only one row and has no triggers associated with it. Switching the engine to MyISAM fixes the problem but I will need to add triggers for this table in the future so I want to stick with InnoDB.
What is wrong with my engine? Is it too busy working with the other tables? What profiling or debugging options do I have?
EDIT: I ran profiling and it shows this:
mysql> show profile for QUERY 2;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000064 |
| checking permissions | 0.000008 |
| Opening tables       | 0.000032 |
| System lock          | 0.000007 |
| init                 | 0.000051 |
| Updating             | 0.000069 |
| end                  | 0.011682 |
| query end            | 0.218070 |
| closing tables       | 0.000016 |
| freeing items        | 0.000017 |
| logging slow query   | 0.000003 |
| cleaning up          | 0.000002 |
+----------------------+----------+
12 rows in set (0.00 sec)
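For reference, a profile like the one above can be captured with MySQL's session profiling commands, roughly:
SET profiling = 1;
UPDATE `partner_stats` SET earnings=1 WHERE date=CURRENT_DATE();
SHOW PROFILES;              -- lists the recent statements with their query numbers
SHOW PROFILE FOR QUERY 2;   -- per-stage timings for statement number 2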
You should try to optimize the InnoDB engine as explained here. On a production server with no replication, you can use:
innodb_flush_log_at_trx_commit = 2
# A value of 1 is required for ACID compliance. You can get better performance by setting it to something other than 1, but then you can lose up to one second's worth of transactions in a crash.
innodb_buffer_pool_size = [75% of total memory]
innodb_log_file_size = [25% of innodb_buffer_pool_size]
innodb_log_buffer_size = [10% of innodb_log_file_size]
innodb_thread_concurrency = [(2 x number of CPUs) + number of disks, or 0 for autodetect]
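As a purely illustrative sketch, here is how those rules of thumb might translate for a hypothetical dedicated box with 8 GB of RAM, 4 CPUs and 2 disks (the numbers are assumptions, not tuned recommendations; on MySQL 5.5, resizing the log files also needs a clean shutdown and removal of the old ib_logfile* files):
[mysqld]
innodb_flush_log_at_trx_commit = 2      # may lose up to ~1 s of transactions in a crash
innodb_buffer_pool_size        = 6G     # roughly 75% of 8 GB
innodb_log_file_size           = 1536M  # roughly 25% of the buffer pool
innodb_log_buffer_size         = 160M   # roughly 10% of the log file size
innodb_thread_concurrency      = 10     # (2 x 4 CPUs) + 2 disks, or 0 for autodetect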

very poor performance from MySQL INNODB

Lately my MySQL 5.5.27 server has been performing very poorly. I have changed just about everything in the config to see if it makes a difference, with no luck. Tables are constantly locking up, reaching 6-9 locks per table, and my SELECT queries take forever (300-1200 sec).
I moved everything to Pastebin because it exceeded 30k chars:
http://pastebin.com/bP7jMd97
SYS ACTIVITIES
90% UPDATES AND INSERTS
10% SELECT
My slow query log is backed up. Below is my MySQL info. Please let me know if there is anything I should add that would help.
Server version 5.5.27-log
Protocol version 10
Connection XX.xx.xxx via TCP/IP
TCP port 3306
Uptime: 21 hours 39 min 40 sec
Uptime: 78246 Threads: 125 Questions: 6764445 Slow queries: 25 Opens: 1382 Flush tables: 2 Open tables: 22 Queries per second avg: 86.451
SHOW OPEN TABLES
+----------+-------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+-------+--------+-------------+
| aridb    | ek    |      0 |           0 |
| aridb    | ey    |      0 |           0 |
| aridb    | ts    |      4 |           0 |
| aridb    | tts   |      6 |           0 |
| aridb    | tg    |      0 |           0 |
| aridb    | tgle  |      2 |           0 |
| aridb    | ts    |      5 |           0 |
| aridb    | tg2   |      1 |           0 |
| aridb    | bts   |      0 |           0 |
+----------+-------+--------+-------------+
I've hit a brick wall and need some guidance. Thanks!
From looking through your log it would seem the problem (as I'm sure you're aware) is the huge number of locks being taken, given the amount of data being updated, selected and inserted, possibly all at the same time.
It is really hard to give performance tips without first knowing a lot of information you don't provide, such as table sizes, schema, hardware, config and topology; SO probably isn't the best place for such a broad question anyway!
I’ll keep my answer as generic as I can, but possible things to look at or try would be:
Run EXPLAIN on the SELECT queries and make sure they are finding rows selectively rather than performing full table scans or examining huge numbers of rows
Leave the server to do its inserts and updates, but create a read replica for reporting; that way reporting reads won't be blocked by locks
If you're updating many rows at a time, think about updating in batches with a LIMIT so that less data is locked at once (see the sketch after this list)
If you are able to, delay the inserts to relieve pressure
Look at a hardware fix such as solid-state disks for I/O performance, and more memory so more indexes and data can be held in memory and the buffers can be larger
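A minimal sketch of that batched-update idea; the table and column names here are made up purely for illustration:
-- Hypothetical example: archive closed orders 1000 rows at a time, so each
-- statement locks only a small slice instead of the whole matching set.
UPDATE orders
SET    status = 'archived'
WHERE  status = 'closed'
LIMIT  1000;
-- Repeat until the statement reports 0 rows affected.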

Mysql query taking too much time

I have a problem related to a MySQL database. I am a Linux web server admin and I am facing a problem with a MySQL query. The database is very small. I tried to track it in the logs and found that a query is taking at least 5 seconds to respond. The first page of the site comes from the database, and the client is using a CMS. When the server gets some number of hits, the database server starts to respond very slowly and the wait time grows from 5 seconds to much longer.
I checked the slow query log:
{
Query_time: 11.480138 Lock_time: 0.003837 Rows_sent: 921 Rows_examined: 3333
SET timestamp=1346656767;
SELECT `Tender`.`id`,
`Tender`.`department_id`,
`Tender`.`title_english`,
`Tender`.`content_english`,
`Tender`.`title_hindi`,
`Tender`.`content_hindi`,
`Tender`.`file_name`,
`Tender`.`start_publish`,
`Tender`.`end_publish`,
`Tender`.`publish`,
`Tender`.`status`,
`Tender`.`createdBy`,
`Tender`.`created`,
`Tender`.`modifyBy`,
`Tender`.`modified`
FROM `mcms_tenders` AS `Tender`
WHERE `Tender`.`department_id` IN ( 31, 33, 32, 30 );
}
Every entry in the log is the same; only the query time differs.
Is there any way to tweak the performance?
Update: Here is the EXPLAIN result:
+----+-------------+--------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+--------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | Tender | ALL  | NULL          | NULL | NULL    | NULL | 3542 | Using where |
+----+-------------+--------+------+---------------+------+---------+------+------+-------------+
1 row in set, 1 warning (0.00 sec)
The client says they are using an index, so I ran the command to check the indexing.
I got the following output. Does it mean they are using indexing?
+--------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table        | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| mcms_tenders |          0 | PRIMARY  |            1 | id          | A         |        4264 | NULL     | NULL   |      | BTREE      |         |
+--------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
The normal way to tweak the performance of a query like this is to create an index on department_id.
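For example, something along these lines (the index name is arbitrary):
ALTER TABLE `mcms_tenders` ADD INDEX `idx_department_id` (`department_id`);
-- EXPLAIN should then pick the new index instead of type=ALL (a full table scan).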
However, this assumes that mcms_tenders is actually a table and not a view. You should confirm this, since the problem may be in a view.
Also, from what you describe the issue may be the connection from the server to the end users. I would try running the query locally on the server (or checking the execute time strictly on the server) to see if the query is really taking that long.
"when the server gets some number of hits"
Define 'some number'. It makes sense that reading the database is slower when it is more heavily used. Also, MySQL has a query cache whose entries for a table are invalidated whenever that table is changed. So every time someone inserts, deletes or modifies a record in this table, the next queries will be slower because the table's data is no longer cached.
But 11 seconds for a query like this is very slow, so either the load is way too high, the hardware is insufficient or broken, or your database lacks indexes (I always forget to mention that at first, because I assume adding indexes is second nature for anyone working with databases).

mysql index not optimizing query

I have a MySQL MyISAM table on which I am running a simple select id from mytable limit 1;. This just freezes the system.
I tried explain select id from mytable limit 1;. Again it freezes my system. Table demographics: 50k records, 10 MB in size, 2 indexes (including the auto-increment primary key), 8 columns.
I am clueless as to why even the EXPLAIN statement hangs, since it is only supposed to display the query plan, nothing else. Neither the table size nor the number of records is enormous, so why is MySQL working so slowly? Rather, what am I missing here?
It was due to a wait on mytable. eggyal gave me the clue to use SHOW PROCESSLIST. It showed:
+-----+------+-----------------+------+---------+------+---------------------------------+---------------------------------------------+
| Id  | User | Host            | db   | Command | Time | State                           | Info                                        |
+-----+------+-----------------+------+---------+------+---------------------------------+---------------------------------------------+
| 349 | root | localhost:56612 | mydb | Query   | 3582 | Waiting for table metadata lock | ALTER TABLE `mytable` ADD INDEX(`fk_to_02`) |
+-----+------+-----------------+------+---------+------+---------------------------------+---------------------------------------------+
I issued a KILL 349 to terminate that wait chain, and now the EXPLAIN statement works as expected.
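For reference, the whole diagnostic sequence was roughly:
SHOW PROCESSLIST;                         -- spot the thread stuck in "Waiting for table metadata lock"
KILL 349;                                 -- terminate the stalled ALTER TABLE by its Id
EXPLAIN SELECT id FROM mytable LIMIT 1;   -- now returns the query plan immediately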

can mysqldump on a large database be causing my long queries to hang?

I have a large database (approx 50GB). It is on a server I have little control over, but I know they are using mysqldump to do backups nightly.
I have a query that takes hours to finish. I set it to run, but it never actually finishes.
I've noticed that after the backup time, all the tables have a lock request (SHOW OPEN TABLES WHERE in_use > 0; lists all tables).
The tables from my query have in_use = 2, all other tables have in_use = 1.
So... what is happening here?
a) my query is running normally, blocking the dump from happening. I should just wait?
b) the dump is causing the server to hang (maybe lack of memory/disk space?)
c) something else?
EDIT: using MyISAM tables
There is a server admin who is not very competent, but if I ask him specific things he does them. What should I get him to check?
EDIT: adding query
SELECT citing.article_id as citing, citing.year, r.id_when_cited, cited_issue.country
FROM isi_lac_authored_articles as citing # 1M records
JOIN isi_citation_references r ON (citing.article_id = r.article_id) # 400M records
JOIN isi_articles cited ON (cited.id_when_cited = r.id_when_cited) # 25M records
JOIN isi_issues cited_issue ON (cited.issue_id = cited_issue.issue_id) # 1M records
This is what EXPLAIN has to say:
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
| id | select_type | table       | type | possible_keys                                                            | key                                   | key_len | ref                           | rows    | Extra       |
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
|  1 | SIMPLE      | cited_issue | ALL  | NULL                                                                     | NULL                                  | NULL    | NULL                          | 1156856 |             |
|  1 | SIMPLE      | cited       | ref  | isi_articles_id_when_cited,isi_articles_issue_id                         | isi_articles_issue_id                 | 49      | func                          |      19 | Using where |
|  1 | SIMPLE      | r           | ref  | isi_citation_references_article_id,isi_citation_references_id_when_cited | isi_citation_references_id_when_cited | 17      | mimir_dev.cited.id_when_cited |       4 | Using where |
|  1 | SIMPLE      | citing      | ref  | isi_lac_authored_articles_article_id                                     | isi_lac_authored_articles_article_id  | 16      | mimir_dev.r.article_id        |       1 |             |
+----+-------------+-------------+------+--------------------------------------------------------------------------+---------------------------------------+---------+-------------------------------+---------+-------------+
I actually don't understand why it needs to look at all the records in the isi_issues table. Shouldn't it just be matching them up with isi_articles (cited) on issue_id? Both fields are indexed.
For a MySQL database of that size, you may want to consider setting up replication to a slave node, and then have your nightly database backups performed on the slave.
Yes: some mysqldump options lock all MyISAM tables while the backup is in progress, so that the backup is a consistent snapshot of a point in time.
InnoDB supports transactions, which makes that locking unnecessary (mysqldump --single-transaction takes a consistent snapshot without blocking writers). It's also generally faster than MyISAM. You should use it. :)
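A rough sketch of the two backup styles being contrasted; the database name and output path are placeholders:
# MyISAM-style backup: tables are read-locked for the duration of the dump,
# which is how the dump and long-running queries end up blocking each other.
mysqldump --lock-all-tables mydb > /backups/mydb.sql
# InnoDB-style backup: one consistent snapshot inside a transaction, writers are not blocked.
mysqldump --single-transaction mydb > /backups/mydb.sql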