I created an event in MySQL that gathers some data from different tables and repeats every 5 minutes. In some scenarios the event may take more than 5 minutes to complete (maybe the DB is running slow or needs a restart), and then many events get fired simultaneously. To handle this, I read in the MySQL manual that locks can be used:
If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking.
But simply taking a lock didn't resolve my issue: the events were still getting executed in a queue and unpredictable data was getting dumped. What I wanted was simply: if the lock is already held, don't do anything and wait.
Reading about locks, I understood that if a named lock is held by one session, another lock with the same name cannot be acquired until the earlier lock is released.
-- a negative timeout makes GET_LOCK wait indefinitely for the lock
IF GET_LOCK('ev_test', -1) IS NOT TRUE THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
END IF;
some_event_body
-- RELEASE_LOCK must be invoked as a statement, e.g. via DO or SELECT
DO RELEASE_LOCK('ev_test');
So I used these statements in the MySQL event body, releasing the lock manually when the event completes.
My question is: what happens when some_event_body raises some other exception, for example if a SELECT query in the body uses columns that have since been removed?
Will the lock get released automatically, or will it stay there forever?
The MySQL manual says a lock stays until the session terminates, but I don't know whether an event runs inside an existing session or whether every event execution creates a new session.
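One pattern I'm considering (just a sketch, not the original event; it swaps the -1 infinite wait for a non-blocking GET_LOCK(..., 0) and adds an exit handler so an error in the body cannot skip the release):

-- intended as the body of CREATE EVENT ... DO BEGIN ... END
-- (create it with a non-default DELIMITER in the mysql client)
BEGIN
    -- on any SQL error: release the lock (harmless if not held), then re-raise
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        DO RELEASE_LOCK('ev_test');
        RESIGNAL;
    END;

    -- timeout 0: give up immediately instead of queueing behind the holder
    IF GET_LOCK('ev_test', 0) IS NOT TRUE THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
    END IF;

    -- some_event_body

    DO RELEASE_LOCK('ev_test');
END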
Separately, testing without the code above and simply using GET_LOCK, I encountered this kind of situation:
+------+-----------------+-----------+-------------+---------+------+-----------------------------+-----------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+-----------------+-----------+-------------+---------+------+-----------------------------+-----------------------------+
| 5 | event_scheduler | localhost | NULL | Daemon | 30 | Waiting for next activation | NULL |
| 8 | root | localhost | logi_test_2 | Query | 0 | init | show processlist |
| 1330 | root | localhost | logi_test_2 | Connect | 2 | User sleep | SELECT SLEEP(30) |
| 1331 | root | localhost | logi_test_2 | Connect | 4974 | User lock | SELECT GET_LOCK('test', -1) |
| 1332 | root | localhost | logi_test_2 | Connect | 4969 | User lock | SELECT GET_LOCK('test', -1) |
| 1333 | root | localhost | logi_test_2 | Connect | 4964 | User lock | SELECT GET_LOCK('test', -1) |
| 1334 | root | localhost | logi_test_2 | Connect | 4959 | User lock | SELECT GET_LOCK('test', -1) |
| 1335 | root | localhost | logi_test_2 | Connect | 4953 | User lock | SELECT GET_LOCK('test', -1) |
| 1338 | root | localhost | logi_test_2 | Connect | 4949 | User lock | SELECT GET_LOCK('test', -1) |
| 1339 | root | localhost | logi_test_2 | Connect | 4944 | User lock | SELECT GET_LOCK('test', -1) |
| 1340 | root | localhost | logi_test_2 | Connect | 4939 | User lock | SELECT GET_LOCK('test', -1) |
| 1341 | root | localhost | logi_test_2 | Connect | 4934 | User lock | SELECT GET_LOCK('test', -1) |
| 1342 | root | localhost | logi_test_2 | Connect | 4929 | User lock | SELECT GET_LOCK('test', -1) |
| 1343 | root | localhost | logi_test_2 | Connect | 4924 | User lock | SELECT GET_LOCK('test', -1) |
| 1344 | root | localhost | logi_test_2 | Connect | 4919 | User lock | SELECT GET_LOCK('test', -1) |
| 1345 | root | localhost | logi_test_2 | Connect | 4914 | User lock | SELECT GET_LOCK('test', -1) |
| 1346 | root | localhost | logi_test_2 | Connect | 4909 | User lock | SELECT GET_LOCK('test', -1) |
| 1347 | root | localhost | logi_test_2 | Connect | 4904 | User lock | SELECT GET_LOCK('test', -1) |
| 1348 | root | localhost | logi_test_2 | Connect | 4899 | User lock | SELECT GET_LOCK('test', -1) |
| 1349 | root | localhost | logi_test_2 | Connect | 4894 | User lock | SELECT GET_LOCK('test', -1) |
| 1352 | root | localhost | logi_test_2 | Connect | 4889 | User lock | SELECT GET_LOCK('test', -1) |
| 1353 | root | localhost | logi_test_2 | Connect | 4884 | User lock | SELECT GET_LOCK('test', -1) |
Why do the locks appear duplicated here, when only one lock with a given name should be grantable at a time regardless of session?
I tried searching Stack Overflow and reading the MySQL manual but couldn't find anything.
This is a classic problem with cron, EVENT, etc.
I like to recommend this solution:
Instead of repeatedly firing off a potentially slow process, have a single process that loops. It would do the task, then repeat.
Embellishments
Add a "sleep" between iterations.
Add a calculated sleep to pause 'the rest of 5 minutes'.
Do something to observe that the system is busy and sleep longer.
Add a cron/EVENT as a "keepalive" (see the sketch after this list). This would restart the looping task if it dies. This might also be the way to get initially fired up after any type of crash or graceful outage.
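A rough sketch of such a keepalive (the event, lock, and procedure names here are made up, not part of the recommendation itself), reusing the GET_LOCK idea from the question:

-- fires every 5 minutes; restarts the worker only if no instance holds the lock
-- (create it with a non-default DELIMITER in the mysql client)
CREATE EVENT keepalive_worker
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
    -- timeout 0: if a worker is already running (lock held), do nothing
    IF GET_LOCK('worker_alive', 0) = 1 THEN
        CALL worker_loop();                -- hypothetical procedure: do the task, sleep, repeat
        DO RELEASE_LOCK('worker_alive');   -- reached only if the loop exits cleanly
    END IF;
END

If the worker dies, its session ends and the named lock is released automatically, so the next firing of the event restarts it.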
I would also look at the queries -- 5 minutes is a looooong time for an SQL task.
Related
I'm trying to add a column to a specific table. Every time I run the migration up or down, a SLEEP command pops up and blocks everything. Much like many other people who have run into this problem, I kill the blocking process and everything works as expected.
Example: MySQL alter table hangs
I've tried running the migration against a different table and have no issues. It seems to be something specific to this particular table.
Where or what should I be looking for? Why would this issue occur so consistently?
Thanks.
mysql> show processlist;
+-----+------+-----------------+--------+---------+------+---------------------------------+----------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------------+--------+---------+------+---------------------------------+----------------------------------------------+
| 351 | root | localhost:54691 | database | Sleep | 25 | | NULL |
| 352 | root | localhost:54692 | NULL | Sleep | 54 | | NULL |
| 377 | root | localhost | database | Query | 0 | starting | show processlist |
| 381 | root | localhost:54858 | database | Query | 5 | Waiting for table metadata lock | ALTER TABLE company ADD COLUMN active |
| 382 | root | localhost:54860 | database | Sleep | 5 | | NULL |
+-----+------+-----------------+--------+---------+------+---------------------------------+----------------------------------------------+
5 rows in set (0.00 sec)
The offending process in this case was 382.
The answers to many other similar posts say to kill the SLEEP process and then "debug". My problem was related to a circular import issue in Python, not related to MySQL.
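When it really is a MySQL-side blocker, on MySQL 5.7+ the holder can usually be identified directly instead of guessing from Sleep times (a sketch; it assumes the sys schema and the default performance_schema metadata-lock instrumentation are available):

-- lists who waits on and who holds each table metadata lock,
-- including a ready-made KILL statement for the blocking connection
SELECT object_schema, object_name,
       waiting_pid, blocking_pid,
       sql_kill_blocking_connection
FROM sys.schema_table_lock_waits;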
I want to use MySQL connection pooling in Node.js with the felixge node-mysql module.
It appears to work fine, except that I am not sure how to verify that it's really working as intended. The pool connection params passed to mysql.createPool() are:
var dbConnectionParams = {
    connectionLimit: 20,   // maximum number of connections the pool may open
    host: 'localhost',
    user: 'jdoe',
    password: 'somepasswd',
    database: 'myDB'
};
All queries using connections from the pool work fine. However, when I try to see the actual connections using "show processlist" I see about 4 to 8 connections at any time, never 20. Should these not be listed too? Is there any other MySQL statement to see them, or are they not opened until we actually need them? If so, is there any way to force-open them so that later, when a connection is actually needed, there is no time lost in opening it?
I have read the documentation, which states that "connections are lazily created by the pool." I thought the whole idea of pooling was to avoid exactly that, so that connections are not opened on an as-needed (lazy) basis but are pre-opened.
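One server-side way to watch the pool grow (a sketch; 'jdoe' is simply the pool user from the config above) is to poll the connection count instead of eyeballing show processlist:

-- total client threads currently connected to the server
SHOW STATUS LIKE 'Threads_connected';

-- connections opened by the pool user (includes this session if it uses the same account)
SELECT COUNT(*) AS pool_connections
FROM information_schema.PROCESSLIST
WHERE USER = 'jdoe';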
UPDATE: Here is the output. I am trying to correlate it with the connectionLimit parameter as new requests come in.
+----+------+-----------------+-------------+---------+------+-------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-----------------+-------------+---------+------+-------+------------------+
| 38 | jdoe | localhost:50716 | myDB | Sleep | 669 | | NULL |
| 39 | jdoe | localhost | myDB | Query | 0 | NULL | show processlist |
| 41 | jdoe | localhost:50718 | myDB | Sleep | 4 | | NULL |
| 44 | jdoe | localhost:50721 | myDB | Sleep | 4 | | NULL |
| 45 | jdoe | localhost:50722 | myDB | Sleep | 4 | | NULL |
| 46 | jdoe | localhost:50723 | myDB | Sleep | 5 | | NULL |
| 47 | jdoe | localhost:50724 | myDB | Sleep | 4 | | NULL |
| 48 | jdoe | localhost:50725 | myDB | Sleep | 4 | | NULL |
+----+------+-----------------+-------------+---------+------+-------+------------------+
My ~700 MB database has been importing for 1 hr 27 min now; the actual disk activity stopped at around 15 min, but I just left it running to see if it would finish by itself.
The command I ran:
mysqldump DB1 | pv | mysql DB2
So it's pretty much a straight copy from 1 database to another, with DB2 starting as empty.
I can actually see that the data is already in DB2, but the command refuses to end!
So the question is... Should I let it continue to run? Or can I kill it? :/
Updated:
SHOW PROCESSLIST;
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| 2762 | user1 | localhost | DB2 | Query | 5298 | Locked | /*!50003 CREATE*/ /*!50020 DEFINER="user2"@"%"*/ /*!50003 FUNCTION "function1" |
| 2763 | user1 | localhost | DB1 | Sleep | 5298 | | NULL |
| 2770 | user2 | localhost | NULL | Query | 3633 | Locked | SELECT COUNT(*) FROM `INFORMATION_SCHEMA`.`ROUTINES` WHERE `ROUTINE_SCHEMA`='DB2' AND `ROUTIN |
| 2775 | user2 | localhost | NULL | Query | 381 | Locked | SELECT COUNT(*) FROM `INFORMATION_SCHEMA`.`ROUTINES` WHERE `ROUTINE_SCHEMA`='DB2' AND `ROUTIN |
| 2776 | user2 | <ipaddress>:<port> | NULL | Query | 0 | NULL | show processlist |
+------+--------------+---------------------+------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
Some settings I have that are non-standard:
innodb_stats_on_metadata=0
innodb_flush_log_at_trx_commit=2
When I try to drop a table, MySQL hangs. I don't have any other open sessions. How do I resolve this? I have waited for 10 hours and the process has not terminated.
I'm trying an easier answer, for newbies like me:
1) Run:
SHOW PROCESSLIST
If you get something like:
+----+-----------------+-----------------+--------+------------+-----------+---------------------------------+---------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-----------------+-----------------+--------+------------+-----------+---------------------------------+---------------------------------------------------+
| 4 | event_scheduler | localhost | NULL | Daemon | 580410103 | Waiting on empty queue | NULL |
| 13 | root | localhost:50627 | airbnb | Sleep | 10344 | | NULL |
| 17 | root | localhost:50877 | NULL | Query | 2356 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`characteristics` |
| 18 | root | localhost:50878 | airbnb | Query | 2366 | Waiting for table metadata lock | DROP TABLE `airbnb`.`characteristics` |
| 21 | root | localhost:51281 | airbnb | Query | 2305 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`bed_type` |
| 22 | root | localhost:51282 | airbnb | Query | 2301 | Waiting for table metadata lock | SHOW INDEXES FROM `airbnb`.`characteristics` |
| 23 | root | localhost:51290 | airbnb | Query | 2270 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`property_type` |
| 24 | root | localhost:51296 | airbnb | Query | 2240 | Waiting for table metadata lock | SHOW INDEXES FROM `airbnb`.`property_type` |
| 26 | root | localhost:51303 | NULL | Query | 2212 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`characteristics` |
| 27 | root | localhost:51304 | NULL | Query | 2218 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`bed_type` |
| 29 | root | localhost:51306 | NULL | Query | 2176 | Waiting for table metadata lock | SHOW INDEXES FROM `airbnb`.`characteristics` |
| 30 | root | localhost:51308 | NULL | Query | 2122 | Waiting for table metadata lock | DROP TABLE `airbnb`.`characteristics` |
| 34 | root | localhost:51312 | NULL | Query | 2063 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`characteristics` |
| 35 | root | localhost:51313 | NULL | Query | 2066 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`bed_type` |
| 39 | root | localhost:51338 | NULL | Query | 2004 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`characteristics` |
| 40 | root | localhost:51339 | NULL | Query | 2008 | Waiting for table metadata lock | SHOW FULL COLUMNS FROM `airbnb`.`bed_type` |
| 45 | root | localhost | airbnb | Field List | 997 | Waiting for table metadata lock | |
| 46 | root | localhost | airbnb | Field List | 798 | Waiting for table metadata lock | |
| 53 | root | localhost | airbnb | Query | 0 | starting | SHOW PROCESSLIST |
+----+-----------------+-----------------+--------+------------+-----------+---------------------------------+---------------------------------------------------+
with State: Waiting for table metadata lock (as mentioned in the official answer),
2) KILL 13 (13 corresponding to the Id of the blocking session; here it is the long-idle Sleep connection).
If it's indeed a deadlock, all the following processes will continue normally.
Waiting for table metadata lock
drop table tableA name
SELECT l1.lat, l1.lon, l2.zipcode FROM tableA l1, tableB l2 where l1.lat = l2.latitude and l1.lon = l2.longitude limit 10
If this is your table, see this link
You have an implicit deadlock. Kill the other transactions to release the drop, or kill the drop to release the other transactions.
You can use KILL thread_id in the mysql command-line client.
I'm adding further information since I came across another interesting case.
Metadata deadlocks can equally happen between a DDL operation on a given table (DROP, ALTER, ...) and a SELECT query on that table.
Yes, a SELECT.
So if you loop over a cursor in MySQL (or in PHP, for example with PDO::fetch) and run a DDL statement on the same table(s), you will get a deadlock.
One solution to this atypical scenario is to systematically release the implicit metadata locks with a COMMIT statement once every SELECT has been completely fetched.
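A minimal way to see this at the SQL level (a sketch; t is just a placeholder table name): inside a transaction, even a plain SELECT keeps a metadata lock on the table until the transaction ends.

-- session 1: the SELECT takes a metadata lock on t that survives until COMMIT/ROLLBACK
START TRANSACTION;
SELECT * FROM t LIMIT 1;

-- session 2: this now hangs in "Waiting for table metadata lock"
-- ALTER TABLE t ADD COLUMN c INT;

-- session 1: committing releases the metadata lock and unblocks session 2
COMMIT;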
Restarting MySQL might not be the prettiest solution but it worked for me:
sudo /etc/init.d/mysql restart
mysqladmin drop YOURDATABASE
When dropping or renaming a table that uses the MyISAM storage engine, I see that it's waiting for a table metadata lock; however, SHOW FULL PROCESSLIST doesn't reveal the offending query. Any idea?
| 462 | root | xxx.xxx.xxx.xx:54658 | mydb | Sleep | 1162 | | NULL |
| 465 | root | localhost | mydb | Query | 0 | NULL | show full processlist |
| 466 | root | localhost | mydb | Query | 125 | Waiting for table metadata lock | alter table mytable rename to mytable_junk |
SHOW PROCESSLIST will only show connections for the current user. Log in as root, or add the PROCESS privilege to your user.
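For example (the account name is a placeholder), granting the global PROCESS privilege lets SHOW PROCESSLIST list every session, not just your own:

GRANT PROCESS ON *.* TO 'myuser'@'localhost';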