I have events that run every night on my MySQL server, and I'm not really sure what is going on, because one of them is still running in the morning even though I scheduled it earlier than the other events.
The question is:
how can I check the history or log of the events that ran,
and see which one was locked during the night and which ones ran or did not run?
Thank you
This line will help you:
SELECT * FROM INFORMATION_SCHEMA.events;
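For example, to see when each event last started and whether it is enabled, you could narrow that query to the relevant columns (a minimal sketch; note that LAST_EXECUTED records only the most recent start time, not a full history):
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, LAST_EXECUTED
FROM INFORMATION_SCHEMA.EVENTS
ORDER BY LAST_EXECUTED;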
You can enable the slow query log on the MySQL server so that slow queries are logged to a file or to a MySQL table. Follow these steps:
Check whether the slow query log is enabled for your MySQL server. Execute this query on the server:
mysql> show global variables like "%slow_query_log%";
+---------------------+----------------------------------+
| Variable_name       | Value                            |
+---------------------+----------------------------------+
| slow_query_log      | OFF                              |
| slow_query_log_file | /var/lib/mysql/siddhant-slow.log |
+---------------------+----------------------------------+
2 rows in set (0.00 sec)
If the slow query log is not enabled, enable it like this (or you can enable it in the my.cnf or my.ini MySQL configuration file):
mysql> set global slow_query_log="ON";
Query OK, 0 rows affected (0.01 sec)
Also check long_query_time, i.e. the time a query must exceed to be considered slow. Queries taking more time than this value will be logged in the slow query log.
mysql> show global variables like "%long_query%";
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
Set this value to your requirement as follows (here I am setting it to two seconds):
mysql> set global long_query_time=2;
Query OK, 0 rows affected (0.00 sec)
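To make these settings survive a server restart, the equivalent lines can go in the configuration file mentioned above. A minimal sketch for the [mysqld] section (the log file path is just an example):
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow.log
long_query_time     = 2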
Now slow queries will be logged to the slow query log file path returned by the earlier query. If your requirements are such that you need to monitor MySQL in real time, you can have a look at a commercial GUI MySQL monitoring tool that analyzes the slow query log in real time.
I am at the REPEATABLE-READ isolation level.
Why does it make me wait?
I understand that all reads (SELECTs) at any isolation level are non-blocking.
What am I missing?
Session 1:
mysql> lock tables users write;
Query OK, 0 rows affected (0.00 sec)
Session 2:
mysql> begin;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from users where id = 1; -- blocks here until the lock is released
Session 1:
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
Session 2:
mysql> select * from users where id = 1;
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
| id | name            | email              | rol  | email_verified_at   | password                                                     | remember_token | created_at          | updated_at          | deleted_at |
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
|  1 | Bella Lueilwitz | orlo19@example.com | NULL | 2022-08-01 17:22:29 | $2y$10$92IXUNpkjO0rOQ5byMi.Ye4oKoEa3Ro9llC/.og/at2.uheWG/igi | MvMlaX9TQj     | 2022-08-01 17:22:29 | 2022-08-01 17:22:29 | NULL       |
+----+-----------------+--------------------+------+---------------------+--------------------------------------------------------------+----------------+---------------------+---------------------+------------+
1 row in set (10.51 sec)
In this question, the opposite is true:
Why doesn't LOCK TABLES [table] WRITE prevent table reads?
You reference a question about MySQL 5.0 posted in 2013. The answer from that time suggests that the client was allowed to get a result that had been cached in the query cache. Since then, MySQL 5.6 and 5.7 disabled the query cache by default, and MySQL 8.0 removed the feature altogether. This is a good thing.
The documentation says:
WRITE lock:
Only the session that holds the lock can access the table. No other session can access it until the lock is released.
This was true in the MySQL 5.0 days too, but the query cache allowed some clients to get around it. But I guess it wasn't reliable even then, because if the client ran a query that happened not to be cached, I suppose it would revert to the documented behavior. Anyway, it's moot, because all currently supported versions of MySQL should have the query cache disabled or removed.
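If you are on a pre-8.0 version and want to rule the query cache in or out, you can check whether it is enabled (a quick sketch; these variables exist only in MySQL 5.7 and earlier):
SHOW GLOBAL VARIABLES LIKE 'query_cache_type';
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';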
I have found a use of GET_LOCK('lockname', 0) in MariaDB in a Java application that I am working on.
The timeout value used here is 0, so it should work in a non-blocking fashion, I suppose. But after seeing some exceptions in the log file, I have the impression that it is still trying to get the lock using a default timeout. Adding a call to IS_FREE_LOCK('lockname') before the GET_LOCK call makes the application run smoothly.
My question is: what is the impact of using 0 as the timeout value here?
Have you determined the timeout? I can't reproduce the problem from the command line:
Session 1:
MariaDB [(none)]> SELECT GET_LOCK('lock1', 10);
+-----------------------+
| GET_LOCK('lock1', 10) |
+-----------------------+
|                     1 |
+-----------------------+
1 row in set (0.000 sec)
Session 2:
MariaDB [(none)]> SELECT GET_LOCK('lock1', 0.5);
+------------------------+
| GET_LOCK('lock1', 0.5) |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.500 sec)
MariaDB [(none)]> SELECT GET_LOCK('lock1', 0);
+----------------------+
| GET_LOCK('lock1', 0) |
+----------------------+
|                    0 |
+----------------------+
1 row in set (0.000 sec)
"lock wait timeout" has nothing to do with GET_LOCK. It only applies to InnoDB transactions. The default for innodb_lock_wait_timeout is 50 seconds. (In my opinion, that is much too high.)
InnoDB transactions should be designed to finish in very few seconds. Never keep a transaction open while waiting for user interaction; a potty break could lead to "lock wait timeout".
I see that it is a "BatchUpdate". Is this loading a lot of data? Is it coming from some slow source? Could you use autocommit and not put the entire load into a single transaction?
Another thing... If you start a transaction (BEGIN or START TRANSACTION), then do GET_LOCK('foo', 51), and 'foo' is not available, you are asking for a "lock wait timeout".
Please provide the bigger picture (BatchUpdate, reason for GET_LOCK, etc) so we can dig deeper.
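As to the 0 timeout itself: GET_LOCK('lockname', 0) returns immediately, and it is the caller's job to check the return value rather than rely on any timeout. A minimal sketch of the non-blocking pattern (the lock name is just an example):
-- Returns 1 if the lock was acquired immediately, 0 if another session holds it
SELECT GET_LOCK('lockname', 0);
-- ... do the protected work only if the previous call returned 1 ...
-- Release the lock when done
DO RELEASE_LOCK('lockname');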
Is there any way to skip "locked rows" when we make "SELECT FOR UPDATE" in MySQL with an InnoDB table?
E.g.: terminal t1
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable ORDER BY id ASC limit 5 for update;
+-------+
| id    |
+-------+
|     1 |
|    15 |
| 30217 |
| 30218 |
| 30643 |
+-------+
5 rows in set (0.00 sec)
mysql>
At the same time, terminal t2:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable where id>30643 order by id asc limit 2 for update;
+-------+
| id    |
+-------+
| 30939 |
| 31211 |
+-------+
2 rows in set (0.01 sec)
mysql> select id from mytable order by id asc limit 5 for update;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql>
So if I launch a query forcing it to select other rows, it's fine.
But is there a way to skip the locked rows?
I guess this must be a common problem in concurrent processing, but I did not find any solution.
EDIT:
In reality, my different concurrent processes are doing something apparently really simple:
take the first rows (which don't contain a specific flag - e.g.: "WHERE myflag_inUse!=1").
Once I get the result of my "select for update", I update the flag and commit the rows.
So I just want to select the rows which are not already locked and where myflag_inUse!=1...
The following link helped me understand why I get the timeout, but not how to avoid it:
MySQL 'select for update' behaviour
mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+-------------------------+
| Variable_name           | Value                   |
+-------------------------+-------------------------+
| innodb_version          | 5.5.46                  |
| protocol_version        | 10                      |
| slave_type_conversions  |                         |
| version                 | 5.5.46-0ubuntu0.14.04.2 |
| version_comment         | (Ubuntu)                |
| version_compile_machine | x86_64                  |
| version_compile_os      | debian-linux-gnu        |
+-------------------------+-------------------------+
7 rows in set (0.00 sec)
MySQL 8.0 introduced support for both SKIP LOCKED and NOWAIT.
SKIP LOCKED is useful for implementing a job queue (a.k.a. batch queue) so that you can skip over rows that are already locked by a concurrent transaction.
NOWAIT is useful for avoiding waiting until a concurrent transaction releases the locks that we are also interested in locking.
Without NOWAIT, we either have to wait until the locks are released (at commit or release time by the transaction that currently holds them) or the lock acquisition times out. NOWAIT acts as a lock timeout with a value of 0.
For more details, see the MySQL documentation on SKIP LOCKED and NOWAIT.
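Applied to the table from the question, a minimal sketch of both modifiers on MySQL 8.0+ (table and column names are taken from the question; the LIMIT is illustrative):
-- Each worker grabs up to 5 unclaimed rows, ignoring rows locked by other workers
SELECT id FROM mytable
WHERE myflag_inUse != 1
ORDER BY id ASC
LIMIT 5
FOR UPDATE SKIP LOCKED;
-- Or fail immediately with an error instead of waiting for the lock
SELECT id FROM mytable
ORDER BY id ASC
LIMIT 5
FOR UPDATE NOWAIT;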
This appears to now exist in MySQL starting in 8.0.1:
https://mysqlserverteam.com/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows/
Starting with MySQL 8.0.1 we are introducing the SKIP LOCKED modifier
which can be used to non-deterministically read rows from a table
while skipping over the rows which are locked. This can be used by
our booking system to skip orders which are pending. For example:
However, I think that version is not necessarily production ready.
Unfortunately, it seems that there is no way to skip locked rows in a SELECT ... FOR UPDATE so far.
It would be great if we could use something like Oracle's FOR UPDATE SKIP LOCKED.
In my case, the queries launched in parallel are both exactly the same, and contain a WHERE clause and a GROUP BY over several million rows... Because the queries need between 20 and 40 seconds to run, that was (as I already knew) a big part of the problem.
The only (temporary and not ideal) solution I saw was to move some of the rows (millions of them) that I would not directly use, in order to reduce the time the query takes.
So I will still have the same behavior, but I will wait less time...
I was expecting a way to not select the locked rows in the SELECT.
I don't mark this as the answer, so if a new clause is added to MySQL (or discovered), I can accept it later...
I'm sorry, but I think you approach the problem from the wrong angle. If your user wants to list records from a table that satisfy certain selection criteria, then your query should return them all, or return an error message and provide no result set whatsoever. But the query should not return only a subset of the results, leading the user to believe that he has all the matching records.
The issue should be addressed by making sure that your application locks as few rows as possible, for as little time as possible.
Walk through the table in chunks of the PRIMARY KEY, using some suitable LIMIT so you are not looking at "too many" rows at once.
By using the PK, you are ordering things in a predictable way; this virtually eliminates deadlocks.
By using LIMIT, you will keep from hogging too much at once. The LIMIT should be embodied as a range over the PK. This makes it quite clear if two threads are about to step on each other.
More details are (indirectly) in my blog on big deletes.
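A hedged sketch of that chunked walk, assuming an AUTO_INCREMENT primary key named id and a batch size of 100 (both values are illustrative):
START TRANSACTION;
-- Lock only a narrow, predictable range of the primary key
SELECT id FROM mytable
WHERE id > 30643        -- last id this worker finished with (example value)
ORDER BY id
LIMIT 100
FOR UPDATE;
-- ... process those rows, update the flag, then ...
COMMIT;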
I have followed a few tutorials on tracking down slow queries through the slow query log.
I have tried changing long_query_time to a value of 1 for testing purposes, but whatever I do, a query only makes it into the log when the default time of 10 seconds is reached.
I tried:
set @@GLOBAL.long_query_time = 1;
set global long_query_time = 1;
When using either of these commands:
show variables like '%long%';
show global variables like '%long%';
I get the result that the variable was changed.
I have the exact same query running, just adding more LEFT JOIN entries to make it run longer. Whenever the query runs for 10 seconds or longer, it is logged, but it does NOT show up in the log when it runs for less than that, even though all my variables appear to say they have been changed.
I am logged into MySQL as root as I make these changes.
I restarted Apache and MySQL, still no dice.
My version information is:
Server version: 5.1.63-log SUSE MySQL RPM
When I query both the session and the global variables (I tried both), I get this:
mysql> show variables like '%long%';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| long_query_time    | 1.000000 |
| max_long_data_size | 1048576  |
+--------------------+----------+
2 rows in set (0.00 sec)
mysql> show global variables like '%long%';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| long_query_time    | 1.000000 |
| max_long_data_size | 1048576  |
+--------------------+----------+
2 rows in set (0.00 sec)
The general logging feature is obviously on, and it is redirected to TABLE, or I wouldn't get an entry in the log at all.
The setting log_queries_not_using_indexes, if turned on, starts logging EVERY query, even if it does not take 1 second to execute.
What am I missing?
Thanks!
The configuration below makes MySQL log queries whose execution time is more than half a second:
slow_query_log = 1
long_query_time = 0.5
log-slow-queries = /var/log/mysql/log-slow-queries.log
log_queries_not_using_indexes = 0
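When testing from a connection that was already open, it is also worth comparing the session value with the global one, since SET GLOBAL only applies to sessions opened after the change (a quick check; the 0.5 value mirrors the configuration above):
SELECT @@GLOBAL.long_query_time, @@SESSION.long_query_time;
-- If the session value still shows the old setting, reconnect or also run:
SET SESSION long_query_time = 0.5;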
I'm building a website with MySQL. I'm using TOAD for MySQL, and suddenly I can't connect to the database, as I'm getting the error:
"Too many connections"
Is there any way in Toad for MySQL to view existing connections so I can kill them, or simply close all connections altogether?
No, there is no built-in MySQL command for that. There are various tools and scripts that support it; you can kill some connections manually, or restart the server (but that will be slower).
Use SHOW PROCESSLIST to view all connections, and KILL the process ID's you want to kill.
You could edit the timeout setting so the MySQL daemon kills inactive connections itself, or raise the connection limit. You can even limit the number of connections per username, so that if the process keeps misbehaving, it is the only one affected and no other clients on your database get locked out.
If you can't connect yourself anymore to the server, you should know that MySQL always reserves 1 extra connection for a user with the SUPER privilege. Unless your offending process is for some reason using a username with that privilege...
Then, once you can access your database again, you should fix the process (website) that's spawning that many connections.
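A hedged sketch of those settings (the values and the account name are only examples; MAX_USER_CONNECTIONS via ALTER USER requires MySQL 5.7+, older versions use GRANT ... WITH MAX_USER_CONNECTIONS):
-- Drop connections that have been idle for more than 5 minutes
SET GLOBAL wait_timeout = 300;
-- Raise the overall connection limit if you just need more headroom
SET GLOBAL max_connections = 300;
-- Cap how many simultaneous connections a single account may open
ALTER USER 'webapp'@'localhost' WITH MAX_USER_CONNECTIONS 20;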
mysql> SHOW PROCESSLIST;
+-----+------+-----------------+------+---------+------+-------+------------------+
| Id  | User | Host            | db   | Command | Time | State | Info             |
+-----+------+-----------------+------+---------+------+-------+------------------+
| 143 | root | localhost:61179 | cds  | Query   |    0 | init  | SHOW PROCESSLIST |
| 192 | root | localhost:53793 | cds  | Sleep   |    4 |       | NULL             |
+-----+------+-----------------+------+---------+------+-------+------------------+
2 rows in set (0.00 sec)
mysql> KILL 192;
Query OK, 0 rows affected (0.00 sec)
Connection 192 (the one that was killed) then sees:
mysql> SELECT * FROM exept;
+----+
| id |
+----+
|  1 |
+----+
1 row in set (0.00 sec)
mysql> SELECT * FROM exept;
ERROR 2013 (HY000): Lost connection to MySQL server during query
While you can't kill all open connections with a single command, you can create a set of queries to do that for you if there are too many to do by hand.
This example will create a series of KILL <pid>; queries for all of some_user's connections from 192.168.1.1 to my_db.
SELECT
CONCAT('KILL ', id, ';')
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE `User` = 'some_user'
AND `Host` = '192.168.1.1'
AND `db` = 'my_db';
I would recommend first checking the maximum number of connections the server allows:
show variables like "max_connections";
Sample output:
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 13    |
+-----------------+-------+
1 row in set
Then increase it, for example:
set global max_connections = 500;
In MySQL Workbench:
Left-hand side navigator > Management > Client Connections
It gives you the option to kill queries and connections.
Note: this is not TOAD, as the OP asked, but MySQL Workbench users like me may end up here.
As mentioned above, there is no special command to do it. However, if all those connections are inactive, using 'FLUSH TABLES;' can release the connections that are not active.