Monitoring progress of a SQL script with many UPDATEs in MariaDB

I'm running a script with several million update statements like this:
UPDATE data SET value = 0.9234 WHERE fId = 47616 AND modDate = '2018-09-24' AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
fId, modDate and valueDate are the 3 components of the data table's composite primary key.
I initially ran this with AUTOCOMMIT=1, but I figured it would speed up if I set AUTOCOMMIT=0 and committed the updates in batches of 25.
In autocommit mode, I used SHOW PROCESSLIST and I'd see the UPDATE statement in the output, so from the fId value I could tell how far the script had progressed.
However, with autocommit off, watching it run now, I haven't seen anything from my script in SHOW PROCESSLIST, just this:
+--------+----------------+---------------------+------+---------+------+-------+------------------+----------+
| Id     | User           | Host                | db   | Command | Time | State | Info             | Progress |
+--------+----------------+---------------------+------+---------+------+-------+------------------+----------+
| 610257 | schema_owner_2 | 201.177.12.57:53673 | mydb | Sleep   |    0 |       | NULL             |    0.000 |
| 611020 | schema_owner_1 | 201.177.12.57:58904 | mydb | Query   |    0 | init  | show processlist |    0.000 |
+--------+----------------+---------------------+------+---------+------+-------+------------------+----------+
The Sleep status makes me paranoid that other users on the system are blocking the updates, but when I run SHOW OPEN TABLES I can't tell whether there's a problem:
MariaDB [mydb]> SHOW OPEN TABLES;
+----------+--------------+--------+-------------+
| Database | Table        | In_use | Name_locked |
+----------+--------------+--------+-------------+
| mydb     | data         |      2 |           0 |
| mydb     | forecast     |      1 |           0 |
| mydb     | modification |      0 |           0 |
| mydb     | data3        |      0 |           0 |
+----------+--------------+--------+-------------+
Is my script going to wait forever? Should I go back to using autocommit mode? Is there any way to see how far it's progressed? I guess I can inspect the data for the updates but that would be laborious to put together.

Check for progress by actually checking the data.
I assume you are doing COMMIT?
It is reasonable to see nothing -- each UPDATE takes only a few milliseconds; there will be Sleep time between UPDATEs.
Time being 0 is your clue that it is progressing.
There won't necessarily be any clue in the PROCESSLIST of how far it has gotten.
You could add SELECT SLEEP(1), $fid; to the loop, where $fid (or whatever) is the id of the last UPDATEd row. That would slow progress by 1 second per 25 rows, so maybe you should use batches of 100 or 500.
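For instance, each batch the script emits could look like this (a sketch; the literal values are taken from the question and the batch size is arbitrary):

UPDATE data SET value = 0.9234 WHERE fId = 47616 AND modDate = '2018-09-24' AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
-- ... the remaining UPDATEs of this batch ...
COMMIT;
-- Progress marker: visible in SHOW PROCESSLIST for about a second;
-- the last_fid column tells you how far the script has gotten.
SELECT SLEEP(1), 47616 AS last_fid;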

Related

MySQL ALTER TABLE taking a long time on a small table

I have two tables in my scenario:
table1, which has about 20 tuples
table2, which has about 3 million tuples
table2 has a foreign key referencing table1's "ID" column.
When I try to execute the following query:
ALTER TABLE table1 MODIFY vccolumn VARCHAR(1000);
It takes forever. Why is it taking that long? I have read that it should not, because it only has 20 tuples.
Is there any way to speed it up without server downtime? The query is also locking the table.
I would guess the ALTER TABLE is waiting on a metadata lock and has not actually started altering anything.
What is a metadata lock?
When you run any query like SELECT/INSERT/UPDATE/DELETE against a table, it must acquire a metadata lock. Those queries do not block each other; any number of them can hold a metadata lock on the same table at once.
But a DDL statement like CREATE/ALTER/DROP/TRUNCATE/RENAME, or even CREATE TRIGGER or LOCK TABLES, must acquire an exclusive metadata lock. If any transaction still holds a metadata lock on the table, the DDL statement waits.
You can demonstrate this. Open two terminal windows and open the mysql client in each window.
Window 1: CREATE TABLE foo ( id int primary key );
Window 1: START TRANSACTION;
Window 1: SELECT * FROM foo; -- it doesn't matter that the table has no data
Window 2: DROP TABLE foo; -- notice it waits
Window 1: SHOW PROCESSLIST;
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| 679 | root | localhost | test | Query | 0 | starting | show processlist | 0 | 0 |
| 680 | root | localhost | test | Query | 4 | Waiting for table metadata lock | drop table foo | 0 | 0 |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
You can see the drop table waiting for the table metadata lock. Just waiting. How long will it wait? Until the transaction in window 1 completes. Eventually it will time out after lock_wait_timeout seconds (by default, this is set to 1 year).
Window 1: COMMIT;
Window 2: Notice it stops waiting, and it immediately drops the table.
So what can you do? Make sure there are no long-running transactions blocking your ALTER TABLE. Even a transaction that ran a quick SELECT against your table earlier will hold its metadata lock until the transaction commits.
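To hunt for them, you can list the currently open InnoDB transactions and how long each has been running (a sketch; information_schema.INNODB_TRX is a standard table, but kill only sessions you have verified to be stale):

SELECT trx_id, trx_mysql_thread_id, trx_state, trx_started
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
-- If a transaction is clearly abandoned, terminate its session:
-- KILL <trx_mysql_thread_id>;

You can also lower lock_wait_timeout for your own session before running the ALTER TABLE (e.g. SET SESSION lock_wait_timeout = 60;) so that it fails fast instead of queueing for up to a year.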

How to investigate MySQL errors

Below is a query I found to display my errors and warnings in MySQL:
SELECT
`DIGEST_TEXT` AS `query`,
`SCHEMA_NAME` AS `db`,
`COUNT_STAR` AS `exec_count`,
`SUM_ERRORS` AS `errors`,
(ifnull((`SUM_ERRORS` / nullif(`COUNT_STAR`,0)),0) * 100) AS `error_pct`,
`SUM_WARNINGS` AS `warnings`,
(ifnull((`SUM_WARNINGS` / nullif(`COUNT_STAR`,0)),0) * 100) AS `warning_pct`,
`FIRST_SEEN` AS `first_seen`,
`LAST_SEEN` AS `last_seen`,
`DIGEST` AS `digest`
FROM
performance_schema.events_statements_summary_by_digest
WHERE
((`SUM_ERRORS` > 0) OR (`SUM_WARNINGS` > 0))
ORDER BY
`SUM_ERRORS` DESC,
`SUM_WARNINGS` DESC;
Is there some way to drill down into performance_schema to find the exact error message that is associated with the errors or warnings above?
I was also curious what it means if the db column or query column shows up as NULL. Below are a few specific examples of what I'm talking about
+------------------------------------------+------+------------+--------+----------+--------+
| query                                    | db   | exec_count | errors | warnings | digest |
+------------------------------------------+------+------------+--------+----------+--------+
| SHOW MASTER LOGS                         | NULL |        192 |    192 |        0 | ...    |
| NULL                                     | NULL |     553477 |     64 |    18783 | NULL   |
| SELECT COUNT ( * ) FROM `mysql` . `user` | NULL |         48 |     47 |        0 | ...    |
+------------------------------------------+------+------------+--------+----------+--------+
I am also open to using a different query, if someone has one, to display these errors/warnings.
The message will be in the performance_schema.events_statements_history.message_text column. You do need to make sure that the performance_schema_events_statements_history_size config variable is set to a positive and sufficiently large value, and that history collection is enabled. To enable history collection, run:
update performance_schema.setup_consumers set enabled='YES'
where name='events_statements_history';
To check if it is enabled:
select * from performance_schema.setup_consumers where
name='events_statements_history';
A NULL value for db means there was no active database selected. Note that the active database does not have to be the same as the database of the tables involved in a query; it is used as the default when a database is not explicitly specified in the query.
This will only give you error messages, not warning messages. From a brief look at the code it appears that the warning text does not get logged anywhere - which is understandable given that one statement could produce millions of them. So you have a couple of options:
Extract the statement from events_statements_history.sql_text, re-execute it, and then run SHOW WARNINGS
Extract the statement, track it down in your application code, then instrument your code to log the output of SHOW WARNINGS in hopes of catching it live if manual execution fails to reproduce the warnings
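Either way, for the error messages themselves, a query along these lines should work (a sketch; sql_text, mysql_errno, returned_sqlstate, message_text, and errors are standard columns of events_statements_history):

SELECT thread_id, event_id, sql_text,
       mysql_errno, returned_sqlstate, message_text
FROM performance_schema.events_statements_history
WHERE errors > 0
ORDER BY thread_id, event_id;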

mysql dump query hangs

I ran mysql -u root -p gf < ~/gf_backup.sql to restore my db. However, when I look at the process list, I see that one query has been idle for a long time. I do not know the reason why.
mysql> show processlist;
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| 662 | root | localhost | gf | Query | 18925 | query end | INSERT INTO `gf_1` VALUES (1767654,'90026','Lddd',3343,34349),(1 |
| 672 | root | localhost | gf | Query | 0 | NULL | show processlist |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
Please check free space with the df -h command (if under Linux/Unix). If you're out of space, do not kill or restart MySQL; free some space first and let it catch up with the pending changes.
You may also want to check the max_allowed_packet setting in my.cnf and set it to something like 256M; please refer to http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_allowed_packet
Probably your dump is very large and contains a lot of normalized data (records split across a bunch of tables, with a bunch of foreign key constraints, indexes, and so on).
If so, you may try removing all constraint and index definitions from the SQL file, importing the data, and then re-creating the constraints and indexes you removed. This is a well-known trick to speed up imports, because INSERT commands that don't have to validate any constraints are a lot faster, and the indexes can be built afterwards in a single pass.
See also: http://support.tigertech.net/mysql-large-inserts
Of course, you should kill the hung query first, and remove any fragments it has already created.
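If editing the dump file is impractical, a lighter-weight variant of the same idea is to relax the checks just for the importing session (a sketch; autocommit, unique_checks, and foreign_key_checks are standard server variables):

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... run the dump's INSERT statements here, e.g. via SOURCE ...
COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;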

"Lock wait timeout exceeded" without no process in processlist

I got the following error while trying to perform a batch deletion with a reasonable limit:
query=(DELETE FROM `A` WHERE `id` < 123456 LIMIT 1000)
exception=(1205, 'Lock wait timeout exceeded; try restarting transaction')
And
mysql> SHOW OPEN TABLES like 'A';
+----------+-------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+-------+--------+-------------+
| D        | A     |      3 |           0 |
+----------+-------+--------+-------------+
1 row in set (0.22 sec)
I suspect there might be a deadlock, but SHOW FULL PROCESSLIST outputs only itself. Where should I dig?
InnoDB, MySQL 5.5
This means there is an uncommitted transaction somewhere. Check other sessions and other applications that may be operating on this table.
There can also be unclosed transactions left behind after plain SELECTs. I solved (I hope) such a case by adding a COMMIT/ROLLBACK after standalone SELECTs (ones that are not part of a larger transaction).
The idea looked strange to me, so I spent some time on other attempts before trying it. But it helped.
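If you can catch the DELETE while it is still waiting, MySQL 5.5 with InnoDB can name the blocker directly (a sketch; INNODB_TRX and INNODB_LOCK_WAITS are standard information_schema tables in that version):

-- Run this while the DELETE is waiting on its lock:
SELECT w.requesting_trx_id, r.trx_mysql_thread_id AS waiting_thread,
       w.blocking_trx_id,   b.trx_mysql_thread_id AS blocking_thread,
       b.trx_started, b.trx_state
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;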

How to kill MySQL connections

I'm building a website with MySQL. I'm using TOAD for MySQL and suddenly I can't connect to the database as I'm getting an error:
"Too many connections"
Is there any way in Toad for MySQL to view existing connections so I can kill them, or simply close all connections altogether?
No, there is no built-in MySQL command for that. There are various tools and scripts that support it; you can kill some connections manually, or restart the server (but that will be slower).
Use SHOW PROCESSLIST to view all connections, and KILL the process IDs you want to kill.
You could edit the timeout settings to have the MySQL daemon kill inactive connections itself, or raise the connection count; see the sketch after this paragraph. You can even limit the number of connections per username, so that if the process keeps misbehaving, the only thing affected is the process itself and no other clients on your database get locked out.
If you can't connect yourself anymore to the server, you should know that MySQL always reserves 1 extra connection for a user with the SUPER privilege. Unless your offending process is for some reason using a username with that privilege...
Then, after you can access your database again, you should fix the process (website) that's spawning that many connections.
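For example, the timeouts and the per-user cap can be adjusted at runtime (a sketch; wait_timeout, interactive_timeout, and max_user_connections are standard server variables, and the values here are only illustrative):

-- Disconnect clients idle for more than 5 minutes
-- (the default wait_timeout is 28800 seconds, i.e. 8 hours):
SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;
-- Cap the number of simultaneous connections per account:
SET GLOBAL max_user_connections = 50;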
mysql> SHOW PROCESSLIST;
+-----+------+-----------------+-----+---------+------+-------+------------------+
| Id  | User | Host            | db  | Command | Time | State | Info             |
+-----+------+-----------------+-----+---------+------+-------+------------------+
| 143 | root | localhost:61179 | cds | Query   |    0 | init  | SHOW PROCESSLIST |
| 192 | root | localhost:53793 | cds | Sleep   |    4 |       | NULL             |
+-----+------+-----------------+-----+---------+------+-------+------------------+
2 rows in set (0.00 sec)
mysql> KILL 192;
Query OK, 0 rows affected (0.00 sec)
Session 192 then sees:
mysql> SELECT * FROM exept;
+----+
| id |
+----+
|  1 |
+----+
1 row in set (0.00 sec)
mysql> SELECT * FROM exept;
ERROR 2013 (HY000): Lost connection to MySQL server during query
While you can't kill all open connections with a single command, you can create a set of queries to do that for you if there are too many to do by hand.
This example will create a series of KILL <pid>; queries for all some_user's connections from 192.168.1.1 to my_db.
SELECT
CONCAT('KILL ', id, ';')
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE `User` = 'some_user'
AND `Host` = '192.168.1.1'
AND `db` = 'my_db';
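Note that this only generates the KILL statements as result rows; copy them out of the result set and run them in the client to actually terminate the connections.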
I would recommend first checking the maximum number of connections the server allows:
show variables like "max_connections";
Sample output:
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 13    |
+-----------------+-------+
1 row in set
Then increase it, for example:
set global max_connections = 500;
In MySQL Workbench:
Left-hand side navigator > Management > Client Connections
It gives you the option to kill queries and connections.
Note: this is not Toad, which the OP asked about, but MySQL Workbench users like me may end up here.
As mentioned above, there is no special command to do it. However, if all those connections are inactive, running 'flush tables;' can release the connections that are not active.