TRUNCATE table on MariaDB just started hanging - mysql

I am running 10.1.26-MariaDB-0+deb9u1 on Debian 9.1 in multiple locations.
Just got a call today that some scripts are no longer running at one of the locations. I've diagnosed that whenever a script tries to execute TRUNCATE <table name>, it just hangs.
I've tried it from the CLI and from Workbench with the same results. I have also tried TRUNCATE TABLE <table name> with the same results.
I cannot figure out A) why this suddenly stopped working, and B) what's different between this location and the other three, where it does work.

I expect you see something like this:
mysql> show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                   |
+----+----------+-----------+------+---------+------+---------------------------------+------------------------+
|  8 | msandbox | localhost | test | Query   |  435 | Waiting for table metadata lock | truncate table mytable |
+----+----------+-----------+------+---------+------+---------------------------------+------------------------+
Try this experiment in a test instance of MySQL (like on your local development environment): open two shell windows and run the mysql client. Create a test table.
mysql> create table test.mytable ( answer int );
mysql> insert into test.mytable set answer = 42;
Now start a transaction and query the table, but do not commit the transaction yet.
mysql> begin;
mysql> select * from test.mytable;
+--------+
| answer |
+--------+
|     42 |
+--------+
In the second window, try to truncate that table.
mysql> truncate table mytable;
<hangs>
What it's waiting for is a metadata lock. It will wait for a number of seconds equal to the lock_wait_timeout configuration option.
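You can check that timeout, and lower it for your session so a blocked DDL statement errors out quickly instead of appearing to hang (a sketch; the 60-second value is an arbitrary choice):

```sql
-- the default is 31536000 seconds, i.e. one year, which looks like "hanging forever"
SHOW VARIABLES LIKE 'lock_wait_timeout';

-- make a blocked TRUNCATE fail fast with a lock-wait-timeout error instead
SET SESSION lock_wait_timeout = 60;
```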
Now go back to the first window, and commit.
mysql> commit;
Now see in your second window, the TRUNCATE TABLE stops waiting, and it finally does its work, truncating the table.
Any DDL statement like ALTER TABLE, TRUNCATE TABLE, DROP TABLE needs to acquire an exclusive metadata lock on the table. But any transaction that has been reading or writing that table holds a shared metadata lock. This means many concurrent sessions can do their work, like SELECT/UPDATE/INSERT/DELETE without blocking each other (because their locks are shared). But a DDL statement requires an exclusive metadata lock, meaning no other metadata lock, either shared or exclusive, can exist.
So I'd guess there's some transaction hanging around that has done some read or write against your table, without committing. Either the query itself is very long-running, or else the query has finished but the transaction hasn't.
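One way to hunt for that transaction (a sketch; this only covers InnoDB transactions) is to query information_schema.INNODB_TRX and look for rows with an old trx_started value and no currently running query:

```sql
-- trx_mysql_thread_id corresponds to the Id column of SHOW PROCESSLIST
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
```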
You have to figure out where you have an outstanding transaction. If you are using MySQL 5.7 or later, you can read the sys.schema_table_lock_waits view while one of your TRUNCATE TABLE statements is waiting.
select * from sys.schema_table_lock_waits\G
*************************** 1. row ***************************
object_schema: test
object_name: mytable
waiting_thread_id: 47
waiting_pid: 8
waiting_account: msandbox@localhost
waiting_lock_type: EXCLUSIVE
waiting_lock_duration: TRANSACTION
waiting_query: truncate table mytable
waiting_query_secs: 625
waiting_query_rows_affected: 0
waiting_query_rows_examined: 0
blocking_thread_id: 48
blocking_pid: 9
blocking_account: msandbox@localhost
blocking_lock_type: SHARED_READ
blocking_lock_duration: TRANSACTION
sql_kill_blocking_query: KILL QUERY 9
sql_kill_blocking_connection: KILL 9
This tells us which session is blocked, waiting for a metadata lock. The waiting_pid (8 in the above example) corresponds to the Id in the processlist of the blocked session.
The blocking_pid (9 in the above example) corresponds to the Id in the processlist of the session that currently holds the lock, and which is blocking the truncate table.
It even tells you exactly how to kill the session that's holding the lock:
mysql> KILL 9;
Once the session is killed, it must release its locks, and the truncate table finally finishes.
mysql> truncate table mytable;
Query OK, 0 rows affected (13 min 34.50 sec)
Unfortunately, you're using MariaDB 10.1, which supports neither the sys schema nor the performance_schema.metadata_locks table it relies on. MariaDB forked from MySQL 5.5, which is nearly ten years old now, and the metadata_locks table didn't exist at that time.
I don't use MariaDB, but a quick search shows they have their own metadata_lock_info plugin for querying metadata locks: https://mariadb.com/kb/en/library/metadata_lock_info/ I haven't used it, so I'll leave it to you to read the docs about that.
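According to that KB page, the plugin exposes an information_schema table once installed. A minimal sketch (I haven't verified this on 10.1 myself):

```sql
-- load the plugin (requires appropriate privileges)
INSTALL SONAME 'metadata_lock_info';

-- list currently held metadata locks, including the holding thread id
SELECT thread_id, lock_mode, lock_duration, lock_type, table_schema, table_name
FROM information_schema.METADATA_LOCK_INFO;
```

Cross-reference thread_id against SHOW PROCESSLIST to find (and, if appropriate, KILL) the session holding the lock on your table.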

Related

How can I force MySQL to obtain a table-lock for a transaction?

I'm trying to perform an operation on a MySQL database table using the InnoDB storage engine. This operation is an INSERT-or-UPDATE type operation where I have an incoming set of data and there may be some data already in the table which must be updated. For example, I might have this table:
test_table
+-------+--------------+------+-----+---------+----------------+
| Field | Type         | Null | Key | Default | Extra          |
+-------+--------------+------+-----+---------+----------------+
| id    | int(11)      | NO   | PRI | NULL    | auto_increment |
| value | varchar(255) | NO   |     | NULL    |                |
+-------+--------------+------+-----+---------+----------------+
... and some sample data:
+----+-------+
| id | value |
+----+-------+
|  1 | foo   |
|  2 | bar   |
|  3 | baz   |
+----+-------+
Now, I want to "merge" the following values:
2, qux
4, corge
My code ultimately ends up issuing the following queries:
BEGIN;
SELECT id, value FROM test WHERE id=2 FOR UPDATE;
UPDATE test SET id=2, value='qux' WHERE id=2;
INSERT INTO test (id, value) VALUES (4, 'corge');
COMMIT;
(I'm not precisely sure what happens with the SELECT ... FOR UPDATE and the UPDATE because I'm using MySQL's Connector/J library for Java and simply calling the updateRow method on a ResultSet. For the sake of argument, let's assume that the queries above are actually what are being issued to the server.)
Note: the above table is a trivial example to illustrate my question. The real table is more complicated and I'm not using the PK as the field to match when executing SELECT ... FOR UPDATE. So it's not obvious whether the record needs to be INSERTed or UPDATEd by just looking at the incoming data. The database MUST be consulted to determine whether to use an INSERT/UPDATE.
The above queries work just fine most of the time. However, when there are more records to be "merged", the SELECT ... FOR UPDATE and INSERT lines can be interleaved, where I cannot predict whether SELECT ... FOR UPDATE or INSERT will be issued and in what order.
The result is that sometimes transactions deadlock because one thread has locked a part of the table for the UPDATE operation and is waiting on a table lock (for the INSERT, which requires a lock on the primary-key index), while another thread has already obtained a table lock for the primary key (presumably because it issued an INSERT query) and is now waiting for a row-lock (or, more likely, a page-level lock) which is held by the first thread.
This is the only place in the code where this table is updated and there are no explicit locks currently being obtained. The ordering of the UPDATE versus INSERT seems to be the root of the issue.
There are a few possibilities I can think of to "fix" this.
1. Detect the deadlock (MySQL throws an error) and simply re-try. This is my current implementation, because the problem is somewhat rare; it happens a few times per day.
2. Use LOCK TABLES to obtain a table lock before the merge process and UNLOCK TABLES afterward. This evidently won't work with MariaDB Galera -- which is likely in our future for this product.
3. Change the code to always issue INSERT queries first. This would result in any table-level locks being acquired first and avoid the deadlock.
The problem with #3 is that it will require more complicated code in a method that is already fairly complicated (a "merge" operation is inherently complex). That more-complicated code also means roughly double the number of queries (SELECT to determine if the row id already exists, then later, another SELECT ... FOR UPDATE/UPDATE to actually update it). This table is under a reasonable amount of contention, so I'd like to avoid issuing more queries if possible.
Is there a way to force MySQL to obtain a table-level lock without using LOCK TABLES? That is, in a way that will work if we move to Galera?
I think you may be able to do what you want by acquiring a set of row and gap locks:
START TRANSACTION;
SELECT id, value
FROM test
WHERE id in (2, 4) -- list all the IDs you need to UPSERT
FOR UPDATE;
UPDATE test SET value = 'qux' WHERE id = 2;
INSERT INTO test (id, value) VALUES (4, 'corge');
COMMIT;
The SELECT query will lock the rows that already exist, and create gap locks for the rows that don't exist yet. The gap locks will prevent other transactions from creating those rows.
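You can see the gap lock in action with a two-session sketch (this assumes the default REPEATABLE READ isolation level and that a row with id=4 does not exist yet):

```sql
-- session 1: id=4 does not exist, so the locking read also takes a gap lock
-- covering the range where id=4 would be inserted
START TRANSACTION;
SELECT id, value FROM test WHERE id IN (2, 4) FOR UPDATE;

-- session 2: this blocks until session 1 commits, because the INSERT needs an
-- insert intention lock inside the gap that session 1 holds
INSERT INTO test (id, value) VALUES (4, 'other');
```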

Calling stored procedures on temporary tables

Is it possible to execute stored procedures on temporary tables in MySQL? I'm trying to create a system for data import that could, theoretically, run concurrently for different users. If one user is importing files into a temporary table while another user is doing the same thing, is it possible for the same shared procedure to be called by both users since the tables referenced in the procedure will match the temporary tables?
The workflow for an individual user would look like this...
Load data into temporary table newdata
Stored procedure is called where munging and updates are done to table newdata
Stored procedure moves data from newdata to the live/permanent tables.
...while another user could, possibly, be doing the same thing.
Yes, you can reference temp tables in a stored procedure:
mysql> create procedure p() select * from t;
Query OK, 0 rows affected (0.03 sec)
mysql> create temporary table t as select 123 union select 456;
Query OK, 2 rows affected (0.02 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> call p;
+-----+
| 123 |
+-----+
| 123 |
| 456 |
+-----+
(Tested on MySQL 5.6.31)
However, most experienced MySQL users try to avoid doing what you're planning if they use replication. The reason is that when a slave restarts for any reason, it drops all temp tables. Any subsequent UPDATE or INSERT...SELECT referencing a temp table then gets an error because the table no longer exists, and an error in the replication stream stops replication.
It might seem like a slave restarting suddenly should be an uncommon occurrence, but if your app creates temp tables frequently, there's a good chance one will be in use at the moment a slave restarts.
The best use of temp tables is to fill them with temp data in one statement, and then use the temp table only by SELECT queries (which are not replicated).
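That pattern looks roughly like this (the table and column names are made up for illustration):

```sql
-- fill the temp table in a single statement...
CREATE TEMPORARY TABLE tmp_totals AS
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id;

-- ...then only read from it; SELECTs are not replicated, so a slave restart
-- that drops the temp table cannot break the replication stream here
SELECT customer_id, total
FROM tmp_totals
WHERE total > 1000;
```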

Possible conflicts when performing an update with WHERE IS NULL and LIMIT?

Assume I have the following table:
+----+---------+
| id | claimed |
+----+---------+
|  1 | NULL    |
|  2 | NULL    |
|  3 | NULL    |
+----+---------+
I can execute this query to update exactly (any) one of the rows without having to execute a select first.
UPDATE mytable SET claimed = [someId] WHERE claimed IS NULL LIMIT 1
However, what happens if two concurrent requests of this query take place. Is it possible for the later request to override the value of the first request? I know the chance of this happening is very slight, but still.
Performing statement UPDATE mytable SET claimed = [someId] WHERE claimed IS NULL LIMIT 1 in a transaction t1 locks the respective record and prevents any other transaction t2 from updating the same record until transaction t1 commits (or aborts). Transaction t2 is blocked in the meanwhile; t2 continues once t1 commits (or aborts), or t2 gets aborted automatically once a timeout is reached.
See the MySQL reference on internal locking methods (row-level locking):
MySQL uses row-level locking for InnoDB tables to support simultaneous
write access by multiple sessions, making them suitable for
multi-user, highly concurrent, and OLTP applications.
and the MySQL reference on locks set by different SQL statements in InnoDB:
UPDATE ... WHERE ... sets an exclusive next-key lock on every record
the search encounters. However, only an index record lock is required
for statements that lock rows using a unique index to search for a
unique row.
and finally the locking behaviour described in the MySQL reference on InnoDB locking (record locks):
If a transaction T1 holds an exclusive (X) lock on row r, a request
from some distinct transaction T2 for a lock of either type on r
cannot be granted immediately. Instead, transaction T2 has to wait for
transaction T1 to release its lock on row r.
So two queries will not grab the same record as long as these two queries run in different transactions.
Note that the complete record is locked, such that other update operations by other transactions are blocked, even if they would update other attributes of the respective record.
I tried it out using Sequel Pro, and you can try it out with any client you want, as follows:
1. Make sure that mytable contains at least two records with claimed IS NULL.
2. Open two connection windows / terminals; let's call them c1 and c2.
3. In c1, execute: start transaction; UPDATE mytable SET claimed = 15 WHERE claimed IS NULL LIMIT 1; -- no commit so far!
4. In c2, execute similar commands (note the different value for claimed): start transaction; UPDATE mytable SET claimed = 16 WHERE claimed IS NULL LIMIT 1; -- again, no commit so far.
5. Window c2 should inform you that it is working (i.e. waiting for the query to finish).
6. Switch to window c1 and execute commit;
7. Switch to window c2, where the previously started query should now have finished; execute commit;
8. When looking into mytable, one record should now have claimed=15, and another one should have claimed=16.

phpMyAdmin becomes unresponsive due to InnoDB table metadata lock during bulk INSERT / DELETE

When I run a long query in phpmyadmin, I can't access any other table through phpmyadmin in a different window or even browser.
Why is that? Can I fix it?
UPDATE:
some more details:
- The table I'm running a query on is Innodb
- I'm able to connect through command line
- The long query is a DELETE which takes a couple of hours to finish
UPDATE2:
I've done some testing from the command line: I loaded the table from a dump file while trying to open phpmyadmin, which also didn't work. SHOW PROCESSLIST shows a query stuck:
| 36732 | root | localhost | db_name | Query | 17 | Waiting for table metadata lock | SELECT * FROM table ORDER BY id DESC LIMIT 0, 30 |
So I guess my problem is that the table is locked, even though it's an InnoDB table and the dump file is just a series of INSERT statements one after the other. I assume it's some sort of configuration problem?

MySQL - Huge difference in cardinality on what should be a duplicate table

On my development server I have a column indexed with a cardinality of 200.
The table has about 6 million rows, give or take, and I have confirmed it is an identical row count on the production server.
However, the production server's index has a cardinality of 31938.
They are both MySQL 5.5; however, my dev server is Ubuntu Server 13.10 and the production server is Windows Server 2012.
Any ideas on what would cause such a difference in what should be the exact same data?
The data was loaded into the production server from a MySQL dump of the dev server.
EDIT: It's worth noting that I have queries that take about 15 minutes to run on my dev server but seem to run forever on the production server, due to what I believe to be these indexing issues. Different numbers of rows are being pulled within sub-queries.
MySQL's CHECKSUM TABLE might help you verify that the tables contain the same data:
-- a table
create table test.t ( id int unsigned not null auto_increment primary key, r float );
-- some data ( 18000 rows or so )
insert into test.t (r) select rand() from mysql.user join mysql.user u2;
-- a duplicate
create table test.t2 select * from test.t;
-- introduce a difference somewhere in there
update test.t2 set r = 0 order by rand() limit 1;
-- and prove the tables are different easily:
mysql> checksum table test.t;
+--------+------------+
| Table  | Checksum   |
+--------+------------+
| test.t | 2272709826 |
+--------+------------+
1 row in set (0.00 sec)

mysql> checksum table test.t2;
+---------+-----------+
| Table   | Checksum  |
+---------+-----------+
| test.t2 | 312923301 |
+---------+-----------+
1 row in set (0.01 sec)
Beware that checksumming locks the tables while it runs.
For more advanced functionality, the percona toolkit can both checksum and sync tables (though it's based on master/slave replication scenarios so it might not be perfect for you).
Beyond checksumming, you might consider looking at REPAIR TABLE or OPTIMIZE TABLE; note that ANALYZE TABLE recalculates InnoDB's index statistics, which is where the reported cardinality comes from.