MySQL ALTER TABLE taking long on a small table

I have two tables in my scenario:
table1, which has about 20 tuples
table2, which has about 3 million tuples
table2 has a foreign key referencing table1's "ID" column.
When I try to execute the following query:
ALTER TABLE table1 MODIFY vccolumn VARCHAR(1000);
it takes forever. Why is it taking that long? I have read that it should not, because the table only has 20 tuples.
Is there any way to speed it up without server downtime? The query is also locking the table.

I would guess the ALTER TABLE is waiting on a metadata lock, and it has not actually started altering anything.
What is a metadata lock?
When you run any query like SELECT/INSERT/UPDATE/DELETE against a table, it must acquire a (shared) metadata lock. Those queries do not block each other; any number of them can hold a metadata lock on the table at the same time.
But a DDL statement like CREATE/ALTER/DROP/TRUNCATE/RENAME, or even CREATE TRIGGER or LOCK TABLES, must acquire an exclusive metadata lock. If any transaction still holds a metadata lock on the table, the DDL statement waits.
You can demonstrate this. Open two terminal windows and open the mysql client in each window.
Window 1: CREATE TABLE foo ( id int primary key );
Window 1: START TRANSACTION;
Window 1: SELECT * FROM foo; -- it doesn't matter that the table has no data
Window 2: DROP TABLE foo; -- notice it waits
Window 1: SHOW PROCESSLIST;
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| Id  | User | Host      | db   | Command | Time | State                           | Info             | Rows_sent | Rows_examined |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
| 679 | root | localhost | test | Query   |    0 | starting                        | show processlist |         0 |             0 |
| 680 | root | localhost | test | Query   |    4 | Waiting for table metadata lock | drop table foo   |         0 |             0 |
+-----+------+-----------+------+---------+------+---------------------------------+------------------+-----------+---------------+
You can see the drop table waiting for the table metadata lock. Just waiting. How long will it wait? Until the transaction in window 1 completes. Eventually it will time out after lock_wait_timeout seconds (by default, this is set to 1 year).
Window 1: COMMIT;
Window 2: Notice it stops waiting, and it immediately drops the table.
So what can you do? Make sure there are no long-running transactions blocking your ALTER TABLE. Even a transaction that ran a quick SELECT against your table earlier will hold its metadata lock until the transaction commits.
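If you need to find the blocker, here is a rough sketch (assuming MySQL 5.7 or later; the sys view also needs the performance_schema metadata-lock instrument enabled to return rows):
-- Open InnoDB transactions, oldest first; a long-idle one is the usual suspect
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;

-- MySQL 5.7+/8.0 with the sys schema: shows the waiting DDL and the session blocking it
SELECT * FROM sys.schema_table_lock_waits;

-- As a last resort, end the blocking session (use the trx_mysql_thread_id from above)
-- KILL 679;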

Related

Monitoring progress of a SQL script with many UPDATEs in MariaDB

I'm running a script with several million update statements like this:
UPDATE data SET value = 0.9234 WHERE fId = 47616 AND modDate = '2018-09-24' AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
fId, modDate and valueDate are the 3 components of the data table's composite primary key.
I initially ran this with AUTOCOMMIT=1 but I figured it would speed up if I set AUTOCOMMIT=0 and wrapped the transactions into blocks of 25.
In autocommit mode, I used SHOW PROCESSLIST and I'd see the UPDATE statement in the output, so from the fId foreign key, I could tell how far the script had progressed.
However, without autocommit, watching it run now, I haven't seen anything in SHOW PROCESSLIST, just this:
610257 schema_owner_2 201.177.12.57:53673 mydb Sleep 0 NULL 0.000
611020 schema_owner_1 201.177.12.57:58904 mydb Query 0 init show processlist 0.000
The Sleep status makes me paranoid that other users on the system are blocking the updates, but if I run SHOW OPEN TABLES I'm not sure whether there's a problem:
MariaDB [mydb]> SHOW OPEN TABLES;
+----------+----------------+--------+-------------+
| Database | Table          | In_use | Name_locked |
+----------+----------------+--------+-------------+
| mydb     | data           |      2 |           0 |
| mydb     | forecast       |      1 |           0 |
| mydb     | modification   |      0 |           0 |
| mydb     | data3          |      0 |           0 |
+----------+----------------+--------+-------------+
Is my script going to wait forever? Should I go back to using autocommit mode? Is there any way to see how far it's progressed? I guess I can inspect the data for the updates but that would be laborious to put together.
Check for progress by actually checking the data.
I assume you are doing COMMIT?
It is reasonable to see nothing -- each UPDATE will take very few milliseconds; there will be Sleep time between UPDATEs.
Time being 0 is your clue that it is progressing.
There won't necessarily be any clue in the PROCESSLIST of how far it has gotten.
You could add SELECT SLEEP(1), $fid; in the loop, where $fid (or whatever) is the last UPDATEd row id. That would slow down the progress by 1 second per 25 rows, so maybe you should do groups of 100 or 500.
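A rough sketch of that loop, combining the batching from the question with the SLEEP marker (the batch size and the fId value are just illustrative):
SET autocommit = 0;

START TRANSACTION;
-- ... 25 (or 100, or 500) statements of the same shape as the question's UPDATE ...
UPDATE data SET value = 0.9234
WHERE fId = 47616 AND modDate = '2018-09-24'
  AND valueDate = '2007-09-01' AND last_updated < '2018-10-01';
COMMIT;

-- Progress marker: visible in SHOW PROCESSLIST for one second after each batch
SELECT SLEEP(1), 47616 AS last_fid;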

Stored procedure hanging

A stored procedure hangs from time to time. Any advice?
BEGIN
  DECLARE bookId int;

  SELECT IFNULL(id, 0) INTO bookId FROM products
  WHERE isbn = p_isbn
    AND stoc > 0
    AND status = 'vizibil'
    AND pret_ron = (SELECT MAX(pret_ron) FROM products
                    WHERE isbn = p_isbn
                      AND stoc > 0
                      AND status = 'vizibil')
  ORDER BY stoc DESC
  LIMIT 0, 1;

  IF bookId > 0 THEN
    UPDATE products SET afisat = 'nu' WHERE isbn = p_isbn;
    UPDATE products SET afisat = 'da' WHERE id = bookId;
    SELECT bookId INTO obookId;
  ELSE
    SELECT id INTO bookId FROM products
    WHERE isbn = p_isbn
      AND stoc = 0
      AND status = 'vizibil'
      AND pret_ron = (SELECT MAX(pret_ron) FROM products
                      WHERE isbn = p_isbn
                        AND stoc = 0
                        AND status = 'vizibil')
    LIMIT 0, 1;

    UPDATE products SET afisat = 'nu' WHERE isbn = p_isbn;
    UPDATE products SET afisat = 'da' WHERE id = bookId;
    SELECT bookId INTO obookId;
  END IF;
END
When it hangs it does it on:
| 23970842 | username | sqlhost:54264 | database | Query | 65 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'973-679-50 | 0.000 |
| 1133136 | username | sqlhost:52466 | database _emindb | Query | 18694 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'606-92266- | 0.000 |
First, I'd like to mention the Percona Toolkit; it's great for debugging deadlocks and hung transactions. Second, I would guess that at the time of the hang there are multiple threads executing this same procedure. What we need to know is which locks are being acquired at the time of the hang. The MySQL command SHOW ENGINE INNODB STATUS gives you this information in detail. At the next hang, run this command.
I almost forgot to mention the tool innotop, which is similar, but better: https://github.com/innotop/innotop
Next, I am assuming you are using the InnoDB engine. The default transaction isolation level of REPEATABLE READ may be too high in this situation because of range (gap) locking; you could try READ COMMITTED for the body of the procedure (set the session to READ COMMITTED before it runs and back to REPEATABLE READ afterwards).
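A minimal sketch of that, done at the session level around the call (the procedure name and parameters are hypothetical; the snippet in the question does not show them):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
CALL choose_book('<p_isbn value>', @obookId);  -- hypothetical name/signature for the procedure above
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;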
Finally, and perhaps most importantly, notice that your procedure performs SELECTs and UPDATEs (in mixed order) on the same table, perhaps using the same p_isbn value. Imagine this procedure running concurrently -- it is a perfect deadlock setup.

MySQL InnoDB "SELECT FOR UPDATE" - SKIP LOCKED equivalent

Is there any way to skip "locked rows" when we make "SELECT FOR UPDATE" in MySQL with an InnoDB table?
E.g.: terminal t1
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable ORDER BY id ASC limit 5 for update;
+-------+
| id    |
+-------+
|     1 |
|    15 |
| 30217 |
| 30218 |
| 30643 |
+-------+
5 rows in set (0.00 sec)
mysql>
At the same time, terminal t2:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select id from mytable where id>30643 order by id asc limit 2 for update;
+-------+
| id    |
+-------+
| 30939 |
| 31211 |
+-------+
2 rows in set (0.01 sec)
mysql> select id from mytable order by id asc limit 5 for update;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql>
So if I launch a query forcing it to select other rows, it's fine.
But is there a way to skip the locked rows?
I guess this must be a common problem in concurrent processing, but I did not find any solution.
EDIT:
In reality, my different concurrent processes are doing something apparently really simple:
take the first rows (which don't contain a specific flag - e.g.: "WHERE myflag_inUse!=1").
Once I get the result of my "select for update", I update the flag and commit the rows.
So I just want to select the rows which are not already locked and where myflag_inUse!=1...
The following link helps me to understand why I get the timeout, but not how to avoid it:
MySQL 'select for update' behaviour
mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+-------------------------+
| Variable_name           | Value                   |
+-------------------------+-------------------------+
| innodb_version          | 5.5.46                  |
| protocol_version        | 10                      |
| slave_type_conversions  |                         |
| version                 | 5.5.46-0ubuntu0.14.04.2 |
| version_comment         | (Ubuntu)                |
| version_compile_machine | x86_64                  |
| version_compile_os      | debian-linux-gnu        |
+-------------------------+-------------------------+
7 rows in set (0.00 sec)
MySQL 8.0 introduced support for both SKIP LOCKED and NOWAIT.
SKIP LOCKED is useful for implementing a job queue (a.k.a. batch queue) so that you can skip over rows that are already locked by a concurrent transaction.
NOWAIT is useful for avoiding waiting until a concurrent transaction releases the locks that we are also interested in locking.
Without NOWAIT, we either have to wait until the locks are released (when the transaction that currently holds them commits or otherwise releases them) or the lock acquisition times out. NOWAIT acts as a lock timeout with a value of 0.
For more details about SKIP LOCKED and NOWAIT, see the MySQL 8.0 documentation on locking reads.
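For illustration, a NOWAIT sketch against the table from the question (MySQL 8.0+; on older versions this is a syntax error):
START TRANSACTION;
-- Errors out immediately if another transaction holds locks on these rows,
-- instead of waiting innodb_lock_wait_timeout seconds
SELECT id FROM mytable ORDER BY id ASC LIMIT 5 FOR UPDATE NOWAIT;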
This appears to now exist in MySQL starting in 8.0.1:
https://mysqlserverteam.com/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows/
Starting with MySQL 8.0.1 we are introducing the SKIP LOCKED modifier which can be used to non-deterministically read rows from a table while skipping over the rows which are locked. This can be used by our booking system to skip orders which are pending. For example:
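A minimal SKIP LOCKED sketch, using the question's table rather than the blog's booking example:
-- terminal t1
START TRANSACTION;
SELECT id FROM mytable ORDER BY id ASC LIMIT 5 FOR UPDATE;

-- terminal t2: instead of ERROR 1205, this returns the next 5 rows that t1 has not locked
START TRANSACTION;
SELECT id FROM mytable ORDER BY id ASC LIMIT 5 FOR UPDATE SKIP LOCKED;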
However, I think that version is not necessarily production ready.
Unfortunately, it seems that there is no way to skip the locked row in a select for update so far.
It would be great if we could use something like the Oracle 'FOR UPDATE SKIP LOCKED'.
In my case, the queries launched in parallel are both exactly the same, and contain a WHERE clause and a GROUP BY over several million rows. Because the queries need between 20 and 40 seconds to run, that was (as I already knew) a big part of the problem.
The only solution I saw (temporary, and not the best) was to move some of the rows (i.e. millions of them) that I would not directly use, in order to reduce the time the query takes.
So I will still have the same behavior, but I will wait less time...
I was expecting a way to not select the locked row in the select.
I don't mark this as an answer, so if a new clause from mysql is added (or discovered), I can accept it later...
I'm sorry, but I think you are approaching the problem from the wrong angle. If your user wants to list records from a table that satisfy certain selection criteria, then your query should return them all, or return an error message and provide no result set whatsoever. But the query should not return only a subset of the results, leading the user to believe that they have all the matching records.
The issue should be addressed by making sure that your application locks as few rows as possible, for as little time as possible.
Walk through the table in chunks of the PRIMARY KEY, using some suitable LIMIT so you are not looking at "too many" rows at once.
By using the PK, you are ordering things in a predictable way; this virtually eliminates deadlocks.
By using LIMIT, you will keep from hogging too much at once. The LIMIT should be embodied as a range over the PK. This makes it quite clear if two threads are about to step on each other.
More details are (indirectly) in my blog on big deletes.
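A hedged sketch of that chunked walk, using the table and flag column mentioned earlier in this question (the batch size is arbitrary):
SET @last_id = 0;

-- Repeat this block until @chunk_end comes back NULL
START TRANSACTION;
SELECT MAX(id) INTO @chunk_end
FROM (SELECT id FROM mytable WHERE id > @last_id ORDER BY id LIMIT 100) AS chunk;

-- Flag only this PK range; the range, not the LIMIT, is what the UPDATE sees
UPDATE mytable
SET myflag_inUse = 1
WHERE id > @last_id AND id <= @chunk_end AND myflag_inUse != 1;
COMMIT;

SET @last_id = @chunk_end;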

MySQL UPDATE operations on InnoDB occasionally timeout

These are simple UPDATEs on very small tables in an InnoDB database. On occasion, an operation appears to lock up, and doesn't time out. Then every subsequent UPDATE ends with a timeout. The only recourse right now is to ask my ISP to restart the daemon. Every field in the table is used in queries, so all the fields are indexed, including a primary key.
I'm not sure what causes the initial lock, and my ISP doesn't provide enough information to diagnose the problem. They are reticent about giving me access to any settings as well.
In a previous job, I was required to handle similar information, but instead I would do an INSERT. Periodically, I had a script run to DELETE old records from the table, so that not so many records needed to be filtered. When SELECTing I used extrapolation techniques so having more than just the most recent data was useful. This setup was rock solid, it never hung, even under very heavy usage.
I have no problem replacing the UPDATE with an INSERT and periodic DELETEs, but it just seems so clunky. Has anyone encountered a similar problem and fixed it more elegantly?
Current Configuration
max_heap_table_size: 16 MiB
count(*): 4 (not a typo, four records!)
innodb_buffer_pool_size: 1 GiB
Edit: DB is failing now; locations has 5 records. Sample error below.
MySQL query:
UPDATE locations SET x = "43.630181733", y = "-79.882244160", updated = NULL
WHERE uuid = "6a5c7e9d-400f-c098-68bd-0a0c850b9c86";
MySQL error:
Error #1205 - Lock wait timeout exceeded; try restarting transaction
locations
Field    Type         Null  Default
uuid     varchar(36)  No
x        double       Yes   NULL
y        double       Yes   NULL
updated  timestamp    No    CURRENT_TIMESTAMP
Indexes:
Keyname  Type     Cardinality  Field
PRIMARY  PRIMARY  5            uuid
x        INDEX    5            x
y        INDEX    5            y
updated  INDEX    5            updated
It's a known issue with InnoDB; see MySQL rollback with lost connection. I would welcome something like innodb_rollback_on_disconnect as mentioned there. What's happening to you is that connections are being disconnected early, as can happen on the web, and if this happens in the middle of a modifying query, the thread doing that work will hang but retain a lock on the table.
Right now, accessing InnoDB directly from web services is vulnerable to these kinds of disconnects, and there's nothing you can do within FatCow other than ask them to restart the service for you. Your idea to use MyISAM and low priority is okay and will probably not have this problem, but if you want to go with InnoDB, I would recommend an approach like the following.
1) Go with stored procedures; then the transactions are guaranteed to run to completion and not hang in the event of a disconnect. It's a lot of work, but it improves reliability big time.
2) Don't rely on autocommit. Ideally set it to zero, and explicitly begin and end each transaction with START TRANSACTION and COMMIT.
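As a sketch of point 2, using the UPDATE from the question:
SET autocommit = 0;

START TRANSACTION;
UPDATE locations SET x = "43.630181733", y = "-79.882244160", updated = NULL
WHERE uuid = "6a5c7e9d-400f-c098-68bd-0a0c850b9c86";
COMMIT;  -- or ROLLBACK on application error; never leave the transaction open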
Here is what can happen when gap locking is not in effect (for instance, under the READ COMMITTED isolation level): a second locking read can see rows committed by another transaction in the meantime.
transaction1> START TRANSACTION;
transaction1> SELECT * FROM t WHERE i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   30 |
+------+
transaction2> START TRANSACTION;
transaction2> INSERT INTO t VALUES(26);
transaction2> COMMIT;
transaction1> select * from t where i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   26 |
|   30 |
+------+
What is a gap lock?
A gap lock is a lock on the gap between index records. Thanks to this gap lock, when you run the same query twice, you get the same result, regardless of other sessions' modifications to that table. This makes reads consistent and therefore makes replication between servers consistent. If you execute SELECT * FROM t WHERE id > 1000 FOR UPDATE twice, you expect to get the same result twice.
To accomplish that, InnoDB locks all index records found by the WHERE clause with an exclusive lock and the gaps between them with a shared gap lock.
This lock doesn't affect only SELECT ... FOR UPDATE. Here is an example with a DELETE statement:
transaction1 > SELECT * FROM t;
+------+
| age  |
+------+
|   21 |
|   25 |
|   30 |
+------+
Start a transaction and delete the record 25:
transaction1 > START TRANSACTION;
transaction1 > DELETE FROM t WHERE age=25;
At this point we suppose that only the record 25 is locked. Then, we try to insert another value on the second session:
transaction2 > START TRANSACTION;
transaction2 > INSERT INTO t VALUES(26);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(29);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(23);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(31);
Query OK, 1 row affected (0.00 sec)
After running the DELETE statement in the first session, not only has the affected index record been locked, but also the gaps before and after that record (with a shared gap lock), preventing other sessions from inserting data into those gaps.
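If you want to see those record and gap locks directly, the lock tables can help (which one exists depends on the version; the 5.x view is only populated while a lock wait is in progress):
-- MySQL 8.0+
SELECT engine_transaction_id, object_name, index_name, lock_type, lock_mode, lock_data
FROM performance_schema.data_locks;

-- MySQL 5.5/5.6/5.7
SELECT * FROM information_schema.INNODB_LOCKS;

-- Works everywhere; shows lock waits in the TRANSACTIONS section
SHOW ENGINE INNODB STATUS;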
If your UPDATE is literally:
UPDATE locations SET updated = NULL;
You are locking all rows in the table. If you abandon the transaction while holding locks on all rows, of course all rows will remain locked. InnoDB is not "unstable" in your environment; it appears to be doing exactly what you ask. You need to not abandon the open transaction.

How to optimize mysql indexes so that INSERT operations happen quickly on a large table with frequent writes and reads?

I have a table watchlist containing today almost 3Mil records.
mysql> select count(*) from watchlist;
+----------+
| count(*) |
+----------+
|  2957994 |
+----------+
It is used as a log to record product-page views on a large e-commerce site (50,000+ products). It records the productID of the viewed product, the IP address and USER_AGENT of the viewer, and a timestamp of when it happened:
mysql> show columns from watchlist;
+-----------+--------------+------+-----+-------------------+-------+
| Field     | Type         | Null | Key | Default           | Extra |
+-----------+--------------+------+-----+-------------------+-------+
| productID | int(11)      | NO   | MUL | 0                 |       |
| ip        | varchar(16)  | YES  |     | NULL              |       |
| added_on  | timestamp    | NO   | MUL | CURRENT_TIMESTAMP |       |
| agent     | varchar(220) | YES  | MUL | NULL              |       |
+-----------+--------------+------+-----+-------------------+-------+
The data is then reported on several pages throughout the site on both the back-end (e.g. checking what GoogleBot is indexing), and front-end (e.g. a side-bar box for "Recently Viewed Products" and a page showing users what "People from your region also liked" etc.).
So that these "report" pages and side-bars load quickly I put indexes on relevant fields:
mysql> show indexes from watchlist;
+-----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table     | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| watchlist |          1 | added_on  |            1 | added_on    | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| watchlist |          1 | productID |            1 | productID   | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| watchlist |          1 | agent     |            1 | agent       | A         |        NULL | NULL     | NULL   | YES  | BTREE      |         |
+-----------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
Without the INDEXES, pages with the side-bar for example would spend about 30-45sec executing a query to get the 7 most-recent ProductIDs. With the indexes it takes <0.2sec.
The problem is that with the INDEXES the product pages themselves are taking longer and longer to load because as the table grows the write operations are taking upwards of 5sec. In addition there is a spike on the mysqld process amounting to 10-15% of available CPU each time a product page is viewed (roughly once every 2sec). We already had to upgrade the server hardware because on a previous server it was reaching 100% and caused mysqld to crash.
My plan is to attempt a 2-table solution. One table for INSERT operations, and another for SELECT operations. I plan to purge the INSERT table whenever it reaches 1000 records using a TRIGGER, and copy the oldest 900 records into the SELECT table. The report pages are a mixture of real-time (recently viewed) and analytics (which region), but the real-time pages tend to only need a handful of fresh records while the analytical pages don't need to know about the most recent trend (i.e. last 1000 views). So I can use the small table for the former and the large table for the latter reports.
My question: Is this an ideal solution to this problem?
Also: With TRIGGERS in MySQL is it possible to nice the trigger_statement so that it takes longer, but doesn't consume much CPU? Would running a cron job every 30min that is niced, and which performs the purging if required be a better solution?
Write operations for a single row into a data table should not take 5 seconds, regardless of how big the table gets.
Is your clustered index based on the timestamp field? If not, it should be, so you're not writing into the middle of your table somewhere. Also, make sure you are using InnoDB tables - MyISAM is not optimized for writes.
I would propose writing into two tables: one long-term table, one short-term reporting table with little or no indexing, which is then dumped as needed.
Another solution would be to use memcached or an in-memory database for the live reporting data, so there's no hit on the production database.
One more thought: exactly how "live" must either of these reports be? Perhaps retrieving a new list on a timed basis versus once for every page view would be sufficient.
A quick fix might be to use the INSERT DELAYED syntax, which allows MySQL to queue the inserts and execute them when it has time (note that INSERT DELAYED works only with MyISAM-style engines and has been removed in newer MySQL versions). That's probably not a very scalable solution, though.
I actually think that the principle of what you will be attempting is sound, although I wouldn't use a trigger. My suggested solution would be to let the data accumulate for a day and then purge it to the secondary log table with a batch script that runs at night. This is mainly because such frequent transfers of a thousand rows would still put a rather heavy load on the server, and because I don't really trust the MySQL trigger implementation (although that isn't based on any real substance).
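A hedged sketch of such a nightly batch, assuming a small live table and a large reporting table with the same columns as watchlist (both table names here are made up):
START TRANSACTION;

-- Copy yesterday's (and older) rows into the big reporting table
INSERT INTO watchlist_archive (productID, ip, added_on, agent)
SELECT productID, ip, added_on, agent
FROM watchlist_live
WHERE added_on < CURDATE();

-- Then remove them from the small table the product pages insert into
DELETE FROM watchlist_live
WHERE added_on < CURDATE();

COMMIT;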
Instead of optimizing indexes, you could offload the database writes entirely. You could delegate writing to a background process via an asynchronous queue (ActiveMQ, for example). Inserting a message into an ActiveMQ queue is very fast. We are using ActiveMQ and see about 10-20K insert operations on our test platform (and that is with a single-threaded test application, so you could do more).
Look into 'shadow tables' when reconstructing tables this way; you don't need to write to the production table.
I had the same issue, even using InnoDB tables (or MyISAM, which as mentioned before is not optimized for writes), and solved it by using a second table to write temporary data that periodically updates the huge master table. The master table has over 18 million records and is used only for reading records, while results are written to the second, small table.
The problem is that inserts/updates onto the big master table take a while, and it gets worse if there are several updates or inserts waiting in the queue, even with the INSERT DELAYED or UPDATE [LOW_PRIORITY] options enabled.
To make it even faster, read the small secondary table first when searching for a record; if the record is there, work on the small table only. Use the big master table for reference, and pick up new data records only when they are not in the small secondary table: read the record from the master (reads are fast on InnoDB or MyISAM tables) and then insert it into the small second table.
It works like a charm: reading from the huge 20-million-record master takes much less than 5 seconds, and writing onto the second small table of 100K to 300K records takes less than a second.
Something that often helps when doing bulk loads is to drop any indexes, do the bulk load, then recreate the indexes. This is generally much faster than the database having to constantly update the index for each and every row inserted.
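As a sketch of that pattern, using the watchlist table and the index names shown above:
-- Drop the secondary indexes before the bulk load
ALTER TABLE watchlist DROP INDEX added_on, DROP INDEX productID, DROP INDEX agent;

-- ... run the bulk load here (LOAD DATA INFILE, or the big batch of INSERTs) ...

-- Recreate the indexes once, after all rows are in
ALTER TABLE watchlist
  ADD INDEX added_on (added_on),
  ADD INDEX productID (productID),
  ADD INDEX agent (agent);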