Dummies' guide to locking in InnoDB - MySQL

The typical documentation on locking in InnoDB is far too confusing. I think it would be of great value to have a "dummies' guide to InnoDB locking".
I will start, and I will gather all responses into a wiki:
The column in the WHERE clause needs to be indexed before row-level locking applies.
EXAMPLE: DELETE FROM mytable WHERE column1 = 10; will lock up the whole table unless column1 is indexed
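A quick sketch of the difference (the table name and index name here are made up):

```sql
-- Without an index on column1, InnoDB scans the whole table and locks
-- every row it scans, effectively locking up the table:
DELETE FROM mytable WHERE column1 = 10;

-- With an index, only the matching index entries and rows are locked:
CREATE INDEX idx_column1 ON mytable (column1);
DELETE FROM mytable WHERE column1 = 10;
```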

Here are my notes from working with MySQL support on a recent, strange locking issue (version 5.1.37):
All rows and index entries traversed to get to the rows being changed will be locked. It's covered at:
http://dev.mysql.com/doc/refman/5.1/en/innodb-locks-set.html
"A locking read, an UPDATE, or a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned. ... If you have no indexes suitable for your statement and MySQL must scan the entire table to process the statement, every row of the table becomes locked, which in turn blocks all inserts by other users to the table."
That is a MAJOR headache if true.
It is. A workaround that is often helpful is to do:
UPDATE whichevertable SET whatever = something WHERE primarykey IN (SELECT primarykey FROM (SELECT primarykey FROM whichevertable WHERE constraints ORDER BY primarykey) AS t);
(The extra derived-table wrapper is needed because MySQL does not allow the update's target table to appear directly in the subquery.) The inner select doesn't need to take locks, and the update then has less work to do. The ORDER BY clause ensures that the update is done in primary key order to match InnoDB's physical order, the fastest way to do it.
Where large numbers of rows are involved, as in your case, it can be better to store the select result in a temporary table with a flag column added. Then select from the temporary table where the flag is not set to get each batch. Run updates with a limit of say 1000 or 10000 and set the flag for the batch after the update. The limits will keep the amount of locking to a tolerable level while the select work will only have to be done once. Commit after each batch to release the locks.
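A sketch of that temporary-table batching, with hypothetical names (whichevertable, primarykey, whatever, and the constraints are placeholders):

```sql
-- 1. Do the select work once, storing the keys with a "done" flag:
CREATE TEMPORARY TABLE todo (
  pk INT NOT NULL PRIMARY KEY,
  done TINYINT NOT NULL DEFAULT 0
);
INSERT INTO todo (pk)
  SELECT primarykey FROM whichevertable WHERE constraints;

-- 2. Repeat until no rows are updated; each COMMIT releases the row locks:
UPDATE whichevertable w
  JOIN (SELECT pk FROM todo WHERE done = 0 ORDER BY pk LIMIT 1000) AS batch
    ON w.primarykey = batch.pk
   SET w.whatever = something;
UPDATE todo SET done = 1 WHERE done = 0 ORDER BY pk LIMIT 1000;
COMMIT;
```

The join against a derived table is used because MySQL does not accept LIMIT inside an IN subquery.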
You can also speed this work up by doing a select sum of an unindexed column before doing each batch of updates. This will load the data pages into the buffer pool without taking locks. Then the locking will last for a shorter timespan because there won't be any disk reads.
This isn't always practical but when it is it can be very helpful. If you can't do it in batches you can at least try the select first to preload the data, if it's small enough to fit into the buffer pool.
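For example (table, column, and constraint names hypothetical), a throwaway aggregate forces the rows through the buffer pool without locking:

```sql
-- Reads every matching row, warming the buffer pool; a plain SELECT
-- takes no row locks at the default isolation level:
SELECT SUM(some_unindexed_column)
  FROM whichevertable
 WHERE constraints;
-- Now run the UPDATE for this batch; with the pages already in memory,
-- its locks are held for a much shorter time.
```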
If possible use the READ COMMITTED transaction isolation mode. See:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html
Getting that reduced locking requires the use of row-based binary logging (rather than the default statement-based binary logging).
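A sketch of the two settings together (changing binlog_format needs the SUPER privilege; this is per-session):

```sql
SET SESSION binlog_format = 'ROW';
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Statements in this session now take the reduced READ COMMITTED locks
-- and are logged row-based.
```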
Two known issues:
Subqueries can sometimes be less than ideally optimised. In this case the problem was an undesirable dependent subquery - because of that, my suggestion to use a subquery turned out to be unhelpful compared to the alternative.
Deletes and updates do not have the same range of query plans as select statements so sometimes it's hard to properly optimise them without measuring the results to work out exactly what they are doing.
Both of these are gradually improving. This bug is one example where we've just improved the optimisations available for an update, though the changes are significant and it's still going through QA to be sure it doesn't have any great adverse effects:
http://bugs.mysql.com/bug.php?id=36569

Related

MySQL Replication lag in slave due to Delete query - Row Based Replication

I have a delete query which deletes rows in chunks (2000 per chunk):
Delete from Table1 where last_refresh_time < {time value}
Here I want to delete the rows in the table which have not been refreshed for the last 5 days.
Usually the delete will cover around 10 million rows. This process is done once per day during a short off-peak window.
This query executes fairly quickly on the master, but due to ROW_BASED_REPLICATION the SLAVE lags heavily, since the slave's SQL_THREAD deletes each row one by one from the RELAY_LOG data.
We use the READ_COMMITTED isolation level.
Is it okay to change this query's transaction alone to STATEMENT_BASED replication?
Will we face any issues?
The MySQL manual says the following; can someone explain whether other transactions' INSERTs get affected?
If you are using InnoDB tables and the transaction isolation level is READ COMMITTED or READ UNCOMMITTED, only row-based logging can be used. It is possible to change the logging format to STATEMENT, but doing so at runtime leads very rapidly to errors because InnoDB can no longer perform inserts
If other transactions' INSERTs are affected, can we change the ISOLATION LEVEL to REPEATABLE_READ for this DELETE transaction alone? Is that a recommended approach?
Please share your views and suggestions on this lag issue.
MySQL - InnoDB engine - 5.7.18
Don't do a single DELETE that removes 10M rows. Or 1M. Not even 100K.
Do the delete online. Yes, it is possible, and usually preferable.
Write a script that walks through the table 200 rows at a time. DELETE and COMMIT any "old" rows in that 200. Sleep for 1 second, then move on to the next 200. When it hits the end of the table, simply start over. (1K rows in a chunk may be OK.) Walk through the table via the PRIMARY KEY so that the effort to 'chunk' is minimized. Note that 200 rows plus a 1-second delay will get you through the table in about 1 day, effectively as fast as your current code, but with much less interference.
More details: http://mysql.rjweb.org/doc.php/deletebig Note, especially, how it is careful to touch only N rows (N=200 or whatever) of the table per pass.
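One pass of that loop might look like this (assuming id is the PRIMARY KEY; @last_id starts at 0, and {time value} stays a placeholder as in the question):

```sql
-- Find where the current 200-row chunk ends:
SELECT MAX(id) INTO @stop
  FROM (SELECT id FROM Table1
         WHERE id > @last_id
         ORDER BY id LIMIT 200) AS chunk;

-- Delete only the "old" rows inside that primary-key range, then commit
-- to release the locks:
DELETE FROM Table1
 WHERE id > @last_id AND id <= @stop
   AND last_refresh_time < {time value};
COMMIT;

SET @last_id = @stop;
-- Sleep ~1 second and repeat; when @stop comes back NULL, start over at 0.
```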
My suggestion helps avoid Replica lag in these ways
Lower count (200 vs 2000). Fewer 'events' are dumped into the replication stream at once, so other events are not stuck behind them for as long.
Touch only 200 rows -- by walking the PRIMARY KEY, careful use of LIMIT, etc.
"Sleep" between chunks -- The Primary primes the cache with an initial SELECT that is not replicated. Hence, in Row Based Replication, the Replica is likely to be caught off guard (rows to delete have not been cached). The Sleep gives it a chance to finish the deletes and handle other replication items before the next chunk comes.
Discussion: With Row Based Replication (which is preferable), a 10M DELETE will ship 10M 1-row deletes to the Replicas. This clogs replication, delays replication, etc. By breaking it into small chunks, such overhead has a reasonable impact on replication.
Don't worry about isolation mode, etc; simply commit each small chunk. 100 rows will easily be done in less than a second; probably 1K will be that fast too. 10M certainly will not.
You said "refreshed". Does this mean that the processing updates a timestamp in the table? And this happens at 'random' times for 'random' rows? And such an update can happen multiple times for a given row? If that is what you mean, then I do not recommend PARTITIONing, which is also discussed in the link above.
Note that I do not depend on an index on that timestamp, much less suggest partitioning by that timestamp. I want to avoid the overhead of updating such an index so rapidly. Walking through the table via the PK is a very good alternative.
Do you really need the READ_COMMITTED isolation level? It is weaker than MySQL's default, REPEATABLE_READ.
But any way.
For this query's session you can change the isolation level to REPEATABLE_READ and use MIXED mode for binlog_format.
With that you will get STATEMENT-based replication for this session only.
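In other words (session-scoped; changing binlog_format requires the SUPER privilege):

```sql
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET SESSION binlog_format = 'MIXED';
-- In MIXED mode the server logs this DELETE as a statement when it is
-- safe to do so, instead of shipping millions of row events:
DELETE FROM Table1 WHERE last_refresh_time < {time value};
```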
Maybe that table's usage would fit better with a NoSQL tool such as MongoDB and a TTL index.

Issuing multiple sql update statements in one go

I have to issue about ~1M sql queries in the following form:
update table1 ta join table2 tr on ta.tr_id=tr.id
set start_date=null, end_date=null
where title_id='X' and territory_id='AG' and code='FREE';
The sql statements are in a text document -- I can only copy paste them in as-is.
What would be the fastest way to do this? Are there any checks I can disable so the work is deferred until the end? For example, something like:
start transaction;
copy/paste all sql statements here;
commit;
I tried the above approach but saw zero speed improvement on the updates. Are there any other things I can try?
The performance cost is partly attributed to running 1M separate SQL statements, but it's also attributed to the cost of rewriting rows and the corresponding indexes.
What I mean is, there are several steps to executing an SQL statement, and each of them take non-zero amount of time:
Start a transaction.
Parse the SQL, validate the syntax, check your privileges to make sure you have permission to update those tables, etc.
Change the values you updated in the row.
Change the values you updated in each index on that table that contain the columns you changed.
Commit the transaction.
In autocommit mode, the start & commit of a transaction happen implicitly for every SQL statement, so that causes maximum overhead. Using explicit START and COMMIT as you showed reduces that overhead by doing each once.
Caveat: I don't usually run 1M updates in a single transaction. That causes other types of overhead, because MySQL needs to keep the original rows in case you ROLLBACK. As a compromise, I would execute maybe 1000 updates, then commit and start a new transaction. That at least reduces the START/COMMIT overhead by 99.9%.
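As a sketch, using the question's own statement (the batch boundaries of 1000 are arbitrary):

```sql
SET autocommit = 0;

START TRANSACTION;
-- paste statements 1 .. 1000 here, e.g.:
update table1 ta join table2 tr on ta.tr_id=tr.id
set start_date=null, end_date=null
where title_id='X' and territory_id='AG' and code='FREE';
-- ...
COMMIT;

START TRANSACTION;
-- paste statements 1001 .. 2000 here
COMMIT;
```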
In any case, the overhead of transactions isn't great. It might be unnoticeable compared to the cost of updating indexes.
MyISAM tables have an option to DISABLE KEYS, which means it doesn't have to update non-unique indexes during the transaction. But this might not be a good optimization for you, because (a) you might need indexes to be active, to help performance of lookups in your WHERE clause and the joins; and (b) it doesn't work in InnoDB, which is the default storage engine, and it's a better idea to use InnoDB.
You could also review if you have too many indexes or redundant indexes on your table. There's no sense having extra indexes you don't need, which only add cost to your updates.
There's also a possibility that you don't have enough indexes, and your UPDATE is slow because it's doing a table-scan for every statement. The table-scans might be so expensive that you'd be better off creating the needed indexes to optimize the lookups. You should use EXPLAIN to see if your UPDATE statement is well-optimized.
If you want me to review that, please run SHOW CREATE TABLE <tablename> for each of your tables in your update, and run EXPLAIN UPDATE ... for your example SQL statement. Add the output to your question above (please don't paste in a comment).
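Concretely, that means gathering output like this (EXPLAIN on UPDATE statements needs MySQL 5.6 or later; on older versions, rewrite the UPDATE as an equivalent SELECT first):

```sql
SHOW CREATE TABLE table1;
SHOW CREATE TABLE table2;

EXPLAIN UPDATE table1 ta JOIN table2 tr ON ta.tr_id = tr.id
   SET start_date = NULL, end_date = NULL
 WHERE title_id = 'X' AND territory_id = 'AG' AND code = 'FREE';
```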

MySQL - Update table rows without locking the rows

I have a requirement where we need to update rows without holding locks while updating.
Here are the details of the requirement: we run a batch process on a table every 5 minutes: update blogs set is_visible=1 where some conditions. This query has to run over millions of records, so we don't want to block all those rows for writes during the update.
I totally understand the implications of not having write locks, which is fine for us because the is_visible column will be updated only by this batch process; no other thread will update this column. On the other hand, there will be a lot of updates to other columns of the same table, which we don't want to block.
First of all, if you use MySQL's default InnoDB storage engine, there is no way you can update data without row locks except by setting the transaction isolation level down to READ UNCOMMITTED by running
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
However, I don't think the database behavior is what you expect since the dirty read is allowed in this case. READ UNCOMMITTED is rarely useful in practice.
To complement the answer from Tim: it is indeed a good idea to have a unique index on the column used in the WHERE clause. However, please note as well that there is no absolute guarantee that the optimizer will choose an execution plan that uses that index. It may or may not, depending on the case.
For your case, what you could do is to split the long transaction into multiple short transactions. Instead of updating millions of rows in one shot, scanning only thousands of rows each time would be better. The X locks are released when each short transaction commits or rollbacks, giving the concurrent updates the opportunity to go ahead.
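A sketch of such a short-transaction loop, reusing the question's query ("some conditions" stays a placeholder; the extra is_visible = 0 test keeps each pass from rescanning finished rows):

```sql
-- Repeat until the UPDATE reports 0 rows affected;
-- each COMMIT releases the X locks taken by that batch:
UPDATE blogs
   SET is_visible = 1
 WHERE is_visible = 0
   AND <some conditions>
 LIMIT 10000;
COMMIT;
```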
By the way, I assume that your batch has lower priority than the other online processes, thus it could be scheduled out of peak hours to further minimize the impact.
P.S. The IX lock is not on the record itself, but attached to the higher-granularity table object. And even with REPEATABLE READ transaction isolation level, there is no gap lock when the query uses a unique index.
Best practice is to always acquire a specific lock when there is a chance that an update could happen concurrently with other transactions. If your storage engine is MyISAM, then MySQL will lock the entire table during an update, and there isn't much you can do about that. If the storage engine is InnoDB, then it is possible that MySQL will only place exclusive (X) locks on the records targeted by the update, but there are caveats to this being the case. The first thing you would do to try to achieve this would be a SELECT ... FOR UPDATE:
SELECT * FROM blogs WHERE <some conditions> FOR UPDATE;
In order to ensure that InnoDB only locks the records being updated, there needs to be a unique index on the column which appears in the WHERE clause. In the case of your query, assuming id were the column involved, it would have to be a primary key, or else you would need to create a unique index:
CREATE UNIQUE INDEX idx ON blogs (id);
Even with such an index, InnoDB may still apply gap locks on the records in between index values, to ensure that the REPEATABLE READ contract is enforced.
So, you may add an index on the column(s) involved in your WHERE clause to optimize the update on InnoDB.

MySql count(*) super slow with concurrent queries

I have a script that tries to read all the rows from a table like this:
select count(*) from table where col1 = 'Y' or col1 is null;
col1 and col2 are not indexed, and this query usually takes ~20 seconds, but if someone is already running this query, it takes ages and gets blocked.
We only have around 100k rows in the table, and I tried it without the WHERE clause and it causes the same issue.
The table uses InnoDB, so it doesn't store an exact row count, but I am curious whether there is any concurrency parameter I should look into. I am not sure whether the absence of indexes on the table causes the issue, but it doesn't make sense to me.
Thanks!
If the columns are not indexed, then MySQL has to read the entire data files of your tables to find your rows. A single hard disk cannot handle concurrent read-intensive operations very well. You have to index.
It looks like your SELECT COUNT(*)... query is being serialized with other operations on your table. Unless you tell the MySQL server otherwise, your query will do its best to be very precise.
Try changing the transaction isolation level by issuing this command immediately before your query.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Setting this enables so-called dirty reads, which means you might not count everything in the table that changes during your operation. But that probably will not foul up your application too badly.
(Adding appropriate indexes is always a good idea, but not the cause of the problem you ask about.)

Is it better to use deactive status column instead of deleting rows for mysql performance?

Recently I watched a video about CRUD operations in MySQL, and one thing caught my attention: the commentator claimed that deleting rows is bad for MySQL index performance, and that we should use a status column instead.
So, is there really a difference between those two?
Deleting a row is indeed quite expensive, more expensive than setting a new value to a column. Some people don't ever delete a row from their databases (though it's sometimes due to preserving history, not performance considerations).
I usually do delayed deletions: when my app needs to delete a row, it doesn't actually delete, but sets a status instead. Then later, during low traffic period, I execute those deletions.
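A minimal sketch of that pattern (the table name and status values are made up):

```sql
-- During the day, the app "deletes" by flagging:
UPDATE posts SET status = 'deleted' WHERE id = 123;

-- During a low-traffic period, actually purge, in modest batches:
DELETE FROM posts WHERE status = 'deleted' LIMIT 1000;
```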
Some database engines need their data files to be compacted every once in a while, since they cannot reuse the space from deleted records. I'm not sure if InnoDB is one of those.
I guess the strategy is that deleting a row affects all indexes, whereas modifying a 'status' column might not affect any indexes (since you probably wouldn't index that column due to the low cardinality).
Still, when deleting rows, the impact on indexes is minimal. Inserting affects index performance when it fills up a page, causing a page split. This doesn't happen with deletes. With deletes, the index records are merely marked for deletion.
MySQL will later (when load is low) purge deleted rows from the indexes. So, deletes are already cached. Why double the effort?
Your deletes need indexes just like your selects and updates, in order to quickly find the records to delete. So don't blame slow deletes that are due to missing or bad indexes on MySQL index performance. Your DELETE statement's WHERE clause should be able to utilize an index. With InnoDB, this is also important to ensure that just a single index record is locked, instead of having to lock all of the records or a range.