MySQL SQL_NO_CACHE not working

I have a table (InnoDB) that gets inserted into, updated, and read frequently, usually in bursts a few milliseconds apart. I noticed that sometimes a SELECT statement that follows an INSERT/UPDATE returns stale data. I assumed this was due to a cache, but putting SQL_NO_CACHE in front of it doesn't do anything.
How do you make sure that the SELECT always waits until the previous INSERT/UPDATE finishes and doesn't get the data from a cache? Note that these statements are executed from separate requests (not within the same code execution).
Maybe I am misunderstanding what SQL_NO_CACHE actually does...
UPDATE:
@Uday, the INSERT, SELECT and UPDATE statements look like this:
INSERT INTO myTable (id, startTime) VALUES (1234, 123456)
UPDATE myTable SET startTime = 123456 WHERE id = 1234
SELECT SQL_NO_CACHE * FROM myTable ORDER BY startTime
I tried using transactions with no luck.
More UPDATE:
I think this is actually a problem with the INSERT, not the UPDATE. The SELECT statement always tries to get the latest row, sorted by time. But since INSERT does not do any table-level locking, it's possible that the SELECT will get old data. Is there a way to force table-level locking when doing an INSERT?

The query cache isn't the issue. Writes invalidate the cache.
MySQL gives priority to writes, and with the default isolation level (REPEATABLE READ), your SELECT would have to wait for the UPDATE to finish.
INSERT can be treated differently: MyISAM allows concurrent inserts if you have CONCURRENT INSERT enabled, and InnoDB uses record-level locking, so a SELECT doesn't have to wait for inserts at the end of the table.
Could this be a race condition then? Are you sure your SELECT occurs after the UPDATE? Are you reading from a replicated server where perhaps the update hasn't propagated yet?
If the issue is with concurrent INSERTs, you'll want to disable CONCURRENT INSERT on MyISAM, or explicitly lock the table during the INSERT with LOCK TABLES. The solution is the same for InnoDB: explicitly lock the table on the INSERT with LOCK TABLES.
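For illustration, a minimal sketch of the explicit-lock approach, reusing myTable from the question (concurrent readers block until the lock is released):
LOCK TABLES myTable WRITE;
INSERT INTO myTable (id, startTime) VALUES (1234, 123456);
UNLOCK TABLES;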

A)
If you don't need the caching at all (for any SELECT), disable the query cache completely.
B)
If you want this for only one session, you can run "SET SESSION query_cache_type = 0;", which sets it for that particular session.
Use SQL_NO_CACHE additionally in either case.
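For example (a sketch; note that the query cache was removed entirely in MySQL 8.0):
SET SESSION query_cache_type = 0;
-- or bypass the cache for a single statement:
SELECT SQL_NO_CACHE * FROM myTable ORDER BY startTime;
To disable the cache server-wide, set query_cache_type = 0 (and typically query_cache_size = 0) in my.cnf.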

Related

MySQL (MyISAM): "waiting for lock" REPLACE blocking all SELECTs

I have a MyISAM table (I can't change it to use InnoDB, so please don't suggest that) which is pretty big (~20GB).
I have a worker which regularly dumps this table (I launch it with the --skip-lock-tables option).
During the dump (which takes ~5 min), concurrent SELECTs run correctly, as I would expect. When I do a REPLACE during the dump, this REPLACE is "Waiting for table metadata lock", which seems normal too.
But every SELECT started after the start of the REPLACE will also be "Waiting for table metadata lock". I can't understand why. Could you help me with this, and tell me how I can have all the SELECTs run correctly (even after this REPLACE)?
Thanks!
What is happening is:
Your worker is making a big SELECT. The SELECT locks the table with a read lock. By the way, --skip-lock-tables only means that you are not locking all the tables at once; the SELECT query still locks each table individually. More info in this answer.
Your REPLACE is trying to INSERT but has to wait for the first SELECT (the dump) to finish in order to acquire a write lock. It is put in the write lock queue.
Every SELECT after the REPLACE is put in the read lock queue.
This is a behavior described in the doc on table-level locking:
Table updates are given higher priority than table retrievals. Therefore, when a lock is released, the lock is made available to the requests in the write lock queue and then to the requests in the read lock queue. This ensures that updates to a table are not “starved” even when there is heavy SELECT activity for the table.
If you want the SELECT not to wait for the REPLACE, you could try the LOW_PRIORITY modifier on your REPLACE (I have never actually tested that).
If you use the LOW_PRIORITY modifier, execution of the INSERT is delayed until no other clients are reading from the table. This includes other clients that began reading while existing clients are reading, and while the INSERT LOW_PRIORITY statement is waiting. It is possible, therefore, for a client that issues an INSERT LOW_PRIORITY statement to wait for a very long time (or even forever) in a read-heavy environment. (This is in contrast to INSERT DELAYED, which lets the client continue at once.)
However, be careful, as it might never run if there are always a lot of SELECTs.
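As a sketch (untested, as noted above; the table and column names are just placeholders):
REPLACE LOW_PRIORITY INTO myTable (id, name) VALUES (1, 'foo');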

Confusion regarding InnoDB locking

When I run 'SELECT * FROM table' on an InnoDB table, does the table get locked implicitly? Does that mean that during the time MySQL takes to return the result set, I cannot issue an UPDATE statement on the table?
From what I have understood, the whole table would be locked in shared mode until the result set is returned from the server. Only then could the UPDATE command be executed.
InnoDB uses a feature called Multi-version concurrency control.
Locking in shared mode is not required, since MVCC will retain earlier versions of the row for your SELECT statement to be able to read if required.
So the answer is 'no': running a SELECT statement does not need to lock any rows while another session is updating. That is, unless it is a special SELECT statement like SELECT ... FOR UPDATE.
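A small illustration of MVCC (table and column names are hypothetical), run from two separate connections:
-- session 1:
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- reads from a consistent snapshot
-- session 2 (not blocked by session 1's SELECT):
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
-- session 1 still sees the old row version until its transaction ends:
SELECT balance FROM accounts WHERE id = 1;
COMMIT;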

Avoiding SQL deadlock in insert combined with select

I'm trying to insert pages into a table with a sort column that I auto-increment by 2000 in this fashion:
INSERT INTO pages (sort,img_url,thumb_url,name,img_height,plank_id)
SELECT IFNULL(max(sort),0)+2000,'/image/path.jpg','/image/path.jpg','name',1600,'3'
FROM pages WHERE plank_id = '3'
The trouble is I trigger these inserts on the upload of images, so 5-10 of these queries are run almost simultaneously. This triggers a deadlock on some files, for some reason.
Any idea what is going on?
Edit: I'm running MySQL 5.5.24 and InnoDB. The sort column has an index.
What I did for myself was to set sort to 0 on insert, retrieve the id of the inserted row, and then set sort to id*2000. But you can also try to use transactions:
BEGIN;
INSERT INTO pages (sort,img_url,thumb_url,name,img_height,plank_id)
SELECT IFNULL(max(sort),0)+2000,'/image/path.jpg','/image/path.jpg','name',1600,'3'
FROM pages WHERE plank_id = '3';
COMMIT;
Note that not all of the MySQL client libraries support multi-queries, so you may have to execute the statements separately, but over the same connection.
Another approach is to lock the whole table for the time the INSERT is executed, but this will cause queries to queue up, because they will have to wait until the insert is performed.
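A sketch of the first workaround (insert with sort = 0, then fix it up), assuming pages.id is an AUTO_INCREMENT primary key:
INSERT INTO pages (sort, img_url, thumb_url, name, img_height, plank_id)
VALUES (0, '/image/path.jpg', '/image/path.jpg', 'name', 1600, '3');
UPDATE pages SET sort = LAST_INSERT_ID() * 2000 WHERE id = LAST_INSERT_ID();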

MySQL pause index rebuild on bulk INSERT without TRANSACTION

I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is rebuilt every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.
Is there any way to get MySQL to stop rebuilding indices on a specific table until I tell it that it can resume?
Ideally, I would like to insert 1,000 rows or so, set the index do its thing, and then insert the next 1,000 rows.
I cannot use INSERT DELAYED as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.
Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!
ALTER TABLE tableName DISABLE KEYS;
-- perform inserts
ALTER TABLE tableName ENABLE KEYS;
This disables updating of all non-unique indexes. The disadvantage is that those indexes won't be used for SELECT queries either. (Note that DISABLE KEYS only takes effect for MyISAM tables; InnoDB ignores it.)
You can however use multi-row inserts (INSERT INTO table (...) VALUES (...), (...), (...)), which will also update indexes in batches.
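For example (table and columns are illustrative):
INSERT INTO tableName (col1, col2) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');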
AFAIK, for those that use InnoDB tables, if you don't want indexes to be rebuilt after each INSERT, you must use transactions.
For example, for inserting a batch of 1000 rows, use the following SQL:
SET autocommit=0;
-- insert the rows one after the other, or using multi-row inserts
COMMIT;
By disabling autocommit, a transaction will be started at the first INSERT. Then, the rows are inserted one after the other and at the end, the transaction is committed and the indexes are rebuilt.
If an error occurs during execution of one of the INSERT, the transaction is not rolled back but an error is reported to the client which has the choice of rolling back or continuing. Therefore, if you don't want the entire batch to be rolled back if one INSERT fails, you can log the INSERTs that failed and continue inserting the rows, and finally commit the transaction at the end.
However, take into account that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level for the SELECT to READ UNCOMMITTED, but as I've tested it, the rows are not visible when the SELECT happens very close to the INSERT. See my post.
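Putting the batching idea together (table name and values are placeholders; in practice each batch would hold ~1,000 rows):
SET autocommit=0;
INSERT INTO tableName (col1, col2) VALUES (1, 'a'), (2, 'b');  -- batch 1
COMMIT;
INSERT INTO tableName (col1, col2) VALUES (3, 'c'), (4, 'd');  -- batch 2
COMMIT;
SET autocommit=1;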

What does MySQL do if you attempt to update a table that is being queried?

I have a very slow query that I need to run on a MySQL database from time to time.
I've discovered that attempts to update the table that is being queried are blocked until the query has finished.
I guess this makes sense, as otherwise the results of the query might be inconsistent, but it's not ideal for me, as the query is of much lower importance than the update.
So my question really has two parts:
Out of curiosity, what exactly does MySQL do in this situation? Does it lock the table for the duration of the query? Or try to lock it before the update?
Is there a way to make the slow query not blocking? I guess the options might be:
Kill the query when an update is needed.
Run the query on a copy of the table as it was just before the update took place
Just let the query go wrong.
Anyone have any thoughts on this?
It sounds like you are using a MyISAM table, which uses table level locking. In this case, the SELECT will set a shared lock on the table. The UPDATE then will try to request an exclusive lock and block and wait until the SELECT is done. Once it is done, the UPDATE will run like normal.
MyISAM Locking
If you switched to InnoDB, then your SELECT will set no locks by default. There is no need to change transaction isolation levels as others have recommended (repeatable read is default for InnoDB and no locks will be set for your SELECT). The UPDATE will be able to run at the same time. The multi-versioning that InnoDB uses is very similar to how Oracle handles the situation. The only time that SELECTs will set locks is if you are running in the serializable transaction isolation level, you have a FOR UPDATE/LOCK IN SHARE MODE option to the query, or it is part of some sort of write statement (such as INSERT...SELECT) and you are using statement based binary logging.
InnoDB Locking
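To illustrate the difference (table name hypothetical):
-- plain SELECT under REPEATABLE READ: no row locks, reads a consistent snapshot
SELECT * FROM accounts WHERE id = 1;
-- these variants DO acquire row locks and interact with a concurrent UPDATE:
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;  -- shared lock
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;          -- exclusive lock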
For the purposes of the select statement, you should probably issue a:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
command on the connection, which causes the subsequent select statements to operate without locking.
Don't use the 'SELECT ... FOR UPDATE', as that definitely locks the table rows that are affected by the select statement.
The full list of MySQL transaction isolation levels is in the docs.
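For example (big_table is a placeholder; REPEATABLE READ is InnoDB's default level, so restore it afterwards):
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM big_table;  -- the slow, low-importance query
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;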
First of all, you need to know which engine you're using (MyISAM or InnoDB).
This is clearly a transaction problem.
Take a look at section 13.4.6, SET TRANSACTION Syntax, in the MySQL manual.
UPDATE LOW_PRIORITY ... may be helpful. The MySQL docs aren't clear whether this would let the user requesting the update continue, with the update happening when it can (which is what I think happens), or whether the user has to wait (which would be worse than at present), and I can't remember.
What table types are you using? If you are on MyISAM, switching to InnoDB (if you can; it has no full-text indexing) opens up more options for this sort of thing, as it supports transactional features and row-level locking.
I don't know MySQL, but it sounds like a transaction problem.
You should be able to set the transaction type to dirty read in your SELECT query.
That won't necessarily give you correct results, but it shouldn't be blocked.
Better would be to make the first query go faster. Do some analysis and check whether you can speed it up with correct indexing and so on.