I have a MySQL SELECT query which is fast (<0.1 sec), but only the first time I run it. It joins 3 tables together (using indices) and has a relatively simple WHERE clause. When I run it by hand in phpMyAdmin (always changing numbers in the WHERE so that it isn't cached) it is always fast, but when I have PHP run several copies of it in a row, the first one is fast and the others hang for ~400 sec. My only guess is that somehow MySQL is running out of memory for the connection and then has to do expensive paging.
My general question is how I can fix this behavior. My specific questions are: without actually closing and restarting the connection, how can I make these queries coming from PHP be treated as separate, just like the queries coming from phpMyAdmin? How can I tell MySQL to flush any memory when the request is done? And does this sound like a memory issue to you?
Well, I found the answer, at least in my case, and I'm putting it here for anyone in the future who runs into a similar issue. The query I was running returned a lot of results, and MySQL's query cache was causing a lot of overhead. When you run a query, MySQL will save it and its output so that it can answer future identical requests quickly. All I had to do was add SQL_NO_CACHE and the speed was back to normal. Just look out if your incoming query is large or the results are very large, because it can take considerable resources for MySQL to decide when to kick things out of the cache.
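For illustration, the hint goes right after the SELECT keyword; the table and column names below are made up, not the actual query:
SELECT SQL_NO_CACHE t1.id, t3.total
FROM t1
JOIN t2 ON t2.t1_id = t1.id
JOIN t3 ON t3.t2_id = t2.id
WHERE t1.created_at > '2013-06-01';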
I took over a project and have 2 MyISAM tables.
table1 with approx. 1M rows, and
table2 with approx. 100K rows.
In the project these tables are accessed often, and at first it seems ok.
After I installed the project on a Windows 8.1 machine for local development, I found that every day, the first time I access the site, my query takes 14 seconds. A bit too much.
Afterwards it takes less than 0.1 seconds.
Now, since on dev this, combined with another query, runs into a PHP timeout exception, I'm concerned about whether it's recommended to do anything about it or not. On production it does not seem to occur (or is hard to reproduce).
I have heard of things like warming the cache or optimizing the query, but I don't know what is meant by that.
What do experts like you do in this case?
I had another question set up here trying to see whether I can optimize the query.
Changing to InnoDB doesn't seem to have an impact.
The "first" time you run a query, two things may or may not happen:
Lots of disk I/O may be done to fetch the index blocks and/or data blocks from disk. (If other queries happened to have fetched those blocks, the blocks may be cached already.) (14s vs 0.1s is more than I usually see for this cold/warm cache difference.)
If the "Query cache" was on, the first SELECT and its resultset were stored in the QC. The second call may have found it there and returned the result almost instantly. (Usually this is ~1ms, not the 100ms you mentioned.) The QC can be bypassed for a single query by saying SELECT SQL_NO_CACHE ....
Since it is annoying you daily, you may as well go through the exercise of trying to optimize the query. If the tables are growing daily, it may get slower and slower over time. Note that if production needs to be restarted for any reason, that query may time out there. So, yes, try to optimize it.
A million rows is beginning to be "big".
The characteristics of this indicate that you are I/O-bound only initially. So it does not indicate that key_buffer_size and innodb_buffer_pool_size are too low.
If you want to discuss the performance of a particular query, start a new thread and provide SHOW CREATE TABLE and EXPLAIN SELECT ....
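If you do post it, the statements would look something like this; table1 and table2 are the tables from the question, while the join column and WHERE condition are hypothetical stand-ins for the real query:
SHOW CREATE TABLE table1;
SHOW CREATE TABLE table2;
EXPLAIN SELECT t1.*
FROM table1 AS t1
JOIN table2 AS t2 ON t2.table1_id = t1.id
WHERE t1.created_at >= '2014-01-01';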
I have a service that sits on top of a MySQL 5.5 database (INNODB). The service has a background job that is supposed to run every week or so. On a high level the background job does the following:
1. Do some initial DB read and write in one transaction.
2. Execute UMQ (described below) with a set of parameters in one transaction. If no records are returned, we are done!
3. Process the result from UMQ (this is a bit heavy, so it is done outside of any DB transaction).
4. Write the outcome of the previous step to the DB in one transaction (this writes to tables queried by UMQ and ensures that the same records are not found again by UMQ).
5. Go to step 2.
UMQ - Ugly Monster Query: This is a nasty database query that joins a bunch of tables, has conditions on columns in several of these tables, and includes a NOT EXISTS subquery with some more joins and conditions. UMQ includes an ORDER BY and also has a LIMIT 1000. Even though the query is bad, I have done what I can here - there are indexes on all the columns filtered on, and the joins all go over foreign key relations.
I do expect UMQ to be heavy and take some time, which is why it's executed in a background job. However, what I'm seeing is rapidly degrading performance until it eventually causes a timeout in my service (maybe 50 times slower after 10 iterations).
First I thought that it was because the data queried by UMQ changes (see step 4 above), but that wasn't it, because if I took the last query (the one that caused the timeout) from the slow query log and executed it myself directly, I got the same behavior - until I restarted the MySQL service. After the restart, the exact same query on the exact same data that took >30 seconds before the restart now took <0.5 seconds. I can reproduce this behavior every time by restoring the database to its initial state and restarting the process.
Also, using the trick described in this question, I could see that the query scans around 60K rows after a restart, as opposed to 18M rows before. EXPLAIN tells me that around 10K rows should be scanned, and the result of EXPLAIN is always the same. No other processes are accessing the database at the same time, and the lock_time in the slow query log is always 0. SHOW ENGINE INNODB STATUS before and after the restart gives me no hints.
So finally the question: Does anybody have any clue of why I'm seeing this behavior? And how can I analyze this further?
I have the feeling that I need to configure MySQL differently in some way but I have searched and tested like crazy without coming up with anything that makes a difference.
Turns out that the behavior I saw was the result of how the MySQL optimizer uses InnoDB statistics to decide on an execution plan. This article put me on the right track (even though it does not exactly discuss my problem). The most important thing I learned from it is that MySQL calculates statistics on startup and then again once in a while. These statistics are then used to optimize queries.
The way I had set up the test data, the table T where most writes are done in step 4 started out empty. After each iteration T would contain more and more records, but the InnoDB statistics had not yet been updated to reflect this. Because of this, the MySQL optimizer always chose an execution plan for UMQ (which includes a JOIN with T) that worked well when T was empty but worse and worse the more records T contained.
To verify this I added an ANALYZE TABLE T; before every execution of UMQ, and the rapid degradation disappeared. No lightning performance, but acceptable. I also saw that leaving the database alone for half an hour or so (maybe a bit shorter, but at least more than a couple of minutes) would allow the InnoDB statistics to refresh automatically.
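If anyone wants to reproduce the workaround, it is just this, run right before each UMQ execution (T is the table name used above; the SHOW INDEX is optional, to eyeball the cardinality estimates the optimizer will use):
ANALYZE TABLE T;
SHOW INDEX FROM T;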
In a real scenario the relative difference in index cardinality for the tables involved in UMQ will look quite different and will not change as rapidly so I have decided that I don't really need to do anything about it.
Thank you very much for the analysis and answer. I've been chasing this issue for several days during CI on MariaDB 10.1 and Bacula server 9.4 (Debian Buster).
The situation was that after a fresh server installation during a CI cycle, the first two tests (backup and restore) ran smoothly on the unrestarted MariaDB server, and only the third test showed that one particular UMQ took about 20 minutes (building the directory tree during the restore process from a table with about 30K rows).
Unless the MariaDB server was restarted or the table had been analyzed, the problem would not go away. ANALYZE TABLE or the restart changed the cardinality of the fields and the internal query processing exactly as stated in the linked article.
Is it possible to issue an (expensive, but low-priority) SELECT query to MySQL in such a way that if an UPDATE query appears in the queue, MySQL will immediately terminate the SELECT and re-append it to the end of the queue?
If re-appending to the queue is not possible, I'm happy with simply killing the SELECT query.
No, not really.
I am not sure exactly what you need, but my guess is that you need to either optimize the SELECT so it does not lock an entire table, or set up replication and run the SELECT on the slave rather than the master.
You could theoretically find out the MySQL process ID of the SELECT query and, in your application, send a KILL before you do any update.
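Roughly like this; the id 12345 is a placeholder for whatever SHOW PROCESSLIST reports for your SELECT:
SHOW FULL PROCESSLIST;
-- KILL QUERY aborts the statement but keeps the connection open;
-- a plain KILL would drop the whole connection.
KILL QUERY 12345;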
Well, sort of maybe.
A client runs an application which occasionally throws out queries that completely kill performance for everything else on the server. We have monitoring, and if we've got a suitable person ready to react, we can deal with that query manually, and we learn about the problems in the app by doing things that way.
But to prevent major outages if no one is on the ball, we have an automated script which terminates long-running queries, so the server does recover in the event that no one is available to intervene within 15 minutes.
Far from ideal, but that's where things currently are with this project, and it does prevent the occasional extended outages that used to occur. We can only move so fast with fixing up the problem queries.
Anyway, you could run something similar that looks at the running queries and recognises when you have an update waiting on one of your large selects, and in that event kills the select (a rough sketch of such a check follows below). Doing this sort of check a few times a minute is not overly expensive. I'd want to do a bit of testing before running it.
So, whether you can solve your problem this way depends on what your tolerance is for how long an update can be delayed. Running this every minute (as we do) is no problem at all. Running it every second would noticeably add to the overall load. You'd need to test how far you can reasonably go in between those points.
This approach means some delay before the select gets pushed out of the way, but it saves you having to build this logic into potentially many different places in your application.
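The core of such a watchdog is just a query against the processlist; something like the sketch below, where the 60-second threshold and the LIKE pattern are assumptions you would tune to your workload:
SELECT id, time, info
FROM information_schema.processlist
WHERE command = 'Query'
  AND info LIKE 'SELECT%'
  AND time > 60;
-- For each id returned that is blocking a waiting UPDATE, the script
-- would then issue KILL QUERY <id> against that connection.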
--
Regarding breaking up your query: you're most likely better off restricting the chunks by id range from one or more tables in your query, rather than by offset and limit.
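To illustrate the difference (big_table and the ranges here are hypothetical):
-- Paging by offset gets slower as the offset grows, because the skipped rows still have to be walked:
SELECT * FROM big_table ORDER BY id LIMIT 100000, 1000;
-- Restricting by an indexed id range seeks straight to the chunk instead:
SELECT * FROM big_table WHERE id > 100000 AND id <= 101000 ORDER BY id;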
--
There may also be good solutions available based on partitioning your tables so that the queries don't collide as badly. Make sure you have a very good grasp on what you are doing for this though.
What can be done to identify the reason for DB slowness?
When I ran the query in the morning, it ran quickly and I got the output.
When I ran the same query after 1 hour, it took more than 2 minutes.
What can be checked to identify this slowness?
All the tables are properly indexed.
If it's just a single query which is running slowly, EXPLAIN SELECT... as mentioned by arex1337 may help you see the reason.
It would also be worth looking at the output of e.g. vmstat on the box whilst running the query to see what it's doing - you should be able to get a feel for whether the machine is swapping, IO-bound, CPU-bound etc.
Check also with top to look for any rogue processes hogging CPU time.
Finally, if the machine is using RAID, it's possible that, if a drive has failed, the RAID array could be in a degraded state, which could make disc access slower (this is only applicable in certain RAID configurations, but worth considering and ruling out).
You can use EXPLAIN <your query> to get information about how MySQL executes your query. Maybe you get some hints about why it's slow.
EXPLAIN SELECT ... FROM ... WHERE ...;
Also, maybe you just have a slow query, and it was fast the second time because the result was cached?
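One way to check that is to see whether the query cache is even enabled and getting hits; these are standard server variables and status counters:
SHOW VARIABLES LIKE 'query_cache%';   -- query_cache_type, query_cache_size
SHOW GLOBAL STATUS LIKE 'Qcache%';    -- Qcache_hits, Qcache_inserts, ...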
I'm running a MySQL query that joins various tables of 500,000+ rows. Sometimes it takes a second, other times around 15 seconds! This is on my local machine. I have experienced similarly varied times before on other intensive queries; does anyone know why this is?
Thanks
Thanks for the replies - I am using appropriate indexes, inner and left joins, and have a WHERE clause range of one week out of a possible 2-year period of invoices. If I keep varying it (so presumably the query results are not cached) and re-running, the time varies a lot, even if the number of rows retrieved is similar. The server is not busy. There are a few scheduled queries every minute, but they are not intensive and take around 200ms.
The explain plan shows that a table of around 2000 rows is always fully scanned. So maybe these rows are sometimes cached, or maybe the indexes are cached - I didn't know indexes could be cached. I will try again with caching turned off.
Editing again - the query cache is in fact off, and I'm using InnoDB, so it looks like increasing innodb_buffer_pool_size is the way to go.
Same query each time?
It's hard to tell, based on what you've posted. If we assume that the schema and data aren't changing, I'd guess that there's something else running on your machine when the queries are long that would explain the difference. It could be that the state of memory is different, so paging is going on; an anti-virus program is running; some other service has started. It's impossible to answer.
Try to do an
OPTIMIZE TABLE
That should help to refresh some data useful for the query planner.
You have not given us much information. If you're using MyISAM tables, it may be a matter of locks.
Are you using ANSI INNER JOINs? A little basic, but don't use "cross joins". Those are the joins with the comma, like
SELECT * FROM t1, t2 WHERE t1.id_t1=t2.id_t1
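The same query written as an explicit ANSI INNER JOIN gives the same result, but the intent is clearer and the join condition can't be forgotten:
SELECT * FROM t1 INNER JOIN t2 ON t1.id_t1 = t2.id_t1;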
Last things you may want to try: increase your buffers (InnoDB), your key_buffer (MyISAM), and the query cache buffers.
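For example, you can check the current sizes and raise them; the values below are placeholders to size to your RAM, and note that innodb_buffer_pool_size is only dynamic on newer servers (MySQL 5.7.5+), otherwise it has to go in my.cnf followed by a restart:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';
SET GLOBAL key_buffer_size = 268435456;     -- 256M for MyISAM indexes
SET GLOBAL query_cache_size = 67108864;     -- 64M query cache, if you use it
-- innodb_buffer_pool_size: set e.g. innodb_buffer_pool_size = 1G in my.cnf and restart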
Here are some common reasons (apart from your server simply being too busy):
The slow query is hitting the hard drive. In the fast case the indexes and data are already cached in MySQL or the OS file cache.
Retrieving the data gets blocked by updates/inserts; for MyISAM tables the whole table gets locked whenever someone inserts/updates data in it, in some cases.
Table statistics are out of date and/or the wrong index gets selected. Running ANALYZE or OPTIMIZE on the table can help.
You have the query cache enabled. Fetching the result of a cached query is fast; fetching it when it's not in the cache might be slow. Try turning off the query cache to check whether the query is always slow when it's not served from the cache.
In any case, you should show the output of EXPLAIN on your queries to verify that indexes are being used properly - even if they're not, queries can be fast if everything is in RAM, but grind to a halt if they need to hit the hard drive.
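One way to see how much work a query really does, beyond EXPLAIN's estimates, is to compare the handler counters before and after running it; the SELECT here is only a placeholder for your query:
FLUSH STATUS;
SELECT SQL_NO_CACHE * FROM your_table WHERE some_col = 42;   -- placeholder query
SHOW SESSION STATUS LIKE 'Handler_read%';
-- A large Handler_read_rnd_next means rows were read via full table scans;
-- Handler_read_key and Handler_read_next indicate index lookups and range scans.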