MySQL 8: why is the initial response to a query slow, but speeds up thereafter?

After several minutes of inactivity (no use of the website) MySQL 8 slows right down. An initial query after a period of inactivity can take a minute, but subsequent queries take seconds. The same query (like logging in) would take a second or two if there was already activity on the server.
Has anyone encountered this or know how to correct this behavior? The machine itself has a significant amount of resources; it's just the first "warm up" call that is slow.
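One thing worth checking (a guess based on the symptom, not something stated in the question) is whether the working set simply falls out of the InnoDB buffer pool or the OS cache while the server is idle, so the first query after a pause has to go back to disk. A rough way to see whether that first query is doing physical reads:
-- If Innodb_buffer_pool_reads jumps right after the slow "warm up" query, the pool was cold
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Buffer pool size relative to the working set (the 128M default is often too small)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- MySQL 8 can persist and reload the buffer pool across restarts; these are ON by default
SHOW VARIABLES LIKE 'innodb_buffer_pool_dump_at_shutdown';
SHOW VARIABLES LIKE 'innodb_buffer_pool_load_at_startup';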

Related

Rails Writes Take 100% Longer After Postgres Migration

I'm working on a migration from MySQL to Postgres on a large Rails app. Most operations are performing at a normal rate. However, we have a particular operation that generates job records every 30 minutes or so. There are usually about 200 records generated and inserted, after which separate workers on another server pick up the jobs and work on them.
Under MySQL it takes about 15 seconds to generate the records, and then another 3 minutes for the worker to perform and write back the results, one at a time (so 200 more updates to the original job records).
Under Postgres it takes around 30 seconds, and then another 7 minutes for the worker to perform and write back the results.
The table being written to has roughly 2 million rows and one sequence column (ID).
I have tried tweaking checkpoint timeouts and sizes with no luck.
The table is heavily indexed and really shouldn't be any different than it was before.
I can't post code samples as it's a huge codebase, and without posting pages and pages of code it wouldn't make sense.
My question is, can anyone think of why this would possibly be happening? There is nothing in the Postgres log and the process of creating these objects has not changed really. Is there some sort of blocking synchronous write behavior I'm not aware of with Postgres?
I've added all sorts of logging in my code to spot errors or transaction failures but I'm coming up with nothing, it just takes twice as long to run, which doesn't seem correct to me.
The Postgres instance is hosted on AWS RDS on a M3.Medium instance type.
We also use New Relic, and it's showing nothing of interest here, which is surprising.
Why does your job queue contain 2 million rows? Are they all live, or have you not moved completed jobs to an archive table to keep your reporting simpler?
Have you used EXPLAIN on your SQL from a psql prompt or your preferred SQL IDE/tool?
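For reference, a minimal example of what that looks like from psql; the table and column names are invented, since the actual schema isn't posted, and the write is rolled back because EXPLAIN ANALYZE really executes the statement:
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE jobs SET status = 'done' WHERE id = 12345;  -- one of the 200 write-backs
ROLLBACK;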
Postgres is a completely different RDBMS than MySQL. It allocates and manipulates space differently, so it may need to be indexed differently.
Additionally there's a tool called pgtune that will suggest configuration changes.
edit: 2014-08-13
Also, rails comes with a profiler that might add some insight. Here's a StackOverflow thread about rails profiling.
You also want to watch your DB server at the disk IO level. Does your job fulfillment result in a large number of updates? Postgres creates a new row when you update an existing row and marks the old row as available, instead of just overwriting the existing row in place. So you may be seeing a lot more IO as a result of your RDBMS switch.
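If the fulfillment step really is doing lots of single-row updates, the dead row versions and autovacuum activity it produces show up in the statistics views. A quick check (the table name jobs is assumed here, not taken from the question):
-- Dead tuples pile up between autovacuum runs; heavy update churn on an indexed 2M-row table shows up here
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'jobs';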

Execution time is different for the same query. What should be the reason?

When I execute the same query multiple times in the MySQL console, the execution time varies each time.
I can understand the difference if it's in milliseconds, but sometimes the same query takes 1 second and sometimes 5 seconds.
What could be the reason in this case?
MANY reasons (a couple of quick checks are sketched after this list):
the result was cached and the cache got cleared
the table is locked (maybe because it is executing another big query)
the disk is slow or busy doing other things
you are running out of memory
the results might be changing (pulling 1k records vs pulling 500k records)
the server is remote, so you might have network problems
it is the ghost in the machine
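A couple of quick checks for the lock and cache items above (generic diagnostics, not specific to the asker's setup; the table name and id are placeholders):
-- See whether another long-running statement is holding things up
SHOW FULL PROCESSLIST;
-- Tables currently locked or in use
SHOW OPEN TABLES WHERE In_use > 0;
-- Rule the query cache in or out (the cache only exists on MySQL 5.7 and earlier): bypass it and compare timings
SELECT SQL_NO_CACHE * FROM my_table WHERE id = 42;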

How to improve the mysql "fetch" time of a result set?

I have a query that returns a large data set needed for reporting purposes. Currently, the "duration" shown in MySQL Workbench, which I'm assuming to be execution time, is about 7 seconds, so it is fairly optimized. It returns a measly 6000 rows, but it takes nearly 150 seconds to return them according to the "fetch" time.
Now, there are over 50 columns, which may explain some of the speed, but when I extracted the data set into a spreadsheet, it turned out to be about 4MB. I'm certainly not an expert, but I didn't expect 4MB to take 150 seconds to return over the pipe. I went ahead and performed the same query on a localhost setup to eliminate networking issues. Same result! It took about 7 seconds to execute, and 150 seconds to return the data on the same machine.
This report is expected to run real-time on demand, so having the end user wait 2 minutes is unacceptable for this use case. How can I improve the time it takes to return the data from MySQL?
UPDATE: Thank you all for starting to point me in the right direction. As it turns out, the "duration" and "fetch" figures in Workbench are horribly inaccurate. The two minutes I was experiencing was all execution time, and in fact my query needed optimizing. Thanks again; this had me scratching my head. I will never rely on these metrics again...
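For anyone hitting the same confusion: one way to separate server-side execution time from client fetch/transfer time, independent of what Workbench reports, is to force the full result to be produced on the server without sending the rows back. The inner query here is just a placeholder for the real report query:
-- If this also takes ~150 seconds, the time is spent executing, not transferring rows
SELECT COUNT(*)
FROM (
    SELECT * FROM report_rows   -- the 50-column report query goes here
) AS t;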

Perfectly good queries on MySQL taking more than 5 seconds

Lately we are seeing some queries in the MySQL (master) logs but have no idea why they show up there:
Queries are select/update table where id = <some integer>.
There is index on id
table size is below 100 000
Rows scanned are in hundreds (sometimes < 100)
Server is running on extremely good hardware
there are no joins involved
We do not see any heavy activity running on database at that time
tables are innodb
the same queries generally don't even take 50 ms, but sometimes these queries take about 4-8 seconds
One observation: all the similar "non-slow-but-weirdly-taking-high-time" queries take almost the same amount of time for some duration, i.e. queries like the ones above will all take about 4.35 seconds, with a variation of 0.05 seconds.
Does network latency/packet drop affect MySQL query timing?
show processlist;
show global status like '%onnect%';
show global status like '%open%';
Is anything backed up? Is it waiting in a queue? Waiting for file handles? What are your max_connections, open_files_limit, and thread_concurrency?
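For those last three, a quick way to compare the configured limits with current usage (thread_concurrency only exists as a variable on older MySQL releases, so that one may come back empty):
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'thread_concurrency';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';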
One side question: does network latency/packet drop affect MySQL query timing?
Yes, the timeout must occur before the query is resent by the client
Do you see these problems locally or over the network? If the latter, then obviously packet drops can affect your performance if you are measuring from the client.
Is it running in a virtual machine? That can affect performance.
Disk problems?
How is serialization set up? Could it be a contention problem caused by many processes accessing the same row?
You may want to enable the query/slow query logs to see if there is any sort of pattern that causes this.
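If they are not already on, both logs can be enabled at runtime without a restart; the 1-second threshold here is just an example:
-- Slow query log: record statements slower than 1 second
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
-- The general query log records every statement, so only enable it briefly
SET GLOBAL general_log = 'ON';
SHOW VARIABLES LIKE 'slow_query_log_file';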
The MySQL slow log is not a representative source for learning about your slow queries. If something makes the server slow, all queries usually end up in the slow log.
E.g. if you have a slow blocking SELECT on a MyISAM table, a lot of PK updates will go to the slow log.
You need to look for other slow queries or server problems. What is the load average on this particular machine? Is MySQL being pushed into swap? Other applications? Queries per second?

MySQL query slowing down until restart

I have a service that sits on top of a MySQL 5.5 database (INNODB). The service has a background job that is supposed to run every week or so. On a high level the background job does the following:
Do some initial DB read and write in one transaction
Execute UMQ (described below) with a set of parameters in one transaction.
If no records are returned we are done!
Process the result from UMQ (this is a bit heavy so it is done outside of any DB transaction)
Write the outcome of the previous step to DB in one transaction (this writes to tables queried by UMQ and ensures that the same records are not found again by UMQ).
Goto step 2.
UMQ - Ugly Monster Query: This is a nasty database query that joins a bunch of tables, has conditions on columns in several of these tables, and includes a NOT EXISTS subquery with some more joins and conditions. UMQ includes an ORDER BY and also has a LIMIT 1000. Even though the query is bad, I have done what I can here - there are indexes on all columns filtered on and the joins are all over foreign key relations.
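For context, the general shape of UMQ is roughly the sketch below; every table and column name is invented, since the real query isn't posted:
SELECT j.*
FROM jobs j
JOIN customers c ON c.id = j.customer_id
JOIN schedules s ON s.id = j.schedule_id
WHERE j.status = 'pending'
  AND c.active = 1
  AND NOT EXISTS (
        SELECT 1
        FROM job_results r
        JOIN result_details d ON d.result_id = r.id
        WHERE r.job_id = j.id AND d.state = 'done'
  )
ORDER BY j.created_at
LIMIT 1000;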
I do expect UMQ to be heavy and take some time, which is why it's executed in a background job. However, what I'm seeing is rapidly degrading performance until it eventually causes a timeout in my service (maybe 50 times slower after 10 iterations).
First I thought that it was because the data queried by UMQ changes (see step 4 above), but that wasn't it, because if I took the last query (the one that caused the timeout) from the slow query log and executed it myself directly, I got the same behavior - right up until I restarted the MySQL service. After restart, the exact same query on the exact same data that took >30 seconds before the restart now took <0.5 seconds. I can reproduce this behavior every time by restoring the database to its initial state and restarting the process.
Also, using the trick described in this question I could see that the query scans around 60K rows after restart as opposed to 18M rows before. EXPLAIN tells me that around 10K rows should be scanned and the result of EXPLAIN is always the same. No other processes are accessing the database at the same time and the lock_time in the slow query log is always 0. SHOW ENGINE INNODB STATUS before and after restart gives me no hints.
So finally the question: Does anybody have any clue of why I'm seeing this behavior? And how can I analyze this further?
I have the feeling that I need to configure MySQL differently in some way but I have searched and tested like crazy without coming up with anything that makes a difference.
Turns out that the behavior I saw was the result of how the MySQL optimizer uses InnoDB statistics to decide on an execution plan. This article put me on the right track (even though it does not exactly discuss my problem). The most important thing I learned from this is that MySQL calculates statistics on startup and then again once in a while. These statistics are then used to optimize queries.
The way I had set up the test data, the table T where most writes are done in step 4 started out empty. After each iteration T would contain more and more records, but the InnoDB statistics had not yet been updated to reflect this. Because of this the MySQL optimizer always chose an execution plan for UMQ (which includes a JOIN with T) that worked well when T was empty but performed worse and worse the more records T contained.
To verify this I added an ANALYZE TABLE T; before every execution of UMQ and the rapid degradation disappeared. No lightning performance, but acceptable. I also saw that leaving the database alone for half an hour or so (maybe a bit shorter, but at least more than a couple of minutes) would allow the InnoDB statistics to refresh automatically.
In a real scenario the relative difference in index cardinality for the tables involved in UMQ will look quite different and will not change as rapidly so I have decided that I don't really need to do anything about it.
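To summarize the workaround in one place (T stands for the growing table from the description; the persistent-statistics variables only exist on MySQL 5.6 and later, while the question was about 5.5):
-- Refresh optimizer statistics for the table that grows during the job
ANALYZE TABLE T;
-- The cardinality figures the optimizer currently believes for each index
SHOW INDEX FROM T;
-- MySQL 5.6+: persistent statistics with automatic recalculation after roughly 10% of rows change
SHOW VARIABLES LIKE 'innodb_stats_persistent';
SHOW VARIABLES LIKE 'innodb_stats_auto_recalc';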
Thank you very much for the analysis and answer. I've been chasing this issue for several days during CI on MariaDB 10.1 and Bacula server 9.4 (Debian Buster).
The situation was that after a fresh server installation during a CI cycle, the first two tests (backup and restore) ran smoothly on the unrestarted MariaDB server, and only the third test showed that one particular UMQ took about 20 minutes (building the directory tree during the restore process from a table with about 30k rows).
Unless the MariaDB server was restarted or the table had been analyzed, the problem would not go away. ANALYZE TABLE or the restart changed the cardinality of the fields and the internal query processing exactly as stated in the linked article.