The following is the list of processes in my database.
Is there any way to solve my database performance problem? Whenever I run a MySQL query it takes about a minute or more.
And as shown in the following image, retrieving some records from the database takes even longer.
I have been asked to diagnose why a query looking something like this
SELECT COUNT(*) AS count
FROM users
WHERE first_digit BETWEEN 500 AND 1500
AND second_digit BETWEEN 5000 AND 45000;
went from taking around 0.3 seconds to execute to suddenly taking over 3 seconds. The system is MySQL running on Ubuntu.
The table is not sorted and contains about 1.5M rows. After I added a composite index I got the execution time back down to about 0.2 seconds; however, this does not explain the root cause of why the execution time suddenly increased by an order of magnitude.
How can I begin to investigate the cause of this?
Since your SQL query has not changed, and I interpret your description as meaning the data set has not changed or grown, I suggest you take a look at the following areas, in order:
1) Have you removed the index and run your SQL query again?
2) Other access to the database. Are other applications or users running heavy queries on the same database? Look in particular at large data transfers to and from the database server in question.
A factor of 10 slowdown? A likely cause is going from entirely cached to not cached.
Please show us SHOW CREATE TABLE, EXPLAIN SELECT, the RAM size, and the value of innodb_buffer_pool_size. And how big (GB) is the table?
Also, did someone happen to do a dump, ALTER TABLE, or OPTIMIZE TABLE just before the slowdown?
The above info will either show what caused caching to fail, or show the need for more RAM.
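For reference, the information asked for above can be gathered with a few statements; a minimal sketch, assuming the table from the question is called users:

-- Table definition and current indexes
SHOW CREATE TABLE users;

-- How the optimizer plans to execute the query
EXPLAIN SELECT COUNT(*) AS count
FROM users
WHERE first_digit BETWEEN 500 AND 1500
  AND second_digit BETWEEN 5000 AND 45000;

-- Size of the InnoDB buffer pool
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Approximate on-disk size of the table (data plus indexes) in GB
SELECT (data_length + index_length) / 1024 / 1024 / 1024 AS size_gb
FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'users';

If the table plus its indexes is larger than the buffer pool, a cold cache after a restart, dump, or ALTER TABLE can easily explain a factor-of-10 slowdown.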
INDEX(first_digit, second_digit) (in either order) will be "covering" for that query; this will be faster than without any index.
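A minimal sketch of adding such an index, using the column names from the question (the index name is just an example):

CREATE INDEX idx_first_second ON users (first_digit, second_digit);
-- With this index in place, EXPLAIN for the COUNT(*) query should show
-- "Using index" in the Extra column, i.e. the query is answered from the
-- index alone without touching the table rows.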
What's the easiest way to find out how long it takes to evaluate a long-running MySQL query (longer than 10 minutes)?
The issue is that:
When I try to run it from MySQL Workbench, I get a timeout error.
When I look at the process list, sometimes the process finishes executing when I'm not looking and I don't know how long the script has taken.
I have a huge database with more than 250 tables. Different types of queries are run on the database. The database has grown over the years, and now I need to optimise the database and the queries. I have already followed optimisation concepts such as indexing and so on.
My problem is: how can I log each query that runs on the database together with its execution time, so I can analyse which queries take how many seconds and optimise them?
I know that a MySQL trigger would seem ideal for this, but I don't know how to write a trigger for the whole database so that it logs each query to a table along with the query's execution time. I want to log all the CRUD operations which occur in the database.
How can I get this done?
Use the MySQL slow query log. This will help you capture only the queries which are slow instead of logging and analysing all queries. You just need to set the long_query_time parameter to something like 1 or 2 seconds.
Also, you could set long_query_time = 0 and you will see all SQL queries.
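A minimal sketch of enabling the slow query log at runtime (the log file path here is just an example; the same settings can also be put in my.cnf):

-- Enable the slow query log on the running server
SET GLOBAL slow_query_log = 'ON';

-- Log every query slower than 1 second (use 0 to log everything)
-- Note: the global value only applies to connections opened after the change
SET GLOBAL long_query_time = 1;

-- Optional: choose where the log is written (example path)
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';

-- Check the current settings
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';

The resulting log file can then be summarised with mysqldumpslow to see which queries take the longest.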
I have a service that sits on top of a MySQL 5.5 database (INNODB). The service has a background job that is supposed to run every week or so. On a high level the background job does the following:
1) Do some initial DB read and write in one transaction.
2) Execute UMQ (described below) with a set of parameters in one transaction. If no records are returned we are done!
3) Process the result from UMQ (this is a bit heavy so it is done outside of any DB transaction).
4) Write the outcome of the previous step to the DB in one transaction (this writes to tables queried by UMQ and ensures that the same records are not found again by UMQ).
5) Go to step 2.
UMQ - Ugly Monster Query: This is a nasty database query that joins a bunch of tables, has conditions on columns in several of these tables, and includes a NOT EXISTS subquery with some more joins and conditions. UMQ includes an ORDER BY and also has a LIMIT 1000. Even though the query is bad I have done what I can here - there are indexes on all columns filtered on and the joins all follow foreign key relations.
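The actual query isn't shown, but from the description its shape is roughly like the following; every table and column name here is invented purely for illustration:

SELECT o.id, o.payload
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN regions r ON r.id = c.region_id
WHERE o.status = 'PENDING'
  AND r.active = 1
  AND NOT EXISTS (
        SELECT 1
        FROM shipments s
        JOIN shipment_items si ON si.shipment_id = s.id
        WHERE s.order_id = o.id
          AND si.state = 'DONE'
      )
ORDER BY o.created_at
LIMIT 1000;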
I do expect UMQ to be heavy and take some time, which is why it's executed in a background job. However, what I'm seeing is rapidly degrading performance until it eventually causes a timeout in my service (maybe 50 times slower after 10 iterations).
First I thought that it was because the data queried by UMQ changes (see step 4 above), but that wasn't it: if I took the last query (the one that caused the timeout) from the slow query log and executed it myself directly, I got the same behavior until I restarted the MySQL service. After the restart, the exact same query on the exact same data that took >30 seconds before the restart now took <0.5 seconds. I can reproduce this behavior every time by restoring the database to its initial state and restarting the process.
Also, using the trick described in this question I could see that the query scans around 60K rows after restart as opposed to 18M rows before. EXPLAIN tells me that around 10K rows should be scanned and the result of EXPLAIN is always the same. No other processes are accessing the database at the same time and the lock_time in the slow query log is always 0. SHOW ENGINE INNODB STATUS before and after restart gives me no hints.
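The linked question isn't quoted here, but if the trick referred to is the usual handler-counter approach, it looks roughly like this:

-- Reset the per-session counters
FLUSH STATUS;

-- run the UMQ here

-- The Handler_read_* counters show how many rows were actually examined
SHOW SESSION STATUS LIKE 'Handler_read%';

Comparing these counters before and after a restart would show the 60K vs 18M difference directly.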
So finally the question: Does anybody have any clue of why I'm seeing this behavior? And how can I analyze this further?
I have the feeling that I need to configure MySQL differently in some way but I have searched and tested like crazy without coming up with anything that makes a difference.
Turns out that the behavior I saw was the result of how the MySQL optimizer uses InnoDB statistics to decide on an execution plan. This article put me on the right track (even though it does not discuss exactly my problem). The most important thing I learned from this is that MySQL calculates statistics on startup and then only once in a while. These statistics are then used to optimize queries.
The way I had set up the test data, the table T where most writes are done in step 4 started out empty. After each iteration T would contain more and more records, but the InnoDB statistics had not yet been updated to reflect this. Because of this, the MySQL optimizer always chose an execution plan for UMQ (which includes a JOIN with T) that worked well when T was empty but performed worse and worse the more records T contained.
To verify this I added an ANALYZE TABLE T; before every execution of UMQ and the rapid degradation disappeared. Not lightning performance, but acceptable. I also saw that leaving the database alone for half an hour or so (maybe a bit shorter, but at least more than a couple of minutes) would allow the InnoDB statistics to refresh automatically.
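For anyone who wants to check for the same problem, a minimal sketch (T stands for the table name used above):

-- Refresh the InnoDB statistics for T so the optimizer sees the current data distribution
ANALYZE TABLE T;

-- Inspect the cardinality figures the optimizer currently believes
SHOW INDEX FROM T;

Running EXPLAIN on the UMQ before and after ANALYZE TABLE should then show whether the chosen plan changes.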
In a real scenario the relative difference in index cardinality for the tables involved in UMQ will look quite different and will not change as rapidly so I have decided that I don't really need to do anything about it.
Thank you very much for the analysis and answer. I had been investigating this issue for several days during CI on MariaDB 10.1 and Bacula server 9.4 (Debian Buster).
The situation was that after a fresh server installation during a CI cycle, the first two tests (backup and restore) ran smoothly on the unrestarted MariaDB server, and only the third test showed that one particular UMQ took about 20 minutes (building the directory tree during the restore process from a table with about 30k rows).
Unless the MariaDB server was restarted or the table had been analyzed, the problem would not go away. ANALYZE TABLE or the restart changed the cardinality of the fields and the internal query processing exactly as stated in the linked article.
I have a MySQL DB and a simple query, and I noticed a difference in query time between executing the query through the Hibernate query editor in Eclipse and executing the same query directly in MySQL. The table has 60,524 rows.
The Hibernate query is
from AppLog a
and it takes 3.4 seconds.
Hibernate constructs the native SQL like this:
select
applog0_.ID_APP_LOG as ID1_706_,
applog0_.ID_APP_MODULE_EVENT as ID5_706_,
applog0_.DATE_INSERT as DATE2_706_,
applog0_.DESCRIPTION as DESCRIPT3_706_,
applog0_.ID_PERSON as ID6_706_,
applog0_.VERSION as VERSION706_
from
APP_LOG applog0_
When I run this directly in MySQL it takes 139 milliseconds.
The difference is enormous ... where is the trick?
My guess is that reading the data takes the same amount of time, and displaying it at the MySQL prompt takes almost no time at all, but copying 60k rows from MySQL to Eclipse takes longer and is probably responsible for the extra 3.3 seconds. With MySQL profiling you can dig a bit deeper into where the time goes: http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
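A minimal sketch of the profiling approach linked above (note that SHOW PROFILES is deprecated in newer MySQL versions in favour of the Performance Schema):

-- Enable profiling for the current session
SET profiling = 1;

-- run the query under investigation here (e.g. the generated SELECT above)

SHOW PROFILES;              -- lists the profiled statements with their total duration
SHOW PROFILE FOR QUERY 1;   -- per-stage breakdown (e.g. Sending data) for statement 1

If most of the 3.4 seconds is spent transferring and materialising the 60k rows on the Eclipse side, the server-side profile will still show a short execution time.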