Is there any way to see an overview of which kinds of queries consume the most time each day on MySQL?
Yes, MySQL can create a slow query log. You'll need to start mysqld with the --log-slow-queries flag:
mysqld --log-slow-queries=/path/to/your.log
Then you can parse the log using mysqldumpslow:
mysqldumpslow /path/to/your.log
More info is here (http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html).
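mysqldumpslow can also sort and trim the report; for example, to list the ten queries with the highest total time (flags per mysqldumpslow's documented options; the path is the one used above):
# top 10 queries by total execution time
mysqldumpslow -s t -t 10 /path/to/your.log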
You can always set up query logging as described here:
http://dev.mysql.com/doc/refman/5.0/en/query-log.html
It depends on what you mean by 'most time'. There may be thousands if not hundreds of thousands of queries which take very little time each, but consume 90% of CPU/IO bandwidth. Or there may be a few huge outliers.
There are tools for performance monitoring and analysis, such as the built-in PERFORMANCE_SCHEMA, the enterprise tools from the Oracle/MySQL team, and online services like newrelic which can track performance of an entire application stack.
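For example, if the performance schema is enabled (MySQL 5.6+), a sketch like this surfaces both kinds of offender; the timer columns are in picoseconds, and you can order by SUM_TIMER_WAIT for the aggregate hogs or MAX_TIMER_WAIT for the outliers:
-- top statement digests by total time consumed
SELECT DIGEST_TEXT,
       COUNT_STAR            AS executions,
       SUM_TIMER_WAIT / 1e12 AS total_sec,
       MAX_TIMER_WAIT / 1e12 AS worst_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;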
Related
We have the Percona monitoring tool for monitoring our MySQL DB, but generating the slow-log report does not give us instant results. Is there a better approach, using metrics/PromQL or query analytics, where we can get the min, max, and average time of critical queries?
Min, max, average -- these don't make sense until you have enough samples to compute them against. Rethink the need for "instant results".
pt-query-digest could be run daily or hourly (or whatever) to get results for the "recent past".
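A sketch of such a daily run from cron (the log path and the 24-hour --since window are assumptions):
# digest the last day's slow log into a dated report
pt-query-digest --since 24h /var/log/mysql/slow.log > /var/log/mysql/digest-$(date +%F).txt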
A lot of metrics can be graphed by the various monitoring tools available from Percona, MariaDB, and Oracle, plus others. Some cost money. Some come "close" to "instant results" even for slow queries.
Please describe your goal in different words; we may be able to better direct you.
Metrics like SHOW GLOBAL STATUS LIKE 'Threads_running'; (or a graph monitoring that) can spot a spike in real time. But there is nothing actionable in merely knowing that there is a spike.
I prefer looking at the slowlog afterward. The "worst" queries are readily identified by pt-query-digest. Spikes and bottlenecks can be identified, but not until they are "finished".
Deadlock details come from the hard-to-parse SHOW ENGINE InnoDB STATUS;, but only one at a time and after the fact.
In a not-well-tuned system, the first entry in pt-query-digest is (sometimes) a query that consumes over 50% of the system resources. Fixing that one query makes a big difference. Very cost-effective.
We are planning to rewrite a legacy system that uses a MySQL InnoDB database, and we are trying to analyse the main bottlenecks that should be avoided in the next version.
The system has many services/jobs that run overnight and generate data (inserts/updates); these are what mainly need optimizing. The jobs currently run 2-3 hours on average.
We have already gathered the long-running queries that must be optimized.
But I am wondering if it is possible to gather information and statistics about long-running transactions.
Information about which tables are locked by transactions the most - average locking time, lock type, periods - would be very helpful.
Could somebody advise a tool or script that can gather such information?
Or maybe someone can share their own experience in database analysis and optimization?
MySQL has a built-in capability for capturing "slow" query statistics (but to get an accurate picture you need to set the slow threshold to 0). You can turn the log into useful information with mysqldumpslow (bundled with MySQL). I like the Percona Toolkit, but there are lots of other tools available.
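A sketch of capturing everything at runtime on 5.1+ (note that logging every query produces a large file, so keep the window bounded):
-- log all queries, not just slow ones
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;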
I want to check the performance of my MySQL database. I googled and came across commands like SHOW FULL PROCESSLIST, but they are not very clear to me. I just want to know and calculate the performance of the database in terms of how much heap memory it is taking and other such measures.
Is there any way to know and assess the performance of the database, so that I can optimize and improve it?
Thanks in advance
The basic tool is MySQL Workbench which will work with any recent version of MySQL. It's not as powerful as the enterprise version, but is a great place to start.
The configuration can be exposed with SHOW VARIABLES, and the current state of the system is exposed through SHOW STATUS. These status numbers are what end up being graphed in most tools.
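For example (the variable and counter names here are just common ones to start with):
-- how big is the InnoDB buffer pool?
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- how busy is the server right now?
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Threads_running';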
Don't forget that you can do a lot of monitoring on the application side, turning on database logs for instance. Barring that you can enable the "slow query" log in MySQL to check which queries are having the most impact. These can then be diagnosed with EXPLAIN.
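For instance, once the slow log has pointed at a query (the table and column here are invented for illustration):
-- check whether the query can use an index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;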
Download the MySQL enterprise tools. They will allow you to monitor load on the server as well as the performance of individual queries.
You can use the open-source Percona Toolkit, whose tools can help you efficiently archive rows, find duplicate indexes, summarize MySQL servers, analyze queries from logs and tcpdump, and collect vital system information when problems occur.
You can try experimenting with the Performance_Schema tables available from MySQL v5.6 onwards, which can give detailed query and database statistics.
http://www.markleith.co.uk/2012/07/04/mysql-performance-schema-statement-digests/
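A quick sanity check that statement digesting is actually collecting data (a sketch against the 5.6 performance_schema):
-- the statement consumers, including statements_digest, should be enabled
SELECT * FROM performance_schema.setup_consumers WHERE NAME LIKE '%statements%';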
I am about to begin developing a logging system, for future implementation in a current PHP application, to get load and usage statistics from a MySQL database.
The statistics will later be used to get info about database calls per second, query times, etc.
Of course, this will only be used when the app is in the testing stage, since it will most certainly cause a bit of additional load itself.
However, my biggest question right now is whether I should use MySQL to log the queries, or go for a file-based system. I'd guess that it would be a bit of a headache to create something that allows writes from multiple locations when using a file-based system to handle the logs?
How would you do it?
Use the general log, which will show client activity, including all the queries:
http://dev.mysql.com/doc/refman/5.1/en/query-log.html
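In recent 5.1 releases the general log can also be toggled at runtime, without a server restart (the file path is illustrative):
-- enable the general query log on a running server
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';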
If you need very detailed statistics on how long each query is taking, use the slow log with a long_query_time of 0 (or some other sufficiently short time):
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
Then use http://www.maatkit.org/ to analyze the logs as needed.
MySQL already has logging built in - Chapter 5.2 of the manual describes the options. You'll probably be interested in the General Query Log (all queries), the Binary Log (queries that change data) and the Slow Query Log (queries that take too long, or don't use indexes).
If you insist on using your own solution, you will want to write a database middle layer that all your DB calls go through, which can handle the timing aspects. As to where you write the logs: if you're in development it doesn't matter too much, but the idea of using a second database isn't bad. You don't need an entirely separate DBMS, just a different instance of MySQL (on a different machine, or just a different instance using a different port). I'd go for a second MySQL instance instead of the filesystem - you'll get all your good SQL functions like SUM and AVG to parse your data.
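If you do roll your own, here is a minimal sketch of such a log table in the second instance (all names invented for illustration):
-- one row per logged application query
CREATE TABLE query_log (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  logged_at DATETIME NOT NULL,
  query_text TEXT NOT NULL,
  duration_ms DECIMAL(10,3) NOT NULL
);
-- then aggregate with plain SQL, e.g. call volume and timings per day
SELECT DATE(logged_at) AS day, COUNT(*) AS calls,
       AVG(duration_ms) AS avg_ms, MAX(duration_ms) AS worst_ms
FROM query_log
GROUP BY DATE(logged_at);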
If all you are interested in is longer-term, non-real time analysis, turn on MySQL's regular query logging. There are tons of tools for doing analysis on the query-logs (both regular and slow-query), giving you information about the run-times, average rows returned, etc. Seems to be what you are looking for.
If you are doing tests on MySQL, you should store the results in a different database such as Postgres; this way you won't increase the load with your own operations.
I agree with macabail but would only add that you could couple this with a cron job and a simple script to extract and generate any statistics you might want.
What tools/methods do you recommend to diagnose and profile MySQL in live production server?
My goal to test alternative ways of scaling up the system and see their influence on read/write timing, memory, CPU load, disk access etc. and to find bottlenecks.
First of all you should set up some kind of monitoring with e.g.:
MySQL Enterprise Monitor
MONyog
Cacti (free)
Munin (free)
MySQL Activity Report (free)
Other possibly helpful tools: mytop, innotop, mtop, maatkit.
In addition, you should enable logging of slow queries in your my.cnf.
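A minimal my.cnf sketch, using the same era's option names as above (the 2-second threshold is an assumption; tune it for your workload):
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time = 2
# optionally also log queries doing full scans
log-queries-not-using-indexes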
Before you start to tune/change parameters, you should create some kind of test plan and compare the before/after results to see whether your changes made sense or not.
This is something that I have worked on quite a bit.
MonYog - MySQL monitoring service. We use this in production. It is not free but has a lot of features, including alerts and historical data.
MySQL Enterprise Monitor - available with MySQL enterprise (i.e., not cheap)
Roll Your Own!
About the roll your own option:
We actually developed a really cool monitoring application that uses RRD tool (used by the common MRTG) and a combination of MySQL statistics, and system stats, such as iostat. This was not only a great exercise but gave us a ton of flexibility to monitor exactly what we want from a single interface.
Here is a Brief Description of some approaches to building your own stats.
One of our big motivations for rolling our own, even though we also use MonYog, was to track disk statistics. Disk I/O can be a major bottleneck, and the standard MySQL monitoring systems do not have I/O monitoring. We use iostat, which is part of the sysstat package.
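The sampling itself can be as simple as (the 60-second interval is our choice, not a requirement):
# extended per-device I/O statistics, one sample per minute
iostat -dxk 60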
We have an interface that displays graphs of MySQL statistics next to disk i/o stats, allowing us to really get an overall picture of how the MySQL load is affecting disk i/o.
Before this, we really had no idea why our production applications were getting bogged down. We discovered that disk i/o was a major issue, and that MySQL was creating a lot of temporary tables on disk when we were running complex queries. We were able to optimize our queries and improve disk performance dramatically.
Jet Profiler for sure
Also add to list: RHQ 4 (open source) -- http://rhq-project.org/
Add to the list:
Maatkit
dbForge Studio for MySQL
Jet Profiler