How can I view mysql slow_query_log to see which query is taking too much time?
First, you need to check that it's enabled in your MySQL configuration (my.ini or my.cnf, depending on your system):
# enable slow log:
slow_query_log = 1
# log queries longer than n seconds:
long_query_time = 5
# where to log:
slow_query_log_file = /path/to/your/logs/mysql-slow.log
Restart your MySQL server, then watch the logfile using whatever program you like - tail is the simplest:
tail -f /path/to/your/logs/mysql-slow.log
You may need to experiment with the long_query_time setting to find a threshold that catches the problem queries without flooding the log.
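To confirm the settings took effect after the restart, and to generate a test entry, something like this works (a minimal sketch; the SLEEP duration just needs to exceed your long_query_time):
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
SELECT SLEEP(6);  -- deliberately slow query; it should show up in the slow log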
Check the location of this log in the my.ini file, and then open the log in any text editor.
If you ask Google for "slow_query_log", this is the first hit - it explains all you need to know. You have to enable it and set a filename you like (if it's already set, you can find the configured path in your my.ini), then run your queries and look into that file...
If you're running mysqld < 5.2, your my.cnf may look like
log-slow-queries=/var/log/mysql/mysql-slow.log   # where to store the log
long_query_time=3                                # log queries that take longer than this many seconds
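After editing and restarting, you can sanity-check that it took effect (on those older versions the variable shows up as log_slow_queries rather than slow_query_log):
SHOW VARIABLES LIKE 'log_slow_queries';
SHOW VARIABLES LIKE 'long_query_time';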
Related
My Ubuntu 22.04 server is suddenly telling me that "The redo log file "./#innodb_redo/#ib_redo0 size 23289856 is not a multiple of innodb_page_size." My innodb_page_size is 16K, so the error is correct, but I can't seem to find any advice on how to fix it. I tried moving ib_redo0 out of the way but that didn't help. Any ideas?
I also encountered this issue. It appeared to be specific to using ZFS on Ubuntu; in my case it happened during an upgrade to MySQL 8.0.30-0ubuntu0.20.04.2.
Following details in this Ubuntu issue report and this MySQL issue report, I was able to come up with a solution that worked in my environment.
There are 3 commands below to be run as root or with sudo. You should replace 8192 in the first with <default_page_size> - (<broken_file_size> % <default_page_size>), i.e. the number of bytes needed to round the file up to the next multiple of the page size (here that works out to 16384 - 8192 = 8192). The default page size is usually 16384 unless modified.
You may need to replace the #ib_redo0 part of the second command with the broken file reported in the error message.
These commands are intended to pad out the reportedly invalid file with zeros.
Perform a backup before running!
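If you'd rather not do the arithmetic by hand, a small shell sketch like this computes the padding byte count (the path and page size are assumptions taken from the error message above; adjust both to your setup):
# how many zero bytes are needed to round the file up to the next page boundary
FILE='/var/lib/mysql/#innodb_redo/#ib_redo0'
PAGE=16384
SIZE=$(stat -c %s "$FILE")
echo $(( (PAGE - SIZE % PAGE) % PAGE ))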
# Gather required zeros to append
# Will create a "zeros" file in the current directory
# Here 23289856 % 16384 = 8192, so 16384 - 8192 = 8192 zero bytes are needed to reach the next page boundary
dd if=/dev/zero bs=1 count=8192 of=./zeros
# Append zeroes to invalid file
cat zeros >> /var/lib/mysql/#innodb_redo/#ib_redo0
# Restart MySQL
systemctl restart mysql.service
I'd be wary of remaining on ZFS, even if the above fixes things, given the risk of hitting the same issue again.
I had the same problem in an LXD container running on ZFS. I had to move it to a different type of storage pool, e.g. directory or BTRFS.
After that, DanBrown's solution worked for me too.
Thank you.
I would like to display my executed SQL command history in MySQL Query Browser. What is the SQL statement for displaying history?
Try
cat ~/.mysql_history
This will show you the MySQL statements run from the mysql command-line client under your user account.
For MySQL > 5.1.11 or MariaDB
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
Take a look at the table mysql.general_log
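A quick way to inspect it, assuming the standard mysql.general_log columns:
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;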
If you want to output to a log file:
SET GLOBAL log_output = "FILE";
SET GLOBAL general_log_file = "/path/to/your/logfile.log";
SET GLOBAL general_log = 'ON';
As mentioned by jeffmjack in the comments, these settings are forgotten when the server restarts unless you also edit the configuration files (e.g. edit /etc/mysql/my.cnf, then restart to apply the changes).
Now, if you'd like, you can tail -f /var/log/mysql/mysql.log
More info here: Server System Variables
You'll find it here:
~/.mysql_history
You can make it readable (without the escapes) like this:
sed "s/\\\040/ /g" < .mysql_history
(Linux)
Open your terminal (Ctrl+Alt+T) and run the command:
cat ~/.mysql_history
You will get all the previous MySQL query history. Enjoy :)
Look at ~/.mysqlgui/query-browser/history.xml
There you can find the last queries made with MySQL Query Browser
(possibly a few days old)
GoYun.Info's answer, but with Python 3:
cat ~/.mysql_history | python3 -c "import sys; print(''.join([l.encode('utf-8').decode('unicode-escape') for l in sys.stdin]))"
You can see the history from ~/.mysql_history. However the content of the file is encoded by wctomb. To view the content:
shell> cat ~/.mysql_history | python2.7 -c "import sys; print(''.join([l.decode('unicode-escape') for l in sys.stdin]))"
Source: Check MySQL query history from command line
You can look at the query cache: http://www.databasejournal.com/features/mysql/article.php/3110171/MySQLs-Query-Cache.htm but it might not give you access to the actual queries, and it would be very hit-and-miss even if it did work (subtle pun intended).
But MySQL Query Browser very likely maintains its own list of queries that it runs, outside of the MySQL engine. You would have to do the same in your app.
Edit: see dan m's comment leading to this: How to show the last queries executed on MySQL? That approach looks sound.
I followed the instructions here: http://crazytoon.com/2007/07/23/mysql-changing-runtime-variables-with-out-restarting-mysql-server/ but that seems to only set the threshold.
Do I need to do anything else like set the filepath?
According to MySQL's docs
If no file_name value is given for --log-slow-queries, the default name is
host_name-slow.log. The server creates the file in the data directory unless
an absolute path name is given to specify a different directory.
Running
SHOW VARIABLES
doesn't indicate any log file path and I don't see any slow query log file on my server...
EDIT
Looks like I'm using server version 5.0.77, so I needed to do:
SET GLOBAL log_slow_queries = 1;
but I get: ERROR 1238 (HY000): Variable 'log_slow_queries' is a read only variable
I assume I'm going to need to restart the server and have log_slow_queries set in my config?
Try SET GLOBAL slow_query_log = 'ON'; and perhaps FLUSH LOGS;
This assumes you are using MySQL 5.1 or later. If you are using an earlier version, you'll need to restart the server. This is documented in the MySQL Manual. You can configure the log either in the config file or on the command line.
For slow queries on version < 5.1, the following configuration worked for me:
log_slow_queries=/var/log/mysql/slow-query.log
long_query_time=20
log_queries_not_using_indexes=YES
Also note that these lines go under the [mysqld] section of the config file; restart mysqld afterwards.
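For clarity, the placement in the file looks like this (the path is only an example):
[mysqld]
log_slow_queries=/var/log/mysql/slow-query.log
long_query_time=20
log_queries_not_using_indexes=YES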
To check whether the logs are enabled:
SHOW VARIABLES LIKE '%log%';
To enable the logs:
SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';
MySQL Manual - slow-query-log-file
This claims that you can run the following to set the slow-log file (5.1.6 onwards):
set global slow_query_log_file = 'path';
The variable slow_query_log just controls whether it is enabled or not.
I think the problem is making sure that the MySQL server has rights to the file and can write to it.
If you can get it to have access to the file, then you can try setting:
SET GLOBAL slow_query_log = 1;
If not, you can always 'reload' the server after changing the configuration file. On Linux it's usually /etc/init.d/mysql reload
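Putting it together on 5.1.6 or later, a minimal sequence looks like this (the path is just an example and the server must be able to write to it):
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL slow_query_log = 1;
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';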
These work
SET GLOBAL LOG_SLOW_TIME = 1;
SET GLOBAL LOG_QUERIES_NOT_USING_INDEXES = ON;
Broken on my setup 5.1.42
SET GLOBAL LOG_SLOW_QUERIES = ON;
SET GLOBAL SLOW_QUERY_LOG = ON;
SET @@global.log_slow_queries = 1;
http://bugs.mysql.com/bug.php?id=32565
Looks like the best way to do this is to set log_slow_time very high, effectively "turning off" the slow query log, and lower log_slow_time to enable it. Use the equivalent trick (setting it to OFF) for log_queries_not_using_indexes.
If you want to write the general log and the slow query log to a table instead of a file:
To start logging in table instead of file:
set global log_output = 'TABLE';
To enable general and slow query log:
set global general_log = 1;
set global slow_query_log = 1;
To view the logs:
select * from mysql.slow_log;
select * from mysql.general_log;
For more details visit this link
http://easysolutionweb.com/technology/mysql-server-logs/
This should work on MySQL > 5.5:
SHOW VARIABLES LIKE '%long%';
SET GLOBAL long_query_time = 1;
I've been tearing my hair out trying to get MySQL 5 running on CentOS 5, but I've had hardly any luck.
If I leave everything as default and launch the initial install, it works like a charm, but if I tell my.cnf to use a different drive to store the data, I continuously get the "Timeout error occurred trying to start MySQL Daemon." error.
My.cnf is as follows:
[mysqld]
datadir=/database/mysql
socket=/database/mysql/mysql.sock
user=mysql
old_passwords=1
log-error=/database/log/mysqld.log
long_query_time = 10
log_slow_queries = /database/log/mysql-slow.log
query-cache-type = 1
query-cache-size = 8M
innodb_file_per_table
skip-bdb
set-variable = local-infile=0
[mysqld_safe]
pid-file=/var/run/mysqld/mysqld.pid
The folders all have the right permissions, and mysqld.log doesn't have any error messages in it; according to the log, MySQL launched successfully.
Oh, and /database is a mounted drive, but even if I try it on a local directory, I get the same error.
Any ideas what could be going wrong? I've seriously wasted more than 5 hours on this now :(
CHEERS
Shouldn't the datadir be set to the other drive and everything else (socket) point to the standard install locations?
Did you check the SELinux settings? Make sure it is disabled or permissive (setenforce 0), or spend some time learning about it (the chcon command). To disable it on boot, look into /etc/sysconfig/selinux.
Run the command getenforce; if it says "Enforcing", SELinux is on.
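If SELinux turns out to be the culprit and you'd rather keep it enabled, relabelling the new datadir is usually enough. A sketch, assuming the /database/mysql path from the question (mysqld_db_t is the stock SELinux type for MySQL data files):
# label the custom datadir so mysqld is allowed to use it
chcon -R -t mysqld_db_t /database/mysql
# or make the labelling persistent across relabels (needs policycoreutils-python):
semanage fcontext -a -t mysqld_db_t "/database/mysql(/.*)?"
restorecon -R /database/mysql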
How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name | Value |
+------------------+----------------------------+
| general_log | OFF |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably slow down everything on the server considerably, since it adds an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file would remove a lot of flexibility in how you display it, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile would do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The MySQL user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds.
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
There is a possible downside: fast queries might not show up if they start and finish between two polls. E.g. with the interval set at one second, a query that takes 0.02 seconds and runs between polls won't be seen.
Use this option when you just want a quick look at the running queries without having to set up a listener or anything else.
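Putting the flags together (the user name is just a placeholder):
mysqladmin -u bob -p --verbose -i 1 processlist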
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just as with any SQL query.
e.g. Shows all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching any configuration files.
Obviously this won't replace the standard way of enabling logs, which is described below and involves reloading the SQL server.
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (like watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are the steps, useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
then reload/restart your MySQL/MariaDB (prefix with sudo if necessary; a full restart may be needed for my.cnf changes to take effect):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
When you're finished, change general_log back to 0 (so you can use it again in the future), then remove the file and reload the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while true
do
mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope this helps someone.
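If the unthrottled loop is too aggressive for your machine, the same idea with a one-second pause (the interval is arbitrary):
#!/bin/bash
while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL"
    sleep 1
done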
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup I have come across on a Linux Ubuntu machine. It's crazy to see all the queries fly by live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
From http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySQL Workbench to open a command line and enter the password.
Run the following to turn on general logging, which will record all queries run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;
namespace LiveLogs.ConsoleApp
{
class Program
{
static void Main(string[] args)
{
// Console sizing can cause exceptions if you are using a
// small monitor. Change as required.
Console.SetWindowSize(152, 58);
Console.BufferHeight = 1500;
string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
Console.Title = string.Format("Live Logs {0}", filePath);
var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
// Move to the end of the stream so we do not read in existing
// log text, only watch for new text.
fileStream.Position = fileStream.Length;
StreamReader streamReader;
// Commented lines are for duplicating the log output as it's written to
// allow verification via a diff that the contents are the same and all
// is being output.
// var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
// var sw = new StreamWriter(fsWrite);
int rowNum = 0;
while (true)
{
streamReader = new StreamReader(fileStream);
string line;
string rowStr;
while (streamReader.Peek() != -1)
{
rowNum++;
line = streamReader.ReadLine();
rowStr = rowNum.ToString();
string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
Console.WriteLine(output);
// sw.WriteLine(output);
}
// sw.Flush();
Thread.Sleep(500);
}
}
}
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
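To check what your server is currently doing before changing anything (works on 5.1 and later):
SHOW GLOBAL VARIABLES WHERE Variable_name IN ('log_output', 'general_log', 'general_log_file');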
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool), which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed, and no changes to the client or app.
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool: Percona Monitoring and Management.
It is a server-based system, though, and not entirely trivial to set up.
It also has a live demo system you can try.