How can I view live MySQL queries?
How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
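While the log is enabled you can watch queries arrive in real time; for example (using the general_log_file path reported above - yours may differ):
tail -f /var/run/mysqld/mysqld.log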
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware, though, that this would probably slow everything on the server down considerably, since it adds an extra INSERT on top of every single query.
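As a rough illustration of the idea only (not from the original answer): the table and column names below (orders, id, query_history) are hypothetical placeholders, and note that a trigger can only record DML against the specific table it is attached to, not every statement the server runs.
-- hypothetical audit table
CREATE TABLE query_history (
  id          BIGINT AUTO_INCREMENT PRIMARY KEY,
  logged_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  executed_by VARCHAR(128),
  note        VARCHAR(255)
);

-- hypothetical trigger: records every INSERT against the orders table
CREATE TRIGGER orders_audit_ins
AFTER INSERT ON orders
FOR EACH ROW
  INSERT INTO query_history (executed_by, note)
  VALUES (CURRENT_USER(), CONCAT('INSERT on orders, id=', NEW.id));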
Edit: another alternative is the General Query Log, but having it written to a flat file would remove a lot of possibilities for flexibility of display, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile would do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The mysql user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds.
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
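For example, combining the flags above (bob is just the example user from before):
mysqladmin -u bob -p -i 1 --verbose processlist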
There is a possible downside: fast queries might not show up if they run between the intervals that you set. For example, with the interval set to one second, a query that takes .02 seconds and runs between two polls won't be shown.
Use this option when you just want to quickly check on running queries without having to set up a listener or anything else.
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like any other SQL query.
e.g. show all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
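If the list is long, a small variation on the queries above puts the longest-running statements first (same table and columns, just sorted):
SELECT ID, USER, HOST, DB, TIME, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;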
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping the command above (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching configuration files.
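Pieced together, the whole thing is simply the two one-liners above joined with a pipe:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1 | grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"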
Obviously this won't replace the standard way of enabling logs, described below, which involves reloading the SQL server.
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (e.g. watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are the steps, useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
Then restart your MySQL/MariaDB with (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
When you have finished, change general_log back to 0 (so you can use it in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
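Alternatively, if you only need the log briefly, you can toggle it at runtime without touching any configuration file (as in the accepted answer above), assuming your account has sufficient privileges - for example:
mysql -e "SET GLOBAL general_log = 'ON';"
# ...reproduce the behaviour you want to capture, then...
mysql -e "SET GLOBAL general_log = 'OFF';"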
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it my best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while true
do
mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope maybe this helps someone.
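If the unthrottled loop uses more CPU than you'd like, a lightly throttled variant of the same script (the **** placeholders are the same as above; the sleep interval is an arbitrary choice) could look like this:
#!/bin/bash
while true
do
  mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL"
  sleep 0.5   # poll twice per second instead of as fast as possible
done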
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
From: http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
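One caveat: on newer MySQL releases (5.6 and later) the bare log option has been removed, so the equivalent configuration there uses the general_log settings shown earlier on this page, for example:
[mysqld]
general_log      = 1
general_log_file = /var/log/mysql/mysql.log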
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySQL Workbench to open a command line and enter the password.
Run the following to turn on general logging, which will record all queries run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;

namespace LiveLogs.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a
            // small monitor. Change as required.
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;

            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
            Console.Title = string.Format("Live Logs {0}", filePath);

            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
            fileStream.Position = fileStream.Length;

            StreamReader streamReader;

            // Commented lines are for duplicating the log output as it's written to
            // allow verification via a diff that the contents are the same and all
            // is being output.
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);

            int rowNum = 0;

            while (true)
            {
                streamReader = new StreamReader(fileStream);

                string line;
                string rowStr;

                while (streamReader.Peek() != -1)
                {
                    rowNum++;
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();
                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
                    Console.WriteLine(output);
                    // sw.WriteLine(output);
                }

                // sw.Flush();
                Thread.Sleep(500);
            }
        }
    }
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
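If you prefer querying over tailing a file, log_output also accepts TABLE, in which case the entries land in mysql.general_log instead of a file - a rough sketch (the limit and column choice are arbitrary):
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;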
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool) which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (for either the client or the app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool: Percona Monitoring and Management.
However, it is a server-based system, and it is not entirely trivial to launch.
It also has a live demo system you can try.
Related
Cannot set LC_ALL to locale en_US.UTF-8: JavaScript is not supported
I'm running MySQL v8.0.23 on my local machine.

$ sudo apt-get install mysql-server
$ sudo snap install mysql-shell

But when I try to make mysqlsh enter js mode, it gives the following error:

$ mysqlsh --js
Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
JavaScript is not supported.

Though I can switch to \sql or \py. What am I missing?

SHELL COMMANDS

The shell commands allow executing specific operations including updating the shell configuration. The following shell commands are available:

- \            Start multi-line input when in SQL mode.
- \connect     (\c) Connects the shell to a MySQL server and assigns the global session.
- \disconnect  Disconnects the global session.
- \edit        (\e) Launch a system editor to edit a command to be executed.
- \exit        Exits the MySQL Shell, same as \quit.
- \G           Send command to mysql server, display result vertically.
- \g           Send command to mysql server.
- \help        (\?,\h) Prints help information about a specific topic.
- \history     View and edit command line history.
- \nopager     Disables the current pager.
- \nowarnings  (\w) Don't show warnings after every statement.
- \option      Allows working with the available shell options.
- \pager       (\P) Sets the current pager.
- \py          Switches to Python processing mode.
- \quit        (\q) Exits the MySQL Shell.
- \reconnect   Reconnects the global session.
- \rehash      Refresh the autocompletion cache.
- \show        Executes the given report with provided options and arguments.
- \source      (\.) Loads and executes a script from a file.
- \sql         Executes SQL statement or switches to SQL processing mode when no statement is given.
- \status      (\s) Print information about the current global session.
- \system      (\!) Execute a system shell command.
- \use         (\u) Sets the active schema.
- \warnings    (\W) Show warnings after every statement.
- \watch       Executes the given report with provided options and
I tried to follow the official documentation again; I needed to add the MySQL APT package, and everything is working fine now.
https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-install-linux-quick.html
https://dev.mysql.com/doc/mysql-apt-repo-quick-guide/en/#apt-repo-setup
MySQL login-path issues with clustercheck script used in xinetd
default: on
# description: mysqlchk

service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
        disable = no
        flags = REUSE
        socket_type = stream
        type = UNLISTED
        port = 9200
        wait = no
        user = root
        server = /usr/bin/mysqlclustercheck
        log_on_failure += USERID
        only_from = 0.0.0.0/0
        #
        # Passing arguments to clustercheck
        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
        # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
        # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
        # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
        #
        # recommended to put the IPs that need
        # to connect exclusively (security purposes)
        per_source = UNLIMITED
}

/etc/xinetd.d

It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected. In the mysqlclustercheck script, instead of using the --user= and --password= syntax, I am using the --login-path= syntax. The script runs fine when I run it from the command line, but the status for xinetd was showing signal 13.

After debugging, I have found that even a simple command like this is not working:

mysql_config_editor print --all >> /tmp/test.txt

We don't see any output generated when it is run using xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?

server = /usr/bin/clustercheck

I am wondering if you could test your binary location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.

First of all, as mentioned, the Percona Cluster Control script is called clustercheck, so make sure you are using the correct name and correct path.

Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the Cluster Control script. The mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, so I suggest you do the following.

Find where the mysql client command is located on your system:

ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #

Then edit the script /usr/bin/mysqlclustercheck, and in the following line:

MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \

place the exact path of the mysql client command you found in the previous step.

I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password in order to connect to the MySQL server. So normally, you should execute the script from the command line like:

gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain

where haproxy/haproxyMySQLpass is the MySQL connection user/pass for the HAProxy monitoring user. Additionally, you should specify them in your script's xinetd settings like:

server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass

Last but not least, the signal 13 you are getting is because you try to write something in a script run by xinetd. If, for example, in your mysqlclustercheck you try to add a statement like echo "debug message", you will probably see the broken pipe signal (13 in POSIX).

Finally, I had issues with this script using SLES 12.3, and I finally managed to run it not as 'nobody' but as 'root'. Hope it helps.
Executing a system command from mysql
I am trying to execute a shell command from within mysql (from within a procedure or a trigger, or from the mysql command line). I have added lib_mysqludf_sys to the mysql plugins and created the functions that are available with the library (see the library home page).

The library has 5 functions:

sys_set - to set $PATH - this works and stores the $PATH which I can later check.
sys_get - to get the stored value of $PATH - this also works and returns the value that I have stored.
sys_exec - to execute a command in the system and return the exit code.
sys_eval - to execute a command in the system and return the standard output.
lib_mysqludf_sys_info - return the current version of the library - this also works.

I need sys_exec and sys_eval to work correctly. I think I have found the problem in my search but cannot solve it: mysql is limited by AppArmor and is not granted access to execute system commands by the default AppArmor profile. I have tried the commands in the documentation to disable a single profile, disable the framework, put all profiles except one into enforce mode, and put all profiles in complain mode. Nothing works. The command sudo apparmor_status always gives me the same output:

20 profiles are loaded.
20 profiles are in enforce mode.
/opt/extras.ubuntu.com/unity-lens-askubuntu/unity-askubuntu-daemon
/sbin/dhclient
/usr/bin/evince
/usr/bin/evince-previewer
/usr/bin/evince-previewer//launchpad_integration
/usr/bin/evince-previewer//sanitized_helper
/usr/bin/evince-thumbnailer
/usr/bin/evince-thumbnailer//sanitized_helper
/usr/bin/evince//launchpad_integration
/usr/bin/evince//sanitized_helper
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/connman/scripts/dhclient-script
/usr/lib/cups/backend/cups-pdf
/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper
/usr/lib/telepathy/mission-control-5
/usr/lib/telepathy/telepathy-*
/usr/sbin/cupsd
/usr/sbin/mysqld
/usr/sbin/tcpdump
/usr/share/gdm/guest-session/Xsession
0 profiles are in complain mode.
5 processes have profiles defined.
5 processes are in enforce mode.
/sbin/dhclient (2537)
/usr/lib/telepathy/mission-control-5 (2709)
/usr/sbin/cupsd (12245)
/usr/sbin/cupsd (12250)
/usr/sbin/mysqld (12675)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Please tell me how I could disable AppArmor or change the profile for mysql so that it has access to executing system commands. The reason I am doing all this is so that I can execute a system command when something happens in the DB (via a DB trigger); if you have suggestions for some other way in which this could easily be implemented, please mention those too. Thanks.
I managed to get this working. First I put AppArmor in complain mode for the necessary profiles, then used AppArmor's interactive tools (aa-genprof/aa-logprof) to configure the profile for mysqld.
SQL command to display history of queries
I would like to display my executed SQL command history in MySQL Query Browser. What is the SQL statement for displaying history?
Try cat ~/.mysql_history - this will show you all mysql commands run on the system.
For MySQL > 5.1.11 or MariaDB:

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

Take a look at the table mysql.general_log.

If you want to output to a log file:

SET GLOBAL log_output = "FILE";
SET GLOBAL general_log_file = "/path/to/your/logfile.log";
SET GLOBAL general_log = 'ON';

As mentioned by jeffmjack in comments, these settings will be forgotten before the next session unless you edit the configuration files (e.g. edit /etc/mysql/my.cnf, then restart to apply changes).

Now, if you'd like, you can tail -f /var/log/mysql/mysql.log

More info here: Server System Variables
You'll find it there: ~/.mysql_history

You'll make it readable (without the escapes) like this:

sed "s/\\\040/ /g" < .mysql_history
(Linux) Open your terminal (Ctrl+Alt+T) and run:

cat ~/.mysql_history

You will get all the previous mysql query history. Enjoy :)
Look at ~/.mysqlgui/query-browser/history.xml - here you can find the last queries made with mysql_query_browser (some days old).
@GoYun.Info's answer, but with Python 3:

cat ~/.mysql_history | python3 -c "import sys; print(''.join([l.encode('utf-8').decode('unicode-escape') for l in sys.stdin]))"
You can see the history from ~/.mysql_history. However, the content of the file is encoded by wctomb. To view the content:

shell> cat ~/.mysql_history | python2.7 -c "import sys; print(''.join([l.decode('unicode-escape') for l in sys.stdin]))"

Source: Check MySQL query history from command line
You can look at the query cache: http://www.databasejournal.com/features/mysql/article.php/3110171/MySQLs-Query-Cache.htm - but it might not give you access to the actual queries, and it would be very hit-and-miss even if it did work (subtle pun intended).

But MySQL Query Browser very likely maintains its own list of queries that it runs, outside of the MySQL engine. You would have to do the same in your app.

Edit: see dan m's comment leading to this: How to show the last queries executed on MySQL? - looks sound.
How do I do backups in MySQL? [duplicate]
This question already has answers here: How do I backup a MySQL database? (5 answers) Closed 9 years ago.

How do I do backups in MySQL? I'm hoping there'll be something better than just running mysqldump every "x" hours.

Is there anything like SQL Server has, where you can take a full backup each day, and then incrementals every hour, so if your DB dies you can restore up to the latest backup? Something like the DB log, where as long as the log doesn't die, you can restore up to the exact point where the DB died?

Also, how do these things affect locking? I'd expect the online transactions to be locked for a while if I do a mysqldump.
You might want to look at incremental backups.
mysqldump is a reasonable approach, but bear in mind that for some engines, this will lock your tables for the duration of the dump - and this has availability concerns for large production datasets.

An obvious alternative to this is mk-parallel-dump from Maatkit (http://www.maatkit.org/), which you should really check out if you're a MySQL administrator. This dumps multiple tables or databases in parallel using mysqldump, thereby decreasing the amount of total time your dump takes.

If you're running in a replicated setup (and if you're using MySQL for important data in production, you have no excuses not to be doing so), taking dumps from a replication slave dedicated to the purpose will prevent any lock issues from causing trouble.

The next obvious alternative - on Linux, at least - is to use LVM snapshots. You can lock your tables, snapshot the filesystem, and unlock the tables again; then start an additional MySQL using a mount of that snapshot, dumping from there. This approach is described here: http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
Now I am beginning to sound like a marketeer for this product. I answered a question with it here, then I answered another with it again here.

In a nutshell, try SQLyog (Enterprise in your case) from Webyog for all your MySQL requirements. It not only schedules backups, but also schedules synchronization, so you can actually replicate your database to a remote server. It has a free community edition as well as an enterprise edition. I recommend the latter to you, though I also recommend you start with the community edition and first see how you like it.
I use mysqlhotcopy, a fast on-line hot-backup utility for local MySQL databases and tables. I'm pretty happy with it.
The Percona guys made an open-source alternative to innobackup: XtraBackup.
https://launchpad.net/percona-xtrabackup/
Read this article about XtraDB: http://www.linux-mag.com/cache/7356/1.html
You might want to supplement your current offline backup scheme with MySQL replication. Then if you have a hardware failure you can just swap machines. If you catch the failure quickly, your users won't even notice any downtime or data loss.
I use a simple script that dumps the mysql database into a tar.gz file, encrypts it using gpg and sends it to a mail account (Google Mail, but that's irrelevant really).

The script is a Python script, which basically runs the following command, and emails the output file:

mysqldump -u theuser -p mypassword thedatabase | gzip -9 - | gpg -e -r 12345 -r 23456 > 2008_01_02.tar.gz.gpg

This is the entire backup. It also has the web-backup part, which just tar/gzips/encrypts the files. It's a fairly small site, so the web backups are much less than 20MB, so can be sent to the GMail account without problem (the MySQL dumps are tiny, about 300KB compressed). It's extremely basic, and won't scale very well. I run it once a week using cron.

I'm not quite sure how we're supposed to put longish scripts in answers, so I'll just shove it as a code-block:

#!/usr/bin/env python
# encoding: utf-8
#
# Creates a GPG encrypted web and database backups, and emails it

import os, sys, time, commands

################################################
### Config
DATE = time.strftime("%Y-%m-%d_%H-%M")

# MySQL login
SQL_USER = "mysqluser"
SQL_PASS = "mysqlpassword"
SQL_DB = "databasename"

# Email addresses
BACKUP_EMAIL = ["email1@example.com", "email2@example.com"] # Array of email(s)
FROM_EMAIL = "root@myserver.com" # Only one email

# Temp backup locations
DB_BACKUP = "/home/backupuser/db_backup/mysite_db-%(date)s.sql.gz.gpg" % {'date':DATE}
WEB_BACKUP = "/home/backupuser/web_backup/mysite_web-%(date)s.tar.gz.gpg" % {'date':DATE}

# Email subjects
DB_EMAIL_SUBJECT = "%(date)s/db/mysite" % {'date':DATE}
WEB_EMAIL_SUBJECT = "%(date)s/web/mysite" % {'date':DATE}

GPG_RECP = ["MrAdmin", "MrOtherAdmin"]
### end Config
################################################

################################################
### Process config
GPG_RECP = " ".join(["-r %s" % (x) for x in GPG_RECP]) # Format GPG_RECP as arg

sql_backup_command = "mysqldump -u %(SQL_USER)s -p%(SQL_PASS)s %(SQL_DB)s | gzip -9 - | gpg -e %(GPG_RECP)s > %(DB_BACKUP)s" % {
    'GPG_RECP': GPG_RECP,
    'DB_BACKUP': DB_BACKUP,
    'SQL_USER': SQL_USER,
    'SQL_PASS': SQL_PASS,
    'SQL_DB': SQL_DB
}

web_backup_command = "cd /var/www/; tar -c mysite.org/ | gzip -9 | gpg -e %(GPG_RECP)s > %(WEB_BACKUP)s" % {
    'GPG_RECP': GPG_RECP,
    'WEB_BACKUP': WEB_BACKUP,
}
# end Process config
################################################

################################################
### Main application
def main():
    """Main backup function"""
    print "Backing commencing at %s" % (DATE)

    # Run commands
    print "Creating db backup..."
    sql_status, sql_cmd_out = commands.getstatusoutput(sql_backup_command)
    if sql_status == 0:
        db_file_size = round(float(os.stat(DB_BACKUP)[6]) / 1024 / 1024, 2) # Get file-size in MB
        print "..successful (%.2fMB)" % (db_file_size)

        try:
            send_mail(
                send_from = FROM_EMAIL,
                send_to = BACKUP_EMAIL,
                subject = DB_EMAIL_SUBJECT,
                text = "Database backup",
                files = [DB_BACKUP],
                server = "localhost"
            )
            print "Sending db backup successful"
        except Exception, errormsg:
            print "Sending db backup FAILED. Error was:", errormsg
        #end try

        # Remove backup file
        print "Removing db backup..."
        try:
            os.remove(DB_BACKUP)
            print "...successful"
        except Exception, errormsg:
            print "...FAILED. Error was: %s" % (errormsg)
        #end try

    else:
        print "Creating db backup FAILED. Output was:", sql_cmd_out
    #end if sql_status

    print "Creating web backup..."
    web_status, web_cmd_out = commands.getstatusoutput(web_backup_command)
    if web_status == 0:
        web_file_size = round(float(os.stat(WEB_BACKUP)[6]) / 1024 / 1024, 2) # File size in MB
        print "..successful (%.2fMB)" % (web_file_size)

        try:
            send_mail(
                send_from = FROM_EMAIL,
                send_to = BACKUP_EMAIL,
                subject = WEB_EMAIL_SUBJECT,
                text = "Website backup",
                files = [WEB_BACKUP],
                server = "localhost"
            )
            print "Sending web backup successful"
        except Exception, errormsg:
            print "Sending web backup FAILED. Error was: %s" % (errormsg)
        #end try

        # Remove backup file
        print "Removing web backup..."
        try:
            os.remove(WEB_BACKUP)
            print "...successful"
        except Exception, errormsg:
            print "...FAILED. Error was: %s" % (errormsg)
        #end try

    else:
        print "Creating web backup FAILED. Output was:", web_cmd_out
    #end if web_status
#end main
################################################

################################################
# Send email function

# needed email libs..
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.Utils import COMMASPACE, formatdate
from email import Encoders

def send_mail(send_from, send_to, subject, text, files=[], server="localhost"):
    assert type(send_to) == list
    assert type(files) == list

    msg = MIMEMultipart()
    msg['From'] = send_from
    msg['To'] = COMMASPACE.join(send_to)
    msg['Date'] = formatdate(localtime=True)
    msg['Subject'] = subject

    msg.attach(MIMEText(text))

    for f in files:
        part = MIMEBase('application', "octet-stream")
        try:
            part.set_payload(open(f, "rb").read())
        except Exception, errormsg:
            raise IOError("File not found: %s" % (errormsg))
        Encoders.encode_base64(part)
        part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(f))
        msg.attach(part)
    #end for f

    smtp = smtplib.SMTP(server)
    smtp.sendmail(send_from, send_to, msg.as_string())
    smtp.close()
#end send_mail
################################################

if __name__ == '__main__':
    main()
You can make full dumps of InnoDB databases/tables without locking (downtime) via mysqldump with the "--single-transaction --skip-lock-tables" options. Works well for making weekly snapshots plus daily/hourly binary log increments (see "Using the Binary Log to Enable Incremental Backups" in the manual).
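For example, a weekly snapshot along those lines might look like this (the user name and output path are illustrative):
mysqldump -u backupuser -p --single-transaction --skip-lock-tables --all-databases | gzip > full_$(date +%F).sql.gz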
@Jake, thanks for the info. Now, it looks like only the commercial version has backup features. Isn't there ANYTHING built into MySQL to do decent backups? The official MySQL page even recommends things like "well, you can copy the files, AS LONG AS THEY'RE NOT BEING UPDATED"...
The problem with a straight backup of the mysql database folder is that the backup will not necessarily be consistent, unless you do a write-lock during the backup. I run a script that iterates through all of the databases, doing a mysqldump and gzip on each to a backup folder, and then backup that folder to tape. This, however, means that there is no such thing as incremental backups, since the nightly dump is a complete dump. But I would argue that this could be a good thing, since a restore from a full backup will be a significantly quicker process than restoring from incrementals - and if you are backing up to tape, it will likely mean gathering a number of tapes before you can do a full restore. In any case, whichever backup plan you go with, make sure to do a trial restore to ensure that it works, and get an idea of how long it might take, and exactly what the steps are that you need to go through.
The correct way to run incremental or continuous backups of a MySQL server is with binary logs.

To start with, lock all of the tables or bring the server down. Use mysqldump to make a backup, or just copy the data directory. You only have to do this once, or any time you want a FULL backup.

Before you bring the server back up, make sure binary logging is enabled.

To take an incremental backup, log in to the server and issue a FLUSH LOGS command. Then back up the most recently closed binary log file.

If you have all InnoDB tables, it's simpler to just use InnoDB Hot Backup (not free) or mysqldump with the --single-transaction option (you'd better have a lot of memory to handle the transactions).
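A rough sketch of that cycle with the standard tools (the binlog names and paths are illustrative, and credentials are omitted):
# one-time full backup, rotating to a fresh binary log at the same moment
mysqldump --single-transaction --flush-logs --master-data=2 --all-databases > full.sql
# each incremental backup: close the current binary log and copy the finished one
mysqladmin flush-logs
cp /var/lib/mysql/mysql-bin.000042 /backups/
# restore: load the full dump, then replay the increments in order
mysql < full.sql
mysqlbinlog /backups/mysql-bin.000042 | mysql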
Binary logs are probably the correct way to do incremental backups, but if you don't trust binary file formats for permanent storage, here is an ASCII way to do incremental backups.

mysqldump is not a bad format; the main problem is that it outputs a table as one big line. The following trivial sed will split its output along record borders:

mysqldump --opt -p | sed -e "s/,(/,\n(/g" > database.dump

The resulting file is pretty diff-friendly, and I've been keeping them in a standard SVN repository fairly successfully. That also allows you to keep a history of backups, if you find that the last version got borked and you need last week's version.
This is a pretty solid solution for the Linux shell. I have been using it for years: http://sourceforge.net/projects/automysqlbackup/

It does rolling backups (daily, monthly, yearly) and has lots of options.
@Daniel, in case you are still interested, there is a newish (new to me) solution shared by Paul Galbraith: a tool that allows for online backup of InnoDB tables, called ibbackup, from Oracle. To quote Paul, when used in conjunction with innobackup, it has worked great in creating a nightly backup, with no downtime during the backup. More detail can be found on Paul's blog.
Sounds like you are talking about transaction rollback. So in terms of what you need, if you have the logs containing all historical queries, isn't that the backup already? Why do you need an incremental backup, which is basically a redundant copy of all the information in the DB logs? If so, why don't you just use mysqldump and do the backup every once in a while?