MySQL replication monitoring tool [closed]

I have a master/slave MySQL replication setup.
I'm looking for a tool that will let me monitor the replication (see that it has no errors, check the lag, etc.).
I'd prefer a visual tool that gives all team members visibility into the status, rather than a script.
Any ideas?

We are using the following bash script. You could implement the same idea in PHP and make it web-based.
#!/bin/bash
## Joel Chaney ##
## joel.chaney@mongoosemetrics.com (look at robots.txt) ##
## 2012-02-03 ##
repeat_alert_interval=30           # minutes for lock file life
lock_file=/tmp/slave_alert.lck     # location of lock file
EMAIL=YOURNAME@YOURCOMPANY.DOM     # where to send alerts
SSTATUS=/tmp/sstatus               # location of sstatus file
### Code -- do not edit below ##
NODE=`uname -n`
## Check if alert is locked ##
function check_alert_lock () {
    if [ -f $lock_file ] ; then
        current_file=`find $lock_file -cmin -$repeat_alert_interval`
        if [ -n "$current_file" ] ; then
            # echo "Current lock file found"
            return 1
        else
            # echo "Expired lock file found"
            rm $lock_file
            return 0
        fi
    else
        touch $lock_file
        return 0
    fi
}
SLAVE=mysql
$SLAVE -e 'SHOW SLAVE STATUS\G' > $SSTATUS
function extract_value {
    FILENAME=$1
    VAR=$2
    grep -w $VAR $FILENAME | awk '{print $2}'
}
# Compare the SQL-thread position (Relay_Master_Log_File/Exec_Master_Log_Pos)
# against the IO-thread position (Master_Log_File/Read_Master_Log_Pos)
Master_Binlog=$(extract_value $SSTATUS Relay_Master_Log_File)
Master_Position=$(extract_value $SSTATUS Exec_Master_Log_Pos)
Master_Host=$(extract_value $SSTATUS Master_Host)
Master_Port=$(extract_value $SSTATUS Master_Port)
Master_Log_File=$(extract_value $SSTATUS Master_Log_File)
Read_Master_Log_Pos=$(extract_value $SSTATUS Read_Master_Log_Pos)
Slave_IO_Running=$(extract_value $SSTATUS Slave_IO_Running)
Slave_SQL_Running=$(extract_value $SSTATUS Slave_SQL_Running)
Slave_ERROR=$(extract_value $SSTATUS Last_Error)
ERROR_COUNT=0

if [ "$Master_Binlog" != "$Master_Log_File" ]
then
    ERRORS[$ERROR_COUNT]="SQL thread binlog ($Master_Binlog) and IO thread binlog ($Master_Log_File) differ"
    ERROR_COUNT=$(($ERROR_COUNT+1))
fi

# bytes the SQL thread is behind the IO thread
POS_DIFFERENCE=$(echo ${Read_Master_Log_Pos}-${Master_Position} | bc)
if [ $POS_DIFFERENCE -gt 1000 ]
then
    ERRORS[$ERROR_COUNT]="The slave is lagging behind by $POS_DIFFERENCE bytes"
    ERROR_COUNT=$(($ERROR_COUNT+1))
fi

if [ "$Slave_IO_Running" == "No" ]
then
    ERRORS[$ERROR_COUNT]="Replication (IO) is stopped"
    ERROR_COUNT=$(($ERROR_COUNT+1))
fi

if [ "$Slave_SQL_Running" == "No" ]
then
    ERRORS[$ERROR_COUNT]="Replication (SQL) is stopped"
    ERROR_COUNT=$(($ERROR_COUNT+1))
fi

if [ $ERROR_COUNT -gt 0 ]
then
    if check_alert_lock    # only alert when no recent lock file exists
    then
        SUBJECT="${NODE}-ERRORS in replication"
        BODY=''
        CNT=0
        while [ "$CNT" != "$ERROR_COUNT" ]
        do
            BODY="$BODY ${ERRORS[$CNT]}"
            CNT=$(($CNT+1))
        done
        BODY="$BODY \n${Slave_ERROR}"
        echo -e "$BODY" | mail -s "$SUBJECT" $EMAIL
    fi
else
    echo "Replication OK"
fi

#!/bin/bash
HOST=your-server-ip
USER=mysql-user
PASSWORD=mysql-password
SUBJECT="MySQL replication problem"
EMAIL=your@email.address
RESULT=`mysql -h $HOST -u$USER -p$PASSWORD -e 'show slave status\G' | grep Last_SQL_Error | sed -e 's/ *Last_SQL_Error: //'`
if [ -n "$RESULT" ]; then
    echo "$RESULT" | mail -s "$SUBJECT" $EMAIL
fi

You can use any programming language to query MySQL and fetch the results of:
show slave status; <-- execute on the slave
show master status; <-- execute on the master
If you think rolling your own is a bad idea, then install phpMyAdmin; it has a built-in GUI for replication monitoring, e.g. http://demo.phpmyadmin.net/master-config/ (replication)

If you're just interested in whether the slave is up to date or not:
mysql 'your connection info' -e 'show slave status\G' | grep -i seconds_behind

I've used a few different approaches, the most basic being a PHP web page that checks the slave status, with standard monitoring tools then watching that page. This is a nice approach because it means your existing monitoring tools can raise the alerts simply by checking the web page.
Example: to check the status of a database server at host db1.internal
http://mywebserver.com/replicationtest.php?host=db1.internal
Should always return "Yes"
replicationtest.php:
<?php
$username = "myrepadmin";
$password = "";
$database = "database";

mysql_connect($_REQUEST['host'], $username, $password);
#mysql_select_db($database) or die("Unable to select database");

$query = "show slave status;";
$result = mysql_query($query);
$arr = mysql_fetch_assoc($result);
echo $arr['Slave_SQL_Running'];
mysql_close();
?>
You could also monitor Seconds_Behind_Master, Last_IO_Errno, Last_SQL_Errno, etc.
You can monitor this web page externally, or add it into many standard monitoring tools that can check a web page. I've used the free service http://monitor.us
Alternatively, if you don't mind running code from third parties on your internal infrastructure, http://newrelic.com offers great server monitoring tools with a web interface, including a MySQL plugin that provides lots of useful info such as query analysis, InnoDB metrics and replication status with lag monitors. New Relic specialises in web app monitoring, but the free tier allows you to monitor an unlimited number of servers.
I currently use a combination of these tools, with the web page above used to trigger alerts for emergencies and the New Relic tools for long-term performance and trend analysis.

The question is:
do you want to know if your MySQL replication is OK,
OR do you want to know if your data are consistent?
You can't rely on the SHOW SLAVE STATUS output alone to know whether your slave is identical to the master: a (bad) attempt to fix an error that halted your replication may mean that some INSERT or UPDATE or whatever never occurred on your slave.
To check this you have to read SHOW SLAVE STATUS, of course, and everything has to be OK in that output, but you also have to compare the data itself (i.e. number of rows, checksums, ...).
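For example, a quick (if blunt) spot-check is to run the same CHECKSUM TABLE on the master and on the slave and compare the results; the host, database and table names below are placeholders, and the numbers are only meaningful if the table is not being written to while you run them (pt-table-checksum automates this properly across all tables):
# on the master
mysql -h master-host -u checkuser -p -e "CHECKSUM TABLE mydb.mytable;"
# on the slave
mysql -h slave-host -u checkuser -p -e "CHECKSUM TABLE mydb.mytable;"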
I've written a PHP tool to do that: https://bitbucket.org/verticalassertions/verticalslave
It features:
check replication (checks on SHOW SLAVE STATUS values, CHECK TABLE, CHECKSUM TABLE, ...)
check replication with auto-repair (same as above + dump of replicated tables in error)
dump of non-replicated databases (listed in config)
reset replication when you've broken everything (basically a dump of the replicated databases and START SLAVE)
send a shortened report by mail, with a link to the full report on the website version
store past reports
can be run from the CLI (crontab) or manually from the website you set up
Feel free to fork and improve. I'm sure some tools are better (especially in layout xD), but I needed a tool that does exactly what I ask and no fancy things I couldn't figure out.

Related

How can I automatically kill idle GCE instances based on CPU usage?

I'm running some slightly unreliable software on some instances in an instance group. The software is installed and run by a startup script, and most of the time it works without issue, but about 10% of the new instances run out of memory and crash due to some sort of memory leak in the software. I can't get this leak fixed myself, so in the meantime, I've been checking the instances every few hours and killing any that show an idle CPU (the software consumes all available CPU power normally).
However, I'm using preemptible instances, and they can be killed off and restarted at any time, leaving dead instances running whenever I'm not actively monitoring them. After a day of leaving things unattended, I usually see ~80-85% CPU usage in the dashboard, the rest of which is wasted.
Is there any automated way I can kill off these dead instances? Restarting them is already handled by the instance group.
The following worked for me. It's a bash script which uses the uptime UNIX command to check whether the 15-minute average load on the CPU is below a threshold, and automatically shuts down the system if this is true on ten consecutive checks. You need to run this within your VM instance.
Credit, and more detailed explanation: Rohit Rawat's blog.
#!/bin/bash
threshold=0.4
count=0
while true
do
    load=$(uptime | sed -e 's/.*load average: //g' | awk '{ print $3 }')
    res=$(echo $load'<'$threshold | bc -l)
    if (( $res ))
    then
        echo "Idling.."
        ((count+=1))
    fi
    echo "Idle minutes count = $count"

    if (( count>10 ))
    then
        echo Shutting down
        # wait a little bit more before actually pulling the plug
        sleep 300
        sudo poweroff
    fi
    sleep 60
done
It seems like there are two parts to this question:
Identifying dead instances.
Killing off those instances.
In terms of identifying dead instances, one way to do this would be to have a separate management instance that does not run this software and that keeps tabs on the other instances. For example, it could periodically send a health request to the various instances and mark non-responsive instances, or instances reporting an overly high CPU usage, as unhealthy.
Once your management instance has identified the unhealthy instances that need to be reset, you should be able to reset them using the API (I'm guessing the reset command) or by executing the same operation with the gcloud command-line tool, as sketched below.
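For example, once the management instance decides that a given instance is dead, something along these lines should bounce it (the instance name and zone are placeholders):
# hard-reset a specific instance
gcloud compute instances reset db-worker-3 --zone=us-central1-a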
I wish I could add this as a comment to viswajithiii's answer, but I'm just shy of the reputation necessary to comment.
I found the static threshold variable to be inappropriate when I am using cloud VMs with variable numbers of CPUs, as the output of uptime scales with the number of CPUs, as discussed here.
My updated script adds two lines below the threshold assignment to scale the threshold by the number of CPUs. This allows me to set a percentage CPU utilization that works across VMs with different numbers of CPUs.
Otherwise, the script is the same as viswajithiii's.
#!/bin/bash
threshold=0.4
n_cpu=$( grep 'model name' /proc/cpuinfo | wc -l )
threshold=$( echo $n_cpu*$threshold | bc )
count=0
while true
do
    load=$(uptime | sed -e 's/.*load average: //g' | awk '{ print $3 }')
    res=$(echo $load'<'$threshold | bc -l)
    if (( $res ))
    then
        echo "Idling.."
        ((count+=1))
    fi
    echo "Idle minutes count = $count"

    if (( count>10 ))
    then
        echo Shutting down
        # wait a little bit more before actually pulling the plug
        sleep 300
        sudo poweroff
    fi
    sleep 60
done
This works without bc (which is not in GCP Container-Optimized OS), building on viswajithiii's answer and this post:
How can I replace 'bc' tool in my bash script?
It also appends the history list to a file before poweroff. I set my threshold very low, but the load shows 0.00 even when I'm editing files via the CLI. It might work better if the instance is under heavy load.
#!/bin/bash
threshold=10
count=0
while true
do
    load=$(uptime | sed -e 's/.*load average: //g' | awk '{ print $3 }')
    load2=$(awk -v a="$load" 'BEGIN {print a*100}')
    echo $load2
    if [ $load2 -lt $threshold ]
    then
        echo "Idling.."
        ((count+=1))
    fi
    echo "Idle minutes count = $count"

    if (( count>10 ))
    then
        echo Shutting down
        # wait a little bit more before actually pulling the plug
        sleep 300
        history -a
        sudo poweroff
    fi
    sleep 60
done
That's not working for my low-CPU case, but this seems to:
#!/bin/bash
threshold=1
count=0
while true
do
    load=$(awk '{u=$2+$4; t=$2+$4+$5; if (NR==1){u1=u; t1=t;} else print ($2+$4-u1) * 1000 / (t-t1); }' <(grep 'cpu ' /proc/stat) <(sleep 1;grep 'cpu ' /proc/stat))
    load2=$(printf "%.0f\n" $load)
    echo $load
    echo $load2
    if [[ $load2 -lt $threshold ]]
    then
        echo "Idling.."
        ((count+=1))
    fi
    echo "Idle minutes count = $count"

    if (( count>10 ))
    then
        echo Shutting down
        # wait a little bit more before actually pulling the plug
        sleep 300
        history -a
        sudo poweroff
    fi
    sleep 60
done
It only works with both echo loads for some reason.
credits:
How to get overall CPU usage (e.g. 57%) on Linux
https://unix.stackexchange.com/questions/89712/how-to-convert-floating-point-number-to-integer
FYI: according to here, GCP monitoring agent is not available for N type instances: Google Cloud Platform: how to monitor memory usage of VM instances
Put this in a startup script in /etc/my_init.d and make it executable:
sudo mkdir /etc/my_init.d
sudo mv autooff.sh /etc/my_init.d/autooff.sh
sudo chmod 755 /etc/my_init.d/autooff.sh
Actually, that gets deleted. Instead, add a startup-script key to Custom Metadata in Edit for the instance, with the value #!/bin/bash and, on the next line, ~/autooff.sh
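If you prefer the command line over the console, something like this should attach the script as startup metadata (instance name, zone and script path are assumptions; for instances created by a managed instance group you would set it on the instance template instead):
# attach the idle-shutdown script as the instance's startup script
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata-from-file startup-script=autooff.sh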

How to accelerate a shell script for a MongoDB application?

I started shell scripting for my work, but I must admit I'm still far from even being a rookie. Therefore I wanted to ask for your help/advice.
I built a script for a big-data application (taking the quick-and-dirty approach, patching stuff from the internet together) to recursively go through a folder structure and convert all XML files to JSON.
The status quo of my script is:
#!/bin/sh
# Shell script to find all the files under a directory and
# its subdirectories. This also takes into consideration those files
# or directories which have spaces or newlines in their names
cd /Users/q337498/Desktop/Archiv/2014/01/10
DIR="."

function list_files()
{
    if !(test -d "$1")
    then echo $1; return;
    fi

    cd "$1"
    #echo; echo `pwd`:;    # Display directory name

    for i in *
    do
        if test -d "$i"; then        # if directory
            if [ "$(ls -A $i)" ]; then
                list_files "$i"      # recursively list files
                cd ..
            else
                echo "$i is Empty"
            fi
        else
            java -jar /Users/q337498/Desktop/XML2JSON/SaxonEE9-5-1-4J/saxon9ee.jar -s:"$i" -xsl:/Users/q337498/Desktop/xsltjson-master/conf/xml-to-json.xsl -o:output/$(pwd)/${i%%[.]*}
            # if jsonlint /Users/q337498/Desktop/Archiv/2014/01/08/$(pwd)/${i%%[.]*} -q; then
            #     echo "GOOD"
            # else
            #     echo "NOT GOOD"
            # fi
            # echo ${i%%[.]*}
            # echo "$i";    # Display file name
        fi
    done
}

if [ $# -eq 0 ]
then list_files .
     exit 0
fi

for i in $*
do
    DIR="$1"
    list_files "$DIR"
    shift 1    # To read next directory/file name
done
This code works, but the problem is that for 60,000 files it takes up to 15 hours on a MacBook Pro with 16 GB RAM and a 2.8 GHz i7, and I need to convert 10 million files.
How do you think I could accelerate the script? Parallelize? Take some commands out? What options do I have, and how would I actually implement them?
The files are ultimately going to end up in MongoDB, so if someone knows a better way to convert XML to JSON and upload it to Mongo, that input is also welcome.
Cheers,
Dudu
I see two immediate problems here:
You are invoking java once for each file, therefore incurring the JVM startup time for every file, which is going to add up to a huge chunk of time.
You are running single-threaded.
So I would suggest that you:
Write a Java program that does the directory traversal and does your transformation.
Benchmark the performance difference.
Try other Java libraries for doing the XML->JSON conversion: https://github.com/beckchr/staxon/wiki/Benchmark
If necessary for performance, add multi-threading to your application using java.util.concurrent.
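If a full Java rewrite is not an option straight away, a rough stopgap is to parallelise the existing shell approach with xargs. The sketch below reuses the paths from the question, assumes the input files end in .xml, and guesses at 8 workers; every file still pays the JVM startup cost, so the Java rewrite above remains the bigger win:
#!/bin/sh
# Rough parallel variant -- paths copied from the question, -P 8 is a guess at worker count
cd /Users/q337498/Desktop/Archiv/2014/01/10 || exit 1

find . -type f -name '*.xml' -print0 |
  xargs -0 -P 8 -I {} sh -c '
    f=${1#./}                  # path relative to the archive root
    out="output/${f%.*}"       # mirror the source tree under output/
    mkdir -p "$(dirname "$out")"
    java -jar /Users/q337498/Desktop/XML2JSON/SaxonEE9-5-1-4J/saxon9ee.jar \
         -s:"$1" \
         -xsl:/Users/q337498/Desktop/xsltjson-master/conf/xml-to-json.xsl \
         -o:"$out"
  ' sh {}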

User in passdb, but getpwnam() fails!

Attempting to set up Samba + OpenLDAP using nss_ldap.
After joining Windows 7 to the Samba stand-alone PDC, I cannot log in with a domain account unless that account is also added to the /etc/passwd file.
I get: user in passdb, but getpwnam() fails!
Everything I've read points to an nss_ldap issue, yet getent passwd shows users perfectly fine and I am able to ssh into the same Linux host using a user account that is only in the LDAP database.
Additionally, if I crack open the /etc/passwd file and add a line for the user in question, I can then log in.
I'm not using PAM. I added the two Windows 7 registry updates required per the Samba.org site.
Software stack is as follows:
Samba 3.5.3
OpenLDAP 2.4.21
nss_ldap 264
Thoughts/suggestions?
--------------------------------- UPDATE ---------------------------------
Getting closer! My nsswitch.conf did have "files ldap", so I reversed the order (now "ldap files") and something odd happened. Notice, before, I said I could log in with SSH and getent passwd dumped users from both LDAP and files. After making the nsswitch.conf change (ldap before files), simple commands like ls took a long time. Additionally I observed nss_ldap errors as follows:
ls: nss_ldap: could not search LDAP server - Server is unavailable
and
ls: nss_ldap: failed to bind to LDAP server ldap://tsrvr.example.corp: Invalid credentials
I commented out the rootbinddn line in ldap.conf and these errors went away and getent passwd immediately began working again. The order of the output changed also: ldap entries listed before files entries.
Still, though, my Windows 7 client will not log in to the domain and I continue to get the same Samba error message:
User test in passdb, but getpwnam() fails!
In my smb.conf I tried removing the ldapsam:trusted = yes line, and when I do, I get domain authentication errors.
I'm not using SSL/TLS with OpenLDAP and I have the SSL = no setting. I also have the ldap.secret file set. I'm running slapd under the root account. My rootbinddn, before commenting out, referenced an LDAP root user of uid=root,ou=Users,dc=example,dc=corp. root's userPassword using CRYPT matches the bindpw as well as the one in /etc/shadow.
Looking at LDAP log activity for when I get the Samba error, it appears as if LDAP is returning the correct result against a Samba query:
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH base="dc=example,dc=corp" scope=2 deref=0 filter="(&(uid=test)(objectClass=sambaSamAccount))"
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH attr=uid uidNumber gidNumber homeDirectory sambaPwdLastSet sambaPwdCanChange sambaPwdMustChange sambaLogonTime sambaLogoffTime
sambaKickoffTime cn sn displayName sambaHomeDrive sambaHomePath sambaLogonScript sambaProfilePath description sambaUserWorkstations sambaSID sambaPrimaryGroupSID sambaLMPassword sam
baNTPassword sambaDomainName objectClass sambaAcctFlags sambaMungedDial sambaBadPasswordCount sambaBadPasswordTime sambaPasswordHistory modifyTimestamp sambaLogonHours modifyTimestam
p uidNumber gidNumber homeDirectory loginShell gecos
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SEARCH RESULT tag=101 err=0 nentries=1 text=
Any other suggestions?
Much appreciated
Sounds like a problem with /etc/nsswitch.conf. Specifically, the passwd and group lines should refer to ldap before compat or files. Have you looked at this Samba wiki entry?
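For reference, the relevant lines usually end up looking something like this (the exact service names depend on your distribution):
# /etc/nsswitch.conf
passwd: ldap files
group:  ldap files
shadow: ldap files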
SOLVED!!!!!!!!!!!
I have a script that starts Samba (nmbd, smbd) as well as OpenLDAP (slapd). It's an RC script that reads configuration data from a file to determine, among other things, which processes are already running, whether a dependent process failed to start, etc. Here is a snippet of the relevant part of the script. The last line copies a version of nsswitch.conf into place that specifies LDAP lookups.
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StartProc $PID
    if test $? != 0; then
        echo "!!! Aborting Any Remaining Start-up Processes !!!"
        exit 1
    fi
    i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
And upon shutdown I was doing the following; notice I copy an nsswitch.conf file that has "no ldap" entries in it.
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StopProc $PID
    i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
It turns out that in the start-up scenario, Samba wants the nsswitch.conf content to have the ldap entries in place prior to invocation. Here is what I did to fix my issue:
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StartProc $PID
    if test $? != 0; then
        cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
        echo "!!! Aborting Any Remaining Start-up Processes !!!"
        exit 1
    fi
    i=$(($i+1))
done
In summary, it appears that how you start smbd is just as important as when you start it. If you start smbd while nsswitch.conf has no LDAP entries, you get a running smbd (linked against nss_ldap.so) that thinks it should only rely on /etc/passwd (if that is all that is in the nsswitch.conf file), and changing the nsswitch.conf contents after smbd is running has no effect.
Hope this helps other system builders.

How can I view live MySQL queries?

How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably slow everything on the server down considerably, though, since it adds an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file removes a lot of flexibility in how you display it, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile will do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The mysql user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds.
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
There is a possible downside: fast queries might not show up if they run between the intervals that you set. For example: my interval is set at one second, and if a query takes 0.02 seconds to run and runs between intervals, you won't see it.
Use this option preferably when you quickly want to check on running queries without having to set up a listener or anything else.
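Putting the flags together, an invocation like this (the user name is a placeholder) refreshes the full, untruncated process list every second:
mysqladmin -u bob -p --verbose -i 1 processlist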
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like any SQL query.
e.g. Shows all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you can format strace's output by piping the one-liner above into the following command (just add a | between the two):
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching configuration files.
Obviously this won't replace the standard way of enabling logs, which is described below (and which involves reloading the SQL server).
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when execution enters mysql_parse */
{
    printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (like watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are the steps, useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
then restart your MySQL/MariaDB by (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
After finishing, change general_log to 0 (so you can use it again in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope this helps someone.
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
FROM http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySql workbench to open a command line, enter the password.
Run the following to turn on general logging which will record all queries ran:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;

namespace LiveLogs.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a
            // small monitor. Change as required.
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;

            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
            Console.Title = string.Format("Live Logs {0}", filePath);

            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
            fileStream.Position = fileStream.Length;

            StreamReader streamReader;

            // Commented lines are for duplicating the log output as it's written to
            // allow verification via a diff that the contents are the same and all
            // is being output.
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);

            int rowNum = 0;

            while (true)
            {
                streamReader = new StreamReader(fileStream);
                string line;
                string rowStr;

                while (streamReader.Peek() != -1)
                {
                    rowNum++;
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();

                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
                    Console.WriteLine(output);
                    // sw.WriteLine(output);
                }

                // sw.Flush();
                Thread.Sleep(500);
            }
        }
    }
}
In addition to the previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool), which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (either client or app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, then there is a good open-source tool, Percona Monitoring and Management.
But it is a server-based system, and it is not entirely trivial to launch.
It also has a live demo system you can try.
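If you do want to try PMM, the server side is normally brought up as a Docker container along these lines (the image tag and port mapping are assumptions to check against the current Percona docs), and a PMM client/agent is then installed next to each MySQL instance:
# run the PMM server (web UI and collectors) in a container
docker run -d --name pmm-server -p 443:443 percona/pmm-server:2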

How do I do backups in MySQL? [duplicate]

How do I do backups in MySQL?
I'm hoping there'll be something better than just running mysqldump every "x" hours.
Is there anything like SQL Server has, where you can take a full backup each day, and then incrementals every hour, so if your DB dies you can restore up to the latest backup?
Something like the DB log, where as long as the log doesn't die, you can restore up to the exact point where the DB died?
Also, how do these things affect locking?
I'd expect the online transactions to be locked for a while if I do a mysqldump.
You might want to look at incremental backups.
mysqldump is a reasonable approach, but bear in mind that for some engines, this will lock your tables for the duration of the dump - and this has availability concerns for large production datasets.
An obvious alternative to this is mk-parallel-dump from Maatkit (http://www.maatkit.org/), which you should really check out if you're a MySQL administrator. This dumps multiple tables or databases in parallel using mysqldump, thereby decreasing the total time your dump takes.
If you're running in a replicated setup (and if you're using MySQL for important data in production, you have no excuse not to be), taking dumps from a replication slave dedicated to the purpose will prevent any lock issues from causing trouble.
The next obvious alternative - on Linux, at least - is to use LVM snapshots. You can lock your tables, snapshot the filesystem, and unlock the tables again; then start an additional MySQL using a mount of that snapshot, dumping from there. This approach is described here: http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
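A very condensed sketch of that flow (volume group, volume names and sizes are placeholders; the point is that the read lock is only held for the instant it takes to create the snapshot):
# session 1 -- take a global read lock and keep this mysql session open:
#   FLUSH TABLES WITH READ LOCK;
# session 2 -- while the lock is held, snapshot the volume holding the datadir:
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data
# session 1 -- release the lock straight away:
#   UNLOCK TABLES;
# then mount the snapshot read-only and dump/copy from it at leisure:
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap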
Now I am beginning to sound like a marketeer for this product. I answered a question with it here, then I answered another with it again here.
In a nutshell, try SQLyog (Enterprise in your case) from Webyog for all your MySQL requirements. It not only schedules backups, but also schedules synchronization, so you can actually replicate your database to a remote server.
It has a free Community edition as well as an Enterprise edition. I recommend the latter to you, though I also recommend you start with the Community edition and first see how you like it.
I use mysqlhotcopy, a fast on-line hot-backup utility for local MySQL databases and tables. I'm pretty happy with it.
The Percona guys made an open-source alternative to innobackup:
Xtrabackup
https://launchpad.net/percona-xtrabackup/
Read this article about XtraDB:
http://www.linux-mag.com/cache/7356/1.html
You might want to supplement your current offline backup scheme with MySQL replication.
Then if you have a hardware failure you can just swap machines. If you catch the failure quickly, your users won't even notice any downtime or data loss.
I use a simple script that dumps the MySQL database into a tar.gz file, encrypts it using gpg and sends it to a mail account (Google Mail, but that's irrelevant really).
The script is a Python script, which basically runs the following command and emails the output file.
mysqldump -u theuser -pmypassword thedatabase | gzip -9 - | gpg -e -r 12345 -r 23456 > 2008_01_02.tar.gz.gpg
This is the entire backup. It also has the web-backup part, which just tars/gzips/encrypts the files. It's a fairly small site, so the web backups are much less than 20 MB and can be sent to the Gmail account without problem (the MySQL dumps are tiny, about 300 KB compressed). It's extremely basic and won't scale very well. I run it once a week using cron.
I'm not quite sure how we're supposed to put longish scripts in answers, so I'll just shove it in as a code block:
#!/usr/bin/env python
#encoding:utf-8
#
# Creates GPG encrypted web and database backups, and emails them
import os, sys, time, commands

################################################
### Config
DATE = time.strftime("%Y-%m-%d_%H-%M")

# MySQL login
SQL_USER = "mysqluser"
SQL_PASS = "mysqlpassword"
SQL_DB = "databasename"

# Email addresses
BACKUP_EMAIL = ["email1@example.com", "email2@example.com"]  # Array of email(s)
FROM_EMAIL = "root@myserver.com"  # Only one email

# Temp backup locations
DB_BACKUP = "/home/backupuser/db_backup/mysite_db-%(date)s.sql.gz.gpg" % {'date': DATE}
WEB_BACKUP = "/home/backupuser/web_backup/mysite_web-%(date)s.tar.gz.gpg" % {'date': DATE}

# Email subjects
DB_EMAIL_SUBJECT = "%(date)s/db/mysite" % {'date': DATE}
WEB_EMAIL_SUBJECT = "%(date)s/web/mysite" % {'date': DATE}

GPG_RECP = ["MrAdmin", "MrOtherAdmin"]
### end Config
################################################

################################################
### Process config
GPG_RECP = " ".join(["-r %s" % (x) for x in GPG_RECP])  # Format GPG_RECP as arg

sql_backup_command = "mysqldump -u %(SQL_USER)s -p%(SQL_PASS)s %(SQL_DB)s | gzip -9 - | gpg -e %(GPG_RECP)s > %(DB_BACKUP)s" % {
    'GPG_RECP': GPG_RECP,
    'DB_BACKUP': DB_BACKUP,
    'SQL_USER': SQL_USER,
    'SQL_PASS': SQL_PASS,
    'SQL_DB': SQL_DB
}
web_backup_command = "cd /var/www/; tar -c mysite.org/ | gzip -9 | gpg -e %(GPG_RECP)s > %(WEB_BACKUP)s" % {
    'GPG_RECP': GPG_RECP,
    'WEB_BACKUP': WEB_BACKUP,
}
# end Process config
################################################

################################################
### Main application
def main():
    """Main backup function"""
    print "Backup commencing at %s" % (DATE)

    # Run commands
    print "Creating db backup..."
    sql_status, sql_cmd_out = commands.getstatusoutput(sql_backup_command)
    if sql_status == 0:
        db_file_size = round(float(os.stat(DB_BACKUP)[6]) / 1024 / 1024, 2)  # Get file-size in MB
        print "..successful (%.2fMB)" % (db_file_size)
        try:
            send_mail(
                send_from = FROM_EMAIL,
                send_to = BACKUP_EMAIL,
                subject = DB_EMAIL_SUBJECT,
                text = "Database backup",
                files = [DB_BACKUP],
                server = "localhost"
            )
            print "Sending db backup successful"
        except Exception, errormsg:
            print "Sending db backup FAILED. Error was:", errormsg
        #end try

        # Remove backup file
        print "Removing db backup..."
        try:
            os.remove(DB_BACKUP)
            print "...successful"
        except Exception, errormsg:
            print "...FAILED. Error was: %s" % (errormsg)
        #end try
    else:
        print "Creating db backup FAILED. Output was:", sql_cmd_out
    #end if sql_status

    print "Creating web backup..."
    web_status, web_cmd_out = commands.getstatusoutput(web_backup_command)
    if web_status == 0:
        web_file_size = round(float(os.stat(WEB_BACKUP)[6]) / 1024 / 1024, 2)  # File size in MB
        print "..successful (%.2fMB)" % (web_file_size)
        try:
            send_mail(
                send_from = FROM_EMAIL,
                send_to = BACKUP_EMAIL,
                subject = WEB_EMAIL_SUBJECT,
                text = "Website backup",
                files = [WEB_BACKUP],
                server = "localhost"
            )
            print "Sending web backup successful"
        except Exception, errormsg:
            print "Sending web backup FAILED. Error was: %s" % (errormsg)
        #end try

        # Remove backup file
        print "Removing web backup..."
        try:
            os.remove(WEB_BACKUP)
            print "...successful"
        except Exception, errormsg:
            print "...FAILED. Error was: %s" % (errormsg)
        #end try
    else:
        print "Creating web backup FAILED. Output was:", web_cmd_out
    #end if web_status
#end main
################################################

################################################
# Send email function
# needed email libs..
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.Utils import COMMASPACE, formatdate
from email import Encoders

def send_mail(send_from, send_to, subject, text, files=[], server="localhost"):
    assert type(send_to) == list
    assert type(files) == list

    msg = MIMEMultipart()
    msg['From'] = send_from
    msg['To'] = COMMASPACE.join(send_to)
    msg['Date'] = formatdate(localtime=True)
    msg['Subject'] = subject
    msg.attach(MIMEText(text))

    for f in files:
        part = MIMEBase('application', "octet-stream")
        try:
            part.set_payload(open(f, "rb").read())
        except Exception, errormsg:
            raise IOError("File not found: %s" % (errormsg))
        Encoders.encode_base64(part)
        part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(f))
        msg.attach(part)
    #end for f

    smtp = smtplib.SMTP(server)
    smtp.sendmail(send_from, send_to, msg.as_string())
    smtp.close()
#end send_mail
################################################

if __name__ == '__main__':
    main()
You can make full dumps of InnoDB databases/tables without locking (downtime) via mysqldump with the --single-transaction --skip-lock-tables options. This works well for making weekly snapshots plus daily/hourly binary log increments (see "Using the Binary Log to Enable Incremental Backups" in the MySQL manual).
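As a concrete example (database name, credentials and output path are placeholders; the non-locking behaviour only applies to InnoDB tables):
mysqldump --single-transaction --skip-lock-tables -u backup -p mydb | gzip > /backups/mydb-$(date +%F).sql.gz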
@Jake,
Thanks for the info.
Now, it looks like only the commercial version has backup features.
Isn't there ANYTHING built into MySQL to do decent backups?
The official MySQL page even recommends things like "well, you can copy the files, AS LONG AS THEY'RE NOT BEING UPDATED"...
The problem with a straight backup of the mysql database folder is that the backup will not necessarily be consistent, unless you do a write-lock during the backup.
I run a script that iterates through all of the databases, doing a mysqldump and gzip on each to a backup folder, and then backup that folder to tape.
This, however, means that there is no such thing as incremental backups, since the nightly dump is a complete dump. But I would argue that this could be a good thing, since a restore from a full backup will be a significantly quicker process than restoring from incrementals - and if you are backing up to tape, it will likely mean gathering a number of tapes before you can do a full restore.
In any case, whichever backup plan you go with, make sure to do a trial restore to ensure that it works, and get an idea of how long it might take, and exactly what the steps are that you need to go through.
The correct way to run incremental or continuous backups of a MySQL server is with binary logs.
To start with, lock all of the tables or bring the server down. Use mysqldump to make a backup, or just copy the data directory. You only have to do this once, or any time you want a FULL backup.
Before you bring the server back up, make sure binary logging is enabled.
To take an incremental backup, log in to the server and issue a FLUSH LOGS command. Then back up the most recently closed binary log file.
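A minimal version of that incremental step, assuming the default datadir and binary log names (adjust both to your setup):
# close the current binary log and start a new one
mysql -u root -p -e "FLUSH LOGS;"
# every mysql-bin.* file except the newest (now active) one is closed and safe to copy
ls -1 /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I {} cp {} /backups/binlogs/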
If you have all InnoDB tables, it's simpler to just use InnoDB Hot Backup (not free) or mysqldump with the --single-transaction option (you'd better have a lot of memory to handle the transactions).
Binary logs are probably the correct way to do incremental backups, but if you don't trust binary file formats for permanent storage here is an ASCII way to do incremental backups.
mysqldump is not a bad format; the main problem is that it outputs each table as one big line. The following trivial sed will split its output along record borders:
mysqldump --opt -p | sed -e "s/,(/,\n(/g" > database.dump
The resulting file is pretty diff-friendly, and I've been keeping them in a standard SVN repository fairly successfully. That also allows you to keep a history of backups, if you find that the last version got borked and you need last week's version.
This is a pretty solid solution for Linux shell. I have been using it for years:
http://sourceforge.net/projects/automysqlbackup/
Does rolling backups: daily, monthly, yearly
Lots of options
@Daniel,
In case you are still interested, there is a newish (new to me) solution shared by Paul Galbraith: a tool that allows for online backup of InnoDB tables, called ibbackup, from Oracle, which, to quote Paul,
"when used in conjunction with innobackup, has worked great in creating a nightly backup, with no downtime during the backup"
More detail can be found on Paul's blog.
Sounds like you are talking about transaction rollback.
So in terms of what you need: if you have the logs containing all historical queries, isn't that the backup already? Why do you need an incremental backup, which is basically a redundant copy of all the information in the DB logs?
If so, why don't you just use mysqldump and do the backup every once in a while?