Removing old MySQL / MariaDB backups in Bash Backup Script

I've written a bash script, run from cron, that backs up all databases on a particular machine nightly and weekly. The script correctly removes old backups, except when the month changes.
As an example, let's say it's November 2nd. The script runs at 11:00pm and correctly removes the backup made on November 1st. But come December 1st, the script gets confused and does not remove the backup made on November 30th.
How can I fix this script to correctly remove the old backups in this case?
DATABASES=$(echo 'show databases;' | mysql -u backup --password='(password)' | grep -v ^Database$)
LIST=$(echo $DATABASES | sed -e "s/\s/\n/g")
DATE=$(date +%Y%m%d)
DAYOLD=$(($DATE-1))
SUNDAY=$(date +%a)
WEEKOLD=$(($DATE-7))
for i in $LIST; do
    if [[ $i != "mysql" ]]; then
        mysqldump --single-transaction $i > /mnt/backups/mariadb/daily/$i.$DATE.sql
        if [ -f /mnt/backups/mariadb/daily/$i.$DAYOLD.sql ]; then
            rm -f /mnt/backups/mariadb/daily/$i.$DAYOLD.sql
        fi
        if [[ $SUNDAY == "Sun" ]]; then
            cp /mnt/backups/mariadb/daily/$i.$DATE.sql /mnt/backups/mariadb/weekly/$i.$DATE.sql
            rm -f /mnt/backups/mariadb/weekly/$i.$WEEKOLD.sql
        fi
    fi
done

If you know how many backups are made in a given period of time (say you know that exactly 30 backups were made between 2nd Nov and 2nd Dec), you can simply keep that number of the most recent backups and delete everything older. It's very simple, and you don't have to deal with dates, which is fairly painful in bash:
$ (ls -t|head -n 30;ls)|grep -v ^Database|sort|uniq -u|xargs rm -rf
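For readability, the same idea can be written out step by step (a minimal sketch; the backup directory and the number of files to keep are assumptions, and it assumes file names without spaces):
# Keep the newest $keep files in the backup directory and delete the rest (sketch; path and count assumed)
cd /mnt/backups/mariadb/daily || exit 1
keep=30
# The newest $keep names appear twice in the combined listing, older files only once,
# so "uniq -u" leaves exactly the older files, which are then deleted
(ls -t | head -n "$keep"; ls) | sort | uniq -u | xargs -r rm -f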
You can then easily automate this by removing the oldest backup each day, so that you always keep just the fixed number of backups you want:
#! /bin/bash
# Create new full backup
BACKUP_DIR="/path-to-backups/"
BACKUP_DAYS=1
# Prepare backup
cd ${BACKUP_DIR}
# Find the most recent backup (file names are date-stamped, hence the "201" match)
latest=$(ls -rt | grep 201 | head -1)
# Point the "latest" symlink at it
ln -sf ${BACKUP_DIR}${latest} latest
# Cleanup: keep only the newest ${BACKUP_DAYS} date-stamped backups
to_remove=$( (ls -t | grep 201 | head -n ${BACKUP_DAYS}; ls | grep 201) | sort | uniq -u )
echo "Cleaning up... $to_remove"
echo "$to_remove" | xargs -r rm -rf
echo "Backup Finished"
exit 0
You can then link the script into daily cron. This is explained in a very straightforward fashion (but with hot backups via xtrabackup, not mysqldump) in this blog entry: http://codeispoetry.me/index.php/mariadb-daily-hot-backups-with-xtrabackup/
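For example, a minimal sketch of wiring it into daily cron, assuming the script above is saved as /usr/local/bin/mysql-backup.sh (the path and link name are assumptions):
sudo chmod +x /usr/local/bin/mysql-backup.sh
sudo ln -s /usr/local/bin/mysql-backup.sh /etc/cron.daily/mysql-backup
On Debian/Ubuntu, run-parts skips files in /etc/cron.daily whose names contain a dot, which is why the link name drops the .sh extension.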

I was making this too complicated. Instead of working with dates at all, I'm now just checking the age of the backup files with:
find /mnt/backups/mariadb/weekly/* -type f -mtime +8 -exec rm -f {} \;
So the entire script becomes:
DATABASES=$(echo 'show databases;' | mysql -u backup --password='foo' | grep -v ^Database$)
LIST=$(echo $DATABASES | sed -e "s/\s/\n/g")
DATE=$(date +%Y%m%d)
SUNDAY=$(date +%a)
for i in $LIST; do
    if [[ $i != "mysql" ]]; then
        /bin/nice mysqldump --single-transaction $i > /mnt/backups/mariadb/daily/$i.$DATE.sql
        find /mnt/backups/mariadb/daily/* -type f -mtime +1 -exec rm -f {} \;
        if [[ $SUNDAY == "Sun" ]]; then
            cp /mnt/backups/mariadb/daily/$i.$DATE.sql /mnt/backups/mariadb/weekly/$i.$DATE.sql
            find /mnt/backups/mariadb/weekly/* -type f -mtime +8 -exec rm -f {} \;
        fi
    fi
    chown -R backup.backup /mnt/backups
done
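For completeness, a sketch of the nightly crontab entry, assuming the script is saved as /usr/local/bin/db-backup.sh, made executable, and run at 11:00pm as described above (the path and name are assumptions):
0 23 * * * /usr/local/bin/db-backup.sh >/dev/null 2>&1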

Related

Shell script to take up daily DB backup (for something like Redmine, etc)

I need a script solution to take a daily backup of a DB and send a mail about it.
The code below makes a backup of a MySQL DB and of the files involved (as in most use cases):
#!/bin/bash
cat /dev/null > /root/Body.text
DATE=`date +%b/%d/%Y`
BACKUP_DIR="/home/user/Redmine/tmp"
DB_NAME="xxx"
DB_USER="xxx"
DB_PASSWORD="XXXXXX"
Redmine_Root="/home/webuser/apps/redmine"
echo "Redmine Backup Directory is $BACKUP_DIR" >> /root/Body.text
rm -rf $BACKUP_DIR/files* $BACKUP_DIR/Redmine*
# -- MySQL
echo "`date` at Redmine's MySQL db Backup on" >> /root/Body.text
mysqldump -u $DB_USER -p$DB_PASSWORD $DB_NAME > $BACKUP_DIR/Redmine.sql
gzip $BACKUP_DIR/Redmine.sql
rm -f $BACKUP_DIR/Redmine.sql
#........Redmine
echo "`date` at REDMINE_Files Backup " >> /root/Body.text
#Create back up for Files
echo "Backing up Redmine attachments..."
rsync -a $Redmine_Root/files/ $BACKUP_DIR/files/
echo "Packing into single archive redmine files"
tar -zcvf $BACKUP_DIR/redminefiles.tar $BACKUP_DIR/files
rm -rf $BACKUP_DIR/files/
#Create a single tar file
echo "Create Backup of Single File" >> /root/Body.text
cd /home/user/Redmine
tar -cvf Redmine`date +%b%d%y`.tar tmp/
tar -cvf /home/user/Redmine`date +%d%b%y`.tar /home/user/Redmine/tmp
rm -f $BACKUP_DIR/redminefiles* $BACKUP_DIR/Redmine*
rm -f $BACKUP_DIR/Redmine.sql.gz
#Cleaning Up
echo "Delete five days olderbackup" >> /root/Body.text
find /home/maitreya/Redmine/* -mtime +5 -exec rm -rf {} \;
#Sending Report
/usr/bin/mutt -e "set realname=\"Redmine-Backup\"" -s "Redmine Backup on $DATE" xy@yz.com -c yz@xy.com < /root/Body.text

How to log mysql queries of specific database - Linux

I have been looking at this post
How can I log "show processlist" when there are more than n queries?
It works fine when I run this command:
mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l
The problem is that it overwrites the file.
I also want to run it as a cron job.
I have added this command to the /var/spool/cron/root:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +%F-%H-%M`.log | wc -l) -lt 51 ] && rm plist-`date +%F-%H-%M`.log
but it is not working, or maybe it is saving the log file somewhere outside the root folder.
So my question is: how to temporarily log all queries from specific database and specific table and save the whole queries in 1 file?
Note: it is not the slow/long query log I am looking for, just a temporary way to see which queries are running against a particular database.
The solution is:
watch -n 1 "mysqladmin -u root -pXXXXX processlist | grep tablename" | tee -a /root/plist.log
The % character has special meaning in crontab commands, so you need to escape it. So you need to do:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +\%F-\%H-\%M`.log | wc -l) -lt 51 ] && rm plist-`date +\%F-\%H-\%M`.log
If you want to use your original command, but not overwrite the file each time, you can use the -a option of tee to append:
mysql -uroot -e "show full processlist" | tee -a plist-$date.log | wc -l
To run the command every second for a minute, write a shell script:
#!/bin/bash
for i in {1..60}; do
[ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
sleep 1
done
You can then run this script from cron every minute:
* * * * * /path/to/script
Although if you want to run something continuously like this, cron may not be the best way. You could use /etc/inittab to run the script when the system boots, and it will automatically restart it if it dies for some reason. Then you would just use an infinite loop:
#!/bin/bash
while :; do
[ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
sleep 1
done
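On a sysvinit system, the /etc/inittab entry could look like the sketch below (the id and the script path are assumptions); after editing the file, reload it with telinit q:
pl:2345:respawn:/usr/local/bin/plist-monitor.sh
The respawn action restarts the script automatically if it ever exits.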

Outputting data from 5gb file with awk

I have a csv file with approximately 300 columns.
I'm using awk to create a subset of this file where the 24th column is "CA".
Here's what I am trying:
awk -F "," '{if($24~/CA/)print}' myfile.csv > subset.csv
After approximately 10 minutes the subset file had grown to 400 MB, and I killed it because this is too slow.
How can I speed this up? Perhaps a combination of sed / awk?
tl;dr:
awk implementations can significantly differ in performance.
In this particular case, see if using gawk (GNU awk) helps.
Ubuntu comes with mawk as the default awk, which is usually considered faster than gawk. However, in the case at hand it seems that gawk is significantly faster (possibly related to line length?), at least based on the following simplified tests, which I ran in a VM on Ubuntu 14.04 on a 1-GB file with 300 two-character columns.
The tests also include an equivalent sed and grep command.
Hopefully they provide at least a sense of comparative performance.
Test script:
#!/bin/bash
# Pass in test file
f=$1
# Suppress stdout
exec 1>/dev/null
awkProg='$24=="CA"'
echo $'\n\n\t'" $(mawk -W version 2>&1 | head -1)" >&2
time mawk -F, "$awkProg" "$f"
echo $'\n\n\t'" $(gawk --version 2>&1 | head -1)" >&2
time gawk -F, "$awkProg" "$f"
sedProg='/^([^,]+,){23}CA,/p'
echo $'\n\n\t'" $(sed --version 2>&1 | head -1)" >&2
time sed -En "$sedProg" "$f"
grepProg='^([^,]+,){23}CA,'
echo $'\n\n\t'" $(grep --version 2>&1 | head -1)" >&2
time grep -E "$grepProg" "$f"
Results:
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
    real    0m11.341s
    user    0m4.780s
    sys     0m6.464s
GNU Awk 4.0.1
    real    0m3.560s
    user    0m0.788s
    sys     0m2.716s
sed (GNU sed) 4.2.2
    real    0m9.579s
    user    0m4.016s
    sys     0m5.504s
grep (GNU grep) 2.16
    real    0m50.009s
    user    0m42.040s
    sys     0m7.896s
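For reference, a test file of roughly that shape can be generated with a short awk sketch (the row count, the field values, and the 50/50 mix of "CA" are assumptions; adjust to taste):
awk 'BEGIN {
  srand()
  # ~1.2M rows of ~900 bytes each gives roughly 1 GB
  for (r = 0; r < 1200000; r++)
    for (c = 1; c <= 300; c++)
      printf "%s%s", (rand() < 0.5 ? "CA" : "NY"), (c < 300 ? "," : "\n")
}' > test.csv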

Linux shell script for database backup

I tried many scripts for database backup but I couldn't get any of them to work. I want to back up my database every hour.
I added files to the "/etc/cron.hourly/" folder and changed their mode to 755, but they didn't run.
At least I can write my pseudo code.
I would be happy if you could write a script for this operation and tell me what more I should do after adding the script file to the /etc/cron.hourly/ folder.
Get current date and create a variable, date=date(d_m_y_H_M_S)
Create a variable for the file name, filename="$date".gz
Get the dump of my database, like this: mysqldump --user=my_user --password=my_pass --default-character-set=utf8 my_database | gzip > "/var/www/vhosts/system/example.com/httpdocs/backups/${filename}"
Delete all files in the folder /var/www/vhosts/system/example.com/httpdocs/backups/ that are older than 8 days
To the file "/var/www/vhosts/system/example.com/httpdocs/backup_log.txt", this text will be written: Backup is created at ${date}
Change the file owners (chown) from root to "my_user". Because I want to open the backup and log files from the "my_user" FTP account.
I don't want an email after each cron. >/dev/null 2>&1 will be added.
After hours and hours of work, I created the solution below. I'm copying and pasting it so other people can benefit.
First create a script file and give this file executable permission.
# cd /etc/cron.daily/
# touch /etc/cron.daily/dbbackup-daily.sh
# chmod 755 /etc/cron.daily/dbbackup-daily.sh
# vi /etc/cron.daily/dbbackup-daily.sh
Then copy the following lines into the file with Shift+Ins.
#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder="/var/www/vhosts/example.com/httpdocs/backups"
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user=mydbuser --password=mypass --default-character-set=utf8 mydatabase | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
chown myuser "$fullpathbackupfile"
chown myuser "$logfile"
echo "file permission changed" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +8 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0
Edit:
If you use InnoDB and the backup takes too much time, you can add the --single-transaction argument to prevent locking. So the mysqldump line will look like this:
mysqldump --user=mydbuser --password=mypass --default-character-set=utf8 \
    --single-transaction mydatabase | gzip > "$fullpathbackupfile"
Create a script similar to this:
#!/bin/sh -e
location=~/`date +%Y%m%d_%H%M%S`.db
mysqldump -u root --password=<your password> database_name > $location
gzip $location
Then you can edit the crontab of the user that the script is going to run as:
$> crontab -e
And append the entry
01 * * * * ~/script_path.sh
This will make it run on the first minute of every hour every day.
Then you just have to add in your rotation and other functionality and you are good to go (see the sketch below).
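For the rotation part, a minimal sketch that deletes dumps older than a week, assuming the backups land in the home directory as in the script above (the 7-day retention is an assumption):
find ~/ -maxdepth 1 -name '*.db.gz' -mtime +7 -exec rm -f {} \;
This line can go at the end of the same script so the cleanup runs after each hourly dump.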
I had the same issue, but I managed to write a script. I hope this helps.
#!/bin/bash
# Database credentials
user="username"
password="password"
host="localhost"
db_name="dbname"
# Other options
backup_path="/DB/DB_Backup"
date=$(date +"%d-%b-%Y")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user=$user --password=$password --host=$host $db_name >$backup_path/$db_name-$date.sql
# Delete files older than 30 days
find $backup_path/* -mtime +30 -exec rm {} \;
#DB backup log
echo -e "$(date +'%d-%b-%y %r '):ALERT:Database has been Backuped" >>/var/log/DB_Backup.log
#!/bin/sh
#Procedures = For DB Backup
#Scheduled at : Every Day 22:00
v_path=/etc/database_jobs/db_backup
logfile_path=/etc/database_jobs
v_file_name=DB_Production
v_cnt=0
MAILTO="abc#as.in"
touch "$logfile_path/kaka_db_log.log"
#DB Backup
mysqldump -uusername -ppassword -h111.111.111.111 ddbname > $v_path/$v_file_name`date +%Y-%m-%d`.sql
if [ "$?" -eq 0 ]
then
v_cnt=`expr $v_cnt + 1`
mail -s "DB Backup has been done successfully" $MAILTO < $logfile_path/db_log.log
else
mail -s "Alert : kaka DB Backup has been failed" $MAILTO < $logfile_path/db_log.log
exit
fi
Here is my mysql backup script for ubuntu in case it helps someone.
#!/bin/bash
# MySQL backup script
start_time="$(date -u +%s)"
now(){
date +%d-%B-%Y_%H-%M-%S
}
ip(){
/sbin/ifconfig eth0 2>/dev/null|awk '/inet addr:/ {print $2}'|sed 's/addr://'
}
filename="`now`".zip
backupfolder=/path/to/any/folder
fullpathbackupfile=$backupfolder/$filename
db_user=xxx
db_password=xxx
db_name=xxx
printf "\n\n"
printf "******************************\n"
printf "Started Automatic Mysql Backup\n"
printf "******************************\n"
printf "TIME: `now`\n"
printf "IP_ADDRESS: `ip` \n"
printf "DB_SERVER_NAME: DB-SERVER-1\n"
printf "%sBACKUP_FILE_PATH $fullpathbackupfile\n"
printf "Starting Mysql Dump \n"
mysqldump -u $db_user -p$db_password $db_name| pv | zip > $fullpathbackupfile
end_time="$(date -u +%s)"
elapsed=$(($end_time-$start_time))
printf "%sMysql Dump Completed In $elapsed seconds\n"
printf "******************************\n"
PS: Remember to install pv and zip on your Ubuntu machine:
sudo apt install pv
sudo apt install zip
Here is how I set up the crontab (using crontab -e on Ubuntu) to run it every 6 hours:
0 */6 * * * sh /path/to/shfile/backup-mysql.sh >> /path/to/logs/backup-mysql.log 2>&1
The cool thing is that it creates a zip file, which is easy to unzip from anywhere.
Now, copy the following content into a script file (e.g. /backup/mysql-backup.sh) and save it on your Linux system.
#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin
TODAY=`date +"%d%b%Y"`
DB_BACKUP_PATH='/backup/dbbackup'
MYSQL_HOST='localhost'
MYSQL_PORT='3306'
MYSQL_USER='root'
MYSQL_PASSWORD='mysecret'
DATABASE_NAME='mydb'
BACKUP_RETAIN_DAYS=30
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
echo "Backup started for database - ${DATABASE_NAME}"
mysqldump -h ${MYSQL_HOST} \
-P ${MYSQL_PORT} \
-u ${MYSQL_USER} \
-p${MYSQL_PASSWORD} \
${DATABASE_NAME} | gzip > ${DB_BACKUP_PATH}/${TODAY}/${DATABASE_NAME}-${TODAY}.sql.gz
if [ $? -eq 0 ]; then
echo "Database backup successfully completed"
else
echo "Error found during backup"
exit 1
fi
##### Remove backups older than {BACKUP_RETAIN_DAYS} days #####
DBDELDATE=`date +"%d%b%Y" --date="${BACKUP_RETAIN_DAYS} days ago"`
if [ ! -z ${DB_BACKUP_PATH} ]; then
cd ${DB_BACKUP_PATH}
if [ ! -z ${DBDELDATE} ] && [ -d ${DBDELDATE} ]; then
rm -rf ${DBDELDATE}
fi
fi
After creating or downloading the script, make sure to set the execute permission so it runs properly:
$ chmod +x /backup/mysql-backup.sh
Edit the crontab on your system with the crontab -e command. Add the following entry to run the backup at 3 o'clock every morning:
0 3 * * * /backup/mysql-backup.sh
Add the following code to your shell script file. Replace dbname, dbuser and dbpass with your database name, username and password respectively.
#!/bin/sh
echo "starting db backup"
day="$(date +"%m-%d-%y")"
db_backup="mydb_${day}.sql"
sudo mysqldump -udbuser -pdbpass --no-tablespaces dbname >/home/${db_backup}
echo " backup complete"
If you want to compress the backup, just replace those lines with the following:
db_backup="mydb_${day}.gz"
sudo mysqldump -udbuser -pdbpass --no-tablespaces dbname | gzip -c >/home/${db_backup}
If you want to delete files older than 14 days in some folders, use the following code:
#!/bin/bash
fpath1=/home/ubuntu/mysql/*
fpath2=/home/ubuntu/postgsql/*
file_path=($fpath1 $fpath2)
for i in "${file_path[@]}";
do
find $i -type d -mtime +13 -exec rm -Rf {} +
done
#!/bin/bash
# Add your backup dir location, password, mysql location and mysqldump location
DATE=$(date +%d-%m-%Y)
BACKUP_DIR="/var/www/back"
MYSQL_USER="root"
MYSQL_PASSWORD=""
MYSQL='/usr/bin/mysql'
MYSQLDUMP='/usr/bin/mysqldump'
DB='demo'
#to empty the backup directory and delete all previous backups
rm -r $BACKUP_DIR/*
$MYSQLDUMP -u $MYSQL_USER -p"$MYSQL_PASSWORD" $DB | gzip -9 > $BACKUP_DIR/$DB.$DATE.sql.gz
#changing permissions of directory
chmod -R 777 $BACKUP_DIR
You might consider this Open Source tool, matiri, https://github.com/AAFC-MBB/matiri which is a concurrent mysql backup script with metadata in Sqlite3. Features:
Multi-Server: Multiple MySQL servers are supported whether they are co-located on the same or separate physical servers.
Parallel: Each database on the server to be backed up is done separately, in parallel (concurrency settable: default: 3)
Compressed: Each database backup compressed
Checksummed: SHA256 of each compressed backup file stored and the archive of all files
Archived: All database backups tar'ed together into single file
Recorded: Backup information stored in Sqlite3 database
Full disclosure: original matiri author.
As a DBA, you should schedule backups of your MySQL databases so that, if anything goes wrong, you can recover them from a recent backup.
Here we use mysqldump to take the backup of the MySQL databases, and the same can be put into a script.
[orahow@oradbdb DB_Backup]$ cat .backup_script.sh
#!/bin/bash
# Database credentials
user="root"
password="1Loginxx"
db_name="orahowdb"
v_cnt=0
logfile_path=/DB_Backup
touch "$logfile_path/orahowdb_backup.log"
# Other options
backup_path="/DB_Backup"
date=$(date +"%d-%b-%Y-%H-%M-%p")
# Set default file permissions
Continue Reading ....
MySQL Backup
I have prepared a shell script to create a backup of a MySQL database.
You can use it so that you have backups of your database(s).
#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin
TODAY=`date +"%d%b%Y_%I:%M:%S%p"`
################################################################
################## Update below values ########################
DB_BACKUP_PATH='/backup/dbbackup'
MYSQL_HOST='localhost'
MYSQL_PORT='3306'
MYSQL_USER='auriga'
MYSQL_PASSWORD='auriga#123'
DATABASE_NAME=( Project_O2 o2)
BACKUP_RETAIN_DAYS=30 ## Number of days to keep the local backup copy; enable the cleanup code at the end of the script
#################################################################
{ mkdir -p ${DB_BACKUP_PATH}/${TODAY}
echo "
${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
} || {
echo "Can not make Directry"
echo "Possibly Path is wrong"
}
{ if ! mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'exit'; then
echo 'Failed! You may have Incorrect PASSWORD/USER ' >> ${DB_BACKUP_PATH}/Backup-Report.txt
exit 1
fi
for DB in "${DATABASE_NAME[#]}"; do
if ! mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e "use "${DB}; then
echo "Failed! Database ${DB} Not Found on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
else
# echo "Backup started for database - ${DB}"
# mysqldump -h localhost -P 3306 -u auriga -pauriga#123 Project_O2 # use gzip..
mysqldump -h ${MYSQL_HOST} -P ${MYSQL_PORT} -u ${MYSQL_USER} -p${MYSQL_PASSWORD} \
--databases ${DB} | gzip > ${DB_BACKUP_PATH}/${TODAY}/${DB}-${TODAY}.sql.gz
if [ $? -eq 0 ]; then
touch ${DB_BACKUP_PATH}/Backup-Report.txt
echo "successfully backed-up of ${DB} on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# echo "Database backup successfully completed"
else
touch ${DB_BACKUP_PATH}/Backup-Report.txt
echo "Failed to backup of ${DB} on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# echo "Error found during backup"
exit 1
fi
fi
done
} || {
echo "Failed during backup"
echo "Failed to backup on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# ./myshellsc.sh 2> ${DB_BACKUP_PATH}/Backup-Report.txt
}
##### Remove backups older than {BACKUP_RETAIN_DAYS} days #####
# DBDELDATE=`date +"%d%b%Y" --date="${BACKUP_RETAIN_DAYS} days ago"`
# if [ ! -z ${DB_BACKUP_PATH} ]; then
# cd ${DB_BACKUP_PATH}
# if [ ! -z ${DBDELDATE} ] && [ -d ${DBDELDATE} ]; then
# rm -rf ${DBDELDATE}
# fi
# fi
### End of script ####
In the script we just need to give our username, password, and database name (or names, if more than one), and also the port number if it is different.
To run the script, use the command:
sudo ./myshellsc.sh
I also suggest that if you want to see the result in a file (whether the backup failed or succeeded), you use the command below:
sudo ./myshellsc.sh 2>> Backup-Report.log
Thank You.

Load a series of sql files through capistrano

I'm having an issue trying to load a series of sql files through our capistrano recipe for our testing environment.
Here's what I came up with:
desc "Empty database and play sql scripts for fresh db structure"
task :mysqlrestore, :roles => :app do
run "find #{current_release}/migration/ -name '*.sql' -print0 | xargs -0 -I file mysql -hlocalhost -u#{db_username} -p#{db_password} #{db_database} < file"
My Capistrano console outputs:
failed: "sh -c 'find /home/toolbox/www/staging/releases/20120119111819/migration/ -name '\''*.sql'\'' -print0 | xargs -0 -I file mysql -hlocalhost -uuser -ppassword DBNAME < file'" on staging.env.com
Where could I be wrong?
I was able to execute your command from bash just by removing the single quotes from your run command, i.e.:
run "find #{current_release}/migration/ -name *.sql -print0 | xargs -0 -I file mysql -hlocalhost -u#{db_username} -p#{db_password} #{db_database} < file"