I have tried many scripts for database backup, but I couldn't get any of them working. I want to back up my database every hour.
I added files to the "/etc/cron.hourly/" folder and chmod'ed them to 755, but they didn't run.
So instead, here is my pseudocode.
I would be happy if you could write a script for this operation and tell me what else I should do.
After this script file is added to the /etc/cron.hourly/ folder, it should:
Get the current date and create a variable: date=date(d_m_y_H_M_S)
Create a variable for the file name: filename="$date".gz
Get the dump of my database like this: mysqldump --user=my_user --password=my_pass --default-character-set=utf8 my_database | gzip > "/var/www/vhosts/system/example.com/httpdocs/backups/$filename"
Delete all files in the folder /var/www/vhosts/system/example.com/httpdocs/backups/ that are older than 8 days
Write this text to the file "/var/www/vhosts/system/example.com/httpdocs/backup_log.txt": Backup is created at $date
Change the file owner (chown) from root to "my_user", because I want to open the backup and log files from the "my_user" FTP account.
I don't want an email after each cron run, so >/dev/null 2>&1 will be added.
After hours and hours of work, I created the solution below. I am pasting it here so other people can benefit from it.
First, create a script file and give it executable permission:
# cd /etc/cron.daily/
# touch /etc/cron.daily/dbbackup-daily.sh
# chmod 755 /etc/cron.daily/dbbackup-daily.sh
# vi /etc/cron.daily/dbbackup-daily.sh
Then copy the following lines into the file (e.g. with Shift+Ins in vi):
#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder="/var/www/vhosts/example.com/httpdocs/backups"
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user=mydbuser --password=mypass --default-character-set=utf8 mydatabase | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
chown myuser "$fullpathbackupfile"
chown myuser "$logfile"
echo "file permission changed" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +8 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0
Edit:
If you use InnoDB and the backup takes too long, you can add the --single-transaction argument to prevent locking. The mysqldump line then becomes:
mysqldump --user=mydbuser --password=mypass --default-character-set=utf8 \
--single-transaction mydatabase | gzip > "$fullpathbackupfile"
Create a script similar to this:
#!/bin/sh -e
location=~/`date +%Y%m%d_%H%M%S`.db
mysqldump -u root --password=<your password> database_name > $location
gzip $location
Then you can edit the crontab of the user that the script is going to run as:
$> crontab -e
And append the entry
01 * * * * ~/script_path.sh
This will make it run on the first minute of every hour every day.
Then you just have to add your rotation of old backups and any other functionality (a sketch of such a cleanup follows below) and you are good to go.
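For example, the cleanup could be a single find call appended to the script; the 7-day window and the *.db.gz pattern below are only assumptions matching the naming above:
# hypothetical cleanup: delete gzipped dumps in the home directory older than 7 days
find ~ -maxdepth 1 -name '*.db.gz' -type f -mtime +7 -exec rm -f {} \;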
I had the same issue, but I managed to write a script. Hope this helps.
#!/bin/bash
# Database credentials
user="username"
password="password"
host="localhost"
db_name="dbname"
# Other options
backup_path="/DB/DB_Backup"
date=$(date +"%d-%b-%Y")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user=$user --password=$password --host=$host $db_name >$backup_path/$db_name-$date.sql
# Delete files older than 30 days
find $backup_path/* -mtime +30 -exec rm {} \;
#DB backup log
echo -e "$(date +'%d-%b-%y %r '):ALERT:Database has been Backuped" >>/var/log/DB_Backup.log
#!/bin/sh
#Procedures = For DB Backup
#Scheduled at : Every Day 22:00
v_path=/etc/database_jobs/db_backup
logfile_path=/etc/database_jobs
v_file_name=DB_Production
v_cnt=0
MAILTO="abc#as.in"
touch "$logfile_path/kaka_db_log.log"
#DB Backup
mysqldump -uusername -ppassword -h111.111.111.111 dbname > $v_path/$v_file_name`date +%Y-%m-%d`.sql
if [ "$?" -eq 0 ]
then
v_cnt=`expr $v_cnt + 1`
mail -s "DB Backup has been done successfully" $MAILTO < $logfile_path/db_log.log
else
mail -s "Alert : kaka DB Backup has been failed" $MAILTO < $logfile_path/db_log.log
exit
fi
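To match the "Scheduled at : Every Day 22:00" comment, a crontab entry along these lines could be used (the script path is a placeholder):
# daily at 22:00; the path to the script is a placeholder
0 22 * * * /bin/bash /etc/database_jobs/db_backup_job.sh >/dev/null 2>&1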
Here is my MySQL backup script for Ubuntu, in case it helps someone.
# MySQL backup script
start_time="$(date -u +%s)"
now(){
date +%d-%B-%Y_%H-%M-%S
}
ip(){
/sbin/ifconfig eth0 2>/dev/null|awk '/inet addr:/ {print $2}'|sed 's/addr://'
}
filename="`now`".zip
backupfolder=/path/to/any/folder
fullpathbackupfile=$backupfolder/$filename
db_user=xxx
db_password=xxx
db_name=xxx
printf "\n\n"
printf "******************************\n"
printf "Started Automatic Mysql Backup\n"
printf "******************************\n"
printf "TIME: `now`\n"
printf "IP_ADDRESS: `ip` \n"
printf "DB_SERVER_NAME: DB-SERVER-1\n"
printf "%sBACKUP_FILE_PATH $fullpathbackupfile\n"
printf "Starting Mysql Dump \n"
mysqldump -u $db_user -p$db_password $db_name | pv | zip > "$fullpathbackupfile"
end_time="$(date -u +%s)"
elapsed=$(($end_time-$start_time))
printf "%sMysql Dump Completed In $elapsed seconds\n"
printf "******************************\n"
PS: Remember to install pv and zip on your Ubuntu machine:
sudo apt install pv
sudo apt install zip
Here is how I set the crontab (using crontab -e on Ubuntu) to run it every 6 hours:
0 */6 * * * sh /path/to/shfile/backup-mysql.sh >> /path/to/logs/backup-mysql.log 2>&1
The cool thing is that it creates a zip file, which is easy to unzip from anywhere.
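To restore from one of these archives, streaming it back through unzip should work; the names and credentials below simply reuse the placeholders from the script:
# stream the zipped dump back into MySQL (placeholders from the script above)
unzip -p "$fullpathbackupfile" | mysql -u "$db_user" -p"$db_password" "$db_name"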
Now, copy the following content into a script file (e.g. /backup/mysql-backup.sh) and save it on your Linux system.
#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin
TODAY=`date +"%d%b%Y"`
DB_BACKUP_PATH='/backup/dbbackup'
MYSQL_HOST='localhost'
MYSQL_PORT='3306'
MYSQL_USER='root'
MYSQL_PASSWORD='mysecret'
DATABASE_NAME='mydb'
BACKUP_RETAIN_DAYS=30
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
echo "Backup started for database - ${DATABASE_NAME}"
mysqldump -h ${MYSQL_HOST} \
-P ${MYSQL_PORT} \
-u ${MYSQL_USER} \
-p${MYSQL_PASSWORD} \
${DATABASE_NAME} | gzip > ${DB_BACKUP_PATH}/${TODAY}/${DATABASE_NAME}-${TODAY}.sql.gz
if [ $? -eq 0 ]; then
echo "Database backup successfully completed"
else
echo "Error found during backup"
exit 1
fi
##### Remove backups older than {BACKUP_RETAIN_DAYS} days #####
DBDELDATE=`date +"%d%b%Y" --date="${BACKUP_RETAIN_DAYS} days ago"`
if [ ! -z ${DB_BACKUP_PATH} ]; then
cd ${DB_BACKUP_PATH}
if [ ! -z ${DBDELDATE} ] && [ -d ${DBDELDATE} ]; then
rm -rf ${DBDELDATE}
fi
fi
After creating or downloading the script, make sure to set the execute permission so it runs properly.
$ chmod +x /backup/mysql-backup.sh
Edit the crontab on your system with the crontab -e command. Add the following entry to run the backup at 3 in the morning (note that a user crontab takes no user field):
0 3 * * * /backup/mysql-backup.sh
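If you instead put the entry in the system-wide /etc/crontab (or a file under /etc/cron.d/), the extra user field is required; that variant would look like this:
# system-wide crontab format includes a user column
0 3 * * * root /backup/mysql-backup.sh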
Add the following code to your shell script file. Replace dbname, dbuser and dbpass with your database name, username and password respectively.
#!/bin/sh
echo "starting db backup"
day="$(date +"%m-%d-%y")"
db_backup="mydb_${day}.sql"
sudo mysqldump -udbuser -pdbpass --no-tablespaces dbname >/home/${db_backup}
echo " backup complete"
If you want to compress the backup data, just replace the corresponding lines with the following code.
db_backup="mydb_${day}.gz"
sudo mysqldump -udbuser -pdbpass --no-tablespaces dbname | gzip -c >/home/${db_backup}
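To restore from the compressed dump later, you can stream it back into mysql; this is just a sketch reusing the same placeholder names:
# restore sketch: decompress the dump and feed it to mysql (placeholder credentials)
gunzip < "/home/${db_backup}" | mysql -udbuser -pdbpass dbname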
If you want to delete files older than 14 days in one or more folders, use the following code.
#!/bin/bash
fpath1=/home/ubuntu/mysql/*
fpath2=/home/ubuntu/postgsql/*
file_path=($fpath1 $fpath2)
for i in "${file_path[@]}";
do
find $i -type d -mtime +13 -exec rm -Rf {} +
done
#!/bin/bash
# Add your backup dir location, password, mysql location and mysqldump location
DATE=$(date +%d-%m-%Y)
BACKUP_DIR="/var/www/back"
MYSQL_USER="root"
MYSQL_PASSWORD=""
MYSQL='/usr/bin/mysql'
MYSQLDUMP='/usr/bin/mysqldump'
DB='demo'
#to empty the backup directory and delete all previous backups
rm -r $BACKUP_DIR/*
$MYSQLDUMP -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$DB" | gzip -9 > "$BACKUP_DIR/$DB.$DATE.sql.gz"
#changing permissions of directory
chmod -R 777 $BACKUP_DIR
You might consider this open-source tool, matiri (https://github.com/AAFC-MBB/matiri), which is a concurrent MySQL backup script that records metadata in SQLite3. Features:
Multi-server: multiple MySQL servers are supported, whether they are co-located on the same physical server or on separate ones.
Parallel: each database on a server is backed up separately and in parallel (concurrency is settable; default: 3).
Compressed: each database backup is compressed.
Checksummed: the SHA256 of each compressed backup file, and of the archive of all files, is stored.
Archived: all database backups are tar'ed together into a single file.
Recorded: backup information is stored in an SQLite3 database.
Full disclosure: I am the original matiri author.
As a DBA, you must schedule backups of your MySQL databases so that you can recover them from a recent backup if any issue occurs.
Here we are using mysqldump to take the backup of the MySQL databases, and the same can be put into a script.
[orahow@oradbdb DB_Backup]$ cat .backup_script.sh
#!/bin/bash
# Database credentials
user="root"
password="1Loginxx"
db_name="orahowdb"
v_cnt=0
logfile_path=/DB_Backup
touch "$logfile_path/orahowdb_backup.log"
# Other options
backup_path="/DB_Backup"
date=$(date +"%d-%b-%Y-%H-%M-%p")
# Set default file permissions
Continue Reading ....
MySQL Backup
I have prepared a shell script to create a backup of a MySQL database.
You can use it so that you have backups of your database(s).
#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin
TODAY=`date +"%d%b%Y_%I:%M:%S%p"`
################################################################
################## Update below values ########################
DB_BACKUP_PATH='/backup/dbbackup'
MYSQL_HOST='localhost'
MYSQL_PORT='3306'
MYSQL_USER='auriga'
MYSQL_PASSWORD='auriga#123'
DATABASE_NAME=( Project_O2 o2)
BACKUP_RETAIN_DAYS=30 ## Number of days to keep local backup copies; enable the cleanup code at the end of the script
#################################################################
{ mkdir -p ${DB_BACKUP_PATH}/${TODAY}
echo "
${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
} || {
echo "Can not make Directry"
echo "Possibly Path is wrong"
}
{ if ! mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'exit'; then
echo 'Failed! You may have Incorrect PASSWORD/USER ' >> ${DB_BACKUP_PATH}/Backup-Report.txt
exit 1
fi
for DB in "${DATABASE_NAME[#]}"; do
if ! mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e "use "${DB}; then
echo "Failed! Database ${DB} Not Found on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
else
# echo "Backup started for database - ${DB}"
# mysqldump -h localhost -P 3306 -u auriga -pauriga#123 Project_O2 # use gzip..
mysqldump -h ${MYSQL_HOST} -P ${MYSQL_PORT} -u ${MYSQL_USER} -p${MYSQL_PASSWORD} \
--databases ${DB} | gzip > ${DB_BACKUP_PATH}/${TODAY}/${DB}-${TODAY}.sql.gz
if [ $? -eq 0 ]; then
touch ${DB_BACKUP_PATH}/Backup-Report.txt
echo "successfully backed-up of ${DB} on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# echo "Database backup successfully completed"
else
touch ${DB_BACKUP_PATH}/Backup-Report.txt
echo "Failed to backup of ${DB} on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# echo "Error found during backup"
exit 1
fi
fi
done
} || {
echo "Failed during backup"
echo "Failed to backup on ${TODAY}" >> ${DB_BACKUP_PATH}/Backup-Report.txt
# ./myshellsc.sh 2> ${DB_BACKUP_PATH}/Backup-Report.txt
}
##### Remove backups older than {BACKUP_RETAIN_DAYS} days #####
# DBDELDATE=`date +"%d%b%Y" --date="${BACKUP_RETAIN_DAYS} days ago"`
# if [ ! -z ${DB_BACKUP_PATH} ]; then
# cd ${DB_BACKUP_PATH}
# if [ ! -z ${DBDELDATE} ] && [ -d ${DBDELDATE} ]; then
# rm -rf ${DBDELDATE}
# fi
# fi
### End of script ####
In the script we just need to give our username, password, and the name of the database (or databases, if more than one), plus the port number if it is different.
To run the script, use:
sudo ./myshellsc.sh
If you also want to see the result in a file (whether the backup failed or succeeded), use the command as below:
sudo ./myshellsc.sh 2>> Backup-Report.log
Thank You.
Related
I have a shell script that I got from trante's answer in:
Linux shell script for database backup
The script runs correctly from the command line and creates the MySQL backup.
However, when it runs from a cron job, the script executes and creates the mysqldump file, but the file is empty.
I have tried setting the PATH environment variable in the script, but nothing I have tried works.
Regards
EDIT:
Adding script
#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder="/mybackupfolder"
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user=myuser --password=mypassword --default-character-set=utf8 mydatabase | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
chown myuser "$fullpathbackupfile"
chown myuser "$logfile"
echo "file permission changed" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +8 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0
Finally I found the solution.
You have to put the full path to mysqldump in the following line:
mysqldump --user=myuser --password=mypassword..
Which in my case is:
/opt/bitnami/mysql/bin/mysqldump --user=myuser --password=mypassword..
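If you are unsure where mysqldump lives on your system, you can find out from an interactive shell, or export a PATH at the top of the script instead of hard-coding the binary; the Bitnami path below is just this particular setup, not a general rule:
# find the full path from an interactive shell
command -v mysqldump
# or set PATH near the top of the backup script so cron can find the binaries
export PATH=/opt/bitnami/mysql/bin:/usr/local/bin:/usr/bin:/bin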
I needed a script solution to take a daily backup of a DB and send a mail about it.
The code below makes a backup of a MySQL DB and of the files involved (as in most use cases).
#!/bin/bash
cat /dev/null > /root/Body.text
DATE=`date +%b/%d/%Y`
BACKUP_DIR="/home/user/Redmine/tmp"
DB_NAME="xxx"
DB_USER="xxx"
DB_PASSWORD="XXXXXX"
Redmine_Root="/home/webuser/apps/redmine"
echo "Redmine Backup Directory is $BACKUP_DIR" >> /root/Body.text
rm -rf $BACKUP_DIR/files* $BACKUP_DIR/Redmine*
# -- MySQL
echo "`date` at Redmine's MySQL db Backup on" >> /root/Body.text
mysqldump -u $DB_USER -p$DB_PASSWORD $DB_NAME > $BACKUP_DIR/Redmine.sql
gzip $BACKUP_DIR/Redmine.sql
rm -f $BACKUP_DIR/Redmine.sql
#........Redmine
echo "`date` at REDMINE_Files Backup " >> /root/Body.text
#Create back up for Files
echo "Backing up Redmine attachments..."
rsync -a $Redmine_Root/files/ $BACKUP_DIR/files/
echo "Packing into single archive redmine files"
tar -zcvf $BACKUP_DIR/redminefiles.tar $BACKUP_DIR/files
rm -rf $BACKUP_DIR/files/
#Create a single tar file
echo "Create Backup of Single File" >> /root/Body.text
cd /home/user/Redmine
tar -cvf Redmine`date +%b%d%y`.tar tmp/
tar -cvf /home/user/Redmine`date +%d%b%y`.tar /home/user/Redmine/tmp
rm -f $BACKUP_DIR/redminefiles* $BACKUP_DIR/Redmine*
rm -f $BACKUP_DIR/Redmine.sql.gz
#Cleaning Up
echo "Delete five days olderbackup" >> /root/Body.text
find /home/maitreya/Redmine/* -mtime +5 -exec rm -rf {} \;
#Sending Report
/usr/bin/mutt -e "set realname=\"Redmine-Backup\"" -s "Redmine Backup on $DATE" xy@yz.com -c yz@xy.com < /root/Body.text
I want to make a crontab entry that will optimize the MySQL database, so I am writing a script that checks whether the entry is already present and creates it otherwise.
The problem I'm facing is that when I check for the entry, it returns many unwanted entries; specifically, all the directories and files present in the directory where I am executing the script show up in the output.
My overall script looks like:
check=`crontab -l | grep -i "ABC"`
if [ -z "$check" ];
then
#echo " Adding opt to crontab "
crontab -l > ./tmpfile
echo "0 * * * * /bin/bash /usr/bin/mysqlcheck --all-databases -o --user=<user> --password=<pwd> 2>&1 >> ./tmpfile
crontab ./tmpfile
rm tmpfile
echo "ABC has been successfully added to crontab"
exit 1
else
echo "ABC already exists in crontab "
exit 1
fi
I've written a bash script, initiated by cron, that backs up all databases on a particular machine nightly and weekly. The script correctly removes old databases, except when there has been a change of month.
As an example, let's say it's November 2nd. The script runs at 11:00 pm and correctly removes the backup made on November 1st. But come December 1st, the script gets confused and does not correctly remove the backup made on November 30th.
How can I fix this script to correctly remove the old backups in this case?
DATABASES=$(echo 'show databases;' | mysql -u backup --password='(password)' | grep -v ^Database$)
LIST=$(echo $DATABASES | sed -e "s/\s/\n/g")
DATE=$(date +%Y%m%d)
DAYOLD=$(($DATE-1))
SUNDAY=$(date +%a)
WEEKOLD=$(($DATE-7))
for i in $LIST; do
if [[ $i != "mysql" ]]; then
mysqldump --single-transaction $i > /mnt/backups/mariadb/daily/$i.$DATE.sql
if [ -f /mnt/backups/mariadb/daily/$i.$DAYOLD.sql ]; then
rm -f /mnt/backups/mariadb/daily/$i.$DAYOLD.sql
fi
if [[ $SUNDAY == "Sun" ]]; then
cp /mnt/backups/mariadb/daily/$i.$DATE.sql /mnt/backups/mariadb/weekly/$i.$DATE.sql
rm -f /mnt/backups/mariadb/weekly/$i.$WEEKOLD.sql
fi
fi
done
If you know the number of backups performed in a specific range of time (say you know that exactly 30 backups were made from Nov 2nd until Dec 2nd, and you want to keep only those latest 30 and erase everything older), just work with the number of backups. It's super simple to do, and you don't have to deal with dates, which is pretty complex in bash:
$ (ls -t|head -n 30;ls)|grep -v ^Database|sort|uniq -u|xargs rm -rf
You can then easily automate this by removing the older backups each day, so you always keep only the fixed number of backups you want:
#! /bin/bash
# Create new full backup
BACKUP_DIR="/path-to-backups/"
BACKUP_DAYS=1
# Prepare backup
cd ${BACKUP_DIR}
latest=`ls -rt | grep 201 | head -1`
# Change latest reference
ln -sf ${BACKUP_DIR}${latest} latest
# Cleanup older than one week (n days)
to_remove=`(ls -t | grep 201 | head -n 3;ls)|sort|uniq -u`
echo "Cleaning up... $to_remove"
(ls -t|head -n ${BACKUP_DAYS};ls)|sort|uniq -u|xargs rm -rf
echo "Backup Finished"
exit 0
Then you can link it to a daily cron. This blog entry explains how to do this in a very straightforward fashion (but with hot backups via xtrabackup, not mysqldump): http://codeispoetry.me/index.php/mariadb-daily-hot-backups-with-xtrabackup/
I was making this too complicated. Instead of using the date at all, I just search for the age of the backup file with:
find /mnt/backups/mariadb/weekly/* -type f -mtime +8 -exec rm -f {} \;
So the entire script becomes:
DATABASES=$(echo 'show databases;' | mysql -u backup --password='foo' | grep -v ^Database$)
LIST=$(echo $DATABASES | sed -e "s/\s/\n/g")
DATE=$(date +%Y%m%d)
SUNDAY=$(date +%a)
for i in $LIST; do
if [[ $i != "mysql" ]]; then
/bin/nice mysqldump --single-transaction $i > /mnt/backups/mariadb/daily/$i.$DATE.sql
find /mnt/backups/mariadb/daily/* -type f -mtime +1 -exec rm -f {} \;
if [[ $SUNDAY == "Sun" ]]; then
cp /mnt/backups/mariadb/daily/$i.$DATE.sql /mnt/backups/mariadb/weekly/$i.$DATE.sql
find /mnt/backups/mariadb/weekly/* -type f -mtime +8 -exec rm -f {} \;
fi
fi
chown -R backup.backup /mnt/backups
done
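For completeness, since the script is started from cron at 11:00 pm, a matching crontab entry could look like this (the script path and log location are placeholders):
# nightly at 23:00; paths are placeholders
0 23 * * * /usr/local/bin/mariadb-backup.sh >> /var/log/mariadb-backup.log 2>&1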
I have the following shell script, which writes the output of several find commands (one per volume) to a file and then loads that file into a MySQL database:
# find all the paths and print them to a file
sudo find $FULFILLMENT/ > $FILE
sudo find $ARCH1/ >> $FILE
sudo find $ARCH2/ >> $FILE
sudo find $MASTERING/ >> $FILE
# load the file into the mysql database, `files`, table `path`
/usr/local/bin/mysql -u root files -e "TRUNCATE path"
/usr/local/bin/mysql -u root files -e "LOAD DATA INFILE '/tmp/files.txt' INTO TABLE path"
TRUNCATE is used to delete all the old entries before adding the new ones. However, if any of the find commands fails (for example, if a volume isn't accessible), I want it to skip the two mysql commands. How would I modify the above script to do this?
This will execute your two mysql commands only if all the find commands succeed:
# find all the paths and print them to a file
sudo find $FULFILLMENT/ > $FILE &&
sudo find $ARCH1/ >> $FILE &&
sudo find $ARCH2/ >> $FILE &&
sudo find $MASTERING/ >> $FILE &&
{
# load the file into the mysql database, `files`, table `path`
/usr/local/bin/mysql -u root files -e "TRUNCATE path"
/usr/local/bin/mysql -u root files -e "LOAD DATA INFILE '/tmp/files.txt' INTO TABLE path"
}
The && operator causes the command on the RHS to run only if the command on the LHS succeeds. The { ... } groups the two mysql commands into one compound command, so either both run (if the last find succeeds) or neither does (if the last find does not succeed).
You can use this if there is more to your script that should run whether or not the finds succeed and the mysql commands run.
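If the && chain looks unusual, the same guard can also be written with an explicit if; this is just a sketch of the equivalent form:
# the same guard written with an explicit if, for comparison
if sudo find $FULFILLMENT/ > $FILE &&
   sudo find $ARCH1/ >> $FILE &&
   sudo find $ARCH2/ >> $FILE &&
   sudo find $MASTERING/ >> $FILE
then
    /usr/local/bin/mysql -u root files -e "TRUNCATE path"
    /usr/local/bin/mysql -u root files -e "LOAD DATA INFILE '/tmp/files.txt' INTO TABLE path"
fi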
Something like:
sudo find $FULFILLMENT > $FILE || exit 1
You can use #!/bin/bash -e or add a line with set -e above the find commands. That way, Bash will automatically abort as soon as a command fails. It's good practice to always do this by default, since you rarely want to continue when the previous command fails:
set -e
# find all the paths and print them to a file
sudo find $FULFILLMENT/ > $FILE
sudo find $ARCH1/ >> $FILE
sudo find $ARCH2/ >> $FILE
sudo find $MASTERING/ >> $FILE
# load the file into the mysql database, `files`, table `path`
/usr/local/bin/mysql -u root files -e "TRUNCATE path"
/usr/local/bin/mysql -u root files -e "LOAD DATA INFILE '/tmp/files.txt' INTO TABLE path"