I have some MySQL databases that I back up nightly from cron, just your standard mysqldump command. I'd like to feed only the errors from mysqldump and/or aws to a script that will then send the error into Slack. I'm struggling to figure out how to do that in the middle of the command, though. I want to send stderr to slacktee, but stdout to gzip and on to aws s3 cp. So this works fine:
* * * * * mysqldump --host=mysql.example.com --user=mysql | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
That's just the usual plain ol' backup thing. But I'm trying to squeeze in stderr redirects for when the mysqldump command fails, and I've tried every combination of 2>&1 I could think of; none of them does the trick. Every combination either ends with an empty gzip file or stops everything from running.
* * * * * mysqldump --host=mysql.example.com --user=mysql dbname 2>&1 >/dev/null | /usr/local/bin/slacktee | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
So: if there's an error on the mysqldump command, send just the error to /usr/local/bin/slacktee; if there's no error, just send the mysqldump output down the pipe to gzip.
I want the same thing for aws s3 cp, but that seems easier, since I can just put the redirect at the end.
Edited to add: Ideally I'm hoping to avoid doing a separate script for this and keeping it all in one line in cron.
Also adding another edit: the 2>&1 >/dev/null was just for this example. I've tried pointing that at /path/to/slacktee instead, as well as different combinations of 2> and 1>, with a | in different places too, and every other way I could think of; none of that worked either.
I would create a separate script (mysqlbackup.sh) and change the crontab to:
* * * * * mysqlbackup.sh
Your script could look like (untested):
#!/bin/bash
# stderr from mysqldump goes to a file; stdout streams through gzip
mysqldump --host=mysql.example.com --user=mysql dbname 2>/tmp/errors | gzip > /tmp/mysqldump.gz

if [ -s /tmp/errors ]   # true if the error file exists and has a size greater than zero
then
    echo "Something went wrong"
else
    echo "OK"
fi
This, of course, needs to be expanded with the aws s3 cp... stuff...
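For completeness, a sketch of what that expansion might look like (untested like the above; it assumes slacktee and aws are on cron's PATH):

#!/bin/bash
ERRFILE=/tmp/errors

# stderr from both mysqldump and aws collects in one file;
# stdout streams through gzip straight to S3
mysqldump --host=mysql.example.com --user=mysql dbname 2>"$ERRFILE" \
    | gzip \
    | aws s3 cp - s3://backs/backs/whatever.sql.gz 2>>"$ERRFILE"

if [ -s "$ERRFILE" ]   # anything in the error file means something complained
then
    /usr/local/bin/slacktee < "$ERRFILE"
fi

If you really want to keep it to one line in the crontab, bash process substitution (2> >(/usr/local/bin/slacktee)) can split stderr off mid-pipeline, but cron usually runs jobs with /bin/sh, so you would need to set SHELL=/bin/bash in the crontab for that syntax to work.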
I'm using a bash script (sync.sh), run by cron, that is supposed to sync a file to a MySQL database. It works by copying a file from an automatic upload location, parsing it by calling an SQL script (which in turn calls other scripts stored inside MySQL), and at the end emailing a report text file as an attachment.
But it seems something is not working, as nothing happens to the MySQL databases. All the other commands are executed (the first line and the last line: copying the initial file and sending the e-mail).
The MySQL command works perfectly when run separately.
The server is Ubuntu 16.04.
The cron job is run as the root user and the script is part of root's crontab.
Here is the script:
#!/bin/bash
cp -u /home/admin/web/mydomain.com/public_html/dailyxchng/warehouse.txt /var/lib/mysql-files
mysql_pwd=syncit4321
cd /home/admin/web/mydomain.com/sync
mysql -u sync -p$mysql_pwd --database=database_name -e "call sp_sync_report();" > results.txt
echo "<h2>Report date $(date '+%d/%m/%Y %H:%M:%S')</h2><br/><br/> <strong>results.txt</strong> is an attached file which contains sync report." | mutt -e "set content_type=text/html" -s "Report date $(date '+%d/%m/%Y %H:%M:%S')" -a results.txt -- recipient#mydomain.com
cron executes the script with a very stripped-down environment, so you probably want to add the full path to the mysql command in the script.
You can find the full path by running
which mysql
at the prompt.
Or you can add an expanded PATH to the cron invocation:
1 2 * * * PATH=/usr/local/bin:$PATH scriptname
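Applied to the script above, the fix might look like this (a sketch, untested; the /usr/bin/mysql path is an assumption, so check it against the output of which mysql):

#!/bin/bash
# cron's PATH is minimal, so spell out where the binaries live
MYSQL=/usr/bin/mysql   # assumed location; verify with `which mysql`

cp -u /home/admin/web/mydomain.com/public_html/dailyxchng/warehouse.txt /var/lib/mysql-files
mysql_pwd=syncit4321
cd /home/admin/web/mydomain.com/sync || exit 1
"$MYSQL" -u sync -p"$mysql_pwd" --database=database_name -e "call sp_sync_report();" > results.txt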
I am running a cron job that calls a sh script with the code below. I have noticed that sometimes it works, but sometimes I get a 0 KB file. I have no idea what could be causing this 0 KB file or what could be done to fix it.
DATE=$(date +%Y-%m-%d-%H-%M)
NAME=bkp-server-207-$DATE.sql.gz
mysqldump -u root -pXXXX#2016 xxxx | gzip > /media/backup_folder/$NAME
You need to find out why the command is failing. The default output of a cron job is lost unless you redirect it to a file and check it later.
You can log it at the cron level (see http://www.thegeekstuff.com/2012/07/crontab-log/)
59 23 * * * /home/john/bin/backup.sh >> /home/john/logs/backup.log 2>&1
The 2>&1 folds stderr into stdout, so both streams are saved.
Or else you can log specific commands within your script:
echo "Creating $NAME" >>/home/john/logs/backup.log
mysqldump -u root -pXXXX#2016 xxxx 2>>/home/john/logs/backup.log | gzip > /media/backup_folder/$NAME
Once you have the error output, you should have important clues as to the cause of the failure.
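One plausible cause of the intermittent 0 KB file, by the way: in a pipeline the exit status is the last command's, so gzip can happily report success while mysqldump has died and produced no output. A hedged sketch of a more defensive version of the script (untested; the log path is a placeholder reused from the example above):

#!/bin/bash
set -o pipefail   # make the pipeline report mysqldump's failure, not just gzip's

DATE=$(date +%Y-%m-%d-%H-%M)
NAME=bkp-server-207-$DATE.sql.gz
LOG=/home/john/logs/backup.log   # placeholder path

if ! mysqldump -u root -pXXXX#2016 xxxx 2>>"$LOG" | gzip > "/media/backup_folder/$NAME"
then
    echo "backup $NAME failed, removing partial file" >> "$LOG"
    rm -f "/media/backup_folder/$NAME"
fi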
I'm trying to create a cron job for database backup.
This is what I have so far:
mysqldump.sh
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
echo "Finished mysqldump $(date)" >> dump.log
Cron job:
32 18 * * * /db-backup/mysqldump.sh
The problem I am having is that the job does not execute through cron, or when I am not in the script's directory.
Can someone please advise: are my paths incorrect?
Also, I'm not sure the following line will output errors to dump.log:
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
What worked:
mysqldump -u root -ptest --all-databases | gzip > "../db-backup/backup/backup-$(date).sql.gz" 2> ../db-backup/dump.log
echo "Finished mysqldump $(date)" >> ../db-backup/dump.log
There are a couple of things you can check, though more information is always helpful (permissions and location of the file, the entire file contents, etc.).
It can never hurt to preface the mysqldump.sh file with the shebang syntax for your environment. I would venture to guess #!/bin/bash would be sufficient.
Instead of mysqldump -u ..., use the absolute path /usr/bin/mysqldump (or wherever it is on your system). Absolute paths are always a good idea in any form of scripting, since it's difficult to say whether the user has the same environment as you do.
As for storing the errors in dump.log, I don't believe your syntax is correct. I'm fairly sure you're piping the errors from gzip into dump.log, not the errors from mysqldump; the 2> has to come before the pipe, as in mysqldump $PARAMS 2> dump.log | gzip > ...
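Putting those suggestions together, a corrected version of the script might look like this (a sketch, untested; the date format is tightened so the filename contains no spaces):

#!/bin/bash
# Redirect mysqldump's stderr *before* the pipe so its errors, not gzip's,
# land in dump.log; absolute paths because cron's environment is minimal.
/usr/bin/mysqldump -u root -ptest --all-databases 2>> /db-backup/dump.log \
    | gzip > "/db-backup/backup/backup-$(date +%Y-%m-%d-%H%M).sql.gz"
echo "Finished mysqldump $(date)" >> /db-backup/dump.log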
I have set up a cronjob using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command. It only creates a blank file.
The mysqldump command, on the other hand, runs fine via cron.
What could the problem be?
Surely mysql is the interactive interface into MySQL.
Assuming that you're just running mysql and appending the output to your file with >>, the first time it tries to read from standard input, it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but should be on one line).
As an aside, that's not overly secure since your password may be visible from ps while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
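For reference, a minimal sketch of such an option file (assuming it lives at ~/.my.cnf for the cron user, with permissions tightened to 600; the mysql client reads the [client] group automatically):

# ~/.my.cnf -- chmod 600 so only the owner can read it
[client]
user=me
password=never_you_mind

With that in place, the -p option can be dropped from the command line entirely.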
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is with a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect your error message to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job
We're running a CentOS server with a lot of MySQL databases at the moment, and what I need is a really easy way for us to back those up. Many of them are under a couple of megabytes, so dumping them, zipping them up, and then sending them to a secure Google Apps account sounds like a pretty good idea.
So what I need is a script that will dump and zip the database, then email it somewhere; if it fails, email somewhere else instead.
I use the following script to send a small dump to a dedicated mail account.
This of course assumes you can send mails from your machine using the mail command.
#!/bin/bash
set -o pipefail   # without this, $? below reflects gzip's status, not mysqldump's

gzdate=$(/bin/date +%Y-%m-%d_%H%M)
gzfile=dump_${gzdate}.sql.gz
mailrecpt=recipient@domain.com
dumpuser=username
dbname=mydb

mysqldump --single-transaction --opt -u ${dumpuser} ${dbname} | gzip > ${gzfile}
if [ $? -eq 0 ]; then
    ( echo "Database Backup from ${gzdate}:"; uuencode ${gzfile} ${gzfile} ) | mail -s "Database Backup ${gzdate}" ${mailrecpt}
else
    ( echo "Database Backup from ${gzdate} failed." ) | mail -s "FAILED: Database Backup ${gzdate}" ${mailrecpt}
fi
You just need to adapt the variables at the top.
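To schedule it nightly, a crontab line along these lines works (the time and the script path here are placeholders):

# nightly at 02:30; adjust the path to wherever you saved the script
30 2 * * * /usr/local/bin/mysql-mail-backup.sh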