I am running a cron job that calls a sh script with the code below. I have noticed that sometimes it works, but sometimes I get a 0KB file. I have no idea what could be causing this 0KB file or what could be done to fix it.
DATE=`date +%Y-%m-%d-%H-%M`
NAME=bkp-server-207-$DATE.sql.gz
mysqldump -u root -pXXXX#2016 xxxx | gzip > /media/backup_folder/$NAME
You need to find out why the command is failing. By default, the output of a cron job is lost unless you redirect it to a file and check it later.
You can log it at the cron level (see http://www.thegeekstuff.com/2012/07/crontab-log/)
59 23 * * * /home/john/bin/backup.sh >> /home/john/logs/backup.log 2>&1
The 2>&1 folds stderr into stdout, so both are saved.
Or else you can log specific commands within your script:
echo "Creating $NAME" >>/home/john/logs/backup.log
mysqldump -u root -pXXXX#2016 xxxx 2>>/home/john/logs/backup.log | gzip > /media/backup_folder/$NAME
Once you have the error output, you should have important clues as to the cause of the failure.
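A likely cause of the 0KB files in particular: gzip creates the output file as soon as the pipeline starts, and the pipeline's exit status is gzip's, so a failing mysqldump (bad password, server unreachable) still leaves an empty archive behind and looks successful. A minimal defensive sketch, assuming bash; the log path is the hypothetical one from the crontab example above:
#!/bin/bash
# Make the pipeline report failure if mysqldump fails, not just gzip (bash-specific).
set -o pipefail

LOG=/home/john/logs/backup.log        # hypothetical log location
DATE=$(date +%Y-%m-%d-%H-%M)
NAME=bkp-server-207-$DATE.sql.gz
OUT=/media/backup_folder/$NAME

if ! mysqldump -u root -pXXXX#2016 xxxx 2>>"$LOG" | gzip > "$OUT"; then
    echo "$(date): mysqldump failed, removing empty $OUT" >> "$LOG"
    rm -f "$OUT"
fi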
Related
I have some MySQL databases that I back up nightly from cron, just your standard mysqldump command. I'd like to feed only the errors from mysqldump and/or aws to a script that will then send the error into Slack. I'm struggling to figure out how to do that in the middle of the command, though. I want to send stderr to slacktee, but stdout to gzip and on to aws s3 cp. So this works fine:
* * * * * mysqldump --host=mysql.example.com --user=mysql | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
That's just the usual plain ol' backup thing. But when I try to squeeze in stderr redirects for when the mysqldump command fails, no combination of 2>&1 I've tried does the trick. Every combination either ends with an empty gzip file or stops everything from running.
* * * * * mysqldump --host=mysql.example.com --user=mysql dbname 2>&1 >/dev/null | /usr/local/bin/slacktee | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
So if there's an error on the mysqldump command, send just the error to /usr/local/bin/slacktee; if there's no error, just send the mysqldump output down the pipe to gzip.
I want the same thing with aws s3 cp, but that seems easier, since I can just put the redirect at the end.
Edited to add: Ideally I'm hoping to avoid doing a separate script for this and keeping it all in one line in cron.
Also adding another edit: the 2>&1 >/dev/null was just for this example. I've tried making that 2>&1 /path/to/slacktee, as well as different combinations of 2> and 1> with | in different places, and every other way I could think of, and none of that worked either.
I would create a separate script (mysqlbackup.sh) and change the crontab to:
* * * * * mysqlbackup.sh
Your script could look like (untested):
#!/bin/bash
mysqldump --host=mysql.example.com --user=mysql dbname 2>/tmp/errors | gzip > /tmp/mysqldump.gz
if [ -s /tmp/errors ]; then   # true if the file exists and has a size greater than zero
echo "Something went wrong"
else
echo "OK"
fi
This, of course, needs to be expanded with the aws s3 cp... stuff...
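For instance, a sketch of the expanded script (untested, like the original; it assumes slacktee reads from stdin, and the temp-file name is arbitrary):
#!/bin/bash
ERRFILE=/tmp/mysqldump-errors.$$      # per-run scratch file, arbitrary name

mysqldump --host=mysql.example.com --user=mysql dbname 2>"$ERRFILE" \
    | gzip \
    | aws s3 cp - s3://backs/backs/whatever.sql.gz 2>>"$ERRFILE"

if [ -s "$ERRFILE" ]; then            # something wrote to stderr
    /usr/local/bin/slacktee < "$ERRFILE"
fi
rm -f "$ERRFILE"
If you really want to keep it all on one line in cron, bash process substitution (2> >(/usr/local/bin/slacktee)) can route stderr without a temp file, but cron runs /bin/sh by default, so you would need a SHELL=/bin/bash line in the crontab for that syntax to work.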
I have a small down-and-dirty script to dump one of the tables from all of a client's databases nightly:
#!/bin/bash
DB_BACKUP="/backups/mysql_backup/`date +%Y-%m-%d`"
DB_USER="dbuser"
DB_PASSWD="dbpass"
# Create the backup directory
mkdir -p $DB_BACKUP
# Remove backups older than 10 days
find /backups/mysql_backup/ -maxdepth 1 -type d -mtime +10 -exec rm -rf {} \;
# Backup each database on the system
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names \
        | grep -viE '(staging|performance_schema|information_schema)')
do
    echo "dumping $db-uploads"
    mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction \
        $db uploads > "$DB_BACKUP/mysqldump-$db-uploads-$(date +%Y-%m-%d).sql"
done
Recently we've had some issues where some of the tables get corrupted, and mysqldump fails with the following message:
mysqldump: Got error: 145: Table './myDBname/myTable1' is marked as crashed and should be repaired when using LOCK TABLES
Is there a way for me to check if this happens in the bash script, and log the errors if so?
Also, as written would such an error halt the script, or would it continue to backup the rest of the databases normally? If it would halt execution is there a way around that?
Every program has an exit status. The exit status of each program is assigned to the $? builtin bash variable. By convention, this is 0 if the command was successful, or some other value 1-255 if the command was not successful. The exact value depends on the code in that program.
You can see the exit codes that mysqldump might issue here: https://github.com/mysql/mysql-server/blob/8.0/client/mysqldump.cc#L65-L72
You can check for this and log it, output an error message of your choosing, exit the bash script, or whatever you want.
mysqldump ...
if [[ $? != 0 ]] ; then
...do something...
fi
You can alternatively write this which does the same thing:
mysqldump ... || {
...do something...
}
The || means to execute the following statement or code block if the exit status of the preceding command is nonzero.
By default, commands that return errors do not cause the bash script to exit. You can optionally make that the behavior of the script by using this statement, and all following commands will cause the script to exit if they fail:
set -e
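Applied to the backup loop above, a sketch that logs each failure and carries on with the remaining databases (the errors.log name is an assumption; everything else comes from the original script):
#!/bin/bash
DB_BACKUP="/backups/mysql_backup/$(date +%Y-%m-%d)"
DB_USER="dbuser"
DB_PASSWD="dbpass"
ERRLOG="$DB_BACKUP/errors.log"        # hypothetical error log

mkdir -p "$DB_BACKUP"

for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names \
        | grep -viE '(staging|performance_schema|information_schema)')
do
    echo "dumping $db-uploads"
    if ! mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction \
            "$db" uploads > "$DB_BACKUP/mysqldump-$db-uploads-$(date +%Y-%m-%d).sql" 2>>"$ERRLOG"
    then
        echo "$(date): dump of $db failed, continuing with the next database" >> "$ERRLOG"
    fi
done
Note that without set -e, a failed dump does not halt the loop, which is exactly what you want here.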
I had written a script that connects to the local MySQL server every 6 seconds and checks if there is any data in the table. If there is data, it runs some PHP commands and then deletes that data from the table. I logged into my remote server (shared hosting) through SSH, copied the script over, and executed it using the command "nohup ./script.sh 0<&- &>alert.log &" so that it runs in the background and writes all of its output to the alert.log file. My problem is that when I log in to the server through SSH and execute the script, it runs perfectly, but when I log out from the server it stops running. When I check the alert.log file afterwards, it shows the error "cannot connect to local mysql server". Any solutions?
This is the code:
while true
do
res=($(mysql -u root -p123456 --skip-column-names -Dtest -e "select id from temptab"))
if [[ "$res" > 0 ]];then
del=`mysql -u root -p123456 -Dtest -e "delete from temptab;" `
now="$(date +'%d/%m/%Y:%H.%M.%S')"
for ((i=0; i < ${#res[@]}; i++))
do
php -n /var/lib/mysql/trigger.php ${res[$i]}
echo "[$now]:Trigger called with videoid ${res[$i]}"
done
fi
sleep 6
done
And this is the sample output:
cat nohup.out
X-Powered-By: PHP/5.4.20
Content-type: text/html
{"multicast_id":8864856209398719411,"success":2,"failure":1,"canonical_ids":0,"results":[{"message_id":"0:1385797766832904%4f0c6467f9fd7ecd"},{"error":"InvalidRegistration"},{"message_id":"0:1385797766832901%4f0c6467f9fd7ecd"}]}81Inserted police info
[30/11/2013:00.49.26]:Trigger called with videoid 65
/etc/bashrc: line 14: whoami: command not found
/etc/bashrc: line 20: grep: command not found
/etc/bashrc: line 59: dircolors: command not found
./alert.sh: line 15: php: command not found
[30/11/2013:07.50.27]:Trigger called with videoid 70
./alert.sh: line 15: /ramdisk/php/54/bin/php54: No such file or directory
[30/11/2013:09.09.52]:Trigger called with videoid 71
screen is what you need. There are plenty of tutorials on Google about screen usage.
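The basic workflow, in case it saves a search (the session name is arbitrary):
screen -S alert        # start a named session
./script.sh            # run the script inside it
# press Ctrl-a d to detach; the script keeps running after you log out
screen -r alert        # reattach later to check on it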
I suggest moving your code into a crontab entry that will run every X minutes (5 minutes, or anything else you like) rather than having your user run it during a live session.
Just place the PHP script inside a call to cron: log in, run crontab -e, then add:
*/5 * * * * /home/username/phpscript.php
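Note that for cron to execute the .php file directly it needs a shebang line (e.g. #!/usr/bin/php) and execute permission; otherwise, invoke the interpreter explicitly (the interpreter path is an assumption, check it with which php):
*/5 * * * * /usr/bin/php /home/username/phpscript.php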
You could try to run your script like:
/path/to/script.sh </dev/null &>/home/yourname/alert.log &
disown
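Given the php: command not found and whoami: command not found lines in your log, the detached script is probably also losing the PATH it had in your interactive session. A sketch of a guard for the top of the script (the PATH value is an assumption; verify where php lives with which php while logged in):
#!/bin/bash
# Detached and cron shells often start with a minimal environment,
# so set PATH explicitly instead of relying on the login shell's value.
export PATH=/usr/local/bin:/usr/bin:/bin

PHP=$(command -v php) || { echo "php not found in PATH" >&2; exit 1; }
Then call "$PHP" -n /var/lib/mysql/trigger.php ... in the loop instead of bare php.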
Finally I got the solution. I was using /usr/bin/php to call my PHP files, but when I edited it to /usr/bin/php.orig it started working. But what is that php.orig? Thanks, all.
I'm trying to create a cron job for database backup.
This is what I have so far:
mysqldump.sh
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
echo "Finished mysqldump $(date)" >> dump.log
Cron job:
32 18 * * * /db-backup/mysqldump.sh
The problem I am having is that the job does not execute through cron, or when I am not in the directory.
Can someone please advise? Are my paths incorrect?
Also, I'm not sure whether the following line will output errors to dump.log:
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
What worked:
mysqldump -u root -ptest --all-databases | gzip > "../db-backup/backup/backup-$(date).sql.gz" 2> ../db-backup/dump.log
echo "Finished mysqldump $(date)" >> ../db-backup/dump.log
There are a couple of things you can check, though more information (permissions and location of the file, the entire file contents, etc.) is always helpful.
It can never hurt to preface the mysqldump.sh file with the shebang syntax for your environment; I would venture to guess #!/bin/bash would be sufficient.
Instead of mysqldump -u ..., use the absolute path /usr/bin/mysqldump (or wherever it is on your system). Absolute paths are always a good idea in any form of scripting, since it's difficult to say whether cron has the same environment as you do.
As for storing the errors in dump.log, I don't believe your syntax is correct. You're capturing the errors from gzip in dump.log, not the errors from mysqldump: a redirect written after the pipe applies to the last command in the pipeline, so the stderr redirect has to be attached to mysqldump itself, before the pipe.
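A sketch of the corrected script, combining the three suggestions above (shebang, absolute path, stderr attached to mysqldump before the pipe); the date format is also changed, since bare $(date) output contains spaces and colons that make awkward filenames:
#!/bin/bash
# The stderr redirect goes on mysqldump itself; the pipe only carries its stdout.
/usr/bin/mysqldump -u root -ptest --all-databases 2>/db-backup/dump.log \
    | gzip > "/db-backup/backup/backup-$(date +%Y-%m-%d).sql.gz"
echo "Finished mysqldump $(date)" >> /db-backup/dump.log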
I have set up a cronjob using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command. It only creates a blank file.
Whereas mysqldump command is running via cron.
What could the problem be?
Note that mysql is the interactive interface into MySQL.
Assuming that you're just running mysql and appending the output to your file with >>, the first time it tries to read from standard input, it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but should be on one line).
As an aside, that's not overly secure since your password may be visible from ps while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
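For example, a minimal ~/.my.cnf for the cron user (restrict it with chmod 600 so only the owner can read the password):
[client]
user=me
password=never_you_mind
With that in place, the -u and -p options can be dropped from the command line entirely.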
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is with a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect your error message to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job