Hi fellow Stack Overflow users,
Today I ran into a very weird scenario on my Unix machine.
I'm currently running this command manually to save a backup of my database.
/usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql.gz
To schedule it in my crontab, I run
crontab -e
Then I add the following to the list
30 2 * * * /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql.gz
I've realized that it's not running automatically. Is there anything that I've missed?
Also, is there a flag or a logging method to know whether the backup completed successfully or not?
Thanks in advance.
Try escaping each % with a backslash (\):
30 2 * * * /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +\%Y-\%m-\%d-\%H.\%M.\%S).sql.gz
See Using the % Character in Crontab Entries by M. Ducea
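As for the second part of the question, about knowing whether the backup succeeded: one option is to move the dump into a small wrapper script that checks mysqldump's exit status and appends to a log, and point cron at that script. This is only a sketch; the script path, the log path, and the gzip step are assumptions, not part of the original command.
#!/bin/bash
# backup_admin_test.sh - hypothetical wrapper around the mysqldump call above
set -o pipefail   # so a mysqldump failure is not hidden by gzip succeeding
OUT=/mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql.gz
LOG=/var/log/admin_test_backup.log
# gzip added so the file contents match the .sql.gz extension
if /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test | gzip > "$OUT"; then
    echo "$(date): backup OK -> $OUT" >> "$LOG"
else
    echo "$(date): backup FAILED" >> "$LOG"
fi
The crontab entry then becomes 30 2 * * * /path/to/backup_admin_test.sh, and because the date call now runs inside the script, the % characters no longer need escaping there.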
Related
I wish to create a demo mode for my CMS. Is there any way I can have my MySQL database reset automatically every hour?
If you have access to the hosting console, you need a database running with the initial data set. Then, just once, take a dump of that database into a file:
mysqldump -u DBUSER -pDBPASS --opt DBNAME > /path/to/my/backup.sql
Then create a cron job (run crontab -e) that restores the database from your dump file (see http://www.adminschoice.com/crontab-quick-reference for more information about crontabs):
mysql -u DBUSER -pDBPASS DBNAME < /path/to/my/backup.sql
For example:
# crontab -e
00 * * * * mysql -u root -p123456 demo < /path/to/my/demo_backup.sql
This will restore your database to its original state every hour (at minute 00).
Note: Also consider that if a user is trying your demo while the reset is running, their data will be lost in the middle of the session.
I am trying to back up my MySQL database using a Linux crontab entry like the one below:
$crontab -e
15 * * * * /usr/bin/mysqldump -u XXX-pXXX demobackupDB > /home/user/DBbackup/$(date +%F)_backup.sql
But I don't find the .sql file in that folder after every 15 minutes.
How can I find out what the error is and fix it?
Your crontab entry is wrong. What you specified means "run the task at 15 minutes past each hour".
You need this entry to run the task every 15 minutes (note that the % in the date format is escaped, because % is special in crontab):
*/15 * * * * /usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +\%F)_backup.sql
If your job is still not running, there could be many reasons for this.
You can start debugging with a simple test like this:
* * * * * echo hi >> /home/user/test
(You should see a new line "hi" appended to /home/user/test file once a minute)
If this is not the case, see probable reasons for Ubuntu here: https://askubuntu.com/q/23009.
Also see the cron log: https://askubuntu.com/q/56683/103599
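Another quick check, reusing the paths from the question, is to capture the job's stderr into a file you can read afterwards; the error-log name here is just an example:
*/15 * * * * /usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +\%F)_backup.sql 2>> /home/user/DBbackup/backup_error.log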
You can create a file containing your statement
echo '/usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +%F)_backup.sql' > backupjob
Make it executable
chmod +x backupjob
And run it
sudo ./backupjob
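As a variation, the same backupjob file could be written out in an editor with a shebang and a basic error check; this is only a sketch using the placeholder credentials from above:
#!/bin/bash
# backupjob - dump demobackupDB into a dated file
OUTFILE=/home/user/DBbackup/$(date +%F)_backup.sql
if /usr/bin/mysqldump -u XXX -pXXX demobackupDB > "$OUTFILE"; then
    echo "backup written to $OUTFILE"
else
    echo "mysqldump failed" >&2
fi
Note that inside a script the % in the date format needs no escaping; that is only required when the date command sits directly on a crontab line.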
If everything is fine, the MySQL backup file should be created. If not, keep debugging your statement; otherwise, update your crontab entry:
crontab -e
# My MySQL-Backup-Job
*/15 * * * * /yourpath/backupjob
...
Important: the crontab file must end with an empty line!
For further information: Ubuntu - Cron
I'm trying to add a cron job to the crontab (Ubuntu server) that backs up the MySQL database.
Executing the script in the terminal as root works well, but from the crontab nothing happens. I've tried running it every minute, but no files appear in the folder /var/db_backups.
(Other cronjobs work well)
Here is the cronjob:
* * * * * mysqldump -u root -pHERE THERE IS MY PASSWORD
--all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
What could the problem be?
You need to escape the % character with \:
mysqldump -u 'username' -p'password' DBNAME > /home/eric/db_backup/liveDB_`date +\%Y\%m\%d_\%H\%M`.sql
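Applied to the entry from the question (kept on a single line, with your real password in place of the placeholder), that would be something like:
* * * * * mysqldump -u root -p'YOUR_PASSWORD' --all-databases | gzip > /var/db_backups/database_`date +\%d\%m\%y`.sql.gz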
I was trying the same thing, but the dump was created with 0 KB. Then I found the solution below, which saved my time.
Command:
0 0 * * * mysqldump -u 'USERNAME' -p'PASSWORD' DATABASE > /root/liveDB_`date +\%Y\%m\%d_\%H\%M\%S`.sql
NOTE:
1) You can change the schedule to suit your requirements; the command above runs once a day.
2) Make sure you put your USERNAME, PASSWORD, and DATABASE inside single quotes (').
3) Add the above command to your crontab.
I hope this helps someone.
Check the cron logs (they should be in /var/log/syslog). You can use grep to filter them:
grep CRON /var/log/syslog
You can also check your local mailbox to see if there is any cron mail:
/var/mail/username
You can also set a different recipient address in your crontab file:
MAILTO=your@mail.com
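For example, at the top of the crontab (the address is a placeholder); cron then mails you whatever the entries below print, including error messages:
MAILTO=your@mail.com
* * * * * mysqldump -u root -p'YOUR_PASSWORD' --all-databases | gzip > /var/db_backups/database_`date +\%d\%m\%y`.sql.gz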
Alternatively, you can create a custom command, mycommand, to which you can add more options. You must give it execute permissions.
It is preferable to have a folder where all your backups are stored; in this case, a writable folder "backup" that you first create in your home directory, for example.
My command in "/usr/local/bin/mycommand":
#!/bin/bash
# mycommand: "backupall" dumps all databases into the backup folder, archives the dump, and removes the raw .sql
MY_USER="your_user"
MY_PASSWORD="your_pass"
MY_HOME="your_home"
case $1 in
    "backupall")
        cd $MY_HOME/backup
        mysqldump --opt --password=$MY_PASSWORD --user=$MY_USER --all-databases > bckp_all_$(date +%d%m%y).sql
        tar -zcvf bckp_all_$(date +%d%m%y).tgz bckp_all_$(date +%d%m%y).sql
        rm bckp_all_$(date +%d%m%y).sql;;
    *) echo "Others";;
esac
Cron: Runs the 1st day of each month.
0 0 1 * * /usr/local/bin/mycommand backupall
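You can also test it once by hand before relying on the cron entry (using the same placeholder home folder as in the script):
/usr/local/bin/mycommand backupall
ls -lh your_home/backup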
I hope it helps somewhat.
Ok, I had a similar problem and was able to get it fixed.
In your case, you could put that mysqldump command into a script,
then source the profile of the user who executes the mysqldump command,
e.g.:
. /home/bla/.bash_profile
then use the absolute path to the mysqldump command:
/usr/local/mysql/bin/mysqldump -u root -pHERE THERE IS MY PASSWORD --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
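Putting both steps into one file that cron calls could look like this; the script name and the password placeholder are assumptions, the other paths are the ones used above:
#!/bin/bash
# db_backup.sh - hypothetical wrapper for the cron job
# load the user's environment so the job sees the same PATH and variables as an interactive shell
. /home/bla/.bash_profile
/usr/local/mysql/bin/mysqldump -u root -p'YOUR_PASSWORD' --all-databases | gzip > /var/db_backups/database_$(date +%d%m%y).sql.gz
The crontab entry then simply calls the script, e.g. * * * * * /home/bla/db_backup.sh.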
Localhost MySQL backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between -p and the password or between -u and the username; replace root with the correct database username.)
It works for me, with no space between -p and the password or between -u and the username.
Create a new file, put the code that dumps the database and zips it into that file, and run the script via cron, as in the sketch below.
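A minimal sketch of such a script (the script name, paths, and credentials are all placeholders):
#!/bin/bash
# dump_and_zip.sh - dump one database and compress the result
/usr/bin/mysqldump -u backupuser -p'YOUR_PASSWORD' mydatabase | gzip > /home/user/DBbackup/mydatabase_$(date +%F).sql.gz
The matching crontab entry would point at the script, for example:
0 * * * * /home/user/bin/dump_and_zip.sh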
I am using Percona Server (a MySQL fork) on Ubuntu. The package (very likely the regular MySQL package as well) comes with a maintenance account called debian-sys-maint. In order for this account to be used, the credentials are created when installing the package; and they are stored in /etc/mysql/debian.cnf.
And now the surprise: A symlink /root/.my.cnf pointing to /etc/mysql/debian.cnf gets installed as well.
This file is an option file that is read automatically when using mysql or mysqldump. So basically you then have the login credentials given twice: in that file and on the command line. This was the problem I had.
So one solution to avoid this condition is to use the --no-defaults option for mysqldump; the option file then won't be read. However, you then provide credentials via the command line, so anyone who can run ps can see the password while the backup runs. So it's best if you create your own option file with user name and password and pass it to mysqldump via --defaults-file.
You can create the option file by using mysql_config_editor or simply in any editor.
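A hand-written option file and the matching crontab entry might look like this; the file name, user, and backup path are placeholders, and note that --defaults-file has to be the first option on the mysqldump command line:
# /root/.backup.cnf, made readable only by root: chmod 600 /root/.backup.cnf
[mysqldump]
user=backupuser
password=YOUR_PASSWORD
and then in the crontab:
30 2 * * * /usr/bin/mysqldump --defaults-file=/root/.backup.cnf --all-databases | gzip > /var/backups/all_$(date +\%F).sql.gz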
Running mysqldump via sudo from the command line works just because sudo usually does not change $HOME, so root's .my.cnf is not found then. When running as a cron job, it is.
You might also need to restart the service to load any of your changes.
service cron restart
or
/etc/init.d/cron restart
I have set up a cronjob using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command. It only creates a blank file.
The mysqldump command, on the other hand, does run via cron.
What could the problem be?
Surely mysql is the interactive interface into MySQL.
Assuming that you're just running mysql and appending the output to your file with >>, the first time it tries to read from standard input, it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but should be on one line).
As an aside, that's not overly secure since your password may be visible from ps while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
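Such an option file could be as small as this (a sketch; keep it readable only by your user with chmod 600), after which the -u and -p options can be dropped from the crontab line:
# ~/.my.cnf
[client]
user=me
password=never_you_mind
The entry above then shrinks to:
12 22 * * * /usr/bin/mysql -D my_database -e "select * from my_table" >>/home/me/output_file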
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is with a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect your error message to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job
Hey, I am a newbie to scripting in Linux. I want to take an SQL dump of my database every hour. I have gone through a couple of blogs and was able to write a script that takes the dump of my database, but how do I make it run every hour via crontab?
Kindly help me out.
Set up a crontab entry like this:
0 * * * * /usr/bin/mysqldump --user=sqluser --password=sqlpass -A > /tmp/database.sql
This will run the command /usr/bin/mysqldump --user=sqluser --password=sqlpass -A > /tmp/database.sql on the hour, every hour. It dumps all database schemas into the file /tmp/database.sql (adjust as required for your setup) using the username sqluser and the password sqlpass (again, adjust for your setup).
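Since you already have a script that takes the dump, the crontab entry can equally well call that script instead of running mysqldump directly; the path below is just a placeholder for wherever your script lives (make sure it is executable with chmod +x):
0 * * * * /home/you/bin/hourly_db_dump.sh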
For more information about crontab syntax, you can refer to this page.