I have set up a cronjob using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command. It only creates a blank file.
However, the mysqldump command runs fine via cron.
What could the problem be?
mysql is, of course, the interactive command-line interface to MySQL.
Assuming that you're just running mysql and appending the output to your file with >>, the first time it tries to read from standard input, it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but it should all be on one line in the crontab; note there is no space after -p).
As an aside, that's not overly secure, since your password may be visible in ps output while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
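For example, a minimal ~/.my.cnf might look like this (the values are placeholders; make the file readable only by you with chmod 600):
[client]
user=me
password=never_you_mind
The mysql client reads this file automatically, so the crontab entry can then drop the -u and -p options entirely.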
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is with a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
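If that here-doc lives in a script, the crontab entry just runs the script, for example (the script path is hypothetical):
12 22 * * * /home/me/run_queries.sh >>/home/me/output_file 2>&1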
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect the error output to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job
Hi fellow Stack Overflow users,
Today I ran into a very weird scenario on my Unix machine.
I'm currently using this command manually to back up my database:
/usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql.gz
In my crontab, I access it using
crontab -e
Then I add the following entry to the list:
30 2 * * * /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql.gz
I've realized that it's not running automatically; is there anything I've missed?
Also, is there a flag or logging method to know whether the backup completed successfully or not?
Thanks in advance.
Try escaping each % with a backslash. In a crontab entry, an unescaped % is treated as a newline and everything after the first % is passed to the command on standard input, so the date format string never reaches the date command:
30 2 * * * /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > /mnt/databasesql/admin$(date +\%Y-\%m-\%d-\%H.\%M.\%S).sql.gz
See Using the % Character in Crontab Entries by M. Ducea
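Alternatively, you can sidestep the escaping entirely by moving the command into a wrapper script, since % is only special inside the crontab file itself. A sketch (paths and file names are placeholders), which also logs success or failure to answer the second question:
#!/bin/sh
# Hypothetical wrapper: /usr/local/bin/backup_admin_test.sh
# No % escaping is needed here; % is only special in the crontab file.
OUT=/mnt/databasesql/admin$(date +%Y-%m-%d-%H.%M.%S).sql
if /usr/bin/mysqldump -u root -p'thetechnofreak' admin_test > "$OUT"; then
    echo "$(date): backup OK: $OUT" >> /var/log/db_backup.log
else
    echo "$(date): backup FAILED" >> /var/log/db_backup.log
fi
The crontab entry then becomes 30 2 * * * /usr/local/bin/backup_admin_test.sh, with no special characters left to escape.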
I did try to search, but nothing came up that really works for me.
So I'm starting this thread to see if anyone can help. I hope this is not a stupid question where I've overlooked something simple.
I have a Mac mini running a MySQL server.
There are some day-end jobs, so I put them into a script triggered by crontab. (As this is macOS, I also tried launchd, but saw the same behavior.)
The crontab looks like this:
15 00 * * * /Users/fgs/Documents/database/process_db.sh > /Users/fgs/Documents/database/output.txt 2>&1
The script looks like this:
#!/bin/bash
#some data patching tasks before everything starts
#This sql takes 3 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/loadrawdata.sql
#This sql takes 90 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
#This sql takes 1 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/anothersql.sql
Behavior:
A. When I execute the shell script directly in a terminal, all three SQL scripts work.
B. When I run it from crontab, the 90-second SQL doesn't work (it is an INSERT INTO with a very big join, so no output is printed; I also tried redirecting to an output file and adding 2>&1, still no output), but the SQL before and after it works as expected.
C. To simulate crontab's environment, I tried
env - /bin/sh
and then started the shell script manually.
It appears that the 90-second LongLongsql.sql ran for only 5 seconds and then skipped to the next line. No error message was displayed.
I am wondering if there is any kind of timeout for crontab? (I searched but found nothing.)
I checked that ulimit is unlimited (both within "env - /bin/sh" and by putting the check into the script).
I believe it is not related to the mysql command, since the same scripts work fine when run by hand (I searched this topic too and found nothing interesting).
Just wondering if anyone can shed some light; a direction or anything else would help.
Thanks everyone in advance.
Don't forget that cron starts an isolated shell, which may not be able to read the file.
I would recommend putting your MySQL commands inside a script. If you are able to execute the script, cron should also be able to do so.
#!/bin/bash
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
Or, using the client's source command with -e:
#!/bin/bash
/usr/local/bin/mysql --user=root --password=xxxxxx dbname -e "source /Users/fgs/Documents/database/LongLongsql.sql"
(The -e option takes an SQL statement rather than a file path, hence the source command.)
Then call the script from crontab...
I am trying to back up my MySQL database using a Linux crontab with the command below:
$crontab -e
15 * * * * /usr/bin/mysqldump -u XXX-pXXX demobackupDB > /home/user/DBbackup/$(date +%F)_backup.sql
But I don't find that .sql file in the specified folder after every 15 minutes.
How do I find out what the error is and fix it?
You have the wrong crontab schedule. What you specified means "run the task at 15 minutes past each hour".
You need this crontab entry to run the task every 15 minutes:
*/15 * * * * /usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +\%F)_backup.sql
(Note the added space between -u and the username, and the escaped %: cron treats an unescaped % as a newline, so the date format would otherwise be cut off.)
If your job is still not running, there could be many reasons for this.
You can start debugging with a simple test like this:
* * * * * echo hi >> /home/user/test
(You should see a new line "hi" appended to the /home/user/test file once a minute.)
If this is not the case, see probable reasons for Ubuntu here: https://askubuntu.com/q/23009.
Also see the cron log: https://askubuntu.com/q/56683/103599
You can create a file with your statement and make it executable:
echo '/usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +%F)_backup.sql' > backupjob
(Note the space added between -u and the username; as written in the question, XXX-pXXX would be parsed as the user name.)
Make it executable:
chmod +x backupjob
And run it:
sudo ./backupjob
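Note that the file created above has no shebang line, so cron may not be able to execute it directly on every system. A safer sketch of what backupjob should contain:
#!/bin/sh
/usr/bin/mysqldump -u XXX -pXXX demobackupDB > /home/user/DBbackup/$(date +%F)_backup.sql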
If everything is fine, the MySQL backup file should be created. If not, continue debugging your statement; if it works, add the crontab entry:
crontab -e
# My MySQL-Backup-Job
*/15 * * * * /yourpath/backupjob
...
Important: the crontab file must end with a newline (an empty last line)!
For further information: Ubuntu - Cron
I have a MySQL table with a large number of rows (10 million).
From the mysql client, I want to run a query but not print the results, because even though the query runs in 15 seconds, printing the results to the console takes many minutes.
How can I achieve this?
EDIT: My query is the following:
select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
EDIT 2: At the end of execution, the mysql client prints the following:
9950710 rows in set (9.31 sec)
I want to see this timing, but not print the result rows (which takes 15 minutes).
On Linux, you can redirect the output to /dev/null to suppress it, like this:
mysql -u username -p database -e "SELECT * FROM table" > /dev/null
On Windows the equivalent would be:
mysql -u username -p database -e "SELECT * FROM table" > NUL
Please note: the only thing then printed to the console will be errors. To suppress those as well, redirect stderr to stdout by appending 2>&1 (Linux).
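For example, on Linux (a sketch combining both redirections):
mysql -u username -p database -e "SELECT * FROM table" > /dev/null 2>&1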
In the console, you may redirect output into the null device:
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > /dev/null
or you may redirect into a file to inspect the results later (this is much faster than printing the output to the console):
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > ./output.txt
It seems like you want a pager? Run the following (in the MySQL console):
pager less
This pipes results through less, so only one screen of output is shown at a time.
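If the goal is to see the row count and timing without the rows themselves, the pager can also discard the output entirely; a sketch using the client's built-in pager and nopager commands:
pager cat > /dev/null
select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
nopager
The client still prints the "N rows in set (X sec)" summary after the query finishes.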
I'm trying to add a cron job to the crontab (Ubuntu server) that backs up the MySQL db.
Executing the script in the terminal as root works well, but inserted in the crontab nothing happens. I've tried running it every minute, but no files appear in the folder /var/db_backups.
(Other cron jobs work well.)
Here is the cron job:
* * * * * mysqldump -u root -pHERE THERE IS MY PASSWORD
--all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
What could the problem be?
You need to escape the % character with a backslash:
mysqldump -u 'username' -p'password' DBNAME > /home/eric/db_backup/liveDB_`date +\%Y\%m\%d_\%H\%M`.sql
I was trying the same, but found that the dump was created with 0 KB. Then I came across the solution below, which saved me time.
Command:
0 0 * * * mysqldump -u 'USERNAME' -p'PASSWORD' DATABASE > /root/liveDB_`date +\%Y\%m\%d_\%H\%M\%S`.sql
NOTE:
1) You can change the schedule to suit your requirements; the command above runs every day at midnight.
2) Make sure you enter your USERNAME, PASSWORD, and DATABASE inside single quotes (').
3) Put the above command in your crontab.
I hope this helps someone.
Check the cron logs (they should be in /var/log/syslog). You can use grep to filter them:
grep CRON /var/log/syslog
Also, you can check your local mailbox to see if there are any cron mails:
/var/mail/username
You can also set a different recipient address in your crontab file:
MAILTO=your@mail.com
Alternatively, you can create a custom command, mycommand, to which you can add more options. You must give it execute permissions.
It is preferable to have a folder where you store all your backups; in this case, a writable folder "backup", created first in your home directory, for example.
My command, in /usr/local/bin/mycommand:
#!/bin/bash
MY_USER="your_user"
MY_PASSWORD="your_pass"
MY_HOME="your_home"
case "$1" in
  "backupall")
    cd "$MY_HOME/backup"
    mysqldump --opt --password="$MY_PASSWORD" --user="$MY_USER" --all-databases > bckp_all_$(date +%d%m%y).sql
    tar -zcvf bckp_all_$(date +%d%m%y).tgz bckp_all_$(date +%d%m%y).sql
    rm bckp_all_$(date +%d%m%y).sql;;
  *) echo "Others";;
esac
Cron: runs on the 1st day of each month.
0 0 1 * * /usr/local/bin/mycommand backupall
I hope it helps somewhat.
OK, I had a similar problem and was able to get it fixed.
In your case, you could put that mysqldump command into a script,
then source the profile of the user who executes the mysqldump command,
for example:
. /home/bla/.bash_profile
then use the absolute path of the mysqldump command:
/usr/local/mysql/bin/mysqldump -u root -pHERE THERE IS MY PASSWORD --all-databases | gzip > /var/db_backups/database_`date +%d%m%y`.sql.gz
Local host MySQL backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(Note there is no space between -p and the password; the -u option works with or without a space. Replace root with a correct database username.)
It works for me.
Create a new file containing the dump command, have it write the dump to a file location and zip it, then run that script via cron.
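A sketch of such a script (path, credentials, and file names are placeholders):
#!/bin/sh
# Hypothetical /usr/local/bin/db_backup.sh: dump, then compress
OUT=/var/db_backups/database_$(date +%d%m%y).sql
mysqldump -u root -pPASSWORD --all-databases > "$OUT" && gzip "$OUT"
It can then be scheduled with an entry like 0 3 * * * /usr/local/bin/db_backup.sh, leaving no % characters in the crontab line itself.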
I am using Percona Server (a MySQL fork) on Ubuntu. The package (very likely the regular MySQL package as well) comes with a maintenance account called debian-sys-maint. In order for this account to be used, the credentials are created when installing the package; and they are stored in /etc/mysql/debian.cnf.
And now the surprise: A symlink /root/.my.cnf pointing to /etc/mysql/debian.cnf gets installed as well.
This file is an option file that is read automatically when using mysql or mysqldump. So basically the login credentials were given twice: in that file and on the command line. That was the problem I had.
So one solution to avoid this condition is to use the --no-defaults option for mysqldump; the option file then won't be read. However, you are still providing credentials via the command line, so anyone who can run ps can see the password while the backup runs. It's best to create your own option file containing the user name and password and pass it to mysqldump via --defaults-file.
You can create the option file by using mysql_config_editor or simply in any editor.
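A sketch of such an option file and its use (path and credentials are placeholders):
# /root/mysqldump.cnf, chmod 600 so only root can read it
[mysqldump]
user=backupuser
password=secret
Then point mysqldump at it; note that --defaults-file must be given as the first option:
mysqldump --defaults-file=/root/mysqldump.cnf --all-databases | gzip > /var/db_backups/database_$(date +\%d\%m\%y).sql.gz
(The %s are escaped here because the command is meant for a crontab line, as discussed above.)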
Running mysqldump via sudo from the command line as root works just because sudo usually does not change $HOME, so .my.cnf is not found then. When it runs as a cron job, it is found.
You might also need to restart the cron service to pick up your changes:
service cron restart
or
/etc/init.d/cron restart