I'm trying to set up a cron job to delete MySQL records where a date field is older than three weeks, but I can't figure out what string goes in the box.
Here's a pic of the Cron management screen. Can anyone help please?
http://i46.tinypic.com/id4nsj.jpg
If you know the query you want to run, you can use the -e argument for mysql at the command line for your script. So the "Command to Run" in your cron management tool would be:
mysql -u <username> -p<password> -h <name-of-mysql-server> <databasename> -e "<YOUR-QUERY-HERE>"
The general structure of a query to delete records older than a date is:
DELETE FROM [table] WHERE [column] < DATE_SUB(NOW(), INTERVAL 3 WEEK);
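Concretely, the "Command to Run" could look like the line below; the credentials, host, database, table, and column names are all placeholders to adapt to your setup:

```shell
# Placeholder names throughout; deletes rows whose created_at is older than three weeks
mysql -u myuser -pmypassword -h localhost mydb -e "DELETE FROM mytable WHERE created_at < DATE_SUB(NOW(), INTERVAL 3 WEEK);"
```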
Related
I have a MySQL query that has been running for about 4 days. I entered the query from the command line:
mysql -u who -p < SQL
mail -s 'query completed' me@there < /dev/null
and got the email response before the query completed. The mysql command was not backgrounded.
Is it possible for a MySQL query to change its PID? If not, any clue as to why the next command-line command would execute?
I know I could have done a mysql && mail and it would have waited for positive completion.
You'll want to join these together with && to ensure the second command doesn't fire unless the first succeeds:
mysql -u who -p < SQL && mail -s 'query completed' me@there < /dev/null
In your version the second command runs regardless.
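The difference is easy to demonstrate without a database; this small sketch uses false and true as stand-ins for a failing and a succeeding mysql invocation:

```shell
#!/bin/sh
# With &&, the right-hand command runs only if the left-hand one exits 0
false && echo "after &&: runs only on success"    # prints nothing
# With a newline (or ;), the second command runs regardless of the first's exit status
false ; echo "after ; : runs regardless"
true && echo "after &&: first succeeded"
```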
I want to email the output of a MySQL command, I wrote the following script but it doesn't work, where am I going wrong?
mysql --user=me --password=00045344534 john_e56
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE);
INTO OUTFILE '/mysqlchanges.txt'
exit
mutt -s "mysql changes" me123@mail.com -a /mysqlchanges.txt < /mail.txt
What error are you getting?
I'm not sure exactly how INTO OUTFILE works, but it almost certainly requires write access to the file you're telling it to write to. In your query the output file sits directly in the root (/) directory, and the user MySQL runs as is unlikely to be allowed to write there. If you're on a *nix box, create a file in your home directory
touch /home/ubuntu/mysqlchanges.txt
sudo chown mysql:mysql /home/ubuntu/mysqlchanges.txt
in your shell first. Then, to test, you can run
su mysql
echo "" > /home/ubuntu/mysqlchanges.txt
and if that works with no error, you can modify your query to output to this file and have mutt attach it:
mysql --user=me --password=00045344534 john_e56
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE)
INTO OUTFILE '/home/ubuntu/mysqlchanges.txt';
exit
mutt -s "mysql changes" -a /home/ubuntu/mysqlchanges.txt -- me123@mail.com < /mail.txt
Assuming ubuntu is your user and mysql runs on behalf of mysql user.
UPDATE
If I understood your request right, you just need to receive the output of this SQL query:
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE);
to your mailbox me123@mail.com.
If it's not crucial to make it as attachment and just plain text in body is fine, then have a look at this bash command:
mysql -u me -p00045344534 -e 'SELECT table_schema,table_name,update_time FROM information_schema.tables WHERE update_time > (NOW() - INTERVAL 5 MINUTE);' | mutt -s "mysql changes" -- me123@mail.com
First it runs the query, writing the result to stdout, then pipes that output to mutt.
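If the attachment does matter, one hedged sketch (same assumed credentials as above; uses a temp file and an empty message body instead of /mail.txt) would be:

```shell
#!/bin/sh
# Hypothetical sketch: write the query result to a temp file, attach it, then clean up
tmp=$(mktemp) || exit 1
mysql -u me -p00045344534 -e 'SELECT table_schema,table_name,update_time
  FROM information_schema.tables
  WHERE update_time > (NOW() - INTERVAL 5 MINUTE);' > "$tmp"
mutt -s "mysql changes" -a "$tmp" -- me123@mail.com < /dev/null
rm -f "$tmp"
```

Note the `--` after `-a`: recent versions of mutt require it to separate attachments from recipient addresses.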
How to take a backup of only last year records from MySQL table to a file?
This should work
mysqldump --databases X --tables Y --where="1 limit 1000000"
Or, with the mysql CLI:
mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase > mydumpfile.txt
Reference
MySQL dump by query
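For the "last year" case specifically, mysqldump's --where flag accepts an arbitrary predicate, so a sketch along these lines should work (the table and column names here are assumptions to replace with your own):

```shell
# Dump only rows from the last year; myTable and created_at are placeholder names
mysqldump -u myuser -pxxxxxxxxx mydatabase myTable \
  --where="created_at >= DATE_SUB(NOW(), INTERVAL 1 YEAR)" > lastyear_backup.sql
```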
Actually my requirement is this: we have one table with a few years of data (approx 3 lakh, i.e. 300,000 records). Per the client's requirement we have to keep only the current year's data in the table; the remaining years' data we have to back up to a file and then purge. I just tried the following code for the backup and it's working fine. Please let me know if there is any simpler way to do this.
mysqldump --add-lock test Employee -u root -n -t "--where=end_date between '1998-02-01' and '1998-08-30'" > sample.sql
I have a mysql table with 120 million rows and I want to write a shell script to start purging some of that useless information in that table that's not needed. Problem is I'm super new to shell scripting.
I have a datetime column with a unix timestamp in it. I want to delete every row that's not within the last 2 months since I've recently enacted a data retention policy that will allow me to only keep 2 months of certain data.
TL;DR Need to build a shell script that deletes all rows up until the last 2 months of data by using the unix timestamp in the datetime column.
UPDATE: Here's my new shell script
#!/bin/sh
i=1
while [ "$i" -ne 0 ]
do
i=$(mysql -h 127.0.0.1 -u halo_ylh -pCa5d8a88 halo_twittstats < mysqlpurge.sql)
sleep 10
done
Wouldn't it be easier to just use the current_timestamp and unix_timestamp functions and execute:
DELETE FROM Table1
WHERE datetime < unix_timestamp(current_timestamp - interval 2 month)
To run it from the shell you could put that command in a file script1.sql and run it using the mysql command line tool:
mysql db_name < script1.sql
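Since the column holds a unix timestamp, you could also compute the cutoff in the shell and generate script1.sql from it; this sketch assumes GNU date (the -d flag) and the table/column names from the query above:

```shell
#!/bin/sh
# Compute the unix timestamp for "2 months ago" (GNU date) and build the DELETE script
cutoff=$(date -d "2 months ago" +%s)
echo "DELETE FROM Table1 WHERE datetime < $cutoff;" > script1.sql
cat script1.sql
```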
mysql --host=? --user=? --password=? {DB_NAME} < dump_script.sql > data.dat
dump_script.sql will have your SELECT statement to retrieve the data you want archived. It will store the output in data.dat.
then
mysql --host=? --user=? --password=? {DB_NAME} < delete_script.sql
delete_script.sql will contain the DELETE statement with the same WHERE clause as in dump_script.sql.
Make sure you lock the table so that nothing can insert rows matching the WHERE clause between the two script executions; otherwise such rows would be deleted by the delete script without ever making it into the dump.
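A minimal sketch of that locking pattern in SQL (table, column, date, and file names are placeholders; SELECT ... INTO OUTFILE plays the role of the dump here):

LOCK TABLES mytable WRITE;
SELECT * FROM mytable
  WHERE created_at < '2013-01-01'
  INTO OUTFILE '/tmp/archive.dat';
DELETE FROM mytable WHERE created_at < '2013-01-01';
UNLOCK TABLES;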
Hey, I am a newbie to scripting in Linux. I want to take an SQL dump of my database every hour. I have gone through a couple of blogs and was able to write a script that takes the dump of my database, but how do I make it run every hour with crontab?
Kindly help me out.
Set up a crontab entry like this:
0 * * * * /usr/bin/mysqldump --user=sqluser --password=sqlpass -A > /tmp/database.sql
This will run the command /usr/bin/mysqldump --user=sqluser --password=sqlpass -A > /tmp/database.sql on the hour, every hour. This will dump all database schemas into the file /tmp/database.sql (adjust as required for your setup) using the username sqluser and the password sqlpass (again, adjust for your setup).
For more information about crontab syntax, you can refer to this page
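One caveat: the entry above overwrites /tmp/database.sql every hour. If you want to keep more than the latest dump, you can put a timestamp in the filename; note that % is special in crontab (it starts a new line) and must be escaped with a backslash:

```shell
# Keep one dump per hour of the day (database-00.sql .. database-23.sql)
0 * * * * /usr/bin/mysqldump --user=sqluser --password=sqlpass -A > /tmp/database-$(date +\%H).sql
```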