MySQL query not executing from script

I have prepared a script to clear MySQL's replication logs:
mysql -u "$user" -p"$pass" << EOF
PURGE BINARY LOGS BEFORE DATE_SUB( NOW( ), INTERVAL 1 MINUTE);
EOF
The same command works fine when I execute it from the MySQL prompt. What is the issue with my script?

When passing in just a single command, it is less error-prone to use the -e ... option instead of input redirection. One of the many things that can go wrong with redirection is that TAB characters in the input stream can trigger tab completion, which can have undesired and hard-to-track-down results.
So I'd suggest using
mysql -u "$user" -p"$pass" -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 1 MINUTE);"
instead.
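For example, a minimal version of the whole script with -e and a basic error check might look like this (only a sketch; $user and $pass are assumed to be set earlier in your script, as in the question):
#!/bin/sh
# Purge binary logs older than one minute; abort loudly if the client fails.
if ! mysql -u "$user" -p"$pass" -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 1 MINUTE);"
then
    echo "PURGE BINARY LOGS failed" >&2
    exit 1
fi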

Related

MySQL long query emailed before completion

I have a MySQL query that has been running for about 4 days. I entered the query from the command line:
mysql -u who -p < SQL
mail -s 'query completed' me@there < /dev/null
and got the email response before the query completed. The mysql command was not backgrounded.
Is it possible for a MySQL query to change its PID? If not, any clue as to why the next command-line command would execute?
I know I could have done a mysql && mail and it would have waited for positive completion.
You'll want to join these together with && to ensure the second command doesn't fire unless the first succeeds:
mysql -u who -p < SQL && mail -s 'query completed' me@there < /dev/null
In your version the second command runs regardless of whether the first one succeeded.
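If you also want to hear about failures, the same idea can be written out with an explicit if/else (just a variation on the command above, reusing the addresses from the question):
if mysql -u who -p < SQL
then
    mail -s 'query completed' me@there < /dev/null
else
    mail -s 'query FAILED' me@there < /dev/null
fi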

How to email the output of a MySQL command

I want to email the output of a MySQL command. I wrote the following script, but it doesn't work. Where am I going wrong?
mysql --user=me --password=00045344534 john_e56
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE);
INTO OUTFILE '/mysqlchanges.txt'
exit
mutt -s "mysql changes" me123#mail.com -a /mysqlchanges.txt < /mail.txt
What error are you getting?
I'm not sure how INTO OUTFILE works, but it almost certainly requires write access to the file you're telling it to write to. In your query the output file is right in the root (/) partition. I doubt the user MySQL runs as is allowed to write there. If you're on a *nix box, create a file in your home directory
touch /home/ubuntu/mysqlchanges.txt
sudo chown mysql:mysql /home/ubuntu/mysqlchanges.txt
in your shell first. Then to test you can do
su mysql
echo "" > /home/ubuntu/mysqlchanges.txt
and if that works with no error, you can modify your query to output to this file and use mutt to send that file as an attachment:
mysql --user=me --password=00045344534 john_e56
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE)
INTO OUTFILE '/home/ubuntu/mysqlchanges.txt';
exit
mutt -s "mysql changes" -a /home/ubuntu/mysqlchanges.txt -- me123@mail.com < /mail.txt
Assuming ubuntu is your user and MySQL runs as the mysql user.
UPDATE
If I understood your request right, you just need to receive the output of this SQL query:
SELECT table_schema,table_name,update_time
FROM information_schema.tables
WHERE update_time > (NOW() - INTERVAL 5 MINUTE);
in your mailbox me123@mail.com.
If it's not crucial to send it as an attachment and plain text in the body is fine, then have a look at this bash command:
mysql -u me -p00045344534 -e 'SELECT table_schema,table_name,update_time FROM information_schema.tables WHERE update_time > (NOW() - INTERVAL 5 MINUTE);' | mutt -s "mysql changes" -- me123@mail.com
First it runs the query, writing the result to stdout, then pipes it to mutt, which mails it as the message body.
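If you do want an attachment but would rather avoid the INTO OUTFILE permission issue altogether, a variation (just a sketch, reusing the credentials and addresses from above) is to let the client, not the server, write the file:
# Write the query result to a temporary file owned by your own user, then attach it.
tmpfile=$(mktemp /tmp/mysqlchanges.XXXXXX)
mysql --user=me --password=00045344534 -e "SELECT table_schema,table_name,update_time FROM information_schema.tables WHERE update_time > (NOW() - INTERVAL 5 MINUTE);" > "$tmpfile"
mutt -s "mysql changes" -a "$tmpfile" -- me123@mail.com < /dev/null
rm -f "$tmpfile"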

MySQL purging shell script

I have a MySQL table with 120 million rows, and I want to write a shell script to start purging the useless information in that table that's no longer needed. The problem is I'm super new to shell scripting.
I have a datetime column with a unix timestamp in it. I want to delete every row that's not within the last 2 months, since I've recently enacted a data retention policy that only allows me to keep 2 months of certain data.
TL;DR: I need to build a shell script that deletes all rows older than the last 2 months of data, using the unix timestamp in the datetime column.
UPDATE: Here's my new shell script
#!/bin/sh
i=1
while [ "$i" -ne 0 ]
do
i=$(mysql -h 127.0.0.1 -u halo_ylh -pCa5d8a88 halo_twittstats < mysqlpurge.sql)
sleep 10
done
Wouldn't it be easier to just use the current_timestamp and unix_timestamp functions to execute:
DELETE FROM Table1
WHERE datetime < unix_timestamp(current_timestamp - interval 2 month)
To run it from the shell you could put that command in a file script1.sql and run it using the mysql command line tool:
mysql db_name < script1.sql
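If 120 million rows is too many to delete in one statement, one option (a sketch only, not part of the answer above) is to run the same DELETE in LIMIT-sized chunks until nothing is left. Table1, the datetime column and the halo_twittstats database are the names already used in this thread; add your own -h/-u/-p options:
#!/bin/sh
while :
do
    # Delete one chunk, then ask the same session how many rows it removed.
    rows=$(mysql -N -B halo_twittstats -e "
        DELETE FROM Table1
        WHERE datetime < UNIX_TIMESTAMP(CURRENT_TIMESTAMP - INTERVAL 2 MONTH)
        LIMIT 10000;
        SELECT ROW_COUNT();")
    if [ -z "$rows" ] || [ "$rows" -eq 0 ]
    then
        break
    fi
    sleep 1
done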
mysql --host=? --user=? --password=? {DB_NAME} < dump_script.sql > data.dat
dump_script.sql will have your SELECT statement to retrieve the data you want archived. It will store the output in data.dat.
then
mysql --host=? --user=? --password=? {DB_NAME} < delete_script.sql
delete_script.sql will contain the DELETE statement with the same WHERE clause as in dump_script.sql.
Make sure you lock the table so that nothing can write to it between the two script executions; otherwise phantom inserts that match the WHERE clause could be deleted by the delete script without ever having been included in the dump.
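For example (only a sketch; the placeholders follow the commands above, and the OUTFILE path must be somewhere the server is allowed to write, e.g. wherever secure_file_priv points), both steps can be run in one session while holding a write lock:
mysql --host=? --user=? --password=? {DB_NAME} <<'EOF'
LOCK TABLES Table1 WRITE;
SELECT * FROM Table1
WHERE datetime < UNIX_TIMESTAMP(CURRENT_TIMESTAMP - INTERVAL 2 MONTH)
INTO OUTFILE '/var/lib/mysql-files/table1_archive.dat';
DELETE FROM Table1
WHERE datetime < UNIX_TIMESTAMP(CURRENT_TIMESTAMP - INTERVAL 2 MONTH);
UNLOCK TABLES;
EOF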

Require Cron syntax for MySQL maintenance

I'm trying to set up a Cron job for deleting MySQL records where a date field is older than three weeks, but I can't figure out what the string is that goes in the box.
Here's a pic of the Cron management screen. Can anyone help please?
http://i46.tinypic.com/id4nsj.jpg
If you know the query you want to run, you can use the -e argument for mysql at the command line for your script. So the "Command to Run" in your cron management tool would be:
mysql -u <username> -p<password> -h <name-of-mysql-server> <databasename> \
  -e "<YOUR-QUERY-HERE>"
The general structure of a query to delete records older than a date is:
DELETE FROM [table] WHERE [column] < DATE_SUB(NOW(), INTERVAL 3 WEEK);
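Putting the two together, the "Command to Run" might end up looking something like this (database, table and column names here are placeholders for your own schema):
mysql -u dbuser -pdbpass -h localhost mydatabase -e "DELETE FROM mytable WHERE created_at < DATE_SUB(NOW(), INTERVAL 3 WEEK);"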

How can I stop a MySQL query if it takes too long?

Is it possible to timeout a query in MySQL?
That is, if any query exceeds the time I specify, it should be killed by MySQL and an error returned instead of waiting for eternity.
There is a nice Perl script on CPAN to do just this:
http://search.cpan.org/~rsoliv/mysql-genocide-0.03/mysql-genocide
One only needs to schedule it to run with the proper parameters. Create a crontab file /etc/cron.d/mysql_query_timeout to schedule it to run every minute:
* * * * * root /path/to/mysql-genocide -t 7200 -s -K
Where 7200 is the maximum allowed execution time in seconds. The -s switch filters out all except SELECT queries. The -K switch instructs the script to kill the matching processes.
The root user should be able to run local mysql tools without authentication; otherwise you will need to provide credentials on the command line.
I just set up the following bash script as a cron job to accomplish this with MySQL 5.0 (kills any query that has been executing for more than 30 seconds). Sharing it here in case it proves useful to anyone (apologies if my bash scripting style is inefficient or atrocious, it is not my primary development language):
#!/bin/bash
linecount=0
processes=$(echo "show processlist" | mysql -uroot -ppassword)
oldIfs=$IFS
IFS='
'
echo "Checking for slow MySQL queries..."
for line in $processes
do
    if [ "$linecount" -gt 0 ]
    then
        pid=$(echo "$line" | cut -f1)
        length=$(echo "$line" | cut -f6)
        query=$(echo "$line" | cut -f8)
        # Id User Host db Command Time State Info
        if [ "$length" -gt 30 ]
        then
            #echo "$pid = $length"
            echo "WARNING: Killing query with pid=$pid with total execution time of $length seconds! (query=$query)"
            killoutput=$(echo "kill query $pid" | mysql -uroot -ppassword)
            echo "Result of killing $pid: $killoutput"
        fi
    fi
    linecount=`expr $linecount + 1`
done
IFS=$oldIfs
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.
SELECT
MAX_STATEMENT_TIME = 1000 -- in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Starting with MySQL 5.1 you can create a stored procedure to query the information_schema.PROCESSLIST table for all queries that match your criteria for "long running", then iterate over a cursor to kill them. Then set up that procedure to execute on a recurring basis in the event scheduler.
See: http://forge.mysql.com/tools/tool.php?id=106
The MySQL forum has some threads about this.
This post details how to set up timeouts on the server using innodb_lock_wait_timeout.
Here's a way to do it programmatically, assuming you're using JDBC.
I think this old question needs an updated answer.
You can set a GLOBAL timeout for all your read-only SELECT queries like this:
SET GLOBAL MAX_EXECUTION_TIME=1000;
The time specified is in milliseconds.
If you want the timeout only for a specific query, you can set it inline like this:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ my_column FROM my_table WHERE ...
MySQL returns an error instead of waiting for eternity.
Note that this method only works for read-only SELECTs. If a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:
Note 1908 Select is not a read only statement, disabling timer
For statements with subqueries, it limits the top SELECT only. It does not apply to SELECT statements within stored programs; the MAX_EXECUTION_TIME hint is ignored in SELECT statements within a stored program.
I don't think the egrep in the script below would find "2000".
Why not try just selecting the id as well, and avoiding all of that posh shell stuff:
mysql -e 'select id from information_schema.processlist where info is not null and time > 30;'
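And if you want to kill what that finds as well, the ids can be fed straight into KILL (a sketch; -N -B drops the column header so only the ids come through, and 30 seconds is an arbitrary threshold):
mysql -N -B -e 'select id from information_schema.processlist where info is not null and time > 30;' |
while read -r id
do
    mysql -e "kill query $id;"
done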
Since MySQL 5.7.8 there is the max_execution_time option, which defines the execution timeout for SELECT statements.
Here is my script:
mysql -e 'show processlist\G' |\
egrep -B5 'Time: [6-9]{3,10}' |\
grep 'Id:' |\
cut -d':' -f2 |\
grep -v '155' |\
sed 's/^ //' |\
while read id
do
    mysql -e "kill $id;"
done
(155 is the binary log PID on my server, which the grep -v skips.)