How can I write a cron job that runs the MySQL "show processlist" command and stores the output in a log file every 5 seconds between 5 am and 7 am?
I know the smallest interval cron supports is a minute, not a second. If a script is needed, I am looking for a solution in Bash.
I think this cron job runs every 5 minutes between 5 am and 7 am (note that the hour range 5-7 also matches 7:00 through 7:55; use 5-6 if it must stop before 7 am):
*/5 5-7 * * * mysql -ufoo --password='' -te "show full processlist" > /home/foo/log/show_processlist.log.`date +\%Y\%m\%d-\%H\%M` 2>&1
You can use mysqladmin, the MySQL CLI command for administering the database:
mysqladmin -u root -p -i 5 processlist
Press CTRL+C to stop the running command at any time, and use --verbose to see the full queries.
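To keep it within the question's 5 am to 7 am window, one option (a sketch; timeout is GNU coreutils, and the password and log path are placeholders) is to let cron start it at 05:00 and stop it two hours later:
0 5 * * * timeout 2h mysqladmin -u root -p'PASSWORD' --verbose -i 5 processlist >> /home/foo/log/show_processlist.log 2>&1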
Set a cron task to run at 05:00 which executes a script that loops until an end time and sleeps for 5 seconds between iterations. It may not be exactly 5 seconds, since sleep is usually a minimum sleep time, but it should be close enough.
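A minimal sketch of such a script (credentials and log path reused from the crontab line above; date -d assumes GNU date, so adjust for BSD/macOS):
#!/bin/bash
# Poll "show full processlist" roughly every 5 seconds until 7 am.
end=$(date -d '07:00' +%s)                    # today's 07:00 as a Unix timestamp
while [ "$(date +%s)" -lt "$end" ]; do
    mysql -ufoo --password='' -te 'SHOW FULL PROCESSLIST' \
        >> "/home/foo/log/show_processlist.log.$(date +%Y%m%d)" 2>&1
    sleep 5
done
Started from cron at 05:00 (the script path is a placeholder):
0 5 * * * /home/foo/bin/poll_processlist.sh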
You can write a shell/Python/PHP/other script and have cron run it every minute.
This script should have the following logic, sketched here in runnable bash (note that 20 iterations with a 4-second delay span about 80 seconds, so consecutive minutely runs overlap slightly):
i=0
while [ "$i" -lt 20 ]; do
    mysql -e 'SHOW FULL PROCESSLIST'
    sleep 4
    i=$((i + 1))
done
Want a bash one-liner that polls every second and shows the FULL queries?
while true; do mysql -u MYSQLUSERNAME -p'MYSQLUSERPASSWORD' --execute='SHOW FULL PROCESSLIST;'; sleep 1; done
This assumes you're in a safe environment for entering a raw MySQL password ;).
I did try to search, but nothing came up that really works for me.
So I am starting this thread to see if anyone can help. I hope this is not a stupid question where I am overlooking something simple.
I have a Mac mini running a MySQL server.
There are some day-end jobs, so I put them into a script triggered by crontab (I also tried launchd, since this is macOS, but saw the same behavior).
The crontab looks like this:
15 00 * * * /Users/fgs/Documents/database/process_db.sh > /Users/fgs/Documents/database/output.txt 2>&1
The script looks like this:
#!/bin/bash
#some data patching task before everything start
#This sql takes 3 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/loadrawdata.sql
#This sql takes 90 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
#This sql takes 1 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/anothersql.sql
Behavior:
A. When I execute the shell script directly in a terminal, all three SQL scripts work.
B. When I execute it via crontab, the 90-second SQL doesn't work (it is an INSERT INTO with a very big join, so there is no output printed; I also tried redirecting to an output file and adding 2>&1, still no output), but the SQL before and after it works as expected.
C. To simulate crontab behavior, I tried to use
env - /bin/sh
and then start the shell script manually.
It appears that the 90-second LongLongsql.sql ran for only about 5 seconds and then execution skipped to the next line. No error message was displayed.
I am wondering if there is any kind of timeout for crontab? (I searched but found nothing.)
I checked that ulimit is unlimited (checked within "env - /bin/sh", and I also tried putting it into the script).
I believe it is not related to the mysql command, since the same scripts work fine when run directly (I also searched this topic and found nothing interesting).
I am just wondering if anyone can shed some light on this; a direction or anything else would help.
Thanks everyone in advance.
Don't forget that cron starts an isolated shell, where it may not be able to read the file.
I would recommend putting your mysql-stuff inside a script. If you are able to execute the script, cron should also be able to do so.
#!/bin/bash
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
Or:
#!/bin/bash
/usr/local/bin/mysql --user=root --password=xxxxxx dbname -e "source /Users/fgs/Documents/database/LongLongsql.sql"
Then call the script from crontab...
I'm trying to run the following query inside a bash script.
When it is executed from the mysql command prompt, the execution time was 0.06 sec:
mysql> delete from assign_history where offer_id not in
->('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273',
->'274','275','310','312','313','314','326','328','329','333','334','335','336',
->'337','342','343','355','362','374','375','376','378','379','383','384','409','411')
->and date(action_date) < "2015-06-25" order by id limit 1000;
Query OK, 1000 rows affected (0.06 sec)
But when I run it inside a bash script, it takes more than 2 minutes:
[root@localhost umap]# cat ./history_del.sh
#! /bin/bash
echo $(date)
mysql -uroot -ppassword db_offers -e "delete from assign_history where offer_id not in ('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273','274','275','310','312','313','314','326','328','329','333','334','335','336','337','342','343','355','362','374','375','376','378','379','383','384','409','411') and date(action_date) < "2015-06-25" limit 1000;"
echo $(date)
[root@localhost umap]# ./history_del.sh
Wed Aug 26 19:08:45 IST 2015
Wed Aug 26 19:10:48 IST 2015
I also tried with "mysql -Bse" options. No improvement. Any ideas?
First, you need to escape the double-quotes inside the query string: \"2015-06-25\" (echo your query and you'll see why). I don't know how your request works at all without properly specified quotes...
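For example, the date comparison in the script above would then read (the long IN list is elided here for brevity):
mysql -uroot -ppassword db_offers -e "delete from assign_history where offer_id not in (...) and date(action_date) < \"2015-06-25\" limit 1000;"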
Second, it is better and cleaner to place your long query in a file, so your command line will look like this:
mysql -uroot -ppassword db_offers <YOUR_FILE
The request in YOUR_FILE will be the same as at the mysql prompt (of course, you don't need to escape double-quotes there).
And yes, when you invoke the mysql utility it can take an unpredictably long time to connect to the MySQL server, so the 2 minutes include that connection time (but the 0.06 sec at the mysql prompt does not!), so you can't tell how much time was spent connecting to the server and how much sending and executing the query.
To find out how long it takes to connect to the MySQL server, try executing any empty query (wait several seconds after the previous run of the mysql utility), such as:
time mysql -u user -ppassword -Bs <<<'select null'
I am uploading data to a MySQL database from a shell script (from cron) every 5 minutes.
But if my connection is down, it does not insert into my database.
I would like the script to try to insert again and again (for example every 30 minutes) until it is successful.
And if my connection is down for more than 5 minutes, I would like these requests to stand in a queue and be processed when I have a connection again.
How can I do that?
Example code:
#!/bin/bash
cputemp=$(somecommands...)
/usr/bin/mysql -h 127.0.0.1 -u admin -pwpassword -e "USE logs; INSERT INTO cpu (cputemp) VALUES ('$cputemp');"
You'd do it by writing that logic into your code. It should be fairly easy: detect that the insert failed, have the script sleep, then try again; see the sketch below.
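A rough sketch of that idea, building on the example above (the spool file path is an assumption, and somecommands... is the question's placeholder): every reading is appended to a spool file, and each run tries to flush the whole file, so rows queue up while the connection is down:
#!/bin/bash
queue=/var/spool/cputemp.queue               # assumed spool location
cputemp=$(somecommands...)                   # placeholder from the question
echo "$cputemp" >> "$queue"                  # always enqueue the new reading
pending=$(mktemp)
while read -r t; do
    if ! /usr/bin/mysql -h 127.0.0.1 -u admin -pwpassword \
         -e "USE logs; INSERT INTO cpu (cputemp) VALUES ('$t');"; then
        echo "$t" >> "$pending"              # keep rows that failed to insert
    fi
done < "$queue"
mv "$pending" "$queue"                       # failed rows wait for the next run
Left on the existing 5-minute cron schedule, each run retries whatever is still queued, which also covers the "try again every 30 minutes" requirement.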
I'm running mysql on Debian.
Is there a way to monitor mysql and restart it automatically if it locks up? For example, sometimes the server starts to take 100% of the CPU and runs very slowly. If I restart mysql, things clear up and the server works fine again, but I'm not always present to restart it manually.
Is there a way to monitor mysql so that if the CPU stays above 95% for more than 10 minutes straight, mysql is restarted automatically?
You can write a cronjob that uses
show processlist;
show processlist returns the columns Time and Id, and you can add more logic to the check:
for example, if a query has been stuck for more than 600 seconds and it is a SELECT,
you can use the Id value to perform kill $id;.
This is safer than blindly restarting your server.
And if you segregate reads and writes (meaning read-only SQL uses a user with read-only privileges), this can be even simpler. A sketch of such a check follows.
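A minimal sketch of that check (assuming credentials are supplied via ~/.my.cnf, and using the 600-second threshold from above):
# find SELECTs running longer than 600 seconds and kill just those statements
mysql -N -B -e "SELECT id FROM information_schema.processlist WHERE command = 'Query' AND time > 600 AND info LIKE 'SELECT%'" |
while read -r id; do
    mysql -e "KILL QUERY $id"    # KILL QUERY ends the statement, not the connection
done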
Use this bash script to check every minute.
#!/bin/bash
#Checking whether MySQL is alive or not
if mysqladmin ping | grep "alive"; then
    echo "MySQL is up"
else
    sudo service mysql restart
fi
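To check every minute, a crontab entry like this would do (the script path is a placeholder):
* * * * * /path/to/mysql_check.sh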
Is it possible to timeout a query in MySQL?
That is, if any query exceeds the time I specify, it will be killed by MySQL and it will return an error instead of waiting for eternity.
There is a nice Perl script on CPAN to do just this:
http://search.cpan.org/~rsoliv/mysql-genocide-0.03/mysql-genocide
One only needs to schedule it to run with the proper parameters. Create a crontab file /etc/cron.d/mysql_query_timeout to schedule it to run every minute:
* * * * * root /path/to/mysql-genocide -t 7200 -s -K
Where 7200 is the maximum allowed execution time in seconds. The -s switch filters out all except SELECT queries. The -K switch instructs the script to kill the matching processes.
The root user should be able to run the local mysql tools without authentication; otherwise you will need to provide credentials on the command line.
I just set up the following bash script as a cron job to accomplish this with MySQL 5.0 (it kills any query that has been executing for more than 30 seconds). Sharing it here in case it proves useful to anyone (apologies if my bash scripting style is inefficient or atrocious, it is not my primary development language):
#!/bin/bash
linecount=0
processes=$(echo "show processlist" | mysql -uroot -ppassword)
oldIfs=$IFS
IFS='
'
echo "Checking for slow MySQL queries..."
for line in $processes
do
    if [ "$linecount" -gt 0 ]
    then
        pid=$(echo "$line" | cut -f1)
        length=$(echo "$line" | cut -f6)
        query=$(echo "$line" | cut -f8)
        # Column order: Id User Host db Command Time State Info
        if [ "$length" -gt 30 ]
        then
            #echo "$pid = $length"
            echo "WARNING: Killing query with pid=$pid with total execution time of $length seconds! (query=$query)"
            killoutput=$(echo "kill query $pid" | mysql -uroot -ppassword)
            echo "Result of killing $pid: $killoutput"
        fi
    fi
    linecount=$((linecount + 1))
done
IFS=$oldIfs
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduced the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.
SELECT
MAX_STATEMENT_TIME = 1000 -- in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Starting with MySQL 5.1 you can create a stored procedure to query the information_schema.PROCESSLIST table for all queries that match your criteria for "long running", then iterate over a cursor to kill them. Then set up that procedure to execute on a recurring basis in the event scheduler; a sketch follows the link.
See: http://forge.mysql.com/tools/tool.php?id=106
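A minimal sketch of that idea (the procedure and event names and the 600-second threshold are illustrative, not taken from the linked tool; KILL does not accept a variable directly, hence the prepared statement):
DELIMITER //
CREATE PROCEDURE kill_long_queries()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE qid BIGINT;
    DECLARE cur CURSOR FOR
        SELECT id FROM information_schema.processlist
        WHERE command = 'Query' AND time > 600;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    kill_loop: LOOP
        FETCH cur INTO qid;
        IF done THEN
            LEAVE kill_loop;
        END IF;
        SET @kill_sql = CONCAT('KILL QUERY ', qid);  -- build the KILL dynamically
        PREPARE stmt FROM @kill_sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;
-- the event scheduler must be enabled: SET GLOBAL event_scheduler = ON;
CREATE EVENT kill_long_queries_event
    ON SCHEDULE EVERY 1 MINUTE
    DO CALL kill_long_queries();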
The MySQL forum has some threads about this.
This post details how to set up timeouts on the server using innodb_lock_wait_timeout.
Here's a way to do it programmatically, assuming you're using JDBC.
I think this old question needs an updated answer.
You can set a GLOBAL timeout for all your read-only SELECT queries like this:
SET GLOBAL MAX_EXECUTION_TIME=1000;
The time specified is in milliseconds.
If you want the timeout only for a specific query, you can set it inline like this:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ my_column FROM my_table WHERE ...
MySQL returns an error instead of waiting for eternity.
Note that this method only works for read-only SELECTs. If a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:
Note 1908 Select is not a read only statement, disabling timer
For statements with subqueries, it limits only the top-level SELECT. It does not apply to SELECT statements within stored programs; a MAX_EXECUTION_TIME hint used in a SELECT inside a stored program is ignored.
I don't think the egrep in the script below would find "2000".
Why not try just selecting the id as well, and avoiding all of that posh shell stuff:
mysql -e 'select id from information_schema.processlist where info is not null and time > 30;'
Since MySQL 5.7.8 there is the max_execution_time system variable, which defines the execution timeout for SELECT statements.
Here is my script:
mysql -e 'show processlist\G' |
egrep -b5 'Time: [6-9]{3,10}' |
grep 'Id:' |
cut -d':' -f2 |
grep -v '155' |   # 155 is the binary log thread's Id; skip it
sed 's/^ //' |
while read id
do
    mysql -e "kill $id;"
done