I need help in creating a cron job script. Basically, I want to grab the next scheduled item and run it through ffmpeg to stream. This would be the mysql query (I'm using PHP variables to indicate what should go there - I don't actually know how variables work in cron jobs):
SELECT show.file FROM show, schedule
WHERE channel = 1 AND start_time <= $current_time;
This would be the ffmpeg command:
ffmpeg -re -i $file http://127.0.0.1:8090/feed.ffm
How would I create a cron job to execute these commands?
First of all, IMHO you don't need to pass the current time to your SELECT statement; just use CURRENT_TIME:
SELECT `show`.`file`
FROM `show`, schedule
WHERE channel = 1 AND start_time <= CURRENT_TIME;
Depending on your actual table's DDL you might need to do some conversion to correctly compare time values.
Assuming that your query is correct and returns ONLY ONE filename, you can execute the query with mysql, write the result (the filename) into a predefined file, and use && to chain the ffmpeg command, which reads the filename back from that file:
mysql -u user -ppassword dbname -sN -e \
'SELECT `show`.`file` FROM `show`, schedule WHERE channel = 1 AND start_time <= CURRENT_TIME' \
> /tmp/cur_show_file && \
ffmpeg -re -i "$(cat /tmp/cur_show_file)" http://127.0.0.1:8090/feed.ffm
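To actually run this from cron, you could wrap the chained command in a small script and point a crontab entry at it. A minimal sketch, assuming the script is saved as /usr/local/bin/stream_next.sh and that an every-minute schedule is what you want (the path, credentials and log file are placeholders):
#!/bin/bash
# look up the file scheduled for channel 1 right now
file=$(mysql -u user -ppassword dbname -sN -e \
    'SELECT `show`.`file` FROM `show`, schedule WHERE channel = 1 AND start_time <= CURRENT_TIME')
# only start streaming if the query actually returned a filename
[ -n "$file" ] && ffmpeg -re -i "$file" http://127.0.0.1:8090/feed.ffm
Then in the crontab (crontab -e):
* * * * * /usr/local/bin/stream_next.sh >> /var/log/stream_next.log 2>&1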
Related
I have a MySQL query that has been running for about 4 days. I entered the query from the command line:
mysql -u who -p < SQL
mail -s 'query completed' me@there < /dev/null
and got the email response before the query completed. The mysql command was not backgrounded.
Is it possible for a MySQL query to change its PID? If not, any clue as to why the next command-line command would execute?
I know I could have done a mysql && mail and it would have waited for positive completion.
You'll want to join these together with && to ensure the second command doesn't fire unless the first succeeds:
mysql -u who -p < SQL && mail -s 'query completed' me@there < /dev/null
In your version the second command runs regardless.
I'm trying to create a bash script that does some actions in my MySQL database and some other actions with Linux commands. I want to fetch the result of a MySQL query in my script and put it into an associative array; so far I've only found how to do it with a simple array for one column. Here is my query:
result=$(mysql -h $DATABASE_HOST --user=$DATABASE_USER --password=$DATABASE_PASSWORD --skip-column-names -s -e "select id,type from $DATABASE_NAME.Media where status=0")
I'm not an expert in bash scripting, sorry if this is a noob question!
result=$(mysql ...)
declare -A ary
# each line of $result is "id<TAB>type"; read splits it into the two variables
while read -r id type; do
    ary[$id]=$type
done <<< "$result"
I have a mysql table with 120 million rows and I want to write a shell script to start purging some of that useless information in that table that's not needed. Problem is I'm super new to shell scripting.
I have a datetime column with a unix timestamp in it. I want to delete every row that's not within the last 2 months since I've recently enacted a data retention policy that will allow me to only keep 2 months of certain data.
TL;DR Need to build a shell script that deletes all rows up until the last 2 months of data by using the unix timestamp in the datetime column.
UPDATE: Here's my new shell script
#!/bin/sh
i=1
while [ "$i" -ne 0 ]
do
    i=$(mysql -h 127.0.0.1 -u halo_ylh -pCa5d8a88 halo_twittstats < mysqlpurge.sql)
    sleep 10
done
Wouldn't it be easier to just use the current_timestamp and unix_timestamp functions and execute:
DELETE FROM Table1
WHERE datetime < unix_timestamp(current_timestamp - interval 2 month)
To run it from the shell you could put that command in a file script1.sql and run it using the mysql command line tool:
mysql db_name < script1.sql
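If deleting two months' worth of backlog from a 120-million-row table in one statement worries you, you can combine that DELETE with the loop idea from your script and purge in batches until nothing matches any more. A rough sketch, with placeholder credentials and an arbitrary batch size of 10000 rows:
#!/bin/bash
# repeat small DELETEs until a pass removes nothing
rows=1
while [ "$rows" -gt 0 ]
do
    rows=$(mysql -h 127.0.0.1 -u user -ppassword db_name -sN -e \
        "DELETE FROM Table1 WHERE datetime < unix_timestamp(current_timestamp - interval 2 month) LIMIT 10000; SELECT ROW_COUNT();")
    sleep 10
done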
mysql --host ? --user ? --password ? {DB_NAME} < dump_script.sql > data.dat
dump_script.sql will have your SELECT statement to retrieve the data you want archived. It will store the output in data.dat.
then
mysql --host ? --user ? --password ? {DB_NAME} < delete_script.sql
delete_script.sql will contain the DELETE statement with the same WHERE clause as in dump_script.sql.
Make sure you lock the table so that nothing can write in between the two script executions; otherwise phantom inserts could match the WHERE clause and be deleted by the delete script without ever making it into the dump script.
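Because table locks only last as long as the session that takes them, the lock, the dump and the delete all have to go through a single mysql connection rather than two separate script executions. A rough sketch of doing that with a heredoc (Table1, the WHERE clause and data.dat are placeholders):
mysql --host ? --user ? --password ? {DB_NAME} > data.dat <<'SQL'
LOCK TABLES Table1 WRITE;
SELECT * FROM Table1
WHERE datetime < unix_timestamp(current_timestamp - interval 2 month);
DELETE FROM Table1
WHERE datetime < unix_timestamp(current_timestamp - interval 2 month);
UNLOCK TABLES;
SQL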
I've searched and searched, but I wasn't able to find an easy way to get this:
Query OK, 50000 rows affected (0.35 sec)
in milliseconds or microseconds.
How can I achieve it?
I ran into the same problem; I timed my queries from a Linux console using time:
$ time mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"
------------
count(1)
------------
750
------------
real 0m0.269s
user 0m0.014s
sys 0m0.015s
or
$ time -f"%e" mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"
------------
count(1)
------------
750
------------
0.24
It gives different values from what mysql reports, but at least it's something you can work with, for example with this script:
#!/bin/bash
temp=1
while [ $temp -le 1000 ]
do
    /usr/bin/time -f"%e" -o"/home/admin/benchmark.txt" -a mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null 2> /dev/null
    let temp=$temp+1
done
This executes the query 1000 times; -f shows only the real time, -o sets the output file, -a appends to it, and > /dev/null 2> /dev/null discards the query output so it doesn't print to the console on each run.
That time's calculated by the mysql monitor application and isn't done by the mysql server. It's not something you can retrieve programmatically by doing (say) select last_query_execution_time() (which would be nice).
You can simulate it in a coarse way by doing the timing in your application, by taking system time before and after calling the query function. Hopefully the client-side overhead would be minimal compared to the mysql portion.
You could time it yourself in the code that runs the query:
Pseudo code:
double StartTime = <now>
Execute SQL Query
double QueryTime = <now> - StartTime
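A bash version of the same idea, using GNU date to take timestamps around the mysql call (the query itself is just an example), could look like:
start=$(date +%s.%N)    # wall-clock time before the query
mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null
end=$(date +%s.%N)      # wall-clock time after the query
echo "query took $(echo "$end - $start" | bc) seconds"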
Is it possible to timeout a query in MySQL?
That is, if any query exceeds the time I specify, it will be killed by MySQL and it will return an error instead of waiting for eternity.
There is a nice Perl script on CPAN to do just this:
http://search.cpan.org/~rsoliv/mysql-genocide-0.03/mysql-genocide
One only needs to schedule it to run with the proper parameters. Create a crontab file /etc/cron.d/mysql_query_timeout to schedule it to run every minute:
* * * * * root /path/to/mysql-genocide -t 7200 -s -K
Where 7200 is the maximum allowed execution time in seconds. The -s switch filters out all except SELECT queries. The -K switch instructs the script to kill the matching processes.
The root user should be able to run the local mysql tools without authentication; otherwise you will need to provide credentials on the command line.
I just set up the following bash script as a cron job to accomplish this with MySQL 5.0 (kills any query that has been executing for more than 30 seconds). Sharing it here in case it proves useful to anyone (apologies if my bash scripting style is inefficient or atrocious, it is not my primary development language):
#!/bin/bash
linecount=0
processes=$(echo "show processlist" | mysql -uroot -ppassword)
oldIfs=$IFS
IFS='
'
echo "Checking for slow MySQL queries..."
for line in $processes
do
    if [ "$linecount" -gt 0 ]
    then
        # columns: Id User Host db Command Time State Info
        pid=$(echo "$line" | cut -f1)
        length=$(echo "$line" | cut -f6)
        query=$(echo "$line" | cut -f8)
        if [ "$length" -gt 30 ]
        then
            #echo "$pid = $length"
            echo "WARNING: Killing query with pid=$pid with total execution time of $length seconds! (query=$query)"
            killoutput=$(echo "kill query $pid" | mysql -uroot -ppassword)
            echo "Result of killing $pid: $killoutput"
        fi
    fi
    linecount=`expr $linecount + 1`
done
IFS=$oldIfs
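To run it as a cron job like the mysql-genocide example above, a one-line entry is enough; assuming the script is saved as /usr/local/bin/kill_slow_queries.sh and marked executable, /etc/cron.d/kill_slow_queries could contain:
* * * * * root /usr/local/bin/kill_slow_queries.sh >> /var/log/kill_slow_queries.log 2>&1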
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements.
SELECT
    MAX_STATEMENT_TIME = 1000  -- in milliseconds
    *
FROM table;
Note that this only works for read-only SELECT statements.
Starting with MySQL 5.1 you can create a stored procedure to query the information_schema.PROCESSLIST table for all queries that match your criteria for "long running", then iterate over a cursor to kill them. Then set up that procedure to execute on a recurring basis in the event scheduler.
See: http://forge.mysql.com/tools/tool.php?id=106
The MySQL forum has some threads about this.
This post details how to set up timeouts on the server using innodb_lock_wait_timeout.
Here's a way to do it programmatically, assuming you're using JDBC.
I think this old question needs an updated answer.
You can set a GLOBAL timeout for all your read-only SELECT queries like this:
SET GLOBAL MAX_EXECUTION_TIME=1000;
The time specified is in milliseconds.
If you want the timeout only for a specific query, you can set it inline like this:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ my_column FROM my_table WHERE ...
MySQL returns an error instead of waiting for eternity.
Note that this method only works for read-only SELECTs. If a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:
Note 1908 Select is not a read only statement, disabling timer
For statements with subqueries, it limits only the top-level SELECT. It does not apply to SELECT statements within stored programs; the MAX_EXECUTION_TIME hint is simply ignored there.
I don't think the egrep in the script below would find "2000".
Why not try just selecting the id as well, and avoiding all of that posh shell stuff:
mysql -e 'select id from information_schema.processlist where info is not null and time > 30;'
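Piped into a small kill loop (and with -sN added so the header row is skipped), the whole thing stays short; a sketch using the same 30-second threshold:
mysql -sN -e 'select id from information_schema.processlist where info is not null and time > 30;' |
while read -r id
do
    mysql -e "kill query $id;"
done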
Since MySQL 5.7.8 there is the max_execution_time option, which defines the execution timeout for SELECT statements.
Here is my script:
mysql -e 'show processlist\G' |
egrep -B5 'Time: [6-9]{3,10}' |
grep 'Id:' |
cut -d':' -f2 |
grep -v '155' |   # Binary Log PID
sed 's/^ //' |
while read -r id
do
    mysql -e "kill $id;"
done