I have an old website that uses a big database, and I do not want to upgrade it right now. The issue is that some MySQL queries take a very long time under high traffic (about 4,000 online users), which drives MySQL to 600%-800% CPU, and I have to manually restart the MySQL server from WHM.
I want a simple shell script, run as a cron job, that reads the MySQL process list every 10 seconds and kills any process that has been running longer than, say, 10 seconds.
This is the query I found for doing such a task:
mysql -e 'SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST where time>10 and command<>"Sleep"'
I think to get the process ID to kill I should use:
mysql -e 'SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST where time>10 and command<>"Sleep"'
The output looks like this:
+------+
| ID |
+------+
| 1095 |
| 1094 |
| 1081 |
| 1079 |
| 1078 |
| 1074 |
| 1040 |
| 1038 |
+------+
Now that I have this output table, I just need to wrap this task in a shell script that parses these process IDs and kills them.
You can save the output in an array and use grep to filter only the digits.
mapfile -t array < <(mysql .... | grep -Ewo '[[:digit:]]+')
Another option is to use a while read loop:

while read -r digits; do
    if [[ $digits =~ ([[:digit:]]+) ]]; then
        array+=("${BASH_REMATCH[1]}")
    fi
done < <(mysql ....)
Now "${array[#]}" has all that digits only value.
You can then loop through the IDs one by one, check whether each process is still running, kill it, and so on; see the sketch below.
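Putting it together, a minimal sketch (assuming credentials are supplied via ~/.my.cnf or similar; -N drops the column header, so no grep is needed):

mapfile -t array < <(mysql -Ne 'SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE time > 10 AND command <> "Sleep"')
for id in "${array[@]}"; do
    # terminate each long-running connection by its process ID
    mysql -e "KILL ${id}"
done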
Here is what I came up with. It targets a specific database user, "db_user", to avoid killing long-running system processes such as backups:
#!/bin/bash
for i in $(mysql -Ne 'select id from information_schema.processlist where USER="db_user" and time>20 and command<>"Sleep";'); do
mysql -e "kill ${i}"
done
I saved it as mysql_kill_high_processes.sh and added it to the root user's cron to run every minute.
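For reference, the crontab entry for the root user might look like this (the script path is an assumption):

* * * * * /root/mysql_kill_high_processes.sh >/dev/null 2>&1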
Related
I am simply trying to understand why these 2 commands have different outputs on my screen:
$> mysql -ularavel -ppassword -e 'select id from queue.jobs;'
+-------+
| id |
+-------+
| 20945 |
| 20946 |
+-------+
$> watch "mysql -ularavel -ppassword -e 'select id from queue.jobs;'"
Every 2.0s: mysql -ularavel -ppassword -e 'select id from queue.jobs;'
id
20945
20946
Notice that the watch command does not draw the table borders. I simplified this example, but for multiple columns the table is distorted and difficult to read.
So, why? Does watch handle input/output differently from a command run directly in the terminal?
Tried on OSX with iTerm2 and the default Terminal app
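The short answer: the mysql client draws the ASCII table borders only when its standard output is a terminal. Under watch, stdout is connected to a pipe, so mysql switches to its tab-separated batch format. You can force the borders back with the -t (--table) option:

watch "mysql -t -ularavel -ppassword -e 'select id from queue.jobs;'"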
I am trying to create a bash shell script that runs an SQL query, and later on I will create a cron job that runs it at a specific time.
I created my bash script; see below:
mysql -u $host -D $dbname -u $user -p$password -e $mySqlQuery
I have wrapped the values for -u, -D, -p, and -e in variables. I have also made the file executable. When I run it, it gives an output stating "command not found". Can anyone tell me the mistake I made?
Below is the bash script
host="host"
user="user"
dbname="database"
password="password"
mySqlQuery = "SELECT *
FROM invoice i
JOIN item it
ON it.invoice_id = i.id
JOIN user u
ON i.user_id = u.id
JOIN gateway_response gr
ON gr.invoice_id = i.id
WHERE i.created_at >= '2019-03-01 00:00:00' and
i.created_at <= '2019-03-17 23:59:59' and i.status=9"
mysql -u $host -D $dbname -u $user -p$password -e $mySqlQuery
Below is the error I am receiving when I run it:
/home/chris2kus/givingDetectRun.sh: line 8: mySqlQuery: command not found
/home/chris2kus/givingDetectRun.sh: line 20: mysql: command not found
There must be no spaces around the = in the assignment to mySqlQuery.
Also, I suggest you wrap your variables in double quotes, i.e., use "$host" instead of just $host.
You can write a file like this and chmod 755 filename.sh :
#!/bin/bash
host="localhost"
dbname="test"
user="root"
password="xxxxxxxxxx"
mySqlQuery="select *
from col;"
mysql -h "$host" -D "$dbname" -u "$user" -p"$password" -e "$mySqlQuery"
Sample
$chmod 755 testmysql.sh
$
$ ./testmysql.sh
+----+------+------+------+
| id | Col1 | Col2 | Col3 |
+----+------+------+------+
| 1 | 1 | 2 | 3 |
| 2 | 2 | 3 | 4 |
| 3 | 3 | 4 | 5 |
+----+------+------+------+
$
For starters, make sure your line 5 looks like
mySqlQuery="SELECT..."
(notice no spaces on either side of the assignment operator)
For seconds, try to re-format your entire MySQL query to fit into a single line.
(perhaps in HeidiSQL, since it's a query; that will at least keep you in the clear of syntax errors)
For thirds, once you confirm that the bash script runs as intended, add a trailing backslash (\) to tell bash that the command continues on the next line
Prototyping before optimization. Get it running before you get it flying.
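Putting the last two points together, a single-line query with an explicit continuation might look like this (a sketch reusing the question's variables, with the query shortened):

mySqlQuery="SELECT * FROM invoice i JOIN item it ON it.invoice_id = i.id WHERE i.status = 9"
mysql -h "$host" -D "$dbname" -u "$user" -p"$password" \
    -e "$mySqlQuery"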
The following questions will be answered.
How to enable slow query log in MySQL
How to set slow query time
How to read the logs generated by MySQL
Log analysis is becoming more of a menace day by day. Most tech companies have started using the ELK stack or similar tools for log analysis. But what if you don't have hours to spend on setting up ELK and just want to spend some time analysing the logs on your own (manually, that is)?
Although it is not the best way, don't underestimate the power of analysing logs from the terminal. We can analyse logs quite efficiently from the terminal too, though there are limits to what we can and cannot do. I am posting about the basic process of analysing a MySQL slow query log.
(In addition to the 'setup' provided by @MontyPython...)
Run
pt-query-digest, or mysqldumpslow -s t
Either will give you the details of the 'worst' queries first, so stop the output after a few dozen lines.
I prefer long_query_time=1. It's in seconds; you can specify less than 1.
Also, in more recent versions, you need log_output = FILE.
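Typical invocations of either tool against the slow log might look like this (log path assumed from the variables shown below):

# summarize the slow log, worst queries first; stop after the top of the report
pt-query-digest /var/lib/mysql/server-slow.log | head -50
# or: sort by total time, show the top 10 statements
mysqldumpslow -s t -t 10 /var/lib/mysql/server-slow.log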
show variables like '%slow%';
+---------------------------+-----------------------------------+
| Variable_name | Value |
+---------------------------+-----------------------------------+
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /var/lib/mysql/server-slow.log |
+---------------------------+-----------------------------------+
And then,
show variables like '%long_query%';
+-----------------+----------+
| Variable_name | Value |
+-----------------+----------+
| long_query_time | 5.000000 |
+-----------------+----------+
Change the long query time to whatever you want. Queries taking more than this will be captured in the slow query log.
set global long_query_time = 2.00;
Now, switch on the slow query log.
set global slow_query_log = 'ON';
flush logs;
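Note that SET GLOBAL changes are lost when the server restarts. To make them persistent, the equivalent settings can go into the [mysqld] section of my.cnf (values mirror the session above):

[mysqld]
slow_query_log      = ON
slow_query_log_file = /var/lib/mysql/server-slow.log
long_query_time     = 2
log_output          = FILE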
Go to the terminal and check the directory where the log file is supposed to be.
cd /var/lib/mysql/
ls -lah | grep slow
-rw-rw---- 1 mysql mysql 4.6M Apr 24 08:32 server-slow.log
To open the file, use one of the following commands:
cat server-slow.log
tac server-slow.log
less server-slow.log
more server-slow.log
tail -f server-slow.log
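For orientation when reading the file, each entry in the slow query log looks roughly like this (values here are illustrative; the exact Time format varies between MySQL versions):

# Time: 2019-04-24T08:32:01.123456Z
# User@Host: appuser[appuser] @ localhost []  Id:  1095
# Query_time: 12.345678  Lock_time: 0.000123 Rows_sent: 10  Rows_examined: 500000
SET timestamp=1556094721;
SELECT ...;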
How many unique slow queries have been logged during a day?
grep 'Time: 160411.*' server-slow.log | cut -c2-18 | uniq -c
First of all, I am very new to shell scripting, so please don't shoot me! :)
What I am trying to do: I have a multi-site WordPress installation, and I would like to write a script that can export specific tables from the schema, either by passing the site id as an argument to the shell script or by setting an option to export all the selected tables of the schema.
In order to recognize which table set belongs to which site, WordPress changes the prefix of each table set. For example, it does the following:
wp_options
wp_1_options
...
wp_x_options
In addition, WordPress stores the blog id in a special table called wp_blogs.
So, from my shell script I run the following code:
mysql -uUSER -pPASS -e 'SELECT `blog_id` AS `ID`, `path` AS `Slug` FROM `wp`.`wp_blogs`'
and I am getting the following results
+----+---------------------------+
| ID | Slug |
+----+---------------------------+
| 1 | / |
| 2 | /site-2-slug/ |
| 4 | /site-4-slug/ |
| 5 | /site-5-slug/ |
| 6 | /site-6-slug/ |
| 7 | /site-7-slug/ |
| 8 | /site-8-slug/ |
| 9 | /site-9-slug/ |
| 10 | /site-10-slug/ |
+----+---------------------------+
So, now the actual question is: how can I parse the MySQL result line by line, in order to get the ID and the Slug information?
Side note 1: The whole script was generated and is run somewhat manually. I need this information in order to automate the exporting script even further.
Side note 2: The MySQL command is executed via Vagrant ssh, like the following line:
sudo vagrant ssh --command "mysql -uroot -proot -e 'SELECT blog_id FROM wp.wp_blogs'"
You could save the result in a file using INTO OUTFILE, like below:
SELECT blog_id, path FROM wp.wp_blogs
INTO OUTFILE '/tmp/blogs.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
And then you could process it line by line using sed, awk, or a simple while loop. For example, to print the ID and path of each row:
awk -F',' '{print "Id: " $1 ", path: "$2}' /tmp/blogs.csv ##or simply cat the file.
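Alternatively, you can skip the intermediate file and read the client's tab-separated batch output directly. A minimal sketch, assuming the same credentials (-N suppresses the column header):

mysql -uUSER -pPASS -N -e 'SELECT blog_id, path FROM wp.wp_blogs' |
while IFS=$'\t' read -r id slug; do
    echo "ID: $id  Slug: $slug"
done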
I am using this command:
mysql -u user -ppassword database -e "select distinct entityName,entitySource from AccessControl"
The output is like this:
+-----------------------+--------------+
| entityName | entitySource |
+-----------------------+--------------+
| low | Native |
| high | Native |
| All Groups | AD |
| Help Ser vices Group | AD |
| DEFAULT_USER_GROUP | Native |
| SYSTEM | Native |
| DEFAULT_BA_USER_GROUP | Native |
| soUsersGrp | Native |
+-----------------------+--------------+
My question is: how can I dynamically create an array of variables to store the entityName and entitySource values? What I need is to use every value of entityName and entitySource to update another table.
Earlier I was trying to store the output in a file and access each line using awk, but that doesn't help because one line may contain multiple words.
Sure, this can be done. I'd like to second the idea that piping mysql to mysql in the shell is awkward, but I understand why it might need to be done (such as when piping mysql to psql or whatever).
mysql -qrsNB -u user -ppassword database \
  -e "select distinct entityName,entitySource from AccessControl" | \
while read -r record; do
    NAME="$(echo "$record" | cut -f 1)"    # cut splits on tabs by default
    SOURCE="$(echo "$record" | cut -f 2)"
    # your command with $NAME and $SOURCE goes here ...
    COMMAND="select trousers from namesforpants where entityName='${NAME}'" # ...
    echo "$COMMAND" | mysql # flags ...
done
the -rs flags trim your output down so that you don't have to grok that table thing it gives you, -q asks that the result not be buffered, -B asks for batch mode, and -N asks to not have column names.
What you do with those variables is up to you; probably I would compose statements in that loop and feed those to your subsequent process rather than worry about interpolation and quotes as you have mentioned some of your data has spaces in it. Or you can write/append to a file and then feed that to your subsequent process.
As usual, the manual is your friend. I'll be your friend, too, but the manpage is where the answers are to this stuff. :-)
#!/usr/bin/env bash
# -N suppresses the header row; output is tab-separated when piped
mysql -N -u user -ppassword database -e "select distinct entityName,entitySource from AccessControl" | while IFS=$'\t' read -r name source; do
    echo "entityName: $name, entitySource: $source"
done
Please check this; I fixed it by using exec.
[wcuser@localhost]$ temp=`exec mysql -h10.10.8.36 --port=3306 -uwcuser -pwcuser#123 paycentral -e "select endVersion from script_execution_detail where releaseNo='Release1.0' and versionPrefix='PS'"|tail -1`
[wcuser@localhost]$ echo $temp
19
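Equivalently, the -s and -N flags suppress the table formatting and the column header, so the tail -1 becomes unnecessary (assuming the query returns a single value):

temp=$(mysql -h10.10.8.36 --port=3306 -uwcuser -pwcuser#123 paycentral -sNe "select endVersion from script_execution_detail where releaseNo='Release1.0' and versionPrefix='PS'")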