Read MySQL result from shell script

First off, I am very new to shell scripting, so please don't shoot me! :)
What I am trying to do: I have a multi-site WordPress installation, and I would like to write a script that can export specific tables from the schema, either by passing the site ID as an argument to the shell script or by setting an option to export all the selected tables of the schema.
In order to recognize which table set belongs to which site, WordPress changes the prefix of each table set, for example:
wp_options
wp_1_options
...
wp_x_options
In addition, WordPress stores the blog IDs in a special table called wp_blogs.
So, from my shell script I run the following command:
mysql -uUSER -pPASS -e 'SELECT `blog_id` AS `ID`, `path` AS `Slug` FROM `wp`.`wp_blogs`'
and I get the following result:
+----+---------------------------+
| ID | Slug                      |
+----+---------------------------+
|  1 | /                         |
|  2 | /site-2-slug/             |
|  4 | /site-4-slug/             |
|  5 | /site-5-slug/             |
|  6 | /site-6-slug/             |
|  7 | /site-7-slug/             |
|  8 | /site-8-slug/             |
|  9 | /site-9-slug/             |
| 10 | /site-10-slug/            |
+----+---------------------------+
So, the actual question is: how can I parse the MySQL result line by line, in order to get the ID and the Slug information?
Side note 1: The whole script is currently generated and run somewhat manually; I now need this information to automate the export script even further.
Side note 2: The MySQL command is executed via Vagrant ssh, like the following line:
sudo vagrant ssh --command "mysql -uroot -proot -e 'SELECT blog_id FROM wp.wp_blogs'"

You could save the result to a file using INTO OUTFILE, like below:
SELECT blog_id, path FROM wp.wp_blogs
INTO OUTFILE '/tmp/blogs.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
And then you could process it line by line using sed/awk/a simple while loop. For example, to print the ID and path of each row, you could do something like:
awk -F',' '{print "Id: " $1 ", path: "$2}' /tmp/blogs.csv ##or simply cat the file.
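
Alternatively, you can skip the temp file and parse the query output directly in the shell. A minimal sketch (assuming the same credentials and schema as above): mysql's -N flag suppresses the header row and -B forces tab-separated batch output, so a while read loop can pick up the two columns directly. The mysqldump line is only illustrative.
#!/bin/bash
# -N drops the column-name header, -B emits tab-separated rows with no borders.
mysql -uUSER -pPASS -N -B -e 'SELECT blog_id, path FROM wp.wp_blogs' |
while IFS=$'\t' read -r id slug; do
    echo "Exporting site ${id} (${slug})"
    # e.g. mysqldump wp "wp_${id}_options" > "site_${id}.sql"  (illustrative)
done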

Mysql Long Process List and Auto Restart

I have an old website that uses a big database, and I do not want to upgrade it now. The issue is that under high traffic (about 4000 online users) some queries take a very long time, which drives MySQL to 600%-800% CPU, and I have to manually restart the MySQL server from WHM.
I want a simple shell script, run as a cron job, that reads the MySQL process list every 10 seconds and kills any process whose time exceeds, say, 10 seconds.
This is the query I found for such a task:
mysql -e 'SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST where time>10 and command<>"Sleep"'
I think that to get the process IDs to kill I should use:
mysql -e 'SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST where time>10 and command<>"Sleep"'
The output looks like this:
+------+
| ID   |
+------+
| 1095 |
| 1094 |
| 1081 |
| 1079 |
| 1078 |
| 1074 |
| 1040 |
| 1038 |
+------+
Now that I have this output table, I just need to wrap this task in a shell script that parses these process IDs and kills them.
You can save the output in an array and use grep to filter only the digits.
mapfile -t array < <(mysql .... | grep -Ewo '[[:digit:]]+')
Another option is to use a while read loop
while read -r digits; do
    # capture a run of digits (the 4-digit IDs in the sample output above)
    if [[ $digits =~ ([[:digit:]]{4}) ]]; then
        array+=("${BASH_REMATCH[1]}")
    fi
done < <(mysql ....)
Now "${array[#]}" has all that digits only value.
Kill it check if it is running, loop through it one by one and so on.
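For the killing step itself, a minimal sketch (assuming the array built above) is to loop over the IDs and issue a KILL for each:
for id in "${array[@]}"; do
    mysql -e "KILL ${id}"   # terminates the connection with that process ID
done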
Here is what I came up with. It targets a specific database user, "db_user", to avoid killing long-running system processes such as backups:
#!/bin/bash
# -N suppresses the column-name header so the loop sees only the IDs
for i in $(mysql -Ne 'select id from information_schema.processlist where USER="db_user" and time>20 and command<>"Sleep";'); do
    mysql -e "kill ${i}"
done
I saved it as mysql_kill_high_processes.sh and added it to the root user's cron to run every minute.
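For reference, the crontab entry for that might look like the following (the path is illustrative):
* * * * * /root/scripts/mysql_kill_high_processes.sh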

Why does Redshift not have an entry for a CSV file in stl_load_commits?

I know AWS mentions in their documentation that CSV is treated much like a TXT file. But why is there no entry for the CSV file?
For example:
If I am running a query like:
COPY "systemtable" FROM 's3://test/example.txt' <credentials> IGNOREHEADER 1 delimiter as ','
then it creates an entry in stl_load_commits, which I can query with:
select query, curtime as updated from stl_load_commits where query = pg_last_copy_id();
But when I try the same with:
COPY "systemtable" FROM 's3://test/example.csv'
<credentials> IGNOREHEADER 1 delimiter as ',' format csv;
then the result of
select query, curtime as updated from stl_load_commits where query = pg_last_copy_id();
is blank. Why does AWS not create an entry for CSV?
That is the first part of the question. Secondly, there must be some way to check the status of a loaded file?
How can we check whether a CSV file has loaded successfully into the database?
The format of the file does not affect the visibility of success or error information in system tables.
When you run COPY it returns confirmation of success and a count of rows loaded. Some SQL clients may not return this information to you, but here's what it looks like using psql:
COPY public.web_sales from 's3://my-files/csv/web_sales/'
FORMAT CSV
GZIP
CREDENTIALS 'aws_iam_role=arn:aws:iam::01234567890:role/redshift-cluster'
;
-- INFO: Load into table 'web_sales' completed, 72001237 record(s) loaded successfully.
-- COPY
If the load succeeded you can see the files in stl_load_commits:
SELECT query, TRIM(file_format) format, TRIM(filename) file_name, lines, errors FROM stl_load_commits WHERE query = pg_last_copy_id();
  query  | format |                  file_name                  |  lines  | errors
---------+--------+---------------------------------------------+---------+--------
 1928751 | Text   | s3://my-files/csv/web_sales/0000_part_03.gz | 3053206 |     -1
 1928751 | Text   | s3://my-files/csv/web_sales/0000_part_01.gz | 3053285 |     -1
If the load fails you should get an error. Here's an example error (note the table I try to load):
COPY public.store_sales from 's3://my-files/csv/web_sales/'
FORMAT CSV
GZIP
CREDENTIALS 'aws_iam_role=arn:aws:iam::01234567890:role/redshift-cluster'
;
--ERROR: Load into table 'store_sales' failed. Check 'stl_load_errors' system table for details.
You can see the error details in stl_load_errors.
SELECT query, TRIM(filename) file_name, TRIM(colname) "column", line_number line, TRIM(err_reason) err_reason FROM stl_load_errors where query = pg_last_copy_id();
  query  |       file_name        |      column       | line |        err_reason
---------+------------------------+-------------------+------+---------------------------
 1928961 | s3://…/0000_part_01.gz | ss_wholesale_cost |    1 | Overflow for NUMERIC(7,2)
 1928961 | s3://…/0000_part_02.gz | ss_wholesale_cost |    1 | Overflow for NUMERIC(7,2)

MySQL query splitting unexpectedly in Bash Script

I am having an issue and do not understand the underlying cause.
I have a table that has these three fields:
______________________________________________
| cid | order_id | TxRefNum |
----------------------------------------------
I am making a simple call in my bash script (there is literally no other code to start with)
#!/bin/bash
mysql --login-path=main-data -e "SELECT
cid,
order_id,
TxRefNum
FROM database.orders_temp" |
while read this that other; do
echo "$this || $that || $other"
done
I would expect to see the following:
__________________________________________________________
| 29 | F0VIc - CHATEAU ROOFIN | 5555555 |
----------------------------------------------------------
Instead, my script splits the string $that into two different strings. The echo actually produces:
___________________________________________________
| 29 | F0VIc | - CHATEAU ROOFIN |
---------------------------------------------------
Do I have to set a delimiter when setting my variables in my while loop?? I am truly stumped!!
Getting output from the mysql command formatted in an intelligent way is problematic. In your case bash is interpreting the whitespace as a delimiter, so you need to split a different way. I was able to get this working; note the '|' columns in the query as well as the IFS line at the top.
#!/bin/bash
IFS='|' # split fields on '|' instead of whitespace
mysql --login-path=main-data -e "SELECT
    29 as cid, '|',
    'F0VIc - CHATEAU ROOFIN' as order_id,
    '|',
    5555555 as TxRefNum
  FROM dual" |
while read this that other; do
    echo "$this || $that || $other"
done
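Since mysql separates columns with tabs when its output is piped, another option (a sketch against your original query, untested) is to split on tabs only. That keeps the spaces inside order_id intact without adding '|' columns to the query:
#!/bin/bash
mysql --login-path=main-data -e "SELECT
    cid,
    order_id,
    TxRefNum
  FROM database.orders_temp" |
while IFS=$'\t' read -r this that other; do    # split on tabs, not spaces
    echo "$this || $that || $other"
done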

Mysql table formatting with Ruby mysql gem

MySQL by default prints table results in its table formatting:
+----+----------+-------------+
| id | name     | is_override |
+----+----------+-------------+
|  1 | Combined |           0 |
|  2 | Standard |           0 |
+----+----------+-------------+
When calling mysql from the Unix shell this table formatting is not preserved, but it's easy to request it via the -t option:
mysql -t my_schema < my_query_file.sql
When using Ruby, I'm using the mysql gem to return results. Since the gem returns data as hashes, there's no option to preserve table formatting. However, is there any way I can easily print a hash/data with that formatting? Without having to calculate spacing and such?
db = Mysql.new(my_database, my_username, my_password, my_schema)
result = db.query("select * from my_table")
result.each_hash { |h|
# Print row. Any way to print it with formatting here?
puts h
}
Some gems and code:
https://rubygems.org/gems/datagrid
http://rubygems.org/gems/text-table
https://github.com/visionmedia/terminal-table
https://github.com/geemus/formatador
https://github.com/wbailey/command_line_reporter
https://github.com/arches/table_print
http://johnallen.us/?p=347
I have not tried any of them.
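For example, with the terminal-table gem the usage is roughly the following (a sketch building on the result object from the question; the heading names come from the example table above):
require 'terminal-table'

rows = []
result.each_hash { |h| rows << h.values_at('id', 'name', 'is_override') }
# terminal-table computes the column widths and borders for you.
puts Terminal::Table.new(headings: ['id', 'name', 'is_override'], rows: rows)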

How can I store the output of a mysql command into variables using the shell?

I am using this command:
mysql -u user -ppassword database -e "select distinct entityName,entitySource from AccessControl"
The output is like this:
+-----------------------+--------------+
| entityName            | entitySource |
+-----------------------+--------------+
| low                   | Native       |
| high                  | Native       |
| All Groups            | AD           |
| Help Ser vices Group  | AD           |
| DEFAULT_USER_GROUP    | Native       |
| SYSTEM                | Native       |
| DEFAULT_BA_USER_GROUP | Native       |
| soUsersGrp            | Native       |
+-----------------------+--------------+
My question is: how can I dynamically create an array of variables to store the values of entityName and entitySource? I need to use every value of entityName and entitySource to update another table.
Earlier I was trying to store the output in a file and access each line using awk, but that doesn't help because one line may contain multiple words.
Sure, this can be done. I'd like to second the idea that piping mysql to mysql in the shell is awkward, but I understand why it might need to be done (such as when piping mysql to psql or whatever).
mysql -qrsNB -u user -ppassword database \
  -e "select distinct entityName,entitySource from AccessControl" | \
while read -r record; do
    NAME="$(echo "$record" | cut -d$'\t' -f 1)"    # fields are tab-delimited
    SOURCE="$(echo "$record" | cut -d$'\t' -f 2)"
    # your command with $NAME and $SOURCE goes here ...
    COMMAND="select trousers from namesforpants where entityName='${NAME}'" # ...
    echo "$COMMAND" | mysql # flags ...
done
The -r and -s flags trim the output down so that you don't have to parse the table borders, -q asks that the result not be buffered, -B asks for batch (tab-separated) mode, and -N suppresses the column names.
What you do with those variables is up to you; I would probably compose statements in that loop and feed those to your subsequent process rather than worry about interpolation and quoting, since, as you mentioned, some of your data has spaces in it. Or you can write/append to a file and then feed that to your subsequent process.
As usual, the manual is your friend. I'll be your friend, too, but the manpage is where the answers are to this stuff. :-)
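If you go the write-to-a-file route, the shape is roughly this (a sketch; othertable and its src/name columns are made-up placeholders):
# Compose one UPDATE per row, then feed the whole file to mysql in one shot.
mysql -qrsNB -u user -ppassword database \
    -e "select distinct entityName,entitySource from AccessControl" | \
while IFS=$'\t' read -r name source; do
    printf "UPDATE othertable SET src='%s' WHERE name='%s';\n" "$source" "$name"
done > /tmp/updates.sql
mysql database < /tmp/updates.sql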
#!/usr/bin/env bash
# When piped, mysql emits tab-separated rows, so split on tabs only;
# splitting on the default IFS would break values that contain spaces
# (e.g. "Help Ser vices Group"). -N drops the header row.
mysql -N -u user -ppassword database -e "select distinct entityName,entitySource from AccessControl" | while IFS=$'\t' read -r name source; do
  echo "entityName: $name, entitySource: $source"
done
Please check this; I fixed it by using exec.
[wcuser@localhost]$ temp=`exec mysql -h10.10.8.36 --port=3306 -uwcuser -pwcuser#123 paycentral -e "select endVersion from script_execution_detail where releaseNo='Release1.0' and versionPrefix='PS'"|tail -1`
[wcuser@localhost]$ echo $temp
19
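For what it's worth, the exec is not needed; plain command substitution captures the output on its own, and adding -N (skip column names) makes the tail -1 unnecessary. A sketch with the same credentials:
# -N suppresses the header line, so the variable holds only the value.
temp=$(mysql -h10.10.8.36 --port=3306 -uwcuser -pwcuser#123 -N paycentral \
    -e "select endVersion from script_execution_detail where releaseNo='Release1.0' and versionPrefix='PS'")
echo "$temp"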