Run sql query in if statement shell script - mysql

I am trying to run sql query in if statement. Here is my shell script
#!/bin/bash
var="select col1, col2 from table_name where condition;"
count=$(ping -c 4 192.168.7.204 | awk -F',' '{ print $2 }' | awk '{ print $1 }')
if [ $count -eq 0 ]; then
mysql -h 192.168.7.204 -u username -ppassword db_name<<EOFMYSQL
$var
EOFMYSQL
fi
But it shows me an error
./test.sh: line 18: warning: here-document at line 12 delimited by end-of-file (wanted `EOFMYSQL')
./test.sh: line 19: syntax error: unexpected end of file

The here-document sentinel EOFMYSQL has to be up against the left margin, not indented:
var="select col1, col2 from table_name where condition;"
count=$(ping -c 4 192.168.7.204 | awk -F',' '{ print $2 }' | awk '{ print $1 }')
if [ $count -eq 0 ]; then
mysql -h 192.168.7.204 -u username -ppassword db_name <<EOFMYSQL
$var
EOFMYSQL
fi
If you change the <<EOFMYSQL to <<-EOFMYSQL you can indent it, as long as you use only tabs and not spaces.
See the manual.
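As a minimal, runnable sketch of the fix (with cat standing in for mysql so the expanded query is simply printed), the working layout looks like this:

```shell
#!/bin/bash
# The closing delimiter sits in column 1; cat stands in for mysql here.
var="select col1, col2 from table_name where condition;"
cat <<EOFMYSQL
$var
EOFMYSQL
```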


Dump Json response to a bash variable

I have the following output
[
"notimportant",
[
"val1",
"val2",
...,
"valn"
]
]
I'm trying to store every value in a bash string. Using jq, I tried this
out=''
req=$(curl -s $url)
len=$(echo $req | jq length )
for (( i = 0; i < $len; i++ )); do
element=$(echo $req | jq '.[1]' | jq --argjson i "$i" '.[$i]')
out=${element}\n${out}
done
which feels clunky and also performs slowly. I'm trying to dump the values all at once, without looping over the elements.
With an array:
mapfile -t arr < <(curl -s "$url" | jq -r '.[1] | .[]')
declare -p arr
Do you want the values separated by TAB or NEWLINE characters in a single variable? The @tsv function is useful for controlling output:
outTABS=$(curl -s "$url" | jq -r '.[1]|@tsv')
outLINE=$(curl -s "$url" | jq -r '.[1]|.[]|[.]|@tsv')
> echo "$outTABS"
val1 val2 valn
> echo "$outLINE"
val1
val2
valn
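When the goal is just one newline-separated variable, a single jq call can replace the loop entirely. A sketch, with a literal JSON string standing in for the curl response:

```shell
#!/bin/bash
# Sample payload standing in for $(curl -s "$url").
json='["notimportant",["val1","val2","valn"]]'

# .[1][] streams every element of the inner array, one per line;
# the command substitution collects them into a single variable.
out=$(printf '%s' "$json" | jq -r '.[1][]')
printf '%s\n' "$out"
```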

Read MySQL result set with multiple columns and spaces

Pretend I have a MySQL table test that looks like:
+----+---------------------+
| id | value               |
+----+---------------------+
|  1 | Hello World         |
|  2 | Foo Bar             |
|  3 | Goodbye Cruel World |
+----+---------------------+
And I execute the query SELECT id, value FROM test.
How would I assign each column to a variable in Bash using read?
read -a truncates everything after the first space in value:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -a row;
do
id="${row[0]}"
value="${row[1]}"
echo "$id : $value"
done;
and output looks like:
1 : Hello
2 : Foo
3 : Goodbye
but I need it to look like:
1 : Hello World
2 : Foo Bar
3 : Goodbye Cruel World
I'm aware there are args I could pass to MySQL to format the results in table format, but I need to parse each value in each row. This is just a simplified example of my problem.
Use individual fields in the read loop instead of the array:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -r id value;
do
echo "$id : $value"
done
This will make sure that id will be read into the id field and everything else would be read into the value field - that's how read behaves when input has more fields than the number of variables being read into. If there are more columns to be read, using a delimiter (such as #) that doesn't clash with actual data would help:
mysql -D "jimmy" -NBe "SELECT CONCAT(id, '#', value, '#', column3) FROM test" | while IFS='#' read -r id value column3;
do
echo "$id : $value : $column3"
done
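Since mysql -NB output is tab-separated, splitting on tabs also works and avoids the CONCAT step even when a column contains spaces. A sketch, with printf standing in for mysql:

```shell
#!/bin/bash
# printf fakes the tab-separated rows `mysql -NBe ...` would emit.
rows() {
printf '1\tHello World\n2\tFoo Bar\n3\tGoodbye Cruel World\n'
}

# IFS=$'\t' keeps spaces intact inside each tab-delimited column.
while IFS=$'\t' read -r id value; do
echo "$id : $value"
done < <(rows)
```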
You can also do this; where possible, avoid piping a command into a while read loop, since the pipe runs the loop in a subshell.
while read -r line; do
id=$(echo "$line" | awk '{print $1}')
value=$(echo "$line" | awk '{$1=""; sub(/^[ \t]+/, ""); print}')
echo "ID: $id"
echo "VALUE: $value"
done< <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
If you want to store all the id's and values in an array for later use, you can modify it to look like this.
#!/bin/bash
declare -A -g arr
while read -r line; do
id=$(echo "$line" | awk '{print $1}')
value=$(echo "$line" | awk '{$1=""; sub(/^[ \t]+/, ""); print}')
arr[$id]=$value
done< <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
for key in "${!arr[@]}"; do
echo "$key: ${arr[$key]}"
done
which gives you this output:
dumbledore#ansible1a [OPS]:~/tmp/tmp > bash test.sh
1: Hello World
2: Foo Bar
3: Goodbye Cruel World

mysql query does not execute

I am trying to run some queries from bash. First, how can I connect once and run SELECT queries against different databases? Also, the following code does not work.
> $LOG_FILE
> $SQL_FILE
for sam in $db
do
echo "USE ${sam}; SELECT login, FORMAT(SUM(PROFIT), 2) AS PROFIT FROM MT4_TRADES WHERE CLOSE_TIME >= '2016-12-01' AND CLOSE_TIME < '2016-02-29' AND CMD IN (0 , 1) GROUP BY LOGIN LIMIT 10;" >> ${SQL_FILE}
done
while read line
do
echo "beginning: `date "+%F %T"`" | tee -a ${LOG_FILE}
out=`echo "$line" | mysql -N --host=${Host} --user=${User} --password=${Passwd} 2>&1`
echo "$out" >> ${LOG_FILE}
if [[ ${?} -eq 0 ]]; then
echo "RESULTS FETCHED: `date "+%F %T"`" | tee -a ${LOG_FILE}
else
echo "FETCHING RESULT failed" | tee -a ${LOG_FILE}
exit 1
fi
done < ${SQL_FILE}
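On the "connect once" part of the question: every statement can go into one file that is fed to a single mysql process, so USE switches databases inside the same connection. A sketch using the asker's variable names ($db, $Host, $User, $Passwd), with the actual mysql call left commented out:

```shell
#!/bin/bash
db="db1 db2"          # sample database list (stand-in for the real $db)
SQL_FILE=batch.sql
: > "$SQL_FILE"
for sam in $db; do
printf 'USE %s;\nSELECT 1;\n' "$sam" >> "$SQL_FILE"
done
# One connection runs all statements in order:
# mysql -N --host="$Host" --user="$User" --password="$Passwd" < "$SQL_FILE"
cat "$SQL_FILE"
```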

CSV file upload into database using shell script

I'm uploading a csv file using this script
export IFS=","
cat $_csv_files | read a b c d;
Now I need the values in column c of the csv file to be inserted into the column manufacture_name of the table manufacturemap in my database. How will I accomplish that?
When I tried the code below,
mysql -u $_db_user -p$_db_password $_db << eof
INSERT INTO \`manufacturemap\`
( \`manufacture_name\`) VALUES ($c)
eof
I get:
ERROR 1136 (21S01) at line 1: Column count doesn't match value count at row 1
I've been stuck here for the past few hours. Please help me.
Input(csv file):
a,b,c,d
1.01100156278101E+15,2014/07/08,2014/07/08,"Cash Withdrawal by Cheque-173320--TT1421957901"
1.01100156278101E+15,2014/07/08,2014/07/08,"Cheque Paid-173261--TT1421951241"
1.01100156278101E+15,2014/07/08,2014/07/08,"Cheque Paid-173298--TT1421951226"
1.01100156278101E+15,2014/06/08,2014/06/08,"Cash Withdrawal by Cheque-173319--TT1421858465"
Try this:
#! /bin/sh
values ()
{
cat "$@" | \
while IFS=, read -r a b c d; do
printf '(%s)\n' "$c"
done | \
paste -sd, -
}
printf 'INSERT INTO `manufacturemap` (`manufacture_name`) VALUES %s\n' "$(values $_csv_files)" | \
mysql -u"$_db_user" -p"$_db_password" "$_db"
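To sanity-check the generated statement without a database, the same idea can be run against a small sample file (sample.csv is made up here, and echo stands in for mysql):

```shell
#!/bin/sh
# Build a multi-row VALUES list from column 3 of a CSV (sketch).
values ()
{
while IFS=, read -r a b c d; do
printf '(%s)\n' "$c"
done < "$1" | paste -sd, -
}

printf '%s\n' 'a,1,"x y",z' 'b,2,"p q",r' > sample.csv
sql="INSERT INTO \`manufacturemap\` (\`manufacture_name\`) VALUES $(values sample.csv)"
echo "$sql"
```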

printf span long command over multiple lines

Is it possible, within a bash script, to have a long printf command span multiple lines?
My command is something like this, and I would like it to be more readable.
Braces are there because it's actually part of an awk block.
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" | \
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\t\t\t<td class=\"d\">%s</td>\n\t\t\t<td class=\"m\">%s</td>\n\t\t</tr>\n", $1, $2 }' | vim -
In awk, you can use line-continuation characters to split the string across multiple lines.
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\
\t\t\t<td class=\"d\">%s</td>\n\
\t\t\t<td class=\"m\">%s</td>\n\
\t\t</tr>\n", $1, $2 }' | vim -
Or, instead of using awk, you can process the output of sqlite line-by-line in bash:
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
while IFS='|' read -r col1 col2; do
printf '\t\t<tr>
\t\t\t<td class="d">%s</td>
\t\t\t<td class="m">%s</td>
\t\t</tr>\n' "$col1" "$col2"
done | vim -
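Another option for readability (a sketch) is to keep the long format string in a shell variable, so the loop body stays on one line:

```shell
#!/bin/bash
# The format lives in one variable; printf expands its \t and \n.
fmt='\t\t<tr>\n\t\t\t<td class="d">%s</td>\n\t\t\t<td class="m">%s</td>\n\t\t</tr>\n'

row() {
# shellcheck disable=SC2059  # format deliberately comes from a variable
printf "$fmt" "$1" "$2"
}

row "col1" "col2"
```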