Redirecting mysql output to prompt using shell - mysql

I'm writing a shell script to automatically run multiple SQL queries against multiple databases. My code works well, but apart from SELECT queries, no other query displays anything in the terminal while executing. How can I force the query output to be shown?
Here is the relevant part of my code:
for x in "${db[@]}"
do
    found=0
    for enreg in `cat /home/dbfile.csv`
    do
        # extracting database data from the csv
        DBNAME=`echo $enreg | awk -F";" '{ print $4 }'`
        if [ "$x" = "$DBNAME" ]
        then
            PASS=`echo $enreg | awk -F";" '{ print $2 }'`
            HOST=`echo $enreg | awk -F";" '{ print $3 }'`
            USERNAME=`echo $enreg | awk -F";" '{ print $1 }'`
            # Running queries in database $DBNAME
            for y in "${req[@]}"
            do
                echo ""
                mysql -u "$USERNAME" -p"$PASS" -h "$HOST" "$DBNAME" -e "$y"
                echo ""
            done
            found=1
            break
        fi
    done
    if [ "$found" -eq 0 ]
    then
        echo "Database $x doesn't exist"
    fi
done
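One way to get what the question is after: in batch mode (`-e`), the mysql client prints result sets but stays silent for statements that return no rows. The `-v` option makes it echo each statement, and a second `-v` also prints the status output ("Query OK, N rows affected") that is otherwise suppressed. A minimal wrapper sketch (the function name is made up; the flags are the standard mysql client options):

```shell
# Sketch: run one statement with verbose output so that non-SELECT
# statements also report what they did. -v echoes the statement itself;
# -vv additionally prints the result/status lines in batch mode.
run_query() {
    mysql -u "$1" -p"$2" -h "$3" "$4" -vv -e "$5"
}
```

Inside the loop above, the bare mysql line would become `run_query "$USERNAME" "$PASS" "$HOST" "$DBNAME" "$y"`.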

Related

Shell script: unable to get the expected response when running through a loop inside a .sh file, but getting it by defining the variables individually

Fileread.sh
#!/bin/bash
s=ch.qos.logback
e=logback-access
curl -s "https://search.maven.org/solrsearch/select?q=g:$s+AND+a:$e&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
Output: "1.2.11"
This code works perfectly fine, but when I store the s and e values in a .txt file separated by a colon and then run the script below, I get nothing in response.
textFile.txt
ch.qos.logback:logback-access
fileread.sh
#!/bin/bash
read -p "Enter file name:" filename
while IFS=':' read -r s e
do
curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
done < "$filename"
I have tried:
xy=$(curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v")
echo "$xy"
xy=$(curl -s "'https://search.maven.org/solrsearch/select?q=g:'${s}'+AND+a:'${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v")
echo "$xy"
url=`https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json`
echo url
xx=`curl -s "$url" | jq ".response.docs[].v"`
echo $xx
Try this:
#!/bin/bash
echo "Enter file name:"
read filename
IFS=':' read -r s e < "$filename"
echo $s $e
curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
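For files with more than one coordinate line, the same parsing goes back into a loop. A sketch that isolates the line-splitting from the network call, so the parsing logic can be checked on its own (the function name is made up; pipe each emitted URL through curl/jq as in the question):

```shell
# Read each groupId:artifactId line from the file given as $1 and emit
# the Maven Central search URL for it.
parse_coords() {
    while IFS=':' read -r s e; do
        [ -z "$s" ] && continue   # skip blank lines
        printf 'https://search.maven.org/solrsearch/select?q=g:%s+AND+a:%s&core=gav&rows=1&wt=json\n' "$s" "$e"
    done < "$1"
}
```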

Use arguments as part of a variable name inside a function

If I type:
function chk_is_it_started(){
PROCC_NAME_$1="my_process_$1";
echo "PROCC_NAME_$1 is: $PROCC_NAME_$1";
PID_FILE_OF_APP_$1="/run/pidfile_$PROCC_NAME_$1.pid"
PATH_OF_PROCCESS_NAME_$1=`ps -aux|grep $PROCC_NAME_$1|grep -v grep|awk -F" " '{print $12}'`
PID_NUMBER_OF_APP_$1=`ps -aux|grep $PROCC_NAME_$1|grep -v grep|awk -F" " '{print $2}'`
NUMBER_OF_OCCURENCE_$1=`echo ${#PID_NUMBER_OF_APP_$1[@]}`
if [[ "$NUMBER_OF_OCCURENCE_$1" == 0 ]];then
echo -e "Proccess isn't started..\nNow process $PATH_OF_PROCCESS_NAME_$1 is running and I'm creating a PID file..."
python /emu/script/$PROCC_NAME_$1.py & disown & echo $! > $PID_FILE_OF_APP_$1
else
echo "Proccess is STARTRED"
fi
}
chk_is_it_started blabla;
I get this error:
root@orangepipc:~# chk_is_it_started blabla;
Could not find the database of available applications, run update-command-not-found as root to fix this
PROCC_NAME_blabla=my_process_blabla: command not found
PROCC_NAME_blabla is: blabla
-bash: PID_FILE_OF_APP_blabla=/run/pidfile_blabla.pid: No such file or directory
Could not find the database of available applications, run update-command-not-found as root to fix this
PATH_OF_PROCCESS_NAME_blabla=: command not found
Could not find the database of available applications, run update-command-not-found as root to fix this
PID_NUMBER_OF_APP_blabla=: command not found
-bash: ${#PID_NUMBER_OF_APP_$1[@]}: bad substitution
Could not find the database of available applications, run update-command-not-found as root to fix this
NUMBER_OF_OCCURENCE_blabla=: command not found
Proccess is STARTRED
But it is not!
Where am I making the mistake?
If I use the code without a function, it works!
Thanks
I found the solution...
function chk_is_it_started(){
    PROCC_NAME="dht22_$1"
    # echo "PROCC_NAME is: $PROCC_NAME"
    PID_FILE_OF_APP="/run/pidfile_$PROCC_NAME.pid"
    # echo "PID_FILE_OF_APP is: $PID_FILE_OF_APP"
    PATH_OF_PROCCESS_NAME=`ps aux | grep "$PROCC_NAME" | grep -v grep | awk '{print $12}'`
    # echo "PATH_OF_PROCCESS_NAME is: $PATH_OF_PROCCESS_NAME"
    PID_NUMBER_OF_APP=`ps aux | grep "$PROCC_NAME" | grep -v grep | awk '{print $2}'`
    # echo "PID_NUMBER_OF_APP is $PID_NUMBER_OF_APP"
    PID_NUMBER_OF_APP=( $PID_NUMBER_OF_APP )
    NUMBER_OF_OCCURENCE=${#PID_NUMBER_OF_APP[@]}
    # echo "NUMBER_OF_OCCURENCE is: $NUMBER_OF_OCCURENCE"
    if [[ "$NUMBER_OF_OCCURENCE" == 0 ]]; then
        echo -e "Process isn't started..\nNow starting process $PATH_OF_PROCCESS_NAME and creating a PID file..."
        python /emu/script/$PROCC_NAME.py & echo $! > "$PID_FILE_OF_APP"; disown
        # exit
    else
        echo "Process is STARTED"
    fi
    if [[ "$NUMBER_OF_OCCURENCE" -gt 1 ]]; then
        echo -e "Process $PROCC_NAME.py is started more than once"
        echo -e "Now killing the extra processes one by one"
        while [ "$NUMBER_OF_OCCURENCE" != "1" ]; do
            echo "Entered the while loop"
            PID_NUMBER_OF_APP=`ps aux | grep "$PROCC_NAME" | grep -v grep | awk '{print $2}'`
            echo "PID_NUMBER_OF_APP is: $PID_NUMBER_OF_APP"
            PID_NUMBER_OF_APP=( $PID_NUMBER_OF_APP )
            NUMBER_OF_OCCURENCE=${#PID_NUMBER_OF_APP[@]}
            echo "NUMBER_OF_OCCURENCE is: $NUMBER_OF_OCCURENCE"
            kill "${PID_NUMBER_OF_APP[0]}"
            rm -f "$PID_FILE_OF_APP"
        done
        echo -e "Starting process $PROCC_NAME.py and creating a PID file..."
        python /emu/script/$PROCC_NAME.py & echo $! > "$PID_FILE_OF_APP"
    fi
}
chk_is_it_started bla1
chk_is_it_started bla2
Btw, saluting the user who gave a -1 vote to my question :)
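For completeness: the root cause of the original errors is that bash does not expand $1 on the left-hand side of a plain assignment, so `PROCC_NAME_$1=...` is parsed as a command, not an assignment. If dynamically named variables are really wanted, `declare` can build the name at assignment time, and a bash 4.3+ nameref can read it back. A small sketch (the function and variable names are made up for illustration):

```shell
# Assign to a dynamically built variable name via declare, then read it
# back through a nameref.
make_proc_var() {
    declare -g "PROCC_NAME_$1=my_process_$1"   # dynamic assignment
    local -n ref="PROCC_NAME_$1"               # nameref (bash 4.3+)
    echo "PROCC_NAME_$1 is: $ref"
}
```

That said, the accepted workaround of using one plain variable per function call, as above, is usually simpler and more portable.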

Modifying bash script to take each line in a file and execute command

I need to modify a bash script to take each line in a file and execute a command. I currently have this:
#!/bin/bash
if [ -z "$1" ] ; then
echo "Lipsa IP";
exit;
fi
i=1
ip=$1
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
if [ -n "$ip" ]; then
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
CMD=`ps -eaf | grep -c mysql`
if [ "$CMD" -lt "50" ]; then
./mysql $ip $user $pass &
else
sleep 15
fi
i=`expr $i + 1`
fi
done
The password file (named pfile) is in this format:
username password
The intranet hosts file (named hlist) is in this format, one host per line:
192.168.0.1
192.168.0.2
192.168.0.3
Any suggestions?
I don't understand what you want to do that you are not already doing. Do you want to use the IP number file in some fashion?
Anyway, the way you extract the username and password from the password file is unnecessarily complicated (to put it politely); you can iterate over the lines of a file in a much simpler fashion. Instead of:
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
# ...
i=`expr $i + 1`
done
Just use the bash (POSIX) read command:
while read -r user pass __; do
# ...
done < pass_file
(The __ is in case there is a line in the pass_file with more than two values; the last variable name in the read command receives "the rest of the line").
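A quick demonstration of that catch-all third variable: everything past the second field lands in __.

```shell
# read splits on IFS; the last variable named receives the rest of the line.
printf 'alice s3cret extra stuff here\n' | while read -r user pass __; do
    echo "user=$user pass=$pass rest=$__"
done
# prints: user=alice pass=s3cret rest=extra stuff here
```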
I searched the web again and found a cleaner approach, which I adapted to suit my needs.
#!/bin/bash
while read -r ip
do
    if [ -n "$ip" ]
    then
        while read -r user pass
        do
            CMD=`ps -eaf | grep -c mysql`
            if [ "$CMD" -gt "50" ]
            then
                sleep 15
            fi
            ./mysql "$ip" "$user" "$pass" &
        done < pass_file
    fi
done < host_file
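One pitfall with nested `while read` loops: any command inside the loop that reads stdin will silently steal lines from the outer loop. Using explicit file descriptors keeps the two inputs separate. A runnable sketch (the demo files are created inline so it works anywhere):

```shell
# Demo input files standing in for hlist and pfile from the question.
printf '192.168.0.1\n192.168.0.2\n' > host_file
printf 'alice secret\n' > pass_file
# -u 3 / -u 4 read from dedicated descriptors, not stdin, so nothing in
# the loop body can accidentally consume the loop's own input.
while read -r -u 3 ip; do
    while read -r -u 4 user pass; do
        echo "$ip $user $pass"
    done 4< pass_file
done 3< host_file
```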

conditional statement in bash with mysql query

I'm trying to write a bash script that runs a MySQL query and, if the number of results is 1, does something. I can't get it to work, though.
#!/bin/sh
file=`mysql -uroot -proot -e "select count(*) from MyTable.files where strFilename='file.txt'"`
if [[ $file == "count(*) 1" ]];
then
echo $file
else
echo $file
echo "no"
fi
I verified the query works. I keep getting this returned:
count(*) 1
no
I'm not sure why, but I think it might have something to do with the type of the $file variable. Any ideas?
To avoid exposing your database credentials in the script, you can store them in a .my.cnf file located in your home directory.
This technique lets your script work on any server without modification.
Path: /home/youruser/.my.cnf
Content:
[client]
user="root"
password="root"
host="localhost"
[mysql]
database="MyTable"
So, Renato's code could be rewritten as follows:
#!/bin/sh
file=`mysql -e "select count(*) as count from files where strFilename='file.txt'" | tail -n 1`
if [ "$file" = "1" ]
then
    echo "$file"
else
    echo "$file"
    echo "no"
fi
I rewrote your script, it works now:
#!/bin/sh
file=`mysql -uroot -proot -e "select count(*) as count from MyTable.files where strFilename='file.txt'" | tail -n 1`
if [ "$file" = "1" ]
then
    echo "$file"
else
    echo "$file"
    echo "no"
fi
I'm giving a better name to the count field and using tail to keep only the last line of the mysql output (the value row, not the header), putting it into $file. Then you can test $file against "1". Hope it helps you.
I'm guessing that it isn't actually count(*) 1 but instead count(*)\n1 or something. echo $file converts all the characters in IFS to a space, but == distinguishes between those whitespace characters. If this is the case, echo "$file" will give you a different result. (Notice the quotes.)
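That point is easy to reproduce without a database, by planting the suspected newline directly in the variable:

```shell
set -f                      # keep the '*' in count(*) from glob-expanding
file=$'count(*)\n1'         # what mysql -e likely returned: header, newline, value
echo $file                  # unquoted: word splitting flattens the newline to a space
echo "$file"                # quoted: the newline survives, printing two lines
[ "$file" = "1" ] && echo match || echo "no match"   # prints "no match"
```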

Creating an HTML table with BASH & AWK

I am having issues creating an HTML table to display stats from a text file. I am sure there are 100 ways to do this better, but here it is:
(The comments in the following script show the outputs)
#!/bin/bash
function getapistats () {
curl -s http://api.example.com/stats > api-stats.txt
awk '{print $1}' api-stats.txt > api-stats-int.txt
awk '{print $2}' api-stats.txt > api-stats-fqdm.txt
}
# api-stats.txt example
# 992 cdn.example.com
# 227 static.foo.com
# 225 imgcdn.bar.com
# end api-stats.txt example
function get_int () {
for i in `cat api-stats-int.txt`;
do echo -e "<tr><td>${i}</td>";
done
}
function get_fqdn () {
for f in `cat api-stats-fqdn.txt`;
do echo -e "<td>${f}</td></tr>";
done
}
function build_table () {
echo "<table>";
echo -e "`get_int`" "`get_fqdn`";
#echo -e "`get_fqdn`";
echo "</table>";
}
getapistats;
build_table > api-stats.html;
# Output fail :|
# <table>
# <tr><td>992</td>
# <tr><td>227</td>
# <tr><td>225</td><td>cdn.example.com</td></tr>
# <td>static.foo.com</td></tr>
# <td>imgcdn.bar.com</td></tr>
# Desired output:
# <tr><td>992</td><td>cdn.example.com</td></tr>
# ...
This is reasonably simple to do in pure awk:
curl -s http://api.example.com/stats > api-stats.txt
awk 'BEGIN { print "<table>" }
{ print "<tr><td>" $1 "</td><td>" $2 "</td></tr>" }
END { print "</table>" }' api-stats.txt > api-stats.html
Awk is really made for this type of use.
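Here is that same awk program run against the sample lines from the question, with a here-document standing in for the curl output so it can be tried offline:

```shell
# BEGIN/END emit the table wrapper once; the main rule emits one row per
# input line, with field 1 (count) and field 2 (hostname) in their cells.
awk 'BEGIN { print "<table>" }
     { print "<tr><td>" $1 "</td><td>" $2 "</td></tr>" }
     END { print "</table>" }' <<'EOF'
992 cdn.example.com
227 static.foo.com
EOF
```

This produces the desired interleaved rows directly, because awk sees both fields of each line at once instead of iterating over the two columns separately.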
You can do it with one awk at least.
curl -s http://api.example.com/stats | awk '
BEGIN{print "<table>"}
{printf("<tr><td>%d</td><td>%s</td></tr>\n",$1,$2)}
END{print "</table>"}
'
this can be done with bash ;)
while read -u 3 a && read -u 4 b; do
    echo "$a$b"
done 3</etc/passwd 4</etc/services
but in my experience it's usually a bad idea to do things like this in bash/awk/etc.
The feature I used here is buried deep in the bash manual page...
I would recommend using a real language for this kind of data processing, for example Ruby or Python, because they are more flexible/readable/maintainable.