Execute commands using function calls in shell scripts

I am trying to execute a command using function calls in a shell script. When I pass the command to the function as an argument, it does not work.
Function definition:
function ExecuteCommand() (
    # $1: User@Host
    # $2: Password
    # $3: Command to execute
    # Collect current IFS value
    OLD_IFS=$IFS
    # Set IFS value to new line feed
    IFS=$'\n'
    # Execute the command and capture the output
    EXPECT_OUTPUT=($(expect ssh_exec.expect $1 $2 $3))
    # Print the output
    OUTPUT_LINE_COUNT=${#EXPECT_OUTPUT[@]}
    for ((OUTPUT_LINE_INDEX=0; OUTPUT_LINE_INDEX<OUTPUT_LINE_COUNT; OUTPUT_LINE_INDEX++)); do
        echo ${EXPECT_OUTPUT[$OUTPUT_LINE_INDEX]}
    done
    # Get back to the original IFS
    IFS=$OLD_IFS
)
Function call:
ExecuteCommand oracle@192.168.***.*** password123 "srvctl status database -d mydb"
And the output I get is:
spawn ssh oracle@192.168.***.*** {srvctl status database -d mydb}
oracle@192.168.***.***'s password:
bash: {srvctl: command not found
But when I don't pass the command as an argument of the function, it works perfectly:
Function definition in that case:
function ExecuteCommand() (
    # $1: User@Host
    # $2: Password
    # Collect current IFS value
    OLD_IFS=$IFS
    # Set IFS value to new line feed
    IFS=$'\n'
    # Execute the command and capture the output
    EXPECT_OUTPUT=($(expect ssh_exec.expect $1 $2 srvctl status database -d mydb))
    # Print the output
    OUTPUT_LINE_COUNT=${#EXPECT_OUTPUT[@]}
    for ((OUTPUT_LINE_INDEX=0; OUTPUT_LINE_INDEX<OUTPUT_LINE_COUNT; OUTPUT_LINE_INDEX++)); do
        echo ${EXPECT_OUTPUT[$OUTPUT_LINE_INDEX]}
    done
    # Get back to the original IFS
    IFS=$OLD_IFS
)
Function call:
ExecuteCommand oracle@192.168.***.*** password123
And I get the output just as I expected:
spawn ssh oracle@192.168.***.*** srvctl status database -d mydb
oracle@192.168.***.***'s password:
Instance mydb1 is running on node mydb1
Instance mydb2 is running on node mydb2
Instance mydb3 is running on node mydb3
Please help me understand what went wrong when passing the command as a function parameter in the first case.

If I am not wrong, a multi-word argument passed to expect as a single double-quoted string ends up wrapped in curly braces when expect splices it into the spawned command. Hence, the expect command effectively became:
expect ssh_exec.expect oracle@192.168.***.*** password123 {srvctl status database -d mydb}
which made the remote shell interpret "{srvctl" as a command.
Try using it like this:
EXPECT_OUTPUT=($(expect ssh_exec.expect $*))
instead of
EXPECT_OUTPUT=($(expect ssh_exec.expect $1 $2 $3))
And call your function like:
ExecuteCommand oracle@192.168.***.*** password123 srvctl status database -d mydb
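For reference, here is a minimal sketch of the whole function rewritten that way (assuming, as the question's working call suggests, that ssh_exec.expect takes the user@host, the password, and then the command words as its remaining arguments):
function ExecuteCommand() (
    # $1: User@Host, $2: Password, remaining arguments: command to execute
    OLD_IFS=$IFS
    IFS=$'\n'
    # $* passes user@host, password and the command words straight through to expect
    EXPECT_OUTPUT=($(expect ssh_exec.expect $*))
    for LINE in "${EXPECT_OUTPUT[@]}"; do
        echo "$LINE"
    done
    IFS=$OLD_IFS
)
# Call it with the command as separate, unquoted words:
ExecuteCommand oracle@192.168.***.*** password123 srvctl status database -d mydb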

How to store JSON key-value pairs in variables in Linux

I am calling a Python command which returns data as JSON key-value pairs.
I have put the Python command and other commands in one shell script named a.sh.
Code (a.sh):
cd /home/drg/Code/dth
a=$(python3 main.py -z shell -y droub -i 56)
echo "$a"
When I call this script I get output like:
{'password': 'XYZ', 'name': 'Stguy', 'port': '5412', 'host': 'igtet', 'db_name': 'test3'}
After getting this output I want to pass values like password and name to the psql command to run a PostgreSQL query.
So I want to store the password value in one variable and the name in another, like:
a= xyz
b=Stguy
p= port
So that I can use these variables in the psql query as:
psql -h $a -p $p -U $b -d $db -c "CREATE SCHEMA IF NOT EXISTS ${sname,,};"
Can someone please help me with this?
Note: the environment is Linux (CentOS 8).
Thanks in advance!
One way of solving this could be a combination of jq for value extraction and shell-builtin read for multiple variable assignment:
JSON='{"name": "Stguy", "port": 5412, "host": "igtet", "db_name": "test3"}'
read -r a b c <<<$( echo $JSON | jq -r '"\(.host) \(.port) \(.name)"' )
echo "a: $a, b: $b, c: $c"
using jq string interpolation "\( )" to print the result on one line
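Note that the output shown in the question uses single quotes (a Python dict repr), which jq cannot parse. A small sketch of one way around that, assuming you cannot change main.py to print real JSON with json.dumps, is to convert the text first:
# Convert the Python-dict-style text to valid JSON, then extract fields with jq
RAW=$(python3 main.py -z shell -y droub -i 56)
JSON=$(printf '%s' "$RAW" | python3 -c 'import ast, json, sys; print(json.dumps(ast.literal_eval(sys.stdin.read())))')
read -r a b c <<<$( echo "$JSON" | jq -r '"\(.host) \(.port) \(.name)"' )
echo "a: $a, b: $b, c: $c"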
You can also go with sed or awk:
PSQL="$( python3 main.py -z shell -y droub -i 56 | sed "s/^[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'}/psql -h '\4' -p '\1' -U '\2' -d '\5'/")"
[ "${PSQL:0:5}" = "psql " ] && ${PSQL} -c "CREATE SCHEMA IF NOT EXISTS ${sname,,};"
For security reasons, I urge you anyway to avoid passing account data (user, password) through environment variables.
It would be better if your python script had an option to directly launch psql with required parameters.

How can I use the result of a command as a variable in a child shell's input

Shell code as follows:
#!/bin/sh
# ... etc
ssh root@127.0.0.1 << EOF
KEYS=$(mysql -h ${DB_IP} -u ${USERNAME} -p${PASSWORD} -P ${DB_PORT} --database bosdb -e "select a from B where id=1");
echo "123";
echo $KEYS;
# ... etc
EOF
When I run this script, the output text is
123
(this is an empty line, which means KEYS is empty)
I have tried logging in to the MySQL machine directly: there I can get the result of the mysql command, and echo $KEYS shows a string of values. But when I use the command substitution inside the heredoc, it doesn't work.
So, how can I get the value of KEYS correctly? I'd really appreciate it if someone could help me.
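A minimal sketch of one likely fix, assuming DB_IP, USERNAME, PASSWORD and DB_PORT are defined locally and the mysql command is meant to run on the remote host: the unquoted heredoc lets the local shell expand $(...) and $KEYS before ssh ever runs, so escape those dollar signs to defer them to the remote shell:
#!/bin/sh
# \$(...) and \$KEYS are escaped so the REMOTE shell expands them;
# ${DB_IP} etc. stay unescaped and are expanded locally, as before.
ssh root@127.0.0.1 << EOF
KEYS=\$(mysql -h ${DB_IP} -u ${USERNAME} -p${PASSWORD} -P ${DB_PORT} --database bosdb -e "select a from B where id=1")
echo "123"
echo "\$KEYS"
EOF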

How to remove functions from fishshell without deleting the function file directly?

I've defined a function hello in fishshell:
function hello
    echo Hello
end
And save it:
funcsave hello
If I want to delete it, I can delete the file ~/.config/fish/functions/hello.fish.
Is there any other way to do it? (like built-in funcdel or funcrm)
No, there isn't any builtin to remove the file, but you can use:
functions --erase hello
or
functions -e hello
to erase the function definition from the current session.
See also the fish documentation on functions.
I created another fish function for that
function funcdel
    if test -e ~/.config/fish/functions/$argv[1].fish
        rm ~/.config/fish/functions/$argv[1].fish
        echo 'Deleted function ' $argv[1]
    else
        echo 'Not found function ' $argv[1]
    end
end
The above solution of functions -e hello only deletes hello in the current session. Open another terminal, and the function is still there.
To delete the function in a persistent way, I had to resort to deleting the file ~/.config/fish/functions/hello.fish directly. Up till now, I do not know of another way to delete it persistently.
A more complete (and quiet) self-crafted solution, inspired by @Kanzee's answer (copy to the file ~/.config/fish/functions/funcdel.fish):
function funcdel --description 'Deletes a fish function both permanently and from memory'
    set -l fun_name $argv[1]
    set -l fun_file ~/.config/fish/functions/$fun_name.fish
    # Delete the in-memory function, if it exists
    functions --erase $fun_name
    # Delete the function permanently,
    # if it exists as a file in the regular location
    if test -e $fun_file
        rm $fun_file
    end
end
I combined the answer from @hoijui with some code from the function funcsave, so you can delete more than one function at once:
function funcdel --description 'Deletes a fish function both permanently and from memory'
    set cf (status function)
    set -l options 'h/help'
    argparse -n funcdel $options -- $argv
    or return
    # should create a manpage
    if set -q _flag_help
        __fish_print_help $cf
        return 0
    end
    if not set -q argv[1]
        printf (_ "%ls: Expected at least %d args, got only %d\n") $cf 1 0
        return 1
    end
    set -l retval 0
    for funcname in $argv
        set -l funcfile $__fish_config_dir/functions/$funcname.fish
        # Delete the in-memory function, if it exists
        functions --query -- $funcname
        if test $status -eq 0
            functions --erase -- $funcname
            printf (_ "%s: function %s removed from session\n") $cf $funcname
        else
            printf (_ "%s: Unknown function '%s'\n") $cf $funcname
        end
        # Delete the function permanently,
        # if it exists as a file in the regular location
        if test -e $funcfile
            rm $funcfile
            printf (_ "%s: %s deleted\n") $cf $funcfile
        else
            printf (_ "%s: %s not found\n") $cf $funcfile
        end
    end
    return $retval
end
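Usage is the same as with the earlier versions; for example (the function names here are only placeholders):
# Erase both functions from the running session and delete their files
# under ~/.config/fish/functions, reporting what was done for each
funcdel hello greet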

Bash : Command line argument issue

#usernamecheck.sh
#!/bin/bash
DBUSER=user_1
DBPASSWORD=XXXXX
DBSOURCENAME=database_1
DBNAME=database_1
DBNAME1=snapshot_v
DBSERVER=X.X.X.X
DBCONN="-h ${DBSERVER} -u ${DBUSER} --password=${DBPASSWORD}"
echo "select user from device_id where ip='1.1.1.1'" | mysql $DBCONN $DBNAME
The output of executing this script is:
user
----
JohnDoe
The above bash script gives me the user name for the IP (1.1.1.1). I would like to pass the IP as a command line argument. I tried:
echo "select user from device_id where ip=$1" | mysql $DBCONN $DBNAME
and executed it like:
$ ./usernamecheck.sh '1.1.1.1'
I get the error message:
Error in SQL syntax
I hope I'm passing the command line argument correctly, but I'm not sure why this error pops up.
In the query that works, you say:
WHERE ip='1.1.1.1'
What you are currently saying is:
WHERE ip=$1
which gets translated to
WHERE ip=1.1.1.1
So you are missing a quote around $1 to make it work:
WHERE ip='$1'
#        ^  ^
All together:
echo "select user from device_id where ip='$1' | mysql $DBCONN $DBNAME
# ^ ^
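Put together, a minimal sketch of the corrected script (keeping only the variables the query actually uses; the single quotes around $1 inside the double-quoted string are the fix):
#!/bin/bash
# usernamecheck.sh - print the user for the IP given as the first argument
DBUSER=user_1
DBPASSWORD=XXXXX
DBNAME=database_1
DBSERVER=X.X.X.X
DBCONN="-h ${DBSERVER} -u ${DBUSER} --password=${DBPASSWORD}"
echo "select user from device_id where ip='$1'" | mysql $DBCONN $DBNAME
and run it as ./usernamecheck.sh 1.1.1.1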

Store mysql result in a bash array variable

I am trying to store MySQL result into a global bash array variable but I don't know how to do it.
Should I save the MySQL command result in a file and read the file line by line in my for loop for further processing?
Example:
user password
Pierre aaa
Paul bbb
Command:
results=$( mysql -uroot -ppwd -se "SELECT * from users" )
I want results to contain the two rows.
Using mapfile to store the whole table in one bash variable
You could try this:
mapfile result < <(mysql -uroot -ppwd -se "SELECT * from users;")
Then
echo ${result[0]%$'\t'*}
echo ${result[0]#*$'\t'}
or
for row in "${result[@]}"; do
    echo Name: ${row%$'\t'*} pass: ${row#*$'\t'}
done
Note: this will work fine as long as there are only 2 fields per row. More is possible, but it becomes tricky.
Using read to process the table row by row
while IFS=$'\t' read name pass; do
    echo name:$name pass:$pass
done < <(mysql -uroot -ppwd -se "SELECT * from users;")
Using read in a loop to store the whole table in many variables:
i=0
while IFS=$'\t' read name[i] pass[i++]; do
    :
done < <(mysql -uroot -ppwd -se "SELECT * from users;")
echo ${name[0]} ${pass[0]}
echo ${name[1]} ${pass[1]}
New (Feb 2018) shell connector
There is a little tool, on GitHub or on my own site (shell_connector.sh), that you could use:
Some preparation:
cd /tmp/
wget -q http://f-hauri.ch/vrac/shell_connector.sh
. shell_connector.sh
newSqlConnector /usr/bin/mysql '-uroot -ppwd'
The following is just for demo; skip to the operational test for a quick run.
That's all. Now, create a temporary table for the demo:
echo $SQLIN
3
cat >&3 <<eof
CREATE TEMPORARY TABLE users (
id bigint(20) unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(30), date DATE)
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
The command myMysql myarray ';' will send ; and then execute the inline command,
but as mysql won't answer anything, the variable $myarray won't exist.
cat >&3 <<eof
INSERT INTO users VALUES (1,'alice','2015-06-09 22:15:01'),
(2,'bob','2016-08-10 04:13:21'),(3,'charlie','2017-10-21 16:12:11')
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
Operational Test:
Ok, then now:
myMysql myarray "SELECT * from users;"
printf "%s\n" "${myarray[#]}"
1 alice 2015-06-09
2 bob 2016-08-10
3 charlie 2017-10-21
declare -p myarray
declare -a myarray=([0]=$'1\talice\t2015-06-09' [1]=$'2\tbob\t2016-08-10' [2]=$'3\tcharlie\t2017-10-21')
This tool is at an early stage of development... You have to manually clear your variables before re-using them:
unset myarray
myMysql myarray "SELECT name from users where id=2;"
echo $myarray
bob
declare -p myarray
declare -a myarray=([0]="bob")
If you're looking to get a global variable inside your script you can simply assign a value to a varname:
VARNAME=('var' 'name') # no space between the variable name and value
Doing this you'll be able to access VARNAME's value anywhere in your script after you initialize it.
If you want your variable to be shared between multiple scripts you have to use export:
script1.sh:
export VARNAME=('var' 'name')
echo ${VARNAME[0]} # will echo 'var'
script2.sh
echo ${VARNAME[1]} # will echo 'name', provided that
# script1.sh was executed prior to this one
NOTE that export will work only when running scripts in the same shell instance. If you want it to work cross-instance you would have to put the export variable code somewhere in .bashrc or .bash_profile
The answer from @F. Hauri seems really complicated.
https://stackoverflow.com/a/38052768/470749 helped me realize that I needed to use parentheses () wrapped around the query result to treat it as an array.
#You can ignore this function since you'll do something different.
function showTbl {
    echo $1;
}
MOST_TABLES=$(ssh -vvv -t -i ~/.ssh/myKey ${SERVER_USER_AND_IP} "cd /app/ && docker exec laradock_mysql_1 mysql -u ${DB} -p${REMOTE_PW} -e 'SELECT table_name FROM information_schema.tables WHERE table_schema = \"${DB}\" AND table_name NOT LIKE \"pma_%\" AND table_name NOT IN (\"mail_webhooks\");'")
#Do some string replacement to get rid of the query result header and warning. https://stackoverflow.com/questions/13210880/replace-one-substring-for-another-string-in-shell-script
warningToIgnore="mysql\: \[Warning\] Using a password on the command line interface can be insecure\."
MOST_TABLES=${MOST_TABLES/$warningToIgnore/""}
headerToIgnore="table_name"
MOST_TABLES=${MOST_TABLES/$headerToIgnore/""}
#HERE WAS THE LINE THAT I NEEDED TO ADD! Convert the string to array:
MOST_TABLES=($MOST_TABLES)
for i in ${MOST_TABLES[@]}; do
    if [[ $i = *[![:space:]]* ]]
    then
        #Remove whitespace from value https://stackoverflow.com/a/3232433/470749
        i="$(echo -e "${i}" | tr -d '[:space:]')"
        TBL_ARR+=("$i")
    fi
done
for t in ${TBL_ARR[@]}; do
    showTbl $t
done
This successfully shows me that ${TBL_ARR[@]} has all the values from the query result.
results=($( mysql -uroot -ppwd -se "SELECT * from users" ))
if [ "$?" -ne 0 ]
then
    echo fail
    exit
fi
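One caveat with this last snippet: with the default IFS, the array is split on every run of whitespace, so each field becomes its own element rather than each row. A minimal variant, assuming one element per row is wanted as in the question, restricts IFS to a newline around the assignment:
OLD_IFS=$IFS
IFS=$'\n'
# Each element of results now holds one whole row (fields stay tab-separated)
results=($( mysql -uroot -ppwd -se "SELECT * from users" ))
status=$?
IFS=$OLD_IFS
if [ "$status" -ne 0 ]; then
    echo fail
    exit 1
fi
printf '%s\n' "${results[@]}"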