This is the second part of Read from file into variable - Bash Script.
I have a bash script that reads strings from a file, parses them, and assigns them to a variable. The file (file.txt) looks like this:
database1 table1
database1 table4
database2
database3 table2
Using awk in the script:
s=$(awk '{$1=$1}1' OFS='.' ORS='|' file.txt)
LIST="${s%|}"
echo "$LIST"
database1.table1|database1.table4|database2|database3.table2
But I need to add some wildcards at the end of each substring. I need this result:
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
The conditions are: if we read only a database (e.g. database2), the output should be database2*.*; if we read a database and a table, the output should be database1.table1.*.
Use this awk:
s=$(awk '$0=="database2"{$0=$0 "*.*";print;next} {$2=$2 ".*"}1' OFS='.' ORS='|' file.txt)
LIST="${s%|}"
echo "$LIST"
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
Assuming the (slightly odd) regex is correct, the following awk script works for me on your example input.
BEGIN {OFS="."; ORS="|"}
!$2 {$1=$1"*"}
{$(NF+1)="*"}
1
Set OFS and ORS.
If we do not have a second field add a * to our first field.
Add a * as a final field.
Print the line.
Run as awk -f script.awk inputfile where the above script is in the script.awk (or whatever) file.
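If you want the result in a single shell variable as in the question, a minimal sketch (stripping the trailing ORS the same way as the other answer) would be:
s=$(awk -f script.awk file.txt)
LIST="${s%|}"
echo "$LIST"
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*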
I'd do it like this.
script.sh containing the following code:
#!/bin/bash
while IFS='' read -r line; do
database=$(awk '{print $1}' <<< "$line")
table=$(awk '{print $2}' <<< "$line")
if [ "${table}" == '' ]; then
list="${list}|${database}*.*"
else
list="${list}|${database}.${table}.*"
fi
done < file.txt
list=$(cut -c 2- <<< "${list}")
echo "${list}"
exit 0
file.txt containing the following data:
database1 table1
database1 table4
database2
database3 table2
Script output is the following:
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
Tested in BASH version:
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
I am calling a Python command which returns data as JSON key-value pairs.
I have put the Python command and other commands in one shell script named a.sh.
Code (a.sh):
cd /home/drg/Code/dth
a=$(python3 main.py -z shell -y droub -i 56)
echo "$a"
When I call this script I get output like:
{'password': 'XYZ', 'name': 'Stguy', 'port': '5412', 'host': 'igtet', 'db_name': 'test3'}
After getting this output I want to pass values like password and name to the psql command to run a PostgreSQL query.
So what I want is to be able to store the password value in one variable, the name in another, and so on, like:
a=xyz
b=Stguy
p=port
so that I can use these variables in a psql query like:
psql -h $a -p $p -U $b -d $db -c "CREATE SCHEMA IF NOT EXISTS ${sname,,};"
Can someone please help me with this?
Note: the environment is Linux (CentOS 8).
Thanks in advance!
One way of solving this could be a combination of jq for value extraction and shell-builtin read for multiple variable assignment:
JSON='{"name": "Stguy", "port": 5412, "host": "igtet", "db_name": "test3"}'
read -r a b c <<<$( echo $JSON | jq -r '"\(.host) \(.port) \(.name)"' )
echo "a: $a, b: $b, c: $c"
This uses jq string interpolation "\( )" to print the result on one line.
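To connect this back to the psql call from the question, a minimal sketch could look like the following; it assumes the script's output is valid JSON (double quotes), uses illustrative variable names host/port/user/db, and assumes sname is set elsewhere:
JSON=$(python3 main.py -z shell -y droub -i 56)
read -r host port user db <<<$( echo "$JSON" | jq -r '"\(.host) \(.port) \(.name) \(.db_name)"' )
psql -h "$host" -p "$port" -U "$user" -d "$db" -c "CREATE SCHEMA IF NOT EXISTS ${sname,,};"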
You can also go with sed or awk:
PSQL="$( python3 main.py -z shell -y droub -i 56 | sed "s/^[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'[^:]*: *'\([^']*\)'}/psql -h '\4' -p '\1' -U '\2' -d '\5'/")"
[ "${PSQL:0:5}" = "psql " ] && ${PSQL} -c "CREATE SCHEMA IF NOT EXISTS ${sname,,};"
For security reasons, I urge you anyway to avoid passing account data (user, password) through environment variables.
It would be better if your python script had an option to directly launch psql with required parameters.
I am trying to store a MySQL result in a global bash array variable, but I don't know how to do it.
Should I save the MySQL command result to a file and read the file line by line in my for loop for further processing?
Example:
user password
Pierre aaa
Paul bbb
Command:
$results = $( mysql -uroot -ppwd -se "SELECT * from users" );
I want results to contain the two rows.
Mapfile: storing the whole table in one bash variable
You could try this:
mapfile result < <(mysql -uroot -ppwd -se "SELECT * from users;")
Then
echo ${result[0]%$'\t'*}
echo ${result[0]#*$'\t'}
or
for row in "${result[@]}"; do
echo Name: ${row%$'\t'*} pass: ${row#*$'\t'}
done
Note: this will work fine as long as there are only 2 fields per row. More is possible, but it becomes tricky.
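If a row has more than two fields, one sketch is to split that row into its own array with read -a:
IFS=$'\t' read -r -a fields <<< "${result[0]}"
echo "${fields[0]} ${fields[1]} ${fields[2]}" # third field, if the query returns one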
read: reading the table row by row
while IFS=$'\t' read name pass ;do
echo name:$name pass:$pass
done < <(mysql -uroot -ppwd -se "SELECT * from users;")
read in a loop to hold the whole table in many variables:
i=0
while IFS=$'\t' read name[i] pass[i++];do
:;done < <(mysql -uroot -ppwd -se "SELECT * from users;")
echo ${name[0]} ${pass[0]}
echo ${name[1]} ${pass[1]}
New (Feb 2018): shell connector
There is a little tool (on GitHub, or on my own site): shell_connector.sh, which you could use:
Some preparation:
cd /tmp/
wget -q http://f-hauri.ch/vrac/shell_connector.sh
. shell_connector.sh
newSqlConnector /usr/bin/mysql '-uroot -ppwd'
The following is just a demo; skip to the Operational Test below for a quick run.
That's all. Now, create a temporary table for the demo:
echo $SQLIN
3
cat >&3 <<eof
CREATE TEMPORARY TABLE users (
id bigint(20) unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(30), date DATE)
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
The command myMysql myarray ';' will send ; and then execute the inline command,
but as mysql won't answer anything, the variable $myarray won't exist.
cat >&3 <<eof
INSERT INTO users VALUES (1,'alice','2015-06-09 22:15:01'),
(2,'bob','2016-08-10 04:13:21'),(3,'charlie','2017-10-21 16:12:11')
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
Operational Test:
OK, now:
myMysql myarray "SELECT * from users;"
printf "%s\n" "${myarray[@]}"
1 alice 2015-06-09
2 bob 2016-08-10
3 charlie 2017-10-21
declare -p myarray
declare -a myarray=([0]=$'1\talice\t2015-06-09' [1]=$'2\tbob\t2016-08-10' [2]=$'3\tcharlie\t2017-10-21')
This tool is at an early stage of development... You have to manually clear your variables before re-using them:
unset myarray
myMysql myarray "SELECT name from users where id=2;"
echo $myarray
bob
declare -p myarray
declare -a myarray=([0]="bob")
If you're looking to get a global variable inside your script, you can simply assign a value to a variable name:
VARNAME=('var' 'name') # no space between the variable name and value
Doing this you'll be able to access VARNAME's value anywhere in your script after you initialize it.
If you want your variable to be shared between multiple scripts you have to use export:
script1.sh:
export VARNAME=('var' 'name')
echo ${VARNAME[0]} # will echo 'var'
script2.sh:
echo ${VARNAME[1]} # will echo 'name', provided that
# script1.sh was sourced prior to this one
NOTE that this only works when both scripts run in the same shell instance: script1.sh has to be sourced rather than executed, since bash does not pass exported arrays to child processes. If you want the variable available in every new shell, put its definition somewhere in .bashrc or .bash_profile.
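For instance, a minimal sketch of sharing the array by sourcing, using the same file names as above:
# script1.sh
VARNAME=('var' 'name')

# script2.sh
. ./script1.sh # source script1.sh so VARNAME is defined in this shell
echo "${VARNAME[1]}" # prints 'name'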
The answer from @F. Hauri seems really complicated.
https://stackoverflow.com/a/38052768/470749 helped me realize that I needed to use parentheses () wrapped around the query result to treat it as an array.
#You can ignore this function since you'll do something different.
function showTbl {
echo $1;
}
MOST_TABLES=$(ssh -vvv -t -i ~/.ssh/myKey ${SERVER_USER_AND_IP} "cd /app/ && docker exec laradock_mysql_1 mysql -u ${DB} -p${REMOTE_PW} -e 'SELECT table_name FROM information_schema.tables WHERE table_schema = \"${DB}\" AND table_name NOT LIKE \"pma_%\" AND table_name NOT IN (\"mail_webhooks\");'")
#Do some string replacement to get rid of the query result header and warning. https://stackoverflow.com/questions/13210880/replace-one-substring-for-another-string-in-shell-script
warningToIgnore="mysql\: \[Warning\] Using a password on the command line interface can be insecure\."
MOST_TABLES=${MOST_TABLES/$warningToIgnore/""}
headerToIgnore="table_name"
MOST_TABLES=${MOST_TABLES/$headerToIgnore/""}
#HERE WAS THE LINE THAT I NEEDED TO ADD! Convert the string to array:
MOST_TABLES=($MOST_TABLES)
for i in "${MOST_TABLES[@]}"; do
if [[ $i = *[![:space:]]* ]]
then
#Remove whitespace from value https://stackoverflow.com/a/3232433/470749
i="$(echo -e "${i}" | tr -d '[:space:]')"
TBL_ARR+=("$i")
fi
done
for t in "${TBL_ARR[@]}"; do
showTbl $t
done
This successfully shows me that ${TBL_ARR[@]} has all the values from the query result.
results=($( mysql -uroot -ppwd -se "SELECT * from users" ))
if [ "$?" -ne 0 ]
then
echo fail
exit
fi
I'm a command line newbie and I'm trying to figure out how I can add a header to multiple .csv files. The new header should have the following: 'TaxID' and 'filename'
I've tried multiple commands like sed, ed, awk and echo, but when they worked they only changed the first file found (even though I used *.csv in my command), and I could only manage this for TaxID.
Can anyone help me to get the filename into the header as well and do this for all my csv files?
(Note, I'm using a Mac)
Thank you!
Here's one way to do it, there are certainly others:
$ for i in *.csv;do echo $i;cp "$i" "$i.bak" && { echo "TaxID,$i"; cat "$i.bak"; } >"$i";done
Here's a sample run:
$ cat file1.csv
1,2
3,4
$ cat file2.csv
a,b
c,d
$ for i in *.csv;do echo $i;cp "$i" "$i.bak" && { echo "TaxID,$i"; cat "$i.bak"; } >"$i";done
file1.csv
file2.csv
$ cat file1.csv.bak
1,2
3,4
$ cat file1.csv
TaxID,file1.csv
1,2
3,4
$ cat file2.csv.bak
a,b
c,d
$ cat file2.csv
TaxID,file2.csv
a,b
c,d
Breaking it down:
$ for i in *.csv; do
This loops over all the files ending in .csv in the current directory. Each will be put in the shell variable i in turn.
echo $i;
This just echoes the current filename so you can see the progress. This can be safely left out.
cp "$i" "$i.bak"
Copy the current file (whose name is in i) to a backup. This is both to preserve the file if something goes awry, and gives subsequent commands something to copy from.
&&
Only run the subsequent commands if the cp succeeds. If you can't make a backup, don't continue.
{
Start a group command.
echo "TaxID,$i";
Output the desired header.
cat "$i.bak";
Output the original file.
}
End the group command.
>"$i";
Redirect the output of the group command (the new header and the contents of the original file) to the original file. This completes one file.
done
Finish the loop over all the files.
For fun, here are a couple of other ways (one of which JRD beat me to), including one using ed!
$ for i in *.csv;do echo $i;perl -p -i.bak -e 'print "TaxID,$ARGV\n" if $. == 1' "$i";done
$ for i in *.csv;do echo $i;echo -e "1i\nTaxID,$i\n.\nw\nq\n" | ed "$i";done
Here is one way in perl that modifies the files in place by adding a header of TaxID,{filename}, skipping the header if it thinks one already exists.
ls
a.csv b.csv
cat a.csv
1,a.txt
2,b.txt
cat b.csv
3,c.txt
4,d.txt
ls *.csv | xargs -I{} -n 1 \
perl -p -i -e 'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;' {}
cat a.csv
TaxID,a.csv
1,a.txt
2,b.txt
cat b.csv
TaxID,b.csv
3,c.txt
4,d.txt
You may want to create some backups of your files, or run on a few sample copies before running in earnest.
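One simple way to keep backups is a sketch of the same command that just tells perl to save each original with a .bak suffix:
ls *.csv | xargs -I{} -n 1 \
perl -p -i.bak -e 'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;' {}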
Explanatory:
List all files in the directory with the .csv extension
ls *.csv
"Pipe" the output of ls command into xargs so the perl command can run for each file. -I{} allows the filename to be subsequently referenced with {}. -n tells xargs to only pass 1 file at a time to perl.
| xargs -I{} -n 1
-p print each line of the input (file)
-i modifying the file in place
-e execute the following code
perl -p -i -e
Perl will implicitly loop over each line of the file and print it (due to -p). Print the header if we have not printed the header already and the current line doesn't already look like a header.
'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;'
This is replaced with the filename.
{}
All told, in this example the commands that end up being run would be:
perl -p -i -e 'print "TaxID,a.csv\n" if !m#^TaxID# && !$h; $h = 1;' a.csv
perl -p -i -e 'print "TaxID,b.csv\n" if !m#^TaxID# && !$h; $h = 1;' b.csv
I'm working on a bash script to back up MySQL. I need to read a series of strings from a file and pass them to a variable in my script. Example:
Something like this will be in the file (file.txt):
database1 table1
database1 table4
database2
database3 table2
My script needs to read the file and put these strings in a variable like:
#!/bin/bash
LIST="database1.table1|database1.table4|database2|database3.table2"
Edit: I changed my mind; now I need this output:
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
You could use tr to replace the newlines and spaces:
LIST=$(tr ' \n' '.|' < file.txt)
Since the last line of the input file most likely ends with a newline itself, that leaves a trailing separator you'd need to get rid of:
LIST=$(tr ' ' '.' < file.txt | paste -sd'|')
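Alternatively, a sketch that keeps the first tr pipeline and strips the trailing separator with a parameter expansion:
LIST=$(tr ' \n' '.|' < file.txt)
LIST="${LIST%|}"
echo "$LIST"
database1.table1|database1.table4|database2|database3.table2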
Using awk:
s=$(awk '{$1=$1}1' OFS='.' ORS='|' file)
LIST="${s%|}"
echo "$LIST"
database1.table1|database1.table4|database2|database3.table2
bash (version 4 I believe)
mapfile -t lines < file.txt # read lines of the file into an array
lines=("${lines[#]// /.}") # replace all spaces with dots
str=$(IFS='|'; echo "${lines[*]}") # join the array with pipe
echo "$str"
database1.table1|database1.table4|database2|database3.table2
mapfile -t lines < file.txt
for ((i=0; i<${#lines[@]}; i++)); do
[[ ${lines[i]} == *" "* ]] && lines[i]+=" *" || lines[i]+="* *"
done
str=$(IFS='|'; echo "${lines[*]// /.}")
echo "$str"
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
You can just replace the newlines with a character of your choice using sed, provided it doesn't occur in the data.
For example
FOO=$(sed '{:q;N;y/ /./;s/\n/|/g;t q}' /home/user/file.txt)
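If you also need the wildcard output from the edit, a sketch along the same lines (sed plus paste, assuming the database and table names themselves contain no dots or spaces) could be:
LIST=$(sed 's/\(.*\) \(.*\)/\1.\2.*/; s/^\([^.]*\)$/\1*.*/' file.txt | paste -sd'|')
echo "$LIST"
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*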
I am a beginner at shell scripting. I am trying to store the output of a Linux command in MySQL tables. I need to put the partition details in one column and the used % in another column. I nearly made it, but I get the output in a single column. In table test, disk is one column and used is another column. My desired output is:
DISK USED
filesystem 45%
but my actual output is like:
DISK USED
filesystem
45%
My code:
df -h | tee /home/abcd/test/monitor.text;
details=$(awk '{ print $1 } ' monitor.text);
echo $details;
used=$(awk '{ print $5}' monitor.text);
echo $used;
mysql test<<EOF;
INSERT INTO test_1 (details,used) VALUES ('$details','$used');
EOF
Please give me the correct code for the desired output. Thank you in advance.
Here is a script which captures the values of those 2 columns fine; you can then do the inserts.
#!/bin/sh
df -h | awk '{ print$1 " " $5} ' > monitor.txt
exec<monitor.txt
value=0
while read line
do
col1=`echo $line | cut -f1 -d " " `
col2=`echo $line | cut -f2 -d " " `
echo $col1
echo $col2 ;
done
Plug your insert statements inside the do-done loop.
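For instance, a minimal sketch of the inserts inside the loop, assuming a local test database with the test_1 (details, used) table from the question and skipping the df header line:
#!/bin/sh
df -h | awk 'NR>1 { print $1 " " $5 }' > monitor.txt
while read -r col1 col2
do
mysql test <<EOF
INSERT INTO test_1 (details, used) VALUES ('$col1', '$col2');
EOF
done < monitor.txt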