How to run a simple grep command in a Tcl script and get the output
grep B file1 > temp   # bash grep command that needs to be executed inside a Tcl script
file1 looks like this:
1 2 3 6 180.00 B
1 2 3 6 F
2 3 6 23 50.00 B
2 3 6 23 F
These do not work:
exec grep B file.txt > temp
child process exited abnormally
exec "grep B pes_test.com > temp1"
couldn't execute "grep -e B ./pes_test.com > temp1": no such file or directory
exec /bin/sh -c {grep -e B ; true} < pes_test.com > tmp1
works, but gives no output.
exec throws an error when the process exits with a non-zero status, and grep exits 1 when it finds no matching lines. See exec and the Tcl wiki:
try {
    set result [exec grep $pattern $file]
} on error {e} {
    # typically, pattern not found
    set result ""
}
Ref: try man page
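For the original goal of writing the matches to temp1, a minimal sketch along the same lines (file names as in the question) that treats grep's exit status 1, meaning no match, as an empty result rather than an error:
set status [catch {exec grep B pes_test.com > temp1} msg opts]
if {$status == 0} {
    # at least one line matched; temp1 now holds the output
} elseif {[lindex [dict get $opts -errorcode] 0] eq "CHILDSTATUS"
          && [lindex [dict get $opts -errorcode] 2] == 1} {
    # grep exit status 1: no lines matched, temp1 is empty
} else {
    # exit status > 1 (e.g. unreadable file) or another failure
    puts stderr $msg
}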
Let's assume that we have a file with the values as seen below:
% head test.csv
20220601,A,B,1
20220530,A,B,1
And we want to add two new columns, one with the date minus 1 day and one with minus 7 days, resulting in the following:
% head new_test.csv
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
The awk that was used to produce the above is:
% awk 'BEGIN{FS=OFS=","} { a="date -d \"$(date -d \""$1"\") -7 days\" +'%Y%m%d'"; a | getline st ; close(a) ;b="date -d \"$(date -d \""$1"\") -1 days\" +'%Y%m%d'"; b | getline cb ; close(b) ;print $1","$2","$3","st","cb","$4}' test.csv > new_test.csv
But after applying the above to a large file with more than 100K lines, it runs for 20 minutes. Is there any way to optimize the awk?
One GNU awk approach:
awk '
BEGIN { FS = OFS = ","
        secs_in_day = 60 * 60 * 24
      }
      { dt  = mktime( substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0" )
        dt1 = strftime("%Y%m%d", dt - secs_in_day)
        dt7 = strftime("%Y%m%d", dt - (secs_in_day * 7))
        print $1, $2, $3, dt7, dt1, $4
      }
' test.csv
This generates:
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
NOTES:
requires GNU awk for the mktime() and strftime() functions; see GNU awk time functions for more details
other flavors of awk may have similar functions; YMMV
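If you are unsure which awk you have, a quick probe for the time functions (a sketch; it relies on awks that lack strftime() aborting with an error):
awk 'BEGIN { exit (strftime("%Y") == "") }' 2>/dev/null && echo "time functions available"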
You can try using function calls; it is faster than building the command string inline each time.
awk -F, '
function cmd1(date,    a, st) {
       a = "date -d \"$(date -d \"" date "\") -1 days\" +'%Y%m%d'"
       a | getline st
       close(a)
       return st
}
function cmd2(date,    b, cm) {
       b = "date -d \"$(date -d \"" date "\") -7 days\" +'%Y%m%d'"
       b | getline cm
       close(b)
       return cm
}
{
       $5 = cmd2($1)    # minus 7 days
       $6 = cmd1($1)    # minus 1 day
       print $1, $2, $3, $5, $6, $4
}' OFS=, test > newFileTest
I executed this against a file with 20,000 records and it finished in seconds, compared to around 5 minutes for the original awk.
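If the same date occurs on many lines, memoizing the results should cut the cost further, since date is then forked only once per distinct input date. A sketch assuming GNU date, which accepts the YYYYMMDD form directly:
awk '
BEGIN { FS = OFS = "," }
{
    if (!($1 in m7)) {   # shell out only once per distinct date
        cmd = "date -d \"" $1 " -7 days\" +%Y%m%d"
        cmd | getline m7[$1]; close(cmd)
        cmd = "date -d \"" $1 " -1 day\" +%Y%m%d"
        cmd | getline m1[$1]; close(cmd)
    }
    print $1, $2, $3, m7[$1], m1[$1], $4
}' test.csv > new_test.csv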
I'm looking to run a command a given number of times in an Alpine Linux docker container which features the /bin/ash shell.
In Bash, this would be
bash-3.2$ for i in {1..3}
> do
> echo "number $i"
> done
number 1
number 2
number 3
However, the same syntax doesn't seem to work in ash:
> docker run -it --rm alpine /bin/ash
/ # for i in 1 .. 3
> do echo "number $i"
> done
number 1
number ..
number 3
/ # for i in {1..3}
> do echo "number $i"
> done
number {1..3}
/ #
I had a look at https://linux.die.net/man/1/ash but wasn't able to easily find out how to do this; does anyone know the correct syntax?
I ended up using seq with command substitution:
/ # for i in $(seq 10)
> do echo "number $i"
> done
number 1
number 2
number 3
number 4
number 5
number 6
number 7
number 8
number 9
number 10
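seq here is provided by BusyBox, which stock Alpine images ship, so nothing extra needs to be installed; the two-argument form mirrors the bash range:
/ # for i in $(seq 1 3); do echo "number $i"; done
number 1
number 2
number 3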
Simply, as with bash or plain sh:
$ ash -c "for i in a b c 1 2 3; do echo i = \$i; done"
output:
i = a
i = b
i = c
i = 1
i = 2
i = 3
Another POSIX-compatible alternative, which avoids generating the whole list up front with a potentially slow expansion, is a plain counter loop:
i=1; while [ "$i" -le 3 ]; do
    echo "$i"
    i=$(( i + 1 ))
done
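Put together for the container from the question, as a one-liner (same image and shell as in the original post):
docker run --rm alpine /bin/ash -c 'i=1; while [ "$i" -le 3 ]; do echo "number $i"; i=$((i + 1)); done'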
Pretend I have a MySQL table test that looks like:
+----+---------------------+
| id | value |
+----+---------------------+
| 1 | Hello World |
| 2 | Foo Bar |
| 3 | Goodbye Cruel World |
+----+---------------------+
And I execute the query SELECT id, value FROM test.
How would I assign each column to a variable in Bash using read?
read -a truncates everything after the first space in value:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -a row;
do
id="${row[0]}"
value="${row[1]}"
echo "$id : $value"
done;
and output looks like:
1 : Hello
2 : Foo
3 : Goodbye
but I need it to look like:
1 : Hello World
2 : Foo Bar
3 : Goodbye Cruel World
I'm aware there are args I could pass to MySQL to format the results in table format, but I need to parse each value in each row. This is just a simplified example of my problem.
Use individual fields in the read loop instead of the array:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -r id value;
do
echo "$id : $value"
done
This makes sure that the first column is read into id and everything else into value; that's how read behaves when the input has more fields than the number of variables being read into. If there are more columns to be read, using a delimiter (such as #) that doesn't clash with the actual data helps:
mysql -D "jimmy" -NBe "SELECT CONCAT(id, '#', value, '#', column3) FROM test" | while IFS='#' read -r id value column3;
do
echo "$id : $value : $column3"
done
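Since mysql's batch output (-NBe) separates columns with tabs, splitting on tabs alone with bash's $'\t' also preserves the spaces inside values, with no artificial delimiter needed. A sketch against the question's table:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while IFS=$'\t' read -r id value; do
    echo "$id : $value"
done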
You can also do this; it avoids piping a command into the while read loop, which would run the loop in a subshell.
while read -r line; do
    id=$(echo "$line" | awk '{print $1}')
    value=$(echo "$line" | awk '{$1=""; sub(/^[ \t]+/, ""); print}')
    echo "ID: $id"
    echo "VALUE: $value"
done < <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
If you want to store all the ids and values in an array for later use, you can modify it to look like this:
#!/bin/bash
declare -A arr
while read -r line; do
    id=$(echo "$line" | awk '{print $1}')
    value=$(echo "$line" | awk '{$1=""; sub(/^[ \t]+/, ""); print}')
    arr[$id]=$value
done < <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
for key in "${!arr[@]}"; do
    echo "$key: ${arr[$key]}"
done
Which gives you this output:
dumbledore@ansible1a [OPS]:~/tmp/tmp > bash test.sh
1: Hello World
2: Foo Bar
3: Goodbye Cruel World
I'm uploading a CSV file using this script:
export IFS=","
cat $_csv_files | read a b c d;
Now I need the values in column c of the CSV file to be inserted into the column manufacture_name of the table manufacturemap in my database. How will I accomplish that?
When I tried the code below,
mysql -u $_db_user -p$_db_password $_db << eof
INSERT INTO \`manufacturemap\`
( \`manufacture_name\`) VALUES ($c)
eof
I get:
ERROR 1136 (21S01) at line 1: Column count doesn't match value count at row 1
I've been stuck here for the past few hours. Please help me.
Input (CSV file):
a,b,c,d
1.01100156278101E+15,2014/07/08,2014/07/08,"Cash Withdrawal by Cheque-173320--TT1421957901"
1.01100156278101E+15,2014/07/08,2014/07/08,"Cheque Paid-173261--TT1421951241"
1.01100156278101E+15,2014/07/08,2014/07/08,"Cheque Paid-173298--TT1421951226"
1.01100156278101E+15,2014/06/08,2014/06/08,"Cash Withdrawal by Cheque-173319--TT1421858465"
Try this:
#! /bin/sh
values ()
{
    cat "$@" | \
    while IFS=, read -r a b c d; do
        printf "('%s')\n" "$c"
    done | \
    paste -sd, -
}

printf 'INSERT INTO `manufacturemap` (`manufacture_name`) VALUES %s\n' "$(values $_csv_files)" | \
    mysql -u"$_db_user" -p"$_db_password" "$_db"
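Note that the sample input starts with a header line (a,b,c,d); if that row should not be inserted, drop it before the loop, for example with tail:
values ()
{
    # same as above, but skip the CSV header line first
    cat "$@" | tail -n +2 | \
    while IFS=, read -r a b c d; do
        printf "('%s')\n" "$c"
    done | \
    paste -sd, -
}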
Is it possible, within a bash script, to have a long printf command span multiple lines?
My command is something like this, and I would like it to be more readable.
The braces are there because it's actually part of an awk block.
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" | \
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\t\t\t<td class=\"d\">%s</td>\n\t\t\t<td class=\"m\">%s</td>\n\t\t</tr>\n", $1, $2 }' | vim -
In awk, you can use line-continuation characters to split the string across multiple lines.
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\
\t\t\t<td class=\"d\">%s</td>\n\
\t\t\t<td class=\"m\">%s</td>\n\
\t\t</tr>\n", $1, $2 }' | vim -
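Another option that keeps each output line on its own source line, without continuation characters, is one printf call per line (same query as above):
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
awk 'BEGIN { FS = "|" }
{
    printf "\t\t<tr>\n"
    printf "\t\t\t<td class=\"d\">%s</td>\n", $1
    printf "\t\t\t<td class=\"m\">%s</td>\n", $2
    printf "\t\t</tr>\n"
}' | vim -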
Or, instead of using awk, you can process the output of sqlite line-by-line in bash:
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
while IFS='|' read -r col1 col2; do
printf '\t\t<tr>
\t\t\t<td class="d">%s</td>
\t\t\t<td class="m">%s</td>
\t\t</tr>\n' "$col1" "$col2"
done | vim -
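In the bash version you can also keep the format string in a variable so the loop body stays on one line (a sketch; printf interprets the escapes when the variable is expanded as the format):
fmt='\t\t<tr>\n\t\t\t<td class="d">%s</td>\n\t\t\t<td class="m">%s</td>\n\t\t</tr>\n'
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
while IFS='|' read -r col1 col2; do
    printf "$fmt" "$col1" "$col2"
done | vim -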