How to specify *one* tab as field separator in AWK?

The default handling of whitespace field separators in AWK, such as tab when using FS = "\t", seems to be "one or many". Therefore, if you want to read in a tab-separated file with null values in some columns (other than the last), it skips over them. For example:
1 "\t" 2 "\t" "" "\t" 4 "\t" 5
$3 would refer to 4, not the null "" even though there are clearly two tabs.
What should I do so that I can specify the field separator to be one tab only, so that $4 would refer to 4 and not 5?

echo '1 "\t" 2 "\t" "" "\t" 4 "\t" 5' | awk -F"\t" '{print "$3="$3 , "$4="$4}'
output
$3=" "" " $4=" 4 "
So you can remove the double quotes from your original string, and get
echo '1\t2\t\t4\t5' | awk -F"\t" '{print "$3="$3 , "$4="$4}'
output2
$3= $4=4
You're right, the default FS is whitespace, with the caveat that adjacent space and tab characters together qualify as one FS instance. So to use just "\t" as your FS, you can pass it on the command line as above, or you can include an explicit assignment to FS, usually done in a BEGIN block, like
echo '1 "\t" 2 "\t" "" "\t" 4 "\t" 5' | awk 'BEGIN{FS="\t"}{print "$3="$3 , "$4="$4}'
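As a quick sanity check with real tab characters (printf expands \t more reliably than echo, whose behaviour varies between shells):
printf '1\t2\t\t4\t5\n' | awk -F'\t' '{print NF, "$3=["$3"]", "$4=["$4"]"}'
which should print: 5 $3=[] $4=[4]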
IHTH

Related

Subtract fixed number of days from date column using awk and add it to new column

Let's assume that we have a file with the values as seen below:
% head test.csv
20220601,A,B,1
20220530,A,B,1
And we want to add two new columns, one with the date minus 1 day and one with minus 7 days, resulting in the following:
% head new_test.csv
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
The awk that was used to produce the above is:
% awk 'BEGIN{FS=OFS=","} {
    a="date -d \"$(date -d \""$1"\") -7 days\" +'%Y%m%d'"; a | getline st ; close(a)
    b="date -d \"$(date -d \""$1"\") -1 days\" +'%Y%m%d'"; b | getline cb ; close(b)
    print $1","$2","$3","st","cb","$4
  }' test.csv > new_test.csv
But after applying the above to a large file with more than 100K lines, it runs for 20 minutes; is there any way to optimize the awk?
One GNU awk approach:
awk '
BEGIN { FS=OFS=","
secs_in_day = 60 * 60 * 24
}
{ dt = mktime( substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0" )
dt1 = strftime("%Y%m%d",dt - secs_in_day )
dt7 = strftime("%Y%m%d",dt - (secs_in_day * 7) )
print $1,$2,$3,dt7,dt1,$4
}
' test.csv
This generates:
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
NOTES:
requires GNU awk for the mktime() and strftime() functions; see GNU awk time functions for more details
other flavors of awk may have similar functions, ymmv
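As a quick check of the mktime()/strftime() round trip used above (GNU awk assumed; the 12:00 time of day keeps DST changes from shifting the date):
gawk 'BEGIN{ t = mktime("2022 06 01 12 0 0"); print strftime("%Y%m%d", t - 86400), strftime("%Y%m%d", t - 7*86400) }'
which should print: 20220531 20220525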
You can try using function calls; it is faster than the original approach of building the command strings inline on every line.
awk -F, '
function cmd1(date){
    a = "date -d \"$(date -d \"" date "\") -1days\" +'%Y%m%d'"
    a | getline st
    close(a)            # close the pipe so a fresh command runs for the next line
    return st
}
function cmd2(date){
    b = "date -d \"$(date -d \"" date "\") -7days\" +'%Y%m%d'"
    b | getline cm
    close(b)
    return cm
}
{
    $5 = cmd1($1)
    $6 = cmd2($1)
    print $1","$2","$3","$5","$6","$4
}' OFS=, test > newFileTest
I executed this against a file with 20000 records and it finished in seconds, compared to the original awk, which took around 5 minutes.
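If the same date appears on many lines, another option (a sketch of my own, untested against the original data, GNU date assumed) is to cache the results per distinct date, so each external date call runs at most once per value:
awk 'BEGIN{FS=OFS=","}
{
    if (!($1 in m1)) {                               # first time this date is seen: shell out once
        c = "date -d \"" $1 " -1 day\" +%Y%m%d"; c | getline m1[$1]; close(c)
        c = "date -d \"" $1 " -7 day\" +%Y%m%d"; c | getline m7[$1]; close(c)
    }
    print $1, $2, $3, m7[$1], m1[$1], $4
}' test.csv > new_test.csv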

How can I clean a TSV file having record or field separators in one of its fields?

Given a TSV file where col2 can contain a field or record separator (FS/RS), i.e. respectively a tab or a newline, escaped by being surrounded with quotes.
$ printf '%b\n' 'col1\tcol2\tcol3' '1\t"A\tB"\t1234' '2\t"CD\nEF"\t567' | \cat -vet
col1^Icol2^Icol3$
1^I"A^IB"^I1234$
2^I"CD$
EF"^I567$
+------+---------+------+
| col1 | col2    | col3 |
+------+---------+------+
| 1    | "A B"   | 1234 |
| 2    | "CD     | 567  |
|      | EF"     |      |
+------+---------+------+
Is there a way in sed/awk/perl or even (preferably) miller/mlr to transform those pesky characters into spaces in order to generate the following result:
+------+---------+------+
| col1 | col2    | col3 |
+------+---------+------+
| 1    | "A B"   | 1234 |
| 2    | "CD EF" | 567  |
+------+---------+------+
I cannot get miller 6.2 to make the proper transformation (tried with DSL put/gsub) because it doesn't recognize the tab or CR/LF as being part of the columns, which breaks the field count:
$ printf '%b\n' 'col1\tcol2\tcol3' '1\t"A\tB"\t1234' '2\t"CD\nEF"\t567' | mlr --opprint --barred --itsv cat
mlr : mlr: CSV header/data length mismatch 3 != 4 at filename (stdin) line 2.
A good library cleanly handles things like embedded newlines and quoted separators in fields.
In a Perl script with Text::CSV
use warnings;
use strict;
use Text::CSV;
my $file = shift // die "Usage: $0 filename\n";
my $csv = Text::CSV->new( { binary => 1, sep_char => "\t", auto_diag => 1 } );
open my $fh, '<', $file or die "Can't open $file: $!";
while (my $row = $csv->getline($fh)) {
    s/\s+/ /g for @$row;   # collapse multiple spaces, tabs, newlines in each field
    $csv->say(*STDOUT, $row);
}
Note the many other options for the constructor that can help handle various irregularities.
This can fit in a one-liner; its functional interface (with csv) is particularly well suited for that.
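For example, saving the script as clean_tsv.pl (the file name is just for illustration) and feeding it the sample data:
printf '%b\n' 'col1\tcol2\tcol3' '1\t"A\tB"\t1234' '2\t"CD\nEF"\t567' > sample.tsv
perl clean_tsv.pl sample.tsv
The embedded tab and newline in col2 should come out as single spaces.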
If you run
printf '%b\n' 'col1\tcol2\tcol3' '1\t"A\tB"\t1234' '2\t"CD\nEF"\t567' | \
mlr --c2t --fs "\t" clean-whitespace
you get
col1    col2    col3
1       A B     1234
2       CD EF   567
I'm using mlr 6.2.
A way to do it in miller 5 is to simply use the put verb:
printf '%b\n' 'col1\tcol2\tcol3' '1\t"A\tB"\t1234' '2\t"CD\nEF"\t567' | \
mlr --tsv put -S 'for (k in $*) {$[k] = gsub($[k], "\n", " ")}' then clean-whitespace
perl -MText::CSV_XS=csv -e'
    csv
        in => *ARGV,
        on_in => sub { s/\s+/ /g for @{$_[1]} },
        sep_char => "\t";
'
Or s/[\t\n]/ /g if you prefer.
Can be placed all on one line.
Input is accepted from file named by argument or STDIN.
With GNU awk for multi-char RS, RT, and gensub():
$ awk -v RS='"([^"]|"")*"' '{ORS=gensub(/[\n\t]/," ","g",RT)} 1' file
col1    col2    col3
1       "A B"   1234
2       "CD EF" 567
The above just uses RS to isolate each "..." string and saves it in RT, then replaces every \n or \t in that string with a blank and saves the result in ORS, then prints the record.
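If RT is unfamiliar, here is a minimal illustration of the mechanism, separate from the task at hand (gawk only): with a regex RS, each record is the text up to the next match, and RT holds the matched text itself.
printf 'a "x y" b "p q" c\n' | gawk -v RS='"[^"]*"' '{printf "rec %d: [%s] RT=[%s]\n", NR, $0, RT}'
The two quoted strings land in RT, while the surrounding text becomes the records.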
You absolutely don't need gawk to get this done; here's one that works for mawk, gawk, or macOS nawk:
INPUT
--before--
col1    col2    col3
1       "A      B"      1234
2       "CD
EF"     567
CODE
{m,n,g}awk '
BEGIN {
    __=substr((OFS=FS="\t\"")(FS)(ORS=_)\
       (RS = "^$"),_+=_^=_<_,_)
}
END {
    printbefore()
    for (_^=_<_; _<=NF; _++) {
        sub(/[\t-\r]+/, ($_~__)?" ":"&", $_)
    }
    print
}
function printbefore(_)
{
    printf("\n\n--before--\n%s\n------"\
           "------AFTER------\n\n", $+_)>("/dev/stderr")
}'
OUTPUT
------AFTER (using mawk)------
col1    col2    col3
1       "A B"   1234
2       "CD EF" 567
Strip out the printbefore() part, which is there mostly for debugging purposes, and then it's just
{m,n,g}awk '
BEGIN { __=substr((OFS=FS="\t\"") FS \
(ORS=_) (RS="^$"),_+=_^=_<_,_)
} END {
for(--_;_<=NF;_++) {
sub(/[\t-\r]+/, $_~__?" ":"&",$_) } print }'

HTML in unix shell scripting with sequential output

I have a script called "main.ksh" which produces an "output.txt" file, and I am sending that file via mail (the list contains 50+ records; I give just a few records as an example).
The mail output I am getting is:
DATE | FEED NAMEs | FILE NAMEs | JOB NAMEs | SCHEDULED_TIME| TIMESTAMP| SIZE(MB)| COUNT| STATUS |
Dec 17 INVEST_AI_FUNDS_FEED amlfunds_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai TUE-SAT 02:03 0.4248 4031 On_Time
Dec 17 INVEST_AI_SECURITIES_FEED amltxn_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai TUE-SAT 02:03 0.0015 9 On_Time
Dec 17 INVEST_AI_CONNECTED_PARTIES_FEED amlbene_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai TUE-SAT 02:03 0.0001 1 No_Records
I am implementing coloring for the Delayed, On_Time and No_Records values in the STATUS field, and I wrote the script below, which gives me the bottom output (the coloring is correct, but the spacing between columns is not preserved).
awk 'BEGIN {
    print "<html>" \
          "<body bgcolor=\"#333\" text=\"#f3f3f3\">" \
          "<pre>"
}
NR == 1 { print $0 }
NR > 1 {
    if      ($NF == "Delayed")    color="red"
    else if ($NF == "On_Time")    color="green"
    else if ($NF == "No_Records") color="yellow"
    else                          color="#003abc"
    $NF="<span style=\"color:" color "\">" $NF "</span>"
    print $0
}
END {
    print "</pre>" \
          "</body>" \
          "</html>"
}
' output.txt > output.html
output with perfect coloring:
| DATE | FEED NAMEs | FILE NAMEs | JOB NAMEs | SCHEDULED_TIME| TIMESTAMP| SIZE(MB)| COUNT| STATUS |
Dec 17 INVEST_AI_FUNDS_FEED amlfunds_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai On_Time
Dec 17 INVEST_AI_SECURITIES_FEED amltxn_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai On_Time
Dec 17 INVEST_AI_CONNECTED_PARTIES_FEED amlbene_iai_20161217.txt gdcpl3392_uxmow080_ori_inv_ai No_Records
There are 4 columns skipped automatically. Could you please help me with this? Thanks a lot!
When your code executes this
$NF="<span style=\"color:" color "\">" $NF "</span>"
print $0
the input line is rebuilt, and therefore the multiple blanks between two consecutive fields are each replaced by just one blank space (the default OFS).
My solution copies the input line into a variable, deletes the last field from that copy (changing the value of the variable, not the input line), appends the modified last field, and prints:
Dummy=$0
sub("[^ ]+$","",Dummy) # removes last field
Dummy=Dummy "<span style=\"color:" color "\">" $NF "</span>"
print Dummy
update: the last two code lines can be reduced in this way:
print Dummy "<span style=\"color:" color "\">" $NF "</span>"
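Putting that into the original script, the NR > 1 block would become something like this (a sketch based on the approach above; the rest of the script stays the same):
NR > 1 {
    if ($NF == "Delayed") color="red"
    else if ($NF == "On_Time") color="green"
    else if ($NF == "No_Records") color="yellow"
    else color="#003abc"
    Dummy = $0                      # keep the original spacing intact
    sub(/[^ ]+$/, "", Dummy)        # drop the last field from the copy
    print Dummy "<span style=\"color:" color "\">" $NF "</span>"
}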

python csv print first 10 rows only

I am working with a large CSV file with a lot of rows and columns. I need only the first 5 columns but only if the value for column 1 of each row is 1. (Column 1 can only have value 0 or 1).
So far I can print out the first 5 columns but can't filter to only show when column 1 is equal to 1. My .awk file looks like:
BEGIN {FS = ","}
NR!=1 {print $1", " $2", " $3", "$4", "$5}
I have tried things like $1>1 but with no luck; the output is always every row, regardless of whether the first column of each row is a 0 or 1.
Modifying your awk a bit:
BEGIN {FS = ","; OFS = ", "}
$1 == 1 {print $1, $2, $3, $4, $5; n++}
n == 10 {exit}
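The same thing as a one-liner, assuming the data lives in a file called input.csv (the name is just for illustration):
awk -F, -v OFS=', ' '$1 == 1 { print $1, $2, $3, $4, $5; if (++n == 10) exit }' input.csv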

Awk: How to cut similar part of 2 fields and then get the difference of remaining part?

Let say I have 2 fields displaying epoch time in microseconds:
1318044415123456,1318044415990056
What I wanted to do is:
Cut the common part from both fields: "1318044415"
Get the difference of the remaining parts: 990056 - 123456 = 866600
Why am I doing this? Because awk uses IEEE 754 floating point, not 64-bit integers, and I need the difference between the epoch times of two events in microseconds.
Thanks for any help!
EDIT:
Finally I found the largest number Awk could handle on Snow Leopard 10.6.8: 9007199254740992.
Try this: echo '9007199254740992' | awk -F ',' '{print $1 + 0}'
The version of Awk was 20070501 (produced by awk --version)
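That value is 2^53, the largest integer an IEEE 754 double can represent exactly; one step above it already collapses on any awk that stores numbers as doubles, which a quick test should show:
awk 'BEGIN { printf "%.0f %.0f\n", 9007199254740992, 9007199254740993 }'
typically prints 9007199254740992 twice.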
Here is an awk script that meets your requirements:
BEGIN {
    FS = ","
}
{
    s1 = $1
    s2 = $2
    # strip the common leading digits from both fields
    while (length(s1) > 1 && substr(s1, 1, 1) == substr(s2, 1, 1))
    {
        s1 = substr(s1, 2)
        s2 = substr(s2, 2)
    }
    n1 = s1 + 0
    n2 = s2 + 0
    print n2 - n1
}
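Saved as, say, epochdiff.awk (the file name is mine), the sample input gives the expected microsecond difference:
echo '1318044415123456,1318044415990056' | awk -f epochdiff.awk
866600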