Create CSV file from output line

I have the output line below, and from it I want to create a CSV file. The CSV should contain the whole line in the first column and the string after the second delimiter ":" in the second column. I am using the script below, but it splits the data wherever "," is present. Please help me format the data properly.
Output line:
:/home/nagios/NaCl/files/chk_raid.pl:token=$$value=undef;next};my($lhys,$lytrn,$ccdethe
Shell script:
input="out.txt"
while IFS= read -r LINES
do
#echo "$LINES"
if [[ $LINES = /* ]]
then
filename=$(echo "$LINES" | cut -d ":" -f1)
echo "$LINES,$filename" >> out.csv
fi
done < "$input"

I don't think I understand your question correctly.
You currently have this output
:/home/nagios/NaCl/files/chk_raid.pl:token=$$value=undef;next};my($lhys,$lytrn,$ccdethe
And you would like to have this kind of CSV output
column 2: /home/nagios/NaCl/files/chk_raid.pl:token=13704value=undef;next};my($lhys,$lytrn,$ccdethe
column 3: token=13704value=undef;next};my($lhys,$lytrn,$ccdethe
If that's what you want, you can use Miller like this
echo ":/home/nagios/NaCl/files/chk_raid.pl:token=$$value=undef;next};my(\$lhys,\$lytrn,\$ccdethe" | mlr --n2c --ifs ":" cut -x -f 1 then put '$2=$2.":".$3'
and you will get this two-column CSV:
2,3
"/home/nagios/NaCl/files/chk_raid.pl:token=13704value=undef;next};my($lhys,$lytrn,$ccdethe","token=13704value=undef;next};my($lhys,$lytrn,$ccdethe"

Related

CSV Column Insertion via awk

I am trying to insert a column in front of the first column in a comma separated value (CSV) file. At first blush, awk seems to be the way to go, but I'm struggling with how to fill in the new column as it moves down the rows.
CSV File
A,B,C,D,E,F
1,2,3,4,5,6
2,3,4,5,6,7
3,4,5,6,7,8
4,5,6,7,8,9
Attempted Code
awk 'BEGIN{FS=OFS=","}{$1=$1 OFS (FNR<1 ? $1 "0\nA\n2\nC" : "col")}1'
Result
A,col,B,C,D,E,F
1,col,2,3,4,5,6
2,col,3,4,5,6,7
3,col,4,5,6,7,8
4,col,5,6,7,8,9
Expected Result
col,A,B,C,D,E,F
0,1,2,3,4,5,6
A,2,3,4,5,6,7
2,3,4,5,6,7,8
C,4,5,6,7,8,9
This can be easily done using paste + printf:
paste -d, <(printf "col\n0\nA\n2\nC\n") file
col,A,B,C,D,E,F
0,1,2,3,4,5,6
A,2,3,4,5,6,7
2,3,4,5,6,7,8
C,4,5,6,7,8,9
<(...) is process substitution available in bash. For other shells use a pipeline like this:
printf "col\n0\nA\n2\nC\n" | paste -d, - file
With awk only, you could try the following solution, written and tested with the shown samples.
awk -v value="$(echo -e "col\n0\nA\n2\nC")" '
BEGIN{
  FS=OFS=","
  num=split(value,arr,ORS)
  for(i=1;i<=num;i++){
    newVal[i]=arr[i]
  }
}
{
  $1=arr[FNR] OFS $1
}
1
' Input_file
Explanation:
First, create an awk variable named value whose value is the output of the shell's echo command. NOTE: the -e option ensures echo treats \n as newlines rather than literal characters.
Then, in the BEGIN section of the awk program, set FS and OFS to , for all lines of Input_file.
Use the split function on the value variable to fill an array named arr, with ORS (newline) as the delimiter.
Then loop from 1 to num (the total number of values produced by echo), creating an array named newVal indexed by i (1, 2, 3 and so on) whose values are the arr values.
In the main awk program, prepend the matching arr value and OFS to the first field, then print the line.
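As a side note, the newVal array above is populated but never read; a trimmed sketch of the same approach (my simplification, not part of the original answer) could be:
awk -v value="$(printf 'col\n0\nA\n2\nC')" '
BEGIN { FS = OFS = ","; split(value, arr, "\n") }  # load the new column values once
{ $1 = arr[FNR] OFS $1 }                           # prepend the FNR-th value to field 1
1                                                  # print the rebuilt line
' Input_file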

transform multiline text into csv with awk sed and grep

I run a shell command that returns a list of repeated values like this (note the indentation):
Name: vm346
cpu 1 (12%) 6150m (76%)
memory 1130Mi (7%) 1130Mi (7%)
Name: vm847
cpu 6 (75%) 30150m (376%)
memory 12980Mi (87%) 12980Mi (87%)
Name: vm848
cpu 3500m (43%) 17150m (214%)
memory 6216Mi (41%) 6216Mi (41%)
I am trying to transform that data like this (in csv):
vm346,1,(12%),6150m,(76%),1130Mi,(7%),1130Mi,(7%)
vm847,6,(75%),30150m,(376%),12980Mi,(87%),12980Mi,(87%)
vm848,3500m,(43%),17150m,(214%),6216Mi,(41%),6216Mi,(41%)
The problem is that any given dataset like the one above is always on more than one line.
When I pipe that into awk it drives me mad, because even if I use:
BEGIN{ FS="\n" }
to try to stitch the data together into one line, it doesn't work. No matter what I do, awk keeps the name value on a separate line above everything else.
I am sorry I haven't much code to share but I have been spinning my wheels with this for a few hours now and I am running out of ideas...
I can solve this in Perl:
perl -ane 'print join ",", @F[1 .. $#F]; print $F[0] eq "memory" ? "\n" : ","'
It should be easy to translate it to awk if you need it.
How does it work?
-a splits each line on whitespace into the @F array
-n reads the input line by line and runs the code specified after -e for each line
We print all the elements but the first one separated by commas (see join)
We then look at the first column, if it's memory, we are at the last line of the block, so we print a newline, otherwise we print a comma
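For reference, a rough awk translation of that Perl logic (my sketch, not from the original answer):
awk '{
  out = $2                                  # join fields 2..NF with commas
  for (i = 3; i <= NF; i++) out = out "," $i
  # "memory" marks the last line of a block: end the record, otherwise continue it
  printf "%s%s", out, ($1 == "memory" ? "\n" : ",")
}' file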
With AWK, one option is to set RS to "Name: ", and ignore the first record with NR > 1, e.g.
awk -v RS="Name: " 'BEGIN{OFS=","} NR > 1 {print $1, $3, $4, $5, $6, $8, $9, $10, $11}' file
#> vm346,1,(12%),6150m,(76%),1130Mi,(7%),1130Mi,(7%)
#> vm847,6,(75%),30150m,(376%),12980Mi,(87%),12980Mi,(87%)
#> vm848,3500m,(43%),17150m,(214%),6216Mi,(41%),6216Mi,(41%)
awk '{$1=""}1' | paste -sd' \n' - | awk '{$1=$1}1' OFS=,
Get rid of the first column. Join every three rows. Same idea with sed:
sed 's/^ *[^ ]* *//' | paste -sd' \n' - | sed 's/ */,/g'
Something else:
awk -v OFS=, '
$1=="Name:" {
sep=ors
ors=ORS
} {
for (i=2;i<=NF;++i) {
printf "%s%s",sep,$i
sep=OFS
}
} END {printf "%s",ors}'
Or if you want to print an ORS based on the first field being "memory" (note that this program may end without printing a terminating ORS):
awk -v OFS=, '{for (i=2;i<=NF;++i) printf "%s%s",$i,(i==NF && $1=="memory" ? ORS : OFS)}'
something else else:
awk -v OFS=, '
index($0,$1)==1 {
OFS=ors
ors=ORS
} {
$1=""
printf "%s",$0
OFS=ofs
} END {printf "%s",ors} BEGIN {ofs=OFS}'
This might work for you (GNU sed):
sed -nE '/^ +\S+ +/{s///;H;$!d};x;/./s/\s+/,/gp;x;s/^\S+ +//;h' file
In overview the sed program processes indented lines, already gathered lines (except in the case that the current line is the first line of the file) and non-indented lines.
Turn off implicit printing and enable extended regexp's. (-nE).
If the current line is indented, remove the indent, the first field and any following spaces, append the result to the hold space and if it is not the last line, delete it.
Otherwise, check the hold space for gathered lines and if found, replace one or more whitespaces by commas and print the result. Then prep the current line by removing the first field and any following spaces and replace the hold space with the result.
The solution seems logically back-to-front, but programming in this style avoids having to check for end-of-file multiple times and invoking labels and gotos.
N.B. This solution will work for any number of indented lines.
Here is a ruby to do that:
ruby -e '
s=$<.read
s.scan(/^([^ \t]+:)([\s\S]+?)(?=^\1|\z)/m). # parse blocks
  map(&:last).                              # get data part
  # parse and join the data fields:
  map{|block| block.split(/\n[ \t]+[^ \t]+[ \t]+/)}.
  map{|lines| lines.map(&:strip).join(" ").split().join(",")}.
  each{|l| puts l}
' file
vm346,1,(12%),6150m,(76%),1130Mi,(7%),1130Mi,(7%)
vm847,6,(75%),30150m,(376%),12980Mi,(87%),12980Mi,(87%)
vm848,3500m,(43%),17150m,(214%),6216Mi,(41%),6216Mi,(41%)
The advantage is that this is not dependent on the number of lines or the number of fields. It is parsing data that is in blocks of the form:
START: ([ \t]+[data_with_no_space])*\n
l1 ([ \t]+[data_with_no_space])*\n
...
START:
...
Works this way:
Parse the blocks with the regex shown above;
Save an array of the data elements;
Join the sub arrays and then split into data fields;
Join(',') to make a csv.

Extract column data from csv file based on row values

I am trying to use awk/sed to extract specific column data based on row values. My actual files have 15 columns and over 1,000 rows (from a .csv file).
Simple example: the input is a csv file with a total of 5 columns and 100 rows; the output is data from columns 2 through 5, based on specific row values from column 2. (I have a specific list of the row values I want the operator to filter out. The values are numbers.)
File looks like this:
"Date","IdNo","Color","Height","Education"
"06/02/16","7438","Red","54","4"
"06/02/16","7439","Yellow","57","3"
"06/03/16","7500","Red","55","3"
Recently Tried in AWK:
#!/usr/bin/awk -f
#I need to extract a full line when column 2 has a specific 5 digit value
awk '\
BEGIN { awk -F "," \
{
if ( $2 == "19650" ) { \
{print $1 "," $6} \
}
exit }
chmod u+x PPMDfUN.AWK
The operator response:
/var/folders/1_/drk_nwld48bb0vfvdm_d9n0h0000gq/T/PPMDfUN- 489939602.998.AWK.command ; exit;
/usr/bin/awk: syntax error at source line 3 source file /private/var/folders/1_/drk_nwld48bb0vfvdm_d9n0h0000gq/T/PPMDfUN- 489939602.997.AWK
context is
awk >>> ' <<<
/usr/bin/awk: bailing out at source line 17
logout
Output example: I want the full rows where column 2 equals 7439 or 7500.
“Date","IdNo","Color","Height","Education"
"06/02/16","7439","Yellow","57","3"
"06/03/16","7500","Red","55","3"
here you go...
$ awk -F, -v q='"' '$2==q"7439"q' file
"06/02/16","7439","Yellow","57","3"
There is not much to explain, other than that the convenience variable q, defined to hold a double quote, helps to eliminate escaping.
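Since the desired output includes both IDs plus the header line, a straightforward extension of the same idea (my sketch, untested beyond the shown sample) would be:
awk -F, -v q='"' 'NR==1 || $2==q"7439"q || $2==q"7500"q' file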
awk -F, 'NR<2;$2~/7439|7500/' file
"Date","IdNo","Color","Height","Education"
"06/02/16","7439","Yellow","57","3"
"06/03/16","7500","Red","55","3"

Read from file into variable - Bash Script

I'm working on a bash script to backup MySQL. I need to read from a file a series of strings and pass them to a variable in my script. Example:
Something like this will be in the file (file.txt)
database1 table1
database1 table4
database2
database3 table2
My script needs to read the file and put these strings in a variable like:
#!/bin/bash
LIST="database1.table1|database1.table4|database2|database3.table2"
Edit. I changed my mind, now I need this output:
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
You could use tr to replace the newlines and spaces:
LIST=$(tr ' \n' '.|' < file.txt)
Since the last line of the input file is likely to contain a newline by itself, you'd need to get rid of it:
LIST=$(tr ' ' '.' < file.txt | paste -sd'|')
Using awk:
s=$(awk '{$1=$1}1' OFS='.' ORS='|' file)
LIST="${s%|}"
echo "$LIST"
database1.table1|database1.table4|database2|database3.table2
bash (version 4 I believe)
mapfile -t lines < file.txt # read lines of the file into an array
lines=("${lines[#]// /.}") # replace all spaces with dots
str=$(IFS='|'; echo "${lines[*]}") # join the array with pipe
echo "$str"
database1.table1|database1.table4|database2|database3.table2
mapfile -t lines < file.txt
for ((i=0; i<${#lines[@]}; i++)); do
[[ ${lines[i]} == *" "* ]] && lines[i]+=" *" || lines[i]+="* *"
done
str=$(IFS='|'; echo "${lines[*]// /.}")
echo "$str"
database1.table1.*|database1.table4.*|database2*.*|database3.table2.*
You can just replace the newlines with a character of your choice using sed, provided it doesn't occur in the data.
For example
FOO=$(sed '{:q;N;y/ /./;s/\n/|/g;t q}' /home/user/file.txt)

(sed/awk) Extract values from text to csv file - even/odd lines pattern

I need to extract some numeric values from a given ASCII text file and export them to a specifically formatted csv file. The input file has an even/odd line pattern:
SCF Done: E(UHF) = -216.432419652 A.U. after 12 cycles
CCSD(T)= -0.21667965032D+03
SCF Done: E(UHF) = -213.594303492 A.U. after 10 cycles
CCSD(T)= -0.21379841974D+03
SCF Done: E(UHF) = -2.86120139864 A.U. after 6 cycles
CCSD(T)= -0.29007031339D+01
and so on
I need the 5th-column value from the odd lines and the 2nd-column value from the even lines. They should be printed in a semicolon-separated csv file, with 10 values in each row. So the output should look like
-216.432419652;-0.21667965032D+03;-213.594303492;-0.21379841974D+03;-2.86120139864;-0.29007031339D+01; ...linebreak after 5 pairs of values
I started with awk '{print $5}' and awk '{print $2}', but I was not successful in creating a pattern that acts only on even or odd lines.
A simple way to do that?
The following script doesn't use a lot of the great power of awk, but will do the job for you and is hopefully understandable:
NR % 2 { printf "%s;", $5 }
NR % 2 == 0 { printf "%s;", $2 }
NR % 10 == 0 { print "" }
END { print "" }
Usage (save the above as script.awk):
awk -f script.awk input.txt
Given a file called data.txt, try:
awk '/SCF/{ printf $5 ";" } /CCSD/{ printf $2 ";" } NR % 10 == 0 { printf "\n" }' data.txt
Something like this could work -
awk '{x = NF > 3 ? $5 : $2 ; printf("%s;",x)}(NR % 10 == 0){print OFS}' file
Here, NF > 3 ? $5 : $2 is a ternary operator: it checks the number of fields (NF) on the line; if there are more than 3 fields, it assigns $5 to the variable x, otherwise $2. printf("%s;",x) then prints the value followed by a ";". NR is a built-in that keeps track of the number of lines read, and the modulo operator checks when 10 lines have been crossed, at which point print OFS emits the line break (print appends ORS, whose default value is \n).
This might work for you:
tr -s ' ' ';' <file | paste -sd';\n' | cut -d';' -f5,11 | paste -sd';;;;\n'
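The same pipeline broken into commented steps (GNU tools assumed):
tr -s ' ' ';' <file |    # squeeze runs of spaces into single semicolons
paste -sd';\n' |         # join each SCF/CCSD pair of lines into one
cut -d';' -f5,11 |       # keep the two energy values (fields 5 and 11)
paste -sd';;;;\n'        # join five value pairs per output row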