How to create a CSV file using Tcl

I have two variables, let's say $a with the list of names and $b with ages. When I try to redirect them into a CSV file, the file gets created but both $a and $b fall into the same row instead of different ones. How can I separate them into two different rows?
set outfile [open "result_table_sort.csv" w+]
puts $outfile "$b_1 \t$c"

The recommended way of creating a CSV file is with the csv package in Tcllib. It handles all sorts of tricky edge cases for you so you don't have to.
package require csv
set outfile [open "thing.csv" w]
foreach aRow $a bRow $b {
puts $outfile [csv::join [list $aRow $bRow]]
}
close $outfile
You can switch to producing tab-separated output by passing \t as the separator character to csv::join:
puts $outfile [csv::join [list $aRow $bRow] "\t"]
Also consider using the csv::joinlist and csv::joinmatrix commands. They both let you build an entire table in one go, the first as a list of lists (list of rows, each row containing a list of columns) and the second as a matrix (structure defined in struct::matrix package in Tcllib).
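For example, here is a minimal sketch of the csv::joinlist approach (assuming, as above, that $a and $b are equal-length lists) that builds the whole table first and writes it in one go:
package require csv
# Build a list of rows, each row being a list of column values
set table {}
foreach aRow $a bRow $b {
    lappend table [list $aRow $bRow]
}
# csv::joinlist converts the whole table at once, one CSV record per line
set outfile [open "thing.csv" w]
puts -nonewline $outfile [csv::joinlist $table]
close $outfile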

Found the solution: putting a comma between the two variables solved my issue.
set outfile [open "result_table_sort.csv" w+]
puts $outfile "$b_1,\t$c"

Related

Printing on multiple lines instead of on a single line

I am trying to make multiple directories and also trying to search and list all files found in a specific path.
proc filesearch {indir1 indir2 indir3 indir4 indir5} {
set infile1 [glob -nocomplain -type f $indir1$indir2/*txt*]
puts $infile1
}
When I puts $infile1, all the files found are printed on one long single line, as shown below:
a/b/c/d/a.txt a/b/c/d/b.txt a/b/c/d/c.txt a/b/c/d/d.txt
How do I puts each file found on its own line, like this?
a/b/c/d/a.txt
a/b/c/d/b.txt
a/b/c/d/c.txt
a/b/c/d/d.txt
That is, each file found should be printed on an individual line; currently all the files are listed on a single line separated by spaces.
You just need to loop through the list.
foreach elem $infile1 {
puts $elem
}
Reference: foreach
Just join the list of files using newline as a separator:
puts [join $infile1 \n]

Tcl read first line of a csv file

I am trying to parse a CSV to check if one of the headers is present.
Sometimes I'd expect a fifth column with arbitraryHead:
date time value result arbitraryHead
val1 d1 10 fail
val2 d2 15 norun
I was trying to read the first line then print it. But that is not working...
How can I read the first line and print all the headers?
set fh [open $csv_file r]
set data [list]
set line [gets $fh line]
lappend data [split $line ,]
close $fh
foreach x $data {
puts "$x\n"
}
When reading a CSV file, it's best to use the csv package in Tcllib, as that handles all the awkward edge cases in that format.
In particular, csv::split is especially useful (along with csv::join when creating a CSV file), as are the various functions that act as wrappers around it. Here's how you'd use it in your case:
package require csv
set fh [open $csv_file r]
# Your data appears to be actually tab-separated, not comma-separated...
set data [csv::split [gets $fh] "\t"]
close $fh
foreach x $data {
puts "$x\n"
}
Your actual immediate bug was this:
set line [gets $fh line]
The two-argument form of gets writes the line it reads into the variable named in the second argument, and returns the length of the line read (or -1 on failure to read a complete line, which can be useful in complex cases that aren't important here). You're then assigning that value to the same variable with set, losing the string that was written there. You should instead use one of the following (except that you should use a properly-tested package for reading CSV files):
gets $fh line
set line [gets $fh]
The one-argument form of gets returns the line it read, which can make it harder to distinguish errors but is highly convenient.
The simplest thing you can do is a string match operation: just look for the desired header you want to check.
As requested, in the following code I am checking for "arbitraryHead":
set fh [open $csv_file r]
set contents [read $fh]
close $fh
foreach x [split $contents \n] {
if {[string match "*arbitraryHead*" $x]} {
puts "HEAD FOUND"
}
}
Hope this addresses your issue.

How to use ::csv::split to split and process multiple lines in a CSV file

I was attempting to use ::csv::split to extract and process data from multiple lines in a CSV file using Tcl, but I could not find proper documentation about its usage. Can anyone help me with this? Is there a better way to extract data from a CSV file and process it?
If you have pulled in some rows of CSV data into the variable csv, you can convert it to a list of tuples and store it in the variable data like this:
set data {}
foreach row [split [string trim $csv] \n] {
lappend data [::csv::split $row]
}
That's about it. If the CSV data doesn't use commas, you will have to specify the column delimiter in the invocation, like ::csv::split $row \t or whatever.
Reading directly from a file:
set f [open foo.csv]
set data {}
while {[gets $f row] >= 0} {
lappend data [::csv::split $row]
}
close $f
or, with fileutil:
package require fileutil
set data {}
::fileutil::foreachLine row foo.csv {
lappend data [::csv::split $row]
}
I've used csv a lot and am very happy with it.
There are facilities in csv to convert CSV data to e.g. Tcllib matrix structures, which can be very handy, though not strictly necessary.
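For instance, here is a minimal sketch using ::csv::read2matrix together with struct::matrix (assuming a comma-separated file foo.csv, as above):
package require csv
package require struct::matrix

::struct::matrix m                ;# empty matrix that read2matrix will fill
set f [open foo.csv]
::csv::read2matrix $f m , auto    ;# "auto" grows the matrix to fit the widest row
close $f

puts "rows: [m rows], columns: [m columns]"
puts [m get row 0]                ;# first row as a Tcl list
m destroy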

Search in file for number, increment and replace

I have a VHDL file which has a line like this:
constant version_nr :integer := 47;
I want to increment the number in this line in the file. Is there a way to accomplish this with Tcl?
This is principally a string operation. The tricky bit is finding the line to operate on and picking the number out of it. This can be occasionally awkward, but it is mainly a matter of choosing a suitable regular expression (as this is the kind of parsing task that they excel at). A raw RE to do the matching would be this:
^\s*constant\s+version_nr\s*:integer\s*:=\s*\d+\s*;\s*$
This is essentially converting all possible places for a whitespace sequence into \s* (except where whitespace is mandatory, which becomes \s+) and matching the number with \d+, i.e., a digit sequence. We then add in parentheses to capture the interesting substrings, which are the prefix, the number, and the suffix:
^(\s*constant\s+version_nr\s*:integer\s*:=\s*)(\d+)(\s*;\s*)$
Now we have enough to make the line transform (which we'll do as a procedure so we can give it a nice name):
proc lineTransform {line} {
set RE {^(\s*constant\s+version_nr\s*:integer\s*:=\s*)(\d+)(\s*;\s*)$}
if {[regexp $RE $line -> prefix number suffix]} {
# If we match, we increment the number...
incr number
# And reconcatenate it with the prefix and suffix to make the new line
set line $prefix$number$suffix
}
return $line
}
In Tcl 8.7 (which you won't be using yet) you can write this in a more succinct form:
proc lineTransform {line} {
# Yes, this version can be a single (long) line if you want
set RE {^(\s*constant\s+version_nr\s*:integer\s*:=\s*)(\d+)(\s*;\s*)$}
regsub -command $RE $line {apply {{- prefix number suffix} {
# Apply the increment when the RE matches and build the resulting line
string cat $prefix [incr number] $suffix
}}}
}
Now that we have a line transform, we've just got to apply that to all the lines of the file. This is easily done with a file that fits in memory (up to a few hundred MB) but requires additional measures for larger files as you need to stream from one file to another:
proc transformSmallFile {filename} {
# Read data into memory first
set f [open $filename]
set data [read $f]
close $f
# Then write it back out, applying the transform as we go
set f [open $filename w]
foreach line [split $data "\n"] {
puts $f [lineTransform $line]
}
close $f
}
proc transformLargeFile {filename} {
set fin [open $filename]
# The [file tempfile] command makes working with temporary files easier
set fout [file tempfile tmp [file normalize $filename]]
# A streaming transform; requires that input and output files be different
while {[gets $fin line] >= 0} {
puts $fout [lineTransform $line]
}
# Close both channels; flushes everything to disk too
close $fin
close $fout
# Rename our temporary over the original input file, replacing it
file rename $tmp $filename
}
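A hypothetical usage sketch (the file name and the size threshold here are only for illustration) might then look like this:
set vhdlFile "my_design.vhd"    ;# hypothetical file name
# Use the in-memory variant for small files and the streaming one otherwise;
# the 100 MB threshold is an arbitrary, illustrative choice
if {[file size $vhdlFile] < 100*1024*1024} {
    transformSmallFile $vhdlFile
} else {
    transformLargeFile $vhdlFile
}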

How to manipulate each line in a file in TCL

I'm trying to write some data from iperf to a file using a Tcl script. The file has more than 100 lines. I need to skip the first 10 lines, print the next set of 10 lines, skip the following 10 lines, print the next 10, and keep going like this until I reach the end of the file. How can I do this programmatically?
exec c:\\iperf_new\\iperf -c $REF_WLAN_IPAddr -f m -w 2M -i 1 -t $run_time > xx.txt
set fp [open "xx.txt" r]
set file_data [read $fp]
close $fp
set data [split $file_data "\n"]
foreach line $data {
if {[regexp {(MBytes) +([0-9\.]*)} $line match pre tput]} {
puts "Throughput: $tput Mbps"
}
}
Well, as your example shows, you have found out how to split a (slurped) file into lines and process them one-by-one.
Now what's the problem with implementing "skip ten lines, process ten lines, skip another ten lines etc"? It's just about using a variable which counts lines seen so far plus selecting a branch of code based on its value. This approach has nothing special when it comes to Tcl: there are commands available to count, conditionally select branches of code and control looping.
If branching based on the current value of a line counter looks too lame, you could implement a state machine around that counter variable. But for this simple case it looks like over-engineering.
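For illustration, a minimal sketch of that counter approach (assuming the lines are already in $data, as in the question's code) could be:
set counter 0
foreach line $data {
    # Print lines 10-19, 30-39, 50-59, ... (counting from 0) and skip the rest
    if {$counter % 20 >= 10} {
        puts $line
    }
    incr counter
}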
Another approach would be to pick the necessary series of lines out of the list returned by split using lrange. This approach can use a nice property of lrange: it can be told to return a sublist "from this index until the end of the list", so the solution really boils down to:
set lines [split [read $fd] \n]
parse_header [lrange $lines 0 9]
puts [join [lrange $lines 10 19] \n]
parse_something_else [lrange $lines 20 29]
puts [join [lrange $lines 30 end] \n]
For a small file this solution looks pretty compact and clean.
If I understood you correctly, you want to print lines 11-20, 31-40, 51-60,... The following will do what you want:
package require Tclx
set counter 0
for_file line xx.txt {
if {$counter % 20 >= 10} { puts $line }
incr counter
}
The Tclx package provides a simple way to read lines from a file: the for_file command.