Reading and re-writing specific lines in Tcl

I have the following information (x y z) in a "data.dat" file:
24.441 53.481 41.474
23.920 53.389 42.572
24.470 52.228 42.012
24.875 51.313 42.524
23.663 51.323 42.701
I need to rewrite the information as follows:
{24.441 53.481 41.474} {23.920 53.389 42.572}
{23.920 53.389 42.572} {24.470 52.228 42.012}
{24.470 52.228 42.012} {24.875 51.313 42.524}
{24.875 51.313 42.524} {23.663 51.323 42.701}
This is for a large data file. How can I do that in Tcl?

set infile "data.dat"
set in [open $infile r]
# file tempfile returns a writable channel and stores the file's name in outfile
set out [file tempfile outfile]
gets $in prev_line
while {[gets $in line] != -1} {
    puts $out [format "{%s} {%s}" $prev_line $line]
    set prev_line $line
}
close $in
close $out
# remove the next line if you don't need to keep a backup of the initial file
file link -hard "${infile}.bak" $infile
# and overwrite the original file with the new contents
file rename -force $outfile $infile
Or, call out to GNU awk to do it in place:
exec gawk -i inplace {
    NR == 1 {prev = $0; next}
    {printf "{%s} {%s}\n", prev, $0; prev = $0}
} data.dat
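If the whole file fits comfortably in memory, the same sliding-window pairing can also be done on a list of lines. A minimal sketch, with inline sample data standing in for the contents of data.dat:

```tcl
# Sample lines standing in for the contents of data.dat
set lines {{24.441 53.481 41.474} {23.920 53.389 42.572} {24.470 52.228 42.012}}
set pairs {}
# Walk two parallel views of the list: all-but-last and all-but-first
foreach prev [lrange $lines 0 end-1] cur [lrange $lines 1 end] {
    lappend pairs [format "{%s} {%s}" $prev $cur]
}
puts [join $pairs \n]
```

The streaming version above is still preferable for very large files, since this holds every line in memory at once.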

Related

How to compare lines of two files and change the matched line in one file in Tcl

I want to compare two files line by line.
file1
abc
123
a1b2c3
file2
abc
00 a1b2c3
If a line of file1 matches part of a line of file2, change the line of file1 to that line of file2. So the output file would look like this:
file1
abc
123
00 a1b2c3
Here's a working example; adjust the file paths to fit your needs.
The code writes to a temporary work file that overwrites the original file1 at the end.
set file1Fp [open file1 "r"]
set file1Data [read $file1Fp]
close $file1Fp
set file2Fp [open file2 "r"]
set file2Data [read $file2Fp]
close $file2Fp
set tempFp [open tempfile "w"]
foreach lineFile1 [split $file1Data "\n"] {
    set foundFlag 0
    foreach lineFile2 [split $file2Data "\n"] {
        if { $lineFile1 == {} } continue
        if { [string match "*$lineFile1*" $lineFile2] } {
            set foundFlag 1
            puts $tempFp $lineFile2
        }
    }
    if { $foundFlag == 0 } {
        puts $tempFp $lineFile1
    }
}
close $tempFp
file rename -force tempfile file1
You could write
set fh [open file2]
set f2_lines [split [read -nonewline $fh] \n]
close $fh
set out_fh [file tempfile tmp]
set fh [open file1]
while {[gets $fh line] != -1} {
    foreach f2_line $f2_lines {
        if {[regexp $line $f2_line]} {
            set line $f2_line
            break
        }
    }
    puts $out_fh $line
}
close $fh
close $out_fh
file rename -force $tmp file1
Depending on how you want to compare the two lines, the regexp command can also be expressed as
if {[string match "*$line*" $f2_line]}
if {[string first $line $f2_line] != -1}
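To make the difference concrete, here is how the comparisons behave on a sample pair of lines (hypothetical data): regexp treats $line as a pattern, while string match and string first look for a literal substring.

```tcl
set line    {a1b2c3}
set f2_line {00 a1b2c3}
# All three agree when the line contains no regexp metacharacters:
puts [regexp $line $f2_line]                        ;# treats $line as a regular expression
puts [string match "*$line*" $f2_line]              ;# glob-style substring test
puts [expr {[string first $line $f2_line] != -1}]   ;# literal substring test
# With metacharacters they diverge: as a regexp, "1+2" means "one or more 1s, then a 2"
set line {1+2}
puts [regexp $line {x 1+2 y}]                       ;# 0: no regexp match
puts [expr {[string first $line {x 1+2 y}] != -1}]  ;# 1: literal search finds "1+2"
```

So pick regexp only if the lines of file1 are meant to be patterns; otherwise one of the literal tests is safer.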

TCL: Read lines from file that contain only relevant words

I'm reading a file and doing some manipulation on the data.
Unfortunately I get the error message below:
unable to alloc 347392 bytes
Abort
Since the file is huge, I want to read only the lines that contain certain words (described in "regexp_or" below).
Is there a way to read only the lines that match "regexp_or" and avoid building the whole list for the foreach loop?
set regexp_or "^Err|warning|Fatal error"
set file [open [lindex $argv 1] r]
set data [read $file]
foreach line [split $data "\n"] {
    if {[regexp [subst $regexp_or] $line]} {
        puts $line
    }
}
You could pull your input through grep:
set file [open |[list grep -E $regexp_or [lindex $argv 1]] r]
But that depends on grep being available. To do it completely in Tcl, you can process the file in chunks:
set file [open [lindex $argv 1] r]
while {![eof $file]} {
    # Read a million characters
    set data [read $file 1000000]
    # Make sure to only work with complete lines: gets with no variable name
    # returns the remainder of the current line, completing a chunk that ends mid-line
    append data [gets $file]
    foreach line [lsearch -inline -all -regexp [split $data \n] $regexp_or] {
        puts $line
    }
}
close $file
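The key to the chunking trick is that gets with no variable name returns the rest of the current line. A small self-contained demonstration (using a temporary file with made-up contents):

```tcl
# Demonstrate why "append data [gets $f]" completes a partial line:
# gets with no variable name returns the remainder of the current line.
set f [file tempfile path]
puts -nonewline $f "alpha\nbeta\ngamma\n"
seek $f 0
set data [read $f 8]    ;# reads "alpha\nbe" -- stops in the middle of "beta"
append data [gets $f]   ;# gets returns "ta", completing the line
puts [split $data \n]
close $f
file delete $path
```

The next read then starts cleanly at the beginning of "gamma", so no line is ever split across two iterations.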

Copying the content of a file between two matches to another file

I have a file with the data shown below:
Files {
string1
string2
string3
...
...
}
but I want to copy the content between { and } (i.e. string1, string2, ...) to another file; the two braces are on different lines.
Since Tcl is so flexible, and that file has a Tcl-ish syntax, I'd do this:
proc Files {data} {set ::files_data $data}
source data_file
set fin [open other_file r]
set fout [open other_file.tmp w]
set have_string1 false
while {[gets $fin line] != -1} {
    if {$have_string1 && [string match "string2" $line]} {
        puts $fout $files_data
    }
    puts $fout $line
    if {[string match "string1" $line]} {
        set have_string1 true
    }
}
close $fin
close $fout
# next line only if you need to keep a backup of the original
file link -hard other_file.bak other_file
file rename -force other_file.tmp other_file

Insert lines of code in a file after n lines using Tcl

I am trying to write a Tcl script that inserts some lines of code after finding a regular expression.
For instance, I need to insert more #define lines after the last occurrence of #define in the current file.
When making edits to a text file, you read it in and operate on it in memory. Since you're dealing with lines of code in that text file, we want to represent the file's contents as a list of strings (each of which is the contents of a line). That then lets us use lsearch (with the -regexp option) to find the insertion location (which we'll do on the reversed list so we find the last instead of the first location) and we can do the insertion with linsert.
Overall, we get code a bit like this:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set idx [lsearch -regexp [lreverse $lines] "^#define "]
if {$idx < 0} {
    error "did not find insertion point in $filename"
}
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [linsert $lines end-$idx {*}$linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
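To make the end-$idx arithmetic concrete, here is the search and insertion on a tiny made-up list:

```tcl
set lines {{int x;} {#define A 1} {int y;}}
# Reversed list is: {int y;} {#define A 1} {int x;} -> match at reversed index 1
set idx [lsearch -regexp [lreverse $lines] {^#define }]
# "end-1" inserts before the final element, i.e. right after the last #define
set lines [linsert $lines end-$idx {#define B 2}]
puts $lines
```

Because linsert's "end" index inserts after the last element, end-$idx lands exactly one slot past the last match, whatever its position.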
Prior to Tcl 8.5, the style changes a little:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index: the element after the last match
set indices [lsearch -all -regexp $lines "^#define "]
if {![llength $indices]} {
    error "did not find insertion point in $filename"
}
set idx [expr {[lindex $indices end] + 1}]
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [eval [linsert $linesToInsert 0 linsert $lines $idx]]
### ALTERNATIVE
# set lines [eval [list linsert $lines $idx] $linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
The searching for all the indices (and adding one to the last one) is reasonable enough, but the contortions for the insertion are pretty ugly. (Pre-8.4? Upgrade.)
Not exactly the answer to your question, but this is the type of task that lends itself to shell scripting (even if my solution is a bit ugly). The first pipeline prints everything up to and including the last #define, and the second prints everything after it:
tac inputfile | sed -n '/#define/,$p' | tac
echo "$yourlines"
tac inputfile | sed '/#define/Q' | tac
should work!
set filename content.txt
set fh [open $filename r]
set lines [read $fh]
close $fh
set line_con [split $lines "\n"]
set line_num {}
set i 0
foreach line $line_con {
    if {[regexp {^#define} $line]} {
        lappend line_num $i
    }
    incr i
}
if {[llength $line_num] > 0} {
    # insert after the last match ("line_insert" holds the line to add)
    set line_con [linsert $line_con [expr {[lindex $line_num end] + 1}] $line_insert]
} else {
    puts "no insert point"
}
set fh [open content_new.txt w]
puts $fh [join $line_con "\n"]
close $fh

TCL - find a regular pattern in a file and return the occurrence and number of occurrences

I am writing code to grep a regular-expression pattern from a file, and output that pattern together with the number of times it occurred.
Here is the code; I am trying to find the pattern "grep" in my file hello.txt:
set file1 [open "hello.txt" r]
set file2 [read $file1]
regexp {grep} $file2 matched
puts $matched
while {[eof $file2] != 1} {
    set number 0
    if {[regexp {grep} $file2 matched] >= 0} {
        incr number
    }
    puts $number
}
Output that I got:
grep
--------
can not find channel named "qwerty
iiiiiii
wxseddtt
lsakdfhaiowehf'
jbsdcfiweg
kajsbndimm s
grep
afnQWFH
ACV;SKDJNCV;
qw qde
kI UQWG
grep
grep"
while executing
"eof $file2"
It's usually a mistake to check for eof in a while loop; check the return code from gets instead:
set filename "hello.txt"
set pattern {grep}
set count 0
set fid [open $filename r]
while {[gets $fid line] != -1} {
    incr count [regexp -all -- $pattern $line]
}
close $fid
puts "$count occurrences of $pattern in $filename"
Another thought: if you're just counting pattern matches, assuming your file is not too large:
set fid [open $filename r]
set count [regexp -all -- $pattern [read $fid [file size $filename]]]
close $fid
The error message is caused by the command eof $file2. The reason is that $file2 is not a file handle (channel) but contains the content of the file hello.txt itself, which you read with set file2 [read $file1].
If you want to keep that approach, I would suggest renaming $file2 to something like $filecontent and looping over every contained line:
foreach line [split $filecontent "\n"] {
    ... do something ...
}
Glenn is spot on. Here is another solution: the fileutil package from Tcllib has a grep command:
package require fileutil
set pattern {grep}
set filename hello.txt
puts "[llength [fileutil::grep $pattern $filename]] occurrences found"
Note that fileutil::grep returns the matching lines, so this counts lines containing the pattern rather than individual occurrences. If you care about performance, go with Glenn's solution.