Include line nr in search result - tcl

I'm trying to figure out how to search a log file for "word" and also include the line number where "word" was found.
Also, is it possible to read the file from bottom to top?
Any ideas?
thanks!
set seen_trigger2 ".foo"
bind pub -|- $seen_trigger2 seen2:main
proc number {list} {
lmap item $list {list [incr number] $item}
}
proc seen2:main {nick uhost hand chan text} {
set f [open /home/mydir/eggdrop/logs/mylog.txt]
set lines [split [read $f] "\n"]
close $f
set pattern "*$text*" ;# lsearch uses glob matching by default, so wildcards match the word anywhere in a line
set reverseNumberedLines [lreverse [number $lines]]
foreach lineInfo [lsearch -all -inline -index 1 $reverseNumberedLines $pattern] {
lassign $lineInfo lineNumber lineContent
putlog "$lineNumber : $lineContent"
}
}
...
Thank you glenn jackman!
Thank you Donal Fellows!

Reading from bottom to top is fairly expensive except in the degenerate case where all lines are exactly the same length. It's easier to read the whole lot in, split into lines, and reverse.
# Some features used aren't in 8.5 and before
package require Tcl 8.6
# Generates “line” numbers
proc number {list} {
lmap item $list {list [incr number] $item}
}
# Classic get-all-the-lines code snippet
set f [open theFile.txt]
set lines [split [read $f] "\n"]
close $f
# Number and reverse
set reverseNumberedLines [lreverse [number $lines]]
# Find the matching lines
foreach lineInfo [lsearch -all -inline -index 1 $reverseNumberedLines $pattern] {
lassign $lineInfo lineNumber lineContent
puts "$lineNumber : $lineContent"
}
Note that I'm using the -index 1 option, which basically applies lindex $item 1 to each item in the list (the first sub-item is the line number, the second is the line text) before doing the search. Since we already have all the line numbers applied, we don't care about the actual indices we found them at, and can -inline the results.
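To make the -index 1 behaviour concrete, here's a tiny sketch with made-up data (the log lines and the error* pattern are purely illustrative):
# Hypothetical data standing in for the file contents
set lines [list "error: disk full" "info: ok" "error: timeout"]
set reverseNumberedLines [lreverse [number $lines]]
# reverseNumberedLines is now: {3 {error: timeout}} {2 {info: ok}} {1 {error: disk full}}
foreach lineInfo [lsearch -all -inline -index 1 $reverseNumberedLines {error*}] {
    lassign $lineInfo lineNumber lineContent
    puts "$lineNumber : $lineContent"
}
# prints (latest lines first):
#   3 : error: timeout
#   1 : error: disk full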

Related

TCL - read a file, extra line

I have written the script below, and for some reason it outputs an extra third record (top out.txt & name). How do I stop it from writing the last two lines to the output file?
I think it is because of this line, but I'm not too sure: set lines [split [read $list] "\n"]
Input file - a.txt
**********************
abc
bcd
Output
*******
top out.txt
name abc
top out.txt
name bcd
top out.txt
name
set outfile [open "out.txt" w]
set list [open "a.list" r]
set dir "[glob a.txt]"
foreach ddd $dir {
set lines [split [read $list] "\n"]
foreach line $lines {
puts $outfile "top $dir"
puts $outfile "name $line"
}
}
close $list
close $outfile
The core of the issue is that split works with field separators and not field terminators; this is where you see the difference between the two.
If we look at the characters of your file, we'll see this:
a b c \n b c d \n. (Technically, the \n might instead be \r \n or \r; Tcl auto-adapts to that by default for text files.)
If we split those characters at the \n, we get these substrings: abc, bcd, and the empty string for the characters after that final \n. This means that when we join the strings with \n (as the joiner string) between all the elements, we get the original string back again.
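You can see that trailing empty element directly in an interactive session:
% split "abc\nbcd\n" "\n"
abc bcd {}
The {} at the end is the empty element produced by splitting after the final \n.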
You've got a few options for dealing with this. My favourite is to add code to ignore blank lines; versions of this are particularly useful for all sorts of human-written lists (as it's fairly easy to also add in comment skipping):
set lines [split [read $list] "\n"]
foreach line $lines {
if {$line eq ""} continue
puts $outfile "top $dir"
puts $outfile "name $line"
}
Another option is to use the -nonewline option to read to discard that final \n:
set lines [split [read -nonewline $list] "\n"]
foreach line $lines {
puts $outfile "top $dir"
puts $outfile "name $line"
}
You could also string trimright before splitting, but that would strip multiple newlines from the end (if they're present).
set lines [split [string trimright [read $list] "\n"] "\n"]
foreach line $lines {
puts $outfile "top $dir"
puts $outfile "name $line"
}
Finally, if you're using a machine-generated list, you could consider just getting that code to not put a newline at the end. Sometimes that's the easiest option of all.
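For instance, if the generating side is also Tcl, a minimal sketch of writing the list without a trailing newline might look like this (here $items is just a stand-in for whatever entries are being written):
# Assumes $items holds the list entries to write to a.list
set f [open "a.list" w]
puts -nonewline $f [join $items "\n"]
close $f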

how to split a file to list of lists TCL

I'm coding in Tcl and I would like to split a file into two lists of lists.
The file contains:
(1,2) (3,4) (5,6)
(7,8) (9,10) (11,12)
and I would like to get two lists, one for each line, where each list contains sub-lists of two numbers,
for example:
puts $list1 #-> {1 2} {3 4} {5 6}
puts [lindex $list1 0] #-> 1 2
puts [lindex $list2 2] #-> 11 12
I tried to use regexp and split, but with no success.
The idea of using regexp is good, but you'll need to do some post-processing on its output.
# This is what you'd read from a file
set inputdata "(1,2) (3,4) (5,6)\n(7,8) (9,10) (11,12)\n"
foreach line [split $inputdata "\n"] {
# Skip empty lines.
# (I often put a comment format in my data files too; this is where I'd handle it.)
if {$line eq ""} continue
# Parse the line.
set bits [regexp -all -inline {\(\s*(\d+)\s*,\s*(\d+)\s*\)} $line]
# Example results of regexp:
# (1,2) 1 2 (3,4) 3 4 (5,6) 5 6
# Post-process to build the lists you really want
set list([incr idx]) [lmap {- a b} $bits {list $a $b}]
}
Note that this is building up an array; long experience says that calling variables list1, list2, …, when you're building them in a loop is a bad idea, and that an array should be used, effectively giving variables like list(1), list(2), …, as that yields a much lower bug rate.
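With the sample data above, the results can then be pulled out of the array like this:
% puts $list(1)
{1 2} {3 4} {5 6}
% puts [lindex $list(2) 2]
11 12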
An alternate approach is to use a simpler regexp and then have scan parse the results. This can be more effective when the numbers aren't just digit strings.
foreach line [split $inputdata "\n"] {
if {$line eq ""} continue
set bits [regexp -all -inline {\([^()]+\)} $line]
set list([incr idx]) [lmap substr $bits {scan $substr "(%d,%d)"}]
}
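The key point is that scan called without variable arguments returns the converted values as a list, for example:
% scan "(11,12)" "(%d,%d)"
11 12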
If you're not using Tcl 8.6, you won't have lmap yet. In that case you'd do something like this instead:
foreach line [split $inputdata "\n"] {
if {$line eq ""} continue
set bits [regexp -all -inline {\(\s*(\d+)\s*,\s*(\d+)\s*\)} $line]
set list([incr idx]) {}
foreach {- a b} $bits {
lappend list($idx) [list $a $b]
}
}
foreach line [split $inputdata "\n"] {
if {$line eq ""} continue
set bits [regexp -all -inline {\([^()]+\)} $line]
set list([incr idx]) {}
foreach substr $bits {
lappend list($idx) [scan $substr "(%d,%d)"]
# In *very* old Tcl you'd need this:
# scan $substr "(%d,%d)" a b
# lappend list($idx) [list $a $b]
}
}
You have an answer already, but it can actually be done a little bit simpler (or at least without regexp, which is usually a good thing).
Like Donal, I'll assume this to be the text read from a file:
set lines "(1,2) (3,4) (5,6)\n(7,8) (9,10) (11,12)\n"
Clean it up a bit, removing the parentheses and any white space before and after the data:
% set lines [string map {( {} ) {}} [string trim $lines]]
1,2 3,4 5,6
7,8 9,10 11,12
One way to do it with good old-fashioned Tcl, resulting in a cluster of variables named lineN, where N is an integer 1, 2, 3...:
set idx 0
foreach lin [split $lines \n] {
set res {}
foreach li [split $lin] {
lappend res [split $li ,]
}
set line[incr idx] $res
}
A doubly iterative structure like this (a number of lines, each having a number of pairs of numbers separated by a single comma) is easy to process using one foreach within the other. The variable res is used for storing result lines as they are assembled. At the innermost level, the pairs are split and list-appended to the result. For each completed line, a variable is created to store the result: its name consists of the string "line" and an increasing index.
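After running that, the generated variables hold the per-line results (interactive output shown for illustration):
% set line1
{1 2} {3 4} {5 6}
% set line2
{7 8} {9 10} {11 12}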
As Donal says, it's not a good idea to use clusters of variables. It's much better to collect them into an array (same code, except for how the result variable is named):
set idx 0
foreach lin [split $lines \n] {
set res {}
foreach li [split $lin] {
lappend res [split $li ,]
}
set line([incr idx]) $res
}
If you have the results in an array, you can use the parray utility command to list them in one fell swoop:
% parray line
line(1) = {1 2} {3 4} {5 6}
line(2) = {7 8} {9 10} {11 12}
(Note that this is printed output, not a function return value.)
You can get whole lines from this result:
% set line(1)
{1 2} {3 4} {5 6}
Or you can access pairs:
% lindex $line(1) 0
1 2
% lindex $line(2) 2
11 12
If you have the lmap command (or the replacement linked to below), you can simplify the solution somewhat (you don't need the res variable):
set idx 0
foreach lin [split $lines \n] {
set line([incr idx]) [lmap li [split $lin] {
split $li ,
}]
}
Still simpler is to let the result be a nested list:
set lineList [lmap lin [split $lines \n] {
lmap li [split $lin] {
split $li ,
}
}]
You can access parts of the result similar to above:
% lindex $lineList 0
{1 2} {3 4} {5 6}
% lindex $lineList 0 0
1 2
% lindex $lineList 1 2
11 12
Documentation:
array,
foreach,
incr,
lappend,
lindex,
lmap (for Tcl 8.5),
lmap,
parray,
set,
split,
string
The code below works for Windows.
The Tcl file code is:
proc captureImage {} {
#open the image config file.
set configFile [open "C:/main/image_config.txt" r]
#To retrieve the values from the config file.
while {![eof $configFile]} {
set part [split [gets $configFile] "="]
set props([string trimright [lindex $part 0]]) [string trimleft [lindex $part 1]]
}
close $configFile
set time [clock format [clock seconds] -format %Y%m%d_%H%M%S]
set date [clock format [clock seconds] -format %Y%m%d]
#create the folder with the current date
set folderPath $props(folderPath)
append folderDate $folderPath "" $date "/"
set FolderCreation [file mkdir $folderDate]
while {0} {
if { [file exists $date] == 1} {
}
break
}
#camera selection to capture image.
set camera "video"
append cctv $camera "=" $props(cctv)
#set the image resolution (XxY).
set resolutionX $props(resolutionX)
set resolutionY $props(resolutionY)
append resolution $resolutionX "x" $resolutionY
#set the name to the save image
set imagePrefix $props(imagePrefix)
set imageFormat $props(imageFormat)
append filename $folderDate "" $imagePrefix "_" $time "." $imageFormat
set logPrefix "Image_log"
append logFile $folderDate "" $logPrefix "" $date ".txt"
#ffmpeg command to capture image in background
exec ffmpeg -f dshow -benchmark -i $cctv -s $resolution $filename >& $logFile &
after 3000
}
}
captureImage
The text file code is:
cctv=Integrated Webcam
resolutionX=1920
resolutionY=1080
imagePrefix=ImageCapture
imageFormat=jpg
folderPath=c:/test/
//camera=video=Integrated Webcam,Logitech HD Webcam C525
This code works for me; the list of parameters is passed in from the text file.

how to search for a word in a file and swap the line containing the word with the next line using tcl

I want to search for the word "VPEM" and, if found, swap that line with the next line.
For example, if VPEM is found on line 19, swap lines 19 and 20.
Can anyone please help?
Since this is a moderately complex search-and-modify, we should read the file into memory and work on it there. Given that, we can then use split to make a list of lines, lsearch -all to find lines of interest, and lset to actually do the swaps.
# Read in; idiomatic
set f [open $theFile]
set lines [split [read $f] "\n"]
close $f
# \y is a word boundary constraint; perfect for what we want!
foreach idx [lsearch -all -regexp $lines {\yVPEM\y}] {
# Do the swap; idiomatic
set tmp [lindex $lines $idx]
set i2 [expr {$idx + 1}]
lset lines $idx [lindex $lines $i2]
lset lines $i2 $tmp
}
# Write out; idiomatic
set f [open $theFile w]
puts -nonewline $f [join $lines "\n"]
close $f
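As a quick illustration of the \y word-boundary constraint, with some made-up list data:
% lsearch -all -regexp [list "set VPEM high" "VPEMX is different" "VPEM at start"] {\yVPEM\y}
0 2
The middle element is skipped because VPEM in VPEMX does not end at a word boundary.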

Manipulating file in tcl language

First time poster and new to TCL so please pardon my knowledge.
I've found a few examples on stackoverflow and with that help created a script.
I need to modify a few lines of a file. I've tried the following (see code). I can add the line of interest, but it is not written in the correct location, e.g. if I want to replace line 3 it adds the line after line 3,
and moreover it deletes subsequent lines if there is more than one line operation.
Lastly, could someone kindly suggest the best way to identify the line of interest by name rather than by line number? The name is always of the form Filter.HpOrd_n =
where n is 0...k
Data in info.dat
AA
BB
Filter.HpOrd_1 = 2
Filter.HpOrd_2 = 2
Filter.HpOrd_3 = 0.1
Filter.HpOrd_4 = 0.2
CC
DD
EE
FF
Code:
set fd [open "info.dat" r+]
set i 0
while { [gets $fd line] != -1 } {
set line [split $line "\n"]
incr i
if {$i == 3} {
set nLine [lreplace $line 0 0 Filter.LoPass]
puts $fd [join $nLine "\n"]
}
if {$i == 6} {
set nLine [lreplace $line 0 0 Filter.Butterworth]
puts $fd [join $nLine "\n"]
}
}
close $fd
With plain Tcl:
# the input and output file handles
set fin [open info.dat r]
set fout [file tempfile fname]
# process the file
while {[gets $fin line] != -1} {
puts $fout [string map {
"Filter.HpOrd_1" "Filter.LoPass"
"Filter.HpOrd_4" "Filter.Butterworth"
} $line]
}
close $fin
close $fout
# backup the original and overwrite it
file link -hard info.dat.bak info.dat
file rename -force -- $fname info.dat
Tcl is only a thin layer here: set fd [open "info.dat" r+] is ordinary file descriptor handling. If you open a file descriptor with "r+" you can read and write through it, but a file descriptor always points at a single position in the file.
With "r+" your file descriptor initially points to the start of the file. Then you gets $fd line a line from the file, so $fd points to the start of the second line afterwards. Now your puts $fd [join $nLine "\n"] blindly overwrites from the start of the second line, and so on.
Generally you cannot replace lines within a single file; instead you write a second file and move it into place after you have closed both files. You can overwrite in place with seek, but you overwrite from a point in the file, so what you write should always have the same size as what you read.
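To make that concrete, here's a minimal sketch of the seek-based overwrite; it only works because the replacement text has exactly the same length as what was read:
set fd [open "info.dat" r+]
gets $fd line                  ;# read line 1; fd now points at the start of line 2
set pos [tell $fd]             ;# remember where line 2 begins
gets $fd line                  ;# read line 2
seek $fd $pos                  ;# jump back to the start of line 2
puts -nonewline $fd [string repeat "X" [string length $line]]
close $fd
Anything longer or shorter than the original line would spill into, or leave fragments of, the neighbouring lines, which is why writing a second file is usually the better approach.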
Plain files (in basically all programming languages) are byte/character oriented rather than line oriented. This means 1) that you need to use a seek operation to get back to the beginning of the line you want to overwrite, and 2) unless the new line is exactly the same length as the old one, you will experience stub lines around it.
You have other problems as well. set line [split $line "\n"] doesn't do anything: you've just read line from gets, so it's guaranteed not to have any newlines in it. [join $nLine "\n"] doesn't do what you probably think it does: it will replace any sequences of whitespace in $line with single newlines, but it will not place any newline at the end of the string.
Unless your files are insanely large, I recommend something like this:
Replace by line number
proc lineReplace args {
set lines [split [lindex $args end] \n]
foreach {n line} [lrange $args 0 end-1] {
set index [incr n -1]
if {$index >= 0} {
lset lines $index $line
}
}
join $lines \n
}
package require fileutil
fileutil::updateInPlace info.dat {
lineReplace
3 Filter.LoPass
6 Filter.Butterworth
}
In the "front end" you only specify the command to use and thereafter pairs of line number / new line text.
In the "back end" (the lineReplace command) the parameter args will contain those number / line pairs and at the end, as a single item, the complete contents of the file. The file contents are then split into a list of lines, and for every number / line pair you replace one of the items in that list. Finally, the list of lines are joined back into a string with newlines between each line. This string is returned by lineReplace to fileutil::updateInPlace, which replaces the old contents in the file with the returned string.
Replace by name
proc lineReplaceByName args {
set lines [split [lindex $args end] \n]
foreach {name line} [lrange $args 0 end-1] {
set index [lsearch $lines $name*]
if {$index >= 0} {
lset lines $index $line
}
}
join $lines \n
}
fileutil::updateInPlace info.dat {
lineReplaceByName
Filter.HpOrd_1 Filter.LoPass
Filter.HpOrd_4 Filter.Butterworth
}
In this case the "back end" calculates the line number by searching for the given name at the beginning of each line. If the name isn't found, the replacement operation is skipped. Otherwise it's the same as before.
Replacing just the name
If you don't want to replace the complete line, but just the name part of it, some changes are necessary. If you are 100% sure that 1) the name never has any whitespace in it, and 2) there is always whitespace between the name and the =, you can just replace lset lines $index $line with lset lines $index 0 $line. If you want to play it safer, you can replace the line with
lset lines $index [regsub {.+(?=\s*=\s*)} [lindex $lines $index] $line]
which uses a regular expression to find the character region that precedes the = character (optionally with whitespace around it) and then replaces that with the text you provided.
The fileutil package is a part of the Tcllib companion library to Tcl.
Documentation: fileutil package, foreach, if, incr, join, lindex, lrange, lsearch, lset, package, proc, regsub, seek, set, split

Insert lines of code in a file after n numbers of lines using tcl

I am trying to write a Tcl script in which I need to insert some lines of code after finding a regular expression.
For instance, I need to insert more #define lines after finding the last occurrence of #define in the current file.
Thanks!
When making edits to a text file, you read it in and operate on it in memory. Since you're dealing with lines of code in that text file, we want to represent the file's contents as a list of strings (each of which is the contents of a line). That then lets us use lsearch (with the -regexp option) to find the insertion location (which we'll do on the reversed list so we find the last instead of the first location) and we can do the insertion with linsert.
Overall, we get code a bit like this:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set idx [lsearch -regexp [lreverse $lines] "^#define "]
if {$idx < 0} {
error "did not find insertion point in $filename"
}
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [linsert $lines end-$idx {*}$linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
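To see why end-$idx is the right insertion point, a tiny made-up example helps:
% set lines [list "#define A" "code" "#define B" "more code"]
{#define A} code {#define B} {more code}
% set idx [lsearch -regexp [lreverse $lines] "^#define "]
1
% linsert $lines end-$idx "#define C"
{#define A} code {#define B} {#define C} {more code}
Searching the reversed list finds the last #define, and since the linsert index end means "append", end-$idx lands just after that line.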
Prior to Tcl 8.5, the style changes a little:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set indices [lsearch -all -regexp $lines "^#define "]
if {![llength $indices]} {
error "did not find insertion point in $filename"
}
set idx [expr {[lindex $indices end] + 1}]
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [eval [linsert $linesToInsert 0 linsert $lines $idx]]
### ALTERNATIVE
# set lines [eval [list linsert $lines $idx] $linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
The searching for all the indices (and adding one to the last one) is reasonable enough, but the contortions for the insertion are pretty ugly. (Pre-8.4? Upgrade.)
Not exactly the answer to your question, but this is the type of task that lends itself to shell scripting (even if my solution is a bit ugly).
tac inputfile | sed -n '/#define/,$p' | tac
echo "$yourlines"
tac inputfile | sed '/#define/Q' | tac
should work!
set filename content.txt
set fh [open $filename r]
set lines [read $fh]
close $fh
set line_con [split $lines "\n"]
set line_num {}
set i 0
foreach line $line_con {
    if {[regexp {^#define} $line]} {
        lappend line_num $i
    }
    incr i
}
if {[llength $line_num] > 0} {
    # $line_insert is assumed to hold the line to insert;
    # place it just after the last #define line
    set line_con [linsert $line_con [expr {[lindex $line_num end] + 1}] $line_insert]
} else {
    puts "no insert point"
}
set filename content_new.txt
set fh [open $filename w]
puts -nonewline $fh [join $line_con "\n"]
close $fh