Tcl/Tk: write to a specific line in a file

I want to write to a specific line in a text document, but there's a problem with my code and I don't know where the bug is.
set fp [open C:/Users/user/Desktop/tst/settings.txt w]
set count 0
while {[gets $fp line]!=-1} {
incr count
if {$count==28} {
break
}
}
puts $fp "TEST"
close $fp
Afterwards the file only contains TEST.
Does anybody have an idea?

With short text files (these days, short is up to hundreds of megabytes!) the easiest way is to read the whole file into memory, do the text surgery there, and then write the whole lot back out. For example:
set filename "C:/Users/user/Desktop/tst/settings.txt"
set fp [open $filename]
set lines [split [read $fp] "\n"]
close $fp
set lines [linsert $lines 28 "TEST"]    ;# list indices are 0-based: this inserts after the 28th line
# Read a line with lindex, find a line with lsearch
# Replace a line with lset, replace a range of lines with lreplace
set fp [open $filename w]
puts $fp [join $lines "\n"]
close $fp
Doing it this way is enormously easier and avoids a lot of the complexities that come with updating a file in place; save those for gigabyte-sized files (which won't be called settings.txt in any sane world…)
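If the goal is instead to overwrite the 28th line, a small sketch using the commands mentioned in the comments above (the search pattern is hypothetical):
lset lines 27 "TEST"    ;# replace the 28th line; list indices are 0-based
# or locate the line by content first:
set idx [lsearch $lines "SomeSetting=*"]
if {$idx >= 0} {
    lset lines $idx "TEST"
}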

You are using w as the access argument, which truncates the file, so you lose all the data in the file the moment you open it. Read more about the open command.
You can use r+ or a+ instead.
Also, to write after a particular line, you can move the file pointer to the desired location with seek:
set fp [open C:/Users/user/Desktop/tst/settings.txt r+]
set count 0
while {[gets $fp line] != -1} {
    incr count
    if {$count == 28} {
        break
    }
    set offset [tell $fp]    ;# remember where the next line starts
}
seek $fp $offset
puts $fp "TEST"    ;# note: this overwrites bytes in place, it does not insert
close $fp
To replace a complete line, it is easier to do it the following way: rewrite all the lines, writing the new data on the desired line.
set fp [open C:/Users/user/Desktop/tst/settings.txt r+]
set count 0
set data [read $fp]
seek $fp 0
foreach line [split [string trimright $data \n] \n] {
    incr count
    if {$count == 28} {
        puts $fp "TEST"
    } else {
        puts $fp $line
    }
}
chan truncate $fp    ;# drop any leftover bytes if the new content is shorter
close $fp

package require fileutil
set filename path/to/settings.txt
set count 0
set lines {}
::fileutil::foreachLine line $filename {
    incr count
    if {$count == 28} {
        break    ;# stop here: lines from 28 onward are not copied
    }
    append lines $line\n
}
append lines TEST\n
::fileutil::writeFile $filename $lines
This is a simple and clean way to do it: read the lines up to the point where you want to write, then write those lines back with your new content added. Note that everything from line 28 onward is discarded; if you need to keep those lines, see the variant below.
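If you want TEST inserted as the new line 28 with the rest of the file preserved, a sketch of a variant (same fileutil commands; the line number comes from the question):
package require fileutil
set filename path/to/settings.txt
set count 0
set lines {}
::fileutil::foreachLine line $filename {
    if {[incr count] == 28} {
        append lines TEST\n    ;# insert TEST as the new line 28
    }
    append lines $line\n
}
::fileutil::writeFile $filename $lines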

I'd suggest it would be easier to spawn an external program that specializes in this:
exec sed -i {28s/.*/TEST/} path/to/settings.txt

Related

Manipulating a file in Tcl

First-time poster and new to Tcl, so please pardon my lack of knowledge.
I've found a few examples on Stack Overflow and with that help created a script.
I need to modify a few lines of a file. I've tried the following (see code). I can add the line of interest, but it is not written in the correct location: e.g. if I want to replace line 3, it adds the line after line 3, and moreover it deletes subsequent lines if there is more than one line operation.
Lastly, could someone kindly suggest the best way to identify the line of interest by name rather than line number? The name is always of the form Filter.HpOrd_n =
where n is 0...k.
Data in info.dat
AA
BB
Filter.HpOrd_1 = 2
Filter.HpOrd_2 = 2
Filter.HpOrd_3 = 0.1
Filter.HpOrd_4 = 0.2
CC
DD
EE
FF
Code:
set fd [open "info.dat" r+]
set i 0
while { [gets $fd line] != -1 } {
set line [split $line "\n"]
incr i
if {$i == 3} {
set nLine [lreplace $line 0 0 Filter.LoPass]
puts $fd [join $nLine "\n"]
}
if {$i == 6} {
set nLine [lreplace $line 0 0 Filter.Butterworth]
puts $fd [join $nLine "\n"]
}
}
close $fd
With plain Tcl:
# the input and output file handles
set fin [open info.dat r]
set fout [file tempfile fname]
# process the file
while {[gets $fin line] != -1} {
    puts $fout [string map {
        "Filter.HpOrd_1" "Filter.LoPass"
        "Filter.HpOrd_4" "Filter.Butterworth"
    } $line]
}
close $fin
close $fout
# backup the original and overwrite it
file link -hard info.dat.bak info.dat
file rename -force -- $fname info.dat
Tcl is just a meta language, and set fd [open "info.dat" r+] is ordinary file descriptor handling. If you open a file descriptor with r+ you can read and write through it, but a file descriptor always points at a single position in the file.
With r+ your file descriptor initially points to the start of the file. Then you gets $fd line a line from the file, so $fd points to the start of the second line afterwards. Now you puts $fd [join $nLine "\n"], blindly overwriting from the start of the second line, and so on.
Generally you cannot replace lines within a single file; instead you write a second file and move it over the original after you have closed both files. You can overwrite data with seek, but you overwrite from a byte position in the file, so what you write should always have exactly the same size as what you read from that position before.
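To illustrate the same-size caveat, here is a sketch that overwrites line 3 of info.dat in place, and only when the replacement is exactly as long as the original (the replacement text is hypothetical):
set fd [open info.dat r+]
gets $fd dummy                 ;# skip line 1
gets $fd dummy                 ;# skip line 2
set pos [tell $fd]             ;# byte offset of the start of line 3
gets $fd old                   ;# the line to be replaced
set new "Filter.HpOrd_1 = 9"   ;# hypothetical same-length replacement
if {[string length $new] == [string length $old]} {
    seek $fd $pos
    puts -nonewline $fd $new
}
close $fd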
Plain files (in basically all programming languages) are byte/character oriented rather than line oriented. This means 1) that you need to use a seek operation to get back to the beginning of the line you want to overwrite, and 2) unless the new line is exactly the same length as the old one, you will experience stub lines around it.
You have other problems as well. set line [split $line "\n"] doesn't do anything: you've just read line from gets, so it's guaranteed not to have any newlines in it. [join $nLine "\n"] doesn't do what you probably think it does: it will replace any sequences of whitespace in $line with single newlines, but it will not place any newline at the end of the string.
Unless your files are insanely large, I recommend something like this:
Replace by line number
proc lineReplace args {
    set lines [split [lindex $args end] \n]
    foreach {n line} [lrange $args 0 end-1] {
        set index [incr n -1]    ;# convert the 1-based line number to a 0-based index
        if {$index >= 0} {
            lset lines $index $line
        }
    }
    join $lines \n
}
package require fileutil
fileutil::updateInPlace info.dat {
lineReplace
3 Filter.LoPass
6 Filter.Butterworth
}
In the "front end" you only specify the command to use and thereafter pairs of line number / new line text.
In the "back end" (the lineReplace command) the parameter args will contain those number / line pairs and at the end, as a single item, the complete contents of the file. The file contents are then split into a list of lines, and for every number / line pair you replace one of the items in that list. Finally, the list of lines are joined back into a string with newlines between each line. This string is returned by lineReplace to fileutil::updateInPlace, which replaces the old contents in the file with the returned string.
Replace by name
proc lineReplaceByName args {
    set lines [split [lindex $args end] \n]
    foreach {name line} [lrange $args 0 end-1] {
        set index [lsearch $lines $name*]
        if {$index >= 0} {    ;# lsearch returns -1 when the name isn't found
            lset lines $index $line
        }
    }
    join $lines \n
}
fileutil::updateInPlace info.dat {
lineReplaceByName
Filter.HpOrd_1 Filter.LoPass
Filter.HpOrd_4 Filter.Butterworth
}
In this case the "back end" calculates the line number by searching for the given name at the beginning of each line. If the name isn't found, the replacement operation is skipped. Otherwise it's the same as before.
Replacing just the name
If you don't want to replace the complete line, but just the name part of it, some changes are necessary. If you are 100% sure that 1) the name never has any whitespace in it, and 2) there is always whitespace between the name and the =, you can just replace lset lines $index $line with lset lines $index 0 $line. If you want to play it safer, you can replace the line with
lset lines $index [regsub {.+(?=\s*=\s*)} [lindex $lines $index] $line]
which uses a regular expression to find the character region that precedes the = character (optionally with whitespace around it) and then replaces that with the text you provided.
The fileutil package is a part of the Tcllib companion library to Tcl.
Documentation: fileutil package, foreach, if, incr, join, lindex, lrange, lsearch, lset, package, proc, regsub, seek, set, split

Reading multiple lines from a file using TCL?

How do I read more than a single line from a file using Tcl? By default the gets command reads until a newline is found; how do I change this behaviour to read a file until a specific character is found?
If you don't mind reading over a bit, you can do it by calling gets or read in a loop:
set data ""
while {[gets $chan line] >= 0} {
set idx [string first $whatToLookFor $line]
if {$idx == -1} {
append data $line\n
} else {
# Decrement idx; don't want first character of $whatToLookFor
append data [string range $line 0 [incr idx -1]]
break
}
}
# Data has everything up to but not including $whatToLookFor
If you're looking for multiline patterns, I suggest reading the whole file into memory and working on that. It's just so much easier than trying to write a correct matcher:
set data [read $chan]
set idx [string first $whatToLookFor $data]
if {$idx > -1} {
    set data [string range $data 0 [incr idx -1]]
}
This latter form will also work just fine with binary data. Just remember to fconfigure $chan -translation binary first if you're doing that.
Use fconfigure.
set fp [open "somefile" r]
fconfigure $fp -eofchar "char"
set data [read $fp]
close $fp
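For instance, with Ctrl-Z as the (assumed) terminator; clearing -eofchar afterwards should let you continue reading past the marker:
set fp [open "somefile" r]
fconfigure $fp -eofchar "\x1a"   ;# stop reading at the Ctrl-Z byte
set head [read $fp]              ;# everything before the terminator
fconfigure $fp -eofchar ""       ;# switch the marker off again
set tail [read $fp]              ;# the terminator and the rest of the file
close $fp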
In addition to Donal's good advice, you could get a list of records by reading the whole file and splitting on the record separator:
package require textutil::split
set records [textutil::splitx [read $chan] "record_separator"]

Insert lines of code in a file after n lines using Tcl

I am trying to write a Tcl script in which I need to insert some lines of code after finding a regular expression.
For instance, I need to insert more #define lines after finding the last occurrence of #define in the present file.
Thanks!
When making edits to a text file, you read it in and operate on it in memory. Since you're dealing with lines of code in that text file, we want to represent the file's contents as a list of strings (each of which is the contents of a line). That then lets us use lsearch (with the -regexp option) to find the insertion location (which we'll do on the reversed list so we find the last instead of the first location) and we can do the insertion with linsert.
Overall, we get code a bit like this:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set idx [lsearch -regexp [lreverse $lines] "^#define "]
if {$idx < 0} {
    error "did not find insertion point in $filename"
}
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [linsert $lines end-$idx {*}$linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
Prior to Tcl 8.5, the style changes a little:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index by searching for all matches and taking the last
set indices [lsearch -all -regexp $lines "^#define "]
if {![llength $indices]} {
    error "did not find insertion point in $filename"
}
set idx [expr {[lindex $indices end] + 1}]
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [eval [linsert $linesToInsert 0 linsert $lines $idx]]
### ALTERNATIVE
# set lines [eval [list linsert $lines $idx] $linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
The searching for all the indices (and adding one to the last one) is reasonable enough, but the contortions for the insertion are pretty ugly. (Pre-8.4? Upgrade.)
Not exactly the answer to your question, but this is the type of task that lends itself to shell scripting (even if my solution is a bit ugly). These three commands produce the leading part, the new lines, and the trailing part respectively (the Q command requires GNU sed):
tac inputfile | sed -n '/#define/,$p' | tac    # everything up to and including the last #define
echo "$yourlines"                              # the new lines to insert
tac inputfile | sed '/#define/Q' | tac         # everything after the last #define
should work!
set filename content.txt
set fh [open $filename r]
set lines [read $fh]
close $fh
set line_con [split $lines "\n"]
set line_num {}
set i 0
foreach line $line_con {
    if {[regexp {^#define} $line]} {
        lappend line_num $i
    }
    incr i    ;# count every line, not just the matching ones
}
if {[llength $line_num] > 0} {
    # insert after the last #define line (assumes line_insert holds the new text)
    set line_con [linsert $line_con [expr {[lindex $line_num end] + 1}] $line_insert]
} else {
    puts "no insert point"
}
set filename content_new.txt
set fh [open $filename w]
puts $fh [join $line_con "\n"]
close $fh

Splitting a huge file into smaller files

How can I split a huge file into n number of smaller files using Tcl? The file name to split and number of files to be created have to be given through command line. Here is what I have so far:
proc splitter { file no } {
set lnum 0
set file_open [open $file r]
while {[gets $file_open line] >= 0 } {
incr lnum
}
puts "$lnum"
set num [expr $lnum/$no]
close $file_open
}
Here is one way to split text files, which has the advantage of not holding much in memory at once. (You can also split binary files, but then you need to use read instead of gets, and also to consider whether there are record boundaries in the data; text is mostly simpler.)
#!/usr/bin/env tclsh8.5
proc splitter {filename fileCount} {
    set targetFileSize [expr {[file size $filename] / $fileCount}]
    set n 0
    set fin [open $filename]
    while {[gets $fin line] >= 0} {
        if {![info exists fout]} {
            set fout [open $filename.split_[incr n] w]
        }
        puts $fout $line
        if {[tell $fout] > $targetFileSize} {
            close $fout
            unset fout
        }
    }
    if {[info exists fout]} {
        close $fout
    }
    close $fin
}
splitter {*}$argv; # Connect to outside command line
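For the binary-file case mentioned above, a sketch that cuts the file into fixed-size chunks with read instead of gets (the output naming follows the text version; no record boundaries are respected):
proc splitBinary {filename fileCount} {
    set chunkSize [expr {[file size $filename] / $fileCount + 1}]
    set fin [open $filename rb]
    set n 0
    while {1} {
        set data [read $fin $chunkSize]
        if {$data eq ""} {
            break    ;# end of file
        }
        set fout [open $filename.split_[incr n] wb]
        puts -nonewline $fout $data
        close $fout
    }
    close $fin
}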
use the global argv list to access command-line parameters
after you read the file to count the lines, instead of closing and reopening the file handle, you can seek back to the top of the file (see the sketch after this list)
if you're on *nix, have you considered using exec to call out to split?
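A sketch combining those hints: count the lines, seek back to the start, then deal the lines out into the requested number of files (the output file names are assumptions):
lassign $argv file no
set fin [open $file r]
set lnum 0
while {[gets $fin line] >= 0} {
    incr lnum
}
set perFile [expr {($lnum + $no - 1) / $no}]    ;# lines per piece, rounded up
seek $fin 0 start                               ;# rewind instead of reopening
set n 0
set i 0
set fout ""
while {[gets $fin line] >= 0} {
    if {$fout eq ""} {
        set fout [open $file.part[incr n] w]
    }
    puts $fout $line
    if {[incr i] % $perFile == 0} {
        close $fout
        set fout ""
    }
}
if {$fout ne ""} {
    close $fout
}
close $fin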

Parsing a file with Tcl

I have a file here which has multiple set statements, and I want to extract only the lines of interest. Can the following code help?
set in [open filename r]
seek $in 0 start
while{ [gets $in line ] != -1} {
regexp (line to be extracted)
}
Other solution:
Instead of using gets, I prefer using the read command to load the whole contents of the file and then process those contents line by line. That way we are in complete control of operations on the file, by having it as a list of lines.
set fileName [lindex $argv 0]
catch {set fptr [open $fileName r]}       ;# note: catch silently swallows any open error
set contents [read -nonewline $fptr]      ;# read the file contents
close $fptr                               ;# close the file since it has been read now
set splitCont [split $contents "\n"]      ;# split the file contents on newlines
foreach ele $splitCont {
    if {[regexp {^set +(\S+) +(.*)} $ele -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
How to run this code: say the above code is saved as test.tcl. Then run
tclsh test.tcl FileName
where FileName is the full path of the file, unless the file is in the same directory as the script.
First, you don't need to seek to the beginning straight after opening a file for reading; that's where it starts.
Second, the pattern for reading a file is this:
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {^set +(\S+) +(.*)} $line -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
close $f
OK, that's a very simple RE in the middle there (and for more complicated files you'll need several) but that's the general pattern. Note that, as usual for Tcl, the space after the while command word is important, as is the space between the while expression and the while body. For specific help with what RE to use for particular types of input data, ask further questions here on Stack Overflow.
Yet another solution:
As it looks like the source is a Tcl script, create a new safe interpreter using interp that only has the set command exposed (plus any others you need), hide all other commands, and replace unknown with a proc that just skips anything unrecognised. Then evaluate the input in this interpreter; a sketch follows.
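A minimal sketch of that idea. The policy here (honour set, silently ignore everything else) and the input file name sample.tcl are assumptions:
# create a safe interpreter and make unrecognised commands no-ops
set slave [interp create -safe]
$slave eval {proc unknown args {}}
# hide everything except the few commands relied on below
foreach cmd [$slave eval {info commands}] {
    if {$cmd ni {set unknown info array}} {
        $slave hide $cmd
    }
}
# evaluate the input file inside the restricted interpreter
set f [open sample.tcl]
$slave eval [read $f]
close $f
# report the scalar variables the script set (a few built-in tcl_*
# variables may show up here as well)
foreach name [$slave eval {info vars}] {
    if {![$slave eval [list array exists $name]]} {
        puts "$name = [$slave eval [list set $name]]"
    }
}
interp delete $slave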
Here is yet another solution: use the file scanning feature of TclX. Please look up TclX for more info. I like this solution in that you can have several scanmatch blocks.
package require Tclx
# Open a file, skip error checking for simplicity
set inputFile [open sample.tcl r]
# Scan the file
set scanHandle [scancontext create]
scanmatch $scanHandle {^\s*set} {
    lassign $matchInfo(line) setCmd varName varValue    ;# parse the line
    puts "$varName = $varValue"
}
scanfile $scanHandle $inputFile
close $inputFile
Yet another solution: use the grep command from the fileutil package:
package require fileutil
puts [lindex $argv 0]
set matchedLines [fileutil::grep {^\s*set} [lindex $argv 0]]
foreach line $matchedLines {
    # Each line is in the format filename:line, for example:
    #   sample.tcl:set foo bar
    set varName [lindex $line 1]
    set varValue [lindex $line 2]
    puts "$varName = $varValue"
}
I've read your comments so far, and if I understand you correctly your input data file has 6 (or 9, depending on which comment) data fields per line, separated by spaces. You want to use a regexp to parse them into 6 (or 9) arrays or lists, one per data field.
If so, I'd try something like this (using lists):
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {(\S+) (\S+) (\S+) (\S+) (\S+) (\S+)} $line -> name source drain gate bulk inst]} {
        lappend nameL $name
        lappend sourceL $source
        lappend drainL $drain
        lappend gateL $gate
        lappend bulkL $bulk
        lappend instL $inst
    }
}
close $f
Now you should have a set of 6 lists, one per field, with one entry in each list for every item in your input file. To access the i-th name, for example, you grab [lindex $nameL $i].
If (as I suspect) your main goal is to get the parameters of the device whose name is "foo", you'd use a structure like this:
set name "foo"
set i [lsearch $nameL $name]
if {$i != -1} {
set source $sourceL[$i]
} else {
puts "item $name not found."
set source ''
# or set to 0, or whatever "not found" marker you like
}
set File [open $fileName r]
while {[gets $File line] >= 0} {
    regexp {(set) ([a-zA-Z0-9]+) (.*)} $line str1 str2 str3 str4
    # str2 contains "set"
    # str3 contains the variable to be set
    # str4 contains the value to be set
}
close $File