Save TCL input entries in a file and later load them - tcl

I have developed a TCL UI with a couple of inputs that need to be entered by the user. The first time, the user enters all the file paths, but I then want to save the user-defined entries in a file and load them back later.
Saving is fine... I am thinking of saving all these variables in a file, but loading them from a file needs a mapping. How can it be done?
Any example will be helpful.

If you have the flexibility to define the format of the file where the contents will be stored, I'd recommend storing the contents in a way such that reading/writing maps to the keys and is order independent. This will allow you to update your UI to add/delete input fields without worrying about the order in which they are captured in the file.
For example, your file format could be something like this:
Top Directory: <value>
LEF File: <value>
.
.
.
You'll have to carefully choose a separator between the key (label) and the value.
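Loading is then just a matter of splitting each line on that separator; a rough sketch, assuming ":" as the separator and using Input_entries.txt and inputFields as placeholder names:
# Read "Label: value" lines back into an array keyed by the label (sketch)
array set inputFields {}
set f [open Input_entries.txt r]
foreach line [split [read $f] "\n"] {
    set sep [string first ":" $line]
    if {$sep < 0} { continue }                  ;# skip blank or malformed lines
    set key   [string trim [string range $line 0 [expr {$sep - 1}]]]
    set value [string trim [string range $line [expr {$sep + 1}] end]]
    set inputFields($key) $value
}
close $f
# Each entry widget can now be filled from $inputFields(<its label>)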
If this is always going to be consumed by Tcl, you can make it simpler by storing an array in the file. This will also speed things up when you load the file to populate the entries in the UI. For example, your file format could be something like this (note that array element names containing spaces need quoting):
set "inputFields(Top Directory)" <value>
set "inputFields(LEF File)" <value>

I achieved this with the following code, though it is not very optimized.
First I save the variable values to an input file, and then I read them back in the same order.
proc save_input_entries {} {
    global ENTRYfilename ENTRYfilename2 ENTRYfilename3 ENTRYfilename4 ENTRYfilename5 ENTRYfilename6 ENTRYfilename7 ENTRYfilename8 ENTRYfilename9 ENTRYfilename10 ENTRYfilename11 ENTRYfilename12 ENTRYfilename13 ENTRYfilename14 ENTRYfilename15 ENTRYfilename16 ENTRYfilename17 topdir corner_dir corner_name
    set filename Input_entries.txt
    set fileId [open $filename "w"]
    puts $fileId $ENTRYfilename
    puts $fileId $ENTRYfilename3
    puts $fileId $ENTRYfilename4
    puts $fileId $ENTRYfilename5
    puts $fileId $ENTRYfilename7
    puts $fileId $ENTRYfilename8
    puts $fileId $ENTRYfilename15
    puts $fileId $ENTRYfilename14
    puts $fileId $ENTRYfilename16
    puts $fileId $ENTRYfilename17
    close $fileId
}
proc load_input_entries {} {
    global ENTRYfilename ENTRYfilename2 ENTRYfilename3 ENTRYfilename4 ENTRYfilename5 ENTRYfilename6 ENTRYfilename7 ENTRYfilename8 ENTRYfilename9 ENTRYfilename10 ENTRYfilename11 ENTRYfilename12 ENTRYfilename13 ENTRYfilename14 ENTRYfilename15 ENTRYfilename16 ENTRYfilename17
    set fp [open Input_entries.txt]
    set stuff [read $fp]
    close $fp
    set lines [split $stuff "\n"]
    set ENTRYfilename [lindex $lines 0]
    set ENTRYfilename3 [lindex $lines 1]
    set ENTRYfilename4 [lindex $lines 2]
    set ENTRYfilename5 [lindex $lines 3]
    set ENTRYfilename7 [lindex $lines 4]
    set ENTRYfilename8 [lindex $lines 5]
    set ENTRYfilename15 [lindex $lines 6]
    set ENTRYfilename14 [lindex $lines 7]
    set ENTRYfilename16 [lindex $lines 8]
    set ENTRYfilename17 [lindex $lines 9]
}
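A slightly more compact variant of the same idea, sketched below, keeps the variable names in a single list so the save and load order can never drift apart (the names are the ones used above; treat this as a sketch rather than drop-in code):
# Keep the entry variable names in one list so save and load always agree
set entryVars {
    ENTRYfilename   ENTRYfilename3  ENTRYfilename4  ENTRYfilename5
    ENTRYfilename7  ENTRYfilename8  ENTRYfilename15 ENTRYfilename14
    ENTRYfilename16 ENTRYfilename17
}

proc save_input_entries {} {
    global entryVars
    set fileId [open Input_entries.txt "w"]
    foreach var $entryVars {
        global $var
        puts $fileId [set $var]
    }
    close $fileId
}

proc load_input_entries {} {
    global entryVars
    set fp [open Input_entries.txt "r"]
    set lines [split [read $fp] "\n"]
    close $fp
    set i 0
    foreach var $entryVars {
        global $var
        set $var [lindex $lines $i]
        incr i
    }
}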

Related

TCL: Read lines from file that contain only relevant words

I'm reading a file and doing some manipulation on the data.
Unfortunately I get the error message below:
unable to alloc 347392 bytes
Abort
Since the file is huge, I want to read only the lines that contain certain words (described in "regexp_or").
Is there any way to read only the lines that match "regexp_or" and avoid the foreach loop?
set regexp_or "^Err|warning|Fatal error"
set file [open [lindex $argv 1] r]
set data [read $file]
foreach line [split $data "\n"] {
    if {[regexp [subst $regexp_or] $line]} {
        puts $line
    }
}
You could pull your input through grep:
set file [open |[list grep -E $regexp_or [lindex $argv 1]] r]
But that depends on grep being available. To do it completely in Tcl, you can process the file in chunks:
set file [open [lindex $argv 1] r]
while {![eof $file]} {
    # Read a million characters
    set data [read $file 1000000]
    # Make sure to only work with complete lines
    append data [gets $file]
    foreach line [lsearch -inline -all -regexp [split $data \n] $regexp_or] {
        puts $line
    }
}
close $file

Error: Cannot find channel name in Tcl

I am trying to open a file for reading, asking the user to pick the file via a Tk file-open dialog box, but I am getting the error “can not find channel named”.
Here is my code.
Can you let me know the issue with the code below?
proc load_input_entries {} {
    global sa sd sb sc
    set types {
        {{Text Files} {.txt} }
        {{CSV Files}  {.csv} }
        {{All Files}  *      }
    }
    set fp [tk_getOpenFile -parent . \
                -title "Select File" \
                -filetypes $types -multiple true \
                -initialdir "/simulation/safe/ip/work"]
    if {[file exists $fp]} {
        set stuff [read $fp]
        set lines [split $stuff "\n"]
        set sa [lindex $lines 0]
        set sb [lindex $lines 1]
        set sc [lindex $lines 2]
        set sd [lindex $lines 3]
    }
}
tk_getOpenFile gives you the file name. You still have to open the file to be able to read it. Try
set filename [tk_getOpenFile ...
if {[file exists $filename]} {
    set fp [open $filename]
    ...
If you get a problem like this, it's often useful to temporarily insert a puts command to see what the value of your variable is. If you had done that, you would have seen that it had a file name instead of a file handle.
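Putting that together, a corrected version of the proc might look roughly like this (note that -multiple true makes tk_getOpenFile return a list of file names, so this sketch drops that option and handles a single file; the sa/sb/sc/sd assignments are kept from the question):
proc load_input_entries {} {
    global sa sb sc sd
    set types {
        {{Text Files} {.txt}}
        {{CSV Files}  {.csv}}
        {{All Files}  *}
    }
    # tk_getOpenFile returns a file *name*; it still has to be opened
    set filename [tk_getOpenFile -parent . \
                      -title "Select File" \
                      -filetypes $types \
                      -initialdir "/simulation/safe/ip/work"]
    if {$filename ne "" && [file exists $filename]} {
        set fp [open $filename r]
        set lines [split [read $fp] "\n"]
        close $fp
        set sa [lindex $lines 0]
        set sb [lindex $lines 1]
        set sc [lindex $lines 2]
        set sd [lindex $lines 3]
    }
}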

Splitting input lines with varying formats in Tcl

Good afternoon,
I am attempting to write a Tcl script which, given the input file
input hreadyin;
input wire htrans;
input wire [7:0] haddr;
output logic [31:0] hrdata;
output hreadyout;
will produce
hreadyin(hreadyin),
htrans(htrans),
haddr(haddr[7:0]),
hrdata(hrdata[31:0]),
hready(hreadyout)
In other words, the format is:
<input/output> <wire/logic optional> <width, optional> <paramName>;
with the number of whitespaces unrestricted between each of them.
I have no problem reading from the input file, and I was able to put each line in a $line variable. Now I have been trying things like:
set param0 [split $line "input"]
set param1 [lindex $param0 1]
But since not all lines have "input" in them, I am unable to get the elements I want (the name, and the width if it exists).
Is there another command in Tcl capable of doing this kind of parsing?
The regexp command is useful to find words separated by arbitrary whitespace:
while {[gets $fh line] != -1} {
    # get all whitespace-separated words in the line, ignoring the semi-colon
    set i [string first ";" $line]
    set fields [regexp -inline -all {\S+} [string range $line 0 $i-1]]
    switch -exact -- [llength $fields] {
        2 - 3 {
            set name [lindex $fields end]
            puts [format "%s(%s)," $name $name]
        }
        4 {
            lassign $fields - - width name
            puts [format "%s(%s%s)," $name $name $width]
        }
    }
}
I think you should look at something like
# Compress all multiple spaces to single spaces
set compressedLine [regsub -all { +} $line " "]
# Drop the trailing semicolon and split into words
set items [split [string range $compressedLine 0 end-1] " "]
switch [llength $items] {
    2 {
        # Handle case where neither wire/logic nor width is specified
        set inputOutput [lindex $items 0]
        set paramName [lindex $items 1]
        .
        .
        .
    }
    4 {
        # Handle case where both wire/logic and width are specified
        set inputOutput [lindex $items 0]
        set wireLogic [lindex $items 1]
        set width [lindex $items 2]
        set paramName [lindex $items 3]
        .
        .
        .
    }
    default {
        # Don't know how to handle other cases - add them in if you know
        puts stderr "Can't handle $line"
    }
}
I hope it's not legal to have exactly one of wire/logic and width specified - you'd need to work hard to determine which is which.
(Note the [string range...] fiddle to discard the semicolon at the end of the line)
Or, if you can write a regex that captures the right data, you can do it like this:
set data [open "file.txt" r]
set output [open "output.txt" w]
while {[gets $data line] != -1} {
regexp -- {(\[\d+:\d+\])?\s*(\w+);} $line - width params
puts $output "$params\($params$width\),"
}
close $data
close $output
This one will also print the comma you have inserted in your expected output, but it will insert it on the last line as well, so you get:
hreadyin(hreadyin),
htrans(htrans),
haddr(haddr[7:0]),
hrdata(hrdata[31:0]),
hready(hreadyout),
If you don't want that, and the file is not too large (apparently the limit for a list is 2147483672 bytes, and I'm going to use one), you could collect the lines in a list like this:
set data [open "file.txt" r]
set output [open "output.txt" w]
set listing "" #Empty list
while {[gets $data line] != -1} {
regexp -- {(\[\d+:\d+\])?\s*(\w+);} $line - width params
lappend listing "$params\($params$width\)" #Appending to list instead
}
puts $output [join $listing ",\n"] #Join all in a single go
close $data
close $output

Insert lines of code into a file after n lines using Tcl

I am trying to write a Tcl script in which I need to insert some lines of code after finding a regular expression.
For instance, I need to insert more #define lines of code after the last occurrence of #define in the present file.
Thanks!
When making edits to a text file, you read it in and operate on it in memory. Since you're dealing with lines of code in that text file, we want to represent the file's contents as a list of strings (each of which is the contents of a line). That then lets us use lsearch (with the -regexp option) to find the insertion location (which we'll do on the reversed list so we find the last instead of the first location) and we can do the insertion with linsert.
Overall, we get code a bit like this:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set idx [lsearch -regexp [lreverse $lines] "^#define "]
if {$idx < 0} {
error "did not find insertion point in $filename"
}
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [linsert $lines end-$idx {*}$linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
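If it helps, those steps can also be wrapped in a small proc and called directly; here is a sketch (the proc name insertAfterLastDefine and the example file name and defines are made up for illustration):
# Sketch: wrap the read / lsearch / linsert / write steps in one proc
proc insertAfterLastDefine {filename linesToInsert} {
    set f [open $filename r]
    set lines [split [read $f] "\n"]
    close $f
    set idx [lsearch -regexp [lreverse $lines] "^#define "]
    if {$idx < 0} {
        error "did not find insertion point in $filename"
    }
    set lines [linsert $lines end-$idx {*}$linesToInsert]
    set f [open $filename w]
    puts $f [join $lines "\n"]
    close $f
}

# Example call (file name and defines are made up):
insertAfterLastDefine config.h {
    {#define EXTRA_FLAG  1}
    {#define EXTRA_LIMIT 42}
}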
Prior to Tcl 8.5, the style changes a little:
# Read lines of file (name in “filename” variable) into variable “lines”
set f [open $filename "r"]
set lines [split [read $f] "\n"]
close $f
# Find the insertion index in the reversed list
set indices [lsearch -all -regexp $lines "^#define "]
if {![llength $indices]} {
error "did not find insertion point in $filename"
}
set idx [expr {[lindex $indices end] + 1}]
# Insert the lines (I'm assuming they're listed in the variable “linesToInsert”)
set lines [eval [linsert $linesToInsert 0 linsert $lines $idx]]
### ALTERNATIVE
# set lines [eval [list linsert $lines $idx] $linesToInsert]
# Write the lines back to the file
set f [open $filename "w"]
puts $f [join $lines "\n"]
close $f
The searching for all the indices (and adding one to the last one) is reasonable enough, but the contortions for the insertion are pretty ugly. (Pre-8.4? Upgrade.)
Not exactly the answer to your question, but this is the type of task that lends itself to shell scripting (even if my solution is a bit ugly).
tac inputfile | sed -n '/#define/,$p' | tac
echo "$yourlines"
tac inputfile | sed '/#define/Q' | tac
should work!
set filename content.txt
set fh [open $filename r]
set lines [read $fh]
close $fh
set line_con [split $lines "\n"]
set line_num {}
set i 0
foreach line $line_con {
    if {[regexp {^#define} $line]} {
        lappend line_num $i
    }
    incr i
}
if {[llength $line_num] > 0} {
    # Insert after the last "#define" line; linsert returns a new list,
    # so capture the result ($line_insert holds the line(s) to insert)
    set line_con [linsert $line_con [expr {[lindex $line_num end] + 1}] $line_insert]
} else {
    puts "no insert point"
}
set filename content_new.txt
set fh [open $filename w]
puts $fh [join $line_con "\n"]
close $fh

Parsing a file with Tcl

I have a file which has multiple set statements; however, I want to extract only the lines of interest. Can the following code help?
set in [open filename r]
seek $in 0 start
while{ [gets $in line ] != -1} {
regexp (line to be extracted)
}
Other solution:
Instead of using gets, I prefer to use the read command to read the whole contents of the file and then process them line by line. That way we are in complete control of the operation on the file, since we have it as a list of lines.
set fileName [lindex $argv 0]
catch {set fptr [open $fileName r]}
set contents [read -nonewline $fptr]   ;# Read the file contents
close $fptr                            ;# Close the file since it has been read now
set splitCont [split $contents "\n"]   ;# Split the file contents on newlines
foreach ele $splitCont {
    if {[regexp {^set +(\S+) +(.*)} $ele -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
How to run this code:
Say the above code is saved in test.tcl.
Then
tclsh test.tcl FileName
FileName is the full path of the file, unless the file is in the same directory as the program.
First, you don't need to seek to the beginning straight after opening a file for reading; that's where it starts.
Second, the pattern for reading a file is this:
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {^set +(\S+) +(.*)} $line -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
close $f
OK, that's a very simple RE in the middle there (and for more complicated files you'll need several) but that's the general pattern. Note that, as usual for Tcl, the space after the while command word is important, as is the space between the while expression and the while body. For specific help with what RE to use for particular types of input data, ask further questions here on Stack Overflow.
Yet another solution:
As it looks like the source is a Tcl script, create a new safe interpreter using interp which only has the set command exposed (and any others you need), hide all other commands, and replace unknown so that it just skips anything unrecognised. Then source the input in this interpreter.
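A minimal sketch of that idea, assuming the goal is just to capture the set assignments into an array in the master interpreter (the names parser, captureSet, ignoreUnknown, parsedVars and sample.tcl are mine, not from the answer; for brevity the remaining safe built-ins are left visible rather than hidden with interp hide):
# Create a safe slave interpreter to evaluate the untrusted script in
set parser [interp create -safe]
array set parsedVars {}

# Redirect the slave's "set" into the master so assignments are recorded
proc captureSet {name args} {
    global parsedVars
    if {[llength $args] == 1} {
        set parsedVars($name) [lindex $args 0]
    }
}
interp alias $parser set {} captureSet

# Make unrecognised commands a no-op instead of an error
proc ignoreUnknown {args} {}
interp alias $parser unknown {} ignoreUnknown

# "source" is hidden in safe interpreters, so read the file in the master
# and evaluate its contents in the slave instead
set f [open sample.tcl r]
interp eval $parser [read $f]
close $f

parray parsedVars        ;# show what was captured
interp delete $parser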
Here is yet another solution: use the file scanning feature of Tclx. Please look up Tclx for more info. I like this solution because you can have several scanmatch blocks.
package require Tclx
# Open a file, skip error checking for simplicity
set inputFile [open sample.tcl r]
# Scan the file
set scanHandle [scancontext create]
scanmatch $scanHandle {^\s*set} {
    lassign $matchInfo(line) setCmd varName varValue   ;# parse the line
    puts "$varName = $varValue"
}
scanfile $scanHandle $inputFile
close $inputFile
Yet another solution: use the grep command from the fileutil package:
package require fileutil
puts [lindex $argv 0]
set matchedLines [fileutil::grep {^\s*set} [lindex $argv 0]]
foreach line $matchedLines {
    # Each line is in format: filename:line, for example
    # sample.tcl:set foo bar
    set varName [lindex $line 1]
    set varValue [lindex $line 2]
    puts "$varName = $varValue"
}
I've read your comments so far, and if I understand you correctly, your input data file has 6 (or 9, depending on which comment) data fields per line, separated by spaces. You want to use a regexp to parse them into 6 (or 9) arrays or lists, one per data field.
If so, I'd try something like this (using lists):
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {(\S+) (\S+) (\S+) (\S+) (\S+) (\S+)} $line -> name source drain gate bulk inst]} {
        lappend nameL $name
        lappend sourceL $source
        lappend drainL $drain
        lappend gateL $gate
        lappend bulkL $bulk
        lappend instL $inst
    }
}
close $f
close $f
Now you should have a set of 6 lists, one per field, with one entry in the list for each item in your input file. To access the i-th name, for example, you grab [lindex $nameL $i].
If (as I suspect) your main goal is to get the parameters of the device whose name is "foo", you'd use a structure like this:
set name "foo"
set i [lsearch $nameL $name]
if {$i != -1} {
set source $sourceL[$i]
} else {
puts "item $name not found."
set source ''
# or set to 0, or whatever "not found" marker you like
}
set File [open $fileName r]
while {[gets $File line] >= 0} {
    regexp {(set) ([a-zA-Z0-9]+) (.*)} $line str1 str2 str3 str4
    # str2 contains "set"
    # str3 contains the variable to be set
    # str4 contains the value to be set
}
close $File