I have a script which generates a numbered file, like this:

set no 0
set fn [open log_file_$no w]
I want to remember this no every time I run the script, i.e. when running for the first time the file name should be log_file_0, the 2nd time it should be log_file_1, etc.
Is there a way to "remember" the value of the variable so that it can be used later?
You need to store the value to disk somehow. Hoodiecrow gives you one sensible way to do it: in the actual filename. Other options:

1. in a config file somewhere
2. in a database (SQLite is good for this)

Demo for (1):
# read the value
if {[file exists num.dat]} {
    set fh [open num.dat r]
    set no [read -nonewline $fh]
    close $fh
} else {
    set no 0
}
set logfile log_file_${no}
# ...
# write the value
set fh [open num.dat w]
puts $fh [incr no]
close $fh
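To see the counter advance across runs, the same read/increment/write cycle can be wrapped in a proc and called twice. This is a sketch; the proc name `next_log_name` and the demo calls are mine, not from the answer above.

```tcl
# Sketch: the read/increment/write cycle from the demo above, wrapped in
# a proc so repeated calls simulate separate runs of the script.
proc next_log_name {} {
    if {[file exists num.dat]} {
        set fh [open num.dat r]
        set no [read -nonewline $fh]
        close $fh
    } else {
        set no 0
    }
    set logfile log_file_${no}
    set fh [open num.dat w]
    puts $fh [incr no]
    close $fh
    return $logfile
}

file delete -force num.dat   ;# start fresh for the demo
puts [next_log_name]         ;# first "run"  -> log_file_0
puts [next_log_name]         ;# second "run" -> log_file_1
```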
Demo for (2):

package require Tcl 8.6
package require tdbc::sqlite3

set config_file config.db
# check before connecting: connecting creates the file if it is missing
set first_run [expr {![file exists $config_file]}]
tdbc::sqlite3::connection create db $config_file

# read the value
if {$first_run} {
    set stmt [db prepare "create table config (number integer)"]
    $stmt execute
    $stmt close
    set stmt [db prepare "insert into config (number) values (0)"]
    $stmt execute
    $stmt close
    set no 0
} else {
    set stmt [db prepare "select number from config limit 1"]
    $stmt foreach row {
        set no [dict get $row number]
    }
    $stmt close
}
# ...
# write the value
set stmt [db prepare "update config set number = :new"]
$stmt execute [dict create new [incr no]]
$stmt close
db close
You don't need a variable: you have the number you need in the file list.
set no [scan [lindex [lsort -dictionary [glob log_file_*]] end] log_file_%d]
incr no
set fn [open log_file_$no w]
Let's break that up a bit. Create a list of log files:
set filelist [glob log_file_*]
Sort the list in dictionary order (where 10 comes after 2) and pick the last element:
set highest_numbered [lindex [lsort -dictionary $filelist] end]
Extract the number from the file name:
set no [scan $highest_numbered log_file_%d]
Increase the number and open a new log file:
incr no
set fn [open log_file_$no w]
If there is a possibility that no log files exist, the glob command will fail. To take care of this case, either do this:
set filelist [glob -nocomplain log_file_*]
if {[llength $filelist]} {
    set highest_numbered [lindex [lsort -dictionary $filelist] end]
    set no [scan $highest_numbered log_file_%d]
} else {
    set no 0
}
incr no
set fn [open log_file_$no w]
or this slightly safer version (if you have Tcl 8.6):
try {
    glob log_file_*
} on ok filelist {
    set highest_numbered [lindex [lsort -dictionary $filelist] end]
    set no [scan $highest_numbered log_file_%d]
} trap {TCL OPERATION GLOB NOMATCH} {} {
    set no 0
} on error message {
    puts stderr "error when listing log files: $message"
    set no -1
}
if {$no > -1} {
    incr no
    set fn [open log_file_$no w]
    # ...
    chan close $fn
}
Documentation: chan, glob, if, incr, lindex, lsort, open, scan, set, try
(Note: the 'Hoodiecrow' mentioned elsewhere is me, I used that nick earlier.)
I am trying to open a file for reading, asking the user for the input file via a Tk file-open dialog box, but I am facing the error "can not find channel named …".
Here is my code.
Can you let me know the issue with the code below?
proc load_input_entries {} {
    global sa sd sb sc
    set types {
        {{Text Files} {.txt} }
        {{CSV Files} {.csv} }
        {{All Files} * }
    }
    set fp [tk_getOpenFile -parent . \
        -title "Select File" \
        -filetypes $types -multiple true \
        -initialdir "/simulation/safe/ip/work" ]
    if {[file exists $fp]} {
        set stuff [read $fp]
        set lines [split $stuff "\n"]
        set sa [lindex $lines 0]
        set sb [lindex $lines 1]
        set sc [lindex $lines 2]
        set sd [lindex $lines 3]
    }
}
tk_getOpenFile gives you the file name. You still have to open the file to be able to read it. Try
set filename [tk_getOpenFile ...
if {[file exists $filename]} {
    set fp [open $filename]
    ...
If you get a problem like this, it's often useful to temporarily insert a puts command to see what the value of your variable is. If you had done that, you would have seen that it had a file name instead of a file handle.
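Putting that advice together, a corrected version of the proc might look like the sketch below. I've dropped -multiple true, since with it tk_getOpenFile returns a list of names rather than a single name; the path and variable names are taken from the question.

```tcl
# A corrected sketch of the proc: tk_getOpenFile returns a file *name*,
# which must then be opened to get a channel for [read].
proc load_input_entries {} {
    global sa sb sc sd
    set types {
        {{Text Files} {.txt}}
        {{CSV Files}  {.csv}}
        {{All Files}  *}
    }
    set filename [tk_getOpenFile -parent . \
            -title "Select File" \
            -filetypes $types \
            -initialdir "/simulation/safe/ip/work"]
    # An empty string means the user cancelled the dialog
    if {$filename ne "" && [file exists $filename]} {
        set fp [open $filename r]   ;# open -> a real channel
        set lines [split [read -nonewline $fp] "\n"]
        close $fp
        lassign $lines sa sb sc sd
    }
}
```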
The AUT creates logs for a particular function run and appends them to a central file.
The line to search for in this file is:
LatestTimeStamp>MyFunction SomeStep timeLapsed SOME_TIME_VALUE
Every time the log is generated by the AUT, multiple fresh logs of a similar pattern are appended as above, and it's required to extract these fresh logs.
The simple approach I am using is:
class structure:

itcl::class clsLogs {
    variable _oldTimeStamp ""
    variable _logRec
    variable _runCtr 0

    method _extractInfoForRun {runType} {
        # read log
        catch {close $fp}
        set log [read [set fp [open [file join [file normalize $::env(APPDATA)] Logs Action.log]]]]

        # discard everything up to and including the old time stamp; keep the fresh log
        set freshLog $log
        if {[info exists _oldTimeStamp] && $_oldTimeStamp ne ""} {
            regsub [subst -nobackslashes -nocommands {.*$_oldTimeStamp[^\n]*\n}] $log "" freshLog
        }

        # increment run counter for this run
        incr _runCtr

        # get all fresh entry lines reporting timeLapsed for different steps of MyFunction in this run
        set freshEntries [regexp -inline -all [subst -nocommands -nobackslashes {[^\n]*MyFunction[^\n]*timeLapsed[^\n]*}] $freshLog]

        # iterate and collect time-lapsed info for each step of MyFunction for this run
        foreach ent $freshEntries {
            regexp {(.*?)>.*>>MyFunction\s+(.*)\s+timeLapsed\s+(.*)$} $ent -> timeStamp runStep lapsedTime
            puts "************runType>$runType***********\n\t$ent\n\ttimeStamp->$timeStamp\nlapsedTime->$lapsedTime"
            set _logRec(MyFunction_Run-$_runCtr:$runStep,lapsedTime) $lapsedTime
        }

        # remember the last time stamp for the next run
        set _oldTimeStamp $timeStamp
    }
}
But this file could be huge, and reading the whole thing into one variable could exhaust memory:
set log [read [set fp [open [file join [file normalize $env(APPDATA)] Logs Action.log]]]]
Is it somehow possible to get the current position of the file pointer, remember it, and on the next read seek to that saved position and start reading from there?
What are the Tcl command options for the same?
This does it:
seek [set fp [open $file]] $_fOffset
set txt [read $fp]
set _fOffset [tell $fp]
In context:
::itcl::class clsLogs {
    private {
        variable _fOffset 0
    }
    public {
        method _fFreshRead {file args} {
            set options(-resetOffSet) false
            array set options $args
            if {$options(-resetOffSet)} {
                set _fOffset 0
            }
            seek [set fp [open $file]] $_fOffset
            set txt [read $fp]
            set _fOffset [tell $fp]
            close $fp
            return $txt
        }
    }
}
I have to perform the following operations:

1. copy a file from one location to another
2. search for a word in the given file
3. move the file pointer to the beginning of that line
4. place the data copied from the other file at that location
3 files are as follows:
C:\program Files(X86)\Route\*.tcl
C:\Sanity_Automation\Route\*.tcl
C:\Script.tcl
First I need to copy files from the Route folder in Program Files to
Sanity_Automation\Route\*.tcl.
Then I need to search for the "CloseAllOutputFile" keyword in
C:/Sanity_Automation/Route/SystemTest.tcl.
Once found, move the cursor to the beginning of the line where the "CloseAllOutputFile" keyword was found,
and place the data found in Script.tcl at that location.
Firstly, that first "file" is actually a pattern. We need to expand that to a list of real filenames. We do that with glob.
# In braces because there are backslashes
set pattern {C:\Program Files(X86)\Route\*.tcl}
# De-fang the backslashes
set pattern [file normalize $pattern]
# Expand
set sourceFilenames [glob $pattern]
Then we want to copy them. We could do this with:
set target {C:\Sanity_Automation\Route\}
file copy {*}$sourceFilenames [file normalize $target]
But really we also want to build up a list of moved files so that we can process them in the next step. So we do this:
set target {C:\Sanity_Automation\Route\}
foreach f $sourceFilenames {
    set t [file join $target [file tail $f]]
    file copy $f $t
    lappend targetFilenames $t
}
OK, now we're going to do the insertion processing. Let's start by getting the data to insert:
set f [open {C:\Script.tcl}]
set insertData [read $f]
close $f
Now, we want to go over each of the files, read them in, find where to do the insertion, actually do the insertion if we find the place, and then write the files back out. (You do text edits by read/modify-in-memory/write rather than trying to modify the file directly. Always.)
# Iterating over the filenames
foreach t $targetFilenames {
    # Read in
    set f [open $t]
    set contents [read $f]
    close $f
    # Do the search (this is the easiest way!)
    if {[regexp -indices -line {^.*CloseAllOutputFile} $contents where]} {
        # Found it, so do the insert
        set idx [lindex $where 0]
        set before [string range $contents 0 [expr {$idx-1}]]
        set after [string range $contents $idx end]
        set contents $before$insertData$after
        # We did the insert, so write back out
        set f [open $t "w"]
        puts -nonewline $f $contents
        close $f
    }
}
Normally, I'd do the modify as part of the copy, but we'll do it your way here.
Try this:
set sourceDir [file join / Files(x86) Route]
set destinationDir [file join / Sanity_Automation Route]
# Read the script to be inserted
set insertFnm [file join / Script.tcl]
set fil [open $insertFnm]
set insertData [read $fil]
close $fil
# Loop around all the Tcl scripts in the source directory
foreach inFnm [glob [file join $sourceDir *.tcl]] {
    # Determine the name of the output file
    set scriptName [file tail $inFnm]
    set outFnm [file join $destinationDir $scriptName]
    # Open source and destination files, for input and output respectively
    set inFil [open $inFnm]
    set outFil [open $outFnm w]
    while {[gets $inFil line] >= 0} {
        if {[string match *CloseAllOutputFile* $line]} {
            puts $outFil $insertData
            puts $outFil ""; # Ensure there's a newline at the end
                             # of the insertion
        }
        puts $outFil $line
    }
    # Close input and output files
    close $inFil
    close $outFil
}
It seems to work for me.
I am writing code to grep a regular-expression pattern from a file and output that regular expression along with the number of times it has occurred.
Here is the code; I am trying to find the pattern "grep" in my file hello.txt:
set file1 [open "hello.txt" r]
set file2 [read $file1]
regexp {grep} $file2 matched
puts $matched
while {[eof $file2] != 1} {
    set number 0
    if {[regexp {grep} $file2 matched] >= 0} {
        incr number
    }
    puts $number
}
Output that I got:
grep
--------
can not find channel named "qwerty
iiiiiii
wxseddtt
lsakdfhaiowehf'
jbsdcfiweg
kajsbndimm s
grep
afnQWFH
ACV;SKDJNCV;
qw qde
kI UQWG
grep
grep"
while executing
"eof $file2"
It's usually a mistake to check for eof in a while loop -- check the return code from gets instead:
set filename "hello.txt"
set pattern {grep}
set count 0
set fid [open $filename r]
while {[gets $fid line] != -1} {
    incr count [regexp -all -- $pattern $line]
}
close $fid
puts "$count occurrences of $pattern in $filename"
Another thought: if you're just counting pattern matches, assuming your file is not too large:
set fid [open $filename r]
set count [regexp -all -- $pattern [read $fid [file size $filename]]]
close $fid
The error message is caused by the command eof $file2. The reason is that $file2 is not a file handle (i.e., a channel) but contains the content of the file hello.txt itself. You read this file content with set file2 [read $file1].
If you want to do it like that, I would suggest renaming $file2 to something like $filecontent and looping over every contained line:
foreach line [split $filecontent "\n"] {
    ... do something ...
}
Glenn is spot on. Here is another solution: Tcl comes with the fileutil package, which has the grep command:
package require fileutil
set pattern {grep}
set filename hello.txt
puts "[llength [fileutil::grep $pattern $filename]] occurrences found"
If you care about performance, go with Glenn's solution.
I have a file here which has multiple set statements. However, I want to extract only the lines of interest. Can the following code help?
set in [open filename r]
seek $in 0 start
while{ [gets $in line ] != -1} {
    regexp (line to be extracted)
}
Other solution:

Instead of using gets, I prefer using read to pull in the whole contents of the file and then process those contents line by line. That way we are in complete control of the operations on the file, since we have it as a list of lines.

set fileName [lindex $argv 0]
catch {set fptr [open $fileName r]}
set contents [read -nonewline $fptr]   ;# Read the file contents
close $fptr                            ;# Close the file since it has been read now
set splitCont [split $contents "\n"]   ;# Split the file contents on newlines
foreach ele $splitCont {
    if {[regexp {^set +(\S+) +(.*)} $ele -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
How to run this code:
Say the above code is saved in test.tcl. Then run:
tclsh test.tcl FileName
FileName is the full path of the file, unless the file is in the same directory as the program.
First, you don't need to seek to the beginning straight after opening a file for reading; that's where it starts.
Second, the pattern for reading a file is this:
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {^set +(\S+) +(.*)} $line -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
close $f
OK, that's a very simple RE in the middle there (and for more complicated files you'll need several) but that's the general pattern. Note that, as usual for Tcl, the space after the while command word is important, as is the space between the while expression and the while body. For specific help with what RE to use for particular types of input data, ask further questions here on Stack Overflow.
Yet another solution:

As it looks like the source is a Tcl script, create a new safe interpreter using interp, expose only the set command (and any others you need), hide all other commands, and replace unknown with a handler that just skips anything unrecognised. Then source the input in this interpreter.
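A minimal sketch of that safe-interpreter idea (the variable names and the sample script here are illustrative, not from the answer; since source is unavailable in a safe interp, the script text is evaluated directly):

```tcl
# Sketch: evaluate an untrusted "config" script in a safe interp where
# only [set] does anything; every other command becomes a no-op.
set script {
    set foo bar
    set answer 42
    puts "this line is ignored"
    file delete important.txt   ;# ...and so is this one, harmlessly
}

set slave [interp create -safe]

# Remember which globals the safe interp starts with (tcl_version etc.)
set preexisting [$slave eval {info vars}]

# Hide every command the safe interp exposes, except "set".
foreach cmd [$slave eval {info commands}] {
    if {$cmd ne "set"} {
        $slave hide $cmd
    }
}

# Unrecognised commands now fall through to "unknown"; alias it to a
# no-op lambda evaluated in the master.
$slave alias unknown apply {{args} {}}

# "source" is hidden in a safe interp, so evaluate the text directly.
$slave eval $script

# Collect the variables the script set ("info" is hidden, so invoke it
# explicitly), skipping the interp's own pre-existing globals.
set extracted [dict create]
foreach v [$slave invokehidden info vars] {
    if {$v ni $preexisting} {
        dict set extracted $v [$slave eval [list set $v]]
    }
}
interp delete $slave
puts $extracted
```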
Here is yet another solution: use the file-scanning feature of TclX. Please look up TclX for more info. I like this solution in that you can have several scanmatch blocks.
package require Tclx
# Open a file, skip error checking for simplicity
set inputFile [open sample.tcl r]
# Scan the file
set scanHandle [scancontext create]
scanmatch $scanHandle {^\s*set} {
    lassign $matchInfo(line) setCmd varName varValue   ;# parse the line
    puts "$varName = $varValue"
}
scanfile $scanHandle $inputFile
close $inputFile
Yet another solution: use the grep command from the fileutil package:
package require fileutil
puts [lindex $argv 0]
set matchedLines [fileutil::grep {^\s*set} [lindex $argv 0]]
foreach line $matchedLines {
    # Each line is in the format filename:line, for example
    # sample.tcl:set foo bar
    set varName [lindex $line 1]
    set varValue [lindex $line 2]
    puts "$varName = $varValue"
}
I've read your comments so far, and if I understand you correctly your input data file has 6 (or 9, depending on which comment) data fields per line, separated by spaces. You want to use a regexp to parse them into 6 (or 9) arrays or lists, one per data field.
If so, I'd try something like this (using lists):
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {(\S+) (\S+) (\S+) (\S+) (\S+) (\S+)} $line -> name source drain gate bulk inst]} {
        lappend nameL $name
        lappend sourceL $source
        lappend drainL $drain
        lappend gateL $gate
        lappend bulkL $bulk
        lappend instL $inst
    }
}
close $f
Now you should have a set of 6 lists, one per field, with one entry in each list for every item in your input file. To access the i-th name, for example, you grab [lindex $nameL $i].
If (as I suspect) your main goal is to get the parameters of the device whose name is "foo", you'd use a structure like this:
set name "foo"
set i [lsearch $nameL $name]
if {$i != -1} {
    set source [lindex $sourceL $i]
} else {
    puts "item $name not found."
    set source ""
    # or set to 0, or whatever "not found" marker you like
}
set File [open $fileName r]
while {[gets $File line] >= 0} {
    regexp {(set) ([a-zA-Z0-9]+) (.*)} $line str1 str2 str3 str4
    # str2 contains "set"
    # str3 contains the variable to be set
    # str4 contains the value to be set
}
close $File