Tcl: write and read only one value

Hi guys, I am using Tcl (IVR/TCL) on a Cisco voice gateway, and I need a text file that contains only an OPEN or CLOSED value, just one value, so that when a call arrives I can check whether the business is open or closed.
Then I make another Tcl script so the manager can place a call and open/close the business.
I have read that you should write to a temp file before overwriting the real file... Is that really necessary?
Basically, all I need is to write OPEN or CLOSED on the first line, and in the other Tcl script read the file and get the value.
What I must keep in mind is that the file has only one line, set to either the open or closed value.
For reading I am using:
set fd [open $filename]
while {[gets $fd line] >= 0} {
    set data [lindex $line 0]
    puts "\n Date: $data ::"
    if {$data eq "closed"} {
        set closed "1"
        puts "\n Date Found on the List"
    }
}
close $fd
But is that really necessary, since I am just reading one line?
How can I write the file?

If you assume that the line of interest is always the first one, it's easy. For one thing, there's no real need to use looping or to try to split the line into words; a simple glob-match with string match (which returns a boolean) is quite enough.
# Reader
set fd [open $filename]
set closed [string match "closed*" [gets $fd]]
close $fd

# Writer
set fd [open $filename w]
if {$closed} {
    puts $fd "closed"
} else {
    puts $fd "open"
}
close $fd
And that's all that's really required (except for the rest of the logic to turn the fragments into a whole program, of course) though you can also do things like also writing the date of the change. (Of course, that would also be preserved in the file's metadata… but it's an illustration, OK?)
set timestamp [clock format [clock seconds]]
if {$closed} {
    puts $fd "closed - $timestamp"
} else {
    puts $fd "open - $timestamp"
}
And so on.
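On the temp-file question from above: writing to a temp file and then renaming it over the original means a reader can never observe a half-written file, because file rename within the same directory replaces the target in one step. A minimal sketch of that, assuming the proc and file names are illustrative:

```tcl
# Sketch: writer that updates the status file via a temp file plus rename,
# and the matching reader. "status.txt", writeStatus and readStatus are
# illustrative names, not anything from the question.
proc writeStatus {filename closed} {
    set tmp $filename.tmp
    set fd [open $tmp w]
    puts $fd [expr {$closed ? "closed" : "open"}]
    close $fd
    # Replace the old file in one step; a concurrent reader sees either
    # the old contents or the new, never a partial write
    file rename -force $tmp $filename
}

proc readStatus {filename} {
    set fd [open $filename]
    set closed [string match "closed*" [gets $fd]]
    close $fd
    return $closed
}

writeStatus status.txt 1
puts [readStatus status.txt]   ;# -> 1
```

Whether the temp file is strictly necessary depends on how likely a call is to arrive mid-write; for a one-line file the window is tiny, but the rename trick makes the question moot.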

Related

Tcl: read the first line of a CSV file

I am trying to parse a CSV to check if one of the headers is present.
Sometimes I expect a fifth column with arbitraryHead:
date time value result arbitraryHead
val1 d1 10 fail
val2 d2 15 norun
I was trying to read the first line and then print it, but that is not working...
How can I read the first line and print all the headers?
set fh [open $csv_file r]
set data [list]
set line [gets $fh line]
lappend data [split $line ,]
close $fh
foreach x $data {
    puts "$x\n"
}
When reading a CSV file, it's best to use the csv package in Tcllib, as that handles all the awkward edge cases in that format.
In particular, csv::split is especially useful (along with csv::join when creating a CSV file), as are the various functions that act as wrappers around it. Here's how you'd use it in your case:
package require csv

set fh [open $csv_file r]
# Your data appears to be actually tab-separated, not comma-separated...
set data [csv::split [gets $fh] "\t"]
close $fh
foreach x $data {
    puts "$x\n"
}
Your actual immediate bug was this:
set line [gets $fh line]
The two-argument form of gets writes the line it reads into the variable named by the second argument, and returns the length of the line read (or -1 on failure to read a complete line, which can be useful in complex cases that aren't important here). You're then assigning that return value to the same variable with set, losing the string that was written there. You should instead use one of the following (except that you should use a properly-tested package for reading CSV files):
gets $fh line
set line [gets $fh]
The one-argument form of gets returns the line it read, which can make it harder to distinguish errors but is highly convenient.
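A small demonstration of the difference between the two forms (the file name demo.txt is just for illustration):

```tcl
# Create a one-line file to read back
set fh [open demo.txt w]
puts $fh "date,time,value"
close $fh

# Two-argument form: the line lands in the variable, the return value
# is the character count of the line
set fh [open demo.txt r]
set len [gets $fh line]
close $fh
puts $line   ;# -> date,time,value
puts $len    ;# -> 15

# One-argument form: the line itself is the return value
set fh [open demo.txt r]
set line [gets $fh]
close $fh
puts $line   ;# -> date,time,value
```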
The simplest thing you can do is a string match operation: just look for the desired header you want to check. In the following code I am checking for "arbitraryHead":
set fh [open $csv_file r]
set contents [read $fh]
close $fh
foreach x [split $contents "\n"] {
    if {[string match "*arbitraryHead*" $x]} {
        puts "HEAD FOUND"
    }
}
Hope this addresses your issue.

I want to search for the pattern [Severity Level: Critical] in a whole file in Tcl

I have tried the code below, but it checks line by line and I want to check the whole file. Please help me write the correct code: once the pattern is found, break and report that the pattern is found; otherwise report that it is not found.
set search "Severity Level: Critical"
set file [open "outputfile.txt" r]
while {[gets $file data] != -1} {
    if {[string match *[string toupper $search]* [string toupper $data]]} {
        puts "Found '$search' in the line '$data'"
    } else {
        puts "Not Found '$search' in the line '$data'"
    }
}
close $file
If the file is “small” with respect to available memory (e.g., no more than a few hundred megabytes) then the easiest way to find if the string is present is to load it all in with read.
set search "Severity Level: Critical"
set f [open "thefilename.txt"]
set data [read $f]
close $f

set idx [string first $search $data]
if {$idx >= 0} {
    puts "Found the search term at character $idx"
    # Not quite sure what you'd do with this info...
} else {
    puts "Search term not present"
}
If you want to know what line it is in, you might split the data up and then use lsearch with the right options to find it.
set search "Severity Level: Critical"
set f [open "thefilename.txt"]
set data [split [read $f] "\n"]
close $f

set lineidx [lsearch -regexp -- $data ***=$search]
if {$lineidx >= 0} {
    puts "Found the search term at line $lineidx : [lindex $data $lineidx]"
} else {
    puts "Search term not present"
}
The ***= is a special escape to say “treat the rest of the RE as literal characters” and it's ideal for the case where you can't be sure that the search term is free of RE metacharacters.
The string first command is very simple, so it's easy to use correctly and to work out whether it can do what you want. The lsearch command is not simple at all, and neither are regular expressions; determining when and how to use them is correspondingly trickier.

Open/read command in Tcl 8.5 for large files

Sorry if the title doesn't match my question well, I'm still unsure as to how I should put it.
Anyway, I've been using Tcl/Tk on Windows (wish) for a while now and hadn't encountered any problems with the script I wrote until recently. The script is supposed to break down a large txt file into smaller files that can be imported into Excel (I'm talking about breaking down a file with maybe 25M lines, which comes to around 2.55 GB).
My current script is something like that:
set data [open "file.txt" r]
set data1 [open "File Part1.txt" w]
set data2 [open "File Part2.txt" w]
set data3 [open "File Part3.txt" w]
set data4 [open "File Part4.txt" w]
set data5 [open "File Part5.txt" w]
set count 0
while {[gets $data line] != -1} {
    if {$count > 4000000} {
        puts $data5 $line
    } elseif {$count > 3000000} {
        puts $data4 $line
    } elseif {$count > 2000000} {
        puts $data3 $line
    } elseif {$count > 1000000} {
        puts $data2 $line
    } else {
        puts $data1 $line
    }
    incr count
}
close $data
close $data1
close $data2
close $data3
close $data4
close $data5
And I alter the numbers within the if to get the desired number of lines per file, or add/remove any elseif where required.
The problem is, with the latest file I got, I end up with only about half the data (1.22 GB instead of 2.55 GB) and I was wondering if there was a line which told Tcl to ignore the limit that it can read. I tried to look for it, but I didn't find anything (or anything that I could understand well; I'm still quite the amateur at Tcl ^^;). Can anyone help me?
EDIT (update):
I found a program to open large text files and managed to get a preview of the contents of the file directly. There are actually 16,756,263 lines. I changed the script to:
set data [open "file.txt" r]
set data1 [open "File Part1.txt" w]
set count 0
while {[gets $data line] != -1} {
    incr count
}
puts $data1 $count
close $data
close $data1
to find where the script was blocking, and it stopped here: there's a character in the middle of the file that the text editor doesn't recognise, showing as a little square. I tried to use fconfigure like evil otto suggested, but I'm afraid I don't quite understand how the channelID, name or value work exactly to escape that character. Um... help?
reEDIT : I managed to find out how fconfigure worked! Thanks evil otto! Um, I'm not sure how I can 'choose' your answer since it's a comment instead of a proper answer...
Is it possible there is any binary data in "file.txt"? Under Windows, Tcl will flag EOF if it reads a ^Z (the default eofchar) in a file. You can turn this off with fconfigure:
fconfigure $data -eofchar {}
See the docs for full details.
I ran your script on a Mac, which is Unix-based, and noticed the following:
The incr count should be at the beginning of the loop (a minor point).
More importantly, file.txt contains 25M lines, yet you divided unevenly: the first four files each contain 1M lines, and the rest goes into File Part5.txt. If you want to divide the files evenly, the break points should be 20M, 15M, 10M and 5M.
Other than that, I did not notice any data loss. I don't have a Windows machine to try it out.

Reading multiple lines from a file using TCL?

How do I read more than a single line from a file using Tcl? By default the gets command reads until a newline is found; how do I change this behaviour to read a file until a specific character is found?
If you don't mind reading over a bit, you can do it by calling gets or read in a loop:
set data ""
while {[gets $chan line] >= 0} {
    set idx [string first $whatToLookFor $line]
    if {$idx == -1} {
        append data $line\n
    } else {
        # Decrement idx; we don't want the first character of $whatToLookFor
        append data [string range $line 0 [incr idx -1]]
        break
    }
}
# $data now has everything up to but not including $whatToLookFor
If you're looking for multiline patterns, I suggest reading the whole file into memory and working on that. It's just so much easier than trying to write a correct matcher:
set data [read $chan]
set idx [string first $whatToLookFor $data]
if {$idx > -1} {
    set data [string range $data 0 [incr idx -1]]
}
This latter form will also work just fine with binary data. Just remember to fconfigure $chan -translation binary first if you're doing that.
Use fconfigure.
set fp [open "somefile" r]
fconfigure $fp -eofchar "char"
set data [read $fp]
close $fp
In addition to Donal's good advice, you could get a list of records by reading the whole file and splitting on the record separator:
package require textutil::split
set records [textutil::splitx [read $chan] "record_separator"]

Parsing a file with Tcl

I have a file here which has multiple set statements, and I want to extract only the lines of interest. Will the following code help?
set in [open filename r]
seek $in 0 start
while{ [gets $in line ] != -1} {
regexp (line to be extracted)
}
Other solution:
Instead of using gets, I prefer to use the read function to read the whole contents of the file and then process those line by line. This way we are in complete control of operations on the file, since we have it as a list of lines.
set fileName [lindex $argv 0]
set fptr [open $fileName r]
set contents [read -nonewline $fptr]  ;# Read the file contents
close $fptr                           ;# Close the file since it has been read now
set splitCont [split $contents "\n"]  ;# Split the file contents on newlines
foreach ele $splitCont {
    if {[regexp {^set +(\S+) +(.*)} $ele -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
How to run this code: say the above code is saved in test.tcl. Then run
tclsh test.tcl FileName
where FileName is the full path of the file, unless the file is in the same directory as the program.
First, you don't need to seek to the beginning straight after opening a file for reading; that's where it starts.
Second, the pattern for reading a file is this:
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {^set +(\S+) +(.*)} $line -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
close $f
OK, that's a very simple RE in the middle there (and for more complicated files you'll need several) but that's the general pattern. Note that, as usual for Tcl, the space after the while command word is important, as is the space between the while expression and the while body. For specific help with what RE to use for particular types of input data, ask further questions here on Stack Overflow.
Yet another solution: since the source looks like a Tcl script, create a new safe interpreter using interp which only has the set command exposed (and any others you need), hide all other commands, and replace unknown with a no-op so anything unrecognised is skipped. Then source the input in this interpreter.
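A minimal sketch of that idea; recordSet, the ::found variable and the sample input are illustrative, and error handling is omitted:

```tcl
# Create a safe slave interpreter to act as the parser
set parser [interp create -safe]

# Make unrecognised commands no-ops instead of errors
$parser eval {proc unknown args {}}

# Hide everything except the machinery we just set up
foreach cmd [$parser eval {info commands}] {
    if {$cmd ni {unknown proc}} {
        catch {$parser hide $cmd}
    }
}

# Expose "set" again, but route it to a recorder in the master interpreter
proc recordSet {name {value ""}} {
    lappend ::found [list $name $value]   ;# keep a record for later use
    puts "The name \"$name\" maps to the value \"$value\""
}
$parser alias set recordSet

# Evaluate some sample input; only the set commands have any effect
$parser eval {
    set alpha 1
    frobnicate the unknown
    set beta "two words"
}

interp delete $parser
```

The safe interpreter also guarantees that the parsed script can't touch files or the network, which regexp-based parsing doesn't need to worry about but source-based parsing absolutely does.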
Here is yet another solution: use the file scanning feature of Tclx (look up Tclx for more info). I like this solution because you can have several scanmatch blocks.
package require Tclx

# Open a file, skip error checking for simplicity
set inputFile [open sample.tcl r]

# Scan the file
set scanHandle [scancontext create]
scanmatch $scanHandle {^\s*set} {
    lassign $matchInfo(line) setCmd varName varValue   ;# parse the line
    puts "$varName = $varValue"
}
scanfile $scanHandle $inputFile
close $inputFile
Yet another solution: use the grep command from the fileutil package:
package require fileutil

puts [lindex $argv 0]
set matchedLines [fileutil::grep {^\s*set} [lindex $argv 0]]
foreach line $matchedLines {
    # Each line is in the format filename:line, for example
    # sample.tcl:set foo bar
    set varName [lindex $line 1]
    set varValue [lindex $line 2]
    puts "$varName = $varValue"
}
I've read your comments so far, and if I understand you correctly your input data file has 6 (or 9, depending on which comment) data fields per line, separated by spaces. You want to use a regexp to parse them into 6 (or 9) arrays or lists, one per data field.
If so, I'd try something like this (using lists):
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {(\S+) (\S+) (\S+) (\S+) (\S+) (\S+)} $line -> name source drain gate bulk inst]} {
        lappend nameL $name
        lappend sourceL $source
        lappend drainL $drain
        lappend gateL $gate
        lappend bulkL $bulk
        lappend instL $inst
    }
}
close $f
Now you should have a set of 6 lists, one per field, with one entry in the list for each item in your input file. To access the i-th name, for example, you grab [lindex $nameL $i].
If (as I suspect) your main goal is to get the parameters of the device whose name is "foo", you'd use a structure like this:
set name "foo"
set i [lsearch $nameL $name]
if {$i != -1} {
    set source [lindex $sourceL $i]
} else {
    puts "item $name not found."
    set source ""
    # or set to 0, or whatever "not found" marker you like
}
set File [open $fileName r]
while {[gets $File line] >= 0} {
    regexp {(set) ([a-zA-Z0-9]+) (.*)} $line str1 str2 str3 str4
    # str2 contains "set"
    # str3 contains the variable to be set
    # str4 contains the value to be set
}
close $File