Expect Script - How can the first and last line of a file be validated - tcl

I'm using Expect to create a CSR file on a remote system. I'm capturing the output from the system and placing it into a file on my local PC (where it is needed).
I need to validate the first and last line of this file to make sure the file looks like the following:
-----BEGIN CERTIFICATE REQUEST-----
.
.
.
-----END CERTIFICATE REQUEST-----
Originally I was only looking for the last line (or so I thought) by looping through the lines of the file looking for -----END CERTIFICATE REQUEST-----
set fp [ open $csrname ]
while {[gets $fp line] != -1} {
    if { $line == "-----END CERTIFICATE REQUEST-----" } {
        puts "The Certificate Signing Request file \"$csrname\" has been successfully created"
    } else {
        puts "The certificate file is invalid."
        puts $line
        exit 41
    }
}
I have a flaw in my logic because I end up in the error leg of that if statement and exit.
How can I validate just the first and last line of the file?

A CSR isn't really all that long; just a few kilobytes at most. We can validate the lot in one go!
# Load *everything* from a file at once
set f [open $csrname]
set contents [read $f]
close $f
# Validate it using this regular expression:
set RE {^-----BEGIN CERTIFICATE REQUEST-----\n.*\n-----END CERTIFICATE REQUEST-----\n*$}
if {![regexp $RE $contents]} {
    puts "The certificate file is invalid."
    exit 1
}
puts "The Certificate Signing Request file \"$csrname\" has been successfully created"
I'd recommend checking that the bit between the separators is only using valid characters too (it's base64-encoded PKCS#10), but that's rather more complicated once you go beyond the basics. Probably best to just confirm that you've not got truncation or something that just isn't a CSR at all.
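If you do want that stricter check, here is a hedged sketch of a pattern that also insists the body consists of base64 lines (standard base64 characters plus = padding are assumed; adjust if your CSRs wrap differently):
# Stricter variant: the body must be one or more base64-looking lines
set RE {^-----BEGIN CERTIFICATE REQUEST-----\n([A-Za-z0-9+/=]+\n)+-----END CERTIFICATE REQUEST-----\n*$}
if {![regexp $RE $contents]} {
    puts "The certificate file is invalid."
    exit 1
}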

Donal has a nice answer. Here are a couple of alternatives:
set first [exec sed {1q} $csrname]
set last [exec sed -n {$p} $csrname]
or
set f [open $csrname]
set lines [split [read -nonewline $f] \n]
close $f
set first [lindex $lines 0]
set last [lindex $lines end]
In either case, you can
if {$first eq "-----BEGIN CERTIFICATE REQUEST-----" &&
    $last eq "-----END CERTIFICATE REQUEST-----"} {...}
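Putting the pieces together, a complete hedged sketch (reusing the messages and exit code from the question) might look like:
# Read the file, then compare the first and last lines
set f [open $csrname]
set lines [split [read -nonewline $f] \n]
close $f
if {[lindex $lines 0] eq "-----BEGIN CERTIFICATE REQUEST-----" &&
    [lindex $lines end] eq "-----END CERTIFICATE REQUEST-----"} {
    puts "The Certificate Signing Request file \"$csrname\" has been successfully created"
} else {
    puts "The certificate file is invalid."
    exit 41
}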

You can also refer to http://wiki.tcl.tk/1466
[eof] is used to determine whether the channel has reached the end of input.
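For what it's worth, a tiny illustration of how [eof] fits into a read loop (the file name is borrowed from the question):
set f [open $csrname]
while {![eof $f]} {
    # gets returns -1 on the final pass, once end of file has been reached
    if {[gets $f line] >= 0} {
        # process $line here
    }
}
close $f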

Related

Why does expect-tcl use the semicolon to trim the output?

My script grabs pieces of configuration from network devices over telnet for further modification. Some of the configuration pieces contain encoded data with semicolon characters. However, in a strange way, expect cuts the output after the first semicolon character, even though all of the information makes it into the log. For example, the configuration contains the following lines, which also appear in the log:
snmp-agent
snmp-agent local-engineid 000007DB7F00000100000DBD
snmp-agent community write cipher %#%#f:x6"^s,6.L~~BE~%c*0S6NH2#Y_W4I`NP6,W}VF'NN86NKSYoixJc$>;88sTj2yu2*/NTS6%#%#
snmp-agent community read cipher %#%#]$OdG*7WdV#{aSD9vx"DH+]]*_[8D+2\u%7Ozr<,W3zP+]`HBK(\=oJuKL'IT|+w*3o4]iH+%#%#
snmp-agent sys-info version v1 v2c
undo snmp-agent sys-info version v3
I tried two different ways, but the result is unchanged:
1)
expect {
    "Error: Failed to authenticate." {exit}
    ">" {send "disp current-configuration | include snmp\r"}
}
expect "$device>" {
    set results [regexp -all -inline {[^\r\n]+} $expect_out(buffer)]
    puts "Length of output buffer is : [llength $results]"
    for {set i 1} {$i<[llength $results]-1} {incr i} {
        set confline [lindex $results $i]
        puts "\$confline\[$i\] = $confline\r"
    }
}
2)
expect {
    "Error: Failed to authenticate." {exit}
    ">" {send "disp current-configuration | include snmp\r"}
}
expect "$device>" {
    set outfl [open "$SCROOT/$model-$device.out" w]
    puts $outfl $expect_out(buffer)
    flush $outfl
    close $outfl
}
And this is what happens at the output:
snmp-agent
snmp-agent local-engineid 000007DB7F00000100000DBD
snmp-agent community write cipher %#%#f:x6"^s,6.L~~BE~%c*0S6NH2#Y_W4I`NP6,W}VF'NN86NKSYoixJc$>
Does anyone know how to solve this problem? Please help me.
UPD: I extended the match condition and got the expected result. Thanks to Colin Macleod for the hint.
You are trying to read all the data up to the next prompt by doing
expect ">" {
but the output data contains a ">" character so expect stops when it sees that. It's just coincidence that the next character happens to be ";".
You can probably work around this by writing your pattern to match ">" only when it is the first non-white-space character on a line, e.g.
expect "\n\s*>" {

Tcl read first line of a csv file

I am trying to parse a CSV to check if one of the headers is present.
Sometimes I'd expect a fifth column, arbitraryHead:
date time value result arbitraryHead
val1 d1 10 fail
val2 d2 15 norun
I was trying to read the first line and then print it, but that is not working.
How can I read the first line and print all the headers?
set fh [open $csv_file r]
set data [list]
set line [gets $fh line]
lappend data [split $line ,]
close $fh
foreach x $data {
    puts "$x\n"
}
When reading a CSV file, it's best to use the csv package in Tcllib, as that handles all the awkward edge cases in that format.
In particular, csv::split is especially useful (along with csv::join when creating a CSV file). Or the various functions that act as wrappers around it. Here's how you'd use it in your case
package require csv
set fh [open $csv_file r]
# Your data appears to be actually tab-separated, not comma-separated...
set data [csv::split [gets $fh] "\t"]
close $fh
foreach x $data {
    puts "$x\n"
}
Your actual immediate bug was this:
set line [gets $fh line]
The two-argument form of gets writes the line it reads into the variable named in the second argument, and returns the length of the line read (or -1 on failure to read a complete line, which can be useful in complex cases that aren't important here). You're then assigning that return value to the same variable with set, losing the string that was written there. You should instead use one of the following (though, again, you should really use a properly-tested package for reading CSV files):
gets $fh line
set line [gets $fh]
The one-argument form of gets returns the line it read, which can make it harder to distinguish errors but is highly convenient.
The simplest thing you can do is a string match operation: just look for the desired header you want to check.
In the following code I am checking for "arbitraryHead", as requested.
set fh [open $csv_file r]
# Split the contents into lines rather than treating the raw text as a list
set contents [split [read $fh] \n]
close $fh
foreach x $contents {
    if {[string match "*arbitraryHead*" $x]} {
        puts "HEAD FOUND"
    }
}
Hope this addresses your issue.

TCL write and read only one value

Hi guys, I am using Tcl (IVR/TCL) on a Cisco voice gateway, and I need a text file that contains only an OPEN or CLOSED value, just one value, so that when a call arrives I can check whether the business is open or closed.
Then I have another Tcl script so that the manager can place a call and open/close the business.
I have read that you could write to a temp file before writing the real file... Is that really necessary?
Basically what I need is to take the first line and write OPEN or CLOSED, and then in the other Tcl script just read the file and get the value.
What I must keep in mind is that the file has only one line, with either the open or closed value set.
For reading I am using:
set fd [open $filename]
while {[gets $fd line] >= 0} {
    set data [lindex $line 0]
    puts "\n Date: $data ::"
    if { [expr { $data == "closed" }] } {
        set closed "1"
        puts "\n Date Found on the List"
    }
}
But is that really necessary since I am just reading one line?
And how should I write the file?
If you assume that the line of interest is always the first one, it's easy. For one thing, there's no real need to use looping or to try to split the line into words; a simple glob-match with string match (which returns a boolean) is quite enough.
# Reader
set fd [open $filename]
set closed [string match "closed*" [gets $fd]]
close $fd
# Writer
set fd [open $filename w]
if {$closed} {
    puts $fd "closed"
} else {
    puts $fd "open"
}
close $fd
And that's all that's really required (except for the rest of the logic to turn the fragments into a whole program, of course) though you can also do things like writing the date of the change. (Of course, that would also be preserved in the file's metadata… but it's an illustration, OK?)
set timestamp [clock format [clock seconds]]
if {$closed} {
    puts $fd "closed - $timestamp"
} else {
    puts $fd "open - $timestamp"
}
And so on.
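As for the temporary-file part of the question: it is not strictly required for a single short line, but if you want the update to be atomic, a minimal sketch looks like this (the .tmp name is just illustrative):
# Write the new state to a scratch file, then atomically swap it into place
set tmp "$filename.tmp"
set fd [open $tmp w]
puts $fd [expr {$closed ? "closed" : "open"}]
close $fd
file rename -force $tmp $filename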

TCL: Check file existence by SHELL environment variable (another one)

I have a file containing lines with paths to files. Sometimes a path contains a shell environment variable, and I want to check whether the file exists.
The following is my solution:
set fh [open "the_file_contain_path" "r"]
while {![eof $fh]} {
set line [gets $fh]
if {[regexp -- {\$\S+} $line]} {
catch {exec /usr/local/bin/tcsh -c "echo $line" } line
if {![file exists $line]} {
puts "ERROR: the file $line is not exists"
}
}
}
I'm sure there is a more elegant solution that doesn't use
/usr/local/bin/tcsh -c
You can capture the variable name in the regexp command and do a lookup in Tcl's global env array. Also, your use of eof as the while condition means your loop will iterate one time too many (see http://phaseit.net/claird/comp.lang.tcl/fmm.html#eof)
set fh [open "the_file_contain_path" "r"]
while {[gets $fh line] != -1} {
# this can handle "$FOO/bar/$BAZ"
if {[string first {$} $line] != -1} {
regsub -all {(\$)(\w+)} $line {\1::env(\2)} new
set line [subst -nocommand -nobackslashes $new]
}
if {![file exists $line]} {
puts "ERROR: the file $line does not exist"
}
}
First off, it's usually easier (for small files, say of no more than 1–2MB) to read in the whole file and split it into lines instead of using gets and eof in a while loop. (The split command is very fast.)
Secondly, to do the replacement you need the place in the string to replace, so you use regexp -indices. That does mean that you need to take a little more complex approach to doing the replacement, with string range and string replace to do some of the work. Assuming you're using Tcl 8.5…
set fh [open "the_file_contain_path" "r"]
foreach line [split [read $fh] "\n"] {
# Find a replacement while there are any to do
while {[regexp -indices {\$(\w+)} $line matchRange nameRange]} {
# Get what to replace with (without any errors, just like tcsh)
set replacement {}
catch {set replacement $::env([string range $line {*}$nameRange])}
# Do the replacement
set line [string replace $line {*}$matchRange $replacement]
}
# Your test on the result
if {![file exists $line]} {
puts "ERROR: the file $line is not exists"
}
}
TCL programs can read environment variables using the built-in global variable env. Read the line, look for $ followed by a name, look up $::env($name), and substitute it for the variable.
Using the shell for this is very bad if the file is supplied by untrusted users. What if they put ; rm * in the file? And if you're going to use a shell, you should at least use sh or bash, not tcsh.

Parsing a file with Tcl

I have a file which has multiple set statements, and I want to extract only the lines of interest to me. Can the following code help?
set in [open filename r]
seek $in 0 start
while{ [gets $in line ] != -1} {
regexp (line to be extracted)
}
Another solution:
Instead of using gets, I prefer using the read function to read the whole contents of the file and then process those contents line by line. That way we are in complete control of the operation on the file, since we have it as a list of lines.
set fileName [lindex $argv 0]
catch {set fptr [open $fileName r]}
set contents [read -nonewline $fptr]     ;# Read the file contents
close $fptr                              ;# Close the file since it has been read now
set splitCont [split $contents "\n"]     ;# Split the file's contents on newlines
foreach ele $splitCont {
    if {[regexp {^set +(\S+) +(.*)} $ele -> name value]} {
        puts "The name \"$name\" maps to the value \"$value\""
    }
}
How to run this code:
Say the above code is saved in test.tcl. Then run:
tclsh test.tcl FileName
where FileName is the full path to the file, unless the file is in the same directory as the program.
First, you don't need to seek to the beginning straight after opening a file for reading; that's where it starts.
Second, the pattern for reading a file is this:
set f [open $filename]
while {[gets $f line] > -1} {
# Process lines
if {[regexp {^set +(\S+) +(.*)} $line -> name value]} {
puts "The name \"$name\" maps to the value \"$value\""
}
}
close $f
OK, that's a very simple RE in the middle there (and for more complicated files you'll need several) but that's the general pattern. Note that, as usual for Tcl, the space after the while command word is important, as is the space between the while expression and the while body. For specific help with what RE to use for particular types of input data, ask further questions here on Stack Overflow.
Yet another solution:
As it looks like the source is a Tcl script, create a new safe interpreter using interp which only has the set command exposed (and any others you need), hide all other commands, and replace unknown so it just skips anything unrecognised. Then source the input in this interpreter.
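A minimal sketch of that idea follows (the recordSet collector name is made up, and the command-hiding step is left out for brevity):
# Create a safe slave interpreter and route its "set" back to a collector here
set parser [interp create -safe]
proc recordSet {name {value ""}} {
    puts "The name \"$name\" maps to the value \"$value\""
    return $value
}
interp alias $parser set {} recordSet
# Make any command we have not wired up a silent no-op
$parser eval {proc unknown args {}}
# Evaluate the input script; only the "set" lines have a visible effect
set f [open $filename]
interp eval $parser [read $f]
close $f
interp delete $parser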
Here is yet another solution: use the file scanning feature of TclX. Please look up TclX for more info. I like this solution because you can have several scanmatch blocks.
package require Tclx
# Open a file, skip error checking for simplicity
set inputFile [open sample.tcl r]
# Scan the file
set scanHandle [scancontext create]
scanmatch $scanHandle {^\s*set} {
    lassign $matchInfo(line) setCmd varName varValue; # parse the line
    puts "$varName = $varValue"
}
scanfile $scanHandle $inputFile
close $inputFile
Yet another solution: use the grep command from the fileutil package:
package require fileutil
puts [lindex $argv 0]
set matchedLines [fileutil::grep {^\s*set} [lindex $argv 0]]
foreach line $matchedLines {
    # Each line is in format: filename:line, for example
    # sample.tcl:set foo bar
    set varName [lindex $line 1]
    set varValue [lindex $line 2]
    puts "$varName = $varValue"
}
I've read your comments so far, and if I understand you correctly your input data file has 6 (or 9, depending on which comment) data fields per line, separated by spaces. You want to use a regexp to parse them into 6 (or 9) arrays or lists, one per data field.
If so, I'd try something like this (using lists):
set f [open $filename]
while {[gets $f line] > -1} {
    # Process lines
    if {[regexp {(\S+) (\S+) (\S+) (\S+) (\S+) (\S+)} $line -> name source drain gate bulk inst]} {
        lappend nameL $name
        lappend sourceL $source
        lappend drainL $drain
        lappend gateL $gate
        lappend bulkL $bulk
        lappend instL $inst
    }
}
close $f
Now you should have a set of 6 lists, one per field, with one entry in the list for each item in your input file. To access the i-th name, for example, you grab [lindex $nameL $i].
If (as I suspect) your main goal is to get the parameters of the device whose name is "foo", you'd use a structure like this:
set name "foo"
set i [lsearch $nameL $name]
if {$i != -1} {
set source $sourceL[$i]
} else {
puts "item $name not found."
set source ''
# or set to 0, or whatever "not found" marker you like
}
set File [open $fileName r]
while {[gets $File line] >= 0} {
    regexp {(set) ([a-zA-Z0-9]+) (.*)} $line str1 str2 str3 str4
    # str2 contains "set"
    # str3 contains the variable to be set
    # str4 contains the value to be set
}
close $File