I'm looking for a way to mark a pause between each entry to let the script check each entry before going further.
For example, I have this simple code:
for {set i 0} {$i < 5} {incr i} {
    set x [gets stdin]
    if {[string is integer -strict $x]} {
        puts "It's OK"
    } else {
        puts "It's not OK"
    }
}
With this code, if I enter the values manually one by one, the script has time to check each entry; here is the output:
5
It's OK
dd
It's not OK
kk
It's not OK
55
It's OK
99
It's OK
but now if I do a copy/paste of:
5
dd
kk
55
99
here is now the output:
5
dd
kk
55
99
It's OK
It's not OK
It's not OK
It's OK
It's OK
Is there a way to give the script enough time after each entry to check it before going on to the next entry?
Thank you.
This is surprisingly hard to do. Here's why: the echoing of the pasted text is actually handled by the OS (it's part of the terminal emulation) before it gets into Tcl at all. While there are some things you can do (typically by calling exec /bin/stty with the right options), they don't really help all that much. For example, you can turn off echoing of the values and process all the keystrokes yourself (that's the -echo and raw options), but that leaves you having to do a lot of work to pretend that things are still in cooked mode (-raw), as that's what provides normal terminal input. It's a lot of work.
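For reference, the stty dance looks roughly like this (only a sketch, assuming a Unix-like system with the terminal on stdin; all the per-keystroke handling is left out):
exec /bin/stty raw -echo <@stdin
# ... read keystrokes one at a time, echo them yourself, handle backspace, line editing, etc. ...
exec /bin/stty -raw echo <@stdin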
Theoretically, a library like readline would help: it already does the evil stty hacking for you. Except that in your specific case it won't help, as the strict interleaving model you want isn't a common enough requirement.
What I'd actually do in your position is rewrite the output so that it says which input is being checked each time, as well as the result ("5" is OK), as then I could take the values to parse from a file and still be able to figure out what's going on without lots of fuss.
Either add a puts $x in there, or change the messages:
for {set i 0} {$i < 5} {incr i} {
    set x [gets stdin]
    set ok [string is integer -strict $x]
    puts [format {%s %s OK} $x [expr {$ok ? "is" : "is not"}]]
}
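With the pasted values above, the interleaving no longer matters, because each result line names the value it checked; the output would look like:
5 is OK
dd is not OK
kk is not OK
55 is OK
99 is OK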
I wrote 2 scripts to do something like this:
# script1, to dump info:
proc script1 {} {
    puts $file "set a 123"
    puts $file "set b 456"
    .....
}
(The file size I dump is 8GB)
# And use script2 to source it and do the data categorization:
while { [gets $file_wrtie_out_by_script1 line] != -1 } {
    eval $line
}
close $file_wrtie_out_by_script1
# Do the job....
return
In this case, the script hangs at the return. How do I solve this issue? I've been stuck for 3+ days, thanks.
Update:
Thanks to Colin, I now use source instead of eval, but even after removing the "Do the job..." part and keeping just the return, it still hangs.
The gets command will return the number of characters in the line that it just read from the file channel.
When all the lines of the file have been read, then gets will return -1.
Your problem is that you have a while loop that never ends: the loop will only terminate when gets returns -1 at end of file, so make sure the condition tests against -1 for the while loop to terminate.
I agree with the comment from Colin that you should just use source instead of eval-ing each line. Using eval line by line will fail if you have a multi-line command (though that might not be the case in your example).
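A minimal sketch of both approaches (the file name dump.tcl is just illustrative here):
# Simplest: let Tcl read and evaluate the whole generated file in one go
source dump.tcl

# Or read and evaluate it line by line; the loop ends when gets returns -1 at EOF
set fh [open dump.tcl r]
while {[gets $fh line] != -1} {
    eval $line
}
close $fh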
I'm not a Tcl programmer, but I need to modify a Tcl script that invokes an external command and tries to separate stdout and stderr. The following is a minimal example of how the script currently does this.
#!/usr/bin/tclsh8.4
set pipe [open "|cmd" r]
while {[gets $pipe line] >= 0} {puts $line}
catch "close $pipe" errorMsg
puts "$errorMsg"
Here, cmd is an external command, and for the sake of this example I will replace it with the following shell script. (I'm working on a Linux machine, but you can adapt this to write to stdout and stderr in whatever way is appropriate for your system.)
#!/bin/sh -f
echo "A" > /dev/stdout
echo "B" > /dev/stdout
echo "C" > /dev/stderr
echo "D" > /dev/stderr
When I execute cmd, I get the following four lines as expected:
% ./cmd
A
B
C
D
However, when I execute my Tcl script, I get:
% ./test.tcl
A
B
D
This is an example of a more general phenomenon, which is that catch seems to swallow all but the last line of stderr.
To me, the "obvious" way to approach this is to try to mimic what is happening with stdout, which obviously works and prints all lines of the output. However, the current implementation is based on getting a Tcl channel by using open "|cmd", which requires running an external command. I can't figure out how to create a channel without opening an external command, and even if I could figure that out, there are subsequent issues with this approach. (How do I get the output of close into the channel? And if I need to open a new channel to get the output of each channel I am closing, then wouldn't I need an infinite number of channels?)
If anyone has any idea why errorMsg drops the initial lines or another approach that does not suffer from this problem, please let me know.
I know that this will come up, so I will say in advance that switching to Tcl 8.5 is probably not an option for me in the short term, since I do not control the environment in which this script is run.
I'm trying to use an expect script to access a remote device via telnet, read/save the remote "EVENTLOG" locally, and then extract specific lines (serial numbers) from the log file. The problem is that the log files are constantly changing, so I need a way to search for specific strings. The remote device is Linux-like but doesn't have things like grep, vi, less, etc., as it's QNX Neutrino, hence having to do the processing locally.
I've successfully gotten the telnet login, reading the file, and saving it locally under control, but when I get to "reading" the saved file is where I have issues. Currently I'm just trying to get it to print what it found, but the script just exits without reporting anything except some extra braces??
#!/usr/bin/expect -f
set timeout -1
log_user 1
spawn telnet $IP
match_max 100000
expect "login:"
send -- "$USER\r"
expect "Password:"
send -- "$PW\r"
expect "# "
send -- "\r"
#at this point logged into device
#send command to generate the "dallaslog"
set dallaslog [open dallaslog.txt w]
expect "#"
send -- "cat `ls -rt /LOG/event*`\r"
expect "(cat) exited status=0"
set logout $expect_out(buffer)
puts $dallaslog "$logout"
close $dallaslog
unset expect_out(buffer)
set dallasread [open dallaslog.txt r]
set lines [split [read $dallasread] "\r"]
close $dallasread
puts "${green}$lines{$normal}"
#a debug line to print $lines in green so I can verify it works up to here
foreach line $lines {
    if {[regexp {.*Dallas ID: 0.*\n} $lines match]} {
        if {$match == 1} {
            puts $line ;# Prints whole line which has 1 at end
        }
    }
}
expect "# "
send -- "exit\r"
interact
What I'm (eventually) looking for is the script to catch any line starting with "Dallas ID:" and then to save that information to a variable, so I can use the "scan" command to parse the line and extract information.
What I get is:
(the results from $lines being "puts" in green)
"...
<ENTRY TIME="01/01/1970 00:48:07" PROC="syncd" FILE="mips.cc" LINE="208" NUM="10000">
UTC step from 01/01/1970 00:48:08 to 01/01/1970 00:48:07
</ENTRY>
Process 3174431 (cat) exited status=0
}{}
# exit
Process 3162142 (sh) exited status=0.
Connection closed by foreign host."
Thank you in advance for all the help. I'm a newbie to TCL/expect (been toying with it since last July) but I'm finding it to be a pretty powerful tool, just hard for me to debug!
EDIT: Added more information per @meuh's response.
Example: there can be up to 4 Dallas IDs, but generally I only have 0 and 1. The goal is to get the SN, BC, and CN for each Dallas ID saved as variables to put in a separate text file.
<ENTRY TIME="01/01/1970 00:00:06" PROC="sys" FILE="PlatformUtils.cpp" LINE="1227" NUM="10044">
Dallas ID: 1 SN:00000622393A BC: J4AD945 CN: IS200BPPBH2BMD R0: 001C
</ENTRY>
The foreach loop I used was an example from an old Stack Overflow question that I tried, unsuccessfully, to adapt here.
EDIT: I should also probably mention that this event log is approximately 800 lines long every time it gets read, which is why I haven't posted an excerpt from it.
This regexp line is probably not doing what you want:
if {[regexp {.*Dallas ID: 0.*\n} $lines match]} {
if {$match == 1} {
puts $line
You are passing the list $lines instead of, presumably, the single line $line. The variable match will be set to the string that matched which must therefore include the words "Dallas" and so on, so it can never be 1.
Your code comment says Prints whole line which has 1 at end, but I'm not sure what you are looking for as you do not have any example data that fits the regexp.
If you choose your regexp pattern using grouping, you can capture parts of the line and so perhaps not need a further scan. E.g.
regexp {PROC="([a-z]*)"} $line match submatch
would set variable submatch to syncd in your above example.
You may also have a fundamental problem caused by Tcl's handling of \r\n on input from a file. The lines you got from $expect_out(buffer) do indeed have the two characters as end-of-line delimiters. However, when using read, by default I believe it will translate that sequence to a normalised \n. So your split will not do anything, and you need to split on \n rather than \r. You can check the size of the list of lines you have with
puts [llength $lines]
If it is 1, then your split is not working. Replace it with
set lines [split [read $dallasread] "\n"]
This should help your loop, where for example you can try
foreach line $lines {
    if {[regexp {.*Dallas ID: (\d+) SN:([^ ]+)} $line match idnum SN]} {
        puts $line
        puts "$idnum, $SN"
    }
}
You must remove the \n at the end of your regexp, as this is no longer present after the split. I've extended the regexp example with (\d+) to match for the id number (\d matches a digit), and ([^ ]+) to match any number of non-space characters after the text SN:.
These values are captured by the use of () grouping, and are placed in the variables idnum and SN, which you should be able to see output by the second puts command.
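If you also want the BC and CN fields mentioned in your edit, the same idea extends naturally; here is a sketch based only on the sample log line you posted (the variable names are just illustrative):
foreach line $lines {
    if {[regexp {Dallas ID: (\d+) SN:(\S+) BC: (\S+) CN: (\S+)} $line match idnum SN BC CN]} {
        puts "ID $idnum: SN=$SN BC=$BC CN=$CN"
    }
}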
I have a process where variables are defined, and following that procedure the variables should be used after a delay.
The problem is that the delayed command processes the variables when the command is executed instead of when the command is scheduled. Consider the following example:
The code is not tested, but the point should be clear anyway:
for {set i 0} {$i < 100} {incr i} {
    set outputItem $i
    set time [expr {1000 + 100*$i}]
    after $time {puts "Output was $outputItem"}
}
Which I would hope would print something like:
Output was 1
Output was 2
Output was 3
...
But actually it prints:
Output was 100
Output was 100
Output was 100
Which I guess shows that Tcl keeps the variable name (and not the value of the variable) when the after command is initiated.
Is there any way to substitute the variable name with the variable content, so that the delayed command (after xxx yyy) works as desired?
The problem is this line:
after $time {puts "Output was $outputItem"}
The substitution of $outputItem is happening when the after event fires, not at the time you defined it. (The braces prevent anything else.) To get what you want, you need list quoting, and that's done with the list command:
after $time [list puts "Output was $outputItem"]
The list command builds lists… and pre-substituted commands (because of the way Tcl's syntax is defined). It's great for building things that you're going to call later. I guess it could have been called make-me-a-callback too, but then people would have wondered about its use for creating lists. It does both.
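For instance, an illustrative interactive session (the braces in the result are just how list quotes the word that contains spaces):
% set outputItem 5
5
% list puts "Output was $outputItem"
puts {Output was 5}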
If your callback needs to be two or more commands, use a helper procedure (or an apply) to wrap it up into a single command; the reduction in confusion at trying to make callbacks work with multiple direct commands is totally worth it.
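A minimal sketch of that helper-procedure approach (the proc name announce is purely illustrative):
proc announce {item} {
    # Several commands can live here; the callback itself stays a single command.
    puts "Output was $item"
}
for {set i 0} {$i < 100} {incr i} {
    after [expr {1000 + 100*$i}] [list announce $i]
}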
Consider the following code:
puts "What show command would you like to execute?"
set cmd [gets stdin]
proc makeLC {str} {
    puts "begin"
    puts $str
    set lStr [string tolower $str]
    set lStr [string trim $lStr]
    puts "after low and trim"
    puts $lStr
    set lenStr [string length $lStr]
    for {set i 0} {$i < $lenStr} {incr i} {
        puts [string index $lStr $i]
    }
    return $lStr
}
set lcmd [makeLC $cmd]
When a user types "test12345", then backspaces so the display reads "test123", then adds "67" so the display finally reads "test12367":
puts $lStr prints "test12367",
but the for loop displays "test12345 67"; the apparent spaces between "12345" and "67" are, I believe, "\b\b".
Why the inconsistency?
And how do I ensure that, when passing $lStr along, "test12367" is what gets assigned and "test12345 67" is not?
Normally, Tcl programs on Unix are run in a terminal in “cooked” mode. Cooked mode terminals handle all the line editing for you; you can simply just read the finished lines as they are produced. It's very easy to work with cooked mode.
But you can also put the terminal into raw mode, where (typically) the application decides to handle all the key strokes directly itself. (It's usual to turn off echoing of characters at the same time so applications handle the output side as well as the input side.) This is what editors like vi and emacs do, and so does the readline library (used in many programs, like bash). Raw mode is a lot more fiddly to work with, but gives you much more control. Separately from this is whether what is typed is echoed so it can be seen; for example, there's also non-echoing cooked mode, which is useful for passwords.
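For instance, a minimal sketch of reading a password in non-echoing cooked mode on a Unix-like system (assuming stdin is actually a terminal):
exec stty -echo <@stdin >@stdout
set password [gets stdin]
exec stty echo <@stdin >@stdout
puts "" ;# the user's Enter was not echoed, so supply the newline ourselves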
In your case, it sounds very much like the terminal is in echoing raw mode (unusual!) and your application expects it to be in echoing cooked mode; you're getting the actual character sent from the keyboard when the delete key is pressed (or maybe the backspace key; there's a lot of complexity down there!) which is highly unusual. To restore sanity, do:
# No raw, Yes echo
exec stty -raw echo <@stdin >@stdout
There's something conceptually similar on Windows, but it works through totally different system calls.
Consider using tclreadline, which wraps GNU readline and provides full support for interactive command-line editing.
Another solution which relies on the presence of an external tool is to wrap the call to the Tcl shell in rlwrap:
rlwrap tclsh /path/to/script/file.tcl