I am trying to pass parameters to the Spirent TestCenter tool using command-line arguments: slots, ports, frame size, and load. I want to store the slots and ports in an array, where the number of ports is dynamic.
I tried a simple script with the cmdline package, which handles a fixed set of options:
package require cmdline
set parameters {
    {s.arg "" "Slot"}
    {p.arg "" "Port"}
    {l.arg "100" "Load"}
    {f.arg "256" "Framesize"}
    {debug "Turn on debugging, default=off"}
}
#set option(l) 100
set usage "- A simple script to demo cmdline parsing"
if {[catch {array set options [cmdline::getoptions ::argv $parameters $usage]}]} {
    puts [cmdline::usage $parameters $usage]
} else {
    parray options
}
#puts [array get options]
puts $options(l)
puts $options(f)
Script output:
C:\Tcl\bin>tclsh opt.tcl -s 1 -f 128
options(debug) = 0
options(f) = 128
options(l) = 100
options(p) =
options(s) = 1
100
128
Here I would like to pass all the ports for each slot at one time:
tclsh opt.tcl -s 1 2 -p 11 12 13 14 -f 256 -l 100
where the slots are 1 and 2, the ports in each slot are 11, 12, 13, and 14, and I need to build an array of slots and ports. Could you please suggest a method to achieve this?
Try
tclsh opt.tcl -s "1 2" -p "11 12 13 14" -f 256 -l 100
It works for me under Windows 10, at least. The thing is that the lists of slots and ports need to be one value each: the quotes ensure that.
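Building on that, a minimal sketch (mine, not from the answer) of how the quoted lists can then be expanded into a slot-to-ports array; the slotPorts name is made up for illustration:
package require cmdline
set parameters {
    {s.arg "" "Slots (space-separated list)"}
    {p.arg "" "Ports (space-separated list)"}
    {l.arg "100" "Load"}
    {f.arg "256" "Framesize"}
}
array set options [cmdline::getoptions ::argv $parameters]
# each quoted argument arrives as one value, i.e. a well-formed Tcl list
foreach slot $options(s) {
    set slotPorts($slot) $options(p)   ;# every slot gets the full port list
}
parray slotPorts
Run as tclsh opt.tcl -s "1 2" -p "11 12 13 14" -f 256 -l 100, this yields slotPorts(1) and slotPorts(2), each holding the list 11 12 13 14.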
I tried the following method with some corrections:
# counters must be initialised before the loop
set index 0
set y 0
set z 0
set arglen [llength $argv]
while {$index < $arglen} {
    set arg [lindex $argv $index]
    #puts $arg
    switch -exact -- $arg {
        -s {
            set args($arg) [lindex $argv [incr index]]
            set slot($y) $args($arg)
            incr y
        }
        -p {
            set args($arg) [lindex $argv [incr index]]
            set port($z) $args($arg)
            incr z
        }
        -l {
            set args($arg) [lindex $argv [incr index]]
            set Load $args($arg)
        }
        -f {
            set args($arg) [lindex $argv [incr index]]
            set frameLength $args($arg)
        }
    }
    incr index
}
Command to run:
C:\Tcl\bin>tclsh l1.tcl -s 1 -p 11 -p 12 -l 10 -f 1
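A hedged variant of the same idea (my sketch, not from the post): accumulate repeated -s and -p flags with lappend, so any number of slots and ports can be passed without counters:
set index 0
set slots {}
set ports {}
set Load 100            ;# defaults chosen to match the cmdline version above
set frameLength 256
while {$index < [llength $argv]} {
    switch -exact -- [lindex $argv $index] {
        -s { lappend slots [lindex $argv [incr index]] }
        -p { lappend ports [lindex $argv [incr index]] }
        -l { set Load [lindex $argv [incr index]] }
        -f { set frameLength [lindex $argv [incr index]] }
    }
    incr index
}
# same slot-to-ports shape as in the cmdline sketch above
foreach s $slots { set slotPorts($s) $ports }
puts "slots=$slots ports=$ports load=$Load framesize=$frameLength"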
The script is intended to retrieve the contents of a directory when it is getting full. For development, the 'full' threshold was set at 15%; the directory is /var/crash.
expect "#*" {
foreach part $full {
puts "part: $part"
set dir [split $part]
puts "dir: $dir [llength $dir]"
set d [lindex $dir 0]
puts "d: $d"
send -s -- "ls -lhS $d\n"
expect "#*" { puts "for $dir :: $expect_out(buffer)"}
}
}
send "exit\r"
The output of the script is:
part: /var/crash 15%
dir: {/var/crash} 15% 2
d: /var/crash
send: sending "ls -lhS \u001b[01;31m\u001b[K/var\u001b[m\u001b[K/crash\n" to { exp7 }
expect: does "" (spawn_id exp7) match glob pattern "#*"? no
expect: does "ls -lhS \u00071;31m\u0007/var\u0007\u0007/" (spawn_id exp7) match glob pattern "#*"? no
expect: does "ls -lhS \u00071;31m\u0007/var\u0007\u0007/crash\r\n" (spawn_id exp7) match glob pattern "#*"? no
As can be seen, although $d is /var/crash, when it is sent via ssh it becomes something like \u001b[01;31m\u001b[K/var\u001b[m\u001b[K/crash.
I cannot change the remote machine definitions for the command prompt.
How to get rid of these escape sequences that are sent?
Edit: Info about $full as requested
The proc analyze_df just tries to filter out the meaningful data.
proc analyze_df {cmd txt} {
    set full [list]
    set lines [split $txt \n]
    foreach l $lines {
        if {[string match $cmd* $l]} { continue }
        set lcompact [regsub -all {\s+} $l " "]
        set data [split $lcompact]
        if {[string match 8?% [lindex $data 4]] \
                || [string match 9?% [lindex $data 4]] \
                || [string match 1??% [lindex $data 4]] \
                || [string match 5?% [lindex $data 4]] \
                || [string match 1?% [lindex $data 4]] } {
            lappend full "[lindex $data 5] [lindex $data 4]"
        }
    }
    return $full
}
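As an aside, the chain of string match branches could be parsed numerically instead; a sketch assuming column 4 always looks like "15%" and that a simple threshold is the real intent:
if {[scan [lindex $data 4] {%d%%} pct] == 1 && $pct >= 15} {
    lappend full "[lindex $data 5] [lindex $data 4]"
}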
Here is the extract, which was missing above, showing how $full gets its value.
set command0 "df -h | grep /var"
send -- "$pass\r"
expect {
    -nocase "denied*" {puts "$host denied"; continue}
    -nocase "Authentication failed*" {puts "$host authentication failed"; continue}
    "$*" {send -s -- "$command0\n"}
    timeout {puts "$host TIMEOUT"; continue}
}
expect "$*" {puts "$host -> $expect_out(buffer)" }
set full [analyze_df $command0 $expect_out(buffer)]
Following up on the suggestion received: perhaps it's grep that is adding the escape sequences?
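If an aliased grep --color=auto on the remote host is indeed the source, forcing colour off in the command itself should remove the escapes at the origin (a guess, assuming GNU grep):
set command0 "df -h | grep --color=never /var"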
You don't show how $full gets its value. But it must already have the escape codes. When printing $d those escape codes are interpreted by the terminal, so they may not be obvious. But Expect/Tcl definitely doesn't insert them. This is also confirmed by the braces around the first element when you print $dir. If this element was plain /var/crash, there would be no braces.
Your remark about the command prompt would suggest that $full may be taken from there. Maybe you cannot permanently change the remote machine's command prompt, but you should be able to change it for your session by setting the PS1 environment variable.
Another trick that may help in such situations is to do set env(TERM) dumb before spawning the ssh command. If the prompt (or other tools) correctly use the tput command to generate their escape codes, a dumb terminal will result in empty strings. This won't work if the escape codes are hard-coded for one specific TERM. But that's a bug on the remote side.
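For instance, a minimal sketch of that trick, where user@host is a placeholder:
package require Expect
set env(TERM) dumb    ;# must be set before spawn so the remote side inherits it
spawn ssh user@host
# ... continue with the usual password/prompt dialogue ...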
If you're absolutely stuck with that input data (and can't tell things to not mangle it with those ANSI terminal colour escape codes) then you can strip them out with:
set dir [split [regsub -all {\u001b[^a-zA-Z]*[a-zA-Z]} $part ""]]
This makes use of the fact that the escape sequences start with the escape character (encoded as \u001b) and continue to the first ASCII letter. Replacing them all with the empty string should de-fang them cleanly.
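For example, applied to the kind of string shown in the question:
set part "\u001b\[01;31m\u001b\[K/var\u001b\[m\u001b\[K/crash 15%"
puts [regsub -all {\u001b[^a-zA-Z]*[a-zA-Z]} $part ""]   ;# prints: /var/crash 15%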
You are recommended to try things like altering the TERM environment variable before calling spawn so that you don't have to do such cleaning. That tends to be easier than attempting to "clean up" the data after the fact.
Currently, I am working on a script to automate a data collector process. Because of the long run time, I have split the collector script into four pieces. Now I want to start these collector scripts simultaneously, but I don't know how to do this properly. My code sort of works:
package require Expect
log_user 0
set timeout 10200
spawn ./Log.tcl 2 5 1; set spawn1 $spawn_id
spawn ./Log.tcl 3 4 2; set spawn2 $spawn_id
spawn ./Log.tcl 7 8 2; set spawn3 $spawn_id
spawn ./Log.tcl 6 9 3; set spawn4 $spawn_id
expect -i $spawn1 eof {wait ; puts "--- 2,5 done ---"}
expect -i $spawn2 eof {wait ; puts "--- 3,4 done ---"}
expect -i $spawn3 eof {wait ; puts "--- 7,8 done ---"}
expect -i $spawn4 eof {puts "--- 6,9 done ---"}
This runs and does the job. But if one job finishes before the others, it produces zombies. Is there a way to make this simple and clean? I have tried a few things with exp_after, exp_background, and $any_spawn_id in a while loop, but nothing worked; expect never got eof.
If you need to know the order in which the spawned commands finished, it is difficult to do with wait, but expect will accept a list of spawn ids to listen to simultaneously. For example,
spawn sleep 2
lappend allids $spawn_id
set cmd($spawn_id) "sleep 2"
spawn sleep 1
lappend allids $spawn_id
set cmd($spawn_id) "sleep 1"
while { [llength $allids] > 0 } {
    expect -i "$allids" eof {
        puts "eof on $expect_out(spawn_id) from cmd $cmd($expect_out(spawn_id))"
        set idx [lsearch -exact $allids $expect_out(spawn_id)]
        set allids [lreplace $allids $idx $idx]
    }
}
This runs 2 commands, sleep 2 and sleep 1, and appends each spawn id into the list allids. For convenience, the command is also noted in the array cmd indexed by the spawn id of that command.
The list of all spawn ids is then given to expect eof using -i.
When an eof is matched, the global variable $expect_out(spawn_id) contains the spawn id of the process causing the match. A message is printed after indexing this value in the cmd array.
Finally, the spawn id with eof is removed from the list, and the loop repeated until the list is empty.
Note, you cannot use exp_continue to continue the expect loop, as the -i value does not seem to be re-evaluated.
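Since the original complaint was about zombies, a hedged refinement of the loop above: calling wait on each spawn id inside the eof handler reaps the child as soon as it finishes:
while { [llength $allids] > 0 } {
    expect -i "$allids" eof {
        set done $expect_out(spawn_id)
        puts "eof on $done from cmd $cmd($done)"
        wait -i $done   ;# reap the finished child so it cannot linger as a zombie
        set idx [lsearch -exact $allids $done]
        set allids [lreplace $allids $idx $idx]
    }
}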
I'm writing a socket utility for communication between a client and a server. When the client writes to the socket, the server receives the data fine. However, when the server writes to the socket, the client can't read it; checking fblocked $channel returns 1. I've tried everything, including adding a newline.
Please help.
Below is my code:
proc read_command { sock } {
    variable self
    global connected
    set len [gets $sock line]
    set bl [fblocked $sock]
    puts "Characters Read: $len Fblocked: $bl"
    if {$len < 0} {
        if {$bl} {
            puts "Input is blocked"
        } else {
            set connected 1
            puts "The socket was closed - closing my end"
            close $sock
        }
    } else {
        if {!$bl} {
            puts "Read $len characters: $line"
            catch {uplevel #0 $line} output
            puts "1==>$output<=="
            puts $sock "$output"
            puts $sock "\n"
            flush $sock
        }
    }
}
proc client { host port } {
    variable self
    set s [socket $host $port]
    set self(csock) $s
    set self($s,addr) $host
    set self($s,port) $port
    fconfigure $s -buffering line -blocking 0
    return $s
}
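One observation (mine, not from the thread): with -blocking 0, gets returns -1 and fblocked reports 1 whenever a full line has not arrived yet, and nothing arrives at all unless the event loop runs. The usual wiring looks roughly like this, reusing the procs above:
set s [client $addr $port]
fileevent $s readable [list read_command $s]
vwait connected   ;# enter the event loop; read_command sets connected on close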
proc prun { sock args } {
    variable self
    set result [list]
    set cmd $args
    # strip a leading "{" and trailing "}" from the command string
    set cmd [regsub -all {(^\s*\{)|(\}\s*$)} $cmd ""]
    set cmd [string trimleft $cmd]
    set o1 [eval $cmd]
    #catch {uplevel #0 $cmd} o1
    puts "1_$sock ==> $o1"
    lappend result $o1
    #--------------
    puts $sock $cmd
    flush $sock
    set bl [fblocked $sock]
    set file [read $sock]
    set bl [fblocked $sock]
    puts "Fblocked: $bl"
    puts "Output: $file"
    puts "2_$Comm::self(csock) ==> $file ==> $bl"
    lappend result $file
    return $result
}
Here is how I run it.
I call server in one terminal; it echoes the IP address and the port.
Then I call client with that address and port to get back the client socket.
Then I call prun in the client shell to get back a pair of values: one is the result of the command evaluated on the client, and the other is the result of the same command evaluated on the server. Basically, I would like to get this pair of values so I can correlate the two sets of data.
Below is the code:
1. On the server shell:
$ server
2. On the client shell:
$ set s [client $addr $port]
3. Call a proc to get the value from the client shell, then send the command to the server to get the value from the server shell, and return that value back to the client:
$ set res [prun $s {set val [get_attribute [get_nets mynet] pin_capacitance_max]}]
You wrote:
puts "2_$Comm::self(csock) ==> $file ==> $bl"
and defined self with variable. Are you working with packages or namespaces? Maybe you forgot something related to that.
For a test you could just use global, though with arrays it would be a little more complicated.
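A minimal sketch of what this hints at; the namespace name Comm is inferred from the question's own $Comm::self(csock) reference:
namespace eval Comm {
    variable self
    array set self {}

    proc client {host port} {
        variable self   ;# binds to Comm::self rather than a global or local array
        set s [socket $host $port]
        set self(csock) $s
        fconfigure $s -buffering line -blocking 0
        return $s
    }
}
Without such a surrounding namespace eval, variable self inside a proc resolves against the global namespace, and the qualified $Comm::self(csock) lookup fails.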
I am trying to automate an xterm window using Expect (I already know that Expect can't control such GUI applications directly, but there is a tweaked mechanism explained in Exploring Expect).
package require Expect
spawn -pty
stty raw -echo < $spawn_out(slave,name)
regexp ".*(.)(.)" $spawn_out(slave,name) dummy c1 c2
if {[string compare $c1 "/"] == 0} {
    set c1 "0"
}
set xterm_pid [exec xterm -S$c1$c2$spawn_out(slave,fd) &]
close -slave
expect "\n" ;# match and discard X window id
set xterm $spawn_id
spawn $env(SHELL)
Don Libes mentions that from this point on the xterm can be automated, and he gives an example of driving the xterm with the interact command as follows:
interact -u $xterm "X" {
    send -i $xterm "Press return to go away: "
    set timeout -1
    expect -i $xterm "\r" {
        send -i $xterm "Thanks!\r\n"
        exec kill $xterm_pid
        exit
    }
}
But what I want is to send and expect commands to/from the xterm. I have tried the following:
send -i $xterm "ls -l\r"; # Prints commands on xterm
expect -i $xterm "\\\$" ; # Trying to match the prompt
But it didn't work out. This example mainly relies on xterm's command-line option -Sccn:
-Sccn
This option allows xterm to be used as an input and output channel for
an existing program and is sometimes used in specialized applications.
The option value specifies the last few letters of the name of a
pseudo-terminal to use in slave mode, plus the number of the inherited
file descriptor. If the option contains a "/" character, that delimits
the characters used for the pseudo-terminal name from the file
descriptor. Otherwise, exactly two characters are used from the option
for the pseudo-terminal name, the remainder is the file descriptor.
Examples:
-S123/45
-Sab34
Note that xterm does not close any file descriptor which it did not open for its own use. It is possible (though probably not
portable) to have an application which passes an open file descriptor
down to xterm past the initialization or the -S option to a process
running in the xterm.
Where am I making the mistake?
Here is a view of the code I used; it is extracted from a more complex program.
# create pty for xterm
set spawn(PTTY,PID) [spawn -noecho -pty]
set spawn(PTTY,DEVICE) $spawn_out(slave,name)
set spawn(PTTY) $spawn_id
stty raw -echo < $spawn(PTTY,DEVICE)
regexp ".*(.)(.)" $spawn_out(slave,name) dummy c1 c2
if {[string compare $c1 "/"] == 0} { set c1 0 }
# Start XTERM (using -into can place the xterm in a Tk widget)
set pid(XTERM) [::exec xterm -S$c1$c2$spawn_out(slave,fd) {*}$additionalXtermOptions &]
close -slave
# Link
set spawn(SHELL,PID) [spawn -noecho {*}$commandInXterm]
set spawn(SHELL) $spawn_id
set spawn(SHELL,DEVICE) $spawn_out(slave,name)
# ...
# send a key or string into the xterm
exp_send -raw -i $spawn(SHELL) -- $key
exp_send -raw -i $spawn(SHELL) -- "$str\r"
As Mr. Thomas Dickey pointed out here, I started exploring multixterm and was finally able to make a standalone version where the commands are sent to the xterm directly.
The part I had mainly missed in my code was expect_background, which actually does the linking in the background. I hope this helps everyone who wants to automate xterm. All credit to Mr. Thomas Dickey and Mr. Don Libes!
#!/usr/bin/tclsh
package require Expect
set ::xtermStarted 0
set xtermCmd $env(SHELL)
set xtermArgs ""
# set up verbose mechanism early
set verbose 0
proc verbose {msg} {
    if {$::verbose} {
        if {[info level] > 1} {
            set proc [lindex [info level -1] 0]
        } else {
            set proc main
        }
        puts "$proc: $msg"
    }
}
# ::xtermSid is an array of xterm spawn ids indexed by process spawn ids.
# ::xtermPid is an array of xterm pids indexed by process spawn id.
######################################################################
# create an xterm and establish connections
######################################################################
proc xtermStart {cmd name} {
    verbose "starting new xterm running $cmd with name $name"
    ######################################################################
    # create pty for xterm
    ######################################################################
    set pid [spawn -noecho -pty]
    verbose "spawn -pty: pid = $pid, spawn_id = $spawn_id"
    set ::sidXterm $spawn_id
    stty raw -echo < $spawn_out(slave,name)
    regexp ".*(.)(.)" $spawn_out(slave,name) dummy c1 c2
    if {[string compare $c1 "/"] == 0} {
        set c1 0
    }
    ######################################################################
    # start new xterm
    ######################################################################
    set xtermpid [eval exec xterm -name dinesh -S$c1$c2$spawn_out(slave,fd) $::xtermArgs &]
    verbose "xterm: pid = $xtermpid"
    close -slave
    # xterm first sends back window id, save in environment so it can be
    # passed on to the new process
    log_user 0
    expect {
        eof {wait; return}
        -re (.*)\n {
            # convert hex to decimal
            # note quotes must be used here to avoid diagnostic from expr
            set ::env(WINDOWID) [expr "0x$expect_out(1,string)"]
        }
    }
    ######################################################################
    # start new process
    ######################################################################
    set pid [eval spawn -noecho $cmd]
    verbose "$cmd: pid = $pid, spawn_id = $spawn_id"
    set ::sidCmd $spawn_id
    ######################################################################
    # link everything back to spawn id of new process
    ######################################################################
    set ::xtermSid($::sidCmd) $::sidXterm
    set ::xtermPid($::sidCmd) $xtermpid
    ######################################################################
    # connect proc output to xterm output
    # connect xterm input to proc input
    ######################################################################
    expect_background {
        -i $::sidCmd
        -re ".+" {
            if {!$::xtermStarted} {set ::xtermStarted 1}
            sendTo $::sidXterm
        }
        eof [list xtermKill $::sidCmd]
        -i $::sidXterm
        -re ".+" {
            if {!$::xtermStarted} {set ::xtermStarted 1}
            sendTo $::sidCmd
        }
        eof [list xtermKill $::sidCmd]
    }
    vwait ::xtermStarted
}
######################################################################
# connect main window keystrokes to all xterms
######################################################################
proc xtermSend {A} {
    exp_send -raw -i $::sidCmd -- $A
}
proc sendTo {to} {
    exp_send -raw -i $to -- $::expect_out(buffer)
}
######################################################################
# clean up an individual process death or xterm death
######################################################################
proc xtermKill {s} {
    verbose "killing xterm $s"
    if {![info exists ::xtermPid($s)]} {
        verbose "too late, already dead"
        return
    }
    catch {exec /bin/kill -9 $::xtermPid($s)}
    unset ::xtermPid($s)
    # remove sid from activeList
    verbose "removing $s from active array"
    catch {unset ::activeArray($s)}
    verbose "removing from background handler $s"
    catch {expect_background -i $s}
    verbose "removing from background handler $::xtermSid($s)"
    catch {expect_background -i $::xtermSid($s)}
    verbose "closing proc"
    catch {close -i $s}
    verbose "closing xterm"
    catch {close -i $::xtermSid($s)}
    verbose "waiting on proc"
    wait -i $s
    wait -i $::xtermSid($s)
    verbose "done waiting"
    unset ::xtermSid($s)
    set ::forever NO
}
######################################################################
# create windows
######################################################################
# xtermKillAll is not intended to be user-callable. It just kills
# the processes and that's it. A user-callable version would update
# the data structures, close the channels, etc.
proc xtermKillAll {} {
    foreach sid [array names ::xtermPid] {
        exec /bin/kill -9 $::xtermPid($sid)
    }
}
rename exit _exit
proc exit {{x 0}} {xtermKillAll;_exit $x}
xtermStart $xtermCmd $xtermCmd
xtermSend "ls -l\r"
xtermSend "pwd\r"
vwait ::forever
Several times I have run into the advice that it is best to put a script into a proc in order to boost run-time performance; e.g. this answer has the following:
That is one reason for the advices to put all your code inside procedures (they get byte-compiled that way)
Something does not click for me.
Just as described in the answer, the first time a script runs there is a check whether a command can be byte-compiled, and if so, it is compiled. This makes total sense. But I do not see how proc plays an important role. E.g. compare the following two scripts:
set v [concat [lindex $::argv 1] [lindex $::argv 2]]
myCmd $v
and
proc p1 {v1 v2} {
    set v [concat $v1 $v2]
    return [myCmd $v]
}
p1 [lindex $::argv 1] [lindex $::argv 2]
My high-level interpretation of the two scripts is the following:
Running either script the first time compiles the set, concat, lindex, and return commands.
The second script additionally compiles proc.
myCmd is not compiled in either script.
Subsequent runs of either script execute the bytecode, except for myCmd.
So what is the advantage of proc?
I ran disassemble on the scripts.
The first script:
ByteCode 0x0x83fc70, refCt 1, epoch 3, interp 0x0x81d680 (epoch 3)
Source "set v [concat [lindex $::argv 1] [lindex $::argv 2]]\nmy"
Cmds 5, src 61, inst 50, litObjs 4, aux 0, stkDepth 4, code/src 0.00
Commands 5:
1: pc 0-41, src 0-51 2: pc 2-39, src 7-50
3: pc 4-20, src 15-30 4: pc 21-37, src 34-49
5: pc 42-48, src 53-60
Command 1: "set v [concat [lindex $::argv 1] [lindex $::argv 2]]"
(0) push1 0 # "v"
Command 2: "concat [lindex $::argv 1] [lindex $::argv 2]"
(2) push1 1 # "concat"
Command 3: "lindex $::argv 1"
(4) startCommand +17 1 # next cmd at pc 21
(13) push1 2 # "::argv"
(15) loadScalarStk
(16) listIndexImm 1
Command 4: "lindex $::argv 2"
(21) startCommand +17 1 # next cmd at pc 38
(30) push1 2 # "::argv"
(32) loadScalarStk
(33) listIndexImm 2
(38) invokeStk1 3
(40) storeScalarStk
(41) pop
Command 5: "myCmd $v"
(42) push1 3 # "myCmd"
(44) push1 0 # "v"
(46) loadScalarStk
(47) invokeStk1 2
(49) done
The second script:
ByteCode 0x0xc06c80, refCt 1, epoch 3, interp 0x0xbe4680 (epoch 3)
Source "proc p1 {v1 v2} {\n set v [concat $v1 $v2]\n return"
Cmds 4, src 109, inst 50, litObjs 5, aux 0, stkDepth 4, code/src 0.00
Commands 4:
1: pc 0-10, src 0-67 2: pc 11-48, src 69-108
3: pc 13-29, src 73-88 4: pc 30-46, src 92-107
Command 1: "proc p1 {v1 v2} {\n set v [concat $v1 $v2]\n return"
(0) push1 0 # "proc"
(2) push1 1 # "p1"
(4) push1 2 # "v1 v2"
(6) push1 3 # "\n set v [concat $v1 $v2]\n return ["
(8) invokeStk1 4
(10) pop
Command 2: "p1 [lindex $::argv 1] [lindex $::argv 2]"
(11) push1 1 # "p1"
Command 3: "lindex $::argv 1"
(13) startCommand +17 1 # next cmd at pc 30
(22) push1 4 # "::argv"
(24) loadScalarStk
(25) listIndexImm 1
Command 4: "lindex $::argv 2"
(30) startCommand +17 1 # next cmd at pc 47
(39) push1 4 # "::argv"
(41) loadScalarStk
(42) listIndexImm 2
(47) invokeStk1 3
(49) done
So script 2 does have one fewer Tcl command, but both scripts compile to the same number of bytecode instructions.
Finally, the run-time test. I commented out "myCmd" because I do not actually have such an extension. Here is the result:
% time {source 1.tcl} 10000
242.8156 microseconds per iteration
% time {source 2.tcl} 10000
257.9389 microseconds per iteration
So the proc version is even slower.
What am I missing? Or rather, what is the correct understanding of proc and performance?
The really big reason that putting things in a procedure matters is that procedures have a local variable table. Variables in the LVT can be accessed by numerical index, which is stupendously faster than the alternative (a lookup via a hash table, even though Tcl's got an extremely fast hash table implementation). It doesn't make much difference for a one-off call, but with repeated calls or a loop, the performance differences rapidly add up to something significant. This can quite easily make the extra cost of the extra compilation and stack frame management (procedures aren't free to enter, though we try to keep them cheap) basically irrelevant in real scripts.
And yes, Tcl actually bytecode-compiles everything. It's just that it often generates sub-optimal bytecode outside of procedure(-like context)s; in the limit case for suboptimality, all the bytecode is doing is assembling arguments into a list, doing a dynamic command invoke, and routing the result.
(It's important when reading Tcl's disassembled bytecode to remember that the costs of particular bytecodes are not all the same. You cannot just count the number of instructions to work out the cost in any useful way. For example, push1 is very cheap but invokeStk1 is potentially very costly. Another example, loadScalarStk is usually much more expensive than loadScalar1; the latter is used inside procedures only.)
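You can see this directly with the (unsupported but long-present) disassembler; a quick check in Tcl 8.6:
proc f {} {set x 1; return $x}
# inside a proc: storeScalar1/loadScalar1 working on LVT slots
puts [tcl::unsupported::disassemble proc f]
# the same statement at script level: storeScalarStk, a hash-table lookup
puts [tcl::unsupported::disassemble script {set x 1}]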
The following two scripts demonstrate the performance gain from using procs. In the second script, the inner loop is extracted into a proc, leading to a roughly 5x speedup.
without_proc.tcl
#!/usr/bin/env tclsh
set sum 0
set n 10000
set k 100
for { set i 0 } { $i < $k } { incr i } {
    set s 0
    for { set j 0 } { $j < $n } { incr j } {
        set s [expr {$s + $j}]
    }
    set sum [expr {$sum + $s}]
}
puts "sum=$sum"
with_proc.tcl
#!/usr/bin/env tclsh
proc foo {n} {
    set s 0
    for { set j 0 } { $j < $n } { incr j } {
        set s [expr {$s + $j}]
    }
    return $s
}
set sum 0
set n 10000
set k 100
for { set i 0 } { $i < $k } { incr i } {
    set s [foo $n]
    set sum [expr {$sum + $s}]
}
puts "sum=$sum"
Benchmark:
$ tclsh
% time {source with_proc.tcl} 1
sum=4999500000
67482 microseconds per iteration
% time {source without_proc.tcl} 1
sum=4999500000
406557 microseconds per iteration
or
$ time tclsh with_proc.tcl
sum=4999500000
real 0m0.089s
user 0m0.080s
sys 0m0.004s
$ time tclsh without_proc.tcl
sum=4999500000
real 0m0.401s
user 0m0.388s
sys 0m0.016s