"gets stdin" is not waiting for user input and exits - tcl

I am working on a Tcl script where I want the user to enter a choice from a given list of options.
I am using "gets stdin" for the user input.
But my terminal is not waiting for user input and just exits.
Input → gets stdin abc
Output → -1

The -1 indicates that either EOF was reached on stdin, or that your stdin is in non-blocking mode. You can check by running eof stdin and fconfigure stdin -blocking. On a normal terminal those commands should return 0 and 1 respectively.
To set stdin to blocking mode, you can use fconfigure stdin -blocking 1
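A minimal diagnostic sketch along those lines (the prompt text is only for illustration):
# Diagnose why gets returned -1
puts [eof stdin]                    ;# 1 here means EOF was reached
puts [fconfigure stdin -blocking]   ;# 0 here means non-blocking mode
# Restore blocking mode, then prompt again
fconfigure stdin -blocking 1
puts -nonewline "Enter your choice: "
flush stdout
gets stdin choice
puts "you chose: $choice"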

Related

How will I be able to send a Ctrl+C in Tcl

Is there a way that I could send a Ctrl+C signal to a Tcl program?
I have a Tcl script which, when I execute it, should internally go through a Ctrl+C signal and print something like:
puts "sent ctrl+c" within the same file.
proc abc {} {
    # Want to send Ctrl+C; here I want the command for it
    puts "sent ctrl+c"
}
If you are sending the signal to a program under the control of Expect, you do:
send "\003"
That's literally the character that your keyboard generates immediately when you do Ctrl+C; it gets translated into a signal by the terminal driver.
Otherwise, you need to use the TclX package (or Expect, though you should only use that if you need its full capabilities) which provides a kill command:
package require Tclx
kill SIGINT $theProcessID
# You could also use INT or 2 to specify the signal to send.
# You can provide a list of PIDs instead of just one too.
Knowing what process ID to send to is a matter of keeping track of things when you create the process. The current process's PID is returned by the pid command if you don't give it any arguments. The process ID(s) of the subprocesses created are returned by exec ... & for all the (known) processes in the background pipeline it creates. For pipelines created with open |..., pass the channel handle for the pipeline to the pid command to get the subprocess IDs.
set pipeline [open |[list program1 ... | program2 ... | program3 ...] "r+"]
puts $pipeline "here is some input"
set outputLine [gets $pipeline]
kill SIGINT [pid $pipeline]
# This close will probably produce an error; you've killed the subprocesses after all
catch {close $pipeline}
If you're handling the interrupt signal, use the signal command from TclX to do it:
package require Tclx
signal error SIGINT; # Generate a normal Tcl error on signal
signal trap SIGINT {
    # Custom signal handler
    puts "SIGNALLED!"
    exit
}
signal default SIGINT; # Restore default behaviour
If you use signal error SIGINT, the error generated will have the message “SIGINT signal received” and the error code “POSIX SIG SIGINT”. This is easy to test for (especially with Tcl 8.6's try … trap … command).
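For instance, a minimal sketch of trapping that error code with try … trap (assuming Tcl 8.6 and a script that is otherwise just sitting in the event loop):
package require Tclx
signal error SIGINT
try {
    vwait forever                     ;# wait here until something happens
} trap {POSIX SIG SIGINT} {msg} {
    puts "caught interrupt: $msg"
}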

Communication between Tcl scripts

I was successfully able to redirect the standard output of a script called by my GUI (tcl/tk) using:
exec [info nameofexecutable] jtag.tcl >@$file_id
Here's a description of my system.
Now I want to be able to tell jtag.tcl to stop data acquisition (which is in an infinite loop) when I click "stop" button. Is it possible through exec or should I use open instead?
The exec command waits until the subprocess finishes before returning control to you at all (unless you run totally disconnected in the background). To maintain control you need to open a pipeline:
# Not a read pipe since output is redirected
set pipe [open |[list [info nameofexecutable] jtag.tcl >@$file_id] "w"]
You also need to ensure that the other process listens for when the pipe is closed or have some other protocol for telling the other end to finish. The easiest mechanism to do that is for the remote end to put the pipe (which is its stdin) into non-blocking mode and to check periodically for a quit message.
# Putting the pipe into nonblocking mode
fconfigure stdin -blocking 0
# Testing for a quit message; put this in somewhere it can be called periodically
if {[gets stdin] eq "quit"} {
    exit
}
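Putting those pieces together, a minimal sketch of the subordinate script's main loop (acquireOneSample is a hypothetical stand-in for one pass of the real acquisition work):
fconfigure stdin -blocking 0
while {true} {
    acquireOneSample                  ;# hypothetical: one pass of the acquisition work
    if {[gets stdin] eq "quit" || [eof stdin]} {
        exit                          ;# stop on "quit" or when the parent closes the pipe
    }
}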
Then the shutdown protocol for the subprocess becomes this in the parent process:
puts $pipe "quit"
close $pipe
Alternatively, kill the subprocess and pick up the results:
exec kill [pid $pipe]
# Need the catch; this will throw an error otherwise because of the signal
catch {close $pipe}

Tcl open channel

How does one open a channel that is not a filename in Tcl? I've read the docs, but I'm not a programmer, so I must not understand the open and chan commands, because when I try to open a new custom channel
open customchannel1 RDWR
I get errors such as
couldn't execute "customchannel1": no such file or directory
And I'm fully aware that I don't do this correctly:
chan create read customchannel1
invalid command name "customchannel1" ...and... invalid command name "initialize"
All I want is two tcl scripts to be able to talk to each other. I thought I could use channels to do this.
I have, however, successfully created a socket test version of what I want:
proc accept {chan addr port} {
    puts "$addr:$port says [gets $chan]"
    puts $chan goodbye
    close $chan
}
puts -nonewline "master or slave? "
flush stdout
set name [gets stdin]
if {$name eq "master"} {
    puts -nonewline "Whats the port? "
    flush stdout
    set port [gets stdin]
    socket -server accept $port
    vwait forever
} else {
    puts "slave then."
    puts -nonewline "Whats the id? "
    flush stdout
    set myid [gets stdin]
    set chan [socket 127.0.0.1 $myid]
    puts $chan hello
    flush $chan
    puts "127.0.0.1:$myid says [gets $chan]"
    close $chan
}
In the above example I can run 3 instances of the program: 2 'masters' with different port numbers, and a 'slave' that can talk to either one depending on the port/'id' it chooses.
If I knew how to open a channel with the open command instead of the socket command, I could implement the above code without using sockets or jimmy-rigging the ports to be used as unique IDs, but every example I can find opens files and writes out to files or standard out, which you don't have to create in the first place.
Thanks for helping me understand these concepts and how to implement them better!
A channel is simply a high level method for working with already open files or sockets.
From the manual page:
This command provides several operations for reading from, writing to and otherwise manipulating open channels (such as have been created with the open and socket commands, or the default named channels stdin, stdout or stderr which correspond to the process's standard input, output and error streams respectively).
So what you are doing with sockets is correct. You can use the chan command to configure the open socket.
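For instance, a small sketch of tuning the accepted socket from the example above (onReadable is a hypothetical callback name):
chan configure $chan -buffering line -blocking 0
chan event $chan readable [list onReadable $chan]   ;# onReadable is hypothetical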
When connecting two scripts together, you might think in terms of using a pipeline. For example, you could run one script as a subordinate process of the other. The master does this:
set pipe [open |[list [info nameofexecutable] $thescriptfile] "r+"]
to get a bidirectional (because r+) pipeline to talk to the child, which can in turn just use stdout and stdin as normal.
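On the child's side, a minimal sketch of serving that pipeline could look like this (the echo protocol is only an illustration):
fconfigure stdout -buffering line
while {[gets stdin line] >= 0} {
    puts "echo: $line"                ;# reply to the parent over the pipe
}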
Within a process, chan pipe is available, which returns a pair of channels that are connected by an OS anonymous pipe.
When working with these, it really helps if you remember to use fconfigure to turn -buffering to none. Otherwise you can get deadlocks while output to a pipe sits in a buffer somewhere, which you don't want. The ultimate answer to that is to use Expect, which uses Unix ptys instead of pipes, but you can be quite productive provided you remember to tune the buffering.
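For instance, a minimal in-process sketch with chan pipe, with the buffering tuned as described:
lassign [chan pipe] rd wr             ;# read end first, then write end
fconfigure $wr -buffering none        ;# avoid deadlock from buffered output
puts $wr "hello"
puts [gets $rd]                       ;# prints: hello
close $wr
close $rd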

piping plink output in real time into text widget

I'm trying to connect with a server using plink, execute a command and 'pipe' output into my text widget:
set myCommand "echo command | plink.exe -ssh server -pw lucas"
catch {eval exec cmd.exe $myCommand } res
.text insert end $res
# The remote command is not working, which is why I'm sending the command using echo; I'm executing it in cmd.exe because Tcl does not recognize the echo command
Unfortunately, catch is not working here. The command is executed in the background and nothing happens; nothing is piped into my text widget. It would be great if the captured output were transmitted in real time.
That's why I've tried this:
http://wiki.tcl.tk/3543
package require Tk
set shell cmd.exe
proc log {text {tags {}}} {
    .output configure -state normal
    .output insert end $text $tags
    .output configure -state disabled
    .output see end
}
proc transmit {} {
    global chan
    log "\$ [.input get]\n" input
    puts $chan [.input get]
    .input delete 0 end
}
proc receive {} {
    global chan
    log [read $chan]
}
entry .input
scrollbar .scroll -orient vertical -command {.output yview}
text .output -state disabled -yscrollcommand {.scroll set}
.output tag configure input -background gray
pack .input -fill x -side bottom
pack .scroll -fill y -side right
pack .output -fill both -expand 1
focus .input
set chan [open |$shell a+]
fconfigure $chan -buffering line -blocking 0
fileevent $chan readable receive
bind .input <Return> transmit
and it's working under Cygwin, but after wrapping it into an .exe, when I try to execute a command, plink opens a new black cmd window (why???) where the command executes and the output appears. From this window I can no longer pipe the output.
Many issues here. Hopefully these notes will help you figure out what to do.
It seems a bit over-complicated to run things through cmd.exe just to make echo piped into plink.exe work. I'd write that like this:
catch { exec plink.exe -ssh server -pw lucas << "command" } res
Note that like this, we don't need to use eval at all.
Failing that, if that command is something coming from the user and you need to support command shell syntax with it, you can do this:
set myCommand "echo command | plink.exe -ssh server -pw lucas"
catch { exec cmd.exe /C $myCommand } res
Otherwise you get into the mess of stuff related to the parsing of options to cmd.exe and that's probably not what you want! (That /C is important; it tells cmd.exe “here comes a command”.)
Note that we're still avoiding eval here. That eval is (almost) certainly a bad idea with what you're trying to do.
When working with wrapped code, the problem is a different one. The issue there is that Windows processes each use a particular subsystem (it's effectively a compilation option when building the executable if I remember right) and the wrapped wish.exe and cmd.exe use different subsystems. Going across the subsystem boundaries is very messy, as the OS tries to be helpful and does things like allocating a terminal for you. Which you didn't want. (I don't remember if you have the problem with direct use of plink.exe, and I can't check from here due to being on entirely the wrong platform and not having a convenient VM set up.)
To run the plink.exe in the background, assemble the pipeline like this:
set myCmd [list plink.exe -ssh server -pw lucas]
set chan [open |$myCmd a+]; # r+ or w+ would work just as well for read/write pipe
fconfigure $chan -buffering none -blocking 0
puts $chan "command"
Also, be aware that when using fileevent you need to take care to detect an EOF condition and close the pipe when that happens:
proc receive {} {
    global chan
    log [read $chan]
    if {[eof $chan]} {
        close $chan
    }
}
Otherwise when the pipe is closed you'll get an infinite sequence of events on the pipe channel (and the read will always produce a zero-length result since there's provably nothing there).

how to create log file for tcl script

I am running a Tcl script that connects to a Telnet port. For this script I want to store all the CMD output in a log file. How do I do this in a Tcl script? The script is below:
#!/usr/bin/expect -f
package require Expect
spawn telnet $serverName $portNum
expect "TradeAggregator>"
send "Clients\r"
expect "Client:"
send "1\r"
expect "1-Client>"
send "Pollers\r"
expect "Client Pollers"
send "2\r"
send "n\r"
expect ">"
set passwordOption $expect_out(buffer)
set searchString "Not confirmed"
if {[string match *$searchString* $passwordOption]} {
    puts "match found"
} else {
    puts "match not found"
    xmlPasswordChange $polName
}
None of the puts output or the xmlPasswordChange procedure output is printed in the log file. Can you please point out where I am going wrong?
Thanks for your help in advance.
You want to insert a log_file command where you want to start saving the output. For example, if you want to save all the output, then put it at the beginning:
#!/usr/bin/expect -f
log_file myfile.log ;# <<< === append output to a file
package require Expect
spawn telnet $serverName $portNum
expect "TradeAggregator>"
send "Clients\r"
expect "Client:"
send "1\r"
expect "1-Client>"
send "Pollers\r"
expect "Client Pollers"
send "2\r"
By default, log_file appends if the file exists, or creates a new one. If you want to start a new log every time, then:
log_file -noappend myfile.log
Update
Per your question regarding why puts output goes to the console and not the log file: the way I understand it, puts always goes to the console. If you want to write to the log file (the one that was opened with the log_file command), then use the send_log command instead:
log_file -noappend myfile.log
send_log "This line goes into the log file"
send_user "This line goes into both the log file and console\n"
puts "This line goes to the console"
The above example introduced another command: send_user, which acts like puts and send_log combined. Note that send_user does not append a newline, so the caller should include one.
Also, we can create our own log file with the standard [open $file_name w] command and keep writing everything to that file ourselves.
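A minimal sketch of that do-it-yourself approach (the file name and the logLine helper are only for illustration):
set logChan [open "session.log" w]
proc logLine {msg} {
    global logChan
    puts $logChan $msg                ;# write the line to the log file
    puts $msg                         ;# echo it to the console as well
}
logLine "starting telnet session"
close $logChan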