Socket timeout after some time - Tcl

If the server goes into an infinite loop, how can we close the server connection after some time?
Here is the code I am trying:
set s [socket $host $port]
fconfigure $s -blocking 1 -buffering line
after 2000 set end 1
vwait end
if { $s != "" } {
    puts -nonewline $s "$msg\n.\n"
    flush $s
    fileevent $s readable [set answer [read $s]]
    puts "$answer"
    if {[catch {close $s}]} {
        puts "Server hanged"
    }
}
The code above works if the server returns the answer without any problem. If the server goes into an infinite loop, it keeps hanging in read $s. Please help me handle this socket read in non-blocking mode, as with fconfigure.

If you're using blocking sockets, you'll have this problem: putting the channel in non-blocking mode is the fix (together with using after to implement a timeout). This does mean that you'll have to deal with all the complexity of asynchronous programming, but that's the tradeoff that you need here.
The two places where things can hang are in connection establishment and in production of the data. You would therefore use asynchronous connection and non-blocking retrieval.
set s [socket -async $host $port]
fconfigure $s -blocking 0
fileevent $s writable [list connected $s]

proc connected {s} {
    global msg
    fileevent $s writable {}
    puts -nonewline $s "$msg\n.\n"
    flush $s
    fileevent $s readable [list accumulateBytes $s]
}
set accumulate ""
proc accumulateBytes {s} {
    global accumulate end
    append accumulate [read $s]
    if {[eof $s]} {
        set end 0
    }
}
# Longer *total* time to connect and communicate.
after 5000 set end 1
vwait end
catch {close $s}
if {$end} {puts "timed out"}
puts "received message: $accumulate"

Why does the channel event readable keep firing in this code?

Could you tell me what in this code would cause the chan event $sock readable [list ReadLine $sock] to repeatedly be fired after a socket has been reset?
I'm trying to keep the socket open on an extremely simple local server by resetting the socket to read again at the first line.
I'm not sure if it is the browser or the Tcl code. I flush the socket before sending each response using chan flush $sock, so I don't think there should be any data left in the input buffer to fire the readable event.
Everything works fine until I stop using the application for a few minutes, and then procedure ReadLine is repeatedly invoked with a state of 1 but no data.
I included procedure GetLexi at the end because it's the one I've been testing when this started occurring and, perhaps, I'm doing something wrong there and the browser doesn't know the response is complete. I should also add that it is called from within procedure GetHandler and should return back to the switch block and be reset. I did test that the socket is reset after an invocation of GetLexi.
Thank you for any guidance you may be able to provide.
proc ResetSock {sock} {
    global state
    set state($sock) {1}
    chan configure $sock -encoding iso8859-1 -translation crlf
}; #close ResetSock

proc ClientConnect {sock client_ip client_port} {
    global state
    if {![info exists state($sock)]} {
        set state($sock) {1}; # 1 for first line; 2 for any other header line.
        chan configure $sock -buffering line -blocking 0 -encoding iso8859-1 -translation crlf
    }
    chan event $sock readable [list ReadLine $sock]
}; #close ClientConnect

proc ReadLine {sock} {
    global state
    set sptr state($sock)
    set row [lindex [set $sptr] 0]
    if {[catch {gets $sock line} len]} {
        # Handle this error.
        return
    }
    if {$len == 0} {
        # According to Tclhttpd, in between requests, a row 1 and len 0
        # combination can occur. There, it is ignored.
        if {$row == 2} {
            switch [lindex [set $sptr] 1] {
                "POST" {
                    set post [PostHandler $sock [lindex [set $sptr] 3]]
                    puts stdout "Posted: $post"
                    ResetSock $sock
                }
                "GET" {
                    GetHandler $sock [lindex [set $sptr] 2]
                    ResetSock $sock
                }
                default { CloseSock $sock }
            }
        }
    } elseif {$len > 0} {
        switch $row {
            1 {
                # First row of request.
                lassign [split [string trim $line]] op arg rest
                lappend $sptr $op $arg
                lset $sptr 0 0 2
            }
            2 {
                # Read headers.
            }
            default { }
        }
    } else {
        # Only other option for $len is a negative value;
        # thus, an error to be handled.
    }
}; #close ReadLine

proc GetLexi { nbr sock } {
    chan flush $sock
    set sql { select img_base64 from lexi_raw where strongs_no = $nbr }
    dbws eval $sql {
        set lexi [binary format a* "{\"lexi\":\"$img_base64\"}"]
    }
    set headers ""
    append headers "HTTP/1.1 200 OK\n"
    append headers "Content-Type: application/json; charset: utf-8\n"
    append headers "Content-length: [string length $lexi]\n"
    append headers "Connection: Keep-Alive\n"
    puts $sock $headers
    chan configure $sock -translation binary
    puts $sock $lexi
}; #close GetLexi

set state(x) {}
if {[catch {set listener [socket -server ClientConnect -myaddr 127.0.0.1 8000]}]} {
    error "couldn't open listening socket"
}
vwait forever
catch {close $listener}
It's possible for the amount of data received in a packet to be not enough to complete a line. TCP hides most of the details, of course, but it remains entirely possible for the readable event to fire when a complete line is not available. As you've put the socket in non-blocking mode, that means that gets will do a zero length read (empty string written to line, len becomes 0); if the socket was blocking, the gets would block the thread until a complete line was available. You can also get zero-length reads if the channel is closed; detection of closure isn't very reliable with TCP (because networking is like that) but may happen. When the socket is closed, all reading from it in non-blocking mode results in zero-length results.
How do you distinguish these cases?
First, we check for end-of-stream:
if {[chan eof $sock]} {
    # It's gone; there's not much else you can do at this point except log it I guess
    close $sock
    return
}
Then we need to see if things are blocked, and if so, how much is buffered:
if {[chan blocked $sock]} {
    set bufferSize [chan pending input $sock]
    if {$bufferSize > 4096} { # 4k is enough for most things
        # Line is too long; client not well-behaved…
        # You *might* send an error here.
        close $sock
        return
    }
}
If it's neither of those cases, we've actually read a line.
if {$len == 0} {
    # Empty line; end of HTTP header
} else {
    # etc.
}
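Putting those three checks together at the top of ReadLine, a sketch (reusing the gets-with-catch call from the question) might look like this:

proc ReadLine {sock} {
    global state
    if {[catch {gets $sock line} len]} {
        # Real channel error; log it and drop the client.
        catch {close $sock}
        return
    }
    if {[chan eof $sock]} {
        # Peer closed the connection; closing stops the readable events.
        close $sock
        return
    }
    if {[chan blocked $sock]} {
        # No complete line yet; optionally guard against huge lines.
        if {[chan pending input $sock] > 4096} {
            close $sock
        }
        return
    }
    # Here $len >= 0 and $line holds a complete line;
    # $len == 0 is the blank line that ends an HTTP request header.
}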

Check abnormal connection drop before writing

I am new to Tcl scripting and am writing production code to open a socket to our server, write a command, and then read its output. Below is my code:
set chan [socket 123.345.45.33 23]
fconfigure $chan -buffering line
foreach item [dict keys $command] {
    set cmd [dict get $command $item]
    set res [Data_write $chan "get data $cmd"]
}
where the Data_write procedure is shown below:
proc Data_write { channel data } {
    if {[eof $channel]} {
        close $channel
        ST_puts "Data_write: Error while writing to channel"
        return -1
    } else {
        puts $channel $data
        return 0
    }
}
I am not sure how we can achieve the validations below:
set chan [socket 123.345.45.33 23] - the socket connection opened successfully
fconfigure on the channel succeeded
How can we know, before any write, that an abnormal connection drop has happened on the channel?
set chan [socket 123.345.45.33 23] - the socket connection opened successfully
fconfigure on the channel succeeded
These are simple enough: if there's a failure, you get a Tcl error, which is a sort of exception. You can use catch or try to trap the error if you want:
try {
    set chan [socket 123.345.45.33 23]
    fconfigure $chan -buffering line
} on error msg {
    puts "a serious problem happened: $msg"
    # Maybe also exit here...
}
How can we know, before any write, that an abnormal connection drop has happened on the channel?
The bad news is that you can't know this. The OS itself won't really know until you do one of two things: write to the channel, or read from the channel. (There are sometimes hints available, such as fileevent events firing, but they're not certain at all.) Instead, you need to trap the error when you actually do the write. See above for the general pattern.
Remember: Tcl operations throw errors when they fail, and EOF is not an error when reading, but is an error when writing.
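To illustrate that pattern concretely, here is a sketch (not the answer's code) of a Data_write that reuses the name from the question and traps the failure at the point of the write itself:

proc Data_write { channel data } {
    # A dropped connection only surfaces to a writer as an error on the
    # actual write or flush, so trap that instead of testing beforehand.
    if {[catch {
        puts $channel $data
        flush $channel
    } err]} {
        puts "Data_write: error while writing to channel: $err"
        catch {close $channel}
        return -1
    }
    return 0
}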
Use socket -async and the readable and writable fileevents to make the whole connection process event-oriented.
In the writable event you can check the status of the connection using fconfigure $channel -error. If something failed during the connection attempt, the socket becomes writable and the error condition is reported by the -error option. If this is empty, you can set up the readable event and start processing data from the socket. In any readable event handler you should check for eof after reading, and disable the readable event handler or close the socket once eof is seen, as a socket in the eof state becomes constantly readable.
This roughly works out to be:
proc OnWritable {chan} {
    set err [fconfigure $chan -error]
    if {$err ne ""} {
        puts stderr "ERROR: $err"
        exit 1
    }
    fconfigure $chan -blocking 0 -buffering none -encoding binary -translation lf
    fileevent $chan writable {}
    fileevent $chan readable [list OnReadable $chan]
}

proc OnReadable {chan} {
    set data [read $chan]
    puts "[string length $data]: $data"
    if {[eof $chan]} {
        fileevent $chan readable {}
        puts "closed $chan"
        set ::forever "closed"
    }
}

set sock [socket -async $address $port]
fileevent $sock writable [list OnWritable $sock]
vwait ::forever

Breaking Condition in If construct

Hi, I am using this piece of code for working with a pipeline in Tcl. Can anybody please help me understand when the condition [gets $pipe line] >= 0 fails?
For example, it fails only when [gets $pipe line] returns a negative number.
In my case it never returns a negative number, and the TestEngine hangs forever.
set pipeline [open "|Certify.exe filename" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0
proc handlePipeReadable {pipe} {
if {[gets $pipe line] >= 0} {
# Managed to actually read a line; stored in $line now
} elseif {[eof $pipe]} {
# Pipeline was closed; get exit code, etc.
if {[catch {close $pipe} msg opt]} {
set exitinfo [dict get $opt -errorcode]
} else {
# Successful termination
set exitinfo ""
}
# Stop the waiting in [vwait], below
set ::donepipe $pipe
} else {
puts ""
# Partial read; things will be properly buffered up for now...
}
}
vwait ::donepipe
The gets command (when given a variable to receive the line) returns a negative number when it is in a minor error condition. There are two such conditions:
When the channel has reached end-of-file. After the gets the eof command (applied to the channel) will report a true value in this case.
When the channel is blocked, i.e., when it has some bytes but not a complete line (Tcl has internal buffering to handle this; you can get the number of pending bytes with chan pending). You only see this when the channel is in non-blocking mode (because otherwise the gets will wait indefinitely). In this case, the fblocked command (applied to the channel) will return true.
Major error conditions (such as the channel being closed) result in Tcl errors.
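A compact sketch of how those two minor conditions can be told apart after a negative return from gets (using the same $pipe as above):

if {[gets $pipe line] < 0} {
    if {[eof $pipe]} {
        # End of file: the pipeline has finished producing output.
    } elseif {[fblocked $pipe]} {
        # Incomplete line so far; [chan pending input $pipe] reports
        # how many bytes are sitting in Tcl's internal buffer.
    }
}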
If the other command only produces partial output or does something weird with buffering, you can get an eternally blocked pipeline. It's more likely with a bidirectional pipe, such as you're using, as the Certify command is probably waiting for you to close the other end. Can you use it read-only? There are many complexities to interacting correctly with a process bidirectionally! (For example, you probably want to make the pipe's output buffering mode be unbuffered, fconfigure $pipeline -buffering none.)
Please see how the Certify process is triggered from the command prompt; the print statements are given just for understanding. At the end the process hangs and control is not transferred back to Tcl.
From the documentation for gets:
If varName is specified and an empty string is returned in varName because of end-of-file or because of insufficient data in nonblocking mode, then the return count is -1.
Your script works completely fine. I checked with set pipeline [open "|du /usr" "r+"] instead of your pipe and included puts "Line: $line" to check the result. So it's clear that there is some problem with the Certify command. Can you share your command, how you use it on the terminal, and how you used it with exec?
################### edited by Drektz
set pipeline [open "|du /usr" "r+"]
fileevent $pipeline readable [list handlePipeReadable $pipeline]
fconfigure $pipeline -blocking 0
proc handlePipeReadable {pipe} {
if {[gets $pipe line] >= 0} {
# Managed to actually read a line; stored in $line now
################### included by Drektz
puts "Line: $line"
} elseif {[eof $pipe]} {
# Pipeline was closed; get exit code, etc.
if {[catch {close $pipe} msg opt]} {
set exitinfo [dict get $opt -errorcode]
} else {
# Successful termination
set exitinfo ""
}
# Stop the waiting in [vwait], below
set ::donepipe $pipe
} else {
puts ""
# Partial read; things will be properly buffered up for now...
}
}
vwait ::donepipe
You can see it in the CMDwith& screenshot mentioned. Is there any workaround I can use to overcome the issue?
Please also see the issue when it is run with the help of exec executable args &.

Strange line buffering behaviour in Tcl 8.6?

EDIT: Original example and alternative solution framework modified for clarity.
Line buffering might behave differently than expected in Tcl 8.6. The following code blocks without producing any output, unless the "chan close" line is uncommented:
set data {one two four}
set stream [open |[list cat -n] r+]
chan configure $stream -buffering line
chan puts $stream "$data\n"
chan puts $stream "\n"
chan flush $stream
#chan close $stream write
set out [chan read $stream]
puts "output: $out"
chan close $stream
So this simplistic solution does not work for interactive I/O, and this might be related to synchronization problems at both ends of the pipe.
Using a channel event structure (e.g., based on http://www.beedub.com/book/2nd/event.doc.html) seems to be preferable:
proc chanReader { pipe } {
    global extState
    while 1 {
        set len [chan gets $pipe line]
        if { $len > 0 } {
            puts "<< $line."
            continue
        } else {
            if { [chan blocked $pipe] } {
                set extState 1
                return
            } elseif { [chan eof $pipe] } {
                set extState 2
                return
            }
        }
    }
}
set data {one two foure}
set timeout 5000

#set stream [open [list | cat -n] r+]
#set stream [open [list | ispell -a] r+]
set stream [open [list | tr a-z A-Z] r+]
#set stream [open [list | fmt -] r+]

chan configure $stream -blocking 0 -buffering line
set extState 0
chan event $stream readable [list chanReader $stream]

foreach word $data {
    puts "> $word\n"
    chan puts $stream "$word\n"
    chan flush $stream
    #chan close $stream write
    set aID [after $timeout {set extState 3}]
    vwait extState
    if { $extState == 1 } {
        # Got regular output.
        after cancel $aID
        puts "Cancel $aID."
        continue
    } elseif { $extState == 2 } {
        puts "External program closed."
        chan close $stream
        exit 2
    } elseif { $extState == 3 } {
        puts "Timeout."
        chan close $stream
        exit 3
    }
}
puts "End of task."
chan close $stream
exit 0
This code fragment works with the "cat -n" and "ispell -a" external programs (commented lines), but still fails with other external programs. For instance it does not work with the "tr a-z A-Z" and "fmt" examples above.
If the line "chan close $stream write" above is uncommented, we receive output from the external program, but this terminates the interaction with it. How to reliably connect (interactively) to these external programs?
I'm guessing that the core issue here is that there are two sources of buffering going on, and Tcl only has control over one of them. But both stem from the fact that virtually all output, when not going to an "interactive" destination (i.e., a terminal), is buffered. There's basically a call in the C standard library that determines this and enables the buffering feature, and Tcl follows that rule too (despite using its own entirely independent I/O library). Doing this massively speeds up non-interactive pipeline processing, but it means that if you're expecting to see every byte of output exactly at the point when the program thinks it is writing it, you're going to be disappointed.
Of course, programs can switch this buffering off if they want. In Tcl, this is done with fconfigure $channel -buffering none (or line for line-oriented buffering). In cat, the -n option makes it do the equivalent (calling setvbuf() in C), and ispell is probably doing the same. But most programs don't. Some instead call fflush() from time to time; that works too, but it's also a minority practice. So with a bidirectional pipeline such as you're using, you can easily force the side where you feed into it from Tcl not to buffer, but you usually can't get the other side to do the same.
There is a workaround: run the subprocess with Expect. That puts a fake terminal between Tcl and the subprocess (instead of a pipe) and tricks it into thinking it is talking direct to the user. But the consequence of this is that you have to substantially rewrite your Tcl program and you gain a dependency on a (very fine!) external package.
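For what it's worth, here is a minimal sketch of that Expect approach (assuming the Expect package is available) applied to the tr example from the question; the pseudo-terminal makes the subprocess's C library line-buffer its output as if it were talking to a user:

package require Expect

# Run the filter under a pseudo-terminal instead of a pipe.
spawn tr a-z A-Z

foreach word {one two foure} {
    send -- "$word\r"
    expect -re {[A-Z]+} {
        # $expect_out(0,string) holds the text matched by the pattern.
        puts "<< $expect_out(0,string)"
    } timeout {
        puts "Timeout."
    }
}

close
wait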

Read socket is blocked

I'm writing a socket utility to let a client communicate with a server. When writing to the socket from the client side, the server receives the data fine. However, when writing to the socket from the server, the client can't read it. When checking fblocked $channel, it is 1. I've tried everything, including adding a newline, ...
Please help.
Below is my code:
proc read_command { sock } {
    variable self
    global connected
    set len [gets $sock line]
    set bl [fblocked $sock]
    puts "Characters Read: $len Fblocked: $bl"
    if {$len < 0} {
        if {$bl} {
            puts "Input is blocked"
        } else {
            set connected 1
            puts "The socket was closed - closing my end"
            close $sock
        }
    } else {
        if {!$bl} {
            puts "Read $len characters: $line"
            catch {uplevel #0 $line} output
            puts "1==>$output<=="
            puts $sock "$output"
            puts $sock "\n"
            flush $sock
        }
    }
}
proc client { host port } {
    variable self
    set s [socket $host $port]
    set self(csock) $s
    set self($s,addr) $host
    set self($s,port) $port
    fconfigure $s -buffering line -blocking 0
    return $s
}
proc prun { sock args } {
    variable self
    set result [list]
    set cmd $args
    set cmd [regsub -all {(^\s*\{)||(\}\s*$)} $cmd ""]
    set cmd [string trimleft $cmd]
    set o1 [eval $cmd]
    #catch {uplevel #0 $cmd} o1
    puts "1_$sock ==> $o1"
    lappend result $o1
    #--------------
    puts $sock $cmd
    flush $sock
    set bl [fblocked $sock]
    set file [read $sock]
    set bl [fblocked $sock]
    puts "Fblocked: $bl"
    puts "Output: $file"
    puts "2_$Comm::self(csock) ==> $file ==> $bl"
    lappend result $file
    return $result
}
Here is how I run it.
I call server in one of the terminals. It will echo the IP address and the port.
Then I call client using the address and port above to get back the client socket.
Then I call prun from the client shell to get back a pair of values: one with the value of the cmd call on the client, and the other the value of the cmd call on the server. Basically I would like to get the pair of values so I can correlate the two sets of data.
Below is the code:
1. On the server shell:
$ server
2. On the client shell:
$ set s [client $addr $port]
3. Call a proc to get the value from the client shell, then send the command to the server to get the value from the server shell, and return that value back to the client:
$ set res [prun $s {set val [get_attribute [get_nets mynet] pin_capacitance_max]}]
You wrote:
puts "2_$Comm::self(csock) ==> $file ==> $bl"
and defined self with variable. Are you working with packages? Maybe you forgot something related to that.
For a test you can just use global, but using arrays makes that a little more complicated.
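To make that concrete, here is a sketch (not code from the question) of the usual pattern: variable only binds to the enclosing namespace, so if these procs live in a Comm namespace the self array must be declared there; otherwise global is the simpler choice.

namespace eval Comm {
    # Declare the array once at namespace level...
    variable self
    array set self {}

    proc client {host port} {
        # ...and bind to it in each proc with [variable].
        variable self
        set s [socket $host $port]
        set self(csock) $s
        fconfigure $s -buffering line -blocking 0
        return $s
    }
}

# Outside the namespace the same array is reachable as Comm::self,
# which is what the $Comm::self(csock) reference in prun relies on.
puts [array names Comm::self]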