Get attributes of an Agent in NS2 - tcl

I actually need to know the attributes of my UDP Agent in my TCL script (to print some values and use them for statistics), and this is my first time with this scripting language. I tried to use the info command, but I couldn't get it to work.
This is my code:
# Set up a UDP connection
set udp [new Agent/UDP]
puts [$udp info class]                ;# works and prints "Agent/UDP"
puts [info class variables Agent/UDP] ;# fails with the error "Agent/UDP does not refer to an object"
I also tried:
puts [info class variables udp]       ;# fails with the same error
puts [info class variables $udp]      ;# error: _o87 does not refer to an object
Still no luck. Can you tell me what I did wrong and how to get the attributes of my Agent/UDP object?

The problem is that there are multiple object systems in play. Agent/UDP is an OTcl class, whereas info class operates on TclOO classes. TclOO (the standard object system from Tcl 8.6 onwards) is quite a lot newer than OTcl and has more features (it is faster, too), but the syntax differs in the details, so we don't expect ns-2 to ever be ported over. (There is a twisted heritage from OTcl to TclOO via XOTcl… but the syntax isn't one of the things that made the transition, as that was drawn more from another object system, [incr Tcl]. Tcl has been "blessed" with a plague of object systems.)
Documentation for OTcl isn't the easiest to find, but this page is helpful, as is the equivalent for instances. In particular, it tells us that we can do introspection via the info instproc (i.e., method):
set udp [new Agent/UDP]
puts [$udp info vars]
puts [$udp info commands]
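From there, a small sketch of pulling out the values as well (attribute names such as packetSize_ are whatever your ns-2 build defines; reading an OTcl instance variable is done with the object's set instproc, which returns the current value when given only the name):
# Continuing with the $udp created above
foreach var [$udp info vars] {
    puts "$var = [$udp set $var]"
}
puts [$udp set packetSize_]   ;# read a single attribute
$udp set packetSize_ 1000     ;# ...or give it a new value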

Related

How to use an external variable in linkage section in COBOL and pass values from it into a new module and write into my new output file

Could someone please tell me why a variable is declared as EXTERNAL in a module, how to use it in other modules through the LINKAGE SECTION, and how to pass its values into new fields so I can write them to a new file?
EXTERNAL items are commonly found in WORKING-STORAGE. These are normally not passed from one program to another via CALL and LINKAGE but shared directly via the COBOL runtime.
Declaring an item as EXTERNAL behaves like "runtime-named global storage": you assign a name and a length to a global piece of memory and can access it anywhere in the same run unit (no direct CALL needed), even in cases like the following:
MAIN
-> CALL B
B: somevar EXTERNAL
-> MOVE 'TEST' TO somevar
-> CANCEL B
-> CALL C
C: somevar EXTERNAL -> now contains 'TEST'
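A minimal sketch of the B and C parts of that picture (program and item names here are hypothetical, and a main program would CALL 'PROGB' and later CALL 'PROGC'): both programs declare the same EXTERNAL item in WORKING-STORAGE with the same name and length, and no LINKAGE SECTION or USING phrase is involved.
      *> PROGB assigns the shared item.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PROGB.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  SOMEVAR    PIC X(4) EXTERNAL.
       PROCEDURE DIVISION.
           MOVE 'TEST' TO SOMEVAR
           GOBACK.
       END PROGRAM PROGB.

      *> PROGC sees the value PROGB stored, without it being passed.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PROGC.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  SOMEVAR    PIC X(4) EXTERNAL.
       PROCEDURE DIVISION.
           DISPLAY 'SOMEVAR = ' SOMEVAR
           GOBACK.
       END PROGRAM PROGC.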
On an IBM Z mainframe running z/OS, the runtime for all High Level Languages (HLLs) is called Language Environment (LE). Decades ago, each HLL had its own runtime, which caused problems when they were mixed into the same run unit; starting in the early 1990s, IBM switched all HLLs to LE for their runtime.
LE has the concept of an enclave. Part of the text at that link says an enclave is the equivalent of a run unit in COBOL.
Your question is tagged CICS, and sometimes behavior is different when running in that environment. Quoting from that link...
Under CICS the execution of a CICS LINK command creates what Language Environment calls a Child Enclave. A new environment is initialized and the child enclave gets its runtime options. These runtime options are independent of those options that existed in the creating enclave.
[...]
Something similar happens when a CICS XCTL command is executed. In this case we do not get a child enclave, but the existing enclave is terminated and then reinitialized with the runtime options determined for the new program. The same performance considerations apply.
So, as @SimonSobich noted, if you use CALLs to invoke your subroutines when running in CICS, EXTERNAL data is global to the run unit. But if you use EXEC CICS XCTL to invoke your subroutines, you may see different behavior and may have to design your application differently.

Dealing with invalid filehandles (and maybe other invalid objects too)

As indicated by Tom Browder in this issue, the $*ARGFILES dynamic variable might contain invalid filehandles if any of the files mentioned in the command line is not present.
for $*ARGFILES.handles -> $fh {
say $fh;
}
will fail with an X::AdHoc exception (this should probably be improved too):
Failed to open file /home/jmerelo/Code/perl6/my-perl6-examples/args/no-file: No such file or directory
The problem occurs as soon as the invalid filehandle is used for anything. Would there be a way of checking whether the filehandle is valid before incurring an exception?
You can check if something is a Failure by checking for truthiness or definedness without the Failure throwing:
for $*ARGFILES.handles -> $fh {
say $fh if $fh; # check truthiness
.say with $fh; # check definedness + topicalization
}
If you still want to throw the Exception that the Failure encompasses, then you can just .throw it.
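If .handles does hand back Failures rather than throwing outright (see the caveats in the next answer), a minimal sketch of that .throw route could look like this; the .lines call is just an assumed way of consuming each good handle:
for $*ARGFILES.handles -> $fh {
    without $fh {        # $fh is a Failure, i.e. undefined
        .throw;          # rethrow the exception it wraps, right here
    }
    .say for $fh.lines;  # otherwise use the handle normally
}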
TL;DR I thought Liz had it nailed, but it seems like there's a bug, or perhaps the Ugh below.
A bug?
It looks like whenever the IO::CatHandle class's .handles method reaches a handle that ought by rights to produce a Failure (delaying any exception throw), it instead immediately throws an exception (perhaps the very one that would work if it were just delayed, or perhaps something broken).
This seems either wrong or very wrong.
Ugh
See the exchange between Zoffix and Brad Gilbert and Zoffix's answer to the question How should I handle Perl 6 $*ARGFILES that can't be read by lines()?
Also:
https://github.com/rakudo/rakudo/issues/1313
https://github.com/rakudo/rakudo/search?q=argfiles&type=Issues
https://github.com/rakudo/rakudo/search?q=cathandle&type=Issues
A potential workaround is currently another bug?
In discussing "Implement handler for failed open on IO::CatHandle" Zoffix++ closed it with this code as a solution:
.say for ($*ARGFILES but role {
    method next-handle {
        loop { try return self.IO::CatHandle::next-handle }
    }
})
I see that tbrowder has reopened this issue as part of the related issue this SO is about saying:
If this works, it would at least be a usable example for the $*ARGFILES var in the docs.
But when I run it in 6.d (and I see similar results for 6.c), with or without valid input, I get:
say not yet implemented
(similar if I .put or whatever).
This is nuts and suggests something gutsy is getting messed up.
I've searched rt and gh/rakudo issues for "not yet implemented" and see no relevant matches.
Another workaround?
Zoffix clearly intended their code as a permanent solution, not merely a workaround. But it unfortunately doesn't seem to work at all for now.
The best I've come up with so far:
try {$*ARGFILES} andthen say $_   # $_ is a defined ArgFiles instance
                 orelse  say $!;  # $! is an error encountered inside the `try`
Perhaps this works as a black-and-white "either it all works or none of it does" solution. (Though I'm not convinced it's even that.)
What the doc has to say about $*ARGFILES
The doc for $*ARGFILES says it is an instance of
IO::ArgFiles which is doc'd as a class which
exists for backwards compatibility reasons and provides no methods.
And
All the functionality is inherited from
IO::CatHandle which is subtitled as
Use multiple IO handles as if they were one
and doc'd as a class that is IO::Handle (i.e., it inherits from IO::Handle), which is subtitled as
Opened file or stream
and doc'd as a class that doesn't inherit from any other class (so defaults to inheriting from Any) or do any role.
So, $*ARGFILES is (exactly functionally the same as) an IO::CatHandle object, which is (a superset of the functionality of) an IO::Handle object; specifically:
The IO::CatHandle class provides a means to create an IO::Handle that seamlessly gathers input from multiple IO::Handle and IO::Pipe sources. All of IO::Handle's methods are implemented, and while attempts to use write methods will (currently) throw an exception, an IO::CatHandle is usable anywhere a read-only IO::Handle can be used.
Exploring the code for IO::CatHandle
(To be filled in later?)

TCL Destructor is not called on window close

I have a class DataDialog, which contains a destructor like
destructor {
puts "DataDialog has been destructed"
#further code
}
If I close the application via the X window button, this destructor is not called. If I close it via File->Close, it is called.
On the toplevel I have the following
wm protocol . WM_DELETE_WINDOW {
Exit 0
}
How can I change this behaviour to call all destructors (or at least the one of my class DataDialog)?
How about
wm protocol . WM_DELETE_WINDOW {
DataDialog destroy
Exit 0
}
If you call exit (or if you delete the interpreter), Tcl does not guarantee to call destructors. That's because tearing down everything in memory can be surprisingly expensive. Critical resources typically have extra exit handlers registered at the C level to ensure that they get cleaned up correctly, but they are very much the exception; the only ones you likely use on a regular basis are channels (which are flushed on exit). There isn't any Tcl-level mechanism for registering such handlers; they are usually called at points where it is no longer safe to call Tcl commands.
However, the default behaviour for handling cooperative window closure is effectively to send a <Destroy> event to the window. Those aren't entirely interceptable (the window will go away), but you can bind to them to find out when they occur. Be aware of one quirk though: toplevel windows also receive all the events of their children (though they aren't destroyed by <Destroy> events passing through from a child, only by one sent to them directly). Check that %W actually refers to the window you think you're listening to before taking special action.
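For example, a sketch that reuses the DataDialog cleanup from the snippet above (how exactly an instance is destroyed depends on your object system):
bind . <Destroy> {
    # The toplevel also sees its children's <Destroy> events,
    # so only act when the toplevel itself is being torn down.
    if {"%W" eq "."} {
        catch {DataDialog destroy}
    }
}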

ClojureScript Eval. How to use libraries included in the calling code

I have a Clojurescript program running in the browser.
It imports a number of libraries, and then I want to allow the user to enter some small clojurescript "glue-code" that calls those libraries.
I can see (from https://cljs.github.io/api/cljs.js/eval) that you call eval with four arguments, the first being the state of the environment, which is an atom. But can I actually turn my current environment with all the functions I've required from elsewhere, into an appropriate argument to eval?
Update:
I thought that maybe I could set the namespace for the eval using the :ns option of the third (opts-map) argument. I set it to the namespace of my application:
:ns "fig-pat.core"
But it made no difference.
Looking at the console, it's definitely the case that it's trying to do the evaluation, but it's complaining that names referenced in the eval-ed code are NOT recognised:
WARNING: Use of undeclared Var /square
for example. (square is a function I'm requiring; it's visible in the application itself, i.e. the fig-pat.core namespace.)
I then get:
SyntaxError: expected expression, got '.'
which I'm assuming is the failure of the eval-ed expression as a whole.
Update 2:
I'm guessing this problem might actually be related to: How can I get the ClojureScript namespace I am in from within a ClojureScript program?
(println *ns*)
is just printing nil. So maybe ClojureScript can't see its own namespace, and therefore the :ns option to eval doesn't work?
Calling eval inside a ClojureScript program is part of what is called "self-hosted ClojureScript".
In self-hosted ClojureScript, namespaces are not available unless you implement a resolve policy. That means you have to let the browser know how to resolve the namespace, e.g. by loading a cljs file from a CDN.
It's not so trivial to implement namespace resolving properly.
This is explained, in a somewhat cryptic way, in the docstring of load-fn from the cljs.js namespace.
Several tools support namespace resolving in self-hosted cljs running in the browser, e.g. Klipse and crepl.
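As a rough illustration of that resolve policy, here is a sketch using cljs.js/eval-str with a custom :load hook. The fig-pat.core namespace and square come from the question; the inline source map is an assumption standing in for however you actually fetch sources (a CDN, strings bundled at build time, etc.), and it assumes the app is built so that self-hosted compilation is available.
(ns fig-pat.eval-demo
  (:require [cljs.js :as cljs]))

;; Hypothetical source registry: namespace symbol -> ClojureScript source string.
(def sources
  {'fig-pat.core "(ns fig-pat.core) (defn square [x] (* x x))"})

;; :load hook: called whenever the evaluated code requires a namespace the
;; compiler doesn't know about. Answer with {:lang :clj :source ...} or nil.
(defn load-fn [{:keys [name macros] :as request} cb]
  (if-let [src (get sources name)]
    (cb {:lang :clj :source src})
    (cb nil)))

(defonce compile-state (cljs/empty-state))

(defn eval-glue [glue-code]
  (cljs/eval-str compile-state
                 glue-code
                 'user-glue
                 {:eval cljs/js-eval
                  :load load-fn}
                 (fn [{:keys [error value]}]
                   (if error
                     (js/console.error (str error))
                     (js/console.log "result:" (pr-str value))))))

;; The glue code requires fig-pat.core through an ns form, which triggers load-fn:
(eval-glue "(ns my.glue (:require [fig-pat.core :as fp])) (fp/square 3)")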

Access higher level from script called by fileevent

I'm trying to draw on a canvas that is in the toplevel of my Tcl/Tk script, but from inside a callback set up by fileevent, like this:
canvas .myCanvas {}
proc plot_Data { myC inp } { $myC create rectangle {} }
fileevent $inp readable [list plot_Data .myCanvas $inp ]
pack .myCanvas
I have found out that the script called by fileevent (plot_Data) lives in a different space:
The script for a file event is executed at global level (outside the context of any Tcl procedure) in the interpreter in which the fileevent command was invoked.
I cannot make the two meet. I have definitely narrowed it down to this: plot_Data just can't access .myCanvas. Question: how can the fileevent script plot on the canvas?
The goal of this is live plotting, by the way. $inp is a pipe to a C-program that reads data from a measurement device. It is imho rightly configured with fconfigure $inp -blocking 0 -buffering none.
Callback scripts (except for traces, which you aren't using) are always called from the context of the global namespace. They cannot see any stack frames above them. This is because they are called at times that aren't closely controlled; there's no telling what the actual stack would be, so it is forced into a known state.
However, canvases (and other widgets) have names in the global namespace as well. Your callbacks most certainly can access them, provided the widget has not been destroyed, and might indeed be working; you just happen to have given the rectangle an empty list of coordinates, which is not legal for a canvas rectangle item (it needs four coordinates).
Since you are using non-blocking I/O, you also need to be aware that gets may return an empty string when it reads an incomplete line. Use fblocked to determine whether a partial read happened; if it did, the data is in a buffer on the Tcl side waiting for the rest of the line to turn up, and it is usually best to just go back to sleep and wait for the next fileevent to fire.
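For example, a sketch of a readable handler along those lines (the x0/y0/x1/y1 line format is just an assumption about what the C program writes):
proc plot_Data {myC inp} {
    if {[gets $inp line] < 0} {
        if {[fblocked $inp]} {
            return      ;# partial line buffered on the Tcl side; wait for the next event
        }
        close $inp      ;# EOF: closing the channel also removes the fileevent handler
        return
    }
    # A complete line arrived; draw with real coordinates this time
    lassign $line x0 y0 x1 y1
    $myC create rectangle $x0 $y0 $x1 $y1 -outline red
}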
A problem that might bite you overall is if the C program is in fully-buffered mode; that is the default when C writes to a pipe rather than a terminal. Setting the buffering on the Tcl side won't affect it; you need to use setvbuf on the C side, or insert regular fflush calls, or use Expect (which pretends to be an interactive destination, though at quite a cost in complexity), or even unbuffer (if you can find a copy).