Function definition and source ordering of Tcl files - tcl

I have multiple tcl files getting sourced
source fg_lib.tcl
source stc_lib.tcl
In stc_lib.tcl, there is a call to a function that is defined only in fg_lib.tcl. Can I assume that, since fg_lib.tcl is sourced first, the function will automatically be usable from stc_lib.tcl?
One more question: if a certain function is defined in both tcl files, which version will be executed, given the source ordering above? I think the one defined in stc_lib.tcl will be, but I would still like to confirm.
Thanks,

The source command acts, immediately, as if the content of the file were in the script at the point where the source appears (except for the difference in what info script returns). If both scripts define a procedure foobar, it is the later script (stc_lib.tcl in your case) that provides the version that gets used.
However, if the scripts just define procedures that don't have overlapping names and don't otherwise call the commands they create, the order in which the sources are placed is typically unimportant. The proc command just creates a command; the body of the procedure isn't evaluated until the procedure is called. (This sounds obvious, but it really is exactly like that. The code is exactly what it says it is, and Tcl is all about immediate operational semantics and code that is registered to be run in response to some event.)
Bear in mind that if you are having problems with sourced files smashing each other, it's probably best to look into putting the code into namespaces, or otherwise to find a way to stop entangling things; confusingly interdependent source files are hard to maintain.
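To make both behaviours concrete, here is a small sketch you can run from a shell (it assumes tclsh is on your PATH; the file contents and proc names are invented for illustration):

```shell
# Two hypothetical library files: stc_lib.tcl calls a proc that only
# fg_lib.tcl defines, and both files define their own version of "greet".
mkdir -p /tmp/tcl_demo
cat > /tmp/tcl_demo/fg_lib.tcl <<'EOF'
proc greet {} { return "fg_lib" }
proc helper {} { return "defined only in fg_lib" }
EOF
cat > /tmp/tcl_demo/stc_lib.tcl <<'EOF'
proc greet {} { return "stc_lib" }
proc use_helper {} { return [helper] }
EOF
tclsh <<'EOF'
source /tmp/tcl_demo/fg_lib.tcl
source /tmp/tcl_demo/stc_lib.tcl
puts [greet]        ;# stc_lib -- the later definition wins
puts [use_helper]   ;# cross-file call works: helper exists by the time it is called
EOF
```

Note that use_helper would work even if the source order were reversed, because the body of a proc is only evaluated when it is called.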

Related

Racket: Using "csv-reading" package within a function

I am using csv-reading to read a CSV file and convert it into a list.
When I call at the top level, like this
> (call-with-input-file "to-be-asked.csv" csv->list)
I am able to read csv file and convert it into list of lists.
However, if I call the same thing within a function, I am getting the error.
> (read-from-file "to-be-asked.csv")
csv->list: undefined;
cannot reference an identifier before its definition
in module: top-level
I am not getting what's going wrong. I have added (require csv-reading) before the function call.
My read-from-file code is:
(define (read-from-file file-name)
  (call-with-input-file file-name csv->list))
EDIT
I am using Racket within Emacs, using Geiser. When I (exit) the REPL buffer and type C-c C-z, it shows the error.
When I kill the buffer and start Geiser again, it works properly.
Is this a problem with Geiser and Emacs?
You've hit the classic problem with what I'll call resident programming environments (I don't know the right word for them). A resident programming environment is one where you talk to a running instance of the language, successively modifying its state.
The problem with these environments is that the state of the running language instance is more-or-less opaque and in particular it can get out of sync with the state you can see in files or buffers. That means that it can become obscure why something is happening and, worse, you can get into states where the results you get from the resident environment are essentially unreproducible later. This matters a lot for things like Jupyter notebooks where people doing scientific work can end up with results which they can't reproduce because the notebook was evaluated out of sequence or some of it was not evaluated at all.
On the other hand, these environments are an enormous joy to use which is why I use them. That outweighs the problems for me: you just have to be careful you can recreate the session and be willing to do so occasionally.
In this case you probably had something like this in the buffer/file:
(require csv-reading)
(define (read-from-file file-name)
  (call-with-input-file file-name csv->list))
But you either failed to evaluate the first form at all, or (worse!) you evaluated the forms out of order. If you did this in Common Lisp or any traditional Lisp, it would all be fine: evaluating the first form would make the second form work. But Racket decides once and for all what csv->list means (or does not mean) at the point where read-from-file is defined, and evaluating the require later won't fix that. You then end up in the mysterious situation where the function you defined does not work, but a newly defined function which uses csv->list will work. This is because Racket has much cleverer semantics than CL, but also semantics not designed for this kind of interactive development as far as I can tell (certainly DrRacket strongly discourages it).

How to find dependent functions in Octave

I would like to identify all functions needed to run a specific function in octave. I need this to deploy an application written in Octave.
While Matlab offers some tools to analyse a function on its dependencies, I could not find something similar for Octave.
Trying inmem as recommended in matlab does not produce the expected result:
> inmem
warning: the 'inmem' function is not yet implemented in Octave
Is there any other solution to this problem available?
First, let me point out that from your description, the matlab tool you're after is not inmem, but deprpt.
Secondly, while Octave does not have a built-in tool for this, there are a number of ways to do it yourself. I have not tried these personally, so YMMV.
1) Run your function while using the profiler, then inspect the functions used during the running process. As suggested in the octave archives: https://lists.gnu.org/archive/html/help-octave/2015-10/msg00135.html
2) There are some external tools on github that attempt just this, e.g. :
https://git.osuv.de/m/about
https://github.com/KaeroDot/mDepGen
3) If I had to attack this myself, I would approach the problem as follows:
Parse and tokenise the m-file in question (possibly also using checks like isvarname to filter out useless tokens before moving to the next step).
For each token x, wrap a "help(x)" call in a try/catch block.
Inspect the error, which will be one of:
"Invalid input" (i.e. the token was not a function)
"Not found" (i.e. not a valid identifier, etc.)
"Not documented" (the function exists but has no help string)
No error, in which case you have stumbled upon a valid function call within the file.
To further check whether these are built-in functions or part of a loaded package, you could parse the first line of the "help" output, which typically tells you where the function came from.
If the context for this is that you're trying to check if a matlab script will work on octave, one complication will be that typically packages that will be required on octave are not present in matlab code. Then again, if this is your goal, you should probably be using deprpt from matlab directly instead.
Good luck.
PS. I might add that the above is for creating a general tool. In terms of identifying dependencies in your own code, good software engineering practices go a long way towards providing maintainable code and easily resolving dependency problems for your users. E.g.:
-- clearly identify required packages (which, unlike Matlab, Octave does anyway by requiring such packages to be visibly loaded in code)
-- similarly, for custom dependencies, consider wrapping and providing these as packages / namespaces, rather than scattered files
-- if packaging dependencies isn't possible, you can create tests / checks in your file that throw errors if necessary files are missing, or at least mention such dependencies in comments in the file itself.
According to the Octave Compatibility FAQ:
Q. inmem
A. who -functions
You can use who -functions. (Note: I have not tried it yet.)

How do I find where a function is declared in Tcl?

I think this is more of a Tcl configuration question rather than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous instances where there are function calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks when I try to run the scripts there, but within my simulation software, they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software (you have checked for this).
Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and whether the simulation software sets TCLLIBPATH; this is a list of directories to search for Tcl packages, and you will need to search the packages that are located outside of the main source tree. Another possibility is that the locations are specified in a pkgIndex.tcl file, so check any pkgIndex.tcl files and look for locations outside the main source tree.
Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code, used to build the shared library, that can be searched.
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories where packages are loaded from. But an "unknown" command handler is also a possibility.
Edit:
One more possibility I forgot: check whether your software sets the auto_path variable, and check any directories added to auto_path for other packages.
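To illustrate the "unknown command handler" possibility, here is a minimal sketch of how such an INCLUDE could be implemented (purely hypothetical -- your simulation tool may do something quite different; it assumes tclsh is available):

```shell
tclsh <<'EOF'
# Intercept INCLUDE, and chain everything else to Tcl's standard handler.
rename unknown _original_unknown
proc unknown {cmd args} {
    if {$cmd eq "INCLUDE"} {
        # Resolve the fake INCLUDE as a plain source in the caller's scope.
        uplevel 1 [list source [lindex $args 0]]
        return
    }
    uplevel 1 [list _original_unknown $cmd {*}$args]
}
# Demonstrate: write a file and pull it in via INCLUDE.
set f [open /tmp/included.tcl w]
puts $f {proc fromInclude {} { return "included!" }}
close $f
INCLUDE /tmp/included.tcl
puts [fromInclude]
EOF
```

This is why grep-ing for "proc INCLUDE" can come up empty: the command may never be defined at all, only handled dynamically.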
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But even that might not help! It might not be a procedure but instead a general command, which could be implemented in C rather than as a procedure, or it could be defined in a packaged application archive (which is usually awkward to look inside). There are other types of script-implemented commands as well, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
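A short illustration of that introspection (assuming tclsh is available; the add proc is made up for the example):

```shell
tclsh <<'EOF'
proc add {a {b 1}} { return [expr {$a + $b}] }
puts [info procs add]    ;# add -- script-level procedures show up here
puts [info procs exit]   ;# empty -- exit is a C-implemented command, not a proc
puts [info args add]     ;# a b
puts [info body add]     ;# the source text of the procedure body
info default add b defval
puts $defval             ;# 1 -- the default value of argument b
EOF
```

If info procs comes back empty for your mystery command but the command still runs, that is a strong hint it is a C-implemented command or handled via unknown.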

How should I manage too many TCL procedures?

My team and I have been working on a multi-tool flow for a while now.
We keep adding new procedure in either a same file or creating new file in the same directory. There are also a lot of nested procedures; one calls others.
The number of procedures will only keep growing, and the flow involves at least 10 people who love to do their own things.
My question is, how would we go about managing all these procedures in a tidy manner?
I'll assume you follow good practices with general software engineering (keeping files in source control, etc.), as without those you're stuck anyway.
Tcl doesn't really support nested procs; you can call proc from inside another procedure, but it doesn't do any kind of scoping.
You should be thinking in terms of dividing up your code into pieces of “coherent API”. What exactly that is depends on your application, but it is only rarely a single procedure; a particular dialog box or screen is a much more useful unit. That might end up as one procedure, but it's often several related ones.
Once you've identified these coherent pieces, they form the contents of what you put in a file, typically one coherent piece per file, though if the file is rather long when you do that, using a group of files instead (probably in their own directory) makes a lot of sense. At the same time, you probably should make the variables and commands defined by each coherent piece be all in a Tcl namespace, which isolates the piece a little bit from the rest of the world, mostly to stop code from treading on the toes of other code.
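As a minimal sketch of such a coherent piece wrapped in a namespace (all names here are hypothetical), runnable via tclsh:

```shell
tclsh <<'EOF'
# A small self-contained piece: its state and commands live under ::report,
# so code in other files cannot tread on them by accident.
namespace eval ::report {
    namespace export render
    variable count 0
    proc render {title} {
        variable count
        incr count
        return "report #$count: $title"
    }
}
puts [::report::render "summary"]   ;# report #1: summary
puts [::report::render "details"]   ;# report #2: details
EOF
```

The namespace variable count is invisible to the rest of the program unless someone deliberately uses the qualified name ::report::count.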
Now that you've done that, and if you've got what you think is a stable API to your coherent piece, you can make that piece into a Tcl package. That's just done by giving it a higher-level name and version number; you put this in one of your files in the coherent piece:
package provide YourPackageName 1.0
and then (usually in the same directory) you make a pkgIndex.tcl file with contents like this:
package ifneeded YourPackageName 1.0 [list source [file join $dir yourFilename.tcl]]
That is, it says that to get YourPackageName version 1.0 in a Tcl interpreter, you source the file $dir/yourFilename.tcl; the $dir is a convenience in package index files that refers to the directory containing the current package index file. Then the rest of your code can stop thinking about “reading the right files”, and start thinking in terms of “use this defined API”. (This is great if you then choose to start implementing the package using a mixture of Tcl and C or even pure C code; a change to the index file to use load of the right thing and everything else can be oblivious.) It does that by doing:
package require YourPackageName
# Give the version if necessary, of course
Then write yourself some documentation (even if it is just listing the entry point commands into the package) and tests, and you've migrated to being a very well behaved piece of code indeed.
There are some additional techniques that can help you in some cases with making coherent pieces. In particular, if you're using an OO system like TclOO, iTcl, or XOTcl, each class is almost certainly a candidate coherent piece. Also, it's sometimes better to put several related coherent pieces together in a package. However, there's absolutely no hard and fast rule on that.
Finally, Tcl uses a bunch of techniques to find packages, but they mostly come down to looking using the auto_path global variable. In your application main script, it's best (if the rest of your code is mostly in the library directory) to use something like this as one of the first steps:
lappend auto_path [file join [file dirname [info script]] library]
You can also gather the contents of many pkgIndex.tcl files in one place, provided you take into account any pathname changes needed from moving things around.
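Putting the pieces above together, a complete layout might look like the following shell sketch (assuming tclsh is installed; the package, namespace, and file names are all hypothetical):

```shell
# library/yourpkg/ holds one coherent piece plus its package index.
mkdir -p /tmp/app/library/yourpkg
cat > /tmp/app/library/yourpkg/yourFilename.tcl <<'EOF'
package provide YourPackageName 1.0
namespace eval ::yourpkg {
    proc hello {} { return "hello from YourPackageName" }
}
EOF
cat > /tmp/app/library/yourpkg/pkgIndex.tcl <<'EOF'
package ifneeded YourPackageName 1.0 [list source [file join $dir yourFilename.tcl]]
EOF
# The application's main script only needs the auto_path hook and the require.
tclsh <<'EOF'
lappend auto_path /tmp/app/library
package require YourPackageName
puts [::yourpkg::hello]
EOF
```

Note that Tcl searches each auto_path directory and its immediate subdirectories for pkgIndex.tcl files, which is why appending the library directory is enough.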
Regarding Tcl, you can look into creating packages and namespaces. Let me know if that helps; I can provide more details.

SSIS 2012 - script task won't run second time (unless debugging)

I'm getting a really tough error in SSIS 2012.
I am just running in SSDT.
I have a script task inside a For...Each block.
It runs fine the first time it is reached.
The second time it is reached, I get a generic "Exception has been thrown by the target of an invocation" error, attributed to the script, at the script task.
It is a small script, all inside Main(), and with a Try...Catch block.
I am not hitting the Catch, which adds custom text.
It seems like it is behaving as if it never enters the script... except if I actually set a breakpoint in it, in which case it runs fine, whether I step line-by-line or just hit F5.
I know this isn't terribly specific, but I'm hoping someone has seen this.
Has anyone seen anything like this before?
As mentioned, I have tried debugging (obviously), but then I don't get any error.
I have tried changing my variable access from the basic to through VariablesDispenser.LockOneForRead, in case it is something with variables that happens before Main().
I think I got all the places the variables are used in the loop, but that didn't help.
Because this was so killer, I'm going ahead and answering it.
It was actually an un-"declared" variable, but in my Catch block.
Copy-paste error :/
I was using a variable as Dts.Variables["TaskName"] in the Catch block, but I had not selected it in the Script Task window.
I have no idea why it didn't give me the specific "not found in collection" error.
Just ran into that and it was a bear to figure out.
What it came down to was that I had a static variable (actually a singleton class) defined. Evidently, SSIS does NOT re-initialize a program on second and subsequent invocations, but holds the image and simply re-launches at its entry point.
My Singleton class (and I've verified this for several static variables now) does NOT get re-initialized; it still exists. The issue was that it was created with the Dts variable set that existed on the first invocation of the script. Since its "self" value was not null, it never re-instantiated.
When I recognized what was happening, it was of course easy to fix, but one gets used to a stand-alone environment where every program instance has its static values null or set to a static initial value. We automatically presume that a new "run" of the program will likewise have its global spaces "clean"... in point of fact, I'm fairly sure that is what I read as part of the C# "contract": that I'd never need to worry about historical cruft in memory spaces for variables.
Well it turns out that that "contract" is about as good as any Microsoft will make you sign.
It's actually a mixed blessing. Knowing that it happens, I can use it to save a lot of overhead in scripts invoked in loops... but as it's not well documented (or perhaps undocumented), I'll need to be careful to have work-arounds and default-loading tests in case it turns out not to be true in some future release or version.
(Be gentle in your criticism... I'm new to SSIS. Not so new to program paradigms. CICS mainframe programs would re-init global spaces unless you did things in the linkage to signal it not to ... if you're going to re-invent wheels at least look at old wheels).
-- TWZ