It will be easier to explain with an example from C. When you build an application in C (or C++, etc.) you can build a "release" version that does not include some code that you would have in a non-release build, e.g. test code.
I'm trying to do something similar in Tcl. We have some tracing functions that I would like to be empty shells in a "release" build.
So I thought I could use two different packages to do that: one for release and one for designer mode, so that designer mode could use a "define" or something similar.
I know I could also "replace" each function using "rename" and "alias", but my application starts many threads (and there is one interpreter per thread), so I would have to replace multiple functions in multiple interpreters, which makes things more complicated, I think. Using two different packages instead seemed like a "one shot solves them all" kind of solution.
Thanks
One of the simplest techniques is to put some extra magic in the pkgIndex.tcl script for the package. Usually it looks something like (cookiejar is a little package I wrote that's in 8.7):
package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar.tcl]]
But if you want to make things more conditional, you can instead do something like this:
if {[info exists ::developermode]} {
    # Or however you want to detect it!
    package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar-dev.tcl]]
} else {
    package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar-release.tcl]]
}
You can then have two implementations, one for development and one for release; in your case, the release version should probably just provide empty stand-in procedures that expose the same API but do nothing. (You could instead provide no commands at all, or make the two versions inconsistent, but that's likely to cause code that works in development to fail in production.)
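For instance, a minimal sketch of what the release file could contain (the trace command names here are just hypothetical stand-ins for whatever your real tracing API provides):

# cookiejar-release.tcl (sketch): same API as the dev version, all no-ops
package provide cookiejar 0.1
namespace eval ::cookiejar {
    namespace export traceMsg traceEnter
}
# Empty {args} procedures; calls to them get compiled away (see below)
proc ::cookiejar::traceMsg {args} {}
proc ::cookiejar::traceEnter {args} {}

The development file would define the same commands with real bodies.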
If it helps, note that if you define a procedure like this:
proc someCommand {args} {}
(that is, it takes just args as its formal parameter and has an empty body), then Tcl will remove calls to it entirely from the compiled bytecode of the procedures that use it. This is probably going to be very useful to you; it lets your production code refer to your debugging helpers, yet pay no (meaningful) cost for doing so.
I think this is more of a Tcl configuration question than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous instances where there are function calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks at it when I try to run the scripts directly, but within my simulation software, they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
1. Within your software (you have checked for this).
2. Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and also whether the simulation software itself sets TCLLIBPATH; it holds a list of directories to search for Tcl packages, and you will need to search the packages located outside the main source tree. Another possibility is that the locations are specified in a pkgIndex.tcl file, so check any pkgIndex.tcl files and look for locations outside the main source tree.
3. Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
4. Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code used to build the shared library that can be searched.

Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories where packages are loaded from. But an "unknown" command handler is also a possibility.
Edit:
One more possibility I forgot: check whether your software sets the auto_path variable, and check any directories added to auto_path for other packages.
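A few standard introspection commands can help with those checks from inside the simulation tool's own Tcl shell; a rough sketch:

# Is INCLUDE visible as a command or procedure at all?
puts [info commands INCLUDE]      ;# empty string: not a command (yet)
puts [info procs INCLUDE]         ;# non-empty only if it is a Tcl procedure
catch {puts [info body INCLUDE]}  ;# its body, if it is a procedure

# Where does package/auto-loading look?
puts $::auto_path
if {[info exists ::env(TCLLIBPATH)]} { puts $::env(TCLLIBPATH) }

# Is an unknown-command handler creating INCLUDE on the fly?
catch {puts [info body ::unknown]}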
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but rather a general command, which could be implemented in C rather than as a procedure, or it could be defined inside a packaged application archive (which are usually awkward to look inside). There are also other types of script-implemented command, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
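For example, the introspection on the INCLUDE command from the earlier question would look like this (the info cmdtype line only works on 8.7 or later):

if {[llength [info procs INCLUDE]]} {
    puts "args: [info args INCLUDE]"
    puts "body: [info body INCLUDE]"
} else {
    puts "INCLUDE is not a procedure (maybe a C command or an alias)"
}
catch {puts "type: [info cmdtype INCLUDE]"}   ;# Tcl 8.7+ only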
My team and I have been working on a multi-tool flow for a while now.
We keep adding new procedures, either to an existing file or in new files in the same directory. There are also a lot of nested procedures; one calls others.
The number of procedures will only keep growing, and the flow involves at least 10 people who love to do things their own way.
My question is, how would we go about managing all these procedures in a tidy manner?
We assume you follow good practices with general software engineering (keeping files in source control, etc.) as without those you're stuck anyway.
Tcl doesn't really support nested procs; you can call proc from inside another procedure, but it doesn't do any kind of scoping.
You should be thinking in terms of dividing up your code into pieces of “coherent API”. What exactly that is depends on your application, but it is only rarely a single procedure; a particular dialog box or screen is a much more useful unit. That might end up as one procedure, but it's often several related ones.
Once you've identified these coherent pieces, they form the contents of what you put in a file, typically one coherent piece per file, though if the file is rather long when you do that, using a group of files instead (probably in their own directory) makes a lot of sense. At the same time, you probably should make the variables and commands defined by each coherent piece be all in a Tcl namespace, which isolates the piece a little bit from the rest of the world, mostly to stop code from treading on the toes of other code.
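A minimal sketch of one such coherent piece, with made-up names for a login dialog, might look like this:

# logindialog.tcl -- one coherent piece, isolated in its own namespace
namespace eval ::logindialog {
    namespace export show
    variable lastUser ""   ;# namespace variable, not a global
}

proc ::logindialog::show {parent} {
    variable lastUser
    # ... build and display the dialog, remember the user name ...
}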
Now that you've done that, and if you've got what you think is a stable API to your coherent piece, you can make that piece into a Tcl package. That's just done by giving it a higher-level name and version number; you put this in one of your files in the coherent piece:
package provide YourPackageName 1.0
and then (usually in the same directory) you make a pkgIndex.tcl file with contents like this:
package ifneeded YourPackageName 1.0 [list source [file join $dir yourFilename.tcl]]
That is, it says that to get YourPackageName version 1.0 in a Tcl interpreter, you source the file $dir/yourFilename.tcl; the $dir is a convenience in package index files that refers to the directory containing the current package index file. Then the rest of your code can stop thinking about “reading the right files”, and start thinking in terms of “use this defined API”. (This is great if you then choose to start implementing the package using a mixture of Tcl and C or even pure C code; a change to the index file to use load of the right thing and everything else can be oblivious.) It does that by doing:
package require YourPackageName
# Give the version if necessary, of course
Then write yourself some documentation (even if it is just listing the entry point commands into the package) and tests, and you've migrated to being a very well behaved piece of code indeed.
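A small test file using the standard tcltest package could start like this (the entry-point command is, again, just a made-up name):

# yourPackage.test -- exercises the public API of the package
package require tcltest
namespace import ::tcltest::*
package require YourPackageName 1.0

test yourpackage-1.1 {basic call works} -body {
    ::yourpackage::someEntryPoint 42
} -result {expected value}

cleanupTests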
There are some additional techniques that can help you in some cases with making coherent pieces. In particular, if you're using an OO system like TclOO, iTcl, or XOTcl, each class is almost certainly a candidate coherent piece. Also, it's sometimes better to put several related coherent pieces together in a package. However, there's absolutely no hard and fast rule on that.
Finally, Tcl uses a bunch of techniques to find packages, but they mostly come down to looking using the auto_path global variable. In your application main script, it's best (if the rest of your code is mostly in the library directory) to use something like this as one of the first steps:
lappend auto_path [file join [file dirname [info script]] library]
You can also gather the contents of many pkgIndex.tcl files in one place, provided you take into account any pathname changes needed from moving things around.
Regarding Tcl, you can look at creating packages and namespaces. Let me know if that helps; more details can be provided.
I am new to Tcl and am trying to extend one of the existing packages.
package provide trial 1.0
namespace eval ::trial {
    namespace export create delete
}
proc ::trial::create { arg1 arg2 } {
    ....
}
proc ::trial::delete { arg1 } {
    ....
}
I want to write package trial 2.0 which adds one more proc, status. How can I do this? And how can I overload the create proc and call the version 1.0 create proc from it?
Thanks in advance.
There needs to be at most one call to package provide for the named package per interpreter. It's possible to make a pkgIndex.tcl that describes how to provide multiple versions of the same package, but it isn't a common thing to do. Without that, you've got the problem that you can't really safely refer to the implementation of another version of the package as you don't know that it is going to be installed in exactly the same place.
Instead, it's usual to just copy the code and only then modify. Trying to avoid duplicating one or two fairly small files is typically more trouble than it is worth!
As a point of order, just adding another command would usually be reason to go from 1.0 to 1.1, not to 2.0, since code that expects only the old interface would most likely work fine with the updated version. But that does depend on whether the addition is semantically compatible, and in general it's hard to make code work that out for you, as it actually depends on the call orchestration pattern, which is potentially non-trivial to compute (though often easy in easy cases).
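So, as a sketch of the copy-and-modify approach (assuming you have pasted the 1.0 definitions into the new file so it stands alone), the 1.1 version could rename the original create, wrap it, and add status:

package provide trial 1.1
namespace eval ::trial {
    namespace export create delete status
}

# ... the copied 1.0 definitions of create and delete go here ...

# Keep the original implementation under another name, then wrap it.
rename ::trial::create ::trial::CreateOriginal
proc ::trial::create { arg1 arg2 } {
    # any extra 1.1 behaviour goes before/after this call
    return [CreateOriginal $arg1 $arg2]
}

proc ::trial::status { arg1 } {
    # new in 1.1
}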
I have a customized version of the wish 8.6 shell with its own environment loaded.
The issue is that in the native wish shell, short command names work,
e.g. packa r xxx for package require, or stri e $str1 $str2 for string comparison.
But when I run the same thing in my customized shell, it says
invalid command name "packa"
However, abbreviating the subcommand still works: package re works for requiring a package.
What could be the cause of wish being unable to resolve the abbreviated command name?
I know it's a bit difficult to answer for a customized shell, but if someone could share probable causes based on logic, that would be of great help.
It sounds like you're not setting the global tcl_interactive to 1. That enables expansion of abbreviated command names as well as calling external programs without an explicit exec and a few other things (all of which is done in the unknown command handler procedure, or things it calls; if you want to customise things instead of working like tclsh does, look there).
Handling of unique prefixes of subcommand names is entirely separate.
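A quick way to check this hypothesis from inside your customized shell (a sketch; where you set the variable permanently depends on your shell's startup code):

if {[info exists ::tcl_interactive]} { puts $::tcl_interactive }   ;# probably 0 in the custom shell
set ::tcl_interactive 1    ;# turn tclsh-style interactive conveniences back on
# After this, "packa r xxx"-style abbreviations should work again at the
# prompt, provided the standard unknown handler from init.tcl is still loaded.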
I was creating a Tcl script to automate the installation of some software. The problem I am running into is that the software needs some environment variables set beforehand, and I was wondering if it's possible to set environment variables inside the Tcl script.
I was going to try exec /bin/sh -c "source /path/to/.bash_profile", but that would just source the variables into a child shell, and the Tcl script won't pick them up.
Can anyone give any other ideas?
In Tcl you have the global env array:
set ::env(foo) bar
And then any child process has the variable foo in its environment.
If you want to put environment variables in a central file (e.g. .bash_profile) so that other programs can source them, then it should be pretty easy to get Tcl to parse that file and set the variables in the env array.
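For instance, if the file only contains simple NAME=value assignments (no quoting, nothing computed by the shell), a rough sketch of such a parser would be:

# Pull simple NAME=value assignments from a profile-style file into ::env
set f [open /path/to/.bash_profile]
while {[gets $f line] >= 0} {
    if {[regexp {^\s*(?:export\s+)?(\w+)=(\S*)\s*$} $line -> name value]} {
        set ::env($name) $value
    }
}
close $f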
Generally speaking (at least for Linux and Unix-like systems), it's not possible from within a child process to alter the environment of the parent. It's a frequently asked question about Tcl.
However, if you're launching some other software from within the Tcl script, you can do a couple of things, the simplest of which may be to create a shell script file which both sets environment variables and then launches your software. Then run the shell script from within Tcl.
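A rough sketch of that wrapper approach from Tcl (the variable names and installer path are placeholders):

# Write a one-shot wrapper that sets the environment and launches the installer
set f [open /tmp/run_installer.sh w]
puts $f "export INSTALL_DIR=/opt/mysoftware"     ;# placeholder value
puts $f "export LICENSE_SERVER=lic1:27000"       ;# placeholder value
puts $f "exec /path/to/installer"
close $f
exec sh /tmp/run_installer.sh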
The environment is exposed via the global env array in Tcl. The keys and values of the array default to the environment inherited from the parent process, any process that Tcl creates will inherit a copy of it, and code that examines the environment in the current process (including directly from C) will see the current state of it.
Picking up environment set in a shell script is quite tricky. The issue is that a .bashrc (for example) can do quite complex things as well as setting a bunch of environment variables. For example, it can also print out a message of the day, or conditionally take actions. But you can at least make a reasonable attempt by using the shell's env command:
set data [exec sh -c "source /path/to/file.sh.rc; env"]
# Now we parse with some regular expression magic
foreach {- key value} [regexp -all -inline {(?wi)^(\w+)=((?!')[^\n]+|'[^']+')$} $data] {
    set extracted_env($key) [string trim $value "'"]
}
It's pretty awful, and isn't quite right (there are things that could confuse it) but it's pretty close. The values will be populated in the extracted_env array by it.
I think it's easier to get people to configure things via Tcl scripts…