Using packages to parse command arguments with options/switches - tcl

I have a couple of questions about adding options/switches (with and without parameters) to procedures/commands. I see that tcllib has cmdline, and Ashok Nadkarni's book on Tcl recommends the parse_args package, stating that handling the arguments in Tcl is much slower than this package's C implementation. The Nov. 2016 paper on parse_args states that Tcl script methods are, or can be, 50 times slower.
Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Can both be easily included in a starkit?
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
Thank you for considering my questions.

Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Potentially. Procedures have overhead to do with managing the stack frame and so on, and code implemented in C can avoid a number of overheads due to the way values are managed in current Tcl implementations. The difference is much more profound for numeric code than for string-based code, as the cost of boxing and unboxing numeric values is quite significant (strings are always boxed in all languages).
As for which one to use, it really depends on the details, as you are trading off flexibility for speed. I've never known it to be a problem for command line parsing.
(If you ask me, fifty options isn't really that many, except that it's quite a lot to pass on an actual command line. It might be easier to design a configuration file format — perhaps a simple Tcl script! — and then to just pass the name of that in as the actual argument.)
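To make the config-file idea concrete, here is a minimal sketch that evaluates a settings file as a Tcl script inside a safe child interpreter, so the file can set variables but cannot touch files, sockets, or processes. The file name and setting names are invented for the example:

    proc loadConfig {filename} {
        # Evaluate the config in a safe child interpreter: the script can
        # set variables but cannot open files or sockets or run programs.
        set i [interp create -safe]
        interp invokehidden $i source $filename
        set conf [dict create]
        # The settings we expect; anything else in the file is ignored.
        foreach name {threads logfile verbose} {
            if {[$i eval [list info exists $name]]} {
                dict set conf $name [$i eval [list set $name]]
            }
        }
        interp delete $i
        return $conf
    }
    # settings.tcl might contain just:  set threads 8
    set conf [loadConfig settings.tcl]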
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Performance? Details of how you describe things to the parser?
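For a feel of the tcllib side, here is a minimal cmdline sketch following its documented getoptions pattern; the option names are invented:

    package require cmdline

    # Each element is {name default description}; a ".arg" suffix marks
    # an option that takes a value, plain names are boolean flags.
    set options {
        {verbose          "print extra progress information"}
        {out.arg "result" "name of the output file"}
    }
    set usage ": myscript \[options] filename\noptions:"
    try {
        array set params [::cmdline::getoptions argv $options $usage]
    } trap {CMDLINE USAGE} {msg} {
        # Raised for -help or an unknown option.
        puts stderr $msg
        exit 1
    }
    puts "verbose=$params(verbose) out=$params(out)"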
Can both be easily included in a starkit?
As long as any C code is built with Tcl stubs enabled (typically not much more than compiling with USE_TCL_STUBS defined and linking against the stub library), it can go in a starkit as a loadable library. Using the stubbed build means that the compiled code doesn't assume exactly which version of the Tcl library is present or what its path is; those are assumptions that are usually wrong with a starkit.
Tcl-implemented packages can always go in a starkit. Hybrid packages need a little care for their C parts, but are otherwise pretty easy.
Many packages either always build in stubbed mode or have a build configuration option to do so.
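For the C part of a hybrid package, the pkgIndex.tcl that ships inside the starkit just points load at the stub-enabled shared library. A sketch with invented package and init names:

    # Hypothetical pkgIndex.tcl for a stub-enabled binary extension;
    # "parsekit" and Parsekit are invented names.
    package ifneeded parsekit 1.0 \
        [list load [file join $dir parsekit[info sharedlibextension]] Parsekit]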
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
We think we're about a month from the feature freeze for 8.7, and builds seem stable in automated testing so the beta phase will probably be fairly short. The list of what's in can be found here (filter for 8.7 and Final). However, bear in mind that we tend to feel that if code can be done in an extension then there's usually no desperate need for it to be in Tcl itself.

Related

How do I find where a function is declared in Tcl?

I think this is more of a Tcl configuration question than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In these scripts, I'm finding numerous calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks when I try to run the scripts there, but within my simulation software they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software (you have checked for this).
Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and whether the simulation software itself sets TCLLIBPATH. It holds a list of directories to search for Tcl packages, and you will need to search any packages located outside the main source tree. The locations may also be given in pkgIndex.tcl files, so check those for directories outside the main source tree.
Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code used to build the shared library that can be searched.
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories that packages are loaded from, but an unknown command handler is also a possibility.
Edit: One more possibility I forgot: check whether your software sets the auto_path variable, and look in any directories added to auto_path for other packages.
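A few lines run inside the simulation tool's own Tcl shell will show you where it actually looks; a quick sketch:

    # Run these at a prompt inside the simulation tool.
    puts $auto_path                           ;# package search directories
    if {[info exists env(TCLLIBPATH)]} {
        puts $env(TCLLIBPATH)                 ;# extra directories, if set
    }
    puts [lsort [package names]]              ;# packages the tool can see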
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but instead a general command, which could be implemented in C rather than as a procedure, or it could be defined in a packaged application archive (which are usually awkward to look inside). There are also other types of script-implemented command, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
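If you can get a prompt inside the tool, the introspection available today looks like this (a sketch; INCLUDE is the mystery command from the question):

    if {[info commands INCLUDE] eq ""} {
        # Not a command at all right now; an unknown handler or an
        # autoloader may be creating it on first use.
        puts "no such command yet"
    } elseif {[catch {info body INCLUDE} body]} {
        # It exists, but info body only works on procs.
        puts "INCLUDE exists but is not a proc: $body"
    } else {
        puts "proc INCLUDE {[info args INCLUDE]} {$body}"
    }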

tcl - when to use package+namespace vs interp?

I'm just starting with Tcl and trying to get my head around how best to define and integrate modules. There seems to be much effort put into the package+namespace concept, but from what I can tell interp is more powerful and leaner for every scenario I can think of, in particular when it comes to hiding and renaming procedures, but also for avoiding creep in the global namespace. The only reason to use package+namespace seems to be because "once upon a time Sun said so".
When should I ever use package+namespace instead of interp?
Namespaces and packages work together. Interpreters are something else.
A namespace is a small-scale naming context in Tcl. It can contain commands, variables and other namespaces. You can refer to entities in a namespace via either local names (foo) or qualified names (bar::foo); if a qualified name starts with ::, it is relative to the (interpreter-)global namespace and can be used to refer to the corresponding command or variable from anywhere in the interpreter. (FWIW, the TclOO object system builds extensively on top of namespaces; there is one namespace per object.)
A package is a high-level concept for a bunch of code supplied by some sort of library. Packages have abstract names (the name does not have to correspond to how the library's implementation is stored on disk) and a distinct version; you can ask for a particular version if necessary, though most of the time you don't bother. Packages can be implemented by multiple mechanisms, but they almost all come down to sourcing some number of Tcl scripts and loading some number of DLLs. Almost all packages declare commands, and they are conventionally encouraged to put those commands in a namespace with the same general name as the package. However, quite a few older packages do not do this for various reasons, mostly to do with compatibility with existing code.
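A minimal sketch of that convention, with invented names: the package's script defines its commands inside a matching namespace and then declares the package.

    # greeter.tcl -- found via a pkgIndex.tcl line such as:
    #   package ifneeded greeter 1.0 [list source [file join $dir greeter.tcl]]
    namespace eval ::greeter {
        namespace export hello
        proc hello {who} {
            return "Hello, $who!"
        }
    }
    package provide greeter 1.0

A client then just does package require greeter and calls greeter::hello (or imports it).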
An interpreter is a security context in Tcl. By default, Tcl creates one interpreter (plus another if it sets up the console window in wish). Named entities in one interpreter are completely distinct from named entities in another interpreter with a few key exceptions:
Channels have common names across all interpreters. This means that an interpreter can talk about channels owned by another interpreter, but merely being able to mention its name doesn't give permission to access the channel. (The stdin, stdout and stderr channels are shared by default.)
The interp alias command can be used to make alias commands, which are such that invoking a command (the alias) in one interpreter can cause a command (the implementation) in another interpreter to be called, with all arguments safely passed over. This allows one interpreter to expose whatever special calls it wants another interpreter to access without losing control, but it is up to the implementation of those commands to act safely on those arguments.
A safe interpreter is one with the unsafe commands of Tcl hidden by default. (That's things like open, socket, source, load, cd, etc.) The parent interpreter that created the safe child interpreter can use the alias mechanism to add in exactly the functionality desired; it's very much analogous to an OS system call, except you can easily make your own application-specific ones. A small sketch follows below.
Tcl's threading package is designed to create one interpreter per thread (and the aliasing mechanism does not work across threads). That means that there's very little in the way of shared resources by default, and inter-thread communication is done via queued message passing.
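Here is that sketch: a safe child interpreter is given one carefully controlled capability via an alias. The readConfig name and the ./config policy are invented for the example:

    set child [interp create -safe]
    proc AllowRead {name} {
        # The parent decides the policy: only files directly under
        # ./config, with any path components stripped from the request.
        set f [open [file join [pwd] config [file tail $name]]]
        set data [read $f]
        close $f
        return $data
    }
    interp alias $child readConfig {} AllowRead
    # Inside the child, readConfig works but open/socket/exec do not.
    puts [$child eval {readConfig settings.conf}]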
In general, packages are required at most once per interpreter and are how you are recommended to access most third-party functionality. Namespaces are fairly lightweight and are used for all sorts of things, and interpreters are considered to be expensive; lots of quite thoroughly production-grade Tcl scripts only ever work with a single interpreter. (Threads are even more expensive than interpreters; it's good practice to match the number of threads you create to the hardware load that you wish to impose, probably through the use of suitable thread pools.)
The purpose of a module is to provide modular code, i.e. code that can easily be used by applications beyond the module writer's knowledge and control, and that encapsulates its own internals.
Package-namespace- and interpreter-based modules are probably equally good at encapsulation, but it's not as easy to make interpreter-based modules that play well with arbitrary applications (it is of course possible).
My own opinion is that interpreters are application level (I mostly use them for user input and for controlled evaluation), not module level. Both namespaces and packages have their warts, but in most cases they do what is expected of them with a minimum of fuss.
My recommendation is that if you are writing modules for your own benefit and interpreters serve you well, by all means use them. If you write modules that other people are to use, possibly including yourself in 18 months, you should stick with namespaces and packages.

Advantages of a VM

The majority of languages I have come across utilise a VM, or virtual machine. Languages such as Java (the JVM), Python, Ruby, PHP (the HHVM), etc.
Then there are languages such as C, C++, Haskell, etc. which compile directly to native.
My question is, what is the advantage of using a VM (outside of OS-independence)? Isn't using a VM just creating an extra interpretation step, by going [source code -> bytecode -> native] instead of just [source code -> native]?
Why use a VM when you can compile directly?
EDIT
My understanding is that Python, Ruby, et al. use something akin to a VM, if not exactly fitting under such a definition, where scripts are compiled to an intermediate representation (for Python, e.g. .pyc files).
EDIT 2
Yep. Looked it up. Python, Ruby and PHP all use intermediate representations; they are simply not always stored in separate files but executed by the VM directly. See this question: Java "Virtual Machine" vs. Python "Interpreter" parlance?
" Even though Python uses a virtual machine under the covers, from a
user's perspective, one can ignore this detail most of the time. "
One advantage of a VM is that it is much easier to modify parts of the code at runtime, which is called reflection. It enables some elegant capabilities. For example, you can ask the user which function/class they want to call, and invoke the function/class by its string name. In Java programs (and maybe some other VM-based languages) users can add an additional library to the program at runtime, and the library can be used immediately!
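For a Tcl flavour of the same call-by-string-name idea (Tcl being what most of this page is about): dynamic dispatch is a one-liner, with a whitelist standing in for the checks a VM would apply. The command names are invented:

    proc start {} { puts "starting" }
    proc stop  {} { puts "stopping" }
    set action [gets stdin]        ;# e.g. the user types "start"
    if {$action in {start stop}} {
        $action                    ;# invoke the command named by the string
    } else {
        puts "unknown action: $action"
    }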
Another advantage is the ability to use advanced garbage collection, because the bytecode's structure is easier to analyze.
Let me note that a virtual machine does not always interpret the code, and therefore it is not always slower than machine code. For example, Java has a component named HotSpot which searches for code blocks that are frequently called and replaces their bytecode with native code (machine code). For instance, if a loop is executed, say, 100+ times, HotSpot converts it to machine code, so that subsequent calls run natively. This ensures that just the bottlenecks of your code run natively, while the rest retains the advantages above.
P.S. It is not impossible to compile the code directly to native code. Many VM-based languages have compiler versions (e.g. there is a compiler for PHP: http://www.phpcompiler.org). However, remember that you are disabling some of the above features by compiling the whole program to native code.
P.S. The [source code -> bytecode] part is not a problem: it is done once and does not affect execution time. I presumed you were asking why they do not execute machine code directly when that is possible.
Python, Ruby, and PHP do not utilize VMs in the heavyweight Java sense; they are, however, interpreted (via internal bytecode interpreters).
To answer your actual question: Java utilizes a VM in order to add some distance between the operating system/hardware and the code being executed. The goal there was security and hardiness (hardiness meaning a lower likelihood of code having an adverse effect on other processes in the system).
All the languages you listed are interpreted, so I think what you actually meant to ask about was the difference between interpreted and compiled languages. Interpreted languages are cross-platform; that is their biggest advantage. You need not compile them for each different set of hardware or operating systems they run on: they simply work everywhere.
The advantage of a compiled language, traditionally, is speed and efficiency.
Because a VM allows the same set of instructions to be run on many different operating systems (provided they have the interpreter).
Let's take Java as an example. Java gets compiled into bytecode, which is basically a set of operations for a computer to follow. However, not all processors understand the same set of instructions in the same way; what one set of native instructions means on computer A could be something different on computer B.
As a result, a VM is run, with one specific to each computer. This way, the Java bytecode that is written is standardized, and only the interpreter has to work to convert it to machine language.
OS independence is a big part of it but you also get abstractions from other things like CPUs... the same Java code can execute on ARM, x86, whatever without modification so long as there is a JVM in place.

Is it bad practice to use the system() function when library functions could be used instead? Why?

Say there is some functionality needed for an application under development which could be achieved by making a system call to either a command line program or utilizing a library. Assuming efficiency is not an issue, is it bad practice to simply make a system call to a program instead of utilizing a library? What are the disadvantages of doing this?
To make things more concrete, an example of this scenario would be an application which needs to download a file from a web server, either the cURL program or the libcURL library could be used for this.
Unless you are writing code for only one OS, there is no way of knowing if your system call will even work. What happens when there is a system update or an OS upgrade?
Never use a system call if there is a library to do the same function.
I prefer libraries because of the dependency issue, namely the executable might not be there when you call it, but the library will be (assuming external library references get taken care of when the process starts on your platform). In other words, using libraries would seem to guarantee a more stable, predictable outcome in more environments than system calls would.
There are several factors to take into account. One key one is the reliability of whether the external program will be present on all systems where your software is installed. If there is a possibility that it will be missing, then maybe it is better to do it inside your program.
Weighing against that, you might consider that the extra code loaded into your program is prohibitive - you don't need the code bloat for such a seldom-used part of your application.
The system() function is convenient, but dangerous, not least because it usually invokes a shell. You may be better off calling the program more directly - on Unix, via the fork() and exec() system calls. [Note that a system call is very different from calling the system() function, incidentally!] OTOH, you may need to worry about ensuring all open file descriptors in your program are closed - especially if your program is some sort of daemon running on behalf of other users; that is less of a problem if you are not using special privileges, but it is still a good idea not to give the invoked program access to anything you did not intend. You may need to look at the fcntl() system call and the FD_CLOEXEC flag.
Generally, it is easier to keep control of things if you build the functionality into your program, but it is not a trivial decision.
Security is one concern. A malicious cURL could cause havoc in your program. It depends on whether this is a personal program where coding speed is your main focus, or a commercial application where things like security play a factor.
System calls are much harder to make safely.
All sorts of funny characters need to be correctly encoded to pass arguments in, and the types of encoding may vary by platform or even version of the command. So making a system call that contains any user data at all requires a lot of sanity-checking and it's easy to make a mistake.
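Tcl makes the contrast easy to see: exec hands each word to the program untouched, while routing the same string through a shell reintroduces every quoting problem. A sketch with an invented URL and curl's common flags:

    set url "http://example.com/a file with spaces"
    # Safe: no shell involved; $url reaches curl as exactly one argument.
    exec curl -s -o output.dat $url
    # Risky, the moral equivalent of system(): the shell re-parses $url,
    # so spaces, quotes, $, and ; in it all change the meaning.
    # exec /bin/sh -c "curl -s -o output.dat $url"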
Yeah, as mentioned above, keep in mind the difference between system calls (like fcntl() and open()) and system() calls. :)
In the early stages of prototyping a C program, I often make external calls to programs like grep and sed for manipulation of files using popen(). It's not safe, it's not secure, and it's certainly not portable, but it can allow you to get going quickly. That's valuable to me; it lets me focus on the really important core of the program, usually the reason I used C in the first place.
In high level languages, you'd better have a pretty good reason. :)
Instead of doing either, I'd Unix it up and build a script framework around your app, using the command line arguments and stdin.
Others have mentioned good points (reliability, security, safety, portability, etc.), but I'll throw out another: performance. Generally it is many times faster to call a library function, or even spawn a new thread, than it is to start an entire new process (and then you still have to correctly check/verify its execution and parse its output!).

How To Distribute a Project Built In an Interpreted Language?

I've started a project (a developer text editor) in an interpreted language (Tcl/Tk), and another with Perl (both are open source). In time, when it reaches a beta version, I will need to distribute it to the users (developers, of course), but I want to know some things about this:
Is it possible to compile it to an executable?
How?
Can I compile for other platforms?
Or, in this case, is it better to use a compiled language or an interpreted one?
Are things like this usual?
Will the user's machine need to have Tcl/Tk or Perl?
Both Tcl and Perl can be compiled into executables. For Windows there's perl2exe, and perlcc for systems running UNIX-style operating systems. As for Tcl, there are freewrap and starpacks.
If you're just doing this for the benefit of a single executable, and eliminating the need to install Perl and other dependencies, then there's no real reason you can't do this. It's quite a nice method for testing your application without having to constantly compile, though it defeats the point of using an interpreted language in the first place.
Also take a look at The Simplest Steps to Converting TCL TK to a Stand Alone Application; this page is also useful: How can I compile Tcl type scripts into binary code.
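For the Tcl side specifically, the starpack route sketched in those links looks roughly like this from the command line; sdx and a tclkit runtime are separate downloads, and the file names are invented:

    sdx qwrap myeditor.tcl              # wrap one script into myeditor.kit
    sdx unwrap myeditor.kit             # unpack it to add packages under lib/
    sdx wrap myeditor -runtime tclkit   # emit a self-contained executable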
The usual and common way for such scripts is to distribute the source. A binary would only work on some very specific systems, while Tcl/Tk/Perl run on so many, so that would be a really big restriction for no real reason. It also helps other developers to reuse your scripts in some good way. In most cases, even when somebody could execute your binary, it wouldn't be of much help without the source.