How to unload a package in Tcl?

I loaded a package first:
package require Tktable
Then I wanted to unload this package. I searched for information and used "package ifneeded" to get the library path. I tried the following:
unload $path Tktable
but I got the error message "cannot be unloaded under a trusted interpreter". How do I unload a package?

Most packages do not support unloading at all. (Specifically, Tktable does not; it doesn't define either a Tktable_Unload function or a Tktable_SafeUnload function in its public C API.) Unloading is rare as it requires the author of the C code to take special care to ensure that it is possible at all, and most of the time programmers have other higher-priority concerns.
Unloading is also disabled in safe interpreters, as it is considered to be an insecure operation. (load is not supported there either, but it is often provided in a restricted fashion by the parent master interpreter, such as via package require doing clever things behind the scenes.)
If the problem is that some package is interfering with your code (as seems to be the case from your comments), put your code in a namespace. There's usually an easy way to pick the namespace name: the name of your application or library typically works fine. If you want to call your code the same thing as someone else's, and their code is better known than yours, that's going to cause you trouble anyway.
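For example, a minimal sketch of the namespace approach (the myapp name is hypothetical):
namespace eval myapp {
    proc table {args} {
        # ::myapp::table does not collide with Tktable's ::table command
        puts "myapp's table called with: $args"
    }
}
myapp::table 1 2 3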

Related

Is there an alternative to the load command to import a binary Tcl package?

I am using a commercial tool interfaced with a homebrew tclsh (Synopsys EDA).
In their version, they removed the load command, so I cannot use third-party libraries (the Graphviz library, in my case).
I wonder if there is another way to import binary files (.so files).
The only command in standard Tcl that brings in a dynamic library is load. (OK, package require can do so too, but that's because it can call load inside.) Without that command, you only have options like statically linking your own code in and creating the commands in the Tcl_AppInit function, but that's really unlikely to work if you're using someone else's code that has already done that sort of thing.
The easiest approach might be to run a normal tclsh as a subprocess, either via exec tclsh script.tcl (run and wait for termination) or via open |tclsh r+ (open a pipeline). That's assuming they've not turned off those capabilities as well; you might be running in a safe interpreter where all those things are systematically disabled. I don't know of any way to break out of a standard safe interpreter (the mechanism for locking them down errs on the side of caution), so if that's the case you'll just have to save the data you want to a file somewhere (by any mechanism that works; safe interpreters also can't touch the filesystem at all by default, though that capability is often added back in restricted ways) and use a completely separate program to work with it.
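A minimal sketch of both approaches, assuming exec and open are still available (helper.tcl is a hypothetical script):
# run a stock tclsh to completion and capture its output
set result [exec tclsh helper.tcl]

# or keep a pipeline open for back-and-forth communication
set pipe [open |tclsh r+]
puts $pipe {puts [expr {6 * 7}]; flush stdout}
flush $pipe
gets $pipe answer
close $pipe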

packages from tcllib not found

I have a strange problem. I am using Fedora 20 and installed tcllib on my system.
But if I use package require uri, for example, I get a "package not found" response.
Does anyone know what the issue is here, or how to determine whether tcllib is added to the package index?
Tcl looks up packages in two ways: with auto_path and with tcl::tm::path.
1. The auto_path — the traditional mechanism.
When you do package require, the package manager looks to see if the package is already present, or if instructions for obtaining the package from the filesystem are present. If neither of these is true, it asks the package unknown handler to load it (strictly, it's the handler installed using the package unknown command). The default implementation of that handler loads packages by looking for pkgIndex.tcl files in the directories on your auto_path, and their immediate subdirectories.
auto_path is a global variable holding a Tcl list of directories to search; you can probably just lappend the right place to it. pkgIndex.tcl is a Tcl script that describes how to make the package available, which it does by calling an appropriate package ifneeded command.
The actual loading happens once a package is required that isn't present but whose instructions for obtaining it are: Tcl will simply eval those instructions. They're just a plain old script (one that usually just calls source and/or load to do the grunt work).
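As a sketch, here is how to extend the search path, together with a minimal pkgIndex.tcl for a hypothetical pure-Tcl package foo (the paths are illustrative):
# in your script: add a directory to the package search path
lappend auto_path /path/to/extra/libs
package require foo

# in /path/to/extra/libs/foo/pkgIndex.tcl:
package ifneeded foo 1.0 [list source [file join $dir foo.tcl]]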
2. Tcl modules — the new (in 8.5) mechanism.
The Tcl module system uses a different search system managed with the tcl::tm::path command. The tcl::tm::path list subcommand will tell you where it looks (a huge list, to be honest) and you can use the tcl::tm::path add subcommand to extend the path with extra locations to search. Tcl modules have the entire package placed into a single file (with the extension .tm) and have a structured name so that they can avoid having a separate pkgIndex.tcl file; the TM loader can synthesise the package ifneeded calls from the filename itself (in all cases, this is done with source; there are some clever ways to package binary code inside files so they can be loaded, but they're far outside the scope of this answer).
At that point, you're back to sourcing the file when the package is actually required; that part is the same whether you're using a module or a traditional package.
The module system is much faster than the traditional search mechanism since it doesn't need to open any files to figure out what to do: it just uses glob with the right options. It is, however, less flexible in how things can be packaged: multi-file packages (e.g., almost anything you make yourself) can't be made into modules (well, not without extra work).
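A sketch of the module side (the directory and the foo module name are hypothetical):
# inspect where Tcl already looks for .tm modules
puts [tcl::tm::path list]

# add another location; a file named foo-1.0.tm there satisfies the require
tcl::tm::path add /path/to/my/modules
package require foo 1.0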

PowerShell module design - Export-ModuleMember

I am building a module that exports a cmdlet that I would like to make available through my profile. The implementation of this cmdlet is spread across multiple implementation files that contain implementation functions I don't want to make publicly available. So I use Export-ModuleMember to hide them.
File get_something.psm1
Import-Module .\get_something_impl.psm1

function Get-Something {
    [CmdletBinding()]
    param()

    # delegate to the private implementation function
    Get-SomethingImplementation
}

Export-ModuleMember -Function Get-Something
I then add get_something.psm1 to my profile. By exporting only Get-Something, all of my implementation functions remain "private".
The issue I'm experiencing is that when using the Export-ModuleMember command, I have to import a module in my implementation files every time I need a function inside it. For example, assume I have a module, person.psm1, with a function, Get-Person, that I need to call throughout all of my implementation files. Now I must import person.psm1 in every single file where I need to call Get-Person. This is a result of using Export-ModuleMember -Function Get-Something. Without it, I would only need to import person.psm1 once and it would be available.
In essence, Export-ModuleMember is not only blocking my implementation to the outside. It's blocking it from my own implementation.
Is this expected and considered a normal aspect of designing PowerShell modules?
This was actually a bit of debate during the development of modules. Originally, Export-ModuleMember was required to export any function. This became tedious and limiting. So, by default, all functions from a module are visible, but variables and aliases are not, as long as you've never used Export-ModuleMember within the .PSM1.
If you use Export-ModuleMember, it begins to restrict that list. It may not be a bad idea to export a smaller number of functions, but you have to use it somewhat carefully.
You can either write:
Export-ModuleMember -Function a,b,c
which exports a few functions.
or
Export-ModuleMember -Function *
The latter one is equivalent to omitting Export-ModuleMember altogether.
You can use more restrictive wildcards if you'd like, but I find that 99% of the time, you don't need to bother with it at all.
The other thing you seem to be asking is how best to handle module dependencies. Nowadays, it's fairly common to import a module or two when writing a script, just like it's fairly common to include an assembly or two in a C# project. If you're doing this inside of a module, you can use the -Global flag on Import-Module, and avoid using -Force (which will reload the module). This makes it a notch more efficient to reuse the module in different functions. It also makes it less likely to have problems with "cycling" (unloading and reloading) the module, which, unfortunately, many modules do not do well.
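For instance, a sketch of that pattern inside one of the implementation files (person is the module from the question):
# import the dependency into the global session state so other files see it;
# omit -Force so an already-loaded copy is reused rather than reloaded
Import-Module person -Global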
The alternative to referencing the module in each function is using a module manifest (Get-Help New-ModuleManifest). Module manifests are very interesting, and required learning for many parts of module development. If you include a module in the RequiredModules list of the Module manifest, it will be automatically loaded before the module is imported (at least in PowerShell 3 and greater). If you include a module in the NestedModules list of the module manifest, it will be loaded as part of the module, and the commands exported by the module will be exported by your module instead.
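A sketch of the relevant manifest fields (the names follow the question; the field list is illustrative, not exhaustive):
# get_something.psd1
@{
    ModuleVersion     = '1.0'
    RootModule        = 'get_something.psm1'
    RequiredModules   = @('person')                   # loaded before this module
    NestedModules     = @('get_something_impl.psm1')  # loaded as part of this module
    FunctionsToExport = @('Get-Something')
}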
Module design is a tricky beast, but it's very rewarding to do right. Best of luck.

Tcl version change issue from 8.4 to 8.5.12

I have a problem with changing the Tcl version from 8.4 to 8.5.12 on a RHEL machine. Our product uses TclDevKit components like Tcldom, Tclxml, etc. Also we are using Incr Tcl (Itcl). I am trying to create a pkgIndex.tcl file for Itcl so that Itcl can be found when that package is required, as follows:
package ifneeded Itcl 3.4 [list load [file join $dir "libitcl-O.a"] Itcl ]
but when I use
package require Itcl
I get the report: couldn't load file "/somepath/itcl/lib/libitcl-O.a": /somepath/lib/libitcl-O.a: invalid ELF header
It seems I can't load files with the .a extension, but the same was done with the previous version of Tcl (8.4) and it worked fine. I googled a lot and read a lot of documentation, but it doesn't help me get further.
Please help.
Thanks in advance
Libraries come in two general sorts, static libraries and shared libraries. On Linux, static libraries have the extension .a by default, and shared libraries have the extension .so (plus optionally some numbers to indicate the version). Only shared libraries will work with Tcl's load command and even then they have to be designed to work that way (with an appropriate Foobar_Init function, as documented).
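The fix is therefore to point package ifneeded at the shared library rather than the static archive. A sketch, assuming your Itcl build also produced a .so (the exact filename depends on the build):
package ifneeded Itcl 3.4 [list load [file join $dir libitcl3.4.so] Itcl]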
When dealing with stub-exporting extensions (fairly rare) or Tcl and Tk themselves, the linking is done in two parts. There's a stub library, normally called somethingstub.a, and there's a main shared library. The main shared library contains the implementation of the code; all that is in the stub library is an ABI/API adaptor so that you can avoid binding your code to an explicit version of the implementation library. By building an extension stub-enabled and linking against the stub library, you gain the ability to have your extension loaded into future versions of Tcl/Tk without any recompilation or relinking steps at all. (You also become able to put the extension inside a starkit for deployment, as those use a rather unusual way of managing shared libraries that the stub mechanism conceals from you.)
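For illustration, a minimal sketch in C of the init function of a stub-enabled extension (Foobar is a placeholder name, not a real package):
#include <tcl.h>

/* compiled with -DUSE_TCL_STUBS and linked against the stub library */
int Foobar_Init(Tcl_Interp *interp)
{
    /* bind to the stub table; accepts any interpreter from 8.5 onwards */
    if (Tcl_InitStubs(interp, "8.5", 0) == NULL) {
        return TCL_ERROR;
    }
    /* register the extension's commands here ... */
    return Tcl_PkgProvide(interp, "foobar", "1.0");
}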

Why make global Lua functions local?

I've been looking at some Lua source code, and I often see things like this at the beginning of the file:
local setmetatable, getmetatable, etc.. = setmetatable, getmetatable, etc..
Do they only make the functions local to let Lua access them faster when often used?
Local data are on the stack, and are therefore accessed faster. However, I seriously doubt that the lookup time for setmetatable is actually a significant issue in any program.
Here are the possible explanations for this:
1. Prevention from polluting the global environment. The modern Lua convention for modules is to not register themselves directly in the global table. They should build a local table of functions and return it (see the sketch below). Thus, the only way to access them is with a local variable. This forces a number of things:
One module cannot accidentally overwrite another module's functions.
If a module does accidentally do this, the original functions in the table returned by the module will still be accessible. Only by using local modname = require "modname" will you be guaranteed to get exactly and only what that module exposed.
Modules that include other modules can't interfere with one another. The table you get back from require is always what the module stores.
2. A premature optimization by someone who read "local variables are accessed faster" and then decided to make everything local.
In general, this is good practice. Well, unless it's because of #2.
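Here is a sketch of that module convention (mymodule is a hypothetical name):
-- mymodule.lua: build a local table of functions and return it
local setmetatable = setmetatable  -- cache the global lookup in a local

local M = {}

function M.new(fields)
  return setmetatable(fields or {}, { __index = M })
end

return M
A caller then writes local mymodule = require "mymodule" and gets back exactly that table.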
In addition to Nicol Bolas's answer, I'd add one more point:
It allows your code to be run from within a sandbox after it's been loaded.
If the functions have been excluded from the sandbox and the code is loaded from within the sandbox, then it won't work. But if the code is loaded first, the sandbox can then call the loaded code and be able to exclude setmetatable, etc, from the sandbox.
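A sketch of that load-then-sandbox ordering in Lua 5.1 (loadstring/setfenv; the environment contents are illustrative):
-- module code caches setmetatable at load time, outside the sandbox
local chunk = loadstring([[
  local setmetatable = setmetatable
  return function(t, mt) return setmetatable(t, mt) end
]])
local attach = chunk()  -- runs with the full global environment

-- now build a sandbox that excludes setmetatable
local env = { attach = attach, print = print }
local user = loadstring("print(attach({}, {}))")
setfenv(user, env)
user()  -- still works: attach holds the cached setmetatable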
I do it because it allows me to see at a glance which functions are used by each of my modules.
Additionally, it protects you from others changing the functions in the global environment.
That it is a free (premature) optimisation is a bonus.
Another subtle benefit: It clearly documents which variables (functions, modules) are imported by the module. And if you are using the module statement, it enforces such declarations, because the global environment is replaced (so globals are not available).