I am creating a Tcl script to automate the installation of some software. The problem I am running into is that the software needs some environment variables set beforehand, and I was wondering if it's possible to set environment variables from inside the Tcl script.
I was going to try exec /bin/sh -c "source /path/to/.bash_profile", but that would only source the variables into a child shell, and the Tcl script won't pick them up from there.
Can anyone give any other ideas?
In Tcl you have the global env array:
set ::env(foo) bar
And then any child process has the variable foo in its environment.
If you want to put environment variables in a central file (e.g. .bash_profile) so that other programs can source them, then it should be pretty easy to get Tcl to parse that file and set the variables in the env array.
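For a file that contains only simple NAME=value assignments (possibly preceded by export), a minimal sketch of that parsing might look like the following; it assumes there is no quoting or other shell logic in the file:
set f [open /path/to/.bash_profile r]
foreach line [split [read $f] \n] {
    # Only handles plain "NAME=value" or "export NAME=value" lines
    if {[regexp {^(?:export\s+)?(\w+)=(.*)$} $line -> name value]} {
        set ::env($name) $value
    }
}
close $f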
Generally speaking (at least for Linux and Unix-like systems), it's not possible from within a child process to alter the environment of the parent. This is a frequently asked question about Tcl.
However, if you're launching some other software from within the Tcl script, you can do a couple of things, the simplest of which may be to create a shell script file which both sets environment variables and then launches your software. Then run the shell script from within Tcl.
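A minimal sketch of that wrapper approach (the wrapper location and installer path below are hypothetical placeholders):
# Write a throwaway shell script that sets up the environment, then runs the installer
set wrapper [file join /tmp run-installer.sh]
set f [open $wrapper w]
puts $f "#!/bin/sh"
puts $f ". /path/to/.bash_profile"
puts $f "exec /path/to/installer"
close $f
file attributes $wrapper -permissions 0755
# Run it, showing the installer's output as it goes
exec $wrapper >@stdout 2>@stderr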
The environment is exposed via the global env array in Tcl. The keys and values of the array default to the environment inherited from the parent process, any process that Tcl creates will inherit a copy of it, and code that examines the environment in the current process (including directly from C) will see the current state of it.
Picking up environment set in a shell script is quite tricky. The issue is that a .bashrc (for example) can do quite complex things as well as setting a bunch of environment variables: it can also print out a message of the day, or conditionally take actions. But you can at least make a reasonable attempt by using the env program:
set data [exec sh -c "source /path/to/file.sh.rc; env"]
# Now we parse with some regular expression magic
foreach {- key value} [regexp -all -inline {(?wi)^(\w+)=((?!')[^\n]+|'[^']+')$} $data] {
    set extracted_env($key) [string trim $value "'"]
}
It's pretty awful and isn't quite right (there are things that could confuse it), but it's pretty close. It populates the extracted values into the extracted_env array.
I think it's easier to get people to configure things via Tcl scripts…
It will be easier to explain using an example from C. When you build an application in C (or C++, etc.) you can build a "release" version that does not include some code that you would have in a non-release build, e.g. test code.
I'm trying to do something similar in Tcl. We have some tracing functions that I would like to be empty shells in "release".
So I thought I could use two different packages to do that: one in release and one in designer mode, so that designer mode could use a "define" or something similar.
I know I could also "replace" each function using "rename" and "alias", but my application starts many threads (and there is one interpreter per thread), so I would have to replace multiple functions in multiple threads, which makes things more complicated, I think. I thought that using two different packages instead would be a "one shot solves them all" kind of solution.
Thanks
One of the simplest techniques is to put some extra magic in the pkgIndex.tcl script for the package. Usually it looks something like (cookiejar is a little package I wrote that's in 8.7):
package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar.tcl]]
But if you want to make things more conditional, you can do instead:
if {[info exists ::developermode]} { # Or however you want to detect it!
    package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar-dev.tcl]]
} else {
    package ifneeded cookiejar 0.1 [list source [file join $dir cookiejar-release.tcl]]
}
You can then have two implementations, one version for development and another for release; in your case, the release version should probably be just some empty stand-in functions that provide the same API but do nothing. (You could provide no commands at all, or make the two versions inconsistent, but that's likely to cause code that works in development to fail in production.)
If it helps, note that if you define a procedure like this:
proc someCommand {args} {}
(That is, it just takes args as its only formal argument and has an empty body.) Then Tcl will compile calls to that procedure entirely out of the runtime bytecode of the procedures that use it. This is probably going to be very useful to you; it lets your production code refer to your debugging helpers, yet have no (meaningful) cost for doing so.
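As a concrete sketch of that split for the tracing use case (all names here, apptrace, traceMsg and the two file names, are made up for illustration):
# apptrace-dev.tcl: development build, tracing actually prints
package provide apptrace 1.0
proc traceMsg {msg} {
    puts stderr "TRACE: $msg"
}
# apptrace-release.tcl: release build, same API, compiled away at call sites
package provide apptrace 1.0
proc traceMsg {args} {}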
Reading about bash's exec, one can create and redirect pipes other than the standard ones, e.g. exec 3>&4.
Reading about Tcl's exec, there is no mention of non-standard pipes; the omission seems deliberate.
The use case is a launcher starting many executables that communicate over multiple pipes (possibly in a circular fashion). I was thinking of something like:
lassign [chan pipe] a2b_read a2b_write
exec a 3 3>#$a2b_write
exec b 3 3<#$a2b_read
...
...where 'a' is an executable taking a file descriptor argument controlling where a should write stuff, and vice versa for executable 'b'. Using the standard pipes does not work when the executables communicate over multiple pipes.
I know how to do this using a named pipe, but would much rather tie the pipe's lifetime to that of the processes.
Tcl has no built-in binding for dup() at all, and only uses dup2() in a very limited way (only for the three standard channels). Without those, this functionality is not going to work. This is where you need TclX, which lets you take full control of channel handling and process launching and do whatever you want (via fork, dup and execl; note that execl is not at all like Tcl's exec and much more like the POSIX system call).
Or do the trickery in a subordinate shell script.
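For a single extra pipe, a rough sketch of that subordinate-shell trick (a and b are the hypothetical executables from the question; with more pipes than this you are back to named pipes or similar):
# a writes on fd 3, which the shell routes through an ordinary pipeline to b's fd 3
exec sh -c {a 3 3>&1 | b 3 3<&0}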
I have an environment variable which I can access without any problem when used in isolation
puts "$::env(LIB)"
/home/asic/lib
However, when I try to use this as part of a longer string, the env var expands to an empty string!
puts "$::env(LIB)/add/path/to/target"
/add/path/to/target
I am using Riviera Pro with $tcl_version=8.5 on a Linux system. It works fine on the Windows version.
How can I access the env var?
I have tried re-assigning to a local variable, but I still get the same issue. Putting {} around the variable name doesn't help either.
The perils of different line ending conventions.
The script used to create the env vars was written on a Windows system, and when Tcl interpreted the vars it was seeing stray control characters (carriage returns) in the values. Once the script was pushed through dos2unix, the vars are now being used correctly.
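If fixing the generating script is not an option, a small in-script workaround (just a sketch) is to strip the stray carriage return before using the value:
# Drop any trailing CR left over from Windows line endings
set lib [string trimright $::env(LIB) "\r"]
puts "$lib/add/path/to/target"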
In standard Tcl, that code as written ought to work; the $…(…) variable form will not get confused by the surrounding "…" or by the trailing /… material. I don't know how Riviera Pro might be altering things, but I suppose that is possible.
What does parray ::env report? That should print all the environment variables and their contents. (The only really big differences between Windows and Linux with environment variables are that their names are case-sensitive on Linux, and each platform tends to have different characteristic variables set.)
I have a customized version of the wish 8.6 shell with its own environment loaded.
The issue is that in the native wish shell, abbreviated commands work,
e.g. packa r xxx for package require, or stri e $str1 $str2 for string comparison.
But when I run the same thing in my customized shell, it says
invalid command name "packa"
But it does work for the command's subcommands: package re works for requiring the package.
What could be the possible cause of wish being unable to resolve the abbreviated command name?
I know it's a bit difficult to answer for a customized shell, but if someone could share probable causes based on logic, that would be of great help.
It sounds like you're not setting the global tcl_interactive to 1. That enables expansion of abbreviated command names as well as calling external programs without an explicit exec and a few other things (all of which is done in the unknown command handler procedure, or things it calls; if you want to customise things instead of working like tclsh does, look there).
Handling of unique prefixes of subcommand names is entirely separate.
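For example, the minimal thing to try in the customized shell's startup code (assuming the stock unknown handler from init.tcl is in place) is simply:
# Enable the interactive conveniences handled by [unknown]
set ::tcl_interactive 1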
I'm developing a tool that will perform several types of analysis, and each analysis can have different levels of thoroughness. This app will have a fair number of options to be given before it starts. I started out with a configuration file, since the number of analysis types was small. As the number of implemented options grew, I created more configuration files. Then I started mixing in some command line parameters, since some of the options could only be flags. Now I've mixed a bunch of command line parameters with configuration files and feel I need to refactor.
My question is: when and why would you use command line parameters instead of configuration files, and vice versa?
Is it perhaps related to the language you use, personal preference, etc.?
EDIT: I'm developing a java app that will work in Windows and Mac. I don't have a GUI for now.
Command line parameters are useful for quickly overriding a setting from the configuration file. They are also useful when there are not many parameters. For your case, I'd suggest that you expose parameter presets on the command line.
Command line arguments:
Pros:
concise - no extra config files to maintain
great interaction with bash scripts - e.g. variable substitution, variable reference, bash math, etc.
Cons:
it could get very long as the options become more complex
formatting is inflexible - beyond the command line utilities that help you parse high-level switches and such, anything more complex (e.g. nested, structured information) requires custom syntax, often parsed with regexes, and the structure can be quite rigid - structured formats like JSON or YAML are hard to specify on the command line
Configuration files:
Pros:
it can be very large, as large as you need it to be
formatting is more flexible - you can use JSON, YAML, INI, or any other structured format to represent the information in a more human-readable way
Cons:
config files interact poorly with bash variable substitution and references (as well as bash math) - you probably have to define your own substitution rules if you want the config file to be "generic" and reusable, whereas this is the biggest advantage of command line arguments; variable math is difficult (if not impossible) in config files - you have to define your own "operators" in the config file, or rely on another bash script to carry out the math and perform your custom substitutions before the "generic" config file becomes concretely usable
even once a generic config file (with custom-defined substitution rules) is ready, a bash script is still needed to carry out the actual substitution, and your application still has to accept all the substituted values; so either you have config files with no variable substitution, which means you hard-code and repeat the config file for different scenarios, or the custom substitution logic makes your in-app config handling much more complex.
In my use case, I consider being able to do variable substitution and referencing (as well as bash math) in the bash scripts more important, since I'm using the same binary to start many server nodes with different responsibilities in a server backend cluster, and I essentially use the bash scripts as a sort of container, or really a config file, to start the many different nodes with differing command line arguments.
My vote: both, à la mysqld.exe.
What environment/platform? On Windows you'd rather use a config file, or even a configuration panel/window in the GUI.
I put configuration that doesn't really change in a configuration file.
Configuration that changes often I pass on the command line.