Tk/Tcl console does not run full script, but works with manual input

New programmer here. I have been trying to run my script in the Tk console of VMD. The script works when I copy and paste it into TkConsole, but when I source/load it, only part of the script runs before it stops, and I get two issues.
The issues I am having are:
it loads the molecules but does not visually display them in the VMD window
it runs most of my script, but gets stuck at the put $total line and gives me back invalid command name "put"
I am unsure whether I have missed a step when sourcing scripts, but when I manually paste in the whole script it seems to work. Wondering if anyone has input. Please see the script below:
mol new ubiquitin.psf
mol new pulling.dcd
set sel [atomselect top "index 942 963"]
set x [measure bond {59 60} frame all]
set total 0
for {set i 0} {$i < 100} {incr i} {
    puts "I inside first loop: $[measure bond {59 60} frame $i]"
    set total [expr {$total + [measure bond {59 60} frame $i]}]
}
put $total
expr {$total/100}

As Donal commented, your script fails due to a typo: put instead of puts.
The reason it works when run manually is because of a procedure called unknown. This procedure is called whenever the interpreter encounters an unknown command. It then tries different things to handle the command:
It will load a library, if that is known to contain the command.
It executes an external executable file, if that exists.
It runs a command from the command history, if applicable.
If the name is a unique prefix of an existing Tcl command, it runs that command instead.
All except the first point are only attempted in interactive mode. So, in that situation the last option kicks in and runs puts when you type put. However, when running a script, that doesn't happen and you get the error you mentioned.
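For reference, a sketch of how the end of the script might look once the typo is fixed (the extra puts and the 100.0 are additions here: the result of a bare expr is only echoed back in interactive mode, and dividing by 100.0 guards against integer division):
puts $total
puts "Average: [expr {$total / 100.0}]"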

Can you set an artificial starting point in your code in Octave?

I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I make edits to my code and test it, I find it annoying that I need to wait 30 seconds to see whether my updates work or not. Is there a way I can run the code once to load the data I need, and then set up an artificial starting point so that when I rerun the code (or type something into the command window) it only runs a desired section (the section after the time-consuming part), leaving the previously loaded data intact?
You can make the variables you want to keep global, and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. At least it does not close the session: that job is still left to quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave and then run your script at the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can type individual statements at the interactive terminal prompt, which use and modify these variables, just like running a script one line at a time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
  % Section 1
  % ... do some computations here
  save my_data
else
  load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.

Separating stdout and stderr in Tcl drops all but last line of stderr

I'm not a Tcl programmer, but I need to modify a Tcl script that invokes an external command and tries to separate stdout and stderr. The following is a minimal example of how the script currently does this.
#!/usr/bin/tclsh8.4
set pipe [open "|cmd" r]
while {[gets $pipe line] >= 0} {puts $line}
catch "close $pipe" errorMsg
puts "$errorMsg"
Here, cmd is an external command, and for the sake of this example, I will replace it with the following shell script. (I'm working on a Linux machine, but you can modify this to write to stdout and stderr however is appropriate for your system.)
#!/bin/sh -f
echo "A" > /dev/stdout
echo "B" > /dev/stdout
echo "C" > /dev/stderr
echo "D" > /dev/stderr
When I execute cmd, I get the following four lines as expected:
% ./cmd
A
B
C
D
However, when I execute my Tcl script, I get:
% ./test.tcl
A
B
D
This is an example of a more general phenomenon, which is that catch seems to swallow all but the last line of stderr.
To me, the "obvious" way to approach this is to try to mimic what is happening with stdout, which obviously works and prints all lines of the output. However, the current implementation is based on getting a Tcl channel by using open "|cmd", which requires running an external command. I can't figure out how to create a channel without opening an external command, and even if I could figure that out, there are subsequent issues with this approach. (How do I get the output of close into the channel? And if I need to open a new channel to get the output of each channel I am closing, then wouldn't I need an infinite number of channels?)
If anyone has any idea why errorMsg drops the initial lines or another approach that does not suffer from this problem, please let me know.
I know that this will come up, so I will say in advance that switching to Tcl 8.5 is probably not an option for me in the short term, since I do not control the environment in which this script is run.
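One possible workaround (a sketch, not from the original post, and assuming it is acceptable to route stderr through a temporary file) is to use the pipeline's 2> redirection when opening the command, so the error lines never have to come back through close at all:
#!/usr/bin/tclsh8.4
# Sketch: redirect cmd's stderr into a temporary file, then read it back afterwards.
set errFile "stderr.[pid].tmp"
set pipe [open "|cmd 2>$errFile" r]
while {[gets $pipe line] >= 0} {puts $line}
catch {close $pipe}
set err [open $errFile r]
while {[gets $err line] >= 0} {puts $line}
close $err
file delete $errFile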

Evaluate variables at delayed command with after

I have a process where variables are defined, and following that procedure the variables should be used after a delay.
The problem is that the delayed command processes the variables when the command is executed rather than when the command is scheduled. Consider the following example:
The code is not tested, but the point should be clear anyway:
for {set i 0} {$i < 100} {incr i} {
    set outputItem $i
    set time [expr 1000+100*$i]
    after $time {puts "Output was $outputItem"}
}
Which I would hope print something like:
Output was 1
Output was 2
Output was 3
...
But actually it prints:
Output was 100
Output was 100
Output was 100
Which I guess shows that tcl keeps the parameter name (and not the value of the variable) when the after command is initiated.
Is there any way to substitute the variable name to the variable content, so that the delayed command (after xxx yyy) works as desired?
The problem is this line:
after $time {puts "Output was $outputItem"}
The substitution of $outputItem is happening when the after event fires, not at the time you defined it. (The braces prevent anything else.) To get what you want, you need list quoting, and that's done with the list command:
after $time [list puts "Output was $outputItem"]
The list command builds lists… and pre-substituted commands (because of the way Tcl's syntax is defined). It's great for building things that you're going to call later. I guess it could have been called make-me-a-callback too, but then people would have wondered about its use for creating lists. It does both.
If your callback needs to be two or more commands, use a helper procedure (or an apply) to wrap it up into a single command; the reduction in confusion at trying to make callbacks work with multiple direct commands is totally worth it.
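For instance, a small sketch of the helper-procedure approach (the name announce is just illustrative):
proc announce {item} {
    # Everything the callback needs to do goes in here as ordinary commands.
    puts "Output was $item"
}

for {set i 0} {$i < 100} {incr i} {
    after [expr {1000 + 100*$i}] [list announce $i]
}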

Button with multiple command line arguments

Can a button in tcl be linked to multiple commands through its command argument?
I have code which runs when a button is clicked. A progressbar with a duration in seconds is also linked to it, and it should start at the same time the button is pressed.
I put both procs into the button's command argument using {}, but it fails with an error.
Code Snippet
button .b -image $p -command {progressbar 300 run_structural_comparision}

proc progressbar {seconds} {
    ttk::progressbar .pg -orient horizontal -mode determinate -maximum $seconds
    pack .pg -side left
    update idletasks
    # Do some real work here for $seconds seconds
    for {set i 0} {$i < $seconds} {incr i} {
        after 1000;       # Just waiting in this example, might as well do something useful here
        .pg step;         # After some part of the work, advance the progressbar
        update idletasks; # Needed to update the progressbar
    }
    # Done, clean up the dialog and progressbar
}

proc run_structural_comparision {} {
    type_run
    global ENTRYfilename ENTRYfilename2 curDIR curDIR2 typep reflib compLib rundir hvt_verilog logfile
    set path [concat $reflib $compLib]
    ## set path [concat $ENTRYfilename $ENTRYfilename2]
    puts $path
    set str "compare_structure -overlap_when -type {timing constraint} -report compare_structure_"
    set trt ".txt"
    set structure [concat [string trim $str][string trim $typep][string trim $trt] $path]
    puts $structure
    puts $rundir
    cd $rundir
    set filename [concat "compare_structure_" $typep ".tcl"]
    if {[file exists $rundir/$filename] == 1} {
        exec rm -rf $rundir/compare_structure_$typep.tcl
    }
A button's -command callback is a Tcl script. It will be evaluated at the global level of the stack. If you want to run two commands, you can just put a script in there to run the two commands:
button .b -command { command_1; command_2 }
This will run them sequentially. Tcl is naturally single-threaded as that is by far the easiest programming model for people to work with. It's possible to make code that works by doing callbacks to appear to be doing multiple things at once (that's how Tk works, just like virtually all other GUIs on the planet) but you're really only doing one thing at a time.
But your real question…
The core of what you need is a way to run the program that takes a long time in the background so that you can monitor it and continue to update the GUI. This is not a trivial topic, unfortunately, and the right answer will depend on exactly what is going on.
One of the simplest techniques is where the CPU-bound processing is done in a subprocess. In that case, you can run the subprocess via a pipeline and set fileevent to give you a notification callback when output is produced or the program terminates. Tcl is very good at this sort of thing; things that many languages have as very advanced techniques just feel natural when done with Tcl, as a great deal of thought has been put into how to make I/O callbacks work nicely.
If it's in-process and long-running without the opportunity for callbacks, things get more complex as you have to have the processing and the GUI updates in different threads. Which isn't too hard if you've got everything set up right, but which might require substantial re-architecting of your program (as it is usual for threads in Tcl to be extremely strongly partitioned from each other).
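A minimal sketch of that subprocess-plus-fileevent pattern might look like this (long_running_tool and the puts calls are placeholders for the real program and the real GUI updates):
proc onOutput {chan} {
    if {[gets $chan line] >= 0} {
        # A line of output arrived: update the GUI here (progressbar, text widget, ...).
        puts "got: $line"
    } elseif {[eof $chan]} {
        fileevent $chan readable {}
        catch {close $chan}
        puts "subprocess finished"
    }
}

set chan [open "|long_running_tool" r]
fconfigure $chan -blocking 0
fileevent $chan readable [list onOutput $chan]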
The simplest thing to do is to create a procedure that calls the two functions, e.g.:
proc on_button_press {seconds} {
    after idle [list progressbar $seconds]
    after idle [list run_structural_comparision]
}
You can put multiple calls in the immediate button handler command string, but it quickly gets complicated. In short, use a semicolon to separate the two commands.
Your use of update idletasks should be considered a "code smell", i.e. avoid it. In this case, in the progressbar function, set up the bar and then have everything else driven by after calls that update the state of the progressbar.
I suspect your rm -rf may not do what you want. It is likely to lock up the interface, as you get nothing back until the command has completed. Better to write a function that walks the directory tree and deletes the files with file delete; you can then raise progress events as you go and keep the UI alive by breaking the processing into chunks with after again.
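As a sketch of that after-driven progressbar idea (the one-second tick interval and the proc names are illustrative):
proc progressbar {seconds} {
    ttk::progressbar .pg -orient horizontal -mode determinate -maximum $seconds
    pack .pg -side left
    progress_tick $seconds
}

proc progress_tick {remaining} {
    if {$remaining <= 0} {
        destroy .pg            ;# done: remove the progressbar
        return
    }
    .pg step                   ;# advance by one unit
    after 1000 [list progress_tick [expr {$remaining - 1}]]
}
No update idletasks is needed here, because control returns to the event loop between ticks.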

How do you spawn an application from tcl/tk such that it's independent of the parent app?

I've tried just about EVERYTHING. I've tried exec. I've tried open with a pipe. I've tried expect's spawn command. I've even tried creating scripts that launch the application. None of this works. It should be something so freaking simple but it isn't. Basically I want something that executes "./myapp &". I don't want the parent app to wait for any input. I don't want the parent app to kill the child when it's closed. Nor do I want the closure of the child to affect the parent.
I've spent 4 hours trying to do this with no dice.
EDIT 4: I have produced a small test case. Actually, I now think the bug is related to a shell script called from tcl not recognizing that it has finished and wanting more input. But oddly, the script does run and the following tcl commands are still executed.
Create a file called "data.dat"
9.2877556131E+03 1.2512862771E-15 1.5337670739E-18
9.2910873127E+03 1.2439911921E-15 1.5339531650E-18
9.2944190123E+03 1.2253566905E-15 1.5354833054E-18
9.2977507119E+03 1.2115157273E-15 1.6142496743E-18
9.3010824115E+03 1.2080944425E-15 1.7533261348E-18
9.3044141111E+03 1.2076649858E-15 1.8405304542E-18
9.3077458106E+03 1.1904275879E-15 1.8336914953E-18
9.3110775102E+03 1.1790369061E-15 1.8270892064E-18
9.3144092098E+03 1.1620102216E-15 1.8244075829E-18
9.3177409094E+03 1.1178361307E-15 1.8581382655E-18
9.3210726090E+03 1.0969704027E-15 1.8237852087E-18
9.3244043086E+03 1.0721326432E-15 1.7016959482E-18
9.3277360082E+03 1.0699729320E-15 1.6841145916E-18
9.3310677078E+03 1.0266229217E-15 1.7253936877E-18
9.3343994074E+03 9.0875918248E-16 1.6065954292E-18
9.3377311069E+03 7.9157363851E-16 1.5220001484E-18
and then run this tcl file on it
#!/usr/bin/wish
frame .main
button .button -text "Make plot" -command { makeplot }
pack .button

proc makeplot {} {
    set data {#!/bin/bash
cat << EOF > plot.gnu
splot "data.dat" using 1:2:3 with points
EOF
}
    set fileId [open "plot.csh" "w"]
    puts -nonewline $fileId $data
    flush $fileId
    close $fileId
    exec chmod +x ./plot.csh
    exec ./plot.csh
    exec gnuplot -persist < plot.gnu &
}
The gnuplot plot will not appear until you "foreground" it. I have tried many variations on this (with/without the ampersand). Strangely, I DID get a version of this working under bash.
EDIT 5: I GIVE UP! This test case is now working. It was not just moments ago. I have NO idea what's going on here. This has been one of the hardest bugs to fix in 15 years of programming.
Try exec with & at the end:
exec ./myapp &
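If the child still seems tied to the parent, a sketch of a more fully detached launch is to also cut the child off from the parent's standard channels (the /dev/null redirections are an assumption about what your app can tolerate):
exec ./myapp </dev/null >/dev/null 2>/dev/null &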
Have you tried handing the command to 'at now'?
echo ./myapp | at now