I have seen a topic about how to plot graphs with Gnuplot on a Tk canvas. Here is a simple code sample from Donal Fellows. Can someone help me with the two commands in bold (set term tk; gnuplot .c)? I cannot understand what they mean. Thanks.
package require Tk
eval [exec gnuplot << "
**set term tk**
plot x*x
"]
pack [canvas .c]
**gnuplot .c**
When you run the gnuplot program with the terminal set to tk, it writes to its standard output a Tcl script that will create a procedure. That procedure is called gnuplot, and it takes a single argument: the name of the canvas to plot onto. So we call the gnuplot program with the appropriate arguments to get it to tell us how to make a command that will actually do the plotting. We eval that result, make the canvas, and delegate the actual plotting on the canvas to that newly-created gnuplot command.
It's a little odd, and theoretically unsafe (what if the gnuplot is hacked?!?!?! Oh noes!) but actually works quite well in practice.
To see why it works, try doing:
puts [exec gnuplot << "
set term tk
plot x*x
"]
Instead of evaluating the code, that will print it out. You'll see that it's a procedure definition, and how exactly it all works. (Alas, I've not got gnuplot installed on this computer at the moment, so I can't do the check quite instantly for you…)
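The underlying pattern (one program emits code, another evaluates it and thereby gains a new command) is not specific to Tcl; here is a minimal shell sketch of the same idea, with a purely hypothetical greet procedure standing in for the generated gnuplot command:

```shell
# Sketch: a "generator" hands us code that defines a new command;
# we eval that code, and only then can we call the command it defined.
generated='greet() { echo "hello from generated code"; }'
eval "$generated"
greet   # prints: hello from generated code
```

The Tcl version works the same way: eval-ing gnuplot's output defines the gnuplot procedure, which is why calling it only makes sense after the eval.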
I'm not a gnuplot expert, but as far as I understand, the two commands are very simple.
set term tk
Assigns the string value tk to gnuplot's term setting, i.e. selects the Tk terminal driver.
gnuplot .c
Launch the command gnuplot with argument .c.
In your code the .c is just the name of the tk canvas widget.
More intriguing is the first [exec gnuplot << ...], which executes an external command called gnuplot that initializes the Tcl script and defines the Tcl command gnuplot used to draw the plot on a canvas.
It looks like the external gnuplot command generates the Tcl code to define everything that is needed.
Related
I am running a Julia script in a Jupyter Notebook on a remote host by using the following command in a Jupyter environment:
jupyter nbconvert --to notebook --execute my_notebook.ipynb
which works fine. However, if I try to pass arguments to the Jupyter Notebook, with the intention that they are finally passed to the Julia script, I fail. My question is: how do I do this?
To pass arguments to the Jupyter Notebook I modified the above command to
jupyter nbconvert --to notebook --execute my_notebook.ipynb arg1 arg2 arg3
Also, in the Julia script I try to recover the arguments (which are supposed to be small enough integers) via
x1 = parse(Int16, ARGS[1])
x2 = parse(Int16, ARGS[2])
x3 = parse(Int16, ARGS[3])
which doesn't work.
I tried to understand what is in ARGS, but I can't decipher what it means. The output of
println(ARGS)
included in the Julia script is
"/tmp/tmp8vuj5f79.json"
Coming back to the parse calls above, a few errors occur, since ARGS[1] obviously can't be converted to an integer.
Another error which occurs when passing the arguments to the Jupyter Notebook execution is
[NbConvertApp] WARNING | pattern 'arg1' matched no files
[NbConvertApp] WARNING | pattern 'arg2' matched no files
[NbConvertApp] WARNING | pattern 'arg3' matched no files
I might be approaching the problem from a completely wrong perspective, so any help would be very much appreciated!
It looks like it's not possible to pass command-line arguments to notebooks run with --execute.
The WARNING | pattern 'arg1' matched no files messages indicate that these arguments are seen by nbconvert as additional files to convert, not as arguments to the notebook.
The common suggestion is to use environment variables instead.
X=12 Y=5 Z=42 jupyter nbconvert --to notebook --execute my_notebook.ipynb
which you can then access from within the notebook as
ENV["X"], ENV["Y"], and ENV["Z"].
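As a quick sanity check of the mechanism, independent of Jupyter: variables prefixed to a command become environment variables of that command's process, which is exactly how the values reach the notebook kernel in the invocation above. A minimal sketch:

```shell
# Sketch: VAR=value prefixes are placed in the child process's environment,
# just as with the jupyter nbconvert invocation above.
X=12 Y=5 Z=42 sh -c 'echo "$X $Y $Z"'   # prints: 12 5 42
```

Note that environment variables arrive as strings, so in the Julia notebook you would still convert them, e.g. parse(Int16, ENV["X"]).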
Given an Octave m-file starting with #!/usr/bin/octave -q, can a function be defined inside this file, or is the only way to invoke functions defined in another file?
Yes, it can. The only requirement is that the first Octave statement must not be a function definition, which is why many Octave programs start with 1;. However, in my experience most Octave programs need a package, so the first statements can just be the loading of said packages.
Here's an example Octave program:
#!/usr/bin/env octave
## Do not forget your license
pkg load foo;
pkg load bar;
1; # not really necessary because of the pkg load statements above
function foobar ()
## this function does something amazing
endfunction
function main (argv)
disp (argv);
endfunction
main (argv ());
As far as I know, the source command only accepts the name of the script. Is there any workaround to source a script with any number of arguments?
set argv [list your parameters go here]
source myscript.tcl
Currently, I let Tcl handle the data processing part. Let's say the data is stored in a Tcl list variable.
If I want to use gnuplot to show the data, what is the best way to do this?
Based on my study, gnuplot needs a data file, so I cannot directly pass the list variable to the gnuplot command.
I guess I could create the gnuplot commands inside Tcl with string operations, and then print all the commands into a file, for example named "command.dat".
And call the gnuplot this way in tcl:
exec gnuplot "command.dat"
Any other method, or a good reference?
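One detail worth noting as a sketch (not from the question itself): gnuplot can read inline data through the special filename '-', terminated by a line containing just e, so the generated command file can embed the points directly and no separate data file is needed. The snippet below only builds and prints such a command file; the data values and the dumb terminal are illustrative, and running it would be a separate step (gnuplot command.dat):

```shell
# Sketch: embed data points directly in the gnuplot command file using the
# special '-' filename; the file could then be run with: gnuplot command.dat
data='1 1
2 4
3 9'
{
  echo "set term dumb"
  echo "plot '-' with lines"
  echo "$data"
  echo "e"
} > command.dat
cat command.dat
```

Tcl could assemble the same file from a list variable with string operations, which sidesteps the need to write the data to its own file.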
I have a Tcl script drakon_gen.tcl. I am running it from another script, run.tcl, like this:
source "d:\\del 3\\drakon_editor1.22\\drakon_gen.tcl"
When I run run.tcl I get the following output:
This utility generates code from a .drn file.
Usage: tclsh8.5 drakon_gen.tcl <options>
Options:
-in <filename> The input filename.
-out <dir> The output directory. Optional.
Now I need to pass run.tcl the options shown in the output. I tried many ways but I receive errors. What is the right way to add the options?
When you source a script into a Tcl interpreter, you are evaluating the script file in the context of the current interpreter. If it was written to be a standalone program, you may run into problems with conflicting variables and procedures in the global namespace. One way to avoid that is to investigate the use of slave interpreters (see the interp command) to provide a separate environment for the child script.
In your specific example it looks like you just need to provide some command line arguments. These are normally provided by the argv variable, which holds a list of all the command line arguments. If you define this list before sourcing the script, you can feed it the required command line, e.g.:
set original_argv $argv
set argv [list "--optionname" "value"]
source $additional_script_filename
set argv $original_argv