I'm relatively new to Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I edit and test my code, it is annoying to wait 30 seconds to see whether my updates work. Is there a way to run the code once to load the data I need, and then set up an artificial starting point, so that when I rerun the code (or type something into the command window) only the desired section (the part after the time-consuming step) runs, leaving the already-loaded data intact?
You can store the variables you want to keep as global variables, and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. Fortunately, it does not close the session: that is still a job for quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave once and then run your script from the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can then type individual statements at the interactive prompt, which use and modify these variables, just like running a script one line at a time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
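For example, a minimal session might look like this (the script name myscript.m and the line number 25 are illustrative; assume line 25 is just after the slow pixel-collection part):
dbstop myscript 25   % pause execution at line 25 of myscript.m
myscript             % runs up to the breakpoint, then drops to a debug prompt
% inspect or modify variables at the debug> prompt, then:
dbcont               % resume the rest of the script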
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
  % Section 1
  % ... do some computations here
  save my_data
else
  load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.
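A variant that avoids flipping the 1 to 0 by hand is to test for the saved file instead (a sketch assuming the same my_data file as above; exist returns 2 when the file is present):
if exist("my_data", "file")
  load my_data        % reuse previously computed results
else
  % Section 1: the slow computation
  % ...
  save my_data        % cache the results for the next run
end
% Section 2
% ... do some more computations here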
New programmer here. I have been trying to run my script through the Tk console in VMD. It works when I copy it into the Tk console, but when I source/load the script, it only runs part of the way before stopping and gives me two issues.
The issues I am having are:
it loads up the molecules but does not visually display them in the VMD window
it runs most of my script, but gets stuck at the put $total line and feeds back: invalid command name "put"
I am unsure if I have missed a step when sourcing scripts; when I manually paste in the whole script it seems to work. Wondering if anyone has input. Please see the script below:
mol new ubiquitin.psf
mol new pulling.dcd
set sel [atomselect top "index 942 963"]
set x [measure bond {59 60} frame all]
set total 0
for {set i 0} {$i < 100} {incr i} {
    puts "I inside first loop: $[measure bond {59 60} frame $i]"
    set total [expr {$total + [measure bond {59 60} frame $i]}]
}
put $total
expr {$total/100}
As Donal commented, your script fails due to a typo: put instead of puts.
The reason it works when run manually is because of a procedure called unknown. This procedure is called whenever the interpreter encounters an unknown command. It then tries different things to handle the command:
It will load a library, if that is known to contain the command.
It executes an external executable file, if that exists.
It runs a command from the command history, if applicable.
If the name is a unique prefix of an existing Tcl command, it runs that command instead.
All except the first point are only attempted in interactive mode. So, in that situation the last option kicks in and runs puts when you type put. However, when running a script, that doesn't happen and you get the error you mentioned.
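For completeness, the end of the script would become the following (also wrapping the average in puts, since a bare expr result is only echoed back in interactive mode, not when the script is sourced):
# 'puts', not 'put'
puts $total
# print the average explicitly; 100.0 keeps the division floating-point
puts [expr {$total / 100.0}]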
I am running a shell script which emits lots of lines while executing... they are just status output rather than the actual output.
I want them to be displayed in a JTextArea. I am working in Jython. The relevant piece of my code looks like:
self.console=JTextArea(20,80)
cmd = "/Users/name/galaxy-dist/run.sh"
p = subprocess.Popen(cmd, stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
self.console.append(p.stdout.read())
This waits until the command finishes and then prints all the output at once. But I want to show the output in real time, to mimic the console. Does anybody have an idea?
You're making things more complicated than they need to be. The Popen docs state the following about the stream arguments:
With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent. [my emphasis]
Therefore, if you want the subprocess' output to go to your stdout, simply leave those arguments blank:
subprocess.Popen(cmd, shell=True)
In fact, you aren't using any of the more advanced features of the Popen constructor, and this particular example doesn't need any parsing by the shell, so you can simplify it further with the subprocess.call() function:
subprocess.call(cmd)
If you still want the return code, simply set a variable equal to this call:
return_code = subprocess.call(cmd)
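That said, if you do want the output mirrored into the JTextArea in real time, as the question asks, here is a minimal sketch (assuming Jython with Swing, and the cmd and self.console from the question; pump is a helper name introduced here) that reads the pipe line by line on a background thread:
import subprocess
import threading
from javax.swing import SwingUtilities

def pump(proc, textarea):
    # read one line at a time so text appears as it is produced
    for line in iter(proc.stdout.readline, ''):
        # append on the Swing event dispatch thread for thread safety
        SwingUtilities.invokeLater(lambda l=line: textarea.append(l))

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT, shell=True)
threading.Thread(target=pump, args=(p, self.console)).start()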
I have an AMPL script that involves calling "solve" on a linear program many times. The solver I'm using is MINOS. After every time it solves, it outputs:
MINOS 5.51:
"option abs_boundtol 2.220446049250313e-16;" or
"option rel_boundtol 2.220446049250313e-16;" will change deduced dual values.
Is there a way to suppress this message?
I read this in the MINOS instructions:
For invocations from AMPL's solve command or of the form
minos stub ...
(where stub.nl is from AMPL's -ob or -og output options), you can use
outlev= to control the amount and kind of output:
outlev=0 no chatter on stdout
outlev=1 only report options on stdout
outlev=2 summary file on stdout
outlev=3 log file on stdout, no solution
outlev=4 log file, including solution, on stdout
which might be relevant but I don't understand it.
I have included "option solver_msg 0;" in my script; it turns off the announcement from MINOS that it got such-and-such an optimal value with so many iterations, but it doesn't affect the message I'm asking about here.
You can redirect the remaining solver output to /dev/null (or equivalent for your system) as follows:
solve > /dev/null;
As for the message about abs_boundtol and rel_boundtol, I think you can set them to a small positive value larger than 2.220446049250313e-16 to make the message go away. Note that this will affect the dual values computed for presolved constraints.
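For example, combining both suggestions (1e-12 is an arbitrary illustrative value above machine epsilon):
option abs_boundtol 1e-12;
option rel_boundtol 1e-12;
solve > /dev/null;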
See also https://groups.google.com/d/msg/ampl/ERJ8nF_LnNU/75yWK9deBjUJ
for me "option show_boundtol 0;" worked. You can try this. By default it is "option show_boundtol 1;".
You can read about it here (http://ftp.icm.edu.pl/packages/netlib/ampl/changes)
It seems that it is not possible for me to trigger an event in OpenNMS using a threshold...
First, the facts (in as much detail as I can give):
I want to monitor an HTML file, or more precisely its content.
If a value is not what I expect, OpenNMS should alert me.
My HTML file contains:
Document Count: 5
In /var/lib/opennms/rrd/snmp/NODE there are two files named "documentCount" (.jrb & .meta)
--> because of the http-datacollection-config.xml
In my log files is written:
INFO [LegacyScheduler-Thread-2-of-50] RrdUtils: updateRRD: updating RRD file /var/lib/opennms/rrd/snmp/21/documentCount.jrb with values '1385031023:5'
So the "5" is collected correctly.
Now I created a threshold for this case:
<threshold type="high" ds-type="node"
           value="4.0" rearm="2.0" trigger="1"
           triggeredUEI="uei.opennms.org/threshold/highThresholdExceeded"
           filterOperator="or" ds-name="documentCount" />
The threshold is also enabled in my collectd-configuration.xml.
In my opinion the threshold of 4 is exceeded, because the value is 5, so the highThresholdExceeded event should be fired. BUT IT DOESN'T.
So I'm here to ask if someone has an idea.
Regards, dawn
Check collectd.log with the following:
tail -f collectd.log | grep -i thresholding
A while back, threshold checking was moved so that it evaluates the data as it is being collected, rather than as a post-processing step over the RRD files.
Even with the log level at INFO, you should find some clues as to why the threshold rule is not matching any data.
I have a shell script that needs to run in a loop, performing a series of commands and then repeating when it's finished, hence the loop. Between each command there is a sleep of a few minutes. The "job" should never terminate. I can have the script start at boot time, but when the system is rebooted it needs to continue from where it left off in the sequence of commands.
How can I best accomplish this? Should I create a MySQL table as a queue of commands, and have the script delete each row after it successfully executes the corresponding command? Then, when it completes the loop, it would re-populate the queue table and start from the top.
It seems like I'm missing something that would make this simpler. Thanks in advance for your helpful insight!
You may want to rewrite your code so that it looks like this:
while : ; do
  case $step in
    0) command_1 && ((step++)) ;;
    1) command_2 && ((step++)) ;;
    ...
    9) command_9 && step=0 ;;
    *) echo "ERROR" >&2 ; exit 1 ;;
  esac
done
So you can tell what has already been done by testing the value of step.
Then, you may want to set a trap before the while loop is executed, so that, on exit, the value of step is written to a log file:
trap "echo step=$step > log_file" EXIT
Then, all you need to do is source the log file at the beginning of the script, and the next run will continue the job where it stopped.
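Concretely, the top of the script might look like this (the step=0 default and the log_file name are assumptions for this sketch):
step=0                            # default for a fresh run
[ -f log_file ] && . ./log_file   # restore the saved step, if any
trap 'echo step=$step > log_file' EXIT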
MySQL sounds like a pretty complex solution for this case. In general, I would think about some sort of filesystem-based marker. You could keep the current state of execution in one or more files, e.g. in /var/run, and make your script check for these files when it starts up, as sketched below.
When you complete one step, you rename the file to reflect the next step that needs to be done, and so on.
At the end, rename or remove it so that the next time the script runs, it will start a new cycle.
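A sketch of that idea (all names are illustrative; a single empty marker file whose name encodes the next step to run):
run=/var/run/myjob
[ -e $run.step1 ] || [ -e $run.step2 ] || touch $run.step1   # fresh start

if [ -e $run.step1 ]; then
    command_1 && mv $run.step1 $run.step2   # advance to the next step
elif [ -e $run.step2 ]; then
    command_2 && mv $run.step2 $run.step1   # cycle complete, start over
fi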
I think you can use a cron job for this. A cron job can run every minute, and with a "lock file" strategy the script only runs when the lock file is absent, i.e. when the previous run has finished.
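For example, flock(1) implements exactly that lock-file strategy in a single crontab line (paths are illustrative):
# m h dom mon dow   command
* * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/myscript.sh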