How to have a Shell script continue after reboot?

I have a shell script that needs to run in a loop, performing a series of commands and then repeating when it is finished, hence the loop. Between each command there is a sleep of a few minutes. The "job" should never terminate. I can have the script start at boot time, but it needs to continue where it left off in the sequence of commands when the system is rebooted.
How can I best accomplish this? Should I create a MySQL table holding the queue of commands, and have the script delete each row after it successfully executes that command? Then when it empties the queue it would re-populate the queue table and start from the top.
It seems like I'm missing something that would make this simpler. Thanks in advance for your helpful insight!

You may want to rewrite your code so that it looks like this:
while : ; do
  case $step in
    0) command_1 && ((step++)) ;;
    1) command_2 && ((step++)) ;;
    ...
    9) command_9 && step=0 ;;
    *) echo "ERROR: unexpected step $step" >&2 ; exit 1 ;;
  esac
done
So you would be aware of what has been done by testing the value of step.
Then, you may want to set a trap before the while loop is executed, so that, on exit, the value of step is written to a log file (single quotes matter here, so that $step is expanded when the trap fires rather than when it is set):
trap 'echo "step=$step" > log_file' EXIT
Then, all you need to do is source the log file at the beginning of the script (defaulting step to 0 when the file does not exist yet), and the next run will continue where the previous one stopped.
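Putting the start-up part together, a minimal sketch (the log_file name and the zero default are only examples):
#!/bin/bash
step=0                                      # default for the very first run
[ -f log_file ] && . ./log_file             # restores "step=N" written by the trap below
trap 'echo "step=$step" > log_file' EXIT    # persist progress whenever the script exits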

MySQL sounds like a pretty complex solution for this case. In general I would think about some sort of filesystem-based marker. You could keep the current state of execution in one or more files, e.g. in /var/run, and make your script check for these files when it starts up.
When you complete one step, you rename the file to reflect the next step that needs to be done, and so on.
At the end, rename or remove it so that the next time the script runs, it will start a new cycle.
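A sketch of that idea, keeping the step number in a state file (the path is a placeholder, and command_1/command_2 stand for your real commands):
STATE=/var/run/myjob.step
step=$(cat "$STATE" 2>/dev/null || echo 0)   # resume from the recorded step, or start at 0
while : ; do
  case $step in
    0) command_1 && step=1 ;;
    1) command_2 && step=0 ;;                # last step wraps around to a new cycle
  esac
  echo "$step" > "$STATE"                    # record progress after every step
done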

I think you can use a cron job for this. The cron job can run every minute, and with a "lock file" strategy the script only starts when the lock file is not present, i.e. when the previous run has finished.
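For example, a crontab entry using flock(1) (the paths are placeholders) only starts a new instance when the previous one has released the lock:
* * * * * /usr/bin/flock -n /var/run/myjob.lock /home/foo/bin/myjob.sh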

Related

Can you set an artificial starting point in your code in Octave?

I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I make edits to my code and test it, I find it annoying that I need to wait 30 seconds to see if my updates work or not. Is there a way I can run the code once at first to load the data I need, then set an artificial starting point so that when I rerun the code (or input something into the command window) it only runs a desired section (the section after the time-consuming part), leaving the already-loaded data intact?
You can declare the variables you want to keep as global variables,
and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. Fortunately, it does not close the session: that is still a job for quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave and then run your script at the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can type individual statements at the interactive terminal prompt, which use and modify these variables, just like running a script one line at the time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
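For example, a breakpoint can be set from the prompt before running the script (the script name myscript and line 25 are placeholders):
>> dbstop("myscript", 25)    % pause execution at line 25 of myscript.m
>> myscript                  % runs up to the breakpoint, then opens the debug prompt
>> dbcont                    % continue from there (dbstep and dbquit also work)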
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
  % Section 1
  % ... do some computations here
  save my_data
else
  load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.

TK/TCL console does not run full script, but works with manual input

New programmer here. I have been trying to run my script through the Tk console in VMD. It works when I copy it into tkconsole, but when I source/load my script into tkconsole, it only runs part of the script before stopping and gives me two issues.
The issues I am having are:
it loads up molecules but does not visually display it in the VMD window
it runs most of my script, but gets stuck at the put $total section and feeds me back invalid command name "put"
I am unsure if I have missed a step when sourcing scripts; however, when manually pasting in the whole script it seems to work. Wondering if anyone has input. Please see the script below:
mol new ubiquitin.psf
mol new pulling.dcd
set sel [atomselect top "index 942 963"]
set x [measure bond {59 60} frame all]
set total 0
for {set i 0} {$i <100 } {incr i} {
puts "I inside first loop: $[measure bond {59 60} frame $i]"; set total [expr {$total + [measure bond {59 60} frame $i]}]
}
put $total
expr {$total/100}
As Donal commented, your script fails due to a typo: put instead of puts.
The reason it works when run manually is because of a procedure called unknown. This procedure is called whenever the interpreter encounters an unknown command. It then tries different things to handle the command:
It will load a library, if that is known to contain the command.
It executes an external executable file, if that exists.
It runs a command from the command history, if applicable.
If the name is a unique prefix of an existing Tcl command, it runs that command instead.
All except the first point are only attempted in interactive mode. So, in that situation the last option kicks in and runs puts when you type put. However, when running a script, that doesn't happen and you get the error you mentioned.
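You can see the prefix expansion in an interactive session (a plain tclsh session is shown here as an illustration; test.tcl is a placeholder file containing the same put line, and the exact output may vary):
% put "hello"
hello
% source test.tcl
invalid command name "put"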

When loading a fixture ends with "Killed"

In my local environment I can load all my fixtures fine.
But on my staging server, one of the fixtures just ends with "Killed"
I grep'ed through my whole project for the word "Killed" but did not find it anywhere.
php bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading ReviewBundle\DataFixtures\ORM\LoadRatingTypes
Killed
I tailed my dev.log file and all I got was
[2016-03-17 15:26:54] doctrine.DEBUG: "START TRANSACTION" [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_block [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_page [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM rating_type [] []
My php log is empty
My mysql.log and mysql.err log is empty.
I have no idea where I should look
"Killed" often happens when PHP runs out of memory, and shuts down your fixture process. This happens mainly when your PHP script tries to load too many entities. Are you trying to load a massive number of fixtures?
There are workarounds to avoid memory overflows in your fixtures scripts. I answered a pretty similar question some weeks ago, you should have a look at it.
If you still have problems after implementing these modifications, you could also increase PHP memory limit.
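The usual workaround of that kind is to flush and clear the EntityManager in batches, so Doctrine does not keep every entity in memory at once. A rough sketch (the entity RatingType and its setName() method are made-up examples, not taken from the question):
// Fragment of a fixture class; assumes the usual ObjectManager import.
public function load(ObjectManager $manager)
{
    $batchSize = 100;
    for ($i = 1; $i <= 10000; $i++) {
        $ratingType = new RatingType();
        $ratingType->setName('type_' . $i);
        $manager->persist($ratingType);
        if ($i % $batchSize === 0) {
            $manager->flush();   // send the pending INSERTs to MySQL
            $manager->clear();   // detach managed entities to free memory
        }
    }
    $manager->flush();
    $manager->clear();
}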

kill mysql query in perl

I have a Perl script in which I create a table from existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records; because of this a query sometimes takes hours due to indexing problems, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from within the same script by watching memory consumption and execution time?
P.S. I am using the DBI module in Perl as the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see the example below) or issue a DBI cancel call (sub { $sth->cancel };).
The DBI documentation actually has a very good discussion of this as well as examples:
eval {
  local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
  eval {
    alarm($seconds);
    ... code to execute with timeout here (which may die) ...
  };
  # outer eval catches alarm that might fire JUST before this alarm(0)
  alarm(0); # cancel alarm (if code ran fast)
  die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, you just need the ALRM handler, instead of simply dying/cancelling, to first check the memory consumption of your script.
I won't go into the details of how to measure memory consumption, since that is an unrelated question which has likely already been answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable as described in the Perlmonks snippet Find memory usage of perl program.
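A minimal sketch of such a check, assuming Proc::ProcessTable is installed (the 2 GB threshold is an arbitrary example):
use Proc::ProcessTable;

# Return this process's memory size in bytes, or undef if it cannot be found.
sub my_memory_usage {
    my $t = Proc::ProcessTable->new;
    for my $p ( @{ $t->table } ) {
        return $p->size if $p->pid == $$;
    }
    return undef;
}

# Inside the ALRM handler, for example:
# die "TOO MUCH MEMORY\n" if my_memory_usage() > 2 * 1024 * 1024 * 1024;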
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620
This is the code from my script:
use Sys::SigAction;   # provides set_sig_handler

eval {
  eval { # Time out and interrupt work
    my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
      $dbh->clone()->do("KILL QUERY " . $dbh->{"mysql_thread_id"});
      die "TIMEOUT\n";
    });
    # Set alarm
    alarm($seconds);
    $sth->execute();
    # Clear alarm
    #alarm(0);
  };
  # Prevent race condition
  alarm(0);
  die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill that thread id.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've terminated a thread id, the rest of the Perl code will execute on an unblessed handle.
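A minimal sketch of the second-connection approach (the DSN and credentials are placeholders):
use DBI;

# Thread id of the stuck connection, saved when that connection was opened
my $stuck_thread_id = $dbh->{mysql_thread_id};

# Open a second connection as the same user and kill the query from there
my $killer = DBI->connect('dbi:mysql:database=mydb;host=localhost',
                          'user', 'password', { RaiseError => 1 });
$killer->do("KILL QUERY $stuck_thread_id");
$killer->disconnect;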

restart multi-threaded perl script and close mysql connection

I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use, and over a number of restarts the number of open MySQL connections builds up until I get the dreaded "Too Many Connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag to exit, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;
my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}
You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork & exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle. The child could re-exec itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.
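For reference, a minimal sketch of connecting through the Gofer proxy driver (the transport, DSN, and credentials are placeholders; see the DBD::Gofer documentation for the available transports):
use DBI;

# Wrap the ordinary MySQL DSN in a Gofer DSN. The "null" transport runs in-process;
# other transports talk to a separate proxy process that can share connections.
my $dsn = 'dbi:Gofer:transport=null;dsn=dbi:mysql:database=mydb;host=localhost';
my $dbh = DBI->connect($dsn, 'user', 'password', { RaiseError => 1 });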