Restart multi-threaded Perl script and close MySQL connection

I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use. And over a number of restarts the number of open MySQL connections builds up until I get the dreaded "Too Many Connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?

Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;

my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}

You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork & exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle. The child could re-exec itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}

If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.
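For illustration, here is a minimal sketch of connecting through the Gofer proxy driver (DBD::Gofer). The DSN details, credentials, and query are placeholders based on the DBD::Gofer documentation rather than anything from the original post, so adapt them to your setup:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder inner DSN -- replace with your real MySQL DSN.
my $inner_dsn = 'dbi:mysql:database=mydb;host=localhost';

# Route the connection through the Gofer proxy driver. transport=null keeps
# everything in-process and is mainly useful for testing; a real deployment
# would point a different transport at a shared gofer process.
my $dbh = DBI->connect(
    "dbi:Gofer:transport=null;dsn=$inner_dsn",
    'dbuser', 'dbpass',
    { RaiseError => 1 },
);

my ($now) = $dbh->selectrow_array('SELECT NOW()');
print "Server time via Gofer: $now\n";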

Related

MariaDB C connector bind/execute messes up memory allocation

I'm using the MariaDB C connector with prepare, bind and execute. It usually works, but one case ends up with "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test program MysqlDynamic.c shows:
The problem is connected only to the x509cert variable bound via bnd[9].
Freeing the memory only fails if bnd[9].is_null = 0; if is_null is set, execute ends normally.
Freeing the memory (using FreeStmt()) after bind and before execute ends normally.
Printing bnd[9].buffer before execute shows the (void*) points to the correct string buffer.
The behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or to strlen().
Other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump.
I defined a C structure with test data in my test program MysqlDynamic.c, which is bound into the MYSQL_BIND structure.
The bindings for x509cert (a string buffer) in bindInsTest() are:
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
Please get the details from the source file MysqlDynamic.c. Please adapt the defines in the source to your environment, verify the content, and run it. You will find compile info in the source code. MysqlDynamic -c will create the table, MysqlDynamic -i will insert 3 records each run, and MysqlDynamic -d will drop the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows that all records are inserted as expected.
You can disable loading of x509cert and/or picture by commenting out the defines on lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement in the code will be welcome.

Fileevent executed before proc finishes

I have the following code ...
lassign [ chan pipe ] chan chanW
fileevent $chan readable [ list echo $chan ]
proc echo { chan } {
...
}
proc exec { var1 var2 } {
....
puts $chanW "Some output"
....
}
Now, according to the man page, fileevent handlers are executed when the program idles.
Is it possible to force the fileevent handler to be executed before that? For instance, is it possible to force it to be executed immediately after the channel becomes readable, to somehow give it priority ... without using threads :)
Tcl never executes an event handler at “unexpected” points; it only runs them at points where it is asked to do so explicitly or, in some configurations (such as inside wish) when it is doing nothing else. You can introduce an explicit wait for events via two commands:
update
vwait
The update command clears down the current event queue, but does not wait for incoming events (strictly, it does an OS-level wait of length zero). The vwait command will also allow true waiting to happen, waiting until a named Tcl global variable has been written to. (It uses a C-level variable trace to do this, BTW.) Doing either of these will let your code process events before returning. Note that there are a number of other wrappers around this functionality; the geturl command in the http package (in “synchronous” mode) and the tkwait command in the Tk package both do this.
The complication? It's very easy to make your code reenter itself by accident while running the event loop. This can easily end up with you making lots of nested event loop calls, running you out of stack space; don't do that. Instead, prepare for reentrancy issues (a global variable check is one of the easiest approaches) so that you don't nest event loops.
Alternatively, if you're using Tcl 8.6 you can switch your code around to use coroutines. They let you stop the current procedure evaluation and return to the main event loop to wait for a future event before starting execution again: you end up with code that returns at the expected time, but which was suspended for a while first. If you want more information about this approach, please ask another separate question here.

Can I do multiple MySQL operations in Laravel?

I wrote code like this (I use MySQL, PDO, InnoDB, Laravel 4, localhost & Mac):
$all_queue = Queue1::all()->toArray(); // count about 10000
ob_end_clean();
foreach ($all_queue as $key => $value) {
    $priceCreate = array(...);
    Price::create($priceCreate);
    Queue1::where('id', $value['id'])->delete();
}
This worked for me (about 65 MB of RAM used), but while it was running, other parts of my program (such as other tables) didn't work. I can't even open my database in MySQL. My program and MySQL wait, and only when the process is completed do they work again.
I don't know what I'm supposed to do.
I think this is not a Laravel issue but rather my PHP or MySQL configuration.
This is my php.ini and MySQL config.
I assume
$all_foreach($all_queue as $key=>$value) {
Is
foreach($all_queue as $key=>$value) {
And that you have no errors (you have set debug true in your app config).
Try to set no time limit for your script.
In your php.ini
max_execution_time = 3600 ; this is one hour, set to 0 for no limit
Or in code
set_time_limit(0)
And if it's a memory problem, try to free memory and unset unused vars. It's good practice in long scripts to free space.
...
}//end foreach loop
unset($all_queue); //no longer needed, so unset it to free memory

How to interrupt Tcl_Eval

I have a small shell application that embeds Tcl 8.4 to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
Q: Is there any way to interrupt a very long Tcl_Eval command? I can handle a Ctrl+C signal, but how do I interrupt Tcl_Eval?
Tcl doesn't set signal handlers by default (except for SIGPIPE, which you probably don't care about at all) so you need to use an extension to the language to get the functionality you desire.
By far the simplest way to do this is to use the signal command from the TclX package (or from the Expect package, but that's rather more intrusive in other ways):
package require Tclx
# Make Ctrl+C generate an error
signal error SIGINT
Just evaluate a script containing those in the same interpreter before using Tcl_Eval() to start running the code you want to be able to interrupt; a Ctrl+C will cause that Tcl_Eval() to return TCL_ERROR. (There are other things you can do — such as running an arbitrary Tcl command which can trap back into your C code — but that's the simplest.)
If you're on Windows, the TWAPI package can do something equivalent apparently.
Here's a demonstration of it in action in an interactive session!
bash$ tclsh8.6
% package require Tclx
8.4
% signal error SIGINT
% puts [list [catch {
while 1 {incr i}
} a b] $a $b $errorInfo $errorCode]
^C1 {can't read "i": no such variableSIGINT signal received} {-code 1 -level 0 -errorstack {INNER push1} -errorcode {POSIX SIG SIGINT} -errorinfo {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} -errorline 2} {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} {POSIX SIG SIGINT}
%
Note also that this can leave the interpreter in a somewhat-odd state; the error message is a little bit odd (and in fact that would be a bug, but I'm not sure what in). It's probably more elegant to do it like this (in 8.6):
% try {
while 1 {incr i}
} trap {POSIX SIG SIGINT} -> {
puts "interrupt"
}
^Cinterrupt
%
Another way to solve this problem would be to fork your Tcl interpreter into a separate process and drive the stdin and stdout of the Tcl interpreter from your main process. Then, in the main process, you can intercept Ctrl+C and use it to kill the forked Tcl interpreter's process and fork a new one.
With this solution the Tcl interpreter will never lock up your main program. However, it's really annoying to add C-function extensions if you need them to run in the main process, because you have to use inter-process communication to invoke the functions.
I had a similar problem, where I start the Tcl interpreter in a worker thread. Except there's really no clean way to kill a worker thread, because it leaves allocated memory in an uncleaned-up state, leading to memory leaks. So really the only way to fix this is to use a process model instead, or to just keep quitting and restarting your application. Given the amount of time it takes to go with the process solution, I decided to stick with threads and fix the problem one of these days by moving the Ctrl+C handling into a separate process, rather than leaking memory every time I kill a thread and potentially destabilizing and crashing my program.
UPDATE:
My conclusion is that Tcl arrays are not normal variables: you can't use Tcl_GetVar2Ex to read the "tmp" variable after Eval, and tmp doesn't show up under "info globals". So to get around this I decided to call the Tcl library API directly, rather than the Eval shortcut, to build a dictionary object to return.
Tcl_Obj *dict_obj = Tcl_NewDictObj();
if (!dict_obj) {
    return TCL_ERROR;
}
Tcl_DictObjPut(
    interp,
    dict_obj,
    Tcl_NewStringObj("key1", -1),
    Tcl_NewStringObj("value1", -1)
);
Tcl_DictObjPut(
    interp,
    dict_obj,
    Tcl_NewStringObj("key2", -1),
    Tcl_NewStringObj("value2", -1)
);
Tcl_SetObjResult(interp, dict_obj);

Kill MySQL query in Perl

I have a Perl script in which I create a table from existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records. Because of this a query sometimes takes hours due to problems with indexing, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from the same script by watching memory consumption and execution time?
P.S. I am using the DBI module in Perl as the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see example below) or issue a DBI cancel call (sub { $sth->cancel };).
The DBI documentation actually has a very good discussion of this as well as examples:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
    eval {
        alarm($seconds);
        ... code to execute with timeout here (which may die) ...
    };
    # outer eval catches alarm that might fire JUST before this alarm(0)
    alarm(0); # cancel alarm (if code ran fast)
    die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, you just need the ALRM handler to first check the memory consumption of your script, instead of simply dying/cancelling.
I won't go into details of how to measure memory consumption, since it's an unrelated question that was likely already answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable as described in the PerlMonks snippet "Find memory usage of perl program"; a rough sketch follows.
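For illustration, here is a minimal sketch of an ALRM handler that checks the script's own memory before deciding whether to give up; the 500 MB threshold and the $sth statement handle are assumptions made up for the example:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::ProcessTable;

# Hypothetical threshold: give up if the script grows past ~500 MB.
my $MAX_BYTES = 500 * 1024 * 1024;

# Return this process's (virtual) memory size in bytes.
sub my_memory_usage {
    my $table = Proc::ProcessTable->new;
    for my $proc ( @{ $table->table } ) {
        return $proc->size if $proc->pid == $$;
    }
    return 0;
}

# Inside the timeout wrapper from the DBI example above, the handler could be:
# local $SIG{ALRM} = sub {
#     if ( my_memory_usage() > $MAX_BYTES ) {
#         $sth->cancel;    # $sth would be the active statement handle
#         die "OUT OF MEMORY\n";
#     }
#     alarm(5);            # otherwise re-arm the alarm and keep waiting
# };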
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620
This is the code from my script:
# Needs: use Sys::SigAction;
eval {
    eval { # Time out and interrupt work
        my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
            $dbh->clone()->do("KILL QUERY " . $dbh->{"mysql_thread_id"});
            die "TIMEOUT\n";
        });
        # Set alarm
        alarm($seconds);
        $sth->execute();
        # Clear alarm
        #alarm(0);
    };
    # Prevent race condition
    alarm(0);
    die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill that thread id.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've killed a thread id, the rest of the Perl code will execute ... on an unblessed handle.
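For illustration, here is a minimal sketch of that approach, using a second connection to issue the KILL QUERY when an alarm fires. The DSN, credentials, timeout, and the SELECT SLEEP(600) stand-in query are placeholders invented for the example:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Sys::SigAction qw(set_sig_handler);

# Placeholder connection details -- replace with your own.
my $dsn  = 'dbi:mysql:database=mydb;host=localhost';
my @auth = ('dbuser', 'dbpass');

my $dbh = DBI->connect($dsn, @auth, { RaiseError => 1 });

# Remember the thread id of the connection that will run the long query.
my %thread_ids = ( worker => $dbh->{mysql_thread_id} );

eval {
    # An "unsafe" handler so the alarm can interrupt the blocking DB call.
    my $handler = set_sig_handler('ALRM', sub {
        # Use a *second* connection to issue the KILL; the first one may be
        # blocked on a table lock and unable to send anything.
        my $killer = DBI->connect($dsn, @auth, { RaiseError => 1 });
        $killer->do("KILL QUERY $thread_ids{worker}");
        $killer->disconnect;
        die "TIMEOUT\n";
    });
    alarm(60);                          # give the query 60 seconds
    $dbh->do('SELECT SLEEP(600)');      # stand-in for the real stuck query
    alarm(0);
};
alarm(0);
warn "Query was killed\n" if $@ && $@ eq "TIMEOUT\n";
# As noted above, after a kill the worker handle may be in a dubious state.
$dbh->disconnect;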