Process doesn't terminate after Just-In-Time debugging - SWIG

I have a C++ SWIG DLL (.pyd).
I set a hard-coded breakpoint using __debugbreak() or assert(0).
JIT pops up a window to select a debugger. Two scenarios:
I close it. The process terminates.
I choose VS2019, resume execution, and exit as usual or terminate it. The process seems to end fine.
In both cases, the DLL file stays locked. When I use LockHunter, it lists a python.exe process; if I terminate that process, the lock is released. Before termination, Process Explorer shows that the process is still consuming memory.
Even after termination, the process is still listed in Process Explorer (a "dead" zombie)?
My current hack to deal with this:
import psutil

print( 'Killing leftover python.exe processes that are locking _pyconfstruct.pyd' )
for p in psutil.process_iter():
    if p.name() != 'python.exe':
        continue
    if p.status() != psutil.STATUS_STOPPED:
        continue
    #mem = p.memory_info()
    #print( p.name(), p.pid, p.status(), mem.rss )
    fl1 = r'c:\prj\confstruct\Debug\_pyconfstruct.pyd'
    fl2 = r'c:\prj\confstruct\Release\_pyconfstruct.pyd'
    try:
        #opened_files = str(p.open_files())  # slow and doesn't list dlls
        #print( opened_files )
        #if fl1 in opened_files or fl2 in opened_files:
        if 1:  # just kill all zombies
            #print( '%s (%d) is locking _pyconfstruct.pyd; killing it' % (p.name(), p.pid) )
            p.kill()
    except (psutil.NoSuchProcess, psutil.AccessDenied):  # was a bare except
        continue

Related

MariaDB C connector bind/execute messes up memory allocation

I'm using the MariaDB C connector with prepare, bind and execute. It usually works, but one case ends up in "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test MysqlDynamic.c shows:
the problem is connected only to the x509cert variable bound by bnd[9]
freeing memory only fails if bnd[9].is_null = 0; if it is null, execute ends normally
freeing memory (using FreeStmt()) after bind and before execute ends normally
printing bnd[9].buffer before execute shows the (void*) is connected to the correct string buffer
the behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or to strlen()
other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump
I defined a C structure test for the test data in my test program MysqlDynamic.c, which is bound in the MYSQL_BIND structure.
The bindings for x509cert (string buffer), see bindInsTest():
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
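For comparison, here is a minimal sketch of a conventional Connector/C string bind. This reflects an assumption, not something verified against MysqlDynamic.c: STMT_INDICATOR_NTS (-1) is an indicator constant meant for bulk operations, while buffer_length is an unsigned long, so assigning -1 to it wraps to a huge value that could make the connector read far past the buffer. (The question notes strlen() showed the same behavior, so this may not be the whole story.)

/* Sketch only: pass the real byte count instead of an indicator
 * constant. Assumes para->x509cert is a NUL-terminated string and
 * that x509len stays in scope until mysql_stmt_execute() runs. */
unsigned long x509len = para->x509cert ? (unsigned long) strlen(para->x509cert) : 0;

bnd[9].buffer_type   = MYSQL_TYPE_STRING;
bnd[9].buffer        = (void *) para->x509cert;
bnd[9].buffer_length = x509len;   /* size of the data in the buffer */
bnd[9].length        = &x509len;  /* actual length, read at execute time */
bnd[9].is_null       = &para->x509certI;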
Please get the details from the source file MysqlDynamic.c. Please adapt the defines in the source to your environment, verify the content, and run it. You will find compile info in the source code. MysqlDynamic -c will create the table, MysqlDynamic -i will insert 3 records each run, and MysqlDynamic -d will drop the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows that all records are inserted as expected.
You can disable loading of x509cert and/or picture by commenting out the defines at lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement to the code will be welcome.

A way to interrupt "after" in Tcl

I have written a Tcl script that waits for 20 seconds at one point.
Sometimes I have to come out of the wait (or rather stop it) partway through.
I have used:
after 20000
Is there a way to stop the waiting?
If you just use:
after 20000
Then no, there's no way (other than killing the process with Ctrl+C or kill, of course).
If you want an interruptible wait in Tcl, use this:
after 20000 set done 1
vwait done
That will continue to service events and so can be interrupted. Save the token returned by after 20000 set done 1 and you can cancel the timer (e.g., if you want to reschedule it); otherwise, set the (global) done variable yourself from any event handler and the wait will finish sooner.
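A minimal sketch of that pattern (stopEarly is a hypothetical helper; invoke it from whatever event should cut the wait short):

set token [after 20000 {set ::done 1}]

# Hypothetical helper: call from any event handler to stop waiting early.
proc stopEarly {} {
    after cancel $::token   ;# drop the pending timer
    set ::done 1            ;# releases the vwait below immediately
}

vwait ::done                ;# keeps servicing events while waiting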
In one of my previous projects at work, I had to deal with a similar situation where my scripts wait for N seconds and then continue, unless the user hits Ctrl+C, in which case the wait is canceled. Below is my script, reduced to the bare essentials to show how it works.
package require Tclx

proc abortTest {} {
    after cancel $::afterId
    set ::status "Ctrl+C detected"
}

#
# Main
#
puts "start"
set status "Ctrl+C not detected"

# when Ctrl+C is detected, call abortTest
signal trap SIGINT abortTest

# Wait for N seconds
set WAIT_TIME 10000
set afterId [after $WAIT_TIME {set status "Exit normally"}]
puts "Wait for [expr {$WAIT_TIME / 1000}] seconds"
vwait status ;# wait until status changes

puts $status
puts "end"
Discussion
The signal command says: if the user hits Ctrl+C, call abortTest.
Notice that in this form, the after command returns right away, but the Tcl interpreter will execute the set status ... command 10 seconds later.
The vwait command waits for the variable status to change its value.
After 10 seconds, if the user has not hit Ctrl+C, the variable status is set to "Exit normally"; at that point, the vwait command exits.
If the user hits Ctrl+C during that time, status is changed by abortTest and the vwait command exits.
Update
While reducing the original script, I took out the part that cancels the after command. Now I have put it back in. Notice that the after command returns an ID, which I use in abortTest to cancel the pending action.

How to interrupt Tcl_Eval

I have a small shell application that embeds Tcl 8.4 to execute some set of Tcl code. The Tcl interpreter is initialized using Tcl_CreateInterp. Everything is very simple:
user types Tcl command
the command gets passed to Tcl_Eval for evaluation
repeat
Q: Is there any way to interrupt a very long Tcl_Eval command? I can process a 'Ctrl+C' signal, but how do I interrupt Tcl_Eval?
Tcl doesn't set signal handlers by default (except for SIGPIPE, which you probably don't care about at all), so you need to use an extension to the language to get the functionality you desire.
By far the simplest way to do this is to use the signal command from the TclX package (or from the Expect package, but that's rather more intrusive in other ways):
package require Tclx
# Make Ctrl+C generate an error
signal error SIGINT
Just evaluate a script containing those in the same interpreter before using Tcl_Eval() to start running the code you want to be able to interrupt; a Ctrl+C will cause that Tcl_Eval() to return TCL_ERROR. (There are other things you can do — such as running an arbitrary Tcl command which can trap back into your C code — but that's the simplest.)
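If you would rather stay at the C level, a hedged sketch of one such alternative uses Tcl's async API: a SIGINT handler calls Tcl_AsyncMark, and the registered callback makes the running Tcl_Eval() fail at its next safe point. The handler names and the error message here are illustrative, not from the original answer:

#include <signal.h>
#include <tcl.h>

static Tcl_AsyncHandler interruptToken;

/* Invoked by the interpreter at a safe point after Tcl_AsyncMark. */
static int
InterruptProc(ClientData clientData, Tcl_Interp *interp, int code)
{
    if (interp != NULL) {
        Tcl_SetObjResult(interp, Tcl_NewStringObj("interrupted", -1));
    }
    return TCL_ERROR;   /* makes the active Tcl_Eval() return TCL_ERROR */
}

/* Tcl_AsyncMark is documented as safe to call from a signal handler. */
static void
SigintHandler(int signum)
{
    Tcl_AsyncMark(interruptToken);
}

/* Call once after Tcl_CreateInterp(). */
void
InstallInterruptHandler(void)
{
    interruptToken = Tcl_AsyncCreate(InterruptProc, NULL);
    signal(SIGINT, SigintHandler);
}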
If you're on Windows, the TWAPI package can do something equivalent apparently.
Here's a demonstration of it in action in an interactive session!
bash$ tclsh8.6
% package require Tclx
8.4
% signal error SIGINT
% puts [list [catch {
while 1 {incr i}
} a b] $a $b $errorInfo $errorCode]
^C1 {can't read "i": no such variableSIGINT signal received} {-code 1 -level 0 -errorstack {INNER push1} -errorcode {POSIX SIG SIGINT} -errorinfo {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} -errorline 2} {can't read "i": no such variableSIGINT signal received
while executing
"incr i"} {POSIX SIG SIGINT}
%
Note also that this can leave the interpreter in a somewhat odd state; the error message is a little bit odd (and in fact that would be a bug, though I'm not sure what it's in). It's probably more elegant to do it like this (in 8.6):
% try {
    while 1 {incr i}
} trap {POSIX SIG SIGINT} -> {
    puts "interrupt"
}
^Cinterrupt
%
Another way to solve this problem is to fork your Tcl interpreter into a separate process and drive the stdin and stdout of the Tcl interpreter from your main process. Then, in the main process, you can intercept Ctrl+C and use it to kill the forked Tcl interpreter's process and fork a new one.
With this solution the Tcl interpreter will never lock up your main program. However, it is really annoying to add C-function extensions if they need to run in the main process, because you have to use inter-process communication to invoke them.
I had a similar problem to solve, where I start the Tcl interpreter in a worker thread. Except there is really no clean way to kill a worker thread, because doing so leaves allocated memory in an uncleaned-up state, leading to memory leaks. So really the only way to fix this is to use a process model instead, or to just keep quitting and restarting your application. Given the amount of time it takes to go with the process solution, I decided to stick with threads and to fix the problem one of these days by getting Ctrl+C to work via a separate process, rather than leaking memory every time I kill a thread and potentially destabilizing and crashing my program.
UPDATE:
My conclusion is that Tcl arrays are not normal variables: you can't use Tcl_GetVar2Ex to read the "tmp" variable after Eval, and tmp doesn't show up under "info globals". So to get around this I decided to call the Tcl library API directly, rather than the Eval shortcut, to build a dictionary object to return.
/* Build the result dictionary directly with the Tcl library API. */
Tcl_Obj *dict_obj = Tcl_NewDictObj();
if (!dict_obj) {
    return TCL_ERROR;
}
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key1", -1),
               Tcl_NewStringObj("value1", -1));
Tcl_DictObjPut(interp, dict_obj,
               Tcl_NewStringObj("key2", -1),
               Tcl_NewStringObj("value2", -1));

/* Hand the dict back as the interpreter's result. */
Tcl_SetObjResult(interp, dict_obj);

Ruby Shoes and MySQL: GUI freezes, should I use threads?

I'm trying to learn Shoes and decided to make a simple GUI to run an SQL script line by line.
My problem is that pressing the button that executes my function freezes the GUI for as long as the function takes to run the script.
With long scripts this might take several minutes.
Someone had a similar problem (http://www.stackoverflow.com/questions/958662/shoes-and-heavy-operation-in-separate-thread) and the suggestion was simply to put the intensive work in a Thread: if I copy the math code from the previously mentioned question in place of my MySQL code, the GUI works without freezing, so this probably hints at a problem with how I'm using the mysql adapter?
Below is a simplified version of the code:
problem.rb:
# copy the mysql-gem if not in gem-folder already
Shoes.setup do
  gem 'mysql'
end

require "mysql"

# the function that does the mysql stuff
def someFunction
  con = Mysql::real_connect("myserver", "user", "pass", "db")
  scriptFile = File.open("myfile.sql", "r")
  script = scriptFile.read
  scriptFile.close
  result = []
  script.each_line do |line|
    result << con.query(line)
  end
  return result
end

# the Shoes app with the Thread
Shoes.app do
  stack do
    button "Execute someFunction" do
      Thread.start do
        result = someFunction
        para "Done!"
      end
    end
  end
  stack do
    button "Can't click me when GUI is frozen" do
      alert "GUI wasn't frozen?"
    end
  end
end
I think the problem arises from the scheduling being done by Ruby, not by the operating system: with green threads, a C extension that blocks (as the mysql adapter seems to do while a query runs) blocks every Ruby thread, including the GUI. Probably just a special case with Shoes + mysql.
As a workaround I'd suggest you spawn a separate process for the script and use socket- or file-based communication between the processes.
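A minimal sketch of that workaround, assuming a Unix-like shell; run_sql.rb is a hypothetical helper script that executes the SQL and creates done.marker when finished:

Shoes.app do
  @status = para "Idle"
  button "Execute someFunction" do
    # Hand the work to a child process; the shell backgrounds it and
    # returns immediately, so the Shoes event loop never blocks.
    system("ruby run_sql.rb myfile.sql && touch done.marker &")
    @status.text = "Running..."
  end
  # Poll for the marker file once a second instead of blocking on the query.
  every(1) do
    @status.text = "Done!" if File.exist?("done.marker")
  end
end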

Restart multi-threaded Perl script and close MySQL connection

I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue: the thread that reads the command file does not have access to the DBI handle I use, and over a number of restarts I seem to build up open MySQL connections until I get the dreaded "Too many connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Is there any way to close the connections, or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag to exit, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;

my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}
You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork and exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle, while the child re-execs itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely, because the handle will be closed automatically upon the parent's exit. Unless, of course, you need to do something more complex upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.
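A minimal sketch of what that looks like, assuming the DBD::Gofer null transport (in-process, useful for trying the setup; a stream or http transport would route all scripts through one shared proxy). The credentials and database name are placeholders:

#!/usr/bin/perl
use strict; use warnings;
use DBI;

# Wrap the real DSN in a Gofer DSN; the script talks to Gofer, and
# Gofer manages the underlying mysql connection.
my $real_dsn = "dbi:mysql:database=test";
my $dbh = DBI->connect(
    "dbi:Gofer:transport=null;dsn=$real_dsn",
    "user", "password",
    { RaiseError => 1 },
);

my $rows = $dbh->selectall_arrayref("SELECT 1");
print "got: $rows->[0][0]\n";
$dbh->disconnect;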