MySQL in a multithreaded program - my_thread_global_end()

I'm having serious problems with mysql using pthreads. The error I get after ending my program:
"Error in my_thread_global_end(): 1 threads didn't exit"
I called mysql_library_init in main before starting any threads. To keep things simple, I start just one thread. After the thread has finished (joined with pthread_join), I call mysql_library_end in main. In the thread itself I call mysql_init. For some reason this seems to be incorrect, because I get the error. I use MySQL 5.6 and link with libmysqlclient.a.
The mysql manual is extremely unclear and contradictory, so I hope someone with a logical mind can explain this to me:
"In a nonmulti-threaded environment, mysql_init invokes mysql_library_init automatically as necessary. However, mysql_library_init is not thread-safe in a multi-threaded environment, and thus neither is mysql_init. Before calling mysql_init, either call mysql_library_init prior to spawning any threads, or use a mutex to protect the mysql_library_init call. This should be done prior to any other client library call."
First line: so mysql_init ONLY invokes mysql_library_init in a NON-multi-threaded environment "when needed" (when is it needed in a non-multi-threaded environment anyway?), and can I conclude from this that mysql_init() thinks it is NOT needed in a multi-threaded environment? I guess not, so fine, I call mysql_library_init in my main... Then I read everywhere that I should also call mysql_init within the thread after that. I want each thread to have its own connection, so fine, I also do that, so each thread has its own MYSQL struct. But the manual says mysql_init is not thread-safe... Uhm, OK... So even with just 1 thread, I still have the problem...
main -> mysql_library_init
main -> create 1 pthread
pthread -> mysql_init
pthread -> mysql_real_connect
pthread -> mysql_close
....
I press Ctrl+C after a few seconds (the MySQL connection had already been closed in the thread by then), so the cleanup starts:
main -> pthread_cancel
main -> pthread_join
main -> mysql_library_end
RESULT: Error in my_thread_global_end: 1 threads didn't exit
........

int main( void )
{
if ( mysql_library_init( 0, NULL, NULL ) != 0 ) { ... }
if ( mysql_thread_safe() ) { ... } // This goes fine
sem_init( &queue.totalStored, 0, 0 );
pthread_mutex_init( &mutex_bees, NULL );
pthread_create( &workerbees[tid], &attr, BeeWork, ( void * ) tid );
pthread_attr_destroy( &attr );
while ( recv_signal == 0 )
{
errno = 0;
sock_c = accept( sock_s, NULL, NULL );
if ( ( sock_c == -1 ) && ( errno == EINTR ) )
{
// do stuff
if ( recv_signal == SIGHUP ) { /* do stuff*/ }
} else { /* do stuff */ }
}
// CLEANUP
close( sock_s );
RC = pthread_cancel( workerbees[tid] );
if ( RC != 0 ) { Log( L_ERR, "Unsuccessful pthread_cancel()" ); }
// WAIT FOR THREADS TO FINISH WORK AND EXIT
RC = pthread_join( workerbees[tid], &res );
if ( RC != 0 ) { Log( L_ERR, "Error: Unsuccessful pthread_join()" ); }
if ( res == PTHREAD_CANCELED )
{ /* print debug stuff */ }
else { /* print debug stuff */ }
mysql_library_end();
sem_destroy( &queue.totalStored );
exit( 0 );
}
void *BeeWork( void *t )
{
// DISABLE SIGNALS THAT main() ALREADY LISTENS TO
sigemptyset( &sigset );
sigaddset( &sigset, SIGINT );
sigaddset( &sigset, SIGTERM );
sigaddset( &sigset, SIGQUIT );
sigaddset( &sigset, SIGHUP );
pthread_sigmask( SIG_BLOCK, &sigset, NULL );
MYSQL *conn;
conn = mysql_init( NULL );
if ( ! mysql_real_connect( conn, server, prefs.mysql_user, prefs.mysql_pass, prefs.mysql_db, 0, prefs.mysql_sock, 0 ) ) { /* error */ }
mysql_close( conn );
// Do stuff
...
pthread_exit( ( void * ) t );
}

I guess I can answer my own question. I found out that my pthread cleanup handler (installed with pthread_cleanup_push) was not executed, because the thread reached pthread_exit before main could cancel it. I had pthread_cleanup_pop( 0 ) and changed it to pthread_cleanup_pop( 1 ), so the cleanup handler is also executed when the thread exits before main can cancel it. In that cleanup handler, mysql_thread_end now actually gets a chance to run, and that fixed the problem.
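For anyone hitting the same thing, here is a minimal sketch of that pattern applied to BeeWork, assuming one connection per thread (BeeCleanup is an illustrative handler name, and the signal blocking from the original thread function is elided):

#include <mysql.h>
#include <pthread.h>

static void BeeCleanup( void *arg )
{
    (void) arg;
    /* Release the per-thread state the client library allocated in this
     * thread -- the resource my_thread_global_end() complains about. */
    mysql_thread_end();
}

void *BeeWork( void *t )
{
    pthread_cleanup_push( BeeCleanup, NULL );

    MYSQL *conn = mysql_init( NULL );
    /* ... mysql_real_connect(), queries ... */
    mysql_close( conn );

    /* Pop with a non-zero argument so the handler also runs when the
     * thread finishes on its own, not only when it is cancelled. */
    pthread_cleanup_pop( 1 );
    return t;
}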

Addition: Apparently, the MySQL client library is not designed to use connection handles (MYSQL*) across threads.
Do not pass a MYSQL* from one thread to another.
Do not use std::async or similar in conjunction with MySQL functions. (Such code may or may not be executed in a separate thread.)

Early ending of code

In my code, main returns before all the tasks are done. What do I have to change in my code so that all tasks finish before the program ends?
package main
import (
"fmt"
"math/rand"
"time"
)
// run x tasks at random intervals
// - a task is a goroutine that runs for 2 seconds.
// - a task runs concurrently to other task
// - the interval between task is between 0 and 2 seconds
func main() {
// set x to the number of tasks
x := 4
// random numbers generation initialization
random := rand.New(rand.NewSource(1234))
for num := 0; num < x; num++ {
// sleep for a random amount of milliseconds before starting a new task
duration := time.Millisecond * time.Duration(random.Intn(2000))
time.Sleep(duration)
// run a task
go func() {
// this is the work, expressed by sleeping for 2 seconds
time.Sleep(2 * time.Second)
fmt.Println("task done")
}()
}
}
Yes, as @Laney mentions, this can be done using both WaitGroups and channels. Refer to the code below.
Waitgroups:
package main
import (
"fmt"
"math/rand"
"sync"
"time"
)
// run x tasks at random intervals
// - a task is a goroutine that runs for 2 seconds.
// - a task runs concurrently to other task
// - the interval between task is between 0 and 2 seconds
func main() {
// set x to the number of tasks
x := 4
// random numbers generation initialization
var wg sync.WaitGroup
random := rand.New(rand.NewSource(1234))
for num := 0; num < x; num++ {
// sleep for a random amount of milliseconds before starting a new task
duration := time.Millisecond * time.Duration(random.Intn(2000))
time.Sleep(duration)
//
wg.Add(1)
// run a task
go func() {
// this is the work, expressed by sleeping for 2 seconds
time.Sleep(2 * time.Second)
fmt.Println("task done")
wg.Done()
}()
}
wg.Wait()
fmt.Println("All tasks done")
}
Output:
task done
task done
task done
task done
All tasks done
On playground : https://play.golang.org/p/V-olyX9Qm8
Using channels:
package main
import (
"fmt"
"math/rand"
"time"
)
// run x tasks at random intervals
// - a task is a goroutine that runs for 2 seconds.
// - a task runs concurrently to other task
// - the interval between task is between 0 and 2 seconds
func main() {
//Channel to indicate completion of a task, can be helpful in sending a result value also
results := make(chan int)
// set x to the number of tasks
x := 4
t := 0 //task tracker
// random numbers generation initialization
random := rand.New(rand.NewSource(1234))
for num := 0; num < x; num++ {
// sleep for a random amount of milliseconds before starting a new task
duration := time.Millisecond * time.Duration(random.Intn(2000))
time.Sleep(duration)
//
// run a task
go func() {
// this is the work, expressed by sleeping for 2 seconds
time.Sleep(2 * time.Second)
fmt.Println("task done")
results <- 1 //may be something possibly relevant to the task
}()
}
//Iterate over the channel till the number of tasks
for result := range results {
fmt.Println("Got result", result)
t++
if t == x {
close(results)
}
}
fmt.Println("All tasks done")
}
Output:
task done
task done
Got result 1
Got result 1
task done
Got result 1
task done
Got result 1
All tasks done
Playground : https://play.golang.org/p/yAFdDj5nhb
In Go, as in most languages, the process will exit when the entrypoint main() function exits.
Because you're spawning a number of goroutines, the main function ends before the goroutines are all done, causing the process to exit without finishing those goroutines.
As others have suggested, you want to block your main() function until all the goroutines are done, and a couple of the most common ways to do that are using semaphores (sync.WaitGroup) or channels (see Go by Example).
There are many options. For example, you can use channels or sync.WaitGroup
The program ends when the main goroutine ends.
You may use:
a waitgroup - a very convenient way to wait until all tasks are done
channels - a read from a channel blocks until new data arrives or the channel gets closed
a naïve sleep - good only for example purposes

Error while defining the predicate for thrust Min_element, using zip_iterators for device_ptr

In this simple example I am trying to find the minimum value among the elements that have not yet been visited.
float *cost=NULL;
cudaMalloc( (void **) &cost, 5 * sizeof(float) );
bool *visited=NULL;
cudaMalloc( (void **) &visited, 5 * sizeof(bool) );
thrust::device_ptr< float > dp_cost( cost );
thrust::device_ptr< bool > dp_visited( visited );
typedef thrust::device_ptr<bool> BoolIterator;
typedef thrust::device_ptr<float> ValueIterator;
BoolIterator bools_begin = dp_visited, bools_end = dp_visited +5;
ValueIterator values_begin = dp_cost, values_end = dp_cost +5;
typedef thrust::tuple<BoolIterator, ValueIterator> IteratorTuple;
typedef thrust::tuple<bool, float> DereferencedIteratorTuple;
typedef thrust::zip_iterator<IteratorTuple> NodePropIterator;
struct nodeProp_comp : public thrust::binary_function<DereferencedIteratorTuple, DereferencedIteratorTuple, bool>
{
__host__ __device__
bool operator()( const DereferencedIteratorTuple lhs, const DereferencedIteratorTuple rhs ) const
{
if( !( thrust::get<0>( lhs ) ) && !( thrust::get<0>( rhs ) ) )
{
return ( thrust::get<1>( lhs ) < thrust::get<1>( rhs ) );
}
else
{
return !( thrust::get<0>( lhs ) );
}
}
};
NodePropIterator iter_begin (thrust::make_tuple(bools_begin, values_begin));
NodePropIterator iter_end (thrust::make_tuple(bools_end, values_end));
NodePropIterator min_el_pos = thrust::min_element( iter_begin, iter_end, nodeProp_comp() );
DereferencedIteratorTuple tmp = *min_el_pos;
But on compilation I get this error:
thrust_min.cu(99): error: no instance of overloaded function "thrust::min_element" matches the argument list
argument types are: (NodePropIterator, NodePropIterator, nodeProp_comp)
1 error detected in the compilation of "/tmp/tmpxft_00005c8e_00000000-6_thrust_min.cpp1.ii".
I compile using :
nvcc -gencode arch=compute_30,code=sm_30 -G -g thrust_min.cu -Xcompiler -rdynamic,-Wall,-Wextra -lineinfo -o thrust_min
I am using gcc version 4.6.3 20120306 (Red Hat 4.6.3-2) (GCC), CUDA 5.
I get no error if I omit the predicate during the call to min_element, which I guess uses the default less functor.
Please help.
I asked around about this, and it seems that, in C++03, a local type (a struct such as nodeProp_comp defined inside a function) can't be used as a template parameter because it has no linkage. You may want to review this (non-thrust related) SO question/answer for additional discussion.
Thrust, being a template library, passes your comparator type around as a template parameter, so it depends on this. So I think the recommendation is to put the functors that you use in thrust operations at global scope.
If you think there are other issues at play, you may want to post a new question with examples. However for the code you've posted in this question, I believe this is the reason, and I've demonstrated that reordering the code fixes the issue. Note the struct definition is really what is at issue here, not the typedefs.
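The underlying C++03 rule can be illustrated without any Thrust or CUDA at all. This hypothetical snippet fails to compile in C++03 mode (for example g++ -std=c++98) for the same reason, and compiles once the struct is moved out to namespace scope:

#include <vector>

int main()
{
    // Local to main(), so the type has no linkage in C++03.
    struct LocalItem { int x; };

    // Error in C++03: a type with no linkage cannot be used as a
    // template argument. Moving LocalItem to namespace scope (as
    // suggested above for the Thrust functor) fixes it.
    std::vector<LocalItem> v;
    (void) v;
    return 0;
}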

Tcl C API: redirect stdout of embedded Tcl interp to a file without affecting the whole program

#include <tcl.h>
int main(int argc, char** argv)
{
Tcl_Interp *interp = Tcl_CreateInterp();
Tcl_Channel stdoutChannel = Tcl_GetChannel(interp, "stdout", NULL);
Tcl_UnregisterChannel(interp, stdoutChannel);
Tcl_Channel myChannel = Tcl_OpenFileChannel(interp, "/home/aminasya/nlb_rundir/imfile", "w", 0744);
Tcl_RegisterChannel(interp, myChannel);
Tcl_Eval(interp, "puts hello");
}
In this code I have tried to close the stdout channel and redirect it to a file (as described in Get the output from Tcl C Procedures). After running, "imfile" is created but empty. What am I doing wrong?
I have seen How can I redirect stdout into a file in tcl too, but I need to do it using the Tcl C API.
I have also tried this way, but again no result.
FILE *myfile = fopen("myfile", "W+");
Tcl_Interp *interp = Tcl_CreateInterp();
Tcl_Channel myChannel = Tcl_MakeFileChannel(myfile, TCL_WRITABLE);
Tcl_SetStdChannel(myChannel, TCL_STDOUT);
The difficulty in your case is the interaction between the standard channels of a Tcl interpreter and the file descriptors (FDs) of the standard streams as seen by the main program (and the C runtime), coupled with the semantics of open(2) in Unix.
The process which makes all your output redirected rolls like this:
The OS makes sure the three standard file descriptors (FDs) are open (and numbered 0, 1 and 2, with 1 being the standard output) by the time the program starts executing.
As soon as the Tcl interpreter you create initializes its three standard channels (this happens when you call Tcl_GetChannel() for "stdout", as described here), they get associated with those already existing three FDs in the main program.
Note that the underlying FDs are not cloned, instead, they are just "borrowed" from the enclosing program. In fact, I think in 99% of cases this is a sensible thing to do.
When you close the standard channel stdout in your Tcl interpreter (which happens when unregistering it), the underlying FD (1) is closed as well.
The call to fopen(3) internally calls open(2) which picks up the lowest free FD, which is 1, and thus the standard output stream as understood by the main program (and the C runtime) is now connected to that opened file.
You then create a Tcl channel out of your file and register it with the interpreter. The channel indeed becomes stdout for the interpreter.
In the end, both writes to the standard output stream in your main program and writes to the standard output channel in your Tcl interpreter are sent to the same underlying FD and hence end up in the same file.
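To see that lowest-free-FD behaviour in isolation, here is a tiny standalone demo (no Tcl involved; the file name demo.txt is arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    close(1);   /* free FD 1, i.e. standard output */
    int fd = open("demo.txt", O_WRONLY | O_CREAT, 0644);
    /* open(2) returns the lowest free descriptor, so fd is 1 here
     * (FDs 0 and 2 are still taken), and everything written to
     * stdout now lands in demo.txt. */
    printf("fd = %d\n", fd);
    return 0;
}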
I can see two ways to deal with this behaviour:
Play a neat trick to "reconnect" the FD 1 to the same stream it was initially opened to and make the file opened for the Tcl interpreter's stdout use an FD greater than 2.
Instead of first letting the Tcl interpreter initialize its standard channels and then reinitializing one of them, initialize them all manually before letting that auto-vivification machinery kick in.
Both approaches have their pros and cons:
"Preserving FD 1" is generally simpler to implement, and if you want to redirect only stdout in your Tcl interpreter, and leave the two other standard channels to be connected to the same standard streams used by the enclosing program, this approach seems to be sensible to employ. The possible downsides are:
Too much magic involved (extensive commenting the code is advised).
Not sure how this would work on Windows: there's no dup(2) there (see below) and some other approach might be needed.
Not using the standard streams for stdin and stderr from the enclosing program might be useful.
Initializing the standard channels in the Tcl interpreter by hand requires more code and supposedly warrants the correct ordering (stdin, stdout, stderr, in that order). If you want the remaining two standard channels in your Tcl interpreter to be connected to the matching streams of the enclosing program, this approach is more work; the first approach does this for free.
Here's how to preserve FD 1 to make only stdout in the Tcl interpreter be connected to a file; for the enclosing program FD 1 is still connected to the same stream as set up by the OS.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <tcl.h>
int redirect(Tcl_Interp *interp)
{
Tcl_Channel chan;
int rc;
int fd;
/* Get the channel bound to stdout.
* Initialize the standard channels as a byproduct
* if this wasn't already done. */
chan = Tcl_GetChannel(interp, "stdout", NULL);
if (chan == NULL) {
return TCL_ERROR;
}
/* Duplicate the descriptor used for stdout. */
fd = dup(1);
if (fd == -1) {
perror("Failed to duplicate stdout");
return TCL_ERROR;
}
/* Close stdout channel.
* As a byproduct, this closes the FD 1, we've just cloned. */
rc = Tcl_UnregisterChannel(interp, chan);
if (rc != TCL_OK)
return rc;
/* Duplicate our saved stdout descriptor back.
* dup() semantics are such that if it doesn't fail,
* we get FD 1 back. */
rc = dup(fd);
if (rc == -1) {
perror("Failed to reopen stdout");
return TCL_ERROR;
}
/* Get rid of the cloned FD. */
rc = close(fd);
if (rc == -1) {
perror("Failed to close the cloned FD");
return TCL_ERROR;
}
/* Open a file for writing and create a channel
* out of it. As FD 1 is occupied, this FD won't become
* stdout for the C code. */
chan = Tcl_OpenFileChannel(interp, "aaa.txt", "w", 0666);
if (chan == NULL)
return TCL_ERROR;
/* Since stdout channel does not exist in the interp,
* this call will make our file channel the new stdout. */
Tcl_RegisterChannel(interp, chan);
return TCL_OK;
}
int main(void)
{
Tcl_Interp *interp;
int rc;
interp = Tcl_CreateInterp();
rc = redirect(interp);
if (rc != TCL_OK) {
fputs("Failed to redirect stdout", stderr);
return 1;
}
puts("before");
rc = Tcl_Eval(interp, "puts stdout test");
if (rc != TCL_OK) {
fputs("Failed to eval", stderr);
return 2;
}
puts("after");
Tcl_Finalize();
return 0;
}
Building and running (done in Debian Wheezy):
$ gcc -W -Wall -I/usr/include/tcl8.5 -L/usr/lib/tcl8.5 -ltcl main.c
$ ./a.out
before
after
$ cat aaa.txt
test
As you can see, the string "test" output by puts goes to the file while the strings "before" and "after", which are write(2)n to FD 1 in the enclosing program (this is what puts(3) does in the end) go to the terminal.
The hand-initialization approach would be something like this (sort of pseudocode):
Tcl_Channel chanIn, chanOut, chanErr;
/* Create and register the channels in the order stdin, stdout, stderr,
 * so the interpreter adopts them as its standard channels in that order.
 * (The names stdin/stdout/stderr themselves can't be used as variables
 * in real C code -- they are macros from <stdio.h>.) */
chanIn  = Tcl_OpenFileChannel(interp, "/dev/null", "r", 0666);
chanOut = Tcl_OpenFileChannel(interp, "aaa.txt", "w", 0666);
chanErr = Tcl_OpenFileChannel(interp, "/dev/null", "w", 0666);
Tcl_RegisterChannel(interp, chanIn);
Tcl_RegisterChannel(interp, chanOut);
Tcl_RegisterChannel(interp, chanErr);
I have not tested this approach though.
At the level of the C API, and assuming that you are on a Unix-based OS (i.e., not Windows), you can do this far more simply by using the right OS calls:
#include <fcntl.h>
#include <unistd.h>
// ... now inside a function
int fd = open("/home/aminasya/nlb_rundir/imfile", O_WRONLY|O_CREAT, 0744);
// Important: deal with errors here!
dup2(fd, STDOUT_FILENO);
close(fd);
You could also use dup() to save the old stdout (to an arbitrary number that Tcl will just ignore) so that you can restore it later, if desired.
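To make the save-and-restore idea concrete, here is a hedged sketch of that variant (plain POSIX; error checks mostly omitted for brevity):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

// ... inside a function, as above
fflush(stdout);                   // flush any pending stdio output first
int saved = dup(STDOUT_FILENO);   // keep a copy of the original stdout
int fd = open("/home/aminasya/nlb_rundir/imfile", O_WRONLY|O_CREAT, 0744);
// Important: check saved, fd and the dup2() results in real code.
dup2(fd, STDOUT_FILENO);          // FD 1 now refers to the file
close(fd);

// ... run the Tcl code whose output should go to the file ...

fflush(stdout);                   // flush anything still destined for the file
dup2(saved, STDOUT_FILENO);       // put the original stdout back
close(saved);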
Try this:
FILE *myfile = fopen("myfile", "w+");
Tcl_Interp *interp = Tcl_CreateInterp();
Tcl_Channel myChannel = Tcl_MakeFileChannel(myfile, TCL_WRITABLE);
Tcl_RegisterChannel(interp, myChannel);
Tcl_SetStdChannel(myChannel, TCL_STDOUT);
You need to register the channel with the interpreter before you can reset the std channel to use it.

SQL Scripts - Does the equivalent of a #define exist?

I have a script that I use to construct both the tables and the stored procedures. For example, I have a column of type varchar. varchar requires a size parameter, and I also use that size as a parameter in the stored procedures and within those procedures.
Is it possible to have the equivalent of a #define for its size, so I can easily adjust the size without having to change it throughout the whole script?
I am using MySQL Workbench.
EDIT
I have tried SET and DECLARE
I have a script - this is (abridged)
CREATE TABLE `locations`
(
`location` VARCHAR(25) NOT NULL
);
...
CREATE PROCEDURE AddLocation (IN location VARCHAR(25))
BEGIN
...
END$$
What I am trying to achieve is replace the values 25 in the script with a constant - similar to a #define at the top of the script that creates the table and stored procedures, so I am able to easily change the 25 to another number.
Has anybody found a solution to this problem?
The C Pre Processor (cpp) is historically associated with C (hence the name), but it really is a generic text processor that can be used (or abused) for something else.
Consider this file, named location.src (more on that later).
// C++ style comments work here
/* C style works also */
-- plain old SQL comments also work,
-- but you should avoid using '#' style of comments,
-- this will confuse the C pre-processor ...
#define LOCATION_LEN 25
/* Debug helper macro */
#include "debug.src"
DROP TABLE IF EXISTS test.locations;
CREATE TABLE test.locations
(
`location` VARCHAR(LOCATION_LEN) NOT NULL
);
DROP PROCEDURE IF EXISTS test.AddLocation;
delimiter $$
CREATE PROCEDURE test.AddLocation (IN location VARCHAR(LOCATION_LEN))
BEGIN
-- example of macro
ASSERT(length(location) > 0, "lost or something ?");
-- do something
select "Hi there.";
END
$$
delimiter ;
and file debug.src, which is included:
#ifdef HAVE_DEBUG
#define ASSERT(C, T) \
begin \
if (not (C)) then \
begin \
declare my_msg varchar(1000); \
set my_msg = concat("Assert failed, file:", __FILE__, \
", line: ", __LINE__, \
", condition ", #C, \
", text: ", T); \
signal sqlstate "HY000" set message_text = my_msg; \
end; \
end if; \
end
#else
#define ASSERT(C, T) begin end
#endif
When compiled with:
cpp -E location.src -o location.sql
you get the code you are looking for, with cpp expanding #define values.
When compiled with:
cpp -E -DHAVE_DEBUG location.src -o location.sql
you get the same, plus the ASSERT macro (posted as a bonus, to show what could be done).
Assuming a build with HAVE_DEBUG deployed in a testing environment (in 5.5 or later since SIGNAL is used), the result looks like this:
mysql> call AddLocation("Here");
+-----------+
| Hi there. |
+-----------+
| Hi there. |
+-----------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
mysql> call AddLocation("");
ERROR 1644 (HY000): Assert failed, file:location.src, line: 24, condition length(location) > 0, text: lost or something ?
Note how the file name, line number, and condition point right at the place in the source code in location.src where the assert is raised, thanks again to the C pre-processor.
Now, about the ".src" file extension:
you can use anything.
Having a different file extension helps with makefiles, etc, and prevents confusion.
EDIT: Originally posted as .xql, renamed to .src for clarity. Nothing related to xml queries here.
As with any tools, using cpp can lead to good things, and the use case for maintaining LOCATION_LEN in a portable way looks very reasonable.
It can also lead to bad things, with too many #include files, nested #ifdef hell, and macros that in the end obfuscate the code, so your mileage may vary.
With this answer, you get the whole thing (#define, #include, #ifdef, __FILE__, __LINE__, #C, command line options to build), so I hope it should cover it all.
Have you tried SET?
Here is an example:
SET @var_name = expr
More examples here:
http://dev.mysql.com/doc/refman/5.0/en/user-variables.html
It sounds like you're looking for user-defined data types. Unfortunately for us all, MySQL doesn't yet support user-defined data types the way SQL Server, Oracle, and others do.
Here's a list of supported data types:
http://dev.mysql.com/doc/refman/5.0/en/data-types.html
For those that are interested:
I ended up writing a PHP script because:
a) The machine that can access the database does not belong to me and I cannot access the C preprocessor.
b) The other two answers do not work.
c) It seemed the simplest solution.
Here is the script for those who might find it useful. I am using it to define the tables' column widths and then use those same values in the stored procedures. This is because the column widths have not yet been fully decided for production.
I have also built in support for #define strings that span several lines. This has the advantage that I can obey the 80-column width (so printing looks readable).
Here is the script
<?php
if (1==count($argv))
{
?>
Processing #defines from stdin and send to SQL server:
This script will remove
1. #define <name> <integer>
2. #define <name> '<string>'
3. #define <name> '<string>' \
'<continuation of string>'
and replace the occurrences of <name> with the #define value as specified.
<name> is upper case alphanumerics or underscores, not starting with a digit.
The arguments of this script is passed to the mysql executable.
<?php
exit(1);
}
function replace(&$newValues, $a, $b, $c)
{
return $a . (array_key_exists($b, $newValues) ? $newValues[$b] : $b) . $c;
}
// The patterns to be used
$numberPattern='/^#define[ \t]+([A-Z_][A-Z0-9_]*)[ \t]+(0|([1-9][0-9]*))'.
'[ \t]*[\r\n]+$/';
$stringPattern= '/^#define[ \t]+([A-Z_][A-Z0-9_]*)[ \t]+\''.
'((\\\'|[^\'\r\n])*)\'[ \t]*(\\\\{0,1})[\n\r]+$/';
$continuationPattern='/^[ \t]*\'((\\\'|[^\'\r\n])*)\'[ \t]*'.
'(\\\\{0,1})[\n\r]+$/';
// String to be evaluated to replace define values with a new value
$evalStr='replace($newValues, \'\1\', \'\2\', \'\3\');';
array_splice($argv, 0, 1);
// Open up the process
$mysql=popen("mysql ".implode(' ', $argv), 'w');
$newValues=array(); // Stores the defines new values
// Variables to control the replacement process
$define=false;
$continuation=false;
$name='';
$value='';
while ($line=fgets(STDIN))
{
$matches=array();
// #define numbers
if (!$define &&
1 == preg_match($numberPattern, $line, $matches))
{
$define = true;
$continuation = false;
$name = $matches[1];
$value = $matches[2];
}
// #define strings
if (!$define &&
1 == preg_match($stringPattern,
$line, $matches))
{
$define = true;
$continuation = ('\\' == $matches[4]);
$name = $matches[1];
$value = $matches[2];
}
// For #define strings that continue over more than one line
if ($continuation &&
1 == preg_match($continuationPattern,
$line, $matches))
{
$value .= $matches[1];
$continuation = ('\\' == $matches[3]);
}
// Have a complete #define, add to the array
if ($define && !$continuation)
{
$define = $continuation = false;
$newValues[$name]=$value;
}
elseif (!$define)
{
// Do any replacements
$line = preg_replace('/(^| |\()([A-Z_][A-Z0-9_]*)(\)| |$)/e',
$evalStr, $line);
echo $line; // In case we need to have pure SQL.
// Send it to be processed.
fwrite($mysql, $line) or die("MySql has failed!");
}
}
pclose($mysql);
?>

Emulating execvp - Is There a Better Way To Do This?

I'm currently wrapping a command line tool (espeak) with Tcl/Tk, and I have figured this out so far:
load ./extensions/system.so
package require Tk
package require Tclx
set chid 0
proc kill {id} {
exec kill -9 $id
}
proc speak {} {
global chid
set chid [fork]
if {$chid == 0} {
execvp espeak [.text get 1.0 end]
}
}
proc silent {} {
global chid
kill $chid
}
Where system.so is an extension I hacked together to be able to use execvp:
#include <tcl.h>
#include <tclExtend.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h> /* for execvp() */
static int
execvp_command(ClientData cdata, Tcl_Interp *interp, int argc, const char* argv[])
{
if (argc == 1)
{
interp->result = "execvp command ?args ...?";
return TCL_ERROR;
}
execvp(argv[1], (char * const *) (argv + 1));
/* execvp() only returns if it failed. */
interp->result = "execvp failed";
return TCL_ERROR;
}
int System_Init(Tcl_Interp* interp)
{
if (Tcl_InitStubs(interp, "8.1", 0) == NULL)
return TCL_ERROR;
Tcl_CreateCommand(interp, "execvp", execvp_command, NULL, NULL);
Tcl_PkgProvide(interp, "system", "1.0");
return TCL_OK;
}
The reason I need execvp is that a subprocess created by Tcl's exec seems to keep going when my program dies (I can confirm this by ^C'ing out of the GUI), whereas if I use execvp, espeak dies properly.
Thus, all I really need out of this script is to be able to start a subprocess and kill it on demand.
Is there another library that can do this properly, like Expect?
Tcl uses execvp internally (really; I've just checked the source) so the difference lies elsewhere. That elsewhere will be in signals; the subprocess created by the exec command (and other things that use the same underlying engine) will have the majority of signals forced to use the default signal handlers, but since the only signal it sets to non-default is SIGPIPE, I wonder what else is going on.
That said, the definitive extension for working with this sort of thing is TclX. That gives you access to all the low-level POSIX functionality that you've been using partially. (Expect may also be able to do it.)
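For reference, here is a rough C-level sketch of the start-and-kill-on-demand pattern the Tcl code above is reaching for (plain POSIX; the function names are illustrative, and this is not the TclX or Expect API):

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Start "espeak <text>" in a child process and return its pid. */
static pid_t start_speech(const char *text)
{
    pid_t pid = fork();
    if (pid == 0) {
        char prog[] = "espeak";
        char *args[] = { prog, (char *) text, NULL };
        execvp(args[0], args);
        _exit(127);          /* execvp() only returns on failure */
    }
    return pid;              /* in the parent; -1 if fork() failed */
}

/* Kill the child on demand and reap it so no zombie is left behind. */
static void stop_speech(pid_t pid)
{
    if (pid > 0) {
        kill(pid, SIGKILL);  /* the Tcl code above uses kill -9 as well */
        waitpid(pid, NULL, 0);
    }
}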