I'm trying to learn Shoes and decided to make a simple GUI to run a SQL script line by line.
My problem is that pressing the button that executes my function freezes the GUI for as long as it takes the function to run the script.
With long scripts this might take several minutes.
Someone had a similar problem (http://www.stackoverflow.com/questions/958662/shoes-and-heavy-operation-in-separate-thread) and the suggestion was simply to put the intensive work in a Thread. If I copy the math code from that question and swap it in for my mysql code, the GUI works without freezing, so that probably hints at a problem with how I'm using the mysql adapter?
Below is a simplified version of the code:
problem.rb:
# copy the mysql gem if it is not in the gem folder already
Shoes.setup do
  gem 'mysql'
end

require "mysql"

# the function that does the mysql stuff
def someFunction
  con = Mysql::real_connect("myserver", "user", "pass", "db")
  scriptFile = File.open("myfile.sql", "r")
  script = scriptFile.read
  scriptFile.close
  # run the script one statement per line, collecting the results
  result = []
  script.each_line do |line|
    result << con.query(line)
  end
  return result
end
# the Shoes app with the Thread
Shoes.app do
  stack do
    button "Execute someFunction" do
      Thread.start do
        result = someFunction
        para "Done!"
      end
    end
  end
  stack do
    button "Can't click me when GUI is frozen" do
      alert "GUI wasn't frozen?"
    end
  end
end
I think the problem arises from the scheduling being done by Ruby rather than by the operating system: the mysql adapter apparently blocks the whole interpreter while a query runs, so no other thread gets scheduled. It's probably just a special case of Shoes + mysql.
As a workaround I'd suggest you spawn a separate process for the script and use socket- or file-based communication between the processes.
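For illustration, here is a minimal sketch of that workaround (the runner script name, the status file and the one-second polling interval are all assumptions, not part of the original code): the GUI spawns the SQL runner as a separate OS process and polls a status file from a Shoes every-timer, so the GUI thread never blocks.

Shoes.app do
  @status = para "Idle"
  button "Execute script" do
    @status.replace "Running..."
    # hypothetical runner.rb executes the SQL and writes "done" to status.txt
    pid = spawn("ruby", "runner.rb")  # Kernel#spawn needs Ruby 1.9+
    Process.detach(pid)
    # poll the status file once a second instead of blocking the GUI
    @timer = every(1) do
      if File.exist?("status.txt") && File.read("status.txt").include?("done")
        @status.replace "Done!"
        @timer.stop
      end
    end
  end
end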
My app is written in Python 2.7 and wxPython 2.8, and accesses a MySQL database. I have had issues with the program freezing when doing an add via:
cursor.execute(sql([idSession, TestDateTime, DataBLOb]))
Although this is in a try/except construct, it never executes the except portion. I have run this section from the command line and got:
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
Obviously I need to investigate the cause of the error, but how can I get my software to execute the except code rather than just freeze?
To solve this, you need to create a new thread. Look at the code:
import wx
from threading import Thread

class Spam(wx.Frame):
    def doStuff(self, arg1, arg2, arg3):
        # Do something...
        try:
            cursor.execute(sql([idSession, TestDateTime, DataBLOb]))
        except Exception as e:
            # Handle the error or show a dialog
            pass

    def ButtonClick(self, event):  # Assuming a button like "do this operation"
        t = Thread(target=self.doStuff, args=(arg1, arg2, arg3))
        t.start()
Observation: the function doStuff needs to be free of unhandled errors; if an error is raised and not handled, it will create a deadlock or a zombie thread.
I'm running this code from the mysql2 gem docs:
require 'mysql2/em'

EM.run do
  client1 = Mysql2::EM::Client.new
  defer1 = client1.query "SELECT sleep(3) as first_query"
  defer1.callback do |result|
    puts "Result: #{result.to_a.inspect}"
  end

  client2 = Mysql2::EM::Client.new
  defer2 = client2.query "SELECT sleep(1) second_query"
  defer2.callback do |result|
    puts "Result: #{result.to_a.inspect}"
  end
end
It runs fine, printing the results
Result: [{"second_query"=>0}]
Result: [{"first_query"=>0}]
but then the script just hangs and never returns to the command line. Any idea what is going on?
EM.run will start an EventMachine reactor. That reactor just loops and loops and loops until you somehow tell it to stop. You can manually stop it using EM.stop.
In your case, you might want to track the callback results and stop the reactor once both callbacks have fired. Ilya's em-http-request library provides a nice interface for exactly that use case. Might be worth a look.
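For example, here is a minimal sketch (building on the code from the question, not an official mysql2 recipe) that counts outstanding callbacks and stops the reactor once the last one has fired:

require 'mysql2/em'

EM.run do
  pending = 2
  # stop the reactor when the last callback has fired
  finish = proc { EM.stop if (pending -= 1).zero? }

  client1 = Mysql2::EM::Client.new
  defer1 = client1.query "SELECT sleep(3) as first_query"
  defer1.callback do |result|
    puts "Result: #{result.to_a.inspect}"
    finish.call
  end

  client2 = Mysql2::EM::Client.new
  defer2 = client2.query "SELECT sleep(1) second_query"
  defer2.callback do |result|
    puts "Result: #{result.to_a.inspect}"
    finish.call
  end
end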
Could someone tell me why this works:
require 'rubygems'
require 'mysql2'
@inverters = Inverter.where(:mac => @mac)

Inverter.transaction do
  @inverters.each do |inverter|
    inverter.ip = @client_ip
    inverter.save # Object is saved :)!
  end
end
But this does not:
require 'rubygems'
require 'mysql2'
@outputs = <a two-dimensional hash>

Output.transaction do
  @outputs.each do |out|
    @newOut = Output.new
    @newOut.inverter_id = out[:inverter_id]
    @newOut.eac = out[:eac]
    @newOut.pac = out[:pac]
    @newOut.vac = out[:vac]
    @newOut.iac = out[:iac]
    @newOut.epv = out[:epv]
    @newOut.ppv = out[:ppv]
    @newOut.vpv = out[:vpv]
    @newOut.save # Object fails to save to db :(.
    # 2 lines of other code
  end
end
Both objects save successfully when I enter the same commands manually in the Rails console, but the second one fails within my script. I have done extensive debugging, making sure that all variables ('out' and '@outputs') have the expected values, and again it all works in the console. I am using Ruby 1.8.7, Rails 3.0.3 and mysql2 gem version 0.2.7. Thank you so much in advance!
The first thing I did to figure this out was to open a separate terminal tab, navigate to my Rails app folder and follow what was going on with MySQL behind the scenes by typing: tail -f log/development.log. While running the second, non-working script above, I could see in the log that after the INSERT into the outputs table it would just say ROLLBACK. The reason this was happening was that I had two lines of stray code after my @newOut.save statement. When I took those two lines out of the transaction loop, everything worked. This is clearly a total newbie error, but I hope it helps someone.
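As a side note, here is a hedged sketch of surfacing the failure reason without tailing the log: if the save itself is what fails, save! raises ActiveRecord::RecordInvalid with the validation messages instead of quietly returning false (mass-assigning out assumes those attributes are assignable):

Output.transaction do
  @outputs.each do |out|
    @newOut = Output.new
    @newOut.attributes = out
    # save! raises (with validation messages) instead of returning false
    @newOut.save!
  end
end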
I have a Perl script in which I create a table from the existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records; because of this a query sometimes takes hours due to problems with indexing, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from the same script by watching memory consumption and execution time?
P.S. I am using the DBI module in Perl for the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see the example below) or issue a DBI cancel call (sub { $sth->cancel }).
The DBI documentation actually has a very good discussion of this, as well as examples:
eval {
  local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
  eval {
    alarm($seconds);
    ... code to execute with timeout here (which may die) ...
  };
  # outer eval catches alarm that might fire JUST before this alarm(0)
  alarm(0); # cancel alarm (if code ran fast)
  die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, the ALRM handler just needs to first check the memory consumption of your script instead of simply dying/cancelling. I won't go into the details of how to measure memory consumption, since that's an unrelated question which has likely already been answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable as described in the Perlmonks snippet Find memory usage of perl program.
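As a hedged illustration only (the 500 MB limit, the five-second re-arm interval and the helper name are made up; Proc::ProcessTable must be installed, and whether $sth->cancel works is driver-dependent), such a handler could look like this:

use Proc::ProcessTable;

# return the current process size in bytes
sub my_memory_size {
  my $t = Proc::ProcessTable->new;
  for my $p ( @{ $t->table } ) {
    return $p->size if $p->pid == $$;
  }
  return 0;
}

my $LIMIT_BYTES = 500 * 1024 * 1024; # hypothetical 500 MB cap

local $SIG{ALRM} = sub {
  if ( my_memory_size() > $LIMIT_BYTES ) {
    die "MEMORY\n"; # or $sth->cancel, as above
  }
  alarm(5); # under the limit: re-arm and keep watching
};
alarm(5);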
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620.
This is the code from my script:
use Sys::SigAction; # provides set_sig_handler

eval {
  eval { # Time out and interrupt work
    my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
      $dbh->clone()->do("KILL QUERY " . $dbh->{"mysql_thread_id"});
      die "TIMEOUT\n";
    });
    # Set alarm
    alarm($seconds);
    $sth->execute();
    # Clear alarm
    # alarm(0);
  };
  # Prevent race condition
  alarm(0);
  die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill via that thread id.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've killed a thread id, the rest of the Perl code will execute... on an unblessed handle.
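A hedged sketch of that bookkeeping (the DSN, credentials and hash key are placeholders):

use DBI;

my %thread_ids;
my $dbh = DBI->connect("dbi:mysql:database=mydb", $user, $pass);
# remember the worker connection's thread id up front
$thread_ids{worker} = $dbh->{mysql_thread_id};

# later, from watchdog code, when the worker appears stuck:
my $killer = DBI->connect("dbi:mysql:database=mydb", $user, $pass);
$killer->do("KILL QUERY $thread_ids{worker}");
$killer->disconnect;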
I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use, and over a number of restarts the open MySQL connections build up until I get the dreaded "Too many connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag to exit, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;

my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
  sleep rand 10;
  $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
  until ( $EXIT_FLAG ) {
    warn sprintf "%d: Working with the db\n", time - $^T;
    sleep rand 5;
  }
  # $dbh->disconnect; # here
  warn "Exit flag is set ... restarting\n";
  exec 'j.pl';
}
You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork and exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle. The child could re-exec itself normally.
Edit: after thinking for a couple more minutes, I believe you could skip the atexit entirely, because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
  # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
  # Child re-execs itself
  exec '/home/foo/bin/myscript.pl';
} else {
  # Parent exits
  exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.
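A minimal sketch of what that can look like (the inner mysql DSN and credentials are placeholders; transport=null keeps everything in one process, while other transports proxy to a shared pool):

use DBI;

# DBD::Gofer wraps the real DSN and manages connections for you
my $dbh = DBI->connect(
  "dbi:Gofer:transport=null;dsn=dbi:mysql:database=mydb",
  $user, $pass,
);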