I'd like to know, in a running jruby script, which java options are set. Is there a way to do this?
My problem is this: There are some scripts that I know require much more memory than others, and I would like to add a constraint in the code so that execution will stop early with proper warnings, rather than running out of memory at some unspecified time in the future.
Perhaps something I could stick in a BEGIN{}, like:
if is_set_joption?('-J-Xmx') then
  if get_joption('-J-Xmx').match(/\d+/)[0].to_i < 1000 then
    puts "You're gonna run out of memory..."
    abort()
  end
else
  puts "I recommend you start with -J-Xmx1000m."
  abort()
end
(... where is_set_joption? and get_joption are made-up methods.)
Running jruby 1.7.8.
It'll be in ENV if you've set JAVA_OPTS in your environment, so you could get it from that. The JVM has already been initiated by the time your script runs, though, so if the options need changing you'll want to set them elsewhere, such as on the command line when you exec jruby (e.g. with -D).
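For example, a minimal sketch along those lines (this assumes a plain -Xmx<size><unit> form in JAVA_OPTS; anything passed only via -J flags won't appear here):

opts = ENV['JAVA_OPTS'].to_s
if opts =~ /-Xmx(\d+)([kKmMgG]?)/
  size, unit = $1.to_i, $2.downcase
  # Normalize to megabytes; a bare number means bytes.
  mb = case unit
       when 'g' then size * 1024
       when 'm' then size
       when 'k' then size / 1024
       else          size / (1024 * 1024)
       end
  abort "You're gonna run out of memory..." if mb < 1000
else
  warn "No -Xmx found in JAVA_OPTS; I recommend you start with -J-Xmx1000m."
end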
One can do:
require 'java'
java_import 'java.lang.Runtime'

# maxMemory reports the JVM's heap limit (-Xmx) in bytes.
mxm = Runtime.getRuntime.maxMemory.to_i
if mxm < (512 * 1024 * 1024)
  raise "You're gonna need a bigger boat."
end
My problem is simple: I'm trying to write a Tcl script that uses $grofile instead of writing out the file name every time I need it.
So, what I did in TkConsole was:
% set grofile "file.gro"
% mol load gro ${grofile}
and, indeed, I succeeded in loading the file.
In the script I have the same lines, but still have this error:
wrong # args: should be "set varName ?newValue?"
can't read "grofile": no such variable
I tried to solve my problem with
% set grofile [./file.gro]
and I have this error,
invalid command name "./file.gro"
can't read "grofile": no such variable
I tried also with
% set grofile [file ./file.gro r]
and I got the first error, again.
I haven't found any simple way to avoid using the explicit name of the file I want to load. It seems you can only do it the most trivial, but tedious, way:
mol load file.gro
mol addfile file.xtc
and so on and so on...
Can you give me a brief explanation of why I can load the file and use it through a variable in the TkConsole, but not in the Tcl script?
Also, if you can see where my mistake is, I would appreciate it.
I apologize if this is basic, but I could not find any answer. Thanks.
I add the head of my script:
set grofile "sim.part0001_protein_lipid.gro"
set xtcfile "protein_lipid.xtc"
set intime "0-5ms"
set system "lower"
source view_change_render.tcl
source cg_bonds.tcl
mol load gro $grofile xtc ${system}_${intime}_${xtcfile}
It was solved, thanks for your help.
You may think you've typed the same thing, but you haven't. I'm guessing that your real filename has spaces in it, and that you've not put double quotes around it. That will confuse set, as Tcl's general parser will end up giving set more arguments than it expects. (Tcl's general parser does not know that set only takes one or two arguments, by very long-standing policy of the language.)
So you should really do:
set grofile "file.gro"
Don't leave the double quotes out if you have a complicated name.
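For example (the spaced filename here is hypothetical):

set grofile "my protein file.gro"   ;# quotes keep the spaces inside one word
mol load gro $grofile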
Also, this won't work:
set grofile [./file.gro]
because […] is used to indicate running something as a command and using the result of that. While ./file.gro is actually a legal command name in Tcl, it's… highly unlikely.
And this won't work:
set grofile [file ./file.gro r]
Because the file command requires a subcommand as a first argument. The word you give is not one of the standard file subcommands, and none of them accept those arguments anyway, which look suitable for open (though that returns a channel handle suitable for use with commands like gets and read).
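If the goal was to read the file's contents into a variable, a minimal sketch using open would be:

set chan [open ./file.gro r]   ;# open returns a channel handle
set contents [read $chan]      ;# slurp the whole file
close $chan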
The TkConsole is actually pretty reasonable as quick-and-dirty terminal emulations go (given that it omits a lot of the complicated cases). The real problem is that you're not being consistently accurate about what you're really typing; that matters hugely in most programming languages, not just Tcl. You need to learn to be really exacting; cut-n-paste when creating a question helps a lot.
I'm running this code from the mysql2 gem docs:
require 'mysql2/em'
EM.run do
  client1 = Mysql2::EM::Client.new
  defer1 = client1.query "SELECT sleep(3) as first_query"
  defer1.callback do |result|
    puts "Result: #{result.to_a.inspect}"
  end

  client2 = Mysql2::EM::Client.new
  defer2 = client2.query "SELECT sleep(1) second_query"
  defer2.callback do |result|
    puts "Result: #{result.to_a.inspect}"
  end
end
It runs fine, printing the results
Result: [{"second_query"=>0}]
Result: [{"first_query"=>0}]
but then the script just hangs and never returns to the command line. Any idea what is going on?
EM.run will start an EventMachine reactor. That reactor just loops and loops and loops until you somehow tell it to stop, which you can do manually with EM.stop.
In your case, you might want to track the callback results and stop the reactor once both callbacks have fired. Ilya's em-http-request library provides a nice interface for exactly that use case. Might be worth a look.
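For example, a minimal sketch that counts the outstanding queries and stops the reactor when the last callback fires (the pending counter is my addition, not part of the gem docs):

require 'mysql2/em'

EM.run do
  pending = 2  # number of queries still in flight

  finished = proc do
    pending -= 1
    EM.stop if pending.zero?  # stop the reactor after the last result
  end

  client1 = Mysql2::EM::Client.new
  defer1 = client1.query "SELECT sleep(3) as first_query"
  defer1.callback do |result|
    puts "Result: #{result.to_a.inspect}"
    finished.call
  end

  client2 = Mysql2::EM::Client.new
  defer2 = client2.query "SELECT sleep(1) second_query"
  defer2.callback do |result|
    puts "Result: #{result.to_a.inspect}"
    finished.call
  end
end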
Wanted to understand the example line of code given # perldoc.perl.org for getlogin
$login = getlogin || getpwuid($<) || "Kilroy";
It seems like it tries to get the user name from getlogin, falling back to getpwuid, and if both fail, uses Kilroy instead. I might be wrong, so please correct me. Also, I've been using getlogin() in previous scripts - is there any difference between getlogin() and getlogin?
What is this code safeguarding against? Also, what purpose does $< serve? I'm not exactly sure what to search for when looking up what $< is and what it does.
EDIT
found this in the special variables section - I still don't know why it is needed or what it does in the example above
$<
The real uid of this process. (Mnemonic: it's the uid you came from, if you're running setuid.) You can change both the real uid and the effective uid at the same time by using POSIX::setuid(). Since changes to $< require a system call, check $! after a change attempt to detect any possible errors.
EDIT x2
Is this line comparable to the above example? (It is currently what I use to avoid any potential problems with cron executing a script - I've never run into this problem, but I am trying to avoid any theoretical problem.)
my $username = getlogin();
if (!$username) { $username = 'jsmith'; }
You're exactly right. If getlogin returns false, it will try getpwuid($<); if that also returns false, it will set $login to "Kilroy".
$< is the real uid of the process. Even if you're running in a setuid environment it will return the original uid the process was started from.
Edit to match your edit :)
getpwuid returns the user's name for the given UID (in scalar context, which would be the case here). You would want $< as the argument in case the program switched UID at some point ($< is the original one it was started with).
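To make the context difference concrete (a sketch):

# Scalar context: getpwuid returns just the username.
my $name = getpwuid($<);                          # e.g. "jsmith"
# List context: the full passwd entry.
my ($login, $passwd, $uid, $gid) = getpwuid($<);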
The only thing it's guarding against is the fact that on some systems, in some circumstances, getlogin can fail to return anything useful. In particular, getlogin only does anything useful when the process it's in has a "controlling terminal", which non-interactive processes may not. See, e.g., http://www.perlmonks.org/?node_id=663562.
I think the fallback of "Kilroy" is just for fun, though in principle getpwuid can fail to return anything useful too. (You can have a user ID that doesn't have an entry in the password database.)
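If you want the same chained fallback in your own script (re: EDIT x2), a one-line sketch with your 'jsmith' default would be:

my $username = getlogin() || getpwuid($<) || 'jsmith';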
I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use. And over a number of restarts I seem to build up the number of open mysql connections till I get the dreaded "Too Many Connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag, and have the thread holding the DB handle release the handle and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;

my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}
You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork & exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle. The child could re-exec itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely, because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.
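A minimal sketch of what a Gofer-proxied connect looks like (the database name and credentials here are hypothetical; see the DBD::Gofer docs for the available transports):

use DBI;

my ($user, $password) = ('someuser', 'secret');  # hypothetical credentials

# Wrap the ordinary mysql DSN in DBD::Gofer; "null" is the simplest
# (same-process) transport -- a real setup would point at a gofer server.
my $dbh = DBI->connect(
    "dbi:Gofer:transport=null;dsn=dbi:mysql:mydb",
    $user, $password,
);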
This is weird and I'm not sure who the culprit really is.
I'm doing some scripting on FreeBSD (6.2) which makes extensive use of the following bashism:
do_something <(mysql --skip-column-names -B -e 'select ... from ... where ...;')
... where do_something is a somewhat crufty utility (in Perl) that won't read from a pipeline. If I use a regular file it works fine. My bash scripts using things like exec 4< <(...) with these sorts of queries (followed by loops of the form while read x y z <&4; do ...) never seem to have any issues.
However, Perl (5.8.x) seems to periodically block (apparently forever). I tried replacing the chomp(my $data = <MYDATA>); with a routine that used sysread, and I wrote some test cases in Python for comparison. These seem to block far less often than the idiomatic Perl code, but they still do it sometimes. (The Python code using f.read() or os.read(f.fileno()...) seems to behave about equally in this issue.)
I've tried reproducing the issue using ... <(cat ...) (where I'm cat-ing the regular file), and that never seems to reproduce the stall.
I've glanced at some ktrace/kdump data ... but I'm far more familiar with Linux strace or even Solaris truss ... so I haven't figured out what's going from there yet, either.
I suppose we can mostly rule out Perl, because I've reproduced the same issue using Python ... I don't see how the bash could be doing anything wrong here (it's just creating a named pipe in /var/tmp/sh-np-xxx and wiring the processes up to that).
What could the mysql shell/utility be doing that might cause this? I don't think I've seen it from anything else (such as cat or dd). I haven't tested this scenario under Linux ... but I've used <(...) (process substitution) for years under Linux and don't recall ever seeing this.
Is it a FreeBSD issue?
Sure I can work around the issue using temporary files ... but I'd sure rather understand why it's doing this (and avoid some of the races and clean-up messiness that temporary files entail).
Any suggestions?
The big difference between operating on the output of mysql and directly on a file is timing. When the perl process is stalled, the big question is: "why is it not making forward progress"? You can use the "l" option to ps to see the wait channel for the perl process; that way you can see if it blocked on a read, or if something else is going on. If it is really blocked on pipe input, I expect the MWCHAN entry for perl to be "piperd".
The same information would be interesting for the mysql process.
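For example (12345 standing in for the stalled perl's pid):

ps -l -p 12345    # on FreeBSD, -l adds the MWCHAN (wait channel) column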
What does your Python test code look like?
Another way of writing this while avoiding the bashism is this; that would allow you to rule out bash:
mysql --skip-column-names -B -e 'select ... from ... where ...;' | do_something /dev/stdin
Other interesting questions:
Does the --unbuffered option to mysql change anything?
Does piping the mysql output through dd change anything? (e.g. perlscript <(mysql ... | dd))
Summary: Need more information.