How can I get the exit code of a command executed with Testcontainers? - docker-java

Using GenericContainer#execInContainer I can only get stdout or stderr.
Is there any way to get the exit code of the executed command?
I can't rely on the presence of text in stderr. The application I execute prints some info to stderr but exits with code 0.

execInContainer is just a shortcut for execCreateCmd/execStartCmd from docker-java. Unfortunately, that API doesn't give you the exit code directly.
But you can make use of built-in shell functionality and just return the code as part of stdout/stderr:
$ sh -c 'false; echo "ExitCode=$?"'
ExitCode=1
where false stands in for your command
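Through Testcontainers, that trick could look roughly like this (a sketch only: container is assumed to be a started GenericContainer, checked exceptions are omitted, and the regex is just one way to pull the code back out of stdout):
// "false" stands in for your real command; its exit code is echoed to stdout
Container.ExecResult result = container.execInContainer("sh", "-c", "false; echo \"ExitCode=$?\"");
String stdout = result.getStdout();
int exitCode = Integer.parseInt(stdout.replaceAll("(?s).*ExitCode=(\\d+).*", "$1"));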

You can use inspectExecCmd(execId) to get information about the executed command; the response of inspectExecCmd also contains the exit code.
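A minimal sketch of that approach (it reuses the docker-java client that Testcontainers ships; checked exceptions are omitted, and exact method names vary between docker-java versions, e.g. newer releases expose getExitCodeLong() and deprecate ExecStartResultCallback):
// DockerClientFactory comes from Testcontainers; the other classes are docker-java API
DockerClient dockerClient = DockerClientFactory.instance().client();
String containerId = container.getContainerId();

// Create and start the exec, roughly what execInContainer does internally
ExecCreateCmdResponse exec = dockerClient.execCreateCmd(containerId)
        .withCmd("false")
        .withAttachStdout(true)
        .withAttachStderr(true)
        .exec();
dockerClient.execStartCmd(exec.getId())
        .exec(new ExecStartResultCallback(System.out, System.err))
        .awaitCompletion();

// Inspect the finished exec to read its exit code
Integer exitCode = dockerClient.inspectExecCmd(exec.getId()).exec().getExitCode();
Newer Testcontainers releases also expose the exit code directly on the ExecResult returned by execInContainer, so this workaround is mainly needed on older versions.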

Related

Exit code from powershell script when terminating from subfunction

I have a Powershell script running which calls a function from a dot-sourced .ps1 file.
Inside this function I do an exit 1 which terminates the whole script (as intended).
When I now look at $? and $LASTEXITCODE it says True and 0.
Shouldn't they be False and 1?
Is there anything I'm not seeing?
Example:
main script:
Log-Finish -Exit $True
in function.ps1:
Function Log-Finish {
    [CmdletBinding()]
    Param ([Parameter(Mandatory=$false)][boolean]$Exit)
    Process {
        " error "
        # If $Exit is $true, end calling script
        If (($Exit) -or ($Exit -eq $True)) {
            Exit 1
        }
    }
}
You will find an explanation for your "issue" in this post: http://blogs.technet.com/b/heyscriptingguy/archive/2011/05/12/powershell-error-handling-and-why-you-should-care.aspx
Also, remember that when an external command or script is run, $? will not always tell you a true story. Let's see why. Create a script that has nothing but one line, our favorite error-generating command:
Get-Item afilethatdoesntexist.txt
Now run the script and see the output. You will notice that the host shows you the error. You will also notice that $error contains the error object that was generated by the command in the script. But $? is set to TRUE! Oh, and don't try this in the order I mentioned, because it will skew the results of $?.
So can you tell me why $? is set to True? Because your script ran successfully as far as Windows PowerShell is concerned. Every step in the script was executed, whether it resulted in an error or not, and the script was successful in doing what it told Windows PowerShell to do. It doesn't necessarily mean that the commands within the script didn't generate any errors. See how quickly it gets confusing? And we haven't started to go deep yet!
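A minimal way to reproduce the quoted example (test.ps1 is just a placeholder name; the True result is what the post describes for Windows PowerShell, and newer PowerShell versions may behave differently):
Set-Content -Path .\test.ps1 -Value 'Get-Item afilethatdoesntexist.txt'
.\test.ps1        # the host prints the "Cannot find path ..." error
$?                # True: the script itself ran to completion
$error[0]         # still holds the error object generated inside the script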

Wrapping program with TclApp causes smtp package to stop working properly?

So I'm getting a strange issue when trying to send an email from my company's local mail server using Tcl. The problem is I've written code that works, but it doesn't work as soon as I wrap it with TclApp. I believe I've included all the necessary packages.
Code:
package require smtp;
package require mime;
package require tls;
set body "Hello World, I am sending an email! Hear me roar.";
#mime info
set token [mime::initialize -canonical text/plain -string $body];
#mail options
set opts {};
#MAIL SERVER + PORT
lappend opts -servers "my_server";
lappend opts -ports 25;
tls::init -tls1 1;
lappend opts -username "someEmail@example.com";
lappend opts -password "somePasswordExample";
#SUBJECT + FROM + TO
lappend opts -header [list "Subject" "This is a test e-mail, beware!!"];
lappend opts -header [list "From" "theFromEmail@example.com"];
lappend opts -header [list "To" "theToEmail@yahoo.com"];
if {[catch {
    smtp::sendmessage $token \
        {*}$opts \
        -queue false \
        -atleastone false \
        -usetls false \
        -debug 1;
    mime::finalize $token;
} msg]} {
    set out [open "error_log.txt" w];
    puts $out $msg;
    close $out;
}
puts "Hit enter to continue...";
gets stdin;
exit;
This works when I run it and I successfully get the email sent.
But when I wrap it, it doesn't.
For whatever reason, wrapping with TclApp makes my program fail to authenticate.
Update: I've decided to try wrapping the script with Freewrap to see if I get the same problem, and amazingly I do not. The script works if not wrapped or if wrapped with Freewrap, but not with TclApp; is this a bug in TclApp or am I simply missing something obvious?
It seems that this is an old problem (last post): https://community.activestate.com/forum/wrapping-package-tls151
Investigate whether the wrapped version has the same .dll dependencies as in the article you linked.
Check the TCL Wiki to see whether this is a known issue.
Are you initializing your environment properly when wrapped (TCL App Initialization)?
I've found the solution to my problem! Hopefully this answer will help anyone else who runs into the same issue.
I contacted ActiveState and they were able to point me in the right direction: a thread on the ActiveState forums that goes over a similar issue.
I decided to run the prefix file I have been using to wrap my programs in TclApp (I am using base-tk8.6-thread-win32-ix86). Running it opens a console/Tcl interpreter, which allowed me to try sourcing my Tcl e-mail script manually. I found that the prefix file/interpreter by itself was completely unaware of the location of any package on my system, and that I needed to lappend the location of the packages I wanted to auto_path. Moreover, I found from the thread that the email script I was using required other dependencies I had not wrapped in, namely sha1, SASL, and otp, along with their own dependencies. Once I added these dependencies, my script worked through the base-tk8.6-thread-win32-ix86 interpreter and subsequently through my wrapped program.
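As a rough sketch, this is the kind of thing the prefix interpreter needed before the script would source cleanly (the path and script name below are hypothetical; point auto_path at wherever your teapot/tcllib packages actually live):
lappend auto_path "C:/Tcl/lib/teapot/package/tcl/lib"   ;# hypothetical package location
package require sha1    ;# extra dependencies pulled in by the smtp/tls/SASL chain
package require SASL
package require otp
source my_email_script.tcl                              ;# hypothetical script name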
Another clue that these dependencies were necessary was that my script would give me a 530: 5.5.1 Authentication Required error; I later learned that SASL fixed this, but I don't know much about networking, so I wouldn't have guessed it.
The other thing I learned was that there is a distinct difference between teacup get and teacup install, which is why I wasn't installing Tcl packages correctly and TclApp was unaware of SASL, sha1, or otp.
What I don't understand, and perhaps someone with more in-depth knowledge about these programs can tell me, is how Wish86 knows about a script's package dependencies and where to find them. It wasn't until I tried looking for the packages in TclApp that I realized I didn't even have sha1, SASL, etc. on my system, nor did I include their package require commands at the top of my script, yet Wish86 could execute my script perfectly. The frustrating thing about this is that it is difficult to know whether or not you've met all dependency requirements on systems that don't have a magical Wish86 interpreter that just makes everything work (lol).
After looking through the end of the thread, I found there is a discussion on the depth TclApp goes to look for hard and soft dependencies in projects.
EDIT: Added more information.

Expect send to other process

I have an expect script which does the following:
Spawns a.sh -> a.sh waits for user input
Spawns b.sh -> runs and finishes
I now want to send input to a.sh but am having difficulty getting this to work.
Below is my expect script
#!/usr/bin/expect
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
send -i $a "test\r"
a.sh is
read -p "enter " group
echo $group
echo $group to file > file.txt
and b.sh is
echo i am b
sleep 5
echo xx
Basically, expect works with two main commands, send and expect. If send is used, then in most cases it must be followed by an expect (the reverse is not mandatory).
This is because without it we would miss what is happening in the spawned process: expect will assume that you simply needed to send one string value and are not expecting anything else from the session.
After the following code
send "ian\n"
we have to make expect wait for something afterwards. You have used expect eof for the spawned process b.sh; in the same way, it can be added for the a.sh spawned process as well, after the send.
"Well, then why interact worked ?". I heard you.
They both work in the same manner except that interact will expect some input from the user as an interactive session. It obviously expect from you. But, your current shell script is not designed to get input and it will eventually quit since there is no much further code in the script and which is why you are seeing the proper output. Or, even having an expect code alone (even without eof) can do the trick.
Have a look at the expect's man page to know more.
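For example, per the explanation above, the tail of the script could end with an expect instead of interact (a sketch assuming the same a.sh/b.sh as above; compare with the working script in the follow-up below):
set spawn_id $a
send "ian\n"
expect eof        ;# wait for a.sh to echo and exit, instead of using interact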
I've just got this working using
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
set spawn_id $a
send "ian\n"
interact
My question now is why do you need interact at the end?

my nodejs script is not exiting on its own after successful execution

I have written a script to update my db table after reading data from db tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after successful completion of all operations. I am also using a db connection pool, and I suspect that may be what keeps the script waiting indefinitely.
I want to put this script in crontab, and if it does not exit properly it will create a hell of a lot of unnecessary instances.
I just went through this issue.
The problem with just using process.exit() is that the program I was working on was creating handles but never destroying them.
It was processing a directory and putting data into OrientDB.
Some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left, and the extra handles would have filled up the available working memory, which means it could not continue, and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active, and not allowing the node process to end.
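For instance, a minimal sketch (the steps and the commented-out pool are placeholders, not your actual code):
var async = require('async');

async.waterfall([
  function (cb) { cb(null, 'rows read from db/solr'); },   // e.g. read data
  function (data, cb) { cb(null, data.length); }           // e.g. update the table
], function (err, result) {
  // Everything in the waterfall has finished by the time we get here.
  if (err) { console.error(err); }
  // pool.end();              // close the db connection pool so node can exit on its own
  process.exit(err ? 1 : 0);  // or force the exit explicitly
});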
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
await app.close();
log();
})
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by using:
connection.destroy();
If you use Visual Studio code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command:
When you invoke the command, VS Code will prompt you which Node.js process to attach to:
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

Avoid printing job exit codes in SGE with option -sync yes

I have a Perl script which submits a bunch of array jobs to SGE. I want all the jobs to be run in parallel to save me time, and the script to wait for them all to finish, then go on to the next processing step, which integrates information from all SGE output files and produces the final output.
In order to send all the jobs into the background and then wait, I use Parallel::ForkManager and a loop:
$fork_manager = new Parallel::ForkManager(@as);
# @as: Max nb of processes to run simultaneously
for $a (@as) {
    $fork_manager->start and next; # Starts the child process
    system "qsub <qsub_options> ./script.plx";
    $fork_manager->finish; # Terminates the child process
}
$fork_manager->wait_all_children;
<next processing step, local>
In order for the "waiting" part to work, however, I have had to add "-sync yes" to the qsub options. But as a "side effect" of this, SGE prints the exit code for each task in each array job, and since there are many jobs and the single tasks are light, it basically renders my shell unusable due to all those interrupting messages while the qsub jobs are running.
How can I get rid of those messages? If anything, I would be interested in checking qsub's exit code for the jobs (so I can check everything went OK before the next step), but not in one exit code for each task (I log the tasks' errors via option -e anyway in case I need them).
The simplest solution would be to redirect the output from qsub somewhere, i.e.
system("qsub <qsub options> ./script.plx >/dev/null 2>&1");
but this masks errors that you might want to see. Alternatively, you can use open() to start the subprocess and read its output, only printing something if the subprocess generates an error.
I do have an alternate solution for you, though. You could submit the jobs to SGE without -sync y, and capture the job id when qsub prints it. Then, turn your summarization and results collection code into a follow on job and submit it with a dependency on the completion of the first jobs. You can submit this final job with -sync y so your calling script waits for it to end. See the documentation for -hold_jid in the qsub man page.
Also, rather than making your calling script decide when to submit the next job (up to your maximum), use SGE's -tc option to specify the maximum number of simultaneous jobs (note that -tc isn't in the man page, but it is in qsub's -help output). This depends on you using a new enough version of SGE to have -tc, of course.
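A rough sketch of that approach (the script names and the task range are placeholders; -terse makes qsub print only the job id):
# Submit the array job without -sync y, capping concurrent tasks with -tc
chomp(my $jid = `qsub -terse -t 1-100 -tc 10 <qsub_options> ./script.plx`);
$jid =~ s/\..*//;   # an array job id is printed as "12345.1-100:1"; keep only "12345"
# Submit the collection step with a dependency on the array job and wait for it to finish
system("qsub -sync y -hold_jid $jid ./collect_results.plx");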