Exit code from PowerShell script when terminating from a subfunction

I have a PowerShell script that calls a function from a dot-sourced .ps1 file.
Inside this function I do an exit 1, which terminates the whole script (as intended).
When I then look at $? and $LASTEXITCODE, they say True and 0.
Shouldn't they be False and 1?
Is there something I'm not seeing?
Example:
main script:
Log-Finish -Exit $True
in function.ps1:
Function Log-Finish {
    [CmdletBinding()]
    Param ([Parameter(Mandatory=$false)][boolean]$Exit)
    Process {
        " error "
        # If $Exit is $true, end calling script
        If ($Exit) {
            Exit 1
        }
    }
}

You will find an explanation for your "issue" in this post: http://blogs.technet.com/b/heyscriptingguy/archive/2011/05/12/powershell-error-handling-and-why-you-should-care.aspx
Also, remember that when an external command or script is run, $? will
not always tell you the true story. Let's see why. Create a script that
has nothing but one line, our favorite error-generating command:
Get-Item afilethatdoesntexist.txt
Now run the script and see the output. You will notice that the host
shows you the error. You will also notice that $error contains the
error object that was generated by the command in the script. But $?
is set to TRUE! Oh, and don't try this in the order I mentioned,
because it will skew the results of $?.
So can you tell me why $? is set to True? Because your script ran
successfully as far as Windows PowerShell is concerned. Every step in
the script was executed, whether it resulted in an error or not. And that
was successful in doing what the script told Windows PowerShell to do.
It doesn't necessarily mean that the commands within the script didn't
generate any error. See how quickly it gets confusing? And we haven't
started to go deep yet!
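The same mechanics can be seen in POSIX shell, which may make the PowerShell behavior easier to reason about. In this sketch (file names and the log_error function are made up for illustration), a function defined in a sourced file calls exit, which terminates the whole sourcing script, and the process that invoked the script is the one that sees the exit code:

```shell
# Library file whose function exits the entire sourcing script.
cat > /tmp/lib.sh <<'EOF'
log_error() {
    echo "error: $1" >&2
    exit 1          # exits the *sourcing* script, not just the function
}
EOF

# Main script that sources the library and calls the function.
cat > /tmp/main.sh <<'EOF'
#!/bin/sh
. /tmp/lib.sh
log_error "some error"
echo "never reached"
EOF

code=0
sh /tmp/main.sh || code=$?
echo "main.sh exit code: $code"   # prints: main.sh exit code: 1
```

As in the PowerShell case, the exit code is visible to whoever launched the script; inspecting status variables from inside a fresh session after the script has already terminated tells you nothing about it.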

Related

How can I get the exit code of a command executed with Testcontainers?

Using GenericContainer#execInContainer I can only get stdout or stderr.
Is there any way to get the exit code of the executed command?
I can't rely on the presence of text in stderr. The application I execute prints some info to stderr but exits with code 0.
execInContainer is just a shortcut to execCreateCmd/execStartCmd from docker-java. Unfortunately, their API doesn't provide a way to get the exit code.
But you can make use of built-in shell functionality and just return the code as part of stdout/stderr:
$ sh -c 'false; echo "ExitCode=$?"'
ExitCode=1
where false is your command
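To make the trick concrete: the wrapped command's exit status is appended to stdout behind a marker string, and the caller parses it back out. A plain-shell sketch of the round trip (the ExitCode= marker is arbitrary):

```shell
# Run the real command, then append its exit code to stdout with a marker.
out=$(sh -c 'false; echo "ExitCode=$?"')

# Parse the code back out of the captured output.
code=${out##*ExitCode=}
echo "parsed exit code: $code"   # prints: parsed exit code: 1
```

The same parsing would be done on the Java side against the string returned by execInContainer.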
You can use inspectExecCmd(execId) to get information about the executed command; the exit code is included in the response of inspectExecCmd.

Wrapping program with TclApp causes smtp package to stop working properly?

So I'm getting a strange issue when trying to send an email from my company's local mail server using Tcl. The problem is that I've written code that works, but it stops working as soon as I wrap it with TclApp. I believe I've included all the necessary packages.
Code:
package require smtp;
package require mime;
package require tls;

set body "Hello World, I am sending an email! Hear me roar.";

# mime info
set token [mime::initialize -canonical text/plain -string $body];

# mail options
set opts {};

# MAIL SERVER + PORT
lappend opts -servers "my_server";
lappend opts -ports 25;

tls::init -tls1 1;
lappend opts -username "someEmail@example.com";
lappend opts -password "somePasswordExample";

# SUBJECT + FROM + TO
lappend opts -header [list "Subject" "This is a test e-mail, beware!!"];
lappend opts -header [list "From" "theFromEmail@example.com"];
lappend opts -header [list "To" "theToEmail@yahoo.com"];

if {[catch {
    smtp::sendmessage $token \
        {*}$opts \
        -queue false \
        -atleastone false \
        -usetls false \
        -debug 1;
    mime::finalize $token;
} msg]} {
    set out [open "error_log.txt" w];
    puts $out $msg;
    close $out;
}

puts "Hit enter to continue...";
gets stdin;
exit;
This works when I run it, and the email is sent successfully.
But when I wrap it, it doesn't work. For whatever reason, wrapping with TclApp makes my program fail to authenticate.
*Update: I've decided to try wrapping the script with Freewrap to see if I get the same problem, and amazingly I do not. The script works if not wrapped or if wrapped by Freewrap, but not by TclApp; is this a bug in TclApp, or am I simply missing something obvious?
It seems that this is an old problem (see the last post): https://community.activestate.com/forum/wrapping-package-tls151
- Investigate whether the wrapped version has the same .dll dependencies as in the article you linked.
- Check the Tcl Wiki to see whether this is a known issue.
- Are you initializing your environment properly when wrapped (Tcl App Initialization)?
I've found the solution to my problem and solved it! Hopefully this answer will help anyone else who ran into the same issue.
I contacted ActiveState and they were able to point me in the right direction - this link to a thread on the ActiveState forums goes over a similar issue.
I've decided to run the prefix file I've been using to wrap my programs in TclApp (I am using base-tk8.6-thread-win32-ix86). Running it opens a console/Tcl interpreter, which allowed me to try sourcing my Tcl e-mail script manually. I found that the prefix file/interpreter by itself was completely unaware of the location of any package on my system, and that I needed to lappend the locations of the packages I wanted to auto_path. Moreover, I found from the thread that the e-mail script I was using required other dependencies I had not wrapped in, namely sha1, SASL, and otp, along with their own dependencies. Once I got these dependencies, my script worked through the base-tk8.6-thread-win32-ix86 interpreter and subsequently through my wrapped program.
Another clue that these dependencies were necessary, which I found after doing some research, was that my script would give me a 530: 5.5.1 Authentication Required error. I later learned that SASL fixed this, but I don't know anything about networking, so I wouldn't have guessed it.
The other thing I learned was that there is a distinct difference between teacup get and teacup install, which is why I wasn't installing Tcl packages correctly and TclApp was unaware of SASL, sha1, or otp.
What I don't understand, and perhaps someone with more in-depth knowledge of these programs can tell me, is how Wish86 knows about a script's package dependencies and where to find them. It wasn't until I tried looking for the packages in TclApp that I realized I didn't even have sha1, SASL, etc. on my system, nor did I include their package require commands at the top of my script, yet Wish86 could execute my script perfectly. The frustrating thing is that it is difficult to know whether you've met all dependency requirements on systems that don't have a magical Wish86 interpreter that just makes everything work (lol).
After looking through the end of the thread, I found there is a discussion on the depth TclApp goes to look for hard and soft dependencies in projects.
EDIT: Added more information.

Expect send to other process

I have an Expect script which does the following:
Spawns a.sh -> a.sh waits for user input
Spawns b.sh -> runs and finishes
I now want to send input to a.sh, but I'm having difficulty getting this working.
Below is my expect script
#!/usr/bin/expect
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
send -i $a "test\r"
a.sh is
read -p "enter " group
echo $group
echo $group to file > file.txt
and b.sh is
echo i am b
sleep 5
echo xx
Basically, Expect works with two principal commands, send and expect. If send is used, it is usually necessary to have an expect afterwards (while the reverse is not mandatory).
This is because without it we miss what is happening in the spawned process: expect assumes you simply need to send one string value and are not expecting anything else from the session.
So after the code
send "ian\n"
we have to make expect wait for something. You have used expect eof for the spawned process a.sh. In the same way, it can be added for the b.sh spawned process as well.
"Well, then why did interact work?" I hear you.
Both work in the same manner, except that interact expects some input from the user as an interactive session. But your current shell script is not designed to take input, and it will eventually quit since there is not much further code in the script, which is why you are seeing the proper output. Even an expect command alone (even without eof) can do the trick.
Have a look at Expect's man page to know more.
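The spawn_id bookkeeping can be visualized with a loose shell analogy (the file names here are illustrative, not from the question): keep a handle to a still-waiting first process while a second one runs to completion, then feed the first one its input afterwards, much like send -i $a:

```shell
# Stand-in for a.sh: a background job that waits for one line of input
# on a named pipe (our "handle" to it, like the saved spawn_id).
rm -f /tmp/a_pipe /tmp/a_out
mkfifo /tmp/a_pipe
( read line < /tmp/a_pipe; echo "a got: $line" > /tmp/a_out ) &

# Stand-in for b.sh: runs and finishes on its own.
echo "b finished"

# Now send input to the still-waiting first process (like `send -i $a`).
echo "test" > /tmp/a_pipe
wait
cat /tmp/a_out   # prints: a got: test
```

The key point mirrors the Expect fix: the handle to the first process must be kept around and explicitly selected when you finally write to it.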
I've just got this working using
spawn "./a.sh"
set a $spawn_id
expect "enter"
spawn "./b.sh"
expect eof
set spawn_id $a
send "ian\n"
interact
My question now is why do you need interact at the end?

Perl HTML file upload issue. File has zero size

I have a working Perl CGI script that uploads a file from a PC to a Linux server.
It works exactly as intended when I write the call to the CGI in my own HTML form and execute it, but when I put the same call into an existing application, the file is created on the server but does not get the data; it is size zero.
I have compared environment variables (those I can extract from %ENV) and nothing there looks like a cause. I actually tried changing several of the ENV values in my own HTML script to the values the existing application was using, and this did not reveal the problem.
Nothing in the log gives me a clue; the upload operation thinks it was successful.
The user is the same for both tests. If permissions were an issue, the file would not even be created on the server.
Results are the same in IE as in Chrome (works from my own HTML script, not from within the application).
What specific set up should I be looking at, to compare?
This is the upload code:
if (open(UPLOADFILE, ">$upload_dir/$fname")) {
    binmode UPLOADFILE;
    while (<$from_fh>) {
        print UPLOADFILE;
    }
    close UPLOADFILE;
    $out_msg = "Done with Upload: upload_dir=$upload_dir fname=$fname";
}
else {
    $out_msg = "ERROR opening for upload: upload_dir=$upload_dir filename=$filename";
}
I did verify that:
It does NOT enter the while loop when running from inside the application.
It does enter the while loop when called from my own HTML script.
The value of $from_fh is the same for both runs.
All values used in the above block are exactly the same for both runs.
You could check the error result of your open:
my $err;
open(my $uploadfile, ">", "$upload_dir/$fname") or $err = $!;
if (!$uploadfile) {
    my $out_msg = "ERROR opening for upload: upload_dir=$upload_dir filename=$filename: $err";
}
else {
    ### Stuff
    ...;
}
My guess, based on the fact that you are embedding it in another application, is that all the input has already been read by some functionality that is part of the other application. For example, if I tried to use this program as part of a CGI script and I had used the param() function from CGI.pm, then the entire file upload would already have been read. So if my own code tried to read the file again, it would receive zero data, because the data had already been read.
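That "already read" failure mode is easy to reproduce outside CGI: a stream can only be consumed once, so whoever reads it first drains it, and the second reader sees end-of-file, which is exactly a zero-byte "upload". A minimal shell illustration (the two cat calls stand in for the host application's read and the upload loop):

```shell
printf 'file contents\n' | {
    # First reader (standing in for the application's own parsing)
    # drains stdin completely.
    first=$(cat)

    # Second reader (standing in for the upload loop) finds nothing left.
    second=$(cat)

    echo "first reader got:  '$first'"
    echo "second reader got: '$second'"
}
```

The second reader gets an empty string, just as the upload loop never enters its while loop when the application has already consumed the request body.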

PackageMaker "Result of Script" requirement never passes

I am trying to use the "Result of Script" Requirement to check if a particular process is running, so that I can message the user before installation begins.
My script is a shell script that returns 1 for failure and 0 for success. The problem I'm having is that, regardless of my return value, the installer flow interprets it as failure. I am now using an incredibly simple script:
#!/bin/bash
echo "script starting">> /tmp/myfile
true
(The echo is to assure myself that the script is, in fact, running.) I've tried replacing the last line with a lot of things (exit 0, exit 1, "true", "TRUE"), but nothing results in the test passing.
I also discovered the following JavaScript code that gets added to distribution.dist when I activate this requirement.
<installation-check script="pm_install_check();"/>
<script>
function pm_install_check() {
    if (!(system.run('path/to/script/myscript.sh') == true)) {
        my.result.title = 'Title';
        my.result.message = 'Message';
        my.result.type = 'Fatal';
        return false;
    }
    return true;
}
</script>
As far as I can tell, the expression in the if statement will never evaluate to true, so I'm assuming this is my problem. I don't know how to get around it, though, because this code is generated by PackageMaker.
Update
I've decided to work under the impression that this is a bug in PackageMaker, and I am close to a workaround. Rather than using the "Result of Script" requirement, I used the "Result of JavaScript" requirement and built a JavaScript function that looks like:
function my_check() {
    code = system.run('path/to/script/myscript.sh');
    return code == 0;
}
Now my only problem is that this will only work when I point to my script via an absolute path. Obviously this poses a problem for an installer.
It's probably too late for you, but I feel like this should be documented somewhere.
I'd been looking for an answer to this for most of the morning. Long story short, I ended up looking at generic bash scripting and found some info about returning values from a script called by a script. Here's how it can be done:
Anywhere you'd use exit 0 (for success), use $(exit 1).
As you'd expect, exit 1 should be replaced by $(exit 0).
I realize that it's backwards and I don't really get the reasoning behind it, but after some experimentation that's what I found.
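For reference, $(exit N) is ordinary shell: the exit runs inside a command-substitution subshell, so the calling script keeps running and only $? is updated to N. That shows why it differs from a bare exit N (which would terminate the script on the spot), though it does not by itself explain the inverted 0/1 mapping reported above. A quick sketch:

```shell
# A bare `exit 3` here would terminate this script immediately.
# `$(exit 3)` runs exit in a subshell: the script survives,
# and only the last exit status is updated.
status=0
$(exit 3) || status=$?
echo "script is still running; subshell status was $status"   # prints: ... status was 3
```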
Well, this isn't exactly an answer to the question, but it did end up being a solution to my problem. The freeware packaging utility Packages supports the "Result of Script" functionality and handles the path correctly. Unfortunately, the packages it creates are only compatible with OS X 10.5 and later. To support 10.4, I'm building a separate installer using PackageMaker but skipping the "Result of Script" requirement.