Argument getting triggered even if it is optional and mentioned in getopts

I have written a script, myscript, which performs an action on a particular type of file. There is also an option, -d, for displaying the filename. Even if I have not passed -d, as in
$ myscript
it still says "invalid file path" (because that file should be evaluated only AFTER my script has acted on it); -d is triggered even though it was not mentioned. How do I solve this? Thanks.
myscript.sh:
#!/bin/zsh
echo "Enter source directory: "
read directory
echo
echo "Add a file (with path): "
read file
echo
while getopts ":rd" cal; do
  case "$cal" in
    r) echo "Renaming..."
       echo
       ;;
    d) echo "Displaying the Renamed files.."
       echo
       ;;
    *) echo "Invalid option: $OPTARG"
       ;;
  esac
done
shift $((OPTIND -1))
echo
find . "*.txt" -print0 | while read -d $'\0' file
do
  anothescript -r "$file" -d
done
echo "done..."

How to fix or avoid Error: Unable to process file command 'output' successfully?

Recently GitHub announced that the echo "::set-output name=x::y" command is deprecated and should be replaced by echo "x=y" >> $GITHUB_OUTPUT
The previous command was able to process a multiline value, while the new approach fails with the following errors
Error: Unable to process file command 'output' successfully.
Error: Invalid format
In my script, I populate a variable message with a message text that should be sent to slack. I need output variables to pass that text to the next job step which performs the send operation.
message="Coverage: $(cat coverage.txt). Covered: $(cat covered.txt). Uncovered: $(cat uncovered.txt). Coverage required: $(cat coverageRequires.csv)"
The last part of message includes the content of a CSV file which has multiple lines
While the set-output command was able to process such a multiline parameter
echo "::set-output name=text::$message"
the new version fails
echo "text=$message" >> $GITHUB_OUTPUT
What can be done to fix or avoid this error?
The documentation describes syntax for multiline strings in a different section but it seems to work even for output parameters.
Syntax:
{name}<<{delimiter}
{value}
{delimiter}
This could be interpreted as:
Set output with the defined name, and a delimiter (typically EOF) that would mark the end of data.
Keep reading each line and concatenating it into one input.
Once reaching the line consisting of the defined delimiter, stop processing. This means that another output could start being added.
Therefore, in your case the following should work and step's text output would consist of a multiline string that $message contains:
echo "text<<EOF" >> $GITHUB_OUTPUT
echo "$message" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
...and unless you need $message for something else, you could actually avoid setting it and get a more readable set of instructions to construct the output:
echo "text<<EOF" >> $GITHUB_OUTPUT
echo "Coverage: $(cat coverage.txt)." >> $GITHUB_OUTPUT
echo "Covered: $(cat covered.txt)." >> $GITHUB_OUTPUT
echo "Uncovered: $(cat uncovered.txt)." >> $GITHUB_OUTPUT
echo "Coverage required: $(cat coverageRequires.csv)" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
Note: The last example is not 100% the same as yours because it would contain newlines between the sections. You could use echo -n to avoid that.
I ended up replacing all line breaks in the message variable with the command
message=$(echo "$message" | tr '\n' ' ')
echo "text=$message" >> $GITHUB_OUTPUT
This eliminated the error.
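The flattening itself is easy to check outside of Actions; a quick sketch with printf standing in for the original file reads (printf '%s' is used instead of echo when piping, to avoid a trailing space from echo's added newline):

```shell
#!/bin/bash
# Flatten a multiline value the way the answer does, so it fits the
# single-line "name=value" format expected by $GITHUB_OUTPUT.
message="$(printf 'line one\nline two\nline three')"
# printf '%s' avoids the trailing newline that echo would append
flat=$(printf '%s' "$message" | tr '\n' ' ')
echo "$flat"    # → line one line two line three
```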
The previous command was able to process a multiline value while the new approach fails with the following errors
Actually, it had not always been; GitHub recently changed the behaviour:
https://github.com/orgs/community/discussions/26288
What can be done to fix or avoid this error?
The same way as was for the GITHUB_ENV variable:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#multiline-strings
echo 'var<<EOF' >> $GITHUB_OUTPUT
echo "<multi-line-output>" >> $GITHUB_OUTPUT
echo 'EOF' >> $GITHUB_OUTPUT
Or, a fancier way:
https://github.com/orgs/community/discussions/26288#discussioncomment-3876281
delimiter="$(openssl rand -hex 8)"
echo "output-name<<${delimiter}" >> "${GITHUB_OUTPUT}"
echo "Some\nMultiline\nOutput" >> "${GITHUB_OUTPUT}"
echo "${delimiter}" >> "${GITHUB_OUTPUT}"
Another option for setting multiline outputs is this implementation (same as for ENV variables in $GITHUB_ENV):
- name: Setup output var
  id: test1
  run: |
    MESSAGE=$(cat << EOF
    first line
    second line
    third line
    ...
    EOF
    )
    echo TEST=$MESSAGE >> $GITHUB_OUTPUT
- name: Check output var
  run: |
    echo ${{steps.test1.outputs.TEST}}
I made a test here with the same behavior as for environment variables (detailed in this other thread).
EDIT 1:
This syntax also works (and looks easier to use):
run: |
  echo "TEST1=first line \
  second line \
  third line" >> $GITHUB_OUTPUT
EDIT 2:
It's also possible to display the output as multiple lines (and not on a single line as in the other examples above). However, the syntax would be different and you would need to use echo -e together with \n inside the variable.
Example:
- name: Setup output var
  id: test
  run: echo "TEST=first line\n second line\n third line" >> $GITHUB_OUTPUT
- name: Check output var
  run: |
    echo ${{steps.test.outputs.TEST}} # Will keep the n from the \n
    echo -e "${{steps.test.outputs.TEST}}" # Will break the line from the \n
steps:
  - run: |
      some_response=$(curl -i -H "Content-Type: application/json" \
        -d "${body}" -X POST "${url}")
      echo response_output=$some_response >> $GITHUB_OUTPUT
    id: some-request
  - run: |
      echo "Response is: ${{ steps.some-request.outputs.response_output }}"
This worked well for me. Quotes (and curly brackets) are not needed when merely setting an output variable.

variable expansion in subshell in makefile fails

I have a makefile function inside a makefile (myfunction.mk):
.ONESHELL:
define call_script
set +x
mkdir -p $$(dirname $(2))
if [ ! -f $(2) ]; then
echo "" > $(2)
fi
REDIRECT='| tee -a'
echo '>> $(1)'
($(1) ???????? $(2))
RET_CODE=$$?
echo "exit_code is: $$RET_CODE"
if [ ! $$RET_CODE = 0 ]; then
echo "$(3) terminated with error $$RET_CODE"
exit $$RET_CODE
else
if [ ! -z "$(strip $(3))" ]; then
echo "$(3) done"
fi
fi
endef
This function calls a script and appends the result to a log (the log and its folder are created if they do not exist). The result of the script is appended only if the makefile variable named by the 4th argument ($(4)) equals 'yes'.
You call it like this:
include myfunction.mk
OUTPUT_ENABLED ?= yes
target:
	$(call call_script, echo "test", reports/mylog.log, "doing test", OUTPUT_ENABLED)
This works for the most part:
If I replace '????????' with '| tee -a', it works.
If I replace '????????' with $(REDIRECT), it fails.
If I replace '????????' with $$REDIRECT, it fails.
Why?
Note: I am running it with /bin/sh, which is a symbolic link to dash.
Note: eventually I want to add an ifeq that checks $(4) and replaces | tee -a with &>>.
I'll assume that you use call in a recipe, not flat in your Makefile. There are a few problems with your shell script. First, if you try the following on the command line:
mkdir -p reports
REDIRECT='| tee -a'
echo '>> echo "test"'
(echo "test" $REDIRECT reports/mylog.log)
you'll see that echo considers:
"test" $REDIRECT reports/mylog.log
as its arguments. They are expanded and echoed, which prints:
test | tee -a reports/mylog.log
on the standard output, not the effect you expected, I guess. You could, for instance, use eval. On the command line:
eval "echo "test" $REDIRECT reports/mylog.log"
Which, in your Makefile, would become:
eval "$(1) $$REDIRECT $(2)"
Next you should not quote the third parameter of call because the quotes will be passed unmodified and your script will be expanded by make as:
echo " "doing test" terminated with error $RET_CODE"
Again probably not what you want.
Third, you should avoid useless spaces in the parameters of call because they are preserved too (as you can see above between the first 2 double quotes):
.PHONY: foo
foo:
	$(call call_script,echo "test",reports/mylog.log,doing test,OUTPUT_ENABLED)
And for your last desired feature, it would be slightly easier to pass the value of OUTPUT_ENABLED to call instead of its name, but let's go this way:
$ cat myfunction.mk
define call_script
set +x
mkdir -p $$(dirname $(2))
if [ ! -f $(2) ]; then
echo "" > $(2)
fi
if [ "$($(4))" = "yes" ]; then
REDIRECT='| tee -a'
else
REDIRECT='&>>'
fi
echo '>> $(1)'
eval "$(1) $$REDIRECT $(2)"
RET_CODE=$$?
echo "exit_code is: $$RET_CODE"
if [ ! $$RET_CODE = 0 ]; then
echo "$(3) terminated with error $$RET_CODE"
exit $$RET_CODE
else
if [ ! -z "$(strip $(3))" ]; then
echo "$(3) done"
fi
fi
endef
$ cat Makefile
.ONESHELL:
include myfunction.mk
OUTPUT_ENABLED ?= yes
target:
	$(call call_script,echo "test",reports/mylog.log,doing test,OUTPUT_ENABLED)
Note that I moved the .ONESHELL: in the main Makefile because it is probably better to not hide it inside an included file. Up to you.
The most problematic issue here is that if you pipe your commands, the exit code is that of the last command in the pipe; e.g. false | tee foo.log will exit with 0 because tee will most probably succeed. Note also that the pipe only redirects stdout, so your log will not contain any stderr messages unless they are explicitly redirected.
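In bash (unlike dash) the pipe behaviours involved here are easy to demonstrate side by side; a short sketch:

```shell
#!/bin/bash
# Exit status of a plain pipeline is that of the LAST command:
false | tee /dev/null
plain_status=$?                  # 0, because tee succeeded

# PIPESTATUS (bash-only, hence the portability concern) keeps every
# member's exit status:
false | tee /dev/null
first_status=${PIPESTATUS[0]}    # 1, from false

# pipefail makes the whole pipeline fail if any member fails:
set -o pipefail
false | tee /dev/null
pipefail_status=$?               # 1
set +o pipefail

echo "$plain_status $first_status $pipefail_status"   # → 0 1 1
```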
Considering that piping commands influences the exit code, and given the lack of portability of $PIPESTATUS (most notably it is not supported in dash), I would avoid piping commands and use a temporary file for gathering output, i.e.:
$ cat Makefile
# $(1) - script to execute
# $(2) - log file
# $(3) - description
define call_script
echo '>> $(1)'
$(if $(OUTPUT_ENABLED), \
$(1) > $@.log 2>&1; RET_CODE=$$?; mkdir -p $(dir $(2)); cat $@.log >> $(2); cat $@.log; rm -f $@.log, \
$(1); RET_CODE=$$? \
); \
echo "EXIT_CODE is: $${RET_CODE}"; \
if [ $${RET_CODE} -ne 0 ]; then $(if $(3),echo "$(3) terminated with error $${RET_CODE}";) exit $${RET_CODE}; fi; \
$(if $(3), echo "$(3) done.")
endef
good:
	$(call call_script,echo "test",reports/mylog.log,doing test)
bad:
	$(call call_script,mkdir /root/foo,reports/mylog.log,intentional fail)
ugly:
	$(call call_script,bad_command,reports/mylog.log)
Regular call will not create the logs and will stop on errors:
$ make good bad ugly
echo '>> echo "test"'
>> echo "test"
echo "test"; RET_CODE=$? ; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "doing test terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "doing test done."
test
EXIT_CODE is: 0
doing test done.
echo '>> mkdir /root/foo'
>> mkdir /root/foo
mkdir /root/foo; RET_CODE=$? ; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "intentional fail terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "intentional fail done."
mkdir: cannot create directory ‘/root/foo’: Permission denied
EXIT_CODE is: 1
intentional fail terminated with error 1
make: *** [Makefile:19: bad] Error 1
Note that ugly was not built due to failure on bad. Now the same with the log:
$ make good bad ugly OUTPUT_ENABLED=1
echo '>> echo "test"'
>> echo "test"
echo "test" > good.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat good.log >> reports/mylog.log; cat good.log; rm -f good.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "doing test terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "doing test done."
test
EXIT_CODE is: 0
doing test done.
echo '>> mkdir /root/foo'
>> mkdir /root/foo
mkdir /root/foo > bad.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat bad.log >> reports/mylog.log; cat bad.log; rm -f bad.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "intentional fail terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "intentional fail done."
mkdir: cannot create directory ‘/root/foo’: Permission denied
EXIT_CODE is: 1
intentional fail terminated with error 1
make: *** [Makefile:19: bad] Error 1
$ cat reports/mylog.log
test
mkdir: cannot create directory ‘/root/foo’: Permission denied
Note that this time ugly was also not run. But if run later, it will correctly append to the log:
$ make ugly OUTPUT_ENABLED=1
echo '>> bad_command'
>> bad_command
bad_command > ugly.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat ugly.log >> reports/mylog.log; cat ugly.log; rm -f ugly.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then exit ${RET_CODE}; fi;
/bin/sh: 1: bad_command: not found
EXIT_CODE is: 127
make: *** [Makefile:22: ugly] Error 127
$ cat reports/mylog.log
test
mkdir: cannot create directory ‘/root/foo’: Permission denied
/bin/sh: 1: bad_command: not found
Personally, I am not a fan of implementing logging this way. It is complicated, and it only logs the output of commands (and only those explicitly called to do so), not the output of make itself. I'd rather keep the Makefile clean and simple and just run make 2>&1 | tee log to have the output logged.

Shell script: if statement does not work as I want it to

I wrote a shell script (for practice) that should compile a C++ (.cpp) file, automatically generate an executable with clang++ and execute it. My code:
#!/bin/bash
function runcpp() {
  CPPFILE=$1
  if [ -z $CPPFILE ]; then
    echo "You need to specify a path to your .cpp file!"
  else
    echo -n "Checking if '$CPPFILE' is a valid file..."
    if [[ $CPPFILE == "*.cpp" ]]; then
      echo -e "\rChecking if '$CPPFILE' is a valid file... successful"
      echo -n "Generating executable for '$CPPFILE'..."
      clang++ $CPPFILE
      echo -e "\rGenerating executable for '$CPPFILE'... done"
    fi
  fi
}
It's not done yet, however, at line 9 (if [[ $CPPFILE == "*.cpp" ]]; then) something goes wrong: the script exits, even though the file I specified is a .cpp file. My Terminal window:
kali@kali:~$ ls -lha *.cpp
-rw-r--r-- 1 kali kali 98 Feb 9 19:35 test.cpp
kali@kali:~$ runcpp test.cpp
Checking if 'test.cpp' is a valid file...kali@kali:~$
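The usual cause of exactly this symptom is the quoting in the [[ ]] test: quoting the right-hand side makes bash compare against the literal string *.cpp instead of treating it as a glob pattern. A minimal sketch of the distinction (the filename is hypothetical):

```shell
#!/bin/bash
CPPFILE="test.cpp"

# Quoted pattern: compared literally, so this does NOT match "test.cpp"
if [[ $CPPFILE == "*.cpp" ]]; then quoted=match; else quoted=no-match; fi

# Unquoted pattern: treated as a glob, so this matches any *.cpp name
if [[ $CPPFILE == *.cpp ]]; then unquoted=match; else unquoted=no-match; fi

echo "$quoted $unquoted"    # → no-match match
```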

while loop calling function but only for first line, Serverlist.txt contains multiple server details

I am trying to collect the logs. Serverlist.txt contains server details like root 10.0.0.1 22 TestServer. When I run the script it only reads the first line and exits; it does not work for the further lines. Below is my script.
newdate1=`date -d "yesterday" '+%b %d' | sed 's/0/ /g'`
newdate2=`date -d "yesterday" '+%d/%b/%Y'`
newdate3=`date -d "yesterday" '+%y%m%d'`
DL=/opt/$newdate3
Serverlist=/opt/Serverlist.txt
serverlog()
{
  mkdir -p $DL/$NAME
  ssh -p$PORT $USER@$IP "cat /var/log/messages*|grep '$newdate1'"|cat > $DL/$NAME/messages.log
}
while read USER IP PORT NAME
do
  serverlog
  sleep 1;
done <<<"$Serverlist"
Use < instead of <<<. <<< is a Here String substitution. The right side is evaluated, and then the result is read by the loop as standard input:
$ FILE="my_file"
$ cat $FILE
First line
Last line
$ while read LINE; do echo $LINE; done <$FILE
First line
Last line
$ set -x
$ while read LINE; do echo $LINE; done <<<$FILE
+ read LINE
+ echo my_file
my_file
+ read LINE
$ while read LINE; do echo $LINE; done <<<$(ls /home)
++ ls /home
+ read LINE
+ echo antxon install lost+found
antxon install lost+found
+ read LINE
$
I got the answer from another link: use the -n option of ssh. ssh normally reads from standard input and so consumes the rest of the loop's input; -n prevents that, so the loop is not broken and you get the desired result.
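The effect is easy to reproduce without ssh: any command inside the loop that reads standard input will drain the lines meant for read. In this sketch cat plays the role of ssh, and redirecting its stdin from /dev/null has the same effect as ssh -n:

```shell
#!/bin/bash
# A three-line stand-in for Serverlist.txt
list="$(mktemp)"
printf 'one\ntwo\nthree\n' > "$list"

# The inner command inherits the loop's stdin and eats the remaining
# lines, so only the first iteration runs (cat stands in for ssh):
count_broken=0
while read -r line; do
  cat > /dev/null            # consumes "two" and "three"
  count_broken=$((count_broken + 1))
done < "$list"

# Redirecting the inner command's stdin (what ssh -n does) fixes it:
count_fixed=0
while read -r line; do
  cat < /dev/null > /dev/null
  count_fixed=$((count_fixed + 1))
done < "$list"

echo "$count_broken $count_fixed"   # → 1 3
rm -f "$list"
```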

How to parse json response in the shell script?

I am working with a bash shell script. I need to call a URL from the script and then parse the JSON data it returns.
This is my URL: http://localhost:8080/test_beat, and the response I get after hitting the URL will be one of these two:
{"error": "error_message"}
{"success": "success_message"}
Below is my shell script which executes the URL using wget.
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
#grep $DATA for error and success key
Now I am not sure how to parse the JSON response in $DATA and see whether the key is success or error. If the key is success, I will print "success" and the $DATA value and exit the shell script with a zero status code; if the key is error, I will print "error" and the $DATA value and exit with a non-zero status code.
How can I parse json response and extract the key from it in shell script?
I don't want to install any library to do this since my JSON response is fixed and it will always be same as shown above so any simpler way is fine.
Update:-
Below is my final shell script -
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/tester)
echo $DATA
#grep $DATA for error and success key
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
case "$KEY" in
success)
exit 0
;;
error)
exit 1
;;
esac
Does this look right?
If you are going to be using any more complicated json from the shell and you can install additional software, jq is going to be your friend.
So, for example, if you want to just extract the error message if present, then you can do this:
$ echo '{"error": "Some Error"}' | jq ".error"
"Some Error"
If you try this on the success case, it will print:
$ echo '{"success": "Yay"}' | jq ".error"
null
The main advantage of the tool is simply that it fully understands json. So, no need for concern over corner cases and whatnot.
#!/bin/bash
IFS= read -d '' DATA < temp.txt ## Imitates your DATA=$(wget ...). Just replace it.
while IFS=\" read -ra LINE; do
case "${LINE[1]}" in
error)
# ERROR_MSG=${LINE[3]}
printf -v ERROR_MSG '%b' "${LINE[3]}"
;;
success)
# SUCCESS_MSG=${LINE[3]}
printf -v SUCCESS_MSG '%b' "${LINE[3]}"
;;
esac
done <<< "$DATA"
echo "$ERROR_MSG|$SUCCESS_MSG" ## Shows: error_message|success_message
* %b expands backslash escape sequences in the corresponding argument.
Update as I didn't really get the question at first. It should simply be:
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
[[ $KEY == success ]] ## Gives $? = 0 if true or else 1 if false.
And you can examine it further:
case "$KEY" in
success)
echo "Success message: $MESSAGE"
exit 0
;;
error)
echo "Error message: $MESSAGE"
exit 1
;;
esac
Of course similar obvious tests can be done with it:
if [[ $KEY == success ]]; then
echo "It was successful."
else
echo "It wasn't."
fi
From your last comment it can be simply done as
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
echo "$DATA" ## Your really need to show $DATA and not $MESSAGE right?
[[ $KEY == success ]]
exit ## Exits with code based from current $?. Not necessary if you're on the last line of the script.
You probably already have python installed, which has json parsing in the standard library. Python is not a great language for one-liners in shell scripts, but here is one way to use it:
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
if python -c '
import json, sys
exit(1 if "error" in json.loads(sys.stdin.read()) else 0)' <<<"$DATA"
then
echo "SUCCESS: $DATA"
else
echo "ERROR: $DATA"
exit 1
fi
Given:
that you don't want to use JSON libraries,
and that the response you're parsing is simple and the only thing you care about is the presence of the substring "success", I suggest the following simplification:
#!/bin/bash
wget -O - -q -t 1 http://localhost:8080/tester | grep -F -q '"success"'
exit $?
-F tells grep to search for a fixed (literal) string.
-q tells grep to produce no output and instead only reflect via its exit code whether a match was found or not.
exit $? simply exits with grep's exit code ($? is a special variable that reflects the most recently executed command's exit code).
Note that if all you care about is whether wget's output contains "success", the above pipeline will do; there is no need to capture wget's output in an auxiliary variable.
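The same pipeline can be exercised locally with printf standing in for wget (the check function is mine, for illustration):

```shell
#!/bin/bash
# grep -q reports the match only via its exit code, which is all we need.
check() { printf '%s' "$1" | grep -F -q '"success"'; }

check '{"success": "success_message"}'; ok=$?     # 0: substring found
check '{"error": "error_message"}';     err=$?    # 1: no match

echo "$ok $err"    # → 0 1
```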