Invalid value. Matching delimiter not found - github-actions

In updating GitHub Actions to reflect the recent announcement deprecating set-output, I have run into the following error while attempting to send multiline output to GITHUB_OUTPUT, following the provided documentation:
Error: Unable to process file command 'output' successfully.
Error: Invalid value. Matching delimiter not found 'e8e24219e2b73f81'
Below is the example action:
name: Action Test
description: test new action output
runs:
  using: "composite"
  steps:
    - name: write
      run: |
        delimiter="$(openssl rand -hex 8)"
        echo "OUT<<${delimiter}" >> $GITHUB_OUTPUT
        cat test.json >> $GITHUB_OUTPUT
        echo "${delimiter}" >> $GITHUB_OUTPUT
      shell: bash
      id: write
    - name: Print Output
      run: echo ${{ steps.write.outputs.OUT }}
      shell: bash
In theory this should generate a random delimiter, put it at the beginning and end of the output, and allow the action to then print the multiline file. In practice, I'm unsure what is happening to the second instance of the delimiter as there is no match.
I have tried various solutions such as those posted in this topic

This turned out not to be the issue at hand for the asker, but would lead to the same error message, so leaving it here:
The JSON file is missing a newline at the end, so the actual contents written to the output file look something like
OUT<<eabbd4511f4f29ab
{"key":"value"}eabbd4511f4f29ab
and the closing delimiter can't be found because it's not at the beginning of a line.
To fix, we can apply this answer to add a newline if it's missing:
run: |
  delimiter=$(openssl rand -hex 8)
  {
    echo "OUT<<$delimiter"
    sed -e '$a\' test.json
    echo "$delimiter"
  } >> "$GITHUB_OUTPUT"
shell: bash

Related

GitHub workflow fails when running a grep command that does not find anything

I'm working on a workflow that has the following step:
- name: Analyze blabla
  run: grep -Ri --include \*.ts 'stringToBeSearched' ./tmp/bla > ./tmp/results.txt
  shell: bash
This works well when the grep command finds something. The matching lines are dumped into results.txt, the return code is 0, and the workflow goes to the next step as expected.
But when the grep command does not find the searched string, an empty results.txt is saved (which is correct up to this point), the return code is 1, the step is marked as failed, and the whole workflow fails.
Is there a way to not mark the step as failed when the return code is 1?
Thanks
You could use the continue-on-error step option:
jobs.<job_id>.steps[*].continue-on-error
Prevents a job from failing when a step fails. Set to true to allow a job to pass when this step fails.
Like:
- name: Analyze blabla
  continue-on-error: true
  id: grep
  run: grep -Ri --include \*.ts 'stringToBeSearched' ./tmp/bla > ./tmp/results.txt
  shell: bash
You could then check the outcome of the step to determine whether it failed, like:
steps.<id>.outcome != 'success'
See outcome doc here
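For example, a later step could branch on that outcome; a minimal sketch (the step name and message are illustrative):
- name: Report empty search
  if: steps.grep.outcome != 'success'
  run: echo "No occurrences of 'stringToBeSearched' were found"
  shell: bash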

Why is JSON from aws rds run in Docker "malformed" according to other tools?

To my eyes the following JSON looks valid.
{
    "DescribeDBLogFiles": [
        {
            "LogFileName": "error/postgresql.log.2022-09-14-00",
            "LastWritten": 1663199972348,
            "Size": 3032193
        }
    ]
}
A) But jq, json_pp, and Python's json.tool module deem it invalid:
# jq 1.6
> echo "$logfiles" | jq
parse error: Invalid numeric literal at line 1, column 2
# json_pp 4.02
> echo "$logfiles" | json_pp
malformed JSON string, neither array, object, number, string or atom,
at character offset 0 (before "\x{1b}[?1h\x{1b}=\r{...") at /usr/bin/json_pp line 51
> python3 -m json.tool <<< "$logfiles"
Expecting value: line 1 column 1 (char 0)
B) But on the other hand, if the above JSON is copied and pasted into an online validator, both validator 1 and validator 2 deem it valid.
As hinted by json_pp's error above, hexdump <<< "$logfiles" indeed shows additional, surrounding characters. Here's the prefix: 5b1b 313f 1b68 0d3d 1b7b ...., where 7b is {.
The JSON is output to a logfiles variable by this command:
logfiles=$(aws rds describe-db-log-files \
    --db-instance-identifier somedb \
    --filename-contains 2022-09-14)
# where `aws` is
alias aws='docker run --rm -it -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'
> bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
Have perused this GitHub issue, yet can't figure out the cause. I suspect that double quotes get mangled somehow when using echo - some reported that printf "worked" for them.
The docker run --rm -it -v command used to produce the JSON added additional unprintable characters to the start of the JSON data. That makes the resulting value of $logfiles invalid.
The -t option allocates a tty and the -i creates an interactive shell. In this case the -t is allowing the shell to read login scripts (e.g. .bashrc). Something in your startup scripts is outputting ANSI escape codes. Often this is done to clear the screen, set up other things for the interactive shell, or make the output more visually appealing by colorizing portions of the data.
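Given that diagnosis, one sketch of a fix is to drop -t (and -i, which isn't needed when only capturing output) from the alias so no terminal escape sequences end up in the captured value:
# assumption: the alias is only used here for scripted output capture
alias aws='docker run --rm -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'
logfiles=$(aws rds describe-db-log-files \
    --db-instance-identifier somedb \
    --filename-contains 2022-09-14)
echo "$logfiles" | jq .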

Store command response to a variable in Github action

I am trying to store the response of aws-cli command to a variable. I am using this:
- name: Check if certificate exists
  id: check_certificate
  run: echo "::set-output name=S3_RESPONSE::$(aws s3api head-object --bucket test-bucket-ssl --key fullchain.pem)"
- name: Print vars
  run: echo "${{ steps.check_certificate.outputs.S3_RESPONSE }}"
I am getting blank output even though the S3 API returns a one-line response. If I use a simple command like echo instead of the AWS command, it works.
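For reference, here is the same step written against $GITHUB_OUTPUT instead of the deprecated set-output (a sketch only, not a diagnosis of the blank output; if the response spans multiple lines, the delimiter form from the first example above is needed):
- name: Check if certificate exists
  id: check_certificate
  run: echo "S3_RESPONSE=$(aws s3api head-object --bucket test-bucket-ssl --key fullchain.pem)" >> "$GITHUB_OUTPUT"
- name: Print vars
  run: echo "${{ steps.check_certificate.outputs.S3_RESPONSE }}"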

Invert a test in .gitlab-ci.yml

I would like to prevent TODO comments (or other problematic strings) from being checked in, using a GitLab CI test rule. I added the last line here:
.job_template: &template_test
  image: python:3.6-stretch
  tags:
    - python
  # ...

stages:
  - test

test:
  <<: *template_test
  stage: test
  script:
    - flake8 *.py
    - ! grep TODO *.py
But when I look at the output of the runner, it fails:
$ flake8 *.py
$ grep TODO *.py
ERROR: Job failed: exit code 1
It seems like Gitlab swallowed the exclamation mark !, used in the shell to negate the return value of grep.
A line that begins with an exclamation mark (! grep ...) must be quoted. However, even the quoted form ('! grep ...') will not work here; the return code will always be zero. I got the solution from https://stackoverflow.com/a/31549913/491884: a subshell must be started, because GitLab CI starts the shell with set -e. This should work and is reasonably short:
script:
  ...
  - (! grep TODO *.py)
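Applied to the job from the question, the script section would then read (a sketch; the rest of the job is unchanged):
test:
  <<: *template_test
  stage: test
  script:
    - flake8 *.py
    - (! grep TODO *.py)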
The ! is a reserved character in YAML, therefore this does not work.
However, in this case you could use an if..then expression:
- if [ "$(grep TODO *.py)" != "" ]; then exit 1; fi

How to import shell functions from one file into another?

I have the shell script:
#!/bin/bash
export LD=$(lsb_release -sd | sed 's/"//g')
export ARCH=$(uname -m)
export VER=$(lsb_release -sr)
# Load the test function
/bin/bash -c "lib/test.sh"
echo $VER
DISTROS=('Arch'
'CentOS'
'Debian'
'Fedora'
'Gentoo')
for I in "${DISTROS[#]}"
do
i=$(echo $I | tr '[:upper:]' '[:lower:]') # convert distro string to lowercase
if [[ $LD == "$I"* ]]; then
./$ARCH/${i}.sh
fi
done
As you can see it should run a shell script, depending on which architecture and OS it is run on. It should first run the script lib/test.sh before it runs this architecture and OS-specific script. This is lib/test.sh:
#!/bin/bash
function comex {
    which $1 >/dev/null 2>&1
}
and when I run it on x86_64 Arch Linux with this x86_64/arch.sh script:
#!/bin/bash
if comex atom; then
    printf "Atom is already installed!"
elif comex git; then
    printf "Git is installed!"
fi
it returned the output:
rolling
./x86_64/arch.sh: line 3: comex: command not found
./x86_64/arch.sh: line 5: comex: command not found
so clearly the comex shell function is not correctly loaded by the time the x86_64/arch.sh script is run. Hence I am confused and wondering what I need to do in order to correctly define the comex function such that it is correctly loaded in this architecture- and OS-dependent final script.
I have already tried using . "lib/test.sh" instead of /bin/bash -c "lib/test.sh" and I received the exact same error. I have also tried adding . "lib/test.sh" to the loop, just before the ./$ARCH/${i}.sh line. This too failed, returning the same error.
Brief answer: you need to import your functions using . or source instead of bash -c:
# Load the test function
source "lib/test.sh"
Longer answer: when you call a script with bash -c, a child process is created. This child process sees all exported variables (including functions) from the parent process, but not vice versa. So your script will never see the comex function. Instead, you need to include the script code directly in the current script, and you do so with the . or source command.
Part 2. After you have "sourced" lib/test.sh, your main script is able to use the comex function. But the arch scripts won't see this function, because it is not exported to them. You need to export -f comex:
#!/bin/bash
function comex {
    which $1 >/dev/null 2>&1
}
export -f comex
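With export -f in place, a quick way to confirm the function is actually visible in a child script is declare -F, which prints the name of a defined function (a temporary check, not part of the original answer):
# add temporarily near the top of x86_64/arch.sh
declare -F comex || echo "comex is not defined in this shell"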