Invert a test in .gitlab-ci.yml - gitlab-ci-runner

I would like to prevent TODO comments (or other problematic strings) from being checked in, using a GitLab CI test rule. I added the last line here:
.job_template: &template_test
  image: python:3.6-stretch
  tags:
    - python
  # ...
stages:
  - test
test:
  <<: *template_test
  stage: test
  script:
    - flake8 *.py
    - ! grep TODO *.py
But when I look at the output of the runner, it fails:
$ flake8 *.py
$ grep TODO *.py
ERROR: Job failed: exit code 1
It seems like GitLab swallowed the exclamation mark !, which is used in the shell to negate the return value of grep.

A line beginning with an exclamation mark (! grep ...) must be quoted in YAML. However, even the quoted form ('! grep ...') will not work here: the return code will always be zero. I got the solution from https://stackoverflow.com/a/31549913/491884: a subshell must be started, because GitLab CI starts the shell with set -e. This should work and is reasonably short:
script:
  # ...
  - (! grep TODO *.py)

The ! is a reserved character in YAML, so the line does not work as written.
However, in this case you could use an if..then expression:
- if [ "$(grep TODO *.py)" != "" ]; then exit 1; fi

Related

Does a GitHub action step use `set -e` semantics by default?

A common pattern in GitHub action workflows is to run something like this:
- name: Install and Build 🔧
  run: |
    npm ci
    npm run build
Clearly the intention is to run the second command only if the first command succeeds.
When running on Linux, the question becomes whether the shell runs with set -e semantics. This answer suggests that set -e semantics are the default.
I'm trying to find that information in the documentation, but I'm a bit confused how it is specified. The section on exit codes contains the following for shell/sh shells:
Fail-fast behavior using set -eo pipefail: This option is set when shell: bash is explicitly specified. It is not applied by default.
This seems to contradict the other answer (and question!), and would mean that the above pattern actually is invalid, because the second line would be executed even if the first line fails.
Am I just misreading the documentation, or is it really necessary to either always specify set -e manually or add shell: bash explicitly to get the desired behavior?
Does a GitHub action step use set -e semantics by default?
Yes, it does.
According to jobs.<job_id>.steps[*].shell, the sh and bash invocations do include -e, whether the shell is specified or not:
unspecified: bash -e {0}
with shell: bash: bash --noprofile --norc -eo pipefail {0}
with shell: sh: sh -e {0}
However, this passage under Exit codes and error action preference:
bash/sh: Fail-fast behavior using set -eo pipefail: This option is set when shell: bash is explicitly specified. It is not applied by default.
applies only to the -o pipefail part, which is Bash-only. It could have been more explicit, though.
An issue has been created on the GitHub docs repo to revise this:
https://github.com/github/docs/issues/23853
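For a quick self-check, a minimal step like the following (the step name and echoed text are illustrative) aborts at the failing command when run with the default shell on Linux:
- name: Check fail-fast default
  run: |
    false                  # non-zero exit; the default `bash -e {0}` stops here
    echo "never printed"   # never reached, so the step fails at `false`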

Invalid value. Matching delimiter not found

While updating GitHub Actions to reflect the recent announcement deprecating set-output, I have run into the following error when attempting to send multiline output to GITHUB_OUTPUT following the provided documentation:
Error: Unable to process file command 'output' successfully.
Error: Invalid value. Matching delimiter not found 'e8e24219e2b73f81'
Below is the example action:
name: Action Test
description: test new action output
runs:
  using: "composite"
  steps:
    - name: write
      run: |
        delimiter="$(openssl rand -hex 8)"
        echo "OUT<<${delimiter}" >> $GITHUB_OUTPUT
        cat test.json >> $GITHUB_OUTPUT
        echo "${delimiter}" >> $GITHUB_OUTPUT
      shell: bash
      id: write
    - name: Print Output
      run: echo ${{ steps.write.outputs.OUT }}
      shell: bash
In theory this should generate a random delimiter, put it at the beginning and end of the output, and allow the action to then print the multiline file. In practice, I'm unsure what is happening to the second instance of the delimiter as there is no match.
I have tried various solutions such as those posted in this topic
This turned out not to be the issue at hand for the asker, but would lead to the same error message, so leaving it here:
The JSON file is missing a newline at the end, so the actual contents written to the output file look something like
OUT<<eabbd4511f4f29ab
{"key":"value"}eabbd4511f4f29ab
and the closing delimiter can't be found because it's not at the beginning of a line.
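To confirm this, you can check whether the file ends with a newline (an illustrative one-liner; test.json as in the question — command substitution strips a trailing newline, so the result is empty only if the last byte is one):
test -z "$(tail -c 1 test.json)" && echo "trailing newline present" || echo "trailing newline missing"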
To fix, we can apply this answer to add a newline if it's missing:
run: |
  delimiter=$(openssl rand -hex 8)
  {
    echo "OUT<<$delimiter"
    sed -e '$a\' test.json
    echo "$delimiter"
  } >> "$GITHUB_OUTPUT"
shell: bash

GitHub workflow fails when running a grep command that does not find anything

I'm working on a workflow that has the following step:
- name: Analyze blabla
  run: grep -Ri --include \*.ts 'stringToBeSearched' ./tmp/bla > ./tmp/results.txt
  shell: bash
This works well when the grep command finds something: the matching lines are dumped into results.txt, the return code is 0, and the workflow goes on to the next step as expected.
But when the grep command does not find the searched string, an empty file is saved as results.txt (which is correct up to this point), the return code is 1, the step is marked as failed, and the whole workflow fails.
Is there a way to not mark the step as failed when the return code is 1?
Thanks
You could use the continue-on-error step option:
jobs.<job_id>.steps[*].continue-on-error
Prevents a job from failing when a step fails. Set to true to allow a job to pass when this step fails.
Like:
- name: Analyze blabla
  continue-on-error: true
  id: grep
  run: grep -Ri --include \*.ts 'stringToBeSearched' ./tmp/bla > ./tmp/results.txt
  shell: bash
You could then check the outcome of the step to determine whether it failed, like:
steps.<id>.outcome != 'success'
See the outcome documentation.
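For example, a hypothetical follow-up step could branch on that outcome (the step name and message are illustrative):
- name: Report empty results
  if: steps.grep.outcome != 'success'
  run: echo "No matches found; results.txt is empty"
  shell: bash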

Why is JSON from aws rds run in Docker "malformed" according to other tools?

To my eyes the following JSON looks valid.
{
  "DescribeDBLogFiles": [
    {
      "LogFileName": "error/postgresql.log.2022-09-14-00",
      "LastWritten": 1663199972348,
      "Size": 3032193
    }
  ]
}
A) But jq, json_pp, and Python's json.tool module all deem it invalid:
# jq 1.6
> echo "$logfiles" | jq
parse error: Invalid numeric literal at line 1, column 2
# json_pp 4.02
> echo "$logfiles" | json_pp
malformed JSON string, neither array, object, number, string or atom,
at character offset 0 (before "\x{1b}[?1h\x{1b}=\r{...") at /usr/bin/json_pp line 51
> python3 -m json.tool <<< "$logfiles"
Expecting value: line 1 column 1 (char 0)
B) But on the other hand, if the above JSON is copied and pasted into an online validator, both validators, 1 and 2, deem it valid.
As hinted by json_pp's error above, hexdump <<< "$logfiles" indeed shows additional surrounding characters. Here's the prefix: 5b1b 313f 1b68 0d3d 1b7b ..., where 7b is {.
The JSON is output to a logfiles variable by this command:
logfiles=$(aws rds describe-db-log-files \
  --db-instance-identifier somedb \
  --filename-contains 2022-09-14)

# where `aws` is
alias aws='docker run --rm -it -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'
> bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
I have perused this GitHub issue, yet can't figure out the cause. I suspect that double quotes get mangled somehow when using echo; some reported that printf "worked" for them.
The use of the docker run --rm -it -v command to produce the JSON added additional unprintable characters to the start of the JSON data. That makes the resulting $logfiles variable invalid.
The -t option allocates a tty and the -i option creates an interactive session. In this case, the -t allows the shell to read login scripts (e.g. .bashrc). Something in your startup scripts is outputting ANSI escape codes. Often these clear the screen, set up other things for the interactive shell, or make the output more visually appealing by colorizing portions of the data.
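If that is the cause here, a possible fix is to drop the tty allocation from the alias (a sketch, assuming the escape codes come from the -t option; -i is also unnecessary for non-interactive use):
alias aws='docker run --rm -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'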

Fish shell: Check if argument is provided for function

I am creating a function (below) to which you can provide an argument, a directory. I test whether $argv is a directory with the -d option, but that doesn't seem to work; it always returns true, even if no argument is supplied. I also tried test -n $argv -a -d $argv to check whether $argv is an empty string, but that returns a test: Missing argument at index 1 error. How should I test whether any argument was provided to the function? And why is test -d $argv not working? From my understanding it should be false when no argument is provided, because an empty string is not a directory.
function fcd
    if test -d $argv
        open $argv
    else
        open $PWD
    end
end
Thanks for the help.
count is the right way to do this. For the common case of checking whether there are any arguments, you can use its exit status:
function fcd
    if count $argv > /dev/null
        open $argv
    else
        open $PWD
    end
end
To answer your second question, test -d $argv returns true if $argv is empty, because POSIX requires that when test is passed one argument, it must "Exit true (0) if $1 is not null; otherwise, exit false". So when $argv is empty, test -d $argv means test -d which must exit true because -d is not empty! Argh!
Edit: added a missing end; thanks to Ismail for noticing.
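To see the one-argument rule in action, here is an illustrative demonstration in fish:
test -d; echo $status    # prints 0: the single argument "-d" is not null
test ""; echo $status    # prints 1: the single argument is null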
In fish 2.1+ at least, you can name your arguments, which then allows for arguably more semantic code:
function fcd --argument-names 'filename'
    if test -n "$filename"
        open $filename
    else
        open $PWD
    end
end
if not set -q argv[1]
    echo 'none'
else
    echo 'yes'
end
From the set man page:
set ( -q | --query ) [SCOPE_OPTIONS] VARIABLE_NAMES...
-q or --query tests if the specified variable names are defined. Does not output anything, but the builtin's exit status is the number of variables specified that were not defined.
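Applied to the function from the question, this becomes (a sketch using the same fcd and open names):
function fcd
    if set -q argv[1]
        open $argv[1]
    else
        open $PWD
    end
end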
$argv is a list, so you want to look at the first element, if there are elements in that list:
if begin; test (count $argv) -gt 0; and test -d $argv[1]; end
    open $argv[1]
else
    open $PWD
end
Maybe this is unrelated, but I would like to add another perspective to the question.
I want to broaden the scope to testing shell code, using the libraries developed in the fisherman group.
With mock you can check that the open command is called, safely and without side effects.
Example:
function fcd
    if count $argv > /dev/null
        open $argv
    else
        open $PWD
    end
end

mock open 0 "echo \$args"
fcd "cool" # echoes cool

mock open 0 "echo \$args"
fcd # echoes $PWD
It is a recent library, but it can help to test things that might be dangerous, for example rm:
mock rm 0 "echo nothing would happen on \$args"
rm "some file" # simply echoes the message with a list of the files that would been affected
I hope this gives a more out-of-the-box point of view.
P.S.: Sorry for the blatant publicity, but I think it is a cool idea that would be nice to see adopted by shell scripters; testing adds sustainability to shell scripts. :P
EDIT: I recently noticed a bug in the sample I posted. Please do not use rm *: the asterisk is not treated as a parameter; instead, fish expands it to the list of files found, and the command only mocks the first call. This means the first file would be ignored by the mock, but all subsequent files would get erased. So please be careful if trying the sample, and use a single file for the example, not the wildcard.