I am trying to print the value of API_RESPONSE, but it only prints "response is: ". The S3_RESPONSE output is set, but API_RESPONSE shows up blank in the echo command.
- name: Check if certificate exists
  id: check_certificate
  run: |
    API_RESPONSE=$(aws s3api head-object --bucket test-bucket-ssl --key fullchain.pem 2>&1 | tee true)
    echo "::set-output name=S3_RESPONSE::$(echo $API_RESPONSE)"
    echo "response is: ${API_RESPONSE}"
I think that instead of
response is: ${API_RESPONSE}
you should use
response is: ${{API_RESPONSE}}
You might need to use an environment file, as illustrated here.
In your case:
- name: Check if certificate exists (Set the value)
  id: check_certificate_set_value
  run: |
    echo "API_RESPONSE=$(aws s3api head-object --bucket test-bucket-ssl --key fullchain.pem 2>&1 | tee true)" >> $GITHUB_ENV
- name: Check if certificate exists (Use the value)
  id: check_certificate_use_value
  run: |
    echo "API_RESPONSE=${{ env.API_RESPONSE }}"
Actions code:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout codes
        uses: actions/checkout@v2
      - name: Test git for actions
        shell: bash
        run: |
          ## use GitHub variables, such as: GITHUB_REF, GITHUB_HEAD_REF...
          BRANCH=${GITHUB_REF##*/}
          branch=$BRANCH
          git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
          if [ "$?" == "1" ]
          then
            echo "Branch doesn't exist"
          else
            echo "Branch exist"
          fi
It fails with the following error:
BRANCH=${GITHUB_REF##*/}
branch=${BRANCH}
echo $branch
git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
if [ "$?" == "1" ]
then
echo "Branch doesn't exist"
else
echo "Branch exist"
fi
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
main
Error: Process completed with exit code 2.
When I replace ${GITHUB_REF} with main, it works fine.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Test git for actions
        shell: bash
        run: |
          ## use GitHub variables, such as: GITHUB_REF, GITHUB_HEAD_REF...
          BRANCH=${GITHUB_REF##*/}
          branch=main
          git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
          if [ "$?" == "1" ]
          then
            echo "Branch doesn't exist"
          else
            echo "Branch exist"
          fi
Output:
BRANCH=${GITHUB_REF##*/}
branch=main
echo $branch
git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
if [ "$?" == "1" ]
then
echo "Branch doesn't exist"
else
echo "Branch exist"
fi
main
Branch exist
Is the git ls-remote command not able to use variables?
I want to check whether a certain branch exists in the remote repository in GitHub Actions.
According to jobs.<job_id>.steps[*].shell, the default Bash invocation is:
bash --noprofile --norc -eo pipefail {0}
which makes it fail fast as described under Exit codes and error action preference, for bash:
Fail-fast behavior using set -eo pipefail: This option is set when shell: bash is explicitly specified.
And, according to the Bash manual, under The Set Builtin:
-e: Exit immediately if a pipeline (see Pipelines), which may consist of a single simple command (see Simple Commands), a list (see Lists of Commands), or a compound command (see Compound Commands) returns a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command’s return status is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits.
and,
-o pipefail: If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
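In other words, with -e in effect the failing git ls-remote terminates the script on that very line, so the if [ "$?" == "1" ] test that follows it is never reached. A minimal stand-alone illustration (not from the workflow):

#!/usr/bin/env bash
set -eo pipefail          # same options the runner sets for shell: bash

false                     # any command returning non-zero exits the script here
echo "never reached: $?"  # this line never runs because -e already aborted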
In your case, the possible solution could be to directly handle the exit status of the command in an if:
if command; then
  # success
else
  # failure
fi
i.e.
if ! git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
then
  echo "Branch doesn't exist"
else
  echo "Branch exist"
fi
or,
if git ls-remote --heads --exit-code repo_url "$branch" >/dev/null
then
  echo "Branch exist"
else
  echo "Branch doesn't exist"
fi
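If the exit code itself matters (git ls-remote --exit-code returns 2 when no matching refs are found), a hedged variant that records the status without tripping -e could look like:

status=0
git ls-remote --heads --exit-code repo_url "$branch" >/dev/null || status=$?

if [ "$status" -eq 0 ]; then
  echo "Branch exist"
elif [ "$status" -eq 2 ]; then
  echo "Branch doesn't exist"
else
  echo "git ls-remote failed with status $status"
fi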
I am querying the GitHub API and then using jq to parse some values from the result:
- uses: octokit/request-action@v2.x
  id: get_in_progress_workflow_run
  with:
    route: GET /repos/myorg/myrepo/actions/runs?page=1&per_page=20
  env:
    GITHUB_TOKEN: ${{ secrets.ACCESS_TOKEN }}
- name: Get waiting pull_request_id
  id: get_pr_id
  run: |
    prId="$(echo '${{ steps.get_in_progress_workflow_run.outputs.data }}' | jq '.workflow_runs[] | first(select(.status=="in_progress")) | .pull_requests[0] | .number')";
    echo "prId=$prId" >> "$GITHUB_OUTPUT";
This works fine unless the JSON result from the first step contains a closing parenthesis. When that happens, the command substitution gets closed early and I get an error about the next line of JSON being an unrecognized command.
line 1055: timestamp:: command not found
and line 1055
"head_commit": {
"id": "67fb50d15527690eesasdaddc4425fdda5d4e1eba8",
"tree_id": "37df61a25863dce0e3aec7a61df928f53ca64235",
"message": "message with a )",
"timestamp": "2023-01-05T20:27:05Z",
Is there any way to avoid this? I have tried stripping out the ) but I find that no matter how I try to print json from the github context into bash it errors out before I can do anything with it. And there doesn't appear to be a way to do string substitution from the github context.
For instance, even just assigning the string to a variable fails with the same error:
- name: Get waiting pull_request_id
  id: get_pr_id
  run: |
    json='${{ steps.get_in_progress_workflow_run.outputs.data }}';
fails with
syntax error near unexpected token `)'
${{ .. }} does string interpolation before the shell gets to see anything, so any special character in there can mess up your shell script. It's also a vector for shell injection.
To fix both, set the value in the environment first, and then reference it:
- name: Get waiting pull_request_id
  id: get_pr_id
  env:
    data: ${{ steps.get_in_progress_workflow_run.outputs.data }}
  run: |
    prId="$(echo "$data" | jq '
      .workflow_runs[]
      | first(select(.status=="in_progress"))
      | .pull_requests[0].number
    ')"
    echo "prId=$prId" >> "$GITHUB_OUTPUT"
Alternatively, you can use the GitHub CLI to make the request without an additional action, and use the --jq parameter instead of stand-alone jq:
- name: Get waiting pull_request_id
  id: get_pr_id
  env:
    GITHUB_TOKEN: ${{ secrets.ACCESS_TOKEN }}
  run: |
    id=$(gh api "repos/$GITHUB_REPOSITORY/actions/runs" \
      --method GET \
      --raw-field per_page=20 \
      --jq '
        .workflow_runs[]
        | first(select(.status=="in_progress"))
        | .pull_requests[0].number
      ')
    echo "prId=$id" >> "$GITHUB_OUTPUT"
I'm trying to pass the output of one command to another command in a GitHub Action in the following way:
- name: 'Checkout source code'
  uses: actions/checkout@v2
  with:
    fetch-depth: 0
- name: 'Create delta packages for new, modified or deleted metadata'
  run: |
    mkdir changed-sources
    echo "DIFF=$(git rev-parse --short origin/dev)" >> $GITHUB_ENV
    echo "${{ env.DIFF }}"
    sfdx sgd:source:delta --to "HEAD" --from ${{ env.DIFF }} --output changed-sources/ --generate-delta --source force-app/
But for some reason it's not working; I receive the following output:
steps:
mkdir changed-sources
echo "DIFF=$(git rev-parse --short origin/dev)" >> $GITHUB_ENV
echo ""
sfdx sgd:source:delta --to "HEAD" --from --output changed-sources/ --generate-delta --source force-app/
What am I missing? Any help would be highly appreciated!
According to the official GitHub documentation:
If you generate a value in one step of a job, you can use the value in subsequent steps of the same job by assigning the value to an existing or new environment variable and then writing this to the GITHUB_ENV environment file.
In your example, you are using the env variable in the same step.
You should therefore access the env variable in another step instead of in the same one.
Example:
- name: 'Checkout source code'
  uses: actions/checkout@v2
  with:
    fetch-depth: 0
- name: 'Create delta packages for new, modified or deleted metadata'
  run: |
    mkdir changed-sources
    echo "DIFF=$(git rev-parse --short origin/dev)" >> $GITHUB_ENV
- name: NEW STEP
  run: |
    echo "${{ env.DIFF }}"
    sfdx sgd:source:delta --to "HEAD" --from ${{ env.DIFF }} --output changed-sources/ --generate-delta --source force-app/
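Alternatively, if everything has to stay in one step, the value can simply be kept in a shell variable, which is usable immediately within the same run script; GITHUB_ENV is only needed for later steps. A sketch:

- name: 'Create delta packages for new, modified or deleted metadata'
  run: |
    mkdir changed-sources
    # plain shell variable: usable right away in this script
    DIFF=$(git rev-parse --short origin/dev)
    echo "$DIFF"
    sfdx sgd:source:delta --to "HEAD" --from "$DIFF" --output changed-sources/ --generate-delta --source force-app/
    # still export it in case later steps need it
    echo "DIFF=$DIFF" >> $GITHUB_ENV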
I have an environment where the sudo password of a user is not the same on several servers. This can be seen as a security feature :-D
I wanted to save the sudo passwords (vault/security aside at this point) in one file per host:
host1.sudopwd
host2.sudopwd
...
hostn.sudopwd
In my Ansible playbook, I am using something like this:
---
- hosts: all
  tasks:
    - include: '{{ inventory_hostname }}'
      vars:
        ansible_become: true
        ansible_become_user: root
        ansible_become_pass: "{{ lookup('file','{{ inventory_hostname }}.sudopwd') }}"
And it works well if I have one file per host.
Now how can I make it more "dynamic"?
Let's say I have a script called searchsudopwd, which is a simple bash script (it could be any language, in fact).
#!/usr/bin/env bash
if [[ "$1" == "host1" ]]; then
  echo "pwd1"
elif [[ "$1" == "host2" ]]; then
  echo "pwd2"
else
  echo "default pwd"
fi
How can I use it with ansible? What needs to be changed in my playbook to make it work?
I have tried something like:
---
- hosts: all
  tasks:
    - include: '{{ inventory_hostname }}'
      vars:
        ansible_become: true
        ansible_become_user: root
        ansible_become_pass: "{{ item }}"
      with_lines: searchsudopwd '{{ inventory_hostname }}'
But it doesn't work...
If my idea doesn't work, is there a way to have like a "default" password, and only get the password for specific hosts?
Thank you for your help and time!
I think you are better off putting the passwords in the individual host_vars files when needed, and putting the default in group_vars/all. Put it all in Ansible Vault. (But then you have a shared password for Ansible Vault.)
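For illustration, a minimal sketch of that layout (file contents shortened, both files vault-encrypted) might be:

# group_vars/all.yml -- the default, used when no host-specific value exists
ansible_become_pass: "default pwd"

# host_vars/host1.yml -- overrides the default for host1 only
ansible_become_pass: "pwd1"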
I have a host called "ml01" so I changed your script:
$ cat /home/jscheible/searchsudopwd
#!/usr/bin/env bash
if [[ "$1" == "ml01" ]]; then
  echo "pwd1"
elif [[ "$1" == "host2" ]]; then
  echo "pwd2"
else
  echo "default pwd"
fi
To call that local program with the passwords, just call that in a play with delegate_to: localhost:
$ cat show_pwd
---
- hosts: all
  connection: ssh
  gather_facts: false
  tasks:
    - name: Do passwd lookup
      command: /home/jscheible/searchsudopwd {{ inventory_hostname }}
      delegate_to: localhost
      register: result
    - name: Show results
      debug:
        var: result.stdout
      delegate_to: localhost
Here are the results for my test:
$ ansible-playbook show_pwd
PLAY [all] *********************************************************************
TASK [Do passwd lookup] ********************************************************
changed: [al01 -> localhost]
changed: [ml01 -> localhost]
TASK [Show results] ************************************************************
ok: [al01 -> localhost] => {
"result.stdout": "default pwd"
}
ok: [ml01 -> localhost] => {
"result.stdout": "pwd1"
}
PLAY RECAP *********************************************************************
There are handy lookup plugins.
You can actually just replace your file lookup with pipe lookup:
ansible_become_pass: "{{ lookup('pipe','/path_on_ansible_machine/searchsudopwd '+inventory_hostname) }}"
From the docker distribution document: https://github.com/docker/distribution
It says that to configure Docker to use the mirror, we should:
Configuring the Docker daemon
You will need to pass the --registry-mirror option to your Docker daemon on startup:
docker --registry-mirror=https://<my-docker-mirror-host> daemon
I'm a newbie to Docker, and I start Docker on my Mac normally via the provided "Docker Quickstart Terminal" app, which actually invokes a start.sh shell script:
#!/bin/bash
VM=default
DOCKER_MACHINE=/usr/local/bin/docker-machine
VBOXMANAGE=/Applications/VirtualBox.app/Contents/MacOS/VBoxManage
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m'
unset DYLD_LIBRARY_PATH
unset LD_LIBRARY_PATH
clear
if [ ! -f $DOCKER_MACHINE ] || [ ! -f $VBOXMANAGE ]; then
  echo "Either VirtualBox or Docker Machine are not installed. Please re-run the Toolbox Installer and try again."
  exit 1
fi

$VBOXMANAGE showvminfo $VM &> /dev/null
VM_EXISTS_CODE=$?

if [ $VM_EXISTS_CODE -eq 1 ]; then
  echo "Creating Machine $VM..."
  $DOCKER_MACHINE rm -f $VM &> /dev/null
  rm -rf ~/.docker/machine/machines/$VM
  $DOCKER_MACHINE create -d virtualbox --virtualbox-memory 2048 --virtualbox-disk-size 204800 $VM
else
  echo "Machine $VM already exists in VirtualBox."
fi

VM_STATUS=$($DOCKER_MACHINE status $VM)
if [ "$VM_STATUS" != "Running" ]; then
  echo "Starting machine $VM..."
  $DOCKER_MACHINE start $VM
  yes | $DOCKER_MACHINE regenerate-certs $VM
fi
echo "Setting environment variables for machine $VM..."
clear
cat << EOF
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
EOF
echo -e "${BLUE}docker${NC} is configured to use the ${GREEN}$VM${NC} machine with IP ${GREEN}$($DOCKER_MACHINE ip $VM)${NC}"
echo "For help getting started, check out the docs at https://docs.docker.com"
echo
eval $($DOCKER_MACHINE env $VM --shell=bash)
USER_SHELL=$(dscl /Search -read /Users/$USER UserShell | awk '{print $2}' | head -n 1)
if [[ $USER_SHELL == *"/bash"* ]] || [[ $USER_SHELL == *"/zsh"* ]] || [[ $USER_SHELL == *"/sh"* ]]; then
  $USER_SHELL --login
else
  $USER_SHELL
fi
Is this the correct file to put my '--registry-mirror' config in? What should I do?
If you do a docker-machine create --help:
docker-machine create --help
Usage: docker-machine create [OPTIONS] [arg...]

Create a machine.

Run 'docker-machine create --driver name' to include the create flags for that driver in the help text.

Options:
  ...
  --engine-insecure-registry [--engine-insecure-registry option --engine-insecure-registry option]  Specify insecure registries to allow with the created engine
  --engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror option]        Specify registry mirrors to use
So you can modify your script to add one more parameter:
--engine-registry-mirror=...
However, since your 'default' docker-machine probably already exists (do a docker-machine ls), you might need to remove it first (docker-machine rm default). Make sure you can easily recreate your images from your local Dockerfiles, and/or that you don't have a data container that would need to be saved first.
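For example, the create line in the start.sh above could be extended like this (the mirror URL is the placeholder from the question):

# create the VM with a registry mirror configured for the Docker engine inside it
$DOCKER_MACHINE create -d virtualbox \
  --virtualbox-memory 2048 \
  --virtualbox-disk-size 204800 \
  --engine-registry-mirror=https://<my-docker-mirror-host> \
  $VM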
Open C:\Users\<YourName>\.docker\daemon.json, edit the "registry-mirrors" entry in that file.
{"registry-mirrors":["https://registry.docker-cn.com"],"insecure-registries":[], "debug":true, "experimental": true}