Terratest in GitHub Action fails

When running Terratest locally with an assertion, it works as expected, but when trying to run it via a GitHub Action I get the error below (removing the assertion lets Terratest run as expected):
TestTerraform 2022-07-27T08:34:49Z logger.go:66: ::set-output name=exitcode::0
    output.go:19:
        Error Trace: output.go:19
                     terraform_test.go:24
        Error:       Received unexpected error:
                     invalid character 'c' looking for beginning of value
        Test:        TestTerraform
The Terratest file looks like:
package test

import (
	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
	"testing"
)

func TestTerraform(t *testing.T) {
	iam_arn_expected := "arn:aws:iam::xxx:role/terratest_iam"

	terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
		TerraformDir: "../examples/simple",
		Vars: map[string]interface{}{
			"iam_name": "terratest_iam",
			"region":   "eu-west-2",
		},
	})

	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	iam_arn := terraform.Output(t, terraformOptions, "iam_arn")

	assert.Equal(t, iam_arn_expected, iam_arn)
}
The GitHub Action looks like:
jobs:
  terratest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3.2.1
      - name: Install tfenv
        run: |
          git clone https://github.com/tfutils/tfenv.git ~/.tfenv
          echo "$HOME/.tfenv/bin" >> $GITHUB_PATH
      - name: Install Terraform
        working-directory: .
        run: |
          pwd
          tfenv install
          terraform --version
      - name: Download Go Modules
        working-directory: test
        run: go mod download
      - name: Run Terratest
        working-directory: test
        run: go test -v
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PLAYGROUND_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PLAYGROUND_KEY }}

The solution is to use the HashiCorp setup-terraform action, with its Terraform wrapper turned off, in place of the "Install Terraform" step. The wrapper intercepts Terraform's stdout and adds extra lines (the `::set-output name=exitcode::0` line in the log is its signature), so the output Terratest tries to parse as JSON is polluted; disabling the wrapper gives Terratest clean stdout.
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.3.6
    terraform_wrapper: false
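Since setup-terraform installs the requested Terraform version itself, the "Install tfenv" and "Install Terraform" steps can then be dropped from the workflow entirely.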

Related

GitHub workflow: Where does "npm ci" store the "node_modules" folder?

I'm wondering where npm ci saves the node_modules. Furthermore, I want to zip the node_modules. Is this even possible with GitHub workflows, or should I try a different approach? I need this workflow to deploy the codebase on GitHub to a Lambda function in AWS.
This is my code, for example:
jobs:
  node:
    runs-on: ubuntu-latest
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🤖 Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: 16.13.2
          registry-url: 'https://npm.pkg.github.com'
          cache: 'npm'
          cache-dependency-path: '**/package.json'
      - name: Cache
        id: cache-dep
        uses: actions/cache@v3
        with:
          path: |
            ./node_modules
          key: ${{ runner.os }}-pricing-scraper-${{ hashFiles('package.json') }}
          restore-keys: |
            ${{ runner.os }}-pricing-scraper-
      - name: Config NPM
        shell: bash
        run: |
          npm install -g npm@8.1.2
      - name: ⚙️ Install node_modules
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          npm ci
  build:
    runs-on: ubuntu-latest
    needs: [node]
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🏗 Zip files and folder
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          echo Start running the zip script
          # delete prior deployment zips
          DIR="./output"
          if [ ! -d "$DIR" ]; then
            echo "Error: ${DIR} not found. Can not continue."
            exit 1
          fi
          if [ ! -d "./node_modules" ]; then
            echo "Error: node_modules not found. Can not continue."
            exit 1
          fi
          rm "$DIR"/*
          echo Deleted prior deployment zips
          # Zip the core directories which are the same for every scraper.
          zip -r "$DIR"/core.zip config node_modules shipping util
...
It's not saved in the repo; a test step confirms:
- name: Bash Test node_modules
  if: steps.cache-dep.outputs.cache-hit != 'true'
  shell: bash
  run: |
    echo "cached"
    echo $(ls)
Output:

    Run echo "cached"
    cached
    README.md build config output package-lock.json package.json scraper shipping testHandler.js util
The place where the node_modules are stored is `~/.npm`.
The real problem was that the two jobs (node, build) ran on two different VMs, so the node_modules installed in the node job were on a different machine than the build job that needed them.
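One way to get the installed node_modules across the job boundary is to upload them as an artifact in the node job and download them in the build job. A minimal sketch using actions/upload-artifact and actions/download-artifact (the artifact name is illustrative):

# in the node job, after `npm ci`
- name: Upload node_modules
  uses: actions/upload-artifact@v3
  with:
    name: node-modules
    path: node_modules

# in the build job, before the zip step
- name: Download node_modules
  uses: actions/download-artifact@v3
  with:
    name: node-modules
    path: node_modules

Uploading tens of thousands of small files is slow, though, so restoring the same actions/cache key in the build job is usually the faster option.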

github actions - if condition inside the for loop

I'm trying to run the workflow below, using a for loop with an if condition inside, but the action fails to parse with an error. What is my mistake?
workflow:
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Cpp Check
        run: sudo apt-get install cppcheck
      - name: Run Static Analysis
        run: cppcheck . --force --error-exitcode=1
  unit_test:
    # needs: Scan_changes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Ruby
        run: sudo apt-get install ruby
      - name: Setup Ceedling
        run: sudo gem install ceedling
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          dir_names: true
          files: |
            code/**
      - name: Unit Test for Changed modules
        run: |
          for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
            if: contains('$file', 'SomeString')
              echo "$file was changed"
          done
      - name: Test
        run: |
          ceedling
and the output shows a parse error, so I'm not sure what the proper syntax is for the if condition inside the for loop.
`if: contains('$file', 'SomeString')` is not valid bash script code; `if:` is GitHub Actions expression syntax and cannot appear inside a run script. You should do something like:
- name: Unit Test for Changed modules
  run: |
    for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
      if [[ $file =~ "SomeString" ]]; then
        echo "$file was changed"
      fi
    done
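A side note on the test itself: when the right-hand side of `=~` is quoted, bash matches it as a literal substring rather than a regex, so the equivalent glob form also works:

# substring match via glob pattern instead of =~
if [[ $file == *SomeString* ]]; then
  echo "$file was changed"
fi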

setting output variables on Windows using cmd.exe as the shell

How can I set output variables when using shell: cmd on Windows? My repo is hosted at GitHub (not GitLab).
The following is based on this accepted answer
jobs:
  job1:
    runs-on: my-windows-machine
    # Map a step output to a job output
    outputs:
      output1: ${{ steps.step1.outputs.test }}
      output2: ${{ steps.step2.outputs.test }}
    steps:
      - id: step1
        run: |
          echo '::echo::on'
          echo "::set-output name=test::hello"
        shell: cmd
      - id: step2
        run: |
          echo '::echo::on'
          echo "::set-output name=test::world"
        shell: cmd
  job2:
    runs-on: my-windows-machine
    needs: job1
    steps:
      - run: echo ok: ${{ needs.job1.outputs.output1 }} ${{ needs.job1.outputs.output2 }}
        shell: cmd
The echo statement in job2 prints only the bare `ok:` string when shell: cmd is used in the job1 steps; the outputs come through empty.
As the OP Andrew concludes in the comments, output vars simply are not supported with cmd.exe.
I went ahead and broke up the steps in my workflow, using shell: cmd for the things that need to be processed by cmd.exe and a separate step (using the default shell) just to set the output vars.
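That split looks roughly like this; the handoff through $GITHUB_ENV is my assumption of one way to pass the value between the two steps:

- name: cmd work
  shell: cmd
  run: |
    rem cmd.exe-specific work goes here; stash the value as an env var
    echo test=hello>> %GITHUB_ENV%
- id: step1
  shell: pwsh  # the default shell on Windows runners
  run: |
    # read the env var written by the cmd step and expose it as a step output
    "test=$env:test" | Out-File -FilePath $env:GITHUB_OUTPUT -Append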
As an alternative, you can see in "Github Actions, how to share a calculated value between job steps?" the $GITHUB_OUTPUT environment file, which can be used to define outputs for steps. The outputs can then be used in later steps and evaluated in with and env input sections.
You can see it used in "How can I use a GitHub action's output in a workflow?".
Note: the older ::set-output command has been deprecated since October 2022.
name: 'Hello World'
runs:
  using: "composite"
  steps:
    - id: random-number-generator
      run: echo "random-number=$(echo $RANDOM)" >> $GITHUB_OUTPUT
      shell: bash
...
jobs:
  test-job:
    runs-on: self-hosted
    steps:
      - name: Call Hello World
        id: hello-world
        uses: actions/hello-world-action@v1
      - name: Comment
        if: ${{ github.event_name == 'pull_request' }}
        uses: actions/github-script@v3
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            github.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: 'Output - ${{ steps.hello-world.outputs.random-number }}'
            })
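Finally, since GITHUB_OUTPUT is just the path of a file exposed through an environment variable, a cmd.exe step should be able to append to it directly, which would remove the need for the shell split; a minimal sketch I have not verified from cmd:

- id: step1
  shell: cmd
  # no space before >>, or cmd includes the trailing space in the value
  run: echo test=hello>> %GITHUB_OUTPUT%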

How to resolve a gulp publish error while deploying CI/CD with GitHub Actions

I am trying to deploy a CI/CD pipeline for ECR in AWS; we are migrating an Azure pipeline to a GitHub Actions pipeline. When I try to implement the pipeline, I am facing the error below:
Run gulp publish --profile-name development
Using gulpfile ~/work/test-api/test-api/gulpfile.js
Starting 'publish'...
'publish' errored after 9.91 ms
Error: Invalid publish profile named development
    at LoadPublishProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/LoadPublishProfile.js:11:15)
    at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:94:28)
    at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file:
on:
  push:
    branches: [ main ]

name: Node Project `my-app` CI on ECR

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
Using gulp, we publish the environment using the config file below, development.json:
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identifier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
In the YAML file I run the command `gulp publish --profile-name development`, so it should reach out to development.json and publish with that profile, but I couldn't figure out what could be wrong here.
Honestly, I am new to ECR deployment through a pipeline, and gulp is a new concept for me. Can anyone guide me on this? If you need more details, comment below.

using defined environmental variable in with block for github actions

I am trying to figure out how to reference a globally scoped environment variable as input to an action, like so:
name: validate
on: pull_request

env:
  CONFIG_PATH: configuration/conf.json

jobs:
  upload_config:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: create config
        shell: bash -l {0}
        run: |
          mkdir `dirname ${CONFIG_PATH}`
          echo "some config" > ${CONFIG_PATH}
      - name: upload config
        uses: actions/upload-artifact@v1
        with:
          name: config
          path: ${{ CONFIG_PATH }}
However, I am getting an invalid YAML error stating there is an "Unrecognized named-value: 'CONFIG_PATH'". If I try referencing the environment variable like so:
path: ${CONFIG_PATH}
I get a "Path does not exist ${CONFIG_PATH}" error.
Any ideas?
I couldn't find a clear example of it in the docs, but you need to use the env context for this, like so:
path: ${{ env.CONFIG_PATH }}
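Applied to the workflow above, the upload step becomes:

- name: upload config
  uses: actions/upload-artifact@v1
  with:
    name: config
    path: ${{ env.CONFIG_PATH }}

The bare ${{ CONFIG_PATH }} fails because workflow expressions can only reference named contexts (env, github, steps, and so on), while ${CONFIG_PATH} is shell syntax that is never expanded outside a run block.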