Why is GOOGLE_APPLICATION_CREDENTIALS not being set as an env variable?

If I set this env variable in my npm script like below, it works:
"cypress:open:dev": "CYPRESS_BASE_URL=$URL GOOGLE_APPLICATION_CREDENTIALS=cypress/plugins/serviceAccountKeyDev.json cypress open --env version=development"
However, if I set this env variable in my Cypress config file, it doesn't seem to actually get set:
import path from 'path'
import { defineConfig } from 'cypress'
import { cypressBrowserPermissionsPlugin } from 'cypress-browser-permissions'

export default defineConfig({
  e2e: {
    async setupNodeEvents(on, config) {
      // eslint-disable-next-line @typescript-eslint/no-var-requires
      require('cypress-iap/utils')(on, config);

      const version = config.env.version || 'development'
      const configFile = await import(path.join(
        config.projectRoot,
        'cypress/config',
        `${version}.json`
      ));
      const credentialsFile = await import(path.join(
        config.projectRoot,
        'cypress/config',
        'credentials.json'
      ));
      const serviceAccountFile = await import(path.join(
        config.projectRoot,
        'cypress/plugins',
        'serviceAccountKeyDev.json'
      ));

      config = cypressBrowserPermissionsPlugin(on, config)
      config.env = {
        'browserPermissions': {
          'geolocation': 'allow',
        },
        'GOOGLE_APPLICATION_CREDENTIALS': serviceAccountFile
      }
      config = {
        ...config, // take config defined in this file
        ...configFile // merge/override from the external file
      }
      config.env = {
        ...config.env, // 2nd level merge
        ...credentialsFile[version] // from git-ignored file
      }
      config.baseUrl = config.baseUrl || configFile.env.baseUrl
      return config
    },
    reporter: 'mochawesome'
  },
});
If the tests are run in GitHub Actions, they also pass. So why isn't the env variable set locally unless I hardcode it in the npm script?
GitHub Actions workflow that also works:
name: Daily Regression Tests
on:
  workflow_dispatch:
  schedule:
    # https://crontab.guru/#0_0_*_*_*
    - cron: "0 0 * * *"
jobs:
  cypress-run:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        # run 3 copies of the current job in parallel
        containers: [1, 2, 3]
    steps:
      # Install NPM dependencies, cache them correctly
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '18.7.0'
          cache: 'npm'
      - name: NPM Install
        run: npm install
      - name: Write the credentials.json and serviceAccountKeyDev.json file 📝
        # use quotes around the secret, as its value
        # is simply inserted as a string into the command
        run: |
          echo '${{ secrets.CYPRESS_ENV_CI }}' > cypress/config/credentials.json
          echo '${{ secrets.CYPRESS_ENV_SERVICE_ACCOUNT_KEY }}' > cypress/plugins/serviceAccountKeyDev.json
      # Run all Cypress tests
      - run: npm start -- --record --parallel --browser chrome --tag "dev,nightly"
        env:
          # and pass the Dashboard record key as an environment variable
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
          GOOGLE_APPLICATION_CREDENTIALS: cypress/plugins/serviceAccountKeyDev.json
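A likely reason: config.env inside setupNodeEvents only feeds what tests read back via Cypress.env(); it never exports an operating-system environment variable. So anything that reads process.env.GOOGLE_APPLICATION_CREDENTIALS (for example a Google client library called from a task) only sees a value when the shell or the workflow's env: block exported it, which is exactly what the npm script and the GitHub Actions job do. A minimal sketch of setting the process-level variable from inside the config instead, assuming the key file stays at cypress/plugins/serviceAccountKeyDev.json:

// sketch: export the variable for the Node process that setupNodeEvents runs in
async setupNodeEvents(on, config) {
  // keep a value already provided by the shell/CI, otherwise fall back to the local key file
  process.env.GOOGLE_APPLICATION_CREDENTIALS =
    process.env.GOOGLE_APPLICATION_CREDENTIALS ||
    path.join(config.projectRoot, 'cypress/plugins/serviceAccountKeyDev.json');
  // ...rest of the existing setup...
  return config;
}

Note that this only affects Node code running through setupNodeEvents (plugins and tasks); browser-side test code still reads values through Cypress.env().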

Related

Store environment variables in a GitHub workflow's configuration options

I'm trying to inject backend URLs into an Angular front-end app.
I have a backend that I already deployed, and the URLs are stored inside the env block:
env:
  URL1: google.com
  URL2: stackoverflow.com
Then I ran this matrix workflow to build and replace environment.prod.ts:
run_and_build_webapp:
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  strategy:
    matrix:
      services:
        [
          {
            "appName": "app1-webapi",
            "directory": "./src/app2/app1.WebSPA/app1-WebSPA",
            "apiUrl": "${{ env.URL2 }}"
          },
          {
            "appName": "app2-webapi",
            "directory": "./src/app2/app2.WebSPA/app2-WebSPA",
            "apiUrl": "${{ env.URL2 }}"
          }
        ]
  steps:
    - name: Checkout repository
      uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: 14
        cache: "npm"
        cache-dependency-path: ${{ matrix.services.directory }}/package-lock.json
    - name: Modify the Environment File
      run: |
        cd ${{ matrix.services.directory }}/src/environments
        echo "export const environment = { production: true, backEndUrl: '${{matrix.services.apiUrl}}'};" > environment.prod.ts
But when the workflow runs, I get the following error:
Unrecognized named-value: 'env' # this line: "apiUrl": "${{ env.URL2 }}"
Is there a way to store environment variables in a GitHub workflow's configuration options?
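As a side note, the env context is not available under jobs.<job_id>.strategy, which is why the expression is rejected at parse time. A minimal workaround sketch, inlining the literal values from the env block above directly into the matrix (which URL belongs to which app is assumed here; another option is to build the matrix in a previous job and pass it through a job output with fromJSON):

strategy:
  matrix:
    services:
      - appName: app1-webapi
        directory: ./src/app2/app1.WebSPA/app1-WebSPA
        apiUrl: google.com        # URL1 inlined (assumption)
      - appName: app2-webapi
        directory: ./src/app2/app2.WebSPA/app2-WebSPA
        apiUrl: stackoverflow.com # URL2 inlined

The steps can keep referencing ${{ matrix.services.apiUrl }} unchanged.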

GitHub workflow: Where does "npm ci" store the "node_modules" folder?

I'm wondering where npm ci saves the node_modules.
Furthermore, I want to zip the node_modules. Is this even possible with GitHub workflows, or should I try a different approach? I need this workflow to deploy the codebase on GitHub to a Lambda function in AWS.
This is my code, for example:
jobs:
  node:
    runs-on: ubuntu-latest
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🤖 Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: 16.13.2
          registry-url: 'https://npm.pkg.github.com'
          cache: 'npm'
          cache-dependency-path: '**/package.json'
      - name: Cache
        id: cache-dep
        uses: actions/cache@v3
        with:
          path: |
            ./node_modules
          key: ${{ runner.os }}-pricing-scraper-${{ hashFiles('package.json') }}
          restore-keys: |
            ${{ runner.os }}-pricing-scraper-
      - name: Config NPM
        shell: bash
        run: |
          npm install -g npm@8.1.2
      - name: ⚙️ Install node_modules
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          npm ci
  build:
    runs-on: ubuntu-latest
    needs: [node]
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🏗 Zip files and folder
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          echo Start running the zip script
          # delete prior deployment zips
          DIR="./output"
          if [ ! -d "$DIR" ]; then
            echo "Error: ${DIR} not found. Can not continue."
            exit 1
          fi
          if [ ! -d "./node_modules" ]; then
            echo "Error: node_modules not found. Can not continue."
            exit 1
          fi
          rm "$DIR"/*
          echo Deleted prior deployment zips
          # Zip the core directories which are the same for every scraper.
          zip -r "$DIR"/core.zip config node_modules shipping util
          ...
It's not saved in the repo
- name: Bash Test node_modules
  if: steps.cache-dep.outputs.cache-hit != 'true'
  shell: bash
  run: |
    echo "cached"
    echo $(ls)
Output:
Run echo "cached"
cached
README.md build config output package-lock.json package.json scraper shipping testHandler.js util
The npm cache is stored in ~/.npm, but the installed node_modules end up in the job's workspace.
The problem was that I ran two jobs (node, build) on two different VMs, so the installed node_modules were on the other VM (the node job) rather than on the one where I needed them (the build job).
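A hedged sketch of one way to keep two jobs and still have node_modules in the build job: hand the installed tree over with upload-artifact / download-artifact (the artifact name is illustrative; running npm ci directly in the build job is often simpler, since large node_modules trees are slow to upload):

  node:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16.13.2
      - run: npm ci
      # share the installed dependencies with the next job
      - uses: actions/upload-artifact@v3
        with:
          name: node-modules
          path: node_modules
  build:
    runs-on: ubuntu-latest
    needs: [node]
    steps:
      - uses: actions/checkout@v3
      # restore node_modules into this job's workspace before zipping
      - uses: actions/download-artifact@v3
        with:
          name: node-modules
          path: node_modules
      - run: |
          mkdir -p output
          zip -r output/core.zip config node_modules shipping util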

GitHub Dependabot for a library inside a YAML file

Introduction
I'm currently working on a project that automatically containerizes a Java project with JIB.
GitHub project link.
Problem
The JIB library is implicitly used inside the YAML file, like this:
- name: Build JIB container and publish to GitHub Packages
  run: |
    if [ ! -z "${{ inputs.module }}" ]; then
      MULTI_MODULE_ARGS="-am -pl ${{ inputs.module }}"
    fi
    if [ ! -z "${{ inputs.main-class }}" ]; then
      MAIN_CLASS_ARGS="-Djib.container.mainClass=${{ inputs.main-class }}"
    fi
    mvn package com.google.cloud.tools:jib-maven-plugin:3.2.1:build \
      -Djib.to.image=${{ inputs.REGISTRY }}/${{ steps.downcase.outputs.lowercase }}:${{ inputs.tag-name }} \
      -Djib.to.auth.username=${{ inputs.USERNAME }} \
      -Djib.to.auth.password=${{ inputs.PASSWORD }} $MULTI_MODULE_ARGS $MAIN_CLASS_ARGS
  shell: bash
When a new version of JIB is released, my Dependabot configuration doesn't update the YAML file.
Dependabot configuration:
version: 2
updates:
  - package-ecosystem: github-actions
    directory: '/'
    schedule:
      interval: weekly
Question
Does someone know how to configure dependabot.yml for an implicitly declared library?
Or how to configure dependabot.yml to automatically create an issue when a new JIB version is released?
You can do it with hiden-dependency-updater.
Example of a GitHub workflow you can use:
name: Update hidden dependencies
on:
  schedule:
    - cron: '0 0 * * *'
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: MathieuSoysal/hiden-dependency-updater@v1.1.1
        with:
          files: action.yml # List of files to update
          prefix: "com.google.cloud.tools:jib-maven-plugin:" # Prefix before the version, default is: ""
          suffix: ":build ."
          regex: "[0-9.]*"
          selector: "maven"
          github_repository: "GoogleContainerTools/jib"
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }} # You need to create your own token with pull request rights
          commit-message: update jib
          title: Update jib
          body: Update jib to reflect release changes
          branch: update-jib
          base: main
From the doc:
The directory must be set to "/" to check for workflow files in
.github/workflows.
- package-ecosystem: "github-actions"
# Workflow files stored in the
# default location of `.github/workflows`
directory: "/"
schedule:
interval: "daily"
So try specifying a different directory, for example:
- package-ecosystem: "github-actions"
# Workflow files stored in the
directory: "."
schedule:
interval: "daily"

Terratest in GitHub Actions fails

When running Terratest locally with an assertion, it works as expected. But when trying to run it via a GitHub Action, I get the error below (removing the assertion has Terratest running as expected):
TestTerraform 2022-07-27T08:34:49Z logger.go:66: ::set-output name=exitcode::0
output.go:19:
Error Trace: output.go:19
terraform_test.go:24
Error: Received unexpected error:
invalid character 'c' looking for beginning of value
Test: TestTerraform
The Terratest file looks like:
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraform(t *testing.T) {
    iam_arn_expected := "arn:aws:iam::xxx:role/terratest_iam"

    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        TerraformDir: "../examples/simple",
        Vars: map[string]interface{}{
            "iam_name": "terratest_iam",
            "region":   "eu-west-2",
        },
    })

    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)

    iam_arn := terraform.Output(t, terraformOptions, "iam_arn")

    assert.Equal(t, iam_arn_expected, iam_arn)
}
The GitHub Action looks like:
jobs:
  terratest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3.2.1
      - name: Install tfenv
        run: |
          git clone https://github.com/tfutils/tfenv.git ~/.tfenv
          echo "$HOME/.tfenv/bin" >> $GITHUB_PATH
      - name: Install Terraform
        working-directory: .
        run: |
          pwd
          tfenv install
          terraform --version
      - name: Download Go Modules
        working-directory: test
        run: go mod download
      - name: Run Terratest
        working-directory: test
        run: go test -v
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PLAYGROUND_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PLAYGROUND_KEY }}
The solution is to use the HashiCorp setup-terraform action, with the Terraform wrapper turned off, in place of your "Install Terraform" step. The wrapper sits in front of the terraform binary and adds its own lines (like the ::set-output entry visible in the log) to the command output, so Terratest can no longer parse what terraform output returns.
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.3.6
    terraform_wrapper: false
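For context, the relevant steps of the job after that swap might look roughly like this (the tfenv steps are dropped; the pinned Terraform version is just an example):

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3.2.1
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.3.6
          terraform_wrapper: false
      - name: Download Go Modules
        working-directory: test
        run: go mod download
      - name: Run Terratest
        working-directory: test
        run: go test -v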

How to reference the proper directory in a GitHub Actions workflow to call a module

I'm running my workflows using GitHub Actions. When I create a pull_request that triggers my workflow, I get the error message at the bottom of my question. What I am trying to do is call my infrastructure/test/main.tf from my audit-account/prod-env directory. What do I need to change in the env section for the directory?
# deploy.yml
name: 'GitHub OIDC workflow'
on:
  pull_request:
    branches:
      - prod
env:
  tf_version: 'latest'
  tg_version: 'latest'
  tf_working_dir: './audit-account/prod-env'
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    name: 'Build and Deploy'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActions_Workflow_role
          role-duration-seconds: 3600
      - name: 'Terragrunt Init'
        uses: the-commons-project/terragrunt-github-actions@master
        with:
          tf_actions_version: ${{ env.tf_version }}
          tg_actions_version: ${{ env.tg_version }}
          tf_actions_subcommand: 'init'
          tf_actions_working_dir: ${{ env.tf_working_dir }}
          tf_actions_comment: true
        env:
          TF_INPUT: false
# audit-account/prod-env/terragrunt.hcl
terraform {
  source = "../../../../..//infrastructure/test"
}

include {
  path = find_in_parent_folders()
}
# infrastructure/test/main.tf
resource "aws_vpc" "test-vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "default"
tags = {
Name = "OIDC"
}
}
error message:
init: info: initializing Terragrunt configuration in /audit-account/prod-env
init: error: failed to initialize Terragrunt configuration in /audit-account/prod-env
time=2021-11-17T23:55:54Z level=error msg=Working dir infrastructure/test from source file:///github/workspace/audit-account/prod-env does not exist
Your source path for the infrastructure module goes way too far up in the folder structure.
Assuming you have the infrastructure and audit-account directories at the root of the repository, your source would be ../../infrastructure/test. You have it looking 5 folders up from audit-account/prod-env, which puts you 3 folders above the workspace in a folder somewhere on the runner's filesystem.
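Under that assumption, the corrected audit-account/prod-env/terragrunt.hcl would look something like this (the double slash keeps marking where the module path starts, as in the original file):

# audit-account/prod-env/terragrunt.hcl
terraform {
  # two levels up reaches the repository root
  source = "../..//infrastructure/test"
}

include {
  path = find_in_parent_folders()
}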