I am using GitHub Enterprise Server 3.0 and created a private repository.
I created a GitHub personal access token (PAT), stored it in the repository's secrets, and referenced it from the workflow. The PAT has read/write packages permissions.
I created the workflow mentioned below, but whenever it runs, it gives 401: Unauthorized.
Can someone guide me on what is missing?
name: Git Deploy
on:
  push:
jobs:
  publish:
    strategy:
      matrix:
        maven: [ '3.6.3' ]
    runs-on: [ self-hosted ]
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
          server-id: github2 # Value of the distributionManagement/repository/id field of the pom.xml
          settings-path: ${{ github.workspace }} # location for the settings.xml file
      - name: install maven # If I don't do this, I get a "mvn not found" error
        uses: stCarolas/setup-maven@v4.2
        with:
          maven-version: 3.6.3
      - name: read secrets from settings
        uses: s4u/maven-settings-action@v2.5.0
        with:
          servers: |
            [{
              "id": "github2",
              "username": "my github user id (Not email)",
              "password": "${{ secrets.PAT }}"
            }]
      - name: Build and deploy
        run: mvn -B deploy
        env:
          GITHUB_TOKEN: ${{ secrets.PAT }}
          GITHUB_USER: "my github user id (Not email)"
The link I am referring to:
https://docs.github.com/en/actions/publishing-packages/publishing-java-packages-with-maven
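For comparison, the pattern in that guide relies on the settings.xml that actions/setup-java generates at settings-path, which authenticates the server-id (github2 here) with ${env.GITHUB_ACTOR} and ${env.GITHUB_TOKEN}. A minimal sketch of the corresponding deploy step, assuming the pom.xml's distributionManagement id really is github2, would be:

      # Sketch only: points Maven at the generated settings.xml and feeds the PAT
      # through the GITHUB_TOKEN variable that the generated file expects.
      # The secret name PAT is the one used in the question.
      - name: Build and deploy
        run: mvn -B deploy -s ${{ github.workspace }}/settings.xml
        env:
          GITHUB_TOKEN: ${{ secrets.PAT }}

Note that s4u/maven-settings-action writes its own ~/.m2/settings.xml, so combining it with the setup-java approach can leave mvn reading credentials from a different file than the one you intended.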
I'm trying to inject backend URLs into an Angular front-end app.
I have a backend that I already deployed, and the URLs are stored inside the env block:
env:
  URL1: google.com
  URL2: stackoverflow.com
Then I ran this matrix workflow to build the app and replace environment.prod.ts:
run_and_build_webapp:
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  strategy:
    matrix:
      services:
        [
          {
            "appName": "app1-webapi",
            "directory": "./src/app2/app1.WebSPA/app1-WebSPA",
            "apiUrl": "${{ env.URL2 }}"
          },
          {
            "appName": "app2-webapi",
            "directory": "./src/app2/app2.WebSPA/app2-WebSPA",
            "apiUrl": "${{ env.URL2 }}"
          }
        ]
  steps:
    - name: Checkout repository
      uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: 14
        cache: "npm"
        cache-dependency-path: ${{ matrix.services.directory }}/package-lock.json
    - name: Modify the Environment File
      run: |
        cd ${{ matrix.services.directory }}/src/environments
        echo "export const environment = { production: true, backEndUrl: '${{ matrix.services.apiUrl }}'};" > environment.prod.ts
But when the workflow runs, I get the following error:
Unrecognized named-value: 'env' # this line: "apiUrl": "${{ env.URL2 }}"
Is there a way to store environment variables as configuration options in a GitHub workflow?
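For what it's worth, the env context is not available while strategy.matrix is being evaluated, which is what the "Unrecognized named-value: 'env'" error is pointing at; env can still be read from steps (run, with, step-level env) inside the job. One hedged workaround, assuming the two URLs from the env block above can simply be written into the matrix as literals, looks like:

  # Sketch: the matrix entries carry the literal URLs (values copied from the
  # question's env block), so no ${{ env.* }} lookup happens at matrix-definition time.
  strategy:
    matrix:
      services:
        [
          { "appName": "app1-webapi", "directory": "./src/app2/app1.WebSPA/app1-WebSPA", "apiUrl": "google.com" },
          { "appName": "app2-webapi", "directory": "./src/app2/app2.WebSPA/app2-WebSPA", "apiUrl": "stackoverflow.com" }
        ]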
I created secrets in GitHub Actions and am trying to use them in a reusable workflow, but I am unable to make it work. However, if I pass the secrets hardcoded from the caller file, it works just fine.
## set_env.yml
name: Sent Env Creds and Vars
on:
  push:
    branches:
      - main
      - dev
  pull_request:
    branches: [ main ]
jobs:
  deploy-dev:
    uses: ./.github/workflows/main.yml
    with:
      AWS_REGION: "us-east-2"
      PREFIX: "dev"
    secrets:
      AWS_ACCESS_KEY_ID: ${{ secrets.DEV_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.DEV_AWS_ACCESS_KEY_ID }}
The reusable workflow is main.yml:
## main.yml
name: Deploy to AWS
# Controls when the workflow will run
on:
  workflow_call:
    inputs:
      AWS_REGION:
        required: true
        type: string
      PREFIX:
        required: true
        type: string
    secrets:
      AWS_ACCESS_KEY_ID:
        required: true
      AWS_SECRET_ACCESS_KEY:
        required: true
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  terraform-deploy:
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          echo Hello, Epsilon! You are in ${{ inputs.AWS_REGION }} region ${{ inputs.PREFIX }} region
          for dir in $(ls -l | grep '^d' | awk '{print $9}'); do
            PARENT_DIR=`pwd`
            echo $dir
            cd $dir
            terraform init -backend-config=${PARENT_DIR}/${{ inputs.PREFIX }}-backend.tfvars
            terraform validate
            terraform plan -var-file=${{ inputs.PREFIX }}_vars.tfvars
            ## terraform apply -input=false -auto-approve -var-file=${{ inputs.PREFIX }}_vars.tfvars
            cd ..
          done
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
If I hardcode the secrets in set_env.yml while calling main.yml, like below, it just works:
jobs:
  deploy-dev:
    uses: ./.github/workflows/main.yml
    with:
      AWS_REGION: "us-east-2"
      PREFIX: "dev"
    secrets:
      AWS_ACCESS_KEY_ID: <hardcoded value>
      AWS_SECRET_ACCESS_KEY: <hardcoded value>
I have been trying to make it work in many ways, but it doesn't work. Please help.
As of May 3rd, 2022, this is possible with the new inherit keyword: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onworkflow_callsecretsinherit
In the calling workflow, you tell it to let the reusable workflow inherit its secrets:
jobs:
  deploy-dev:
    uses: ./.github/workflows/main.yml
    with:
      AWS_REGION: "us-east-2"
      PREFIX: "dev"
    secrets: inherit
This makes the secrets available in the reusable workflow like normal:
with:
  myInput: ${{ secrets.MY_SECRET }}
Note that there's no need to declare the secrets on the workflow_call trigger.
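Applied to main.yml from the question, that means (as a sketch) the workflow_call trigger can drop its secrets block entirely while the steps keep referencing the secrets context as before:

# Sketch of the trimmed reusable-workflow trigger when the caller uses `secrets: inherit`;
# steps can still read ${{ secrets.AWS_ACCESS_KEY_ID }} and ${{ secrets.AWS_SECRET_ACCESS_KEY }}.
on:
  workflow_call:
    inputs:
      AWS_REGION:
        required: true
        type: string
      PREFIX:
        required: true
        type: string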
I was running into this issue. For me, the culprit was the secret value in GitHub secrets. The secret had been created correctly, and it had the correct value and name; however, GitHub Actions could not find it for some reason. Deleting the secret and recreating it seems to have solved the issue, though I cannot determine why.
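If you want to confirm whether a secret is reaching the job at all before recreating it, a throwaway step along these lines can help (MY_SECRET is a placeholder for whichever secret you are debugging; the value itself stays masked in the logs):

      # Debugging sketch: fails the job if the secret expands to an empty string,
      # which is what an unknown or unavailable secret name silently resolves to.
      - name: Check that the secret is visible to this job
        env:
          MY_SECRET: ${{ secrets.MY_SECRET }} # placeholder secret name
        run: |
          if [ -z "$MY_SECRET" ]; then
            echo "MY_SECRET is empty or not available in this context"
            exit 1
          fi
          echo "MY_SECRET is set (length ${#MY_SECRET})"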
I am trying to deploy the CI/CD pipeline for ECR in AWS.
We are trying to migrate the Azure pipeline to a GitHub Actions pipeline.
When I try to implement the pipeline, I am facing the error below:
Run gulp publish --profile-name development
Using gulpfile ~/work/test-api/test-api/gulpfile.js
Starting 'publish'...
'publish' errored after 9.91 ms
Error: Invalid publish profile named development
    at LoadPublishProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/LoadPublishProfile.js:11:15)
    at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:94:28)
    at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file:
on:
  push:
    branches: [ main ]
name: Node Project `my-app` CI on ECR
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
Using gulp, we publish the environment using the config file below:
development.json
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identifier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
In the YAML file, I run the command
gulp publish --profile-name development
so it should pick up development.json and publish using that profile.
I couldn't figure out what could be wrong here. Honestly, I am new to ECR deployment through a pipeline, and gulp is a new concept for me. Can anyone guide me on this? If you need more details, comment below.
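I cannot tell from the snippet where @pinzgolf/pinz-build expects its publish profiles to live, but since the error is thrown by LoadPublishProfile before anything touches AWS, a hedged first debugging step is to confirm that development.json is actually present in the runner's checkout at the path the build expects, for example:

      # Debugging sketch only: prints the working directory and any development.json
      # files in the checked-out tree, so the actual location can be compared with
      # whatever path LoadPublishProfile is resolving.
      - name: Locate publish profile
        run: |
          pwd
          find . -maxdepth 4 -name development.json -not -path './node_modules/*'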
I am trying to build and push a Docker image for a Java/Gradle project. Below is the action script:
name: Java CI with Gradle
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Build with Gradle
        run: ./gradlew build
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: user/app:latest
The error occurs at the "Login to DockerHub" step in the script. Below is the error obtained; I am not sure what is wrong:
Run docker/login-action@v1
Error: Username and password required
Please help.
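For context, docker/login-action reports "Username and password required" when its username and password inputs resolve to empty strings, which usually means the secrets are not visible to that run (for example, a pull request from a fork, or secret names that don't match what is defined under the repository's Settings > Secrets). A hedged check that can be dropped in before the login step:

      # Sketch: fail fast with a clearer message if either secret is empty.
      # DOCKERHUB_USERNAME / DOCKERHUB_TOKEN are the secret names used in the question.
      - name: Verify Docker Hub secrets are available
        env:
          DOCKER_USER: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKER_PASS: ${{ secrets.DOCKERHUB_TOKEN }}
        run: |
          if [ -z "$DOCKER_USER" ] || [ -z "$DOCKER_PASS" ]; then
            echo "DOCKERHUB_USERNAME and/or DOCKERHUB_TOKEN are not set for this workflow run"
            exit 1
          fi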
I'm running my workflows using GitHub Actions. When I create a pull request that triggers my workflow, I get the error message at the bottom of my question. What I am trying to do is call my infrastructure/test/main.tf from my audit-account/prod-env directory. What do I need to change in the env section for the directory?
# deploy.yml
name: 'GitHub OIDC workflow'
on:
  pull_request:
    branches:
      - prod
env:
  tf_version: 'latest'
  tg_version: 'latest'
  tf_working_dir: './audit-account/prod-env'
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    name: 'Build and Deploy'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActions_Workflow_role
          role-duration-seconds: 3600
      - name: 'Terragrunt Init'
        uses: the-commons-project/terragrunt-github-actions@master
        with:
          tf_actions_version: ${{ env.tf_version }}
          tg_actions_version: ${{ env.tg_version }}
          tf_actions_subcommand: 'init'
          tf_actions_working_dir: ${{ env.tf_working_dir }}
          tf_actions_comment: true
        env:
          TF_INPUT: false
# audit-account/prod-env/terragrunt.hcl
terraform {
  source = "../../../../..//infrastructure/test"
}
include {
  path = find_in_parent_folders()
}
# infrastructure/test/main.tf
resource "aws_vpc" "test-vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "default"
tags = {
Name = "OIDC"
}
}
error message:
init: info: initializing Terragrunt configuration in /audit-account/prod-env
init: error: failed to initialize Terragrunt configuration in /audit-account/prod-env
time=2021-11-17T23:55:54Z level=error msg=Working dir infrastructure/test from source file:///github/workspace/audit-account/prod-env does not exist
Your source path for the infrastructure module goes way too far up in the folder structure.
Assuming you have the infrastructure and audit-account directories at the root of the repository, your source would be ../../infrastructure/test. You have it looking 5 folders up from audit-account/prod-env, which puts you 3 folders above the workspace in a folder somewhere on the runner's filesystem.