Elastic Beanstalk deploy with GitHub Actions fails - amazon-elastic-beanstalk

I'm trying to deploy my project via GitHub Actions, and the workflow itself runs fine.
It uploads my source bundle to S3 and updates my EB environment (it pulls the source bundle from S3 into the staging directory successfully).
I checked the logs on the EC2 instance that belongs to my EB environment.
The EC2 instance succeeds in downloading the source bundle, unpacking it, and running pip install -r requirements.txt.
Under /var/app/staging/
my source files are there (the source bundle is unpacked).
But, as the event statement below shows,
it keeps failing to deploy (to be exact, it keeps failing to update the EB environment).
Long shots I tried:
I terminated the EC2 instance and the load balancer (a classic ELB integrated with my environment) and let EB create new ones; updating the environment still failed.
I changed my source bundle name (actually the source bundle name changes on every run, since it includes the git SHA).
I remade the application and environment and redeployed; it still fails.
I need some help... Details below.
++ summary
Amazon Linux 2
My application lands in /var/app/staging successfully,
but the deployment fails to move it to /var/app/current.
I want to know which Elastic Beanstalk engine script is in charge of copying /var/app/staging to /var/app/current
(or validating it).
Elastic Beanstalk environment settings:
Python 3.7 running on 64bit Amazon Linux 2/3.3.6
.github/workflows/main.yml
name: PlayplzAction-deploy
on:
  push:
    branches: [ playplz/backend ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create ZIP deployment package
        run: zip -r deploy_package.zip ./
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEYID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Create env file
        run: |
          touch .env
          echo SECRET_KEY="${{ secrets.SECRET_KEY }}" >> .env
          echo VIMEO_SECRET_KEY = "${{ secrets.VIMEO_SECRET_KEY }}" >> .env
          echo DEBUG = "${{ secrets.DEBUG }}" >> .env
          echo DB_NAME= "${{ secrets.DB_NAME }}">> .env
          echo DB_USER= "${{ secrets.DB_USER }}">> .env
          echo DB_PASSWORD= "${{ secrets.DB_PASSWORD }}">> .env
          echo DB_HOST= "${{ secrets.DB_HOST }}">> .env
          echo DB_PORT= "${{ secrets.DB_PORT }}">> .env
          echo AWS_ACCESS_KEYID= "${{ secrets.AWS_ACCESS_KEYID }}">> .env
          echo AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" >> .env
          echo AWS_STORAGE_BUCKET_NAME="${{ secrets.AWS_STORAGE_BUCKET_NAME }}" >> .env
          echo AWS_REGION= "${{ secrets.AWS_REGION}}">> .env
          cat .env
      - name: Upload package to S3 bucket
        run: aws s3 cp deploy_package.zip s3://playplz/gitaction/deploy-${{ github.sha }}.zip
      - name: Create new ElasticBeanstalk Application Version
        run: |
          aws elasticbeanstalk create-application-version \
            --application-name classic_DRF \
            --source-bundle S3Bucket="playplz",S3Key="gitaction/deploy-${{ github.sha }}.zip" \
            --version-label "ver-${{ github.sha }}" \
            --description "commit-sha-${{ github.sha }}"
      - name: Deploy new ElasticBeanstalk Application Version
        run: aws elasticbeanstalk update-environment --environment-name Classicdrf-env --version-label "ver-${{ github.sha }}"
      # - name: Generate deployment package
      #   run: zip -r deploy-${{ github.sha }}.zip . -x '*.git*'
      # - name: Beanstalk Deploy for app
      #   uses: einaregilsson/beanstalk-deploy@v18
      #   with:
      #     aws_access_key: ${{secrets.AWS_ACCESS_KEYID}}
      #     aws_secret_key: ${{secrets.AWS_SECRET_ACCESS_KEY}}
      #     application_name: PLAYPLZ_NEW
      #     environment_name: Playplznew-dev
      #     region: ${{secrets.AWS_REGION}}
      #     version_label: "ver-${{ github.sha }}"
      #     deployment_package: deploy-${{ github.sha }}.zip
      #     use_existing_version_if_available: "true"
      #     existing_bucket_name: playplz
EB EC2 logs checked: /var/log/ and /var/log/eb-engine.log
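For reference, a sketch of a diagnostic step that could be appended after the update-environment step above (it assumes the same environment name, Classicdrf-env). The update call is asynchronous, so this step waits for it to finish and then prints the latest environment events, which surfaces the engine's failure reason in the Actions log in addition to eb-engine.log on the instance:

- name: Show Elastic Beanstalk events after deploy
  if: always()
  run: |
    # Block until the environment finishes updating (continue even if the waiter fails).
    aws elasticbeanstalk wait environment-updated --environment-names Classicdrf-env || true
    # Print the most recent environment events, including deployment failure messages.
    aws elasticbeanstalk describe-events \
      --environment-name Classicdrf-env \
      --max-records 20 \
      --output table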

Related

GitHub Action is running on feature branches when it should only run on the main or release branch

I have the following GitHub Action, and it runs even when I push a feature branch with a name other than main, master, or release.
What am I doing wrong?
# see https://raw.githubusercontent.com/zellwk/zellwk.com/master/.github/workflows/deploy.yml
name: deploy
on:
  push:
    branches:
      - main
      - master
      - release
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
      - name: Install SSH Key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          known_hosts: unnecessary
      - name: Adding Known Hosts
        run: ssh-keyscan -p ${{ secrets.SSH_PORT}} -H ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts
      - name: Set env file and jwk.json for release
        if: ${{ contains(github.ref_name, 'release') || github.ref == 'refs/heads/release' }}
        run: |
          echo "${{secrets.PRODUCTION_ENV }}" > .env.prod
          ln -sf .env.prod .env
          echo "${{secrets.PRODUCTION_JWK}}" | base64 --decode > jwk.json
      - name: Set env file and jwk.json for development
        if: ${{ !contains(github.ref_name, 'release') || github.ref != 'refs/heads/release' }}
        run: |
          echo "${{secrets.DEVELOPMENT_ENV }}" > .env.dev
          ln -sf .env.dev .env
          echo "${{secrets.DEVELOPMENT_JWK}}" | base64 --decode > jwk.json
      - name: Deploy with rsync for release
        if: ${{ contains(github.ref_name, 'release') || github.ref == 'refs/heads/release' }}
        # from ./bin/deploy.sh
        run: rsync -azvP -e "ssh -p ${{ secrets.SSH_PORT }}" --delete --exclude=node_modules --exclude=redis-data --exclude=.idea --exclude=.git --exclude=mongo_data --exclude=data01 --exclude=uploads --exclude=emails.txt --exclude=main --exclude=deno --exclude=app --exclude=database.sqlite --exclude=database.sqlite-journal --exclude=data ./ ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}}:www/${{secrets.HOST_PATH_PROD}}/${{secrets.HOST_PROJECT}}
        # run: rsync -avz -e "ssh -p ${{ secrets.SSH_PORT }}" ./dist/ ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/zellwk.com/
      - name: Deploy with rsync for development
        if: ${{ !contains(github.ref_name, 'release') && github.ref != 'refs/heads/release' }}
        # from ./bin/deploy.sh
        run: rsync -azvP -e "ssh -p ${{ secrets.SSH_PORT }}" --delete --exclude=node_modules --exclude=redis-data --exclude=.idea --exclude=.git --exclude=mongo_data --exclude=data01 --exclude=uploads --exclude=emails.txt --exclude=main --exclude=deno --exclude=app --exclude=database.sqlite --exclude=database.sqlite-journal --exclude=data ./ ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}}:www/${{secrets.HOST_PATH_DEV}}/${{secrets.HOST_PROJECT}}
      - name: Post-Deploy script for release
        if: ${{ contains(github.ref_name, 'release') || github.ref == 'refs/heads/release' }}
        # from ./bin/deploy.sh
        run: ssh -t ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} -p ${{secrets.SSH_PORT}} \$HOME/www/${{secrets.HOST_PATH_PROD}}/${{secrets.HOST_PROJECT}}/bin/post-deploy.sh
      - name: Post-Deploy script for development
        if: ${{ !contains(github.ref_name, 'release') && github.ref != 'refs/heads/release' }}
        # from ./bin/deploy.sh
        run: ssh -t ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} -p ${{secrets.SSH_PORT}} \$HOME/www/${{secrets.HOST_PATH_DEV}}/${{secrets.HOST_PROJECT}}/bin/post-deploy.sh
      # - name: Restart App Server
      #   uses: appleboy/ssh-action@master
      #   with:
      #     host: ${{ secrets.SSH_HOST }}
      #     username: ${{ secrets.SSH_USER }}
      #     key: ${{ secrets.SSH_PRIVATE_KEY }}
      #     port: ${{ secrets.SSH_PORT }}
      #     debug: true
      #     # from ./bin/post-deploy.sh
      #     # if [ ${{ contains(github.ref_name, 'release') || github.ref == 'refs/heads/release' }} ]; then
      #     # else
      #     #   cd $HOME/www/${{secrets.HOST_PATH_DEV}}/${{secrets.HOST_PROJECT}}
      #     #   deno upgrade
      #     #   sudo /etc/init.d/nginx reload
      #     #   sudo systemctl daemon-reload
      #     #   sudo systemctl restart ${{secrets.META_SERVICE_DEV}}
      #     # fi
      #     script: |
      #       cd $HOME/www/${{secrets.HOST_PATH_DEV}}/${{secrets.HOST_PROJECT}}
      #       deno upgrade
      #       sudo /etc/init.d/nginx reload
      #       sudo systemctl daemon-reload
      #       sudo systemctl restart ${{secrets.META_SERVICE_DEV}}
It shouldn't run the action on a push to a different branch, e.g. feature1.
There are 2 things going on. Even though you've updated the YAML file in the main/master branch, it's likely that existing branches have a copy of the YAML file without the filter. You can fix that by cherry-picking the new YAML file into the existing branches.
The other thing you can do is define an Environment and add an environment: xxxxx to the YAML file and a branch filter on the environment. That will prevent people from running the deploy job against the environment.
In your repository settings, navigate to Environments, add an environment (any name will do), then set Deployment branches to Selected branches and add the branches you want to allow using ➕ Add deployment branch.
By putting all the production secrets in the list of Environment Secrets instead of the Repository Secrets you also prevent others from accessing these from any workflow that doesn't specifically target this environment.
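As a minimal sketch (assuming you created an environment named production in the repository settings), the deploy job then just references it:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    # The job now targets the "production" environment; its deployment-branch
    # rules and environment secrets apply, so runs from non-allowed branches
    # cannot deploy against it.
    environment: production
    steps:
      - uses: actions/checkout@v2
      # ...rest of the deploy steps unchanged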

GitHub workflow: Where does "npm ci" store the node_modules folder?

I'm wondering where npm ci saves the node_modules.
Furthermore, I want to zip the node_modules. Is this even possible with GitHub workflows, or should I try a different approach? I need this workflow to deploy the codebase on GitHub to a Lambda function in AWS.
This is my code for example:
jobs:
  node:
    runs-on: ubuntu-latest
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🤖 Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: 16.13.2
          registry-url: 'https://npm.pkg.github.com'
          cache: 'npm'
          cache-dependency-path: '**/package.json'
      - name: Cache
        id: cache-dep
        uses: actions/cache@v3
        with:
          path: |
            ./node_modules
          key: ${{ runner.os }}-pricing-scraper-${{ hashFiles('package.json') }}
          restore-keys: |
            ${{ runner.os }}-pricing-scraper-
      - name: Config NPM
        shell: bash
        run: |
          npm install -g npm@8.1.2
      - name: ⚙️ Install node_modules
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          npm ci
  build:
    runs-on: ubuntu-latest
    needs: [node]
    steps:
      - name: 🛒 Checkout
        uses: actions/checkout@v3
      - name: 🏗 Zip files and folder
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          echo Start running the zip script
          # delete prior deployment zips
          DIR="./output"
          if [ ! -d "$DIR" ]; then
            echo "Error: ${DIR} not found. Can not continue."
            exit 1
          fi
          if [ ! -d "./node_modules" ]; then
            echo "Error: node_modules not found. Can not continue."
            exit 1
          fi
          rm "$DIR"/*
          echo Deleted prior deployment zips
          # Zip the core directories which are the same for every scraper.
          zip -r "$DIR"/core.zip config node_modules shipping util
...
It's not saved in the repo; I verified with this step:
      - name: Bash Test node_modules
        if: steps.cache-dep.outputs.cache-hit != 'true'
        shell: bash
        run: |
          echo "cached"
          echo $(ls)
Output:
Run echo "cached"
cached
README.md build config output package-lock.json package.json scraper shipping testHandler.js util
The place where npm stores its cache is ~/.npm.
The problem was that I ran the two jobs (node, build) on two different VMs, so the installed node_modules ended up on a different VM (the node job) than the one where I needed them (the build job).
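A sketch of one way around this: either run the zip step in the same job as npm ci, or pass the installed node_modules between jobs explicitly with upload-artifact/download-artifact, for example:

jobs:
  node:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16.13.2
      - run: npm ci
      # Hand the installed node_modules to the next job as a workflow artifact.
      - uses: actions/upload-artifact@v3
        with:
          name: node_modules
          path: node_modules
  build:
    runs-on: ubuntu-latest
    needs: [node]
    steps:
      - uses: actions/checkout@v3
      # Restore node_modules on this (different) VM before zipping.
      - uses: actions/download-artifact@v3
        with:
          name: node_modules
          path: node_modules
      - run: zip -r core.zip config node_modules shipping util

Uploading node_modules as an artifact can be slow for large trees, so zipping it in the node job (and uploading only the zip) is usually the cheaper variant.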

xcodebuild fails until running pod install again (twice) after the failure occurs - how to resolve?

I'm setting up GitHub Actions to test my Xcode app (Kotlin Multiplatform), and for some reason my build is not successful until I run pod install a second time after attempting to build.
So I pod install -> build -> build fails -> pod install again -> build -> build succeeds.
Steps to reproduce this locally:
Checkout the repo
arch -x86_64 pod install
xcodebuild ARCHS=x86_64 ONLY_ACTIVE_ARCH=NO -workspace myworkspace.xcworkspace -scheme myScheme -configuration Release -destination 'platform=iOS Simulator,name=iPhone 12,OS=15.4'
The build fails on this step trying to import the common kotlin library:
import common
^
** BUILD FAILED **
and then if I run these steps again
arch -x86_64 pod install
xcodebuild ARCHS=x86_64 ONLY_ACTIVE_ARCH=NO -workspace myworkspace.xcworkspace -scheme myScheme -configuration Release -destination 'platform=iOS Simulator,name=iPhone 12,OS=15.4'
the build is successful
/usr/bin/codesign --force --sign - --entitlements /Users/me/Library/Developer/Xcode/DerivedData/myApp-esecylpzfadofbsakhxtkqqgzuk/Build/Intermediates.noindex/myApp.build/Release-iphonesimulator/myApp.build/myApp.app.xcent --timestamp\=none --generate-entitlement-der /Users/me/Library/Developer/Xcode/DerivedData/myApp-esecylpzfadofbsakhxtkqqgzuk/Build/Products/Release-iphonesimulator/myApp.app
** BUILD SUCCEEDED **
Here is my podfile:
platform :ios, '15.2'
use_frameworks!
inhibit_all_warnings!

def shared_pods
  pod 'common', :path => '../common'
  pod 'GoogleSignIn'
end

target 'myApp' do
  shared_pods
end

target 'myApp_Tests' do
  shared_pods
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '12.0'
    end
  end
end
and here's my BuildTests.yaml file for GitHub Actions:
name: ios-unit-tests
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  run_tests:
    runs-on: macos-latest
    strategy:
      matrix:
        include:
          - ios: "15.2"
    name: test iOS (${{ matrix.ios }})
    steps:
      - uses: actions/setup-java@v2
        with:
          distribution: 'temurin'
          java-version: '11'
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install the Apple certificate and provisioning profile
        env:
          BUILD_CERTIFICATE_BASE64: ${{ secrets.BUILD_CERTIFICATE_BASE64 }}
          P12_PASSWORD: ${{ secrets.P12_PASSWORD }}
          BUILD_PROVISION_PROFILE_BASE64: ${{ secrets.BUILD_PROVISION_PROFILE_BASE64 }}
          KEYCHAIN_PASSWORD: ${{ secrets.KEYCHAIN_PASSWORD }}
        run: |
          # create variables
          CERTIFICATE_PATH=$RUNNER_TEMP/build_certificate.p12
          PP_PATH=$RUNNER_TEMP/build_pp.mobileprovision
          KEYCHAIN_PATH=$RUNNER_TEMP/app-signing.keychain-db
          # import certificate and provisioning profile from secrets
          echo -n "$BUILD_CERTIFICATE_BASE64" | base64 --decode --output $CERTIFICATE_PATH
          echo -n "$BUILD_PROVISION_PROFILE_BASE64" | base64 --decode --output $PP_PATH
          # create temporary keychain
          security create-keychain -p "$KEYCHAIN_PASSWORD" $KEYCHAIN_PATH
          security set-keychain-settings -lut 21600 $KEYCHAIN_PATH
          security unlock-keychain -p "$KEYCHAIN_PASSWORD" $KEYCHAIN_PATH
          # import certificate to keychain
          security import $CERTIFICATE_PATH -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
          security list-keychain -d user -s $KEYCHAIN_PATH
          # apply provisioning profile
          mkdir -p ~/Library/MobileDevice/Provisioning\ Profiles
          cp $PP_PATH ~/Library/MobileDevice/Provisioning\ Profiles
      - name: Install M1 Pod
        run: sudo arch -x86_64 gem install ffi;
      - name: Pod Install
        run: cd myApp; which pod; rm myApp.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved; arch -x86_64 pod install
      - name: Build
        run: xcodebuild ARCHS=x86_64 ONLY_ACTIVE_ARCH=NO -workspace myApp/myApp.xcworkspace -scheme myApp -configuration Release -destination 'platform=iOS Simulator,name=iPhone 12,OS=${{ matrix.ios }}'
      - name: Run unit tests
        run: xcodebuild test -workspace myApp/myApp.xcworkspace -scheme myApp -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone 12,OS=${{ matrix.ios }}'
UPDATE:
The resolution was to delete .gitignore, build the project, and see which files changed. It turned out some build files were being excluded by .gitignore that should have been committed.
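A less destructive way to spot the same thing, as a sketch: a diagnostic step in the workflow (the step name is arbitrary) that lists ignored paths before building, so inputs hidden by .gitignore are visible in the Actions log:

      - name: List ignored files (diagnostic)
        # Prints tracked, untracked and ignored paths so build inputs
        # hidden by .gitignore show up in the job output.
        run: git status --short --ignored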

GitHub Actions failing on installation of dependencies

GitHub Actions is throwing this error:
Run composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
  composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
  shell: /usr/bin/bash -e {0}
Error: The operation was canceled.
Please see the configuration below:
Laravel.yml file
name: Laravel
on:
  push:
    branches:
      - master
      - develop
      - features/**
  pull_request:
    branches:
      - master
      - develop
jobs:
  laravel-tests:
    runs-on: ubuntu-latest
    # Service container Postgresql postgresql
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image (also with version)
        image: postgres:latest
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: db_test_laravel
        ## map the "external" 55432 port with the "internal" 5432
        ports:
          - 55432:5432
        # Set health checks to wait until postgresql database has started (it takes some seconds to start)
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    strategy:
      matrix:
        operating-system: [ubuntu-latest]
        php-versions: [ '8.0','7.4' ]
        dependency-stability: [ prefer-stable ]
    name: P${{ matrix.php-versions }} - L${{ matrix.laravel }} - ${{ matrix.dependency-stability }} - ${{ matrix.operating-system}}
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: '15.x'
      - name: Cache node_modules directory
        uses: actions/cache@v2
        id: node_modules-cache
        with:
          path: node_modules
          key: ${{ runner.OS }}-build-${{ hashFiles('**/package.json') }}-${{ hashFiles('**/package-lock.json') }}
      - name: Install NPM packages
        if: steps.node_modules-cache.outputs.cache-hit != 'true'
        run: npm ci
      - name: Build frontend
        run: npm run development
      - name: Install PHP versions
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php-versions }}
      - name: Get Composer Cache Directory 2
        id: composer-cache
        run: |
          echo "::set-output name=dir::$(composer config cache-files-dir)"
      - uses: actions/cache@v2
        id: actions-cache
        with:
          path: ${{ steps.composer-cache.outputs.dir }}
          key: ${{ runner.os }}-composer-${{ hashFiles('**/composer.lock') }}
          restore-keys: |
            ${{ runner.os }}-composer-
      - name: Cache PHP dependencies
        uses: actions/cache@v2
        id: vendor-cache
        with:
          path: vendor
          key: ${{ runner.OS }}-build-${{ hashFiles('**/composer.lock') }}
      - name: Copy .env
        run: php -r "file_exists('.env') || copy('.env.example', '.env');"
      - name: Install Dependencies
        if: steps.vendor-cache.outputs.cache-hit != 'true'
        run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
      - name: Generate key
        run: php artisan key:generate
      - name: Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Run Migrations
        # Set environment
        env:
          DB_CONNECTION: pgsql
          DB_DATABASE: db_test_laravel
          DB_PORT: 55432
          DB_USERNAME: postgres
          DB_PASSWORD: postgres
        run: php artisan migrate
      - name: Show dir
        run: pwd
      - name: PHP Version
        run: php --version
      # Code quality
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        # Set environment
        env:
          DB_CONNECTION: pgsql
          DB_DATABASE: db_test_laravel
          DB_PORT: 55432
          DB_USERNAME: postgres
          DB_PASSWORD: postgres
        run: vendor/bin/phpunit --testdox
Composer.json
"require": {
"php": "^7.3|^8.0",
Fixed this by doing the following:
Changed the matrix to php-versions: [ '8.0' ]
GitHub Actions now runs successfully.
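For clarity, a sketch of the only fragment that changed in the workflow above (the strategy matrix):

    strategy:
      matrix:
        operating-system: [ubuntu-latest]
        # Only PHP 8.0 now; the '7.4' entry was removed
        php-versions: [ '8.0' ]
        dependency-stability: [ prefer-stable ]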

How can we share the ECR login step between jobs?

I created 3 jobs; all of them build an image and push it to ECR, but as you can see I have to repeat the Configure AWS Credentials and Log in to Amazon ECR steps in each one.
Is there a way to reduce this?
jobs:
  build-app1:
    steps:
      # see: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # see: https://github.com/aws-actions/amazon-ecr-login
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: reponame
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
  build-app2:
    steps:
      # see: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # see: https://github.com/aws-actions/amazon-ecr-login
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: reponame2
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
  build-app3:
    steps:
      # see: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # see: https://github.com/aws-actions/amazon-ecr-login
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: reponame3
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
You can use a matrix and run all three in parallel. Below is the code snippet:
jobs:
  build-app:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        Repo: [Repo1, Repo2, Repo3]
    steps:
      # see: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # see: https://github.com/aws-actions/amazon-ecr-login
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ matrix.Repo }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
Or, if you feel that running three jobs on three different machines is not suitable for your needs, then the shell script below might help you:
#!/bin/bash
ECR_IMAGE_NAME=<Image Name>
ECR_REPO_NAME=<ECR REPO 1>
ECR_IMAGE_URL=$ECR_REPO_NAME/$ECR_IMAGE_NAME:$GITHUB_SHA
echo "Login in ECR"
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_NAME
echo "logged in ECR"
docker build -t $ECR_IMAGE_NAME .
docker tag $ECR_IMAGE_NAME $ECR_IMAGE_URL
docker push $ECR_IMAGE_URL
echo "Logging out of $ECR_REPO_NAME"
docker logout
###########################################
ECR_IMAGE_NAME=<Image Name>
ECR_REPO_NAME=<ECR REPO 2>
ECR_IMAGE_URL=$ECR_REPO_NAME/$ECR_IMAGE_NAME:$GITHUB_SHA
echo "Login in ECR"
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_NAME
echo "logged in ECR"
docker build -t $ECR_IMAGE_NAME .
docker tag $ECR_IMAGE_NAME $ECR_IMAGE_URL
docker push $ECR_IMAGE_URL
echo "Logging out of $ECR_REPO_NAME"
docker logout
#########################################
ECR_IMAGE_NAME=<Image Name>
ECR_REPO_NAME=<ECR REPO 3>
ECR_IMAGE_URL=$ECR_REPO_NAME/$ECR_IMAGE_NAME:$GITHUB_SHA
echo "Login in ECR"
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_NAME
echo "logged in ECR"
docker build -t $ECR_IMAGE_NAME .
docker tag $ECR_IMAGE_NAME $ECR_IMAGE_URL
docker push $ECR_IMAGE_URL
echo "Logging out of $ECR_REPO_NAME"
docker logout
Then run this script on its own after the AWS login step, since all three repos are in the same region:
jobs:
  build-app:
    runs-on: ubuntu-latest
    steps:
      # see: https://github.com/aws-actions/configure-aws-credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Push Docker Images to ECR
        run: chmod +x script.sh && ./script.sh
        env:
          GITHUB_SHA: ${{github.sha}}
          AWS_REGION: us-east-1
You can't, as each job may run on a different machine. Basically, once your job is finished, the machine is cleaned up, goes back to the pool, and becomes available for another workload.
We have faced similar issues of long "copy-paste" YAML files in GitHub Actions.
This could have probably been easily avoided, if GitHub Actions supported YAML anchors (it seems like they still don't).
Since this problem was unacceptable to me, I developed a configuration helper command-line tool named kojo, which lets you create configuration templates.
With it, your configuration could look something like this:
# workflow.yml
jobs:
  build-app1:
    #import build (repo: 'reponame1')
  build-app2:
    #import build (repo: 'reponame2')
  build-app3:
    #import build (repo: 'reponame3')
# build.yml
steps:
  # see: https://github.com/aws-actions/configure-aws-credentials
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
  # see: https://github.com/aws-actions/amazon-ecr-login
  - name: Log in to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1
  - name: Build, tag, and push image to Amazon ECR
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      ECR_REPOSITORY: %{repo}
      IMAGE_TAG: ${{ github.sha }}
    run: |
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
Notice that I am passing a variable named repo from one file to another (%{repo} in build.yml).
Then, generating the final file is as easy as running:
$ kojo file workflow.yml --save out.yml
We have been using this approach with many production sites, for quite some time.
Full disclosure: I am the developer of kojo (open source), and only posting here since this problem was exactly the reason it was born (well - this and endless Kubernetes manifests...).