I want to automate a CI process where the tool I use is connected to GitHub and there are two databases. After a developer pushes to the first database, the second database should be able to pull the resources that were pushed to the first. The tool (hosted on AWS) provides a .sh file which triggers the pull for the second database. How can I connect to the AWS instance from GitHub using Actions, point to the folder on the instance, and use the .sh file to trigger the pull?
I am new to GitHub and could not find a suitable solution to my issue.
Looking for any help/advice. Thanks
A good start would be to use the unfor19/install-aws-cli-action to benefit from the AWS CLI.
You can see an example in "The CI / CD pipeline of Github Action for serverless lambda function containerization deployment" by Dr. Tri Basuki Kurniawan.
steps:
  - name: Install AWS CLI
    uses: unfor19/install-aws-cli-action@v1
    with:
      version: 1
    env:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1
  - name: Check out code
    uses: actions/checkout@v2
  ...
But depending on the nature of your database, there are also more specialized actions, like "Labs: Cross-Region Replication for RDS".
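Since the question is ultimately about running the tool's .sh file on the AWS instance after a push, one common approach is to SSH into the instance from the workflow. Below is a minimal sketch using appleboy/ssh-action (which also appears in a related answer further down); the secret names, the /opt/your-tool folder, and the trigger-pull.sh script name are placeholders for your setup, not part of the original question:
name: trigger-db-pull
on:
  push:
    branches: [ master ]
jobs:
  trigger-pull:
    runs-on: ubuntu-latest
    steps:
      - name: Run the tool's pull script on the AWS instance
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}      # public DNS/IP of the AWS instance
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}    # private key stored as a repository secret
          # cd into the tool's folder and run the provided script
          script: cd /opt/your-tool && ./trigger-pull.sh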
Related
Here are some of my workflow steps; the last one is failing:
- name: Install oc
  uses: redhat-actions/openshift-tools-installer@v1
  with:
    oc: "4.10"  # quoted so YAML does not read 4.10 as the float 4.1
# https://github.com/redhat-actions/oc-login#readme
- name: Log in to OpenShift
  uses: redhat-actions/oc-login@v1
  with:
    openshift_server_url: ${{ env.OPENSHIFT_SERVER }}
    openshift_token: ${{ env.OPENSHIFT_TOKEN }}
    insecure_skip_tls_verify: true
    namespace: ${{ env.OPENSHIFT_NAMESPACE }}
- name: Deploy new image as rollout
  run: oc rollout latest dc/asenion-app
I was unable to find any GitHub Action in the GitHub Marketplace to run "oc rollout" or other OpenShift CLI commands.
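For what it's worth, a dedicated Marketplace action isn't required here: openshift-tools-installer puts the oc binary on the runner's PATH and oc-login authenticates it, so any OpenShift CLI command can go in a plain run: step. A minimal sketch (the dc/asenion-app resource name is taken from the steps above; the status check is an addition of mine):
- name: Deploy new image as rollout
  run: |
    # oc is already on PATH (openshift-tools-installer) and
    # authenticated (oc-login), so CLI commands run directly
    oc rollout latest dc/asenion-app
    # block until the new deployment finishes rolling out
    oc rollout status dc/asenion-app --watch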
I have a very specific use case leveraging GitHub Actions:
Build and push a Docker image to a private registry on Linode.
Log in to the Linode Kubernetes environment and do a rollout restart on the affected deployments.
The problem is, there are no ready-made YAML actions on the GitHub Marketplace for Linode integration; they exist for other providers like AWS, Azure, GKE, etc. using Dockerhub.
The internet in general does not have these use cases combined.
I am a newbie to GitHub Actions, so it will take some time to hack this myself. Any help/pointers will be appreciated.
After some hacking, I was able to come up with this simple workflow that works for me. Credit to this post.
name: deployment-deploy
on:
  push:
    branches:
      - somebranch
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # build and push image
      - name: Build, tag, and push image to Private registry
        id: build-image
        env:
          IMAGE_TAG: image_tag
        run: |
          docker build -t ${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG .
          docker login registry.domain.com -u ${{ secrets.REGISTRY_USERNAME }} -p ${{ secrets.REGISTRY_PASSWORD }}
          docker push ${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG
          echo "image=${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG" >> "$GITHUB_OUTPUT"
      - name: Kubernetes set context
        uses: Azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG }}
      # deploy
      - name: Deploy k8s yaml
        id: deploy-k8s-yaml
        run: |
          # restart the affected deployment so it picks up the new image
          kubectl rollout restart deployment some_depl
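One possible refinement (my addition, not part of the original workflow): kubectl rollout restart returns as soon as the restart is requested, so a follow-up kubectl rollout status makes the job wait for the new pods and fail the build if the rollout doesn't complete:
      - name: Wait for rollout to complete
        run: |
          # blocks until the restarted pods are ready; exits non-zero on failure
          kubectl rollout status deployment some_depl --timeout=120s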
How can I publish an npm package to a custom JFrog Artifactory using GitHub Actions?
publish:
  name: Publish the Packages
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v2
      with:
        node-version: ${{ env.NODE_VERSION }}
        registry-url: ${{ env.ARTIFACTORY_URL }}
    - name: Publish Packages
      run: npm publish
      working-directory: ${{ env.CORE_WORKING_DIR }}
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
The above is giving a 401 error. Is this the right approach, or do we have to use some third-party actions?
From what I can find, you'll have to do this in a more manual fashion by setting up the JFrog CLI in GitHub Actions.
First, set up the JFrog CLI in GitHub Actions: https://github.com/marketplace/actions/setup-jfrog-cli
Then, see JFrog's guide on working with npm packages in Artifactory using their CLI: https://jfrog.com/blog/npm-flies-with-jfrog-cli/
- uses: jfrog/setup-jfrog-cli@v2
  env:
    # JFrog platform url (for example: https://acme.jfrog.io)
    JF_URL: ${{ secrets.JF_URL }}
    # Basic authentication credentials
    JF_USER: ${{ secrets.JF_USER }}
    JF_PASSWORD: ${{ secrets.JF_PASSWORD }}
    # or, instead of user/password, a JFrog Platform access token
    JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}
- run: |
    jf rt npm-publish --build-name=${{ inputs.build_name }} --build-number=${{ inputs.build_number }}
That's roughly how it should work.
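Put together, the publish job from the question could look roughly like this. This is a sketch, not a verified setup: the npm-local repository name is a placeholder for your Artifactory npm repository, and I'm assuming the JFrog CLI v2 jf npm-config / jf npm publish commands here in place of the raw npm publish that returned the 401:
publish:
  name: Publish the Packages
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v2
      with:
        node-version: ${{ env.NODE_VERSION }}
    - uses: jfrog/setup-jfrog-cli@v2
      env:
        JF_URL: ${{ secrets.JF_URL }}
        JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}
    - name: Publish package
      working-directory: ${{ env.CORE_WORKING_DIR }}
      run: |
        # point the CLI at the target npm repository in Artifactory
        # ("npm-local" is a placeholder for your repository name)
        jf npm-config --repo-deploy npm-local
        # publish through the JFrog CLI so auth comes from the JF_* env above
        jf npm publish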
I'm trying to clone a repository from GitHub to a remote server.
My solution using the appleboy/ssh-action GitHub Action was working, but I was told the same can be achieved using the actions/checkout@v2 GitHub Action.
I tried to just change the - uses: value to actions/checkout@v2, but then the code doesn't work.
I can't find any templates on how to do it using actions/checkout@v2. Any advice would be much appreciated.
name: deploy to a server on push
on:
  push:
    branches: [ master ]
jobs:
  deploy-to-server:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@master
        with:
          host: 123.132.123.132
          username: tomas
          key: ${{ secrets.PRIVATE_KEY }}
          port: 59666
          script: git clone https://github.com/Tomas-R/website.git
As the documentation of actions/checkout@v2 says:
This action checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
steps:
  - name: Checkout the repo
    uses: actions/checkout@v2
    with:
      # This will create a directory named `my-repo` and copy the repo contents to it
      # so that you can easily upload it to your remote server
      path: my-repo
To copy this checked-out repo to a remote server, you may use the scp command as follows.
# Runs a set of commands using the runner's shell
- name: Upload repo to remote server
  env:
    SSH_AUTH_SOCK: /tmp/ssh_agent.sock
  run: |
    ssh-agent -a $SSH_AUTH_SOCK > /dev/null
    ssh-add - <<< "${{ secrets.PRIVATE_KEY }}"
    scp -o StrictHostKeyChecking=no -r -P 59666 my-repo tomas@123.132.123.132:/target/directory
By using the above commands, we:
Start the ssh-agent and bind it to a known location.
Import the private key from the secret into the ssh-agent.
Copy the contents of my-repo to the target directory on your remote server.
This way, the private key is never written to disk or otherwise exposed.
There is yet another, easier way to run scp: the Copy via ssh GitHub Action.
- name: Copy folder content recursively to remote
  uses: garygrossgarten/github-action-scp@release
  with:
    local: my-repo
    remote: ~/target/directory
    host: 123.132.123.132
    port: 59666
    username: tomas
    privateKey: ${{ secrets.PRIVATE_KEY }}
I have encountered a similar problem.
In my case, the problem was the appleboy/ssh-action@master action itself.
Just replace this action with another one from the GitHub Marketplace.
I used the LuisEnMarroquin/setup-ssh-action@v2.0.0 action.
My Workflow File:
name: SSH to Ubuntu EC2
on:
  push:
    branches:
      - main
jobs:
  ssh-to-ec2:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up SSH key
        uses: LuisEnMarroquin/setup-ssh-action@v2.0.0
        with:
          ORIGIN: ${{ secrets.HOST }}
          SSHKEY: ${{ secrets.TEST }}
          NAME: production
          PORT: 22
          USER: ubuntu
      - run: ssh production "ls -la; id; echo hehe > h.txt"
I have a project on GitHub, and I want to set up a CI job to build Docker images and push them to AWS ECR. My requirements are:
One single CI file (I have created .github/workflows/aws.yml)
The CI job must trigger on pushes to the master and sandbox branches only
If pushed to the sandbox branch, the Docker image should be pushed to ECR1
If pushed to the master branch, the Docker image should be pushed to ECR2
So far I have made the following CI file
.github/workflows/aws.yml -
name: CI
on:
  pull_request:
    branches:
      - master
      - sandbox
  push:
    branches:
      - master
      - sandbox
env:
  AWS_REPOSITORY_URL_MASTER: ${{ secrets.AWS_REPOSITORY_URL_MASTER }}
  AWS_REPOSITORY_URL_SANDBOX: ${{ secrets.AWS_REPOSITORY_URL_SANDBOX }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
jobs:
  build-and-push:
    name: Build and push image to AWS ECR master
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup ECR
        run: $(aws ecr get-login --no-include-email --region ap-south-1)
      - name: Build and tag the image
        run: docker build -t $AWS_REPOSITORY_URL_MASTER .
      - name: Push
        run: docker push $AWS_REPOSITORY_URL_MASTER
  build-and-push-sandbox:
    name: Build and push image to AWS ECR sandbox
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup ECR
        run: $(aws ecr get-login --no-include-email --region ap-south-1)
      - name: Build and tag the image
        run: docker build -t $AWS_REPOSITORY_URL_SANDBOX .
      - name: Push
        run: docker push $AWS_REPOSITORY_URL_SANDBOX
How will the workflow distinguish when to run build-and-push (triggered on a push to master) and build-and-push-sandbox (triggered on a push to sandbox)?
Add an if clause at the job level:
jobs:
  build-and-push:
    name: Build and push image to AWS ECR master
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/master'
    steps:
and
  build-and-push-sandbox:
    name: Build and push image to AWS ECR sandbox
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/sandbox'
    steps:
Alternatively, since the jobs are so similar, you can try to unify them and set an env variable $AWS_REPOSITORY to either ${{ secrets.AWS_REPOSITORY_URL_MASTER }} or ${{ secrets.AWS_REPOSITORY_URL_SANDBOX }}, depending on the value of github.ref.
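For example, here is a sketch of such a unified job, reusing the steps from the question; the branch check and the $GITHUB_ENV handoff are the only new parts:
jobs:
  build-and-push:
    name: Build and push image to AWS ECR
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Select ECR repository for this branch
        run: |
          # pick the repository URL based on which branch was pushed,
          # and export it to later steps via $GITHUB_ENV
          if [ "$GITHUB_REF" = "refs/heads/master" ]; then
            echo "AWS_REPOSITORY=${{ secrets.AWS_REPOSITORY_URL_MASTER }}" >> "$GITHUB_ENV"
          else
            echo "AWS_REPOSITORY=${{ secrets.AWS_REPOSITORY_URL_SANDBOX }}" >> "$GITHUB_ENV"
          fi
      - name: Setup ECR
        run: $(aws ecr get-login --no-include-email --region ap-south-1)
      - name: Build and tag the image
        run: docker build -t $AWS_REPOSITORY .
      - name: Push
        run: docker push $AWS_REPOSITORY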