Deploy to a Kubernetes cluster on Linode via GitHub Actions

I have a very specific use case for GitHub Actions:
- Build and push a Docker image to a private registry on Linode.
- Log in to the Linode Kubernetes environment and do a rollout restart on the affected deployments.
The problem is that there are no ready-made actions on the GitHub Marketplace for Linode integration; they exist for other providers such as AWS, Azure, and GKE, using Docker Hub.
The internet in general does not have these use cases combined.
I am a newbie to GitHub Actions, so it will take some time to hack this together myself. Any help or pointers will be appreciated.

After some hacking, I was able to come up with this simple workflow that works for me. Credit to this post.
name: deployment-deploy
on:
  push:
    branches:
      - somebranch
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Build and push the image
      - name: Build, tag, and push image to private registry
        id: build-image
        env:
          IMAGE_TAG: image_tag
        run: |
          docker build -t ${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG .
          docker login registry.domain.com -u ${{ secrets.REGISTRY_USERNAME }} -p ${{ secrets.REGISTRY_PASSWORD }}
          docker push ${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG
          # Note: ::set-output is deprecated on newer runners in favor of $GITHUB_OUTPUT
          echo "::set-output name=image::${{ secrets.REGISTRY_ENDPOINT }}:$IMAGE_TAG"
      - name: Kubernetes set context
        uses: Azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG }}
      # Restart the affected deployment
      - name: Deploy k8s yaml
        id: deploy-k8s-yaml
        run: |
          kubectl rollout restart deployment some_depl
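If you want the job to fail when the restarted pods never become ready, a follow-up step like this can help. This is my addition, not part of the original answer; the deployment name is the same placeholder as above:

```yaml
      - name: Wait for rollout to finish
        run: |
          # Fails the job if the deployment does not converge within the timeout
          kubectl rollout status deployment some_depl --timeout=120s
```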

Related

GitHub actions: connect to AWS to run .sh file

I want to automate a CI process where the tool I use is connected to GitHub and there are two databases. After a developer pushes to the first database, the second database should be able to pull the resources that were pushed to the first. The tool (hosted on AWS) provides a .sh file which triggers the pull for the second database. How can I connect to the AWS instance from GitHub using Actions, point to the folder on the instance, and use the .sh file to trigger the pull?
I am new to GitHub Actions and could not find a suitable solution to my issue.
Looking for any help/advice. Thanks.
A good start would be to use the unfor19/install-aws-cli-action, to benefit from the AWS CLI.
You can see an example in "The CI / CD pipeline of Github Action for serverless lambda function containerization deployment." from Dr. Tri Basuki Kurniawan.
steps:
  - name: Install AWS CLI
    uses: unfor19/install-aws-cli-action@v1
    with:
      version: 1
    env:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1
  - name: Check out code
    uses: actions/checkout@v2
  ...
But depending on the nature of your database, you also have more specialized actions like "Labs: Cross-Region Replication for RDS".
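Since the end goal is to execute the tool's .sh file on the AWS instance, one hedged sketch is to SSH into the instance from a step. The secret names, remote user, and script path below are my placeholders, not from the original question:

```yaml
- name: Trigger the pull script on the EC2 instance
  run: |
    # Load the instance's SSH private key from a repository secret
    echo "${{ secrets.EC2_SSH_KEY }}" > key.pem && chmod 600 key.pem
    # Run the tool's script remotely; host, user, and path are placeholders
    ssh -i key.pem -o StrictHostKeyChecking=no ec2-user@${{ secrets.EC2_HOST }} 'bash /opt/tool/trigger-pull.sh'
```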

actions/upload-pages-artifact fails at actions/upload-artifact with "No files were found with the provided path"

I would like to create a GitHub Workflow that builds a C++ application using emscripten and cmake, and deploys it to GitHub Pages. My Workflow job looks like this.
environment:
  name: github-pages
  url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
container:
  image: emscripten/emsdk
steps:
  - uses: actions/checkout@v3
  - run: cmake -B $GITHUB_WORKSPACE/build -DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} -DEMSCRIPTEN=ON
  - run: cmake --build $GITHUB_WORKSPACE/build --config ${{ env.BUILD_TYPE }}
  # actions/upload-pages-artifact uses this directory, but it doesn't exist in the image
  - run: mkdir -p ${{ runner.temp }}
  - uses: actions/configure-pages@v1
  - uses: actions/upload-pages-artifact@v1
    with:
      path: $GITHUB_WORKSPACE/build
  - id: deployment
    uses: actions/deploy-pages@v1
upload-pages-artifact runs tar and lists all the files to be deployed in the log. When running upload-artifact, the log reads: Warning: No files were found with the provided path: /__w/_temp/artifact.tar. No artifacts will be uploaded.
Note that the path in the warning differs from the one provided as a parameter to upload-artifact (path: /home/runner/work/_temp/artifact.tar).
upload-pages-artifact works as expected when running without the emscripten container.
I would have to either get upload-pages-artifact working inside the container, or somehow share the build with a second job running outside the container.
Split up the job into two jobs, one for building and one for deploying. Use actions/upload-artifact and actions/download-artifact to pass the build from one job to the next. Don't use $GITHUB_WORKSPACE, as it might not point to the right directory in your image.
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: emscripten/emsdk
    steps:
      - uses: actions/checkout@v3
      - run: cmake -B build -DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} -DEMSCRIPTEN=ON
      - run: cmake --build build --config ${{ env.BUILD_TYPE }}
      - uses: actions/upload-artifact@master
        with:
          name: page
          path: build
          if-no-files-found: error
  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/download-artifact@master
        with:
          name: page
          path: .
      - uses: actions/configure-pages@v1
      - uses: actions/upload-pages-artifact@v1
        with:
          path: .
      - id: deployment
        uses: actions/deploy-pages@main
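Depending on your repository settings, the deploy job may also need explicit workflow permissions for Pages. This block is my addition, following the actions/deploy-pages documentation:

```yaml
  deploy:
    permissions:
      pages: write      # allow the job to deploy to GitHub Pages
      id-token: write   # allow OIDC token issuance for the deployment
```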

GitHub Actions secrets: value not updating after changing it in the GitHub UI

I have some GitHub workflows using secrets in the repository.
I created the secret, but I used the wrong value for it, so I went to the website again and updated the value to a different string.
It seems, though, that no matter what I do, the value doesn't change when the workflow is running. Outputting the value to the console shows the initial value that I set.
I even tried removing the secret and re-adding it with the new value, with no success.
Any idea how to get it to change?
You probably forgot to pass the secrets on in the Actions YAML.
Changing this did the trick for me.
Adding them in the YAML file as env passes them on to the workflow:
jobs:
  run:
    runs-on: ubuntu-latest
    environment: publish
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
          cache: 'pip'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run
        run: python publish.py
        env:
          URL: ${{ secrets.URL }}
          USER: ${{ secrets.USER }}
          PASS: ${{ secrets.PASS }}
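The key point is that secrets only reach the script through the step's `env:` block; inside the script they are ordinary environment variables. A minimal sketch (the variable names match the workflow above, but this publish.py is hypothetical):

```python
import os

def load_config():
    # These names match the env block in the workflow step; a KeyError
    # here usually means the secret was never passed in via `env:`.
    return {
        "url": os.environ["URL"],
        "user": os.environ["USER"],
        "password": os.environ["PASS"],
    }

if __name__ == "__main__":
    # Simulate the workflow-provided environment for a local dry run
    os.environ.update({"URL": "https://example.com", "USER": "bot", "PASS": "s3cret"})
    print(load_config()["url"])
```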

How to build, run and call a Docker container in a GitHub Action

I need to build a Docker image from the source code of the current repository, run a container, then execute some API calls. How can I do that with a GitHub Action?
name: Docs Generator
on:
  pull_request:
    types: [opened]
jobs:
  pr-labeler:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
      - name: Get the version
        id: vars
        run: echo ::set-output name=tag::$(echo ${GITHUB_REF:10})
      - name: Build the tagged Docker image
        run: docker build . --file Dockerfile --tag service:${{ steps.vars.outputs.tag }}
      - name: Run docker image
        run: docker run -v ${{ inputs.path }}:/src service:${{ steps.vars.outputs.tag }}
      - name: Call API
        run: |
          curl http://localhost:8080/test
          .....
For this purpose, you could use a combination of https://github.com/marketplace/actions/build-and-push-docker-images and https://github.com/addnab/docker-run-action
The first would build and publish a container, and the second would take this container and run your commands in it.
The example is below. I don't use this setup myself, but I have tested it. Replace username/container with your username and container name.
name: Docker Image CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  compile:
    name: Build and run the container
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: username/container
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Run the build process with Docker
        uses: addnab/docker-run-action@v3
        with:
          image: username/container:latest
          options: -v ${{ github.workspace }}:/data
          run: |
            echo "Hello World"
Note that building a container is quite a long task and might deplete your GitHub Actions limits quickly. You might consider building/publishing the container separately, or adding better caching here (i.e. rebuilding it only when the Dockerfile changes).
Note that you need to set up the DOCKERHUB_USERNAME and DOCKERHUB_TOKEN secrets.
Instead of echo "Hello World", use the commands you want to run. With this setup, the repo data will be in the /data directory.
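On the caching point above: docker/build-push-action supports the GitHub Actions cache backend, so rebuilds reuse unchanged layers. A sketch of the modified step (cache settings are my addition):

```yaml
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: username/container
          # Reuse layers from the GitHub Actions cache across runs
          cache-from: type=gha
          cache-to: type=gha,mode=max
```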

Git: Using actions/checkout@v2 instead of appleboy/ssh-action@master to clone a repository to a server

I'm trying to clone a repository from GitHub to a remote server.
My solution using the appleboy/ssh-action GitHub action was working, but I was told the same can be achieved using the actions/checkout@v2 GitHub action.
I tried to just change the - uses: value to actions/checkout@v2, but then the code doesn't work.
I can't find any templates on how to do it using actions/checkout@v2. Any advice would be much appreciated.
name: deploy to a server on push
on:
  push:
    branches: [ master ]
jobs:
  deploy-to-server:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@master
        with:
          host: 123.132.123.132
          username: tomas
          key: ${{ secrets.PRIVATE_KEY }}
          port: 59666
          script: git clone https://github.com/Tomas-R/website.git
As the documentation of actions/checkout@v2 says:
This action checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
steps:
  - name: Checkout the repo
    uses: actions/checkout@v2
    with:
      # This will create a directory named `my-repo` and copy the repo contents to it
      # so that you can easily upload it to your remote server
      path: my-repo
To copy this checked-out repo to a remote server, you may use the scp command as follows.
# Runs a set of commands using the runner's shell
- name: Upload repo to remote server
  env:
    SSH_AUTH_SOCK: /tmp/ssh_agent.sock
  run: |
    ssh-agent -a $SSH_AUTH_SOCK > /dev/null
    ssh-add - <<< "${{ secrets.PRIVATE_KEY }}"
    scp -o StrictHostKeyChecking=no -r -P 59666 my-repo tomas@123.132.123.132:/target/directory
By using the above commands we:
- Start ssh-agent and bind it to a known location.
- Import the private key from the secret into the ssh-agent.
- Copy the contents of my-repo to the target directory on the remote server.
This way, the private key is never written to disk or otherwise exposed.
There is yet another easier way to run scp using the Copy via ssh GitHub action.
- name: Copy folder content recursively to remote
  uses: garygrossgarten/github-action-scp@release
  with:
    local: my-repo
    remote: ~/target/directory
    host: 123.132.123.132
    port: 59666
    username: tomas
    privateKey: ${{ secrets.PRIVATE_KEY }}
I have encountered a similar problem.
In my case, the problem was the appleboy/ssh-action@master action itself.
Just replace this action with another action from the GitHub Marketplace.
I used the LuisEnMarroquin/setup-ssh-action@v2.0.0 action.
My Workflow File:
name: SSH to Ubuntu EC2
on:
  push:
    branches:
      - main
jobs:
  ssh-to-ec2:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up SSH key
        uses: LuisEnMarroquin/setup-ssh-action@v2.0.0
        with:
          ORIGIN: ${{ secrets.HOST }}
          SSHKEY: ${{ secrets.TEST }}
          NAME: production
          PORT: 22
          USER: ubuntu
      - run: ssh production "ls -la; id; echo hehe > h.txt"