Add github run number to a file in github repo - github-actions

I am trying to add the github run number to a file in the github repository. The file looks like the following:
import json
from importlib import reload
import hashlib
from logging import raiseExceptions
import os
import importlib
qwe = importlib.import_module("asd-64")
The 64 signifies the GitHub run number. I have tried the following:
qwe = importlib.import_module("asd-${{ github.run_number }}")
This doesn't work: the literal string ${{ github.run_number }} ends up in the file, because such expressions are only expanded inside workflow files, not in arbitrary repository files. Is there a way to achieve this?

I've done it using a Python script and placeholders. (It might not be the best solution, but at least it works!)
The file to update would look like this:
test.py
import json
from importlib import reload
import hashlib
from logging import raiseExceptions
import os
import importlib
#placeholder1
The script to update it would look like this:
update_file.py
import os
import re

print("START")

WORKSPACE = os.getenv("WORKSPACE")
GITHUB_RUN_NUMBER = os.getenv("GITHUB_RUN_NUMBER")
FILE = f"{WORKSPACE}/test.py"

with open(FILE, "r") as file:
    content = file.read()

# avoid naming this variable `importlib`, which would shadow the module name
import_line = f'qwe = importlib.import_module("asd-{GITHUB_RUN_NUMBER}")'
content = re.sub(r"#placeholder1", import_line, content)

with open(FILE, "w") as file:
    file.write(content)

print("END")
And the workflow file that would perform the operation would look like this:
workflow.yml
name: ...
on:
  workflow_dispatch:

jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.4
      - uses: actions/setup-python@v4
        with:
          python-version: 3.8
      - name: Execute Python script to update test.py file
        run: python .github/scripts/update_file.py
        env:
          WORKSPACE: ${{ github.workspace }}
          GITHUB_RUN_NUMBER: ${{ github.run_number }}
      - run: cat test.py
Here is the related and successful workflow run.
(workflow file, update_file.py file, test.py file)
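The substitution that update_file.py performs can also be factored into a small, testable helper. This is only a sketch using the placeholder and module names from the example above; `inject_run_number` is a hypothetical name, not part of the original scripts:

```python
import re


def inject_run_number(content: str, run_number: str) -> str:
    """Replace the #placeholder1 marker with the generated import line."""
    import_line = f'qwe = importlib.import_module("asd-{run_number}")'
    return re.sub(r"#placeholder1", import_line, content)
```

In the workflow, the run number would come from the GITHUB_RUN_NUMBER environment variable, exactly as in update_file.py.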

GitHub dependabot for a library inside a yml file

Introduction
I'm currently working on a project that automatically containerizes a Java project with JIB.
GitHub project link.
Problem
The JIB library is implicitly referenced inside the YAML file, like this:
- name: Build JIB container and publish to GitHub Packages
  run: |
    if [ ! -z "${{ inputs.module }}" ]; then
      MULTI_MODULE_ARGS="-am -pl ${{ inputs.module }}"
    fi
    if [ ! -z "${{ inputs.main-class }}" ]; then
      MAIN_CLASS_ARGS="-Djib.container.mainClass=${{ inputs.main-class }}"
    fi
    mvn package com.google.cloud.tools:jib-maven-plugin:3.2.1:build \
      -Djib.to.image=${{ inputs.REGISTRY }}/${{ steps.downcase.outputs.lowercase }}:${{ inputs.tag-name }} \
      -Djib.to.auth.username=${{ inputs.USERNAME }} \
      -Djib.to.auth.password=${{ inputs.PASSWORD }} $MULTI_MODULE_ARGS $MAIN_CLASS_ARGS
  shell: bash
When a new version of JIB is released, my Dependabot configuration doesn't update the YAML file.
Dependabot configuration:
version: 2
updates:
  - package-ecosystem: github-actions
    directory: '/'
    schedule:
      interval: weekly
Question
Does someone know how to configure dependabot.yml for an implicitly declared library?
Or how to configure dependabot.yml to automatically create an issue when a new JIB version is released?
You can do it with hiden-dependency-updater.
Example of GitHub Workflow you can use:
name: Update hidden dependencies
on:
  schedule:
    - cron: '0 0 * * *'

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: MathieuSoysal/hiden-dependency-updater@v1.1.1
        with:
          files: action.yml # List of files to update
          prefix: "com.google.cloud.tools:jib-maven-plugin:" # Prefix before the version, default is: ""
          suffix: ":build ."
          regex: "[0-9.]*"
          selector: "maven"
          github_repository: "GoogleContainerTools/jib"
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }} # You need to create your own token with pull request rights
          commit-message: update jib
          title: Update jib
          body: Update jib to reflect release changes
          branch: update-jib
          base: main
From the doc:
The directory must be set to "/" to check for workflow files in
.github/workflows.
- package-ecosystem: "github-actions"
  # Workflow files stored in the
  # default location of `.github/workflows`
  directory: "/"
  schedule:
    interval: "daily"
So: try specifying a different directory, for example:
- package-ecosystem: "github-actions"
  # Workflow files stored in the
  directory: "."
  schedule:
    interval: "daily"

Terratest in GitHub Action fails

When running Terratest locally with an assertion, it works as expected. But when running it via a GitHub Action I get the error below (removing the assertion has Terratest running as expected):
TestTerraform 2022-07-27T08:34:49Z logger.go:66: ::set-output name=exitcode::0
    output.go:19:
        Error Trace:    output.go:19
                        terraform_test.go:24
        Error:          Received unexpected error:
                        invalid character 'c' looking for beginning of value
        Test:           TestTerraform
The terratest file looks like:
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraform(t *testing.T) {
    iam_arn_expected := "arn:aws:iam::xxx:role/terratest_iam"

    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        TerraformDir: "../examples/simple",
        Vars: map[string]interface{}{
            "iam_name": "terratest_iam",
            "region":   "eu-west-2",
        },
    })

    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)

    iam_arn := terraform.Output(t, terraformOptions, "iam_arn")
    assert.Equal(t, iam_arn_expected, iam_arn)
}
The GitHub Action workflow looks like:
jobs:
  terratest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3.2.1
      - name: Install tfenv
        run: |
          git clone https://github.com/tfutils/tfenv.git ~/.tfenv
          echo "$HOME/.tfenv/bin" >> $GITHUB_PATH
      - name: Install Terraform
        working-directory: .
        run: |
          pwd
          tfenv install
          terraform --version
      - name: Download Go Modules
        working-directory: test
        run: go mod download
      - name: Run Terratest
        working-directory: test
        run: go test -v
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PLAYGROUND_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PLAYGROUND_KEY }}
The solution is to use the HashiCorp setup-terraform action with the Terraform wrapper turned off, in place of your "Install Terraform" step. The wrapper intercepts Terraform's stdout and emits extra ::set-output lines (visible in the log above), so terraform.Output receives text that isn't valid JSON, which produces the "invalid character" error.
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.3.6
    terraform_wrapper: false

Use github actions to concat json files

I have a directory containing JSON files, and I want to use GitHub Actions to create a new file in the repository that contains an array of all those JSON files.
For example, the directory <my-repo>/configurations contains the files a.json and b.json; I want to create a new file called configs.json containing [<a.json content>,<b.json content>].
The creation must be done dynamically.
Any suggestions?
Solution for config files sitting under the /configs directory:
name: build unified config file
on: [push]

jobs:
  build_file:
    name: build unified config file
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Run script
        uses: jannekem/run-python-script-action@v1
        with:
          script: |
            import os
            parts = []
            for entry in sorted(os.scandir('configs'), key=lambda e: e.name):
                print(entry.name)
                with open(entry, "r") as f:
                    parts.append(f.read())
            # join with commas so the result is a valid JSON array
            # (writing a comma after every file would leave a trailing comma)
            with open("unified.json", "w") as t:
                t.write('[' + ','.join(parts) + ']')
            with open("unified.json", "r") as t:
                print(t.read())
      - name: push file to main
        uses: EndBug/add-and-commit@v9
        with:
          add: 'unified.json'
          committer_name: Committer Name
          committer_email: mail@example.com
          default_author: github_actor
          message: 'Update unified config file'
          push: true
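A variant that parses each file with the json module guarantees the output is valid JSON and fails loudly on a malformed input file. This is a sketch, not part of the workflow above; `concat_json` is a hypothetical helper name, and the configs directory layout is taken from the answer:

```python
import json
import os


def concat_json(directory: str, output_path: str) -> list:
    """Merge every .json file in `directory` into one JSON array file."""
    combined = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(directory, name), "r") as f:
            combined.append(json.load(f))  # raises ValueError on invalid JSON
    with open(output_path, "w") as out:
        json.dump(combined, out, indent=2)
    return combined
```

Because json.dump writes the array, there is no trailing-comma pitfall, and any broken config file is reported at merge time rather than discovered later by consumers of configs.json.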

Publish Python Package via GitHub Actions to AWS CodeArtifact

I have a hard time publishing a package to AWS CodeArtifact. The problem is the authentication.
First I tried to execute the login via the AWS CLI, but due to the lack of a .pypirc file containing the repository settings, that didn't work out. Now I tried to store the token and feed it into --repository-url, but in both cases the process wants a username anyway.
Stacktrace:
  File "/opt/hostedtoolcache/Python/3.9.1/x64/bin/twine", line 8, in <module>
    sys.exit(main())
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/__main__.py", line 28, in main
    result = cli.dispatch(sys.argv[1:])
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/cli.py", line 82, in dispatch
    return main(args.args)
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/commands/upload.py", line 154, in main
    return upload(upload_settings, parsed_args.dists)
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/commands/upload.py", line 91, in upload
    repository = upload_settings.create_repository()
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/settings.py", line 345, in create_repository
    self.username,
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/settings.py", line 146, in username
    return cast(Optional[str], self.auth.username)
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 35, in username
    return utils.get_userpass_value(
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/utils.py", line 241, in get_userpass_value
    return prompt_strategy()
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 81, in username_from_keyring_or_prompt
    return self.prompt("username", input)
  File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 92, in prompt
    return how(f"Enter your {what}: ")
EOFError: EOF when reading a line
Enter your username:
Error: Process completed with exit code 1.
Partial github-action.yml:
steps:
  - uses: actions/checkout@v2
  - name: Set up Python
    uses: actions/setup-python@v2
    with:
      python-version: '3.x'
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install setuptools wheel twine
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_CA_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_CA_SECRET_ACCESS_KEY }}
      aws-region: eu-central-1
  - name: Build and publish
    run: |
      token=$(aws codeartifact get-authorization-token --domain foobar --domain-owner 123456678901 --query authorizationToken --output text)
      python setup.py sdist bdist_wheel
      twine upload --repository-url https://aws:$token@foobar-123456678901.d.codeartifact.eu-central-1.amazonaws.com/pypi/my-repo/simple dist/*
You need to pass twine the correct authentication values explicitly; try the following:
twine upload --repository-url https://foobar-123456678901.d.codeartifact.eu-central-1.amazonaws.com/pypi/my-repo/simple --username aws --password $token dist/*
see: https://twine.readthedocs.io/en/latest/#commands
The AWS CLI lets you configure credentials for twine so you don't have to pass them explicitly.
- name: Build and publish
  run: |
    aws codeartifact login --tool twine --domain foobar --repository my-repo
    python setup.py sdist bdist_wheel
    twine upload --repository codeartifact dist/*
Links:
https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html
https://docs.aws.amazon.com/codeartifact/latest/ug/python-run-twine.html

aiohttp and gunicorn: logger doesn't work

I'm trying to debug my aiopg connection settings in a production environment using the aiohttp logger and gunicorn.
I'm trying to log my database credentials:
models.py:
async def init_pg(app):
    logger = logging.getLogger('aiohttp.access')
    logger.error("POSTGRES_USER = %s" % app['settings'].POSTGRES_USER)
    logger.error("POSTGRES_DATABASE = %s" % app['settings'].POSTGRES_DATABASE)
    logger.error("POSTGRES_HOST = %s" % app['settings'].POSTGRES_HOST)
    logger.error("POSTGRES_PASSWORD = %s" % app['settings'].POSTGRES_PASSWORD)
    app['engine'] = await create_engine(
        user=app['settings'].POSTGRES_USER,
        database=app['settings'].POSTGRES_DATABASE,
        host=app['settings'].POSTGRES_HOST,
        password=app['settings'].POSTGRES_PASSWORD
    )
This doesn't add any output to /var/log/gunicorn/error_log, although it is expected to.
Here is how I start gunicorn:
/usr/local/bin/gunicorn producer.main:app --daemon --bind 0.0.0.0:8002 --worker-class aiohttp.worker.GunicornWebWorker --access-logfile /var/log/gunicorn/access_log --error-logfile /var/log/gunicorn/error_log --env ENVIRONMENT=PRODUCTION --timeout 120
Here is how I create aiohttp app:
main.py:
import logging

import aiohttp_jinja2
import jinja2
from aiojobs.aiohttp import setup as setup_aiojobs
from aiohttp_swagger import setup_swagger
from aiohttp import web, web_middlewares

from . import settings
from .models import init_pg
from .urls import setup_routes

"""
Run either of the following commands from the parent of the current directory:
adev runserver producer --livereload
python3 -m producer.main
"""


def create_app():
    logging.basicConfig(level=logging.DEBUG)

    app = web.Application(middlewares=[
        web_middlewares.normalize_path_middleware(append_slash=True),
    ], client_max_size=2048**2)
    app.update(name='producer', settings=settings)

    # setup Jinja2 template renderer
    aiohttp_jinja2.setup(app, loader=jinja2.PackageLoader('producer', 'templates'))

    # create db connection on startup, shutdown on exit
    app.on_startup.append(init_pg)
    # app.on_cleanup.append(close_pg)

    # setup views and routes
    setup_routes(app)

    # setup middlewares
    # setup_middlewares(app)

    # setup aiojobs scheduler
    setup_aiojobs(app)

    # setup swagger documentation
    setup_swagger(app, swagger_url="api/doc")

    return app


if __name__ == '__main__':
    app = create_app()
    web.run_app(app, host=app['settings'].HOST, port=app['settings'].PORT)
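One direction worth trying (an assumption, not a confirmed fix for this thread): attach an explicit stderr handler to the aiohttp.access logger instead of relying on logging.basicConfig, since gunicorn configures logging in the worker and root-logger handlers set up in create_app may never route records to its --error-logfile. gunicorn redirects worker stderr into the error log, so a direct stderr handler bypasses that configuration entirely. A minimal sketch; `configure_access_logger` is a hypothetical helper:

```python
import logging
import sys


def configure_access_logger(level: int = logging.DEBUG) -> logging.Logger:
    """Attach an explicit stderr handler to aiohttp's access logger.

    Writing straight to stderr avoids depending on whatever handlers
    gunicorn or basicConfig installed on the root logger.
    """
    logger = logging.getLogger("aiohttp.access")
    logger.setLevel(level)
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

It would be called once, e.g. at the top of init_pg before the first logger.error(...) call.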