Docker cache not working on repository dispatch - github-actions

I have a workflow that builds a Docker image.
When the workflow runs from a manual trigger or a push trigger, the cache works fine and I get really good performance.
When I trigger the workflow through repository dispatch (another workflow triggering this one), the cache doesn't work.
I have tried everything: using the cache action with every storage backend it offers, running on a GitHub-hosted runner, running on a self-hosted runner, and building and pushing the image with plain bash commands instead of an action. Nothing seems to work.
Did anyone come across a similar issue?
This is how the build and push steps look at the moment (on a self-hosted runner):
- name: Build Docker image
  id: image_id
  run: |
    docker build -f Dockerfile.test \
      --build-arg LAMBDA_NAME=sharon-test \
      --build-arg LAMBDA_HANDLER=dist/apps/test/main.handler \
      --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} \
      -t ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest .
- name: Push Docker image
  run: |
    docker push ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest
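
One avenue worth checking, not shown in the question: plain docker build keeps its layer cache in the runner's local Docker daemon, so it only survives when consecutive runs land on the same warm machine, which a repository_dispatch run may not. An explicitly exported cache travels with the image instead. A minimal sketch using buildx inline caching, assuming buildx is available on the runner and reusing the redacted ECR repository above:
- name: Build and push with inline cache
  run: |
    # --cache-to type=inline embeds layer-cache metadata in the pushed image;
    # --cache-from pulls it back on the next run, regardless of the trigger,
    # and --push replaces the separate docker push step
    docker buildx build -f Dockerfile.test \
      --build-arg LAMBDA_NAME=sharon-test \
      --build-arg LAMBDA_HANDLER=dist/apps/test/main.handler \
      --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} \
      --cache-from type=registry,ref=****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest \
      --cache-to type=inline \
      -t ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest \
      --push .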

Related

How to run a GitHub runner as root

I am trying to run my GitHub runner as root on self-hosted Linux servers. Can anyone point me to an easy solution that I can implement quickly in the following code?
name: Test
on: push
jobs:
  Test1:
    runs-on: selfhosted-linux # This should run on this self-hosted runner only
    steps:
      - uses: actions/checkout@v2
At this point I cannot SSH into the self-hosted Linux box but can only access it via code in the workflow folder, and I would like to run the checkout as root rather than as a non-root user.
You need to set the environment variable RUNNER_ALLOW_RUNASROOT before you run config.sh to set up the runner. e.g.
RUNNER_ALLOW_RUNASROOT=1 ./config.sh --token asdlkjfasdlkj
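For completeness, a sketch of the full sequence on the runner host; the URL and token are placeholders, and the variable also has to be set when the runner process itself starts:
export RUNNER_ALLOW_RUNASROOT=1
./config.sh --url https://github.com/your-org/your-repo --token <registration-token>
./run.sh   # jobs, including the checkout step, now execute as root
Alternatively, the bundled service scripts (sudo ./svc.sh install followed by sudo ./svc.sh start) install the runner as a systemd service so it keeps running after you disconnect.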

How to handle GitHub Actions deleting its cache after 7 days

I have a GitHub Action that runs tests for my Python/Django project. It caches the virtual environment that Pipenv creates. Here's the workflow with nearly everything but the relevant steps commented out/removed:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # postgres:
    steps:
      #- uses: actions/checkout@v2
      #- name: Set up Python
      #- name: Install pipenv and coveralls
      - name: Cache pipenv virtualenv
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Install dependencies
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        run: pipenv install --dev
      # Run tests etc.
This usually works fine, but because caches are removed after 7 days, if the workflow runs less frequently than that, the cache can't be found and the Install dependencies step fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/.pipenv/virtualenvs/my-project-CfczyyRI/bin/pip'
I then bump the cache key's version number (v4 above) and the action runs OK.
I thought the if: steps.pipenv-cache.outputs.cache-hit != 'true' would fix this but it doesn't. What am I missing?
First alternative: using a separate workflow with a schedule trigger event, you can run workflows on a recurring schedule.
That way, you force a refresh of those dependencies in the workflow cache (a sketch follows at the end of this answer).
Second alternative: use github.rest.actions.getActionsCacheList from actions/github-script (as seen here), again in a separate workflow, just to read said cache, and check whether it still disappears after 7 days.
Third alternative: check whether reading the cache through the new web UI is enough to force a refresh.
On that third point (Oct. 2022):
Manage caches in your Actions workflows from Web Interface
Caching dependencies and other commonly reused files enables developers to speed up their GitHub Actions workflows and make them more efficient.
We have now enabled Cache Management from the web interface to enable developers to get more transparency and control over their cache usage within their GitHub repositories.
Actions users who use actions/cache can now:
View a list of all cache entries for a repository.
Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
Delete a corrupt or a stale cache entry.
Monitor aggregate cache usage for repositories and organizations.
In addition to the Cache Management UX that we have now enabled, you could also use our Cache APIs or install the GitHub CLI extension for Actions cache to manage your caches from your terminal.
Learn more about dependency caching to speed up your Actions workflows.
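
A minimal sketch of the first alternative: a scheduled workflow whose only job is to restore the cache, keeping its last-accessed time inside the 7-day eviction window. The schedule below is an arbitrary choice, and the key is reused from the workflow above:
name: Keep pipenv cache warm
on:
  schedule:
    - cron: '0 3 * * 1,4' # Mondays and Thursdays, comfortably inside the 7-day window
jobs:
  warm-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Restore pipenv cache # a hit is enough; restoring refreshes the access time
        uses: actions/cache@v2
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-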

Extract POM version, for a job from Github Actions

I am trying out GitHub Actions to build my Docker container automatically.
In my POM, I derive the version of the Docker image that I create with Jib from the version of my project.
<groupId>io.xxx.my-proyect</groupId>
<artifactId>my-proyect</artifactId>
<version>0.2.0-SNAPSHOT</version>
<name>my-proyect</name>
...
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  ....
  <to>
    <image>XXX/my-proyect:${version}</image>
  </to>
</plugin>
GitHub Actions:
- name: package
  run: ./mvnw package jib:dockerBuild
- name: push
  run: docker push xxx/my-proyect:VERSION # <-- extracted from the version property of my POM
Does anyone have an idea how to do it?
You can ask Maven using maven-help-plugin:
mvn help:evaluate -Dexpression=project.version -q -DforceStdout
If the command prints nothing, the maven-help-plugin being used is probably too old: -q suppresses all regular output, and -DforceStdout (which prints only the evaluated value) requires maven-help-plugin 3.1.0 or later. If so, you can pin the version like this:
mvn org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate -Dexpression=project.version -q -DforceStdout
Or, you could pin the version 3.2.0 in your pom.xml or in a parent pom.xml (if applicable).
Note that running a Maven command always takes some time, so it may add a few seconds to your build. Passing --offline may help, if it works at all (it may not gain much).
Alternatively, you could run jib:build instead of jib:dockerBuild to have Jib push directly to xxx/my-project:VERSION. Looking at your YAML, there does not seem to be any good reason to run jib:dockerBuild followed by docker push. Using jib:build in this case will substantially cut your build time, as Jib loses a lot of performance optimizations when pushing to a local Docker daemon.
UPDATE: also, unless you are using Jib's <containerizingMode>packaged</containerizingMode> configuration, you don't need mvn package; mvn compile jib:... will suffice (and is marginally faster).
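
Putting that together in the asker's workflow, a minimal sketch (assuming a runner recent enough to support the $GITHUB_OUTPUT file; on older runners, the ::set-output command served the same purpose):
- name: Extract POM version
  id: pom
  # evaluate project.version and expose it as a step output for later steps
  run: echo "version=$(./mvnw help:evaluate -Dexpression=project.version -q -DforceStdout)" >> "$GITHUB_OUTPUT"
- name: package
  run: ./mvnw package jib:dockerBuild
- name: push
  run: docker push xxx/my-proyect:${{ steps.pom.outputs.version }}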

How can I specify Dockerfile build in buildConfig for OpenShift Online?

OpenShift details:
Paid Professional version.
Version information:
I have been trying to create a build from a Dockerfile in OpenShift.
It's tough going.
So I tried to use the existing templates in the Cluster Console,
one of which is the Docker one. When I press "Try it", it generates a sample BuildConfig; when I then try to Create it, it gives me the error:
(I have now raised the above in the Origin upstream issue tracker.)
Anyhow... does anyone know how to specify in a BuildConfig an image built from a Dockerfile in a Git repo? I would be grateful to know.
You can see the build strategies allowed for OpenShift Online on the product website: https://www.openshift.com/products/online. Dockerfile build isn't deprecated, it's just explicitly disallowed in OpenShift Online. You can build your Dockerfile locally and push it directly to the OpenShift internal registry (commands for docker login and docker push are on your cluster's About page).
However, in other environments (not OpenShift Online), you can specify a Dockerfile build as follows, providing a Git repo with the Dockerfile contained within (located at BuildConfig.spec.source.contextDir):
strategy:
  type: Docker
There are additional options that can be configured for a Dockerfile build as well, outlined in https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#docker-strategy-options.
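For reference, a fuller sketch of such a BuildConfig; the repository URL and image name below are placeholders:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/my-app.git # repo containing the Dockerfile
    contextDir: . # directory within the repo holding the Dockerfile
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest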

Deploying a node.js application with Bluemix

I am trying to deploy a simple node.js application with the new Kubernetes support in Bluemix. When I run the container I made, I get an ImagePullBackOff error, which means the cluster can't pull down the image.
NAME                          READY   STATUS             RESTARTS   AGE
hello-node-2399519400-6m8dz   0/1     ImagePullBackOff   0          13m
My Docker image uses the node.js base image.
FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
I deployed using:
docker build -t hello-node:v1 .
kubectl run hello-node --image=hello-node:v1 --port=8080
I am thinking that Bluemix can't pull down the node.js image, but I am not certain.
I see the docker build of the image, and I'm presuming that you're using kubectl with the exported cluster config (bx cs cluster-config ...) so that it's targeting your cluster.
Did you tag and push that image from your local Docker into the Bluemix registry, or to another remote registry that would be accessible from the container service? (My apologies if this is obvious; I just didn't see the step there that tags and pushes it to a registry that would be available.)
I had to first push the image to Bluemix with:
docker build -t registry.ng.bluemix.net/namespace/hello-node:1 .
docker push registry.ng.bluemix.net/namespace/hello-node:1
kubectl run hello-node-deployment --image=registry.ng.bluemix.net/namespace/hello-node:1
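Since the image was already built locally as hello-node:v1, retagging it would also have worked instead of rebuilding; either way, Docker must be logged in to the Bluemix registry before the push. A sketch, assuming the bx CLI with the container-registry plugin is installed:
bx cr login # logs the local Docker client in to registry.ng.bluemix.net
docker tag hello-node:v1 registry.ng.bluemix.net/namespace/hello-node:1
docker push registry.ng.bluemix.net/namespace/hello-node:1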