setup-node restoring cache but still takes a while to yarn - github-actions

So the setup-node github action caches node_modules with this config:
- uses: actions/setup-node@v2
  with:
    node-version: '14.15.5'
    cache: 'yarn'
I can see it restores a cache.
/home/runner/.cache/yarn/v6
Received 0 of 138278798 (0.0%), 0.0 MBs/sec
Received 113246208 of 138278798 (81.9%), 53.4 MBs/sec
Received 138278798 of 138278798 (100.0%), 55.8 MBs/sec
Cache Size: ~132 MB (138278798 B)
/usr/bin/tar --use-compress-program zstd -d -xf /home/runner/work/_temp/b44b9064-7157-4afd-a342-f81e1005ef1d/cache.tzst -P -C /home/runner/work/app-frontend/app-frontend
Cache restored successfully
But when I run yarn --frozen-lockfile (we always commit our lockfiles),
I see this output:
Run yarn --frozen-lockfile
yarn install v1.22.17
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
...
and the step still takes 44 seconds.
I'm confused about why this happens.
I implemented my own caching like this:
- name: Cache Modules
  uses: actions/cache@v2
  with:
    path: '**/node_modules'
    key: ${{ runner.os }}-modules-${{ hashFiles('**/yarn.lock') }}
And now when I run yarn --frozen-lockfile the step completes in 3 seconds and outputs:
Run yarn --frozen-lockfile
yarn install v1.22.17
[1/4] Resolving packages...
success Already up-to-date.
Done in 1.21s.
I'm clearly misunderstanding something about how one of these pieces (yarn, setup-node caching, something else?) works.
The goal is to get the build as fast as possible (while being correct, of course). Can anybody help me understand why setup-node is restoring a cache but yarn is still doing 44 seconds worth of work?

The built-in cache of setup-node stores the package manager's global cache (yarn's or npm's), not your project's node_modules. On a cache hit, yarn no longer has to download the packages, but it still has to resolve the lockfile and link every package from the global cache into the local working directory, which in my case still took 44 seconds.
My cache, on the other hand, cached node_modules itself, so the packages were already resolved and installed locally the second time the workflow ran.
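To illustrate the difference, here is a sketch combining both layers. The step names and the cache-hit guard are mine, not from the thread, and caching node_modules this way assumes the Node version stays pinned (a node_modules built under one Node version can break under another):

- uses: actions/setup-node@v2
  with:
    node-version: '14.15.5'
    cache: 'yarn'    # restores yarn's global download cache
- name: Cache node_modules
  id: modules-cache
  uses: actions/cache@v2
  with:
    path: '**/node_modules'
    key: ${{ runner.os }}-modules-${{ hashFiles('**/yarn.lock') }}
# Skip the resolve/link work entirely when node_modules was restored.
- name: Install dependencies
  if: steps.modules-cache.outputs.cache-hit != 'true'
  run: yarn --frozen-lockfile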

Related

Docker cache not working on repository dispatch

I have a workflow that builds a Docker image.
When the workflow runs with manual trigger/push trigger the cache works fine and I get really good performance.
When I trigger the workflow through repository dispatch (another workflow that triggers the workflow) the cache doesn't work.
I tried everything: the cache action with every storage backend available, GitHub-hosted runners, self-hosted runners, and building and pushing the image with plain docker commands instead of an action. Nothing seems to work.
Did anyone come across a similar issue?
This is how build and push look at the moment (on a self hosted runner):
- name: Build Docker image
  id: image_id
  run: |
    docker build -f Dockerfile.test --build-arg LAMBDA_NAME=sharon-test --build-arg LAMBDA_HANDLER=dist/apps/test/main.handler --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} -t ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest .
- name: Push Docker image
  run: |
    docker push ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest

CircleCI is unable to read the JUnit xml generated by Behave while splitting the test by timings

OK, so I am trying to split my Appium tests by their timings in CircleCI to run them in parallel. My tests are written in behave (Python) and I am generating the JUnit XML file. Here is my config.yml file:
version: 2.1
orbs:
  macos: circleci/macos@2.2.0
jobs:
  example-job:
    macos:
      xcode: 13.4.1
    parallelism: 4
    resource_class: large
    steps:
      - checkout
      - run:
          name: Install appium server
          command: |
            sudo npm update -g
            sudo npm install -g appium
            sudo npm install -g wd
      - run:
          name: Start appium server
          command: appium --address localhost --port 4723
          background: true
      - run:
          name: Installing Dependencies
          command: pip3 install -r requirements.txt
      - run:
          name: Test application
          command: |
            TEST=$(circleci tests glob "features/featurefiles/*.feature" | circleci tests split --split-by=timings --timings-type=classname)
            echo $TEST
            behave $TEST --junit
      - store_test_results:
          path: reports
      - store_artifacts:
          path: ./Logs
          destination: logs-file
      - store_artifacts:
          path: ./screenshots
workflows:
  example-workflow:
    jobs:
      - example-job
When I run the tests, I get the error 'No timing found for "features/featurefiles/XXX.feature"' and the split falls back to the filename. The run itself works, but the split does not happen by timings.
When the execution is done, I can see the data in the TESTS tab and also in the Timing tab.
I believe CircleCI is not able to read the JUnit file generated by Behave and is looking for a different JUnit XML file. How can we make CircleCI read the JUnit file generated by Behave?
If anyone faces this issue, take a look at the classname in the JUnit report. CircleCI reads the classname in a specific format. In my case, the classname in the JUnit report was
features.featurefiles.Login.feature
but CircleCI was looking for the classname in this format:
features/featurefiles/Login.feature
I had to write a utility to rewrite the classnames in the report once the execution completed (see the sketch below). Once that was done, CircleCI was able to read the timings.
Hope it helps someone :)
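For reference, here is a minimal sketch of what such a step could look like in the config above. The sed expression and the reports/ path are my assumptions (behave writes its JUnit files to reports/ by default), not the author's actual utility; it would need to run before store_test_results:

- run:
    name: Rewrite JUnit classnames for CircleCI
    when: always
    command: |
      # Turn features.featurefiles.Login.feature into
      # features/featurefiles/Login.feature in every report.
      # The empty '' after -i is BSD sed syntax for the macOS executor.
      sed -i '' -E 's/classname="features\.featurefiles\.([^".]+)\.feature"/classname="features\/featurefiles\/\1.feature"/g' reports/*.xml
- store_test_results:
    path: reports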

GitHub.com Actions - npm run build - debug.log

I've got an Action which builds a React site. It works perfectly locally, and similar code works in a different repo, but it won't build via this Action.
I think I've narrowed it down to a single file, but despite committing single lines at a time that work elsewhere, I'm getting nowhere.
The action's Build and Deploy step log includes:
npm ERR! A complete log of this run can be found in:
npm ERR! /github/home/.npm/_logs/2021-07-05T20_48_03_994Z-debug.log
---End of Oryx build logs---
Oryx has failed to build the solution.
Anybody know how to access the debug.log?
Someone suggested I use actions/upload-artifact to try and upload the artifacts (and hopefully the logs) so I added this:
- name: Archive production logs
  uses: actions/upload-artifact@v2
  if: always()
  with:
    retention-days: 1
    path: |
      **
      !/home/runner/work/mysite/mysite/node_modules/**
The ** is there to get everything, excluding node_modules, which is huge.
Unfortunately, it still didn't include the log files, which I assume is because they're in the Oryx container and I can't access them.
I somehow found this article: https://github.com/microsoft/Oryx/issues/605
and added this bit to my workflow
env:
  CI: false
which I believe means that warnings are not treated as errors
TLDR
How do you access the debug.log when using GitHub Actions?
I've had success archiving npm failure logs with the following step:
- name: Archive npm failure logs
  uses: actions/upload-artifact@v2
  if: failure()
  with:
    name: npm-logs
    path: ~/.npm/_logs
I'm using the if: failure() conditional to run this step only when a previous step fails; excluding the conditional entirely would mean that a failure of the previous step prevents this one from running (the implicit default appears to be if: success()). If you'd like to archive the logs as an artifact in all cases, change that back to if: always() as in your sample code.
(I'm also only archiving the ~/.npm/_logs path, and archiving it without a retention time.)
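If you do want the logs in every run, the same step with the question's retention setting put back would look like this (a sketch):

- name: Archive npm logs
  uses: actions/upload-artifact@v2
  if: always()
  with:
    name: npm-logs
    retention-days: 1
    path: ~/.npm/_logs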

How to handle GitHub Actions deleting its cache after 7 days

I have a GitHub Action that runs tests for my Python/Django project. It caches the virtual environment that Pipenv creates. Here's the workflow with nearly everything but the relevant steps commented out/removed:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # postgres:
    steps:
      # - uses: actions/checkout@v2
      # - name: Set up Python
      # - name: Install pipenv and coveralls
      - name: Cache pipenv virtualenv
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Install dependencies
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        run: pipenv install --dev
      # Run tests etc.
This works fine usually, but because caches are removed after 7 days, if this is run less frequently than that, it can't find the cache and the Install Dependencies step fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/.pipenv/virtualenvs/my-project-CfczyyRI/bin/pip'
I then bump the cache key's version number (v4 above) and the action runs OK.
I thought the if: steps.pipenv-cache.outputs.cache-hit != 'true' would fix this but it doesn't. What am I missing?
First alternative: use a separate workflow with a schedule trigger, which runs on a recurring basis (more often than every 7 days).
That way, you force a refresh of those dependencies in the workflow cache.
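A minimal sketch of such a refresher, reusing the cache path and key from the workflow above (the workflow name and cron expression are illustrative; restoring the cache counts as access, which should keep it from being evicted):

name: Keep pipenv cache warm
on:
  schedule:
    - cron: '0 3 * * 1,4'   # twice a week, inside the 7-day eviction window
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Restore pipenv cache
        uses: actions/cache@v2
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-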
Second alternative: use github.rest.actions.getActionsCacheList from actions/github-script (as seen here), again in a separate workflow, just to read said cache, and check if it still disappears after 7 days.
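A sketch of that second alternative (actions/github-script@v6 is assumed; the step name and log format are mine):

- name: List Actions caches
  uses: actions/github-script@v6
  with:
    script: |
      // Calls GET /repos/{owner}/{repo}/actions/caches
      const res = await github.rest.actions.getActionsCacheList({
        owner: context.repo.owner,
        repo: context.repo.repo,
      });
      for (const cache of res.data.actions_caches) {
        console.log(`${cache.key} last accessed ${cache.last_accessed_at}`);
      }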
Third alternative: check if reading the cache through the new Web UI is enough to force a refresh.
On that third point (Oct. 2022):
Manage caches in your Actions workflows from Web Interface
Caching dependencies and other commonly reused files enables developers to speed up their GitHub Actions workflows and make them more efficient.
We have now enabled Cache Management from the web interface to enable developers to get more transparency and control over their cache usage within their GitHub repositories.
Actions users who use actions/cache can now:
View a list of all cache entries for a repository.
Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
Delete a corrupt or a stale cache entry.
Monitor aggregate cache usage for repositories and organizations.
In addition to the Cache Management UX that we have now enabled, you could also use our Cache APIs or install the GitHub CLI extension for Actions cache to manage your caches from your terminal.
Learn more about dependency caching to speed up your Actions workflows.

Does GitHub install the same packages each time?

The GitHub Actions documentation recommends that each workflow install all the required packages, such as flake8, pytest, etc. Does this mean that whenever I push a change to my repository, GitHub installs all these packages anew? This seems very wasteful: a lot of energy is wasted each time. Why do they need to reinstall all the packages again and again?
By default, GitHub Actions does not cache any files and has to fetch them each time.
However, you can cache the packages with the cache action: https://github.com/actions/cache
An example for pip:
- uses: actions/cache@v1
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-
The cached files must not exceed 400 MB. Unused caches will expire after 1 week.
My guess is that these are powered by shared Azure DevOps agents and each agent instance is cleaned up after a build job.