Does GitHub install the same packages each time? - github-actions

The GitHub Actions documentation recommends putting an installation step for all the required packages (flake8, pytest, etc.) in each workflow. Does this mean that, whenever I push a change to my repository, GitHub installs all these packages anew? This seems very wasteful: a lot of energy is wasted each time. Why do they need to reinstall all the packages again and again?

By default, GitHub Actions does not cache any files and has to fetch them each time.
However, you can cache the packages with the cache action: https://github.com/actions/cache
An example for pip:
- uses: actions/cache@v1
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-
The cached files must not exceed 400 MB. Unused caches will expire after 1 week.
My guess is that these are powered by shared Azure DevOps agents and each agent instance is cleaned up after a build job.
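As a side note (not part of the original answer): newer versions of actions/setup-python have dependency caching built in, which covers the common pip case without a separate cache step. A minimal sketch, assuming a requirements.txt at the repository root:
- uses: actions/setup-python@v4
  with:
    python-version: '3.11'
    cache: 'pip'   # caches pip's download cache, keyed on the requirements file
- run: pip install -r requirements.txt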

Related

setup-node restoring cache but still takes a while to yarn

So the setup-node github action caches node_modules with this config:
- uses: actions/setup-node@v2
  with:
    node-version: '14.15.5'
    cache: 'yarn'
I can see it restores a cache.
/home/runner/.cache/yarn/v6
Received 0 of 138278798 (0.0%), 0.0 MBs/sec
Received 113246208 of 138278798 (81.9%), 53.4 MBs/sec
Received 138278798 of 138278798 (100.0%), 55.8 MBs/sec
Cache Size: ~132 MB (138278798 B)
/usr/bin/tar --use-compress-program zstd -d -xf /home/runner/work/_temp/b44b9064-7157-4afd-a342-f81e1005ef1d/cache.tzst -P -C /home/runner/work/app-frontend/app-frontend
Cache restored successfully
But when I do a yarn --frozen-lockfile (we always commit our lockfiles)
I see this output:
Run yarn --frozen-lockfile
yarn install v1.22.17
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
...
and the step still takes 44 seconds.
I'm confused about why this happens.
I implemented my own caching like this:
- name: Cache Modules
  uses: actions/cache@v2
  with:
    path: '**/node_modules'
    key: ${{ runner.os }}-modules-${{ hashFiles('**/yarn.lock') }}
And now when I run yarn --frozen-lockfile the step completes in 3 seconds and outputs:
Run yarn --frozen-lockfile
yarn install v1.22.17
[1/4] Resolving packages...
success Already up-to-date.
Done in 1.21s.
I'm confused as to why this is. Obviously I'm misunderstanding something about the way something (yarn, setup-node caching, something else?) works.
The goal is to get the build as fast as possible (while being correct, of course). Can anybody help me understand why setup-node is restoring a cache but yarn is still doing 44 seconds worth of work?
The built-in cache of setup-node caches the package manager's global cache directory (the downloaded packages for yarn or npm). The packages still need to be resolved and linked from that global cache into the local working directory, which, in my case, still took 44 seconds.
My cache, on the other hand, cached the local working directory's node_modules, so the packages were already resolved and installed locally when I ran it a second time.
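For completeness, a sketch that combines both caches and skips the install step entirely on an exact lockfile match. The step id and the conditional are my own additions, not from the original answer:
- uses: actions/setup-node@v2
  with:
    node-version: '14.15.5'
    cache: 'yarn'                 # global yarn cache (downloaded tarballs)
- name: Cache node_modules
  id: modules-cache
  uses: actions/cache@v2
  with:
    path: '**/node_modules'
    key: ${{ runner.os }}-modules-${{ hashFiles('**/yarn.lock') }}
- name: Install dependencies
  if: steps.modules-cache.outputs.cache-hit != 'true'
  run: yarn --frozen-lockfile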

Can't find any online and idle self-hosted runner in the current repository

When I run GitHub Actions, it shows this error:
Can't find any online and idle self-hosted runner in the current repository, account/organization that matches the required labels: 'macos-11.1'
Waiting for a self-hosted runner to pickup this job...
I want to run GitHub Actions on macos-11.1. Does GitHub Actions not support macos-11.1? What should I do to avoid this problem? I know that GitHub Actions supports macOS 11.1, based on https://github.com/actions/virtual-environments/blob/main/images/macos/macos-10.15-Readme.md
This is my workflow:
jobs:
  build:
    #
    # more macOS versions:
    # https://github.com/actions/virtual-environments/blob/main/images/macos/macos-10.15-Readme.md
    #
    runs-on: macos-11.1
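For what it's worth (this is my reading of the error message, not part of the original question): hosted runner labels name whole runner images rather than macOS patch releases, so a label like macos-11.1 matches no GitHub-hosted runner and the job waits for a self-hosted one instead. A minimal sketch using a hosted image label:
jobs:
  build:
    # hosted images are addressed by labels such as macos-latest
    # (or a major image version), not by patch versions like 11.1
    runs-on: macos-latest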

GitHub.com Actions - npm run build - debug.log

I’ve got an Action which builds a React site. It works perfectly locally, and similar code works in a different repo, but it won't build via this Action.
I (think) I've narrowed it down to a single file, but despite committing single lines at a time and having it work elsewhere, I'm getting nowhere.
The actions Build and Deploy step log includes:
npm ERR! A complete log of this run can be found in:
npm ERR! /github/home/.npm/_logs/2021-07-05T20_48_03_994Z-debug.log
---End of Oryx build logs---
Oryx has failed to build the solution.
Anybody know how to access the debug.log?
Someone suggested I use actions/upload-artifact to try and upload the artifacts (and hopefully the logs) so I added this:
- name: Archive production logs
  uses: actions/upload-artifact@v2
  if: always()
  with:
    retention-days: 1
    path: |
      **
      !/home/runner/work/mysite/mysite/node_modules/**
The ** is there to get everything excluding node_modules, which is huge.
Unfortunately, it still didn't include the log files, which I assume is because they're in the Oryx container and I can't access them.
I somehow found this article: https://github.com/microsoft/Oryx/issues/605
and added this bit to my workflow
env:
  CI: false
which I believe means that warnings are not treated as errors
TLDR
How do you access the debug.log when using GitHub Actions?
I've had success archiving npm failure logs with the following step:
- name: Archive npm failure logs
  uses: actions/upload-artifact@v2
  if: failure()
  with:
    name: npm-logs
    path: ~/.npm/_logs
I'm using the if: failure() conditional so that this step only runs when a previous step fails; excluding the conditional entirely would mean that a failure of the previous step prevents this step from running (the implicit default appears to be if: success()). If you'd like to archive the logs as an artifact in all cases, you'll want to change that back to if: always(), as you had in your sample code.
(I'm also only archiving the ~/.npm/_logs path, and archiving it without a retention time.)

How to handle GitHub Actions deleting its cache after 7 days

I have a GitHub Action that runs tests for my Python/Django project. It caches the virtual environment that Pipenv creates. Here's the workflow with nearly everything but the relevant steps commented out/removed:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # postgres:
    steps:
      #- uses: actions/checkout@v2
      #- name: Set up Python
      #- name: Install pipenv and coveralls
      - name: Cache pipenv virtualenv
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Install dependencies
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        run: pipenv install --dev
      # Run tests etc.
This works fine usually, but because caches are removed after 7 days, if this is run less frequently than that, it can't find the cache and the Install Dependencies step fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/.pipenv/virtualenvs/my-project-CfczyyRI/bin/pip'
I then bump the cache key's version number (v4 above) and the action runs OK.
I thought the if: steps.pipenv-cache.outputs.cache-hit != 'true' would fix this but it doesn't. What am I missing?
First alternative: using a separate workflow with a schedule trigger event, you can run a workflow on a recurring schedule.
That way, you force a refresh of those dependencies in the workflow cache (see the sketch below).
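A minimal sketch of such a scheduled workflow; the workflow name and cron expression are placeholders I chose, not from the question, and the cache keys mirror the ones above:
name: Keep pipenv cache warm
on:
  schedule:
    - cron: '0 6 * * 1,4'   # twice a week, inside the 7-day eviction window
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/cache@v2
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-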
Second alternative: use github.rest.actions.getActionsCacheList from actions/github-script (as seen here), again in a separate workflow, just to read said cache, and check whether it still disappears after 7 days. A sketch of that follows below.
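A sketch of that second alternative; the logged message is just illustrative:
- uses: actions/github-script@v6
  with:
    script: |
      const caches = await github.rest.actions.getActionsCacheList({
        owner: context.repo.owner,
        repo: context.repo.repo,
      });
      core.info(`Found ${caches.data.actions_caches.length} cache entries`);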
Third alternative: check if reading the cache through the new Web UI is enough to force a refresh.
On that third point (Oct. 2022):
Manage caches in your Actions workflows from Web Interface
Caching dependencies and other commonly reused files enables developers to speed up their GitHub Actions workflows and make them more efficient.
We have now enabled Cache Management from the web interface to enable developers to get more transparency and control over their cache usage within their GitHub repositories.
Actions users who use actions/cache can now:
View a list of all cache entries for a repository.
Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
Delete a corrupt or a stale cache entry.
Monitor aggregate cache usage for repositories and organizations.
In addition to the Cache Management UX that we have now enabled, you could also use our Cache APIs or install the GitHub CLI extension for Actions cache to manage your caches from your terminal.
Learn more about dependency caching to speed up your Actions workflows.

Stop TeamCity from Auto Checkout when adding a repo

I'm trying to configure TeamCity for use in our continuous integration.
Our project has approximately 35 Mercurial repos spread across 4 cities. All in all, the code in the repos is approximately 30 GB in size.
Our problem is that if we add/remove a repo from the VCS roots of a build configuration, the configuration automatically does a complete clean re-checkout of all repos. This adds an extra 3 hours to our build cycle.
Is there any way to turn this off?
We have TeamCity versions 7.0 and 7.1.
UPDATE:
Additional details for one of the build configurations:
Name: BE - Full Build
Description: none
Build number format: %AssemblyBuildNumber%, next build number: #%AssemblyBuildNumber%
Artifact paths:
none specified
Build options:
hanging builds detection: ON
status widget: OFF
maximum number of simultaneously running builds: unlimited
Version Control Settings:
VCS checkout mode: Automatically on server
Checkout directory: default
Clean all files before build: OFF
VCS labeling: disabled
Attached VCS roots:
< All the repos with no rules and no labels >
Show changes from snapshot dependencies: OFF
Perhaps an agent-side checkout plus a local mirror could help you. Take a look at the internal properties section here: http://confluence.jetbrains.net/display/TCD7/Mercurial