I have private repo A (which is a library) and that repo has releases. Now I have repo B which has a dependency on the artifacts of A. The dependency (i.e. which version) is stored in a json file in B. What I'm looking for is a way to download the artifacts of release X from repo A in an action/workflow in repo B.
I have seen a lengthy bash script which makes this possible, but I'm wondering if there are off-the-shelf actions around.
If you are using a Linux runner, you can use the Fetch GitHub Release Asset action.
uses: dsaltares/fetch-gh-release-asset@master
with:
  repo: "user/repo"
  version: "tags/v1"
  file: "filename.ext"
  target: "targetFolder/targetFileName.ext"
  token: ${{ secrets.PAT_TO_ACCESS_PRIVATE_REPO }}
Inputs
- token (required): The GitHub token. Typically this will be ${{ secrets.GITHUB_TOKEN }}; for a private repo, use a PAT with access to that repo, as in the example above.
- file (required): The name of the file to be downloaded.
- repo: The org/repo containing the release. Defaults to the current repo.
- version: The release version to fetch, in the form tags/<tag_name> or <release_id>. Defaults to latest.
- target: Target file path. Only supports paths to subdirectories of the GitHub Actions workspace directory.
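Since the version of A that B depends on lives in a JSON file, you can read it in an earlier step and feed it into the version input. A minimal sketch, assuming the file is called deps.json with a key libA_version (both names are hypothetical) and that the stored value matches the release tag:

- name: Read library version from deps.json
  id: dep
  run: |
    echo "version=$(jq -r '.libA_version' deps.json)" >> "$GITHUB_OUTPUT"
- name: Download release asset from repo A
  uses: dsaltares/fetch-gh-release-asset@master
  with:
    repo: "user/repoA"
    version: "tags/${{ steps.dep.outputs.version }}"
    file: "filename.ext"
    token: ${{ secrets.PAT_TO_ACCESS_PRIVATE_REPO }}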
My project contains a submodule that holds LFS files:
mainproject/
  submodule/
    path/to/lfs/file
QUESTION: How do I ensure the LFS files in the submodule are pulled? (Currently they aren't)
In my main.yml file, I do the following:
jobs:
  steps:
    ...
    - uses: actions/checkout@v2
      with:
        lfs: true
        token: ${{ secrets.ACCESS_TOKEN }}
        submodules: recursive
        fetch-depth: 0
This used to work, but I started getting errors a few weeks ago that I root-caused to the LFS files not being pulled.
This is really weird and appears to be undocumented.
I did a little bit of fishing inside my self-hosted runner, and it turns out there's a _work/ directory inside the github-runner directory where the runner clones repos. This folder gets mapped to /_work/ when running inside a Docker container.
Somewhere along the way, the repos must have broken and git lfs pull stopped updating the latest files.
The solution was to log into the runner and wipe out the repos from under _work/.
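If wiping the runner's work directory isn't convenient, a possible workaround (not part of the original fix, just a hedged suggestion) is to pull the LFS objects inside the submodules explicitly after the checkout step above:

- name: Pull LFS files in submodules
  run: git submodule foreach --recursive git lfs pull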
My workflow has automatic packaging and upload steps that package build artifacts and upload them to the workflow page. I also manually create releases.
I would like to do the following: when I push the tag that was created for a given release, upload the zipped artifact file to that release so users can download the artifact. Any tips on how to do this?
Here is my build yaml file.
Thanks!
Turns out it's dead simple to do this:
- name: upload binaries to release
  uses: softprops/action-gh-release@v1
  if: ${{ startsWith(github.ref, 'refs/tags/') }}
  with:
    files: build/FOO.zip
It seems the action mentioned (softprops/action-gh-release@v1) also creates a release. But there is a much simpler way to upload an artifact to a release without introducing an action: the gh CLI, which is available by default on GitHub-hosted runners, can upload the artifact for you.
assets:
  name: upload assets
  runs-on: ubuntu-latest
  permissions:
    contents: write # release changes require contents write
  steps:
    - uses: actions/checkout@v3
    - name: Upload Release Asset
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: gh release upload <release_tag> <a_file>
Check out the full release example here.
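If the workflow is triggered by a tag push, the tag name is available as $GITHUB_REF_NAME, so the placeholders can be filled in roughly like this (the zip path below is just an example, reusing the one from the answer above):

- name: Upload Release Asset
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: gh release upload "$GITHUB_REF_NAME" build/FOO.zip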
I need to do some directory grooming before my app is ready to be tested or deployed. I would like to utilize a Makefile target which calls a shell script in the repo to make this CI/CD-agnostic. One can call this target with make prepare_directory.
The CI platform I am using is GitHub Actions. Here are the relevant parts of the workflow, which is run on new pull requests:
name: PR Tests
env:
  GIT_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 1
      - name: Prep directory
        run: make prepare_directory
Here is the relevant part of the Makefile (which works exactly as expected locally):
...
prepare_directory:
	./scripts/prepare_directory.sh

clean:
	@rm -Rf ./$(BUILDPREFIX)

.PHONY: all clean docker lint prep_avro $(dockerbuilds)
Here is the relevant part of the ./scripts/prepare-directory.sh script:
#!/bin/bash -e
# ...
# clone repo using https and GITHUB_TOKEN
git clone https://$GIT_TOKEN@github.com:USERNAME/REPO.git
When I try to clone using that URL from the shell script, the script fails (along with the GitHub workflow pipeline) with the following error: fatal: unable to access 'https://github.com:USERNAME/REPO.git/': URL using bad/illegal format or missing URL
Does anybody know what I'm doing wrong?
You can add this action after your checkout step so the workflow can access your private repo dependency.
Note: make sure to add the server's private key as a secret and the public key to your GitHub SSH keys, and replace your private repo URL from https + auth_token to SSH: ssh://git@github.com/your_group/your_project.git
Below is an example:
- uses: webfactory/ssh-agent@v0.4.1
  with:
    ssh-private-key: ${{ secrets.SSH_KEY }}
SSH_KEY is the private key secret which you created.
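Putting the pieces together, a sketch of how the SSH agent step and the SSH clone might sit in the same job (the repo URL is the placeholder from the note above):

- uses: webfactory/ssh-agent@v0.4.1
  with:
    ssh-private-key: ${{ secrets.SSH_KEY }}
- name: Clone private dependency over SSH
  run: git clone ssh://git@github.com/your_group/your_project.git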
I've created a GitHub App and GitHub Action to work around these limitations: GitHub Actions Access Manager. Feel free to drop some feedback.
Workflow
1. The GitHub Action requests an access token for a Target Repository from the App Server, authorized by the GitHub Actions ID token (a JWT signed by GitHub).
2. The App Server requests a GitHub App Installation Token to read the .github/access.yaml file in the Target Repository.
3. The App Server reads the .github/access.yaml file from the Target Repository and determines which permissions should be granted to the Requesting Repository, authorized by the GitHub App Installation Token from step 2.
4. The App Server requests a GitHub App Installation Token with the granted permissions for the Source Repository and sends it back in the response to the GitHub Action from step 1.
5. The GitHub Action sets the token as the environment variable $GITHUB_ACCESS_TOKEN and as the step output value ${{ steps.github-actions-access.outputs.token }}.
6. Further steps can then use this token to access resources of the Target Repository.
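A hypothetical usage sketch (the action reference below is a placeholder; only the step id and output name come from the description above, and the x-access-token username is the usual way to use a GitHub App installation token over HTTPS):

- id: github-actions-access
  uses: <owner>/<github-actions-access-manager>@v1   # placeholder, see the project for the real reference
- name: Clone target repository with the granted token
  run: git clone https://x-access-token:${{ steps.github-actions-access.outputs.token }}@github.com/<target-owner>/<target-repo>.git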
The URL in your prepare-directory.sh script's git clone call has a typo in it:
# clone repo using https and GITHUB_TOKEN
git clone https://$GIT_TOKEN@github.com:USERNAME/REPO.git
The URL there is halfway between an HTTPS git URL and an SSH one. The : after github.com should be a /. Assuming your $GIT_TOKEN contains just the token and nothing else, you'd also need a : before the @:
git clone https://$GIT_TOKEN:@github.com/USERNAME/REPO.git
If your $GIT_TOKEN contained the complete auth string, which for a GitHub App installation access token would be something like "x-access-token:ghs_...", then you wouldn't need the : following $GIT_TOKEN; you'd just have $GIT_TOKEN@github.com.
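Spelled out, the complete-auth-string variant from the last paragraph would look like this (the ghs_... token being whatever your app installation issued):

# $GIT_TOKEN contains "x-access-token:ghs_..."
git clone https://$GIT_TOKEN@github.com/USERNAME/REPO.git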
I use GitHub Pages and Jekyll to generate a website, and a Python script to generate the images and data files (JSON) used by Jekyll. As I need to update these files daily, I commit and push dozens of images to update the website, which is not really convenient.
I also use GitHub Actions to generate and store these files as artifacts on GitHub:
- name: Main script
  run: |
    python generate_images.py --reload # saves in folder saved_images
    # I manually commit and push these images in jekyll/assets/img to update the site
- name: Upload images artifacts
  uses: actions/upload-artifact@v1
  with:
    name: saved_images
    path: saved_images
I would find it better to tell Jekyll to use the artifacts instead of the committed files, so that I can update the site by just re-launching the GitHub Action (hopefully without an extra commit or branch change). Actually that's what I've seen on GitLab on another project:
pages:
  stage: Web_page
  before_script:
    - python generate_images.py --reload
    - cp -r saved_images/*.png jekyll/assets/img
    - cd jekyll
    - bundle install --path vendor
  script:
    - bundle exec jekyll build -d ../public
  ...
So I wonder: is it possible to use artifacts as Jekyll assets and data files in GitHub Pages?
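For comparison, the GitLab job above translated into a GitHub Actions job would look roughly like this sketch (deployment of the built ./public folder is left out, since that is exactly the open question):

jobs:
  pages:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Generate images
        run: |
          python generate_images.py --reload
          cp -r saved_images/*.png jekyll/assets/img
      - name: Build site
        run: |
          cd jekyll
          bundle install --path vendor
          bundle exec jekyll build -d ../public
      # a Pages deployment step for ./public would go here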
My ultimate goal is to be able to schedule posts on my Jekyll blog. I am using Travis-CI to deploy the contents of /_site/ to an S3 bucket whenever I commit to my master branch on GitHub.
The Travis-CI flow works as expected except that new posts are not built and added to the /_site/ directory unless I build the site locally and push the new /_site/ folder directly to master. The posts are present in /_posts/ but do not get built and added to /_site/ automatically as they should when the site is rebuilt daily.
My travis.yml file is below.
language: ruby
rvm:
  - 2.3.3
# before_script:
#   - chmod +x ./script/cibuild # or do this locally and commit

# Assume bundler is being used, therefore
# the `install` step will run `bundle install` by default.
install: gem install jekyll html-proofer jekyll-twitter-plugin
script: jekyll build && htmlproofer ./_site

# branch whitelist, only for GitHub Pages
branches:
  only:
    - master

env:
  global:
    - NOKOGIRI_USE_SYSTEM_LIBRARIES=true # speeds up installation of html-proofer

exclude: [vendor]

sudo: false # route your build to the container-based infrastructure for a faster build

deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  bucket: $S3_BUCKET
  local_dir: _site
I figured this out: the Travis-CI deploy gem doesn't include a build step. It just pushes the contents of the repo to S3. I updated my build script to push as part of the build and validation step.
You must also set the skip_cleanup option to true in the deploy directive.
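For example, based on the deploy block above (skip_cleanup keeps Travis from resetting the working directory before deployment, so the _site folder produced by the build step survives):

deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  bucket: $S3_BUCKET
  local_dir: _site
  skip_cleanup: true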