Problem using rsync and relative paths in GitHub Actions

I have a GitHub Actions workflow job that seems like a simple thing, but one that I'm really struggling to implement.
Essentially I want to use rsync to copy several files from the checked-out repository to another existing folder within the same checkout. Here is a snippet of my action:
steps:
  - name: Checkout
    uses: actions/checkout@v3
    with:
      token: ${{ secrets.token }}
      fetch-depth: '0'
  - name: Rsync Test Files
    working-directory: ./tests/amazon-linux-2/integration-test
    run: |
      rsync -av ../../../ . --exclude tests --exclude .github --exclude .releaserc --exclude README.md --exclude .git
There are several files and folders in the root of the checkout repo that I want to copy to ./tests/amazon-linux-2/integration-test. For info, tests is a folder in the project root.
The error I'm getting from the Rsync Test Files step is:
OCI runtime exec failed: exec failed: unable to start container process: chdir to cwd ("/__w/<repo name>/<repo name>/./tests/amazon-linux-2/integration-test") set in config.json failed: no such file or directory: unknown
Two things confuse me that may or may not be related to the problem: there is an additional . between the repo name and tests, and the repo name appears twice in the path.
I have tried setting the working directory of the Rsync Test Files step in multiple places and tweaking the rsync command accordingly, but nothing works; I get variations of the same error.
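One workaround sketch, assuming the chdir fails because the directory does not exist at the moment the step starts (in container jobs the runner changes into working-directory before the shell runs): create the directory first, or drop working-directory entirely and give rsync the destination from the repo root:

  - name: Rsync Test Files
    run: |
      # running from the repo root avoids the chdir; --exclude tests also stops
      # rsync from copying the destination folder into itself
      mkdir -p tests/amazon-linux-2/integration-test
      rsync -av ./ tests/amazon-linux-2/integration-test/ \
        --exclude tests --exclude .github --exclude .releaserc \
        --exclude README.md --exclude .git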

Related

Does the Github Action checkout@v2 check out the .eslintrc file?

I noticed, when trying to run my GitHub workflow to deploy my dockerized Vue app to Elastic Beanstalk, that I kept getting an error in my logs saying no eslint config was found, since I had just a handful of ignore lines.
So when I added a step in the workflow to ls the files being checked out, I saw it did not grab any of the files named .*.
I would assume it should at least be getting the .eslintrc.* file, since the workflow is supposed to run npm install and npm run lint, and lint would look at the checked-out config file to determine whether the rules pass.
Here is my workflow up to this point:
name: Deploy to Staging Environment
on: [workflow_dispatch]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Latest Repo
        uses: actions/checkout@v2
      - name: List Checked Out files
        run: ls
        # DOES NOT SHOW ANY .* files checked out
Is anyone else noticing the same? What should I try?
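For what it's worth, actions/checkout does fetch dotfiles; plain ls simply hides entries whose names start with a dot. A quick way to confirm (only the flag differs from the step above):

      - name: List Checked Out files
        run: ls -a # -a also lists hidden entries such as .eslintrc.* and .github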

Github Action fails on Windows due to missing library

I've just discovered Github workflows and I've been trying to create two for a private C++ repository of mine, which contains a small C++ library.
I've succeeded in creating one that runs on Ubuntu (i.e., it runs and completes successfully), but the other, which runs on Windows (an almost exact copy of the Ubuntu one), fails due to a missing C library.
This is the .yml file of the workflow that runs on Windows:
name: CMake
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
env:
  # the directory of the library's source code (and which contains the CMakeLists.txt)
  LAL_DIR: D:\a\linear-arrangement-library\linear-arrangement-library/lal
  # directories of the different builds
  REL_DIR: ${{github.workspace}}/windows-build-release
  DEB_DIR: ${{github.workspace}}/windows-build-debug
jobs:
  windows_build:
    runs-on: windows-2019
    steps:
      - uses: actions/checkout@v2
      - name: Configure CMake on Windows
        run: cmake -G "MSYS Makefiles" -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ ${{env.LAL_DIR}} -B ${{env.REL_DIR}} -DCMAKE_BUILD_TYPE=Release ;
          cmake -G "MSYS Makefiles" -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ ${{env.LAL_DIR}} -B ${{env.DEB_DIR}} -DCMAKE_BUILD_TYPE=Debug
      - name: Build on Windows
        run: cmake --build ${{env.REL_DIR}} --config Release -j4 ;
          cmake --build ${{env.DEB_DIR}} --config Debug -j4
I'm new to this, so I don't know whether I've applied the "best practices" (if there are any).
The error I get is the following:
In file included from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_rooted_trees.hpp:50,
from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_free_trees.hpp:50,
from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_free_trees.cpp:42:
D:/a/linear-arrangement-library/linear-arrangement-library/lal/numeric/integer.hpp:45:10: fatal error: gmp.h: No such file or directory
#include <gmp.h>
^~~~~~~
compilation terminated.
The error is telling me that g++ can't find the file gmp.h. The workflow running on Ubuntu, however, does not fail.
I guess that the system executing Ubuntu's workflow simply has the gmp library installed, whereas the one executing the Windows workflow doesn't. How can I resolve this (if it is actually possible, that is)?
Thank you very much.
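Since the build already uses "MSYS Makefiles", one possible fix (a sketch, assuming the MSYS2/MinGW toolchain is acceptable; the package names below are MSYS2's, not taken from the question) is to install GMP explicitly before configuring, for example with the msys2/setup-msys2 action:

      - uses: msys2/setup-msys2@v2
        with:
          msystem: MINGW64
          # mingw-w64-x86_64-gmp provides gmp.h and the GMP libraries
          install: mingw-w64-x86_64-gcc mingw-w64-x86_64-cmake mingw-w64-x86_64-gmp make

The Ubuntu runner image presumably ships GMP's development headers already, which would explain why that workflow passes without an explicit install step.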

Use artifacts as Jekyll assets in Github pages

I use GitHub Pages and Jekyll to generate a website, and a Python script to generate the images and data files (JSON) used by Jekyll. As I need to update these files daily, I commit and push dozens of images to update the website, which is not really convenient.
I also use Github Actions to generate and store these files as artifacts on Github:
- name: Main script
  run: |
    python generate_images.py --reload # saves in folder saved_images
    # I manually commit and push these images in jekyll/assets/img to update the site
- name: Upload images artifacts
  uses: actions/upload-artifact@v1
  with:
    name: saved_images
    path: saved_images
I would find it better to tell Jekyll to use the artifacts instead of the committed files, so that I can update the site by just re-launching the GitHub action (hopefully without an extra commit or branch change). That is essentially what I've seen on GitLab in another project:
pages:
  stage: Web_page
  before_script:
    - python generate_images.py --reload
    - cp -r saved_images/*.png jekyll/assets/img
    - cd jekyll
    - bundle install --path vendor
  script:
    - bundle exec jekyll build -d ../public
  ...
So I wonder if it is possible to use artifacts as Jekyll assets and data files in Github pages?
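Artifacts are only downloadable archives, so Jekyll cannot reference them directly; but the GitLab pattern above can be reproduced inside a single Actions job that regenerates the images, builds the site, and publishes it. A sketch (the step names and the third-party deploy action are assumptions, not taken from the question):

      - name: Generate images and build site
        run: |
          python generate_images.py --reload
          cp -r saved_images/*.png jekyll/assets/img
          cd jekyll
          bundle install --path vendor
          bundle exec jekyll build -d ../public
      - name: Publish to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3 # one of several deploy actions
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public

Re-launching the workflow (via workflow_dispatch or a daily schedule trigger) would then update the site without committing the generated images.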

Clone a Mercurial repository into a non-empty directory

tl;dr:
hg clone ssh://hg@bitbucket.org/team/repo ~/prod/ fails with "destination is not empty" if ~/prod/ is not empty. Can I force cloning?
I am trying to write my first Ansible playbook that should deploy my code from a Bitbucket Mercurial repository to my server. There is a deployment path, ~/prod, which contains all code files as well as the data in ~/prod/media and ~/prod/db.db. To make sure the playbook works even if the ~/prod directory is empty or doesn't exist, this is what I have so far:
- name: create directory
  file: path=/home/user/prod state=directory
- name: clone repo
  hg:
    repo: ssh://hg@bitbucket.org/team/repo
    dest: /home/user/prod
    force: yes
In my understanding, it ensures that the deployment directory exists and then clones the repo there. It works beautifully if the directory doesn't exist or is empty. However, as soon as I've cloned the repo once, this playbook fails with destination is not empty.
I could move media and db.db out first, then delete all other files, clone, and then move the data back, but that looks cumbersome.
I simply want to force the clone, but I cannot find a way to do it. Presumably this is so wrong that Mercurial won't allow it. Why, and what's a better way to go?
Though I haven't read it stated anywhere, it looks like force-cloning is impossible. The two alternatives, as explained in another thread on the same topic, are:
- clone to another directory and then move the .hg folder into the target directory; or
- hg init /home/user/prod and then hg pull -R /home/user/prod ssh://hg@bitbucket.org/team/repo followed by hg update -R /home/user/prod -C.
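Spelled out as plain shell commands, the second alternative is (a sketch using the same path and URL as above; -R points hg at the repository directory):

hg init /home/user/prod
hg pull -R /home/user/prod ssh://hg@bitbucket.org/team/repo
hg update -R /home/user/prod -C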
With the second one, it is possible to optimise the Ansible tasks so that the repository is initialised only if the target directory doesn't contain .hg (the hg module then takes care of the pull and update):

- name: recreate repo
  command: hg init /home/user/prod creates=/home/user/prod/.hg # <-- only run if .hg does not exist
- name: update files
  hg:
    repo: ssh://hg@bitbucket.org/team/repo
    dest: /home/user/prod
    clone: no
    update: yes # optional, for readability
    force: yes
  notify: "restart web services"

Travis-CI: S3 deploy script is not adding new files

My ultimate goal is to be able to schedule posts on my Jekyll blog. I am using Travis-CI to deploy the contents of /_site/ to an S3 bucket whenever I commit to my master branch in Github.
The Travis-CI flow works as expected, except that new pages are not built and added to the /_site/ directory unless I build my site locally and push the new /_site/ folder directly to master. The posts are present in /_posts/ but do not get built and added to /_site/ automatically, as they should be when the site is rebuilt daily.
My .travis.yml file is below.
language: ruby
rvm:
  - 2.3.3
# before_script:
#   - chmod +x ./script/cibuild # or do this locally and commit
# Assume bundler is being used, therefore
# the `install` step will run `bundle install` by default.
install: gem install jekyll html-proofer jekyll-twitter-plugin
script: jekyll build && htmlproofer ./_site
# branch whitelist, only for GitHub Pages
branches:
  only:
    - master
env:
  global:
    - NOKOGIRI_USE_SYSTEM_LIBRARIES=true # speeds up installation of html-proofer
exclude: [vendor]
sudo: false # route your build to the container-based infrastructure for a faster build
deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  bucket: $S3_BUCKET
  local_dir: _site
I figured this out: the Travis-CI deploy gem doesn't include a build step; it just pushes the contents of the repo to S3. I updated my build script to push as part of the build-and-validation step.
You must also set the option skip_cleanup: true in the deploy section; otherwise Travis resets the working copy before deploying, discarding the freshly built _site.
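A sketch of the corrected deploy section (same keys as above, with only skip_cleanup added):

deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  bucket: $S3_BUCKET
  local_dir: _site
  skip_cleanup: true # keep files generated during the build instead of resetting the checkout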