what is the difference between
steps:
  - name: npm install, build, and test
    run: |
      npm ci
      npm run build --if-present
      npm test
and
steps:
  - name: npm install, build, and test
  - run: npm ci
  - run: npm run build --if-present
  - run: npm test
in GitHub Actions?
I tried to read the documentation on steps, but it doesn't mention anything about this.
The difference is that the first example is executed as a single script containing three commands, while the second example is executed as three separate one-line scripts. (Side note: the second example is invalid as written, since it has a step with a name but no run; I'll ignore that line.)
Let's assume for a second that npm does not produce any output when running. In the first example, if one of the commands fails, it might be hard to identify which one, because you only have a single step marked as failed. In the second example, you'll know exactly where the problem is, as each command is its own step.
Let's assume for a second that npm needs to be run in a specific subdirectory. We need to remember that every step starts in the workspace directory (the repo's root), so we have to enter the directory where our stuff is first.
- run: |
    cd my/directory
    npm ci
    npm run build --if-present
    npm test
- run: npm ci
  working-directory: my/directory
- run: npm run build --if-present
  working-directory: my/directory
- run: npm test
  working-directory: my/directory
OR
- run: cd my/directory && npm ci
- run: cd my/directory && npm run build --if-present
- run: cd my/directory && npm test
Let's assume for a second that npm test needs to run only on the push event, but the workflow is configured with on: [push, pull_request].
- run: |
    npm ci
    npm run build --if-present
    if [ "${{ github.event_name }}" == "push" ]; then
      npm test
    fi
  shell: bash
- run: npm ci
- run: npm run build --if-present
- run: npm test
  if: github.event_name == 'push'
Under the Actions tab, when processing a pull_request event, the second example will be displayed as...
- Run npm ci
- Run npm run build...
- Run npm test <-- this one will be grayed out
...and you only need a quick look to see that the npm test step was skipped. In the first example you would have to expand the step and inspect the log to notice any difference.
And so on, and so on; there are dozens of scenarios where it's easier/better to use an all-in-one step, and just as many where command-by-command steps are the way to go. It's up to you to decide which one fits you best.
At the end of the day, both examples do exactly the same thing, after all. But if anything goes wrong along the way, picking one way to run commands over the other (which also changes how they're displayed) can make a difference in how long it takes to prepare a fix.
run: | is executed as a single multi-line script containing all the commands you listed (your first example).
Separate run: entries execute each command as its own one-line script, one step per command (your second example).
Related
I want to use mold instead of lld in my GitHub Actions CI/CD, and I don't know how to make it work: my tests are failing because they cannot locate the mold binary.
jobs:
  build-mold:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: mold
        run: |
          sudo apt update
          sudo apt-get install -y build-essential git clang cmake libstdc++-10-dev libssl-dev libxxhash-dev zlib1g-dev pkg-config
          git clone https://github.com/rui314/mold.git
          cd mold
          git checkout v1.1.1
          make -j$(nproc) CXX=clang++
          sudo make install
          export PATH="$PATH:/usr/local/bin/:/usr/lib/ccache:/usr/local/opt/ccache/libexec"
So far I've tried this, before adding my test section. The installation goes through fine, but my tests still fail because the linker isn't found on the PATH. Also, is this the right way to do it? Running apt update and then building and installing mold takes quite some time before the task finishes.
Instead of building mold yourself, you may want to wget a binary distribution from the release page (see the bottom of https://github.com/rui314/mold/releases/tag/v1.1.1) and extract it into /usr.
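As a rough sketch of such a step (the exact asset name and archive layout are assumptions; check the release page for the real file names before using this):
- name: Install mold from a release tarball
  run: |
    # Assumed asset name for v1.1.1; verify it against the release page.
    wget https://github.com/rui314/mold/releases/download/v1.1.1/mold-1.1.1-x86_64-linux.tar.gz
    # Assumes the tarball contains bin/, lib/, etc. under one top-level directory,
    # so stripping one path component drops mold straight into /usr/bin.
    sudo tar -C /usr --strip-components=1 -xzf mold-1.1.1-x86_64-linux.tar.gz
    mold --version
This also avoids the long apt update and compile time, since nothing is built on the runner.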
I noticed, when trying to run my GitHub workflow to deploy my dockerized Vue app to Elastic Beanstalk, that I kept getting an error in my logs saying no ESLint config was found (mine is just a handful of ignore lines).
So when I added a step to the workflow to ls the files being checked out, I saw that none of the files named .* were listed.
I would assume it should at least be getting the .eslintrc.* file, since when the workflow runs npm install and npm run lint it needs to look at the checked-out config file to determine whether the rules pass.
Here is my workflow up to this point:
name: Deploy to Staging Environment
on: [workflow_dispatch]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Latest Repo
        uses: actions/checkout@v2
      - name: List Checked Out files
        run: ls
        # DOES NOT SHOW ANY .* files checked out
Is anyone else noticing the same? What should I try?
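For what it's worth, plain ls hides dotfiles by default, so one quick sanity check (purely a sketch of a listing step, not a fix) is to list with -a and see whether .eslintrc.* is actually present in the checkout:
- name: List all checked out files, including dotfiles
  run: ls -la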
I've just discovered GitHub workflows and I've been trying to create two for a private C++ repository of mine, which contains a small C++ library.
I've succeeded in creating one that runs on Ubuntu (i.e., it runs and completes successfully), but the other, which runs on Windows (almost an exact copy of the Ubuntu one), fails due to a missing C library.
This is the .yml file of the workflow that runs on Windows:
name: CMake
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
env:
  # the directory of the library's source code (which contains the CMakeLists.txt)
  LAL_DIR: D:\a\linear-arrangement-library\linear-arrangement-library/lal
  # directories of the different builds
  REL_DIR: ${{github.workspace}}/windows-build-release
  DEB_DIR: ${{github.workspace}}/windows-build-debug
jobs:
  windows_build:
    runs-on: windows-2019
    steps:
      - uses: actions/checkout@v2
      - name: Configure CMake on Windows
        run: cmake -G "MSYS Makefiles" -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ ${{env.LAL_DIR}} -B ${{env.REL_DIR}} -DCMAKE_BUILD_TYPE=Release ;
          cmake -G "MSYS Makefiles" -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ ${{env.LAL_DIR}} -B ${{env.DEB_DIR}} -DCMAKE_BUILD_TYPE=Debug
      - name: Build on Windows
        run: cmake --build ${{env.REL_DIR}} --config Release -j4 ;
          cmake --build ${{env.DEB_DIR}} --config Debug -j4
I'm new to this, so I don't know if I applied the "best practices" (if there are any).
The error I get is the following:
In file included from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_rooted_trees.hpp:50,
from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_free_trees.hpp:50,
from D:/a/linear-arrangement-library/linear-arrangement-library/lal/generate/rand_ulab_free_trees.cpp:42:
D:/a/linear-arrangement-library/linear-arrangement-library/lal/numeric/integer.hpp:45:10: fatal error: gmp.h: No such file or directory
#include <gmp.h>
^~~~~~~
compilation terminated.
The error is telling me that g++ can't find the file gmp.h. The workflow running on Ubuntu, however, does not fail.
I guess that the system executing the Ubuntu workflow simply has the GMP library installed, whereas the one executing the Windows workflow doesn't. How can I resolve this (if it is actually possible, that is)?
Thank you very much.
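One possible direction, offered only as a sketch: the hosted Windows images ship with an MSYS2 installation, so GMP could be installed from its package repository before the configure step. The MSYS2 path and the package name below are assumptions to verify against the runner image documentation:
- name: Install GMP (sketch, unverified)
  # Assumes MSYS2 is available at C:\msys64 on windows-2019 and that
  # mingw-w64-x86_64-gmp provides gmp.h for the MinGW toolchain.
  run: C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-x86_64-gmp
Whether the installed headers then land on the compiler's search path depends on which gcc/g++ the MSYS Makefiles generator picks up, so this may need further adjustment.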
I'm just starting with GitHub Actions and I'm trying to configure jobs correctly. Right now I have a build job which sets up Python and installs the dependencies, and I also have a job with behave tests which needs those dependencies to run.
When I have the test and build in one job, everything works fine. But I want to have build and test in separate jobs, and when I run them in that configuration I get the error behave: command not found. I install Behave via the requirements.txt file. What am I doing wrong? Is this configuration generally possible?
name: CI test
on:
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
  cc_test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Run cc test
        run: |
          behave --no-capture --no-skipped -t guest -t cc -D driver=BROWSERSTACK features
As riQQ and the documentation say:
A job is a set of steps that execute on the same runner. By default, a workflow with multiple jobs will run those jobs in parallel. You can also configure a workflow to run jobs sequentially. For example, a workflow can have two sequential jobs that build and test code, where the test job is dependent on the status of the build job. If the build job fails, the test job will not run.
In your case it would be best to have a single job that builds and tests, doing both things in that one job. Putting tests in a separate job can be a good move, but it requires one of two things:
- prepare a testable package in the previous job and share it (this could still require installing some dependencies), or
- check out the code, install the dependencies, build the code and run the tests, which means repeating all the steps from the previous job (see the sketch after this list).
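A minimal sketch of the second option, reusing the steps already shown in the build job above (nothing here goes beyond the question's own steps):
cc_test:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
      uses: actions/setup-python@v2
      with:
        python-version: 3.8
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: Run cc test
      run: behave --no-capture --no-skipped -t guest -t cc -D driver=BROWSERSTACK features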
So, I have a library, haste-mapper (link to GitHub; I would like some opinions on it). It uses gulp, babel-core and a few other npm packages to build itself, so as to have valid JavaScript instead of Flow in the build/ directory. I added that as a postinstall hook script in package.json:
"postinstall": "gulp build"
It works, and the script starts running, but it does not find the required dependencies in the host package. I have gulp and babel-core as devDependencies and they don't seem to get installed. Adding them to dependencies seems semantically wrong. I tried adding them to peerDependencies, but instead of installing what's missing, npm just complains about it.
How should I go about this?
P.S. Here is the package.json
If you want to use something in a postinstall hook, it needs to be a dependency.
However, you're doing it wrong. You shouldn't be transpiling your code after the install. Instead, you should transpile your code before you publish the package.
To do that, you will need to rename your script to prepublish so that it is run when you run npm publish. List gulp, babel, etc. as devDependencies. Add an .npmignore file in the root of your project, containing:
/src/
The .npmignore file works just like a .gitignore. You don't want your src/ directory included in the published package, only build/. Make sure .npmignore is committed to git. If you don't have an .npmignore, npm will use the .gitignore file. This isn't what you want, since build/ is ignored for version control, but should be included in the npm package.
When you run npm publish, npm will run your prepublish hook before bundling your package for the registry. Then when someone npm installs your package, they will get the build/ folder, but not src/. Just what you want!
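To make that concrete, a minimal sketch of the relevant package.json fields under this approach (the "*" version ranges are placeholders, not recommendations):
{
  "scripts": {
    "prepublish": "gulp build"
  },
  "devDependencies": {
    "gulp": "*",
    "babel-core": "*"
  }
}
With this layout, npm publish runs gulp build first, and consumers installing the package never need gulp or babel-core themselves.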
I started to leave a comment on RyanZim's answer because his technique is correct. However, I wanted to give a slightly different approach. Our company maintains a lot of open source projects and this is how we would advise you.
Keep developing your project like you normally would. Your .gitignore file should be ignoring your dist directory (/build in your case).
When you are ready to deploy, you want to build your code, bump your version number inside package.json, tag the changes, and push the built code to both github and npm.
The main idea is that we want to keep a copy of our built code in github along with a "tag" for that version. This allows us to see exactly what was pushed to npm for any particular version. The built code is not part of the master branch but exists only under a tag (which is sort of like a branch). When a user reports a bug and he's using version x.x.x, you can checkout that exact version and start debugging. When you fix the bug, you release a new "patch" and your user will get the changes the next time he runs npm install or npm update.
We have created a set of npm scripts to do most of this for us. Here is what we use (this goes in your package.json):
"scripts": {
"build": "node build.js",
"preversion": "npm run build",
"version": "git commit -am \"Update dist for release\" && git checkout -b release && git add -f dist/",
"postversion": "git push --tags && git checkout master && git branch -D release && git push",
"release:pre": "npm version prerelease && npm publish",
"release:patch": "npm version patch && npm publish",
"release:minor": "npm version minor && npm publish",
"release:major": "npm version major && npm publish"
}
I know that may look confusing so let me explain. Whenever we are ready to release new code, we run one of the release: commands. For example, when we run npm run release:minor, here is the list of commands which are run in order. I have annotated it so you can see what happens:
node build.js ## run the build code - you will want to run gulp instead
npm version minor ## bumps the version number in package.json and creates a new git tag
git commit -am "Update dist for release" ## commit the package.json change to git (with new version number) - we will push it at the end
git checkout -b release ## create a temporary "release" branch
git add -f dist/ ## force add our dist/ directory - you will want to add your build/ directory instead
npm publish ## push the code to npm
git push --tags ## push the built code and tags to github
git checkout master ## go back to the master branch
git branch -D release ## delete the temporary "release" branch
git push ## push the updated package.json to github
If you have any questions, please ask. You might want to do things in a slightly different order, since your situation is a little different. This code works really well on dozens of projects; we release new code multiple times a day.