Background
I have a number of different repos in my monorepo project.
For my api and common-services repos, I want to run a test command
For my frontend repo, I want to run a test2 command
Current
Currently, when running the test command for the frontend repo, the command variable is provided to the job as an array.
Expected
I expect the command variable to always be a string
Note: I have simplified my test workflow to represent the problem. I do not want to manually specify every test configuration as an include.
Oops, I was confusing the array input of the original matrix with the include configuration ...
wrong
matrix:
  repo: [api, common-services]
  command: [test]
  include:
    - repo: frontend
      command: [test2]
    - repo: common-lib
      command: [test2]
correct
matrix:
  repo: [api, common-services]
  command: [test]
  include:
    - repo: frontend
      command: test2 # DO NOT SPECIFY AS ARRAY HERE
    - repo: common-lib
      command: test2 # DO NOT SPECIFY AS ARRAY HERE
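For completeness, here is a minimal sketch of how the corrected matrix plugs into a job; the checkout step, the npm script names, and the working-directory layout are assumptions and not part of the original workflow:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: [api, common-services]
        command: [test]
        include:
          - repo: frontend
            command: test2
          - repo: common-lib
            command: test2
    steps:
      - uses: actions/checkout@v3
      # matrix.command is now always a plain string, so it can be
      # interpolated directly into the run line.
      - run: npm run ${{ matrix.command }}
        working-directory: ${{ matrix.repo }}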
Related
I have two steps in GitHub Actions:
The first uploads a zipped artifact:
- name: Upload artifact
  uses: actions/upload-artifact@master
  with:
    name: artifacts
    path: target/*.jar
The second uses a custom java command to read the uploaded artifact:
- name: Read artifact
  run: java -jar pipeline-scan.jar -- "artifacts.zip"
I've redacted the java command, but it's supposed to scan my zip file using Veracode. GitHub Actions returns the following error:
java -jar pipeline-scan.jar: error: argument -f/--file: Insufficient
permissions to read file: 'artifacts.zip'
I've tried changing the permissions of the GITHUB_TOKEN, but apparently you can only pass in the $GITHUB_TOKEN secret with a "uses" parameter and not a "run" parameter. I've also made sure that my default workflow permissions are set to "read and write permissions."
Does anyone know how to resolve this permissions issue?
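For reference, a minimal sketch of how the upload and the scan are usually wired together when they run in separate jobs; the job names, download path, and re-zip step are assumptions (actions/upload-artifact does not by itself leave an artifacts.zip in the workspace of later jobs):

scan:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Download artifact
      uses: actions/download-artifact@v3
      with:
        name: artifacts
        path: artifacts
    # The download is extracted, so the files have to be re-zipped before
    # being handed to a tool that expects a single archive (assumes
    # pipeline-scan.jar is available in this job's workspace).
    - name: Read artifact
      run: |
        zip -r artifacts.zip artifacts
        java -jar pipeline-scan.jar -- "artifacts.zip"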
I have a bash script that sets a series of environment variables.
My action includes the following two steps:
- name: Set env variables
  run: source ./setvars.sh
- name: Dump env variables
  run: env
I notice setvars.sh runs successfully, but all of the variables defined inside it are missing in the following steps.
How can I use a bash .sh script to add environment variables to the context of the workflow?
I don't see environment variables defined by sourcing a file in a GitHub Actions workflow.
I only see them defined as a map (key-value) at the job or workflow level (supported since October 2019).
See if you can cat your file and append its content to $GITHUB_ENV.
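A minimal sketch of that approach, assuming setvars.sh contains plain KEY=value lines (export statements would have to be stripped first):

- name: Set env variables
  # Every KEY=value line appended to $GITHUB_ENV becomes an environment
  # variable for all subsequent steps in this job.
  run: cat ./setvars.sh >> "$GITHUB_ENV"
- name: Dump env variables
  run: env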
I am setting a couple of env variables at build time when deploying on Vercel using the "amondnet/vercel-action@v19.0.1+3" GitHub Action.
Everything works fine when I set just one variable, but when I set multiple variables as described in Vercel's documentation here: https://vercel.com/docs/cli#commands/overview/unique-options/build-env, I get the following error when running the action:
Error! The specified file or directory "PR_NUMBER=423]" does not exist.
The command the action is trying to run is as follows:
/usr/local/bin/npx vercel --build-env [NODE_ENV=pr PR_NUMBER=423] -t *** -m
It should be:
/usr/local/bin/npx vercel --build-env NODE_ENV=pr --build-env PR_NUMBER=423 -b KEY=value
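In other words, each variable needs its own --build-env (or -b) flag rather than a bracketed list. In the workflow that could look roughly like the sketch below; the vercel-args input name, the secret name, and the PR-number expression are assumptions based on the action's documented inputs:

- uses: amondnet/vercel-action@v19.0.1+3
  with:
    vercel-token: ${{ secrets.VERCEL_TOKEN }}
    # One --build-env flag per variable; no brackets.
    vercel-args: '--build-env NODE_ENV=pr --build-env PR_NUMBER=${{ github.event.number }}'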
I am trying to upload JUnit reports on GitLab CI (these are test results from my Cypress automation framework). I am using junit-merge. Due to the architecture of Cypress (each test runs in isolation), an extra 'merge' is required to get the reports into one file. Locally everything works fine:
JUnit generates a single report for each test, with a hashcode
After all reports have been generated I run a script (shown below) that merges all the reports into one single .xml file and outputs it in the 'results' folder.
I have tried to debug it locally, but locally everything just works fine. Possibilities I could think of: either the merge script is not handled properly or GitLab does not accept the relative path to the .xml file.
{
  "baseUrl": "https://www-acc.anwb.nl/",
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "results/resultsreport.[hash].xml",
    "testsuiteTitle": "true"
  }
}
This is the cypress.json file, where I configured the JUnit reporter and let it output the individual test result files in the results folder.
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - npm run cy:run:staging
    - npx junit-merge -d results -o results/results.xml
  artifacts:
    paths:
      - results/results.xml
    reports:
      junit: results/results.xml
    expire_in: 1 week
This is part of the yml file. The npx junit-merge command makes sure all .xml files in the results folder are merged into results.xml.
Again, locally everything works as expected. The error I get from GitLab CI is:
Uploading artifacts...
WARNING: results/results.xml: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
Artifacts can only exist in directories relative to the build directory, and specifying paths which don't comply with this rule triggers an unintuitive and illogical error message (an enhancement is discussed at gitlab-ce#15530). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage's job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc/ci/caching
which means the following configuration should fix the problem:
artifacts:
  reports:
    junit: <testing-repo>/results/results.xml
  expire_in: 1 week
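If it isn't obvious which prefix to use, a quick way to find out is to print where the merged report actually ends up inside the job; this is purely a debugging aid (the echo/find lines are additions, to be removed again afterwards):

script:
  - npm run cy:run:staging
  - npx junit-merge -d results -o results/results.xml
  # Debugging aid: artifact paths are resolved relative to $CI_PROJECT_DIR
  # (the build directory), so show where results.xml really is.
  - echo "$CI_PROJECT_DIR"
  - find "$CI_PROJECT_DIR" -name results.xml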
I have an OpenShift template in a template.yaml file which includes the following objects: deployment config, pod, service and route. I am using the following command to execute the yaml:
oc process -f template.yml | oc apply -f -
I want to perform the following validations before I actually apply/execute the yaml:
YAML syntax validation - if there are any issues with the YAML syntax.
OpenShift schema validation - to check if the object definition abides by the OpenShift object schema.
It seems that the command 'oc process' is doing the following checks:
Basic YAML syntax validation
Template object schema validation
How to perform schema validation of other objects (e.g. deployment-config, service, pod, etc.) that are defined in template.yaml?
This is now possible with the OpenShift client (and on Kubernetes in general), e.g.
$ oc login
Username: john.doe
Password:
Login successful.
$ oc apply -f openshift/template-app.yaml --dry-run
template "foobar-app" created (dry run)
It's also possible to process the template locally, so you can avoid sending it to the server first, e.g.
$ oc process -f openshift/template-app.yaml --local -p APP_NAME=foo | oc apply --dry-run --validate -f -
deploymentconfig "foo" created (dry run)
service "foo" created (dry run)
Also note the --validate option I'm using for schema validation. Unfortunately, you still have to log in for the apply command to work (there's no --local option for apply).
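Applied to the template from the question, both checks can be chained in front of the real apply, e.g. (a sketch; any template parameters would have to be passed with -p as above):

# Validate the template and the objects it expands to without changing
# anything on the cluster (client-side processing plus a server-side dry run).
oc process -f template.yml --local | oc apply --dry-run --validate -f -

# If that succeeds, apply for real.
oc process -f template.yml | oc apply -f -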
Oddly, this feature is not described in the CLI documentation; however, it's mentioned on the help screen:
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
oc apply -f FILENAME [options]
...
Options:
...
--dry-run=false: If true, only print the object that would be sent, without sending it.
...
--validate=false: If true, use a schema to validate the input before sending it
Use "oc <command> --help" for more information about a given command.
Use "oc options" for a list of global command-line options (applies to all commands).
I'm having the same issue with cryptic errors coming back from the oc process command.
However, if you go into the OpenShift Console, use the "Add to Project" link at the top of the console, and choose the "Import YAML / JSON" option, importing your YAML/JSON that way gives slightly more useful errors.