I have 2 JSON files: Parameters.json and updatedParam.json.
I want to write a YAML script that takes both JSON files and patches the change made in Parameters.json into updatedParam.json.
I am trying to trigger a pipeline whenever a change is made to the Parameters.json file.
Thanks in advance.
I want to write a YAML script that takes both JSON files and patches the change made in Parameters.json into updatedParam.json
Azure DevOps doesn't have a built-in task for this; if you need it, you'll have to write your own script.
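For reference, a minimal sketch of such a script step using jq; the file names come from the question, and treating "patch" as a recursive merge where values from Parameters.json win is an assumption about what the patch should do:

steps:
  - script: |
      # Recursively merge the two files; keys present in Parameters.json
      # overwrite the matching keys in updatedParam.json
      jq -s '.[0] * .[1]' updatedParam.json Parameters.json > merged.json
      mv merged.json updatedParam.json
    displayName: 'Patch updatedParam.json with changes from Parameters.json'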
And if you want your pipeline to be triggered only by changes to a specific file like Parameters.json, your pipeline should look like this:
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - test/Parameters.json

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo Hello, world!
    displayName: 'Run a one-line script'
  - script: |
      echo Add other tasks to build, test, and deploy your project.
      echo See https://aka.ms/yaml
    displayName: 'Run a multi-line script'
I have two steps in GitHub Actions:
The first uploads a zipped artifact:
- name: Upload artifact
  uses: actions/upload-artifact@master
  with:
    name: artifacts
    path: target/*.jar
The second uses a custom java command to read the uploaded artifact:
- name: Read artifact
  run: java -jar pipeline-scan.jar -- "artifacts.zip"
I've redacted the java command, but it's supposed to scan my zip file using Veracode. GitHub Actions returns the following error:
java -jar pipeline-scan.jar: error: argument -f/--file: Insufficient
permissions to read file: 'artifacts.zip'
I've tried changing the permissions of the GITHUB_TOKEN, but apparently you can only pass the $GITHUB_TOKEN secret to a step that uses "uses", not to a "run" step. I've also made sure that my default workflow permissions are set to "read and write permissions."
Does anyone know how to resolve this permissions issue?
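One direction worth checking, sketched below rather than a confirmed fix: the error is raised by pipeline-scan.jar when it tries to read artifacts.zip from the runner's filesystem, so GITHUB_TOKEN permissions aren't involved. If both steps run in the same job, the jar is still at target/; if the scan runs in a later job, the artifact has to be downloaded first. Note that actions/download-artifact extracts the artifact, so the scan would point at the unpacked jar (the jar name below is hypothetical):

- name: Download artifact
  uses: actions/download-artifact@v4
  with:
    name: artifacts
    path: artifacts

- name: Read artifact
  # scan the unpacked jar rather than a zip; "my-app.jar" is a placeholder
  run: java -jar pipeline-scan.jar --file artifacts/my-app.jar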
Background
I have a number of different repos in my monorepo project.
For my api and common-services repos, I want to run a test command
For my frontend repo, I want to run a test2 command
Current
Currently, when running the test command for the frontend repo, the command variable is provided to the job as an array.
Expected
I expect the command variable to always be a string.
Note: I have simplified my test workflow to represent the problem. I do not want to manually specify every test configuration as an include.
Oops, I was confusing the array inputs of the original matrix with the include configuration ...
Wrong:
matrix:
  repo: [api, common-services]
  command: [test]
  include:
    - repo: frontend
      command: [test2]
    - repo: common-lib
      command: [test2]
Correct:
matrix:
  repo: [api, common-services]
  command: [test]
  include:
    - repo: frontend
      command: test2 # DO NOT SPECIFY AS ARRAY HERE
    - repo: common-lib
      command: test2 # DO NOT SPECIFY AS ARRAY HERE
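For illustration, a minimal sketch of a job consuming this matrix; the repo names come from the question, but the checkout action version and the npm workspace command are assumptions about the project layout:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: [api, common-services]
        command: [test]
        include:
          - repo: frontend
            command: test2
          - repo: common-lib
            command: test2
    steps:
      - uses: actions/checkout@v4
      # matrix.command is now always a plain string, so it can be
      # interpolated directly into the shell command
      - run: npm run ${{ matrix.command }} --workspace=${{ matrix.repo }}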
I've set up a job that runs some PowerShell commands. One of them returns a JSON object.
However, when I open the job log I see only part of the object. How can I see the full object?
{#{productNo=1; onTarget=f944fb79-b39f-4936-b0b6-8eef3c802014; name=asdffgh-as…
Write the output to a file, then store the file as an artifact:
script:
  - your_command | Out-File -FilePath output.json
artifacts:
  paths:
    - output.json
See Using Out-File, and Job artifacts.
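If the output still shows the truncated #{…} form, it may be PowerShell's default formatting cutting off wide objects rather than the log itself; serializing the object to JSON before writing keeps the full structure. A minimal sketch of the same step, with your_command standing in for the real command:

script:
  # ConvertTo-Json serializes the whole object; -Depth controls how many
  # levels of nesting are expanded (the default is 2)
  - your_command | ConvertTo-Json -Depth 10 | Out-File -FilePath output.json
artifacts:
  paths:
    - output.json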
Running the dbt docs generate command generates a catalog.json file in the target folder. The process works well locally.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
After generating the catalog.json file, I want to upload it to S3 in the next step. I copy it from the target folder to the root folder and then upload it roughly like this:
- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      - aws s3 cp catalog.json s3://testunzipping/
However, I get an error:
+ aws s3 cp catalog.json s3://testunzipping/
The user-provided path catalog.json does not exist.
Although the copy command works well locally, the file doesn't seem to be produced properly within the Bitbucket pipeline. Is there any other way I can save the content of catalog.json in some variable in the first step and then upload it to S3 later?
In Bitbucket Pipelines, each step has its own build environment. To be able to share things between steps, you should use artifacts.
You may want to try the steps below.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
      artifacts:
        - catalog.json
  - step:
      name: 'Upload to S3'
      image: python:3.7.2
      script:
        - aws s3 cp catalog.json s3://testunzipping/
Reference: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
I have a bash script that sets a series of environment variables.
My action includes the following two steps:
- name: Set env variables
  run: source ./setvars.sh
- name: dump env variables
  run: env
I notice setvars.sh runs successfully, but all of the variables defined inside it are missing in the following step.
How can I use a bash .sh script to add environment variables to the context of the workflow?
I don't see environment variables being defined by sourcing a file in a GitHub Actions workflow.
I only see them defined as a map (key-value) at the job or workflow level (available since October 2019).
See if you can cat your file and append its content to GITHUB_ENV.
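A minimal sketch of that idea, assuming setvars.sh contains plain KEY=value lines (if it uses export KEY=value, the export prefix needs to be stripped first); MY_VAR is just a placeholder name:

- name: Set env variables
  run: |
    # Option 1: append plain KEY=value lines directly to the special
    # GITHUB_ENV file so later steps see them as environment variables
    cat ./setvars.sh >> "$GITHUB_ENV"

    # Option 2: source the script in this step, then re-export the
    # variables you need into GITHUB_ENV
    # source ./setvars.sh
    # echo "MY_VAR=$MY_VAR" >> "$GITHUB_ENV"

- name: dump env variables
  run: env   # variables from setvars.sh are now visible here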