I wanted to pass secrets from a GitHub action to a JSON file in the same workflow.
# Github secrets
SECRET_TOKEN: 4321
In file.json the SECRET_TOKEN value needs to be fetched.
# file.json
{
secret_token: "SECRET_TOKEN", # should fetch the SECRET_TOKEN from git action
apiId: "blabla"
}
Expected Output:
# file.json
{
secret_token: "4321",
apiId: "blabla"
}
You have several options: you can use pure bash and jq to achieve that, or, if you are not that experienced, an easier way is to use one of the existing actions from the Marketplace, like this one:
https://github.com/marketplace/actions/create-json
- name: create-json
uses: jsdaniell/create-json@1.1.2
with:
name: "file.json"
json: '{"app":"blabla", "secret_token":"${{ secrets.SECRET_TOKEN }}"}'
I would suggest using the replace-tokens action. As an example, suppose this json file:
file.json
{
secret_token: "#{SECRET_TOKEN}#",
apiId: "blabla"
}
the action:
- uses: cschleiden/replace-tokens@v1
with:
files: 'file.json'
env:
SECRET_TOKEN: ${{ secrets.SECRET_TOKEN }}
If you want to use a different token format, you can specify a custom token prefix/suffix. For example, to replace just tokens like `{SECRET_TOKEN}` you could add:
- uses: cschleiden/replace-tokens@v1
with:
files: 'file.json'
tokenPrefix: '{'
tokenSuffix: '}'
env:
SECRET_TOKEN: ${{ secrets.SECRET_TOKEN }}
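With that prefix/suffix, the tokens in file.json would then look like this:
{
secret_token: "{SECRET_TOKEN}",
apiId: "blabla"
}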
I'm using BackstopJS for regression tests and trying to implement a GitHub workflow.
First, a little introduction to how BackstopJS works:
We have reference images (pictures) of browser pages
We run a BackstopJS test and compare the actual browser view with the reference image
We check the Backstop report HTML page in the browser and decide which is correct: the actual view or the reference image
If the browser view is the updated, correct version, we run the backstop approve command to overwrite the reference image with the new actual image
What can be implemented inside GitHub Actions:
Download reference images from S3 bucket
Run BackstopJS test
Save HTML report and actual browser images as artifacts
Download HTML report stored as artifact and check if new version of images is correct
??? Here is a problem
Problem:
The workflow has already ended, and we are not able to approve the new images. So, is there any way to add a dialog inside the pull request if the test action failed, so we can upload the new images (stored as artifacts) to S3 as new reference images? Or some way to retry the failed test with new parameters (say, an env AUTO_APPROVE=true) so the test can be re-run and the new images approved?
Finally, I implemented an interactive workflow:
---
name: 'BackstopJS test'
on:
pull_request:
types:
- edited
- opened
- synchronize
branches:
- 'develop'
env:
AWS_ACCOUNT_ID: '12345678'
AWS_REGION: 'us-east-1'
AWS_BUCKET_NAME: 'bucket_name'
AWS_BUCKET_PATH: 'bucket_folder'
AWS_BUCKET_KEY: 'bitmaps_archive.zip'
defaults:
run:
shell: bash
working-directory: backstop_test
jobs:
test:
# yamllint disable rule:line-length
if: ${{ (github.event.action != 'edited' ) || contains(github.event.pull_request.body, 'approve ') }}
name: 'BackstopJS test'
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
pull-requests: read
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get last commit message (only 100 commits in PR are acceptable)
if: ${{ github.event.action != 'edited' }}
env:
COMMITS_URL: ${{ github.event.pull_request.commits_url }}
run: |
if [ "${COMMITS_URL}x" != "x" ]; then
echo "COMMIT_MSG=$(curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" "${COMMITS_URL}?per_page=100" | jq -r .[-1].commit.message)" >> "${GITHUB_ENV}"
else
echo '::warning ::Cannot get commits list URL'
echo 'COMMIT_MSG=' >> "${GITHUB_ENV}"
fi
- name: Search for approve directives in PR body or commit message
shell: python
env:
PR_MSG: ${{ github.event.pull_request.body }}
run: |
from os import environ as env
from sys import exit
file_path = env.get('GITHUB_ENV', None)
if file_path is None:
raise OSError('Environ file not found')
autoapprove = False
approve_only = False
commit_message = env.get('COMMIT_MSG', '')
pr_message = env.get('PR_MSG', '')
with open(file_path, 'a') as gh_envs:
if '[cancel test]' in commit_message.lower() or '[skip test]' in commit_message.lower():
gh_envs.write('SKIP_TEST=1\n')
print("::warning ::Test is skipped by commit tag")
exit(0)
elif 'cancel test' in pr_message.lower() or 'skip test' in pr_message.lower():
gh_envs.write('SKIP_TEST=1\n')
print("::warning ::Test is skipped by tag in PR message")
exit(0)
else:
gh_envs.write('SKIP_TEST=0\n')
if '[approve me]' in commit_message.lower():
autoapprove = True
approve_only = True
print("Reference bitmaps will be approved by commit message")
else:
print("Last commit message:", commit_message)
if '${{ github.event.action }}' == 'edited':
approve_only = True
pr_message = pr_message.split('\n')
last_commit_id = '${{ github.event.pull_request.head.sha }}'
commit_id = None
for line in pr_message:
if line.startswith('approve '):
commit_id = line.split(' ')[-1].rstrip('\n\r')
break
if commit_id:
if last_commit_id.startswith(commit_id):
autoapprove = True
else:
print(
"::warning ::approved commit sha and last commit sha are missmatched:",
commit_id,
"/",
last_commit_id
)
else:
print("Auto approvment disabled")
with open(file_path, 'a') as gh_envs:
if autoapprove:
gh_envs.write('AUTOAPPROVE=1\n')
else:
gh_envs.write('AUTOAPPROVE=0\n')
if approve_only and autoapprove:
gh_envs.write('APPROVE_ONLY=1\n')
else:
gh_envs.write('APPROVE_ONLY=0\n')
- name: Configure AWS credentials
if: ${{ env.SKIP_TEST != 1 }}
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/github-iam-role
aws-region: ${{ env.AWS_REGION }}
role-session-name: backstopjs_test_runner
- name: Download and extract reference bitmaps
if: ${{ env.SKIP_TEST != 1 }}
run: |
aws s3 cp "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/${AWS_BUCKET_KEY}" "./backup_${AWS_BUCKET_KEY}" && unzip -od ./backstop_data "backup_${AWS_BUCKET_KEY}" || echo "::warning file=${AWS_BUCKET_KEY}::No reference bitmaps archive found"
- name: Run tests
if: ${{ env.SKIP_TEST != 1 }}
run: |
if [ "${AUTOAPPROVE}" == "1" ] || [ "${AUTOAPPROVE^^}" == "TRUE" ] || [ "${AUTOAPPROVE^^}" == "YES" ]; then
echo "**Autoapprove is activated. Reference images will be renewed** " >> "${GITHUB_STEP_SUMMARY}"
else
{
echo "**Autoapprove is not active. Current reference images will be used** ";
echo "";
echo "Add \`approve ${{ github.event.pull_request.head.sha }}\` line to PR description";
echo "and test JOB will be automatically re-run to approve new reference bitmaps";
echo "";
echo "* if you will do new commit into PR, \`sha\` of current approvment should be updated too ";
echo "";
echo "Also \`[approve me]\` tag may be added inside commit message to renew bitmaps automatically ";
} >> "${GITHUB_STEP_SUMMARY}"
fi
{
echo "";
echo "Test may be cancelled by using \`[skip test]\` (or \`[cancel test]\`) tag inside commit message ";
echo "or by using \`skip test\` (or \`cancel test\`) code word inside PR message ";
echo "";
echo "---";
} >> "${GITHUB_STEP_SUMMARY}"
# Run tests here.
# If AUTOAPPROVE=1 then backstop test → backstop approve → backstop test will be run (both reports will be saved)
# If AUTOAPPROVE=1 and APPROVE_ONLY=1 then backstop reference → backstop test will be run
# ...
# Set Job to fail or success depends on tests exit status code
# In this example tests are not included, and the status will always be failed if autoapprove is 0
if [ "${AUTOAPPROVE}x" == "1x" ]; then
echo "IS_FAILED=0" >> "${GITHUB_ENV}"
else
echo "IS_FAILED=1" >> "${GITHUB_ENV}"
echo "Error: test \`BLAHBLAHBLAH\` failed with status code: \`1\`" >> "${GITHUB_STEP_SUMMARY}"
fi
- name: Upload new reference bitmaps to S3 bucket
if: ${{ env.AUTOAPPROVE == 1 && env.IS_FAILED == 0 && env.SKIP_TEST != 1 }}
run: |
cd backstop_data && zip -ur "../${AWS_BUCKET_KEY}" bitmaps_reference && cd .. && \
aws s3 cp "${AWS_BUCKET_KEY}" "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/${AWS_BUCKET_KEY}"
if [ -f "backup_${AWS_BUCKET_KEY}" ]; then
aws s3 cp "backup_${AWS_BUCKET_KEY}" "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/backup_${AWS_BUCKET_KEY}"
fi
- name: Save HTML reports
if: ${{ env.SKIP_TEST != 1 }}
uses: actions/upload-artifact@v3
with:
name: html_reports
path: backstop_test/report
- name: Save logs (only if failed)
if: ${{ env.IS_FAILED == 1 && env.SKIP_TEST != 1 }}
uses: actions/upload-artifact@v3
with:
name: test_logs
path: backstop_test/logs
- name: Set to fail
if: ${{ env.IS_FAILED == 1 && env.SKIP_TEST != 1 }}
uses: actions/github-script@v3
with:
script: |
core.setFailed('Some of regression tests failed. Check Summary for detailed info')
Flow runs on:
PR body updates (edited)
New commits inside the PR (synchronize)
New PR created (opened)
If the PR body has the line skip test (or cancel test), or the commit message has the tag [skip test] (or [cancel test]), the test will be skipped
If the PR body has the line approve commit-SHA (where commit-SHA is the sha of the last commit in the PR), or the commit message has the tag [approve me], new reference bitmaps will be created
If the approve line is present in the PR, only one test with the new reference images will be run
If the approve tag is present in the commit message, two tests will be run (before and after approval) and two reports will be saved
Reference images are stored in an S3 bucket and uploaded/downloaded from it
In Azure DevOps, I can set a pipeline variable at runtime by echoing:
##vso[task.setvariable variable=var]value
How can I do the same thing in Github Workflows?
I'm not making a custom action, so I don't think outputs are relevant, I just want to pass a variable from one step to another. However, I might be missing something.
The following will set a value as an env variable named environment_variable_name
echo "{environment_variable_name}={value}" >> $GITHUB_ENV
An example of how you would use this could be:
steps:
- name: Set the value
id: step_one
run: |
echo "action_state=yellow" >> $GITHUB_ENV
- name: Use the value
id: step_two
run: |
echo "${{ env.action_state }}" # This will output 'yellow'
More on this can be found in the GitHub documentation on workflow commands.
I want to convert a pipeline variable (a delimited string) to a JSON array and assign the JSON array to another pipeline variable. See my code below; the output stays empty. What am I missing here?
script:
steps:
- task: PowerShell@2
inputs:
targetType: inline
script: |
$test = "LZ-UK;LZ-ES;LZ-NL"
$json = $test.Split(";") | ConvertTo-Json -AsArray
Write-Host "##vso[task.setvariable variable=JsonLZ]$json"
Write-Host "Assigned the variable"
Write-Host "New `r`n $($JsonLZ)"
- script: |
echo ${{ variables.JsonLZ }}
output:
Starting: PowerShell
==============================================================================
Task : PowerShell
Description : Run a PowerShell script on Linux, macOS, or Windows
Version : 2.200.0
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/powershell
==============================================================================
Generating script.
========================== Starting Command Output ===========================
/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -Command . '/home/vsts/work/_temp/380b437f-74c4-4883-9d4a-7b4f3ac79266.ps1'
"LZ-UK",
"LZ-ES",
"LZ-NL"
]
Assigned the variable
New
Finishing: PowerShell
You're very close. There were a few minor issues that I spotted with your YAML/PowerShell:
You forgot the semicolon after the variable name in "##vso[task.setvariable variable=JsonLZ]$json"; it should be: "##vso[task.setvariable variable=JsonLZ;]$json"
You should be using $(JsonLZ) instead of ${{ variables.JsonLZ }}. The former will be evaluated at runtime, the latter at compile-time. Here's a link to the MS Docs: Understand variable syntax
Give this a try to see a working example:
name: Stackoverflow-Example-Pipeline
trigger:
- none
variables:
JsonLZ: 'UNSET'
stages:
- stage: StageA
displayName: "Stage A"
jobs:
- job: example_job
displayName: "Example Job"
pool:
vmImage: "ubuntu-latest"
steps:
- task: PowerShell@2
inputs:
targetType: inline
script: |
$test = "LZ-UK;LZ-ES;LZ-NL"
$json = $test.Split(";") | ConvertTo-Json -Compress
Write-Host "##vso[task.setvariable variable=JsonLZ;]$json"
Write-Host "Assigned the variable"
- script: |
echo $(JsonLZ)
echo ${{ variables.JsonLZ }}
I use a GitHub Action for my build, and it generates multiple artifacts (with different names).
Is there a way to predict the URL of the artifacts of the last successful build, without knowing the SHA-1, only the name of the artifact and the repo?
I have developed a service that exposes predictable URLs to either the latest or a particular artifact of a repository's branch+workflow.
https://nightly.link/
https://github.com/oprypin/nightly.link
This is implemented as a GitHub App, and communication with GitHub is authenticated, but users that only download don't need to even log in to GitHub.
The implementation fetches this through the API in 3 steps (a curl/jq sketch follows below):
https://api.github.com/repos/:owner/:repo/actions/workflows/someworkflow.yml/runs?per_page=1&branch=master&event=push&status=success
https://api.github.com/repos/:owner/:repo/actions/runs/123456789/artifacts?per_page=100
https://api.github.com/repos/:owner/:repo/actions/artifacts/87654321/zip
(the last one redirects you to an ephemeral direct download URL)
Note that authentication is required. For OAuth that's public_repo (or repo if appropriate). For GitHub Apps that's "Actions"/"Read-only".
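A rough sketch of those three steps with curl and jq (OWNER, REPO, someworkflow.yml and my-artifact are placeholders, and GITHUB_TOKEN is assumed to hold a token with the scopes mentioned above):
# 1. id of the latest successful push run of the workflow on master
run_id=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
  "https://api.github.com/repos/OWNER/REPO/actions/workflows/someworkflow.yml/runs?per_page=1&branch=master&event=push&status=success" \
  | jq -r '.workflow_runs[0].id')
# 2. id of the artifact with the wanted name in that run
artifact_id=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
  "https://api.github.com/repos/OWNER/REPO/actions/runs/${run_id}/artifacts?per_page=100" \
  | jq -r '.artifacts[] | select(.name == "my-artifact") | .id')
# 3. download the zip; -L follows the redirect to the ephemeral URL
curl -s -L -H "Authorization: token $GITHUB_TOKEN" \
  -o my-artifact.zip \
  "https://api.github.com/repos/OWNER/REPO/actions/artifacts/${artifact_id}/zip"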
There is indeed no more direct way to do this.
Some relevant issues are
https://github.com/actions/upload-artifact/issues/51
https://github.com/actions/upload-artifact/issues/27
At the moment, no, according to comments from staff, although this may change with future versions of the upload-artifact action.
After poking around myself, I found it is possible to get this using the GitHub Actions API:
https://developer.github.com/v3/actions/artifacts/
GET /repos/:owner/:repo/actions/runs/:run_id/artifacts
So you can receive a JSON reply and iterate through the "artifacts" array to get the corresponding "archive_download_url". A workflow can fill in the URL like so:
/repos/${{ github.repository }}/actions/runs/${{ github.run_id }}/artifacts
You can use the jq command-line JSON processor along with curl to extract the URL as follows.
curl -s https://api.github.com/repos/<OWNER>/<REPO_NAME>/actions/artifacts\?per_page\=<NUMBER_OF_ARTIFACTS_PER_BUILD> | jq '[.artifacts[] | {name : .name, archive_download_url : .archive_download_url}]' | jq -r '.[] | select (.name == "<NAME_OF_THE_ARTIFACT>") | .archive_download_url'
For example;
curl -s https://api.github.com/repos/ballerina-platform/ballerina-distribution/actions/artifacts\?per_page\=9 | jq '[.artifacts[] | {name : .name, archive_download_url : .archive_download_url}]' | jq -r '.[] | select (.name == "Linux Installer deb") | .archive_download_url'
Here, curl -s https://api.github.com/repos/<OWNER>/<REPO_NAME>/actions/artifacts\?per_page\=<NUMBER_OF_ARTIFACTS_PER_BUILD> returns the array of artifacts related to the latest build.
jq '[.artifacts[] | {name : .name, archive_download_url : .archive_download_url}]' extracts the artifacts array and filters required data.
jq -r '.[] | select (.name == "<NAME_OF_THE_ARTIFACT>") | .archive_download_url' selects the url for the given artifact name.
I am not a GitHub and jq guru. Probably there are more optimal solutions out there.
jq playground link: https://jqplay.org/s/Gm0kRcv63C - to test my solution and other possible ideas. I dropped some irrelevant field to shrink the sample JSON size (for example: node_id, size_in_bytes, created_at...)
Further details on the methods in the code samples below.
####### You can get the max date of your artifacts.
####### Then you need to choose the artifact entry by this date.
#######
####### NOTE: I just pre-formatted the first command "line".
####### 2nd "line" has a very similar, but simplified structure.
####### (at least easy to copy-paste into jq playground)
####### NOTE: ASSUMPTION:
####### First "page" of json response contains the most recent entries
####### AND includes artifact(s) with that specific name.
#######
####### (if other artifacts flood your API response, you can increase
####### the page size of it or do iteration on pages as a workaround)
bash$ cat artifact_response.json | \
jq '
(
[
.artifacts[]
| select(.name == "my-artifact" and .expired == false)
| .updated_at
]
| max
) as $max_date
| { $max_date }'
####### output
{ "max_date": "2021-04-29T11:22:20Z" }
Another way:
####### Latest ID of non-expired artifacts with a specific name.
####### Probably this is better solution than the above since you
####### can use the "id" instantly in your download url construction:
#######
####### "https://api.github.com/repos/blabla.../actions/artifacts/92394368/zip"
#######
####### ASSUMPTION: higher "id" means higher "update date" in your workflow
####### (there is no post-update of artifacts aka creation and
####### update dates are identical for an artifact)
cat artifact_response.json | \
jq '[ .artifacts[] | select(.name == "my-artifact" and .expired == false) | .id ] | max'
####### output
92394368
A more compact filter, assuming the API response is in reverse order by date or id:
####### no full command line, just jq filter string
#######
####### no "max" function, only pick the first element by index
#######
'[ .artifacts[] | select(.name == "my-artifact" and .expired == false) | .id ][0]'
Wrap-up
I recently faced a similar use case, with the aim of making the build artifacts of a given GitHub Actions workflow more visible, with a single click in the commit statuses (albeit requiring users to be logged in to github.com).
As pointed out by @geoff-hutchison in his answer:
The API https://developer.github.com/v3/actions/artifacts/ is helpful.
However:
It is impossible to query the list of artifacts generated by a workflow from a job of the current workflow; deferring the request to a subsequent workflow is needed.
The archive_download_url URLs contained in the corresponding JSON response do not seem practical enough (they are GitHub API URLs redirecting to ephemeral URLs).
Fortunately:
Browsing the workflow build artifact URLs already shown on the GitHub Actions page clearly shows that they have the form https://github.com/${{ github.repository }}/suites/${check_suite_id}/artifacts/${artifact_id}, and these URLs are static (albeit only available to logged-in users, and for 90 days at most, given the expiration of artifacts).
Implementation
Hence the following, generic code I developed (under MIT license) to pin all artifacts of a given workflow in commit statuses (just replace the TODO-strings):
.github/workflows/pin-artifacts.yml:
name: Pin artifacts
on:
workflow_run:
workflows:
- "TODO-Name Of Existing Workflow"
types: ["completed"]
jobs:
# Make artifacts links more visible for the upstream project
pin-artifacts:
permissions:
statuses: write
name: Add artifacts links to commit statuses
if: ${{ github.event.workflow_run.conclusion == 'success' && github.repository == 'TODO-orga/TODO-repo' }}
runs-on: ubuntu-latest
steps:
- name: Add artifacts links to commit status
run: |
set -x
workflow_id=${{ github.event.workflow_run.workflow_id }}
run_id=${{ github.event.workflow_run.id }} # instead of ${{ github.run_id }}
run_number=${{ github.event.workflow_run.run_number }}
head_branch=${{ github.event.workflow_run.head_branch }}
head_sha=${{ github.event.workflow_run.head_sha }} # instead of ${{ github.event.pull_request.head.sha }} (or ${{ github.sha }})
check_suite_id=${{ github.event.workflow_run.check_suite_id }}
set +x
curl \
-H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/${{ github.repository }}/actions/runs/${run_id}/artifacts" \
| jq '[.artifacts | .[] | {"id": .id, "name": .name, "created_at": .created_at, "expires_at": .expires_at, "archive_download_url": .archive_download_url}] | sort_by(.name)' \
> artifacts.json
cat artifacts.json
< artifacts.json jq -r ".[] | \
.name + \"§\" + \
( .id | tostring | \"https://github.com/${{ github.repository }}/suites/${check_suite_id}/artifacts/\" + . ) + \"§\" + \
( \"Link to \" + .name + \".zip [\" + ( .created_at | sub(\"T.*\"; \"→\") ) + ( .expires_at | sub(\"T.*\"; \"] (you must be logged)\" ) ) )" \
| while IFS="§" read name url descr; do
curl \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ github.token }}" \
"https://api.github.com/repos/${{ github.repository }}/statuses/${head_sha}" \
-d "$( printf '{"state":"%s","target_url":"%s","description":"%s","context":"%s"}' "${{ github.event.workflow_run.conclusion }}" "$url" "$descr" "$name (artifact)" )"
done
(If need be, see also my PR ocaml-sf/learn-ocaml#501 to see an implementation example and screenshots.)
Say I have a release pipeline in Azure DevOps written in YAML which has two tasks: one for reading JSON from a file, and a second one for setting a key in a different JSON file using the JSON read in the first task. I have the following pipeline.yml -
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- task: PowerShell@2
name: ReadMetadataJson
inputs:
filePath: 'MetadataReader.ps1'
arguments: 'Run MetadataReader.ps1 -pathToMetadata metadata.json'
- task: PowerShell@2
name: SetAppSetting
inputs:
filePath: 'AppSettingSetter.ps1'
arguments: 'Run AppSettingSetter.ps1 -pathToAppSetting SomeApp/Data.json -appSettingKey testkey -appSettingValue $($(ReadMetadataJson)).testkey'
- script: echo $(ReadMetadataJson.metadata)
Below are the PowerShell scripts called from each task -
Powershell 1
# Read From the Metadata.json
param ($pathToMetadata)
echo $pathToMetadata
$metadata = Get-content $pathToMetadata | out-string | ConvertFrom-Json
Write-Output "Metadata Json from metadata reader ps script - $metadata"
echo "##vso[task.setvariable variable=metadata;]$metadata"
Powershell 2
# For now just accept the values in parameter and log them
param ($pathToAppSetting, $appSettingKey, $appSettingValue)
echo "pathToAppSetting : $pathToAppSetting"
echo "appSettingKey : $appSettingKey"
echo "appSettingValue : $appSettingValue"
# Code to set in another file. I have this working, so omitting for brevity
And these are the json files -
Metadata.json
{
"testkey": "TestValueFromMetadata",
"testkey1": "TestValueFromMetadata1"
}
appSetting.json
{
"testkey": "TestValueInAppSetting",
"testkey1": "TestValueInAppSetting1"
}
The problem arises when I want to return the JSON data as output from the first task and use it in the second task to pass the parameter to the second PowerShell script. Below is a screenshot of the pipeline result after I run it.
As can be seen, it says ReadMetadataJson.metadata: command not found. I have been following the Microsoft documentation as a reference and have searched for other articles, but all I could find was handling values like strings or integers, not a JSON object. What is it that I am missing or doing wrong?
You can convert your JSON object to a string (ConvertTo-Json) and pass it as a variable to the second script.
Then, in the second script, you just parse the string back into a JSON object using ConvertFrom-Json.
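A minimal sketch of that idea, reusing the scripts from the question (the metadataJson variable name is an arbitrary choice, and the quoting of the argument may need adjusting in your pipeline):
# MetadataReader.ps1 -- serialize the object to a single-line string before setting the variable
param ($pathToMetadata)
$metadata = Get-Content $pathToMetadata -Raw | ConvertFrom-Json
$metadataJson = $metadata | ConvertTo-Json -Compress
echo "##vso[task.setvariable variable=metadataJson]$metadataJson"
# AppSettingSetter.ps1 -- parse the string back into an object
param ($pathToAppSetting, $appSettingKey, $appSettingValue)
$metadata = $appSettingValue | ConvertFrom-Json
echo "Value for '$appSettingKey': $($metadata.$appSettingKey)"
The second task would then pass something like -appSettingValue '$(metadataJson)'; note that the double quotes inside the compressed JSON may need escaping when the string travels through the task arguments.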
Apart from the method that Hugo mentioned above, there is another solution that can achieve what you want without any additional steps.
Just add one line into your MetadataReader.ps1:
param ($pathToMetadata)
echo $pathToMetadata
$metadata = Get-content $pathToMetadata | out-string | ConvertFrom-Json
$metadata | Get-Member -MemberType NoteProperty | % { $o = $metadata.($_.Name); Write-Host "##vso[task.setvariable variable=$($_.Name);isOutput=true]$o" }
Then it will parse those JSON properties into corresponding variables once the JSON file contents are read.
(I make use of the working logic of Terraform outputs here.)
Then you can directly use {reference name}.{object name} to reference the corresponding JSON value.
- task: PowerShell@2
name: ReadMetadataJson
inputs:
filePath: 'MetadataReader.ps1'
arguments: 'Run MetadataReader.ps1 -pathToMetadata metadata.json'
- task: PowerShell@2
name: SetAppSetting
inputs:
filePath: 'AppSettingSetter.ps1'
arguments: 'Run AppSettingSetter.ps1 -pathToAppSetting Data.json -appSettingKey testkey -appSettingValue $(ReadMetadataJson.testkey)'
- script: echo $(ReadMetadataJson.testkey)
Note: I made changes here: -appSettingValue $(ReadMetadataJson.testkey)