I have a GitHub action that essentially is a bash script. The javascript portion of my action executes a bash script:
const core = require("@actions/core");
const exec = require("@actions/exec");

async function run() {
  try {
    // Execute the bash script
    await exec.exec(`${__dirname}/my-action-script.sh`);
  } catch (error) {
    core.setFailed(error.message);
  }
}

run();
For now, this action will communicate with other actions by leaving files on the file system. This is an "invisible" way of communicating, and I would like to fill my action.yml with outputs. How can I enable my-action-script.sh to return outputs defined in my action.yml?
The output must first be added to the action.yml, e.g.:

name: some GitHub workflow yaml file
description: some workflow description
runs:
  using: node12
  main: dist/index.js
inputs:
  some_input:
    description: some input
    required: false
outputs:
  some_output:
    description: some output
and create the output from the bash script. Note that the ::set-output workflow command used in older examples has since been deprecated; write to the $GITHUB_OUTPUT file instead, e.g.:

  echo "some_output=$SOME_OUTPUT" >> "$GITHUB_OUTPUT"

(the legacy form was: echo ::set-output name=some_output::"$SOME_OUTPUT")
Then you can use it in your workflow yaml, e.g.:

  ${{ steps.<step id>.outputs.some_output }}
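For context, a minimal workflow step consuming that output might look like this; my_action is a placeholder step id, and the uses: path would point at wherever the action actually lives:

```yaml
steps:
  - id: my_action
    uses: ./   # placeholder path to the action
  - run: echo "got ${{ steps.my_action.outputs.some_output }}"
```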
Not totally clear if this is an action in a repo, or something you want to publish to the marketplace. At any rate, creating the output is done in the same way as indicated by the other answer, although you can run the shell script directly if you make it a composite action. Note that a composite action declares steps under runs (there is no main entry), and its outputs need an explicit value:

name: some GitHub workflow yaml file
description: some workflow description
runs:
  using: composite
  steps:
    - id: script
      run: ${{ github.action_path }}/my-action-script.sh
      shell: bash
inputs:
  some_input:
    description: some input
    required: false
outputs:
  some_output:
    description: some output
    value: ${{ steps.script.outputs.some_output }}
See this article on creating this kind of action.
I am working with Airflow 2.2.3 in GCP (Composer) and I am seeing inconsistent behavior which I can't explain when trying to use template values.
When I reference the templated value directly, it works without issue:
ts = '{{ ds }}' # results in 2022-05-09
When I reference the templated value in a function call, it doesn't work as expected:
ts_parts = '{{ ds }}'.split('-') # result ['2022-05-09']
The non-function-call value is rendered without any issues, so it doesn't have any dependency on operator scope. There are examples here that show rendering outside of an operator, so I expect that not to be the issue. It's possible that Composer has a setting configured so that Airflow will apply rendering to all Python files.
Here's the full code for reference
dag.py
with DAG('rendering_test',
         description='Testing template rendering',
         schedule_interval=None,  # only run on demand
         start_date=datetime(2020, 11, 10), ) as rendering_dag:

    ts = '{{ ds }}'
    ts_parts = '{{ ds }}'.split('-')
    literal_parts = '2022-05-09'.split('-')

    print_gcs_info = BashOperator(
        task_id='print_rendered_values',
        bash_command=f'echo "ts: {ts}\nts_parts: {ts_parts}\nliteral_parts {literal_parts}"'
    )
I thought that Airflow writes the files to some location with template values, then runs jinja against them with some supplied values, then runs the resulting python code. It looks like there is some logic applied if the line contains a function call? The documentation mentions none of these architectural principles and gives very limited examples.
Airflow does not render values outside of operator scope.
Rendering is part of task execution, which means it is a step that happens only when the task is on the worker (after being scheduled).
In your code the rendering happens in top-level code which is not part of an operator's templated fields, so Airflow considers it a regular string.
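The puzzling ['2022-05-09'] result can be reproduced without Airflow at all. The split runs on the literal string at DAG-parse time, and the unrendered '{{ ds }}' only survives into bash_command, which is a templated field, so Jinja renders it there later:

```python
ts_parts = '{{ ds }}'.split('-')       # the literal contains no '-', so nothing splits
print(ts_parts)                        # ['{{ ds }}']

cmd = f'echo "ts_parts: {ts_parts}"'   # the f-string embeds the unrendered template
print(cmd)                             # echo "ts_parts: ['{{ ds }}']"
# Because bash_command is a templated field, Airflow later renders the embedded
# {{ ds }} inside it, which is why the task log shows ts_parts: ['2022-05-09'].
```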
In your case, os.path.dirname() would be executed on '{{ dag_run.conf.name }}' before it is rendered.
To fix the issue you need to put the Jinja string in a templated field of the operator:

  bash_command=""" echo "path: {{ dag_run.conf.name }} path: os.path.dirname('{{ dag_run.conf.name }}')" """
Triggering the DAG with {"name": "value"} will give the rendered value in the task log.
Note that if you wish to use an f-string with Jinja strings you must double the number of curly braces:
source_file_path = '{{ dag_run.conf.name }}'

print_template_info = BashOperator(
    task_id='print_template_info',
    bash_command=f""" echo "path: { source_file_path } path: os.path.dirname('{{{{ dag_run.conf.name }}}}')" """
)
Edit:
Let me clarify: Airflow templates fields as part of task execution.
You can see in the code base that Airflow invokes render_templates before it invokes pre_execute() and before it invokes execute(). This means that this step happens only when the task is running on a worker. Trying to use templates outside of an operator means the task doesn't even run, so the templating step never happens.
I have this section in one of my CircleCI jobs:
parameters:
  aws_account:
    type: string
    default: '111111111111'
  folder:
    default: ''
    description: The folder the changes will be deployed in
    type: string
  stack:
    default: int
    description: Sets the stack the deployment triggers.
    type: string
I'm wondering how to move this over to GitHub Actions, because neither parameters nor even aws_account on its own is an allowed property in GitHub Actions workflows.
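One possible mapping, sketched under the assumption that the CircleCI job becomes a reusable workflow: GitHub Actions has no top-level parameters key, but reusable workflows accept typed inputs under on.workflow_call (and manually triggered runs under on.workflow_dispatch). The names below are carried over from the question:

```yaml
# Hypothetical reusable-workflow equivalent (.github/workflows/deploy.yml)
on:
  workflow_call:
    inputs:
      aws_account:
        type: string
        default: '111111111111'
      folder:
        type: string
        default: ''
        description: The folder the changes will be deployed in
      stack:
        type: string
        default: int
        description: Sets the stack the deployment triggers.

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying ${{ inputs.folder }} to ${{ inputs.aws_account }} (${{ inputs.stack }})"
```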
In Artillery, how can I capture the attribute of a random index in a JSON array returned from a GET, so my subsequent POSTs are evenly distributed across the resources?
https://artillery.io/docs/http-reference/#extracting-and-reusing-parts-of-a-response-request-chaining
I'm using serverless-artillery to run a load test, which under the hood uses artillery.io.
A lot of my scenarios look like this:
- get:
    url: "/resource"
    capture:
      json: "$[0].id"
      as: "resource_id"
- post:
    url: "/resource/{{ resource_id }}/subresource"
    json:
      body: "Example"
Get a list of resources, and then POST to one of those resources.
As you can see, I am using capture to capture an ID from the JSON response. My problem is that it is always getting the id from the first index of the array.
This will mean in my load test I end up absolutely battering one single resource rather than hitting them evenly which will be a more likely scenario.
I would like to be able to do something like:
capture:
  json: "$[RANDOM].id"
  as: "resource_id"
but I have been unable to find anything in the JSONPath definition that would allow me to do so.
Define a setResourceId function in custom JS code, and tell Artillery to load your custom code by setting config.processor to the JS file path:
processor: "./custom-code.js"

- get:
    url: "/resource"
    capture:
      json: "$"
      as: "resources"
- function: "setResourceId"
- post:
    url: "/resource/{{ resourceId }}/subresource"
    json:
      body: "Example"
The custom-code.js file contains the function below:
function setResourceId(context, next) {
  const randomIndex = Math.round(Math.random() * context.vars.resources.length);
  context.vars.resourceId = context.vars.resources[randomIndex].id;
}
Using this version:
------------ Version Info ------------
Artillery: 1.7.9
Artillery Pro: not installed (https://artillery.io/pro)
Node.js: v14.6.0
OS: darwin/x64
The answer above didn't work for me.
I got more info from here, and got it working with the following changes:
function setResourceId(context, events, done) {
  const randomIndex = Math.round(Math.random() * (context.vars.resources.length - 1));
  context.vars.resourceId = context.vars.resources[randomIndex].id;
  return done();
}

module.exports = {
  setResourceId: setResourceId
};
I am using the example provided on the chimp website for gulp-chimp
gulp.task('chimp-options', () => {
  return chimp({
    features: './features',
    browser: 'phantomjs',
    singleRun: true,
    debug: false,
    output: {
      screenshotsPath: './screenshots',
      jsonOutput: './cucumber.json',
    },
    htmlReport: {
      enable: true,
      jsonFile: './e2e_output/cucumber.json',
      output: './e2e_output/report/cucumber.html',
      reportSuiteAsScenarios: true,
      launchReport: true,
    }
  });
});
The problem I have, and it's killing me, is that when I run gulp chimp-options I get:
Unable to parse cucumberjs output into json: './e2e_output/cucumber.json' SyntaxError: ./e2e_output/cucumber.json: Unexpected end of JSON input
What am I doing wrong ?
I believe chimp is just a wrapper around multiple frameworks/libraries out there, and I'm pretty sure it uses cucumber-html-reporter to generate its HTML reports.
If you still can't get it working automatically via chimp, just generate the JSON output file as usual, npm install cucumber-html-reporter, and then use it to generate the same report.
Create a separate file called generate_html_report.js and paste in the code under Usage. Then add it to your npm scripts to run after your test suite has finished. I'd avoid putting it in your afterHooks, as I've had issues in the past where the JSON file hadn't been completely generated before the script that expected it ran.
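For reference, a sketch of that generate_html_report.js, assuming cucumber-html-reporter's documented generate() options and the file paths from the question (the theme choice is arbitrary):

```javascript
// generate_html_report.js -- sketch; assumes `npm install cucumber-html-reporter`
// and the JSON/HTML paths used in the question.
const reporter = require('cucumber-html-reporter');

reporter.generate({
  theme: 'bootstrap',
  jsonFile: './e2e_output/cucumber.json',
  output: './e2e_output/report/cucumber.html',
  reportSuiteAsScenarios: true,
  launchReport: false
});
```

Run it via an npm script after the tests, e.g. "posttest": "node generate_html_report.js".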
I want to use the grunt-hash plugin to rename my js files.
This plugin creates a new file containing a map of the renamed files:
hash: {
  options: {
    mapping: 'examples/assets.json' // mapping file so your server can serve the right files
  }
}
Now I need to fix the links to these files by replacing all usages (renaming 'index.js' to 'index-{hash}.js'), so I want to use the grunt-text-replace plugin.
According to the documentation I need to configure replacements:
replace: {
  example: {
    replacements: [{
      from: 'Red', // string replacement
      to: 'Blue'
    }]
  }
}
How can I read the JSON mapping file to get the {hash} values for each file and provide them to the replace task?
grunt.file.readJSON('your-file.json')
is probably what you are looking for.
I've set up a little test. I have a simple JSON file 'mapping.json', which contains the following JSON object:
{
  "mapping": [
    { "file": "foo.txt" },
    { "file": "bar.txt" }
  ]
}
In my Gruntfile.js I've written the following simple test task, which reads the first object in the 'mapping'-array:
grunt.registerTask('doStuff', 'do some stuff.', function() {
  var mapping = grunt.file.readJSON('mapping.json');
  grunt.log.write(mapping.mapping[0]["file"]).ok();
});
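Applying this to the original question: once the mapping is loaded, it can be turned into the replacements array that grunt-text-replace expects. A sketch, assuming grunt-hash writes a flat { original: renamed } object (check your generated assets.json for the real shape; the file names below are made up):

```javascript
// Hypothetical mapping as grunt-hash might write it to examples/assets.json;
// in a Gruntfile you would load it with grunt.file.readJSON('examples/assets.json').
var mapping = { 'index.js': 'index-abc123.js', 'app.js': 'app-def456.js' };

// Build one {from, to} pair per renamed file for grunt-text-replace.
var replacements = Object.keys(mapping).map(function (original) {
  return { from: original, to: mapping[original] };
});
// `replacements` is now suitable for the replace task's `replacements` option.
```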
When invoking the Grunt task, the console output will be as follows:
$ grunt doStuff
Running "doStuff" task
foo.txtOK
Done, without errors.
I hope this helps! :)