I have the following YAML
- name: Core
  description: Core functionality
- name: Artifact
  description: Artifact management
# - $ref: "v1/publications.yml#/tags/"
v1/publications.yml has
tags:
  - name: Publication
    description: |
      This defines the publication API.
I sort of want the result to be
- name: Core
  description: Core functionality
- name: Artifact
  description: Artifact management
- name: Publication
  description: |
    This defines the publication API.
# - $ref: "v1/publications.yml#/tags/"
I can do it one at a time like this...
- name: Core
  description: Core functionality
- name: Artifact
  description: Artifact management
- $ref: "v1/publications.yml#/tags/0"
But I want it to add multiple without updating my source.
This is not possible with the technologies you tagged. $ref is exactly that: a reference to an external subtree. What you need is sequence concatenation, which neither JSON Reference nor plain YAML or JSON provides.
You may be able to do this using some templating technology, which many YAML-based utilities provide. If you are in control of the loading code, you can also implement custom tags to do something like
- name: Core
  description: Core functionality
- name: Artifact
  description: Artifact management
- !append {$ref: "v1/publications.yml#/tags"}
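As a sketch of the custom-tag route, here is a minimal PyYAML loader that resolves an !append-tagged $ref mapping by loading the referenced file and splicing the referenced sequence into the surrounding list. The tag name, the "#/path" fragment convention, and the splicing rule are assumptions carried over from the snippet above, not an existing library feature:

```python
# Sketch: resolve "!append {$ref: file#/pointer}" by splicing the referenced
# sequence into the enclosing list. Requires PyYAML.
import os
import yaml

def load_with_append(text, base_dir="."):
    class Loader(yaml.SafeLoader):
        pass

    def construct_append(loader, node):
        mapping = loader.construct_mapping(node)
        path, fragment = mapping["$ref"].split("#", 1)
        with open(os.path.join(base_dir, path)) as f:
            target = yaml.safe_load(f)
        for key in fragment.strip("/").split("/"):
            target = target[key]  # walk down to the referenced sequence
        return target

    Loader.add_constructor("!append", construct_append)
    result = []
    for item in yaml.load(text, Loader=Loader):
        if isinstance(item, list):   # produced by !append: splice it in
            result.extend(item)
        else:
            result.append(item)
    return result
```

With v1/publications.yml in place, calling load_with_append on the tags document returns a single flat list containing the Core, Artifact, and Publication entries.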
I'm having difficulties figuring out the syntax for triggering off of different event types.
For example, the following gives me a "duplicated mapping key" error on the second pull_request trigger.
on:
  pull_request:
    types: [opened, reopened]
    branches:
      - main
      - develop
  pull_request:
    types: [synchronize]
    branches:
      - main
      - develop
    paths: ['**.h', '**.cpp', '**.hpp', '**.yaml', '**CMakeLists.txt', '**Makefile', '**.spec', '**.py', '**Dockerfile', '**conanfile.txt']
I want the workflow to always run when first opened (or reopened) but subsequently when the branch is synchronized it should only run if the changes are in one of the specified file types.
To clarify, I already have on.push event hook that's not shown here for the sake of brevity.
I do believe I need a pull_request.synchronize event to handle updates.
Can't find anything in the documentation on how to do that. I tried combining the two pull_requests triggers but then I'm getting an error that the "types" key is being duplicated.
Any ideas?
The documentation does talk about triggering based on multiple events, but not multiple events of the same type, so it isn't entirely clear if this is possible (beyond the validation errors).
To make this work you need to define three different workflows: one for each event type with its filters, and a third containing the shared logic as a reusable workflow triggered by a workflow_call event.
#workflow-1
on:
  pull_request:
    types: [opened, reopened]
    branches:
      - main
      - develop

jobs:
  job:
    uses: ./.github/workflows/workflow-3.yml
#workflow-2
on:
  pull_request:
    types: [synchronize]
    branches:
      - main
      - develop
    paths: ['**.h', '**.cpp', '**.hpp', '**.yaml', '**CMakeLists.txt', '**Makefile', '**.spec', '**.py', '**Dockerfile', '**conanfile.txt']

jobs:
  job:
    uses: ./.github/workflows/workflow-3.yml
#workflow-3
on:
  workflow_call:

jobs:
  job:
    runs-on: ubuntu-latest
    steps:
      - run: do stuff
Context
A reusable workflow in a public repo may be called by appending a reference, which can be a SHA, a release tag, or a branch name, for example:
{owner}/{repo}/.github/workflows/{filename}@{ref}
GitHub's documentation states:
When a reusable workflow is triggered by a caller workflow, the github context is always associated with the caller workflow.
The problem
Since the github context is always associated with the caller workflow, the reusable workflow cannot access the reference, for example the tag v1.0.0. However, knowing the reference is important when the reusable workflow needs to check out the repository in order to use its composite actions.
Example
Assume that the caller workflow is executed from the main branch and calls ref v1.0.0 of a reusable workflow:
name: Caller workflow
on:
  workflow_dispatch:

jobs:
  caller:
    uses: owner/public-repo/.github/workflows/reusable-workflow.yml@v1.0.0
Here is the reusable workflow that uses a composite action:
name: reusable workflows
on:
  workflow_call:

jobs:
  first-job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3.1.0
        with:
          repository: owner/public-repo
          ref: ${{ github.ref_name }}
      - name: composite action
        uses: ./actions/my-composite-action
In the above code snippet, ${{ github.ref_name }} is main instead of v1.0.0, because the github context is always associated with the caller workflow. Therefore, the composite action's code is taken from main and not from v1.0.0, even though the caller asked for v1.0.0.
Hence my question: how is the reusable workflow able to access the reference given by the caller?
Is string manipulation possible in a YAML file?
The configuration file application.yml in a Spring Boot application reads the version from the POM file:
<properties>
  <revision>10.10.11</revision>
</properties>
The YAML file
logging:
  file:
    name: @revision@/app.log
The issue is how to remove the dots from the revision value, i.e. "10.10.11" → "101011", with something like
name: @revision@.replace('.', '')/app.log
so that the log file can be created in a folder whose name has no dots.
For the general case, you could use SpEL, which allows calling Java methods:
name: '#{"@revision@".replace(".", "")}'
You need the outer quotes to tell YAML that # does not start a comment, and to quote @revision@ so that SpEL interprets it as a String.
The problem is that this does not seem to work with logging.file.name, because that property is read by LoggingApplicationListener and LogFile, which do not appear to interpret SpEL.
It does not seem easy to customize this through Spring Boot configuration, but you could instead define your own listener (possibly based on the one above) to implement your own naming scheme.
The following question might also help: register custom log appender in spring boot starter
I understand that it is not possible to use a NumberOfInstances-style property in CloudFormation; I have used DesiredCapacity in AWS::AutoScaling::AutoScalingGroup. But I would like to know if there is any alternative, like using iteration inside the template,
or using custom scripts under user data to create identical instances.
Unfortunately, although the EC2 RunInstances API supports launching multiple EC2 instances (via the MaxCount/MinCount parameters), the AWS::EC2::Instance CloudFormation resource only allows you to create a single EC2 instance at a time (see also this forum post for confirmation from ChrisW@AWS on this limitation).
In addition, iteration inside the template is not possible using CloudFormation's Intrinsic Functions, so that is not an option either.
As an alternative, I would recommend using an intermediate template format and compiling it down to a CloudFormation template (JSON or YAML) with a preprocessor when greater expressive power is needed. You can use a full-featured library like troposphere, but it's also easy enough to code up your own basic preprocessing layer to suit your use case and programming-language/library preferences.
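For illustration, a hand-rolled preprocessing layer can be only a few lines. This hypothetical Python sketch builds the resource map in code and emits a CloudFormation-ready JSON template; the resource names and count are placeholders, and real instances would need properties such as ImageId:

```python
import json

# Hypothetical minimal preprocessor: generate N identical EC2 instance
# resources in code, then emit a CloudFormation-ready JSON template.
def make_template(count):
    resources = {
        "Instance%d" % i: {"Type": "AWS::EC2::Instance"}  # ...etc etc...
        for i in range(1, count + 1)
    }
    return {"Resources": resources}

print(json.dumps(make_template(5), indent=2))
```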
My current choice is embedded Ruby (ERB), mostly because I'm already familiar with it. Here's an example template.yml.erb file using iteration to generate CloudFormation YAML:
Resources:
<% (1..5).each do |i| -%>
  Instance<%= i %>:
    Type: AWS::EC2::Instance
    # ...etc etc...
<% end -%>
To process, run cat template.yml.erb | ruby -rerb -e "puts ERB.new(ARGF.read, nil, '-').result" > template.yml, which writes the following CloudFormation-ready template to template.yml:
Resources:
  Instance1:
    Type: AWS::EC2::Instance
    # ...etc etc...
  Instance2:
    Type: AWS::EC2::Instance
    # ...etc etc...
  Instance3:
    Type: AWS::EC2::Instance
    # ...etc etc...
  Instance4:
    Type: AWS::EC2::Instance
    # ...etc etc...
  Instance5:
    Type: AWS::EC2::Instance
    # ...etc etc...
I've used this technique to help manage large numbers of resources in complex CloudFormation stacks with good results.
What is an alternative to autotools in Haskell world? I want to be able to choose between different configurations of the same source code.
For example, there are at least two implementations of MD5 in Haskell: Data.Digest.OpenSSL.MD5 and Data.Digest.Pure.MD5. I'd like to write the code in such a way that it figures out which library is already installed and doesn't require the other to be installed.
In C I can use Autotools/Scons/CMake + cpp. In Python I can catch ImportError. Which tools should I use in Haskell?
In Haskell you use Cabal configurations. In your project's top-level directory, you put a file with the extension .cabal, e.g. <yourprojectname>.cabal. The contents look roughly like this:
Name:          myfancypackage
Version:       0.0
Description:   myfancypackage
License:       BSD3
License-file:  LICENSE
Author:        John Doe
Maintainer:    john@example.com
Build-Type:    Simple
Cabal-Version: >=1.4

Flag pure-haskell-md5
  Description: Choose the purely Haskell MD5 implementation
  Default: False

Executable haq
  Main-is: Haq.hs
  Build-Depends: base-4.*
  if flag(pure-haskell-md5)
    Build-Depends: pureMD5-0.2.*
  else
    Build-Depends: hopenssl-1.1.*
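If the Haskell source itself has to branch on the chosen implementation (e.g. different import lines), the flag branch would typically also set a CPP define. This goes beyond the answer above, which only switches dependencies, so treat it as a sketch:

```
  if flag(pure-haskell-md5)
    Build-Depends: pureMD5-0.2.*
    CPP-Options: -DPURE_MD5
  else
    Build-Depends: hopenssl-1.1.*
```

The source file can then enable the CPP extension and use #ifdef PURE_MD5 to pick the matching import.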
The Cabal documentation has more details, in particular the section on Configurations.
As nominolo says, Cabal is the tool to use, in particular the 'configurations' syntax.