GitVersion in GitHub Actions - Set a custom increment for release branches - github-actions

In my GitHub Actions CI workflow I am using GitVersion to determine the build versions. Specifically, I am using the action
gittools/actions/gitversion/execute@v0.9.15
What I would like to achieve is the following GitVersion behavior:
When I create a release branch release-1.3.3 from dev, the calculated version should be 1.3.3
When I merge a fix branch fix/some-fix into release-1.3.3, the calculated version should be 1.3.3-1
In other words, I want no pre-release tag when the release branch version is calculated the first time, and then a custom suffix that is incremented automatically with each merge, e.g. -1, -2, etc.
Is this possible? Right now I have configured a patch increment on the release branch, i.e. any commit to release-1.3.3 upticks the version to 1.3.4, 1.3.5, etc.
Please see my version config file that is used by gittools/actions/gitversion:
tag-prefix: 'demo\-'
mode: ContinuousDelivery
branches:
  dev:
    regex: ^dev(elop)?(ment)?$
    mode: ContinuousDeployment
    tag: "dev"
    increment: Patch
    prevent-increment-of-merged-branch-version: true
    track-merge-target: false
    source-branches: []
    tracks-release-branches: true
    is-release-branch: false
    is-mainline: true
    pre-release-weight: 0
  release:
    regex: ^releases?[/-]
    mode: ContinuousDelivery
    tag: ""
    increment: Patch
    prevent-increment-of-merged-branch-version: true
    track-merge-target: false
    source-branches: [ 'dev' ]
    tracks-release-branches: false
    is-release-branch: true
    is-mainline: false
    pre-release-weight: 30000
  feature:
    regex: ^features?[/-]
    mode: ContinuousDelivery
    tag: useBranchName
    increment: Inherit
    prevent-increment-of-merged-branch-version: false
    track-merge-target: false
    source-branches: [ 'dev', 'release', 'feature', 'fix' ]
    tracks-release-branches: false
    is-release-branch: false
    is-mainline: false
    pre-release-weight: 30000
  fix:
    regex: ^fix[/-]
    mode: ContinuousDelivery
    tag: useBranchName
    increment: Inherit
    prevent-increment-of-merged-branch-version: false
    track-merge-target: false
    source-branches: [ 'dev', 'release', 'feature', 'fix' ]
    tracks-release-branches: false
    is-release-branch: false
    is-mainline: false
    pre-release-weight: 30000
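One direction to experiment with (an untested sketch, not a confirmed answer): instead of a Patch increment, let the pre-release number grow on the release branch by switching it to ContinuousDeployment mode with increment: None, so 1.3.3 stays as taken from the branch name and merges only bump the suffix. Only the keys that already appear in the config above are used here:

  # sketch of a possible release block; only mode/tag/increment differ from the config above
  release:
    regex: ^releases?[/-]
    mode: ContinuousDeployment   # pre-release number increments per commit/merge
    tag: ""                      # no named pre-release label
    increment: None              # do not bump 1.3.3 to 1.3.4 on commits
    source-branches: [ 'dev' ]
    is-release-branch: true
    pre-release-weight: 30000

Whether the suffix then renders exactly as -1, -2 (rather than something like -ci.1 via GitVersion's fallback tag) depends on the GitVersion version and its global settings, so verify the calculated FullSemVer locally before relying on it.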

Related

Pages build fine locally but don't show when deployed to GitHub Pages

I'm trying to set up a Hugo site and I've created a blog post with the following front matter:
+++
title = "My First Post"
date = "2022-08-27T22:23:33-05:00"
author = ""
authorTwitter = "" # do not include @
tags = ["programming"]
keywords = ["twitter"]
description = ""
showFullContent = false
readingTime = true
hideComments = false
draft = false
+++
And here's my config.yaml
theme: terminal
languageCode: en-us
title: Hugo
baseURL: "https://thatnerduknow.github.io/"
params:
  contentTypeName: "posts"
  themeColor: "green"
  showMenuItems: 2
  centerTheme: true
  fullWidthTHeme: false
  autoCover: true
  showLastUpdated: true
  enableGitInfo: true
  readingTime: true
  Toc: true
  TocTitle: "Table of Contents"
menu:
  main:
    - identifier: about
      name: About
      url: /about/
    - identifier: tags
      name: Tags
      url: /tags/
When I run hugo on my machine, the site builds perfectly fine and my one post shows up. But when I deploy with GitHub Actions, no pages show up.
Here's my hugo.yml GitHub Actions workflow file:
# Sample workflow for building and deploying a Hugo site to GitHub Pages
name: Deploy Hugo site to Pages
on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write
# Allow one concurrent deployment
concurrency:
  group: "pages"
  cancel-in-progress: true
# Default to bash
defaults:
  run:
    shell: bash
jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.99.0
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_Linux-64bit.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb
      - name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v2
      - name: Build with Hugo
        env:
          # For maximum backward compatibility with Hugo modules
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        run: |
          hugo \
            --minify \
            --baseURL "${{ steps.pages.outputs.base_url }}/"
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: ./public
  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
As far as I can tell, everything is as it should be, so I'm at a loss as to why my pages aren't being built in GitHub Actions.
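Two things worth ruling out with this setup (diagnostic suggestions, not a confirmed fix): enableGitInfo: true works best with full git history, while actions/checkout clones shallowly by default, and a post whose date resolves to the future at build time (the -05:00 offset versus the runner's UTC clock) is skipped unless future builds are enabled. A sketch of the two corresponding tweaks in the build job:

      - name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
          fetch-depth: 0   # full history, useful when enableGitInfo is true
      - name: Build with Hugo
        env:
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        # --buildFuture is only a temporary diagnostic: if the post appears with it,
        # the post date/timezone is the culprit and the real fix is to adjust the date
        run: |
          hugo \
            --minify \
            --buildFuture \
            --baseURL "${{ steps.pages.outputs.base_url }}/"

If the page shows up with --buildFuture, drop the flag again and correct the post date instead of publishing future-dated content.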

Skip a task in GitHub Actions based on the value of a variable defined in another task

In a GitHub Actions workflow, I have a task that performs some checks and outputs a Boolean flag as a result. I would like to skip other tasks if the flag is false. I'm not sure about the syntax; the following does not work as expected.
name: test
jobs:
  test_job:
    name: Test
    runs-on: ubuntu-20.04
    steps:
      - name: Create flag
        id: create_flag
        run: |
          # run some checks and put results in FLAG
          # and make that available to other tasks.
          # "true" for true, and "false" for false (so string instead of boolean).
          echo "::set-output name=FLAG::$FLAG"
      - name: Run-me if FLAG is true
        if: ${{ steps.create_flag.outputs.FLAG }} == "true"
        run: |
          # some logic to run if flag==true
You need to put the comparison inside the expression and keep 'true' in single quotes, because step outputs are strings.
See the conversions section of the GitHub Actions expressions documentation.
Either of these forms works:
if: ${{ steps.create_flag.outputs.FLAG == 'true' }}
if: steps.create_flag.outputs.FLAG == 'true'
Here is an example run of your workflow:
https://github.com/grzegorzkrukowski/stackoverflow_tests/runs/5558400970?check_suite_focus=true
And source of the workflow:
https://github.com/grzegorzkrukowski/stackoverflow_tests/actions/runs/1988349309/workflow
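For reference, a minimal self-contained sketch of the same pattern using the newer $GITHUB_OUTPUT file instead of the deprecated ::set-output command; the step names and the hard-coded FLAG value are only illustrative:

name: flag-demo
on: workflow_dispatch
jobs:
  test_job:
    runs-on: ubuntu-latest
    steps:
      - name: Create flag
        id: create_flag
        run: |
          # run some checks, then publish the result as a step output (always a string)
          FLAG=true
          echo "FLAG=$FLAG" >> "$GITHUB_OUTPUT"
      - name: Run-me if FLAG is true
        if: steps.create_flag.outputs.FLAG == 'true'
        run: echo "FLAG was true, running the step"

The same if: expression works unchanged with set-output, since either way the output arrives as the string 'true' or 'false'.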

Request/response data when using the Taurus junit-xml module

I am trying to clean up the JMeter docker+CI pipeline of our functional tests. I see Taurus has a clean way to run JMeter scripts in a container, and it does the heavy lifting of downloading the version of JMeter I want and installing the plugins my scripts use - excellent.
Now I need to generate the reports in junit.xml so I can keep the reporting consistent. Up until now I was using a modified fork of https://github.com/tguzik/m2u to convert jtl reports to junit.xml.
I'd appreciate any help with how I can get the request and response data (code and body) for all samples, or at least the failed ones, into the junit.xml.
I tried a few variations of the Taurus YAML:
reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
- module: junit-xml
  filename: report/report.xml
  data-source: sample-labels

reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
- module: passfail
- module: junit-xml
  filename: report/report.xml
  data-source: pass-fail
I also added a few passfail criteria variations on the passfail module; that did not help.
After fiddling with this for a few hours, I believe there is no clean way to get anything meaningful into the junit.xml report from the junit-xml module in Taurus. It appears barebones. I also noticed that it can mess up the default Jenkins JUnit plugin test result summary.
So I settled on the following YAML settings and continued to use m2u.jar to convert the jtl to junit.xml:
modules:
  jmeter:
    path: ~/.bzt/jmeter-taurus/bin/jmeter
    download-link: https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-{version}.zip
    version: 5.3
    force-ctg: true
    detect-plugins: true
    plugins:
    - jpgc-json=2.2
    - jmeter-ftp
    - jpgc-casutg
    xml-jtl-flags:
      xml: true
      fieldNames: true
      time: true
      timestamp: true
      latency: true
      connectTime: false
      success: true
      label: true
      code: true
      message: true
      threadName: true
      dataType: false
      encoding: false
      assertions: true
      subresults: true
      responseData: false
      samplerData: false
      responseHeaders: false
      requestHeaders: true
      responseDataOnError: true
      saveAssertionResultsFailureMessage: true
      bytes: true
      threadCounts: false
      url: true
execution:
- write-xml-jtl: full
  scenario:
    script: v_jmxfilename
    properties:
      environment: v_env
reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
# - module: junit-xml
#   filename: report/junit-report.xml
#   data-source: sample-labels
As per the JUnit-XML Reporter documentation, this is currently not possible:
This reporter provides test results in JUnit XML format parseable by Jenkins JUnit Plugin. Reporter has two options:
filename (full path to report file, optional. By default xunit.xml in artifacts dir)
data-source (which data source to use: sample-labels or pass-fail)
If sample-labels used as source data, report will contain urls with test errors. If pass-fail used as source data, report will contain Pass/Fail criteria information. Please note that you have to place pass-fail module in reporters list, before junit-xml module.
Taurus is not only for JMeter; it supports many more tools, and not all of them provide the possibility to store request and response data, so the options I can think of are:
Add a Listener to your Test Plan and choose which metrics you need to store in a separate file; the easiest one to use is the Flexible File Writer
Use the ShellExec Service to run your m2u.jar from the Taurus config YAML (a sketch follows below)
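For the second option, a rough sketch of what the ShellExec service entry could look like in the Taurus YAML; the m2u.jar path and its command-line arguments are placeholders, adjust them to whatever the modified fork actually expects:

services:
- module: shellexec
  post-process:
  # placeholder command: convert the generated jtl into junit.xml after the test run finishes
  - java -jar /path/to/m2u.jar --input kpi.jtl --output report/junit-report.xml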

Jekyll server address is //

When running bundle exec jekyll serve, I get the following: Server address: http://0.0.0.0:4000//
Where does // come from in my config? How do I get rid of it to get http://0.0.0.0:4000/ ?
_config.yaml
url: https://blabla.com
source: .
destination: ./_site
plugins_dir: ./_plugins
layouts_dir: ./_layouts
include: ['.htaccess']
exclude: []
timezone: UTC+1
plugins: [jekyll-paginate]
# Show future posts
future: true
show_drafts: nil
limit_posts: 500
highlighter: rouge
relative_permalinks: false
permalink: pretty
paginate_path: 'posts/:num'
paginate: 5
markdown: kramdown
markdown_ext: kramdown, markdown,mkd,mkdn,md
textile_ext: textile
kramdown:
  input: GFM
  syntax_highlighter: rouge
excerpt_separator: "<!-- more -->"
safe: false
host: 0.0.0.0
port: 4000
baseurl: /
lsi: false
rdiscount:
  extensions: []
redcarpet:
  extensions: []
kramdown:
  auto_ids: true
  footnote_nr: 1
  entity_output: as_char
  toc_levels: 1..6
  smart_quotes: lsquo,rsquo,ldquo,rdquo
  enable_coderay: false
  input: GFM
  syntax_highlighter_opts:
    coderay:
      coderay_wrap: div
      coderay_line_numbers: inline
      coderay_line_numbers_start: 1
      coderay_tab_width: 4
      coderay_bold_every: 10
      coderay_css: style
redcloth:
  hard_breaks: true
#
# jekyll-assets: see more at https://github.com/ixti/jekyll-assets
# bundle exec jekyll serve
#
assets:
  dirname: assets
  baseurl: /assets/
  sources:
    - _assets/javascripts
    - _assets/stylesheets
    - _assets/images
    - _assets/fonts
  js_compressor: uglifier
  css_compressor: sass
  cachebust: none
  cache: true
  gzip: [ text/css, application/javascript ]
  debug: true
  compressors:
    uglifier:
      harmony: true
      compress:
        unused: false
Jekyll's default config has: baseurl: "" # the subpath of your site, e.g. /blog
Your baseurl must be empty, not / - the double slash in the server address comes from appending baseurl: / to http://0.0.0.0:4000/.
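In _config.yaml that means (matching Jekyll's documented default):

# leave baseurl empty unless the site lives in a subpath such as /blog
baseurl: ""

After restarting bundle exec jekyll serve, the server address should be reported as http://0.0.0.0:4000/.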

Incorporate JSON into YAML with Indentation

I'm trying to incorporate a JSON into a YAML file.
The YAML looks like this:
filebeat.inputs:
- type: log
  <incorporate here with a single level indent>
  enabled: true
  paths:
Suppose you have the following variable:
a = { processors: { drop_event: { when: { or: [ {equals: { status: 500 }},{equals: { status: -1 }}]}}}}
I want to incorporate it into an existing YAML.
I've tried to use:
JSON.parse((a).to_json).to_yaml
After applying this, I got valid YAML, but without the required indentation (every inserted line needs to be indented one level) and with a leading "---", which is the YAML document-start marker that Ruby's serializer adds.
The result:
filebeat.inputs:
- type: log
---
processors:
  drop_event:
    when:
      or:
      - equals:
          status: 500
      - equals:
          status: -1
  enabled: true
The result I'm looking for:
filebeat.inputs:
- type: log
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1
  enabled: true
It's easier to produce a valid Ruby object by merging hashes and then serializing the result to YAML than to splice text into existing YAML.
require 'json'
require 'yaml'

# `a` is the hash from the question; `yaml` is assumed to be the already-parsed
# array under the "filebeat.inputs" key of the config (e.g. via YAML.load_file)
puts(yaml.map do |hash|
  hash.each_with_object({}) do |(k, v), acc|
    # the trick: we insert before the "enabled" key
    acc.merge!(JSON.parse(a.to_json)) if k == "enabled"
    # regular assignment for all hash elements
    acc[k] = v
  end
end.to_yaml)
Results in:
---
- type: log
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1
  enabled: true
JSON.parse(a.to_json) is just a quick round-trip that converts the hash's symbol keys to strings.
In order to do that, first you need to load your original YAML into a Ruby structure:
original = YAML.load(File.read(File.join('...', 'filebeat.inputs')))
# => [
  {
    "type": "log",
    "enabled": true,
    "paths": null
  }
]
Then you have to merge your hash into this original variable (stringify_keys comes from ActiveSupport):
original[0].merge!(a.stringify_keys)
original.to_yaml
# =>
---
-
  type: log
  enabled: true
  paths:
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1