Regex in static_configs in a Prometheus job configuration

I have a Prometheus job like this:
- job_name: blackbox-http
  metrics_path: /probe
  params:
    module: [http_2xx]
  static_configs:
    - targets:
        - http://prometheus.io
        - https://prometheus.io
        - http://example.com:8080
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9115
Now my question is: can I use regexes in the targets of the static_configs section?
For example, something like this:
- job_name: blackbox-http
  metrics_path: /probe
  params:
    module: [http_2xx]
  static_configs:
    - targets:
        - http[s]?://prometheus.io
        - http://example.com:8080/*?
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9115
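As a side note (my own hedged reading, not part of the original question): Prometheus treats static_configs targets as literal strings, so entries like http[s]?://prometheus.io would be passed to the blackbox exporter verbatim rather than expanded as patterns. The usual workaround is to list each variant explicitly, or to generate the target list outside Prometheus and load it with file-based service discovery; a minimal sketch, where the file path is only an illustration:

- job_name: blackbox-http
  metrics_path: /probe
  params:
    module: [http_2xx]
  file_sd_configs:
    - files:
        - /etc/prometheus/blackbox-targets.yml   # regenerated by an external script or templating step
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9115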

Related

github action: init services container port with "runner" context

I'm writing a GitHub workflow file that uses (Docker container) services.
I tried to set the port of the service container using the running actions-runner's name, i.e. ${{ runner.name }}.
My workflow file looks like the one below. I'm using self-hosted runners and ex is the label of my runner. The actions-runner service is running on my Linux server; the actions-runner version is actions-runner-linux-x64-2.298.2.tar.gz.
# example-job.yml
name: EXAMPLE JOB
'on':
  push:
    branches:
      - develop
      - master
  pull_request:
    types:
      - opened
      - synchronize
jobs:
  ...
  test-chunks:
    ...
    runs-on: [self-hosted, ex]
    name: test-chunk-${{ matrix.ci_node_index }}
    strategy:
      fail-fast: false
      matrix:
        ci_node_total: [10]
        ci_node_index: [0,1,2,3,4,5,6,7,8,9]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      - name: Echo runner context
        run: echo "${{ toJSON(runner) }}"
      ...
    services:
      db:
        image: 'mdillon/postgis:11-alpine'
        env:
          POSTGRES_DB: test
          POSTGRES_USER: foo
          POSTGRES_PASSWORD: foo
        options: >-
          --health-cmd pg_isready --health-interval 10s --health-timeout 5s
          --health-retries 5
        ports:
          - '543${{ runner.name }}:5432'
    ...
When I run the GitHub action, it immediately fails with the error The workflow is not valid. .github/workflows/example-job.yml (Line: 124, Col: 13): Unrecognized named-value: 'runner'. Located at position 1 within expression: runner.name.
I've set the port like '543${{ matrix.ci_node_index }}:5432' and it worked fine. I don't know why the "runner" context doesn't work properly.
Or is there a way to run a Linux shell script when initializing the port number?
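For what it's worth, a hedged observation: the runner context is only populated once the job has been assigned to a runner, whereas services (including their port mappings) are evaluated when the workflow is parsed, which would explain the "Unrecognized named-value: 'runner'" error. A shell step can't help here either, because service containers are started before any of the job's steps run. A minimal sketch that stays with what the question already confirmed works, deriving the host port from the matrix context instead:

    services:
      db:
        image: 'mdillon/postgis:11-alpine'
        ports:
          - '543${{ matrix.ci_node_index }}:5432'   # matrix is available at parse time, runner is not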

EKS pod in workspace getting default scheduler for Fargate

Here is the issue I am running into:
I am creating the cluster using eksctl create cluster --name abc_name --profile profile_aws_creds
Once the cluster is created, I am creating the namespace using kubectl create namespace airflow-dev
In this namespace I am using Helm to install Flux: helm upgrade -i flux fluxcd/flux --set git.url=https://github.com/******/airflow-eks-config.git -n airflow-dev
When I look at the pods in the namespace, they are always in the Pending state.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/restartedAt: "2022-10-04T10:41:54-04:00"
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2022-10-04T14:41:54Z"
  generateName: flux-596f88f8b5-
  labels:
    app: flux
    pod-template-hash: 596f88f8b5
    release: flux
  name: flux-596f88f8b5-9jglf
  namespace: airflow-dev
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: flux-596f88f8b5
    uid: 5e86a479-96ca-46be-bec5-99a6ca64cb7b
  resourceVersion: "11948"
  uid: 672f236f-3a5e-4a40-a7f3-6b371e7434fc
spec:
  containers:
  - args:
    - --log-format=fmt
    - --ssh-keygen-dir=/var/fluxd/keygen
    - --ssh-keygen-format=RFC4716
    - --k8s-secret-name=flux-git-deploy
    - --memcached-hostname=flux-memcached
    - --sync-state=git
    - --memcached-service=
    - --git-url=https://github.com/****/airflow-eks-config.git
    - --git-branch=master
    - --git-path=
    - --git-readonly=false
    - --git-user=Weave Flux
    - --git-email=support@weave.works
    - --git-verify-signatures=false
    - --git-set-author=false
    - --git-poll-interval=5m
    - --git-timeout=20s
    - --sync-interval=5m
    - --git-ci-skip=false
    - --automation-interval=5m
    - --registry-rps=200
    - --registry-burst=125
    - --registry-trace=false
    env:
    - name: KUBECONFIG
      value: /root/.kubectl/config
    image: docker.io/fluxcd/flux:1.25.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /api/flux/v6/identity.pub
        port: 3030
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: flux
    ports:
    - containerPort: 3030
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /api/flux/v6/identity.pub
        port: 3030
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /root/.kubectl
      name: kubedir
    - mountPath: /etc/fluxd/ssh
      name: git-key
      readOnly: true
    - mountPath: /var/fluxd/keygen
      name: git-keygen
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-srqn4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: flux
  serviceAccountName: flux
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: flux-kube-config
    name: kubedir
  - name: git-key
    secret:
      defaultMode: 256
      secretName: flux-git-deploy
  - emptyDir:
      medium: Memory
    name: git-keygen
  - name: kube-api-access-srqn4
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-04T14:41:54Z"
    message: '0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type:
      fargate}, that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
As you can see above, the pods are never scheduled, and the scheduler name is default-scheduler. When I do the same without deploying Flux to a namespace (meaning deploying to default), the schedulerName is fargate-scheduler and the pod starts up.
Any thoughts on what is being done incorrectly?
Thanks
In Fargate the fargate-profile needs to be created first. Once this is created, the namespace can be created, and everything works.
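A minimal sketch of what that could look like with eksctl, using the cluster, namespace and credentials-profile names from the question and a Fargate-profile name chosen only for illustration:

# create a Fargate profile that selects the airflow-dev namespace,
# so pods created there are picked up by fargate-scheduler instead of default-scheduler
eksctl create fargateprofile \
  --cluster abc_name \
  --name fp-airflow-dev \
  --namespace airflow-dev \
  --profile profile_aws_creds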

How do you properly use Azure DevOps functions in a variable template?

How can I use the Azure DevOps counter function in a variable template?
Up until now, I have been using the counter function to set a variable in a pipeline and the value has been set as expected - it started at 1 and has incremented every time I run the pipeline.
Variable template - /Variables/variables--code--build-and-deploy-function-app.yml
variables:
  - name: major
    value: '1'
  - name: minor
    value: '0'
  - name: patch
    value: $[counter(format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']), 1)]
  - name: branch
    ${{ if eq( variables['Build.SourceBranchName'], 'master' ) }}:
      value: ''
    ${{ if ne( variables['Build.SourceBranchName'], 'master' ) }}:
      value: -${{ variables['Build.SourceBranchName'] }}
However, after moving the exact same variables into a variable template, the value of counter is just the literal text specified in the template.
Digging further into the documentation for templates, I found some words on template expression functions, together with an example of how to use a function -
You can use general functions in your templates. You can also use a few template expression functions.
Given that counter is listed on the page that the link above refers to, I assumed I would be able to use it. However, no matter what I've tried, I can't get it to work. Here are a few examples -
${{ counter('${{ format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']) }}', 1) }}
${{ counter(format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']), 1) }}
$[counter('${{ format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']) }}', 1)]
What am I doing wrong?
Update
My variable template is as above, and here is how I use it in the pipeline -
pr: none
trigger: none
variables:
  - template: ../Variables/variables--code--build-and-deploy-function-app.yml
name: ${{ variables.major }}.${{ variables.minor }}.${{ variables.patch }}${{ variables.branch }}
The expanded pipeline obtained from logs after a run is as follows -
pr:
  enabled: false
trigger:
  enabled: false
variables:
- name: major
  value: 1
- name: minor
  value: 0
- name: patch
  value: $[counter(format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']), 1)]
- name: branch
  value: -CS-805
name: 1.0.$[counter(format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']), 1)]-CS-805
As can be seen from the expanded pipeline, the patch variable isn't being evaluated, resulting in the name containing the literal value.
I inserted the same variables into the template and the patch variable works as expected. It seems that your counter is correct.
Here is my sample; you could refer to it:
Template YAML: build.yml
variables:
  - name: major
    value: '1'
  - name: minor
    value: '0'
  - name: patch
    value: $[counter(format('{0}.{1}-{2}', variables['major'], variables['minor'], variables['Build.SourceBranchName']), 1)]
Azure-Pipelines.yml
name: $(patch)
trigger:
- none
variables:
  - template: build.yml
pool:
  vmImage: 'windows-latest'
steps:
- script: echo $(patch)
  displayName: 'Run a one-line script'
To make the result more intuitive, I set the patch variable as the build name.
Here is the result:
Update:
Testing with $(varname), it works as expected.
trigger:
- none
variables:
  - template: build.yml
name: $(major).$(minor)-$(patch)$(branch)
Result:
$(varname) is macro syntax, which is expanded at runtime, just before a task executes.
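As a rough rule of thumb (my own summary, not part of the original answer): ${{ }} template expressions are resolved at compile time, $[ ] runtime expressions when the run starts, and $( ) macro syntax just before a task executes, which is why the counter value only shows up once it is consumed via $( ). A small sketch, with the final step purely illustrative:

trigger:
- none
variables:
  - template: build.yml
name: $(major).$(minor)-$(patch)$(branch)   # macro syntax, expanded at run time
pool:
  vmImage: 'windows-latest'
steps:
- script: echo "compile-time major=${{ variables.major }}, runtime patch=$(patch)"
  displayName: 'Compare compile-time and runtime values'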

What does attributeRestrictions mean in Openshift Container platform object schema definition?

apiVersion:
kind:
spec:
status:
  evaluationError:
  rules:
  - apiGroups:
    attributeRestrictions:
      Raw:
    nonResourceURLs:
    resourceNames:
    resources:
    verbs:

ADD / REMOVE a backend block within k8s ingress manifest with JQ (YQ)

I have a Kubernetes Ingress manifest YAML that looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt
  name: staging
  namespace: dev
spec:
  rules:
  - host: staging.domain.com
    http:
      paths:
      - backend:
          serviceName: task-11111
          servicePort: 80
        path: /task-11111/*
      - backend:
          serviceName: task-22222
          servicePort: 80
        path: /task-22222/*
      - backend:
          serviceName: task-33333
          servicePort: 80
        path: /task-33333/*
  tls:
  - hosts:
    - staging.domain.com
    secretName: staging-domain-com
What I'm trying to achieve is to add (if not present) or remove (if present) a backend block. What I have now:
yq -y '.spec.rules[].http.paths += [{ "backend": { "serviceName": "'${CI_COMMIT_REF_NAME}'", "servicePort": 80}, "path": "/'${CI_COMMIT_REF_NAME}'/*"}]'
(adds a new block with the variable value, but doesn't check whether it already exists)
yq -y 'del(.. | .paths? // empty | .[] | select(.path |contains("'${CI_COMMIT_REF_NAME}'")) )'
(fails with jq: error (at <stdin>:0): Invalid path expression with result {"backend":{"serviceName":...)
So rules may look like this after deletion (assume that CI_COMMIT_REF_NAME = task-33333):
spec:
  rules:
  - host: staging.domain.com
    http:
      paths:
      - backend:
          serviceName: task-11111
          servicePort: 80
        path: /task-11111/*
      - backend:
          serviceName: task-22222
          servicePort: 80
        path: /task-22222/*
or like this after adding (assume that CI_COMMIT_REF_NAME = task-44444):
spec:
  rules:
  - host: staging.domain.com
    http:
      paths:
      - backend:
          serviceName: task-11111
          servicePort: 80
        path: /task-11111/*
      - backend:
          serviceName: task-22222
          servicePort: 80
        path: /task-22222/*
      - backend:
          serviceName: task-33333
          servicePort: 80
        path: /task-33333/*
      - backend:
          serviceName: task-44444
          servicePort: 80
        path: /task-44444/*
Any help is greatly appreciated.
[The following has been revised to reflect the update to the question.]
Assuming CI_COMMIT_REF_NAME is available to jq as $CI_COMMIT_REF_NAME, which can be done using jq with the command-line argument:
--arg CI_COMMIT_REF_NAME "$CI_COMMIT_REF_NAME"
an appropriate jq filter would be along the following lines:
.spec.rules[0].http.paths |=
  ( map(select(.path | index($CI_COMMIT_REF_NAME) | not)) as $new
    | if ($new | length) == length
      then . + [{ "backend": { "serviceName": $CI_COMMIT_REF_NAME, "servicePort": 80 }, "path": ("/" + $CI_COMMIT_REF_NAME + "/*") }]
      else $new
      end )
You can test this with the following jq invocation:
jq --arg CI_COMMIT_REF_NAME task-44444 -f program.jq input.json
where of course input.json is the JSON version of your YAML.
(I'd use index in preference to contains if at all possible.)
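If you prefer to stay in YAML end to end, a hedged sketch (assuming the Python yq wrapper used in the question, which forwards options such as --arg and -f to jq, and a hypothetical filter file program.jq holding the filter above):

# round-trip the Ingress manifest through the jq filter, keeping YAML output (-y)
yq -y --arg CI_COMMIT_REF_NAME "$CI_COMMIT_REF_NAME" -f program.jq ingress.yaml > ingress.new.yaml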