Use yq to substitute strings in a YAML file

I am trying to substitute all occurrences of a placeholder string in a YAML file with yq.
File:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: <SOME_NAME>
  name: <SOME_NAME>
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: <SOME_NAME>
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: <SOME_NAME>
    spec:
      serviceAccountName: api
      containers:
      - image: some-docker-repo/<SOME_NAME>:latest
Right now I am using command like this:
yq e '
.metadata.labels.app = "the-name-to-use" |
.metadata.name = "the-name-to-use" |
.spec.selector.matchLabels.app = "the-name-to-use" |
.spec.template.metadata.labels.app = "the-name-to-use" |
.spec.template.spec.containers[0].image |= sub("<SOME_NAME>", "the-name-to-use")
' template.yaml > result.yaml
But I am sure it can be done as a one-liner. I tried using different variations of
yq e '.[] |= sub("<SOME_NAME>", "the-name-to-use")' template.yaml > result.yaml
but I am getting error like
Error: cannot substitute with !!map, can only substitute strings. Hint: Most often you'll want to use '|=' over '=' for this operation.
Can you please suggest where I might have missed the point?
As an extra request, how would it look if there were 2 substitutions in the template file?
E.g. <SOME_NAME_1> and <SOME_NAME_2> that need to be substituted with some_var_1 and some_var_2 respectively.

The trick is to use '..' to match all the nodes, then filter to include only strings:
yq e '(.. | select(tag == "!!str")) |= sub("<SOME_NAME>", "the-name-to-use")' template.yaml
Disclosure: I wrote yq


How to pass param file when applying a new yaml

I am on OpenShift 4.6. I want to pass a parameter to a YAML file, so I tried the below command, but it threw an error:
oc apply -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param master-node = oc get nodes -o name | grep "master-0" | cut -d'/' -f2
Error: unknown flag --param
What you most likely want to use is OpenShift Templates. Using Templates you can have variables in your YAML files and then change them using oc process.
So your YAML would look like so:
kind: Template
apiVersion: v1
metadata:
  name: my-template
objects:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: pi
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: "Replace"
    startingDeadlineSeconds: 200
    suspend: true
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              parent: "cronjobpi"
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(${{DIGITS}})"]
            restartPolicy: OnFailure
parameters:
- name: DIGITS
  displayName: Number of digits
  description: Digits to compute
  value: 200
  required: true
Then you can use oc process like so:
oc process -f my-template.yml --param=DIGITS=300 | oc apply -f -
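To feed the dynamic value from the original question into a parameter like this, command substitution works. A sketch, assuming the template defines a MASTER_NODE parameter (hypothetical name) and that you are logged in to a cluster:

```shell
# Resolve the node name first, then hand it to oc process as a parameter
oc process -f my-template.yml \
  --param=MASTER_NODE="$(oc get nodes -o name | grep "master-0" | cut -d'/' -f2)" \
  | oc apply -f -
```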

Helm3 - Reading json file into configmap produces a string?

Problem:
I want to read a json file into a configmap so it looks like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
    {
      "key": "val"
    }
Instead I get
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
    "{\r\n \"key\": \"val\"\r\n}"
What I've done:
I have the following helm chart:
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2020-02-06 10:51 AM static
d----- 2020-02-06 10:55 AM templates
-a---- 2020-02-06 10:51 AM 88 Chart.yaml
static/ contains a single file: test.json:
{
  "key": "val"
}
templates/ contains a single configmap that reads test.json: test.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
{{ toJson ( .Files.Get "static/test.json" ) | indent 4 }}
When I run helm install test . --dry-run --debug I get the following output
NAME: test
LAST DEPLOYED: Thu Feb 6 10:58:18 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
{}
HOOKS:
MANIFEST:
---
# Source: sandbox/templates/test.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
    "{\r\n \"key\": \"val\"\r\n}"
The problem here is my json is wrapped in double quotes. My process that wants to read the json is expecting actual json, not a string.
This behavior is not specific to Helm 3; Kubernetes generally works this way.
I've just tested it on Kubernetes v1.13.
First I created a ConfigMap based on this file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
    {
      "key": "val"
    }
When I run:
$ kubectl get configmaps json-test -o yaml
I get the expected output:
apiVersion: v1
data:
  test.json: |-
    {
      "key": "val"
    }
kind: ConfigMap
metadata:
  ...
but when I created my ConfigMap based on a json file with the following content:
{
  "key": "val"
}
by running:
$ kubectl create configmap json-configmap --from-file=test-json.json
Then when I run:
kubectl get cm json-configmap --output yaml
I get:
apiVersion: v1
data:
  test-json.json: " { \n \"key\": \"val\"\n } \n"
kind: ConfigMap
metadata:
  ...
So it looks like it's pretty normal for Kubernetes to transform the original JSON into a string when a ConfigMap is created from a file.
It doesn't seem to be a bug, as kubectl has no problem extracting properly formatted JSON from such a ConfigMap:
kubectl get cm json-configmap -o jsonpath='{.data.test-json\.json}'
gives the correct output:
{
"key": "val"
}
I would say it is the application's responsibility to extract the JSON from such a string, and it can probably be done in many different ways, e.g. making a direct call to kube-api or using a serviceaccount configured to use kubectl in the Pod.
I had the same problem with the format, also when creating the ConfigMap from a YAML file. The fix for me was to remove all trailing whitespace.
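For the Helm template in the question specifically, the quoting comes from toJson: .Files.Get already returns the file contents as a plain string, and toJson re-encodes that string as a JSON string literal, which is where the surrounding quotes and backslashes come from (the \r\n also suggests the file has Windows line endings). A sketch of the template with toJson dropped:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
{{ .Files.Get "static/test.json" | indent 4 }}
```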

How to embed JSON string as the value in a Kubernetes Secret

Part of our orchestration uses envsubst to update a YAML template file with our desired values.
envsubst < "${SECRET_TEMPLATE}" | kubectl apply -f -
The value for our keyword config is a JSON string:
data=$(jq -c . ${JSON_FILE})
This results in YAML that looks like this (trimmed for brevity):
apiVersion: v1
kind: Secret
metadata:
  name: reporting-config
type: Opaque
data:
  config: {"database": "foo"}
This apparently worked in some earlier versions of Kube, I wanna say 1.8. Anyways, we are running 1.15 and now kubectl interprets this as a map type and complains:
error: error validating "STDIN": error validating data: ValidationError(Secret.data.config): invalid type for io.k8s.api.core.v1.Secret.data: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
Is there a trick to doing this now? I've played around with quoting in various places, escaping quotes, and all that jazz, and nada.
* update 1 *
Using stringData still results in the same error:
apiVersion: v1
kind: Secret
metadata:
  name: monsoon-storage-reporting-config
type: Opaque
stringData:
  config: {"database": "foo"}
error: error validating "STDIN": error validating data: ValidationError(Secret.stringData.config): invalid type for io.k8s.api.core.v1.Secret.stringData: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
You may use stringData, but the JSON value must be quoted; otherwise YAML parses the braces as a nested mapping, which is exactly the got "map", expected "string" error above:
apiVersion: v1
kind: Secret
metadata:
  name: monsoon-storage-reporting-config
type: Opaque
stringData:
  config: '{"database": "foo"}'
I had to base64 encode the value. Note the single quotes and echo -n: without the quotes the shell strips the inner double quotes before echo sees them, and without -n a trailing newline gets encoded too.
$ echo -n '{"database": "foo"}' | base64
eyJkYXRhYmFzZSI6ICJmb28ifQ==
and then use the base64 encoded value in the data: field
apiVersion: v1
kind: Secret
metadata:
  name: reporting-config
type: Opaque
data:
  config: eyJkYXRhYmFzZSI6ICJmb28ifQ==
Also note this on base64 encoding:
When using the base64 utility on Darwin/macOS users should avoid using the -b option to split long lines. Conversely Linux users should add the option -w 0 to base64 commands or the pipeline base64 | tr -d '\n' if -w option is not available.
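A quick way to sanity-check the encoding from a shell (plain POSIX tools; printf avoids the trailing newline that a bare echo would add):

```shell
# Encode the JSON exactly, with no trailing newline
enc=$(printf '%s' '{"database": "foo"}' | base64 | tr -d '\n')
echo "$enc"    # prints eyJkYXRhYmFzZSI6ICJmb28ifQ==

# Decode it again to confirm the round trip
printf '%s' "$enc" | base64 -d
```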

multi-line yaml input to mustache template outputs as JSON

I have the following shell snippet
inputs="ingress_test_inputs.yaml"
auth_annotations="    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'"
echo "---" >$inputs
echo "namespace: qa" >> $inputs
echo "auth_annotations: ${auth_annotations}" >> $inputs
echo "----- Ingress inputs (${inputs}) -----"
cat $inputs
echo 'apiversion: extenstions/v1beta
kind: Ingress
metadata:
  name: aname
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Content-Security-Policy "frame-ancestors 'self'";
{{{auth_annotations}}}
spec:
  rules:
  - host: bla-bla-bla.{{namespace}}.example.com' >ingress.mustache
echo "----- Raw Ingress (ingress.mustache): -----"
cat ingress.mustache
mustache $inputs ingress.mustache > ingress-1.0.yaml
echo "----- Will apply the following ingress: -----"
cat ingress-1.0.yaml
However, when I run this the output for auth_annotations seems to be converted into JSON format (with => in between elements and comma at the end) like this (see the line before spec: ) ...
----- Ingress inputs (ingress_test_inputs.yaml) -----
---
namespace: qa
auth_annotations:    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
----- Raw Ingress (ingress.mustache): -----
apiversion: extenstions/v1beta
kind: Ingress
metadata:
  name: aname
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Content-Security-Policy "frame-ancestors self";
{{{auth_annotations}}}
----- Will apply the following ingress: -----
apiversion: extenstions/v1beta
kind: Ingress
metadata:
  name: aname
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Content-Security-Policy "frame-ancestors self";
{"nginx.ingress.kubernetes.io/auth-type"=>"basic", "nginx.ingress.kubernetes.io/auth-secret"=>"basic-auth", "nginx.ingress.kubernetes.io/auth-realm"=>"Authentication Required - foo"}
I would have expected my original YAML to be pasted into those lines intact. It even strips off the comments (which I don't really care about); however, this is not the behaviour I was expecting. Why does mustache treat multi-line input differently from single-line input?
I have tried searching for a similar question, but have not been able to come up with an answer.
EDIT: Added a single line variable, for comparison of inputs.
There are several problems here:
Mustache has no idea whatsoever that ingress.mustache is a YAML file. It will not parse it as YAML and in fact assumes it is HTML (because it will escape HTML special chars with HTML entities).
The problem that arises from this is that mustache is not aware of current indentation etc when it is prompted to insert your value, so it is unable to paste your YAML while keeping the relative structure intact by adding to the original indentation in your $inputs.
Mustache uses YAML for input, but not for output. What you see is mustache's representation of the auth_annotations values. Mustache has parsed the YAML into an internal structure and has no idea that you want it to be rendered as YAML.
The problem is not single-line vs multi-line, but simple content (scalar) vs complex content (mapping).
To be able to properly indent your variables, you have to walk over the structure with mustache and insert the pieces via a section. However, mustache cannot iterate over mapping values, only over sequences. So what you need to do is make your input a sequence that mustache can iterate over:
auth_annotations="    # type of authentication
    - {key: nginx.ingress.kubernetes.io/auth-type, value: basic}
    # name of the secret that contains the user/password definitions
    - {key: nginx.ingress.kubernetes.io/auth-secret, value: basic-auth}
    # message to display with an appropriate context why the authentication is required
    - {key: nginx.ingress.kubernetes.io/auth-realm, value: 'Authentication Required - foo'}"
Then, you can insert it by modifying your template, using a mustache section to iterate over your list:
echo 'apiversion: extenstions/v1beta
kind: Ingress
metadata:
  name: aname
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Content-Security-Policy "frame-ancestors 'self'";
{{#auth_annotations}}
    {{key}}: {{value}}
{{/auth_annotations}}
spec:
  rules:
  - host: bla-bla-bla.{{namespace}}.example.com' >ingress.mustache
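With that list-shaped input, the section renders one annotation line per item, so the annotations block should come out roughly as follows (a sketch of the expected rendering, not captured output):

```yaml
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required - foo
```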
Here's a stripped down version of the script, which demonstrates the problem.
inputs="ingress_test_inputs.yaml"
auth_annotations="
    foo: bar baz
    sam: jam man"
echo "namespace: qa" > $inputs
echo "auth_annotations: ${auth_annotations}" >> $inputs
echo "----- Ingress inputs (${inputs}) -----"
cat $inputs
echo '---
metadata:
  annotations:
    nginx/thing:
      another_thing:
{{{auth_annotations}}}
spec:
  rules:
  - host: bla-bla-bla.{{{ namespace }}}.example.com' >ingress.mustache
echo "----- Raw Ingress (ingress.mustache): -----"
cat ingress.mustache
mustache $inputs ingress.mustache > ingress-1.0.yaml
echo "----- Will apply the following ingress: -----"
cat ingress-1.0.yaml
This gives the same problem as the original question that I posted.
----- Ingress inputs (ingress_test_inputs.yaml) -----
namespace: qa
auth_annotations:
    foo: bar baz
    sam: jam man
----- Raw Ingress (ingress.mustache): -----
---
metadata:
  annotations:
    nginx/thing:
      another_thing:
{{{auth_annotations}}}
spec:
  rules:
  - host: bla-bla-bla.{{{ namespace }}}.example.com
----- Will apply the following ingress: -----
---
metadata:
  annotations:
    nginx/thing:
      another_thing:
{"foo"=>"bar baz", "sam"=>"jam man"}
spec:
  rules:
  - host: bla-bla-bla.qa.example.com
However, if I replace "mustache" with "mo", the bash implementation, it works:
----- Ingress inputs (ingress_test_inputs.yaml) -----
namespace: qa
auth_annotations:
    foo: bar baz
    sam: jam man
----- Raw Ingress (ingress.mustache): -----
---
metadata:
  annotations:
    nginx/thing:
      another_thing:
{{{auth_annotations}}}
spec:
  rules:
  - host: bla-bla-bla.{{{ namespace }}}.example.com
----- Will apply the following ingress: -----
namespace: qa
auth_annotations:
    foo: bar baz
    sam: jam man
---
metadata:
  annotations:
    nginx/thing:
      another_thing:
spec:
  rules:
  - host: bla-bla-bla..example.com
To my mind this suggests a bug in the Ruby gem, as it doesn't handle YAML correctly when it's part of the output.

Kubernetes configmap in json format error

I am trying to mount a file, probes.json, into an image. I started with trying to create a ConfigMap similar to my probes.json file by manually specifying the values.
However, when I apply the replication controller, I am getting an error.
How should I pass my JSON file to my ConfigMap / how can I specify my values in the data parameter?
I tried the below steps; however, I got an error.
$ cat probes.json
[
  {
    "id": "F",
    "url": "http://frontend.stars:80/status"
  },
  {
    "id": "B",
    "url": "http://backend.stars:6379/status"
  },
  {
    "id": "C",
    "url": "http://client.stars:9000/status"
  }
]
Configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-vol-config
  namespace: stars
data:
  id: F
  id: B
  id: C
  F: |
    url: http://frontend.stars:80/status
  B: |
    url: http://backend.stars:6379/status
  C: |
    url: http://client.stars:9000/status
ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: management-ui
  namespace: stars
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: management-ui
    spec:
      containers:
      - name: management-ui
        image: calico/star-collect:v0.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9001
        volumeMounts:
          name: config-volume
          - mountPath: /star/probes.json
      volumes:
      - name: config-volume
        configMap:
          name: my-vol-config
Error:
kubectl apply -f calico-namespace/management-ui.yaml
service "management-ui" unchanged
error: error converting YAML to JSON: yaml: line 20: did not find expected key
This part is the problem; the - should be on the name: line, the first entry under volumeMounts:
volumeMounts:
  name: config-volume
  - mountPath: /star/probes.json
Like so:
volumeMounts:
- name: config-volume
  mountPath: /star/probes.json
I wanted to add more points that I learned today.
Mounting a file using the below code will hide any existing files under the directory (in this case the star directory) in the container:
volumeMounts:
- name: config-volume
  mountPath: /star/probes.json
To solve it, we should use subPath:
volumeMounts:
- name: "config-volume"
  mountPath: "/star/probes.json"
  subPath: "probes.json"
Instead of fiddling with how to pass key-value pairs into data, try passing the JSON file as-is, and remember to specify the namespace while creating the ConfigMap.
In my example I have probes.json, and I passed it as such without adding each value to data. I used the below command to create my ConfigMap:
kubectl create configmap config --namespace stars --from-file calico-namespace/probes.json
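Created that way, the entire file ends up under a single key named after the file; the resulting object should look roughly like this (a sketch of the expected shape, not verbatim kubectl output):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: stars
data:
  probes.json: |
    [
      {
        "id": "F",
        "url": "http://frontend.stars:80/status"
      },
      {
        "id": "B",
        "url": "http://backend.stars:6379/status"
      },
      {
        "id": "C",
        "url": "http://client.stars:9000/status"
      }
    ]
```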