How to change a ConfigMap of a Deployment using a script?

There are Deployments that may use a ConfigMap with a name like cm-myapp-*. How can I write a script that looks at all Deployments and reconfigures any that use one of these cm-myapp-* ConfigMaps to use the new, specific cm-myapp-123 instead?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:2
        volumeMounts:
        - name: config-volume
          mountPath: /etc/myapp/
      volumes:
      - name: config-volume
        configMap:
          name: cm-myapp-9375546193
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-myapp-123
data:
  myapp.conf: |
    hi
There is kubectl patch, which accepts 'JSON patches', and there is kubectl edit, which appears to be interactive-only. Some kubectl commands accept go-templates, but those are for reading, not editing. Dumping the whole config gives superfluous fields.
I can extract some of the relevant data:
kubectl get deployment -o go-template --template="{{range .items}}{{\$deploymentName := .metadata.name}}{{range .spec.template.spec.volumes}}{{if .configMap}}{{\$deploymentName}} {{.configMap}}:{{end}}{{end}}{{end}}" | tr ':' '\n'
kubectl get deployment myapp -ojsonpath="{.spec.template.spec.volumes[0].configMap.name}"
I need to patch it, but this doesn't work:
kubectl patch deployment myapp -p '{ "op": "replace", "path": ".spec.template.spec.volumes[0].name", "value": "cf" }'
So how can it be done? What's the syntax of kubectl patch?

Use jq, the "awk for JSON", to transform the JSON document(s). I'm not sure exactly which fields you want to change, but how to adjust them should be clear from the jq argument.
$ cat x.json
{
  "apiVersion": "apps/v1beta1",
  "kind": "Deployment",
  "foo": "myapp",
  "metadata": {
    "name": "myapp"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "app": "myapp"
        }
      }
    }
  }
}
$ jq '
.metadata.name = "cm-myapp-123"
| .spec.template.metadata.labels.app = "cm-myapp-123"
| .
' < x.json
{
  "apiVersion": "apps/v1beta1",
  "kind": "Deployment",
  "foo": "myapp",
  "metadata": {
    "name": "cm-myapp-123"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "app": "cm-myapp-123"
        }
      }
    }
  }
}
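To apply this to the live Deployment from the question, a minimal sketch (assuming the ConfigMap volume is the first entry in volumes) is to pipe kubectl's JSON output through jq and feed the result back to kubectl apply:
# Fetch the Deployment as JSON, rewrite the ConfigMap reference with jq,
# and send the result back to the cluster.
$ kubectl get deployment myapp -o json \
    | jq '.spec.template.spec.volumes[0].configMap.name = "cm-myapp-123"' \
    | kubectl apply -f -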

In my opinion it would be optimal if you knew the deployments in advance. You would then generate the manifests you apply from some templating solution (I would suggest getting familiar with Helm, which is much more than just templates) and have the ConfigMap name managed by the templating.
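As a rough sketch of that idea (the chart layout and the configVersion value below are hypothetical, made up for illustration), the ConfigMap name can be derived from a single value that both the ConfigMap and the Deployment reference, so bumping that one value updates both:
# Hypothetical chart layout, not taken from the question:
#   values.yaml:               configVersion: "123"
#   templates/configmap.yaml:  metadata: { name: cm-myapp-{{ .Values.configVersion }} }
#   templates/deployment.yaml: configMap: { name: cm-myapp-{{ .Values.configVersion }} }
# Switching to a new ConfigMap is then a single upgrade:
helm upgrade myapp ./chart --set configVersion=123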

This outputs each old ConfigMap name, its index in the 'volumes' array and the name of the Deployment, filters out ConfigMaps we aren't interested in, and patches all matching Deployments.
#!/bin/bash
name=cm-myapp
unique_name=cm-myapp-123
# Columns: ConfigMap name, index in volumes, Deployment name.
kubectl get deployment -o go-template --template="{{range .items}}{{\$deploymentName := .metadata.name}}{{range \$i, \$v := .spec.template.spec.volumes}}{{if .configMap}}{{.configMap.name}} {{\$i}} {{\$deploymentName}}:{{end}}{{end}}{{end}}" | tr ':' '\n' |
  egrep "^$name-[^-]+ " | while read l; do
    i=$(printf '%s\n' "$l" | awk '{print $2}')
    deployment=$(printf '%s\n' "$l" | awk '{print $3}')
    kubectl patch deployment $deployment --type=json -p "[{ \"op\": \"replace\", \"path\": \"/spec/template/spec/volumes/$i/configMap/name\", \"value\": \"$unique_name\" }]"
  done
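As an optional follow-up (a sketch reusing the names from the question), you can confirm that each patched Deployment rolled out and now references the new ConfigMap:
# Check that the rollout completed and that the volume points at the new ConfigMap.
kubectl rollout status deployment myapp
kubectl get deployment myapp -o jsonpath="{.spec.template.spec.volumes[0].configMap.name}"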

Here's some kubectl patch syntax I've used to patch images by container name:
-p "{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"myapp\",\"image\":\"$imageUri\"}]}}}}"
The same thing might work for you by patching the volumes key instead:
-p "{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"config-volume\",\"configMap\":{\"name\":\"myapp-123\"}}]}}}}"
What's the syntax of kubectl patch?
The official documentation is here with examples here. According to that guide, you might try setting --type=json on your patch command.
There are two syntaxes: JSON Patch and JSON Merge Patch.
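As an illustrative sketch (names taken from the question), the same change expressed both ways looks like this:
# JSON Patch: a list of operations, selected with --type=json.
kubectl patch deployment myapp --type=json \
  -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/configMap/name", "value": "cm-myapp-123"}]'
# Merge-style patch: a partial object merged into the resource; kubectl's default --type is
# "strategic", a Kubernetes-aware variant of JSON Merge Patch that merges list items by name.
kubectl patch deployment myapp \
  -p '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"cm-myapp-123"}}]}}}}'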

Related

kubectl create pod using override return error: Invalid JSON Patch

I am trying to run my pod using the command below but keep getting an error:
error: Invalid JSON Patch
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides= "$(cat pod.json)"
Here is my pod.json file:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "test",
    "namespace": "my-ns",
    "labels": {
      "app": "test"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "test",
        "image": "myimage",
        "command": [
          "python",
          "/usr/bin/cma/excute.py"
        ]
      }
    ]
  }
}
What am I doing wrong here?
I did a bit of testing and it seems there is an issue with Cmder not executing $() properly: it either does not work at all, or treats newlines as Enter and thus executes the command before the entire JSON is passed.
You may want to try running your commands in PowerShell:
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides=$(Get-Content pod.json -Raw)
There is a similar issue on GitHub [Windows] kubectl run not accepting valid JSON as --override on Windows (same JSON works on Mac) #519. Unfortunately, there is no clear solution for this.
Possible solutions are (a jq-based variant is sketched after this list):
Passing the JSON as a single-line string directly:
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides='{"apiVersion":"v1","kind":"Pod","metadata":{...}}'
Using ' instead of " around the value of --overrides.
Using triple quotes """ instead of double quotes " in the JSON file.
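A related workaround (my own suggestion, not taken from the linked issue) is to compact the JSON to a single line with jq before handing it to --overrides, so the shell never sees embedded newlines:
# jq -c prints pod.json as one compact line.
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides="$(jq -c . pod.json)"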

IBM Cloud: How to enable App ID for app on Kubernetes cluster with K8s Ingress and ALB OAuth Proxy?

I am trying to configure App ID-based authentication for an app deployed to IBM Cloud Kubernetes Service (IKS) running in a VPC. In the past it worked well with IBM's own Ingress. However, that has been deprecated. Now, I am following the guide here, which uses the community Ingress and covers adding IBM App ID.
I seem to have configured everything, but the host / site cannot be reached. Here is what the Ingress resource looks like:
"apiVersion": "networking.k8s.io/v1beta1",
"kind": "Ingress",
"metadata": {
"annotations": {
"kubernetes.io/ingress.class": "public-iks-k8s-nginx",
"nginx.ingress.kubernetes.io/auth-signin": "https://$host/oauth2-myappid/start?rd=$escaped_request_uri",
"nginx.ingress.kubernetes.io/auth-url": "https://$host/oauth2-myappid",
"nginx.ingress.kubernetes.io/configuration-snippet": "auth_request_set $access_token $upstream_http_x_auth_request_access_token;
access_by_lua_block {
if ngx.var.access_token ~= \"\" then
ngx.req.set_header(\"Authorization\", \"Bearer \" .. ngx.var.access_token)
end
}
"
},
"name": "ingress-for-mytest",
"namespace": "sfs"
},
"spec": {
"rules": [
{
"host": "myhost.henrik-cluster-cd5d3f574d7d8057a176af82152f5-0000.eu-de.containers.appdomain.cloud",
"http": {
"paths": [
{
"backend": {
"serviceName": "my-service",
"servicePort": 8081
},
"path": "/"
}
]
}
}
],
"tls": [
{
"hosts": [
"myhost.henrik-cluster-cd5d3f574d7d8057a176af82152f5-0000.eu-de.containers.appdomain.cloud"
],
"secretName": "henrik-cluster-cd5d3f574d7d8057a176af82152f5-0000"
}
]
}
}
I got it to work with the following definition:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-for-mytest
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-myappid/auth
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-myappid/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $access_token $upstream_http_x_auth_request_access_token;
      auth_request_set $id_token $upstream_http_authorization;
      access_by_lua_block {
        if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
          ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
        end
      }
spec:
  tls:
  - hosts:
    - myhost
    secretName: ingress-secret-for-mytest
  rules:
  - host: myhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8081
It is important to note that the OAuth2 proxy (see the steps regarding the proxy add-on and App ID integration) will only deploy successfully to a non-default Kubernetes namespace if the (cluster) Ingress secret is copied into that namespace.
You can find the Ingress secret using the following command, looking for the secret in the default namespace:
ibmcloud ks ingress secret ls -c your-cluster-name
Thereafter, (re)create that secret in the non-default namespace, copying the CRN and name of that secret:
ibmcloud ks ingress secret create -c your-cluster-name -n your-namespace \
  --cert-crn the-crn-shown-in-the-output-above --name the-secret-name-shown-above
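To confirm the copy worked (using the same placeholder names as above), list the secret in the target namespace before creating the Ingress:
# The secret should now exist in the namespace used by the OAuth2 proxy and the Ingress.
kubectl get secret the-secret-name-shown-above -n your-namespace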

Parsing Kubernetes ConfigMap in Mixed Data Formats

Consider this current namespace config in JSON format:
$ kubectl get configmap config -n metallb-system -o json
{
    "apiVersion": "v1",
    "data": {
        "config": "address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.105-192.168.0.105\n  - 192.168.0.110-192.168.0.111\n"
    },
    "kind": "ConfigMap",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"config\":\"address-pools:\\n- name: default\\n  protocol: layer2\\n  addresses:\\n  - 192.168.0.105-192.168.0.105\\n  - 192.168.0.110-192.168.0.111\\n\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"config\",\"namespace\":\"metallb-system\"}}\n"
        },
        "creationTimestamp": "2020-07-10T08:26:21Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:data": {
                        ".": {},
                        "f:config": {}
                    },
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:kubectl.kubernetes.io/last-applied-configuration": {}
                        }
                    }
                },
                "manager": "kubectl",
                "operation": "Update",
                "time": "2020-07-10T08:26:21Z"
            }
        ],
        "name": "config",
        "namespace": "metallb-system",
        "resourceVersion": "2086",
        "selfLink": "/api/v1/namespaces/metallb-system/configmaps/config",
        "uid": "c2cfd2d2-866c-466e-aa2a-f3f7ef4837ed"
    }
}
I am interested only in the address pools that are configured. As per the kubectl cheat sheet, I can do something like this to fetch the required address range:
$ kubectl get configmap config -n metallb-system -o jsonpath='{.data.config}'
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.0.105-192.168.0.105
  - 192.168.0.110-192.168.0.111
However, my requirement is to use only a JSON parser throughout, and I cannot parse the above output since it is in YAML instead.
Since I'm not willing to accommodate the above YAML output for direct use (or via a format conversion step), is there any suitable way to obtain the address range from the kubectl interface in JSON format instead?
You need yq alongside kubectl ...
After inspecting your ConfigMap, I understood the structure once I converted it to YAML:
---
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.105-192.168.0.105
      - 192.168.0.110-192.168.0.111
kind: ConfigMap
metadata:
  name: config
You can see clearly that .data.config is a multi-line string, but that string can itself be parsed as YAML. The approach is to:
Extract the multi-line string with kubectl.
Treat this string as YAML using yq.
So this is what you are looking for:
# all addresses
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses'
# first address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[0]'
# second address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[1]'
# so on ...
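If the output itself must be JSON as well (to stay with a JSON parser throughout), note that the jq-wrapper flavor of yq used above emits JSON when you drop the -r flag; a sketch under that assumption:
# Without -r, this yq prints the parsed address-pools structure as JSON.
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq '.["address-pools"][0].addresses'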

Can I set env key from param in template

In OpenShift 4.3, I'm trying to set an env key from a param value within a template. For example:
"env": [
{
"name: "${FOO}-TEST",
"value": "${BAR}"
},
{
"name: "TEST",
"value": "${BAR}"
}
]
"parameters": [
{
"name": "FOO",
"required": true
},
{
"name": "BAR",
"required": true
}
]
Then I run oc new-app with -p FOO=X -p BAR=Y, and checking the env vars on the pod shows:
TEST=Y
But does not show:
X-TEST=Y
In a template, can I not use a parameter value as an env key?
I think you can use a parameter value as an env key.
Could you check whether the template works as you expect, as follows?
Export the template as a YAML file first.
$ oc get template <your template name> -o yaml > test-template.yml
Check from the output whether the parameter you specified is being substituted.
$ oc process -f test-template.yml -p FOO=X -p BAR=Y
Here is my simple test result, e.g.:
$ cat test-temp.yml
:
    containers:
    - env:
      - name: "${NAME}-KEY"
        value: ${NAME}
:
$ oc process -f test-temp.yml -p NAME=test
:
        "containers": [
            {
                "env": [
                    {
                        "name": "test-KEY",
                        "value": "test"
                    }
                ],
:
I hope it helps you.
Export your variables
oc process FOO=${FOO} BAR=${BAR} -f yamlFile
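If the processed output looks right, one way to create the objects directly (a sketch reusing the hypothetical file and parameter names from the test above) is to pipe oc process into oc create:
# oc process renders the template to a List of objects; oc create instantiates them.
$ oc process -f test-temp.yml -p NAME=test | oc create -f -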

Ansible | Process data which can be either JSON or YAML

I'm using Ansible to read a config, which can be either JSON or YAML, and extract values from some of the nodes in the file.
I know I can use from_json or from_yaml to process it in Ansible, but since I don't know which format the config will be in, I'm having difficulty making it work.
The file is Kubernetes' Kubeconfig. Examples below:
in YAML
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://my-k8s-cluster.com
  name: k8s-clstr-master
contexts:
- context:
    cluster: k8s-clstr-master
    namespace: kube-system
    user: k8s-clstr-master-admin
  name: k8s-clstr-master
current-context: k8s-clstr-master
kind: Config
preferences: {}
users:
- name: k8s-clstr-master-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
in JSON
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "k8s-clstr-master",
      "cluster": {
        "server": "https://my-k8s-cluster.com",
        "certificate-authority-data": "REDACTED"
      }
    }
  ],
  "users": [
    {
      "name": "k8s-clstr-master-admin",
      "user": {
        "client-certificate-data": "REDACTED",
        "client-key-data": "REDACTED"
      }
    }
  ],
  "contexts": [
    {
      "name": "k8s-clstr-master",
      "context": {
        "cluster": "k8s-clstr-master",
        "user": "k8s-clstr-master-admin",
        "namespace": "kube-system"
      }
    }
  ],
  "current-context": "k8s-clstr-master"
}
The Ansible I'm using:
vars:
  kubeconfig: "{{ lookup('hashivault', '/kubeconfig/admin', 'config') }}"
tasks:
  - name: Find cluster server name
    shell: "echo {{ kubeconfig.clusters[0].cluster.server }}"
The above Ansible block works okay if the kubeconfig is retrieved in JSON format, but it fails if it's retrieved in YAML format.
I might be able to make a task with | from_yaml and then add ignore_errors: true, but that just doesn't feel like the right way of doing it.
Anyone has any tips for me on how I can approach this problem?
There are some built-in tests in Jinja2.
The way the Ansible templating engine works, if you have a JSON string inside a {{...}} expression, it is automatically converted to an object. So if you fetch JSON from your vault, kubeconfig becomes an object; otherwise it is a string.
Here's a recipe for you:
vars:
  kubeconfig_raw: "{{ lookup('hashivault', '/kubeconfig/admin', 'config') }}"
  kubeconfig: "{{ kubeconfig_raw if kubeconfig_raw is mapping else kubeconfig_raw | from_yaml }}"
tasks:
  - name: Find cluster server name
    shell: "echo {{ kubeconfig.clusters[0].cluster.server }}"
If you use the include_vars task, it does not matter which format you provide. The task accepts both.
---
- hosts: localhost
  connection: local
  tasks:
    - include_vars:
        file: config
        name: kubeconfig
    - debug: var=kubeconfig