Can I set env key from param in template - openshift

In OpenShift 4.3, I'm trying to set an env key from a param value within a template. For example:
"env": [
{
"name: "${FOO}-TEST",
"value": "${BAR}"
},
{
"name: "TEST",
"value": "${BAR}"
}
]
"parameters": [
{
"name": "FOO",
"required": true
},
{
"name": "BAR",
"required": true
}
]
Then I run oc new-app with -p FOO=X -p BAR=Y, and when I check the env vars on the pod, it shows:
TEST=Y
But it does not show:
X-TEST=Y
In a template, can I not use a parameter value as an env key?

I think you can use a parameter value as an env key.
Could you check whether the template works as you expect, as follows?
Export the template as a YAML file first.
$ oc get template <your template name> -o yaml > test-template.yml
Check from the output whether the parameter you specified is substituted correctly.
$ oc process -f test-template.yml -p FOO=X -p BAR=Y
Here is my simple test result, e.g.:
$ cat test-temp.yml
:
containers:
- env:
  - name: "${NAME}-KEY"
    value: ${NAME}
:
$ oc process -f test-temp.yml -p NAME=test
:
"containers": [
{
"env": [
{
"name": "test-KEY",
"value": "test"
}
],
:
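Once the processed output shows the substituted key (test-KEY above), you can create the objects directly from the processed template. A minimal sketch, assuming the template file and parameter names from the question:
# Pipe the fully processed objects straight to the cluster.
oc process -f test-template.yml -p FOO=X -p BAR=Y | oc create -f -
# Or inspect only the rendered env entries with jq; the .items[0] path is an
# assumption and depends on where the container spec sits in your template.
oc process -f test-template.yml -p FOO=X -p BAR=Y \
  | jq '.items[0].spec.template.spec.containers[0].env'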
I hope it helps you.

Export your variables, then pass them to oc process:
oc process -f yamlFile -p FOO="${FOO}" -p BAR="${BAR}"

Related

How to save Terraform output variable into a Github Action’s environment variable

My project uses Terraform for setting up the infrastructure and GitHub Actions for CI/CD. After running terraform apply, I would like to save the value of a Terraform output variable as a GitHub Actions environment variable to be used later by the workflow.
According to the GitHub Actions docs, this is the way to create or update environment variables using workflow commands.
Here is my simplified GitHub Actions workflow:
name: Setup infrastructure
jobs:
  run-terraform:
    name: Apply infrastructure changes
    runs-on: ubuntu-latest
    steps:
      ...
      - run: terraform output vm_ip
      - run: echo TEST=$(terraform output vm_ip) >> $GITHUB_ENV
      - run: echo ${{ env.TEST }}
When run locally, the command echo TEST=$(terraform output vm_ip) outputs exactly TEST="192.168.23.23", but in the GitHub Actions log output I get something very strange.
I've tried single quotes and double quotes. At some point I changed strategy and tried to use jq, so I added the following steps to export all Terraform outputs to a JSON file and parse it with jq:
- run: terraform output -json >> /tmp/tf.out.json
- run: jq '.vm_ip.value' /tmp/tf.out.json
But now it throws the following error:
parse error: Invalid numeric literal at line 1, column 9
Even though the JSON generated is perfectly valid:
{
  "cc_host": {
    "sensitive": false,
    "type": "string",
    "value": "private.c.db.ondigitalocean.com"
  },
  "cc_port": {
    "sensitive": false,
    "type": "number",
    "value": 1234
  },
  "db_host": {
    "sensitive": false,
    "type": "string",
    "value": "private.b.db.ondigitalocean.com"
  },
  "db_name": {
    "sensitive": false,
    "type": "string",
    "value": "XXX"
  },
  "db_pass": {
    "sensitive": true,
    "type": "string",
    "value": "XXX"
  },
  "db_port": {
    "sensitive": false,
    "type": "number",
    "value": 1234
  },
  "db_user": {
    "sensitive": false,
    "type": "string",
    "value": "XXX"
  },
  "vm_ip": {
    "sensitive": false,
    "type": "string",
    "value": "206.189.15.70"
  }
}
The commands terraform output -json >> /tmp/tf.out.json and jq '.vm_ip.value' /tmp/tf.out.json work as expected locally.
After hours of searching I finally figured it out.
It seems that Terraform's GitHub Action offers an additional parameter called terraform_wrapper, which needs to be set to false if you plan on using the output in shell commands. You can read a more in-depth article here.
Otherwise, the outputs are automatically exposed as the step's outputs and can be accessed like steps.<step_id>.outputs.<variable>. You can read more about them here and here.
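With the wrapper disabled, a minimal sketch of the step's shell, assuming Terraform 0.14+ for the -raw flag and the vm_ip output from the question:
# Write the output into the job's environment; TEST matches the variable name
# used in the workflow above.
echo "TEST=$(terraform output -raw vm_ip)" >> "$GITHUB_ENV"
# The jq approach from the question also works once the wrapper no longer adds
# extra lines around the JSON:
terraform output -json > /tmp/tf.out.json
jq -r '.vm_ip.value' /tmp/tf.out.json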
For me, what worked was using terraform-bin output instead of terraform output.
More info here.

kubectl create pod using override return error: Invalid JSON Patch

I am trying to run my pod using the below command but keep getting an error:
error: Invalid JSON Patch
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides= "$(cat pod.json)"
Here is my pod.json file:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "test",
    "namespace": "my-ns",
    "labels": {
      "app": "test"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "test",
        "image": "myimage",
        "command": [
          "python",
          "/usr/bin/cma/excute.py"
        ]
      }
    ]
  }
}
What am I doing wrong here?
I did a bit of testing and it seems there is an issue with Cmder not executing $() properly - either not working at all, or treating newlines as Enter and thus executing the command before the entire JSON is passed.
You may want to try running your commands in PowerShell:
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides=$(Get-Content pod.json -Raw)
There is a similar issue on GitHub [Windows] kubectl run not accepting valid JSON as --override on Windows (same JSON works on Mac) #519. Unfortunately, there is no clear solution for this.
Possible solutions are:
Passing JSON as a string directly
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides='{"apiVersion":"v1","kind":"Pod","metadata":{...}}'
Using ' instead of " around overrides.
Using triple double quotes (""") instead of single double quotes (") in the JSON file.
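Another workaround for the newline problem described above, a sketch assuming jq and a POSIX-style shell such as Git Bash, is to compact the JSON to a single line before passing it:
# jq -c prints the JSON on one line, so no newlines reach the shell's command
# substitution; pod.json is the file from the question.
kubectl run -i tmp-pod --rm -n my-scripts --image=placeholder --restart=Never \
  --overrides="$(jq -c . pod.json)"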

Parsing Kubernetes ConfigMap in Mixed Data Formats

Consider this current namespace config in JSON format:
$ kubectl get configmap config -n metallb-system -o json
{
  "apiVersion": "v1",
  "data": {
    "config": "address-pools:\n- name: default\n protocol: layer2\n addresses:\n - 192.168.0.105-192.168.0.105\n - 192.168.0.110-192.168.0.111\n"
  },
  "kind": "ConfigMap",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"config\":\"address-pools:\\n- name: default\\n protocol: layer2\\n addresses:\\n - 192.168.0.105-192.168.0.105\\n - 192.168.0.110-192.168.0.111\\n\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"config\",\"namespace\":\"metallb-system\"}}\n"
    },
    "creationTimestamp": "2020-07-10T08:26:21Z",
    "managedFields": [
      {
        "apiVersion": "v1",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:data": {
            ".": {},
            "f:config": {}
          },
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:kubectl.kubernetes.io/last-applied-configuration": {}
            }
          }
        },
        "manager": "kubectl",
        "operation": "Update",
        "time": "2020-07-10T08:26:21Z"
      }
    ],
    "name": "config",
    "namespace": "metallb-system",
    "resourceVersion": "2086",
    "selfLink": "/api/v1/namespaces/metallb-system/configmaps/config",
    "uid": "c2cfd2d2-866c-466e-aa2a-f3f7ef4837ed"
  }
}
I am interested only in the address pools that are configured. As per the kubectl cheat sheet, I can do something like this to fetch the required address range:
$ kubectl get configmap config -n metallb-system -o jsonpath='{.data.config}'
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.0.105-192.168.0.105
  - 192.168.0.110-192.168.0.111
However, my requirement is to use only a JSON parser throughout, and I cannot parse the above output since it is in YAML instead.
Since I'm not willing to accommodate the above YAML output for direct use (or via a format conversion operation), is there any suitable way I can obtain the address range from the kubectl interface in JSON format instead?
You need yq alongside kubectl ...
After inspecting your ConfigMap, I understood the structure once I converted it to YAML:
---
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.105-192.168.0.105
      - 192.168.0.110-192.168.0.111
kind: ConfigMap
metadata:
  name: config
And you can see clearly that .data.config is a multi-line string, but that string is itself YAML, so it can be parsed as such:
Extract the multi-line string with kubectl.
Treat that string as YAML using yq.
So this is what you are looking for:
# all addresses
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses'
# first address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[0]'
# second address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[1]'
# so on ...
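The jsonpath output from the question can be piped into yq in the same way; a sketch, assuming the jq-style yq wrapper used above:
# -o jsonpath extracts the raw multi-line string, and yq then parses it as
# YAML; the trailing [] prints every address in the pool.
kubectl -n metallb-system get cm config -o jsonpath='{.data.config}' | \
  yq -r '.["address-pools"][0].addresses[]'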

How to change a ConfigMap of a Deployment using a script?

There are Deployments that may use a ConfigMap with a name like cm-myapp-*. How can I write a script that looks at all Deployments and reconfigures them from whichever cm-myapp-* they currently use to the new, specific cm-myapp-123?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:2
        volumeMounts:
        - name: config-volume
          mountPath: /etc/myapp/
      volumes:
      - name: config-volume
        configMap:
          name: cm-myapp-9375546193
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-myapp-123
data:
  myapp.conf: |
    hi
There is kubectl patch, which accepts 'JSON patches', and there is kubectl edit, which looks interactive-only. Some kubectl commands accept Go templates, but those aren't for editing. Dumping the whole config gives some superfluous fields.
I can extract some of the relevant data:
kubectl get deployment -o go-template --template="{{range .items}}{{\$deploymentName := .metadata.name}}{{range .spec.template.spec.volumes}}{{if .configMap}}{{\$deploymentName}} {{.configMap}}:{{end}}{{end}}{{end}}" | tr ':' '\n'
kubectl get deployment myapp -o jsonpath="{.spec.template.spec.volumes[0].configMap.name}"
I need to patch it, but this does not work:
kubectl patch deployment myapp -p '{ "op": "replace", "path": ".spec.template.spec.volumes[0].name", "value": "cf" }'
So how can it be done? What's the syntax of kubectl patch?
Use jq, the "awk for JSON", to transform the JSON document(s). I'm not sure which fields you want to change exactly, but how to adjust them should be clear from the jq argument.
$ cat x.json
{
  "apiVersion": "apps/v1beta1",
  "kind": "Deployment",
  "foo": "myapp",
  "metadata": {
    "name": "myapp"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "app": "myapp"
        }
      }
    }
  }
}
$ jq '
.metadata.name = "cm-myapp-123"
| .spec.template.metadata.labels.app = "cm-myapp-123"
| .
' < x.json
{
  "apiVersion": "apps/v1beta1",
  "kind": "Deployment",
  "foo": "myapp",
  "metadata": {
    "name": "cm-myapp-123"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "app": "cm-myapp-123"
        }
      }
    }
  }
}
In my opinion it would be optimal if you knew the deployments in advance. You should generate the manifests that you apply from some templating solution (I would suggest getting familiar with Helm, which is much more than just templates) and then have the ConfigMap managed by that templating.
The script below outputs each old ConfigMap name, its index in the volumes array, and the name of the Deployment; it filters out ConfigMaps we aren't interested in and patches all matching Deployments.
#!/bin/bash
name=cm-myapp
unique_name=cm-myapp-123
# Columns: ConfigMap name, index in volumes, Deployment name.
kubectl get deployment -o go-template --template="{{range .items}}{{\$deploymentName := .metadata.name}}{{range \$i, \$v := .spec.template.spec.volumes}}{{if .configMap}}{{.configMap.name}} {{\$i}} {{\$deploymentName}}:{{end}}{{end}}{{end}}" | tr ':' '\n' |
egrep "^$name-[^-]+ " | while read l; do
i=$(printf '%s\n' "$l" | awk '{print $2}')
deployment=$(printf '%s\n' "$l" | awk '{print $3}')
kubectl patch deployment $deployment --type=json -p "[{ \"op\": \"replace\", \"path\": \"/spec/template/spec/volumes/$i/configMap/name\", \"value\": \"$unique_name\" }]"
done
Here's some kubectl patch syntax I've used to patch images by container name:
-p "{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"myapp\",\"image\":\"$imageUri\"}]}}}}"
The same thing might work for you by patching the volumes key instead:
-p "{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"config-volume\",\"configMap\":{\"name\":\"myapp-123\"}}]}}}}"
What's the syntax of kubectl patch?
The official documentation is here with examples here. According to that guide, you might try setting --type=json on your patch command.
There are several patch syntaxes: JSON Patch (--type=json), JSON Merge Patch (--type=merge), and the default strategic merge patch.
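For the Deployment above, a minimal sketch of a JSON Patch and a strategic merge patch; the volume index 0 and the volume name config-volume come from the manifest in the question, so verify them first with the jsonpath query shown earlier:
# JSON Patch (--type=json): the path uses '/' separators and an array index.
kubectl patch deployment myapp --type=json \
  -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/configMap/name", "value": "cm-myapp-123"}]'
# Strategic merge patch (kubectl's default for built-in types): the volume is
# matched by its name, so no index is needed.
kubectl patch deployment myapp \
  -p '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"cm-myapp-123"}}]}}}}'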

CannotStartContainerError while submitting an AWS Batch Job

In AWS Batch I have a job definition, a job queue, and a compute environment in which to execute my AWS Batch jobs.
After submitting a job, I find it in the list of the failed ones with this error:
Status reason
Essential container in task exited
Container message
CannotStartContainerError: API error (404): oci runtime error: container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file= --key=.
and in the CloudWatch logs I have:
container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file=Toulouse.json --key=out\": stat /var/application/script.sh --file=Toulouse.json --key=out: no such file or directory"
I have specified a correct Docker image that has all the scripts (we already use it and it works), and I don't know where the error is coming from.
Any suggestions are much appreciated.
The Dockerfile is something like this:
# Pull base image.
FROM account-id.dkr.ecr.region.amazonaws.com/application-image.base-php7-image:latest
VOLUME /tmp
VOLUME /mount-point
RUN chown -R ubuntu:ubuntu /var/application
# Create the source directories
USER ubuntu
COPY application/ /var/application
# Register aws profile
COPY data/aws /home/ubuntu/.aws
WORKDIR /var/application/
ENV COMPOSER_CACHE_DIR /tmp
RUN composer update -o && \
rm -Rf /tmp/*
Here is the Job Definition:
{
  "jobDefinitionName": "JobDefinition",
  "jobDefinitionArn": "arn:aws:batch:region:accountid:job-definition/JobDefinition:25",
  "revision": 21,
  "status": "ACTIVE",
  "type": "container",
  "parameters": {},
  "retryStrategy": {
    "attempts": 1
  },
  "containerProperties": {
    "image": "account-id.dkr.ecr.region.amazonaws.com/application-dev:latest",
    "vcpus": 1,
    "memory": 512,
    "command": [
      "/var/application/script.sh",
      "--file=",
      "Ref::file",
      "--key=",
      "Ref::key"
    ],
    "volumes": [
      {
        "host": {
          "sourcePath": "/mount-point"
        },
        "name": "logs"
      },
      {
        "host": {
          "sourcePath": "/var/log/php/errors.log"
        },
        "name": "php-errors-log"
      },
      {
        "host": {
          "sourcePath": "/tmp/"
        },
        "name": "tmp"
      }
    ],
    "environment": [
      {
        "name": "APP_ENV",
        "value": "dev"
      }
    ],
    "mountPoints": [
      {
        "containerPath": "/tmp/",
        "readOnly": false,
        "sourceVolume": "tmp"
      },
      {
        "containerPath": "/var/log/php/errors.log",
        "readOnly": false,
        "sourceVolume": "php-errors-log"
      },
      {
        "containerPath": "/mount-point",
        "readOnly": false,
        "sourceVolume": "logs"
      }
    ],
    "ulimits": []
  }
}
In the CloudWatch log stream /var/log/docker:
time="2017-06-09T12:23:21.014547063Z" level=error msg="Handler for GET /v1.17/containers/4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67/json returned error: No such container: 4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67"
This error was because the command was malformed. I was submitting the job from a Lambda function (Python 2.7) using boto3, and the syntax of the command should be something like this:
'command' : ['sudo','mkdir','directory']
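The same rule applies outside boto3: every argument must be its own list element rather than one long string. A sketch with the AWS CLI, where the job name and queue are placeholders and the file values come from the question:
# Each element of the command array is a separate argv entry; passing the whole
# command as a single string reproduces the "no such file or directory" error
# from the question.
aws batch submit-job \
  --job-name my-test-job \
  --job-queue my-queue \
  --job-definition JobDefinition \
  --container-overrides '{"command": ["/var/application/script.sh", "--file=Toulouse.json", "--key=out"]}'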
Hope it helps somebody.