Issue running helm command on a schedule - openshift

I am trying to delete temporary pods and other artifacts using helm delete, and I want this to run on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args:
            - delete
            - --purge
            - $(helm ls -a -q temppods.*)
          restartPolicy: OnFailure
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every 5th minute a pod is getting created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods whose names begin with temppods to be deleted.
What I get is:
Error: pods is forbidden: User "system:serviceaccount:myproject:default" cannot list pods in the namespace "kube-system": no RBAC policy matched
I then created a service account cron-z and gave it edit access. I added this serviceAccount to my yaml, thinking that when my pod is created it will have the service account cron-z associated with it. Still no luck: cron-z is not getting associated with the pod that gets created every 5 minutes, and I still see default as the service account associated with the pod.

You'll need a service account for Helm to use Tiller with, as well as an actual Tiller service account: github.com/helm/helm/blob/master/docs/rbac.md
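As a rough sketch of what that setup can look like (the names here mirror the docs and the question; the binding to cluster-admin is deliberately broad and should be scoped down for real use), you would create a Tiller service account in kube-system and bind it, and make sure the CronJob's pod actually runs as cron-z by putting serviceAccountName inside jobTemplate.spec.template.spec rather than at the top-level spec:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # broad on purpose for this sketch; use a tighter role in production
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Tiller itself is then installed with helm init --service-account tiller, and the client side (your cronbox container) needs its service account (cron-z) bound to a role that lets it reach Tiller, e.g. list pods in kube-system, which is exactly what the error above is complaining about.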

Related

How to permanently change sysctl settings on a GKE host node?

We have a Kubernetes cluster running in Google GKE. I want to permanently set a different value for fs.aio-max-nr in sysctl, but it keeps reverting to the default after sudo reboot.
This is what I've tried:
sysctl -w fs.aio-max-nr=1048576
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/99-gke-defaults.conf
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/00-sysctl.conf
Is it possible to change this permanently? And why is there no /etc/sysctl.conf, but two sysctl files in the /etc/sysctl.d/ folder?
I'd do this by deploying a DaemonSet on all the nodes on which you need this setting. The only drawback here is that the DaemonSet pod will need to run with elevated privileges. The container has access to /proc on the host, so then you just need to execute your sysctl commands in a script and then exit.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl
spec:
  selector:
    matchLabels:
      name: sysctl
  template:
    metadata:
      labels:
        name: sysctl
    spec:
      containers:
      - name: sysctl
        image: alpine
        command:
        - /bin/sh
        - -c
        - sysctl fs.aio-max-nr=1048576
        securityContext:
          privileged: true
There's also an example here.
I ended up switching the node image from Google's default cos_containerd to ubuntu_containerd. This made the sysctl changes permanent.
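For reference, the image type is chosen per node pool, so a minimal sketch of adding an Ubuntu-based pool looks something like the following (pool and cluster names are placeholders; check gcloud container node-pools create --help for the exact flags on your gcloud version):
gcloud container node-pools create ubuntu-pool \
  --cluster=my-cluster \
  --image-type=UBUNTU_CONTAINERD
Workloads can then be moved onto the new pool, or the old pool recreated with the new image type.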

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using Kubernetes with Helm 3.
I need to create a Kubernetes pod with MySQL, with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
Create the chart:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section:
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that, there is a compressed tgz file.
So I extracted it, and there is a tar file inside - I extracted that too, and kept only the final extracted folder.
I presume this isn't the best approach to changing a parameter in the Bitnami chart's yaml (and likewise for the security.yaml) - I would like to know the better approach too.
I need to change the user + password and the database name, so I changed the values.yaml directly (any better approach?) for the values auth.rootPassword and auth.database (my_database).
The next steps:
helm dependency build test
helm install test ./test --namespace test --create-namespace
After that, two pods are created.
I could check it by:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test -- /bin/sh
This entered the pod.
run:
mysql -uroot -p12345
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the MySQL database from MySQL Workbench and testing the connection (same user: root, same password, port 3306, and localhost), the 'Test Connection' button in the database properties returned: 'failed to connect to database'.
Why can't I connect from MySQL Workbench, while inside the pod itself it works without any particular problem?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user + password in a better way (some secured yaml)?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong).
If this is right, all you need to do is add another section in your chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Openshift Configmap : create and update command

I am writing a sample program to deploy into OpenShift with a ConfigMap. I have the following ConfigMap yaml in the source code folder, so that once DevOps is set up, Jenkins should pick up this yaml and create/update the config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find the command that would create the resource or update it if it already exists (similar to the kubectl apply command). Can you help with the correct command, which would create the resource the first time the job runs and update it otherwise?
I also want to create/update the Services and Routes from the yaml files in the src repository.
Thanks.
You can use the "oc apply" command to update resources that already exist.
For example:
#oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
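For plain manifests that don't need template processing (like the ConfigMap above, or Service/Route yaml kept in the repo), a minimal sketch is just the following (./manifests/ is an example path, not a required layout):
oc apply -f configmap.yaml
oc apply -f ./manifests/
The first run creates the objects and later runs update them; the directory form applies every yaml file it finds, which fits the Jenkins use case.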
If you have the configmap in a yaml file (or stored somewhere), you can replace it:
oc replace --force -f config-map.yaml
This will update the existing configmap (it actually deletes it and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc

How to use a unique value in a Kubernetes ConfigMap

Problem
I have a monitoring application that I want to deploy inside of a DaemonSet. In the application's configuration, a unique user agent is specified to separate the node from other nodes. I created a ConfigMap for the application, but this only works for synchronizing the other settings in the environment.
Ideal solution?
I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way I can call this information from the system and Kubernetes will populate the desired key with a value (like the hostname)?
Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.
As an example, here's the string in the app config that I have now, versus what I want to use.
user_agent = "app-k8s-test"
But I'd prefer…
user_agent = $HOSTNAME
Is something like this possible?
You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.
First, make a config map with a placeholder for whatever fields you want to expand. I used sed and an ad-hoc placeholder name. You can also get fancy and use jinja2 or whatever you like - just put whatever pre-processor you want into the init container image. You can use whatever file format you want for the config file(s); I just used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it has a string, _HOSTNAME_, that needs to be expanded.
$ cat config.toml.tpl
[blah]
blah=_HOSTNAME_
otherkey=othervalue
$ kubectl create configmap cm --from-file=config.toml.tpl
configmap "cm" created
Now write a pod with an init container that mounts the config map in a volume, and expands it and writes to another volume, shared with the main container:
$ cat personalized-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-5
  labels:
    app: myapp
  annotations:
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running and my config-map is && cat /etc/config/config.toml && sleep 3600']
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  initContainers:
  - name: expander
    image: busybox
    command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
    volumeMounts:
    - name: config-tpl-volume
      mountPath: /etc/config-templates
    - name: config-volume
      mountPath: /etc/config
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  volumes:
  - name: config-tpl-volume
    configMap:
      name: cm
  - name: config-volume
    emptyDir: {}
$ kubectl create -f personalized-pod.yaml
$ sleep 10
$ kubectl logs myapp-pod-5
The app is running and my config-map is
[blah]
blah=gke-k0-default-pool-93916cec-p1p6
otherkey=othervalue
I made this a bare pod for an example. You can embed this type of pod in a DaemonSet's pod template.
Here, the Downward API is used to set the MY_NODE_NAME Environment Variable, since the Node Name is not otherwise readily available from within a container.
Note that for some reason, you can't get the spec.nodeName into a file, just an env var.
If you just need the hostname in an Env Var, then you can skip the init container.
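For that simpler case, the relevant fragment is just the Downward API env entry from the pod spec above, attached directly to the main container (shown here in isolation as a sketch):
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName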
Since the init container only runs once, you should not update the ConfigMap and expect it to be re-expanded. If you need updates, you can do one of two things:
Instead of an init container, run a sidecar that watches the config map volume and re-expands the template when it changes (or just does so periodically). This requires that the main container also knows how to watch for config file updates.
Alternatively, make a new config map each time the config template changes, and edit the DaemonSet to point it at the new config map, then do a rolling update to use the new config.

Kubernetes doesn't recover service after minion failure

I am testing Kubernetes redundancy features with a testbed made of one master and three minions.
Case: I am running a service with 3 replicas on minions 1 and 2, with minion3 stopped:
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 NotReady 14d
centos-minion1 Ready 14d
centos-minion2 Ready 14d
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Test: After starting minion3 and stopping minion2 (on which 2 pods are running)
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 Ready 15d
centos-minion1 Ready 14d
centos-minion2 NotReady 14d
Result: The service doesn't recover from the minion failure, and Kubernetes continues showing pods on the failed minion.
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Expected result (at least in my understanding): the service's pods should have been rebuilt on the currently available minions 1 and 3.
As far as I understand, the role of the Service kind is to make the deployment "globally" available, so we can refer to it independently of where the deployments are in the cluster.
Am I doing something wrong?
I'm using the following yaml spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www 
spec:
  replicas: 3
  selector:
    app:  nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
It looks like you're always trying to read the same pods that are referenced in $MYPODS. Pod names are created dynamically by the ReplicationController, so instead of kubectl describe pods $MYPODS, try this:
kubectl get pods -l app=nginx -o wide
This will always give you the currently scheduled pods for your app.
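If you also want to confirm that the ReplicationController is replacing the pods that were on the NotReady node, a quick sketch (assuming the controller is named nginx-www as in the spec above):
kubectl get rc nginx-www -o wide
kubectl get pods -l app=nginx -o wide --watch
kubectl get rc shows the desired versus current replica counts, and --watch lets you see replacement pods being scheduled as the old ones are removed.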