Run Helm Command within a pod - openshift

I am trying to run a helm command within a pod.
I created it with the following oc command:
oc create -f mycron.yaml
Here is my mycron.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronbox
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the openshift cluster; helm version
          restartPolicy: OnFailure
When the schedule fires and the commands run, I see the following result:
helm: command not found
I am expecting the helm version to be printed, which I should then see in the pod's logs.

In order for this to work, you'll have to use a different image, such as alpine/helm:2.9.0, which contains the helm CLI.
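For example, a minimal sketch of the same CronJob using that image instead of busybox (the tag and command handling are assumptions; the alpine/helm image typically sets helm as its entrypoint, so command: is used here to run a shell instead):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronbox
              image: alpine/helm:2.9.0
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the openshift cluster; helm version
          restartPolicy: OnFailure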

helm hook for both Pod and Job for kubernetes not running all yamls

I am using Kubernetes with Helm 3.
It runs on CentOS Linux 7 (Core).
Kubernetes version (from kubectl version): v1.21.6, built with go1.16.9.
Helm version: v3.3.4, built with go1.14.9.
I need to create a Job that runs after a Pod is created.
The Pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "test.fullname" . }}-mysql
  labels:
    app: {{ include "test.fullname" . }}-mysql
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-20"
    "helm.sh/delete-policy": before-hook-creation
spec:
  containers:
    - name: {{ include "test.fullname" . }}-mysql
      image: {{ .Values.mysql.image }}
      imagePullPolicy: IfNotPresent
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "12345"
        - name: MYSQL_DATABASE
          value: test
The Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "test.fullname" . }}-migration-job
  labels:
    app: {{ include "test.fullname" . }}-migration-job
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-10"
    "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template: # PodTemplateSpec (core/v1)
    spec: # PodSpec (core/v1)
      initContainers: # regular
        - name: wait-mysql
          image: bitnami/kubectl
          imagePullPolicy: IfNotPresent
          args:
            - wait
            - pod/{{ include "test.fullname" . }}-mysql
            - --namespace={{ .Release.Namespace }}
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: {{ include "test.fullname" . }}
          image: {{ .Values.myMigration.image }}
          imagePullPolicy: IfNotPresent
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12}}
MySQL is the MySQL 5.6 image.
With the above in place, I run helm install test ./test --namespace test --create-namespace
Even when I change the hook to pre-install (for both the Pod and the Job), the Job never runs.
In both situations, I get messages like the following (and need to press - to exit, which I don't want either):
Pod test-mysql pending
Pod test-mysql pending
Pod test-mysql pending
Pod test-mysql running
Pod test-mysql running
Pod test-mysql running
Pod test-mysql running
...
In this example, when I put a 'bug' in the Job, for example containersx instead of containers, I don't get any notification that the syntax is wrong.
Maybe because MySQL is running (and never completes), can I force Helm to move on to the next YAML declared by a hook? (Even though I declare the proper order: the Pod should run before the Job.)
What is wrong, and how can I ensure the Pod is created before the Job, and that the Job only runs once the Pod is running?
Thanks.
As per your configuration, it looks like you need to set the post-install hook specifically on the Job, as a post-install hook executes after all resources are loaded into Kubernetes. When the pre-install hook is set on both the Pod and the Job, they run before the rest of the chart is loaded, which seems to prevent the Job from starting.
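One way to read this, sketched against the templates in the question: drop the helm.sh/hook annotations from the Pod template so the Pod is installed as a regular chart resource, and keep only the Job as a hook. The Job's hook metadata would then look roughly like this (values carried over from the question; the rest of the Job spec stays as-is):
annotations:
  "helm.sh/hook": post-install
  "helm.sh/hook-weight": "-10"
  "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
With that split, helm install creates the MySQL Pod together with the rest of the chart, and the migration Job only starts once the release's resources have been submitted, with its wait-mysql init container still gating on the Pod becoming ready.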

I want to print the current podname in which my application is running in application logs in openshift

So my Java application is running in several pods in OpenShift, and I want to print the pod name in the application logs for business purposes. Is there any way to do so? Thanks
You should be able to expose the Pod name to the application using the Kubernetes "Downward API". This can be done either by exposing an environment variable containing the Pod name, or by mounting a file that contains it.
Here's the docs for doing so with an environment variable: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
Here's a trimmed down version of the example on that page, to highlight just the Pod name:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_NAME;
            sleep 10;
          done;
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
As you can see from the docs, there's a bunch of other context that you can expose also.
The equivalent docs for mounting a volume file can be found here: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api
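A minimal sketch of the volume-file variant, in case that fits your logging setup better (the pod, volume, and file names here are illustrative, not taken from the docs):
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-podname
spec:
  containers:
    - name: app-container
      image: k8s.gcr.io/busybox
      # Read the pod name from the mounted file, then keep the pod alive for inspection
      command: [ "sh", "-c", "cat /etc/podinfo/podname; sleep 3600" ]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name
The application can then read /etc/podinfo/podname at startup and include it in its log lines.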

Piping a file to stdin in Kubernetes Job

Here is a sample of the Job:
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl"]
          args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
I want to run a MySQL script as a command:
mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql
But the Kubernetes documentation is silent about piping a file to stdin. How can I specify that in the Kubernetes Job config?
I would set your command to something like ["bash", "-c", "mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql"], since input redirection like that is actually a feature of your shell.
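A minimal sketch of how that could look in the Job spec, assuming script.sql is already present inside the image or mounted into the container's working directory (the Job name and image are placeholders):
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-script-job
spec:
  template:
    spec:
      containers:
        - name: mysql-script
          # Placeholder image; anything with a bash shell, the mysql client, and script.sql will do
          image: mysql:5.6
          command: ["bash", "-c"]
          args: ["mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql"]
      restartPolicy: Never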

import mysql data to kubernetes pod

Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
Directly, the same way as you would with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
Or by using the data stored in an existing Docker container volume and passing it to the pod?
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this in increasing measure of generality would be:
Create a service that lets you access the mysql instance running on the pod from outside the cluster and connect your local mysql client to it.
If you are executing this initialization operation every time such a mysql pod is being started, it could be stored on a persistent volume and you could execute the script within your pod when you start up.
If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data (a rough sketch follows below).
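As a rough, hedged sketch of that init-container idea (the pod name, image, and URL are placeholders; it assumes the dump is reachable over HTTP from inside the cluster):
apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-seed-data
spec:
  initContainers:
    - name: fetch-dump
      # busybox ships a small wget; the URL below is purely illustrative
      image: busybox
      command: ["sh", "-c", "wget -O /seed/dump.sql http://files.example.internal/dump.sql"]
      volumeMounts:
        - name: seed
          mountPath: /seed
  containers:
    - name: mysql
      image: mysql:5.6
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: secret
      volumeMounts:
        # The mysql image executes *.sql files found here on first start with an empty data dir
        - name: seed
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: seed
      emptyDir: {}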
TL;DR
Use a ConfigMap, and mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer to "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: ebs-mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml
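For the original question (importing an existing dump.sql rather than hand-written SQL), the same ConfigMap could be generated straight from the local file, assuming the dump fits within the ConfigMap size limit (roughly 1 MiB); the name below just matches the manifests above:
kubectl create configmap usermanagement-dbcreation-script --from-file=dump.sql
Larger dumps are better handled with the kubectl exec approach or a volume, as described in the other answers.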

Kubernetes + MySQL : Creating custom database and user in a Kubernetes container

I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs for the official MySQL Docker image and the Kubernetes docs for creating a MySQL container, I have created the following replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - image: mysql:5.6.33
          name: mysql
          env:
            # Root password is compulsory
            - name: "MYSQL_ROOT_PASSWORD"
              value: "root_password"
            - name: "MYSQL_DATABASE"
              value: "custom_db"
            - name: "MYSQL_USER"
              value: "custom_user"
            - name: "MYSQL_PASSWORD"
              value: "custom_password"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            # This name must match the volumes.name below.
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            # This disk must already exist.
            pdName: mysql-disk
            fsType: ext4
According to the docs, when passing the environment variables MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD, a new user will be created with that password and granted rights on the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.
I have tested this by running the image locally and passing the same environment variables like this:
docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
When I SSH into that container, the database and users are created and everything works fine.
I am not sure what I am doing wrong here. Could anyone please point out my mistake? I have been at this the whole day.
EDIT: 20-Sept-2016
As requested by @Julien Du Bois:
The disk is created. It appears in the cloud console, and when I run the describe command I get the following output.
Command: gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
I referred to a lot of tutorials and Google Cloud examples. To run the MySQL Docker container locally, my main reference was the official image page on Docker Hub:
https://hub.docker.com/_/mysql/
This works for me, and locally the created container has the new database and a user with the right privileges.
For Kubernetes, my main reference was the following:
https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
I am just trying to connect to it from a Django container.
I was facing the same issue when I was using volumes and mounting them to mysql pods.
As mentioned in the documentation of mysql's docker image:
When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".
Here is a code snippet that might help create the volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete # For development purposes only
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Hope that helped.
You set mysql-disk in your deployment and the disk you have is custom-disk. Change pdName to custom-disk and it will work.