Here is how to run a simple batch job in Kubernetes via YAML (helloworld.yaml):
...
image: "ubuntu:14.04"
command: ["/bin/echo", "hello", "world"]
...
In Kubernetes I can deploy it like this:
$ kubectl create -f helloworld.yaml
Suppose I have a batch script like this (script.sh):
#!/bin/bash
echo "Please wait....";
sleep 5
Is there a way to include script.sh in kubectl create -f so it can run the script? Suppose helloworld.yaml is now edited like this:
...
image: "ubuntu:14.04"
command: ["/bin/bash", "./script.sh"]
...
I'm using this approach in OpenShift, so it should be applicable in Kubernetes as well.
Try putting your script into a ConfigMap as key/value pairs, mounting the ConfigMap as a volume, and running the script from the volume.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: hello-world-job
    spec:
      volumes:
      - name: hello-world-scripts-volume
        configMap:
          name: hello-world-scripts
      containers:
      - name: hello-world-job
        image: alpine
        volumeMounts:
        - mountPath: /hello-world-scripts
          name: hello-world-scripts-volume
        env:
        - name: HOME
          value: /tmp
        command:
        - /bin/sh
        - -c
        - |
          echo "scripts in /hello-world-scripts"
          ls -lh /hello-world-scripts
          echo "copy scripts to /tmp"
          cp /hello-world-scripts/*.sh /tmp
          echo "apply 'chmod +x' to /tmp/*.sh"
          chmod +x /tmp/*.sh
          echo "execute script-one.sh now"
          /tmp/script-one.sh
      restartPolicy: Never
---
apiVersion: v1
items:
- apiVersion: v1
  data:
    script-one.sh: |
      echo "script-one.sh"
      date
      sleep 1
      echo "run /tmp/script-2.sh now"
      /tmp/script-2.sh
    script-2.sh: |
      echo "script-2.sh"
      sleep 1
      date
  kind: ConfigMap
  metadata:
    creationTimestamp: null
    name: hello-world-scripts
kind: List
metadata: {}
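As an aside (my own suggestion, not part of the original answer): instead of writing the scripts inline, you can generate an equivalent ConfigMap directly from the script file on disk, e.g.:
$ kubectl create configmap hello-world-scripts --from-file=script-one.sh=script.sh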
As explained here, you could use the defaultMode: 0777 property as well. An example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-script
data:
  test.sh: |
    echo "test1"
    ls
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      volumes:
      - name: test-script
        configMap:
          name: test-script
          defaultMode: 0777
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: locust
        volumeMounts:
        - mountPath: /test-script
          name: test-script
You can then enter the container shell and execute the script /test-script/test.sh.
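For example, assuming the Deployment above is applied and the pod is ready (the resource/name form needs a reasonably recent kubectl):
$ kubectl exec -it deploy/test -- /test-script/test.sh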
I am using an on-premises Kubernetes cluster (with Istio) to integrate my application with Trillian. I have deployed a MySQL database together with a personality, a server, and a signer, but I am not able to create a tree using the command here (https://github.com/google/trillian/blob/master/examples/deployment/kubernetes/provision_tree.sh#L27):
echo TREE=$(curl -sb -X POST ${LOG_URL}/v1beta1/trees -d '{ "tree":{ "tree_state":"ACTIVE", "tree_type":"LOG", "hash_strategy":"RFC6962_SHA256", "signature_algorithm":"ECDSA", "max_root_duration":"0", "hash_algorithm":"SHA256" }, "key_spec":{ "ecdsa_params":{ "curve":"P256" } } }')
When I execute the command, I get 404 page not found as the result.
The .yaml file of the trillian-server is defined as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tr-server-list
data: # TODO optional add env parameter initialization
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tr-server
  labels:
    name: tr-server
    app: tr-server-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: tr-server-pod
      db: trdb
      app: tr-server-app
  template:
    metadata:
      labels:
        name: tr-server-pod
        db: trdb
        app: tr-server-app
    spec:
      containers:
      - name: trillian-log-server
        image: docker.io/fortissleviathan123/trillian-log-server:latest
        imagePullPolicy: IfNotPresent
        args: [
          "--storage_system=mysql",
          "--mysql_uri=test:zaphod@tcp(trdb.default.svc.cluster.local:3306)/test",
          "--rpc_endpoint=0.0.0.0:8090",
          "--http_endpoint=0.0.0.0:8091",
          "--alsologtostderr",
        ]
        envFrom:
        - configMapRef:
            name: tr-server-list
        ports:
        - name: grpc
          containerPort: 8090
        - name: https
          containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: tr-server
  labels:
    name: tr-server
    app: tr-server-app
spec:
  ports:
  - name: http
    port: 8091
    targetPort: 8091
  - name: grpc
    port: 8090
    targetPort: 8090
  selector:
    name: tr-server-pod
    db: trdb
    app: tr-server-app
The pods are running:
trdb-0 2/2 Running 6 (70m ago) 40h
tr-personality-5ffbfb44cb-2vp89 2/2 Running 3 (69m ago) 11h
tr-server-59d8bbd4c-kxkxs 2/2 Running 14 (69m ago) 38h
tr-signer-78b74df645-j5p7j 2/2 Running 15 (69m ago) 38h
Is there anything wrong with this deployment?
The solution is to use an application provided by Google to create the tree, since the server's REST API is apparently obsolete. The answer can be found here: https://github.com/google/trillian/issues/2675
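For reference, a sketch of that approach: Trillian ships a createtree CLI that talks gRPC to the log server's admin endpoint. The flag names may differ between versions, so treat this as an assumption to verify against the repo:
$ go run github.com/google/trillian/cmd/createtree@latest --admin_server=tr-server.default.svc.cluster.local:8090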
I started to use Kubernetes to understand concepts like pods, objects and so on. I started to learn about Persistent Volumes and Persistent Volume Claims; from my understanding, if I save data from a mysql pod to a persistent volume, the data should survive even if I delete the mysql pod, because it lives on the volume. But I don't think it works in my case...
I have a spring boot pod where I save data to a mysql pod. The data is saved and I can retrieve it, but when I restart, delete, or replace my pods, that saved data is lost, so I think I messed something up. Can you give me a hint, please? Thanks...
Below are my Kubernetes files:
MySQL pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # must match the Service and Deployment labels
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        args:
        - "--ignore-db-dir=lost+found"
        name: mysql # name of the db container
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret # name of the Secret object
              key: password # which value to take from inside the Secret
        - name: MYSQL_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: name
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts: # mount volume obtained from the PVC
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql # mount point inside the container
      volumes:
      - name: mysql-persistent-storage # obtaining volume from the PVC
        persistentVolumeClaim:
          claimName: mysql-pv-claim # can use the same claim in different pods
---
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector: # mysql pod should carry the same label
    app: mysql
  clusterIP: None # headless service; we use DNS
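Note that this manifest references a db-secret Secret and a db-config ConfigMap that are not shown in the post. A sketch of how they might be created (the literal values are placeholders):
$ kubectl create secret generic db-secret --from-literal=username=root --from-literal=password=<password>
$ kubectl create configmap db-config --from-literal=name=<database-name>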
Persistent Volume and Persistent Volume Claim files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of our PVC
  labels:
    app: mysql
spec:
  volumeName: host-pv # claim the volume created with this name
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1 # version of our PV
kind: PersistentVolume # kind of object we are going to create
metadata:
  name: host-pv # name of our PV
spec: # spec of our PV
  capacity: # size
    storage: 4Gi
  volumeMode: Filesystem # storage type: Filesystem or Block
  storageClassName: standard
  accessModes:
    - ReadWriteOnce # can be mounted by multiple pods, but only on a single node
    # - ReadOnlyMany # read-only on multiple nodes
    # - ReadWriteMany # writable from multiple nodes; not supported by the hostPath type
  hostPath: # which type of PV
    path: "/mnt/data"
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
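A quick generic check (not from the original post) that the claim actually bound to this volume:
$ kubectl get pv host-pv
$ kubectl get pvc mysql-pv-claim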
My Spring Boot K8s file:
apiVersion: v1
kind: Service
metadata:
  name: book-service
spec:
  selector:
    app: book-example
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: book-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: book-example
  template:
    metadata:
      labels:
        app: book-example
    spec:
      containers:
        - name: book-container
          image: cinevacineva/kubernetes_book_pv:latest
          imagePullPolicy: Always
          # ports:
          # - containerPort: 8080
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
# "& minikube -p minikube docker-env | Invoke-Expression" links Docker to the images we create with minikube, so we no longer need to push them
...if I save data from a mysql pod to a persistent volume, the data should survive even if I delete the mysql pod, because it lives on the volume. But I don't think it works in my case...
Your previous data will not be available when the pod is scheduled to a different node. To use hostPath you don't really need a PVC/PV. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      nodeSelector: # <-- make sure your pod runs on the same node
        <node label>: <value unique to the mysql node>
      volumes: # <-- mount the data path on the node, no PVC/PV required
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data
          type: DirectoryOrCreate
      containers:
      - name: mysql
        ...
        volumeMounts: # <-- let mysql write to it
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
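To make the nodeSelector concrete you could, for example, label the node that currently holds /mnt/data. The label key and value below are placeholders of my choosing, not from the original answer:
$ kubectl label nodes <node-name> mysql-data=present
The nodeSelector would then read mysql-data: present.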
Just for training purposes, I'm trying to inject these env variables into my WordPress and MySQL apps with this ConfigMap, using a file mounted as a volume.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
  namespace: ex2
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: ex2
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
        ports:
        - containerPort: 3306
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
  namespace: ex2
spec:
  ports:
  - nodePort: 30999
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress
        name: wordpress
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
        ports:
        - containerPort: 80
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
When I deploy the app, the mysql pod fails with this error:
kubectl -n ex2 logs mysql-56ddd69598-ql229
2020-12-26 19:57:58+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
I don't understand, because I have specified everything in my ConfigMap. I have also tried using envFrom and single env variables, and that works just fine. I'm only having an issue with a file in a volume.
@DavidMaze is correct; you're mixing two useful features.
Using test.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: busybox
        name: test
        args:
        - ash
        - -c
        - while true; do sleep 15s; done
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
Then:
kubectl apply --filename=./test.yaml
kubectl exec --stdin --tty deployment/test -- ls /etc/env
mysql.conf wordpress.conf
kubectl exec --stdin --tty deployment/test -- more /etc/env/mysql.conf
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
NOTE: the files are missing (and should probably include) an = between each variable and its value, e.g. MYSQL_DATABASE=wordpress.
So, what you have is a ConfigMap that represents two files (mysql.conf and wordpress.conf); if you use e.g. busybox and mount the ConfigMap as a volume, you can see that it contains the two files and that the files hold the configuration.
So, if you can run e.g. WordPress or MySQL and pass them a configuration file, you're good. But what you probably want is to reference the ConfigMap entries as environment variables, per @DavidMaze's suggestion, i.e. run the Pods with environment variables set from the ConfigMap entries:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
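A minimal sketch of that approach, assuming a ConfigMap with one key per variable (the name mysql-env is hypothetical, chosen for this example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-env # hypothetical name
  namespace: ex2
data:
  MYSQL_DATABASE: wordpress
  MYSQL_USER: admin
  MYSQL_PASSWORD: "1234"
  MYSQL_RANDOM_ROOT_PASSWORD: "1"
---
# then, in the mysql container spec:
# envFrom:
# - configMapRef:
#     name: mysql-env
With envFrom, each key in the ConfigMap becomes an environment variable in the container, which is what the mysql entrypoint actually reads.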
I would really suggest not using a ConfigMap for WordPress. You can directly use the official repo https://github.com/docker-library/wordpress/tree/master/php7.4/apache; it has a docker-entrypoint.sh which you can use to inject the env values from the deployment.yaml directly, and if you use Vault that works perfectly too.
I was using this image to run my application in docker-compose. However, when I run the same on a Kubernetes cluster, I get the error:
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: common-db
  name: common-db
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: common-db
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: common-db
    spec:
      containers:
      - env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: MYSQL_DATABASE
          value: "common-development"
        - name: MYSQL_REPLICATION_MODE
          value: "master"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "repl_password"
        - name: MYSQL_REPLICATION_USER
          value: "repl_user"
        image: bitnami/mysql:5.7
        imagePullPolicy: ""
        name: common-db
        ports:
        - containerPort: 3306
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: 512Mi
            cpu: 500m
          limits:
            memory: 512Mi
            cpu: 500m
        volumeMounts:
        - name: common-db-initdb
          mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
      volumes:
      - name: common-db-initdb
        configMap:
          name: common-db-config
      serviceAccountName: ""
status: {}
The ConfigMap holds the my.cnf config data. Any pointers on where I could be going wrong? Especially since the same image works in docker-compose?
Try changing the file permissions using an init container; in the official Bitnami Helm chart they also update file permissions and manage the security context.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    chown -R 1001:1001 /bitnami/mysql
  image: docker.io/bitnami/minideb:buster
  imagePullPolicy: Always
  name: volume-permissions
  resources: {}
  securityContext:
    runAsUser: 0
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /bitnami/mysql
    name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql
You may need to use subPath. To learn the details about subPath, click here.
volumeMounts:
- name: common-db-initdb # note: must match the volume name in the Deployment
  mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
  subPath: my_custom.cnf
Also, you can easily install Bitnami MySQL using the Helm chart.
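For instance (repo URL and release name per the Bitnami docs; treat this as a sketch rather than a verified invocation):
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-mysql bitnami/mysql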
I am setting environment variables in k8s through a ConfigMap, and also creating a generate.json file that is mounted via a ConfigMap. All of this works fine. The problem is that the env vars are not being picked up by the generate.json file.
I am trying to get the environment variables passed by the ConfigMap into the generate.json that is mounted via the other ConfigMap. My guess was that generate.json would read the container env vars passed via the ConfigMap and interpolate them where necessary.
Here are the two ConfigMaps.
ConfigMap to create the env variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-cm
  namespace: default
data:
  ADMIN_PW: "P@ssw0rd"
  EMAIL: "support@gluu.com"
  ORG_NAME: "Gluu"
  COUNTRY_CODE: US
  STATE: TE
  CITY: SLC
  LDAP_TYPE: opendj
  GLUU_CONFIG_ADAPTER: kubernetes
  GLUU_SECRET_ADAPTER: kubernetes
ConfigMap to interpolate the created environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gen-json-file
  namespace: default
data:
  generate.json: |-
    {
      "hostname": "$DOMAIN",
      "country_code": "$COUNTRY_CODE",
      "state": "$STATE",
      "city": "$CITY",
      "admin_pw": "$ADMIN_PW",
      "email": "$EMAIL",
      "org_name": "$ORG_NAME"
    }
---
apiVersion: batch/v1
kind: Job
metadata:
  name: config-init
spec:
  parallelism: 1
  template:
    metadata:
      name: job
      labels:
        app: load
    spec:
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: config-volume-claim
      - name: mount-gen-file
        configMap:
          name: gen-json-file
      containers:
      - name: load
        image: gluufederation/config-init:4.0.0_dev
        lifecycle:
          preStop:
            exec:
              command: [ "/bin/sh", "-c", "printenv" ]
        volumeMounts:
        - mountPath: /opt/config-init/db/
          name: config
        - mountPath: /opt/config-init/db/generate.json
          name: mount-gen-file
          subPath: generate.json
        envFrom:
        - configMapRef:
            name: config-cm
        args: [ "load" ]
      restartPolicy: Never
The expected result is that the variables in generate.json get populated by interpolating the container's env vars.