Piping a file to stdin in Kubernetes Job - mysql

Here is a sample of the Job:
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
I want to run a MySQL script as a command:
mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql
But the Kubernetes documentation is silent about piping a file to stdin. How can I specify that in a Kubernetes Job config?

I would set your command to something like [bash, -c, "mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql"], since input redirection like that is actually a feature of your shell, not of Kubernetes.
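A minimal sketch of what the Job could look like, assuming the script is delivered through a ConfigMap; the Job name, the mysql:8.0 image, the /sql mount path, and the ConfigMap name sql-script are all illustrative assumptions, not something from the question:

apiVersion: batch/v1
kind: Job
metadata:
  name: run-sql-script                # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: mysql-client
        image: mysql:8.0              # any image that ships the mysql client and bash
        command: ["bash", "-c"]
        # the shell performs the "<" redirection; Kubernetes itself has no stdin-redirection field
        args: ["mysql -hlocalhost -u1234 -p1234 --database=customer < /sql/script.sql"]
        volumeMounts:
        - name: sql-script
          mountPath: /sql
      volumes:
      - name: sql-script
        configMap:
          name: sql-script            # hypothetical ConfigMap holding script.sql
      restartPolicy: Never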

Related

Connect openshift pod to external mysql database

I am trying to set up a generic pod on OpenShift 4 that can connect to a mysql server running on a separate VM outside the OpenShift cluster (testing using local OpenShift crc). However, when creating the deployment, I'm unable to connect to the mysql server from inside the pod (for testing purposes).
The following is the deployment that I use:
kind: "Service"
apiVersion: "v1"
metadata:
name: "mysql"
spec:
ports:
- name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306
nodePort: 0
selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "mysql"
subsets:
- addresses:
- ip: "***ip of host with mysql database on it***"
ports:
- port: 3306
name: "mysql"
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: "deployment"
spec:
  template:
    metadata:
      labels:
        name: "mysql"
    spec:
      containers:
      - name: "test-mysql"
        image: "***image repo with docker image that has mysql package installed***"
        ports:
        - containerPort: 3306
          protocol: "TCP"
        env:
        - name: "MYSQL_USER"
          value: "user"
        - name: "MYSQL_PASSWORD"
          value: "******"
        - name: "MYSQL_DATABASE"
          value: "mysql_db"
        - name: "MYSQL_HOST"
          value: "***ip of host with mysql database on it***"
        - name: "MYSQL_PORT"
          value: "3306"
I'm just using a generic image for testing purposes that has standard packages installed (net-tools, openjdk, etc.)
I'm testing by going into the deployed pod via the command:
$ oc rsh {{ deployed pod name }}
However, when I try to run the following command, I cannot connect to the server running mysql-server:
$ mysql --host **hostname** --port 3306 -u user -p
I get this error:
ERROR 2003 (HY000): Can't connect to MySQL server on '**hostname**:3306' (111)
I've also tried to create a route from the service and point to that as an "fqdn" alternative, but still no luck.
If I try to ping the host (from inside the pod), I cannot reach it either. But I can reach the host from outside the pod, and from inside the pod I can ping sites like google.com or github.com.
For reference, the image being used is essentially the following dockerfile
FROM ubi:8.0
RUN dnf install -y python3 \
      wget \
      java-1.8.0-openjdk \
      https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm \
      postgresql-devel
WORKDIR /tmp
RUN wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && \
      rpm -ivh mysql-community-release-el7-5.noarch.rpm && \
      dnf update -y && \
      dnf install mysql -y && \
      wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz && \
      tar zxvf mysql-connector-java-5.1.48.tar.gz && \
      mkdir -p /usr/share/java/ && \
      cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /usr/share/java/mysql-connector-java.jar
RUN dnf install -y tcping \
      iputils \
      net-tools
I imagine there is something I am fundamentally misunderstanding about connecting to an external database from inside OpenShift, and/or my deployment configs need some adjustment somewhere. Any help would be greatly appreciated.
As mentioned in the comments on the post, it looks to be a firewall issue. I've tested again with the same config, but instead of an external mysql DB I deployed mysql inside OpenShift as well, and the pods can connect to it. Since I don't have control of the firewall in the organisation, and the config didn't change between the two deployments, I'll mark this as solved, as there isn't much more I can do to test it.
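To separate a firewall problem from a Service/Endpoints misconfiguration, a TCP-level check from inside the pod is more telling than ICMP ping, which firewalls often block even when the port is open. A minimal sketch, assuming bash and coreutils are present in the test image (they are in the ubi-based image above); "mysql" is the Service name created earlier and the host IP is the same placeholder as in the question:

# run from inside the pod (oc rsh <pod name>); bash's /dev/tcp opens a raw TCP connection
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/mysql/3306' \
  && echo "service endpoint reachable" || echo "service endpoint blocked or unreachable"
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/***ip of host with mysql database on it***/3306' \
  && echo "host reachable" || echo "host blocked or unreachable"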

kubernetes mysql statefulset not taking new password, though password changes in env

I have created a StatefulSet of MySQL using the YAML below, with this command:
kubectl apply -f mysql-statefulset.yaml
Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka"
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
After that, 3 pods were created, and for each of them a PVC and PV. I successfully entered one of the pods using:
kubectl exec -it mysql-sts-0 sh
and then logged in to MySQL using:
mysql -u root -p
After running this command, the prompt:
Enter password:
appeared, and I entered the password:
okaoka
and could log in successfully. After that I exited from the pod.
Then I deleted the StatefulSet (as expected, the PVCs and PVs were still there even after the deletion of the StatefulSet). After that I applied a new YAML, slightly changing the previous one: I changed the password in the YAML and gave a new password:
okaoka1234
and the rest of the YAML was the same as before. The YAML is given below. After applying this YAML (only the password changed) with:
kubectl apply -f mysql-statefulset.yaml
it successfully created the StatefulSet and 3 new pods (which bound to the previous PVCs and PVs, as expected).
Changed Yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: db
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-sts
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-service
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "okaoka1234" # here is the change
        ports:
        - containerPort: 3306
          name: db
        volumeMounts:
        - name: db-volume
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: db-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Now the problem is that when I again entered a pod using:
kubectl exec -it mysql-sts-0 sh
then used:
mysql -u root -p
and again the prompt:
Enter password:
appeared. This time, when I gave my new password:
okaoka1234
it gave access denied.
When I printed the env (inside the pod) using:
printenv
then I could see that:
MYSQL_ROOT_PASSWORD=okaoka1234
which means the environment variable did change and took the new password, but I still could not log in with the new password.
The interesting thing is that I could log in with my previous password okaoka. I don't know why it is taking the previous password in this scenario and not the new one, which is even in the env (inside the pod). Can anybody provide the logic behind this?
Most probably, the image that you are using in your StatefulSet uses the environment variable to initialize the password only when it creates the database structure on the persisted storage (its PVC) for the first time.
Given that the PVCs and PVs are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already present in the existing PVC.
After all, the root user is just a database user: its password is stored inside the database itself. Unless the image applies some particular logic at start-up in its entrypoint, it makes sense that the password remains the same.
What image are you using? The docker hub mysql image or a custom one?
Update
Given that you are using the mysql image from Docker Hub, let me quote a piece of the entrypoint (https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh):
# there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
  docker_verify_minimum_env
  # check dir permissions to reduce likelihood of half-initialized database
  ls /docker-entrypoint-initdb.d/ > /dev/null
  docker_init_database_dir "$@"
  mysql_note "Starting temporary server"
  docker_temp_server_start "$@"
  mysql_note "Temporary server started."
  docker_setup_db
  docker_process_init_files /docker-entrypoint-initdb.d/*
  mysql_expire_root_user
  mysql_note "Stopping temporary server"
  docker_temp_server_stop
  mysql_note "Temporary server stopped"
  echo
  mysql_note "MySQL init process done. Ready for start up."
  echo
fi
When the container starts, it makes some checks, and if no database is found (the database is expected to be at the path where the persisted PVC is mounted), a series of operations is performed: creating the database, creating the default users, and so on.
Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).
Should a database already be available in the persisted path, which is your case since you let it mount the previous PVC, there is no initialization of the database; it already exists.
Everything in Kubernetes is working as expected; this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.
It is left to the root user to manually change the password, if desired, by using a mysql client.
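For example, a minimal sketch using the names from the question (run it against each replica, since every pod has its own volume; it assumes the MySQL version in the official image supports ALTER USER, and that both the '%' and 'localhost' root accounts exist, as they do by default in that image):

kubectl exec -it mysql-sts-0 -- \
  mysql -u root -pokaoka \
  -e "ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234'; ALTER USER 'root'@'localhost' IDENTIFIED BY 'okaoka1234';"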

How to start a mysql container in Kubernetes with sample data?

I need to start a MySQL container in Kubernetes with a database and a schema and sample data.
I tried to use the "command" parameter in the Kubernetes YAML, but at the time of execution the DB is not started yet.
- image: mysql:5.7.24
  name: database
  command:
    [
      '/usr/bin/mysql -u root -e "CREATE DATABASE IF NOT EXISTS mydbname"',
    ]
  env:
  - name: MYSQL_ALLOW_EMPTY_PASSWORD
    value: "1"
Solved by adding:
volumeMounts:
- name: initdb
  mountPath: /docker-entrypoint-initdb.d
...
volumes:
- name: initdb
  configMap:
    name: initdb-config
...
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb-config
data:
  initdb.sql: |
    mysqlquery
You can first create the MySQL container and import the MySQL data afterwards; it will work that way.
You can create the PVC volume and start the container blank, without any database.
You can use the exec command to import the SQL file and data into the database, which will create the database and sample data inside the container.
Start the container, go inside it using exec mode, create a database, and after that run this command:
kubectl exec -i <container name> -- mysql -h <hostname> -u <username> -p<password> <databasename> < databasefile.sql
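If streaming the file over stdin is awkward, a variant (a sketch only; the /tmp path is arbitrary and the placeholders are the same as above) is to copy the dump into the pod first and load it from there:

kubectl cp ./databasefile.sql <container name>:/tmp/databasefile.sql
kubectl exec <container name> -- sh -c \
  'mysql -h <hostname> -u <username> -p<password> <databasename> < /tmp/databasefile.sql'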

Run Helm Command within a pod

I am trying to run a helm command within a pod. Here is my YAML.
I ran an oc command:
oc create -f mycron.yaml
Here is my mycron.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the openshift cluster; helm version
          restartPolicy: OnFailure
When the schedule picks up and the commands run, I see the below result:
helm: command not found
I am expecting the helm version to be printed, which I should then see in the logs of the pod.
In order for this to work, you'll have to use a different image, such as alpine/helm:2.9.0, which contains the helm CLI tools.
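A minimal sketch of the CronJob with the image swapped; as far as I know, the alpine/helm image sets the helm binary as its entrypoint, so the shell is invoked through command rather than args:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.0          # ships the helm CLI
            # override the image entrypoint (helm) so /bin/sh runs the command string
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the openshift cluster; helm version
          restartPolicy: OnFailure

(For helm to actually talk to the cluster, the pod's service account also needs suitable permissions; that part is outside this sketch.)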

import mysql data to kubernetes pod

Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
Directly, the same way as when dealing with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
Or by using the data stored in an existing Docker container volume and passing it to the pod.
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this in increasing measure of generality would be:
Create a service that lets you access the mysql instance running on the pod from outside the cluster and connect your local mysql client to it.
If you are executing this initialization operation every time such a mysql pod is being started, it could be stored on a persistent volume and you could execute the script within your pod when you start up.
If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data (see the sketch below).
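As an illustration of the init-container option, a minimal sketch; the pod name, the download URL, and the shared emptyDir volume are illustrative assumptions, not something from the question:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-seed-data              # hypothetical name
spec:
  volumes:
  - name: dump
    emptyDir: {}
  initContainers:
  - name: fetch-dump                       # runs to completion before mysql starts
    image: curlimages/curl
    command: ["sh", "-c", "curl -fsSL https://example.com/dump.sql -o /dump/dump.sql"]
    volumeMounts:
    - name: dump
      mountPath: /dump
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: secret
    volumeMounts:
    # the official image executes any *.sql found here on first initialization
    - name: dump
      mountPath: /docker-entrypoint-initdb.d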
TL;DR
Use a ConfigMap and then mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dbpassword11
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: usermanagement-dbcreation-script
          mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql Refer "Initializing a fresh instance"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: ebs-mysql-pv-claim
      - name: usermanagement-dbcreation-script
        configMap:
          name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml