I need to start a MySQL container in Kubernetes with a database, a schema, and sample data.
I tried to use the "command" parameter in the Kubernetes YAML, but at the time it executes, the database has still not started.
- image: mysql:5.7.24
  name: database
  command:
    [
      '/usr/bin/mysql -u root -e "CREATE DATABASE IF NOT EXISTS mydbname"',
    ]
  env:
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "1"
Solved by adding the following:
volumeMounts:
  - name: initdb
    mountPath: /docker-entrypoint-initdb.d
...
volumes:
  - name: initdb
    configMap:
      name: initdb-config
...
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb-config
data:
  initdb.sql: |
    mysqlquery
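For illustration, a hypothetical initdb.sql that creates the database plus a table and some sample rows could look like this (the table and column names are made up; the image's entrypoint runs every .sql file in /docker-entrypoint-initdb.d on the first start, when the data directory is empty):

CREATE DATABASE IF NOT EXISTS mydbname;
USE mydbname;
CREATE TABLE IF NOT EXISTS customers (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);
INSERT INTO customers (name) VALUES ('Alice'), ('Bob');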
You can first create the MySQL container and import the data afterwards; it will work that way.
Create the PVC volume and start the container blank, without any database.
Then use the exec command to import the SQL file and data into the database, which will create the database and sample data inside the container.
Start the container, go inside it using exec mode, create a database, and after that run this command:
kubectl exec -i <container name> -- mysql -h <hostname> -u <username> -p<password> <databasename> < databasefile.sql
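The same pattern works in the opposite direction to dump a database out of the pod; a sketch using the same placeholder names:

# export: dump the database from the pod into a local file
kubectl exec -i <container name> -- mysqldump -u <username> -p<password> <databasename> > databasefile.sql
# import: replay the local dump into the database inside the pod
kubectl exec -i <container name> -- mysql -u <username> -p<password> <databasename> < databasefile.sql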
Related
I am trying to set up a generic pod on OpenShift 4 that can connect to a MySQL server running on a separate VM outside the OpenShift cluster (testing using local OpenShift CRC). However, when creating the deployment, I'm unable to connect to the MySQL server from inside the pod (for testing purposes).
The following is the deployment that I use:
kind: "Service"
apiVersion: "v1"
metadata:
name: "mysql"
spec:
ports:
- name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306
nodePort: 0
selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "mysql"
subsets:
- addresses:
- ip: "***ip of host with mysql database on it***"
ports:
- port: 3306
name: "mysql"
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: "deployment"
spec:
  template:
    metadata:
      labels:
        name: "mysql"
    spec:
      containers:
        - name: "test-mysql"
          image: "***image repo with docker image that has mysql package installed***"
          ports:
            - containerPort: 3306
              protocol: "TCP"
          env:
            - name: "MYSQL_USER"
              value: "user"
            - name: "MYSQL_PASSWORD"
              value: "******"
            - name: "MYSQL_DATABASE"
              value: "mysql_db"
            - name: "MYSQL_HOST"
              value: "***ip of host with mysql database on it***"
            - name: "MYSQL_PORT"
              value: "3306"
I'm just using a generic image for testing purposes that has standard packages installed (net-tools, openjdk, etc.).
I'm testing by going into the deployed pod via the command:
$ oc rsh {{ deployed pod name }}
However, when I try to run the following command, I cannot connect to the server running mysql-server:
$ mysql --host **hostname** --port 3306 -u user -p
I get this error:
ERROR 2003 (HY000): Can't connect to MySQL server on '**hostname**:3306' (111)
I've also tried to create a route from the service and point to that as a "fqdn" alternative but still no luck.
If I try to ping the host (when inside the pod), I cannot reach it either. But I can reach the host from outside the pod, and from inside the pod, I can ping sites like google.com or github.com
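ERROR 2003 with errno 111 is a raw TCP "connection refused", so it is worth testing plain reachability from inside the pod before suspecting MySQL itself. A sketch, assuming the image's shell is bash (which provides /dev/tcp):

# from inside the pod (oc rsh <pod name>):
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<mysql host ip>/3306' \
  && echo "port reachable" || echo "port refused or filtered"

A failing ping is not conclusive on its own, since ICMP is often blocked where TCP is not.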
For reference, the image being used is essentially the following dockerfile
FROM ubi:8.0
RUN dnf install -y python3 \
        wget \
        java-1.8.0-openjdk \
        https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm \
        postgresql-devel
WORKDIR /tmp
RUN wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && \
    rpm -ivh mysql-community-release-el7-5.noarch.rpm && \
    dnf update -y && \
    dnf install mysql -y && \
    wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz && \
    tar zxvf mysql-connector-java-5.1.48.tar.gz && \
    mkdir -p /usr/share/java/ && \
    cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /usr/share/java/mysql-connector-java.jar
RUN dnf install -y tcping \
        iputils \
        net-tools
I imagine there is something I am fundamentally misunderstanding about connecting to an external database from inside OpenShift, and/or my deployment configs need some adjustment somewhere. Any help would be greatly appreciated.
As mentioned in the conversation on the post, it looks to be a firewall issue. I've tested again with the same config, but instead of an external MySQL DB I deployed MySQL in OpenShift as well, and the pods can connect. Since I don't have control of the firewall in the organisation, and the config didn't change between the two deployments, I'll mark this as solved, as there isn't much more I can do to test it.
Hi, I am building a service for which I need a MySQL/MariaDB database. I have been googling different solutions, and I got the DB started with a database created thanks to a guide I was following (never found the link again, unfortunately).
Problem
The problem I am having is that the tables are not being created. I added the SQL schema file to /docker-entrypoint-initdb.d/ (you can check it down in the Dockerfile), but it doesn't seem to be executing it (I have tried with both the COPY and ADD commands).
Current output
This is my current console output from the container (screenshot omitted): the database is created, but the SHOW TABLES; command returns Empty set.
Desired output
Since this DB is going to be a service that different scripts connect to (currently Python), I need to be able to create the DB and the SQL schema (tables, triggers, etc.) so my team can work with the same configuration.
Some of the solutions I have tried (I can't find all the links I have visited, only a few):
How to import a mysql dump file into a Docker mysql container
mysql:5.7 docker allow access from all hosts and create DB
Can't connect to mariadb outside of docker container
Mariadb tables are deleted when use volume in docker-compose
Project structure
The structure is pretty simple; I am using the following docker-compose.yml.
Docker-compose
I still have to check whether the MARIADB_ environment variables are necessary here.
version: '3'
services:
  db-mysql:
    #image: mysql/mysql-server:latest
    build: ./mysql-db
    restart: always
    container_name: db-music
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: pwd
      MYSQL_DATABASE: audio_service
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    environment:
      MARIADB_ROOT_PASSWORD: pwd
      MARIADB_DATABASE: audio_service
      MARIADB_USER: user
      MARIADB_PASSWORD: password
      #https://stackoverflow.com/questions/29145370/how-can-i-initialize-a-mysql-database-with-schema-in-a-docker-container?rq=1
    expose:
      - '3306:3306'
    volumes:
      - type: bind
        source: E:\python-code\Rockstar\volume\mysql
        target: /var/lib/mysql
      #- type: bind
      #  source: E:\python-code\Rockstar\mysql-db\sql_scripts\tables.sql
      #  target: /docker-entrypoint-initdb.d/init.sql
networks:
  net:
    ipam:
      driver: default
      config:
        - subnet: 212.172.1.0/30
  host:
    name: host
    external: true
Dockerfile
FROM mariadb:latest as builder
# That file does the DB initialization but also runs the mysql daemon; by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=root
ENV MYSQL_ROOT_PASSWORD = pwd
ENV MYSQL_DATABASE = audio_service
ENV MYSQL_USER = user
ENV MYSQL_PASSWORD = password
COPY sql_scripts/tables.sql /docker-entrypoint-initdb.d/
# Need to change the datadir to something other than /var/lib/mysql because the parent docker file defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]

FROM mariadb:latest
# needed for initialization
ENV MARIADB_ROOT_PASSWORD=root
ENV MARIADB_ROOT_PASSWORD = pwd
ENV MARIADB_DATABASE = audio_service
ENV MARIADB_USER = user
ENV MARIADB_PASSWORD = password
COPY --from=builder /initialized-db /var/lib/mysql
EXPOSE 3306
SQL schema file
create database audio_service;
use audio_service;
CREATE TABLE audio
(
audio_id BINARY(16),
title TEXT NOT NULL UNIQUE,
content MEDIUMBLOB NOT NULL,
PRIMARY KEY (audio_id)
) COMMENT='this table stores songs';
DELIMITER ;;
CREATE TRIGGER `audio_before_insert`
BEFORE INSERT ON `audio` FOR EACH ROW
BEGIN
IF new.audio_id IS NULL THEN
SET new.audio_id = UUID_TO_BIN(UUID(), TRUE);
END IF;
END;;
DELIMITER ;
There is no need to build your own image since the official mysql/mariadb images are already well suited. You only need to run them with the following, as explained in their image documentation:
environment variables to initialize a new database with a respective user on the first run
a volume at /var/lib/mysql to persist the data
any initialization/SQL scripts mounted into /docker-entrypoint-initdb.d
So, storing your SQL* into a schema.sql file right next to the docker-compose.yml, the following is enough to achieve what you want:
# docker-compose.yml
services:
  db:
    image: mariadb
    environment:
      MARIADB_ROOT_PASSWORD: pwd
      MARIADB_DATABASE: audio_service
      MARIADB_USER: user
      MARIADB_PASSWORD: password
    volumes:
      # persist data files into `datadir` volume managed by docker
      - datadir:/var/lib/mysql
      # bind-mount any sql files that should be run while initializing
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
volumes:
  datadir:
*note that you can remove the CREATE DATABASE and USE statements from your schema.sql since these will be automatically done by the init script for you anyway
There are two reasons that your own setup isn't working as expected:
the line COPY --from=builder /initialized-db /var/lib/mysql won't work as expected, for the same reason you described in your comment a bit above it: /var/lib/mysql is a volume, and thus no new files are stored in it by build steps after it was declared.
you are bind-mounting E:\python-code\Rockstar\volume\mysql to /var/lib/mysql in your docker-compose.yml.
But this will effectively override any contents of /var/lib/mysql of the image, i.e. although your own image built from your Dockerfile does include an initialized database this is overwritten by the contents of E:\python-code\Rockstar\volume\mysql when starting the service.
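To check that initialization actually ran with the compose file above, something like this should list the audio table on the first start (a sketch assuming the mysql client inside the mariadb image; the init scripts only run when the data volume is empty, so run docker compose down -v before re-testing):

docker compose up -d
docker compose exec db mysql -u user -ppassword audio_service -e "SHOW TABLES;"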
I was following the Run a Single-Instance Stateful Application tutorial of Kubernetes (I changed the MySQL docker image's tag to 8), and it seems the server is running correctly.
But when I try to connect to the server as the tutorial suggests:
kubectl run -it --rm --image=mysql:8 --restart=Never mysql-client -- mysql -h mysql -ppassword
I get the following error:
ERROR 1045 (28000): Access denied for user 'root'@'10.1.0.99' (using password: YES)
pod "mysql-client" deleted
I already looked at those questions:
Can't access mysql root or user after kubernetes deployment
Access MySQL Kubernetes Deployment in MySQL Workbench
But changing the mountPath or port didn't work.
By default, the root account can only be connected to from inside the container. Here's an updated version of the example that allows you to connect from remote hosts:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:8.0.26
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
            - name: MYSQL_ROOT_HOST
              value: "%"
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          emptyDir: {}
          # Following the original example, comment out the emptyDir and uncomment the following if you have a StorageClass installed.
          # persistentVolumeClaim:
          #   claimName: mysql-pv-claim
No change to the client connection command except for the image tag:
kubectl run -it --rm --image=mysql:8.0.26 --restart=Never mysql-client -- mysql -h mysql -ppassword
Test with show databases;:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
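As the comment in the manifest says, use a Secret rather than a literal password in real usage. A minimal sketch (the Secret name and key are placeholders of my own):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-credentials
type: Opaque
stringData:
  password: password

and in the Deployment, replace the literal value with:

- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-root-credentials
      key: password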
Here we have a sample of the Job:
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl"]
          args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
I want to run a MySQL script as a command:
mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql
But Kubernetes documentation is silent about piping a file to stdin. How can I specify that in Kubernetes job config?
I would set your command to something like [bash, -c, "mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql"], since input redirection like that is actually a feature of your shell.
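Put together, a sketch of the full Job (assuming the script is available inside the container at /script.sql, e.g. baked into the image or mounted from a ConfigMap; the Job name is a placeholder):

apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-import
spec:
  template:
    spec:
      containers:
        - name: mysql-import
          image: mysql:8
          # the shell performs the stdin redirection; a plain exec'd command cannot
          command: ["bash", "-c", "mysql -hlocalhost -u1234 -p1234 --database=customer < /script.sql"]
      restartPolicy: Never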
Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
directly, the same way as you would with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
or by using the data stored in an existing Docker container volume and passing it to the pod?
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this in increasing measure of generality would be:
Create a service that lets you access the mysql instance running on the pod from outside the cluster and connect your local mysql client to it.
If you are executing this initialization operation every time such a mysql pod is being started, it could be stored on a persistent volume and you could execute the script within your pod when you start up.
If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data.
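A minimal sketch of that last option, with a placeholder URL and names of my own choosing:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-seed
spec:
  # fetch the dump before the mysql container starts
  initContainers:
    - name: fetch-dump
      image: curlimages/curl
      command: ["curl", "-fsSL", "-o", "/seed/dump.sql", "http://files.internal/dump.sql"]
      volumeMounts:
        - name: seed
          mountPath: /seed
  containers:
    - name: mysql
      image: mysql:8
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: secret
      volumeMounts:
        # scripts in this directory run on first initialization
        - name: seed
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: seed
      emptyDir: {}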
TL;DR
Use a ConfigMap, and then mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: ebs-mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
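After applying both manifests (the file names below are illustrative), a quick way to confirm the init script ran:

kubectl apply -f usermanagement-configmap.yml -f mysql-deployment.yml
kubectl exec -it deploy/mysql -- mysql -u root -pdbpassword11 -e "SHOW DATABASES LIKE 'usermgmt';"

Keep in mind that the entrypoint only executes the /docker-entrypoint-initdb.d scripts when the data directory is empty, so a PVC that already contains data will skip them.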
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml