How to put a container network into a Kubernetes YAML file - mysql

For example, I created a network in Docker:
docker network create hello-rails
Then I have MySQL, which is connected to this network:
docker run -p 3306 -d --network=hello-rails --network-alias=db -e MYSQL_ROOT_PASSWORD=password --name hello-rails-db mysql
I also have a Rails server, which relies on this network as well:
docker run -it -p 3000:3000 --network=hello-rails -e MYSQL_USER=root -e MYSQL_PASSWORD=password -e MYSQL_HOST=db --name hello-rails benjamincaldwell/hello-docker-rails:latest
I want to write a Kubernetes deployment for these two containers as a YAML file, but I don't know how to express the network between the containers in that file. Do you have any recommendations?

In Kubernetes you would solve this by creating two services.
The MySQL service will look something like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
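The selector above only routes to pods labeled app: mysql. As a minimal sketch (not part of the original answer), a matching Deployment could look like the following; the image and root password come from the docker run command in the question, everything else is an assumption:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql            # must match the Service selector above
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql                    # image from the question's docker run command
        env:
        - name: MYSQL_ROOT_PASSWORD     # same variable the question passes to docker run;
          value: password               # in practice this belongs in a Secret
        ports:
        - containerPort: 3306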
In your Rails server, you can access the MySQL service by either using the mysql DNS name or using the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT environment variables. There is no need to link the containers or specify a network, as you would in Docker.
Your Rails service will look like this:
kind: Service
apiVersion: v1
metadata:
  name: rails
spec:
  type: LoadBalancer
  selector:
    app: rails
  ports:
  - port: 3000
Notice the type: LoadBalancer, which specifies that this service will be published to the outside world. Depending on where you run Kubernetes, a public IP address will be automatically assigned to this service.
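For these Services to route anywhere, pods with the matching labels need to exist. A rough sketch of the Rails Deployment (the image and credentials are taken from the docker run command in the question, everything else is an assumption; MYSQL_HOST now points at the mysql Service name instead of the Docker network alias db):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rails            # must match the rails Service selector above
  template:
    metadata:
      labels:
        app: rails
    spec:
      containers:
      - name: rails
        image: benjamincaldwell/hello-docker-rails:latest
        env:
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: password          # in practice this belongs in a Secret
        - name: MYSQL_HOST
          value: mysql             # the mysql Service name replaces the Docker alias "db"
        ports:
        - containerPort: 3000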
For more information, have a look at the Services documentation.

Related

Connect openshift pod to external mysql database

I am trying to set up a generic pod on OpenShift 4 that can connect to a MySQL server running on a separate VM outside the OpenShift cluster (testing with local OpenShift CRC). However, when creating the deployment, I'm unable to connect to the MySQL server from inside the pod (for testing purposes).
The following is the deployment that I use:
kind: "Service"
apiVersion: "v1"
metadata:
name: "mysql"
spec:
ports:
- name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306
nodePort: 0
selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "mysql"
subsets:
- addresses:
- ip: "***ip of host with mysql database on it***"
ports:
- port: 3306
name: "mysql"
---
apiVersion: v1
kind: DeploymentConfig
metadata:
name: "deployment"
spec:
template:
metadata:
labels:
name: "mysql"
spec:
containers:
- name: "test-mysql"
image: "***image repo with docker image that has mysql package installed***"
ports:
- containerPort: 3306
protocol: "TCP"
env:
- name: "MYSQL_USER"
value: "user"
- name: "MYSQL_PASSWORD"
value: "******"
- name: "MYSQL_DATABASE"
value: "mysql_db"
- name: "MYSQL_HOST"
value: "***ip of host with mysql database on it***"
- name: "MYSQL_PORT"
value: "3306"
I'm just using a generic image for testing purposes that has standard packages installed (net-tools, openjdk, etc.)
I'm testing by going into the deployed pod via the command:
$ oc rsh {{ deployed pod name }}
However, when I try to run the following command, I cannot connect to the server running mysql-server:
$ mysql --host **hostname** --port 3306 -u user -p
I get this error:
ERROR 2003 (HY000): Can't connect to MySQL server on '**hostname**:3306' (111)
I've also tried to create a route from the service and point to that as an "fqdn" alternative, but still no luck.
If I try to ping the host (when inside the pod), I cannot reach it either. But I can reach the host from outside the pod, and from inside the pod I can ping sites like google.com or github.com.
For reference, the image being used is essentially the following Dockerfile:
FROM ubi:8.0
RUN dnf install -y python3 \
    wget \
    java-1.8.0-openjdk \
    https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm \
    postgresql-devel
WORKDIR /tmp
RUN wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm && \
    rpm -ivh mysql-community-release-el7-5.noarch.rpm && \
    dnf update -y && \
    dnf install mysql -y && \
    wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz && \
    tar zxvf mysql-connector-java-5.1.48.tar.gz && \
    mkdir -p /usr/share/java/ && \
    cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /usr/share/java/mysql-connector-java.jar
RUN dnf install -y tcping \
    iputils \
    net-tools
I imagine there is something I am fundamentally misunderstanding about connecting to an external database from inside OpenShift, and/or my deployment configs need some adjustment somewhere. Any help would be greatly appreciated.
As mentioned in the conversation on the post, it looks to be a firewall issue. I've tested again with the same config, but instead of an external MySQL DB I deployed MySQL inside OpenShift as well, and the pods can connect. Since I don't have control of the firewall in the organisation, and the config didn't change between the two deployments, I'll mark this as solved, as there isn't much more I can do to test it.
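For anyone debugging the same symptom, a quick raw-TCP reachability check from inside the pod can confirm whether the port is blocked before suspecting the manifests. A sketch; the host IP and pod name are placeholders, and the /dev/tcp redirection only requires bash in the image:
oc rsh <deployed pod name>
# inside the pod: this succeeds only if the MySQL port is reachable through the firewall
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<mysql host ip>/3306' \
  && echo "3306 reachable" || echo "3306 blocked or filtered"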

How to deploy docker image in minikube?

I have a container image that I want to run in minikube. My container image has MySQL, Redis, and some other components needed to run my application. I have an external application that needs to connect to this MySQL server. The container image is written such that it starts the MySQL and Redis servers on startup.
I thought I could access my MySQL server from outside if I ran the container image inside the Docker daemon in minikube after setting the environment as below:
eval $(minikube docker-env)
But this didn't help me, since the MySQL server is not accessible on port 3306.
I tried a second method: creating a pod.
I created the deployment using the YAML file below:
apiVersion: v1
kind: Service
metadata:
  name: c-service
spec:
  selector:
    app: c-app
  ports:
  - protocol: "TCP"
    port: 3306
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c-app
spec:
  selector:
    matchLabels:
      app: c-app
  replicas: 3
  template:
    metadata:
      labels:
        app: c-app
    spec:
      containers:
      - name: c-app
        image: c-app-latest:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
After creating the deployment, the containers are created. As I said before, my image is designed such that it starts the MySQL and Redis servers on startup.
In Docker Desktop, running the same image starts the servers, opens the ports, and stays up, and I can also perform operations in the container terminal.
In minikube, it starts the servers, and once they have started, minikube marks the pod's status as Completed and tries to restart it. It doesn't open any ports. It just starts and restarts again and again until it eventually ends in a "CrashLoopBackOff" error.
How I did it before:
I have a container image that I was previously running in Docker Desktop, where it ran fine: it starts all the servers, establishes a connection with my external app, and lets me interact with the container terminal.
My current requirements:
I want to run in minikube the same image that I ran in Docker Desktop. When the image runs, it should open ports for external connections (say, port 3306 for the MySQL connection), and I should be able to interact with the container through:
kubectl exec -it <pod> -- /bin/bash
More importantly, I don't want the pod restarting again and again. It should start once, start all servers, open the ports, and stay up.
Sorry for the long post. Can anyone please help me with this?
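A likely cause of the Completed/CrashLoopBackOff cycle described above is that the image's startup script exits after launching MySQL and Redis, so Kubernetes treats the container as finished and restarts it. A hedged sketch of keeping the main process in the foreground (start-all.sh is a placeholder, not a script from the original image):
    spec:
      containers:
      - name: c-app
        image: c-app-latest:latest
        imagePullPolicy: Never
        # placeholder entrypoint: launch the servers, then block so the
        # container's main process never exits
        command: ["sh", "-c", "/start-all.sh && tail -f /dev/null"]
        ports:
        - containerPort: 3306   # MySQL
        - containerPort: 5000   # application port from the Deployment above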

How to specify/modify target-port on a newly created app through the OpenShift CLI?

I am trying to expose a new app created via the OpenShift command line (oc). This is a Node.js server listening on port 3000. However, OpenShift defaults the target port to 8080, as shown in the following service.yaml:
kind: Service
apiVersion: v1
.............
.............
spec:
  ports:
  - name: 8080-tcp
    protocol: TCP
    port: 8080
    targetPort: 8080
.........
I want to be able to update targetPort via the command line. I already followed these steps, but no luck so far:
step1: oc new-project my-new-project
step2: oc new-app https://github.org.com/my-new-app.git
step3: oc expose service my-new-app --target-port=3000
Error: **cannot use --target-port with --generator=route/v1**
Note: I am able to access the app (i.e. port 3000) only when I manually update targetPort to 3000 in Services.yaml.
You didn't specify the port. Try this:
oc expose service my-new-app --target-port=3000 --port=8080
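Alternatively, an already-created service can be patched in place rather than re-exposed. A sketch, assuming the service keeps the single port shown in the generated service.yaml above:
oc patch service my-new-app --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": 3000}]'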

import mysql data to kubernetes pod

Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:
Directly, the same way as with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
Or by using the data stored in an existing Docker container volume and passing it to the pod?
Just in case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question:
You can kubectl exec into your container in order to run commands inside it. You may need to first ensure that the container has access to the file, by perhaps storing it in a location that the cluster can access (network?) and then using wget/curl within the container to make it available. One may even open up an interactive session with kubectl exec.
However, the ways to do this, in increasing order of generality, would be:
1. Create a service that lets you access the MySQL instance running on the pod from outside the cluster, and connect your local mysql client to it.
2. If you are executing this initialization operation every time such a MySQL pod is started, the script could be stored on a persistent volume and executed within the pod at startup.
3. If you have several pieces of data that you typically need to copy over when starting the pod, look at init containers for fetching that data.
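As a concrete sketch of the kubectl exec route described above (the namespace, pod name, and credentials are placeholders, matching the earlier one-liner): copy the dump into the pod first, then load it from inside:
kubectl cp ./Dump.sql namespace/my_sql_pod_name:/tmp/Dump.sql
kubectl -n namespace exec my_sql_pod_name -- sh -c 'mysql -u user -ppassword database < /tmp/Dump.sql'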
TL;DR
Use a ConfigMap, and mount that ConfigMap into the /docker-entrypoint-initdb.d folder.
Code
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dbpassword11
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: usermanagement-dbcreation-script
          mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer to "Initializing a fresh instance"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: ebs-mysql-pv-claim
      - name: usermanagement-dbcreation-script
        configMap:
          name: usermanagement-dbcreation-script
MySQL ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
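If the initialization SQL already exists as a local file (for example the Dump.sql from the question), roughly the same ConfigMap can be generated from the file directly instead of inlining it. A sketch using the name from the manifests above; note that ConfigMaps are limited to about 1 MiB, so this only suits small dumps:
kubectl create configmap usermanagement-dbcreation-script --from-file=mysql_usermgmt.sql=./Dump.sql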
Reference:
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/04-mysql-deployment.yml
https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/blob/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-02-SC-PVC-ConfigMap-MySQL/kube-manifests/03-UserManagement-ConfigMap.yml

Kubernetes + MySQL : Creating custom database and user in a Kubernetes container

I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs from the official MySQL Docker image and the Kubernetes docs for creating a MySQL container, I have created the following replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - image: mysql:5.6.33
        name: mysql
        env:
        # Root password is compulsory
        - name: "MYSQL_ROOT_PASSWORD"
          value: "root_password"
        - name: "MYSQL_DATABASE"
          value: "custom_db"
        - name: "MYSQL_USER"
          value: "custom_user"
        - name: "MYSQL_PASSWORD"
          value: "custom_password"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        # This name must match the volumes.name below.
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        gcePersistentDisk:
          # This disk must already exist.
          pdName: mysql-disk
          fsType: ext4
According to the docs, when the environment variables MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD are passed, a new user is created with that password and granted rights to the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.
I have tested this by running locally and passing the same environment variables like this:
docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
When I SSH into that container, the database and users are created and everything works fine.
I am not sure what I am doing wrong here. Could anyone please point out my mistake? I have been at this the whole day.
EDIT: 20-Sept-2016
As requested by @Julien Du Bois:
The disk is created. It appears in the cloud console, and when I run the describe command I get the following output.
Command : gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
I referred to a lot of tutorials and Google Cloud examples. To run the MySQL Docker container locally, my main reference was the official image page on Docker Hub:
https://hub.docker.com/_/mysql/
This works for me, and locally the created container has a new database and a user with the right privileges.
For Kubernetes, my main reference was the following:
https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
I am just trying to connect to it using a Django container.
I was facing the same issue when I was using volumes and mounting them to MySQL pods.
As mentioned in the documentation of the MySQL Docker image:
When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".
Here is a code snippet that might help create the volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete # For development purposes only
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
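Once the pod comes up against an empty data directory, the initialization can be checked from outside the pod by listing databases and users. A sketch; the pod name is a placeholder and the root password comes from the question's replication controller:
kubectl exec -it <mysql pod name> -- \
  mysql -uroot -proot_password -e "SHOW DATABASES; SELECT user, host FROM mysql.user;"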
Hope that helped.
You set mysql-disk in your deployment and the disk you have is custom-disk. Change pdName to custom-disk and it will work.