I have a container image that I want to run in minikube. My container image has MySQL, Redis, and some other components needed to run my application. I have an external application that needs to connect to this MySQL server. The container image is written such that it starts the MySQL and Redis servers on startup.
I thought I could access my MySQL server from outside if I ran the container image inside the Docker daemon in minikube after setting the environment as shown below:
eval $(minikube docker-env)
But this didn't help, since the MySQL server is not accessible on port 3306.
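(From what I understand, a port published inside minikube's Docker daemon opens on the minikube VM, not on the host, so it would have to be reached via the VM's IP. A rough sketch, where my-image stands in for my actual image name:)

    docker run -d -p 3306:3306 my-image            # publish MySQL's port on the minikube VM
    mysql -h "$(minikube ip)" -P 3306 -u root -p   # connect from the host via the VM's IP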
I then tried a second method: creating a pod.
I created the deployment using the YAML file below:
apiVersion: v1
kind: Service
metadata:
  name: c-service
spec:
  selector:
    app: c-app
  ports:
    - protocol: "TCP"
      port: 3306
      targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c-app
spec:
  selector:
    matchLabels:
      app: c-app
  replicas: 3
  template:
    metadata:
      labels:
        app: c-app
    spec:
      containers:
        - name: c-app
          image: c-app-latest:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
After creating the deployment, the containers start creating. As I said before, my image is designed to start the MySQL and Redis servers on startup.
In Docker Desktop, running the same image starts the servers, opens the ports, and keeps running, and I can also perform operations in the container terminal.
In minikube, it starts the servers, and once they have started, Kubernetes marks the pod's status as Completed and tries to restart it. It doesn't open any ports. It just starts and restarts, again and again, until it eventually ends in a "CrashLoopBackOff" error.
How I did it before:
I have a container image that I was previously running in Docker Desktop, where it ran fine. It starts all the servers, establishes a connection with my external app, and also lets me interact with the container terminal.
My current requirements:
I want to run the same image in minikube that I ran in Docker Desktop. Upon running the image, it should open ports for external connections (say port 3306 for MySQL) and I should be able to interact with the container through:
kubectl exec -it <pod> -- /bin/bash
More importantly, I don't want the pod restarting again and again. It should start once, start all the servers, open the ports, and keep running.
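(From what I've read, a pod is marked Completed when the container's main process exits, so one fix might be to keep PID 1 alive after the servers start. A sketch against the containers section of the Deployment above; /start-all.sh is a placeholder for whatever startup script my image actually runs:)

    containers:
      - name: c-app
        image: c-app-latest:latest
        imagePullPolicy: Never
        command: ["/bin/sh", "-c"]
        args: ["/start-all.sh && tail -f /dev/null"]   # block forever after the servers start
        ports:
          - containerPort: 3306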
Sorry for the long post. Can anyone please help me with this?
Related
I have a scenario with multiple MySQL servers running in different namespaces in a single Kubernetes cluster. The MySQL servers belong to different departments.
My requirement is that I should be able to connect to the different MySQL servers by hostname, i.e.,
mysqlServerA running in ServerA namespace should be reachable from outside the cluster using:
mysql -h mysqlServerA.mydomain.com -A
mysqlServerB running in ServerB namespace should be reachable from outside the cluster using:
mysql -h mysqlServerB.mydomain.com -A
and so on.
I have tried TCP-based routing using the NGINX Ingress controller's ConfigMap, routing traffic from clients to the different MySQL servers by assigning different port numbers:
for mysqlServerA:
mysql -h mysqlServer.mydomain.com -A -P 3301
for mysqlServerB:
mysql -h mysqlServer.mydomain.com -A -P 3302
This works perfectly. But I want to know whether hostname-based routing is possible, because I don't want a separate load balancer for each MySQL service.
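(For reference, the port-based routing above uses the ingress-nginx tcp-services ConfigMap format, roughly like this; the namespaces and service names mirror the examples above:)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      "3301": "ServerA/mysqlServerA:3306"   # external port 3301 -> mysqlServerA
      "3302": "ServerB/mysqlServerB:3306"   # external port 3302 -> mysqlServerB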
Thanks
General info
I am routing traffic by different port numbers
You are right; the reason is that the connection to MySQL is made over plain TCP. That is why it is not possible to serve two simultaneous connections to two servers on the same IP:port.
Unlike HTTP, TCP has no headers that would allow distinguishing which host the traffic should be routed to. Still, there are at least two ways to achieve the functionality you want :) I'll describe them below.
I want to know if hostname based routing is possible or not
I don't want separate load balancer for each mysql service.
Kubernetes offers a few methods for making a service reachable from outside the cluster (namely hostNetwork, hostPort, NodePort, LoadBalancer, and Ingress).
The LoadBalancer is the simplest way to serve traffic on LoadBalancerIP:port; however, due to the TCP nature of the connection, you'd need one LoadBalancer per MySQL instance:
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: LoadBalancer
  ports:
    - port: 3306
  selector:
    name: my-mysql
The NodePort looks good, but it lets clients connect only if they know the port (which can be tedious for them).
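(For completeness, a NodePort variant of the Service above might look like this; 30306 is an arbitrary example from the default 30000-32767 NodePort range:)

    kind: Service
    apiVersion: v1
    metadata:
      name: mysql
    spec:
      type: NodePort
      ports:
        - port: 3306
          nodePort: 30306   # clients must know this port
      selector:
        name: my-mysql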
Proposed solutions
External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
In the Service spec, externalIPs can be specified along with any of the ServiceTypes. In the example below, mysql-1234-inst-1 can be accessed by clients on 1.2.3.4:3306 (externalIP:port) and mysql-4321-inst-1 on 4.3.2.1:3306:
$ cat stereo-mysql-3306.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-1234-inst-1
spec:
  selector:
    app: mysql-prod
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306
  externalIPs:
    - 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-4321-inst-1
spec:
  selector:
    app: mysql-repl
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306
  externalIPs:
    - 4.3.2.1
Note: you need to have 1.2.3.4 and 4.3.2.1 assigned to your nodes (and resolve mysqlServerA / mysqlServerB at mydomain.com to these IPs as well). I've tested that solution on my GKE cluster and it works :).
With that config, all requests for mysqlServerA.mydomain.com:3306, which resolves to 1.2.3.4, are routed to the Endpoints of service mysql-1234-inst-1 with the app: mysql-prod selector, and mysqlServerB.mydomain.com:3306 will be served by app: mysql-repl.
Of course it is possible to split that config across the two namespaces (one namespace, one MySQL, one Service per namespace).
ClusterIP+OpenVPN
Given that your MySQL pods have ClusterIPs, it is possible to spawn an additional VPN pod in the cluster and connect to the MySQL servers through it.
As a result, you can establish a VPN connection and have access to all the cluster resources. This is a fairly limited solution, since it requires a VPN connection for anyone who needs access to MySQL.
A good practice is to add a bastion server on top of that solution; that server is then responsible for providing access to cluster services via the VPN.
Hope that helps.
For example, I created a network in Docker:
docker network create hello-rails
Then I have MySQL, which is connected to this network:
docker run -p 3306 -d --network=hello-rails --network-alias=db -e MYSQL_ROOT_PASSWORD=password --name hello-rails-db mysql
And I also have a Rails server, which relies on the same network:
docker run -it -p 3000:3000 --network=hello-rails -e MYSQL_USER=root -e MYSQL_PASSWORD=password -e MYSQL_HOST=db --name hello-rails benjamincaldwell/hello-docker-rails:latest
I want to write a Kubernetes deployment for these two containers as a YAML file. But I don't know how to express the network setup in that file. Do you have any recommendations?
In Kubernetes you would solve this by creating two services.
The MySQL service will look something like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
In your Rails server, you can access the MySQL service either through the mysql DNS name or through the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT environment variables. There is no need to link the containers or specify a network, as you would in Docker.
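(As an illustration, not part of the original answer: a minimal Deployment for the Rails container, reusing the image and environment variables from the docker run command in the question:)

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rails
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: rails
      template:
        metadata:
          labels:
            app: rails
        spec:
          containers:
            - name: rails
              image: benjamincaldwell/hello-docker-rails:latest
              ports:
                - containerPort: 3000
              env:
                - name: MYSQL_HOST
                  value: mysql       # the Service name doubles as its DNS name
                - name: MYSQL_USER
                  value: root
                - name: MYSQL_PASSWORD
                  value: password    # placeholder; prefer a Secret in practice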
Your Rails service will look like this:
kind: Service
apiVersion: v1
metadata:
  name: rails
spec:
  type: LoadBalancer
  selector:
    app: rails
  ports:
    - port: 3000
Notice the type: LoadBalancer, which specifies that this service will be published to the outside world. Depending on where you run Kubernetes, a public IP address will be automatically assigned to this service.
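(Once the load balancer is provisioned, the assigned address shows up in the standard service listing:)

    kubectl get service rails   # the EXTERNAL-IP column shows the assigned address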
For more information, have a look at the Services documentation.
I have a Google Cloud Container Engine setup. I want to spin up a MySQL pod with an external volume.
ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mysql
  name: mysql-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - image: mysql
          name: mysql
          ports:
            - name: mysql
              containerPort: 3306
              hostPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            pdName: mysql-1-disk
            fsType: ext4
When I run the RC without the external volume, MySQL works fine. It breaks with the error below when I try to attach the volume.
Kubernetes POD Error:
Warning FailedSyncError syncing pod, skipping: failed to "StartContainer" for "mysql" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql pod=mysql-controller-4hhqs_default(eb34ff46-8784-11e6-8f12-42010af00162)"
Disk (External Volume):
mysql-1-disk is the Google Cloud disk. I tried creating the disk both blank and from the ubuntu image. Both failed with the same error.
The error messages on mounting persistent disks are really not descriptive, in my experience. Given your configuration file, use a blank disk.
Some things to check:
- Is the pdName exactly the same as in your GKE environment?
- Is the disk in the same availability zone (e.g. europe-west1-c) as your cluster? Otherwise it can't be mounted.
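(If the disk doesn't exist yet, a blank one can be created in the cluster's zone; the zone and size here are placeholders:)

    gcloud compute disks create mysql-1-disk --zone=europe-west1-c --size=10GB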
Hope this helps.
The problem you face may be caused by using an RC, rather than a Pod, to interact with the Persistent Disk.
As mentioned in the documentation:
A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
In this case, I suggest running MySQL with the Persistent Disk defined in a Pod configuration file. A sample configuration can be found here.
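(A minimal Pod sketch along those lines, reusing mysql-1-disk from the question; the root password is a placeholder:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password       # placeholder; prefer a Secret
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          gcePersistentDisk:
            pdName: mysql-1-disk
            fsType: ext4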
I'm just starting to use OpenShift v3. I've been looking for examples of setting up a CI/CD pipeline with Jenkins, Nexus, and SonarQube on OpenShift. I found this nice example project, but unfortunately I can't get it to work: https://github.com/OpenShiftDemos/openshift-cd-demo
The problem I'm running into is that once a Jenkins job starts, it tries to connect to the Nexus service using the URL nexus:8081. This URL comes from this section of the OpenShift template:
# Sonatype Nexus
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Sonatype Nexus repository manager's http port
    labels:
      app: nexus
    name: nexus
  spec:
    ports:
      - name: web
        port: 8081
        protocol: TCP
        targetPort: 8081
    selector:
      app: nexus
      deploymentconfig: nexus
    sessionAffinity: None
    type: ClusterIP
However, it seems that Jenkins (run as a pod on OpenShift within the same project as Nexus) can't connect to http://nexus:8081 and shows the following:
Connect to nexus:8081 [nexus/172.30.190.210] failed: Connection refused # line 81, column 25
Any idea what is going on?
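(Two standard checks, not from the original post: "connection refused" on a ClusterIP service usually means it has no ready endpoints yet, for instance because Nexus is still starting up:)

    oc get endpoints nexus   # an empty ENDPOINTS column means no ready pod backs the Service
    oc logs dc/nexus         # check whether Nexus has finished starting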
This is my .yaml content:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: imagelingga
      name: imagelingga
      ports:
        - containerPort: 80
          name: imagelingga
    - resources:
        limits:
          cpu: 0.5
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: pass
      ports:
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - name: mysqlkuber
          mountPath: /var/lib/mysql
          readOnly: false
  volumes:
    - name: mysqlkuber
      hostPath:
        path: /home/mysqlkuber
I have two images:
- mysql
- imagelingga (a microservice server for Java)
The mysql logs show that it is already running, but the imagelingga logs show: Pod "mysql" in namespace "default": container "imagelingga" is in waiting state.
The connection between the two images is that imagelingga needs a connection to mysql as its DB.
I have already run both images in Docker without Kubernetes and they run normally, but the problem appears when I run them inside Kubernetes.
How do I get the imagelingga container to start its service?
Thanks in advance!
The container is in the waiting state because it crashes or fails when the image runs. Kubernetes then restarts the container, and it shows as waiting while the restart is in progress.
To check the pod status:
kubectl get pods
If the status is "CrashLoopBackOff", Kubernetes is repeatedly restarting the container.
To check the logs of a container inside the pod:
kubectl logs [pod] -c [container]
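(Also worth checking, though not mentioned above: the pod's events often state exactly why the container keeps crashing:)

    kubectl describe pod [pod]   # the Events section shows crash and restart reasons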