Hi there,
I have a Docker container that is a PHP backend. I have created a Kubernetes Pod from this container. This is what my YAML file looks like:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: backend
    image: 000.dkr.ecr.eu-west-1.amazonaws.com/fullstackapp
    ports:
    - containerPort: 8000
I want to be able to connect my MySQL database (which is also a Docker container) to the backend in the same pod, but I have no idea how to go about doing this. Any help would be appreciated!
Well,
Since you have dockerized your app (you made a Docker image), you also need to use a Docker image for your MySQL database.
But here is the kicker: you also need to create Services for your app Pod and your MySQL Pod.
You can find all the details in the Kubernetes documentation (which is really good).
To make myself clear:
1.) First create a deployment object for your app.
2.) Then make a service for your app.
Rinse and repeat for the MySQL database.
1.) You need the Deployment object (and not the Pod kind) because the Deployment keeps your Pods alive when one breaks: for instance, if you have three replicas (Pods), the ReplicaSet that the Deployment uses will make sure there are always three replicas of your app.
2.) Services will group your Pods (via labels), because the Pods that the Deployment generates are short-lived (ephemeral), meaning their IP addresses are unstable and you won't be able to rely on them.
So you will use Services, which give you a cluster IP (a virtual IP) that other objects can use, for instance when your app wants to connect to the MySQL database.
You can use the name of the MySQL Service in your app's configuration files.
So, basically, that's how you would connect a MySQL Pod to your app's Pod.
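As a rough sketch of the MySQL half (my own illustration, assuming the stock mysql image, a placeholder password and hypothetical names; adjust everything to your setup), the Deployment plus Service could look roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0            # assumption: the stock MySQL image
        env:
        - name: MYSQL_ROOT_PASSWORD # better kept in a Secret in real deployments
          value: changeme
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql                       # this becomes the DNS name other Pods use
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
Your PHP backend would then connect to the host mysql (the Service name) on port 3306 instead of a Pod IP.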
Take a look at the Katacoda project; they give you a playground to learn this kind of stuff.
Tom
Related
I created a project from the https://fiware-tutorials.readthedocs.io/en/latest/time-series-data.html tutorial and just changed the entity names and types, and everything worked right. But after some time (usually a day) all entities in Orion disappear (although the data in QuantumLeap persists) and I cannot get the entity properties with this command:
curl -X GET \
--url 'http://localhost:1026/v2/entities?type=Temp'
What is the problem? Is there some restriction in tutorial projects?
The tutorials have been written as an introduction to NGSI, not as a robust architectural solution. The idea is just to get something "quick and dirty" up and running on a developer's machine and various shortcuts have been taken. Indeed the docker-compose files all hold the following disclaimer:
WARNING: Do not deploy this tutorial configuration directly to a production environment
The tutorial docker-compose files have not been written for production deployment and will not
scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
are running at full debug and extra ports have been exposed to allow for direct calls to services.
They also contain various obvious security flaws - passwords in plain text, no load balancing,
no use of HTTPS and so on.
This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
and so on, purely so that a single docker-compose file can be read as an example to build on,
not use directly.
When deploying to a production environment, please refer to the Helm Repository
for FIWARE Components in order to scale up to a proper architecture:
see: https://github.com/FIWARE/helm-charts/
Perhaps the most relevant factor here in answering your question is that there is typically no volume persistence: the tutorials clean up after themselves where possible to avoid leaving data on a user's machine unnecessarily.
If you have lost all your entity data when connecting to Orion, my guess here is that the MongoDB database has exited and restarted for some reason. Since there is deliberately no persistent volume set up, this would mean that all previous entities are lost on the restart.
A solution on how to persist volumes and fix this behaviour can be found in answers to another question on this site - something like:
version: "3.9"
services:
mongodb:
image: mongo:4.4
ports:
- 27017:27017
volumes:
- type: volume
source: mongodb_data_volume
target: /data/db
volumes:
mongodb_data_volume:
external: true
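Note that external: true assumes the mongodb_data_volume volume already exists outside of Compose; if you would rather have Compose create and manage the volume itself, a minimal variant (my assumption, not part of the quoted answer) is to declare it without the flag:
volumes:
  mongodb_data_volume:          # Compose creates and manages this volume on first use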
Background:
I've deployed a Spring Boot app to the OpenShift platform, and would like to know how to handle persistent storage in OpenShift 3.
I've subscribed to the free plan and have access to the console.
I can use the oc command, but access seems limited under my user for commands like 'oc get pv' and others.
Question
How can I get finer control over my PVC (persistent volume claim) on OpenShift 3?
Ideally, I want a shell and to be able to list the files on that volume.
Thanks in advance for your help!
Solution
Add storage to your pod.
Use the command oc rsh <my-pod> to get access to the pod.
cd /path-to-your-storage/
The oc get pv command can only be run by a cluster admin because it shows all the declared persistent volumes available in the cluster as a whole.
All you need to know is that in OpenShift Online Starter you have access to claim one persistent volume. The access mode of that persistent volume is ReadWriteOnce (RWO).
A persistent volume is not yours until you make a claim and so have a persistent volume claim (pvc) in your project. In order to be able to see what is in the persistent volume, it has to be mounted against a pod, or in other words, in use by an application. You can then get inside of the pod and use normal UNIX commands to look at what is inside the persistent volume.
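As a minimal sketch (with a hypothetical claim name and size, not taken from your project), the claim itself looks roughly like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-storage          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce             # the RWO mode mentioned above
  resources:
    requests:
      storage: 1Gi
Once the claim is mounted into your pod (for example at /data), oc rsh <my-pod> followed by ls /data lets you list the files on that volume.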
For more details on persistent volumes, I suggest reading the chapter about storage in the free eBook at:
https://www.openshift.com/deploying-to-openshift/
I am trying to deploy a MySQL Docker image to Kubernetes. I have managed most of the tasks and the Docker image is up and running in Docker, but one final thing is missing from the Kubernetes deployment.
MySQL has one configuration setting, 'MYSQL_ROOT_HOST', which states from which host which user can log on. Configuring that for Docker is no problem, since Docker networking uses '172.17.0.1' for bridging.
The problem with Kubernetes is that this must be the IP of the Pod trying to connect to the MySQL Pod, and every time a Pod starts this IP changes.
I tried to put the label of the Pod connecting to the MySQL Pod, but it still looks at the IP of the Pod instead of a DNS name.
Do you have an idea how I can overcome this problem? I can't even figure out how this should work if I set autoscaling for the Pod that is trying to connect to MySQL, since the replicas will all have different IPs.
Thx for answers....
As @RyanDowson and @siloko mentioned, you should use a Service, Ingress or Helm charts for these purposes.
You can find additional information on the Service, Ingress and Helm Charts documentation pages.
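For illustration, a minimal Service in front of the MySQL Pod could look roughly like this (the names and labels are assumptions, adjust them to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: mysql                   # hypothetical name; becomes a stable DNS name
spec:
  selector:
    app: mysql                  # must match the labels on your MySQL Pod(s)
  ports:
  - port: 3306
    targetPort: 3306
Other Pods can then reach the database as mysql:3306 (or mysql.<namespace>.svc.cluster.local), no matter how often the backing Pod and its IP are replaced.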
I am trying to set up Kubernetes on GCE. Let's say there are 20 minions in the cluster, and two services deployed with type LoadBalancer, each with 2 replicas. K8S will basically put 2 pods on two different minions per service.
My question is: would the rest of the minions, which are not running any of these pods, also get the iptables rules in the chains KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services? At least that is my observation, but I would like to confirm whether this is just how Kubernetes works on GCE or how K8S behaves irrespective of where it is deployed. Is the reason that any service should be reachable from any minion, no matter whether the minion runs part of that service or not? Let me know if there is a better community for this question.
Would the rest of the minions which are not running any pods also get the iptables rule in chain KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services added?
Yes. The load balancer can forward external traffic to any node in the cluster, so every node needs to be able to receive traffic for a service and forward it to the appropriate pod.
I would like to confirm if this is just how Kubernetes works on GCE or this is how K8S behaves irrespective of where it is deployed.
The iptables rules for services within the cluster are the same regardless of where Kubernetes is deployed. Making a service externally accessible differs slightly depending on where you deploy your cluster (e.g. on AWS you'd create the service as type NodePort instead of type LoadBalancer), so the iptables rules for externalized services can vary a bit.
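For illustration only (hypothetical names, not from the question), a service of the kind described above could look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical name
spec:
  type: LoadBalancer            # per the answer above, type: NodePort is the alternative on AWS
  selector:
    app: my-app                 # matches the two replica pods of one service
  ports:
  - port: 80
    targetPort: 8080
kube-proxy on every node, including nodes that run none of the matching pods, programs the iptables rules for this service, which is why you see the chains on all minions.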
I would like to create a Kubernetes cluster to deploy MySQL databases, like a MySQL farm. These databases should be accessible from the internet.
All databases on the same node will be listening on port 3306; could the kube-proxy or the DNS add-on redirect each request to a specific container?
I would like to create URLs like myDB1.example.com:3306 and myDB2.example.com:3306 that go to a specific container.
I'm deploying this environment in AWS.
Is it possible to create this cluster?
Yes. The starting point would be a (customized) MySQL Docker image with EBS-backed volumes, and you'd be using it in a Replication Controller to handle failover. On top of that you would have a Service that provides a stable and routable interface to the outside world. Optionally, put an AWS Elastic Load Balancer in front of it.
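A rough sketch of that setup for one database (the names, image tag and EBS volume ID are hypothetical, not something prescribed by this answer) might look like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: mydb1                        # hypothetical name for one database in the farm
spec:
  replicas: 1
  selector:
    app: mydb1
  template:
    metadata:
      labels:
        app: mydb1
    spec:
      containers:
      - name: mysql
        image: mysql:5.7             # assumption: a stock or customized MySQL image
        env:
        - name: MYSQL_ROOT_PASSWORD  # better stored in a Secret
          value: changeme
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        awsElasticBlockStore:        # EBS-backed volume, as suggested above
          volumeID: vol-xxxxxxxx     # hypothetical EBS volume ID
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: mydb1
spec:
  type: LoadBalancer                 # provisions an AWS ELB in front of the database
  selector:
    app: mydb1
  ports:
  - port: 3306
You would then point myDB1.example.com at the load balancer's DNS name (for example with a CNAME record) and repeat the pattern for each database.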