I would like to create a Kubernetes cluster to deploy MySQL databases, like a MySQL farm. These databases should be accessible from the internet.
All databases on the same node will listen on port 3306. Could kube-proxy or the DNS add-on redirect each request to a specific container?
I would like to create URLs like myDB1.example.com:3306 and myDB2.example.com:3306 that each go to a specific container.
I'm deploying this environment in AWS.
Is it possible to create this cluster?
Yes. The starting point would be a (customized) MySQL Docker image with EBS-backed volumes, and you would run it in a Replication Controller to handle failover. On top of that you would have a Service that provides a stable, routable interface to the outside world. Optionally, put an AWS Elastic Load Balancer in front of it.
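To make that concrete, here is a minimal sketch of one such Service, assuming the MySQL pods for each database carry an illustrative label like db: mydb1. On AWS, type: LoadBalancer provisions an ELB, and you would point a CNAME such as myDB1.example.com at the ELB's DNS name so each hostname ends up at its own container.

```yaml
# Minimal sketch: one Service (and ELB) per database instance.
# The selector label "db: mydb1" is illustrative, not prescribed.
apiVersion: v1
kind: Service
metadata:
  name: mydb1
spec:
  type: LoadBalancer   # on AWS this provisions an Elastic Load Balancer
  selector:
    db: mydb1          # label on the MySQL pods for this database
  ports:
    - port: 3306       # port exposed by the load balancer
      targetPort: 3306 # MySQL port inside the container
```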
I'd like to connect to my databases in GKE using GUI tools but I don't want to expose the services to the world. What are some ways to accomplish this?
Update: for instance, I'd like to use TablePlus to connect to a MySQL pod inside the cluster.
Create a new VM in the region where your cluster lives.
Install the GUI tool.
Specify an IP address in the cluster's IP range.
See the example below, which describes how to connect to a database running on a GKE cluster.
https://cloud.google.com/composer/docs/access-airflow-database
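Another option that keeps everything private, sketched below assuming a reasonably recent GKE cluster: give the database a Service of type LoadBalancer annotated as internal-only, so it gets a stable private IP in the VPC that the GUI tool on your VM can reach, while nothing is exposed to the internet. The selector and names are illustrative.

```yaml
# Hedged sketch: internal-only load balancer on GKE.
# "app: mysql" is an assumed label on the database pods.
apiVersion: v1
kind: Service
metadata:
  name: mysql-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"  # keep the LB private to the VPC
spec:
  type: LoadBalancer
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
```

Point TablePlus at the private IP this Service receives.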
I am trying to deploy a MySQL Docker image to Kubernetes. I have managed most of the tasks and the Docker image is up and running in Docker, but one final thing is missing from the Kubernetes deployment.
MySQL has a configuration option, 'MYSQL_ROOT_HOST', which states from which host a user can log on. Configuring that for Docker is no problem, since Docker networking uses '172.17.0.1' for the bridge.
The problem with Kubernetes is that this must be the IP of the Pod trying to connect to the MySQL Pod, and every time a Pod starts, this IP changes.
I tried to use the label of the Pod connecting to the MySQL Pod, but MySQL still looks at the IP of the Pod instead of a DNS name.
Do you have an idea how I can overcome this problem? I can't even figure out how this should work if I enable autoscaling for the Pod that is trying to connect to MySQL, since the replicas will all have different IPs.
Thanks for any answers.
As @RyanDowson and @siloko mentioned, you should use a Service, an Ingress or Helm charts for these purposes.
You can find additional information on the Service, Ingress and Helm Charts documentation pages.
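A minimal sketch of the Service approach, under the assumption that you use the mysql/mysql-server image (which reads MYSQL_ROOT_HOST): clients connect to the stable Service DNS name instead of pod IPs, and MYSQL_ROOT_HOST is set to '%' (or to the pod network CIDR) so logins are not tied to a single, changing client IP. All names are illustrative.

```yaml
# Hedged sketch: stable DNS for MySQL plus a host wildcard for root logins.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:8.0
          env:
            - name: MYSQL_ROOT_HOST
              value: "%"               # accept root logins from any host
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret   # illustrative Secret
                  key: password
          ports:
            - containerPort: 3306
```

Connecting pods then use the hostname mysql (or mysql.<namespace>.svc.cluster.local), which stays the same no matter how often the pods are rescheduled.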
I have several questions regarding Docker.
First, my project:
I have a blog on a shared host and want to move it to the cloud to have the whole server side in my hands and to have the possibility to scale my server to my needs.
My first intent was to set up a nice Ubuntu 14 LTS server with nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and how easy it is to move them from one server to another.
I then read a lot about Docker but stumbled upon a few things I wondered about.
In my understanding, Docker just runs services like PHP, MySQL or similar, but doesn't hold data, right?
Where would I store all the data like the database, nginx.conf, php.ini and all the files I want to serve with nginx (i.e. /var/www/)?
Are they stored on the host system? If so, then moving a Docker setup would be no easier than moving a whole server, no?
Do I really gain an advantage from using Docker to serve a WordPress blog or another website using MySQL and so on?
Thanks in advance
Your data is either stored on the host machine, or it is attached to the Docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
The data can be 'inside' one of your containers (e.g. your mysql databases live inside your mysql container).
You can mount one or more directories from your host machine inside your containers. So then the data lives on your host.
You can create Docker volumes or Docker volume containers that are used to store your data. These volumes or volume containers are mounted inside the container with your application. The data then lives in directories managed by Docker.
For details of these options, see the Docker documentation on volumes.
The last option is to mount remote storage to your Docker containers. Flocker is one of the options you have for this. A compose-style sketch of the host-based options follows below.
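Here is a minimal docker-compose sketch of the bind-mount and Docker-volume options above: a bind mount from the host for nginx configuration and site files, and a Docker-managed named volume for the MySQL data. All paths, names and the password are illustrative.

```yaml
# Hedged sketch (docker-compose.yml): bind mounts vs. a named volume.
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example               # illustrative only
    volumes:
      - db-data:/var/lib/mysql                   # named volume managed by Docker
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro    # bind mount from the host
      - ./www:/var/www:ro                        # site files from the host
volumes:
  db-data:
```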
At my work I've set up a host (i.e. server) that runs a number of services in docker containers. The data for each of these services 'lives' in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the containers that are running my services without affecting the data.
I have also made separate Docker containers that are started by cron; these back up the data from the data volume containers.
For mysql, the backup container connects to the mysql container and executes mysqldump remotely.
I can also run the (same) containers that are running my services on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading mysql from 5.6 to 5.7.
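A sketch of that backup pattern, with all image names, credentials and paths illustrative: a short-lived container joins the same Docker network, runs mysqldump against the mysql container, and writes the dump into a dedicated backup volume (in the real setup it would be triggered by cron rather than started alongside the database).

```yaml
# Hedged sketch: network-based mysqldump backup into a separate volume.
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example          # illustrative only
    volumes:
      - db-data:/var/lib/mysql
  backup:
    image: mysql:5.7                        # reuse the image for the mysqldump client
    depends_on:
      - mysql
    entrypoint: ["sh", "-c", "mysqldump -h mysql -uroot -pexample --all-databases > /backups/dump-$$(date +%F).sql"]
    volumes:
      - db-backups:/backups
volumes:
  db-data:
  db-backups:
```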
Question: How can I provide reliable access from (non-K8s) services running in a GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup in the Google Cloud Platform. Most services are 12factor apps and run just fine within K8s. Some backing stores (databases) are run outside of K8s. Accessing them is easy by using headless services with manually defined endpoints to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to this port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is even a better way to solve this issue.
This setup is probably not a typical use-case for a K8s deployment, but we'd like to go this way until PetSets and Persistent Storage in K8s have matured enough.
As we are talking about internal services, I'd like to avoid using an external load balancer in this case.
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
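Complementary to that, if you stick with NodePort, you can avoid having to discover the port at all by pinning it in the Service spec; clients in the GCE network then only need a node IP (or an internal DNS name for the nodes). Port and names below are illustrative.

```yaml
# Hedged sketch: a NodePort Service with a fixed, well-known port.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080   # pinned; must fall in the cluster's NodePort range (default 30000-32767)
```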
One possible way is to use an Ingress Controller. Ingress Controllers are designed to provide access from outside a Kubernetes cluster to services running inside it. An Ingress Controller runs as a pod within the cluster and routes outside requests to the correct services inside the cluster, based on the configured rules. This provides a secure and reliable way for non-Kubernetes services running in a GCE network to access services running in Kubernetes.
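A minimal Ingress resource might look like the sketch below, assuming an Ingress Controller (e.g. ingress-nginx) is already installed and treating the host and service names as placeholders. Note that Ingress rules cover HTTP(S) traffic; plain TCP backends such as databases would need the controller's TCP passthrough features instead.

```yaml
# Hedged sketch: route an internal hostname to a ClusterIP Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
spec:
  rules:
    - host: api.internal.example.com   # illustrative internal hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api           # ClusterIP Service inside the cluster
                port:
                  number: 8080
```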
I'm a newbie at Kubernetes and I'm having problems understanding how I can run persistent pods (Cassandra ones or MySQL ones) on Ubuntu servers.
Correct me if I'm wrong: Kubernetes can scale the pods up or down when it sees that we need more CPU, but we are not talking about static code here, rather about data that is present on other nodes. So what will the pod do when it receives a request from the load balancer? Also, Kubernetes has the power to destroy nodes when it sees that the traffic has reduced, so how can we avoid losing data and disturbing the environment?
You should use volumes to map a directory in the container to persistent disks on the host or other storage.
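A minimal sketch of what that looks like in Kubernetes terms: a PersistentVolumeClaim requests durable storage, and the pod mounts it at MySQL's data directory, so the data outlives any individual pod. Sizes, names and the password are illustrative.

```yaml
# Hedged sketch: PersistentVolumeClaim + pod mounting it for MySQL data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: example              # illustrative only
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # MySQL data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-data
```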