Kubernetes MySQL replication - Master service host inquiry

I currently have a working MySQL master and slave with asynchronous replication set up on a Kubernetes cluster.
I'm trying to plan for a contingency: if the master goes down, the slave should pick up the slack. I figured the first step would be to review the Dockerfiles and the scripts that define the ENTRYPOINT within them.
When I run kubectl get svc, this is the information I get about the services.
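It looks roughly like this (the service names and the slave's IP here are placeholders; the master's cluster IP is the real one):

    NAME           CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    mysql-master   10.0.156.209   <none>        3306/TCP   5d
    mysql-slave    10.0.157.93    <none>        3306/TCP   5d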
This tells me that, on the cluster, the master service has the cluster IP 10.0.156.209.
Now, when I flicked through the helper script docker-entrypoint.sh for the MySQL slave Docker image, I noticed the line that helps set up the master-slave scenario (I trimmed the line just to highlight this):
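The shape of it is roughly this (paraphrased for illustration, not the exact line from the script):

    # trimmed: the script points replication at the master via the env var
    mysql -e "CHANGE MASTER TO MASTER_HOST='${MYSQL_MASTER_SERVICE_HOST}', ..."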
and when I exec into the slave pod on Kubernetes, the environment variable $MYSQL_MASTER_SERVICE_HOST is set to 10.0.156.209.
My question: How did the MySQL slave pod know to use the master's cluster IP as the value for $MYSQL_MASTER_SERVICE_HOST? Is this a Kubernetes thing or a SQL thing?
Thank you :)

These environment variables are created automatically by Kubernetes for each Service. See: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service.
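For a Service named mysql-master (an assumed name here), every pod started after the Service exists gets variables following the Docker-links naming convention: the name is uppercased, dashes become underscores, and _SERVICE_HOST/_SERVICE_PORT are appended. You can verify this from inside the pod (pod and service names are illustrative):

    # inspect the injected Service variables from inside the slave pod
    kubectl exec mysql-slave-0 -- sh -c 'env | grep MYSQL_MASTER'
    MYSQL_MASTER_SERVICE_HOST=10.0.156.209
    MYSQL_MASTER_SERVICE_PORT=3306

Note that these variables are only injected into pods created after the Service; pods started earlier won't see them.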
On another note, MySQL on Kubernetes is a complex topic. You probably don't want to roll your own solution. Have you looked at Vitess?

Related

Trying to create two MySQL pods in Kubernetes with the same volume for high availability

I am trying to deploy two MySQL pods with the same PVC, but I get a CrashLoopBackOff state when I create the second pod, with this error in the logs: "InnoDB: Check that you do not already have another mysqld process using the same InnoDB log files". How do I resolve this error?
There are different options to solve high availability. If you are running Kubernetes on infrastructure that can provision the volume to different nodes (e.g. in the cloud) and your pod/node crashes, Kubernetes will restart the database on a different node with the same volume. Aside from a short downtime, you will have the database back up and running in a relatively short time.
The volume will be mounted to a single running MySQL pod to prevent data corruption from concurrent access. (This is what MySQL notices in your scenario as well, since it is not designed to use shared storage as an HA solution.)
If you need more, you can use the built-in replication of MySQL to create a MySQL 'cluster' which keeps working even if one node/pod fails. Each instance of the MySQL cluster has an individual volume in that case. Look at the Kubernetes StatefulSet example for this scenario: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
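The key piece of that example is the StatefulSet's volumeClaimTemplates, which stamps out a separate PVC per replica (data-mysql-0, data-mysql-1, ...) instead of sharing one volume. A minimal sketch of the shape, with illustrative names and sizes rather than the tutorial's full manifest:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql
      replicas: 2
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD   # demo only
              value: "1"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      # one PVC is created per replica, so the engines never share files
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    EOF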

Two MariaDB instances using the same persistent storage in Docker

I want to start two MariaDB pods with the same persistent storage, and at any point in time I should be able to access both instances, with the data kept in sync between them.
I am trying to start two MariaDB instances using the same persistent-storage volume in Kubernetes, and I am able to start both. I am performing the steps below; a sketch of the manifests follows the list.
1. Create a persistent volume.
2. Create a persistent volume claim.
3. Start mariadb-instance-1 using that claim name.
4. Start mariadb-instance-2 using the same claim name.
5. Create two services so both instances can be accessed from outside.
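In manifest form, steps 3-4 amount to something like this (names are made up; the point is that both Deployments mount the one claim):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mariadb-instance-2     # instance-1 is identical apart from the name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mariadb-instance-2
      template:
        metadata:
          labels:
            app: mariadb-instance-2
        spec:
          containers:
          - name: mariadb
            image: mariadb:10.4
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD   # demo only
              value: "1"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: mariadb-pvc   # the same claim both instances use
    EOF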
I am able to access instance-1, but when I try to access instance-2 it gives me an error: MySQL Error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'.
Please find the attached Dockerfiles.
Any help will be appreciated.
Please find below the Git repo with the DB and storage YAML files which I used to create the deployment.
https://github.com/chandan493/db-as-docker
You cannot run two MariaDB engines on the same storage, and if I understood you right, that is what you expected. Even if you mounted an RWX volume on two pods, pointing /var/lib/mysql of the containers in two separate MariaDB pods at the same place will result in a conflict between the database engines. For MariaDB clustering, look up MariaDB Galera - an almost-fully-synchronous replication for MariaDB. But you'll need three DB engines running for it to make sense.
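A minimal sketch of the Galera wiring each of the three nodes would carry (cluster name and node addresses are placeholders; every node keeps its own datadir):

    cat > /etc/mysql/conf.d/galera.cnf <<'EOF'
    [galera]
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name       = demo-cluster
    wsrep_cluster_address    = gcomm://node1,node2,node3
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    EOF

The first node is bootstrapped with galera_new_cluster; the others then join via the gcomm:// address list.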

OpenShift MySQL persistent storage won't mount on PHP

I have a pod that utilizes PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set mysql as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container is stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect mysql storage to a pod?
An EBS volume can only be mounted on one node at a time in an OpenShift cluster. When you run PHP and MySQL as separate applications, they can land on different nodes, and as a result you can't mount the persistent volume for both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to run in separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate and not Rolling, as Rolling results in a new instance being created while the old one still exists, and the same issue arises because the new and old instances could be on different nodes.
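A sketch of that shape (names and image tags are illustrative, not a drop-in config): both containers live in one pod, so they are scheduled onto the same node and the EBS volume only has to attach once. PHP reaches MySQL over 127.0.0.1 rather than through the shared filesystem, so only the MySQL container mounts the claim.

    cat <<'EOF' | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: php-mysql
    spec:
      containers:
      - name: php
        image: php:7-apache            # talks to MySQL on 127.0.0.1:3306
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme              # demo only
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql    # only MySQL mounts the EBS-backed claim
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql
    EOF

With a DeploymentConfig instead of a bare pod, switching the strategy is a one-line patch, e.g. oc patch dc/php-mysql -p '{"spec":{"strategy":{"type":"Recreate"}}}'.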

Docker failover: Redis, MySQL and Nginx

Currently we have Redis master and slave containers, and MySQL master and slave containers, both pairs replicating.
How would we handle a failure on one of the master containers? Should I be using something like Nginx as a forward proxy to detect connection failures?
We already do this for our API servers and web servers.
For MySQL replication I suggest configuring MySQL in a master <-> master approach and setting up an HAProxy load balancer over the pair, as eugeneware does in https://github.com/eugeneware/docker-mysql-replication. It is very easy to set up using an HAProxy Docker container.
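A minimal sketch of the haproxy.cfg for that setup (addresses are placeholders; the backup flag keeps writes on one master until it fails health checks):

    cat > haproxy.cfg <<'EOF'
    defaults
        mode tcp
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    listen mysql
        bind *:3306
        balance roundrobin
        option tcpka
        server mysql-a 10.0.0.11:3306 check
        server mysql-b 10.0.0.12:3306 check backup
    EOF
    docker run -d -p 3306:3306 \
        -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy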
For Redis it definitely looks like you need Sentinel: http://redis.io/topics/sentinel. In https://hub.docker.com/r/joshula/redis-sentinel/ you can find a Docker image for Sentinel.
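The heart of a Sentinel deployment is a few lines of sentinel.conf pointing at the current master (the address and quorum here are placeholders):

    cat > sentinel.conf <<'EOF'
    sentinel monitor mymaster 10.0.0.21 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    EOF
    redis-sentinel sentinel.conf

Run at least three Sentinels on separate hosts so they can reach the quorum of 2 and promote a slave when the master drops.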
I don't think a proxy like Nginx is the appropriate solution for either of these problems.

Autoscaling MySQL on EC2

I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that, and how to transfer load onto a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL for those first parts). Then write your own recipe that grabs a copy of your slave DB (perhaps a recent snapshot of the EBS volume of one of your other slave servers) and uses it to create a new EBS volume on the new server. After that, it's just a matter of making your recipe configure the server as a slave and start replication so it catches up to the master.
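A hedged sketch of what that recipe automates, in plain shell (volume/instance IDs, hostnames, and credentials are placeholders):

    # 1. snapshot an existing slave's data volume and clone it for the new server
    snap=$(aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
           --query SnapshotId --output text)
    aws ec2 wait snapshot-completed --snapshot-ids "$snap"
    vol=$(aws ec2 create-volume --snapshot-id "$snap" \
          --availability-zone us-east-1a --query VolumeId --output text)
    aws ec2 attach-volume --volume-id "$vol" \
        --instance-id i-0abcdef1234567890 --device /dev/xvdf

    # 2. on the new instance: mount the volume as the MySQL datadir,
    #    then point replication at the master and start slaving
    mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
              MASTER_USER='repl', MASTER_PASSWORD='secret',
              MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4;
              START SLAVE;"

The binlog file and position must match the point at which the snapshot was taken, otherwise the new slave will replay or skip transactions.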