Two MariaDB instances using the same persistent storage in Docker

I want to start two MariaDB pods with the same persistent storage, and at any point in time I should be able to access both instances, with the data in sync between them.
I am trying to start two MariaDB instances using the same persistent volume in Kubernetes, and both instances start. I am performing the steps below:
Creating a persistent volume.
Creating a persistent volume claim.
Starting mariadb-instance-1 using the claim name.
Starting mariadb-instance-2 using the same claim name.
Creating two services so both instances can be accessed from outside.
I am able to access instance-1, but when I try to access instance-2 it gives me an error: MySQL Error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'.
Please find the attached dockerfiles.
Any help will be appreciated.
Please find below the git repo with the db and storage YAML files I used to create the deployment.
https://github.com/chandan493/db-as-docker
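
For readers without the repo handy, here is a minimal sketch of this kind of setup: one PVC shared by two Deployments. The claim name, image tag, and sizes are assumptions for illustration, not taken from the repo (mariadb-instance-2 would be an identical Deployment pointing at the same claimName):

```yaml
# Hypothetical sketch of the setup described in the question:
# one PersistentVolumeClaim reused by two MariaDB deployments.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-instance-1
spec:
  replicas: 1
  selector:
    matchLabels: {app: mariadb-instance-1}
  template:
    metadata:
      labels: {app: mariadb-instance-1}
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.6            # placeholder tag
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: change-me
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mariadb-pvc       # mariadb-instance-2 reuses this claim
```

As the answer below explains, it is this shared claim pointing both engines at one data directory that makes the second instance fail.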

You cannot run two MariaDB engines on the same storage, and if I understood you correctly, that is what you expected. Even if you mounted an RWX volume on two pods, pointing /var/lib/mysql of the containers in two separate MariaDB pods at the same place would result in a conflict between the database engines: each engine assumes exclusive ownership of the data directory and locks its InnoDB data and log files, so the second one cannot start safely. For MariaDB clustering, look up MariaDB Galera, an almost-fully-synchronous replication solution for MariaDB. But you'll need three database engines running for it to make sense.
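
For a rough idea of what Galera involves (an illustration, not a working deployment): the engines are joined into one cluster through wsrep settings like the ones below. The cluster name and node addresses are placeholders, and a real setup also needs state-transfer (SST) configuration and a bootstrap procedure:

```yaml
# Hypothetical ConfigMap carrying the core Galera (wsrep) settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: galera-config
data:
  galera.cnf: |
    [galera]
    wsrep_on = ON
    wsrep_provider = /usr/lib/galera/libgalera_smm.so   # path varies by distro
    wsrep_cluster_name = my-galera-cluster
    # one entry per node; placeholders for three StatefulSet pods
    wsrep_cluster_address = gcomm://mariadb-0.mariadb,mariadb-1.mariadb,mariadb-2.mariadb
    binlog_format = ROW
    default_storage_engine = InnoDB
    innodb_autoinc_lock_mode = 2
```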

Is it recommended to deploy MySQL database in kubernetes with one pod or more pods in production?

What are the advantages and disadvantages of deploying a MySQL database in Kubernetes in production?
Should we have only one pod with a persistent volume?
Is there any chance of issues with data reads and writes when we use multiple MySQL pods in production?
Having one database pod in production would lead to a single point of failure. If you consider running databases in Kubernetes, you need to deploy the database with high availability, using multiple replicas and automatic failover. It is also good to add database backup and restore features to the pod image, in case the database needs to be restored from an old backup copy.
Consider the Helm charts below for MySQL deployment on a Kubernetes container platform:
https://github.com/bitnami/charts/tree/master/bitnami/mysql
https://github.com/bitnami/charts/tree/master/bitnami/mariadb
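
With those charts, replication mode is selected through chart values. A minimal sketch of a values.yaml for the bitnami/mysql chart follows; parameter names can differ between chart versions, so check the chart's README before relying on them:

```yaml
# Hypothetical values.yaml for bitnami/mysql: one primary, two replicas.
architecture: replication   # instead of the default standalone
auth:
  rootPassword: change-me
secondary:
  replicaCount: 2
primary:
  persistence:
    size: 8Gi
```

This would be applied with something like `helm install my-db bitnami/mysql -f values.yaml` (after adding the Bitnami chart repository).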

kubernetes clustering architecture for zero down time

From what I have found, the best way to have zero downtime, even when one datacenter goes down, is to run Kubernetes across at least two servers in two datacenters.
I want to use servers in Iran, and I've heard the infrastructure there has low performance.
The question is: if I want master-master replication for MySQL and one server fails, how can I re-sync the repaired server in the Kubernetes cluster?
K8s is the platform; it doesn't change how MySQL HA works. For example, if you have dedicated servers for MySQL, these servers become "pods" in K8s. What you need to do at the MySQL level when any of the servers is gone, for whatever reason, is the same as what you need to do when you run it as a pod. In fact, K8s helps you by automatically starting a new pod, whereas in the former case you would need to provision a new physical server, and the time difference is obvious. You will normally run a script to re-establish the HA; the same applies to K8s, where you can run the recovery script in an init container before the actual MySQL server container starts.
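
A sketch of that init-container pattern, where the recovery script, image, and ConfigMap names are placeholders:

```yaml
# Hypothetical pod: an init container runs a recovery script to
# re-establish replication before the actual MySQL server starts.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-recovery
spec:
  initContainers:
  - name: ha-recovery
    image: mysql:8.0                   # placeholder image
    command: ["bash", "/scripts/reestablish-replication.sh"]  # hypothetical script
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
    - name: scripts
      mountPath: /scripts
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: change-me
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data            # hypothetical claim
  - name: scripts
    configMap:
      name: recovery-scripts           # hypothetical ConfigMap with the script
```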

Trying to create two MySQL pods in kubernetes with same volume for high availability

I am trying to deploy two MySQL pods with the same PVC, but I get a CrashLoopBackOff state when I create the second pod, with this error in the logs: "InnoDB: check that you do not already have another mysqld process using the same InnoDB log files". How can I resolve this error?
There are different options to achieve high availability. If you are running Kubernetes on infrastructure that can provision the volume to different nodes (e.g. in the cloud) and your pod/node crashes, Kubernetes will restart the database on a different node with the same volume. Aside from a short downtime, you will have the database back up and running relatively quickly.
The volume will be mounted to a single running MySQL pod to prevent data corruption from concurrent access. (This is what MySQL notices in your scenario as well, since it is not designed to use shared storage as an HA solution.)
If you need more, you can use the built-in replication of MySQL to create a MySQL 'cluster' that keeps working even if one node/pod fails. Each instance of the MySQL cluster has its own individual volume in that case. Look at the Kubernetes stateful set example for this scenario: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
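
The key difference from the shared-PVC attempt is that a StatefulSet gives every replica its own volume through volumeClaimTemplates. A stripped-down sketch follows; the linked tutorial adds the actual replication configuration on top of this:

```yaml
# Minimal StatefulSet sketch: each replica gets its own PVC, so no two
# mysqld processes ever touch the same InnoDB files. Replication setup
# from the linked tutorial is omitted here.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql        # assumes a matching headless Service
  replicas: 3
  selector:
    matchLabels: {app: mysql}
  template:
    metadata:
      labels: {app: mysql}
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: change-me
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]   # one pod per volume
      resources:
        requests:
          storage: 5Gi
```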

OpenShift MySQL persistent storage won't mount on PHP

I have a pod that runs PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set mysql as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container gets stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect MySQL storage to a pod?
An EBS volume can be mounted on only one node at a time in an OpenShift cluster. With PHP and MySQL as separate applications, they can land on different nodes, and as a result you can't mount the persistent volume for both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to run as separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate and not Rolling: Rolling creates a new instance while the old one still exists, and the same issue arises because the new and old pods could be on different nodes.
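
A sketch of that layout: both containers in one pod (so they always land on the same node), only MySQL mounting the volume, and the Recreate strategy so the old pod releases the EBS volume before the new one starts. Images and the claim name are placeholders:

```yaml
# Hypothetical Deployment: PHP and MySQL as two containers of one pod.
# PHP reaches MySQL on 127.0.0.1 inside the pod, so only the MySQL
# container needs the EBS-backed volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-mysql
spec:
  replicas: 1
  strategy:
    type: Recreate                 # never two pods holding the volume at once
  selector:
    matchLabels: {app: php-mysql}
  template:
    metadata:
      labels: {app: php-mysql}
    spec:
      containers:
      - name: php
        image: php:8.2-apache      # placeholder image
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: change-me
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pvc     # hypothetical EBS-backed claim
```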

Autoscaling mysql on ec2

I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that, and how to shift load onto a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Have the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL for the first parts). Then write your own recipe that grabs a copy of your slave DB (perhaps from a recent snapshot of the EBS volume of one of your other slave servers) and uses it to create a new EBS volume on the new server. Then it's just a matter of making your recipe configure the server as a slave and start replication so it catches up to the master.