I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that, and how to transfer load onto a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL to handle the first parts). Then write your own recipe to grab a copy of your slave DB (perhaps from a recent snapshot of the EBS volume of one of your other slave servers) and use that to create a new EBS volume on your new server. After that it's just a matter of making your recipe configure the server as a slave and start replication so it catches up to the master.
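Very roughly, the moving parts the recipe would automate might look like the shell sketch below. All IDs, hostnames and credentials are placeholder assumptions, and it presumes MySQL is already installed by the earlier part of the role.

```sh
#!/bin/bash
# Sketch only: build a new slave's data volume from a recent snapshot,
# attach it, then start replication against the master.

# 1. Create a fresh EBS volume from a snapshot of an existing slave's data volume.
VOL_ID=$(aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1a \
  --query VolumeId --output text)

# 2. Attach it to the new instance and mount it where MySQL keeps its datadir
#    (mysqld must not be running while the datadir is swapped).
systemctl stop mysql
aws ec2 attach-volume --volume-id "$VOL_ID" \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf
aws ec2 wait volume-in-use --volume-ids "$VOL_ID"
mount /dev/xvdf /var/lib/mysql
systemctl start mysql

# 3. Point the new server at the master and start replication so it catches up.
mysql -uroot -p"$ROOT_PW" <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='master.example.internal',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=4;
START SLAVE;
SQL
```

The binlog file and position have to correspond to the point at which the snapshot was taken; capturing those coordinates consistently is the fiddly part of the recipe.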
From my understanding, AWS RDS facilitates backups for a MySQL database, but it is not cheap.
Wouldn't using a Docker image for MySQL save more in terms of cost? We only need to pull the image from Docker Hub and use it for free (e.g. create an instance and run the container).
Is there another reason to use RDS other than it facilitating backups for the database?
I'll list several features of RDS which may warrant using it over a self-managed MySQL Docker container on an EC2 instance or ECS:
RDS is a managed service, so all OS updates and MySQL patches are handled by AWS and you don't have to worry about them.
RDS supports storage auto-scaling - you can start with a small database, and RDS will extend the storage automatically as needed.
Point-in-time recovery allowing you to "rewind" your recent db changes.
Read replicas - you can create up to 5 read replicas of your database to off-load read intensive applications from your primary db instance.
Cross-region read replica - you can have your replica in a different region, which is good for disaster recovery (e.g. an entire AWS region going down).
Automated and manual backups, including backups to a different region.
IAM authentication to your db instead of regular username/password.
Multi-AZ - RDS can keep a stand-by replica of your primary database instance in a different Availability Zone, for quick recovery if the primary fails.
CloudWatch integrated db metrics and logs.
RDS event notifications allow for straightforward automation, e.g. invoking a Lambda function automatically for every backup, or when something fails.
Easier integration with other services, e.g. use of RDS Proxy in Lambda functions.
All these and other features make RDS much more expensive than hosting a self-managed MySQL Docker container. But if MySQL in a Docker container meets all your requirements, then there is no need to use RDS. You can always start with Docker, and if your data and requirements grow, you can migrate to RDS.
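For a sense of the operational difference, here is a rough sketch of both starting points; the identifiers, passwords and instance sizes are placeholder assumptions, not recommendations.

```sh
# Self-managed: run the official MySQL image on a host you maintain yourself.
# Backups, patching, failover and monitoring are then your responsibility.
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -v /data/mysql:/var/lib/mysql \
  -p 3306:3306 \
  mysql:8.0

# Managed: create an RDS MySQL instance; AWS handles patching, backups,
# Multi-AZ failover and monitoring, at a higher price.
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password change-me \
  --allocated-storage 20 \
  --backup-retention-period 7
```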
I want to start two MariaDB pods with the same persistent storage; at any point in time I should be able to access both instances, and the data should stay in sync between them.
I am trying to start two MariaDB instances using the same persistent volume in Kubernetes. I am able to start both instances. I am performing the steps below:
Creating a persistent volume
Creating a persistent volume claim
Starting mariadb-instance-1 using that claim name.
Starting mariadb-instance-2 using the same storage claim name.
Creating two services, one for each instance, so they can be accessed from outside.
I am able to access instance-1, but when I try to access instance-2 it gives me an error: MySQL Error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'.
Please find the attached dockerfiles.
Any help will be appreciated.
Please find below the Git repo with the DB and storage YAML files that I used to create the deployment:
https://github.com/chandan493/db-as-docker
You cannot run two MariaDB engines on the same storage, and if I understood you correctly, that is what you expected. Even if you mounted an RWX volume on two pods, pointing /var/lib/mysql of the containers in two separate MariaDB pods at the same place would result in a conflict between the database engines. For MariaDB clustering, look up MariaDB Galera - an almost-fully-synchronous replication solution for MariaDB. But you'll need three database engines running for it to make sense.
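As a rough illustration (the host name and pod naming are assumptions): with Galera, each node keeps its own PVC and data directory, and synchronization happens over the network via the wsrep replication protocol rather than through shared storage. Once the cluster is up you can verify that all nodes have joined:

```sh
# Each Galera node has its own volume; sync is done over the network, not by
# sharing /var/lib/mysql. Check membership from any node:
mysql -h mariadb-galera-0 -uroot -p"$ROOT_PW" \
  -e "SHOW STATUS LIKE 'wsrep_cluster_size';"   # expect 3 for a three-node cluster
```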
I have a pod that runs PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set mysql as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container gets stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect mysql storage to a pod?
An EBS volume can only be mounted on one node at a time in an OpenShift cluster. You have PHP and MySQL as separate applications that can land on different nodes, so you can't mount the persistent volume against both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to run in separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate and not Rolling, as a rolling deployment creates a new instance while the old one still exists, and the same issue arises because the new and old pods could land on different nodes.
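If you do go the single-pod route, switching the strategy can be as simple as patching the DeploymentConfig; the dc name php here is an assumption for illustration.

```sh
# Switch the deployment strategy from Rolling to Recreate so the old pod is
# torn down (and the EBS volume detached) before the new pod is started.
oc patch dc/php -p '{"spec":{"strategy":{"type":"Recreate"}}}'
```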
I currently have a working MySQL master and slave with asynchronous replication set up on a Kubernetes cluster.
I'm trying to plan for a contingency: if the master goes down, the slave will pick up the slack, and so on. I figured the first step would be to review the Dockerfiles and the scripts that define the ENTRYPOINT within them.
When I kubectl get svc, this is the information I get about the services.
This tells me on the cluster, the master has an IP 10.0.156.209.
Now when I looked through the helper script docker-entrypoint.sh for the MySQL slave Docker image, I noticed this line which helps set up the master-slave scenario (I trimmed the line just to highlight this):
When I shell into the slave pod on Kubernetes, the environment variable $MYSQL_MASTER_SERVICE_HOST is set to 10.0.156.209.
My question: How did the MySQL slave pod know to use the master's cluster IP as the value for $MYSQL_MASTER_SERVICE_HOST? Is this a Kubernetes thing or a SQL thing?
Thank you :)
These environment variables are created automatically by Kubernetes for each Service. See: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service.
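For example, for a Service named mysql-master, the kubelet injects variables following the <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT pattern, which you can confirm from inside the slave pod (the pod name below is a placeholder and the port value is illustrative):

```sh
# Inspect the Service environment variables injected into the slave pod.
kubectl exec mysql-slave-0 -- env | grep MYSQL_MASTER_SERVICE
# MYSQL_MASTER_SERVICE_HOST=10.0.156.209
# MYSQL_MASTER_SERVICE_PORT=3306
```

Note that these variables are only populated for Services that already existed when the pod was created; the Service's cluster DNS name doesn't have that restriction.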
On another note, MySQL on Kubernetes is a complex topic. You probably don't want to roll your own solution. Have you looked at Vitess?
How can I copy the content of a specific table (or the table as is) from my local database to a database instance hosted in the cloud, let's say Amazon RDS?
Note: it has to be done once every hour.
EDIT:
Other I/O operations on the local database should not be suspended (e.g. no read locks).
You can set your local database server to be a master to the Amazon RDS instance, which means the Amazon RDS instance becomes a slave in this setup. This is possible, as described in the AWS documentation here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.External.Repl.html
You can also configure the slave to replicate only a specific table in the database, and to apply changes after a specified interval of time if needed.
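A rough sketch of the RDS side, following that documentation; the host names, credentials and binlog coordinates below are placeholders.

```sh
# On the RDS instance, point replication at the local (external) master and start it.
mysql -h mydb.example.us-east-1.rds.amazonaws.com -uadmin -p <<'SQL'
CALL mysql.rds_set_external_master(
  'my-local-master.example.com', 3306,
  'repl_user', 'repl_password',
  'mysql-bin.000042', 4, 0);
CALL mysql.rds_start_replication;
SQL
```

Restricting replication to a single table would be done with a replicate-do-table replication filter, which on RDS is set through the instance's DB parameter group rather than in my.cnf.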