We currently use Microsoft Azure to host a MariaDB 10.1 Galera Cluster on two VMs behind a Load Balancer. This works well for our normal operations, but we would like to use Azure's autoscaling to spin up new Galera Cluster nodes when necessary. We occasionally see unscheduled load spikes that are too much even for the current cluster. It would be great to scale up as needed and then scale back down when the load subsides.
Given that Galera is pre-configured with specific IP addresses, is there any way to have the newly spun-up instances join the Load Balancer and the Galera Cluster?
How does OpenShift scale when using EBS for persistent storage? How does OpenShift map users to EBS volumes? Since it's infeasible to allocate one EBS volume to each user, how does OpenShift handle this in the backend using Kubernetes?
EBS volumes can only be mounted on a single node in a cluster at a time. This means you cannot scale an application that uses one beyond a single replica. Further, an application using an EBS volume cannot use the 'Rolling' deployment strategy, as that would require two replicas to exist while the new deployment is occurring. The deployment strategy therefore needs to be set to 'Recreate'.
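As a minimal sketch, those two restrictions show up directly in the deployment configuration; the names `myapp` and `my-ebs-claim` here are placeholders, not anything from the question:

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 1          # an EBS volume can be attached to only one node at a time
  strategy:
    type: Recreate     # not Rolling, which would briefly need two replicas
  template:
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-ebs-claim   # claim assumed to be bound to an EBS-backed volume
```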
Subject to those restrictions on a deployed application that has claimed an EBS-backed volume, there are no problems with using EBS volumes as an underlying storage type. Kubernetes will quite happily map the volume into the pod for your application. If that pod dies and gets started on a different node, Kubernetes will mount the volume into the pod on the new node instead, so that your storage follows the application.
If you give up a volume claim, its contents are wiped and it is returned to the pool of available volumes. A subsequent claim by you or a different user can then get that volume and it would be applied to the pod for the new application.
This is all handled automatically and works without problems. It is a bit hard to tell exactly what you are asking, but hopefully this gives you a better picture.
I'm a newbie at Kubernetes and I'm having trouble understanding how I can run persistent pods (Cassandra or MySQL ones) on Ubuntu servers.
Correct me if I'm wrong: Kubernetes can scale pods up or down when it sees that we need more CPU, but we are not dealing with static code here, we are dealing with data that lives on other nodes. So what will the pod do when it receives a request from the balancer? Also, Kubernetes has the power to destroy nodes when it sees that traffic has dropped, so how can we avoid losing data and disturbing the environment?
You should use volumes to map a directory in the container to persistent disks on the host or to other storage.
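For instance, a pod spec can declare a volume backed by a persistent volume claim and mount it at the database's data directory, so the data outlives any individual pod. This is only a sketch; `mysql-pvc` is an assumed, pre-created claim, not something from the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql   # MySQL's data directory, kept on persistent storage
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc        # assumed claim bound to a persistent disk
```

With this in place, if the pod is rescheduled onto another node, the volume is attached there and the data follows it.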
I am running a Couchbase cluster (v2.1), and I now have a Couchbase cluster (v4.0) provisioned. I want to transfer data from the 2.1 cluster to the 4.0 cluster. Can I simply use XDCR through the web console to do that, i.e., replicate the data from the v2.1 cluster to the v4.0 cluster?
Is there any risk that I might lose the data in the v2.1 cluster?
Thanks for the hints.
Yes, you can simply use XDCR to replicate the data to the new cluster. It is robust and is designed to replicate data safely. Note that XDCR consumes some resources, so make sure your source cluster has enough CPU and memory headroom. Couchbase best practices recommend approximately one core per replication stream and allocating at most 80% of RAM to Couchbase.
I am very new to cloud computing, and I have never worked with MySQL beyond a single instance. I am trying to understand how AWS RDS read replicas work with my application. For example, say I have one master and two read replicas, and from my application server I send this query to AWS:
SELECT * FROM users where username = 'bob';
How does this work now? Do I need to add logic to my code to choose a certain read replica, or does AWS automatically reroute the request?
Amazon does not currently provide any sort of load balancing or other traffic distribution across RDS servers. When you send queries to the primary RDS endpoint, 100% of that traffic goes to the primary RDS server. You would have to architect your system to open connections to each server and distribute the queries across the different database servers.
To do this in a way that is transparent to your application, you could set up an HAProxy instance between your application and the database to manage the traffic distribution.
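As a rough sketch of what that HAProxy setup might look like, the fragment below pools the two read replicas behind a local port; the endpoint hostnames and the `haproxy_check` user are placeholders, not real values:

```
listen mysql-readpool
    bind 127.0.0.1:3307
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check
    server replica1 mydb-replica-1.example.us-east-1.rds.amazonaws.com:3306 check
    server replica2 mydb-replica-2.example.us-east-1.rds.amazonaws.com:3306 check
```

The application would then send read queries to `127.0.0.1:3307` and writes directly to the primary RDS endpoint.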
Use of Elastic Load Balancers to distribute RDS traffic is an often requested feature, but Amazon has given no indication that they are working on this feature at this time.
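If you handle the distribution in application code instead, the core logic can be as simple as round-robin over the replica endpoints for reads while keeping writes on the primary. A minimal sketch, with made-up endpoint hostnames; a real version would pass the chosen endpoint to your MySQL driver and should account for replication lag (reads from replicas can be slightly stale):

```python
import itertools

# Hypothetical endpoints -- substitute your actual RDS hostnames.
PRIMARY = "mydb.example.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.example.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.example.us-east-1.rds.amazonaws.com",
]

_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query: str) -> str:
    """Route SELECTs round-robin across the replicas; send everything else to the primary."""
    if query.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return PRIMARY
```

For example, `endpoint_for("SELECT * FROM users WHERE username = 'bob'")` returns a replica endpoint, while an `INSERT` goes to the primary.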
I am running an n1-standard-1 (1 vCPU, 3.75 GB memory) Compute Engine instance. In my Android app, around 80 users are online right now, CPU utilization of the instance is at 99%, and my app has become less responsive. Kindly suggest a workaround. If I need to upgrade, can I do that with the same instance, or does a new instance need to be created?
Since your app is already running and users are connecting to it, you don't want to do the following process, as it involves downtime:
shut down the VM instance, keeping the boot disk and other disks
boot a more powerful instance, using the boot disk from step (1)
attach and mount any additional disks, if applicable
Instead, you might want to do the following:
create an additional VM instance with similar software/configuration
create a load balancer and add both the original and new VM to it as a backend
change your DNS name to point to the load balancer IP instead of the original VM instance
Now, your users will be spread across the VMs behind the load balancer, and you can add more VMs if your traffic increases.
You did not describe your application in detail, so it's unclear whether each VM has local state (e.g., runs a database) or whether there's a database running externally. You will still need to figure out how to manage stateful systems, such as a database or user-uploaded data, across all the VM instances, which is hard to advise on given the little information in your question.