We are currently using 2 Nodes, but we may need more in the future.
The StatefulSet is mariadb-galera, and its replica count is currently 2.
When we add a new Node we want the replica count to be 3; if we no longer need a Node and delete it (or any other Node), we want it to go back to 2.
In short, if we have 3 Nodes we want 3 replicas, one on each Node.
I could use Pod Topology Spread Constraints, but then we would end up with a bunch of "not scheduled" pods.
Is there a way to adapt the number of replicas automatically, every time a Node is added or removed?
When we add a new Node we want the replica count to be 3; if we no longer need a Node and delete it (or any other Node), we want it to go back to 2.
I would recommend doing it the other way around: manage the replicas of your container workload, and let the number of nodes be adjusted to match.
See e.g. the Cluster Autoscaler for how this can be done; the details depend on which cloud provider or environment your cluster is running on.
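For example, if your cluster happens to run on GKE, node autoscaling can be enabled on a node pool roughly like this (cluster, pool, and zone names are placeholders):

gcloud container clusters update my-cluster \
  --zone us-central1-b \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 3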
It is also important to specify your CPU and memory requests so that each replica occupies (most of) a whole node; see the sketch below.
For MariaDB and similar workloads, you should use a StatefulSet, not a DaemonSet.
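As a minimal sketch of such requests, assuming (hypothetically) nodes with 4 CPUs and 16Gi of memory and leaving some headroom for system daemons, the relevant excerpt of the StatefulSet's pod template could look like:

spec:
  template:
    spec:
      containers:
        - name: mariadb-galera
          image: mariadb:10.6          # placeholder image/tag
          resources:
            requests:
              cpu: "3500m"             # just under the whole (hypothetical) 4-CPU node
              memory: 14Gi
            limits:
              cpu: "3500m"
              memory: 14Gi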
You could use a DaemonSet (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), which will ensure there is one pod per node; see the sketch at the end of this answer.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Also, it is not advised to run a database in anything other than a StatefulSet, because of the stable pod identity that StatefulSets provide.
Given all the database administration involved, it is advisable to use a cloud provider's managed database; managing it yourself, especially inside the cluster, will lead to multiple issues.
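For reference, a minimal DaemonSet manifest looks roughly like the sketch below (names and image are placeholders), although, as noted above, a StatefulSet is the better fit for a database:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset            # placeholder name
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.25          # placeholder; one copy of this pod runs on every node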
Related
I can see that after creating an Aurora MySQL cluster I can then "Add a Reader" to the DB cluster. But isn't there a way to create a specified number of replicas in the first place?
The scenarios that are easiest to understand are a cluster with a single DB instance (e.g. a developer system where availability isn't crucial) and a production cluster (where you want 2 instances at a minimum for high availability). The choice of 1 or 2 instances on initial cluster creation is controlled by the "Multi-AZ" option in the Create Cluster dialog. Selecting Multi-AZ gets you a writer instance and a reader instance. Having >1 reader instance is more for scalability reasons, and involves other considerations (promotion tiers, do all the instances get the same instance class and parameter group, etc.). I presume that bundling all those choices into the console dialog would clutter up the interface and make it easy for people to overprovision by accident.
What I normally do is automate the creation of 3-, 4-, etc. instance clusters using the AWS CLI commands. I create the cluster and the first (writer) instance, wait for the first instance to become available, then create all the reader instances. If you kick off creating all the instances at once, it's a race condition - whichever one finishes first becomes the writer.
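A rough sketch of that sequence with the AWS CLI (identifiers, engine, and instance class are placeholders):

# Create the cluster and the first (writer) instance
aws rds create-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password 'REPLACE_ME'
aws rds create-db-instance \
  --db-instance-identifier my-aurora-writer \
  --db-cluster-identifier my-aurora-cluster \
  --db-instance-class db.r5.large \
  --engine aurora-mysql

# Wait for the writer to become available before adding readers,
# otherwise whichever instance finishes first becomes the writer
aws rds wait db-instance-available --db-instance-identifier my-aurora-writer

# Add the reader instances
for i in 1 2; do
  aws rds create-db-instance \
    --db-instance-identifier "my-aurora-reader-$i" \
    --db-cluster-identifier my-aurora-cluster \
    --db-instance-class db.r5.large \
    --engine aurora-mysql
done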
I have 2 Couchbase nodes, with 3 ephemeral buckets. The buckets are not replicated.
Let's name the nodes A and B. Now I want to keep node B and remove node A.
Our client services are configured with the IP of node B, which is why I want to remove node A.
Can I remove node A directly from the Couchbase console and perform a rebalance? Am I going to lose data?
Any help will be appreciated.
I just tried this locally:
I created an ephemeral bucket with 0 replicas on a 2-node cluster.
I put 6 total documents in the bucket.
I removed one node.
I rebalanced the cluster.
After the rebalance was complete, I still had 6 documents in the ephemeral bucket.
So it appears that you will NOT lose data. HOWEVER, I would highly recommend taking advantage of the distributed nature of Couchbase and turning on replication in order to get high availability (in case something goes wrong with one of the nodes that you didn't plan for).
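If you do turn on replication, a rough sketch with couchbase-cli (bucket name and credentials are placeholders, and exact flags vary slightly by server version), followed by a rebalance so the replicas get built:

couchbase-cli bucket-edit -c localhost:8091 -u Administrator -p password \
  --bucket my-ephemeral-bucket --bucket-replica 1
couchbase-cli rebalance -c localhost:8091 -u Administrator -p password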
I've set up a Couchbase cluster with 2 nodes containing 300k docs across 4 buckets. The replicas option is forced to 1, as there are only 2 machines.
But the documents are split, half on one node and half on the other. I need a copy of each document on both nodes, so that if a node goes down the other one can still supply all the data to my app.
Is there a setting I missed in creating the cluster?
Can I still set the cluster to replicate all documents?
I hope someone can help.
thanks
PS: I'm using Couchbase Community 4.5.
UPDATE:
I've added screenshots of the cluster web interface and cbstats output:
The following is the state with one node only:
Next, the state with both nodes up:
Then the cbstats results on both nodes, when both are up and running:
As you can see, with only one node only half of the items are displayed. Does that mean the other half resides as replicas but is not shown?
Can I still run my app consistently with only one node?
UPDATE:
I had to trigger fail-over manually to see the replicas become active on the remaining node. With just two nodes in the cluster, auto fail-over is disabled!
Couchbase Server will partition or shard the documents across the two nodes, as you observed. It will also place replicas on those nodes, based on your one-replica configuration.
To access a replica, you must use one of the Client SDKs.
For example, this Java code will attempt to retrieve a replica (getFromReplica("id", ReplicaMode.ALL)) if the active document retrieval fails (get("id")).
bucket.async()
    .get("id")                                                                 // try the active copy of the document first
    .onErrorResumeNext(bucket.async().getFromReplica("id", ReplicaMode.ALL))   // on failure, fall back to the replicas
    .subscribe();
The ReplicaMode.ALL tells Couchbase to try all nodes with replicas and the active node.
So what was happening was that, with only two nodes in the cluster, auto fail-over did not kick in automatically, as specified here:
https://developer.couchbase.com/documentation/server/current/clustersetup/automatic-failover.html
This means the data replicas were not activated on the remaining node unless fail-over was triggered manually.
The best thing is to have more than TWO nodes in the cluster before going into production.
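For the record, the manual fail-over I clicked in the console can also be triggered from the command line; a rough sketch with couchbase-cli (host names are placeholders, and --force performs a hard fail-over):

couchbase-cli failover -c n2.example.com:8091 -u Administrator -p password \
  --server-failover n1.example.com:8091 --force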
To be honest, I should have read the documentation very carefully before asking any questions.
Thanks Jeff Kurtz for your help, you pushed me towards the solution (understanding how the Couchbase replica policy works).
I have a Kubernetes compute cluster running on GCE and am reasonably happy so far. I know that if I create a Kubernetes cluster, I see the nodes as VM instances and the cluster as an instance group. I would like to do it the other way around: create the instances/group myself and make a Kubernetes cluster out of them, so that they can be managed by Kubernetes. The reason I want to do this is to try to make the nodes preemptible, which might better fit my workload.
So the question is: how do I build a Kubernetes cluster with preemptible nodes? Right now I can do one or the other, but not both together.
There is a patch out for review at the moment (#12384) that adds a configuration option to mark the nodes in the instance group as preemptible. If you are willing to build from head, this should be available as a configuration option in the next couple of days. In the meantime, you can see from the patch how easy it is to modify the GCE startup scripts to make your VMs preemptible.
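On the GCE side, the underlying mechanism is just a flag on the VM; for example, you can create a preemptible instance by hand with gcloud (instance name, zone, and machine type are placeholders):

gcloud compute instances create my-k8s-node-1 \
  --zone us-central1-b \
  --machine-type n1-standard-2 \
  --preemptible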
Let's assume I have two Couchbase clusters with XDCR set up and the following nodes:
n1.cluster1.com
n2.cluster1.com
n3.cluster1.com
and
n1.cluster2.com
n2.cluster2.com
n3.cluster2.com
What is the preferable node configuration for CouchbaseClient?
From http://docs.couchbase.com/couchbase-sdk-java-1.4/#hello-couchbase:
The CouchbaseClient class accepts a list of URIs that point to nodes in the cluster. If your cluster has more than one node, Couchbase strongly recommends that you add at least two or three URIs to the list. The list does not have to contain all nodes in the cluster, but you do need to provide a few nodes so that during the initial connection phase your client can connect to the cluster even if one or more nodes fail.
After the initial connection, the client automatically fetches cluster configuration and keeps it up-to-date, even when the cluster topology changes. This means that you do not need to change your application configuration at all when you add nodes to your cluster or when nodes fail. Also make sure you use a URI in this format: http://[YOUR-NODE]:8091/pools. If you provide only the IP address, your client will fail to connect. We call this initial URI the bootstrap URI.
Does it mean I should add at least two or three nodes from each cluster, or two or three nodes from the whole system?
Each CouchbaseClient object will only connect to one cluster. The list of node URIs should all belong to the same cluster - you'll likely get strange behaviour if you list nodes from different clusters.
If your application wants to connect to two different clusters (irrespective of whether they have a replication stream between them or not), then you want to create two CouchbaseClient objects, one connected to each cluster.
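As a sketch with the 1.4-era Java SDK (bucket names and passwords are placeholders), that looks roughly like this:

import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class TwoClusters {
    public static void main(String[] args) throws Exception {
        // Bootstrap URIs: a few nodes from each cluster, never mixed together
        List<URI> cluster1 = Arrays.asList(
                URI.create("http://n1.cluster1.com:8091/pools"),
                URI.create("http://n2.cluster1.com:8091/pools"));
        List<URI> cluster2 = Arrays.asList(
                URI.create("http://n1.cluster2.com:8091/pools"),
                URI.create("http://n2.cluster2.com:8091/pools"));

        // One CouchbaseClient per cluster; each client only ever talks to its own cluster
        CouchbaseClient client1 = new CouchbaseClient(cluster1, "my-bucket", "");
        CouchbaseClient client2 = new CouchbaseClient(cluster2, "my-bucket", "");

        // ... use client1 and client2 independently ...

        client1.shutdown();
        client2.shutdown();
    }
}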
I recommend adding all nodes of the cluster to your client connect configuration. The reason is that if one or more of the nodes are down (e.g. planned shutdown, server crash, etc.), the client will still be able to connect to the cluster(s) when it restarts.
Note that the client only needs this list of bootstrap nodes at start-up; once it has communicated with the cluster, it keeps its own track of active/inactive cluster nodes.
In production I have one cluster of 3 nodes, and all my clients have all nodes in their connect configuration, e.g.
http://my-node1:8091/pools,http://my-node2:8091/pools,http://my-node3:8091/pools
Regarding multiple clusters, I'm not sure it will work with the same client instance, unless the Couchbase client is smart enough to distinguish multiple clusters and keep track of their nodes' health. Read the Couchbase installation guide.
I found in the documentation that if you are using Couchbase Moxi, it does support multiple clusters:
Moxi also supports proxying to multiple clusters from a single moxi
instance, where this was originally designed and implemented for
software-as-a-service purposes. Use a semicolon (';') to specify and
delimit more than one cluster:
-z "LISTEN_PORT=[CLUSTER_CONFIG][;LISTEN_PORT2=[CLUSTER_CONFIG2][]]"