I intend to set up a Couchbase system with two clusters: the main cluster is active, and the other one is a backup (using XDCR to replicate). HAProxy sits in front of this Couchbase system to switch (manually) from the active cluster to the backup cluster when the active cluster goes down.
Before testing, I want to ask for some advice on this topology. Is there any problem with it? Can it run smoothly in a production environment?
I think I cannot use a vBucket-aware client in this topology. Because the client only knows about HAProxy, it cannot send requests directly to the Couchbase server that holds the vBucket for a specific document. Is that right?
From your scenario it sounds like overhead. Why would you keep a "standby" cluster as a backup?
Instead, you can have all four Couchbase Server instances as one cluster (each instance running on its own box), so you take full advantage of the vBucket architecture, which is managed natively. If one of the instances goes down, you will have no loss of data, since the enabled replication keeps a mirror copy on the other nodes.
We use this setup in production with no issues. From time to time we bring one of the instances down for maintenance, and the rest of the cluster still runs; it is completely transparent to the Couchbase clients, i.e. no downtime!
In my opinion, XDCR makes sense for geographically separated locations (so you keep one cluster in the Americas, another in EMEA, and so on). If all your instances are in the same location, then Couchbase cluster technology delivers high availability (HA) with failover support already built in.
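To make the point about vBucket awareness concrete, here is a minimal sketch of a client bootstrapping directly against a few nodes of a single cluster using the 1.x Java SDK's CouchbaseClient class; the hostnames and bucket name below are only placeholders:

    import com.couchbase.client.CouchbaseClient;
    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    public class DirectClusterConnect {
        public static void main(String[] args) throws Exception {
            // Bootstrap URIs for a few nodes of the single cluster; note the
            // /pools suffix required by the SDK. Hostnames are illustrative.
            List<URI> nodes = Arrays.asList(
                    URI.create("http://cb-node1:8091/pools"),
                    URI.create("http://cb-node2:8091/pools"));

            // The client fetches the vBucket map from the cluster and talks to
            // the node that owns each document directly -- no proxy in between.
            CouchbaseClient client = new CouchbaseClient(nodes, "default", "");
            client.set("greeting", 0, "Hello Couchbase").get();
            System.out.println(client.get("greeting"));
            client.shutdown();
        }
    }

With HAProxy in between you would lose exactly this direct key-to-node routing, which is why keeping the clients pointed at the cluster itself is the simpler option.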
I have a Kubernetes environment running multiple applications (services). Now I'm a bit confused about how to set up the MySQL database instance(s).
According to various sources, each microservice should have its own database. Should I create a single MySQL StatefulSet in HA mode running multiple databases, or should I deploy a separate MySQL instance for each application (service), each running one database?
My first thought would be the first option, since what else would HA be useful for? I would like to hear some different views on this.
Slightly subjective question, but here's what we have set up. Hopefully, that will help you build a case. I'm sure someone would have a different opinion, and that might be equally valid too:
We deploy about 70 microservices, each with its own database ("schema") and its own JDBC URL (defined via a service). Each microservice has its own endpoint and credentials that we do not share between microservices. So in effect, we have kept the design completely independent across the microservices as far as the schema is concerned.
Deployment-wise, however, we have opted to go with a single database instance for hosting all databases (or "schemas"). While technically we could deploy each database on its own database instance, we chose not to do it for a few main reasons:
Cost overhead: Running separate database instances for each microservice would add a lot of "fixed" costs. This may not be directly relevant to you if you are simply starting the database as a MySQL Docker container (we use a separate database service, such as RDS or Google Cloud SQL). But even in the case of MySQL as a Docker container, you might end up with a non-trivial cost if you run, for example, 70 separate containers, one per microservice.
Administration overhead: Given that databases are usually quite involved (disk space, IOPS, backup/archiving, purge, upgrades and other administration activities), having separate database instances -- or Docker container instances -- may take a significant toll on your admin or operations teams, especially if you have a large number of microservices.
Security: Databases are usually also critical when it comes to security, as the "truth" usually lives in the DB. Keeping encryption, TLS configuration and strength of credentials aside (as they should be of utmost importance regardless of your deployment model), security considerations, reviews, audits and logging will bring significant challenges if you have too many database instances.
Ease of development: Relatively less critical in the grand scheme of things, but significant nonetheless. Unless you are thinking of coming up with a different model for development (and thus breaking "dev-prod parity"), your developers may have a hard time figuring out the database endpoints for debugging, even if they only need that information once in a while.
So, my recommendation would be to go with a single database instance (Docker or otherwise), but keep the databases/schemas completely independent and inaccessible to any microservice but the "owner" microservice.
If you are deploying MySQL as Docker container(s), go with a StatefulSet for persistence. Define an external PVC so that you can always preserve the data, no matter what happens to your pods or even your cluster. Of course, if you run 'active-active', you will need to set up clustering between your nodes; we run it in 'active-passive' mode, so we keep the replica count at 1, given that we only use the MySQL Docker container alternative for our test environments, to save the cost of an external DBaaS service where it's not required.
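For illustration, here is roughly what the per-microservice wiring looks like in Java; the hostname, schema name, user and environment variable below are made up (in Kubernetes the JDBC URL would typically point at a Service and the credentials would come from a Secret):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class OrderServiceDataSource {
        // This service only knows its own schema, endpoint and credentials.
        private static final String JDBC_URL =
                "jdbc:mysql://mysql.databases.svc.cluster.local:3306/order_service";
        private static final String USER = "order_service";
        private static final String PASSWORD = System.getenv("ORDER_SERVICE_DB_PASSWORD");

        public static Connection open() throws SQLException {
            // Other services' schemas live on the same MySQL instance, but these
            // credentials cannot reach them.
            return DriverManager.getConnection(JDBC_URL, USER, PASSWORD);
        }
    }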
Let's assume I have two Couchbase clusters with XDCR set up, having the following nodes:
n1.cluster1.com
n2.cluster1.com
n3.cluster1.com
and
n1.cluster2.com
n2.cluster2.com
n3.cluster2.com
What is the preferable node configuration for CouchbaseClient?
From http://docs.couchbase.com/couchbase-sdk-java-1.4/#hello-couchbase:
The CouchbaseClient class accepts a list of URIs that point to nodes in the cluster. If your cluster has more than one node, Couchbase strongly recommends that you add at least two or three URIs to the list. The list does not have to contain all nodes in the cluster, but you do need to provide a few nodes so that during the initial connection phase your client can connect to the cluster even if one or more nodes fail.
After the initial connection, the client automatically fetches cluster configuration and keeps it up-to-date, even when the cluster topology changes. This means that you do not need to change your application configuration at all when you add nodes to your cluster or when nodes fail. Also make sure you use a URI in this format: http://[YOUR-NODE]:8091/pools. If you provide only the IP address, your client will fail to connect. We call this initial URI the bootstrap URI.
Does it mean I should add at least two or three nodes from each cluster? Or two or three nodes from the whole system?
Each CouchbaseClient object will only connect to one cluster. The list of node URIs should all belong to the same cluster - you'll likely get strange behaviour if you list nodes from different clusters.
If your application wants to connect to two different clusters (irrespective of whether they have a replication stream between them or not), then you want to create two CouchbaseClient objects, one connected to each cluster.
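For example, with the node names from the question and the 1.x Java SDK, a rough sketch would be (the bucket names and empty passwords are placeholders):

    import com.couchbase.client.CouchbaseClient;
    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    public class TwoClusterClients {
        public static void main(String[] args) throws Exception {
            // Bootstrap URIs per cluster -- never mix nodes from both clusters in one list.
            List<URI> cluster1 = Arrays.asList(
                    URI.create("http://n1.cluster1.com:8091/pools"),
                    URI.create("http://n2.cluster1.com:8091/pools"));
            List<URI> cluster2 = Arrays.asList(
                    URI.create("http://n1.cluster2.com:8091/pools"),
                    URI.create("http://n2.cluster2.com:8091/pools"));

            // One client per cluster.
            CouchbaseClient primary = new CouchbaseClient(cluster1, "default", "");
            CouchbaseClient backup = new CouchbaseClient(cluster2, "default", "");

            // ... use primary for normal traffic, backup for the XDCR target ...

            primary.shutdown();
            backup.shutdown();
        }
    }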
I recommend adding all nodes of the cluster to your client connect configuration. The reason is that if one or more of the nodes are down (e.g. planned shutdown, server crash, etc.), the client will still be able to connect to the cluster when it restarts.
Note that the client needs this list of connect nodes only at startup; once it has communicated with the cluster, it maintains its own view of active/inactive cluster nodes.
In production I have one cluster of 3 nodes, and all my clients have all nodes in the connect configuration, e.g.
http://my-node1:8091/pools,http://my-node2:8091/pools,http://my-node3:8091/pools
Regarding multiple clusters, I'm not sure it will work with the same client instance, unless a Couchbase client instance is smart enough to distinguish multiple clusters and keep track of their nodes' health. Read the Couchbase installation guide.
I found in the documentation that if you are using Couchbase Moxi, it does support multiple clusters:
Moxi also supports proxying to multiple clusters from a single moxi instance, where this was originally designed and implemented for software-as-a-service purposes. Use a semicolon (';') to specify and delimit more than one cluster:
-z "LISTEN_PORT=[CLUSTER_CONFIG][;LISTEN_PORT2=[CLUSTER_CONFIG2][]]"
I would like to run a Couchbase cluster on a hardware cluster that's not uniform. Some of the machines have 1 CPU core, while others have 16 cores.
Is there a way to configure the bucket size or request frequency so the larger servers can receive a larger percentage of the load?
What I'm looking for is something similar to the weighting in ketama, but for Couchbase.
You stated in a previous answer that:
Usually 1 small instance and 4-20 large ones. The small one basically only exists for cluster discovery.
What you should really do is connect to your Couchbase cluster through a reverse proxy (such as HAProxy) instead of through a constantly-up node. The reverse proxy will have all the potential nodes in its pool, constantly checking which nodes are really up, and dispatching connections to those nodes. As soon as a node goes down, the connection will be re-established to an alive node.
You can read more about this architecture in the Couchbase documentation.
No, there isn't a method that does what you want. Keys in Couchbase are run through the client's own hash function and mapped to vBuckets, and the vBuckets are mapped to servers. The Couchbase API doesn't let you manage this mapping. All you can do is determine, from the id of a document, which server owns that document.
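To give an idea of what that mapping looks like, here is a rough, illustrative sketch of how a vBucket-aware client locates the server for a key; the exact hash details live in the client library, and the vBucket map here is a stand-in for the configuration the client fetches from the cluster:

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    public class VBucketLocator {
        private static final int NUM_VBUCKETS = 1024; // the usual Couchbase default

        private final String[] servers;     // the nodes of the cluster
        private final int[][] vbucketMap;   // vbucketMap[vbId][0] = index of the active server

        public VBucketLocator(String[] servers, int[][] vbucketMap) {
            this.servers = servers;
            this.vbucketMap = vbucketMap;
        }

        // CRC32-based hash of the key, masked down to the number of vBuckets.
        static int vbucketId(String key) {
            CRC32 crc = new CRC32();
            crc.update(key.getBytes(StandardCharsets.UTF_8));
            return (int) ((crc.getValue() >> 16) & (NUM_VBUCKETS - 1));
        }

        // The only thing you can derive from a document id is which server owns it;
        // there is no knob to weight bigger servers more heavily.
        String serverForKey(String key) {
            return servers[vbucketMap[vbucketId(key)][0]];
        }
    }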
I am using Amazon RDS for my database services and want to use the read replica feature to distribute the traffic amongst my read replicas. I currently store the connection information for my database in a single config file. So my idea is that I could create a function that randomly picks from a list of my read-replica endpoints/addresses in my config file any time my application performs a read.
Is there a problem with this idea as long as I don't perform it on a write?
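As a sketch, the function I have in mind would look something like this (the class name and the way the endpoints are loaded are made up):

    import java.util.List;
    import java.util.concurrent.ThreadLocalRandom;

    public class ReadReplicaSelector {
        private final String writerEndpoint;        // the RDS master, used for all writes
        private final List<String> readerEndpoints; // read-replica endpoints from the config file

        public ReadReplicaSelector(String writerEndpoint, List<String> readerEndpoints) {
            this.writerEndpoint = writerEndpoint;
            this.readerEndpoints = readerEndpoints;
        }

        // Writes always go to the master.
        public String endpointForWrite() {
            return writerEndpoint;
        }

        // Reads pick a random replica, falling back to the master if none are configured.
        public String endpointForRead() {
            if (readerEndpoints.isEmpty()) {
                return writerEndpoint;
            }
            return readerEndpoints.get(ThreadLocalRandom.current().nextInt(readerEndpoints.size()));
        }
    }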
My guess is that if you have a service with enough traffic that you have multiple RDS read replicas you want to balance load across, then you also have multiple application servers in front of it operating behind a load balancer.
As such, you are probably better off having certain clusters of app server instances each pointing at a specific read replica. Perhaps you do this by availability zone.
The thought here is that your load balancer will then serve as the mechanism for properly distributing the incoming requests that ultimately lead to database reads. If you randomized the DB reads across different replicas, you could have unexpected spikes where too much traffic happens to be directed to one DB replica, causing latency spikes in your service.
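A rough sketch of that zone-based alternative, where each app server sticks to the replica in its own availability zone; the zone names, endpoints and environment variable here are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    public class ZoneAwareReplicaSelector {
        // Hypothetical mapping of availability zone -> read-replica endpoint;
        // in practice this would come from your deployment configuration.
        private static final Map<String, String> REPLICA_BY_ZONE = new HashMap<>();
        private static final String DEFAULT_REPLICA = "replica-a.example.rds.amazonaws.com";
        static {
            REPLICA_BY_ZONE.put("us-east-1a", "replica-a.example.rds.amazonaws.com");
            REPLICA_BY_ZONE.put("us-east-1b", "replica-b.example.rds.amazonaws.com");
        }

        // Each app server reads the zone it was launched in (e.g. injected as an
        // environment variable at deploy time) and always uses "its" replica.
        public static String replicaForThisServer() {
            String zone = System.getenv().getOrDefault("AVAILABILITY_ZONE", "");
            return REPLICA_BY_ZONE.getOrDefault(zone, DEFAULT_REPLICA);
        }
    }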
The biggest challenge is that there is no guarantee that the read replicas will be up-to-date with the master or with each other when updates are made. If you pick a different read-replica each time you do a read you could see some strangeness if one of the read replicas is behind: one out of N reads would get stale data, giving an inconsistent view of the system.
Choosing a random read replica per transaction or session might be easier to deal with from the consistency perspective.
I've got a Java web service backed by MySQL + EC2 + EBS. For data integrity I've looked into DRBD, MySQL Cluster, etc., but wonder if there isn't a simpler solution. I don't need high availability (I can handle downtime).
There are only a few operations whose data I need to preserve -- creating an account, changing password, purchase receipt. The majority of the data I can afford to recover from a stale backup.
What I am thinking is that I could pipe selected INSERT/UPDATE commands to storage (S3 or SimpleDB, for instance) and, when required (when the DB blows up), replay these commands from the point of the last backup. And wouldn't it be neat if this functionality were implemented in the JDBC driver itself?
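A minimal sketch of what I mean, assuming some hypothetical append-only journal backed by S3 or SimpleDB (the interface and class names are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Hypothetical interface over durable storage (e.g. one S3 put per statement).
    interface CriticalStatementLog {
        void append(String sql, Object... params);
    }

    public class JournalingDao {
        private final Connection connection;
        private final CriticalStatementLog journal;

        public JournalingDao(Connection connection, CriticalStatementLog journal) {
            this.connection = connection;
            this.journal = journal;
        }

        // Only the handful of "must not lose" operations go through here; everything
        // else uses the connection directly and relies on the stale backups.
        public void executeCritical(String sql, Object... params) throws SQLException {
            journal.append(sql, params); // journal first, so a crash afterwards is replayable
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    stmt.setObject(i + 1, params[i]);
                }
                stmt.executeUpdate();
            }
        }
    }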
Is this too silly to work, or am I missing another obvious and robust solution?
Have you looked into moving your MySQL into Amazon Web Services as well? You can use Amazon Relational Database Service (RDS). Also see MySQL Enterprise Support.
You always have a window where total loss of a server and associated file storage will result in some amount of lost data.
When I ran a modestly busy SaaS solution in AWS, I had a MySQL Master running on a large instance and a MySQL Slave running on a small instance in a different availability zone. The replication lag was typically no more than 2 seconds, though a surge in traffic could take that up to a minute or two.
If you can't afford to lose 5 minutes of data, I would suggest running a Master/Slave setup over rolling your own recovery mechanism. If you do roll your own, ensure the "stale" backups and the logged/journaled critical data are in a different availability zone. AWS has lost entire zones before.