Does MaxScale (with Galera) handle the non-Primary component/node condition automatically?

We are going to use MaxScale as a SQL proxy in front of our MariaDB database, with Galera cluster.
In a Galera cluster, when quorum is lost and a split-brain condition occurs, some nodes become non-Primary. According to the documentation, non-Primary nodes start rejecting the queries sent to them.
Does MaxScale handle this automatically and stop sending queries to non-Primary nodes until they become part of the Primary Component again?
One thing I have tested: if a node goes down, MaxScale handles that properly and stops sending queries to it. My question is, does it do the same for non-Primary nodes too? If not, how can this be handled?
PS: I am not actually able to test the non-Primary condition myself, which is why I am asking here. It would be great if somebody could also help me reproduce and test this situation myself.

Yes, the Galera monitor in MaxScale will handle split-brain situations. It uses the cluster UUID to detect which nodes are part of the cluster.
For more information, refer to the galeramon documentation.
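For illustration, a minimal sketch of what the relevant monitor section looks like in maxscale.cnf (server names, user, and password are placeholders, not from the original question):

```
# Hypothetical MaxScale Galera monitor section. galeramon only treats
# Synced cluster members as usable, so nodes that drop out of the
# Primary Component stop receiving queries from the routers.
[Galera-Monitor]
type=monitor
module=galeramon
servers=node1,node2,node3
user=maxscale_monitor
password=maxscale_pw
monitor_interval=2000
```

To reproduce a non-Primary state for testing, one documented trick is to isolate a node from the group at runtime with `SET GLOBAL wsrep_provider_options='gmcast.isolate=1';` (and `gmcast.isolate=0` to let it rejoin).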

Related

Are there *application*-driven reasons to prefer multi-primary topologies over clustering, or vice-versa?

I have an application that currently uses a single primary, and I'm looking to go multi-primary by either setting up a reciprocal multi-primary (just two primaries with auto_increment_increment and auto_increment_offset set appropriately) or Clustering-with-a-capital-C. The database is currently MariaDB 10.3, so the clustering would be Galera.
My understanding of multi-primary is that the application would likely require no changes: the application would connect to a single database (doesn't matter which one), and any transaction that needed to obtain any locks would do so locally, any auto-increment values necessary would be generated, and once a COMMIT occurs, that engine would complete the commit and the likelihood of failure-to-replicate to the other node would be very low.
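(For concreteness, the reciprocal auto-increment setup mentioned above amounts to something like the following sketch; the values are illustrative for two primaries:)

```sql
-- On primary A: generate ids 1, 3, 5, ...
SET GLOBAL auto_increment_increment = 2;  -- step by the number of primaries
SET GLOBAL auto_increment_offset    = 1;

-- On primary B: generate ids 2, 4, 6, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;
```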
But for Clustering, a COMMIT actually requires that the other node(s) are updated to ensure success, the likelihood of failure during COMMIT (as opposed to during some INSERT/UPDATE/DELETE) is much higher, and therefore the application would really require some automated retry logic to be built into it.
Is the above accurate, or am I overestimating the likelihood of COMMIT-failure in a Clustered deployment, or perhaps even underestimating the likelihood of COMMIT-failure in a multi-primary environment?
From what I've read, it seems that Galera Cluster is a little more graceful about handling nodes leaving and re-joining the cluster, and about adding new nodes. Is Galera Cluster really just multi-primary with the database engine handling all the finicky setup and management, or is there some major difference between the two?
Honestly, I'm more looking for reassurance that moving to Galera Cluster isn't going to end up being an enormous headache relative to the seemingly "easier" and "safer" move to multi-primary.
By "multi-primary", do you mean that each of the Galera nodes would be accepting writes? (In other contexts, "multi-primary" has a different meaning -- and only one Replica.)
One thing to be aware of: "Critical read".
For example, when a user posts something and it writes to one node, and then that user reads from a different node, he expects his post to show up. See wsrep_sync_wait.
(Elaborating on Bill's comment.) The COMMIT on the original write waits for each other node to say "yes, I can and will store that data", but a read on the other nodes may not immediately "see" the value. Using wsrep_sync_wait just before a SELECT makes sure the write is actually visible to the read.
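A minimal sketch of that pattern (the table and column names are made up for illustration):

```sql
-- Force a causality check: wait until this node has applied all
-- write-sets committed cluster-wide before running the read.
SET SESSION wsrep_sync_wait = 1;   -- bit 1 covers READ statements
SELECT body FROM posts WHERE user_id = 42;
SET SESSION wsrep_sync_wait = 0;   -- back to the default, no wait
```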

MySQL Replication: Question about a fallback-system

I want to set up a complete server (Apache, MySQL 5.7) as a fallback for a production server.
Synchronization at the file level using rsync and a cron job is already done.
The MySQL replication is currently the problem. More precisely: the choice of the right replication method.
Multi-primary group replication seemed to be the most suitable method so far.
In case of a longer production downtime, it is possible to switch to the fallback server quickly via a DNS change.
Write access to the database is then possible immediately, without adjustments.
So far so good. But if the fallback server fails, it becomes unreachable and the production server switches to read-only, since its group no longer has a quorum. This is of course a no-go.
I thought it might be possible using various replication variables: if the fallback server is unreachable for a certain time (~5 minutes), the production server should stop group replication and start a new group. This has to happen automatically to keep the read-only time reasonably low. When the fallback server comes back online, it would be added manually to the newly started group. But if I read the various forum posts and the documentation correctly, it is not possible that way. And running group replication with only two nodes is the wrong decision anyway.
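(For reference, the escape hatch the documentation does offer for a group that has lost its majority is manual, not automatic; a minimal sketch, with a hypothetical address:)

```sql
-- On the surviving member, force a new group membership consisting of
-- only itself. 'prod-server:33061' stands for that member's own
-- group_replication_local_address and is hypothetical here.
SET GLOBAL group_replication_force_members = 'prod-server:33061';

-- Once the group is unblocked, the variable must be cleared again.
SET GLOBAL group_replication_force_members = '';
```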
https://forums.mysql.com/read.php?177,657333,657343#msg-657343
Is master-slave replication the only method that can be considered for such a fallback system? https://dev.mysql.com/doc/refman/5.7/en/replication-solutions-switch.html
Or does group replication offer possibilities after all, if one reacts suitably to the quorum problem? Possibilities that I have overlooked so far.
Many thanks and best regards
Short Answer: You must have [at least] 3 nodes.
Long Answer:
Split brain with only two nodes:
Write only to the surviving node, but only if you can conclude that it really is the only surviving node; otherwise...
The network died and both Primaries are accepting writes. This leads to them disagreeing with each other, and you may have no clean way to repair the mess.
Go into read-only mode on the surviving node. (The only safe and sane approach.)
The problem is that the automated system cannot tell the difference between a dead Primary and a dead network.
So... You must have 3 nodes to safely avoid "split-brain" and have a good chance of an automated failover. This also implies that no two nodes should be in the same tornado path, flood range, volcano path, earthquake fault, etc.
You picked Group Replication (InnoDB Cluster). That is an excellent offering from MySQL. Galera with MariaDB is an equally good offering -- there are a lot of differences in the details, but it boils down to needing 3, preferably dispersed, nodes.
DNS changes take some time, due to the TTL. A proxy server may help with this.
Galera can run in a "Primary + Replicas" mode, but it also allows you to run with all nodes being read-write. This leads to a slightly different set of steps a client must take to stop writing to one node and start writing to another. There are proxies to help with this.
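If you go the Galera route, each node reports its own view of the component it belongs to, which is handy when rehearsing these failure scenarios; a minimal sketch:

```sql
-- 'Primary' means this node is in the Primary Component and accepts writes;
-- 'non-Primary' means it has lost quorum and will reject queries.
SHOW STATUS LIKE 'wsrep_cluster_status';
SHOW STATUS LIKE 'wsrep_cluster_size';   -- nodes visible in this component
```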
FailBack
Are you trying to always use a certain Primary except when it is down? Or can you accept letting any node be the 'current' Primary?
I think of "fallback" as simply a "failover" that goes back to the original Primary. That implies a second outage (possibly briefer). However, I understand geographic considerations. You may want your main Primary to be 'near' most of your customers.
I recommend using a Galera MySQL cluster with HAProxy as the load balancer and automatic failover solution. We have used it in production for a long time now and never had serious problems. The most important thing to consider is monitoring the replication sync status between nodes. Also, make sure your storage engine is InnoDB, because Galera doesn't work with MyISAM.
Check this link on how to set it up:
https://medium.com/platformer-blog/highly-available-mysql-with-galera-and-haproxy-e9b55b839fe0
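A minimal sketch of the HAProxy side, assuming the common clustercheck health-check script (not shown here) answers HTTP on port 9200 of each node; hostnames and addresses are placeholders:

```
listen galera
    bind *:3306
    mode tcp
    balance leastconn
    option httpchk            # probe the clustercheck HTTP endpoint below
    server node1 10.0.0.1:3306 check port 9200
    server node2 10.0.0.2:3306 check port 9200
    server node3 10.0.0.3:3306 check port 9200
```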
But in these kinds of situations the main problem is not the failover mechanism, because there are many solutions available out of the box. Rather, you have to check your read/write ratio and your transactional services and make sure replication delays won't affect them. Sometimes a vertically scalable solution with master-slave replication is more suitable for transaction-sensitive financial systems; it really depends on the service you're providing.

Heterogeneous Couchbase Cluster

I would like to run a couchbase cluster on a hardware cluster that's not uniform. Some of the machines have 1 CPU core, while the others have 16 cores.
Is there a way to configure the bucket size or request frequency so the larger servers can receive a larger percentage of the load?
What I'm looking for is something similar to the weighting in ketama, but for Couchbase.
You stated in a previous answer that:
Usually 1 small instance and 4-20 large ones. The small one basically only exists for cluster discovery.
What you should really do is connect to your Couchbase cluster through a reverse proxy (such as HAProxy) instead of a single always-up node. The reverse proxy has all the potential nodes in its pool, constantly checks which nodes are really up, and dispatches connections to those nodes. As soon as a node goes down, the connection will be re-established to a live node.
You can read more about this architecture description on the Couchbase documentation.
No, there isn't a method that does what you want. Couchbase runs each key through its own hash function to map it to a vBucket, and vBuckets are mapped to servers. The Couchbase API doesn't allow managing this mapping. All you can do is determine, from a document's id, which server owns that document.

MySQL cluster for dummies

So what's the idea behind a cluster?
You have multiple machines with the same copy of the DB where you spread the read/write? Is this correct?
How does this idea work? When I make a SELECT query, does the cluster analyze which server has fewer reads/writes and point my query to that server?
When should you start using a cluster? I know this is a tricky question, but maybe someone can give me an example, like 1 million visits and a 100-million-row DB.
1) Correct, with one nuance: no single data node holds a full copy of the cluster data, but every single bit of data is stored on at least two nodes.
2) Essentially correct. MySQL Cluster supports distributed transactions.
3) When vertical scaling is not possible anymore, and replication becomes impractical :)
As promised, some recommended readings:
Setting Up Multi-Master Circular Replication with MySQL (simple tutorial)
Circular Replication in MySQL (higher-level warnings about conflicts)
MySQL Cluster Multi-Computer How-To (step-by-step tutorial, it assumes multiple physical machines, but you can run your test with all processes running on the same machine by following these instructions)
The MySQL Performance Blog is a reference in this field
1 -> Your first point is correct, in a way, but if multiple machines simply shared the same data it would be replication rather than clustering.
In clustering, the data is divided among the various machines by horizontal partitioning: the division is based on rows, and the records are distributed among the machines by some algorithm.
The data is divided in such a way that each record gets a unique key, as in a key-value pair, and each machine has a unique machine id that determines which key-value pairs go to which machine.
Each cluster node consists of an individual MySQL server, its own share of the data, and a cluster manager, and there is data sharing between the cluster nodes so that all the data is available to every node at any time.
Retrieval can go through memcached servers for fast access, and there can also be a replication server for a given cluster to back up its data.
2 -> Yes, that is possible, because all the data is shared among the cluster nodes, and you can also use a load balancer to balance the load. Load balancers are quite common because most large deployments use them, but if you are experimenting just for your own knowledge there is no need: you will not see the kind of load that requires one, and the cluster manager itself can do the whole thing.
3 -> RandomSeed is right. You feel the need for a cluster when replication becomes impractical: if you are using the master server for writes and a slave for reads, then at some point the traffic becomes so heavy that the servers can no longer cope, and you will feel the need for clustering, simply to speed up the whole process.
This is not the only case, just one of the scenarios.
Hope this is helpful for you!

MySQL Cluster "split brain" solution?

A few days ago I was at an IT conference here in Belgrade. On the agenda was a topic about MySQL and clustering in MySQL, and the guys from MySQL said that they have the best solution for the cluster split-brain problem. Does anyone know something about this? Is it true, or just a marketing trick?
MySQL Cluster requires at least 3 systems, which allows one of the nodes to act as an arbitrator for dealing with split-brain scenarios. Two of the systems can run data nodes/mysqld nodes, and the third needs to run the management node (which is normally the arbitrator by default, though SQL nodes can also take that role).
In the event of a split brain (i.e. the two data nodes can no longer talk to each other, but they are both still running), they will realize it and ask the arbitrator to decide which node is allowed to continue running. If a node cannot talk to the arbitrator, it will shut down. From all the nodes the arbitrator can talk to, it picks one to continue running and tells the other(s) to shut down.
The arbitrator is normally the management node, but SQL nodes can also take the role. If the arbitrator fails, the cluster can elect a new one. However, it cannot do this during arbitration, so if a data node and the arbitrator fail at once, the remaining node will shut down.
Of course, it gets a bit more complex when you have multiple node groups, but the same basic ideas apply in those cases as well.
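A minimal sketch of a 3-host config.ini reflecting the layout described above (hostnames are placeholders):

```
[ndb_mgmd]
NodeId=1
HostName=mgmt.example.com
# Management nodes are the preferred arbitrators by default.

[ndbd]
NodeId=2
HostName=data1.example.com

[ndbd]
NodeId=3
HostName=data2.example.com

[mysqld]
NodeId=4
# SQL/API nodes can also be allowed to arbitrate via ArbitrationRank.
```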
You can read more about this in the MySQL Cluster FAQ.
Whether it is "the best" is subjective, although I've heard that MySQL's cluster support is good. However, this concept is definitely supported and in use for other DBs, such as Slony-I for Postgres.
You may get more useful responses if you clarify what aspect of "best" you mean (performance, uptime, ease of setup, etc.).