For my current project we are thinking of setting up a dual-master replication topology for a geographically separated setup: one DB on the US East Coast and the other in Japan.
I am curious if anyone has tried this and what their experience has been.
Also, I am curious what my other options are for solving this problem; we are considering message queues.
Thanks!
Just a note on the technical aspects of your plan: You have to know that MySQL does not officially support multi-master replication (only MySQL Cluster provides support for synchronous replication).
But there is at least one "hack" that makes multi-master-replication possible even with a normal MySQL replication setup. Please see Patrick Galbraith's "MySQL Multi-Master Replication" for a possible solution. I don't have any experience with this setup, so I don't dare to judge on how feasible this approach would be.
There are several things to consider when replicating databases geographically. If you are doing this for performance reasons, be sure your replication model supports your data being "eventually consistent", as it can take time to bring the replication current in both, or many, locations. If your throughput or response times between locations are not good, active replication may not be the best option.
Setting up MySQL as a dual master does actually work fine in the right scenario, done correctly. But I am not sure it fits your scenario very well.
First of all, a dual-master setup in MySQL is really a ring setup: server A is defined as the master of B, while B is at the same time defined as the master of A, so both servers act as both master and slave. The replication works by shipping a binary log containing the SQL statements, which the slave applies when it sees fit, which is usually right away. But if you're hammering it with local insertions, it will take a while to catch up. The slave insertions are sequential, by the way, so you won't get any benefit from multiple cores etc.
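For reference, the wiring of such a ring is just ordinary replication configured in both directions. A rough sketch (hostnames, credentials, and log coordinates are placeholders, not from the original posts):

    -- In each server's my.cnf: a unique server-id and binary logging, e.g.
    --   server-id = 1        (use server-id = 2 on the other box)
    --   log-bin   = mysql-bin

    -- On server A, point replication at B (run the mirror image on B):
    CHANGE MASTER TO
        MASTER_HOST     = 'server-b.example.com',  -- placeholder host
        MASTER_USER     = 'repl',                  -- placeholder user
        MASTER_PASSWORD = 'secret',                -- placeholder password
        MASTER_LOG_FILE = 'mysql-bin.000001',      -- from SHOW MASTER STATUS on B
        MASTER_LOG_POS  = 4;
    START SLAVE;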
The primary use of dual-master MySQL is to have redundancy at the server level with automatic failover (often using heartbeat on Linux). Excluding MySQL Cluster (for various reasons), this is the only usable automatic failover for MySQL. The setup for a basic dual master is easily found on Google; the heartbeat part is a bit more work. But this is not really what you were asking about, since this really behaves as a single database server.
If you want the dual-master setup because you always want to write to a local database (writing to both of them at the same time), you'll need to write your application with this in mind. You can never have auto-incrementing values in the database, and when you have unique values, you must make sure that the two locations never write the same value. For example, location A could write odd unique numbers and location B could write even unique numbers. The reason is that you're not guaranteed that the servers are in sync at any given time, so if you've inserted a unique row in A, and then an overlapping unique row in B before the second server catches up, you'll have a broken system. And once something breaks, the entire system stops.
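For what it's worth, MySQL has server variables that implement exactly this odd/even scheme for auto-increment columns (they come up again in later answers); a minimal sketch:

    -- Location A hands out 1, 3, 5, ...
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 1;

    -- Location B hands out 2, 4, 6, ...
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;

    -- Put the same values in each server's my.cnf so they survive a restart.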
To sum it up: it's possible, but you'll need to tip-toe very carefully if you're building business software on top of this.
Because of the one-to-many architecture of MySQL replication, multiple masters have to be arranged in a replication ring: that is, each replicates from the next in a loop. For two, they replicate off each other. This has been supported as far back as v3.23.
In a previous place I worked, we did it with v3.23 for quite a number of customers, as a way of providing exactly what you're asking for. We used SSH tunnels over the Internet to do the replication. It took us some time to get it reliable, and several times we had to do a binary copy of one database to another (fortunately, none of them was over 2GB nor needed 24-hour access). Also, the replication in v3 was not nearly as stable as in v4, but even in v5 it will just stop if it detects any sort of error.
To accommodate the inevitable replication lag, we restructured the application so that it didn't rely on AUTO_INCREMENT fields (and removed that attribute from the tables). This was reasonably straightforward due to the data-access layer we had developed: instead of using mysql_insert_id() for new objects, it created the new ID first and inserted it along with the rest of the row. We also implemented site IDs that we stored in the top half of the ID, since the IDs were BIGINTs. This also meant we didn't have to change the application when we had a client who wanted the database in three locations. :-)
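A rough illustration of that ID scheme (the 32/32 bit split and the values are my assumptions; the original layout may have differed):

    -- Compose a 64-bit ID with the site ID in the top half:
    SET @site_id   = 2;      -- this location's identifier
    SET @local_seq = 12345;  -- next value from a per-site counter
    SELECT (@site_id << 32) | @local_seq AS new_id;  -- 8589946937

    -- The row is then INSERTed with this precomputed ID instead of
    -- relying on AUTO_INCREMENT and mysql_insert_id().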
It wasn't 100% robust. InnoDB was just gaining some visibility, so we couldn't easily use transactions, although we considered it. So there were occasional race conditions when two objects were created with the same ID; one would fail, and we tried to report that in the app. But it was still a significant part of someone's job to watch over the replication and fix things when it broke. Importantly, to fix it before we got too far out of sync, because in a few cases the databases were being used at both sites and would quickly become difficult to re-integrate if we had to rebuild one.
It was a good exercise to be a part of, but I wouldn't do it again. Not in MySQL.
I want to set up a complete server (Apache, MySQL 5.7) as a fallback for a production server.
File-level synchronization using rsync and a cron job is already in place.
The MySQL replication is currently the problem; more precisely, the choice of the right replication method.
Multi-primary Group Replication seemed to be the most suitable method so far.
In case of a longer production downtime, it is possible to switch to the fallback server quickly via DNS change.
Write accesses to the database are possible immediately without adjustments.
So far so good. But: if the fallback server fails, it goes into unreachable status and the production server switches to read-only, since its group no longer has a quorum. This is of course a no-go.
I thought it might be possible using various replication variables: if the fallback server is in an unreachable state for a certain time (~5 minutes), the production server should stop its group_replication and start a new group_replication. This would have to happen automatically to keep the read-only time relatively low. When the fallback server comes back online, it would be manually added to the newly started group. But if I read the various forum posts and documentation correctly, it's not possible that way. And running Group Replication with only two nodes is the wrong decision anyway.
https://forums.mysql.com/read.php?177,657333,657343#msg-657343
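For completeness: MySQL does let you unblock the surviving member after a majority loss, but only manually, which is exactly the automation gap described above. A sketch, where 'prod-host:33061' stands in for the production server's group_replication_local_address:

    -- On the surviving member, which is blocked waiting for quorum:
    SET GLOBAL group_replication_force_members = 'prod-host:33061';

    -- Once the group is healthy again, clear the override:
    SET GLOBAL group_replication_force_members = '';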
Is master-slave replication the only one that can be considered for such a fallback system? https://dev.mysql.com/doc/refman/5.7/en/replication-solutions-switch.html
Or does Group Replication offer possibilities after all, if you can react suitably to the quorum problem? Possibilities that I have overlooked so far.
Many thanks and best regards
Short Answer: You must have [at least] 3 nodes.
Long Answer:
Split brain with only two nodes:
Write only to the surviving node, but only if you can conclude that it is the only surviving node, else...
The network died and both Primaries are accepting writes. This leads to them disagreeing with each other, and you may have no clean way to repair the mess.
Go into read-only mode on the surviving node. (The only safe and sane approach.)
The problem is that the automated system cannot tell the difference between a dead Primary and a dead network.
So... You must have 3 nodes to safely avoid "split-brain" and have a good chance of an automated failover. This also implies that no two nodes should be in the same tornado path, flood range, volcano path, earthquake fault, etc.
You picked Group Replication (InnoDB Cluster). That is an excellent offering from MySQL. Galera with MariaDB is an equally good offering -- there are a lot of differences in the details, but it boils down to needing 3, preferably dispersed, nodes.
DNS changes take some time, due to the TTL. A proxy server may help with this.
Galera can run in a "Primary + Replicas" mode, but it also allows you to run with all nodes being read-write. This leads to a slightly different set of steps a client must take to stop writing to one node and start writing to another. There are proxies to help with such.
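For reference, the heart of a three-node Galera configuration is only a few lines; a minimal my.cnf sketch with placeholder paths and IPs:

    [mysqld]
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib/galera/libgalera_smm.so    # path varies by distro
    wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3  # placeholder IPs
    wsrep_node_name          = node1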
FailBack
Are you trying to always use a certain Primary except when it is down? Or can you accept letting any node be the 'current' Primary?
I think of "fallback" as simply a "failover" that goes back to the original Primary. That implies a second outage (possibly briefer). However, I understand geographic considerations. You may want your main Primary to be 'near' most of your customers.
I recommend using the Galera MySQL cluster with HAProxy as a load balancer and automatic failover solution. We have used it in production for a long time now and never had serious problems. The most important thing to consider is monitoring the replication sync status between nodes. Also, make sure your storage engine is InnoDB, because Galera doesn't work with MyISAM.
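For the monitoring mentioned above, Galera exposes status counters you can poll from any node, for example:

    SHOW STATUS LIKE 'wsrep_cluster_size';         -- should equal your node count
    SHOW STATUS LIKE 'wsrep_local_state_comment';  -- 'Synced' when healthy
    SHOW STATUS LIKE 'wsrep_ready';                -- 'ON' means the node accepts queries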
Check this link on how to set it up:
https://medium.com/platformer-blog/highly-available-mysql-with-galera-and-haproxy-e9b55b839fe0
But in these kinds of situations the main problem is not the failover mechanism, because there are many solutions out of the box; rather, you have to check your read/write ratio and transactional services and make sure replication delays won't affect them. Sometimes vertically scalable solutions with master-slave replication are more suitable for transaction-sensitive financial systems; it really depends on the service you're providing.
We are looking into options for our MySQL replication architecture, the relevant details of our current setup:
We manage several branches in different cities.
Every branch has the same database structure.
Every primary key in every table is prefixed with a branch identifier.
We need a branch to keep working if it has a network outage, and it must sync with the main branch once the connection is restored.
As there is no chance of getting a duplicate key on any table, I'm thinking of something like MySQL multi-master, or maybe Percona XtraDB Cluster or Tungsten, but I can't find documentation on what happens if a single node is isolated from the others, and what happens with the data it received once the connection is restored.
Is there any proven method that suits this kind of setup? I would appreciate any advice, thanks.
In the case of Tungsten, how it behaves depends on how you tell it to behave.
You seem to be describing a relaxed consistency model, but no off-the-shelf clustering solution is going to solve all your problems. Certainly, if you ensure that each record is only ever modified at its "home" database, then you shouldn't run into many problems, but this model also requires you to replicate all the data to all of the locations. Bandwidth might not be an issue, and it does provide a good DR capability, but storage and scalability may become an issue.
If you do have a centralized, well-managed datacentre, then another approach would be to have each branch run off an asynchronous dual master (one located at the branch and one at the datacentre) and roll your own scripts for consolidating the dataset.
Multi-master replication would work, as long as you're not performing UPDATEs on the same data while the nodes are out of sync. In that case, it will apply the changes silently, so your data would become inconsistent.
If you're not doing that, I think it's the best solution, as MySQL takes care of the binary log pointers and handles reconnections. Just be sure that the auto_increment_offset setting is properly configured amongst the masters. Anyway, I've only tested this deployment with 2 master servers (7 years in production with few issues).
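A quick way to verify that on each master:

    SHOW VARIABLES LIKE 'auto_increment%';
    -- Expect increment = 2 on both masters, with offset = 1 on one
    -- and offset = 2 on the other.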
I currently have a MySQL dual master replication (A<->B) set up and everything seems to be running swimmingly. I drew on the basic ideas from here and here.
Server A is my web server (a VPS). User interaction with the application leads to updates to several fields in table X (which are replicated to server B). Server B is the heavy-lifter, where all the big calculations are done. A cron job on server B regularly adds rows to table X (which are replicated to server A).
So server A can update (but never add) rows, and server B can add rows. Server B can also update fields in X, but only after the user no longer has the ability to update that row.
What kinds of potential disasters can I expect with this scenario if I go to production with it? Or does this seem OK? I'm asking mostly because I'm ignorant about whether any simultaneous operation on the table (from either the A copy or the B copy) can cause problems or if it's just operations on the same row that get hairy.
Dual master replication is messy if you attempt to write to the same database on both masters.
One of the biggest points of contention (and high blood pressure) is the use of autoincrement keys.
As long as you remember to set auto_increment_increment and auto_increment_offset, you can look up any data you want and retrieve auto-incremented IDs.
You just have to remember this rule: if you read an ID from serverX, you must look up the needed data from serverX using that same ID.
Here is one saving grace for using dual master replication.
Suppose you have
two databases (db1 and db2)
two DB servers (serverA and serverB)
If you impose the following restrictions
all writes of db1 to serverA
all writes of db2 to serverB
then you are not required to set auto_increment_increment and auto_increment_offset.
I hope my answer clarifies the good, the bad, and the ugly of using dual master replication.
Here is a pictorial example of 4 masters using auto increment settings
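In text form, the idea behind that example is the following (a sketch, not the original diagram):

    -- Same increment on all four masters, a distinct offset on each:
    --   master1: auto_increment_increment = 4, auto_increment_offset = 1  -> 1, 5,  9, 13, ...
    --   master2: auto_increment_increment = 4, auto_increment_offset = 2  -> 2, 6, 10, 14, ...
    --   master3: auto_increment_increment = 4, auto_increment_offset = 3  -> 3, 7, 11, 15, ...
    --   master4: auto_increment_increment = 4, auto_increment_offset = 4  -> 4, 8, 12, 16, ...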
Nice article from Percona on this subject
Master-master replication can be very tricky. Are you sure that this is the best solution for you? Usually it is used for load-balancing purposes (e.g. round-robin connections to your DB servers) and sometimes to avoid the replication-lag effect. A big known issue is the auto_increment problem, which is supposedly solved using different offsets and increment values.
I think you should modify your configuration to simple master-slave by making A the master and B the slave, unless I am mistaken about the requirements of your system.
I think you can depend on Percona XtraDB Cluster's "Feature 2: Multi-Master replication" rather than regular MySQL replication.
They promise the following:
By Multi-Master I mean the ability to write to any node in your cluster without worrying that you'll eventually get an out-of-sync situation, as regularly happens with regular MySQL replication if you imprudently write to the wrong server.
With Cluster you can write to any node, and the Cluster guarantees consistency of writes. That is, the write is either committed on all nodes or not committed at all.
There are two important consequences of the multi-master architecture.
First: we can have several appliers working in parallel. This gives us true parallel replication: a slave can have many parallel threads, and you can tune this with the variable wsrep_slave_threads.
Second: there might be a small period of time when the slave is out of sync with the master. This happens because the master may apply events faster than the slave, so if you read from the slave, you may read data that has not been changed yet (the linked article includes a diagram of this). However, you can change this behavior with the variable wsrep_causal_reads=ON; in that case a read on the slave will wait until the event is applied (which will increase the response time of the read). This gap between slave and master is the reason why this replication is called "virtually synchronous replication", not true "synchronous replication".
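That toggle can be set per session; a sketch (table and column are hypothetical):

    -- Make this session's reads wait until the node has caught up:
    SET SESSION wsrep_causal_reads = ON;  -- newer Galera versions use wsrep_sync_wait instead
    SELECT balance FROM accounts WHERE id = 42;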
The described behavior of COMMIT also has a second serious implication.
If you run write transactions on two different nodes, the cluster will use an optimistic locking model.
That means a transaction will not check for possible locking conflicts during individual queries, but rather at the COMMIT stage, and you may get an ERROR response on COMMIT. I am highlighting this because it is one of the incompatibilities with regular InnoDB that you may experience. In InnoDB, DEADLOCK and LOCK TIMEOUT errors usually happen in response to a particular query, not on COMMIT. If you follow good practice you still check the error code after the COMMIT query, but I have seen many applications that do not.
So, if you plan to use the multi-master capabilities of XtraDB Cluster and run write transactions on several nodes, you may need to make sure you handle the response to the COMMIT query.
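Concretely, under this optimistic model the conflict surfaces at COMMIT, so the retry has to wrap the whole transaction; a sketch with a hypothetical table:

    START TRANSACTION;
    UPDATE accounts SET balance = balance - 10 WHERE id = 42;
    COMMIT;
    -- Under a cluster-wide conflict the COMMIT itself can fail with
    -- ER_LOCK_DEADLOCK (error 1213); the application must catch that
    -- and re-run everything from START TRANSACTION.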
You can find it here along with a pictorial explanation.
From my rather extensive experience on this topic I can say you will regret writing to more than one master someday. It may be soon, it may not be for a long time, but it will happen. You will have two servers that each have some correct data and some wrong data, and you will either pick one as the authoritative source and throw the other away (probably without really knowing what you're throwing away) or you'll reconcile the two. No matter how you design it, you cannot eliminate the possibility of this happening, so it's a mathematical certainty that it will happen someday.
Percona (my employer) has handled probably several hundred cases of recovery after doing what you're attempting. Some of them take hours, some take weeks, one I helped with took a few months -- and that's with excellent tools to help.
Use a different replication technology or find a different way to do what you want to do. MMM will not help -- it will bring catastrophe sooner. You cannot do this with standard MySQL replication, with or without external tools. You need a replacement replication technology such as Continuent Tungsten or Percona XtraDB Cluster.
It's often easier to just solve the real need in some other fashion and give up multi-master writes, if you want to use vanilla MySQL replication.
And thanks for sharing my master-master MySQL cluster article. As Rolando clarified, this configuration is not suitable for most production environments due to the limitation of auto-increment support.
The most adequate way to get a MySQL cluster is using NDB, which requires at least 4 servers (2 management and 2 data nodes).
I have written a detailed article to get this running on two servers only, which is very similar to my previous article but using NDB instead.
http://www.hbyconsultancy.com/blog/mysql-cluster-ndb-up-and-running-7-4-and-6-3-on-ubuntu-server-trusty-14-04.html
Note that I always recommend analysing your needs and finding the most adequate solution; don't just look at the available solutions and try to figure out whether they fit your needs or not.
-Hatem
I would highly recommend looking into a tool that will manage this for you. Multi-master replication can be very troublesome if things go wrong.
I would suggest something like Percona XtraDB Cluster. I've been following this project, and it looks very cool. I definitely think it will be a game changer in the MySQL world. It's still in beta though.
We're moving a social media service to separate data centers, as our hosting provider's entire data center went down. Twice.
This means that both websites need to be synchronized in some sense. I'm less worried about the code of the pages, which is easy enough to sync, but they need to have the same database data.
From my research on SO, it seems MySQL Replication is a good option, but the MySQL manual, in its scale-out section, says that it is best when there are far more reads than writes/updates:
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html
In our case, it's about equal. We're getting around 200-300 thousand requests a day right now, and we can grow rapidly. Every request is both a read and write request.
What would be the best method or tool to handle this?
Replication isn't instantaneous, and all writes have to be sent over the wire to the remote servers, so it takes bandwidth too. As long as this works for you and you understand the consequences, then don't worry about the read/write ratio.
However, are you sure that you need global replication? We handle millions of requests and have one location, with multiple web servers connected to two databases. One database is the live database, and the other is a replicated read only database.
We do have global failover locations, and some people connect to these on any given day, even when our main node is up, because they have Internet issues. The data just trickles in, though.
If the main node went down, everybody would be using the global failover locations, in order. So if our main node died, all customers would connect to Denver; if Denver went down, they'd all connect to Columbus.
Also, our main node is on two different Internet providers, so one ISP going down doesn't take us down.
Is the connection speed between the two data centers good enough? You could copy the files to a new server and move the database there, then set up the old server to connect to the new server's MySQL database in the other DC. This will be slower, of course, but depending on the nature of your queries it may be acceptable. As soon as the DNS change (or whatever you use) finishes, you just power off the old server once there are no more requests for it.
To assess your options, you need to consider what your requirements are in a disaster recovery scenario (i.e. total loss of the system in one data-centre).
In particular for this scenario, how much data can you afford to lose (recovery point objective - RPO), and how quickly do you need to have the standby data-centre version of the site up and running (recovery time objective - RTO).
For example, if your RPO is no transactions lost and your RTO is recovery in 5 minutes, the solution would be different than if you can afford to lose 5 minutes of transactions and take an hour to recover.
Another question I'd ask is whether you're using SAN storage at all. This gives you options for replication at the storage level (SAN array to SAN array), rather than at the database level (e.g. MySQL replication).
Also to consider is the distance between the data-centres (e.g. time-wise, can you afford to perform a synchronous write to both databases, or would an asynchronous replication approach be more appropriate?).
Consider a reasonably large website (2M+ pageviews/month, lots of users) with two frontend servers: one in the US, and one in Europe. Two dedicated URLs bring visitors to one of the servers, one in French, the other in English. Both sites share exactly the same data.
What would be the most cost effective solution? (DB used at my company: MySQL)
1/ A single Master server on Amazon EC2 (US), and slaves on the frontend servers?
Advantages: no master-master replication, meaning no risk of data conflicts with auto-increment and duplicates on unique columns, etc.
Drawbacks: the lag! Won't there be too much lag for writes to the US master when you are in Europe?
Another drawback could be the lack of a quick-and-dirty solution in case the master dies. And what about having slaves on the same servers as the frontends?
2/ Two Amazon EC2 instances, one in the US, one in Europe, acting as master-master replication servers. Plus two slaves on each of the frontends?
Adv: speed, and security of the data. Of course there is no load balancer, but a hack to switch the master to the other one seems pretty trivial.
Drawbacks: price. And the risk of corruption of the DB.
3/ Any other solution ?
As it is my first time working with servers on 2 continents, I would really appreciate learning from your experience in this area, whether with MySQL or not, and whether on EC2 or not.
Thanks
Marshall
As usual, what I'm about to say depends on your app, how it uses the database, etc. You need to ask yourself:
If you're using off the shelf software, what have other people done in this situation?
Does the app need to work on the entire dataset, or can you partition?
Is your app built to handle multi-master replication? (Usually the sticking point is its reliance on auto-increment PKs.)
What are the chances of update/delete collisions? What are the costs?
What's the read:write ratio? What's the nature of the writes? Are they typically update or append operations?
I'm assuming the French server is in Europe, while the English server is in the US? If you can partition your data so that the French site uses one DB and the English site uses the other, you're better off, even if both sites access both DBs, since you don't have to worry about collisions. You could even run two MySQL instances on each master server and do multi-master replication for both.
If you can't partition, I'd probably go with #2, but I'd designate one of the machines as the 'true' master and send all the writes to it to help avoid data clobber. This way it's easy to switch in a pinch.
If you're cost-sensitive and you're going to run replicas on your frontend servers anyway, just run the master databases on the frontend servers. You can always pull them off later. Replicas can often have higher CPU/IO costs than masters taking the same read load: they have to execute their writes serially, which can really screw things up.
Also, don't use m1.small instances for your DB, or at least keep an eye on your performance. m1.smalls are significantly underpowered, and if you watch top, you'll notice a significant percentage of your CPU time being stolen by the hypervisor. I recommend c1.mediums.
Don't use master-master replication, ever. There is no mechanism for resolving conflicts. If you write to both masters at the same time (or write to one master before it has caught up with changes you previously wrote to the other one), you will end up with a broken replication scenario. The service won't stop; the two will just drift further and further apart, making reconciliation impossible.
Don't use MySQL replication without some well-designed monitoring to check that it's working OK. Don't assume that because you've configured it correctly initially, it'll either keep working OR stay in sync.
DO have a well-documented, well-tested procedure for recovering slaves from being out of sync or stopped. Have a similarly documented procedure for installing a new slave from scratch.
Your application may need sufficient intelligence to know that a slave is out of sync or stopped, and that it should not be used, if you care about correct or up-to-date data. You'll need some kind of feedback from your monitoring to do this.
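A minimal check behind such monitoring might look like this (fields as reported by SHOW SLAVE STATUS):

    SHOW SLAVE STATUS\G
    -- Alert if Slave_IO_Running or Slave_SQL_Running is not 'Yes',
    -- or if Seconds_Behind_Master is NULL or above your threshold.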
If you have a slave in, say, the US while your master is in Europe, that would normally give you the amount of latency you expect, i.e. something on the order of 150ms more than if they were co-located.
In MySQL, the slave does not start a query until the master finishes it, so it will always be behind by the length of time an update takes.
Also, the slave is single-threaded, so a single "hard" update query will delay all subsequent ones.
If you're pushing your master hard on multithreaded write-load, assuming your slaves have identical hardware, it is very unlikely that they'll be able to keep up.
We are looking at a similar scenario, after Amazon's East Coast data center was completely cut off the net twice this week, meaning that not even replicating across multiple regions and using RDS instances kept us available.
But RDS does not allow replication crossing from East to West, or even into Europe.
We are now reviewing a master-master approach between East and West, or even Europe, with one master acting as a failover only, and failover via dnsmadeeasy, which responds extremely fast.
Advantage: quick and reliable failover, short downtime, no complex management of the failover function.
Disadvantage: one extra system running without being used. But compared to using RDS, that's not more expensive.
RDS is nicely managed by Amazon, including point-in-time recovery and so on; all that is lost by switching away from it. But the fact that it is limited to replication within only one region, and that region can be completely cut off, makes it problematic. As an alternative to RDS backups we are looking at Zmanda's open-source tools to take care of backup management. Not yet tested, but based on all our stuffing around with failover, databases, and hardware, this looks like the simplest and therefore most promising approach for high availability.
This question is old, but the solution exists now: Galera. It does MySQL (InnoDB) replication, and works well with WANs, too. http://codership.com/