Is a decentralized solution with MySQL possible?

Is a decentralized MySQL solution possible, where updates on one MySQL server are reflected on all the other regions? Likewise, changes made in any other region should sync to the remaining regions.

Some popular solutions are:
Master/Slave replication. Easy to set up, but you have to change your code to write to the one master server and to read from the many slave servers (see the setup sketch after this list). It won't scale well once you have so many connections that the master itself becomes busy.
Master/Master replication. Can scale indefinitely, but the two master databases can end up with conflicting data, unlike the master/slave scenario. When that happens, you will also have to build somewhat complex logic into your application to deal with the conflicting data. It is also possible to set up a master that sends data to a slave which is itself the master of another slave, chaining them all in a master/slave ring.
MySQL Cluster. It performs master/master replication without conflicts and with automatic sharding, giving you performance and scalability without the need to change code in your application. However, it is not as popular: you won't find many tutorials online on how to set it up, you'll have to go through the official documentation, and answers to any issues you run into will be harder to find online as well.
With any of these strategies, changes you make to one database are replicated to one or more of the others.
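As a rough sketch of what the master/slave option involves (hostname, credentials and log coordinates below are placeholders, not values from any particular setup):

    -- The master's my.cnf needs log-bin enabled and a unique server-id.
    -- On the slave, point it at the master and start replicating:
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;
    SHOW SLAVE STATUS\G   -- check that the IO and SQL threads are running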

You could use the MySQL Cluster solution:
https://dev.mysql.com/downloads/cluster/

Related

What solution can I use for MySQL replication across cities?

We are looking into options for our MySQL replication architecture. The relevant details of our current setup:
We manage several branches in different cities.
Every branch has the same database structure.
Every primary key in every table is prefixed by a branch identifier.
A branch needs to keep working if it has a network outage, and it must sync with the main branch once the connection is restored.
Since there is no chance of getting a duplicate key in any table, I'm thinking of something like MySQL multi-master, or maybe Percona XtraDB Cluster or Tungsten, but I can't find documentation on what happens if a single node is isolated from the others, and what happens with the data it received once the connection is restored.
Is there any proven method that suits this kind of setup? I would appreciate any advice, thanks.
In the case of Tungsten, how it behaves depends on how you tell it to behave.
You seem to be describing a relaxed consistency model. But no off the shelf clustering solution is going to solve all your problems. Certainly if you ensure that each record is only ever modified at its "home" database then you shouldn't run into many problems, but this model also requires you to replicate all the data to all of the locations. Bandwidth might not be an issue and it does provide a good DR capability, but storage and scalability may become an issue.
If you do have a centralized, well managed datacentre, then another approach would be to have each branch run off an asynchronous dual master - one located at the branch and one at the datacentre - and then roll your own scripts for consolidating the dataset.
Multi-master replication would work, as long as you're not performing UPDATEs on the same data while the nodes are out of sync. In that case, the changes will be applied silently, so your data would become inconsistent.
If you're not doing that, I think it's the best solution, as MySQL takes care of the binary log pointers and handles reconnections. Just be sure that the auto_increment_offset setting is properly configured amongst the masters. Anyway, I've only tested this deployment with 2 master servers (7 years in production with few issues).
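For illustration, "properly configured" here usually means interleaving the generated ids so the masters can never collide; with two masters, one possible assignment (example values, normally set in each server's my.cnf):

    -- On master 1:
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 1;   -- generates 1, 3, 5, ...
    -- On master 2:
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;   -- generates 2, 4, 6, ...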

MySQL single DB accessed by two servers

I am trying to build a website that uses a MySQL DB. What I am trying to do is make my database accessible to two servers, which means that when server 1 is down, server 2 can access the same database and the website keeps working normally. I've read about multi-master replication, but it does not seem to be what I need. And what happens when using master/slave replication and the master server goes down? How can it be restored?
Thanks for your help.
I think the master/slave pattern is exactly what you're looking for. The master handles all the writes and the slaves handle all the reads. If you're cloud hosting with someone like Rackspace or AWS, they make it very easy to set up the data replication across each node. As for your last sub-question about what happens if the master goes down, I believe it is pretty straightforward to set up fallbacks for that too. There are likely several approaches, but at the most basic level I know you can set up multiple DB nodes (with a fallback algorithm) just like any other instance.
A final note: if it's your first time doing this, I highly recommend Rackspace, because their support is amazing and they make a huge effort, when you start, to explain all your options and help you pick the best strategy.
PS: rereading your question, it's a little unclear what you're trying to accomplish. You mention two servers accessing one DB, and you also talk about redundant setups for multiple DB instances. They're really two separate issues. The former is trivially easy, because you can always point more than one server at a DB; as long as the credentials are right, it will work. The tricky part is keeping the data synced properly: if both are reading and writing the same tables, things are going to bang together. That's where the master/slave pattern comes into play. All the writes go through the master, but anyone can read from any slave because the data gets replicated.

MySQL dual master replication -- is this scenario safe?

I currently have a MySQL dual master replication (A<->B) set up and everything seems to be running swimmingly. I drew on the basic ideas from here and here.
Server A is my web server (a VPS). User interaction with the application leads to updates to several fields in table X (which are replicated to server B). Server B is the heavy-lifter, where all the big calculations are done. A cron job on server B regularly adds rows to table X (which are replicated to server A).
So server A can update (but never add) rows, and server B can add rows. Server B can also update fields in X, but only after the user no longer has the ability to update that row.
What kinds of potential disasters can I expect with this scenario if I go to production with it? Or does this seem OK? I'm asking mostly because I'm ignorant about whether any simultaneous operation on the table (from either the A copy or the B copy) can cause problems or if it's just operations on the same row that get hairy.
Dual master replication is messy if you attempt to write to the same database on both masters.
One of the biggest points of contention (and high blood pressure) is the use of autoincrement keys.
As long as you remember to set auto_increment_increment and auto_increment_offset, you can look up any data you want and retrieve auto-incremented ids.
You just have to remember this rule: if you read an id from serverX, you must look up the data you need from serverX using that same id.
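To make that rule concrete, here is a sketch (the table and columns are made up for the example):

    -- All of this runs on serverX, the master that took the write:
    INSERT INTO orders (customer_id, total) VALUES (42, 19.99);
    SELECT LAST_INSERT_ID();                            -- id generated by serverX
    SELECT * FROM orders WHERE id = LAST_INSERT_ID();   -- read it back from serverX
    -- Reads that depend on that id should also go to serverX, at least until
    -- replication has delivered the row to the other master.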
Here is one saving grace for using dual master replication.
Suppose you have
two databases (db1 and db2)
two DB servers (serverA and serverB)
If you impose the following restrictions
all writes of db1 to serverA
all writes of db2 to serverB
then you are not required to set auto_increment_increment and auto_increment_offset.
I hope my answer clarifies the good, the bad, and the ugly of using dual master replication.
Here is a pictorial example of 4 masters using auto increment settings
Nice article from Percona on this subject
Master-master replication can be very tricky; are you sure that this is the best solution for you? Usually it is used for load-balancing purposes (e.g. round-robin connections to your DB servers) and sometimes to avoid the effects of replication lag. A big known issue is the auto_increment problem, which is supposedly solved by using different offsets and increment values.
I think you should modify your configuration to simple master-slave by making A the master and B the slave, unless I am mistaken about the requirements of your system.
I think you can depend on
Percona XtraDB Cluster Feature 2: Multi-Master replication
rather than regular MySQL replication. They promise the following:
By Multi-Master I mean the ability to write to any node in your cluster without worrying that you will eventually get an out-of-sync situation, as regularly happens with ordinary MySQL replication if you imprudently write to the wrong server.
With Cluster you can write to any node, and the Cluster guarantees consistency of writes. That is, the write is either committed on all nodes or not committed at all.
There are two important consequences of the multi-master architecture.
First: we can have several appliers working in parallel. This gives us true parallel replication. A slave can have many parallel threads, and you can tune it with the variable wsrep_slave_threads.
Second: there might be a small period of time when the slave is out of sync with the master. This happens because the master may apply events faster than the slave, so if you read from the slave you may read data that has not changed yet (this is shown in the diagram in the article). However, you can change this behavior with the variable wsrep_causal_reads=ON. In that case a read on the slave will wait until the event is applied (which will increase the response time of the read). This gap between slave and master is the reason why this replication is called "virtually synchronous replication" rather than true "synchronous replication".
The described behavior of COMMIT also has a second serious implication.
If you run write transactions on two different nodes, the cluster will use an optimistic locking model.
That means a transaction will not check for possible locking conflicts during individual queries, but rather at the COMMIT stage, and you may get an ERROR response on COMMIT. I am highlighting this because it is one of the incompatibilities with regular InnoDB that you may run into: in InnoDB, DEADLOCK and LOCK TIMEOUT errors usually happen in response to a particular query, not on COMMIT. If you follow good practice you will still check the error code after the COMMIT query, but I have seen many applications that do not.
So, if you plan to use the Multi-Master capabilities of XtraDB Cluster and run write transactions on several nodes, you may need to make sure you handle the response to the COMMIT query.
You can find it here, along with a pictorial explanation.
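For reference, the two variables mentioned above are ordinary server variables on a Galera/XtraDB Cluster node; a minimal sketch with illustrative values:

    SET GLOBAL  wsrep_slave_threads = 8;    -- number of parallel applier threads
    SET SESSION wsrep_causal_reads  = ON;   -- reads in this session wait until the
                                            -- node has applied pending write-sets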
From my rather extensive experience on this topic I can say you will regret writing to more than one master someday. It may be soon, it may not be for a long time, but it will happen. You will have two servers that each have some correct data and some wrong data, and you will either pick one as the authoritative source and throw the other away (probably without really knowing what you're throwing away) or you'll reconcile the two. No matter how you design it, you cannot eliminate the possibility of this happening, so it's a mathematical certainty that it will happen someday.
Percona (my employer) has handled probably several hundred cases of recovery after doing what you're attempting. Some of them take hours, some take weeks, one I helped with took a few months -- and that's with excellent tools to help.
Use a different replication technology or find a different way to do what you want to do. MMM will not help -- it will bring catastrophe sooner. You cannot do this with standard MySQL replication, with or without external tools. You need a replacement replication technology such as Continuent Tungsten or Percona XtraDB Cluster.
It's often easier to just solve the real need in some other fashion and give up multi-master writes, if you want to use vanilla MySQL replication.
And thanks for sharing my Master-Master MySQL cluster article. As Rolando clarified, this configuration is not suitable for most production environments due to the limitations of auto-increment support.
The most adequate way to get a MySQL cluster is to use NDB, which requires at least 4 servers (2 management nodes and 2 data nodes).
I have written a detailed article on getting this running on only two servers, which is very similar to my previous article but uses NDB instead.
http://www.hbyconsultancy.com/blog/mysql-cluster-ndb-up-and-running-7-4-and-6-3-on-ubuntu-server-trusty-14-04.html
Note that I always recommend analysing your needs and finding the most adequate solution; don't just look at the available solutions and try to figure out whether or not they fit your needs.
-Hatem
I would highly recommend looking into a tool that will manage this for you. Multi-master replication can be very troublesome if things go wrong.
I would suggest something like Percona XtraDB Cluster. I've been following this project, and it looks very cool. I definitely think it will be a game changer in the MySQL world. It's still in beta though.

Best design for distributed databases

I have a project where we have one central system that exposes an API on top of MySQL. We now need to replicate that same service locally on several different boxes (which could be 50+). We wanted to have a local cache of the DB on each of those boxes to ensure quick responses and failover if the "central" system goes down.
Any idea what the best design for this is? I was thinking of some sort of master/slave setup, but I'm not sure whether that works with 50+ servers, or what the best approach would be.
What about MySQL's own replication solution? If you've already ruled that out, you should say why.
With the replication that I've seen, you have a master and slave(s). If the master goes down, one of the slaves takes over. With 50+ slaves, you'd have a long (and confusing) chain of masters.
Not knowing anything about the type of data you have or the read/write percentages, I would suggest one of the following:
Cache static data locally (memcache, etc). Reads would be local, with writes going back to the mysql master. This works for mostly-static configuration information. I have 6 servers in that setup now.
Shard your data. With 50 servers, set them up in 25 master/slave pairs and put 1/25th of the data on each shard. Get one more server for N+1 redundancy.
Hope that helps.

Which database has the best support for replication

I have a fairly good feel for what MySQL replication can do. I'm wondering what other databases support replication, and how they compare to MySQL and others?
Some questions I would have are:
Is replication built in, or an add-on/plugin?
How does the replication work (high-level)? MySQL provides statement-based replication (and row-based replication in 5.1). I'm interested in how other databases compare. What gets shipped over the wire? How do changes get applied to the replicas?
Is it easy to check consistency between master and slaves?
How easy is it to get a failed replica back in sync with the master?
Performance? One thing I hate about MySQL replication is that it's single-threaded, and replicas often have trouble keeping up, since the master can be running many updates in parallel, but the replicas have to run them serially. Are there any gotchas like this in other databases?
Any other interesting features...
MySQL's replication is weak inasmuch as one needs to sacrifice other functionality to get full master/master support (due to the restriction on supported backends).
PostgreSQL's replication is weak inasmuch as only master/standby is supported built-in (using log shipping); more powerful solutions (such as Slony or Londiste) require add-on functionality. Archive log segments are shipped over the wire; these are the same records used to make sure that a standalone database is in a working, consistent state on unclean startup. This is what I'm using presently, and we have resynchronization (and setup, and other functionality) fully automated. None of these approaches is fully synchronous; more complete support will be built in as of PostgreSQL 8.5.
Log shipping does not allow the databases to come out of synchronization, so there is no need for a process to test the synchronized status. Bringing the two databases back into sync involves setting the backup flag on the master, rsyncing to the slave (with the database still running; this is safe), then unsetting the backup flag (and restarting the slave process) with the archive logs generated during the backup process available; my shop has this process (like all other administration tasks) automated.
Performance is a non-issue, since the master has to replay the log segments internally anyhow in addition to doing other work; thus, the slaves will always be under less load than the master.
Oracle's RAC (which isn't properly replication, as there's only one storage backend -- but you have multiple frontends sharing the load, and can build redundancy into that shared storage backend itself, so it's worthy of mention here) is a multi-master approach far more comprehensive than other solutions, but is extremely expensive. Database contents aren't "shipped over the wire"; instead, they're stored to the shared backend, which all the systems involved can access. Because there is only one backend, the systems cannot come out of sync.
Continuent offers a third-party solution which does fully synchronous statement-level replication with support for all three of the above databases; however, the commercially supported version of their product isn't particularly cheap (though vastly less expensive than Oracle's RAC). Last time I administered it, Continuent's solution required manual intervention for bringing a cluster back into sync.
I have some experience with MS-SQL 2005 (publisher) and SQLEXPRESS (subscribers) with overseas merge replication. Here are my comments:
1 - Is replication built in, or an add-on/plugin?
Built in
2 - How does the replication work (high-level)?
There are different ways to replicate, from snapshot replication (giving static data at the subscriber level) to transactional replication (each INSERT/DELETE/UPDATE statement is executed on all servers). Merge replication replicates only the final changes (successive UPDATEs to the same record are applied in one go during replication).
3 - Is it easy to check consistency between master and slaves?
Something I have never done ...
4 - How easy is it to get a failed replica back in sync with the master?
The basic resync process is just a double-click one... But if you have 4 GB of data to reinitialize over a 64 Kb connection, it will be a long process unless you customize it.
5 - Performance?
Well... you will of course have a bottleneck somewhere, be it your connection performance, the volume of data, or your server performance. In my configuration, users only write to the subscribers, which all replicate to the main database, the publisher. That server is therefore never solicited by end users, and its CPU is strictly dedicated to data replication (to multiple servers) and backup. The subscribers are dedicated to clients and to one replication (to the publisher), which gives a very interesting result in terms of data availability for end users. Replications between the publisher and the subscribers can be launched together.
6 - Any other interesting features...
It is possible, with some planning ahead, to keep developing the database without even stopping the replication process: tables (in an indirect way), fields and rules can be added and replicated to your subscribers.
Configurations with a main publisher and multiple subscribers can be VERY cheap (compared to some others...), as you can use the free SQLEXPRESS on the subscriber's side, even when running merge or transactional replication.
Try Sybase SQL Anywhere
Just adding to the options with SQL Server (especially SQL 2008, which now has Change Tracking features). Something to consider is the Sync Framework from Microsoft. There are a few options there, from the basic hub-and-spoke architecture, which is great if you have a single central server and sometimes-connected clients, right through to peer-to-peer sync, which gives you the ability to do much more advanced syncing with multiple 'master' databases.
The reason you might want to consider this instead of traditional replication is that you have a lot more control from code. For example, you can get events during the sync for Update/Update, Update/Delete, Delete/Update and Insert/Insert conflicts and decide how to resolve them based on business logic, and if needed store the losing side of the conflict somewhere for manual or automatic processing. Have a look at this guide to help you decide what's possible with the different methods of replication and/or sync.
For the keen programmers, the Sync Framework is open enough that you can have the clients connect via WCF to your WCF service, which can abstract any back-end data store (I hear some people are experimenting with Oracle as the back-end).
My team has just gone to release with a large project that involves multiple SQL Express databases syncing subsets of data from a central SQL Server database via WAN and the Internet (a slow dial-up connection in some cases), with great success.
MS SQL 2005 Standard Edition and above have excellent replication capabilities and tools. Take a look at:
http://msdn.microsoft.com/en-us/library/ms151198(SQL.90).aspx
It's pretty capable. You can even use SQL Server Express as a readonly subscriber.
There are a lot of different things which databases CALL replication. Not all of them actually involve replication, and those which do work in vastly different ways. Some databases support several different types.
MySQL supports asynchronous replication, which is very good for some things. However, there are weaknesses. Statement-based replication is not the same as what most (any?) other databases do, and doesn't always result in the expected behaviour. Row-based replication is only supported by a non production-ready version (but is more consistent with how other databases do it).
Each database has its own take on replication, some involve other tools plugging in.
A bit off-topic but you might want to check Maatkit for tools to help with MySQL replication.
All the main commercial databases have decent replication - but some are more decent than others. IBM Informix Dynamic Server (version 11 and later) is particularly good. It actually has two systems - one for high availability (HDR - high-availability data replication) and the other for distributing data (ER - enterprise replication). And the Mach 11 features (RSS - remote standalone secondary, and SDS - shared disk secondary) are excellent too, doubly so in 11.50, where you can write to either the primary or secondary of an HDR pair.
(Full disclosure: I work on Informix software.)
I haven't tried it myself, but you might also want to look into OpenBaseSQL, which seems to have some simple to use replication built-in.
Another way to go is to run in a virtualized environment. I thought the data in this blog article was interesting
http://chucksblog.typepad.com/chucks_blog/2008/09/enterprise-apps.html
It's from an EMC executive, so obviously it's not independent, but the experiment should be reproducible.
Here's the data specific to Oracle:
http://oraclestorageguy.typepad.com/oraclestorageguy/2008/09/to-rac-or-not-to-rac-reprise.html
Edit: If you run virtualized, then there are ways to make anything replicate
http://chucksblog.typepad.com/chucks_blog/2008/05/vmwares-srm-cha.html