MySQL DB quad-directional synchronization without master server?

I need to set up 4 MySQL servers. Each of them needs to support both reads and writes, so a master server (that accepts only writes) is out of the question. I need the data between these 4 servers to be synchronized. It does not matter to me if they have a constant connection open between themselves or if they each connect periodically. I looked at the MySQL replication page but did not find it useful for what I need. What is the best way to do this?

It's called multi-master replication; there are tools like MMM to help accomplish this.
Be warned, though: it can be a very lengthy manual process to recover from replication failures, as you'll end up in situations where you're unsure which copy of the data is more up to date.
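MMM mainly handles monitoring and failover; the ring replication itself still has to be configured on every node. Purely as a sketch (none of this comes from the answer above - server IDs, host names, and credentials are placeholders), each of the four servers could look roughly like this, with the auto-increment settings staggered so concurrent inserts on different nodes don't collide:
    # my.cnf fragment for node 1 of a 4-node ring (nodes 2-4 use server-id 2/3/4 and matching offsets)
    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    log-slave-updates                    # pass events received from the previous node on to the next one
    auto_increment_increment = 4         # step AUTO_INCREMENT by the number of nodes...
    auto_increment_offset    = 1         # ...and offset it by this node's number

    -- then point each node at the previous one in the ring, e.g. on node 1:
    CHANGE MASTER TO MASTER_HOST='node4', MASTER_USER='repl', MASTER_PASSWORD='secret',
                     MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
    START SLAVE;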

Related

mysql replication insert only

I have several slave DBs replicated from the same master DB; however, I would like to keep one of the slaves as a backup DB, which will never have rows updated or deleted.
Basically, the purpose is to have a backup DB with all rows stored by using replication (mysqldump is way too slow to do the backup): no update/delete queries get replicated, insert queries only. I know there will no doubt be some conflicts, but I still wonder whether there are any filtering options on statements/queries on the slave end, or any other solutions.
You should never run a production database without a working backup scheme in place - at least as long as you value your data. If you fear that a wrong SQL statement can ruin your database, then you may try point-in-time recovery.
If you already use replication, your master server will log all write/update operations to its binlog - which it will send to the slave servers for replication. You can, for example, do nightly backups of your complete database. If you destroy your database in the morning, you can import the backup from the night and reapply the statements from the binlog from after the backup up to just before the statement that killed your database.
You could then skip this statement and apply the statements that came afterwards. This can also cause consistency issues, as the statements after the skipped statement see different data in the database than they did when they were originally executed.
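As a rough illustration of that flow (file names, positions, and the database name below are placeholders - the real positions come from inspecting the binlog, e.g. with mysqlbinlog or SHOW BINLOG EVENTS):
    # 1. restore last night's dump
    mysql -u root -p mydb < nightly_backup.sql

    # 2. replay the binlog up to just before the damaging statement
    mysqlbinlog --stop-position=368312 mysql-bin.000042 | mysql -u root -p mydb

    # 3. skip the damaging statement and replay everything after it
    mysqlbinlog --start-position=368315 mysql-bin.000042 | mysql -u root -p mydb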
I have a similar problem. I know it's an old thread, but this might help others:
link: mysql replication works only if I choose database by USE database

How to ensure MySQL replication SLAVE is fully synchronized with the replication MASTER?

Using simple replication settings with one MASTER and one SLAVE, how can one ensure that the SLAVE and MASTER are fully synchronized?
Now yes, they both started from the exact same image and replication is working and reporting that everything is okay BUT:
* It has happened that there were errors that stopped the replication, and it then had to be fixed and later resumed.
* Perhaps a change accidentally occurred on the SLAVE and then it's not the same as the MASTER anymore.
* Any other scenario that might break the sync.
While it's possible to do a big mysqldump of both databases and compare the files, I would be interested in a method that can be implemented more easily and can also be checked automatically to ensure everything is in sync.
Thanks
Have you tried Percona Toolkit (formerly known as Maatkit)? You can use one of its tools, pt-table-checksum, for your case. You can check out the other tools on their website as well.
pt-table-checksum performs an online replication consistency check by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master. The optional DSN specifies the master host. The tool's exit status is nonzero if any differences are found, or if any warnings or errors occur.
The following command will connect to the replication master on localhost, checksum every table, and report the results on every detected replica:
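(The example command itself didn't survive the quote above; going by that description, the basic invocation is essentially just the tool name with an optional DSN for the master:)
    # checksums every table via the master on localhost and reports differences per replica
    pt-table-checksum h=localhost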
If you have MySQL Server version 5.6.14 or higher, you can use the MySQL Replication Synchronization Checker (mysqlrplsync). It's included in the MySQL server package. It is designed to work exclusively with servers that support global transaction identifiers (GTIDs) and have gtid_mode=ON.
This utility permits you to check replication servers for synchronization. It checks data consistency between a master and slaves or between two slaves. The utility reports missing objects as well as missing data.
The utility can operate on an active replication topology, applying a synchronization process to check the data. Those servers where replication is not active can still be checked but the synchronization process will be skipped. In that case, it is up to the user to manually synchronize the servers.
See the MySQL documentation for more information.
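Roughly, an invocation looks like the following (hosts, port, and credentials are placeholders; double-check the exact option syntax against the utility's own --help):
    # compare a master against two slaves; reports missing objects and data differences
    mysqlrplsync --master=rpl_admin:secret@master1:3306 \
                 --slaves=rpl_admin:secret@slave1:3306,rpl_admin:secret@slave2:3306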
You are right to be suspicious of a seemingly healthy master/slave replication setup! We were running fine when suddenly we got alerts from check_mk concerning a database that existed on our master that did not exist on our slave... but the master and slave status outputs were good! How unnerving is that? The way to prove integrity of the process is to use checksums to verify the data.
I have seen a lot of chatter on the Internet recommending pt-table-checksum. However, its limitations proved too onerous for us to be comfortable with. Most importantly, it requires and even sets statement-based replication (see the pt-table-checksum link). As the MySQL 5.6 online documentation says of row-based replication, "all changes can be replicated. This is the safest form of replication." There are other disadvantages to statement-based replication that make our developers nervous, because some functions cannot be replicated properly; see the documentation for a list.
We have already experienced issues with a master and slave using statement-based replication so we're specifically trying to avoid it.
We are going to try mysqlrplsync, which specifically mentions that it "works independently of the binary log format (row, statement, or mixed)". It also mentions, however, that gtid-mode must be on and that it requires MySQL 5.6.14 or higher... which means, I believe, that at least the MySQL delivered with RHEL 7/CentOS 7 is out. You'll need to get MySQL Community Edition, which is left as an exercise for the reader, but you can go here for the packages or here for the repos, including RHEL derivatives and Debian.

MySQL database replication

This is the scenario:
I have a MySQL server with a database, let's call it consolidateddb. This database is a consolidation of several tables from various databases.
I have another MySQL server with the original databases; these databases are production databases and are updated daily.
The company wants to copy each update/insert/delete on each table in the production databases to the corresponding tables in consolidateddb.
Would replication accomplish that? I know that replication is done database to database, but not from tables that belong to different databases into one target database.
I hope my explanation was clear. Thanks.
Edit: Would a recursive copy of all tables in each database to the single slave work? Or is it an ugly solution?
To clear up some things, let's name things according to current MySQL practice. A database is a database server. A schema is a database instance. A database server can have multiple schemas. Tables live within a schema.
Replication will help you if you want to duplicate schemas or tables as they are defined on the master/production server. Replication works by shipping a binary log of all the SQL statements that are run on the master to the slave, which dutifully runs them as if they had been run sequentially on itself.
You can choose to replicate all data, or you can choose some of the schemas or even just some of the tables.
You can not choose tables from different schemas and have them replicated into one schema, a table belongs to a specific schema.
By the way, an important note: a replication server cannot be a slave to multiple masters. You could mimic this using federated tables, but that would never copy the data to the consolidation server, just show it as if the data from different servers were on one server.
The bonus of replication is that your consolidation server will more or less have updated data all the time.
You could take the binary logs from each of the masters, parse them with mysqlbinlog and then run that into the consolidated machine.
Something very approximately like:
mysqlbinlog [binary log files] | mysql -h consolidated
You'd need some kind of simple application (I suspect it could be done in bash if needed) to wrap the logic.
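A minimal bash sketch of that wrapper (host names, paths, and credentials are placeholders, and it assumes the binary logs can simply be copied over and replayed in order):
    #!/bin/bash
    # replay each production master's binary logs into the consolidated server
    CONSOLIDATED=consolidated.example.com

    for master in prod1 prod2 prod3; do
        # fetch the binary logs from the master
        mkdir -p "/tmp/${master}"
        rsync -a "${master}:/var/lib/mysql/mysql-bin.0*" "/tmp/${master}/"
        # feed them, in order, into the consolidated server in a single connection
        mysqlbinlog /tmp/${master}/mysql-bin.0* | mysql -h "$CONSOLIDATED" -u repl -psecret
    done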
Check out Replicating Different Databases to Different Slaves, see if it helps you in any way.
MySQL statement-based replication (basic replication) works by running the exact same statements that were run on the master on the slave. This includes information about what database the table was in.
I don't think MySQL provides any built-in way to move replication statements between databases (i.e. "insert into db1.table1 ..." -> "insert into db2.table1"). You may be able to trick it by manually altering the replication logs on the fly, but it wouldn't be out-of-the-box MySQL replication.
You might be able to pull it off with MySQL Proxy
You may want to check out the maatkit toolkit. It's a free download and has a host of tools that specialize in optimizing things like archiving tables. I've used it on past projects to duplicate certain data to another DB, etc. You can do it based on time or any other number of factors.
To the best of my knowledge, you can set up replication (MySQL 4+) and, in the my.cnf file, either have the slave only process certain tables or have the master log only certain databases; either way will solve your problem.
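For illustration only (schema and table names are placeholders), the two flavours of filtering look roughly like this in my.cnf; note that master-side filtering works per database, while slave-side filtering can go down to individual tables:
    # on the slave: only apply changes for these tables
    replicate-do-table      = proddb.orders
    replicate-wild-do-table = proddb.report_%

    # or on the master: only write this database to the binary log
    binlog-do-db = proddb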
Here is a guide to some techniques:
http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
I have had very few problems with the replication set-up; all my problems came from trying to sync the DBs, especially after a reboot, etc.

How to Get Transactional MySQL data into a SQL Server database

I'm working on a project that has a MySQL transactional database backing a web application. The company uses SQL Server for back office and reporting applications. What is the best way to update SQL Server with the data from MySQL? Right now, we are performing a dump of the MySQL data and doing a full restore. This may not be feasible much longer due to the increasing size of the database.
I would prefer a solution that copies only newly inserted and updated rows. I also need the SQL Server database to be static after the updates are applied; basically, it should change once a day. I can update SQL Server from a local copy of MySQL (i.e. not production). Is there a way to apply MySQL replication to a slave server at specified intervals? A perfect solution would be to run a once-daily update on MySQL that syncs the database as of a point in time.
Can you find a way to snapshot the MySQL DB and then do the copy? It would make an instant logical copy of the database which would be frozen in time.
http://aspiringsysadmin.com/blog/2007/08/13/consistent-mysql-backups-using-zfs-snapshots/
ZFS filesystem can do this - but you haven't mentioned your hardware/OS.
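If you go the ZFS route, the basic sequence is to hold a global read lock only for as long as it takes to cut the snapshot (the dataset name below is a placeholder):
    -- in one mysql session: quiesce writes and keep this session open so the lock is held
    FLUSH TABLES WITH READ LOCK;

    -- from a second shell, snapshot the dataset holding the MySQL datadir
    --   zfs snapshot tank/mysql@nightly

    -- back in the first session: release the lock
    UNLOCK TABLES;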
Also, perhaps you could restrict the data you are pulling - whatever is time-sensitive - so that your pull only gets data that is older than 1 hour if your pull takes 45 minutes. Or, to make things a little safer, how about just pulling the day before?
I believe SSIS 2008 has a new 'maintain table' module that does the common task of getting updated/inserted records and, optionally, handling deletes.
Look into DTS, Microsoft's ETL tool. It's rather nice. Do the mapping, schedule it as a cron job, and Bob's your uncle.
Regardless of how you do the import to SqlServer from the MySQL clone, I don't think you need to worry about restricting MySQL replication to specific times.
MySQL replication only requires one thread on the master server and basically just transfers the binary log to the slave. If you can, put the master and slave MySQL servers on a private LAN segment so that replication traffic does not impact the web traffic.
If you have SQL Server Standard or higher, SQL Server will take care of all of your needs.
Use SSIS to grab the data.
Use SQL Server Agent to schedule your timed tasks.
BTW - I'm doing the exact same thing that you are doing. SQL Server is awesome - it was easy to set up (I'm a noob to SSIS) and it worked on the first shot.
It sounds like what you need to do is set up a script to start and stop replication on a slave database. If you can do that via a script, then you can establish a workflow in SSIS such as the following:
1. Stop replication to the slave MySQL database.
2. If replication has stopped, take a snapshot of the slave MySQL database.
3. If the snapshot has been taken, then:
   a. Start replication to the slave MySQL database.
   b. Import the slave MySQL database replica into SQL Server.
NB: 3a and 3b can run in parallel.
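A rough sketch of the stop/snapshot/restart scripting (host, credentials, and paths are placeholders; a logical dump stands in for whatever snapshot mechanism you use):
    #!/bin/bash
    SLAVE="-h mysql-slave -u repl_admin -psecret"

    mysql $SLAVE -e "STOP SLAVE SQL_THREAD;"      # 1. freeze the data on the slave
    mysqldump $SLAVE --all-databases --single-transaction > /backups/slave_snapshot.sql   # 2. snapshot
    mysql $SLAVE -e "START SLAVE SQL_THREAD;"     # 3a. let replication catch up again
    # 3b. hand /backups/slave_snapshot.sql to the SSIS import step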
I think your best bet in such a scenario would be to use SSIS to enable and disable MySQL database replication to the slave as well as to take a snapshot of the slave database. Then you can drive the whole thing from the SQL Server Agent mechanism.
Hope this helps

MySQL replication for fallback scenario

When I have two MySQL servers that have different jobs (holding different databases) but want to be able to use one of them to step in when the other one fails, what would you suggest for keeping the data on both of them equal, "close to realtime"?
Obviously it's not possible to make a full database dump every x minutes.
I've read about the binary log; is that the way I need to go? Won't that slow down the fallback server a lot? Is there a way to not include some tables in the binary log - where it doesn't matter that the data has changed?
You may want to consider the master-master replication scenario, but with a slight twist. You can specify which databases to replicate and limit the replication for each server.
For server1 I would add --replicate-do-db=server_2_db and on server2 --replicate-do-db=server_1_db to your my.cnf (or my.ini on Windows). This would mean that only statements for server_1_db would be replicated to server2, and vice versa.
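In my.cnf terms that suggestion looks roughly like this (only the replicate-do-db lines come from the answer above; the server IDs and log names are the usual master-master boilerplate and are placeholders):
    # server1
    [mysqld]
    server-id       = 1
    log-bin         = mysql-bin
    replicate-do-db = server_2_db    # only apply the other server's database

    # server2 (mirror image)
    # server-id       = 2
    # log-bin         = mysql-bin
    # replicate-do-db = server_1_db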
Please also make sure that you perform full backups on a regular basis and not just rely on replication as it does not provide safety from accidental DROP DATABASE statements or their like.
Binary log is definitely the way to go. However, you should be aware that with MySQL you can't just flip back and forth between servers like that.
One server will be the master and the other will be the slave. You write/read to the master, but can only read from the slave server. If you ever write to the slave, they'll be out of sync and there's no easy way to get them to sync up again (basically, you have to swap them so the master is the new slave, but this is a tedious manual process).
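For context, that manual swap is roughly the following sequence (host names, credentials, and binlog coordinates are placeholders):
    -- on the old slave (being promoted): stop applying events and note its current binlog position
    STOP SLAVE;
    SHOW MASTER STATUS;    -- note File and Position for the next step

    -- on the old master (now demoted to slave): point it at the promoted server
    CHANGE MASTER TO MASTER_HOST='promoted-server', MASTER_USER='repl', MASTER_PASSWORD='secret',
                     MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=120;
    START SLAVE;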
If you need true hot-swappable backup databases you might have to go to a system other than MySQL. If all you want is a read-only live backup that you can use instantly in the worst-case scenario (master is permanently destroyed), Binary Log will suit you just fine.