MySQL Master <=(Slave,Master)=> Slave

I want to know if a server can be a slave and a master at the same time. Our problem is that we have lots of mobile units that need to be synced with the master, but they only need 6 of the hundreds of tables on the master. All the extra tables serve no purpose on the slave except delaying synchronization and adding data costs.
We want to create a smaller schema, say mobileSchema, that contains only those 6 tables, synced to their counterparts in masterSchema. Is this possible? Can schemas sync internally, or is there some master/slave-master/slave configuration where the middle server is a slave to the bigger server and a master to the mobile units?
If the answer is no, would anyone have any alternate solutions to propose? We're trying to avoid syncing the different schemas/databases manually, as that can get real ugly real fast.
Raza

AFAIK you can't natively sync schemas internally.
In your case you can do something like this:
1. Enable binary logging on your main server.
2. Create another server to act as a proxy and configure it to replicate from the main server.
3. Configure the 'proxy' to replicate only the tables you need for the remote units (replicate-do-table).
4. Enable binary logging and log-slave-updates on the 'proxy'.
5. Configure your remote units to replicate from the proxy.
You will probably also need to enable encryption for the remote connections.
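The proxy configuration described above might look roughly like this in my.cnf — a sketch only, where server-id values, log names, and table names are illustrative placeholders:

```ini
# my.cnf fragment on the 'proxy' server (values are illustrative)
[mysqld]
server-id          = 2            # must be unique within the topology
log-bin            = mysql-bin    # write its own binary log...
log-slave-updates  = 1            # ...including changes received from the master
replicate-do-table = masterSchema.table1   # replicate only the needed tables
replicate-do-table = masterSchema.table2   # one line per table
# ...repeat for the remaining tables
```

Each mobile unit then replicates from this proxy exactly as a normal slave would, and its binary log only contains changes to the filtered tables.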

You might like to look at replication filters.
You can do filtering on the master, so it only logs part of the changes.
Or you can do filtering on the replica(s), so the master would log all changes, and the replica would download all the logs, but the replica would only apply a subset of changes. Good if you want some replicas to replay some changes but other replicas to replay a different subset of changes.
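As a sketch, the two filtering styles map to different my.cnf options (option names as in MySQL 5.x; the schema and table names below are placeholders):

```ini
# Filtering on the master: only changes to the listed database are written
# to the binary log at all (affects every replica and point-in-time recovery):
[mysqld]
binlog-do-db = mobileSchema

# Filtering on a replica: the master logs everything, the replica downloads
# the full log but applies only the listed tables; different replicas can
# carry different filters:
[mysqld]
replicate-do-table = masterSchema.table1
replicate-do-table = masterSchema.table2
```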

Related

MySQL Replication Thousands of Writes Per Second

For my application I need my database to handle say 1000 updates per second at peak time, this isn't too much of a problem I just need the right server. However, if this server goes down I need a backup with the synced data to take over. How do I sync the data to another database?
In a separate part of my application I have a master and a slave, the slave replicates the master and the slave is read only. Could I use this method for my problem? I have looked into mysql clusters but so far reading about clusters is just making me more confused.
So, put simply: how can I replicate my database handling 1000 writes per second, in case of downtime?
There are two solutions: one simple but requiring manual reconfiguration in the event of the main server going down, the other more complex but more robust.
A) Simple replication - you can configure a slave server that receives updates from the master server. Both servers must be able to handle the number of updates and queries that you foresee. In the event of the master server failing, you need to manually swap the slave into the master role. http://dev.mysql.com/doc/refman/5.0/en/replication.html
B) Clustering - I'm not very familiar with MySQL clustering, but it gives synchronous updates to all servers and automatic failover - http://www.mysql.com/products/cluster/
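For option A, pointing a slave at the master comes down to something like the following on the slave — a sketch only, where the host, credentials, and log coordinates are placeholders (the coordinates come from running SHOW MASTER STATUS on the master after taking a data snapshot):

```sql
-- Run on the slave after loading a snapshot of the master's data:
CHANGE MASTER TO
  MASTER_HOST     = 'master.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000012',   -- from SHOW MASTER STATUS
  MASTER_LOG_POS  = 107;                  -- from SHOW MASTER STATUS
START SLAVE;
-- Check replication health with: SHOW SLAVE STATUS\G
```

In the manual-failover case, promotion then amounts to stopping writes, running STOP SLAVE on the replica, and repointing clients at it.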

How to code PHP MySQL class for master and slave setup?

I have never used a master/slave setup for my mysql databases so please forgive if I make no sense here.
I am curious, let's say I want to have a master DB and 3 slave DB's. Would I need to write my database classes to connect and add/update/delete entries to the master DB or is this automated somehow?
Also, for my SELECT queries, would I need to code it to pick one of the slave servers at random?
What you want to use (and research) is MySQL Replication. This is handled completely independent of your code. You work with the database the same as if there were 1 or 100 servers.
You sound like you want to improve performance / balance load.
Yes, you need to send any destructive changes (INSERT/UPDATE/DELETE) to the master database; the slaves can only be used for reads. You also need to be careful not to write to the master and then immediately read from a slave, because the data may not have replicated to the slave yet; any read-after-write queries should still go to the master.
I wouldn't suggest just randomly selecting a slave. You could choose by geographical region if they are spread out, or, if you are running in a cluster, use a proxy to do the load balancing for you.
Here is some more info that may help:
http://agiletesting.blogspot.com/2009/04/mysql-load-balancing-and-read-write.html
You should consider using mysqlnd_ms - PHP's replication and load balancing plugin.
I think this is a better solution, especially for a production environment, since it's native to PHP, and MySQL Proxy is still in alpha.
Useful links:
https://blog.engineyard.com/2014/easy-read-write-splitting-php-mysqlnd
http://pecl.php.net/package/mysqlnd_ms
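For reference, mysqlnd_ms is driven by a JSON config file named in php.ini (`mysqlnd_ms.enable=1`, `mysqlnd_ms.config_file=...`). A sketch of that file, with placeholder hostnames and key names as documented for plugin version 1.2+:

```json
{
    "myapp": {
        "master": {
            "master_0": { "host": "db-master.example.com" }
        },
        "slave": {
            "slave_0": { "host": "db-slave-0.example.com" },
            "slave_1": { "host": "db-slave-1.example.com" }
        },
        "filters": { "roundrobin": [] }
    }
}
```

Your PHP code then connects to the host name "myapp", and the plugin routes statements beginning with SELECT to a slave (round-robin here) and everything else to the master, so the application code stays unchanged.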
The master/slave setup itself is handled by the MySQL servers, so you should not need any special code for the replication configuration; only the read/write splitting, as described above, is a client-side concern.

Best design for distributed databases

I have a project where we have one central system that exposes an API on top of MySQL. We now need to replicate that same service locally on several different boxes (which could be 50+). We wanted to have a local cache of the DB on each of those boxes to ensure quick responses and failover if the "central" system goes down.
Any idea what's the best design for this? I was thinking some sort of master/slave set up, but I'm not sure if that works with 50+ servers. I'm not sure what's the best approach.
What about MySQL's own replication solution? If you've already ruled that out, you should say why.
With the replication that I've seen, you have a master and slave(s). If the master goes down, one of the slaves takes over. With 50+ slaves, you'd have a long (and confusing) chain of masters.
Not knowing anything about the type of data you have or the read/write percentages, I would suggest one of the following:
Cache static data locally (memcache, etc). Reads would be local, with writes going back to the mysql master. This works for mostly-static configuration information. I have 6 servers in that setup now.
Shard your data. With 50 servers, set them up in 25 master/slave pairs and put 1/25th of the data on each shard. Get one more server for N+1 redundancy.
Hope that helps.

Strategy on synchronizing database from multiple locations to a central database and vice versa

I have several databases located in different locations and a central database in a data center. All have the same schema, and all of them are changed (insert/update/delete) in each location with different data, including the central database.
I would like to synchronise all the data into the central database, and also have all data in the central database synchronised out to all locations. What I mean is that a database change in location 1 should also be reflected in the location 2 database.
Any ideas on how to go about this?
Have a look at SymmetricDS. It is data replication software that supports multiple subscribers and bi-directional synchronisation.
You will have to implement a two-way replication scheme between the databases. Every new record created should have a unique identifier (e.g. a GUID), so that data from the different databases does not conflict. (See the MySQL replication howto.)
MySQL natively only supports one-way replication, so you would need to set up each database as a master and make each database a slave of all the other database instances. Good luck with that.
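Besides GUID keys, a common way to keep auto-generated keys from colliding when several servers accept writes is staggered auto-increment settings — a sketch, with illustrative values for three writable servers:

```ini
# my.cnf fragment on server 1 of 3 writable servers:
[mysqld]
auto_increment_increment = 3   # step by the number of writable servers
auto_increment_offset    = 1   # server 1 generates ids 1, 4, 7, ...
# server 2 would use offset 2 (2, 5, 8, ...), server 3 offset 3, and so on
```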
I went with SymmetricDS. I think it is top quality. Just to mention, I also found a "php mysql sync" project on SourceForge, and many other links on the internet.
Unfortunately, MySQL replication capabilities won't allow you to do exactly what you want.
Usually, to synchronize two servers the master-master replication scheme can be used. See http://www.howtoforge.com/mysql_master_master_replication
The problem is that each MySQL server can have ONLY ONE master.
The only way I know to keep several servers synchronized would be circular replication (see http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html?page=2), but that won't exactly fit your needs (you want a "star" configuration).
Maybe this configuration could be close enough: all the "distant" (non-central) databases would be read-only slaves (synchronised through basic master-slave replication), and writes would only occur on the central server (which would be the master).
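For reference, the circular scheme from the linked article chains each server to the next — a sketch, where the server-id values and hostnames are placeholders:

```ini
# my.cnf fragment, identical in shape on every server in the ring;
# each server is a slave of the previous one and a master to the next:
[mysqld]
server-id         = 1            # unique per server: 1, 2, 3, ...
log-bin           = mysql-bin
log-slave-updates = 1            # forward received changes around the ring
# then on each server: CHANGE MASTER TO MASTER_HOST='previous-server', ...
```

Be aware the ring is fragile: if one server drops out, changes stop propagating past the gap until that server is repaired or spliced out of the chain.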

Sync Databases Mirroring /replication/log shipping

I need help with this scenario. We have one SQL Server instance, and I need to maintain two of its databases in another, physically distant location.
I need just an exact copy, not failover: the same data, so that if something happens to one server, the other can be brought up, but no automatic failover or anything like that.
I just want to sync the databases so that whatever happens on the primary is synced to the other. I am very confused about what I should use: replication, mirroring, or log shipping.
Can anyone advice?
Thanks for help!
Replication does not maintain an identical copy of the database; it only replicates selected tables.
This leaves mirroring or log shipping:
Delay: mirroring will keep the replica closer to the current master copy (it continuously tries to stay up to date). Log shipping has a built-in operational delay due to the log backup frequency, usually around 15-30 minutes or so.
Multiple copies: mirroring allows exactly one replica copy. Log shipping allows multiple copies.
Replica access: mirroring does not allow access to the replica, although you can create database snapshots on the secondary server and access the snapshots. Log shipping allows read-only access to the replica copy, but will disconnect all users when applying the next backup log received (e.g. every 15-30 minutes).
Ease of setup: this point is subjective, but I say log shipping is easier to set up (easier to understand)
Ease of operation: same as above, subjective, I would again say log shipping just because is easier to troubleshoot.
Security: log shipping requires file copy access, which requires a VPN or similar setup. Mirroring can work with certificate based security and traverse domains w/o trust, so it does not require a VPN.
Of course, you still have to make your own decision, based on your criteria.
For the above query: try to keep your SQL Server instances on the same port, have that port opened on the firewall, and try to ping each server from the other domain. This should help. Try it and report back if you are still facing issues.