Database replication: multiple geographical locations with local database, one main remote database - MySQL

I've got a very specific use case and because I'm not too familiar with database replication, I am open to suggestions and ideas about how to accomplish the following in the best possible way:
A web application + database is running on a remote server. Let's call this set-up R for remote.
Now suppose there are 3 separate geographical locations which need read+write access to the database. I will call these locations L1, L2 and L3.
The main problem: the remote server might be unavailable, or the internet connection of one of the locations might fail, rendering the remote application unreachable; but we want the application to keep working on-site as a high-availability solution even when the remote server is down or there is an internet connection problem.
Partial solution: So I was thinking about giving each geographical location its own server with a local copy of the web application. The web application itself can get updated when needed from a version control system automatically (for example using git hooks).
So far so good... (at least I believe so?)
But what about our data? The really tricky part seems to be the database replication. Let's assume no DNS or IP failover, and assume that the user first tries to access the remote server directly; if that does not work, the user can still use the local on-site server instead. This all happens inside a web browser (or similar client).
One possible (but unsatisfactory) solution would be to use master-slave replication from R (master) to L1, L2 and L3 (slaves). Done asynchronously, this should be quite fast? I think this is a viable solution for temporary local read-only database access when the main server is broken or can't be reached.
But... what about read-write support? I suppose we would need multi-master replication in that case, but I am afraid that synchronous replication using something like (for example) MySQL Cluster or Galera would slow things down, especially since L1, L2 and L3 are on lower-bandwidth WAN connections. (Also, L1, L2 or L3 might not always be online.)
The real question: how would you tackle this specific use case? At the moment I am leaning towards multi-master replication if it doesn't slow things down too much. The application itself will mainly be used by employees on-site, but by some external people over the WAN as well. Would multi-master replication work well? What if, for example, L1 is down for 24 hours and suddenly comes back online? What if R can't be reached?
EXTRA: not my main question, but I also need the synchronized data to be sent securely over SSL; if possible, please take this into account in your answer.
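For reference, MySQL can already encrypt replication traffic with SSL; a minimal sketch, assuming a classic master/slave setup (the user name, host and certificate paths below are hypothetical):

    -- On the master: force the replication account to connect over SSL.
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' REQUIRE SSL;

    -- On each local server: point at the remote master over SSL.
    CHANGE MASTER TO
      MASTER_HOST='r.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_SSL=1,
      MASTER_SSL_CA='/etc/mysql/certs/ca.pem',
      MASTER_SSL_CERT='/etc/mysql/certs/client-cert.pem',
      MASTER_SSL_KEY='/etc/mysql/certs/client-key.pem';
    START SLAVE;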
Perhaps I am still forgetting some necessary details; if so, please respond with some feedback and I will try to update my question accordingly.
Please note that I haven't decided on a database yet and the database schema will be developed from scratch, so ideas using other databases or database engines are welcome as well. (At the moment I have most experience with MySQL and PostgreSQL)

As you are still undecided, I would strongly recommend you have a look at MS-SQL merge replication. It is strong, highly reliable, replicates over LAN and HTTPS (so-called web replication), and is not that expensive.
The terminology differs from the MySQL master/slave idea: here we are talking about one publisher and multiple subscribers. All changes made at subscriber level are collected and sent to the publisher, then redistributed to all subscribers (with, if needed, fancy options like 'filtered subscriptions').
The standard architecture will then be:
a publisher, somewhere on a server, which collects and redistributes changes between subscribers. The publisher need not be accessed by end users.
other subscriber database servers, for either local or web access, replicating with the publisher. Subscribers are accessed by end users.
We have been using this architecture for years, including:
one subscriber for internet access
one subscriber for intranet access
tens of subscribers for local access: some subscribers are on our construction projects, somewhere in the desert ...
Such an architecture is not available off the shelf with MySQL. I guess it could be built, but it would then certainly be a lot more expensive than just buying the corresponding MS-SQL licenses. Do not forget that the free SQLEXPRESS edition of MS-SQL can be a subscriber.
Be careful: if you are planning to go for such a configuration, I would (really) strongly advise you to have all primary keys set to the uniqueidentifier data type and randomly generated (a sketch follows). This avoids the typical replication pitfall where PKs are set to int with automatic increment and independent servers generate identical primary keys between two replications. (MS-SQL offers a tool to avoid such problems, which lets you allocate PK ranges per server, but this solution is a real PITA ...)
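A minimal T-SQL sketch of that advice (table and column names are hypothetical):

    CREATE TABLE dbo.Customer (
        -- Random GUID primary key: each subscriber can mint keys
        -- independently while offline without ever colliding.
        CustomerId uniqueidentifier NOT NULL
            CONSTRAINT DF_Customer_Id DEFAULT NEWID()
            CONSTRAINT PK_Customer PRIMARY KEY,
        Name nvarchar(100) NOT NULL
    );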

Related

MySQL Custom replication

I have a weird scenario and I can't seem to find the best way to make it work.
I have an inventory app stored on a Linode server. This app handles different companies. Each company has its own database.
All companies have multiple stores located at different locations.
All stores need to use the same app and at the same time the data has to be synced.
I need to replicate the data, but all stores/apps need to be able to read, write and replicate at the same time. The problem is that most of them don't have an internet connection for hours; they are totally disconnected from the world (LAN only).
Conventional MySQL replication is not going to work, because it needs internet connectivity to stay operational.
What do I do???
Is having my own software solution that replicates data at a higher level a good idea? If yes, are there any best practices I should follow?
I also can't use MySQL's auto_increment step and offset settings for ID generation, because some of the clients keep opening more and more stores. Do I need to generate my own GUID for each entity to make sure IDs don't clash, by prefixing the store's unique ID (STOREID-UNIQUEID)?
MySQL's replication should be able to handle network downtime, as long as it has enough time, bandwidth and disk space to download the logs during uptime.
I'm not sure how the auto-reconnect handles extended downtime, but you should be able to fix reconnection issues with a scheduled job that restarts replication, as in the sketch below.
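A minimal sketch of such a watchdog: run these statements on a schedule (cron plus the mysql command-line client, for example). Stopping and immediately restarting the slave threads is harmless when replication is already running, and kicks it back to life after an extended outage:

    -- Restart the replication threads in case they have given up.
    STOP SLAVE;
    START SLAVE;
    -- Then check Slave_IO_Running / Slave_SQL_Running and
    -- Seconds_Behind_Master to confirm the slave is catching up:
    SHOW SLAVE STATUS;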
GUIDs are a good option for multi-site key generation. The other option is to use a site (client) identifier together with an auto-increment value as the PK; both are sketched below.
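A sketch of both options in MySQL (the UUID() column default needs MySQL 8.0+; on older versions, generate the value in the application or a trigger; all table names are hypothetical):

    -- Option A: GUID primary key, generated per row.
    CREATE TABLE sale (
      id      CHAR(36) NOT NULL DEFAULT (UUID()),  -- 8.0+ expression default
      item_id INT      NOT NULL,
      PRIMARY KEY (id)
    );

    -- Option B: composite key of a centrally assigned store id plus a
    -- locally generated counter, so offline stores never collide.
    CREATE TABLE sale_by_store (
      store_id VARCHAR(16) NOT NULL,  -- unique per store
      local_id BIGINT      NOT NULL,  -- generated locally per store
      item_id  INT         NOT NULL,
      PRIMARY KEY (store_id, local_id)
    );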

MySQL database sync between two databases

We are running a Java PoS (Point of Sale) application at various shops, with a MySQL backend. I want to keep the shop databases synchronised with a database on a host server.
When some changes happen in a shop, they should get updated on the host server. How do I achieve this?
Replication is not very hard to set up.
Here are some good tutorials:
http://www.ghacks.net/2009/04/09/set-up-mysql-database-replication/
http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html
http://www.lassosoft.com/Beginners-Guide-to-MySQL-Replication
Here are some simple rules to keep in mind (there are more, of course, but this is the main concept):
Set up one server (the master) for writing data.
Set up one or more servers (the slaves) for reading data.
This way, you will avoid errors.
For example:
If your scripts insert into the same tables on both the master and a slave, you will get duplicate primary key conflicts.
You can view the "slave" as a "backup" server that holds the same information as the master but cannot accept data directly; it only follows the master server's instructions.
NOTE: Of course you can read from the master, and you can write to the slave, but make sure you don't write to the same tables from both sides (master to slave and slave to master).
I would recommend monitoring your servers to make sure everything is fine.
Let me know if you need additional help.
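To make this concrete, a minimal master/slave setup in the spirit of those tutorials (host name, credentials and binary log coordinates are placeholders; the master needs log-bin enabled and each server needs a unique server-id in my.cnf):

    -- On the master: create the account the slave replicates through.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- On the slave: point it at the master's binary log and start.
    CHANGE MASTER TO
      MASTER_HOST='master.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001',  -- from SHOW MASTER STATUS
      MASTER_LOG_POS=4;
    START SLAVE;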
Three different approaches:
Classic client/server approach: don't put any database in the shops; simply have the applications access your server. Of course it's better if you set up a VPN, but simply wrapping the connection in SSL or ssh is reasonable. Pro: it's the way databases were originally designed to be used. Con: if you have high latency, complex operations could get slow; you might have to use stored procedures to reduce the number of round trips.
Replicated master/master: as Book Of Zeus suggested. Cons: somewhat more complex to set up (especially if you have several shops), and a break-in at any shop machine could potentially compromise the whole system. Pros: better responsiveness, as read operations are totally local and write operations are propagated asynchronously.
Offline operations + sync step: do all work locally and from time to time (might be once an hour, daily, weekly, whatever) write a summary with all new/modified records since the last sync operation and send it to the server (a sketch of the sync query follows this list). Pros: can work without network, fast, easy to check (if the summary is readable). Cons: you don't have real-time information.
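A sketch of that sync step, assuming every table carries a last-modified timestamp column (table and column names are hypothetical):

    -- Run at each shop on a schedule: collect everything changed since
    -- the last successful sync and ship the result to the server.
    SET @last_sync_at = '2012-01-01 00:00:00';  -- stored from the previous run
    SELECT *
    FROM   sales
    WHERE  modified_at > @last_sync_at;
    -- Advance the high-water mark only after the upload succeeds.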
SymmetricDS is the answer. It supports multiple subscribers with one-way or bi-directional asynchronous data replication. It uses web and database technologies to replicate tables between relational databases, in near real time if desired.
Comprehensive and robust Java API to suit your needs.
Have a look at the Schema and Data Comparison tools in dbForge Studio for MySQL. These tools will help you compare two databases, see the differences, generate a synchronization script and synchronize them.

MySQL remote sync

We currently have an application located on a remote server, and our call center uses this application to perform customer transactions.
We plan to set up Asterisk on a local server to help us with all the call routing and recording; for Asterisk to work smoothly we have to move our application from the remote server to the local one.
It will be easy to move all data to the local server and do transactions locally, but there is an option for users to do transactions online too, which will hit the remote server's database.
The reason we still have the remote application is the reliable infrastructure and backup solution provided by Rackspace.
If we move the application to the local server, I am looking for a reliable solution for syncing the remote and local databases so that we can handle local as well as online transactions.
Why not use MySQL master-master replication and hold definitive data at both ends? (Note you'll have to do some reading on auto_increment_increment and auto_increment_offset; see the sketch below.)
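Concretely, with two masters you interleave the key space so each server hands out non-overlapping auto-increment values:

    -- Server A: generates ids 1, 3, 5, ...
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 1;
    -- Server B: same increment, but offset 2, for ids 2, 4, 6, ...
    -- (put both settings in my.cnf as well so they survive a restart)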
symcbean's answer is basically correct. I'd add this article as a good starting place to understand master-master replication. I'd further recommend High Performance MySQL as a good reference for a deeper understanding of the techniques and issues.
There are some issues you will have to face when doing writes to two non-colocated MySQL servers. You'll have replication lag to deal with, so the databases won't necessarily be completely in sync, but will only be "eventually consistent". Also, if both sides update the same content, you can end up with data integrity issues. If your system leans towards INSERTs more than UPDATEs for its write operations, you are less likely to run into issues; likewise, if the subset of data that gets modified tends to be localized around one or the other of the servers, you'll run into fewer issues.
Otherwise, you'll probably want to roll your own solution that is designed towards the specific use cases of your application.

What would be my best MySQL Synchronization method?

We're moving a social media service to separate data centers, as our other hosting provider's entire data center went down. Twice.
This means that both websites need to be synchronized in some sense -- I'm less worried about the code of the pages, that's easy enough to sync, but they need to have the same database data.
From my research on SO, it seems MySQL replication is a good option, but the MySQL manual's page on scaling out says it works best when there are far more reads than writes/updates:
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html
In our case, it's about equal. We're getting around 200-300 thousand requests a day right now, and we can grow rapidly. Every request is both a read and a write request.
What would be the best method or tool to handle this?
Replication isn't instantaneous, and all writes have to be sent over the wire to the remote servers, so it takes bandwidth too. As long as this works for you and you understand the consequences, then don't worry about the read/write ratio.
However, are you sure that you need global replication? We handle millions of requests and have one location, with multiple web servers connected to two databases. One database is the live database, and the other is a replicated read-only database.
We do have global failover locations, and some people connect to them on any given day, even when our main node is up, because they have internet issues. The data just trickles in, though.
If the main node went down, then everybody would use the global failover locations, in order. So, if our main node died, all customers would connect to Denver. If Denver went down, they'd all connect to Columbus.
Also, our main node is on two different Internet providers, so one ISP going down doesn't take us down.
Is the connection speed between the two data centers good enough? You could copy the files to a new server and move the database there, then set up the old server to connect to the new server's MySQL database in the other DC. This will be slower, of course, but depending on the nature of your queries it can be acceptable. Once the DNS change (or whatever move) finishes, you just power off the old server when there are no more requests for it.
To assess your options, you need to consider what your requirements are in a disaster recovery scenario (i.e. total loss of the system in one data-centre).
In particular for this scenario, how much data can you afford to lose (recovery point objective - RPO), and how quickly do you need to have the standby data-centre version of the site up and running (recovery time objective - RTO).
For example, if your RPO is no transactions lost and your RTO is recovery in 5 minutes, the solution would be different from one where you can afford to lose 5 minutes of transactions and take an hour to recover.
Another question I'd ask is if you're using SAN storage at all? This gives you options for replication at the storage level (SAN array to SAN array), rather than at the database level (e.g. MySQL replication).
Also to consider is the distance between the data-centres (e.g. timewise, can you afford to perform a synchronous write to both databases, or would an asynchronous replication approach be more appropriate?).

Pattern for updating slave SQL Server 2008 databases from a master whilst minimising disruption

We have an ASP.NET web application hosted by a web farm of many instances using SQL Server 2008 in which we do aggregation and pre-processing of data from multiple sources into a format optimised for fast end user query performance (producing 5-10 million rows in some tables). The aggregation and optimisation is done by a service on a back end server which we then want to distribute to multiple read only front end copies used by the web application instances to facilitate maximum scalability.
My question is about the best way to get this data from a back end database out to the read only front end copies in such a way that does not kill their performance during the process. The front end web application instances will be under constant high load and need to have good responsiveness at all times.
The backend database is constantly being updated so I suspect that transactional replication will not be the best approach, as the constant stream of updates to the copies will hurt their performance.
Staleness of data is not a huge issue so snapshot replication might be the way to go, but this will result in poor performance during the periods of replication.
Doing a drop and bulk insert will result in periods with no data for user queries.
I don't really want to get into writing a complex cluster approach where we drop copies out of the cluster during updating - is there something along these lines that we can do without too much effort, or is there a better alternative?
There is actually a technology built into SQL Server 2005 (and 2008) that is designed to address this kind of issue: Service Broker (I'll refer to it from here on as SSB). The problem is that it has a very steep learning curve.
I know MySpace has gone public about how it uses SSB to manage its park of SQL Servers: MySpace Uses SQL Server Service Broker to Protect Integrity of 1 Petabyte of Data. I know of several more (major) sites that use similar patterns, but unfortunately they have not gone public, so I cannot give names. I was personally involved in some projects around this technology (I am a former member of the SQL Server team).
Now bear in mind that SSB is not a dedicated data transfer technology like Replication. As such, you will not find anything similar to Replication's publishing wizards and simple deployment options (check a table and it gets transferred). SSB is a reliable messaging technology, and its primitives stop at the level of message exchange: you have to write the code that captures the data changes and packs them as messages, as well as the code that unpacks the messages into relational tables at the destination.
Why some companies still prefer SSB over Replication for a task like you describe is that SSB has a far better story when it comes to reliability and scalability. I know of projects that exchange data between 1500+ sites, far beyond the capabilities of Replication. SSB is also abstracted from the physical topology: you can move databases, rename machines and rebuild servers, all without changing the application, because data flows over logical routes and the application can adapt on the fly to new topologies. SSB is also resilient to long periods of disconnect and downtime, being capable of resuming the data flow after hours, days and even months of disconnection. High throughput achieved by engine integration (SSB is part of the SQL engine itself, not a collection of satellite applications and processes like Replication) means that the backlog of changes can be processed in reasonable time (I know of sites that go through half a million transactions per minute). SSB applications typically rely on internal Activation to process the incoming data. SSB also has some unique features, like built-in load balancing (via routes) with sticky-session semantics, support for deadlock-free application-specific correlated processing, priority data delivery, specific support for database mirroring, certificate-based authentication for cross-domain operations, built-in persisted timers, and many more.
This is not a specific answer to 'how to move data from table T on server A to server B'; it is more a generic technology for 'exchanging data between server A and server B'. For flavor, the sketch below shows the basic messaging primitives.
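A bare-bones T-SQL sketch of those primitives (all object names are made up; the change-capture and message-unpacking code around them is the part you write yourself):

    -- Define what a message is, which conversations may carry it,
    -- where it lands, and the service that exchanges it.
    CREATE MESSAGE TYPE [//example/DataChange]
        VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//example/DataChangeContract]
        ([//example/DataChange] SENT BY INITIATOR);
    CREATE QUEUE dbo.DataChangeQueue;
    CREATE SERVICE [//example/DataChangeService]
        ON QUEUE dbo.DataChangeQueue ([//example/DataChangeContract]);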
I've never had to deal with this scenario before, but I did come up with a possible solution. Basically, it would require a change to your main database structure: instead of storing the data itself, you would keep records of modifications to this data. Thus, if a record is added, you store "table X, inserted new record with these values: ...". For modifications, just store the table, field and changed value. For deletions, just store which record was deleted. Every modification is stored with a timestamp.
Your client systems would keep their local copies of the database and will regularly ask for all database modifications after a certain date/time. You then execute those modifications on the local database and it will be up-to-date again.
And the back-end? Well, it would just keep a list of modifications and perhaps a table with the base data. Keeping just the modifications also means you're keeping track of history, allowing you to ask the system what it looked like a year ago.
How well this would perform depends on the number of modifications on the back-end database. But if you request the changes every 15 minutes, it shouldn't be that much data every time.
But again, I never had the chance to work this out in a real application, so it's still a theoretical principle for me. It seems fast, but a lot of work will be required. A sketch of such a modification log follows.
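A sketch of the modification log in SQL Server terms (all names hypothetical):

    CREATE TABLE dbo.Modification (
        ModificationId bigint IDENTITY(1,1) PRIMARY KEY,
        TableName      sysname       NOT NULL,
        RecordKey      nvarchar(64)  NOT NULL,
        Action         char(1)       NOT NULL,  -- 'I' / 'U' / 'D'
        Payload        nvarchar(max) NULL,      -- changed fields and values
        ChangedAt      datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- A client that last synced at @LastSync pulls the delta and
    -- replays it against its local copy:
    DECLARE @LastSync datetime2 = '2012-01-01';  -- stored from the previous run
    SELECT * FROM dbo.Modification
    WHERE  ChangedAt > @LastSync
    ORDER  BY ModificationId;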
Option 1: Write an app to transfer the data using row-level transactions. It might take longer, but it would cause no interruption to the site using the data, because the rows are there before and after the read occurs, just with new data. This processing would happen on a separate server to minimize load.
In SQL Server 2008 you can set READ_COMMITTED_SNAPSHOT to ON to ensure that the rows being updated do not cause blocking.
But basically, all this app does is read new data as it becomes available from one database and write it into the other.
Option 2: Move the data (tables or entire database) from the aggregation server to the front-end server. Automate this if possible. Then switch your web application to point to the new database or tables for future requests. This works but requires control over the web app, which you may not have.
Option 3: If you were talking about a single table (though this could work with many), you can do a view swap: you write your code against a SQL view which points to table A, do your work on table B, and when it's ready, update the view to point to table B (see the sketch after this list). You can even write a function that determines the active table and automates the whole swap.
Option 4: You might be able to use something like byte-level replication of the server, which is basically copying the server from point A to point B exactly, down to the very bytes. That sounds scary, though. It's mostly used in DR situations, and this sounds like it could be a kinda/sorta DR situation, but not really.
Option 5: Give up and learn how to sell insurance. :)
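A sketch of the view swap from option 3 (table and view names are hypothetical); since ALTER VIEW is a quick metadata change, readers flip to the fresh table without a window of missing data:

    -- Readers always query the view, never the tables directly.
    CREATE VIEW dbo.ActiveData AS SELECT * FROM dbo.DataA;

    -- Load the refreshed rows into the inactive table in the
    -- background, then flip the view:
    ALTER VIEW dbo.ActiveData AS SELECT * FROM dbo.DataB;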